
112 points by favoboa | 1 comment
bryant ◴[] No.44431158[source]
A few weeks ago, I processed a product refund with Amazon via agent. It was simple and straightforward, and it was surprisingly obvious that it was backed by a language model, based on how it responded to my frustration at its asking tons of questions. But in the end, it processed my refund without ever connecting me with a human being.

I don't know whether Amazon relies on LLMs or SLMs for this and for similar interactions, but it makes tons of financial sense to use SLMs for narrowly scoped agents. In use cases like customer service, the intelligence behind LLMs is all wasted on the task the agents are trained for.

Wouldn't surprise me if down the road we start recommending role-specific SLMs rather than general LLMs, as a mitigation for both ethics and security risks too.

replies(5): >>44431884 #>>44431916 #>>44432173 #>>44433836 #>>44441923 #
torginus ◴[] No.44431916[source]
I just had my first experience with a customer service LLM. I needed to get my account details changed, and for that I needed to use the customer support chat.

The LLM told me what information was needed and what the process was, and I followed through the whole thing.

Once I was done, it reassured me that everything was in order and my request was being processed.

For two weeks, nothing happened. I emailed the (human) support staff, and they responded that they could see no such request in their system. It turns out the LLM had hallucinated the entire customer flow and was just spewing BS at me.

replies(6): >>44431940 #>>44431999 #>>44432155 #>>44432498 #>>44432522 #>>44433879 #
1. exe34 ◴[] No.44431940[source]
That's why I take screenshots of anything that I don't get an email confirmation for.