
jawns:
Full disclosure: I'm currently in a leadership role on an AI engineering team, so it's in my best interest for AI to be perceived as driving value.

Here's a relatively straightforward application of AI that is set to save my company millions of dollars annually.

We operate large call centers, and agents were previously spending 3-5 minutes after each call writing manual summaries of the calls.

We recently switched to using AI to transcribe and write these summaries. Not only are the summaries better than those produced by our human agents, they also free up the human agents to do higher-value work.

It's not sexy. It's not going to replace anyone's job. But it's a huge, measurable efficiency gain.
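
For concreteness: jawns doesn't describe the stack, but this kind of pipeline is typically speech-to-text followed by an LLM summarization pass. The sketch below is a minimal illustration under that assumption, using the OpenAI Python SDK as a stand-in; the model names, prompt, and summarize_call helper are illustrative, not the poster's actual system.

```python
# Hypothetical sketch of a call-summarization pipeline (not the poster's actual system).
# Assumes the OpenAI Python SDK (`pip install openai`) and an OPENAI_API_KEY env var;
# model names and the prompt are illustrative placeholders.
from openai import OpenAI

client = OpenAI()

def summarize_call(audio_path: str) -> str:
    # 1. Speech-to-text on the recorded call.
    with open(audio_path, "rb") as f:
        transcript = client.audio.transcriptions.create(model="whisper-1", file=f)

    # 2. LLM pass that condenses the transcript into the fields agents used to fill in by hand.
    response = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[
            {"role": "system",
             "content": "Summarize this call center transcript: reason for call, "
                        "resolution, and any follow-up actions. Be factual; do not infer."},
            {"role": "user", "content": transcript.text},
        ],
    )
    return response.choices[0].message.content

if __name__ == "__main__":
    print(summarize_call("call_0001.wav"))
```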

dsr_:
Pro-tip: don't write the summary at all until you need it for evidence. Store the call audio as 24 kbit/s Opus; that's 180 KB per minute. After a year or whatever, delete the oldest audio.

There, I've saved you more millions.
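
The arithmetic holds up: 24 kbit/s is 3,000 bytes per second, or 180 KB per minute of audio. Here is a rough sketch of that approach, assuming ffmpeg with libopus is available; the file paths, call volume, and retention numbers are made up for illustration.

```python
# Back-of-the-envelope storage math plus an encode step, assuming ffmpeg/libopus
# is installed. File paths, call volume, and retention window are illustrative.
import subprocess

BITRATE_KBPS = 24
BYTES_PER_MINUTE = BITRATE_KBPS * 1000 / 8 * 60          # = 180,000 bytes, i.e. ~180 KB

def encode_to_opus(wav_path: str, opus_path: str) -> None:
    # Re-encode a call recording to 24 kbit/s Opus.
    subprocess.run(
        ["ffmpeg", "-y", "-i", wav_path, "-c:a", "libopus", "-b:a", "24k", opus_path],
        check=True,
    )

# Rough yearly footprint for a large call center (numbers are invented).
calls_per_day = 50_000
avg_minutes_per_call = 6
gb_per_year = calls_per_day * avg_minutes_per_call * BYTES_PER_MINUTE * 365 / 1e9
print(f"~{gb_per_year:,.0f} GB/year before deleting anything older than a year")
```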

sillyfluke:
You'll also have saved them the cost of all the AI summaries that turn out to be incorrect.

The parent states:

>Not only are the summaries better than those produced by our human agents...

Now, since they have not mentioned what it took to actually verify that the AI summaries were in fact better than the human agents', I'm sceptical they did the necessary due diligence.

Why do I think this? Because I have actually tried to do such a verification. To verify that an AI summary is actually correct, you have to engage in the incredibly tedious task of listening to the original recording literally second by second and making sure that what is said does not conflict with the AI summary in question. Not only did the AI summary fail this test, it failed on the very first recording I tested.

The AI summary stated that "Feature X was going to be in Release 3, not 4," whereas in the recording it is stated that the feature will be in Release 4, not 3: literally the opposite of what the AI said.

I'm sorry, but the fact that the AI summary is nicely formatted and hasn't missed a major topic of conversation means fuck all if the details that are discussed are spectacularly wrong from a decision-tracking perspective, as in literally the opposite of what was stated.

And I know "why" the Ai summary fucked up, because in that instance the topic of conversation was about how there was some confusion about which release that feature was going to be in, that's why the issue was a major item of the meeting agenda in the first place. Predicably, the AI failed to follow the convoluted discussion and "came to" the opposite conclusion.

In short, no fucking thanks.
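
For anyone attempting the same kind of audit, the tedium described above can be reduced somewhat (not eliminated) by lining up each summary claim with the timestamped transcript segments most likely to confirm or contradict it, so the reviewer knows where in the audio to listen. Below is a minimal sketch under that assumption; the overlap scoring is deliberately naive and the data shapes are invented.

```python
# Hypothetical helper for spot-checking an AI summary against a timestamped
# transcript: for each summary sentence, surface the transcript segments with
# the most word overlap so a human knows which part of the audio to replay.
# The scoring is naive on purpose; it finds candidates, it does not judge truth.
import re

def tokenize(text: str) -> set[str]:
    return set(re.findall(r"[a-z0-9]+", text.lower()))

def candidate_segments(summary: str, transcript: list[tuple[float, str]], top_k: int = 3):
    """transcript is a list of (start_time_seconds, segment_text) pairs."""
    for sentence in re.split(r"(?<=[.!?])\s+", summary.strip()):
        claim_words = tokenize(sentence)
        scored = sorted(
            transcript,
            key=lambda seg: len(claim_words & tokenize(seg[1])),
            reverse=True,
        )
        yield sentence, scored[:top_k]

# Example with made-up data mirroring the Release 3 vs. 4 mix-up:
transcript = [
    (412.0, "so to be clear, feature X ships in release 4, not release 3"),
    (398.5, "there was some confusion about which release feature X lands in"),
]
summary = "Feature X was going to be in Release 3, not 4."
for claim, segments in candidate_segments(summary, transcript):
    print(claim)
    for start, text in segments:
        print(f"  listen at {start:.0f}s: {text}")
```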

roywiggins:
In the context of call centers in particular, I actually can believe that a moderately inaccurate AI model could be better on average than harried humans writing a summary after the call. Could a human do better by carefully working off a recording? Absolutely, but that's not what it needs to be compared against.

It just has to be as good as a call center worker with 3-5 minutes working off their own memory of the call, not as good as the ground truth of the call. It's probably going to make weirder mistakes when it does make them, though.

trenchpilgrim:
Especially humans whose jobs are performance-graded on how quickly they can start talking to the next customer.
Imustaskforhelp:
Yeah, maybe that's fair in the current world we live in.

But the solution isn't to stop trusting the agents / customer service reps and hand the job to AI just because their performance is graded on how quickly they can start talking to the next customer.

The solution is to change the economics so that the workers are incentivized to write good summaries; paying them more and not grading them that way would help.
I'm imagining a company saying AI is good enough because they themselves are using the wrong grading technique, and AI just happens to be the best option under that metric. So in that sense, AI has simply benchmark-maxxed, if that makes sense. Man, I'm not even kidding, but I sometimes wonder how economies of scale can end up working so differently from common sense. It just doesn't make sense at this point.
I am imagining some company saying AI is good enough because they themselves are using the wrong grading technique and AI is best option in that. SO in that sense, AI just benchmarked maxxed in that if that makes sense. Man, I am not even kidding but I sometimes wonder how economies of scale can work so functionally different from common sense. Like it doesn't make sense at this point.