
418 points by speckx | 2 comments
jawns ◴[] No.44974805[source]
Full disclosure: I'm currently in a leadership role on an AI engineering team, so it's in my best interest for AI to be perceived as driving value.

Here's a relatively straightforward application of AI that is set to save my company millions of dollars annually.

We operate large call centers, and agents were previously spending 3-5 minutes after each call writing manual summaries of the calls.

We recently switched to using AI to transcribe and write these summaries. Not only are the summaries better than those produced by our human agents, but they also free those agents up for higher-value work.

It's not sexy. It's not going to replace anyone's job. But it's a huge, measurable efficiency gain.
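
A minimal sketch of what such a transcribe-and-summarize pipeline can look like (illustrative only; it assumes the OpenAI Python SDK and Whisper-style transcription, not necessarily the stack described above):

    # Illustrative after-call summary pipeline (assumed stack: OpenAI Python SDK;
    # the comment above does not say which vendor or models are actually used).
    from openai import OpenAI

    client = OpenAI()  # reads OPENAI_API_KEY from the environment

    def summarize_call(audio_path: str) -> str:
        # 1) Speech-to-text on the recorded call audio.
        with open(audio_path, "rb") as f:
            transcript = client.audio.transcriptions.create(model="whisper-1", file=f)

        # 2) Ask a chat model for a short, structured wrap-up note.
        response = client.chat.completions.create(
            model="gpt-4o-mini",
            messages=[
                {"role": "system",
                 "content": "Summarize this support call: reason, resolution, follow-ups."},
                {"role": "user", "content": transcript.text},
            ],
        )
        return response.choices[0].message.content

    print(summarize_call("call_recording.wav"))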

replies(39): >>44974847 #>>44974853 #>>44974860 #>>44974865 #>>44974867 #>>44974868 #>>44974869 #>>44974874 #>>44974876 #>>44974877 #>>44974901 #>>44974905 #>>44974906 #>>44974907 #>>44974929 #>>44974933 #>>44974951 #>>44974977 #>>44974989 #>>44975016 #>>44975021 #>>44975040 #>>44975093 #>>44975126 #>>44975142 #>>44975193 #>>44975225 #>>44975251 #>>44975268 #>>44975271 #>>44975292 #>>44975458 #>>44975509 #>>44975544 #>>44975548 #>>44975622 #>>44975923 #>>44976668 #>>44977281 #
dsr_ ◴[] No.44974877[source]
Pro-tip: don't write the summary at all until you need it for evidence. Store the call audio as 24 kbit/s Opus - that's 180 KB per minute. After a year or whatever, delete the oldest audio.

There, I've saved you more millions.
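
For scale, a back-of-envelope sketch of that storage math (the call volume and handle time below are made up):

    # 24 kbit/s Opus -> bytes per minute, then a year of retention
    # (call volume and average handle time are hypothetical).
    BITRATE_KBPS = 24
    BYTES_PER_MIN = BITRATE_KBPS * 1000 / 8 * 60    # 180,000 bytes ~= 180 KB

    CALLS_PER_DAY = 50_000      # hypothetical volume
    AVG_CALL_MIN = 6            # hypothetical average handle time
    RETENTION_DAYS = 365

    total_gb = BYTES_PER_MIN * AVG_CALL_MIN * CALLS_PER_DAY * RETENTION_DAYS / 1e9
    print(f"{BYTES_PER_MIN / 1000:.0f} KB per minute of audio")
    print(f"~{total_gb:,.0f} GB to keep a year of calls")   # ~19,710 GB, about 20 TB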

replies(10): >>44974925 #>>44975015 #>>44975017 #>>44975057 #>>44975100 #>>44975212 #>>44975220 #>>44975321 #>>44975382 #>>44975421 #
doorhammer ◴[] No.44975220[source]
Sentiment analysis, nuanced categorization by issue, detecting new issues, tracking trends, etc., are the bread and butter of any data team at an F500 call center.

I'm not going to say every project born out of that data makes good business sense (big enough companies have fluff everywhere), but ime anyway, projects grounded in that kind of data are typically some of the most straightforward to tie concretely to a dollar-value outcome.
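
For a sense of what that tooling looks like, here is a toy sketch of classic (pre-LLM) sentiment scoring over transcripts using NLTK's VADER lexicon (illustrative; not any particular team's stack, and the transcripts are invented):

    # Toy pre-LLM sentiment scoring over call transcripts (NLTK VADER).
    import nltk
    from nltk.sentiment import SentimentIntensityAnalyzer

    nltk.download("vader_lexicon", quiet=True)
    sia = SentimentIntensityAnalyzer()

    transcripts = [
        "thanks so much, that fixed it right away",
        "this is the third time I've called and nothing has changed",
    ]
    for text in transcripts:
        score = sia.polarity_scores(text)["compound"]  # -1 (negative) .. +1 (positive)
        print(f"{score:+.2f}  {text}")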

replies(2): >>44975479 #>>44975836 #
1. adrr ◴[] No.44975836{3}[source]
Those have been done for 10+ years. We were running sentiment analysis on email support to determine prioritization back in 2013. We also ran Bayesian categorization to offer support reps quick responses/actions. You don't need expensive LLMs for it.
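
That kind of Bayesian categorization is a few lines with off-the-shelf tooling; a toy sketch (the categories and training examples here are made up):

    # Toy Naive Bayes ticket categorization, the pre-LLM way (scikit-learn).
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.naive_bayes import MultinomialNB
    from sklearn.pipeline import make_pipeline

    train_texts = [
        "I was charged twice for my subscription",
        "the app crashes every time I open it",
        "how do I cancel my account",
        "refund has not arrived after two weeks",
    ]
    train_labels = ["billing", "bug", "account", "billing"]

    model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), MultinomialNB())
    model.fit(train_texts, train_labels)

    # Route new tickets and surface canned responses for the predicted category.
    print(model.predict(["why did my card get billed two times this month"]))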
replies(1): >>44976143 #
2. doorhammer ◴[] No.44976143[source]
Yeah, I was a QA data analyst supporting three multi-thousand-agent call centers for an F500 in 2012, and we were using phoneme matching for transcript categorization. It was definitely good enough for pretty nuanced analysis.

I'm not saying any given department should, by some objective measure, switch to LLMs, and I actually default to a certain level of skepticism whenever my department talks about applications.

I'm just saying I can imagine plausible realities where an intelligent, competent person would choose to switch to using LLMs in a call-center context.

There are also a ton of plausible realities where someone is just riding the hype train, gunning for the next promotion.

I think it's useful to talk about alternate strategies and how they might compare, but I'm personally just defaulting to assuming the OP made a reasonable decision and didn't want to write a novel to justify it (a trait I don't suffer from, apparently), rather than assuming they just have no idea what they're doing.

Everyone is free to decide which assumed reality they want to respond to. I just have a different default.