jawns No.44974805
Full disclosure: I'm currently in a leadership role on an AI engineering team, so it's in my best interest for AI to be perceived as driving value.

Here's a relatively straightforward application of AI that is set to save my company millions of dollars annually.

We operate large call centers, and agents were previously spending 3-5 minutes after each call writing a summary by hand.

We recently switched to using AI to transcribe and write these summaries. Not only are the summaries better than those produced by our human agents, they also free up the human agents to do higher-value work.

It's not sexy. It's not going to replace anyone's job. But it's a huge, measurable efficiency gain.
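
A rough sketch of what such a transcribe-then-summarize pipeline can look like (the vendor, models, prompt, and file name below are illustrative assumptions; the poster doesn't say what their company actually uses):

```python
# Illustrative sketch only: transcribe a call recording, then summarize it
# into the fields an agent would previously have written by hand.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def summarize_call(audio_path: str) -> str:
    # 1. Transcribe the call recording.
    with open(audio_path, "rb") as f:
        transcript = client.audio.transcriptions.create(model="whisper-1", file=f)

    # 2. Summarize the transcript; the prompt here is a guess at the kind of
    #    structure a call center would want, not the poster's actual prompt.
    response = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[
            {"role": "system",
             "content": "Summarize this customer-service call: reason for the call, "
                        "resolution, and any follow-up actions. Be concise and factual."},
            {"role": "user", "content": transcript.text},
        ],
    )
    return response.choices[0].message.content

if __name__ == "__main__":
    print(summarize_call("call_recording.wav"))  # hypothetical file name
```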

jordanb No.44975016
We use Google Meet, and it has Gemini transcriptions of our meetings.

They are hilariously inaccurate. They confuse who said what. They often invert the meaning: "Joe said we should go with approach X" where Joe actually said we should not do X. It also lacks context, causing it to "mishear" all of our internal jargon to "shit my iPhone said" levels.

nostrademons No.44976157
I also use Gemini notes for all my meetings and find them quite helpful. The key insight is that they don’t have to be particularly accurate. Their primary purpose is to remind me (or the other participants) of what was discussed, what considerations were brought up, and what the eventual decision was. If it inverts the conclusion and forgets a “not”, we’re going to catch that, because we were all in the meeting too. The notes are there to jog our memory of what was said, because it’s much easier to recognize correct information than to recall it; they’re not the authoritative source of truth on the meeting.

This gets to a common misconception about GenAI: it functions best as “augmented intelligence” rather than “artificial intelligence”. Meaning that it’s at its best when there’s still a human in the loop and the AI supplements the parts the person is bad at, rather than replacing the person entirely. We see this with coding, where AI is very good at writing scaffolding, large-scale refactoring, picking decent libraries, reading API docs and generating code that calls them appropriately, etc., but it still needs a human to give it very specific directions for anything subtle, and someone to review carefully for bugs and security holes.
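
A rough sketch of what that human-in-the-loop pattern can look like in practice (the model name, prompt, and use of a staged git diff are illustrative assumptions, not anything specified in the thread): the model drafts a review, and a person decides what, if anything, to act on.

```python
# Illustrative sketch of a human-in-the-loop review step: the model flags
# possible bugs and security issues, a human reviewer makes the final call.
import subprocess
from openai import OpenAI

client = OpenAI()

def review_staged_diff() -> str:
    # Grab the staged changes; everything the model returns is advisory.
    diff = subprocess.run(
        ["git", "diff", "--staged"], capture_output=True, text=True
    ).stdout
    response = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[
            {"role": "system",
             "content": "You are a code reviewer. List possible bugs and security "
                        "issues in this diff, with file and line context. "
                        "Say 'no findings' if nothing stands out."},
            {"role": "user", "content": diff},
        ],
    )
    return response.choices[0].message.content

if __name__ == "__main__":
    # The output is a starting point for the human reviewer, not a verdict.
    print(review_staged_diff())
```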