
197 points baylearn | 2 comments
Animats ◴[] No.44474788[source]
"A disturbing amount of effort goes into making AI tools engaging rather than useful or productive."

Right. It worked for social media monetization.

"... hallucinations ..."

The elephant in the room. Until that problem is solved, AI systems can't be trusted to do anything on their own. The solution the AI industry has settled on is to treat hallucinations as an externality, like pollution: they're fine as long as someone else pays for the mistakes.

LLMs have a similar problem to Level 2-3 self-driving cars. They sort of do the right thing, but a human has to be poised to quickly take over at all times. It took Waymo a decade to get over that hump and reach Level 4, but they did it.

replies(3): >>44474981 #>>44475475 #>>44475555 #
1. jasonsb ◴[] No.44475555[source]
> The elephant in the room. Until that problem is solved, AI systems can't be trusted to do anything on their own.

AI systems can be trusted to do most things on their own. You can't trust them with actions that have irreversible consequences, but everything else is fine.

I can use them to write documents and code, and to create diagrams, designs, etc. I just need to verify the results, but that's 10% of the actual work. I'd say 90% of modern office work can be done with the help of AI.

replies(1): >>44477298 #
2. daxfohl ◴[] No.44477298[source]
And for a lot of things, we don't trust a single human to do them alone either. It's just a matter of risk tolerance. AI isn't really any different, except that it's currently far less reliable than humans for many tasks. For some tasks it's already more reliable, and the gap could close for others pretty quickly. Or not. Either way, I don't think getting to zero hallucinations is a prerequisite for anything.