197 points baylearn | 8 comments
1. Animats ◴[] No.44474788[source]
"A disturbing amount of effort goes into making AI tools engaging rather than useful or productive."

Right. It worked for social media monetization.

"... hallucinations ..."

The elephant in the room. Until that problem is solved, AI systems can't be trusted to do anything on their own. The solution the AI industry has settled on is to make hallucinations an externality, like pollution. They're fine as long as someone else pays for the mistakes.

LLMs have a similar problem to Level 2-3 self-driving cars. They sort of do the right thing, but a human has to be poised to take over quickly at all times. It took Waymo a decade to get over that hump and reach Level 4, but they did it.

replies(3): >>44474981 #>>44475475 #>>44475555 #
2. cal85 ◴[] No.44474981[source]
When you say “do anything on their own”, what kind of things do you mean?
replies(1): >>44475256 #
3. Animats ◴[] No.44475256[source]
Take actions which have consequences.
4. nunez ◴[] No.44475475[source]
Waymo "did it" in very controlled environments, not in general. They're still a ways away from solving self-driving in the general case.
replies(2): >>44475572 #>>44475630 #
5. jasonsb ◴[] No.44475555[source]
> The elephant in the room. Until that problem is solved, AI systems can't be trusted to do anything on their own.

AI systems can be trusted to do most things on their own. You can't trust them with actions that have irreversible consequences, but everything else is ok.

I can use them to write documents and code, create diagrams, designs, etc. I just need to verify the result, but that's 10% of the actual work. I would say that 90% of modern-day office work can be done with the help of AI.

replies(1): >>44477298 #
6. Animats ◴[] No.44475572[source]
Los Angeles and San Francisco are not "very controlled environments".
7. __loam ◴[] No.44475630[source]
They've done over 70 million rider-only miles on public roads.
8. daxfohl ◴[] No.44477298[source]
And for a lot of things, we don't trust single humans to do them on their own either. It's just a matter of risk tolerance. AI isn't really any different, except that it's currently far less reliable than humans for many tasks. But for some tasks it's more reliable. And the gap could close for other tasks pretty quickly. Or not. But I don't think getting to zero hallucinations is a prereq for anything.