
The AI Investment Boom (www.apricitas.io)
271 points by m-hodges | 1 comment
GolfPopper No.41898170
I've yet to find an "AI" that doesn't hallucinate, seamlessly weaving fabrications into otherwise plausible output, and I don't see how "AIs" that hallucinate will ever be useful outside niche applications.
jacurtis No.41899339
I've never met a human who doesn't "hallucinate" either, whether intentionally or unintentionally. People lie outright, or they fill gaps in their knowledge with assumptions and inaccurate information. Most human-generated content on social media is inaccurate at an even higher rate than what ChatGPT gives me.

I guess humans are worthless as well, since they are notoriously unreliable. Or maybe it just means that artificial intelligence is more human-like than we want to admit, since it mimics us exactly as we are, deficiencies and all.

This is kind of like the self-driving car debate. We don't want to allow self-driving cars until we can guarantee that they have a zero percent failure rate.

Meanwhile we continue to rely on human drivers, which leads to roughly 40,000 deaths per year in America alone, all because we refuse to accept a failure rate of even one accident from a self-driving car.
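
A rough back-of-envelope comparison makes the scale concrete. In the sketch below, the human figures are approximate recent US numbers, and the "5x safer" rate for self-driving cars is a made-up assumption for illustration, not measured data:

    # Per-mile comparison of human vs. hypothetical self-driving fatality rates.
    # Human figures are approximate recent US totals; the 5x multiplier is an
    # assumption for illustration only.
    human_deaths_per_year = 40_000    # approx. US traffic fatalities per year
    vehicle_miles_per_year = 3.2e12   # approx. US vehicle-miles traveled per year

    human_rate = human_deaths_per_year / vehicle_miles_per_year * 1e8
    print(f"Human drivers: {human_rate:.2f} deaths per 100M miles")  # ~1.25

    # Suppose a self-driving fleet were 5x safer per mile (pure assumption).
    av_deaths = human_deaths_per_year / 5
    print(f"Hypothetical 5x-safer fleet: {av_deaths:,.0f} deaths per year")  # 8,000

Even a fleet five times safer than human drivers would still be implicated in thousands of deaths a year; demanding a zero failure rate before deployment means never deploying.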

johnnyanmac No.41904280
You're missing one big detail: humans are liable, AI isn't. AI providers do all they can to deny liability, and the businesses deploying AI aren't any more willing to accept it.

If you're not confident enough in your tech to be held liable for it, we're going to have issues. We figured out human liability (sort of) eons ago. So it doesn't matter whether AI is less safe than a human; it matters that we can identify, prune out, and punish unsafe behavior, the way we can fire or jail a person.