The AI Investment Boom

(www.apricitas.io)
271 points | m-hodges | 1 comment | source
GolfPopper ◴[] No.41898170[source]
I've yet to find an "AI" that doesn't seamlessly hallucinate, and I don't see how "AIs" that hallucinate will ever be useful outside niche applications.
replies(12)
1. Mabusto ◴[] No.41908344[source]
I think the goal of minimizing hallucinations needs to be adjusted. When a human "lies", there is a familiarity to it - "I think the restaurant is here." "Wasn't he in Inception?", like humans are good at conveying which information they're certain of and what they're uncertain of, either with vocal tone, body language or signals in the writing style. I've been trying to use Gemini to just ask simple questions and it's hallucinations really put me off. It will confidently tell me lies and now my lizard brain just sees it as unreliable and I'm less likely to ask it things, only because it's not at all able to indicate which information it's certain of. We're never going to get rid of hallucinations because of the probabilistic nature in which LLMs work, but we can get better at adjusting how these are presented to humans.