
The AI Investment Boom

(www.apricitas.io)
271 points | m-hodges | 1 comment
GolfPopper ◴[] No.41898170[source]
I've yet to find an "AI" that doesn't seamlessly hallucinate, and I don't see how "AIs" that hallucinate will ever be useful outside niche applications.
replies(12): >>41898196 #>>41898203 #>>41898630 #>>41898961 #>>41899137 #>>41899339 #>>41900217 #>>41901033 #>>41903589 #>>41903712 #>>41905312 #>>41908344 #
edanm ◴[] No.41898630[source]
You don't really need to imagine this though - generative AI is already extremely useful in many non-niche applications.
replies(1): >>41900379 #
jmathai ◴[] No.41900379[source]
There's a camp of people who are hyper-fixated on LLM hallucinations as being a barrier for value creation.

I believe that is so far off the mark for a couple of reasons:

1) It's possible to work around hallucinations in a more cost effective way than relying on humans to always be correct.

2) There are many use cases where hallucinations aren't such a bad thing (or are even a good thing), and we've never really had a system as powerful as LLMs to build for them.
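Point 1 can be sketched as a cheap guard layer: let the model answer, verify the answer automatically, and only escalate to a human when verification fails. This is a minimal sketch, not anyone's actual system; `generate_answer` and `verify_against_source` are hypothetical stand-ins for an LLM call and a retrieval/grounding check.

```python
def generate_answer(question: str) -> str:
    # Hypothetical stand-in for an LLM call; may hallucinate.
    return {"capital of France?": "Paris"}.get(question, "Atlantis")

def verify_against_source(question: str, answer: str) -> bool:
    # Hypothetical stand-in for a grounding check against a trusted source.
    known = {"capital of France?": "Paris"}
    return known.get(question) == answer

def answer_with_guard(question: str) -> str:
    # Cheap automated path first; humans only see the failures.
    draft = generate_answer(question)
    if verify_against_source(question, draft):
        return draft
    return "ESCALATE_TO_HUMAN"
```

The point is the cost structure: the human reviews only the small fraction of outputs the verifier rejects, rather than every output.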

There are absolutely very large use cases for LLMs, and they will be pretty disruptive. But they will also create net new value that wasn't possible before.

I say that as someone who thinks we have enough technology as it is and don't need any more.

replies(3): >>41900417 #>>41900450 #>>41904140 #
johnnyanmac ◴[] No.41904140[source]
The most important aspect of any company worth its salt is liability. If the LLM provider isn't accepting liability (and so far they haven't), then hallucinations are a complete deal breaker. You don't want to be on the receiving end of a precedent-setting lawsuit just to save some pennies on labor.

There can be uses, but you're falling on deaf ears as a B2B vendor if you don't solve this problem. Consumers accept inaccuracies; businesses don't. And that's also, sadly, where it works best and why consumers have soured on it: it's being used for chatbots that give worse service and make consumers work harder for something an employee could resolve in seconds.

As it's worked for millennia, humans have accountability, and after any disaster the PR spin can start by reprimanding or firing the human who messed up. We don't have that for AI yet. And obviously, no company wants to bear that burden itself.