
The AI Investment Boom

(www.apricitas.io)
271 points by m-hodges | 7 comments
GolfPopper ◴[] No.41898170[source]
I've yet to find an "AI" that doesn't seamlessly hallucinate, and I don't see how "AIs" that hallucinate will ever be useful outside niche applications.
replies(12): >>41898196 #>>41898203 #>>41898630 #>>41898961 #>>41899137 #>>41899339 #>>41900217 #>>41901033 #>>41903589 #>>41903712 #>>41905312 #>>41908344 #
1. edanm ◴[] No.41898630[source]
You don't really need to imagine this though - generative AI is already extremely useful in many non-niche applications.
replies(1): >>41900379 #
2. jmathai ◴[] No.41900379[source]
There's a camp of people who are hyper-fixated on LLM hallucinations as a barrier to value creation.

I believe that is far off the mark, for a couple of reasons:

1) It's possible to work around hallucinations in a more cost-effective way than relying on humans to always be correct.

2) There are many use cases where hallucinations aren't such a bad thing (or are even a good thing), and we've never had a system as powerful as LLMs to build for them.

There are absolutely very large use cases for LLMs, and they will be pretty disruptive. But they will also create net new value that wasn't possible before.

I say that as someone who thinks we have enough technology as it is and don't need any more.

replies(3): >>41900417 #>>41900450 #>>41904140 #
3. datavirtue ◴[] No.41900417[source]
Yeah, they just want it to go away. The same way they wish Windows and GUIs and people in general would just go away.
replies(1): >>41904205 #
4. babyent ◴[] No.41900450[source]
For sure, sending customers into a never-ending loop when they want support. That's been my experience with most AI support so far. It sucks. I like Amazon's approach, where they have a basic chat bot (probably doesn't even use LLMs) that then escalates to an actual human being in some low-cost country.

I kind of like the Chipotle approach. If I have a problem with my order, it just refunds me instantly and sometimes gives me an add-on for free.

Honestly I only use LLMs for one thing - I give one a set of TS definitions and user input, ask it to fit the input to those schemas if it can, and tell it not to force a fit if it isn't 100% confident.
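(A minimal sketch of that schema-fitting approach. The model call is mocked out here - `mockLLMReply` and the `SupportTicket` shape are hypothetical stand-ins; in practice the prompt would include the TS definitions verbatim plus the raw user input, with an instruction to answer `UNSURE` rather than force a fit.)

```typescript
// The target schema, handed to the model as plain TS source text.
interface SupportTicket {
  category: "billing" | "shipping" | "other";
  orderId: string | null;
  summary: string;
}

// Hypothetical model reply: either a JSON object or the literal "UNSURE".
const mockLLMReply =
  '{"category": "shipping", "orderId": "A-1042", "summary": "Package never arrived"}';

// Runtime type guard: TS types are erased at runtime, so the model's JSON
// must be validated by hand (or with a library like zod) before use.
function isSupportTicket(x: unknown): x is SupportTicket {
  if (typeof x !== "object" || x === null) return false;
  const t = x as Record<string, unknown>;
  return (
    (t.category === "billing" ||
      t.category === "shipping" ||
      t.category === "other") &&
    (typeof t.orderId === "string" || t.orderId === null) &&
    typeof t.summary === "string"
  );
}

function parseReply(reply: string): SupportTicket | null {
  if (reply.trim() === "UNSURE") return null; // model declined to force a fit
  try {
    const parsed: unknown = JSON.parse(reply);
    return isSupportTicket(parsed) ? parsed : null;
  } catch {
    return null; // non-JSON (hallucinated) output is rejected, not trusted
  }
}

const ticket = parseReply(mockLLMReply);
```

The point of the pattern is that the model's output is never trusted directly: anything that doesn't validate against the schema is discarded, which bounds the damage a hallucination can do.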

I know some people whose whole company is based around using AI to send emails or messages, and in reality they're logged into their terminals in real time, fixing errors before the emails actually go out. Basically, they're mechanical Turks, and they even say they're looking at labor in India or Africa, paid peanuts, to handle these fixes.

5. johnnyanmac ◴[] No.41904140[source]
The most important aspect of any company worth its salt is liability. If the LLM provider isn't assuming liability (and so far they haven't), then hallucinations are a complete deal breaker. You don't want to be on the receiving end of a precedent-setting lawsuit just to save some pennies on labor.

There can be uses, but you're falling on deaf ears as a B2B if you don't solve this problem. Consumers accept inaccuracies; businesses don't. And that's also sadly where it works best, and why consumers soured on it. It's being used for chatbots that give worse service and make consumers work harder for something an employee could resolve in seconds.

As it's worked for millennia, humans have accountability, and any disaster can start the PR spin by reprimanding or firing the human who messed up. We don't have that for AI yet. And obviously, no company wants to bear that burden.

6. johnnyanmac ◴[] No.41904205{3}[source]
I'm just tired of all the lies and theft. People can use the tech you want. Just don't pretend it's yours when you spent decades strengthening copyright law and then decide to break the laws you helped make.
replies(1): >>41906566 #
7. snapcaster ◴[] No.41906566{4}[source]
You're saying "yours" and "you," but from what I can tell you're describing completely different sets of people as some kind of hypocritical single entity.