
63 points by tejonutella | 2 comments
haswell No.43304096
This is a frustrating piece on many levels, but mostly because it doesn't really scratch the surface of what is worrisome about AI. It sets up straw men and knocks them down, but not much else.

It seems to boil down to:

1. LLMs aren't actually "learning" the way humans do, so we shouldn't be worried

2. LLMs don't actually "understand" anything, so we shouldn't be worried

3. Technology has always been advancing and we've always been freaking out about that, so we shouldn't be worried

4. If your job is automatable, it probably should be eliminated anyway

What's scary is not that these models are smarter than us, but that we are dumb enough to deploy them in critical contexts and trust the output they generate.

What's scary isn't that these models are so good they'll replace us, but that despite how limited they are, someone will make the decision to replace humans anyway.

What's scary isn't that LLMs will displace good developers, but that LLMs put the power of development in the hands of people who have no idea what they're wielding.

> Sure, with millions upon millions of training examples, of course you can mimic intelligence. If you already know what’s going to be on the test, common patterns for answers in the test, or even the answer key itself, then are you really intelligent? OR are you just regurgitating information from billions of past tests?

How different are humans from this description in actuality? What are we if not the results of a process that has been optimized by millions upon millions of iterations over long periods of time?

replies(2): >>43304181 #>>43304293 #
war-is-peace No.43304181
> What's scary isn't that these models are so good they'll replace us, but that despite how limited they are, someone will make the decision to replace humans anyway.

Is this a real threat? If a system/company decides to replace a human with something less capable, wouldn't that just result in it becoming irrelevant/bankrupt as it is replaced by other companies doing the same thing the more efficient (and in this case traditional) way?

replies(3): >>43304202 #>>43304234 #>>43304323 #
ed-209 No.43304202
Not necessarily. Imagine a health insurance provider even partially automating their claim (dis)approval process - it could be both lucrative and devastating.
replies(3): >>43304246 #>>43304267 #>>43304321 #
haswell No.43304246
Adding to this, government use cases are the most likely to cause issues, because they stay relevant regardless of how badly they suck.

There are already active discussions about AI being used in government for “efficiency” reasons.

replies(2): >>43304294 #>>43304906 #
war-is-peace No.43304294
I suppose that links back to the other comment I made: is hype the root issue you are trying to get at?

It would be interesting to see what examples of this there are in recent history.

replies(1): >>43304570 #
haswell No.43304570
I'm not entirely sure what you're getting at re: hype.

While there is undoubtedly a lot of hype around these tools right now, that hype is based on a pretty major leap in technology that has fundamentally altered the landscape going forward. There are some great use cases that legitimize some of the hype.

As for concrete examples, see the sibling comment with the anecdote regarding health insurance denial. There are also portions of the tech industry focused on rolling these tools out in business environments. They're publicly reporting their earnings and discussing the role AI is playing in major business deals.

Look at players like Salesforce, ServiceNow, Atlassian, etc. They're all rapidly rolling out AI capabilities to their existing customer bases. They have giant sales forces actively pushing these capabilities. They also sell to governments. Hype or not, it adds up to real-world outcomes.

Public statements by Musk about his intention to use AI also come to mind, and he's repeatedly shown a willingness to break things in the pursuit of his goals.