
183 points by WolfOliver | 1 comment
manoDev No.45066299
I'm tired of the anthropomorphization marketing behind AI driving this kind of discussion. In a few years, all this talk will sound as dumb as stating "MS Word spell checker will replace writers" or "Photoshop will replace designers".

We'll reap the productivity benefits of this new tool, create more work for ourselves, output will stabilize at a new level, and salaries will stagnate again, as always happens.

ACCount37 No.45066524
I'm tired of all the "yet another tool" reductionism. It reeks of cope.

It took under a decade to get AI to this stage - where it can build small scripts and tiny services entirely on its own. I see no fundamental limitations that would prevent further improvements, and I see no reason why it would stop at human-level performance either.

phailhaus No.45066708
> I see no fundamental limitations

How about the fact that AI is only trained to complete text and literally has no "mind" within which to conceive or reason about concepts? Fundamentally, it is only trained to sound like a human.

ACCount37 No.45067054
The simplest system that acts entirely like a human is a human.

An LLM base model isn't trained for abstract thinking, but it still ends up developing abstract thinking internally - because that's the easiest way for it to mimic the breadth and depth of the training data. All LLMs operate in abstractions, using the same kind of informal reasoning humans do. Even the mistakes they make are amusingly humanlike.

There's no part of an LLM that's called a "mind", but it has a "forward pass", which is quite similar in function. An LLM reasons in small slices - elevating its input text to a highly abstract representation, then reducing it back down to next-token prediction logits, one token at a time.
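To make that loop concrete, here's a rough sketch in Python using gpt2 via Hugging Face transformers, chosen purely for illustration (the prompt, model, and 10-token limit are arbitrary; it does plain greedy decoding, no sampling). Each forward pass lifts the token sequence into the model's internal representation and projects it back down to next-token logits, and one token gets appended per iteration:

    # Minimal greedy autoregressive decoding sketch (illustrative only).
    import torch
    from transformers import AutoModelForCausalLM, AutoTokenizer

    tokenizer = AutoTokenizer.from_pretrained("gpt2")
    model = AutoModelForCausalLM.from_pretrained("gpt2")
    model.eval()

    input_ids = tokenizer("The capital of France is", return_tensors="pt").input_ids

    with torch.no_grad():
        for _ in range(10):                       # generate 10 tokens
            logits = model(input_ids).logits      # [batch, seq_len, vocab_size]
            next_id = logits[:, -1, :].argmax(dim=-1, keepdim=True)  # greedy pick
            input_ids = torch.cat([input_ids, next_id], dim=-1)      # append and repeat

    print(tokenizer.decode(input_ids[0]))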

phailhaus No.45068143
> The simplest system that acts entirely like a human is a human.

LLMs do not act entirely like a human. If they did, we'd be celebrating AGI!

ACCount37 No.45068748
They merely act sort of like a human - which is entirely expected, given that the datasets they're trained on only capture some facets of human behavior.

Don't expect them to show mastery of spatial reasoning or agentic behavior or physical dexterity out of the box.

They still capture enough humanlike behavior to yield the most general AI systems ever built.