
Grok 3: Another win for the bitter lesson

(www.thealgorithmicbridge.com)
129 points | kiyanwang | 1 comment
ArtTimeInvestor (No.43112245)
It looks like the USA is bringing in-house all the technology needed to build AI.

TSMC has a factory in the USA now, and so does ASML. OpenAI, Google, xAI, and Nvidia are native to the USA.

Meanwhile, no other country is even close to being able to build AI on its own.

Is the USA going to "own" the world by becoming the keeper of AI? Or is there an alternative future that has a probability > 0?

lompad (No.43112266)
You implicitly assume that LLMs are actually important enough to make a difference at the geopolitical level.

So far, I haven't seen any indication that this is the case. And I'd say hyped-up speculation by people financially incentivized to hype AI should be taken with an entire mine full of salt.

ArtTimeInvestor (No.43112290)
First, it's not just about LLMs. It's not an LLM that replaced human drivers in Waymo cars.

Second, how could AI not be the deciding geopolitical factor of the future? You expect progress to stop and AI not to achieve and surpass human intelligence?

Eikon (No.43112319)
> You expect progress to stop and AI not to achieve and surpass human intelligence?

A word generator is not intelligence. There’s no “thinking” involved here.

To surpass human intelligence, you'd first need to actually develop intelligence, and LLMs will not be it.

willvarfar (No.43112519)
I get that LLMs are just doing probabilistic prediction, etc. It's all Hutter Prize stuff (the premise there being that better prediction is better compression).
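
(To make concrete what I mean by probabilistic prediction, here's a toy sketch: a character-level bigram sampler in Python, with a made-up corpus. A real LLM is a transformer over tokens, not this, but the "predict the next symbol from observed frequencies" idea is the same:)

    from collections import Counter, defaultdict
    import random

    # Count character bigrams in a tiny made-up corpus.
    corpus = "the cat sat on the mat and the dog sat on the log"
    counts = defaultdict(Counter)
    for a, b in zip(corpus, corpus[1:]):
        counts[a][b] += 1

    # Generate text by sampling each next character from the
    # distribution observed after the current character.
    def next_char(c):
        chars, weights = zip(*counts[c].items())
        return random.choices(chars, weights=weights)[0]

    text = "t"
    for _ in range(30):
        text += next_char(text[-1])
    print(text)  # e.g. "the mat ond the sat on t..."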

But how are animals with nerve-centres or brains different? What do we think us humans do differently so we are not just very big probabilistic prediction systems?

A completely different tack: if we develop the technology to engineer animal-style nerves and form them into big lumps called 'brains', in what way is that not both artificial and intelligent? And if we can do that, what is to stop that manufactured brain from being twice or ten times larger than a human's?

habinero (No.43113546)
> But how are animals with nerve-centres or brains different? What do we think us humans do differently so we are not just very big probabilistic prediction systems?

I see this statement thrown around a lot and I don't understand why. We don't process information like computers do. We don't learn like they do, either. We have huge portions of our brains dedicated to communication and problem solving. Clearly we're not stochastic parrots.

> if we develop the technology to engineer animal-style nerves and form them into big lumps called 'brains'

I think y'all vastly underestimate how complex and difficult a task this is.

It's not even "draw a circle, draw the rest of the owl", it's "draw a circle, build the rest of the Dyson sphere".

It's easy to _say_ it, it's easy to picture it, but actually doing it? We're basically at zero.

fragmede (No.43114490)
> Clearly we're not stochastic parrots

On Internet comment sections, that's not clear to me. Memes are incredibly infectious, as we can see by looking at, say, a thread about Nvidia: it's inevitable that someone is going to ask about a moat. In a thread about LLMs, the likelihood of stochastic parrots getting a mention approaches one as the thread gets longer. What does it all mean?
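
(Back-of-envelope for "approaches one": if each comment independently mentions them with some probability p — the value of p here is made up — the chance of at least one mention in n comments is 1 - (1 - p)^n, which tends to 1 as n grows:)

    # Hypothetical per-comment mention probability.
    p = 0.05
    for n in (10, 50, 200):
        print(n, round(1 - (1 - p) ** n, 4))
    # 10 0.4013
    # 50 0.9231
    # 200 1.0 (i.e. 0.99996...)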

staticman2 (No.43116732)
You seem to be confusing brain design with uniqueness.

If every single human on earth were an identical clone with the same cultural upbringing, similar language and conversational choices, and the same opinions and feelings, they still wouldn't work like an LLM and still wouldn't be stochastic parrots.