
Grok 3: Another win for the bitter lesson

(www.thealgorithmicbridge.com)
129 points by kiyanwang | 54 comments
1. ArtTimeInvestor ◴[] No.43112245[source]
It looks like the USA is bringing in-house all the technology needed to build AI.

TSMC has a factory in the USA now, and so does ASML. OpenAI, Google, xAI and Nvidia are natively American companies.

Meanwhile, no other country is even close to building AI on its own.

Is the USA going to "own" the world by becoming the keeper of AI? Or is there an alternative future that has a probability > 0?

replies(7): >>43112250 #>>43112266 #>>43112288 #>>43112313 #>>43113081 #>>43113084 #>>43113181 #
2. OccamsMirror ◴[] No.43112250[source]
Are LLMs really going to own the world?
replies(3): >>43112275 #>>43112276 #>>43112284 #
3. lompad ◴[] No.43112266[source]
You implicitly assume that LLMs are actually important enough to make a difference at the geopolitical level.

So far, I haven't seen any indication that this is the case. And I'd say hyped-up speculation by people financially incentivized to hype AI should be taken with an entire mine full of salt.

replies(5): >>43112290 #>>43112419 #>>43112691 #>>43112716 #>>43113043 #
4. ArtTimeInvestor ◴[] No.43112275[source]
It looks like neural-network-based software is going to surpass humans in intelligence at every task in the foreseeable future.

If one country moves in this direction faster than the others, no other country will stand a chance of competing with it militarily or economically.

replies(3): >>43112439 #>>43112575 #>>43116003 #
5. ben_w ◴[] No.43112276[source]
LLMs aren't the only kind of AI.

Having the hardware and software suppliers all together makes that more likely, even if you assume (like I do) that we're at least one paradigm shift away from the right architecture, despite how impressively general Transformers have been.

But software is easy to exfiltrate, so I think anyone with hardware alone can catch up extremely fast.

6. throw310822 ◴[] No.43112284[source]
Intelligence is everything. These things are intelligent: already superhuman in speed and in a few limited domains, and soon they're going to exceed humans in almost every respect. The advantage they give to the country that owns them is akin to that of nuclear weapons.
replies(2): >>43113583 #>>43119372 #
7. cgcrob ◴[] No.43112288[source]
I would expect it will be the market leader, yes. But is there a market large enough to support the investment? That is debatable. If there isn't, then they will be running a deficit that is likely to do serious damage to the economy and investor confidence.

Currently there is no hard ROI on LLMs, for example, other than forced bundling, using them to justify soft outcomes (layoffs), and generating trash. User interest and revenue drop off fairly quickly. And there are regulations coming in elsewhere.

It’s really not looking good.

8. ArtTimeInvestor ◴[] No.43112290[source]
First, it's not just about LLMs. It's not an LLM that replaced human drivers in Waymo cars.

Second, how could AI not be the deciding geopolitical factor of the future? Do you expect progress to stop and AI not to achieve and surpass human intelligence?

replies(3): >>43112319 #>>43112382 #>>43112437 #
9. losteric ◴[] No.43112313[source]
The US has been reshoring hardware for a while, but that didn't stop DeepSeek and certainly won't prevent presently allied powers from building AIs.

A big lesson seems to be that one can rapidly close the gap, with much less compute, once paths have been blazed by others. There’s a first-mover disadvantage.

replies(1): >>43112383 #
10. Eikon ◴[] No.43112319{3}[source]
> You expect progress to stop and AI not to achieve and surpass human intelligence?

A word generator is not intelligence. There’s no “thinking” involved here.

To surpass human intelligence, you'd first need to actually develop intelligence, and LLMs will not be it.

replies(1): >>43112519 #
11. lompad ◴[] No.43112382{3}[source]
>First, it's not just about LLMs. It's not an LLM that replaced human drivers in Waymo cars.

As far as I know, Waymo is still not even remotely able to operate in any kind of difficult environment, even though insane amounts of money have been poured into it. You are vastly overstating its capabilities.

Is it cool tech? Sure. Is it safely going to replace all drivers? I very much doubt it.

Secondly, this only works if progress in AI does not stagnate. And, again, you have no grounds to actually make that claim. It's all built on the fanciful notion that we're close to AGI. I disagree heavily and think it's much further away than people profiting financially from the hype tend to claim.

replies(1): >>43112804 #
12. ArtTimeInvestor ◴[] No.43112383[source]
DeepSeek built their software on Nvidia hardware, which in turn needs TSMC fabs and ASML machines to be built.

Even China has not managed to remotely catch up with this hardware stack, even though the trail has been blazed by ASML, TSMC and Nvidia.

replies(1): >>43113221 #
13. OtherShrezzing ◴[] No.43112419[source]
I think ground zero for that line of thought is Leopold Aschenbrenner[0], who I believe now runs an AI-focused hedge fund.

[0] https://situational-awareness.ai

14. ozornin ◴[] No.43112437{3}[source]
> how could AI not be the deciding geopolitical factor of the future?

Easily. Natural resources, human talent, land and supply chains all are, and will remain, more important factors than AI.

> You expect progress to stop

no

> and AI not to achieve and surpass human intelligence

yes

15. viraptor ◴[] No.43112439{3}[source]
> no country will stand a chance to compete with them militarily or economically.

It really depends on how they go about it. It can easily end up instead with lots of people out of work, without social security, and disillusioned with the country. Instead of being economically great, the country may end up fighting uprisings and sabotage.

16. willvarfar ◴[] No.43112519{4}[source]
I get that LLMs are just doing probabilistic prediction etc. It's all Hutter Prize stuff.

But how are animals with nerve centres or brains different? What do we think we humans do differently, such that we are not just very big probabilistic prediction systems?

A completely different tack: if we develop the technology to engineer animal-style nerves and form them into big lumps called 'brains', in what way is that not both artificial and intelligent? And if we can do that, what is to stop that manufactured brain from being twice or ten times larger than a human's?
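
To make "probabilistic prediction" concrete, here is a toy sketch in Python: a bigram counter that samples the next word in proportion to observed frequencies. It is nothing like a real LLM, just the bare idea of next-token prediction reduced to its simplest form.

    # Toy next-word predictor: count bigrams in a tiny corpus, then sample the
    # next word in proportion to how often it followed the previous one.
    import random
    from collections import defaultdict, Counter

    corpus = "the cat sat on the mat and the dog sat on the rug".split()

    following = defaultdict(Counter)
    for prev, nxt in zip(corpus, corpus[1:]):
        following[prev][nxt] += 1

    def predict_next(word: str) -> str:
        """Sample the next word from the empirical distribution after `word`."""
        counts = following[word]
        if not counts:  # dead end: no observed continuation
            return random.choice(corpus)
        words, weights = zip(*counts.items())
        return random.choices(words, weights=weights, k=1)[0]

    words_out = ["the"]
    for _ in range(8):
        words_out.append(predict_next(words_out[-1]))
    print(" ".join(words_out))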

replies(6): >>43112567 #>>43112751 #>>43112930 #>>43113206 #>>43113272 #>>43113546 #
17. dkjaudyeqooe ◴[] No.43112567{5}[source]
Human (and other animal) brains probably are probabilistic, but we don't understand their structure or mechanism in fine enough detail to replicate them, or simulate them.

People think LLMs are intelligent because intelligence is latent within the text they digest, process and regurgitate. Their performance reflects this trick.

18. hagbarth ◴[] No.43112575{3}[source]
How so? First of all, assuming ASI is developed, as things stand now it will be owned by a private corporation, not a nation state.

ASI also will not be magic. Like, what exactly would it be doing that enables one country to subjugate the others? Develop new weapons? We already have the capability to destroy the Earth. Actually, come to think of it, if ASI is an existential threat to other nations, maybe the rational action would be to nuke whichever country develops it first. To save the world.

You see what I am saying? There is such a thing as the real world with real constraints.

replies(1): >>43112736 #
19. fnordsensei ◴[] No.43112691[source]
They seem popular enough that they could be leveraged to influence opinion and twist perception, as has been done with social media.

Or, as is already happening, they can be used to influence opinion and twist perception within tools and services that people already use, such as social media.

replies(1): >>43112995 #
20. tankenmate ◴[] No.43112716[source]
It's an economic benefit. It's not a panacea but it does make some tasks much cheaper.

On the other hand if the economic benefit isn't shared across the whole of society it will become a destabilising factor and hence reduce the overall economic benefit it might have otherwise borne.

21. ◴[] No.43112736{4}[source]
22. grumbel ◴[] No.43112751{5}[source]
I don't think the probabilistic prediction is a problem. The problem with current LLMs is that they are limited to "System 1" thinking, only giving you a fast, instinctive response to a question. While that works great for a lot of small problems, it completely falls apart on any larger task that requires multiple steps or backtracking. "System 2" thinking is completely missing, as is the ability to just self-iterate on their own output.

Reasoning models are trying to address that now; monologuing in token space still feels more like a hack than a real solution, but it does improve their performance a good bit nonetheless.

In practical terms, all this means is that current LLMs still need a hell of a lot of hand-holding and fail at anything more complex, even if their "System 1" thinking is good enough for the task (e.g. they can write Tetris in 30 seconds no problem, but they can't write Super Mario Bros at all, since that has numerous levels that would blow the context window size).
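
As a rough, purely hypothetical sketch of the self-iteration loop that a single forward pass lacks (the call_llm function below is a placeholder for any completion call, not any particular vendor's API):

    # Hypothetical "System 2"-style loop: draft, critique, revise.
    # call_llm is a stand-in for whatever text-completion call you have.
    def call_llm(prompt: str) -> str:
        raise NotImplementedError("plug in a real model call here")

    def solve_iteratively(task: str, max_rounds: int = 3) -> str:
        draft = call_llm(f"Task: {task}\nWrite a first attempt.")
        for _ in range(max_rounds):
            critique = call_llm(
                f"Task: {task}\nAttempt:\n{draft}\n"
                "List concrete problems with this attempt, or reply exactly OK."
            )
            if critique.strip() == "OK":
                break  # the model is satisfied with its own output
            draft = call_llm(
                f"Task: {task}\nAttempt:\n{draft}\nProblems:\n{critique}\n"
                "Write a revised attempt that fixes these problems."
            )
        return draft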

replies(1): >>43114402 #
23. technocrat8080 ◴[] No.43112804{4}[source]
Vastly overstating its capabilities? SF is ~crawling~ with them 24/7 and I've yet to meet someone who's had a bad experience in one of them. They operate more than well enough to replace rideshare drivers, and they have been.
replies(2): >>43112863 #>>43113432 #
24. dash2 ◴[] No.43112863{5}[source]
But SF is a single US city built on a grid. Try London or Manila.
replies(2): >>43113192 #>>43113605 #
25. Eikon ◴[] No.43112930{5}[source]
> But how are animals with nerve-centres or brains different? What do we think us humans do differently so we are not just very big probabilistic prediction systems?

If you believe in free will, then we are not.

26. krainboltgreene ◴[] No.43112995{3}[source]
So has Kendrick Lamar's hit song, but no one is suggesting that it has geopolitical implications.

27. spacebanana7 ◴[] No.43113043[source]
The same stack is required for other AI stuff like diffusion models as well.
28. spacebanana7 ◴[] No.43113081[source]
> Is the USA going to "own" the world by becoming the keeper of AI?

China has a realistic prospect of developing an independent stack.

It'll be very difficult, especially at the level of developing good enough semiconductor fabs with EUV. However, they're not starting from scratch in terms of a domestic semiconductor industry. And their software development / AI research capabilities are already near par with the US.

But they do have a whole-of-nation approach to this, and are willing to do whatever it takes.

29. wallaBBB ◴[] No.43113084[source]
What factories are TSMC and ASML operating in the US?

30. ZiiS ◴[] No.43113181[source]
Even if you believe that all those companies are exclusively working towards the USA's aims, and ignore that the output of TSMC's and ASML's US factories is not yet even a rounding error on their production: do you seriously doubt that espionage still works?
31. namaria ◴[] No.43113192{6}[source]
That's usually how it goes with 'AI'. It is very impressive on the golden path, but the world is 80% edge cases.
32. ◴[] No.43113206{5}[source]
33. ZiiS ◴[] No.43113221{3}[source]
The PRC considers Taiwan, and hence TSMC, to be part of China. While it is easy to disagree with this politically, if push came to shove it would be much harder to disagree practically.
replies(1): >>43122063 #
34. sampo ◴[] No.43113272{5}[source]
> But how are animals with nerve-centres or brains different?

In current LLM neural networks, the signal proceeds in one direction, from input, through the layers, to output. To the extent that LLMs have memory and feedback loops, it's that they write the output of the process to text, and then read that text and process it again through their unidirectional calculations.

Animal brains have circular signals and feedback loops.

There are Recurrent Neural Network (RNN) architectures, but current LLMs are not these.
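
A minimal sketch of that picture, with the forward pass left as a hypothetical stand-in: the model maps a token sequence to a next-token distribution, and the only "feedback loop" is appending the sampled token and running the longer sequence through again.

    # Sketch of unidirectional generation: the only memory is the text itself.
    # forward_pass is a hypothetical stand-in for a real model's forward pass.
    import random

    def forward_pass(tokens: list[str]) -> dict[str, float]:
        """One-way pass: token sequence in, next-token probabilities out."""
        raise NotImplementedError("a real model would go here")

    def generate(prompt: list[str], n_new: int) -> list[str]:
        tokens = list(prompt)
        for _ in range(n_new):
            probs = forward_pass(tokens)  # input -> layers -> output, one direction
            words, weights = zip(*probs.items())
            tokens.append(random.choices(words, weights=weights, k=1)[0])
            # The sampled token is simply appended and re-read on the next pass;
            # no signal flows backwards inside the network between steps.
        return tokens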

35. Y-bar ◴[] No.43113432{5}[source]
SF has pretty much the best weather there is to drive in. Try putting them on Minnesota winter roads, or muddy roads in Kansas for example.
replies(1): >>43114373 #
36. habinero ◴[] No.43113546{5}[source]
> But how are animals with nerve-centres or brains different? What do we think us humans do differently so we are not just very big probabilistic prediction systems?

I see this statement thrown around a lot and I don't understand why. We don't process information like computers do. We don't learn like they do, either. We have huge portions of our brains dedicated to communication and problem solving. Clearly we're not stochastic parrots.

> if we develop the technology to engineer animal-style nerves and form them into big lumps called 'brains'

I think y'all vastly underestimate how complex and difficult a task this is.

It's not even "draw a circle, draw the rest of the owl", it's "draw a circle, build the rest of the Dyson sphere".

It's easy to _say_ it, it's easy to picture it, but actually doing it? We're basically at zero.

replies(1): >>43114490 #
37. habinero ◴[] No.43113583{3}[source]
This is just flat out not true. They're not intelligent and not capable of becoming so. They aren't reliable, by design.

They're a wildly overhyped solution in search of a problem.

replies(2): >>43114088 #>>43116100 #
38. rafaelmn ◴[] No.43113605{6}[source]
With the nicest weather on the planet, probably.
39. throw310822 ◴[] No.43114088{4}[source]
I don't understand this attitude and I am not sure where it comes from: either from generic skepticism, or from some sort of psychological refusal.(*) It's just obvious to me that you're completely wrong and you'll have a hard wake-up, eventually.

* "I know how this works and it's just numbers all the way down" is not an argument of any validity, just to be clear: everything eventually is just physics, blind mechanics.

replies(2): >>43124908 #>>43135641 #
40. fragmede ◴[] No.43114373{6}[source]
How stupid of Google. Instead of getting their self-driving car technology to work in a blizzard first, and then working on getting it working in a city, they chose to get it working in a city first, before getting it to work in inclement weather. What idiots!
replies(1): >>43114590 #
41. fragmede ◴[] No.43114402{6}[source]
Give it a filesystem, like you can with Claude computer use, and you can have it make and forget memories to adapt to a limited context window size.
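
Something like the following, as a purely illustrative sketch (the tool names and shapes are made up here, not Claude's actual tool interface): expose read/write/delete helpers over a scratch directory and let the model decide what to persist and what to drop from context.

    # Illustrative filesystem-backed "memory" tools a model could be given.
    from pathlib import Path

    MEMORY_DIR = Path("memories")
    MEMORY_DIR.mkdir(exist_ok=True)

    def save_memory(name: str, content: str) -> str:
        """Write a note to disk so it can safely drop out of the context window."""
        (MEMORY_DIR / f"{name}.txt").write_text(content)
        return f"saved: {name}"

    def load_memory(name: str) -> str:
        """Pull a previously saved note back into the conversation."""
        path = MEMORY_DIR / f"{name}.txt"
        return path.read_text() if path.exists() else f"no memory named: {name}"

    def forget_memory(name: str) -> str:
        """Delete a note the model no longer needs."""
        (MEMORY_DIR / f"{name}.txt").unlink(missing_ok=True)
        return f"forgot: {name}"

    def list_memories() -> list[str]:
        """List stored notes so the model can decide what to reload."""
        return sorted(p.stem for p in MEMORY_DIR.glob("*.txt"))
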
42. fragmede ◴[] No.43114490{6}[source]
> Clearly we're not stochastic parrots

On Internet comment sections, that's not clear to me. Memes are incredibly infectious, as we can see by looking at, say, a thread about Nvidia: it's inevitable that someone is going to ask about a moat. In a thread about LLMs, the likelihood of stochastic parrots getting a mention approaches one as the thread gets longer. What does it all mean?

replies(1): >>43116732 #
43. Y-bar ◴[] No.43114590{7}[source]
I hope you are being sarcastic! Because it is quite expected that they would test where it is easy first. The stupid ones are those who parrot the incorrect assumption that self-driving cars are comparable to humans at general driving, when statistics on general driving include lots of driving in suboptimal conditions.
44. rocmcd ◴[] No.43116003{3}[source]
If this is true, then shouldn't we expect an economic "bump" from NN/LLMs/AI as they are today?

I have not noticed companies or colleagues 10x'ing (hell, or even 1.5x'ing) their productivity from these tools. What am I missing?

replies(2): >>43118138 #>>43123448 #
45. Workaccount2 ◴[] No.43116100{4}[source]
My non-tech company already uses LLMs where we used to contract software people (for two years now, with no unresolvable issues). I myself also used LLMs to write an app which is used by people on the production floor now (I'm not a programmer and definitely don't know Kotlin).

Maybe LLMs can't work on huge code bases yet, but for writing bespoke software for individuals who need a computer to do xyz but can't speak the language, it already is working wonders.

Being dismissive of LLMs while sitting above their current scope of capabilities gives strong Microsoft 2007 vibes: "The iPhone is a laughable device that presents no threat to Windows Mobile."

replies(2): >>43119386 #>>43135727 #
46. staticman2 ◴[] No.43116732{7}[source]
You seem to be confusing brain design with uniqueness.

If every single human on Earth were an identical clone with the same cultural upbringing and similar language, conversational choices, opinions and feelings, they still wouldn't work like an LLM and still wouldn't be stochastic parrots.

47. ArtTimeInvestor ◴[] No.43118138{4}[source]
What do your colleagues do?

I see people getting replaced by AI left and right.

Translators, illustrators, voice over artists, data researchers, photographers, models, writers, personal assistants, drivers, programmers ...

48. staticman2 ◴[] No.43119372{3}[source]
"The advantage they give to the country that owns them is nuclear-weapons like."

I think the idea that the United States "owns" Grok 3 would be news to Musk and the idea it "owns" ChatGPT would be news to Altman.

49. riku_iki ◴[] No.43119386{5}[source]
> Maybe LLMs can't work on huge code bases yet

It's also not just about codebase size, but also about your expectations of output quality and correctness.

50. quesera ◴[] No.43122063{4}[source]
The common belief appears to be that PRC can successfully assimilate Taiwan, but not with an intact and operable semiconductor industry.
51. mh- ◴[] No.43123448{4}[source]
There's an implicit assumption here that if a colleague did figure out how to (e.g.) 10x their output with new tools, the employer would capture all (e.g.) 10x of that increased productivity.
52. Amekedl ◴[] No.43124908{5}[source]
Check out operations research.

The amount of "work" done there is staggering, and yet adoption appears abysmal; using such solutions with success only happens as part of a really well-oiled machine.

And what about the simple difficulty of going from 99% to 99.9%? What percentage are we even talking about today? We don't know, but very rich people think it is cool and blindly keep investing more billions.

53. habinero ◴[] No.43135641{5}[source]
You're entirely free to go on and on about how amazing the emperor's clothes are. Nobody can stop you. :)

It's fine to chase hype for hobbyist or starter projects, but part of being an engineer is understanding how things actually work and what their limitations are.

It's not a virtue to deify a statistics model and make it your entire personality.

54. habinero ◴[] No.43135727{5}[source]
If you're (1) doing something basic and (2) don't care about correctness or quality or reliability and (3) don't need to change or maintain it, then by all means, use it. It's literally no different from copying off StackOverflow (and it was probably generated from it).

If you aren't an engineer, I get why you think it's magic. Everything is magic when you don't understand how it works.

Nobody thought the iPhone was magic. It was an instant hit because the capabilities were immediate and obvious, and Apple had a long history of being able to execute.

If you find it useful, by all means, use it. But this is the new blockchain.