
S1: A $6 R1 competitor?

(timkellogg.me)
851 points | tkellogg | 1 comment
yapyap No.42947816
> If you believe that AI development is a prime national security advantage, then you absolutely should want even more money poured into AI development, to make it go even faster.

This, this is the problem for me with people deep in AI. They think it's the be-all and end-all for everything. They have the vision of the 'AI' they've seen in movies in mind, see the current 'AI' being used, and to them it's basically almost the same; their brains are mentally bridging the two concepts and telling them it's only a matter of time.

To me, that's stupid. I observe the more populist and socially appealing CEOs of these VC-backed startups (Sam Altman being the biggest, of course) just straight up lying to the masses, for financial gain, of course.

Real AI, artificial intelligence, is a fever dream. This is machine learning except the machines are bigger than ever before. There is no intellect.

And the enthusiasm of the people who are into it feeds into those who aren't aware of it in the slightest: they see you can chat with a 'robot', they hear all this hype from their peers, and they buy into it. We are social creatures, after all.

I think using any of this in a national security setting is stupid, wasteful, and very, very insecure.

Hell, if you really care about being ahead, pour 500 billion dollars into quantum computing so you can try to break current encryption. That'll get you so much further than this nonsensical BS.

1. ninetyninenine No.42948820
I agree AGI won't solve national security, but saying this isn't intelligence is false.

This is AI, and trend lines point to an intelligence that matches or even exceeds human intellect in the future.

You're part of a trend of people in denial. When LLMs first came out, there were hordes of people on HN claiming they were just stochastic parrots and displayed zero intellectual ability. It is now abundantly clear that this is not true.

We don't fully understand LLMs. That's why gains like CoT (chain-of-thought) are just black-box adjustments that come from changing external configurations. We have no way to read the contents of the black box and make adjustments based on it. Yet idiots like you can make such vast and hard claims when nobody really fully understands these things. You're delusional.

I agree that LLMs won't allow us to make some super-weapon that gives us an edge in national security.