S1: A $6 R1 competitor?

(timkellogg.me)
851 points by tkellogg | 3 comments
yapyap ◴[] No.42947816[source]
> If you believe that AI development is a prime national security advantage, then you absolutely should want even more money poured into AI development, to make it go even faster.

This, this is the problem for me with people deep in AI. They think it’s the end-all, be-all for everything. They have the vision of the ‘AI’ they’ve seen in movies in mind, see the current ‘AI’ being used, and to them it’s basically the same; their brains are mentally bridging the concepts and saying it’s only a matter of time.

To me, that’s stupid. I observe the more populist and socially appealing CEOs of these VC startups (Sam Altman being the biggest, of course) just straight up lying to the masses, for financial gain of course.

Real AI, artificial intelligence, is a fever dream. This is machine learning except the machines are bigger than ever before. There is no intellect.

And the enthusiasm of the people who are into it feeds into those who aren’t aware of it in the slightest: they see you can chat with a ‘robot’, they hear all this hype from their peers, and they buy into it. We are social creatures after all.

I think using any of this in a national security setting is stupid, wasteful and very, very insecure.

Hell, if you really care about being ahead, pour 500 billion dollars into quantum computing so you can try to break current encryption. That’ll get you so much further than this nonsensical BS.

replies(17): >>42947884 #>>42947936 #>>42947969 #>>42948058 #>>42948088 #>>42948174 #>>42948256 #>>42948288 #>>42948303 #>>42948370 #>>42948454 #>>42948458 #>>42948594 #>>42948604 #>>42948615 #>>42948820 #>>42949189 #
1. dotancohen ◴[] No.42948615[source]

  > Real AI, artificial intelligence, is a fever dream. This is machine learning except the machines are bigger than ever before. There is no intellect.
That sounds to me like dismissing the idea that a Russian SSBN might cross the Pacific and nuke Los Angeles because "submarines can't swim".

Even if the machine learning isn't really intelligent, it is still capable of performing IF..THEN..ELSE operations, which could have detrimental effects for [some subset of] humans.

And even if you argue that such a machine _shouldn't_ be used for whatever doomsday scenario would harm us, rest assured that someone, somewhere, who either does not understand what the machines are designed to do or just pretends that they work like magic, will put the machines in a position to make such a decision.
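
To make that concrete, here is a minimal, hypothetical sketch (function names and thresholds are invented for illustration): the model does not need to understand anything for its score to drive exactly the kind of automated IF..THEN..ELSE decision described above.

  # Hypothetical illustration: a model score wired into a hard-coded rule
  # becomes an automated decision, whether or not any "intellect" is involved.

  def classify_threat(sensor_reading: float) -> float:
      """Stand-in for any ML model: returns a confidence score in [0, 1]."""
      return min(max(sensor_reading / 100.0, 0.0), 1.0)

  def respond(sensor_reading: float) -> str:
      score = classify_threat(sensor_reading)
      # The IF..THEN..ELSE from the comment above: no understanding required,
      # just a threshold someone chose to act on automatically.
      if score > 0.9:
          return "escalate"
      elif score > 0.5:
          return "alert a human operator"
      return "log and ignore"

  print(respond(97.0))  # -> "escalate", regardless of whether the model "understands"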

replies(1): >>42953253 #
2. UncleEntity ◴[] No.42953253[source]
One could hope...

Even at the height of the Cold War there was always a human between <leader presses button> and <nukes go aflyin'>.

--edit--

...which has me wondering if a president even has the constitutional authority to destroy the entire planet and if one could interpret their command as a 'lawful order'. Makes one think.

replies(1): >>42953522 #
3. willglynn ◴[] No.42953522[source]
On the topic of fail-deadly nukes:

https://en.wikipedia.org/wiki/Dead_Hand