
S1: A $6 R1 competitor?

(timkellogg.me)
851 points by tkellogg | 10 comments
yapyap ◴[] No.42947816[source]
> If you believe that AI development is a prime national security advantage, then you absolutely should want even more money poured into AI development, to make it go even faster.

This, this is the problem for me with people deep in AI. They think it’s the end-all, be-all for everything. They have the vision of the ‘AI’ they’ve seen in movies in mind, see the current ‘AI’ being used, and to them it’s basically the same; their brain is mentally bridging the two concepts and telling them it’s only a matter of time.

To me, that’s stupid. I observe the more populist and socially appealing CEOs of these VC startups (Sam Altman being the biggest, of course) just straight up lying to the masses, for financial gain, of course.

Real AI, artificial intelligence, is a fever dream. This is machine learning except the machines are bigger than ever before. There is no intellect.

And the enthusiasm of the people who are into it feeds into those who aren’t aware of it in the slightest: they see you can chat with a ‘robot’, they hear all this hype from their peers, and they buy into it. We are social creatures, after all.

I think using any of this in a national security setting is stupid, wasteful and very, very insecure.

Hell, if you really care about being ahead, pour 500 billion dollars into quantum computing so you can try to break current encryption. That’ll get you so much further than this nonsensical bs.

replies(17): >>42947884 #>>42947936 #>>42947969 #>>42948058 #>>42948088 #>>42948174 #>>42948256 #>>42948288 #>>42948303 #>>42948370 #>>42948454 #>>42948458 #>>42948594 #>>42948604 #>>42948615 #>>42948820 #>>42949189 #
1. spacebanana7 ◴[] No.42947936[source]
> I think using any of this in a national security setting is stupid

What about AI enabled drones and guided missiles/rockets? The case for their effectiveness is relatively simple in terms of jamming resistance.

replies(5): >>42947985 #>>42947994 #>>42948104 #>>42948147 #>>42963119 #
2. pjc50 ◴[] No.42947985[source]
Like a lot of AI boosters: would you like to explain how that works, other than magic AI dust? Some forms of optical guidance are already in use, but there are other limitations (lighting! weather!)
replies(1): >>42948122 #
3. GTP ◴[] No.42947994[source]
This somehow reminds me of a certain killer robot from a Black Mirror episode ;)
4. amarcheschi ◴[] No.42948104[source]
I would say that they don't require a $500bn investment. AFAIK, drones that help lock onto targets have already started being used in Ukraine.
replies(1): >>42948150 #
5. spacebanana7 ◴[] No.42948122[source]
Sure thing. The basic idea would be:

1) Have a camera on your drone.
2) Run some frames through a locally running version of something like AWS Rekognition's celebrity identification service, but for relevant military targets.
3) Navigate towards the coordinates of the target individuals.

It isn't exactly magic; here's a video of a guy doing navigation with OpenCV on images: https://www.youtube.com/watch?v=Nrzs3dQ9exw
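
Purely as an illustration (not taken from that video), here's a minimal Python sketch of those three steps, assuming only that OpenCV is installed. OpenCV's stock HOG person detector stands in for the recognition model, and "navigation" is reduced to printing a pixel offset from the frame centre rather than issuing any real guidance commands:

    # Minimal sketch: camera -> local detector -> steering cue.
    # Assumes opencv-python is installed; the person detector is a stand-in
    # for whatever purpose-trained recognition model a real system would use.
    import cv2

    # Step 2 stand-in: a generic, locally running detector.
    hog = cv2.HOGDescriptor()
    hog.setSVMDetector(cv2.HOGDescriptor_getDefaultPeopleDetector())

    # Step 1: frames from an onboard camera (a webcam here, for illustration).
    cap = cv2.VideoCapture(0)

    while True:
        ok, frame = cap.read()
        if not ok:
            break

        rects, weights = hog.detectMultiScale(frame, winStride=(8, 8))
        if len(rects) > 0:
            # Take the highest-confidence detection.
            (x, y, w, h), _ = max(zip(rects, weights), key=lambda rw: float(rw[1]))
            target_cx, target_cy = x + w // 2, y + h // 2
            frame_cx, frame_cy = frame.shape[1] // 2, frame.shape[0] // 2

            # Step 3 stand-in: the offset a guidance loop would try to drive to zero.
            print(f"steer by ({target_cx - frame_cx}, {target_cy - frame_cy}) px")
            cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)

        cv2.imshow("frame", frame)
        if cv2.waitKey(1) == 27:  # Esc to quit
            break

    cap.release()
    cv2.destroyAllWindows()

The printed offset is the quantity a guidance loop would try to drive to zero; everything hard (lighting, weather, countermeasures, the actual flight control) lives outside this sketch.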

replies(1): >>42948442 #
6. swiftcoder ◴[] No.42948147[source]
Drone and missile guidance systems have been using ML for decades at this point. That's just as much "AI" as anything currently coming out of the LLM craze.
replies(1): >>42956403 #
7. spacebanana7 ◴[] No.42948150[source]
I generally agree; piggybacking on innovations in smartphone GPUs / batteries will probably be enough to get locally running AI models into drones.
8. Hauthorn ◴[] No.42948442{3}[source]
I believe this is a capability that the Switchblade 600 or STM KARGU already has.

https://en.wikipedia.org/wiki/STM_Kargu

9. int_19h ◴[] No.42956403[source]
It's not just target guidance at this point. There are prototypes of drone swarms, for example.
10. theGnuMe ◴[] No.42963119[source]
I think jamming resistance is a red herring. AI weapons will have their own failure modes under jamming, and any sensor modality has its own particular weaknesses. Reasoning models malfunction as well, i.e. hallucinations.

Not to mention spoofed GPS, etc...