Actually I think a lot of people have it - just yesterday I saw someone on reddit claim Google was evil because it was secretly founded by the US military. And they were American. That's their military!
Propping up an evil figure, regime, or ideology (Bolsheviks/Communists) to justify remorseless evil of one's own (concentration camps/the nuclear bomb) isn't new or unique, but it is particularly predictable.
It isn't this that's insane; it's the total belief in the purity of weapons that is.
The problem with building AI weapons is that eventually it will be in the hands of people who are morally bankrupt and therefore will use them to do evil.
There is at this moment little evidence that autonomous weapons will cause more collateral damage than artillery shells and regular air strikes. The military usefulness, on the other hand, seems to be very high and increasing.
Ideally no one. And if the cost and expertise are so niche that only a handful of sophisticated actors could actually do it, then in fact (by way of an enforceable treaty) no one.
In my garage, I have some pretty nasty "weapons" - notably a couple of chainsaws, some drills, chisels, lump/sledge/etc hammers and a fencing maul! The rest are merely: mildly malevolent.
You don't need an AI (whatever that means) to get medieval on someone. On the bright side the current state of AI (whatever that means) is largely bollocks.
Sadly, LLMs have and will be wired up to drones and the results will be unpredictable.
A car is a tool. It can be used as a weapon.
Even water and air can be used as a weapon if you try hard enough. There is probably nothing on this planet that couldn't be used as a weapon.
That said, I do not think AI weapons are a reasonable thing to build for any war, for any country, for any reason - even if the enemy has them.
I'm sure this sounds like a big nothingburger from the perspective of, you know, people he isn't threatening.
How can you excuse that behaviour? How can you think someone like that can be trusted with any weapons? How naive and morally bankrupt do you have to be to build a gun for that kind of person, and think that it won't be used irresponsibly?
Anyone who wants to establish deterrence against superiors or peers, and open up options for handling weaker opponents.
> enforceable treaty
Such a thing does not exist. International affairs are and will always be in a state of anarchy. If at some point they aren't, then there is no "international" anymore.
Look it up.
How would we go about doing that?
Every nefarious way of keeping the truth at bay in authoritarian regimes is always on the table. From cracking iPhones to track journalists covering these regimes, to snooping on email, to using AI for the same purpose: it's all the same thing, just with updated and improved tools.
Just like Kevin Mitnick selling zero day exploits to the highest bidder, I have a hard time seeing how these get developed and somehow stay out of reach of the regimes you speak of.
That it won't is a mixture of cowardice, cynical opportunism, and complicity with unprovoked aggression.
In which case, I posit that yes, if you're fine with threatening or inflicting violence on innocent people, you don't have a moral right to 'self-defense'. It makes you a predator, and arming a predator is a mistake.
You lose any moral ground you have when you are an unprovoked aggressor.
We're talking about making war slightly more expensive for yourself to preserve the things that matter, which is a trade-off that we make all the time. Even in war you don't have to race to the bottom for every marginal fraction-of-a-percent edge. We've managed to e.g. ban antipersonnel landmines, and this is an extremely similar case.
> How would you enforce it after you get nuked?
And yet we've somehow managed to avoid getting into nuclear wars.
Like skydiving without a parachute, I think we should accept that it is a bad idea without needing a double-blind study.
So you're in favor of losing a war and becoming a subject of the enemy? While it's certainly tempting to think that unilateralism can work, I can hardly see how.
When China attacks with AI weapons do you expect the free world to fight back armed with moral superiority? No. We need even more lethal AI weapons.
Mutual assured destruction has worked so far for nukes.
Should companies with consumer brands never make weapons? Sure, and while we're at it, let's ban knives because they can be used for both chopping vegetables and stabbing people. The issue isn't the technology itself. It's how it's regulated, controlled, and used. And as for cyber terrorism? That's a problem with bad actors, not with the tools themselves.
So, by all means, keep pointing out the hypocrisy of a company that makes both YouTube Kids and killer AI. Just don't pretend you're not benefiting from the same duality every time you use a smartphone or the internet, which, don't forget, are technologies born, ironically, from military research.
We have, of course, developed all three. They have gone a long way towards keeping us safe over the past century.
There are more options than arming an aggressor and capitulating to foreign powers. It's a false dichotomy to suggest it.
Of course. My point was, it is insane for those who do.
I never said that. Please don't reply to comments you made up in your head.
Using AI doesn't automagically equate to winning a war. Using AI could mean the AI kills all your own soldiers by mistake. AI is stupid, it just is. It "hallucinates" and often leads to wrong outcomes. It has never won a war, and there's no guarantee that it would help win any war.
It's legitimate to worry about scaled, automated control of weapons, since it could allow a very small number of people to harm a much larger number of people. That removes one of the best checks we have against the misuse of weaponry. If you have to muster a whole army to go kill a bunch of people, they can collectively revolt. (It's not always _easy_, but it's possible.)
Automated weapons are a lot like nuclear weapons in some ways. Once the hard parts are done (refining raw ore), the ability of a small number of people to harm a vast number of others is serious. People are right to worry about it.
My point though is that this is the only use case for such systems. The common comparisons to things like knives are invalid for this reason.
And I don’t doubt there’s still a lot of subterfuge happening as we speak, most of which we’ll never hear about until something goes very wrong.