
63 points tejonutella | 2 comments
ed-209 ◴[] No.43304078[source]
A gun is not smart or magical, but it is nevertheless a powerful tool that can be scary depending on who is holding it. Accordingly, I worry less about my occupation and more about the moral character of those wielding it. Further, I worry about "smart" people who have never been acquainted with the dark side of human nature facilitating bad actors.
replies(4): >>43304106 #>>43304335 #>>43304352 #>>43304450 #
wat10000 ◴[] No.43304352[source]
A gun’s purpose isn’t to be smart. The gun equivalent of this post would be “Why guns still aren’t very good at killing” and that would be a serious problem for guns if that were true.
replies(1): >>43304435 #
darkerside ◴[] No.43304435[source]
They're not good at killing. They do nothing on their own. People are very good at killing using guns. The analogy is actually perfect.
replies(2): >>43304526 #>>43304915 #
wat10000 ◴[] No.43304915[source]
What's the AI equivalent of "point a gun, pull the trigger, and it kills the target"? Something, something, and it's smart. What are the somethings?
replies(1): >>43305868 #
1. bumby ◴[] No.43305868[source]
Not the OP, but my best guess is that it's an alignment problem, just like a gun killing something the owner did not intend to. The power of AI to make decisions that are out of alignment with society's needs is the "something, something." As in the healthcare examples above, it can be efficient at denying healthcare claims, and the lack of good validation can obfuscate alignment with bad incentives.
replies(1): >>43306298 #
2. wat10000 ◴[] No.43306298[source]
I guess it depends on what you see as the purpose of AI. If the purpose is to be smart, it’s not doing very well. (Yet?) If the purpose is to deflect responsibility, it’s working great.