235 points colinprince | 1 comments | | HN request time: 0.213s | source
lawlessone ◴[] No.43717502[source]
>AI-powered bots across social media and the internet to talk to people they suspect are anything from violent sex criminals all the way to vaguely defined “protestors” with the hopes of generating evidence that can be used against them.

so what if the bot radicalizes them?

replies(5): >>43717582 #>>43717589 #>>43717646 #>>43717697 #>>43717868 #
darknavi ◴[] No.43717868[source]
Can you imagine what would happen if we used the same resources to talk people down instead of rile them up?

Some people get sent down a dark path and finding someone to pull them up out of it can really help.

Instead, I'd guess these programs are likely to drive them deeper and over the edge.

replies(4): >>43718481 #>>43718678 #>>43719000 #>>43719968 #
1. ChrisMarshallNY ◴[] No.43718481[source]
That would be a really decent application of AI.

We already have the beginnings of "AI therapists." Not sure how well they'll work, but they probably won't make people's pathologies worse.

As opposed to just about Every. Single. Online. Social. Network.

There's just waaaay too much lovely money to be made, by feeding people's ids.