14 points by redasadki | 2 comments
redasadki:
Researchers like Arsenii Alenichev are correctly identifying a new wave of “poverty porn 2.0,” where artificial intelligence is used to generate stereotypical, racialized images of suffering—the very tropes many of us have worked for decades to banish.

The alarms are valid.

The images are harmful.

But I am deeply concerned that in our rush to condemn the new technology, we are misdiagnosing the cause.

The problem is not the tool.

The problem is the user.

Retric:
The problem is the tool.

To suggest otherwise is to suggest that anyone should be able to buy nuclear weapons, which on their own do nothing.

Bad actors can only leverage what exists. All the benefits and harms come from the existence of those tools, so it's worth considering whether making such things makes the world better or worse.

redasadki:
This assumes 'we' (i.e., societies) are in a position to stop it, whether that's nuclear weapons or AI. If we are not, then what can usefully be done shifts… by a lot.
Retric:
> This assumes 'we' (ie societies) are in a position to stop it

There are major advantages to understanding the world as it is, independent of anything else. People make tradeoffs around harm all the time; pretending it doesn't exist is pointless.

We can mitigate harm from earthquakes and blizzards independently of our ability to prevent such events. That comes from understanding them as natural phenomena rather than acts of gods who would happily find other means to do harm should we try to mitigate earthquakes and the like.