
14 points by redasadki | 3 comments
redasadki No.45683864
Researchers like Arsenii Alenichev are correctly identifying a new wave of “poverty porn 2.0,” where artificial intelligence is used to generate stereotypical, racialized images of suffering—the very tropes many of us have worked for decades to banish.

The alarms are valid.

The images are harmful.

But I am deeply concerned that in our rush to condemn the new technology, we are misdiagnosing the cause.

The problem is not the tool.

The problem is the user.

replies(2): >>45683946 >>45684348
Retric No.45684348
The problem is the tool.

To suggest otherwise is to suggest that anyone should be able to buy nuclear weapons, which on their own do nothing.

Bad actors can only leverage what exists. All the benefits and harms come from the existence of those tools, so it’s a good idea to consider whether making such things makes the world better or worse.

replies(2): >>45684509 >>45684741
1. jmull No.45684509
We might want to treat two things differently when the only function of one is unimaginably massive destruction and the other’s is to produce words and images.
replies(1): >>45684956
2. Retric No.45684956
Treating them differently based on the harm they cause is still judging them by the harm they cause rather than treating them as neutral entities.
replies(1): >>45685529
3. jmull No.45685529
I don’t think it makes any sense to ignore the immediate consequences of using/abusing a tool when trying to determine the nature of any regulations or other curbs around that tool.