
586 points | mizzao | 1 comment
giancarlostoro ◴[] No.40669810[source]
I've got friends who tried to use ChatGPT to generate regex to capture racial slurs to moderate them (perfectly valid request since they're trying to stop trolls from saying awful things). It vehemently refused to do so, probably due to overtly strict "I'll never say the nword, you can't fool me" rules that were shoved into ChatGPT. Look, if your AI can't be intelligent about sensible requests, I'm going to say it. It's not intelligent, it's really useless (at least regarding that task, and related valid tasks).

Who cares if someone can get AI to say awful things? I can write software that spits out slurs without the help of AI. Heck, I could write awful things here on HN; is AI going to stop me? Doubt it. Nobody wants to foot the bill for AI moderation, and it can only catch so much.
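The kind of filter the commenter wanted help generating is straightforward to write by hand. A minimal sketch in Python, using hypothetical placeholder terms instead of real slurs (the wordlist, `BLOCKED_WORDS`, and `contains_blocked_word` are illustrative names, not anything from the thread):

```python
import re

# Placeholder terms -- a real moderation list would contain the actual
# words the moderators want to block.
BLOCKED_WORDS = ["badword1", "badword2"]

# re.escape guards against regex metacharacters in the wordlist;
# \b word boundaries avoid matching inside longer innocent words;
# re.IGNORECASE catches simple casing tricks. Determined trolls will
# still evade this with leetspeak or spacing, so real systems layer on
# fuzzier matching.
_pattern = re.compile(
    r"\b(?:" + "|".join(map(re.escape, BLOCKED_WORDS)) + r")\b",
    re.IGNORECASE,
)

def contains_blocked_word(text: str) -> bool:
    """Return True if any blocked term appears as a whole word."""
    return bool(_pattern.search(text))
```

Nothing here requires an LLM; the point of asking ChatGPT was presumably convenience, not capability.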

replies(5): >>40670109 #>>40670220 #>>40671835 #>>40671863 #>>40676828 #
WesolyKubeczek ◴[] No.40670109[source]
> Who cares if someone can get AI to say awful things?

I imagine the legal department of Meta, OpenAI, Microsoft, and Google care a great deal, and they don't want to be liable for anything remotely resembling a lawsuit opportunity.

replies(2): >>40671705 #>>40671770 #
chasd00 ◴[] No.40671705[source]
Yes, "AI Safety" really means safety for the reputation of the corporation making it available.
replies(1): >>40672297 #
eddd-ddde ◴[] No.40672297[source]
I don't think this falls under the responsibility of the AI provider.

Gun makers are perfectly happy with their guns killing innocent people.

replies(3): >>40672621 #>>40672750 #>>40674609 #
boy_thrway[dead post] ◴[] No.40674609[source]
[flagged]
eddd-ddde ◴[] No.40675029[source]
That's the point. People use guns to kill people the same way people can use AI to do bad things.

Either both are okay or both are wrong.