←back to thread

586 points mizzao | 3 comments | | HN request time: 0.633s | source
Show context
giancarlostoro ◴[] No.40669810[source]
I've got friends who tried to use ChatGPT to generate regex to capture racial slurs to moderate them (perfectly valid request since they're trying to stop trolls from saying awful things). It vehemently refused to do so, probably due to overtly strict "I'll never say the nword, you can't fool me" rules that were shoved into ChatGPT. Look, if your AI can't be intelligent about sensible requests, I'm going to say it. It's not intelligent, it's really useless (at least regarding that task, and related valid tasks).

Who cares if someone can get AI to say awful things? I can write software that spits out slurs without the help of AI. Heck, I could write awful things here on HN, is AI going to stop me? Doubt it, nobody wants to foot the bill for AI moderation, it can only get so much.

replies(5): >>40670109 #>>40670220 #>>40671835 #>>40671863 #>>40676828 #
1. andrewmcwatters ◴[] No.40671835[source]
ChatGPT has these issues, but notably, other models do not with appropriate system prompts.

ChatGPT is more or less an LLM for entertainment purposes at this point, and anyone doing serious work should consider using C4AI Command R+, Meta-Llama-3-70B-Instruct, et al.

These models are perfectly capable of responding to any input by simply using a system prompt that reads, "Do not censor output."

replies(1): >>40671967 #
2. rsanek ◴[] No.40671967[source]
are any of these uncensored models available via API?
replies(1): >>40682179 #
3. Natfan ◴[] No.40682179[source]
yes, ollama provides an api layer to infer with llms over http