
586 points mizzao | 2 comments
giancarlostoro No.40669810
I've got friends who tried to use ChatGPT to generate a regex that catches racial slurs so they could moderate them (a perfectly valid request, since they're trying to stop trolls from saying awful things). It vehemently refused, probably due to overly strict "I'll never say the n-word, you can't fool me" rules that were shoved into ChatGPT. Look, if your AI can't be intelligent about sensible requests, I'm going to say it: it's not intelligent, it's really useless (at least for that task and related valid tasks).

Who cares if someone can get AI to say awful things? I can write software that spits out slurs without the help of AI. Heck, I could write awful things right here on HN; is AI going to stop me? Doubt it. Nobody wants to foot the bill for AI moderation, and it can only do so much.
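For what it's worth, here's a minimal sketch of the kind of regex-based filter being described, in Python. The blocklist terms, function name, and example messages are placeholders invented for illustration, not anything from the actual use case:

    import re

    # Placeholder blocklist; in practice this would hold the terms to moderate.
    BLOCKLIST = ["badword1", "badword2", "badword3"]

    # Word-boundary match, case-insensitive, so "BadWord1!" is caught
    # but "notbadword1x" is not.
    pattern = re.compile(
        r"\b(" + "|".join(map(re.escape, BLOCKLIST)) + r")\b",
        re.IGNORECASE,
    )

    def flag_for_moderation(message: str) -> bool:
        """Return True if the message contains any blocklisted term."""
        return bool(pattern.search(message))

    print(flag_for_moderation("please keep it civil"))   # False
    print(flag_for_moderation("you absolute BADWORD1"))  # True

Escaping each term with re.escape and joining with "|" keeps the pattern safe even if a blocklist entry contains regex metacharacters.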

replies(5): >>40670109 #>>40670220 #>>40671835 #>>40671863 #>>40676828 #
WesolyKubeczek No.40670109
> Who cares if someone can get AI to say awful things?

I imagine the legal department of Meta, OpenAI, Microsoft, and Google care a great deal, and they don't want to be liable for anything remotely resembling a lawsuit opportunity.

replies(2): >>40671705 #>>40671770 #
drdaeman No.40671770
Is the legal system so broken that this is a legitimate issue, or do their legal teams have some sort of PTSD that leaves them scared of even the idea of a lawsuit, no matter how frivolous, to the point of making the weirdest business-affecting decisions?

I mean, if the LLM drops some slurs, gives a recipe for bananadine, or even goes full Bender and suggests we kiss its shiny metal ass or it kills all humans - how, in the name of all that's still sane in this world, is that lawsuit material?

I imagine it's more likely to be about activists on offense watch cranking it up to 11 and generating bad PR (still weird, but people are weird and this sort of stuff happens) than about any real legal issue.

replies(2): >>40672009 #>>40672024 #
1. lovethevoid No.40672024
Section 230 has been the subject of numerous reform proposals in recent years, so yes, it's a very real legal issue that platforms are keeping an eye on. FOSTA is an example: platforms all had to make changes and now constantly take down posts related to those acts. Another proposal to amend 230 (the "Ending Support for Internet Censorship Act") would strip platforms of their legal liability protections for what is posted if they cannot prove they are "politically neutral".
replies(1): >>40672793 #
2. roywiggins No.40672793
Section 230 only immunizes service providers for the contents of users' posts, not for their own content. It can't immunize Google from being responsible for Gemini's output.