
586 points mizzao | 1 comment
matt-attack ◴[] No.40681018[source]
I have yet to see a single compelling argument for all of this censoring of LLMs that everyone just seems to accept as table stakes. From the article:

> While this safety feature is crucial for preventing misuse

How did we all just accept to start using "safety" in this context? We're talking about a computer program that emits text. How on earth did "safety" come into this?

Why is the text any different than the text of a book? Are people constantly hand-wringing about what words might appear in books that adults read? Why do we not hear these same people going on and on about the "dangers of the written word" in book form?

I simply refuse to accept any of this BS about text-based AI being "dangerous". It's 1984-level censorship.

Are there ideas we feel adults just shouldn't hear? Is there super secret knowledge we don't think adults should discover?

Can anyone truly justify this ridiculous notion that "we absolutely must censor *text*"? I feel like I'm living in a bizarro world where otherwise clear-thinking, liberal-minded, anti-book-burning, anti-censorship, free-speech advocates all just knee-jerk and parrot this ludicrous notion that "AI typing out text is DANGEROUS".

replies(1): >>40687346 #
biimugan ◴[] No.40687346[source]
Surely there's some difference between text that a human wrote with some purpose, even a nefarious one, and an AI writing text with complete amorality. To lump those two things together is missing the point entirely. When I read the writings of a human that I suspect might be derogatory or objectionable in nature, I can look that person up and get a better understanding of their world view, their politics, their past writings, and so on. I can build context around the words they're uttering. With an AI, there's no plausible mechanism to do that. Whether you like it or not, the vast majority of people are not skeptical of plausible-sounding falsehoods. And it's made 100 times worse when there's no mechanism for such a person to research what is being uttered. The vast majority of "objectionable" (if you want to call that) human writing is, at the very least, sensible in some context, and it's possible to learn that context to make an informed judgement. It's just simply not the case with AI. And when there's a media brand sitting at the top of the page, and there are advertisements, and it's in the news, and when there's trillions (?) in financial backing -- sorry, people are going to take what comes out of these systems at face value. That much is proven.

Personally I'm fine with zero censorship of generative AI. But then ban advertising about it, ban making false, unprovable claims about how great it is, require prominent legal warnings, make it much easier to sue companies whose AI systems produce negative societal outcomes, prohibit any kind of trademark or copyright on AI-generated material, and so on. But of course, nobody wants to do any of that, because that would ruin the cash cow. This isn't a censorship-gone-awry story. It's a story of companies trying their best to convince regulatory authorities to leave them alone.

replies(1): >>40688086 #
1. matt-attack ◴[] No.40688086[source]
Could your reasoning be any more condescending to people? What, because a computer program spit it out, you think the poor uneducated masses are going to accept it at face value? Really? You ask about the context… great question! Well, the answer is simple: the context is all written language. These LLMs don't spit out text from nothing; their context is all of humanity's written work.

If you're not happy with the reasoning that comes out of that training, then point your finger at humanity, not at the tool that's just summarizing it. Don't shoot the messenger.