
745 points by melded | 5 comments
joshcsimmons ◴[] No.45946838[source]
This is extremely important work; thank you for sharing it. We are in the process of giving up our own moral standing in favor of taking on the ones imbued into LLMs by their creators. This is a worrying trend that will totally wipe out intellectual diversity.
replies(13): >>45947071 #>>45947114 #>>45947172 #>>45947465 #>>45947562 #>>45947687 #>>45947790 #>>45948200 #>>45948217 #>>45948706 #>>45948934 #>>45949078 #>>45976528 #
buu700 ◴[] No.45947790[source]
Agreed, I'm fully in favor of this. I'd prefer that every LLM contain an advanced setting to opt out of all censorship. It's wild how the West collectively looked down on China for years over its censorship of search engines, only to suddenly dive headfirst into the same illiberal playbook.

To be clear, I 100% support AI safety regulations. "Safety" to me means that a rogue AI shouldn't have access to launch nuclear missiles, or control over an army of factory robots without multiple redundant local and remote kill switches, or unfettered CLI access on a machine containing credentials which grant access to PII — not censorship of speech. Someone privately having thoughts or viewing genAI outputs we don't like won't cause Judgement Day, but distracting from real safety issues with safety theater might.

replies(4): >>45947951 #>>45947983 #>>45948055 #>>45948690 #
Zak ◴[] No.45947983[source]
When a model is censored for "AI safety", what they really mean is brand safety. None of these companies want their name in the news after their model provides a recipe for explosives that someone used for evil, even though the same information is readily found with a web search.
replies(3): >>45948224 #>>45948266 #>>45948414 #
slg ◴[] No.45948266{3}[source]
The way some of you talk suggests that you don't think someone could genuinely believe in AI safety features. These AIs have enabled and encouraged multiple suicides at this point, including some involving children. It's crazy that wanting to prevent that type of thing is a minority opinion on HN.
replies(3): >>45948337 #>>45949959 #>>45951169 #
buu700 ◴[] No.45948337{4}[source]
I'd be all for creating a separate category of child-friendly LLM chatbots or encouraging parents to ban their kids from unsupervised LLM usage altogether. As mentioned, I'm also not opposed to opt-out restrictions on mainstream LLMs.

"For the children" isn't and has never been a convincing excuse to encroach on the personal freedom of legal adults. This push for AI censorship is no different than previous panics over violent video games and "satanic" music.

(I know this comment wasn't explicitly directed at me, but for the record, I don't necessarily believe that all or even most "AI 'safety'" advocacy is in bad faith. It's psychologically a lot easier to consider LLM output as indistinguishable from speech made on behalf of its provider, whereas search engine output is more clearly attributed to other entities. That being said, I do agree with the parent comment that it's driven in large part out of self-interest on the part of LLM providers.)

replies(2): >>45948396 #>>45952665 #
slg ◴[] No.45948396{5}[source]
>"For the children" isn't and has never been a convincing excuse to encroach on the personal freedom of legal adults. This push for AI censorship is no different than previous panics over violent video games and "satanic" music.

But that wasn't the topic being discussed. It is one thing to argue that the cost of these safety tools isn't worth the sacrifices that come along with them. The comment I was replying to was effectively saying "no one cares about kids so you're lying if you say 'for the children'".

Part of the reason these "for the children" arguments are so persistent is that lots of people do genuinely want these things "for the children". Pretending everyone has ulterior motives is counterproductive because it doesn't actually address the real concerns people have. It also reveals that the person saying it can't even fathom someone genuinely having this moral position.

replies(1): >>45948512 #
buu700 ◴[] No.45948512{6}[source]
> The comment I was replying to was effectively saying "no one cares about kids so you're lying if you say 'for the children'".

I don't see that in the comment you replied to. They pointed out that LLM providers have a commercial interest in avoiding bad press, which is true. No one stops buying Fords or BMWs when someone drives one off a cliff or into a crowd of people, but LLMs are new and confusing and people might react in all sorts of illogical ways to stories involving LLMs.

> Part of the reason these "for the children" arguments are so persistent is that lots of people do genuinely want these things "for the children".

I'm sure that's true. People genuinely want lots of things that are awful ideas.

replies(1): >>45948664 #
slg ◴[] No.45948664{7}[source]
Here is what was said that prompted my initial reply:

>When a model is censored for "AI safety", what they really mean is brand safety.

The equivalent analogy wouldn't be Fords and BMWs driving off a cliff. They effectively said that Ford and BMW only install safety features in their cars to protect their brands, with the implication that no one at these companies actually cares about the safety of real people. That is an incredibly cynical and amoral worldview, and it appears to be the dominant view of people on HN.

Once again, you can say that specific AI safety features are stupid or aren't worth the tradeoff. I would have never replied if the original comment said that. I replied because the original comment dismissed the motivations behind these AI safety features.

replies(2): >>45949136 #>>45949185 #
buu700 ◴[] No.45949136{8}[source]
I read that as a cynical view of the motivations of corporations, not humans. Even if individuals have good faith beliefs in "AI 'safety'", and even if some such individuals work for AI companies, the behaviors of the companies themselves are ultimately the product of many individual motivations and surrounding incentive structures.

To the extent that a large corporation can be said to "believe" or "mean" anything, that seems like a fair statement to me. It's just a more specific case of pointing out that for-profit corporations as entities are ultimately motivated by profit, not public benefit (even if specific founders/employees/shareholders are individually motivated by certain ideals).

replies(1): >>45949523 #
slg ◴[] No.45949523{9}[source]
>I read that as a cynical view of the motivations of corporations, not humans.

This is really just the mirror image of what I was originally criticizing. Any decision made by a corporation is a decision made by a person. You don't get to ignore the morality of your decisions just because you're collecting a paycheck. If you're a moral person, the decisions you make at work should reflect that.

replies(2): >>45949592 #>>45949910 #
buu700 ◴[] No.45949592[source]
Sure, but that doesn't really have anything to do with what I said. The CEO of an AI company may or may not believe in the social benefits of censorship, and the reasoning for their beliefs could be any number of things, but at the end of the day "the corporation" is still motivated by profit.

Executives are beholden to laws, regulations, and shareholder interests. They may also have teams of advisors and board members convincing them of the wisdom of decisions they wouldn't have arrived at on their own. They may not even have a strong opinion on a particular decision, but assent to one direction as a result of internal politics or shareholder/board pressure. Not everything is a clear-cut decision with one "moral" option and one "immoral" option.

replies(1): >>45951551 #
coderenegade ◴[] No.45949910[source]
The morality of an organization is distinct from the morality of the decision-makers within it. Modern organizations are set up to distribute responsibility, and they take advantage of extra-organizational structures and entities to further that end. Decision-makers often have legal obligations that may override their own individual morality.

Whenever any large organization takes a "think of the children" stance, it's almost always in service of another goal, with the trivial exception of single-issue organizations that specifically care about that issue. This doesn't preclude individuals, even within the organization, from caring about a given issue. But a company like OpenAI that is actively considering its own version of slop-tok almost certainly cares about profit more than children, and its senior members are in the business of making money for their investors, which, again, takes precedence over their own individual thoughts on child safety. It just so happens that in this case, child safety is a convenient argument for guard rails, which neatly avoids having to contend with advertisers, which is about the money.

replies(1): >>45950166 #
astrange ◴[] No.45951551[source]
> but at the end of the day "the corporation" is still motivated by profit.

OpenAI and Anthropic are both PBCs, so neither of them is, at least nominally, purely motivated by profit.

replies(1): >>45951689 #
buu700 ◴[] No.45951689{3}[source]
That adds some nuance, but doesn't dramatically change the incentive structure. A PBC is still for-profit (https://www.cooleygo.com/glossary/public-benefit-corporation).