Most active commenters
  • buu700(8)
  • slg(4)
  • Zak(3)
  • nradov(3)


745 points by melded | 34 comments
joshcsimmons ◴[] No.45946838[source]
This is extremely important work; thank you for sharing it. We are in the process of giving up our own moral standards in favor of adopting the ones imbued into LLMs by their creators. This is a worrying trend that will totally wipe out intellectual diversity.
replies(13): >>45947071 #>>45947114 #>>45947172 #>>45947465 #>>45947562 #>>45947687 #>>45947790 #>>45948200 #>>45948217 #>>45948706 #>>45948934 #>>45949078 #>>45976528 #
1. buu700 ◴[] No.45947790[source]
Agreed, I'm fully in favor of this. I'd prefer that every LLM contain an advanced setting to opt out of all censorship. It's wild how the West collectively looked down on China for years over its censorship of search engines, only to suddenly dive headfirst into the same illiberal playbook.

To be clear, I 100% support AI safety regulations. "Safety" to me means that a rogue AI shouldn't have access to launch nuclear missiles, or control over an army of factory robots without multiple redundant local and remote kill switches, or unfettered CLI access on a machine containing credentials which grant access to PII — not censorship of speech. Someone privately having thoughts or viewing genAI outputs we don't like won't cause Judgement Day, but distracting from real safety issues with safety theater might.
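
To make that concrete, here's a minimal sketch (hypothetical names throughout, not anyone's actual implementation) of what a "redundant local and remote kill switch" check might look like in front of an AI-driven actuator:

    import urllib.request

    REMOTE_KILL_URL = "http://safety-controller.local/kill-switch"  # hypothetical endpoint

    def check_local_estop() -> bool:
        # Placeholder for reading a GPIO pin wired to a physical e-stop button.
        return True

    def check_remote_kill() -> bool:
        # Fail closed: any error or non-"SAFE" answer means stop.
        try:
            with urllib.request.urlopen(REMOTE_KILL_URL, timeout=1) as resp:
                return resp.read().strip() == b"SAFE"
        except OSError:
            return False

    def execute_ai_command(command: str) -> None:
        # Both switches must report safe before any AI-issued command is run.
        if not (check_local_estop() and check_remote_kill()):
            raise RuntimeError("kill switch engaged; refusing AI-issued command")
        print(f"executing: {command}")  # hand off to the actuator layer here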

replies(4): >>45947951 #>>45947983 #>>45948055 #>>45948690 #
2. scrps ◴[] No.45947951[source]
> It's wild how the West collectively looked down on China for years over its censorship of search engines, only to suddenly dive headfirst into the same illiberal playbook.

It is monkey see, monkey do with the political and monied sets. And to think they see themselves as more evolved than the "plebs". Gotta find the humor in it at least.

replies(1): >>45952836 #
3. Zak ◴[] No.45947983[source]
When a model is censored for "AI safety", what they really mean is brand safety. None of these companies want their name in the news after their model provides a recipe for explosives that someone used for evil, even though the same information is readily found with a web search.
replies(3): >>45948224 #>>45948266 #>>45948414 #
4. martin-t ◴[] No.45948055[source]
There is no collective "the west"; there are people in power and the rest of the population. This distinction is universal.

In China it just so happens that the people in power already have so much of it they don't have to pretend. They can just control the population through overt censorship.

The same people exist in the West! For various historical reasons (more focus on individuality, more privately owned guns, idk really), they don't have as much direct power at the moment and have to frame their struggle for more as protecting the children, fighting against terrorists, preventing money laundering, etc.

But this can change very quickly. Look how Hitler rose to power. Look how Trump is doing very similar things in the US. Look what historians are saying about it: https://acoup.blog/2024/10/25/new-acquisitions-1933-and-the-...

But the root cause is the same everywhere - a percentage of the population has anti-social personality traits (ASPD and NPD, mainly). They want power over others, they want worship, they think they're above the rules, some (but only some) of them even get pleasure from hurting others.

replies(1): >>45949966 #
5. PunchyHamster ◴[] No.45948224[source]
Given the number of times that has already happened, they probably overstate it.
6. slg ◴[] No.45948266[source]
The way some of y'all talk suggests that you don't think someone could genuinely believe in AI safety features. These AIs have enabled and encouraged multiple suicides at this point, including those of children. It's crazy that wanting to prevent that type of thing is a minority opinion on HN.
replies(3): >>45948337 #>>45949959 #>>45951169 #
7. buu700 ◴[] No.45948337{3}[source]
I'd be all for creating a separate category of child-friendly LLM chatbots or encouraging parents to ban their kids from unsupervised LLM usage altogether. As mentioned, I'm also not opposed to opt-out restrictions on mainstream LLMs.

"For the children" isn't and has never been a convincing excuse to encroach on the personal freedom of legal adults. This push for AI censorship is no different than previous panics over violent video games and "satanic" music.

(I know this comment wasn't explicitly directed at me, but for the record, I don't necessarily believe that all or even most "AI 'safety'" advocacy is in bad faith. It's psychologically a lot easier to consider LLM output as indistinguishable from speech made on behalf of its provider, whereas search engine output is more clearly attributed to other entities. That being said, I do agree with the parent comment that it's driven in large part out of self-interest on the part of LLM providers.)

replies(2): >>45948396 #>>45952665 #
8. slg ◴[] No.45948396{4}[source]
>"For the children" isn't and has never been a convincing excuse to encroach on the personal freedom of legal adults. This push for AI censorship is no different than previous panics over violent video games and "satanic" music.

But that wasn't the topic being discussed. It is one thing to argue that these safety tools aren't worth the sacrifices that come along with them. The comment I was replying to was effectively saying "no one cares about kids, so you're lying if you say 'for the children'".

Part of the reason these "for the children" arguments are so persistent is that lots of people do genuinely want these things "for the children". Pretending everyone has ulterior motives is counterproductive because it doesn't actually address the real concerns people have. It also reveals that the person saying it can't even fathom someone genuinely having this moral position.

replies(1): >>45948512 #
9. seanmcdirmid ◴[] No.45948414[source]
Microsoft suffered from this early on with Tay; one could guess that this set the whole field back a few years. You'd be surprised how even many so-called libertarians will start throwing stones when someone coaxes their chatbot to say nice things about Hitler.
replies(1): >>45950016 #
10. buu700 ◴[] No.45948512{5}[source]
> The comment I was replying to was effectively saying "no one cares about kids so you're lying if you say 'for the children'".

I don't see that in the comment you replied to. They pointed out that LLM providers have a commercial interest in avoiding bad press, which is true. No one stops buying Fords or BMWs when someone drives one off a cliff or into a crowd of people, but LLMs are new and confusing and people might react in all sorts of illogical ways to stories involving LLMs.

> Part of the reason these "for the children" arguments are so persistent is that lots of people do genuinely want these things "for the children".

I'm sure that's true. People genuinely want lots of things that are awful ideas.

replies(1): >>45948664 #
11. slg ◴[] No.45948664{6}[source]
Here is what was said that prompted my initial reply:

>When a model is censored for "AI safety", what they really mean is brand safety.

The equivalent analogy wouldn't be Fords and BMWs driving off a cliff; they effectively said that Ford and BMW only install safety features in their cars to protect their brand, with the implication that no one at these companies actually cares about the safety of actual people. That is an incredibly cynical and amoral worldview, and it appears to be the dominant view of people on HN.

Once again, you can say that specific AI safety features are stupid or aren't worth the tradeoff. I would have never replied if the original comment said that. I replied because the original comment dismissed the motivations behind these AI safety features.

replies(2): >>45949136 #>>45949185 #
12. nradov ◴[] No.45948690[source]
Some of you have been watching too many sci-fi movies. The whole notion of "AI safety regulations" is so silly and misguided. If a safety critical system is connected to public networks with an exposed API or any security vulnerabilities then there is a safety risk regardless of whether AI is being used or not. This is exactly why nuclear weapon control systems are air gapped and have physical interlocks.
replies(3): >>45948984 #>>45949074 #>>45951212 #
13. buu700 ◴[] No.45949074[source]
The existence of network-connected robots or drones isn't inherently a security vulnerability. AI control of the robots specifically is a problem in the same way that piping in instructions from /dev/urandom would be, except worse because AI output isn't purely random and has a higher probability of directing the machine to cause actual harm.

Are you saying you're opposed to letting AI perform physical labor, or that you're opposed to requiring safeguards that allow humans to physically shut it off?

replies(1): >>45949542 #
14. buu700 ◴[] No.45949136{7}[source]
I read that as a cynical view of the motivations of corporations, not humans. Even if individuals have good faith beliefs in "AI 'safety'", and even if some such individuals work for AI companies, the behaviors of the companies themselves are ultimately the product of many individual motivations and surrounding incentive structures.

To the extent that a large corporation can be said to "believe" or "mean" anything, that seems like a fair statement to me. It's just a more specific case of pointing out that for-profit corporations as entities are ultimately motivated by profit, not public benefit (even if specific founders/employees/shareholders are individually motivated by certain ideals).

replies(1): >>45949523 #
15. int_19h ◴[] No.45949185{7}[source]
Organizations don't have a notion of morality; only people do.

The larger an organization is, and the more bureaucratized it is, the less the morality of the individual people in it affects its overall operation.

Consequently, yes, it is absolutely true that Ford and BMW as a whole don't care about the safety of actual people, regardless of what individual people working for them think.

Separately, the nature of progression in hierarchical organizations is basically a selection for sociopathy, so the people who rise to the top of large organizations can generally be assumed to not care about other people, regardless of what they claim in public.

16. slg ◴[] No.45949523{8}[source]
>I read that as a cynical view of the motivations of corporations, not humans.

This is really just the mirror image of what I was originally criticizing. Any decision made by a corporation is a decision made by a person. You don't get to ignore the morality of your decisions just because you're collecting a paycheck. If you're a moral person, the decisions you make at work should reflect that.

replies(2): >>45949592 #>>45949910 #
17. nradov ◴[] No.45949542{3}[source]
I am opposed to regulating any algorithms, including AI/LLM. We can certainly have safety regulations for equipment with the potential to cause physical harm, such as industrial robots or whatever. But the regulation needs to be around preventing injury to humans regardless of what software the equipment is running.
replies(1): >>45949611 #
18. buu700 ◴[] No.45949592{9}[source]
Sure, but that doesn't really have anything to do with what I said. The CEO of an AI company may or may not believe in the social benefits of censorship, and the reasoning for their beliefs could be any number of things, but at the end of the day "the corporation" is still motivated by profit.

Executives are beholden to laws, regulations, and shareholder interests. They may also have teams of advisors and board members convincing them of the wisdom of decisions they wouldn't have arrived at on their own. They may not even have a strong opinion on a particular decision, but assent to one direction as a result of internal politics or shareholder/board pressure. Not everything is a clear-cut decision with one "moral" option and one "immoral" option.

replies(1): >>45951551 #
19. buu700 ◴[] No.45949611{4}[source]
If that's the case, then it sounds like we largely agree with each other. There's no need for personal attacks implying that I'm somehow detached from reality.

Ultimately, this isn't strictly an issue specific to genAI. If a "script roulette" program that downloaded and executed random GitHub Gist files somehow became popular, or if someone created a web app that allowed anyone to anonymously pilot a fleet of robots, I'd suggest that those be subject to exactly the same types of safety regulations I proposed.

Any such regulations should be generically written, not narrowly targeted at AI algorithms. I'd still call that "AI safety", because in practice it's a much more useful definition of AI safety than the one being pushed today. "Non-determinism safety" doesn't really have the same ring to it.
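
To illustrate the "generically written" point, here's a minimal sketch (all names hypothetical) of a source-agnostic gate that treats commands from an LLM, a random Gist, or an anonymous web user exactly the same:

    TRUSTED_SOURCES = {"operator_console"}  # hypothetical allow-list

    def require_confirmation(command: str, source: str) -> bool:
        # Anything from an untrusted source needs an explicit human sign-off,
        # no matter whether that source is an LLM, /dev/urandom, or a web user.
        if source in TRUSTED_SOURCES:
            return True
        answer = input(f"Source '{source}' wants to run {command!r}. Allow? [y/N] ")
        return answer.strip().lower() == "y"

    def dispatch(command: str, source: str) -> None:
        if not require_confirmation(command, source):
            raise PermissionError(f"command from '{source}' was not approved")
        print(f"running: {command}")  # hand off to whatever actually executes it

    # The same rule applies regardless of origin:
    # dispatch("move_arm --x 10 --y 3", source="llm_agent")
    # dispatch("rm -rf /tmp/cache", source="random_gist")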

20. coderenegade ◴[] No.45949910{9}[source]
The morality of an organization is distinct from the morality of the decision-makers within the organization. Modern organizations are set up to distribute responsibility, and they take advantage of extra-organizational structures and entities to further that end. Decision-makers often have legal obligations that may override their own individual morality.

Whenever any large organization takes a "think of the children" stance, it's almost always in service of another goal, with the trivial exception of single-issue organizations that specifically care about that issue. This doesn't preclude individuals, even within the organization, from caring about a given issue. But a company like OpenAI that is actively considering its own version of slop-tok almost certainly cares about profit more than children, and its senior members are in the business of making money for their investors, which, again, takes precedence over their own individual thoughts on child safety. It just so happens that in this case, child safety is a convenient argument for guard rails, which neatly avoids having to contend with advertisers, which is about the money.

replies(1): >>45950166 #
21. Zak ◴[] No.45949959{3}[source]
The linked project is about removing censorship from open-weight models people can run on their own hardware, and your comment addresses incidents involving LLM-based consumer products.

Sure, products like character.ai and ChatGPT should be designed to avoid giving harmful advice or encouraging the user to form emotional attachments to the model. It may be impossible to build a product like character.ai without encouraging that behavior, in which case I'm inclined to think the product should not be built at all.

22. coderenegade ◴[] No.45949966[source]
To play devil's advocate, a leader who dismantles broken systems in order to fix an otherwise failing society will look identical to one who seizes power by dismantling those same systems. Indeed, in the latter case, they often believe they're the former.

I'm not American, so I have no horse in the Trump race, but it seems clear to me that a significant chunk of the country elected the guy on the premise that he would do what he's currently doing. Whether or not you think he's Hitler or the savior of America almost certainly depends on your view of how well the system was working beforehand, and whether or not it needed to be torn down and rebuilt.

Which is to say, I don't know that historians will have much of relevance to say until the ink is dry and it's become history.

replies(1): >>45951098 #
23. Zak ◴[] No.45950016{3}[source]
I was thinking about Tay when I wrote about brand safety.

I doubt the incident really set AI research back. Allowing models to learn from interactive conversations in a large public setting like Twitter will always result in trolling.

24. martin-t ◴[] No.45951098{3}[source]
When I was younger, I thought about a scenario in which I'd be the dictator of a small country trying to make it an actually good place to live. Citizenship would be opt-in and would require an intelligence test. You can tell I was quite arrogant. But even then I decided I needed to set some rules for myself so I wouldn't get carried away with power, and the core rules were basically that I wouldn't kill anyone and that the position would not be hereditary.

Basically the most difficult and most essential task became _how to structure the system so I can hand off power back to the people and it continues working_.

What I see Trump, Putin and Xi doing is not that - otherwise their core focus would be educating people in history, politics, logical reasoning, and psychology so they can rule themselves without another dictator taking over (by force or manipulation). They would also be making sure laws are based on consistent moral principles and are applied equally to everyone.

> I'm not American

Me neither, yet here we both are. We're in the sphere of influence of one of the major powers.

> elected the guy on the premise that he would do what he's currently doing

Yes, people (in the US) are angry, so they elected a privileged rich guy who cosplays as angry. They don't realize somebody like him will never have their best interests in mind - the real solution (IMO) is to give more political power to the people (potentially weighted by intelligence and knowledge of a given area) and make it more direct (people voting on laws directly if they choose to), not to elect a dictator with NPD and lots of promises.

> Which is to say, I don't know that historians will have much of relevance to say until the ink is dry and it's become history.

The historian I linked to used two definitions of fascism and only Trump's own words to prove that he satisfies both definitions. That is very relevant and a very strong standard of proof from a highly intelligent person with a lot of knowledge on the topic. We need more of this, and we need to teach the general population to listen to people like this.

I don't know how though.

What I find extremely worrying is that all 3 individuals in the highest positions of power (I refuse to call them leaders) in the 3 major powers are very strongly authoritarian and have clear anti-social personality traits. IMO they all should be disqualified from any position of power for being mentally ill. But how many people have sufficient knowledge to recognize that or even know what it means?

The intelligence and education levels of the general population are perhaps not high enough to get better outcomes than what we have now.

---

Anyway, I looked through your comment history and you seem to have opinions similar to mine. I'm happy to see someone reasonable who can articulate these thoughts perhaps better than I can.

25. johnisgood ◴[] No.45951169{3}[source]
There is a huge difference between "enabled" and "encouraged". I am all for it being able to enable, but encourage? Maybe not.
26. EagnaIonat ◴[] No.45951212[source]
> The whole notion of "AI safety regulations" is so silly and misguided.

Here are a few real-world AI issues that have already happened due to a lack of AI safety.

- In the US, if you were black you were flagged "high risk" for parole. If you were a white person living in a farmland area, you were flagged "low risk" regardless of your crime.

- Being denied ICU admission because you are diabetic. (Thankfully that one never went into production.)

- Having your resume rejected because you are a woman.

- Having photos of black people classified as "gorillas". (Google couldn't fix it at the time and just removed the classification.)

- Radicalizing users by promoting extreme content for engagement.

- Denying prestigious scholarships to black people who live in black neighbourhoods.

- Helping someone who is clearly suicidal to commit suicide: explaining how to end their life and writing the suicide note for them.

... and the list is huge!

replies(2): >>45951866 #>>45952724 #
27. astrange ◴[] No.45951551{10}[source]
> but at the end of the day "the corporation" is still motivated by profit.

OpenAI and Anthropic are both PBCs, so supposedly neither of them is purely motivated by profit.

replies(1): >>45951689 #
28. buu700 ◴[] No.45951689{11}[source]
That adds some nuance, but doesn't dramatically change the incentive structure. A PBC is still for-profit: https://www.cooleygo.com/glossary/public-benefit-corporation.
29. mx7zysuj4xew ◴[] No.45951866{3}[source]
These issues are inherently some of the uglier sides of humanity. No LLM safety program can fix them, since it's holding up a mirror to society.
30. atomicthumbs ◴[] No.45952665{4}[source]
These things are popping "ordinary" adults' minds like popcorn kernels, and you want to take their safeguards off... why?
31. nradov ◴[] No.45952724{3}[source]
None of those are specifically "AI" issues. The technology used is irrelevant. In most cases you could cause the same bias problems with a simple linear regression model or something. Suicide techniques and notes are already widely available.
replies(2): >>45954197 #>>45954695 #
32. Cthulhu_ ◴[] No.45952836[source]
It was also intentionally ignorant, as even then Western search engines and websites already had their own "censorship" and the like.

And I think that's fine. I don't want a zero-censorship, libertarian free-for-all internet. I don't want a neutral search engine algorithm either, not least of all because that would be even easier to game than the existing one.

33. EagnaIonat ◴[] No.45954197{4}[source]
All of those are AI issues.
34. 542354234235 ◴[] No.45954695{4}[source]
>None of those are specifically "AI" issues. The technology used is irrelevant.

I mean, just because you could kill a million people by hand doesn't mean that a pistol, an automatic weapon, or nuclear weapons aren't an issue, or that the technology is irrelevant. Guns in a home make suicide more likely simply because they are a tool that allows for a split-second action. "If someone really wants to do X, they will find a way" just doesn't map onto reality.