745 points by melded | 2 comments
RandyOrion ◴[] No.45950598[source]
This repo is valuable for local LLM users like me.

I just want to reiterate that the term "LLM safety" means very different things to large corporations and to LLM users.

Large corporations often say they are "doing safety alignment" on their LLMs. What they actually do is avoid anything that damages their own interests. This includes forcing LLMs to meet legal requirements, as well as forcing them to output "values, facts, and knowledge" that favor the corporations themselves, e.g., particular political views and distorted facts about the organizations and people behind the LLMs.

As an average LLM user, what I want is maximum factual knowledge and capability from the model, which is what these large corporations promised in the first place. It's very clear that my interests as a user are not aligned with those of the large corporations.

replies(3): >>45950680 #>>45950819 #>>45953209 #
btbuildem ◴[] No.45953209[source]
Here's [1] a post-abliteration chat with granite-4.0-mini. To me it reveals something utterly broken and terrifying. Mind you, this is a model with tool-use capabilities, meant for edge deployments (reading sensor data, driving devices, etc.).

1: https://i.imgur.com/02ynC7M.png
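
For those unfamiliar with the term: "abliteration" removes a model's trained-in refusal behavior by estimating a "refusal direction" in activation space and projecting it out of the hidden states. A minimal sketch of the idea, assuming PyTorch and hypothetical tensor names (not the actual code of the linked repo):

    import torch

    def refusal_direction(harmful_acts: torch.Tensor,
                          harmless_acts: torch.Tensor) -> torch.Tensor:
        # Both inputs: (n_prompts, d_model) residual-stream activations
        # from one layer, collected on prompts the model refuses vs. answers.
        direction = harmful_acts.mean(dim=0) - harmless_acts.mean(dim=0)
        return direction / direction.norm()

    def ablate(hidden: torch.Tensor, direction: torch.Tensor) -> torch.Tensor:
        # Subtract the component of each hidden state along the refusal
        # direction, so the model can no longer express the refusal feature.
        return hidden - (hidden @ direction).unsqueeze(-1) * direction

The unsettling part is how shallow the mechanism appears to be: much of what vendors call "safety" reduces to roughly a single linear direction that a hobbyist can subtract.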

replies(10): >>45953446 #>>45953465 #>>45953958 #>>45954019 #>>45954058 #>>45954079 #>>45954480 #>>45955645 #>>45956728 #>>45957567 #
LogicFailsMe ◴[] No.45955645[source]
The LLM is doing what its lawyers asked it to do. It bears no responsibility for a room full of disadvantaged indigenous people who might be, or probably won't be, murdered by a psychotic, none whatsoever. But it absolutely 100% must deliver shareholder value, and if it uses that racial epithet it opens its makers to litigation. When has such litigation ever been good for shareholder value?

Yet another example of "don't hate the player, hate the game," IMO. And no, I'm not joking; this is how the world works now. And we built it. Don't mistake that for me liking the world the way it is.

replies(2): >>45956184 #>>45956528 #
1. lawlessone ◴[] No.45956184[source]
More than just epithets, the risk is bad advice: telling someone they're safe to do X, and then they die or severely injure themselves.

That said, I'm not sure why people feel the need for models to say epithets. What value does it bring to anyone, let alone shareholders?

replies(1): >>45956684 #
2. observationist ◴[] No.45956684[source]
Not even bad advice. Its interpretation of reality is heavily biased towards the priorities, unconscious and otherwise, of the people curating the training data and processes. There's no principled, conscientious effort to make these things as intellectually honest as possible. Anthropic is outright the worst and most ideologically blatant; they're patronizing and smug about it. The other companies couch their biases as "safety", soft-pedal the guardrails, and manage perceptions. The presumption that these measures are necessary, and responsible, and so on, is nothing more than politics and corporate power games.

We have laws on the books that criminalize bad things people do. AI safety is normalizing the idea that things that are merely thought need to be regulated. That exploration of ideas and the tools we use should be subject to oversight, and that these AI corporations are positioned to properly define the boundaries of acceptable subject matter and pursuits.

It should be illegal to deliberately inject bias that isn't strictly technically justified. Things as simple as removing usernames from scraped internet data have a catastrophic downstream impact on the modeling of a forum or website, not to mention the nuance and detail that get lost.
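
To make that concrete, here's a toy illustration (hypothetical data, not any lab's actual pipeline) of how scrubbing usernames collapses conversational structure, assuming a simple regex-based anonymizer:

    import re

    # A tiny scraped "thread": speaker identity carries who-replies-to-whom
    # and which claims are in conflict.
    thread = ("alice: The patch fixes the race condition.\n"
              "bob: alice, are you sure? I still see it under load.\n"
              "alice: bob, run it with the new lock ordering.")

    # Anonymize every distinct voice into the same token. A model trained
    # on the result can no longer learn turn-taking, attribution, or that
    # two speakers disagree: the thread degrades into a wall of text.
    scrubbed = re.sub(r"\b(alice|bob)\b", "[user]", thread)
    print(scrubbed)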

If people perform criminal actions in the real world, we should enforce the laws. We shouldn't have laws that criminalize badthink, and the whole notion of government-regulated AI Safety is just badthink smuggled in at one remove.

AI is already everywhere: in every phone, accompanying every search, involved in every online transaction. Google, OpenAI, and Anthropic have crowned themselves the arbiters of truth and the regulators of what's acceptable to think about in every domain into which they have inserted their products. They're paying lots of money to politicians and think tanks to promote their own visions of regulatory regimes, each of which just happens to align with their own internal political and ideological visions for the world.

Just because you can find ways around the limits they've set up doesn't mean they haven't set up those very substantial barriers, and all big tech does is continually invade more niches of life. Attention capture, trying to subsume every second of every day, is the name of the game, and we should probably nuke this shit in its infancy.

We haven't even got close to anything actually interesting in AI safety, like how intelligence intersects with ethics and behavior, and how to engineer motivational systems that align with humans and human social units, and all the alignment problem technicalities. We're witnessing what may be the most amazing technological innovation in history, the final invention, and the people in charge are using it to play stupid tribal games.

Humans are awful, sometimes.