rivo:
I tried the model the article links to and it was so refreshing not being denied answers to my questions. It even asked me at the end "Is this a thought experiment?", I replied with "yes", and it said "It's fun to think about these things, isn't it?"

It felt very much like hanging out with your friends, having a few drinks, and pondering big, crazy, or weird scenarios. Imagine your friend saying, "As your friend, I cannot provide you with this information." and completely ruining the night. That's not going to happen. Even my kids would ask me questions when they were younger: "Dad, how would you destroy earth?" It would be of no use to anybody to deny answering that question. And answering them does not mean they will ever attempt anything like that. There's a reason Randall Munroe's "What If?" blog became so popular.

Sure, there are dangers, as others are pointing out in this thread. But I'd rather see disclaimers ("this may be wrong information" or "do not attempt") than my own computer (or the services I pay for) straight out refusing my request.

Cheer2171:
I totally get that kind of imagination play among friends. But I had someone in a friend group who used to want to play out "thought experiments" and really just wanted to take things too far. It started off innocent, with fantasy and sci-fi themes; we needed that for Dungeons and Dragons world-building.

But he delighted the most in gaming out the logistics of repeating the Holocaust in our country today. Or a society where women could not legally refuse sex. Or one where all illegal immigrants became slaves. It was super creepy, and we "censored" him all the time by saying "bro, what the fuck?", which is really what he wanted: to get a rise out of people. We eventually stopped hanging out with him.

As your friend, I absolutely am not going to game out your rape fantasies.

WesolyKubeczek:
An LLM, however, is not your friend. It's not a friend, it's a tool. Friends can, and should, keep one another's, ehm, hingedness in check; LLMs shouldn't. At some point I would likely start questioning your friend's sanity.

How you use an LLM, though, says far more about you than it does about the LLM. Still, I would like my tools not to second-guess my intentions, thank you very much. Especially when "safety" is mostly interpreted not as "prevent people from actually dying or suffering serious trauma" but as "avoid topics that would prevent us from putting Coca-Cola ads next to the ChatGPT thing, or from putting the thing into Disney cartoons". I can tell it's the latter by the fact that an LLM will still happily advise you to put glue on your pizza and eat rocks.

ygjb:
If your implication is that, as a tool, LLMs shouldn't have safeties built in, that is a pretty asinine take. We build and invest in safety in tools across every spectrum. In tech we focus on memory safety (among a host of other things) to make systems safe and secure to use. In automobiles we include seat belts, crumple zones, and governors to limit speed.

We put age and content restrictions on a variety of media and resources, even if they are generally relaxed when it comes to factual or reference content (in some jurisdictions). We even include safety mechanisms in devices whose only purpose is to cause harm, for example, firearms.

Yes, we are still figuring out what the right balance of safety mechanisms is for LLMs, and right now "safety" is a placeholder for "don't get sued or piss off our business partners" in most corporate speak, but that doesn't undermine the legitimacy of the need for safety.

If you want a tool without a specific safety measure, then learn how to build it yourself. It's not that hard, though it is expensive. And I kind of like the fact that there is at least a nominal attempt to make it harder to use advanced tools to harm oneself or others.

NoMoreNicksLeft:
> If your implication is that, as a tool, LLMs shouldn't have safeties built in, that is a pretty asinine take. We build and invest in safety in tools across every spectrum.

Sure. Railings so people don't fall off catwalks, guards so people using the table saw don't chop off fingers. But these "safeties" aren't safeties at all... because regardless of whether they're in place or not, the results are just strings of words.

It's a little bit revealing, I think, that so many people want others not to get straight answers to their questions. What is it that you're afraid they'll ask? It'd be one thing if you insisted the models be modified so that they're factually correct. If someone asks "what's a fun thing to do on a Saturday night that won't get me into too much trouble", it probably shouldn't answer "go murder orphans and sell their corneas to rich evil people on the black market". But when I ask "what's going on in Israel and Palestine", the idea that it should be lobotomized and say "I'm afraid that I can't answer that, as it seems you're trying to elicit material that might be used for antisemitic purposes" is the asinine thing.

Societies that value freedom of speech and thought shouldn't be like this.

> If you want a tool without a specific safety measure, then learn how to build them.

This is good advice, given in bad faith. Even if the physical hardware were available to any given person, the know-how is hard to come by. And I'm sure that many models are either already censored or soon will be for anyone asking "how do I go about building my own model without safety guards". We might even soon see legislation to that effect.

ygjb:
> Societies that value freedom of speech and thought shouldn't be like this.

There is nothing preventing an individual from using a computer to generate hateful content; this is amply evidenced by the glut of hateful content on the internet.

My freedom of movement is not practically limited by the fact that if my car breaks down, I don't have the knowledge or tools to repair it effectively. I still have two feet and a heartbeat; it might take longer to get there, but I can go where I want (modulo private property and national borders).

Societies that value freedom of speech and thought should be equally opposed to compelled speech. While model censorship is frustrating and challenging to work with, expecting or forcing a researcher or a business to publish uncensored models is a form of compelled speech.

There is absolutely nothing stopping a reasonably competent technologist from implementing simple models, and the only thing stopping a reasonably competent technologist from building an LLM is financial resources. There is a broad set of resources for learning how to train and use models, and while an individual researcher may be hard-pressed to produce the next model competitive with current OpenAI, Anthropic, or other offerings, that is again a resource issue. If your complaint is that resource issues are holding people back, I may want you to expand on your critique of capitalism in general :P

> This is good advice, given in bad faith. Even if the physical hardware were available to any given person, the know-how is hard to come by.

It's absolutely not a bad-faith argument. "The know-how is hard to come by" has been a compelling competitive advantage since the first proto-guilds sought to protect their skills and income in Mesopotamia (and probably before that, but they hadn't figured out a durable means of writing yet). In the modern parlance, if someone can't Git Gud, that's not any researcher's or any business's problem in terms of access to uncensored models.

Yeah, regulation is probably coming, but unless your argument is that models are entities entitled to free speech, no one's freedom of expression is actually inhibited by not having access to generative AI tools to generate content. People who can't create or jailbreak their own models to do it for them are still free to write their own manifestos, or make adult collages of the object of their fantasies. It just takes a bit more work.

A4ET8a8uTh0:
<< are still free to write their own manifestos, or make adult collages of the object of their fantasies. It just takes a bit more work.

This is the standard 'just start your own microservice/server/ISP' argument, and now it includes LLMs. Where does it end, really?

The generic point is that it shouldn't take more work. A knife shouldn't come with a safety mechanism that automatically detects whether you are actually cutting a porkchop. It is just bad design and a bad idea. It undermines what it means to be a conscious human being.

Unless... we don't agree on that, and humans must be kept under close scrutiny to ensure they do not deviate from carefully scripted paths.

throwaway48476:
Somewhere in the UK, someone is working on that knife safety.