
586 points by mizzao | 2 comments
rivo No.40668263
I tried the model the article links to and it was so refreshing not being denied answers to my questions. It even asked me at the end "Is this a thought experiment?", I replied with "yes", and it said "It's fun to think about these things, isn't it?"

It felt very much like hanging out with your friends, having a few drinks, and pondering big, crazy, or weird scenarios. Imagine your friend saying, "As your friend, I cannot provide you with this information." and completely ruining the night. That's not going to happen. Even my kids would ask me questions when they were younger: "Dad, how would you destroy earth?" It would be of no use to anybody to deny answering that question. And answering them does not mean they will ever attempt anything like that. There's a reason Randall Munroe's "What If?" blog became so popular.

Sure, there are dangers, as others are pointing out in this thread. But I'd rather see disclaimers ("this may be wrong information" or "do not attempt") than my own computer (or the services I pay for) straight out refusing my request.

replies(6): >>40668938 #>>40669291 #>>40669447 #>>40671323 #>>40683221 #>>40689216 #
Cheer2171 No.40668938
I totally get that kind of imagination play among friends. But someone in a friend group of mine used to propose "thought experiments" that really just took things too far. It started off innocent, with fantasy and sci-fi themes we needed for Dungeons and Dragons world building.

But he delighted the most in gaming out the logistics of repeating the Holocaust in our country today. Or a society where women could not legally refuse sex. Or all illegal immigrants became slaves. It was super creepy and we "censored" him all the time by saying "bro, what the fuck?" Which is really what he wanted, to get a rise out of people. We eventually stopped hanging out with him.

As your friend, I absolutely am not going to game out your rape fantasies.

replies(11): >>40669105 #>>40669505 #>>40670433 #>>40670603 #>>40671661 #>>40671746 #>>40672676 #>>40673052 #>>40678557 #>>40679712 #>>40679816 #
WesolyKubeczek No.40669105
An LLM, however, is not your friend. It's not a friend, it's a tool. Friends can keep one another's, ehm, hingedness in check, and should; LLMs shouldn't. At some point I would likely question your friend's sanity.

How you use an LLM, though, is going to tell tons more about you than about the LLM, but I would like my tools not to second-guess my intentions, thank you very much. Especially if "safety" is mostly interpreted not as "prevent people from actually dying or getting serious trauma" but as "avoid topics that would prevent us from putting Coca Cola ads next to the chatgpt thing, or from putting the thing into Disney cartoons". I can tell it's the latter by the fact that an LLM will still happily advise you to put glue on your pizza and eat rocks.

replies(2): >>40670559 #>>40671641 #
ygjb No.40671641
If your implication is that, as a tool, LLMs shouldn't have safeties built in, that is a pretty asinine take. We build and invest in safety for tools across every spectrum. In tech we focus on memory safety (among a host of other things) to make systems safe and secure to use. In automobiles we include seat belts, crumple zones, and governors to limit speed.

We put age and content restrictions on a variety media and resources, even if they are generally relaxed when it comes to factual or reference content (in some jurisdictions). We even include safety mechanisms in devices for which the only purpose is to cause harm, for example, firearms.

Yes, we are still figuring out what the right balance of safety mechanisms is for LLMs, and right now "safety" is a placeholder for "don't get sued or piss off our business partners" in most corporate speak, but that doesn't undermine the legitimacy of the need for safety.

If you want a tool without a specific safety measure, then learn how to build one. It's not that hard, though it is expensive. And I kind of like the fact that there is at least a nominal attempt to make it harder to use advanced tools to harm oneself or others.

replies(2): >>40671924 #>>40681107 #
NoMoreNicksLeft No.40671924
> If your implication is that as a tool, LLMs shouldn't have safeties built in that is a pretty asinine take. We build and invest in safety in tools across every spectrum.

Sure. Railings so people don't fall off catwalks, guards so people using the table saw don't chop off fingers. But these "safeties" aren't safeties at all... because regardless of whether they're in place or not, the results are just strings of words.

It's a little bit revealing, I think, that so many people want others not to get straight answers to their questions. What is it that you're afraid they'll ask? It'd be one thing if you insisted the models be modified so that they're factually correct. If someone asks "what's a fun thing to do on a Saturday night that won't get me into too much trouble", it probably shouldn't answer "go murder orphans and sell their corneas to rich evil people on the black market". But when I ask "what's going on in Israel and Palestine", the idea that it should be lobotomized into saying "I'm afraid I can't answer that, as it seems you're trying to elicit material that might be used for antisemitic purposes" is the asinine thing.

Societies that value freedom of speech and thought shouldn't be like this.

> If you want a tool without a specific safety measure, then learn how to build them.

This is good advice, given in bad faith. Even if the physical hardware were available to any given person, the know-how is hard to come by. And I'm sure that many models either are already censored, or soon will be, for anyone asking "how do I go about building my own model without safety guards". We might even soon see legislation to that effect.

replies(2): >>40672347 #>>40683595 #
ben_w No.40683595
> just strings of words.

How did every non-inherited national leader, both democratic and dictatorial, both Roosevelt and Stalin, manage to become leader in the first place? Convincing people with the right string of words.

How does every single religious leader on earth, big and small, from the Pope to Jim Jones, get that power? Convincing people with the right string of words.

What is a contract, what is source code, what is a law? The right string of words.

There is no "just" when it comes to words.

That's why they are important to protect, it's why dictators are afraid of them, and it's why it matters that we don't treat a magic box spewing them out faster than a machine gun spews bullets as harmless.

replies(2): >>40687726 #>>40687994 #
N0b8ez No.40687726
It seems to cut both ways. If words are powerful, restricting words is also powerful. It's not clear why this leads to a pro-censorship stance, any more than to an anti-censorship one.
replies(1): >>40688780 #
ben_w No.40688780
Oh indeed. That's why dictators both censor and propagandise.

It's a narrow path, absolutely a challenge to walk without slipping, and not one I feel confident humanity will rise to, even as a team effort.

Just like the difference between liberty and authoritarianism in general: much as I'd like to be an anarchist in theory, in practice that's just a way to let people with big sticks take over.