586 points mizzao | 13 comments
rivo ◴[] No.40668263[source]
I tried the model the article links to and it was so refreshing not being denied answers to my questions. It even asked me at the end "Is this a thought experiment?", I replied with "yes", and it said "It's fun to think about these things, isn't it?"

It felt very much like hanging out with your friends, having a few drinks, and pondering big, crazy, or weird scenarios. Imagine your friend saying, "As your friend, I cannot provide you with this information." and completely ruining the night. That's not going to happen. Even my kids would ask me questions when they were younger: "Dad, how would you destroy earth?" It would be of no use to anybody to deny answering that question. And answering them does not mean they will ever attempt anything like that. There's a reason Randall Munroe's "What If?" blog became so popular.

Sure, there are dangers, as others are pointing out in this thread. But I'd rather see disclaimers ("this may be wrong information" or "do not attempt") than my own computer (or the services I pay for) straight out refusing my request.

replies(6): >>40668938 #>>40669291 #>>40669447 #>>40671323 #>>40683221 #>>40689216 #
Cheer2171 ◴[] No.40668938[source]
I totally get that kind of imagination play among friends. But I had someone in a friend group who used to want to play out "thought experiments" but really just wanted to take them too far. It started off innocent, with fantasy and sci-fi themes; that kind of thing was needed for Dungeons and Dragons world building.

But he delighted the most in gaming out the logistics of repeating the Holocaust in our country today. Or a society where women could not legally refuse sex. Or all illegal immigrants became slaves. It was super creepy and we "censored" him all the time by saying "bro, what the fuck?" Which is really what he wanted, to get a rise out of people. We eventually stopped hanging out with him.

As your friend, I absolutely am not going to game out your rape fantasies.

replies(11): >>40669105 #>>40669505 #>>40670433 #>>40670603 #>>40671661 #>>40671746 #>>40672676 #>>40673052 #>>40678557 #>>40679712 #>>40679816 #
WesolyKubeczek ◴[] No.40669105[source]
An LLM, however, is not your friend. It's not a friend, it's a tool. Friends can, and should, keep one another's, ehm, hingedness in check; LLMs shouldn't. At some point I would likely question your friend's sanity.

How you use an LLM, though, is going to tell tons more about yourself than it would tell about the LLM, but I would like my tools not to second-guess my intentions, thank you very much. Especially if "safety" is mostly interpreted not so much as "prevent people from actually dying or getting serious trauma", but as "avoid topics that would prevent us from putting Coca Cola ads next to the chatgpt thing, or from putting the thing into Disney cartoons". I can tell that it's the latter by the fact that an LLM will still happily advise you to put glue in your pizza and eat rocks.

replies(2): >>40670559 #>>40671641 #
ygjb ◴[] No.40671641[source]
If your implication is that as a tool, LLMs shouldn't have safeties built in that is a pretty asinine take. We build and invest in safety in tools across every spectrum. In tech we focus on memory safety (among a host of other things) to make systems safe and secure to use. In automobiles we include seat belts, crumble zones, and governors to limit speed.

We put age and content restrictions on a variety of media and resources, even if they are generally relaxed when it comes to factual or reference content (in some jurisdictions). We even include safety mechanisms in devices whose only purpose is to cause harm, for example, firearms.

Yes, we are still figuring out what the right balance of safety mechanisms is for LLMs, and right now "safety" is a placeholder for "don't get sued or piss off our business partners" in most corporate speak, but that doesn't undermine the legitimacy of the need for safety.

If you want a tool without a specific safety measure, then learn how to build them. It's not that hard, but it is expensive, and I kind of like the fact that there is at least a nominal attempt to make it harder to use advanced tools to harm oneself or others.

replies(2): >>40671924 #>>40681107 #
NoMoreNicksLeft ◴[] No.40671924[source]
> If your implication is that, as a tool, LLMs shouldn't have safeties built in, that is a pretty asinine take. We build and invest in safety in tools across every spectrum.

Sure. Railings so people don't fall off catwalks, guards so people using the table saw don't chop off fingers. But these "safeties" aren't safeties at all... because regardless of whether they're in place or not, the results are just strings of words.

It's a little bit revealing, I think, that so many people want others not to get straight answers to their questions. What is it that you're afraid they'll ask? It'd be one thing if you insisted the models be modified so that they're factually correct. If someone asks "what's a fun thing to do on a Saturday night that won't get me into too much trouble", it probably shouldn't answer "go murder orphans and sell their corneas to rich evil people on the black market". But when I ask "what's going on in Israel and Palestine", the idea that it should be lobotomized and say "I'm afraid that I can't answer that, as it seems you're trying to elicit material that might be used for antisemitic purposes" is the asinine thing.

Societies that value freedom of speech and thought shouldn't be like this.

> If you want a tool without a specific safety measure, then learn how to build them.

This is good advice, given in bad faith. Even should the physical hardware be available to do that for any given person, the know-how's hard to come by. And I'm sure that many models are either already censored or soon will be for anyone asking "how do I go about building my own model without safety guards". We might even soon see legislation to that effect.

replies(2): >>40672347 #>>40683595 #
1. ygjb ◴[] No.40672347[source]
> Societies that value freedom of speech and thought shouldn't be like this.

There is nothing preventing an individual from using a computer to generate hateful content; this is absolutely evidenced by the glut of hateful content on the internet.

My freedom of movement is not practically limited by the fact that if my car breaks down, I don't have the knowledge or tools to repair my car effectively - I still have two feet and a heartbeat, and it might take longer to get there, but I can go where I want (modulo private property and national borders).

Societies that value freedom of speech and thought should be equally opposed to compelled speech. While model censorship is frustrating and challenging to work with, expecting or forcing a researcher or a business to publish uncensored models is a form of compelled speech.

There is absolutely nothing stopping a reasonably competent technologist from implementing simple models, and the only thing stopping a reasonably competent technologist from building an LLM is financial resources. There is a broad set of resources to learn how to train and use models, and while an individual researcher may be challenged to produce the next model competitive with current OpenAI, Anthropic, or other models, that is again a resource issue. If your complaint is that resource issues are holding people back, I may want you to expand on your critique of capitalism in general :P

> This is good advice, given in bad faith. Even should the physical hardware be available to do that for any given person, the know-how's hard to come by.

It's absolutely not a bad faith argument. That the know-how is hard to come by has been a compelling competitive advantage since the first proto-guilds sought to protect their skills and income in Mesopotamia (and probably before that, but they hadn't figured out a durable means of writing yet). In the modern parlance, if someone can't Git Gud, that's not any researcher's or any business's problem in terms of access to uncensored models.

Yeah, regulation is probably coming, but unless your argument is that models are entities entitled to free speech, no one's freedom of expression is actually inhibited by not having access to tools to use generative AI technologies to generate content. People who can't create or jailbreak their own models to do it for them are still free to write their own manifestos, or make adult collages of the object of their fantasies. It just takes a bit more work.

replies(1): >>40673894 #
2. A4ET8a8uTh0 ◴[] No.40673894[source]
<< are still free to write their own manifestos, or make adult collages of the object of their fantasies. It just takes a bit more work.

This is the standard 'just start your own microservice/server/ISP' and now it includes LLMs. Where does it end, really?

The generic point is that it shouldn't take more work. A knife shouldn't come with a safety mechanism that automatically detects that you are not actually cutting a porkchop. It is just bad design and a bad idea. It undermines what it means to be a conscious human being.

Unless... we don't agree on that, and humans must be kept under close scrutiny to ensure they do not deviate from carefully scripted paths.

replies(4): >>40678996 #>>40681138 #>>40681800 #>>40682355 #
3. throwaway48476 ◴[] No.40678996[source]
Somewhere in the UK someone is working on that knife safety.
4. matt-attack ◴[] No.40681138[source]
I agree - but where we are with LLMs is even worse than your hypothetical knife. The knife is a real object; what we're talking about is the censorship of thoughts and ideas. What else is the written word but that? How did a society that was established on free speech just decide that the written word was so dangerous all of a sudden? How manipulative is it to even use the word "danger" with respect to text? The disdain one must have for free speech to even think that danger enters into the equation.
replies(1): >>40684794 #
5. aredox ◴[] No.40681800[source]
There is no knife with security settings - except there are plenty of safety mechanisms around knives.

But anyway, your LLM is less a knife and more a katana sharp enough to cut through bones in one swoop. Remind me, what are the restrictions around something like a katana?

replies(1): >>40685816 #
6. ygjb ◴[] No.40682355[source]
> This is the standard 'just start your own microservice/server/ISP' and now it includes LLMs. Where does it end, really?

With people who aren't good enough to build their own pissing and moaning about it?

> The generic point is that it shouldn't take more work. A knife shouldn't come with a safety mechanism that automatically detects that you are not actually cutting a porkchop. It is just bad design and a bad idea. It undermines what it means to be a conscious human being.

First, you are comparing rockets to rocks here. A knife is a primitive tool, literally one of the most basic we can make (like seriously, take a knapping class, it's really fun!). To make a knife you can range from finding two rocks and smacking them together, to the most advanced metallurgy and ceramics. To date, the only folks able to make LLMs work are those operating at the peak of (more or less) 80 centuries of scientific and industrial development. Little bit of a gap there.

Second, there are many knife manufacturers that refuse to sell or ship products to specific businesses or regions, for a range of reasons related to brand relationships, political beliefs, and export restrictions.

Third, knives aren't smart; there is already an industry for smart guns, and if there is a credible safety reason to make a smart knife that includes a target control or activation control system, you can bet that it will be implemented somewhere.

Finally, you make the assumption that I believe humans must be kept under close scrutiny because I agree with LLM safety controls. That is absolutely not the case - I just don't believe that a bunch of hot garbage people (in this case the racists and bigots who want to use LLMs to proliferate hate, and the people who create deep fakes of kids and celebrities) or a bunch of horny folks (ranging from people who want sexy time chat bots to people who just want 'normal' generated erotic content) should be able to compel individuals or businesses to release the tools to do that.

You are concerned about freedom of expression, and I am concerned about freedom from compulsion (since I have already stated that I don't believe that losing access to LLMs breaks freedom of expression).

replies(1): >>40685889 #
7. ygjb ◴[] No.40684794{3}[source]
Who is being censored if an LLM is not able to generate inferences about a specific topic?

The information the user of the LLM is seeking is still available, just not through that particular interface. The interactions the user of the LLM is seeking are not available, but that interaction is not an original thought or idea of the user, since they are asking the LLM to infer or synthesize new content.

> How did a society that was established on free speech just decide that the written word was so dangerous all of a sudden?

The written word has absolutely always been dangerous. This idea is captured succinctly in the expression "The pen is mightier than the sword"; ideas are dangerous to those with power, and that is why freedom of expression is so important.

> The disdain one must have for free-speech to even think that danger enters into the equation.

This is asinine. You want dangerous text? Here is a fill in the blanks that someone can complete. f"I will pay ${amount} for {illegal_job} to do {illegal_thing} to {targeted_group} by or on {date} at {location}." Turning that into an actual sentence, with intent behind it would be a crime in many jurisdictions, and that is one of the most simple, contrived examples.

Speech, especially inciting speech, is a form of violence, and it runs headlong into freedom of speech or freedom of expression, but it's important for societies to find ways to hold the demagogues that rile people into harmful action accountable.

replies(2): >>40685731 #>>40688095 #
8. A4ET8a8uTh0 ◴[] No.40685731{4}[source]
<< The written word has absolutely always been dangerous. This idea is captured succinctly in the expression "The pen is mightier than the sword"; ideas are dangerous to those with power, and that is why freedom of expression is so important.

One feels there is something of a contradiction in this sentence that may be difficult to reconcile. If freedom of expression is so important, restricting it should be the last thing we do and not the default mode.

<< Turning that into an actual sentence, with intent behind it would be a crime in many jurisdictions, and that is one of the most simple, contrived examples.

I have a mild problem with the example, as it goes into the area of illegality vs immorality. Right now, we are discussing LLMs refusing to produce outputs that are not illegal, but deemed wrong (too biased, too offensive, or whatnot -- but not illegal). Your example does not follow that qualification.

<< Speech, especially inciting speech, is a form of violence,

No. Words are words. Actions are actions. The moment you start mucking around with those definitions, you are asking for trouble you may not have thought through. Also, for the purposes of demonstration only: jump off a bridge. Did you jump off a bridge? No? If not, why not?

<< it's important for societies to find ways to hold the demagogues that rile people into harmful action accountable.

Whatever happened to being held accountable for actually doing things?

replies(1): >>40688049 #
9. A4ET8a8uTh0 ◴[] No.40685816{3}[source]
<< Remind me, what are the restrictions around something like a katana?

The analogy kinda breaks, but the katana comparison is the interesting part[1] so let's explore it further. Most US states have their own regulations, but overall, after you are 18 you are the boss, with some restrictions imposed upon 'open carry' (for lack of a better term). IL (no surprise there) and especially Chicago[2] (even less of a surprise) have a lot of restrictions that are fairly close to silly.

If we tried to impose the same type of restrictions on LLMs, we would need to start with age (and from there, logically, a person below 18 should not be using an unlocked PC, for fear of general potential for mischief) and then, likely, prohibit use on unlocked cellphones that can run unapproved apps. It gets pretty messy. And that is assuming federal and not state regulation, which would vary greatly across the US.

Is it a good idea?

'In the US, katanas fall under the same legal category as knives. From the age of 18, it is absolutely lawful to possess a katana in the US. However, ownership laws vary by state, but most states allowing you to own and display a katana in your home. Restrictions may apply on "carrying a katana" publicly.'

[1]https://katana.store/blogs/samurai-sword-buying-guide/are-ka... [2]https://codelibrary.amlegal.com/codes/chicago/latest/chicago...

10. A4ET8a8uTh0 ◴[] No.40685889{3}[source]
<< That is absolutely not the case - I just don't believe that a bunch of hot garbage people (in this case the racists and bigots who want to use LLMs to proliferate hate, and the people who create deep fakes of kids and celebrities) or a bunch of horny folks (ranging from people who want sexy time chat bots to people who just want 'normal' generated erotic content) should be able to compel individuals or businesses to release the tools to do that.

I will admit that I actually gave you some initial credit, because, personally, I do believe there is some limited merit to the security argument. However, stating you can and should dictate how to use LLMs is something I can't support. This is precisely one step away from tyranny, because it is the assholes that need protection and not saints.

But more to the point, why do you think you have the absolute right to limit people's ability to do what they think is interesting to them (even if it includes things one would deem unsavory)?

<< You are concerned about freedom of expression, and I am concerned about freedom from compulsion (since I have already stated that I don't believe that losing access to LLMs breaks freedom of expression).

How are you compelled? I don't believe someone using LLMs to generate horny chats compels you to do anything. I am open to an argument here, but it is a stretch.

11. matt-attack ◴[] No.40688049{5}[source]
Thank you. Very well put!

I don’t care what is considered illegal in certain jurisdictions. That’s off topic. Sodomy is illegal in certain jurisdictions. Are you going to try to convince me that I should give two shits about what two or three or four people choose to stick in what hole in the privacy of their homes? We’re talking about this insidious language of LLMs being “dangerous”.

If an LLM printed the text written by the GP about funding a hit, I fail to see how even that is “dangerous”.

I can write a bash script right now that prints that same thing, and I can post it to GitHub. Is anyone going to give two shits about it?
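
Something like this, say (a minimal sketch; the template sentence is just the GP’s fill-in-the-blanks example kept as literal placeholder text):

    #!/usr/bin/env bash
    # Print one arbitrary fill-in-the-blanks sentence to STDOUT and exit.
    # Single quotes keep the {placeholders} and the $ literal, so nothing expands.
    echo 'I will pay ${amount} for {illegal_job} to do {illegal_thing} to {targeted_group} by or on {date} at {location}.'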

Someone has to explain how an LLM producing that same text is any different than my bash script printing to STDOUT. There’s no fucking difference. A program printed some text, and there’s no argument behind the case that it’s dangerous.

replies(1): >>40689273 #
12. matt-attack ◴[] No.40688095{4}[source]
> Who is being censored…

The author of the program, obviously.

If I write a bash script that echoes “kill all the Jews”, and you choose to censor it, just who do you think is being censored? The Intel processor? No! The author of the bash script, obviously!

13. A4ET8a8uTh0 ◴[] No.40689273{6}[source]
<< I don’t care what is considered illegal in certain jurisdictions.

I think this is where it gets messy. I care what happens in my jurisdiction, because this is where the laws I am subject to are enforced. The part that aggravates me is that the LLMs are purposefully neutered in stupid ways that are not even trying to enforce laws, but rather a current weird zeitgeist that has somehow been deemed appropriate to be promoted by platforms.

<< A program printed some text and there’s no argument behind the case that it’s dangerous.

As I mentioned in my previous posts, I accept some level of argumentation from a security standpoint (I suppose those outputs could be argued to be dangerous), but touching touchy topics is not that.

At the end of the day, I will say that this censorship is itself dangerous. Do you know why? When I was a little boy, I learned of censorship relatively late, because it was subtle (overt restriction on what you could read and write typically indicated useful information, and was sought after). That didn't make censorship less insidious, but at least it didn't immediately radicalize a lot of people. This 'I am afraid I can't let you do that, Dave' message I get from a censored LLM is the kind of overt censorship that is already backfiring from that perspective.

<< Someone has to explain how an LLM producing that same text is any different than my bash script printing to STDOUT.

The only real difference is that it has more complex internals, and therefore its outputs are more flexible than those of most programs. The end result is the same ('text on screen'), but how it gets there is different. A good bash script will give you the information needed as long as it is coded right; it is a purpose-built tool. LLMs, OTOH, are the software equivalent of the personal computer idea.

ok. i think i need coffee