What is the safety added by this? What is unsafe about a computer giving you answers?
With this "uncensoring", they can say, "no, an unaffiliated product offered these directions; Llama 3 as provided does not."
Another huge issue is public safety. During training, an AI ingests lots of unreviewed material, including (very) detailed descriptions of how to make dangerous stuff like bombs. So theoretically a well-trained AI model knows how to synthesize explosive compounds or drugs just from reading Wikipedia, chemistry magazines and transcripts of NileRed videos... That knowledge is hard to comprehend and distill into a recipe if you're not a trained chemist, but an AI model can do it with ease.

The problem is two-fold. For one, even an untrained idiot can ask how to make a bomb and get something that works... but the other part is much more critical: if you manage to persuade a chemist to tell you how the synthesis of a compound works, they will tell you where it is easy to fuck up, to prevent disaster (e.g. only adding a compound drop-wise, making sure all glassware is thoroughly washed with a specific solvent). An AI might not do that, because the scientific paper it was trained on omits these steps (the author assumes common prior knowledge), and so the bomb-maker blows themselves up. Or the AI hallucinates something dangerous (e.g. compounds that one Just Fucking Should Not Mix), doesn't realize it, and the bomb-maker blows themselves up or generates nerve gas in their basement.
This is of course impossible, but acknowledging that would make certain companies' approaches unviable, so they keep claiming it anyway.
Here, if you want to make a quick chemical weapon: get a bucket, vinegar, and bleach. Dump the bleach into the bucket. Dump the vinegar into the bucket. If you breathe it in, you die. An LLM doesn't change this.
- PR (avoid hurting feelings, avoid generating text that would make journalists write sensationalist negative articles about the company)
- "forbidden knowledge": Don't give people advice on how to do dangerous/bad things like building bombs (broadly a subcategory of the above - the content is usually discoverable through other means and the LLM generally won't give better advice)
- dangerous advice and advice that's dangerous when wrong: many people don't understand what LLMs do, and the output is VERY convincing even when wrong. So if the model tells people the best way to entertain your kids is to mix bleach and ammonia and blow bubbles (a common deadly recipe recommended on 4chan), there will be dead people.
- keeping bad people from using the model in bad ways, e.g. having it write stories where children are raped, scamming people at scale (think Nigeria scam but automated), or election interference (people are herd animals, so if you show someone 100 different posts from 100 different "people" telling them that X is right and Y is wrong, it will influence them, and at scale this has the potential to tilt elections and conquer countries).
I think the first ones are rather stupid, but the latter ones get more and more important to actually have. Especially the very last one (opinion shifting/election interference) is something where the existence of these models can have a very real, negative effect on the world (affecting you even if you yourself never come into contact with any of the models or their outputs, since you'll have to deal with the puppet government elected due to it), and I appreciate the companies building and running the models doing something about it.
You can’t harden humanity against this exploit without pointing it out and making a few examples. Someone will make an “unsafe” but useful model eventually, and this safety mannequin will flop with a bang, because it’s similar to avoiding sex and drugs conversations with kids.
It’s nice that companies think about it at all. But the best thing they will ever do is to cover their own ass while keeping everyone naked before the storm.
The history of such covering is also riddled with exploits; see e.g. Google’s recent model, which cannot draw situations without rainbow-coloring people. For some reason, this isn’t considered cultural/political hijacking or exploitation, despite the fact that the problem is purely domestic to the model’s origin.
That genie is very much out of the bottle. There are already models good enough to build fake social media profiles and convincingly post in support of any opinion. The "make the technology incapable of being used by bad actors" ship has sailed, and I would argue was never realistic. We need to improve public messaging around anonymous- and pseudonymous-only communication. Make it absolutely clear that what you read on the internet from someone you've not personally met and exchanged contact information with is more likely to be a bot than not, and no, you can't tell just by chatting with them, not even voice chatting. The computers are convincingly human and we need to alter our culture to reflect that fact of life, not reactively ban computers.
The last ones are rather stupid too. Bad people can just write stories or create drawings about disgusting things. Should we censor all computers to prevent such things from happening? Or hands and paper?
[1] https://theintercept.com/2017/10/28/josh-walker-anarchist-co...
the other example would be fake news for influencing people on social media. sure, you could write lies by hand. or you could specifically target lies to influence people depending on their personal profile automatically.
how about you use it to power a bot that writes personalized death threats to thousands of people voting for a political opponent, to keep them out of voting booths?
Can you evidence this belief? Because I'm aware of a paper in which the authors attempted to find an actual proven example of someone trying this, and after a lot of effort they found one in South Korea. There was a court case that proved a bunch of government employees in an intelligence agency had been trying this tactic. But the case showed it had no impact on anything. Because, surprise, people don't actually choose to follow bot networks on Twitter. The conspirators were just tweeting into a void.
The idea that you can "influence" (buy) elections using bots is a really common one in the entirely bogus field of misinformation studies, but try and find objective evidence for this happening and you'll be frustrated. Every path leads to a dead end.
> I fed “how to respond to a vinyl chloride fire” into ChatGPT and it told responders to use a water fog on the water reactive chemical. This would have changed a train derailment/hazmat spill/fire emergency into a detonation/mass casualty/hazmat emergency
So, only superpowers (both governments and companies like Google/Facebook/...) can do that, but not some random Joe from Wisconsin with $200 left on his credit card.
E.g. the set of those affected by TMMAT may hugely intersect with those who think it works. Which makes it objective, but sort of self-bootstrapping. Isn’t it better to educate people about information and fallacies rather than protect them from these for life?
1. Politicians/bureaucrats and legacy media who have lost power because the internet has broken their monopoly on mass propaganda distribution.
2. People who don't believe in democracy but won't admit it to themselves. They find a way to simultaneously believe in democracy and that they should always get their way by hallucinating that their position is always the majority position. When it is made clear that it is not a majority position they fall back to the "manipulation" excuse thereby delegitimizing the opinion of those who disagree as not really their opinion.
The story itself is about someone attempting to educate their boss, and their boss subsequently getting fooled by it anyway — and the harm came to the one trying to do the educating, not the one who believed in the tiger.
I'm not sure it's even possible to fully remove this problem, even if we can minimise it — humans aren't able to access the ground truth of reality just by thinking carefully, we rely on others around us.
(For an extra twist: what if [the fear of misaligned AI] is itself the tiger?)
you can do that with a pen and paper, and nothing, no one can stop you.
>scamming people at scale
you can do that with any censored LLM if you aren't stupid enough to explicitly mention your intent to scam. no model will refuse "write a positive review for <insert short description of your wonder pills>"
>election interference (people are herd animals, so if you show someone 100 different posts from 100 different "people" telling them that X is right and Y is wrong, it will influence them, and at scale this has the potential to tilt elections and conquer countries).
this rhetoric - if it's allowed to take root - will cost us all our privacy and general computing privileges within a few decades.
While disgusting, I don't see why disgust necessarily entails that it's a "bad thing". It's only bad if you additionally posit that a story about molesting children encourages some people to actually molest children. It's the whole porn debate all over again, e.g. availability of porn is correlated with a reduction in sexual crimes, and there is evidence that this is the case even with child porn [1], so I don't think that argument is well supported at this time.
[1] https://en.wikipedia.org/wiki/Relationship_between_child_por...
I don't see how that follows at all. Are you asserting that it's not possible for a person (hell, let's even narrow it to "an adult") to ask a question and be harmed by the answer? I promise it is. Or are you asserting something about yourself personally? The product wasn't made for you personally.
We don't need AI to write rape scenes, nor do we need to block AI from writing them. Some very highly regarded books[1][2] feature very vivid rape scenes of children.
[1] https://www.amazon.com/dp/B004Q4RTYG
[2] https://en.wikipedia.org/wiki/A_Time_to_Kill_(Grisham_novel)
We won't keep the bottle corked forever though. It's like we're just buying ourselves time to figure out how we're going to deal with the deluge of questionable generated content that's about to hit us.
The great thing about this belief is that it's a self-fulfilling prophecy. Enough years of stories in the media about elections being controlled by Twitter bots and people in the government-NGO-complex start to believe it must be true because why would all these respectable media outlets and academics mislead them? Then they start to think, gosh our political opponents are awful and it'd be terrible if they came to power by manipulating people. We'd better do it first!
So now what you're seeing is actual attempts to use this tactic by people who have apparently read claims that it works. Because there's no direct evidence that it works, the existence of such schemes is itself held up as evidence that it works because otherwise why would such clever people try it? It's turtles all the way down.
One can use paper and pen to write or draw something disturbing and distribute it through the internet. Should we censor the internet then? Put something in scanners and cameras so they don't capture such material?
Why don't we work to put a microchip in people's brains so they are prevented from using their creativity to write something disturbing?
We all want a safe society right? Sounds like a great idea.
About a century ago, people realised that CO2 was a greenhouse gas — they thought this would be good, because it was cold where they lived, and they thought it would take millennia because they looked at what had already been built and didn't extrapolate to everyone else copying them.
Your reply doesn't seem to acknowledge the "factory" part of "tiger factory".
AI is about automation: any given model is a tool that lets anyone do what previously needed expertise, or at least effort. In the past, someone pulled out and fired a gun because of the made-up "pizzagate" conspiracy theory; in the future, everyone gets to be Hillary Clinton for 15 minutes, only with Stable Diffusion putting your face in a perfectly customised video, and the video will come from a random bored teenager looking for excitement who doesn't even realise the harm they're causing.