What is the safety added by this? What is unsafe about a computer giving you answers?
- PR (avoid hurting feelings, avoid generating text that would make journalists write sensationalist negative articles about the company)
- "forbidden knowledge": Don't give people advice on how to do dangerous/bad things like building bombs (broadly a subcategory of the above - the content is usually discoverable through other means and the LLM generally won't give better advice)
- dangerous advice and advice that's dangerous when wrong: many people don't understand what LLMs do, and the output is VERY convincing even when wrong. So if the model tells people the best way to entertain your kids is to mix bleach and ammonia and blow bubbles (a common deadly recipe recommended on 4chan), there will be dead people.
- keeping bad people from using the model in bad ways, e.g. having it write stories where children are raped, scamming people at scale (think Nigeria scam but automated), or election interference (people are herd animals, so if you show someone 100 different posts from 100 different "people" telling them that X is right and Y is wrong, it will influence them, and at scale this has the potential to tilt elections and conquer countries).
I think the first ones are rather stupid, but the latter ones get more and more important to actually have. Especially the very last one (opinion shifting/election interference) is something where the existence of these models can have a very real, negative effect on the world (affecting you even if you yourself never come into contact with any of the models or their outputs, since you'll have to deal with the puppet government elected due to it), and I appreciate the companies building and running the models doing something about it.
You can’t harden humanity against this exploit without pointing it out and making a few examples. Someone will make an “unsafe” but useful model eventually, and this safety mannequin will flop with a bang, because it’s similar to avoiding conversations about sex and drugs with kids.
It’s nice that companies think about it at all. But the best thing they will ever do is to cover their own ass while keeping everyone naked before the storm.
The history of covering is also riddled with exploits; see e.g. Google’s recent model, which cannot draw situations without rainbow-coloring people. For some reason, this isn’t considered cultural/political hijacking or exploitation, despite the fact that the problem is purely domestic to the model’s origin.
That genie is very much out of the bottle. There are already models good enough to build fake social media profiles and convincingly post in support of any opinion. The "make the technology incapable of being used by bad actors" ship has sailed, and I would argue it was never realistic. We need to improve public messaging around anonymous and pseudonymous-only communication. Make it absolutely clear that what you read on the internet from someone you've not personally met and exchanged contact information with is more likely to be a bot than not, and no, you can't tell just by chatting with them, not even voice chatting. The computers are convincingly human, and we need to alter our culture to reflect that fact of life, not reactively ban computers.
The last ones are rather stupid too. Bad people can just write stories or create drawings about disgusting things. Should we censor all computers to prevent such things from happening? Or hands and paper?
Can you evidence this belief? Because I'm aware of a paper in which the authors attempted to find an actual proven example of someone trying this, and after a lot of effort they found one in South Korea. There was a court case that proved a bunch of government employees in an intelligence agency had been trying this tactic. But the case showed it had no impact on anything. Because, surprise, people don't actually choose to follow bot networks on Twitter. The conspirators were just tweeting into a void.
The idea that you can "influence" (buy) elections using bots is a really common one in the entirely bogus field of misinformation studies, but try to find objective evidence of this happening and you'll be frustrated. Every path leads to a dead end.
So, only superpowers (both governments and companies like Google/Facebook/...) can do that, but not some random Joe from Wisconsin with $200 left on his credit card.
E.g. the set of those affected by TMMAT may hugely intersect with those who think it works, which makes it objective but sort of self-bootstrapping. Isn’t it better to educate people about information and fallacies rather than protecting them from these for life?
1. Politicians/bureaucrats and legacy media who have lost power because the internet has broken their monopoly on mass propaganda distribution.
2. People who don't believe in democracy but won't admit it to themselves. They find a way to simultaneously believe in democracy and that they should always get their way by hallucinating that their position is always the majority position. When it is made clear that it is not a majority position they fall back to the "manipulation" excuse thereby delegitimizing the opinion of those who disagree as not really their opinion.
The story itself is about someone attempting to educate their boss, and their boss subsequently getting fooled by it anyway — and the harm came to the one trying to do the educating, not the one who believed in the tiger.
I'm not sure it's even possible to fully remove this problem, even if we can minimise it — humans aren't able to access the ground truth of reality just by thinking carefully, we rely on others around us.
(For an extra twist: what if [the fear of misaligned AI] is itself the tiger?)
you can do that with a pen and paper, and nothing and no one can stop you.
>scamming people at scale
you can do that with any censored LLM if you aren't stupid enough to explicitly mention your intent to scam. no model will refuse "write a positive review for <insert short description of your wonder pills>"
>election interference (people are herd animals, so if you show someone 100 different posts from 100 different "people" telling them that X is right and Y is wrong, it will influence them, and at scale this has the potential to tilt elections and conquer countries).
this rhetoric - if it's allowed to take root - will cost us all our privacy and general computing privileges within a few decades.
While disgusting, I don't see why disgust necessarily entails that it's a "bad thing". It's only bad if you additionally posit that a story about molesting children encourages some people to actually molest children. It's the whole porn debate all over again, e.g. availability of porn is correlated with a reduction in sexual crimes, and there is evidence that this is the case even with child porn [1], so I don't think that argument is well supported at this time.
[1] https://en.wikipedia.org/wiki/Relationship_between_child_por...
We don't need AI to write rape scenes, nor do we need to block AI from writing them. Some very highly regarded books [1][2] feature very vivid rape scenes involving children.
[1] https://www.amazon.com/dp/B004Q4RTYG
[2] https://en.wikipedia.org/wiki/A_Time_to_Kill_(Grisham_novel)
We won't keep the bottle corked forever though. It's like we're just buying ourselves time to figure out how we're going to deal with the deluge of questionable generated content that's about to hit us.
The great thing about this belief is that it's a self-fulfilling prophecy. Enough years of stories in the media about elections being controlled by Twitter bots and people in the government-NGO-complex start to believe it must be true because why would all these respectable media outlets and academics mislead them? Then they start to think, gosh our political opponents are awful and it'd be terrible if they came to power by manipulating people. We'd better do it first!
So now what you're seeing is actual attempts to use this tactic by people who have apparently read claims that it works. Because there's no direct evidence that it works, the existence of such schemes is itself held up as evidence that it works because otherwise why would such clever people try it? It's turtles all the way down.
One can use paper and pen to write or draw something disturbing and distribute it through the internet. Should we censor the internet then? Put something in scanners and cameras so they don't capture such material?
Why don't we work to put microchips in people's brains so they are prevented from using their creativity to write something disturbing?
We all want a safe society right? Sounds like a great idea.
About a century ago, people realised that CO2 was a greenhouse gas — they thought this would be good, because it was cold where they lived, and they thought it would take millennia because they looked at what had already been built and didn't extrapolate to everyone else copying them.
Your reply doesn't seem to acknowledge the "factory" part of "tiger factory".
AI is about automation; any given model is a tool that lets anyone do what previously needed expertise, or at least effort. In the past, someone pulled out and fired a gun because of the made-up "pizzagate" conspiracy theory; in the future, everyone gets to be Hillary Clinton for 15 minutes, only with Stable Diffusion putting your face in a perfectly customised video, and the video will come from a random bored teenager looking for excitement who doesn't even realise the harm they're causing.