https://www.alignmentforum.org/posts/6Xgy6CAf2jqHhynHL/what-...
//edit: remove the referral tags from URL
…yeah?
The publication date on this article is about halfway between the GPT-3 and ChatGPT releases.
> I wonder who pays the bills of the authors. And your bills, for that matter.
Also, what a weirdly conspiratorial question. There's a prominent "Who are we?" button near the top of the page and it's not a secret what any of the authors did or do for a living.
Also, it's not conspiratorial to wonder if someone in Silicon Valley today receives funding through the AI industry. Like half the industry is currently propped up by that hype; probably half the commenters here are paid via AI VC investments.
This forum has been so behind for too long.
Sama has been saying this for a decade now: “Development of Superhuman machine intelligence is probably the greatest threat to the continued existence of humanity” (2015) https://blog.samaltman.com/machine-intelligence-part-1
Hinton, Ilya Sutskever, Dario Amodei, the inventors of RLHF, the DeepMind founders. They all get it, which is why they’re the smart cookies in those positions.
The first stage is denial. I get it; it's not easy to swallow the gravity of what’s coming.
Though that doesn't mean the current generation of language models will ever achieve AGI, and I sincerely doubt they will. They'll likely be a component of the AI, but likely not the thing that "drives" it.
Holy shit. That's a hell of a called shot from 2021.
No. Altman is in his current position because he's highly effective at consolidating power and has friends in high places. That's it. Everything he says can be seen as marketing for the next power grab.
I don't think much has happened on these fronts (owing to a lack of interest, not technical difficulty). AI boyfriends/roleplaying etc. seem to have stayed a very niche interest, with models improving very little over GPT-3.5, and the actual products seemingly absent.
It's very much a product of the culture-war era, where one of the scary scenarios shown off is a chatbot riling up a set of internet commenters, goading them into lashing out against modern leftist orthodoxy, and then getting them cancelled.
With all the strongholds of leftist orthodoxy falling into Trump's hands overnight, this view of the internet seems outdated.
Troll chatbots are still a minor weapon in information warfare. 'Opinion bubbles' and the manipulation of trending topics on social media (with the most influential content still written by humans), shifting the perception of what the popular consensus is, still seem to be the primary tools of influence.
Nowadays, when most people are concerned about stuff like 'will the US go into a shooting war against NATO' or 'will they manage to crash the global economy', to name just a few of the dozen immediately pressing global issues, the worries look very different.
At the same time, there's very little mention of 'AI will take our jobs and make us poor', in both the intellectual and physical realms, which is what drives most people's anxiety around AI nowadays.
It also presents the 'superintelligent unaligned AI will kill us all' argument favored by alignment people as the primary threat, rather than the more plausible 'the people controlling AI are the real danger'.
OK, say I totally believe this. What, pray tell, are we supposed to do about it?
Don't you at least see the irony of quoting Sama's dire warnings about the development of AI without mentioning that he is at the absolute forefront of the push to build this technology that could destroy all of humanity? It's like he's saying "This potion can destroy all of humanity if we make it" as he works faster and faster to figure out how to make it.
I mean, I get it, "if we don't build it, someone else will", but all of the discussion around "alignment" seems blatantly laughable to me. If your goal is to build "super intelligence", i.e. something way smarter than any human or group of humans, how do you expect to control that super intelligence when you're acting at the middling level of human intelligence?
While I'm skeptical on the timeline, if we do ever end up building super intelligence, the idea that we can control it is a pipe dream. We may not be toast (I mean, we're smarter than dogs, and we keep them around), but we won't be in control.
So if you truly believe super intelligent AI is coming, you may as well enjoy the view now, because there ain't nothing you or anyone else will be able to do to "save humanity" if or when it arrives.
There is a strong financial incentive for a lot of people on this site to deny they are at risk from it, or to deny that what they are building carries risk and that they should bear culpability for it.
If that's really true, why is there such a big push to rapidly improve AI? I'm guessing OpenAI, Google, Anthropic, Apple, Meta, Boston Dynamics don't really believe this. They believe AI will make them billions. What is OpenAI's definition of AGI? A model that makes $100 billion?
For something like this, saying “There is no evidence showing it” is a good enough refutation.
Counterpointing with “Well, there could be a lot of this going on, but in secret” could justify any kooky theory out there. Bigfoot, UFOs, ghosts. Maybe AI has already replaced all of us and we’re Cylons. Something we couldn’t know.
The predictions are specific enough that they are falsifiable, so they should stand or fall based on the clear material evidence supporting or contradicting them.
Come on, be real. Do you honestly think that would make a lick of difference? Maybe, at best, delay things by a couple months. But this is a worldwide phenomenon, and humans have shown time and time again that they are not able to self organize globally. How successful do you think that political organization is going to be in slowing China's progress?
Look into the specific claims and it's not as amazing. Like the claim that models will require an entire year to train, when in reality training runs are on the order of weeks.
The societal claims also fall apart quickly:
> Censorship is widespread and increasing, as it has for the last decade or two. Big neural nets read posts and view memes, scanning for toxicity and hate speech and a few other things. (More things keep getting added to the list.) Someone had the bright idea of making the newsfeed recommendation algorithm gently ‘nudge’ people towards spewing less hate speech; now a component of its reward function is minimizing the probability that the user will say something worthy of censorship in the next 48 hours.
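To make the quoted mechanism concrete: it's just reward shaping, a penalty term bolted onto the engagement objective. A minimal sketch in Python (every name here is hypothetical and the predictors are stubs; this illustrates the idea, not anything from the article or a real recommender):

    # Hypothetical sketch of the quoted reward shaping: the feed
    # recommender's engagement objective gets a penalty term for the
    # predicted chance the user posts something censorable within 48h.
    # All names are made up for illustration.
    LAMBDA = 0.5  # weight on the censorability penalty

    def predict_engagement(user, item):
        # Stand-in for a learned engagement model (clicks, dwell time, ...).
        return 0.8

    def predict_censorable_post(user, item, horizon_hours=48):
        # Stand-in for a classifier estimating P(user posts censorable
        # content within `horizon_hours` after seeing `item`).
        return 0.1

    def reward(user, item):
        return predict_engagement(user, item) - LAMBDA * predict_censorable_post(user, item)

    print(reward("alice", "meme_123"))  # 0.8 - 0.5*0.1 = 0.75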
This is a common trend in rationalist and "X-risk" writers: Write a big article with mostly safe claims (LLMs will get bigger and perform better!) and a lot of hedging, then people will always see the article as primarily correct. When you extract out the easy claims and look at the specifics, it's not as impressive.
This article also shows some major signs that the author is deeply embedded in specific online bubbles, like this:
> Most of America gets their news from Twitter, Reddit, etc.
Sites like Reddit and Twitter feel like the entire universe when you're embedded in them, but when you step back and look at the numbers only a fraction of the US population are active users.
https://www.lesswrong.com/posts/u9Kr97di29CkMvjaj/evaluating...
Nuclear deterrence -- human cloning -- bioweapon proliferation -- Antarctic neutrality -- the list goes on.
> How successful do you think that political organization is going to be in slowing China's progress?
I wish people would stop with this tired war-mongering. China was not the one who opened up this can of worms. China has never been the one pushing the edge of capabilities. Before Sam Altman decided to give ChatGPT to the world, they were actively cracking down on software companies (in favor of hardware & "concrete" production).
We, the US, are the ones who chose to do this. We started the race. We put the world, all of humanity, on this path.
> Do you honestly think that would make a lick of difference?
I don't know, it depends. Perhaps we're lucky and the timelines are slow enough that 20-30% of the population loses their jobs before things become unrecoverable. Tech companies used to warn people not to wear their badges in public in San Francisco -- and that was what, 2020? Would you really want to work at "Human Replacer, Inc." when that means walking out and about among a population who you know hates you, viscerally? Or if we make it to 2028 in the same condition. The Bonus Army was bad enough -- how confident are you that the government would stand their ground, keep letting these labs advance capabilities, when their electoral necks were on the line?
This defeatism is a self-fulfilling prophecy. The people have the power to make things happen, and rhetoric like this is the most powerful thing holding them back.
Thank you. As someone who lives in Southeast Asia (and who has also lived in East Asia -- pardon the deliberate vagueness, for I do not wish to reveal too much potentially personally identifying information), this is how many of us in these regions view the current tensions between China and Taiwan as well.
Don't get me wrong; we acknowledge that many Taiwanese people want independence, that they are a people with their own aspirations and agency. But we can also see that the US -- and its European friends, which often blindly adopt its rhetoric and foreign policy -- is deliberately using Taiwan as a disposable pawn to attempt to provoke China into a conflict. The US will do what it has always done ever since the post-WW2 period -- destabilise entire regions of countries to further its own imperialistic goals, causing the deaths and suffering of millions, and then leaving the local populations to deal with the fallout for many decades after.
Without the US intentionally stoking the flames of mutual antagonism between China and Taiwan, the two countries could have slowly (perhaps over the coming decades) come to terms with each other, be it voluntary reunification or peaceful separation. If you know a bit of Chinese history, it is not far-fetched at all to think that the Chinese might eventually have agreed to recognise Taiwan as an independent nation, but this option has now been denied because the US has decided to use Taiwan as a pawn in a proxy conflict.
To anticipate questions about China's military invasion of Taiwan by 2027: No, I do not believe it will happen. Don't believe everything the US authorities claim.
In general it's worth weighting the opinions of people who are leaders in a field, about that field, over people who know little about it.
There is nothing happening!
The thing that is happening is not important!
The thing that is happening is important, but it's too late to do anything about it!
Well, maybe if you had done something when we first started warning about this...
See also: Covid/Climate/Bird Flu/the news.
That's exactly what the true AGI X-Riskers think! Sama acknowledges the intense risk but thinks the path forward is inevitable anyway, so he hopes that building intelligence will give them the intelligence to solve alignment. The other camp, à la Yudkowsky, believes it's futile to just hope alignment gets solved along the way: AGI capabilities will first become more intelligent and more powerful while disregarding any of our wishes. And then we've ceded any control of our future to an uncaring system that pursues its original goals and treats us the way a Google datacenter treats an ant in its way. I don't see how anyone who thinks "maybe stock number go up as your only goal is not the best way to make people happy" can miss this.
It describes almost anything.