The only other thing I can imagine is not very charitable: intellectual greed.
It can't just be that, can it? I genuinely don't understand. I would love to be educated.
It’s going to take money. What if your AGI has some tax policy ideas that differ from the inference owners'?
Why would they let that AGI out into the wild?
Let’s say you create AGI. How long will it take for society to recover? How long will it take for people of a certain tax ideology to finally say oh OK, UBI maybe?
The last part is my main question. How long do you think it would take our civilization to recover from the introduction of AGI?
Edit: sama gets a lot of shit, but I have to admit at least he used to work on the UBI problem, orb and all. However, those days seem very long gone from the outside, at least.
> I hope AGI can be used to automate work
You people need a PR guy, I'm serious. OpenAI is the first company I've ever seen that comes across as actively trying to be misanthropic in its messaging. I'm probably too old-fashioned, but this honestly sounds like Marlboro launching the slogan "lung cancer for the weak of mind".
The expected outcome is usually something like a post-scarcity society: one where everyone's basic needs are covered.
If we could all live in a future with a free house, a robot that does our chores, and food that is never scarce, we should work towards that, they believe.
The intermediate steps aren't thought out, in much the same way that, for example, the Communist Manifesto does little to explain the transition from capitalism to communism. It simply says there will be a need for things like forcing the bourgeoisie to join the common workers, and that there will be a transition phase, but offers no clear steps between the two systems.
Similarly, many AGI proponents think in terms of "wouldn't it be cool if there was an AI that did all the bits of life we don't like doing", without the systemic analysis that many people do those bits because, for example, they need money to eat.
AGI applied to the inputs (or supply chain) of what is needed for inference (power, DC space, chips, network equipment, etc.) will dramatically reduce the cost of inference. Most of the cost of stuff today is driven by the scarcity of "smart people's time". The raw materials needed are dirt cheap (cheaper than water). Transforming raw resources into useful high tech is a function of applied intelligence. Replace the human intelligence with machine intelligence, and costs will keep dropping (faster than the curve they are already on). Economic history has already shown this effect to be true: as we develop better tools to assist human productivity, the unit cost per piece of tech drops dramatically (Moore's law is just one example; everything that tech touches experiences this effect).
If you look at almost any universal problem with the human condition, one important bottleneck to improving it is intelligence (or "smart people's time").
We have abundance. The elite took it all.
Every single dollar gained from increased productivity over the past 50 years has been given to them, primarily through changes in tax structures, and their current claim is that actually we need to give them more.
Because that's all they know. More. A bigger slice of the pie. They demonstrably do not want to make the pie bigger.
Making the whole pie bigger dilutes their control and power over resources. They'd rather just take more of the current pie.
The Orb was never, ever meant to fix anything about UBI or even help it happen.
It was always about creating a hyped enough cryptocoin he could use as an ATM to fund himself and other things. That's what all these assholes got into crypto for, like, demonstrably. It was always about taking investment from fools who could not punish you for screwing them over, and then taking your bag and going home to play.
The orb was a sales and marketing gimmick. There's nothing it could do that couldn't be done by commodity fingerprint scanners.