https://projects.propublica.org/nonprofits/organizations/810...
What's needed is closer to a government agency like NASA, with multiple independent inspectors - something like the IAEA - empowered by law to establish guardrails, report to Congress, and pump the brakes if needed. Think Gibson's "Turing Agency."
They could mandate open sourcing the technology that is developed, maintain feedback channels with private and public enterprises, and provide the basis for sensible use of narrow AI while we collectively fund safety, cognition, consciousness, and AGI research.
If we woke up tomorrow to aliens visiting us from a distant galaxy, and one alien was 100 times more intelligent and capable than the average human, we would be confronted with something terrifying.
Stuart Russell likens AI to such an alien giving us a heads up that it's on the way, and we may be looking at several years to several decades before the alien/AI arrives. We have a chance to get our shit together enough to meet the challenges we may face. Whether or not you believe AI could pose an existential threat, or that it could destabilize civilization in horrible ways, it's rational to establish institutions and start frank discussions now, well before any potential crisis.
Heck, it's not like we hold our governments to account for spending at the scale of NASA anyway - even a few tens of billions is a drop in the bucket, and it could double as a federal jobs program, incentivizing careers and research in a crucial technology sector.
Allowing self-regulated private sector corporations operating in the tech market to be the fundamental drivers of AI is probably a recipe for dystopian abuses. Regardless of intentions, it will lead to further corrosion of individual rights to free expression, privacy, intellectual property, and so on. Even if most of the negative impact isn't regulatory or "official" in nature, allowing these companies to impose cultural shifts on us is a terrible thing.
Companies and corporations should be subject to humans, not infringe on human agency. Right now we have companies that are effectively outside the control of any individual human, even the most principled and respectable CEOs, because the legal rules we operate them by are aligned not with the well-being of society at large, but with the group of people who have invested time and/or money in the legal construct. That arrangement has worked pretty well, but at the speed and scale of the modern tech industry, it's very clear that our governmental and social institutions are not equipped to mitigate the harms.
NASA and the space race are probably the most recent and most successful analogy to the quest for AGI, so maybe that's a solution worth trying again.