
AI 2027

(ai-2027.com)
949 points by Tenoke | 4 comments
stego-tech
It’s good science fiction, I’ll give it that. I think getting lost in the weeds over technicalities ignores the crux of the narrative: even if this doesn’t lead to AGI, at the very least it’s likely the final “warning shot” we’ll get before it’s suddenly and irreversibly here.

The problems it raises - alignment, geopolitics, lack of societal safeguards - are all real, and happening now (just replace “AGI” with “corporations”, and voila, you have a story about the climate crisis and regulatory capture). We should be solving these problems before AGI or job-replacing AI becomes commonplace, lest we run the very real risk of societal collapse or species extinction.

The point of these stories is to incite alarm: to provoke proactive responses while time is on our side, rather than trusting self-interested individuals in times of great crisis.

api
You don’t just beat around the bush here. You actually beat the bush a few times.

Large corporations, governments, institutionalized churches, political parties, and other “corporate” institutions are very much like a hypothetical AGI in many ways: they are immortal, sleepless, distributed, omnipresent, and possess beyond-human levels of combined intelligence, wealth, and power. They are mechanical Turk AGIs, more or less. Look at how humans cycle in, out, and through them, often without changing them much, because they have an existence and a weird kind of will independent of their members.

A whole lot, perhaps all, of what we need to do to prepare for a hypothetical AGI that may or may not be aligned consists of things we should be doing anyway to restrain and align the mechanical Turk variety. If we can’t do that, we have no chance against something faster and smarter.

What we have done over the past 50 years is the opposite: not just unchain them but drop any notion that they should be aligned.

Are we sure the AI alignment discourse isn’t just “occulted” progressive political discourse? Back when they burned witches, philosophers would encrypt possibly heretical ideas in the form of impenetrable nonsense, which is where what we call occultism comes from. You don’t get burned for suggesting steps to align corporate power, but a huge effort has been made to marginalize such discourse.

Consider a potential future AGI. Imagine it has a cult of followers around it, as it probably would, and champions who act for it like present-day politicians or CEOs, as it probably would. If it did not get humans to do these things for it, it would have analogous functions or parts of its own.

Now consider a corporation or other corporate entity that has all those things, but with the AGI’s digital brain replaced by a committee or shareholders.

What, really, is the difference, other than perhaps magnitude? Both can be dangerously unaligned. A real digital AGI might be smarter and faster, but that’s the only difference I see.

1. brookst
I looked but I couldn’t find any evidence that “occultism” comes from the encryption of heretical ideas. The term seems to have been popularized in Renaissance France to describe the study of hidden forces. I think you may be hallucinating here.
2. balamatom
Where exactly did you look?
3. brookst
Google, Wikipedia, Kagi. Now, please tell me your source.
4. balamatom
The Pope.