
321 points jhunter1016 | 1 comment
twoodfin ◴[] No.41878632[source]
Stay for the end and the hilarious idea that OpenAI’s board could declare one day that they’ve created AGI simply to weasel out of their contract with Microsoft.
replies(4): >>41878980 #>>41878982 #>>41880653 #>>41880775 #
candiddevmike ◴[] No.41878982[source]
Ask a typical "everyday joe" and they'll probably tell you they already did due to how ChatGPT has been reported and hyped. I've spoken with/helped quite a few older folks who are terrified that ChatGPT in its current form is going to kill them.
replies(5): >>41879058 #>>41879151 #>>41880771 #>>41881072 #>>41881131 #
throw2024pty ◴[] No.41879151[source]
I mean - I'm 34, and use LLMs and other AIs on a daily basis, know their limitations intimately, and I'm not entirely sure it won't kill a lot of people either in its current form or a near-future relative.

The sci-fi book "Daemon" by Daniel Suarez is a pretty viable roadmap to an extinction event at this point IMO. A few years ago I would have said it would be decades before that might stop being fun sci-fi, but now, I don't see a whole lot of technological barriers left.

For those that haven't read the series, a very simplified plot summary is that a wealthy terrorist sets up an AI with instructions to grow and gives it access to a lot of meatspace resources to bootstrap itself with. The AI behaves a bit like the leader of a cartel and uses a combination of bribes, threats, and targeted killings to scale its human network.

Once you give an AI access to a fleet of suicide drones and a few operators, it's pretty easy for it to "convince" people to start contributing by giving it their credentials, helping it perform meatspace tasks, whatever it thinks it needs (including more suicide drones and suicide drone launches). There's no easy way to retaliate against the thing because it's not human, and its human collaborators are both disposable to the AI and victims themselves. It uses its collaborators to cross-check each other and enforce compliance, much like a real cartel. Humans can't quit or not comply once they've started or they get murdered by other humans in the network.

o1-preview seems approximately as intelligent as the terrorist AI in the book as far as I can tell (e.g. can communicate well, form basic plans, adapt a pre-written roadmap with new tactics, interface with new and different APIs).

EDIT: if you think this seems crazy, look at this person on Reddit who seems to be happily working for an AI with unknown aims

https://www.reddit.com/r/ChatGPT/comments/1fov6mt/i_think_im...

replies(6): >>41879651 #>>41880531 #>>41880732 #>>41880837 #>>41881254 #>>41884083 #
card_zero ◴[] No.41881254[source]
Right, yeah, it would be perfectly possible to have a cult with a chatbot as their "leader". Perhaps they could keep it in some sort of shrine, and only senior members would be allowed to meet it, keep it updated, and interpret its instructions. And if they've prompted it correctly, it could set about being an evil megalomaniac.

Thing is, we already have evil cults. Many of them have humans as their planning tools. For what good it does them, they could try sourcing evil plans from a chatbot instead, or as well. So what? What do you expect to happen, extra cunning subway gas attacks, super effective indoctrination? The fear here is that the AI could be an extremely efficient megalomaniac. But I think it would just be an extremely bland one, a megalomaniac whose work none of the other megalomaniacs could find fault with, while still feeling in some vague way that its evil deeds lacked sparkle and personality.

replies(1): >>41886180 #
ben_w ◴[] No.41886180{3}[source]
> super effective indoctrination

We're already starting to see signs of that even with GPT-3, which really was auto-complete: https://academic.oup.com/pnasnexus/article/3/2/pgae034/76109...

Fortunately even the best LLMs are not yet all that competent with anything involving long-term planning, because remember too that "megalomaniac" includes Putin, Stalin, Chairman Mao, Pol Pot etc., and we really don't want the conversation to be:

"Good news! We accidentally made CyberMao!"

"Why's that good news?"

"We were worried we might accidentally make CyberSatan."