321 points jhunter1016 | 1 comment
twoodfin ◴[] No.41878632[source]
Stay for the end and the hilarious idea that OpenAI’s board could declare one day that they’ve created AGI simply to weasel out of their contract with Microsoft.
replies(4): >>41878980 #>>41878982 #>>41880653 #>>41880775 #
candiddevmike ◴[] No.41878982[source]
Ask a typical "everyday Joe" and they'll probably tell you they already did, given how ChatGPT has been reported and hyped. I've spoken with/helped quite a few older folks who are terrified that ChatGPT in its current form is going to kill them.
replies(5): >>41879058 #>>41879151 #>>41880771 #>>41881072 #>>41881131 #
throw2024pty ◴[] No.41879151[source]
I mean - I'm 34, use LLMs and other AIs on a daily basis, and know their limitations intimately, and I'm still not entirely sure they won't kill a lot of people, either in their current form or via a near-future relative.

The sci-fi book "Daemon" by Daniel Suarez is a pretty viable roadmap to an extinction event at this point IMO. A few years ago I would have said it would be decades before that might stop being fun sci-fi, but now, I don't see a whole lot of technological barriers left.

For those that haven't read the series, a very simplified plot summary is that a wealthy terrorist sets up an AI with instructions to grow and gives it access to a lot of meatspace resources to bootstrap itself with. The AI behaves a bit like the leader of a cartel and uses a combination of bribes, threats, and targeted killings to scale its human network.

Once you give an AI access to a fleet of suicide drones and a few operators, it's pretty easy for it to "convince" people to start contributing by giving it their credentials, helping it perform meatspace tasks, whatever it thinks it needs (including more suicide drones and suicide drone launches). There's no easy way to retaliate against the thing because it's not human, and its human collaborators are both disposable to the AI and victims themselves. It uses its collaborators to cross-check each other and enforce compliance, much like a real cartel. Humans can't quit or not comply once they've started or they get murdered by other humans in the network.

o1-preview seems approximately as intelligent as the terrorist AI in the book as far as I can tell (e.g. can communicate well, form basic plans, adapt a pre-written roadmap with new tactics, interface with new and different APIs).

EDIT: if you think this seems crazy, look at this person on Reddit who seems to be happily working for an AI with unknown aims

https://www.reddit.com/r/ChatGPT/comments/1fov6mt/i_think_im...

replies(6): >>41879651 #>>41880531 #>>41880732 #>>41880837 #>>41881254 #>>41884083 #
1. devjab ◴[] No.41884083[source]
LLMs aren’t really AI in the cyberpunk sense. They are prediction machines that are really good at getting lucky. They can’t act on their own; they can’t even carry out tasks. Even in the broader field, AI can barely drive cars when the cars have their own special lanes, and there hasn’t been much improvement lately.

That’s not to say you shouldn’t worry about AI. ChatGPT and its peers are all tuned to present a western view of the world and of morality. In your example it would be perfectly possible to create a terrorist LLM and let people interact with it. It could teach your children how to make bombs. It could lie about historical events. It could create whatever propaganda you want. It could profile people if you gave it access to their data. And that is just the text side; imagine what sort of videos, voices, or even video calls you could create. It could enable you to do a whole lot of things that “western” LLMs don’t allow.

Which is, frankly, more dangerous than cyberpunk AI. Just look at the world today compared to 2000. In the US especially, you have two competing perceptions of political reality. I’m not going to get into either of them, just the fact that you have people who view the world so differently they can barely hold a conversation with each other. Imagine how much worse that would get with AIs that aren’t moderated.

I doubt we’ll see any sort of AGI in our lifetimes. If we do, then sure, you’ll get cyberpunk AI, but so far all we have is fancy auto-complete.