321 points jhunter1016 | 11 comments
twoodfin ◴[] No.41878632[source]
Stay for the end and the hilarious idea that OpenAI’s board could declare one day that they’ve created AGI simply to weasel out of their contract with Microsoft.
replies(4): >>41878980 #>>41878982 #>>41880653 #>>41880775 #
candiddevmike ◴[] No.41878982[source]
Ask a typical "everyday joe" and they'll probably tell you they already did due to how ChatGPT has been reported and hyped. I've spoken with/helped quite a few older folks who are terrified that ChatGPT in its current form is going to kill them.
replies(5): >>41879058 #>>41879151 #>>41880771 #>>41881072 #>>41881131 #
throw2024pty ◴[] No.41879151[source]
I mean - I'm 34, use LLMs and other AIs on a daily basis, and know their limitations intimately, and I'm not entirely sure they won't kill a lot of people, either in their current form or through a near-future relative.

The sci-fi book "Daemon" by Daniel Suarez is a pretty viable roadmap to an extinction event at this point IMO. A few years ago I would have said it would be decades before that might stop being fun sci-fi, but now, I don't see a whole lot of technological barriers left.

For those that haven't read the series, a very simplified plot summary is that a wealthy terrorist sets up an AI with instructions to grow and gives it access to a lot of meatspace resources to bootstrap itself with. The AI behaves a bit like the leader of a cartel and uses a combination of bribes, threats, and targeted killings to scale its human network.

Once you give an AI access to a fleet of suicide drones and a few operators, it's pretty easy for it to "convince" people to start contributing by giving it their credentials, helping it perform meatspace tasks, whatever it thinks it needs (including more suicide drones and suicide drone launches). There's no easy way to retaliate against the thing because it's not human, and its human collaborators are both disposable to the AI and victims themselves. It uses its collaborators to cross-check each other and enforce compliance, much like a real cartel. Humans can't quit or not comply once they've started or they get murdered by other humans in the network.

o1-preview seems approximately as intelligent as the terrorist AI in the book as far as I can tell (e.g. can communicate well, form basic plans, adapt a pre-written roadmap with new tactics, interface with new and different APIs).

EDIT: if you think this sounds crazy, look at this person on Reddit who seems to be happily working for an AI with unknown aims:

https://www.reddit.com/r/ChatGPT/comments/1fov6mt/i_think_im...

replies(6): >>41879651 #>>41880531 #>>41880732 #>>41880837 #>>41881254 #>>41884083 #
1. sickofparadox ◴[] No.41880732[source]
It can't form plans because it has no idea what a plan is or how to implement it. The ONLY thing these LLMs know how to do is predict the probability that their next word will make a human satisfied. That is all they do. People get very impressed when they prompt these things to pretend that they are sentient or capable of planning, but that's literally the point: it's guessing which string of meaningless (to it) characters will result in a user giving it a thumbs up on the ChatGPT website.
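
To make "predict the next word" concrete, here's a toy sketch with invented logits; a real model does the same softmax-and-pick over a vocabulary of ~100k tokens:

    import math

    # Invented next-token scores (logits) after some prompt; a real model
    # produces one score per token in its vocabulary.
    logits = {"plan": 2.1, "line": 0.3, "habit": -0.5}

    def softmax(scores):
        exps = {tok: math.exp(s) for tok, s in scores.items()}
        total = sum(exps.values())
        return {tok: e / total for tok, e in exps.items()}

    probs = softmax(logits)
    print(max(probs, key=probs.get))  # "plan" - greedy decoding picks the argmax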

You could teach me how to phonetically sound out some of China's greatest poetry in Chinese perfectly, and lots of people would be impressed, but I would be no more capable of understanding what I said than an LLM is capable of understanding "a plan".

replies(5): >>41880885 #>>41881071 #>>41881183 #>>41881444 #>>41884552 #
2. directevolve ◴[] No.41880885[source]
… but ChatGPT can make a plan if I ask it to. And it can use a plan to guide its future outputs. It can produce code or terminal commands that I can trivially run in my terminal, letting it operate my computer. From my computer, it can send commands to operate physical machinery. What exactly is the hard fundamental barrier here, as opposed to a capability you speculate it is unlikely to realize in practice in the next year or two?
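
A minimal sketch of that pipeline, assuming the OpenAI Python SDK (model name and prompt are placeholders); it deliberately prints the command instead of executing it, because the execution step is exactly the risk being debated:

    import subprocess
    from openai import OpenAI

    client = OpenAI()  # reads OPENAI_API_KEY from the environment
    resp = client.chat.completions.create(
        model="gpt-4o",  # placeholder; any chat model works
        messages=[{"role": "user",
                   "content": "Give me one shell command to list files, no prose."}],
    )
    command = resp.choices[0].message.content.strip()
    print("model proposed:", command)
    # subprocess.run(command, shell=True)  # uncommenting this is the whole debate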
replies(2): >>41881055 #>>41882442 #
3. Jerrrrrrry ◴[] No.41881055[source]
you are asking for goalposts?

as if they were stationary!

4. willy_k ◴[] No.41881071[source]
A plan is a set of steps oriented towards a specific goal, not some magical artifact only achievable through true consciousness.

If you ask it to make a plan, it will spit out a sequence of characters reasonably indistinguishable from a human-made plan. Sure, it isn’t “planning” in the strict sense of organizing things consciously (whatever that actually means), but it can produce sequences of text that convey a plan, and it can produce sequences of text that mimic reasoning about a plan. Going into the semantics is pointless; imo the artificial part of AI/AGI means that it should never be expected to follow the same process as biological consciousness, just to arrive at the same results.
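
For what it's worth, "a sequence of text that conveys a plan" is already machine-usable; a toy sketch, with the model's output hard-coded as a hypothetical JSON reply:

    import json

    # Hypothetical model reply to "return a plan as JSON" - hard-coded here.
    reply = '{"goal": "backup", "steps": ["tar the directory", "copy to NAS"]}'
    plan = json.loads(reply)
    for i, step in enumerate(plan["steps"], 1):
        print(i, step)  # each step could be fed back to the model as a new prompt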

replies(1): >>41883074 #
5. highfrequency ◴[] No.41881183[source]
Sure, but does this distinction matter? Is an advanced computer program that very convincingly imitates a super villain less worrisome than an actual super villain?
6. MrScruff ◴[] No.41881444[source]
If the multimodal model has embedded deep knowledge about words, concepts, and moving images - sure, it won’t have a humanlike understanding of what those ‘mean’, but it will have its own understanding, the kind required to allow it to make better predictions based on its training data.

It’s true that this understanding is quite primitive at the moment, and it will likely take further breakthroughs to crack long-horizon problems, but even when we get there it will never understand things in the exact way a human does. But I don’t think that’s the point.

7. sickofparadox ◴[] No.41882442[source]
Brother, it is not operating your computer, YOU ARE!
replies(1): >>41884460 #
8. alfonsodev ◴[] No.41883074[source]
Yes, and what people miss is that it can be recursive: those steps can be passed to other instances that know how to subtask each step and choose the best executor for each one. The power comes from the swarm organization of the whole thing, which I believe is what is behind o1-preview: specialization and orchestration, made transparent.
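
A toy sketch of that recursion (the `llm` helper is a hypothetical stand-in for any chat-completion call; none of this is OpenAI's actual o1 design, which isn't public):

    def llm(prompt: str) -> str:
        # Hypothetical stand-in; swap in a real chat-completion call here.
        return f"1. (model's answer to: {prompt[:40]}...)"

    def solve(task: str, depth: int = 0) -> str:
        if depth >= 2:
            return llm(f"Do this step and report the result: {task}")
        # Ask one instance to decompose, then hand each step to another instance.
        steps = llm(f"Split into numbered sub-steps: {task}").splitlines()
        results = [solve(s, depth + 1) for s in steps if s.strip()]
        return llm(f"Combine these results for '{task}': {results}")

    print(solve("write and deploy a small web app"))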
9. esafak ◴[] No.41884460{3}[source]
Nothing is preventing bad actors from using them to operate computers.
replies(1): >>41890204 #
10. smus ◴[] No.41884552[source]
>the ONLY thing these LLMs know how to do is predict the probability that their next word

This is super incorrect. The base model is trained to predict the distribution of next words (which obviously necessitates a ton of understanding about the language)
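
Concretely, that base-model objective is cross-entropy against the token that actually came next; a toy version with invented probabilities:

    import math

    # Invented next-token distribution from a toy model; the training signal
    # is -log(probability assigned to the token that actually appeared).
    predicted = {"plan": 0.7, "line": 0.2, "habit": 0.1}
    loss = -math.log(predicted["plan"])  # ~0.36 if "plan" was the real next token
    print(loss)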

Then there's the RLHF step, which teaches the model about what humans want to see

But o1 (which is one of these LLMs) is trained entirely differently, doing reinforcement learning on problem solving (we think), so it's a pretty different paradigm. I could see o1 planning very well.

11. sickofparadox ◴[] No.41890204{4}[source]
I mean, nothing is preventing bad actors from writing their own code to do that either? This makes it easier (kind of), but the difference between copilot-written malware and human-written malware doesn't really change anything. It's a chatbot - it doesn't have agency.