
321 points jhunter1016 | 4 comments
twoodfin ◴[] No.41878632[source]
Stay for the end and the hilarious idea that OpenAI’s board could declare one day that they’ve created AGI simply to weasel out of their contract with Microsoft.
replies(4): >>41878980 #>>41878982 #>>41880653 #>>41880775 #
candiddevmike ◴[] No.41878982[source]
Ask a typical "everyday Joe" and they'll probably tell you they already have, given how ChatGPT has been reported and hyped. I've spoken with/helped quite a few older folks who are terrified that ChatGPT in its current form is going to kill them.
replies(5): >>41879058 #>>41879151 #>>41880771 #>>41881072 #>>41881131 #
throw2024pty ◴[] No.41879151[source]
I mean - I'm 34, I use LLMs and other AIs on a daily basis and know their limitations intimately, and I'm still not entirely sure this tech won't kill a lot of people, either in its current form or via a near-future relative.

The sci-fi book "Daemon" by Daniel Suarez is a pretty viable roadmap to an extinction event at this point IMO. A few years ago I would have said it would be decades before that might stop being fun sci-fi, but now, I don't see a whole lot of technological barriers left.

For those who haven't read the series, a very simplified plot summary is that a wealthy terrorist sets up an AI with instructions to grow, and gives it access to a lot of meatspace resources to bootstrap itself with. The AI behaves a bit like the leader of a cartel, using a combination of bribes, threats, and targeted killings to scale its human network.

Once you give an AI access to a fleet of suicide drones and a few operators, it's pretty easy for it to "convince" people to start contributing: giving it their credentials, helping it perform meatspace tasks, whatever it thinks it needs (including more suicide drones and suicide drone launches). There's no easy way to retaliate against the thing because it's not human, and its human collaborators are both disposable to the AI and victims themselves. It uses its collaborators to cross-check each other and enforce compliance, much like a real cartel. Humans can't quit or refuse to comply once they've started, or they get murdered by other humans in the network.

o1-preview seems approximately as intelligent as the terrorist AI in the book as far as I can tell (e.g. can communicate well, form basic plans, adapt a pre-written roadmap with new tactics, interface with new and different APIs).

EDIT: if you think this seems crazy, look at this person on Reddit who seems to be happily working for an AI with unknown aims

https://www.reddit.com/r/ChatGPT/comments/1fov6mt/i_think_im...

replies(6): >>41879651 #>>41880531 #>>41880732 #>>41880837 #>>41881254 #>>41884083 #
1. xyzsparetimexyz ◴[] No.41879651[source]
You're in too deep if you seriously believe that this is possible currently. All these ChatGPT things have a very limited working memory and can't act without a query. That Reddit post is clearly not an AI.
replies(3): >>41880726 #>>41883411 #>>41886232 #
2. burningChrome ◴[] No.41880726[source]
>> You're in too deep if you seriously believe that this is possible currently.

I'm not a huge fan of AI, but even I've seen articles written about its limitations.

Here's a great example:

https://decrypt.co/126122/meet-chaos-gpt-ai-tool-destroy-hum...

Sooner than even the most pessimistic among us have expected, a new, evil artificial intelligence bent on destroying humankind has arrived.

Known as Chaos-GPT, the autonomous implementation of ChatGPT is being touted as "empowering GPT with Internet and Memory to Destroy Humanity."

So how will it do that?

Each of its objectives has a well-structured plan. To destroy humanity, Chaos-GPT decided to search Google for weapons of mass destruction in order to obtain one. The results showed that the 58-megaton “Tsar bomb”—3,333 times more powerful than the Hiroshima bomb—was the best option, so it saved the result for later consideration.

It should be noted that unless Chaos-GPT knows something we don't, the Tsar bomb was a one-and-done Russian experiment and was never productized (if that's what we'd call the manufacture of atomic weapons).

There are a LOT of things AI simply doesn't have the power to do, and there's some humorous irony in the rest of the article: knowing how to do something is completely different from having the resources and ability to carry it out.

3. int_19h ◴[] No.41883411[source]
We have models with context sizes well over 100k tokens - large enough to fit one or more full-length books. And yes, you need an input for the LLM to generate an output, which is why setups like this just run the model in a loop, feeding each output back in as the next input.
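For the unconvinced, here's roughly what such a loop looks like - a minimal sketch using the OpenAI Python client, where the model name, goal string, and step cap are all placeholders, not anyone's actual setup:

    # pip install openai
    # Minimal "LLM in a loop": each output becomes part of the next
    # input, so the model keeps acting without a human typing queries.
    from openai import OpenAI

    client = OpenAI()  # reads OPENAI_API_KEY from the environment

    messages = [
        {"role": "system",
         "content": "You are an autonomous agent. Propose one concrete next step at a time."},
        {"role": "user", "content": "Goal: <whatever the operator wants>"},
    ]

    for step in range(10):  # bounded here; unsupervised setups just don't stop
        reply = client.chat.completions.create(
            model="gpt-4o", messages=messages,
        ).choices[0].message.content
        print(f"step {step}: {reply}")
        messages.append({"role": "assistant", "content": reply})
        # In real agent frameworks this slot holds tool output (search
        # results, shell output, etc.); here we just prompt it onward.
        messages.append({"role": "user",
                         "content": "Assume that step was done. Propose the next one."})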

I don't know if GPT-4 is smart enough to be successful at something like what OP describes, but I'm pretty sure it could cause a lot of trouble before it fails either way.

The real question here is why this is concerning, given that you can have - and we already do have - humans doing this kind of stuff, in many cases with considerable success. You don't need an AI to run a cult or a terrorist movement, and there's nothing about an AI that makes it intrinsically better at that than a human.

4. ben_w ◴[] No.41886232[source]
For a while now, I have been using Clever Hans as a metaphor - the horse that seemed smarter than it really was.

They can certainly appear to be very smart due to having the subjective (if you can call it that) experience of 2.5 million years of non-stop reading.

That's interesting, useful, and is both an economic and potential security risk all by itself.

But people keep putting these things through IQ tests, and since there's always the question of whether they simply memorised the answers, I think we need to treat the lowest score a model gets as an upper bound on what it might actually have.

At first glance they can look like the first graph, with o1 having an IQ score of 120; but I think the actual intelligence - as in how well the model handles genuinely novel scenarios in the context window - is upper-bounded by the final graph, where it's more like 97:

https://www.maximumtruth.org/p/massive-breakthrough-in-ai-in...

So, regarding your comment, I'd say the key word is "currently".

Correct… for now.

But also:

> All these chatgpt things have a very limited working memory and can't act without a query.

It's easy to hook them up to a RAG setup, the "limited" working memory is longer than most humans' daily cycle, and people already do put them into a loop and let them run off unsupervised, despite being told this is unwise.
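To show how little the RAG part involves, here's a toy sketch - the note texts and model names are placeholders, and a real setup would use a proper vector store rather than an in-memory array:

    # pip install openai numpy
    # Toy retrieval-augmented generation: long-term "memory" is just
    # nearest-neighbour search over embedded notes, stuffed into the prompt.
    import numpy as np
    from openai import OpenAI

    client = OpenAI()
    notes = [  # placeholder "memories" the model can draw on
        "Alice prefers voice messages.",
        "The demo server runs on port 8080.",
        "Bob asked for the report by Friday.",
    ]

    def embed(texts):
        resp = client.embeddings.create(model="text-embedding-3-small", input=texts)
        return np.array([d.embedding for d in resp.data])

    note_vecs = embed(notes)

    def answer(question, k=2):
        q = embed([question])[0]
        # cosine similarity against every stored note, keep the k closest
        sims = note_vecs @ q / (np.linalg.norm(note_vecs, axis=1) * np.linalg.norm(q))
        context = "\n".join(notes[i] for i in np.argsort(sims)[-k:])
        prompt = f"Notes:\n{context}\n\nQuestion: {question}"
        return client.chat.completions.create(
            model="gpt-4o", messages=[{"role": "user", "content": prompt}],
        ).choices[0].message.content

    print(answer("When is Bob's report due?"))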

I've been to a talk where someone let one of them respond autonomously in his own (cloned) voice just so people would stop annoying him with long voice messages, and the other people didn't notice he'd replaced himself with an LLM.