
Playing in the Creek

(www.hgreer.com)
346 points by c1ccccc1 | 2 comments
A_D_E_P_T ◴[] No.43652264[source]
The author seems concerned about AI risk -- as in, "they're going to kill us all" -- and that's a common LW trope.

Yet, as a regular user of SOTA AI models, it's far from clear to me that the risk exists on any foreseeable time horizon. Even today's best models are credulous and lack a certain insight and originality.

As Dwarkesh once asked:

> One question I had for you while we were talking about the intelligence stuff was, as a scientist yourself, what do you make of the fact that these things have basically the entire corpus of human knowledge memorized and they haven’t been able to make a single new connection that has led to a discovery? Whereas if even a moderately intelligent person had this much stuff memorized, they would notice — Oh, this thing causes this symptom. This other thing also causes this symptom. There’s a medical cure right here.

> Shouldn’t we be expecting that kind of stuff?

I noticed this myself just the other day. I asked GPT-4.5 "Deep Research" what material would make the best [mechanical part]. The top response I got was directly copied from a laughably stupid LinkedIn essay. The second response was derived from some marketing-slop press release. There was no original insight at all. What I took away from my prompt was that I'd have to do the research and experimentation myself.

Point is, I don't think that LLMs are capable of coming up with terrifyingly novel ways to kill all humans. And this hasn't changed at all over the past five years. Now they're able to trawl LinkedIn posts and browse the web for press releases, is all.

More than that, these models lack independent volition and they have no temporal/spatial sense. It's not clear, from first principles, that they can operate as truly independent agents.

replies(6): >>43652313 #>>43652314 #>>43653096 #>>43658616 #>>43659076 #>>43659525 #
tvc015 ◴[] No.43652313[source]
Aren’t semiautonomous drones already killing soldiers in Ukraine? Can you not imagine a future with more conflict and automated killing? Maybe that’s not seen as AI risk per se?
replies(2): >>43652564 #>>43653668 #
1. UncleMeat ◴[] No.43653668[source]
The LessWrong-style AI risk is "AI becomes so superhuman that it is indistinguishable from God and decides to destroy all humans and we are completely powerless against its quasi-divine capabilities."
replies(1): >>43659835 #
2. ben_w ◴[] No.43659835[source]
With the side-note that, historically, humans have found themselves unable to distinguish a lot of things from God, e.g. thunderclouds — and, more recently, toast.