
Playing in the Creek

(www.hgreer.com)
346 points | c1ccccc1
A_D_E_P_T ◴[] No.43652264[source]
The author seems concerned about AI risk -- as in, "they're going to kill us all" -- and that's a common LW trope.

Yet, as a regular user of SOTA AI models, I find it far from clear that the risk exists on any foreseeable time horizon. Even today's best models are credulous and lack a certain insight and originality.

As Dwarkesh once asked:

> One question I had for you while we were talking about the intelligence stuff was, as a scientist yourself, what do you make of the fact that these things have basically the entire corpus of human knowledge memorized and they haven’t been able to make a single new connection that has led to a discovery? Whereas if even a moderately intelligent person had this much stuff memorized, they would notice — Oh, this thing causes this symptom. This other thing also causes this symptom. There’s a medical cure right here.

> Shouldn’t we be expecting that kind of stuff?

I noticed this myself just the other day. I asked GPT-4.5 "Deep Research" what material would make the best [mechanical part]. The top response I got was directly copied from a laughably stupid LinkedIn essay. The second response was derived from some marketing-slop press release. There was no original insight at all. What I took away from my prompt was that I'd have to do the research and experimentation myself.

Point is, I don't think that LLMs are capable of coming up with terrifyingly novel ways to kill all humans. And this hasn't changed at all over the past five years. Now they're able to trawl LinkedIn posts and browse the web for press releases, is all.

More than that, these models lack independent volition and they have no temporal/spatial sense. It's not clear, from first principles, that they can operate as truly independent agents.

replies(6): >>43652313 #>>43652314 #>>43653096 #>>43658616 #>>43659076 #>>43659525 #
tvc015 ◴[] No.43652313[source]
Aren’t semiautonomous drones already killing soldiers in Ukraine? Can you not imagine a future with more conflict and automated killing? Maybe that’s not seen as AI risk per se?
replies(2): >>43652564 #>>43653668 #
A_D_E_P_T ◴[] No.43652564[source]
That's not "AI risk" because they're still tools that lack independent volition. Somebody's building them and setting them loose. They're not building themselves and setting themselves loose, and it's far from clear how to get there from here.

Dumb bombs kill people just as easily. One 80-year-old nuke is, at least potentially, more effective than the entirety of the world's drones.

replies(1): >>43653239 #
ben_w ◴[] No.43653239[source]
Oh, but it is an AI risk.

The analogy is with stock market flash-crashes, but those can be undone if everyone agrees "it was just a bug".

Software operates faster than human reaction times, so there's always pressure to fully automate aspects of military equipment, e.g. https://en.wikipedia.org/wiki/Phalanx_CIWS

Unfortunately, a flash-war from a bad algorithm, from a hallucination, from failing to specify that the moon isn't expected to respond to IFF pings even when it comes up over the horizon from exactly the direction you've been worried about finding a Soviet bomber wing… those are harder to undo.

replies(1): >>43657902 #
A_D_E_P_T ◴[] No.43657902[source]
"AI Safety" in that particular context is easy: Keep humans in the loop and don't give AIs access to sensitive systems. With certain small antipersonnel drones excepted, this is already the policy of all serious militaries.

Besides, that's simply not what the LW crowd is talking about. They're talking about, e.g., hypercompetent AIs developing novel undetectable biological weapons that kill all humans on purpose. (This is the "AI 2027" scenario.)

Yet, as far as I'm aware, there's not a single important discovery or invention made by AI. No new drugs, no new solar panel materials, no new polymers, etc. And not for want of trying!

They know what humans know. They're no more competent than any human; they're as competent as low-level expert humans, just with superhuman speed and memory. It's not clear that they'll ever be able to move beyond what humans know and develop hypercompetence.

replies(1): >>43659622 #
ben_w ◴[] No.43659622[source]
> With certain small antipersonnel drones excepted

And mines, and the CIWS I linked to and several systems like it (I think SeaRAM has similar autonomy to engage), and the Samsung SGR-A1, whose autonomy led to people arguing that we really ought to keep humans in the loop: https://en.wikipedia.org/wiki/Lethal_autonomous_weapon

The problem is, the more your adversaries automate, the more you need to automate to keep up. Right now we can even have the argument about the SGR-A1 because it's likely to target humans, who operate at human speeds, so a human in the loop isn't a major risk to operational success. Counter-Rocket, Artillery, and Mortar (C-RAM) systems already need to be autonomous because human eyes can't realistically track a mortar in mid-flight.

There were a few times in the Cold War when we were lucky that the lack of technology forced us to rely on humans in the loop, humans who said "no".

People are protesting against fully autonomous weapons because they're obviously useful enough to be militarily interesting, not just because they're obviously threatening.

> Besides, that's simply not what the LW crowd is talking about.

LW talks about every possible risk. I got the flash-war idea from them.

> Yet, as far as I'm aware, there's not a single important discovery or invention made by AI. No new drugs, no new solar panel materials, no new polymers, etc. And not for want of trying!

For about a decade after Word Lens showed the world that it was possible to run real-time augmented-reality translation on a smartphone, I kept surprising people — even fellow expat software developers — with the fact that it existed and was possible.

Today, I guess I have to surprise you with the 2024 Nobel Prize in Chemistry. Given my experience with Word Lens, I fully expect to keep on surprising people with this for another decade.

Drugs/biosci:

• DSP-1181: https://www.bbc.com/news/technology-51315462

• Halicin: https://en.wikipedia.org/wiki/Halicin

• Abaucin: https://en.wikipedia.org/wiki/Abaucin

The aforementioned 2024 Nobel Prize for AlphaFold: https://en.wikipedia.org/wiki/List_of_Nobel_laureates_in_Che...

PV:

• Materials: https://www.chemistryworld.com/news/ai-aids-discovery-of-sol...

• Other stuff: https://www.weforum.org/stories/2024/08/how-ai-can-help-revo...

Polymers:

https://arxiv.org/abs/2312.06470

https://arxiv.org/abs/2312.03690

https://arxiv.org/abs/2409.15354

> They know what humans know. They're no more competent than any human; they're as competent as low-level expert humans, just with superhuman speed and memory. It's not clear that they'll ever be able to move beyond what humans know and develop hypercompetence.

One of the things humans know is "how to use lab equipment to get science done": https://www.nature.com/articles/s44286-023-00002-4

"Just with superhuman speed and memory" is a lot, even if they were somehow otherwise limited to a human equivalent of IQ 90.