
Playing in the Creek

(www.hgreer.com)
346 points by c1ccccc1 | 1 comment
A_D_E_P_T No.43652264
The author seems concerned about AI risk -- as in, "they're going to kill us all" -- and that's a common LW trope.

Yet, as a regular user of SOTA AI models, it's far from clear to me that the risk exists on any foreseeable time horizon. Even today's best models are credulous and lack a certain insight and originality.

As Dwarkesh once asked:

> One question I had for you while we were talking about the intelligence stuff was, as a scientist yourself, what do you make of the fact that these things have basically the entire corpus of human knowledge memorized and they haven’t been able to make a single new connection that has led to a discovery? Whereas if even a moderately intelligent person had this much stuff memorized, they would notice — Oh, this thing causes this symptom. This other thing also causes this symptom. There’s a medical cure right here.

> Shouldn’t we be expecting that kind of stuff?

I noticed this myself just the other day. I asked GPT-4.5 "Deep Research" what material would make the best [mechanical part]. The top response I got was directly copied from a laughably stupid LinkedIn essay. The second response was derived from some marketing-slop press release. There was no original insight at all. What I took away from my prompt was that I'd have to do the research and experimentation myself.

Point is, I don't think that LLMs are capable of coming up with terrifyingly novel ways to kill all humans. And this hasn't changed at all over the past five years. Now they're able to trawl LinkedIn posts and browse the web for press releases, is all.

More than that, these models lack independent volition and they have no temporal/spatial sense. It's not clear, from first principles, that they can operate as truly independent agents.

ben_w No.43653096
A perfect AI isn't a threat: you could just tell it to come up with a set of rules whose consequences would never include anything we today would object to.

A useless AI isn't a threat: nobody will use it.

LLMs, as they exist today, are between these two. They're competent enough to get used, but will still give incorrect (and sometimes dangerous) answers that the users are not equipped to notice.

Like designing US trade policy.

> Yet, as a regular user of SOTA AI models, it's far from clear to me that the risk exists on any foreseeable time horizon. Even today's best models are credulous and lack a certain insight and originality.

What does the latter have to do with the former?

> Point is, I don't think that LLMs are capable of coming up with terrifyingly novel ways to kill all humans.

Why would the destruction of humanity need to use a novel mechanism, rather than a well-known one?

> And this hasn't changed at all over the past five years.

They're definitely different now than they were five years ago. I played with the DaVinci models back in the day; nobody cared, because that really was just very good autocomplete. Even if there was a way to get those early models to combine knowledge from different domains, it wasn't obvious how to actually make them do it, whereas today it's "just ask".

> Now they're able to trawl LinkedIn posts and browse the web for press releases, is all.

And write code. Not great code, but "it'll do" code. And use APIs.

> More than that, these models lack independent volition and they have no temporal/spatial sense. It's not clear, from first principles, that they can operate as truly independent agents.

While I'd agree they lack the competence to do so, I don't see how this matters. Humans are lazy and just tell the machine to do the work for them, give themselves a martini and a pay rise, then wonder why "The Machine Stops": https://en.wikipedia.org/wiki/The_Machine_Stops

The human half of this equation has been shown many times in the course of history. Our leaders treat other humans as machines or as animals, give themselves pay rises, then wonder why the strikes, uprisings, rebellions, and wars of independence happened.

Ironically, the very fact that LLMs lack imagination and merely mimic us may well be enough for this kind of AI to do exactly that kind of thing, even under the lowest interpretation of their nature and intelligence: the mimicry of human history is sufficient.

--

That said, I agree with you about the limitations of using them for research. Where you say this:

> I noticed this myself just the other day. I asked GPT-4.5 "Deep Research" what material would make the best [mechanical part]. The top response I got was directly copied from a laughably stupid LinkedIn essay. The second response was derived from some marketing-slop press release. There was no original insight at all. What I took away from my prompt was that I'd have to do the research and experimentation myself.

I had a similar experience with NotebookLM: I put in one of my own blog posts, and it missed half the content and re-interpreted half of the rest in a way that had little in common with my point. (Conversely, this makes me wonder: how many humans misunderstand my writing?)