
Playing in the Creek

(www.hgreer.com)
346 points by c1ccccc1 | 1 comment
A_D_E_P_T ◴[] No.43652264[source]
The author seems concerned about AI risk -- as in, "they're going to kill us all" -- and that's a common LW trope.

Yet, as a regular user of SOTA AI models, I find it far from clear that the risk exists on any foreseeable time horizon. Even today's best models are credulous and lack a certain insight and originality.

As Dwarkesh once asked:

> One question I had for you while we were talking about the intelligence stuff was, as a scientist yourself, what do you make of the fact that these things have basically the entire corpus of human knowledge memorized and they haven’t been able to make a single new connection that has led to a discovery? Whereas if even a moderately intelligent person had this much stuff memorized, they would notice — Oh, this thing causes this symptom. This other thing also causes this symptom. There’s a medical cure right here.

> Shouldn’t we be expecting that kind of stuff?

I noticed this myself just the other day. I asked GPT-4.5 "Deep Research" what material would make the best [mechanical part]. The top response I got was copied directly from a laughably stupid LinkedIn essay. The second was derived from some marketing-slop press release. There was no original insight at all. What I took away from my prompt was that I'd have to do the research and experimentation myself.

Point is, I don't think that LLMs are capable of coming up with terrifyingly novel ways to kill all humans. And this hasn't changed at all over the past five years. Now they're able to trawl LinkedIn posts and browse the web for press releases, is all.

More than that, these models lack independent volition and they have no temporal/spatial sense. It's not clear, from first principles, that they can operate as truly independent agents.

replies(6): >>43652313 #>>43652314 #>>43653096 #>>43658616 #>>43659076 #>>43659525 #
1. lukeschlather ◴[] No.43659076[source]
The thing about LLMs is that they're trained exclusively on text, so they don't have much insight into these sorts of problems. But I don't know if anyone has tried making a multimodal LLM trained on X-ray tomography of parts under varying loads, tagged with descriptions of what the parts are for. I suspect such a multimodal model would be able to give you a good answer to that question.
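
For concreteness, here is a minimal sketch of the kind of pairing that comment gestures at: a small 3D encoder over tomography volumes and a toy text encoder over part descriptions, trained with a CLIP-style contrastive objective so that matching scan/description pairs end up close in a shared embedding space. The architecture, the toy tokenizer, and every name in it are illustrative assumptions, not anything specified in the thread.

    # Illustrative sketch only: pairs a 3D encoder over tomography volumes with a
    # toy text encoder over part descriptions, trained CLIP-style. Shapes, names,
    # and the fake data are assumptions, not the commenter's design.
    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class VolumeEncoder(nn.Module):
        """Encodes an X-ray tomography volume (1 x D x H x W) into an embedding."""
        def __init__(self, dim=256):
            super().__init__()
            self.net = nn.Sequential(
                nn.Conv3d(1, 32, 3, stride=2, padding=1), nn.ReLU(),
                nn.Conv3d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
                nn.AdaptiveAvgPool3d(1),
            )
            self.proj = nn.Linear(64, dim)

        def forward(self, vol):
            return self.proj(self.net(vol).flatten(1))

    class TextEncoder(nn.Module):
        """Toy description encoder: mean-pooled token embeddings (stand-in for a real LM)."""
        def __init__(self, vocab=10_000, dim=256):
            super().__init__()
            self.emb = nn.Embedding(vocab, dim)
            self.proj = nn.Linear(dim, dim)

        def forward(self, tokens):  # tokens: (batch, seq_len) integer ids
            return self.proj(self.emb(tokens).mean(dim=1))

    def contrastive_step(vol_enc, txt_enc, volumes, tokens, temperature=0.07):
        """One CLIP-style step: matching (volume, description) pairs attract, others repel."""
        v = F.normalize(vol_enc(volumes), dim=-1)
        t = F.normalize(txt_enc(tokens), dim=-1)
        logits = v @ t.T / temperature
        labels = torch.arange(len(volumes))
        return (F.cross_entropy(logits, labels) + F.cross_entropy(logits.T, labels)) / 2

    # Smoke test on random data; a real dataset would pair scans of parts under
    # varying loads with text describing what the part is for and how it behaved.
    vol_enc, txt_enc = VolumeEncoder(), TextEncoder()
    volumes = torch.randn(4, 1, 32, 32, 32)      # 4 fake tomography volumes
    tokens = torch.randint(0, 10_000, (4, 16))   # 4 fake tokenized descriptions
    loss = contrastive_step(vol_enc, txt_enc, volumes, tokens)
    loss.backward()
    print(loss.item())

The contrastive objective is just one option; a captioning or instruction-tuning objective over the same scan/description pairs would also fit the comment's suggestion. Nothing in the setup is exotic; the missing piece the comment points at is the training data.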