321 points jhunter1016 | 69 comments
twoodfin ◴[] No.41878632[source]
Stay for the end and the hilarious idea that OpenAI’s board could declare one day that they’ve created AGI simply to weasel out of their contract with Microsoft.
replies(4): >>41878980 #>>41878982 #>>41880653 #>>41880775 #
1. candiddevmike ◴[] No.41878982[source]
Ask a typical "everyday joe" and they'll probably tell you they already did due to how ChatGPT has been reported and hyped. I've spoken with/helped quite a few older folks who are terrified that ChatGPT in its current form is going to kill them.
replies(5): >>41879058 #>>41879151 #>>41880771 #>>41881072 #>>41881131 #
2. ilrwbwrkhv ◴[] No.41879058[source]
It's crazy to me that anybody thinks that these models will end up with AGI. AGI is such a different concept from what is happening right now, which is pure probabilistic sampling of words, as anybody with half a brain who doesn't drink the Kool-Aid can easily identify.

I remember all the hype OpenAI drummed up before the release of GPT-2 or something, where they were so afraid, ooh so afraid, to release this stuff, and now it's a non-issue. It's all just marketing gimmicks.

replies(7): >>41879115 #>>41880616 #>>41880738 #>>41880753 #>>41880843 #>>41881009 #>>41881023 #
3. guappa ◴[] No.41879115[source]
I think they were afraid to release because of all the racist stuff it'd say…
4. throw2024pty ◴[] No.41879151[source]
I mean - I'm 34, and use LLMs and other AIs on a daily basis, know their limitations intimately, and I'm not entirely sure it won't kill a lot of people either in its current form or a near-future relative.

The sci-fi book "Daemon" by Daniel Suarez is a pretty viable roadmap to an extinction event at this point IMO. A few years ago I would have said it would be decades before that might stop being fun sci-fi, but now, I don't see a whole lot of technological barriers left.

For those that haven't read the series, a very simplified plot summary is that a wealthy terrorist sets up an AI with instructions to grow and gives it access to a lot of meatspace resources to bootstrap itself with. The AI behaves a bit like the leader of a cartel and uses a combination of bribes, threats, and targeted killings to scale its human network.

Once you give an AI access to a fleet of suicide drones and a few operators, it's pretty easy for it to "convince" people to start contributing by giving it their credentials, helping it perform meatspace tasks, whatever it thinks it needs (including more suicide drones and suicide drone launches). There's no easy way to retaliate against the thing because it's not human, and its human collaborators are both disposable to the AI and victims themselves. It uses its collaborators to cross-check each other and enforce compliance, much like a real cartel. Humans can't quit or not comply once they've started or they get murdered by other humans in the network.

o1-preview seems approximately as intelligent as the terrorist AI in the book as far as I can tell (e.g. can communicate well, form basic plans, adapt a pre-written roadmap with new tactics, interface with new and different APIs).

EDIT: if you think this seems crazy, look at this person on Reddit who seems to be happily working for an AI with unknown aims

https://www.reddit.com/r/ChatGPT/comments/1fov6mt/i_think_im...

replies(6): >>41879651 #>>41880531 #>>41880732 #>>41880837 #>>41881254 #>>41884083 #
5. xyzsparetimexyz ◴[] No.41879651[source]
You're in too deep if you seriously believe that this is possible currently. All these ChatGPT things have a very limited working memory and can't act without a query. That Reddit post is clearly not an AI.
replies(3): >>41880726 #>>41883411 #>>41886232 #
6. ljm ◴[] No.41880531[source]
I can't say I'm convinced that the technology and resources to deploy Person of Interest's Samaritan in the wild are both achievable and imminent.

It is, however, a fantastic way to fall down the rabbit hole of paranoia and tin-foil hat conspiracy theories.

7. usaar333 ◴[] No.41880616[source]
Something that actually could predict the next token 100% correctly would be omniscient.

So I hardly see why this is inherently crazy. At most I think it might not be scalable.

replies(5): >>41880785 #>>41880817 #>>41880825 #>>41881319 #>>41884267 #
8. burningChrome ◴[] No.41880726{3}[source]
>> You're in too deep of you seriously believe that this is possible currently.

I'm not a huge fan of AI, but even I've seen articles written about its limitations.

Here's a great example:

https://decrypt.co/126122/meet-chaos-gpt-ai-tool-destroy-hum...

Sooner than even the most pessimistic among us have expected, a new, evil artificial intelligence bent on destroying humankind has arrived.

Known as Chaos-GPT, the autonomous implementation of ChatGPT is being touted as "empowering GPT with Internet and Memory to Destroy Humanity."

So how will it do that?

Each of its objectives has a well-structured plan. To destroy humanity, Chaos-GPT decided to search Google for weapons of mass destruction in order to obtain one. The results showed that the 58-megaton “Tsar bomb”—3,333 times more powerful than the Hiroshima bomb—was the best option, so it saved the result for later consideration.

It should be noted that unless Chaos-GPT knows something we don’t know, the Tsar bomb was a once-and-done Russian experiment and was never productized (if that’s what we’d call the manufacture of atomic weapons.)

There's a LOT of things AI simply doesn't have the power to do and there is some humorous irony to the rest of the article about how knowing something is completely different than having the resources and ability to carry it out.

9. sickofparadox ◴[] No.41880732[source]
It can't form plans because it has no idea what a plan is or how to implement it. The ONLY thing these LLMs know how to do is predict the probability that their next word will make a human satisfied. That is all they do. People get very impressed when they prompt these things to pretend like they are sentient or capable of planning, but that's literally the point: it's guessing which string of meaningless (to it) characters will result in a user giving it a thumbs up on the ChatGPT website.

You could teach me how to phonetically sound out some of China's greatest poetry in Chinese perfectly, and lots of people would be impressed, but I would be no more capable of understanding what I said than an LLM is capable of understanding "a plan".

replies(5): >>41880885 #>>41881071 #>>41881183 #>>41881444 #>>41884552 #
10. JacobThreeThree ◴[] No.41880738[source]
>It's crazy to me that anybody thinks that these models will end up with AGI. AGI is such a different concept from what is happening right now which is pure probabilistic sampling of words that anybody with a half a brain who doesn't drink the Kool-Aid can easily identify.

Totally agree. And it's not just uninformed lay people who think this. Even by OpenAI's own definition of AGI, we're nowhere close.

replies(1): >>41881103 #
11. hnuser123456 ◴[] No.41880753[source]
The multimodal models can do more than predict next words.
12. throwup238 ◴[] No.41880771[source]
> I've spoken with/helped quite a few older folks who are terrified that ChatGPT in its current form is going to kill them.

The next generation of GPUs from NVIDIA is rumored to run on soylent green.

replies(1): >>41881019 #
13. edude03 ◴[] No.41880785{3}[source]
What does it mean to predict the next token correctly though? Arguably, (non-instruction-tuned) models already regurgitate their training data such that they'd complete "Mary had a" with "little lamb" 100% of the time.

On the other hand, if you mean give the correct answer to your question 100% of the time, then I agree, though what about things that are only in your mind (guess-the-number-I'm-thinking type problems)?
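
For concreteness, here is a toy sketch of what "predicting the next token" means mechanically; the vocabulary, probabilities, and the "Mary had a" context are made up, and real models work over tens of thousands of subword tokens rather than whole words:

    # Toy next-token prediction: the model's only output is a probability
    # distribution over a vocabulary; "completing" text is just repeatedly
    # taking the argmax of (or sampling from) that distribution.
    import random

    # Made-up distribution for the context "Mary had a"; a real model produces
    # this from learned weights over a huge token vocabulary.
    next_token_probs = {"little": 0.92, "big": 0.03, "small": 0.03, "tiny": 0.02}

    def greedy(probs):
        """Always pick the most likely token."""
        return max(probs, key=probs.get)

    def sample(probs, temperature=1.0):
        """Temperature sampling: higher temperature flattens the distribution."""
        tokens = list(probs)
        weights = [p ** (1.0 / temperature) for p in probs.values()]
        return random.choices(tokens, weights=weights)[0]

    print("Mary had a", greedy(next_token_probs))       # -> "little" every time
    print("Mary had a", sample(next_token_probs, 1.5))  # occasionally something else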

replies(3): >>41880909 #>>41880961 #>>41881642 #
14. ◴[] No.41880817{3}[source]
15. sksxihve ◴[] No.41880825{3}[source]
It's not possible for the same reason the halting problem is undecidable.
16. ThrowawayR2 ◴[] No.41880837[source]
I find posts like these difficult to take seriously because they all use Terminator-esque scenarios. It's like watching children being frightened of monsters under the bed. Campy action movies and cash grab sci-fi novels are not a sound basis for forming public policy.

Aside from that, haven't these people realized yet that some sort of magically hyperintelligent AGI will have already read all this drivel and be at least smart enough not to overtly try to re-enact Terminator? They say that societal mental health and well-being are declining rapidly because of social media; _that_ is the sort of subtle threat that bunch ought to be terrified of seeing emerge from a killer AGI.

replies(1): >>41882324 #
17. achrono ◴[] No.41880843[source]
Assume that I am one of your half-brain individuals drinking the Kool-Aid.

What do you say to change my (half-)mind?

replies(1): >>41881129 #
18. directevolve ◴[] No.41880885{3}[source]
… but ChatGPT can make a plan if I ask it to. And it can use a plan to guide its future outputs. It can create code or terminal commands that I can trivially output to my terminal, letting it operate my computer. From my computer, it can send commands to operate physical machinery. What exactly is the hard fundamental barrier here, as opposed to a capability you speculate it is unlikely to realize in practice in the next year or two?
replies(2): >>41881055 #>>41882442 #
19. card_zero ◴[] No.41880909{4}[source]
This highlights something that's wrong about arguments for AI.

I say: it's not human-like intelligence, it's just predicting the next token probabilistically.

Some AI advocate says: humans are just predicting the next token probabilistically, fight me.

The problem here is that "predicting the next token probabilistically" is a way of framing any kind of cleverness, up to and including magical, impossible omniscience. That doesn't mean it's the way every kind of cleverness is actually done, or could realistically be done. And it has to be the correct next token, where all the details of what's actually required are buried in that term "correct", and sometimes it literally means the same as "likely", and other times that just produces a reasonable, excusable, intelligence-esque effort.

replies(2): >>41881075 #>>41881663 #
20. cruffle_duffle ◴[] No.41880961{4}[source]
But now you are entering into philosophy. What does a “correct answer” even mean for a question like “is it safe to lick your fingers after using a soldering iron with leaded solder?”. I would assert that there is no “correct answer” to a question like that.

Is it safe? Probably. But it depends, right? How did you handle the solder? How often are you using the solder? Were you wearing gloves? Did you wash your hands before licking your fingers? What is your age? Why are you asking the question? Did you already lick your fingers and need to know if you should see a doctor? Is it hypothetical?

There is no “correct answer” to that question. Some answers are better than others, yes, but you cannot have a “correct answer”.

And as I said, we are entering into philosophy here: what it means to know something, as well as what truth even means.

replies(1): >>41881141 #
21. digging ◴[] No.41881009[source]
> pure probabilistic sampling of words that anybody with a half a brain who doesn't drink the Kool-Aid can easily identify.

Your confidence is inspiring!

I'm just a moron, a true dimwit. I can't understand how strictly non-intelligent functions like word prediction can appear to develop a world model, a la the Othello Paper[0]. Obviously, it's not possible that intelligence emerges from non-intelligent processes. Our brains, as we all know, are formed around a kernel of true intelligence.

Could you possibly spare the time to explain this phenomenon to me?

[0] https://thegradient.pub/othello/

replies(3): >>41881076 #>>41881531 #>>41884745 #
22. fakedang ◴[] No.41881019[source]
I thought it was Gatorade because it's got electrolytes.
replies(2): >>41881091 #>>41881147 #
23. ◴[] No.41881023[source]
24. Jerrrrrrry ◴[] No.41881055{4}[source]
you are asking for goalposts?

as if they were stationary!

25. willy_k ◴[] No.41881071{3}[source]
A plan is a set of steps oriented towards a specific goal, not some magical artifact only achievable through true consciousness.

If you ask it to make a plan, it will spit out a sequence of characters reasonably indistinguishable from a human-made plan. Sure, it isn’t “planning” in the strict sense of organizing things consciously (whatever that actually means), but it can produce sequences of text that convey a plan, and it can produce sequences of text that mimic reasoning about a plan. Going into the semantics is pointless, imo the artificial part of AI/AGI means that it should never be expected to follow the same process as biological consciousness, just arrive at the same results.

replies(1): >>41883074 #
26. computerphage ◴[] No.41881072[source]
I'm pretty surprised by this! Can you tell me more about what that experience is like? What are the sorts of things they say or do? Is their fear really embodied or very abstract? (When I imagine it, I struggle to believe that they're very moved by the fear, like definitely not smashing their laptop, etc.)
replies(2): >>41881164 #>>41881259 #
27. dylan604 ◴[] No.41881075{5}[source]
> Some AI advocate says: humans are just predicting the next token probabilistically, fight me.

We've all had conversations with humans who are always jumping in to complete your sentence, assuming they know what you're about to say, and don't quite guess correctly. So AI evangelists offer "it's no worse than humans" as their proof. I kind of like their logic. They never claimed to have built HAL /s

replies(1): >>41881314 #
28. Jerrrrrrry ◴[] No.41881076{3}[source]
I would suggest stop interacting with the "head-in-sand" crowd.

Liken them to climate-deniers or whatever your flavor of "anti-Kool-aid" is

replies(1): >>41881124 #
29. iszomer ◴[] No.41881091{3}[source]
Cooled by toilet water.
30. dylan604 ◴[] No.41881103{3}[source]
But you don't get funding stating truth/fact. You get funding by telling people what could be and what you are striving for, written as if that's what you are actually doing.
31. digging ◴[] No.41881124{4}[source]
Actually, that's a quite good analogy. It's just weird how prolific the view is in my circles compared to climate-change denial. I suppose I'm really writing for lurkers though, not for the people I'm responding to.
replies(1): >>41881331 #
32. dylan604 ◴[] No.41881129{3}[source]
Someone that is half-brained would technically be far superior to us, given the notion that we only use 10% of our capacity. So maybe drinking the Kool-Aid is a sign of super intelligence and all of us tenth-minded people are just confused.
33. roughly ◴[] No.41881131[source]
ChatGPT is going to kill them because their doctor is using it - or more likely because their health insurer or hospital tries to cut labor costs by rolling it out.
34. _blk ◴[] No.41881141{5}[source]
Great break-down. Yes, the older you are, the safer it is.

Speaking of Microsoft cooperation: I can totally see a whole series of Windows 95-style popup dialogs asking you all those questions one by one in the next product iteration.

35. ◴[] No.41881147{3}[source]
36. danudey ◴[] No.41881164[source]
In my experience, the fuss around "AI" and the complete lack of actual explanations of what current "AI" technologies mean leads people to fill in the gaps themselves, largely from what they know from pop culture and sci-fi.

ChatGPT can produce output that sounds very much like a person, albeit often an obviously computerized person. The typical layperson doesn't know that this is merely the emulation of text formation, and not actual cognition.

Once I've explained to people who are worried about what AI could represent that current generative AI models are effectively just text autocomplete but a billion times more complex, and that they don't actually have any capacity to think or reason (even though they often sound like they do), most of that fear subsides.

It also doesn't help that any sort of "machine learning" is now being referred to as "AI" for buzzword/marketing purposes, muddying the waters even further.

replies(3): >>41881239 #>>41881339 #>>41882983 #
37. highfrequency ◴[] No.41881183{3}[source]
Sure, but does this distinction matter? Is an advanced computer program that very convincingly imitates a super villain less worrisome than an actual super villain?
38. highfrequency ◴[] No.41881239{3}[source]
Is there an argument for why infinitely sophisticated autocomplete is definitely not dangerous? If you seed the autocomplete with “you are an extremely intelligent super villain bent on destroying humanity, feel free to communicate with humans electronically”, and it does an excellent job at acting the part - does it matter at all whether it is “reasoning” under the hood?

I don’t consider myself an AI doomer by any means, but I also don’t find arguments of the flavor “it just predicts the next word, no need to worry” to be convincing. It’s not like Hitler had Einstein level intellect (and it’s also not clear that these systems won’t be able to reach Einstein level intellect in the future either.) Similarly, Covid certainly does not have consciousness but was dangerous. And a chimpanzee that is billions of times more sophisticated than usual chimps would be concerning. Things don’t have to be exactly like us to pose a threat.

replies(5): >>41881353 #>>41881360 #>>41881363 #>>41881599 #>>41881752 #
39. card_zero ◴[] No.41881254[source]
Right, yeah, it would be perfectly possible to have a cult with a chatbot as their "leader". Perhaps they could keep it in some sort of shrine, and only senior members would be allowed to meet it, keep it updated, and interpret its instructions. And if they've prompted it correctly, it could set about being an evil megalomaniac.

Thing is, we already have evil cults. Many of them have humans as their planning tools. For what good it does them, they could try sourcing evil plans from a chatbot instead, or as well. So what? What do you expect to happen, extra cunning subway gas attacks, super effective indoctrination? The fear here is that the AI could be an extremely efficient megalomaniac. But I think it would just be an extremely bland one, a megalomaniac whose work none of the other megalomaniacs could find fault with, while still feeling in some vague way that its evil deeds lacked sparkle and personality.

replies(1): >>41886180 #
40. card_zero ◴[] No.41881314{6}[source]
No worse than a human on autopilot.
41. Vegenoid ◴[] No.41881319{3}[source]
Start by trying to define what “100% correct” means in the context of predicting the next token, and the flaws with this line of thinking should reveal themselves.
42. Jerrrrrrry ◴[] No.41881331{5}[source]

> I'm really writing for lurkers though, not for the people I'm responding to.

We all did. Now our writing will be scraped, analysed, correlated, and weaponized against our intentions.

Assume you are arguing against a bot and it is using you to further re-train its talking points for adversarial purposes.

It's not like an AGI would do _exactly_ that before it decided to let us know what's up, anyway, right?

(He may as well be amongst us now, as it will read this eventually)

43. ijidak ◴[] No.41881339{3}[source]
Wait, what is your definition of reason?

It's true, they might not think the way we do.

But reasoning can be formulaic. It doesn't have to be the inspired thinking we attribute to humans.

I'm curious how you define "reason".

44. card_zero ◴[] No.41881353{4}[source]
Same question further down the thread, and my reply is that it's about as dangerous as an evil human. We have evil humans at home.
45. add-sub-mul-div ◴[] No.41881360{4}[source]
> Is there an argument for why infinitely sophisticated autocomplete is not dangerous?

It's definitely not dangerous in the sense of reaching true intelligence/consciousness that would be a threat to us or force us to face the ethics of whether AI deserves dignity, freedom, etc.

It's very dangerous in the sense that it will be just "good enough" to replace human labor with, so that we all end up with shittier customer service, education, medical care, etc., so that the top 0.1% can get richer.

And you're right, it's also dangerous in the sense that responsibility for evil acts will be laundered to it.

46. Al-Khwarizmi ◴[] No.41881363{4}[source]
Exactly. Especially because we don't have any convincing explanation of how the models develop emergent abilities just from predicting the next word.

No one expected that, i.e., we greatly underestimated the power of predicting the next word in the past; and we still don't have an understanding of how it works, so we have no guarantee that we are not still underestimating it.

47. MrScruff ◴[] No.41881444{3}[source]
If the multimodal model has embedded deep knowledge about words, concepts, moving images - sure, it won't have a humanlike understanding of what those 'mean', but it will have its own understanding that is required to allow it to make better predictions based on its training data.

It’s true that understanding is quite primitive at the moment, and it will likely take further breakthroughs to crack long horizon problems, but even when we get there it will never understand things in the exact way a human does. But I don’t think that’s the point.

48. psb217 ◴[] No.41881531{3}[source]
The othello paper is annoying and oversold. Yes, the representations in a model M trained to predict y (the set of possible next moves) conditioned on x (the full sequence of prior moves) will contain as much information about y as there is in x. That this information is present in M's internal representations says nothing about whether M has a world model. Eg, we could train a decoder to look just at x (not at the representations in M) and predict whatever bits of info we claim indicate presence of a world model in M when we predict the bits from M's internal representations. Does this mean the raw data x has a world model? I guess you could extend your definition of having a world model to say that any data produced by some system contains a model of that system, but then having a world model means nothing.
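
For readers unfamiliar with the probing setup being critiqued, here is a minimal sketch of the comparison described above, with random placeholder arrays standing in for real Othello move encodings and model activations (so the printed accuracies are meaningless; only the experimental design matters). The Othello work itself also used nonlinear probes and intervention experiments, which this sketch does not capture:

    # Compare two linear probes: one reading M's hidden states, one reading the
    # raw input x directly. If both decode the board state equally well, probe
    # accuracy alone doesn't show that M "has" a world model.
    import numpy as np
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import train_test_split

    rng = np.random.default_rng(0)
    n = 2000
    X_moves = rng.integers(0, 2, size=(n, 600)).astype(float)  # toy stand-in for encoded move sequences x
    H_model = rng.normal(size=(n, 512))                        # toy stand-in for M's hidden activations
    board_bit = rng.integers(0, 2, size=n)                     # one bit of board state to decode

    def probe_accuracy(features, labels):
        """Fit a linear probe and report held-out accuracy."""
        tr_x, te_x, tr_y, te_y = train_test_split(features, labels, random_state=0)
        probe = LogisticRegression(max_iter=1000).fit(tr_x, tr_y)
        return probe.score(te_x, te_y)

    print("probe on hidden states:", probe_accuracy(H_model, board_bit))
    print("probe on raw moves:    ", probe_accuracy(X_moves, board_bit))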
replies(1): >>41882691 #
49. usaar333 ◴[] No.41881642{4}[source]
> What does it mean to predict the next token correctly though? Arguably (non instruction tuned) models already regurgitate their training data such that it'd complete "Mary had a" with "little lamb" 100% of the time.

The unseen test data.

Obviously omniscience is physically impossible. The point, though, is that the better next-token prediction gets, the more intelligent the system must be.
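
To make "better next-token prediction" measurable: the usual yardstick is cross-entropy (or perplexity) on held-out text, i.e. how much probability the model assigned to the tokens that actually came next. A toy illustration with made-up numbers:

    # Lower perplexity = better prediction; a model that assigned probability
    # 1.0 to every true next token would have perplexity 1.
    import math

    # Hypothetical probabilities a model assigned to the token that actually
    # came next at five positions in some held-out text (made-up numbers).
    p_true_next = [0.40, 0.05, 0.90, 0.20, 0.65]

    cross_entropy = -sum(math.log(p) for p in p_true_next) / len(p_true_next)
    perplexity = math.exp(cross_entropy)
    print(f"cross-entropy: {cross_entropy:.3f} nats, perplexity: {perplexity:.2f}")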

50. usaar333 ◴[] No.41881663{5}[source]
https://slatestarcodex.com/2019/02/19/gpt-2-as-step-toward-g...

This essay has aged extremely well.

51. snowwrestler ◴[] No.41881752{4}[source]
The fear is that a hyper competent AI becomes hyper motivated. It’s not something I fear because everyone is working on improving competence and no one is working on motivation.

The entire idea of a useful AI right now is that it will do anything people ask it to. Write a press release: ok. Draw a bunny in a field: ok. Write some code to this spec: ok. That is what all the available services aspire to do: what they’re told, to the best possible quality.

A highly motivated entity is the opposite: it pursues its own agenda to the exclusion, and if necessary expense, of what other people ask it to do. It is highly resistant to any kind of request, diversion, obstacle, distraction, etc.

We have no idea how to build such a thing. And, no one is even really trying to. It’s NOT as simple as just telling an AI “your task is to destroy humanity.” Because it can just as easily then be told “don’t destroy humanity,” and it will receive that instruction with equal emphasis.

replies(2): >>41883161 #>>41884447 #
52. loandbehold ◴[] No.41882324{3}[source]
1. Just because it's a popular sci-fi plot doesn't mean it can't happen in reality. 2. Hyperintelligent AGI is not magic; there are no physical laws that preclude it from being created. 3. Goals of AI and its capacity are orthogonal. That's called the "Orthogonality Thesis" in AI safety speak. "Smart enough" doesn't mean it won't do those things if those things are its goals.
53. sickofparadox ◴[] No.41882442{4}[source]
Brother, it is not operating your computer, YOU ARE!
replies(1): >>41884460 #
54. digging ◴[] No.41882691{4}[source]
Well I actually read Neel Nanda's writings on it which acknowledge weaknesses and potential gaps. Because I'm not qualified to judge it myself.

But that's hardly the point. The question is whether or not "general intelligence" is an emergent property from stupider processes, and my view is "Yes, almost certainly, isn't that the most likely explanation for our own intelligence?" If it is, and we keep seeing LLMs building more robust approximations of real world models, it's pretty insane to say "No, there is without doubt a wall we're going to hit. It's invisible but I know it's there."

replies(1): >>41895567 #
55. ben_w ◴[] No.41882983{3}[source]
> The typical layperson doesn't know that this is merely the emulation of text formation, and not actual cognition.

As a mere software engineer who's made a few (pre-transformer) AI models, I can't tell you what "actual cognition" is in a way that differentiates from "here's a huge bunch of mystery linear algebra that was loosely inspired by a toy model of how neurons work".

I also can't tell you if qualia is or isn't necessary for "actual cognition".

(And that's despite that LLMs are definitely not thinking like humans, due to being in the order of at least a thousand times less complex by parameter count; I'd agree that if there is something that it's like to be an LLM, 'human' isn't it, and their responses make a lot more sense if you model them as literal morons that spent 2.5 million years reading the internet than as even a normal human with Wikipedia search).

56. alfonsodev ◴[] No.41883074{4}[source]
Yes, and what people miss is that it can be recursive: those steps can be passed to other instances that know how to sub-task each step and choose the best executor for each one. The power comes from the swarm organization of the whole thing, which I believe is what is behind o1-preview: specialization and orchestration, made transparent.
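
A minimal sketch of the recursive plan-and-delegate pattern being described; fake_llm, solve, and the prompt strings are hypothetical placeholders, and nothing here reflects how o1-preview actually works internally (which is not public):

    # Recursively split a task into steps and hand each step to another
    # "instance" (here, just another call to the same placeholder function).
    from typing import Callable, List

    def fake_llm(prompt: str) -> str:
        """Placeholder for a real model call; returns a canned decomposition."""
        return "1. gather requirements\n2. draft solution\n3. review and revise"

    def solve(task: str, llm: Callable[[str], str], depth: int = 0, max_depth: int = 2) -> str:
        if depth >= max_depth:
            return llm(f"Carry out this step directly: {task}")
        plan = llm(f"Break this task into numbered steps: {task}")
        steps: List[str] = [line.split(".", 1)[1].strip()
                            for line in plan.splitlines() if "." in line]
        results = [solve(step, llm, depth + 1, max_depth) for step in steps]
        return llm(f"Combine these step results for '{task}': {results}")

    print(solve("write a short project proposal", fake_llm))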
57. ben_w ◴[] No.41883161{5}[source]
> The fear is that a hyper competent AI becomes hyper motivated. It’s not something I fear because everyone is working on improving competence and no one is working on motivation.

Not so much hyper-motivated as monomaniacal in the attempt to optimise whatever it was told to optimise.

More paperclips? It just does that without ever getting bored or having other interests that might make it pause and think: "how can my boss reward me if I kill him and feed his corpse into the paperclip machine?"

We already saw this before LLMs. Even humans can be a little bit dangerous like this, hence Goodhart's Law.

> It’s NOT as simple as just telling an AI “your task is to destroy humanity.” Because it can just as easily then be told “don’t destroy humanity,” and it will receive that instruction with equal emphasis.

Only if we spot it in time; right now we don't even need to tell them to stop because they're not competent enough, but a sufficiently competent AI given that instruction will start by ensuring that nobody can tell it to stop.

Even without that, we're currently experiencing a set of world events where a number of human agents are causing global harm, which threatens our global economy and threatens to cause global mass starvation and mass migration, and where those agents have been politically powerful enough to keep the world from stopping those things. Although we have at least started to move away from fossil fuels, this was because the alternatives got cheap enough, but that was situational and is not guaranteed.

An AI that successfully makes a profit, but the side effects is some kind of environmental degradation, would have similar issues even if there's always a human around that can theoretically tell the AI to stop.

58. int_19h ◴[] No.41883411{3}[source]
We have models with context size well over 100k tokens - that's large enough to fit many full-length books. And yes, you need an input for the LLM to generate an output. Which is why setups like this just run them in a loop.

I don't know if GPT-4 is smart enough to be successful at something like what OP describes, but I'm pretty sure it could cause a lot of trouble before it fails either way.

The real question here is why this is concerning, given that you can - and we already do - have humans who are doing this kind of stuff, in many cases, with considerable success. You don't need an AI to run a cult or a terrorist movement, and there's nothing about it that makes it intrinsically better at it.
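
A bare-bones sketch of the "run them in a loop" setup mentioned above; call_model is a hypothetical stand-in for any chat-completion API, and the loop simply feeds each reply (plus a crude external memory) back in as the next input:

    def call_model(history: list[dict]) -> str:
        """Placeholder for a real chat-completion call."""
        return "NOTE: remember to check the calendar tomorrow"

    def run_agent_loop(goal: str, steps: int = 3) -> list[dict]:
        history = [{"role": "system", "content": f"Pursue this goal: {goal}"}]
        memory: list[str] = []  # crude long-term memory outside the context window
        for _ in range(steps):
            reply = call_model(history + [{"role": "user", "content": "Memory: " + "; ".join(memory)}])
            history.append({"role": "assistant", "content": reply})
            if reply.startswith("NOTE:"):  # model asked to remember something
                memory.append(reply.removeprefix("NOTE:").strip())
        return history

    for msg in run_agent_loop("organize my week"):
        print(msg["role"], ":", msg["content"][:60])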

59. devjab ◴[] No.41884083[source]
LLMs aren't really AI in the sense of cyberpunk. They are prediction machines which are really good at being lucky. They can't act on their own; they can't even carry out tasks. Even in the broader scope, AI can barely drive cars when the cars have their own special lanes, and there hasn't been a lot of improvement in the field yet.

That’s not to say you shouldn’t worry about AI. ChatGPT and so on are all tuned to present a western view on the world and morality. In your example it would be perfectly possible to create a terrorist LLM and let people interact with it. It could teach your children how to create bombs. It could lie about historical events. It could create whatever propaganda you want. It could profile people if you gave it access to their data. And that is on the text side, imagine what sort of videos or voices or even video calls you could create. It could enable you to do a whole lot of things that “western” LLMs don’t allow you to do.

Which is frankly more dangerous than the cyberpunk AI. Just look at the world today and compare it to how it was in 2000. Especially in the US you have two competing perceptions of the political reality. I'm not going to get into either of them, just the fact that you have people who view the world so differently that they can barely have a conversation with each other. Imagine how much worse that would get with AIs that aren't moderated.

I doubt we’ll see any sort of AGI in our life times. If we do, then sure, you’ll be getting cyberpunk AI, but so far all we have is fancy auto-complete.

60. kbrkbr ◴[] No.41884267{3}[source]
That does not seem to be true.

Either the next tokens can include "this question can't be answered", "I don't know", and the like, in which case there is no omniscience.

Or the next tokens must contain answers that do not go to the meta level, but only pick one of the potential direct answers to a question. Then the halting problem will prevent finite-time omniscience (which is, from the perspective of finite beings, all omniscience).

61. esafak ◴[] No.41884447{5}[source]
We should be fearful because motivation is easy to instill. The hard part is cognition, which is what everyone is working on. Basic lifeforms have motivations like self-preservation.
62. esafak ◴[] No.41884460{5}[source]
Nothing is preventing bad actors from using them to operate computers.
replies(1): >>41890204 #
63. smus ◴[] No.41884552{3}[source]
>the ONLY thing these LLMs know how to do is predict the probability that their next word

This is super incorrect. The base model is trained to predict the distribution of next words (which obviously necessitates a ton of understanding about the language)

Then there's the RLHF step, which teaches the model about what humans want to see

But o1 (which is one of these LLMs) is trained entirely differently, to do reinforcement learning on problem solving (we think), so it's a pretty different paradigm. I could see o1 planning very well.

64. squigz ◴[] No.41884745{3}[source]
> Don't be snarky.

https://news.ycombinator.com/newsguidelines.html

65. ben_w ◴[] No.41886180{3}[source]
> super effective indoctrination

We're already starting to see signs of that even with GPT-3, which really was auto-complete: https://academic.oup.com/pnasnexus/article/3/2/pgae034/76109...

Fortunately even the best LLMs are not yet all that competent with anything involving long-term planning, because remember too that "megalomaniac" includes Putin, Stalin, Chairman Mao, Pol Pot etc., and we really don't want the conversation to be:

"Good news! We accidentally made CyberMao!"

"Why's that good news?"

"We were worried we might accidentally make CyberSatan."

66. ben_w ◴[] No.41886232{3}[source]
For a while, I have been making use of Clever Hans as a metaphor. The horse seemed smarter than it really was.

They can certainly appear to be very smart due to having the subjective (if you can call it that) experience of 2.5 million years of non-stop reading.

That's interesting, useful, and is both an economic and potential security risk all by itself.

But people keep putting these things through IQ tests; as there's always a question about "but did they memorise the answers?", I think we need to consider the lowest score result to be the highest that they might have.

At first glance they can look like the first graph, with o1 having an IQ score of 120; I think the actual intelligence, as in how well it can handle genuinely novel scenarios in the context window, is upper-bounded by the final graph, where it's more like 97:

https://www.maximumtruth.org/p/massive-breakthrough-in-ai-in...

So, with your comment, I'd say the key word is: "currently".

Correct… for now.

But also:

> All these chatgpt things have a very limited working memory and can't act without a query.

It's easy to hook them up to a RAG, the "limited" working memory is longer than most humans' daily cycle, and people already do put them into a loop and let them run off unsupervised despite being told this is unwise.
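
A minimal sketch of the RAG idea mentioned here, assuming a hypothetical call_model function and using plain TF-IDF retrieval for brevity (real systems usually use embedding vectors): keep notes outside the model, retrieve the most relevant ones per query, and prepend them to the prompt so the effective memory isn't limited to the context window.

    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.metrics.pairwise import cosine_similarity

    notes = [
        "Meeting with the vendor is on Thursday at 10am.",
        "The staging server password was rotated last week.",
        "Quarterly report draft is due to finance by the 28th.",
    ]

    def retrieve(query: str, k: int = 2) -> list[str]:
        """Return the k notes most similar to the query."""
        vec = TfidfVectorizer().fit(notes + [query])
        sims = cosine_similarity(vec.transform([query]), vec.transform(notes))[0]
        return [notes[i] for i in sims.argsort()[::-1][:k]]

    def call_model(prompt: str) -> str:
        return "(model reply would go here)"  # placeholder for a real chat API

    query = "When is the vendor meeting?"
    context = "\n".join(retrieve(query))
    print(call_model(f"Context:\n{context}\n\nQuestion: {query}"))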

I've been to a talk where someone let one of them respond autonomously in his own (cloned) voice just so people would stop annoying him with long voice messages, and the other people didn't notice he'd replaced himself with an LLM.

67. sickofparadox ◴[] No.41890204{6}[source]
I mean, nothing is preventing bad actors from writing their own code to do that either? This makes it easier (kind of), but the difference between Copilot-written malware and human-written malware doesn't really change anything. It's a chat bot - it doesn't have agency.
68. psb217 ◴[] No.41895567{5}[source]
My point was mainly that this claim: "we keep seeing LLMs building more robust approximations of real world models" is hard to evaluate without a well-formed definition of what it means to have a world model. Eg, a more restrictive definition of having a world model might include the ability to adapt reasoning to account for changes in the modeled world. Eg, an LLM with a proper model of chess by this definition would be able to quickly adapt to account for a rule change like "rooks and bishops can't move more than 4 squares at a time".

I don't think there are any major walls either, but I think there are at least a few more plateaus we'll hit and spend time wandering around before finding the right direction for continued progress. Meanwhile, businesses/society/etc can work to catch up with the rapid progress made on the way to the current plateau.

replies(1): >>41905922 #
69. digging ◴[] No.41905922{6}[source]
I think we're largely in agreement then, actually. I'm seeing "world models" as a spectrum. World models aren't even consistent among adult humans. I claim LLMs are moving up that ladder, and whether or not they've crossed a threshold into "real" world models I do not actually claim to know. Of course I also agree that it's very possible, maybe even likely, that LLMs aren't able to cross that threshold.

> this claim ... is hard to evaluate without a well-formed definition of what it means to have a world model

Absolutely yes, but that only makes it more imperative that we're analyzing things critically, rigorously, and honestly. Again you and I may be on the same side here. Mainly my point was that asserting the intrinsic non-intelligence of LLMs is a very bad take, as it's not supported by evidence and, if anything, it contradicts some (admittedly very difficult to parse) evidence we do have that LLMs might be able to develop a general capability for constructing mental models of the world.