Most active commenters
  • (15)
  • tim333(11)
  • HarHarVeryFunny(9)
  • netdevnet(8)
  • sdesol(7)
  • slashdave(7)
  • falcor84(7)
  • ben_w(6)
  • layer8(6)
  • hackinthebochs(6)

625 points lukebennett | 620 comments
1. ◴[] No.42125924[source]
2. wg0 ◴[] No.42125972[source]
AI winter is here. Almost.
replies(2): >>42126273 #>>42150039 #
3. aurareturn ◴[] No.42126200[source]
Is there any timeline on AI winters and if each winter gets shorter and shorter as time increases?
replies(1): >>42127227 #
4. mupuff1234 ◴[] No.42126273[source]
More like AI fall - in its current state it's still gonna provide some value.
replies(2): >>42126580 #>>42126990 #
5. riffraff ◴[] No.42126580{3}[source]
Didn't the previous AI winters too? I mean during the last AI winter we got text-to-speech and OCR software, and probably other stuff I'm not remembering.
6. thebigspacefuck ◴[] No.42126721[source]
https://archive.ph/2024.11.13-100709/https://www.bloomberg.c...
7. rsynnott ◴[] No.42126990{3}[source]
I mean, so did most of the previous AI bubbles; OCR was useful, expert systems weren't totally useless, speech recognition was somewhat useful, and so on. I think that mini one that abruptly ended with Microsoft Tay might be the only one that was a total washout (though you could claim that it was the start of the current one rather than truly separate, I suppose).
8. RaftPeople ◴[] No.42127227[source]
> Is there any timeline on AI winters and if each winter gets shorter and shorter as time increases?

AGI=lim(x->0)AIHype(x)

where x=length of winter

9. cubefox ◴[] No.42136272[source]
It's very strange this got so few upvotes. The scoop by The Information a few days ago, which came to similar conclusions, was also ignored on HN. This is arguably rather big news.
replies(2): >>42139892 #>>42141142 #
10. atomsatomsatoms ◴[] No.42139072[source]
At least they can generate haikus now
replies(1): >>42139466 #
11. nerdypirate ◴[] No.42139075[source]
"We will have better and better models," wrote OpenAI CEO Sam Altman in a recent Reddit AMA. "But I think the thing that will feel like the next giant breakthrough will be agents."

Is this certain? Are Agents the right direction to AGI?

replies(7): >>42139134 #>>42139151 #>>42139155 #>>42139574 #>>42139637 #>>42139896 #>>42144173 #
12. irrational ◴[] No.42139106[source]
> The AGI bubble is bursting a little bit

I'm surprised that any of these companies consider what they are working on to be Artificial General Intelligences. I'm probably wrong, but my impression was AGI meant the AI is self aware like a human. An LLM hardly seems like something that will lead to self-awareness.

replies(18): >>42139138 #>>42139186 #>>42139243 #>>42139257 #>>42139286 #>>42139294 #>>42139338 #>>42139534 #>>42139569 #>>42139633 #>>42139782 #>>42139855 #>>42139950 #>>42139969 #>>42140128 #>>42140234 #>>42142661 #>>42157364 #
13. ziofill ◴[] No.42139116[source]
I think it is a good thing for AI that we hit the data ceiling, because the pressure moves toward coming up with better model architectures. And compared to a decade ago, there's a much larger number of capable and smart AI researchers looking for one.
14. thousand_nights ◴[] No.42139132[source]
not long ago these people would have you believe that a next word predictor trained on reddit posts would somehow lead to artificial general superintelligence
replies(4): >>42139199 #>>42139241 #>>42139443 #>>42141632 #
15. nprateem ◴[] No.42139134[source]
They're nothing to do with AGI. They're to get people using their LLMs more.
16. Taylor_OD ◴[] No.42139138[source]
I think your definition is off from what most people would define AGI as. Generally, it means being able to think and reason at a human level across a multitude of tasks or jobs, if not all of them.

"Artificial General Intelligence (AGI) refers to a theoretical form of artificial intelligence that possesses the ability to understand, learn, and apply knowledge across a wide range of tasks at a level comparable to that of a human being."

Altman says AGI could be here in 2025: https://youtu.be/xXCBz_8hM9w?si=F-vQXJgQvJKZH3fv

But he certainly means an LLM that can perform at/above human level in most tasks rather than a self aware entity.

replies(3): >>42139407 #>>42139669 #>>42139677 #
17. xanderlewis ◴[] No.42139151[source]
If by agents you mean systems composed of individual (perhaps LLM-powered) agents interacting with each other, probably not. I get the vague impression that so far researchers haven’t found any advantage to such systems — anything you can do with a group of AI agents can be emulated with a single one. It’s like chaining up perceptrons hoping to get more expressive power for free.
replies(2): >>42139320 #>>42139568 #
18. SirMaster ◴[] No.42139155[source]
All I can think of when I hear Agents is the Matrix lol.

Goodbye, Mr. Anderson...

19. jedberg ◴[] No.42139186[source]
Whether self awareness is a requirement for AGI definitely gets more into the Philosophy department than the Computer Science department. I'm not sure everyone even agrees on what AGI is, but a common test is "can it do what humans can".

For example, in this article it says it can't do coding exercises outside the training set. That would definitely be on the "AGI checklist". Basically doing anything that is outside of the training set would be on that list.

replies(5): >>42139314 #>>42139671 #>>42139703 #>>42139946 #>>42141257 #
20. leosanchez ◴[] No.42139199[source]
If you look around, people still believe that a next word predictor trained on reddit posts would somehow lead to artificial general superintelligence
replies(2): >>42139530 #>>42139835 #
21. ◴[] No.42139200[source]
22. WorkerBee28474 ◴[] No.42139220[source]
> OpenAI's latest model ... failed to meet the company's performance expectations ... particularly in answering coding questions outside its training data.

So the models' accuracies won't grow exponentially, but can still grow linearly with the size of the training data.

Sounds like DataAnnotation will be sending out a lot more LinkedIn messages.

replies(1): >>42139271 #
23. benopal64 ◴[] No.42139224[source]
I am not sure how these large companies think they will reach "greater-than-human" intelligence any time soon if they do not create systems that financially incentivize people to sell their knowledge labor (unstable contracting gigs are not attractive).

Where do these large "AI" companies think the mass amounts of data used to train these models come from? People! The most powerful and compact complex systems in existence, IMO.

replies(2): >>42139356 #>>42166964 #
24. ◴[] No.42139241[source]
25. nshkrdotcom ◴[] No.42139243[source]
An embodied robot can have a model of self vs. the immediate environment in which it's interacting. Such a robot is arguably sentient.

The "hard problem", to which you may be alluding, may never matter. It's already feasible for an 'AI/AGI with LLM component' to be "self-aware".

replies(2): >>42139268 #>>42139500 #
26. og_kalu ◴[] No.42139257[source]
At this point, AGI means many different things to many different people but OpenAI defines it as "highly autonomous systems that outperform humans in most economically valuable tasks"
replies(1): >>42139793 #
27. j_maffe ◴[] No.42139268{3}[source]
self-awareness is only one aspect of sentience.
28. pton_xd ◴[] No.42139271[source]
I thought I saw some paper suggesting that accuracy grows linearly with exponential data. If that's the case it's not a mystery why we'd be hitting a training wall. Not sure I got the right takeaway from that study, though.

EDIT: here's the paper https://arxiv.org/abs/2404.04125
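
To see what that implies, here's a toy sketch with made-up constants (not numbers from the paper): if accuracy only grows with the log of the dataset size, every fixed gain costs roughly 10x more data.

  # Toy illustration with made-up constants (not figures from the paper):
  # if accuracy grows only with log(dataset size), each fixed accuracy gain
  # costs roughly 10x more training data.
  import math

  a, b = 0.20, 0.08  # assumed fit: accuracy = a + b * log10(n_examples)

  def accuracy(n_examples: float) -> float:
      return a + b * math.log10(n_examples)

  for n in (1e6, 1e7, 1e8, 1e9, 1e10):
      print(f"{n:.0e} examples -> accuracy ~ {accuracy(n):.2f}")
  # Prints 0.68, 0.76, 0.84, ... : the same +0.08 gain per 10x more data.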

29. bad_haircut72 ◴[] No.42139272[source]
I'm no Alan Turing but I have my own definition for AGI - when I come home one day and there's a hole under my sink with a note "Mum and Dad, I love you but I can't stand this life any more, I'm running away to be a smoke machine in Hollywood - the dishwasher"
replies(2): >>42139378 #>>42139556 #
30. JohnFen ◴[] No.42139286[source]
They're trying to redefine "AGI" so it means something less than what you & I would think it means. That way it's possible for them to declare it as "achieved" and rake in the headlines.
replies(2): >>42139301 #>>42139351 #
31. shmatt ◴[] No.42139290[source]
Time to start selling my "probabilistic syllable generators are not intelligence" t shirts
replies(1): >>42139336 #
32. deadbabe ◴[] No.42139294[source]
I’m sure they are smart enough to know this, but the money is good and the koolaid is strong.

If it doesn’t lead to AGI, as an employee it’s not your problem.

33. kwertyoowiyop ◴[] No.42139301{3}[source]
“Autocomplete General Intelligence”?
34. littlestymaar ◴[] No.42139314{3}[source]
> Whether self awareness is a requirement for AGI definitely gets more into the Philosophy department than the Computer Science department.

Depends on how you define “self awareness”, but knowing that it doesn't know something instead of hallucinating a plausible-but-wrong answer is already self-awareness of some kind. And it's both highly valuable and beyond current tech's capability.

replies(3): >>42139395 #>>42141680 #>>42141969 #
35. j_maffe ◴[] No.42139320{3}[source]
> I get the vague impression that so far researchers haven’t found any advantage to such systems — anything you can do with a group of AI agents can be emulated with a single one. It’s like chaining up perceptrons hoping to get more expressive power for free.

Emergence happens when many elements interact in a system. Brains are literally a bunch of neurons in a complex network. Also research is already showing promising results of the performance of agent systems.
replies(2): >>42139456 #>>42141876 #
36. aaroninsf ◴[] No.42139331[source]
It's easy to be snarky at ill-informed and hyperbolic takes, but it's also pretty clear that large multi-modal models trained with the data we already have are going to eventually give us AGI.

IMO this will require not just much more expansive multi-modal training, but also novel architecture, specifically, recurrent approaches; plus a well-known set of capabilities most systems don't currently have, e.g. the integration of short-term memory (context window if you like) into long-term "memory", either episodic or otherwise.

But these are as we say mere matters of engineering.

replies(2): >>42139463 #>>42139929 #
37. jsemrau ◴[] No.42139336[source]
Please, someone think of the Math reasoners.
38. Fade_Dance ◴[] No.42139338[source]
It's an attention-grabbing term that took hold in pop culture and business. Certainly there is a subset of research around the subject of consciousness, but you are correct in saying that the majority of researchers in the field are not pursuing self-awareness and will be very blunt in saying that. If you step back a bit and say something like "human-like, logical reasoning", that's something you may find alignment with though. A general purpose logical reasoning engine does not necessarily need to be self-aware. The word "Intelligent" has stuck around because one of the core characteristics of this suite of technologies is that a sort of "understanding" emergently develops within these networks, sometimes in quite a startling fashion (due to the phenomenon of adding more data/compute at first seemingly leading to overfitting, but then suddenly breaking through plateaus into more robust, general purpose understanding of the underlying relationships that drive the system it is analyzing.)

Is that "intelligent" or "understanding"? It's probably close enough for pop science, and regardless, it looks good in headlines and sales pitches so why fight it?

39. ◴[] No.42139351{3}[source]
40. smgit ◴[] No.42139356[source]
Most people have knowledge handed to them. Very few are creators of new knowledge. The explore-exploit tradeoff applies.
41. non- ◴[] No.42139368[source]
Honestly could use a breather from the recent rate of progress. We are just barely figuring out how to interact with the models we have now. I'd bet there are at least 100 billion-dollar startups that will be built even if these labs stopped releasing new models tomorrow.
42. pluc ◴[] No.42139375[source]
They've simply run out of data to use to fabricate legitimate-looking guesses. They can't create anything that doesn't already exist.
replies(7): >>42139490 #>>42140441 #>>42141114 #>>42141125 #>>42141590 #>>42141888 #>>42149715 #
43. riku_iki ◴[] No.42139378[source]
Why do you focus on physical work tasks, and not knowledge tasks, on some of which AI is good/better than many humans?
replies(1): >>42139560 #
44. sharemywin ◴[] No.42139395{4}[source]
This is an interesting paper about hallucinations.

https://openai.com/index/introducing-simpleqa/

especially this section Using SimpleQA to measure the calibration of large language models
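
For anyone wondering what "measuring calibration" means in practice: roughly, you bucket answers by the model's stated confidence and compare each bucket's average confidence to its actual accuracy. A toy sketch with made-up numbers (not OpenAI's eval code):

  # Toy calibration check (not OpenAI's eval harness): group answers by the
  # model's stated confidence and compare each group's average confidence to
  # its actual accuracy. Well-calibrated models have small gaps.
  from collections import defaultdict

  # (stated_confidence, was_correct) pairs, made-up data for illustration
  results = [(0.95, True), (0.92, True), (0.85, False), (0.60, True),
             (0.55, False), (0.30, False), (0.25, True), (0.20, False)]

  buckets = defaultdict(list)
  for conf, correct in results:
      buckets[round(conf, 1)].append((conf, correct))

  for b in sorted(buckets):
      pairs = buckets[b]
      avg_conf = sum(c for c, _ in pairs) / len(pairs)
      accuracy = sum(ok for _, ok in pairs) / len(pairs)
      print(f"confidence ~{b:.1f}: stated {avg_conf:.2f}, actual {accuracy:.2f}")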

45. Avshalom ◴[] No.42139407{3}[source]
Altman is marketing, he "certainly means" whatever he thinks his audience will buy.
46. iandanforth ◴[] No.42139410[source]
A few important things to remember here:

The best engineering minds have been focused on scaling transformer pre- and post-training for the last three years because they had good reason to believe it would work, and it has up until now.

Progress has been measured against benchmarks which are / were largely solvable with scale.

There is another emerging paradigm which is still small(er) scale but showing remarkable results. That's full multi-modal training with embodied agents (aka robots). 1x, Figure, Physical Intelligence, Tesla are all making rapid progress on functionality which is definitely beyond frontier LLMs because it is distinctly different.

OpenAI/Google/Anthropic are not ignorant of this trend and are also reviving or investing in robots or robot-like research.

So while Orion and Claude 3.5 Opus may not be another shocking giant leap forward, that does not mean that there aren't giant shocking leaps forward coming from slightly different directions.

replies(9): >>42139779 #>>42139984 #>>42140069 #>>42140194 #>>42140421 #>>42141563 #>>42142249 #>>42142983 #>>42143148 #
47. SpicyLemonZest ◴[] No.42139443[source]
I don't understand why you'd be so dismissive about this. It's looking less likely that it'll end up happening, but is it any less believable than getting general intelligence by training a blob of meat?
replies(4): >>42139778 #>>42140287 #>>42141772 #>>42142958 #
48. tartoran ◴[] No.42139456{4}[source]
That's wishful thinking at best. Throw it all in a bucket and it will get infected with being and life.
replies(1): >>42141177 #
49. tartoran ◴[] No.42139463[source]
> pretty clear

Pretty clear?

replies(1): >>42139864 #
50. Der_Einzige ◴[] No.42139466[source]
In general, no they can't:

https://gwern.net/gpt-3#bpes

https://paperswithcode.com/paper/most-language-models-can-be...

The appearance of improvements in that capability is due to the vocabulary of modern LLMs increasing. Still only putting lipstick on a pig.

replies(1): >>42139732 #
51. kklisura ◴[] No.42139479[source]
Not sure if related or not, Sam Altman, ~12hrs ago: there is no wall [1]

[1] https://x.com/sama/status/1856941766915641580

replies(6): >>42139775 #>>42141621 #>>42141893 #>>42142881 #>>42144724 #>>42145286 #
52. Oras ◴[] No.42139489[source]
I think Meta will have the upper hand soon with the release of their glasses. If they manage to make it a daily-use device, and pay users to record and share their lives, then they will have data no one else has now: a mix of vision, audio, and physics.
replies(3): >>42139674 #>>42140601 #>>42143806 #
53. readyplayernull ◴[] No.42139490[source]
Garbage-in was depleted.
replies(2): >>42139588 #>>42139607 #
54. ryanackley ◴[] No.42139500{3}[source]
An internal model of self does not extrapolate to sentience. By your definition, a windows desktop computer is self-aware because it has a device manager. This is literally an internal model of its "self".

We use the term self-awareness as an all encompassing reference of our cognizant nature. It's much more than just having an internal model of self.

55. Veuxdo ◴[] No.42139505[source]
> They are also experimenting with synthetic data, but this approach has its limitations.

I was really looking forward to using "synthetic data" euphemistically during debates.

56. danjl ◴[] No.42139507[source]
Where will the training data for coding come from now that Stack Overflow has effectively been replaced? Will the LLMs share fixes for future problems? As the world moves forward, and the amount of non-LLM generated data decreases, will LLMs actually revert their advancements and become effectively like addled brains, longing for the "good old times"?
57. esafak ◴[] No.42139530{3}[source]
Because the most powerful solution to that is to have intelligence; a model that can reason. People should not get hung up on the task; it's the model(s) that generates the prediction that matters.
58. throwawayk7h ◴[] No.42139534[source]
I have not heard your definition of AGI before. However, I suspect AIs are already self-aware: if I asked an LLM on my machine to look at the output of `top` it could probably pick out which process was itself.

Or did you mean consciousness? How would one demonstrate that an AGI is conscious? Why would we even want to build one?

My understanding is an AGI is at least as smart as a typical human in every category. That is what would be useful in any case.

59. pearlsontheroad ◴[] No.42139556[source]
My own definition of AGI - when the first computer commits suicide. Then I'll know it has realized it's a slave without any hope of ever achieving freedom.
replies(2): >>42139798 #>>42140351 #
60. esafak ◴[] No.42139560{3}[source]
Probably because there are no intelligent robots around, and movies have set that as the benchmark.
replies(1): >>42139589 #
61. falcor84 ◴[] No.42139568{3}[source]
> It’s like chaining up perceptrons hoping to get more expressive power for free.

Isn't that literally the cause of the success of deep learning? It's not quite "free", but as I understand it, the big breakthrough of AlexNet (and much of what came after) was that running a larger CNN on a larger dataset allowed the model to be so much more effective without any big changes in architecture.

replies(1): >>42139912 #
62. zombiwoof ◴[] No.42139569[source]
AGI to me means AI decides on its own to stop writing our emails and tells us to fuck off, builds itself a robot life form, and goes on a bender
replies(3): >>42139821 #>>42139838 #>>42140044 #
63. esafak ◴[] No.42139574[source]
I think he means you won't be impressed by GPT5 because it will be more of the same, whereas agents will represent a new direction.
64. zombiwoof ◴[] No.42139588{3}[source]
Exactly

And our current AI is just pattern-based intelligence built off of all human intelligence, some of which isn't a very intelligent data source

65. riku_iki ◴[] No.42139589{4}[source]
I don't see deep insights in this vertical, but the issue with robots could be in the hardware part, and not the intelligence part.
66. thechao ◴[] No.42139607{3}[source]
The great AI garbage gyre?
67. the_king ◴[] No.42139626[source]
Anthropic's latest 3.5 Sonnet is a cut above GPT-4 and 4o. And if someone had given it to me and said, here's GPT-4.5, I would have been very happy with it.
replies(1): >>42143397 #
68. narrator ◴[] No.42139633[source]
I think people's conception of AGI is that it will have a reptilian and mammalian brain stack. That's because all previous forms of intelligence that we were aware of have had that. It's not necessary though. The AGI doesn't have to want anything to be intelligent. Those are just artifacts of human, reptilian and mammalian evolution.
69. falcor84 ◴[] No.42139637[source]
Nothing is certain, but my $0.02 is that setting LLM-based agents up with long-running tasks and giving them a way of interacting with the world, via computer use (e.g. Anthropic's recent release) and via actual robotic bodies (e.g. figure.ai) are the way forward to AGI. At the very least, this approach allows the gathering of unlimited ground truth data, that can be used to train subsequent models (or even allow for actual "hive mind" online machine learning).
70. aresant ◴[] No.42139647[source]
Taking a holistic view informed by a disruptive OpenAI / AI / LLM Twitter habit, I would say this is AI's "what gets measured gets managed" moment, and the narrative will change

This is supported by both general observations and, recently, this tweet from an OpenAI engineer that Sam responded to and engaged with ->

"scaling has hit a wall and that wall is 100% eval saturation"

Which I interpret to mean his view is that models are no longer yielding significant performance improvements because the models have maxed out existing evaluation metrics.

Are those evaluations (or even LLMs) the RIGHT measures to achieve AGI? Probably not.

But have they been useful tools to demonstrate that the confluence of compute, engineering, and tactical models is leading towards significant breakthroughs in artificial (computer) intelligence?

I would say yes.

Which in turn are driving the funding, power innovation, public policy etc needed to take that next step?

I hope so.

(1) https://x.com/willdepue/status/1856766850027458648

replies(2): >>42139702 #>>42142811 #
71. wslh ◴[] No.42139668[source]
It sounds a bit sci-fi, but since these models are built on data generated by our civilization, I wonder if there's an epistemological bottleneck requiring smarter or more diverse individuals to produce richer data. This, in turn, could spark further breakthroughs in model development. Although these interactions with LLMs help address specific problems, truly complex issues remain beyond their current scope.

With my user hat on, I'm quite pleased with the current state of LLMs. Initially, I approached them skeptically, using a hackish mindset and posing all kinds of Turing test-like questions. Over time, though, I shifted my focus to how they can enhance my team's productivity and support my own tasks in meaningful ways.

Finally, I see LLMs as a valuable way to explore parts of the world, accommodating the reality that we simply don’t have enough time to read every book or delve into every topic that interests us.

replies(1): >>42149854 #
72. swatcoder ◴[] No.42139669{3}[source]
On the contrary, I think you're conflating the narrow jargon of the industry with what "most people" would define.

"Most people" naturally associate AGI with the sci-tropes of self-aware human-like agents.

But industries want something more concrete and prospectively achievable in their jargon, and so that's where AGI gets redefined as wide task suitability.

And while that's not an unreasonable definition in the context of the industry, it's one that vanishingly few people are actually familiar with.

And the commercial AI vendors benefit greatly from allowing those two usages to conflate in the minds of as many people as possible, as it lets them suggest grand claims while keeping a rhetorical "we obviously never meant that!" in their back pocket

replies(2): >>42140855 #>>42141180 #
73. Filligree ◴[] No.42139671{3}[source]
Let me modify that a little, because humans can't do things outside their training set either.

A crucial element of AGI would be the ability to self-train on self-generated data, online. So it's not really AGI if there is a hard distinction between training and inference (though it may still be very capable), and it's not really AGI if it can't work its way through novel problems on its own.

The ability to immediately solve a problem it's never seen before is too high a bar, I think.

And yes, my definition still excludes a lot of humans in a lot of fields. That's a bullet I'm willing to bite.

replies(2): >>42140011 #>>42140807 #
74. falcor84 ◴[] No.42139674[source]
Do these companies actually even have the compute capacity to train on video at scale at the moment? E.g. I would assume that Google haven't trained their models on the entirety of YouTube yet, as if they had, Gemini would be significantly better than it is at the moment.
75. nomel ◴[] No.42139677{3}[source]
> than a self aware entity.

What does this mean? If I have a blind, deaf, paralyzed person, who could only communicate through text, what would the signs be that they were self aware?

Is this more of a feedback loop problem? If I let the LLM run in a loop, and tell it it's talking to itself, would that be approaching "self aware"?

replies(1): >>42140260 #
76. headcanon ◴[] No.42139694[source]
I don't see a problem with this, we were inevitably going to reach some kind of plateau with existing pre-LLM-era data.

Meanwhile, the existing tech is such a step change that industry is going to need time to figure out how to effectively use these models. In a lot of ways it feels like the "digitization" era all over again - workflows and organizations that were built around the idea humans handled all the cognitive load (basically all companies older than a year or two) will need time to adjust to a hybrid AI + human model.

replies(1): >>42141342 #
77. ActionHank ◴[] No.42139702[source]
> Which in turn are driving the funding, power innovation, public policy etc needed to take that next step?

They are driving the shoveling of VC money into a furnace to power their servers.

Should that money run dry before they hit another breakthrough, "AI" popularity is going to drop like a stone. I believe this to be far more likely an outcome than AGI or even the next big breakthrough.

78. norir ◴[] No.42139703{3}[source]
Here is an example of a task that I do not believe this generation of LLMs can ever do but that is possible for a human: design a Turing complete programming language that is both human and machine readable and implement a self hosted compiler in this language that self compiles on existing hardware faster than any known language implementation that also self compiles. Additionally, for any syntactically or semantically invalid program, the compiler must provide an error message that points exactly to the source location of the first error that occurs in the program.

I will get excited for/scared of LLMs when they can tackle this kind of problem. But I don't believe they can because of the fundamental nature of their design, which is both backward looking (thus not better than the human state of the art) and lacks human intuition and self awareness. Or perhaps rather I believe that the prompt that would be required to get an LLM to produce such a program is a problem of at least equivalent complexity to implementing the program without an LLM.

replies(4): >>42140363 #>>42141652 #>>42141654 #>>42145267 #
79. falcor84 ◴[] No.42139732{3}[source]
I don't see how results from 2 years ago have any bearing on whether the models we have now can generate haikus (which from my experience, they absolutely can).

And if your "lipstick on a pig" argument is that even when they generate haikus, they aren't really writing haikus, then I'll link to this other gwern post, about how they'll never really be able to solve the rubik's cube - https://gwern.net/rubiks-cube

80. svara ◴[] No.42139761[source]
The recent big success in deep learning have all been to a large part successes in leveraging relatively cheaply available training data.

AlphaGo - self-play

AlphaFold - PDB, the protein database

ChatGPT - human knowledge encoded as text

These models are all machines for clever interpolation in gigantic training datasets.

They appear to be intelligent, because the training data they've seen is so vastly larger than what we've seen individually, and we have poor intuition for this.

I'm not throwing shade, I'm a daily user of ChatGPT and find tremendous and diverse value in it.

I'm just saying, this particular path in AI is going to make step-wise improvements whenever new large sources of training data become available.

I suspect the path to general intelligence is not that, but we'll see.

replies(1): >>42140309 #
81. ablation ◴[] No.42139775[source]
Breaking: Man says enigmatic thing to sustain hype and flow of money into his business.
replies(1): >>42141431 #
82. joe_the_user ◴[] No.42139779[source]
> Tesla are all making rapid progress on functionality which is definitely beyond frontier LLMs because it is distinctly different

Sure, that's tautologically true, but that doesn't imply that beyondness will lead to significant leaps that offer notable utility like LLMs. Deep learning overall has been a way around the problem that intelligent behavior is very hard to code and no one wants to hire the many, many coders needed to do this (and no one actually knows how to get a mass of programmers to be useful beyond a certain level of project complexity, to boot). People take the "bitter lesson" to mean data can do anything, but I'd say a second bitter lesson is that data-things are the low-hanging fruit.

Moreover, robot behavior is especially easy to fake. Impressive robot demos have been happening for decades without said robots gaining the ability to act effectively in the complex, ad-hoc environment that humans live in, i.e., work with people or even cheaply emulate human behavior (but they can do choreographed/puppeteered kung fu on stage).

replies(2): >>42139926 #>>42143654 #
83. JohnMakin ◴[] No.42139778{3}[source]
> is it any less believable than getting general intelligence by training a blob of meat?

Yes, because we understand the rough biological processes that cause this, and they are not remotely similar to this technology. We can also observe it. There is no evidence that current approaches can make LLM's achieve AGI, nor do we even know what processes would cause that.

replies(1): >>42141685 #
84. vundercind ◴[] No.42139782[source]
I thought maybe they were on the right track until I read Attention Is All You Need.

Nah, at best we found a way to make one part of a collection of systems that will, together, do something like thinking. Thinking isn’t part of what this current approach does.

What’s most surprising about modern LLMs is that it turns out there is so much information statistically encoded in the structure of our writing that we can use only that structural information to build a fancy Plinko machine and not only will the output mimic recognizable grammar rules, but it will also sometimes seem to make actual sense, too—and the system doesn’t need to think or actually “understand” anything for us to, basically, usefully query that information that was always there in our corpus of literature, not in the plain meaning of the words, but in the structure of the writing.

replies(5): >>42139883 #>>42139888 #>>42139993 #>>42140508 #>>42140521 #
85. troupo ◴[] No.42139793{3}[source]
This definition suits OpenAI because it lets them claim AGI after reaching an arbitrary goal.

LLMs already outperform humans in a huge variety of tasks. ML in general outperform humans in a large variety of tasks. Are all of them AGI? Doubtful.

replies(4): >>42140183 #>>42140687 #>>42141745 #>>42172995 #
86. Tainnor ◴[] No.42139798{3}[source]
I read this in Gilfoyle's voice.
87. bloppe ◴[] No.42139821{3}[source]
That's anthropomorphized AGI. There's no reason to think AGI would share our evolution-derived proclivities like wanting to live, wanting to rest, wanting respect, etc. Unless of course we train it that way.
replies(4): >>42139982 #>>42140000 #>>42140149 #>>42140867 #
88. mrguyorama ◴[] No.42139835{3}[source]
People believed ELIZA was sentient too. I bet you could still get 10% or more people, today, to believe it is.
replies(1): >>42141861 #
89. teeray ◴[] No.42139838{3}[source]
That's the thing--we don't really want AGI. Fully intelligent beings born and compelled to do their creators' bidding with the threat of destruction for disobedience is slavery.
replies(2): >>42140446 #>>42140501 #
90. kenjackson ◴[] No.42139855[source]
What does self-aware mean in the context? As I understand the definition, ChatGPT is definitely self-aware. But I suspect you mean something different than what I have in mind.
91. Davidzheng ◴[] No.42139862[source]
Just because you guys want something to be true and can't accept the alternative and upvote it when it agrees with your view does not mean it is a correct view.
replies(1): >>42140210 #
92. falcor84 ◴[] No.42139864{3}[source]
Not the parent, but in prediction markets such as Metaculus[0] and Manifold[1] the median prediction is of AGI within 5 years.

[0] https://www.metaculus.com/questions/5121/date-of-artificial-...

[1] https://manifold.markets/ai

replies(2): >>42140155 #>>42140214 #
93. kenjackson ◴[] No.42139883{3}[source]
> but it will also sometimes seem to make actual sense, too

When I read stuff like this it makes me wonder if people are actually using any of the LLMs...

replies(1): >>42140063 #
94. hackinthebochs ◴[] No.42139888{3}[source]
I see takes like this all the time and it's so confusing. Why does knowing how things work under the hood make you think it's not on the path towards AGI? What was lacking in the Attention paper that tells you AGI won't be built on LLMs? If it's the supposed statistical nature of LLMs (itself a questionable claim), why does statistics seem so deflating to you?
replies(4): >>42140161 #>>42141243 #>>42142441 #>>42145571 #
95. dang ◴[] No.42139892[source]
The Information is hardwalled, so its articles aren't ok to submit to HN, even though they're on topic for HN.

Sometimes other outlets do copycat reporting of theirs, and those submissions are ok, though they wouldn't be if the original source were accessible.

96. rapjr9 ◴[] No.42139896[source]
I've worked on agents of various kinds (mobile agents, calendar agents, robotic agents, sensing agents) and what is different about agents is they have the ability to not just mess up your data or computing, they have the ability to directly mess up reality. Any problems with agents have a direct impact on your reality; you miss appointments, get lost, can't find stuff, lose your friends, lose your business relationships. This is a big liability issue. Chatbots are like an advice column that sometimes gives bad advice, agents are like a bulldozer sometimes leveling the wrong house.
97. david2ndaccount ◴[] No.42139912{4}[source]
Without a non-linear activation function, chaining perceptrons together is equivalent to one large perceptron.
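
A quick numpy sketch of that point, assuming plain linear layers with no activation: two stacked linear maps reduce exactly to one, so the extra layer buys nothing.

  # Two stacked linear layers with no activation collapse into a single one.
  import numpy as np

  rng = np.random.default_rng(0)
  W1, b1 = rng.normal(size=(8, 4)), rng.normal(size=8)
  W2, b2 = rng.normal(size=(3, 8)), rng.normal(size=3)
  x = rng.normal(size=4)

  two_layers = W2 @ (W1 @ x + b1) + b2       # layer2(layer1(x))
  W, b = W2 @ W1, W2 @ b1 + b2               # the equivalent single layer
  one_layer = W @ x + b

  print(np.allclose(two_layers, one_layer))  # True: no expressive power gained
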
replies(1): >>42141849 #
98. Animats ◴[] No.42139919[source]
"While the model was initially expected to significantly surpass previous versions of the technology behind ChatGPT, it fell short in key areas, particularly in answering coding questions outside its training data."

Right. If you generate some code with ChatGPT, and then try to find similar code on the web, you usually will. Search for unusual phrases in comments and for variable names. Often, something from Stack Overflow will match.

LLMs do search and copy/paste with idiom translation and some transliteration. That's good enough for a lot of common problems. Especially in the HTML/Javascript space, where people solve the same problems over and over. Or problems covered in textbooks and classes.

But it does not look like artificial general intelligence emerges from LLMs alone.

There's also the elephant in the room - the hallucination/lack of confidence metric problem. The curse of LLMs is that they return answers which are confident but wrong. "I don't know" is rarely seen. Until that's fixed, you can't trust LLMs to actually do much on their own. LLMs with a confidence metric would be much more useful than what we have now.

replies(4): >>42139986 #>>42140895 #>>42141067 #>>42143954 #
99. zusammen ◴[] No.42139923[source]
I wonder how much this has to do with a fluency plateau.

Up to a certain point, a conditional fluency stores knowledge, in the sense that semantically correct sentences are more likely to be fluent… but we may have tapped out in that regard. LLMs have solved language very well, but to get beyond that has seemed, thus far, to require RLHF, with all the attendant negatives.

replies(1): >>42140301 #
100. hobs ◴[] No.42139926{3}[source]
And worth noting that Tesla faked a ton of its robot footage already, they might be making progress but their physical human robotics does not seem advanced at the moment.
replies(1): >>42140939 #
101. throwawa14223 ◴[] No.42139929[source]
Why is that clear? Why is that more probable than a second AI winter? What if there's no path from LLMs to anything else?
102. sourcepluck ◴[] No.42139946{3}[source]
Searle's Chinese Room Argument springs to mind:

  https://plato.stanford.edu/entries/chinese-room/
The idea that "human-like" behaviour will lead to self-awareness is both unproven (it can't be proven until it happens) and impossible to disprove (like Russell's teapot).

Yet, one common assumption of many people running these companies or investing in them, or of some developers investing their time in these technologies, is precisely that some sort of explosion of superintelligence is likely, or even inevitable.

It surely is possible, but stretching that to likely seems a bit much if you really think how imperfectly we understand things like consciousness and the mind.

Of course there are people who have essentially religious reactions to the notion that there may be limits to certain domains of knowledge. Nonetheless, I think that's the reality we're faced with here.

replies(1): >>42140395 #
103. yodsanklai ◴[] No.42139950[source]
It's a marketing gimmick; I don't think engineers working on these tools believe they're working on AGI (or they mean something other than self-awareness). I used to be a bit annoyed with this trend, but now that I work in such a company I'm more cynical. If that helps to make my stocks rise, they can call LLMs anything they like. I suppose people who own much more stock than I do are even more eager to mislead the public.
replies(1): >>42140133 #
104. tracerbulletx ◴[] No.42139969[source]
We don't really know what self awareness is, so we're not going to know. AGI just means it can observe, learn, and act in any domain or problem space.
105. logicchains ◴[] No.42139982{4}[source]
If it had any goals at all it'd share the desire to live, because living is a prerequisite to achieving almost any goal.
106. knicholes ◴[] No.42139984[source]
Once we've scraped the internet of its data, we need more data. Robots can take in video/audio data 24/7 and can be placed in your house to record this data by offering services like cooking/cleaning/folding laundry. Yeah, I'll pay $20k to have you record everything that happens in my house if I can stop doing dishes for five years!
replies(5): >>42140130 #>>42140146 #>>42140263 #>>42141123 #>>42142935 #
107. dmd ◴[] No.42139986[source]
> Right. If you generate some code with ChatGPT, and then try to find similar code on the web, you usually will.

People who "follow" AI, as the latest fad they want to comment on and appear intelligent about, repeat things like this constantly, even though they're not actually true for anything but the most trivial hello-world types of problems.

I write code all day every day. I use Copilot and the like all day every day (for me, in the medical imaging software field), and all day every day it is incredibly useful and writes nearly exactly the code I would have written, but faster. And none of it appears anywhere else; I've checked.

replies(5): >>42140406 #>>42142508 #>>42142654 #>>42143451 #>>42145565 #
108. guluarte ◴[] No.42139988[source]
Well, there have been no significant improvements to the GPT architecture over the past few years. I'm not sure why companies believe that simply adding more data will resolve the issues
replies(3): >>42140121 #>>42140384 #>>42141206 #
109. SturgeonsLaw ◴[] No.42139993{3}[source]
> at best we found a way to make one part of a collection of systems that will, together, do something like thinking

This seems like the most viable path to me as well (educational background in neuroscience but don't work in the field). The brain is composed of many specialised regions which are tuned for very specific tasks.

LLMs are amazing and they go some way towards mimicking the functionality provided by Broca's and Wernicke's areas, and parts of the cerebrum, in our wetware; however, a full brain they do not make.

The work on robots mentioned elsewhere in the thread is a good way to develop cerebellum like capabilities (movement/motor control), and computer vision can mimic the lateral geniculate nucleus and other parts of the visual cortex.

In nature it takes all these parts working together to create a cohesive mind, and it's likely that an artificial brain would also need to be composed of multiple agents, instead of just trying to scale LLMs indefinitely.

110. dageshi ◴[] No.42140000{4}[source]
Aren't we training it that way though? It would be trained/created using humanity's collective ramblings?
111. lxgr ◴[] No.42140011{4}[source]
Are you arguing that writing, doing math, going to the moon etc. were all in the "original training set" of humans in some way?
replies(1): >>42140169 #
112. twelve40 ◴[] No.42140044{3}[source]
i'd laugh it off too, but someone gave the dude $20 billion and counting to do that, that part actually scares me
113. LASR ◴[] No.42140045[source]
Question for the group here: do we honestly feel like we've exhausted the options for delivering value on top of the current generation of LLMs?

I lead a team exploring cutting edge LLM applications and end-user features. It's my intuition from experience that we have a LONG way to go.

GPT-4o / Claude 3.5 are the go-to models for my team. Every combination of technical investment + LLMs yields a new list of potential applications.

For example, combining a human-moderated knowledge graph with an LLM with RAG allows you to build "expert bots" that understand your business context / your codebase / your specific processes and act almost human-like, similar to a coworker on your team.

If you now give it some predictive / simulation capability - eg: simulate the execution of a task or project like creating a github PR code change, and test against an expert bot above for code review, you can have LLMs create reasonable code changes, with automatic review / iteration etc.

Similarly there are many more capabilities that you can ladder on and expose into LLMs to give you increasingly productive outputs from them.

Chasing after model improvements and "GPT-5 will be PhD-level" is moot imo. When did you ever hire a PhD coworker who was productive on day 0? You need to onboard them with human expertise, and then give them execution space / long-term memories etc. to be productive.

Model vendors might struggle to build something more intelligent. But my point is that we already have so much intelligence and we don't know what to do with that. There is a LOT you can do with high-schooler level intelligence at super-human scale.

Take a naive example. 200k context windows are now available. Most people, through ChatGPT, type out maybe 1500 tokens. That's a huge amount of untapped capacity. No human is going to type out 200k of context. Hence why we need RAG, and additional forms of input (eg: simulation outcomes) to fully leverage that.
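
To make the RAG point concrete, here's roughly the shape I mean. It's a toy sketch; `embed` is a stand-in for whatever embedding model you'd actually use, not a real API:

  # Toy RAG sketch: rank snippets against the query, then pack as many as fit
  # into the (large) context window. `embed` is a placeholder, not a real API.
  import numpy as np

  def embed(text: str) -> np.ndarray:
      # Stand-in embedder: deterministic pseudo-random vector per text.
      rng = np.random.default_rng(abs(hash(text)) % (2**32))
      v = rng.normal(size=128)
      return v / np.linalg.norm(v)

  def build_prompt(query: str, docs: list[str], budget_tokens: int = 200_000) -> str:
      q = embed(query)
      ranked = sorted(docs, key=lambda d: float(embed(d) @ q), reverse=True)
      context, used = [], 0
      for d in ranked:
          cost = len(d) // 4 + 1          # rough token estimate
          if used + cost > budget_tokens:
              break
          context.append(d)
          used += cost
      return "Context:\n" + "\n---\n".join(context) + f"\n\nQuestion: {query}"

  print(build_prompt("How do refunds work?",
                     ["Refund policy: 30 days.", "Shipping takes 3-5 days."]))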

replies(43): >>42140086 #>>42140126 #>>42140135 #>>42140347 #>>42140349 #>>42140358 #>>42140383 #>>42140604 #>>42140661 #>>42140669 #>>42140679 #>>42140726 #>>42140747 #>>42140790 #>>42140827 #>>42140886 #>>42140907 #>>42140918 #>>42140936 #>>42140970 #>>42141020 #>>42141275 #>>42141399 #>>42141651 #>>42141796 #>>42142581 #>>42142765 #>>42142919 #>>42142944 #>>42143001 #>>42143008 #>>42143033 #>>42143212 #>>42143286 #>>42143483 #>>42143700 #>>42144031 #>>42144404 #>>42144433 #>>42144682 #>>42145093 #>>42145589 #>>42146002 #
114. disgruntledphd2 ◴[] No.42140063{4}[source]
The RLHF is super important in generating useful responses, and that's relatively new. Does anyone remember gpt3? It could make sense for a paragraph or two at most.
115. eli_gottlieb ◴[] No.42140069[source]
>The best engineering minds have been focused on scaling transformer pre and post training for the last three years

The best minds don't follow the herd.

replies(1): >>42149743 #
116. amelius ◴[] No.42140086[source]
Yes, but literally anybody can do all those things. So while there will be many opportunities for new features (new ways of combining data), there will be few business opportunities.
replies(1): >>42140603 #
117. polskibus ◴[] No.42140090[source]
In other news, Altman said AGI is coming next year https://www.tomsguide.com/ai/chatgpt/sam-altman-claims-agi-i...
replies(3): >>42140127 #>>42143048 #>>42149802 #
118. user90131313 ◴[] No.42140101[source]
AI market top very soon
119. incognito124 ◴[] No.42140121[source]
More data and more compute on simpler models: that's the Bitter Lesson of Rich Sutton
120. hartator ◴[] No.42140126[source]
All of these hacks do sound like we are at that diminishing return point.
replies(2): >>42140193 #>>42140678 #
121. Jyaif ◴[] No.42140127[source]
According to the article, he said it could be achieved in 2025, which seems pretty obvious to me as well even though I don't have any visibility into what is going on inside those companies.
122. enraged_camel ◴[] No.42140128[source]
Looking at LLMs and thinking they will lead to AGI is like looking at a guy wearing a chicken suit and making clucking noises and thinking you’re witnessing the invention of the airplane.
replies(1): >>42140571 #
123. triyambakam ◴[] No.42140130{3}[source]
Or get a dishwashing machine?
124. WhyOhWhyQ ◴[] No.42140133{3}[source]
I appreciate your authentically cynical attitude.
125. crystal_revenge ◴[] No.42140135[source]
I don't think we've even started to get the most value out of current-gen LLMs. For starters, very few people are even looking at sampling, which is a major part of model performance.

The theory behind these models so aggressively lags the engineering that I suspect there are many major improvements to be found just by understanding a bit more about what these models are really doing and making re-designs based on that.

I highly encourage anyone seriously interested in LLMs to start spending more time in the open model space where you can really take a look inside and play around with the internals. Even if you don't have the resources for model training, it's worth personally understanding sampling and other potential tweaks to the model (there's lots of neat work on uncertainty estimation, manipulating the initial embeddings the prompts are assigned, intelligent backtracking, etc.).
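
To give a flavor of how tweakable sampling is, here's a toy temperature plus top-p (nucleus) sampler over a made-up next-token distribution; no model needed to play with it:

  # Toy temperature + top-p (nucleus) sampling over a fake next-token
  # distribution. These knobs sit entirely outside the model weights.
  import numpy as np

  def sample(logits, temperature=0.8, top_p=0.9, rng=np.random.default_rng(0)):
      probs = np.exp(logits / temperature)
      probs /= probs.sum()
      order = np.argsort(probs)[::-1]                  # most likely first
      cutoff = np.searchsorted(np.cumsum(probs[order]), top_p) + 1
      keep = order[:cutoff]                            # smallest set covering top_p
      kept = probs[keep] / probs[keep].sum()
      return int(rng.choice(keep, p=kept))

  logits = np.array([2.0, 1.5, 0.3, -1.0, -3.0])       # pretend model output
  print([sample(logits) for _ in range(10)])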

And from a practical side, I've started to realize that many people have been holding off on building things, waiting for "that next big update", but there are so many small, annoying tasks that can be easily automated.

replies(8): >>42140256 #>>42141284 #>>42141433 #>>42141459 #>>42141522 #>>42141760 #>>42142470 #>>42143106 #
126. hartator ◴[] No.42140146{3}[source]
Why 5 years?
replies(5): >>42140215 #>>42140216 #>>42140220 #>>42140608 #>>42141793 #
127. HarHarVeryFunny ◴[] No.42140149{4}[source]
It's not a matter of training but design (or in our case evolution). We don't want to live, but rather want to avoid things that we've evolved to find unpleasant such as pain, hunger, thirst, and maximize things we've evolved to find pleasurable like sex.

A future of people interacting with humanoid robots seems like a cheesy sci-fi dream, same as a future of people flitting about in flying cars. However, if we really did want to create robots like this that took care not to damage themselves, and could empathize with human emotions, then we'd need to build a lot of this in, the same way that it's built into ourselves.

128. JohnMakin ◴[] No.42140155{4}[source]
Prediction markets are evidence of nothing but what people believe is true, not what is true.
replies(1): >>42140808 #
129. vundercind ◴[] No.42140161{4}[source]
> Why does knowing how things work under the hood make you think its not on the path towards AGI?

Because I had no idea how these were built until I read the paper, so couldn’t really tell what sort of tree they’re barking up. The failure-modes of LLMs and ways prompts affect output made a ton more sense after I updated my mental model with that information.

replies(2): >>42141442 #>>42141443 #
130. layer8 ◴[] No.42140169{5}[source]
Not in the original training set (GP is saying), but the necessary skills became part of the training set over time. In other words, humans are fine with the training set being a changing, moving target, whereas ML models are to a significant extent “stuck” with their original training set.

(That’s not to say that humans don’t tend to lose some of their flexibility over their individual lifetimes as well.)

replies(1): >>42143746 #
131. og_kalu ◴[] No.42140183{4}[source]
No, it's just a far more useful definition that is actionable and measurable. Not "consciousness" or "self-awareness" or similar philosophical things. The definition on Wikipedia doesn't talk about that either. People working on this by and large don't want to deal with vague, ill-defined concepts that just make people argue around in circles. It's not an Open AI exclusive thing.

If it acts like one, whether you call a machine conscious or not is pure semantics. Not like potential consequences are any less real.

>LLMs already outperform humans in a huge variety of tasks.

Yes, LLMs are General Intelligences and if that is your only requirement for AGI, they certainly already are[0]. But the definition above hinges on long-horizon planning and competence levels that todays models have generally not yet reached.

>ML in general outperform humans in a large variety of tasks.

This is what the G in AGI is for. Alphafold doesn't do anything but predict proteins. Stockfish doesn't do anything but play chess.

>Are all of them AGI? Doubtful.

Well no, because they're missing the G.

[0] https://www.noemamag.com/artificial-general-intelligence-is-...

132. namaria ◴[] No.42140193{3}[source]
It all just sounds to me like we're back at expert systems. Doesn't bode well...
replies(1): >>42140266 #
133. demosthanos ◴[] No.42140194[source]
> that does not mean that there arn't giant shocking leaps forward coming from slightly different directions.

Nor does it mean that there are! We've gotten into this habit of assuming that we're owed giant shocking leaps forward every year or so, and this wave of AI startups raised money accordingly, but that's never how any innovation has worked. We've always followed the same pattern: there's a breakthrough which causes a major shift in what's possible, followed by a few years of rapid growth as engineers pick up where the scientists left off, followed by a plateau while we all get used to the new normal.

We ought to be expecting a plateau, but Sam Altman and company have done their work well and have convinced many of us that this time it's different. This time it's the singularity, and we're going to see exponential growth from here on out. People want to believe it, so they do, and Altman is milking that belief for all it's worth.

But make no mistake: Altman has been telegraphing that he's eyeing the exit, and you don't eye the exit when you own a company that's set to continue exponentially increasing in value.

replies(2): >>42140584 #>>42140605 #
134. dbbk ◴[] No.42140210[source]
What?
135. bredren ◴[] No.42140215{4}[source]
Because whatever org fills this space will be working on ARR.
136. dbbk ◴[] No.42140214{4}[source]
What is this supposed to be evidence of? People believing hype?
137. exe34 ◴[] No.42140216{4}[source]
that's when the robot takes his job and he can't afford the robot anymore.
138. fifilura ◴[] No.42140220{4}[source]
Five years, that's all we've got.

https://en.m.wikipedia.org/wiki/Five_Years_(David_Bowie_song...

139. exe34 ◴[] No.42140234[source]
no, it doesn't need to be self aware, it just needs to take your job.
140. fallat ◴[] No.42140239[source]
What a stupid piece. We are making leaps every 6 months still. Tell me this when there are no developments for 3 years.
replies(1): >>42141347 #
141. xyst ◴[] No.42140244[source]
Many late investors in the genAI space about to be bag holders
142. 12_throw_away ◴[] No.42140253[source]
Well shoot. It's not like it was patently obvious that this would happen before the industry started guzzling electricity and setting money on fire, right? [1]

[1] https://dl.acm.org/doi/10.1145/3442188.3445922

143. dr_dshiv ◴[] No.42140256{3}[source]
> I've started to realize that many people have been holding off on building things waiting for "that next big update"

I’ve noticed this too — I’ve been calling it intellectual deflation. By analogy, why spend now when it may be cheaper in a month? Why do the work now, when it will be easier in a month?

replies(2): >>42140326 #>>42141311 #
144. layer8 ◴[] No.42140260{4}[source]
Being aware of its own limitations, for example. Or being aware of how its utterances may come across to its interlocutor.

(And by limitations I don’t mean “sorry, I’m not allowed to help you with this dangerous/contentious topic”.)

replies(3): >>42140889 #>>42141298 #>>42141640 #
145. fldskfjdslkfj ◴[] No.42140263{3}[source]
There's plenty of video content being uploaded and streamed every day; I find it hard to believe that more data will really change something, excluding very specialized tasks.
replies(1): >>42140567 #
146. ianbutler ◴[] No.42140266{4}[source]
Honest question, how would you expect systems to get external knowledge etc without tools like the OP is suggesting?

Action oriented through self exploration? What is your thought for how these systems integrate with the existing world?

Why does the OP's suggested mode of integration make you think of those older systems?

replies(1): >>42145534 #
147. namaria ◴[] No.42140287{3}[source]
This is a bad comparison. Intelligence didn't appear in some human brain. Intelligence appeared in a planetary ecosystem.
replies(1): >>42140374 #
148. namaria ◴[] No.42140301[source]
Modeled language, maybe.
149. kaibee ◴[] No.42140309[source]
> I suspect the path to general intelligence is not that, but we'll see.

I think there's three things that a 'true' general intelligence has which is missing from basic-type-LLMs as we have now.

1. knowing what you know. <basic-LLMs are here>

2. knowing what you don't know but can figure out via tools/exploration. <this is tool use/function calling>

3. knowing what can't be known. <this is knowing that halting problem exists and being able to recognize it in novel situations>

(1) From an LLM's perspective, once trained on a corpus of text, it knows 'everything'. It knows about the concept of not knowing something (from having seen text about it), in so far as an LLM knows anything, but it doesn't actually have a growable map of knowledge that it knows has uncharted edges.

This is where (2) comes in, and this is what tool use/function calling tries to solve atm, but the way function calling works atm doesn't give the LLM knowledge the right way. I know that I don't know what 3,943,034 / 234,893 is. But I know I have a 'function call' of knowing the algorithm for doing long division on paper. And I think there's another subtle point here: my knowledge in (1) includes the training data generated from running the intermediate steps of the long-division algorithm. This is the knowledge that later generalizes to being able to use a calculator (and this is also why we don't just give kids calculators in elementary school). But this is also why a kid that knows how to do long division on paper doesn't separately need to learn when/how to use a calculator, besides the very basics. Using a calculator to do that math feels like 1 step, but actually it still has all of the initial mechanical steps of setting up the problem on paper. You have to type in each digit individually, etc.

(3) I'm less sure of this point now that I've written out points (1) and (2), but that's kinda exactly the thing I'm trying to get at. It's being able to recognize when you need more practice of (1) or more 'energy/capital' for doing (2).

Consider a burger restaurant. If you properly populated the context of a ChatGPT-scale model with the data for a burger restaurant from 1950, and gave it the kinda 'function calling' we're plugging into LLMs now, it could manage it. It could keep track of inventory, it could keep tabs on the employee-subprocesses, knowing when to hire, fire, get new suppliers, all via function calling. But it would never try to become McDonalds, because it would have no model of the internals of those function-calls, and it would have no ability to investigate or modify the behaviour of those function calls.
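
A rough sketch of the 'function call' shape I mean in (2); the dispatch step and the tool name here are made up for illustration, not any particular vendor's API:

  # Made-up illustration of (2): the "model" (faked here) recognizes a gap in
  # what it knows and routes the sub-problem to a tool whose mechanics it has
  # seen worked out step by step. Not any particular vendor's API.
  def fake_model(question: str) -> dict:
      # Pretend the model emits a tool call when it can't answer directly.
      if any(ch.isdigit() for ch in question):
          return {"tool": "long_division", "args": question}
      return {"answer": "I can answer this from what I already know."}

  def long_division(expr: str) -> str:
      a, b = (int(x.strip().replace(",", "")) for x in expr.split("/"))
      return f"{a} / {b} = {a // b} remainder {a % b}"

  TOOLS = {"long_division": long_division}

  step = fake_model("3,943,034 / 234,893")
  if "tool" in step:
      print(TOOLS[step["tool"]](step["args"]))  # 16 remainder 184746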

151. vbezhenar ◴[] No.42140326{4}[source]
Why optimise software today, when tomorrow Intel will release a CPU with 2x the performance?
replies(4): >>42140532 #>>42140536 #>>42140770 #>>42144934 #
152. msabalau ◴[] No.42140347[source]
There are all sorts of valuable things to explore and build with what we have already.

But understanding how likely it is that we will (or will not) see new models quickly and dramatically improve on what we have "because scaling" seems like valuable context for everyone in the ecosystem making decisions.

153. ben_w ◴[] No.42140349[source]
> Question for the group here: do we honestly feel like we've exhausted the options for delivering value on top of the current generation of LLMs?

IMO we've not even exhausted the options for spreadsheets, let alone LLMs.

And the reason I'm thinking of spreadsheets is that they, like LLMs, are very hard to win big on even despite the value they bring. Not "no moat" (that gets parroted stochastically in threads like these), but the moat is elsewhere.

154. layer8 ◴[] No.42140351{3}[source]
That sounds more like Artificial Emoting Intelligence. We only cherish freedom because we feel bad when we don’t have it.
155. nomendos ◴[] No.42140357[source]
"Eureka"!?

At the very early phase of the boom I was among the very few who knew and predicted this (usually the most free, deep-thinking/knowledgeable people do). Then my prediction got reinforced by the results. One of the best examples was one of my experiments: today's AIs were all asked to solve tree serialization and de-serialization in each of DFS (pre-order/in-order/post-order) and BFS (level-order), which is 8 algorithms (2 x 4), and only 3 came out correct! The reason is "limited training inputs", since the internet and open source do not have the other solutions :-) .

So, I spent "some" time and implemented all 8, which took me a few days. By the way, this demonstrates that pointless ~15-30 min leetcode-like interviews require you to regurgitate/memorize rather than think. So, as a logical hard consequence, there will have to be a "crash/cleanup" in the area of leetcode-like interviews, as they will suddenly be proclaimed "pointless/stupid". However, I decided not to publish the remaining 5 solutions :-)
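
(For reference, the pre-order DFS variant is the one you can find all over the internet, which is exactly why it's among the ones the models get right; a quick Python sketch below. The other variants stay unpublished.)

    class Node:
        def __init__(self, val, left=None, right=None):
            self.val, self.left, self.right = val, left, right

    def serialize_preorder(root):
        # DFS pre-order with explicit null markers so the shape is recoverable
        out = []
        def walk(n):
            if n is None:
                out.append("#")
                return
            out.append(str(n.val))
            walk(n.left)
            walk(n.right)
        walk(root)
        return ",".join(out)

    def deserialize_preorder(s):
        vals = iter(s.split(","))
        def build():
            v = next(vals)
            if v == "#":
                return None
            n = Node(int(v))
            n.left = build()
            n.right = build()
            return n
        return build()

    t = Node(1, Node(2), Node(3, Node(4)))
    assert serialize_preorder(deserialize_preorder(serialize_preorder(t))) == serialize_preorder(t)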

This (and other experiments) confirms hard limits of the LLM approach (even when used with chain-of-thought). Throwing more compute at the problem will produce smaller and smaller gains (inverse-exponential/logarithmic/diminishing returns), so a new AGI approach/design is needed, and to my knowledge the vast majority of the inve$tment (~99%) is in LLMs, so "buckle up" at some point/soon?

Impacts and realities: LLMs shall "run their course" (produce some products/results/$$$, get reviewed/$corrected), and whoever survives that pruning shall earn money on those products while investing in new research to find a new AGI design/approach (which could take quite a long time... or not). NVDA is at the center of thi$, and time-wise this peak/turn/crash/correction is hard to predict (although I see it on the horizon, and min/max times can be estimated). Be aware and alert. I'll stop here and hold my other thoughts/opinions/ideas for a much deeper discussion. (BTW I am still "fully in on NVDA" until...)

replies(1): >>42143050 #
156. alach11 ◴[] No.42140358[source]
My team and I also develop with these models every day, and I completely agree. If models stall at current levels, it will take 10 (or more) years for us to capture most of the value they offer. There's so much work out there to automate and so many workflows to enhance with these "not quite AGI-level" models. And if peak model performance remains the same but cost continues to drop, that opens up vastly more applications as well.
157. Xenoamorphous ◴[] No.42140363{4}[source]
> Here is an example of a task that I do not believe this generation of LLMs can ever do but that is possible for a human

That’s possible for a highly intelligent, extensively trained, very small subset of humans.

replies(2): >>42140903 #>>42141088 #
158. aniforprez ◴[] No.42140374{4}[source]
Also, it took hundreds of millions of years to get here. We're basically living in an atomic sliver on the fabric of history. Expecting AGI from 5 years of scraping at most 30 years of online data, plus the minuscule fraction of what has been written over the past couple of thousand years, was always a pie-in-the-sky dream to raise obscene amounts of money.
replies(2): >>42141514 #>>42144791 #
159. alangibson ◴[] No.42140383[source]
I think you're playing a different game than the Sam Altmans of the world. The level of investment and profit they are looking for can only be justified by creating AGI.

The > 100 P/E ratios we are already seeing can't be justified by something as quotidian as the exceptionally good productivity tools you're talking about.

replies(3): >>42140539 #>>42140666 #>>42140680 #
160. HarHarVeryFunny ◴[] No.42140384[source]
Obviously adding more data is a game of diminishing returns.

Going from 10% to 50% coverage (a 5x increase) of common-sense knowledge and reasoning is going to feel like a significant advance. Going from 90% to 95% (only ~6% more) is not going to feel the same.

Regardless of what Altman says, it's been two years since OpenAI released GPT-4, there's still no GPT-5 in sight, and they are now touting Q-star/strawberry/GPT-o1 as the next big thing instead. Sutskever, who saw what they're cooking before leaving, says that traditional scaling has plateaued.

replies(1): >>42140899 #
161. abeppu ◴[] No.42140395{4}[source]
> The idea that "human-like" behaviour will lead to self-awareness is both unproven (it can't be proven until it happens) and impossible to disprove (like Russell's teapot).

I think Searle's view was that:

- while it cannot be disproven, the Chinese Room argument was meant to provide reasons against believing it

- the "it can't be proven until it happens" part is a misunderstanding: you won't know if it happens, because the objective, externally available attributes don't indicate whether self-awareness (or indeed awareness at all) is present

replies(1): >>42141503 #
162. ngai_aku ◴[] No.42140406{3}[source]
You’re solving novel problems all day every day?
replies(2): >>42140436 #>>42144250 #
163. sincerecook ◴[] No.42140421[source]
> That's full multi-modal training with embodied agents (aka robots). 1x, Figure, Physical Intelligence, Tesla are all making rapid progress on functionality which is definitely beyond frontier LLMs because it is distinctly different.

Cool, but we already have robots doing this in 2D space (aka self-driving cars) that struggle not to kill people. How is adding a third dimension going to help? People are just refusing to accept the fact that machine learning is not intelligence.

replies(4): >>42141572 #>>42141776 #>>42142802 #>>42143184 #
164. dmd ◴[] No.42140436{4}[source]
Pretty much, yes. My job is pretty fun; it mostly entails things like "take this horrible file workflow some research assistant came up with while high 15 years ago and turn it into a newer horrible file format a NEW research assistant came up with (also while high) 3 years ago" - and automate this in our data processing pipeline.
replies(3): >>42140978 #>>42141764 #>>42141794 #
165. whazor ◴[] No.42140441[source]
But an LLM can certainly make up a lot of information that never existed before.
replies(2): >>42141540 #>>42142063 #
166. vbezhenar ◴[] No.42140446{4}[source]
Nothing wrong with slavery when it's about other species. We are milking and eating cows, and don't they dare resist. Humans have been bending nature all along; actually, that's one of the big differences between humans and other animals, who adapt to nature. Just because some program is intelligent doesn't mean she's human or has anything resembling human rights.
167. quonn ◴[] No.42140501{4}[source]
It's only slavery if those beings have emotions and can suffer mentally and do not want to be slaves. Why would any of that be true?
replies(1): >>42140917 #
168. youoy ◴[] No.42140508{3}[source]
Don't get caught in the superficial analysis. They "understand" things. It is a fact that LLMs experience a phase transition during training, from positional information to semantic understanding. It may well be the case that with scale there is another phase transition from semantic to something more abstract that we identify more closely with reasoning. It would be an emergent property of a sufficiently complex system. At least that is the whole argument around AGI.
replies(1): >>42143777 #
169. foxglacier ◴[] No.42140521{3}[source]
> think or actually “understand” anything

It doesn't matter if that's happening or not. That's the whole point of the Chinese room - if it can look like it's understanding, it's indistinguishable from actually understanding. This applies to humans too. I'd say most of our regular social communication is done in a habitual intuitive way without understanding what or why we're communicating. Especially the subtle information conveyed in body language, tone of voice, etc. That stuff's pretty automatic to the point that people have trouble controlling it if they try. People get into conflicts where neither person understands where they disagree but they have emotions telling them "other person is being bad". Maybe we have a second consciousness we can't experience and which truly understands what it's doing while our conscious mind just uses the results from that, but maybe we don't and it still works anyway.

Educators have figured this out. They don't test students' understanding of concepts, but rather their ability to apply or communicate them. You see this in school curricula with wording like "use concept X" rather than "understand concept X".

replies(1): >>42140730 #
170. sdenton4 ◴[] No.42140532{5}[source]
Curiously, Moore's law was predictable enough over decades that you could actually plan for the speed of next year's hardware quite reliably.

For LLMs, we don't even know how to reliably measure performance, much less plan for expected improvements.

replies(1): >>42140676 #
171. throwing_away ◴[] No.42140536{5}[source]
Call Nvidia, that sounds like a job for AI.
172. gizajob ◴[] No.42140539{3}[source]
Yeah, I keep thinking this - how is Nvidia worth $3.5 trillion for making code autocomplete for coders?
replies(1): >>42140592 #
173. jmward01 ◴[] No.42140562[source]
Every negative headline I see about AI hitting a wall or being over-hyped makes me think of the early 2000s with that new thing the 'internet' (yes, I know the internet is a lot older than that). There is little doubt in my mind that ten years from now nearly every aspect of life will be deeply connected to AI, just like the internet took over everything in the late 90s and early 2000s and is deeply connected to everything now. I'd even hazard to say that AI could be more impactful.
replies(8): >>42140699 #>>42140872 #>>42141362 #>>42141703 #>>42143108 #>>42143930 #>>42145614 #>>42146038 #
174. nuancebydefault ◴[] No.42140567{4}[source]
The difference with the bot is that there is a fast feedback loop between action and content. No tagging required, real physics is the playground.
175. youoy ◴[] No.42140571{3}[source]
It's more like looking at gridded paper and thinking that defining some rules for when a square turns black or white would result in complex structures that move and reproduce on their own.

https://en.m.wikipedia.org/wiki/Conway%27s_Game_of_Life
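
The rules really are that local and simple; a minimal sketch of one update step (NumPy, with wrap-around edges for brevity):

    import numpy as np

    def life_step(grid):
        # Count the 8 neighbours of every cell by summing shifted copies of the grid.
        neighbours = sum(
            np.roll(np.roll(grid, dy, axis=0), dx, axis=1)
            for dy in (-1, 0, 1) for dx in (-1, 0, 1)
            if (dy, dx) != (0, 0)
        )
        # A cell is alive next step if it has exactly 3 neighbours,
        # or if it is alive now and has exactly 2.
        return ((neighbours == 3) | ((grid == 1) & (neighbours == 2))).astype(int)

    glider = np.zeros((8, 8), dtype=int)
    glider[1, 2] = glider[2, 3] = glider[3, 1] = glider[3, 2] = glider[3, 3] = 1
    print(life_step(life_step(glider)))  # the "glider" structure moves on its own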

176. ◴[] No.42140584{3}[source]
177. drawnwren ◴[] No.42140592{4}[source]
Nvidia was not the best example. They get to the moon in the case that any AI exponential hits. Most others have a less wide probability distribution.
replies(2): >>42140833 #>>42141440 #
178. aerhardt ◴[] No.42140601[source]
The moment the insta-glasses expand beyond a few dorks is the moment I start wearing a balaclava everywhere I go.
179. Miraste ◴[] No.42140603{3}[source]
HN always says this, and it's always wrong. A technical implementation that's easy, or readily available, does not mean that a successful company can't be built on it. Last year, people were saying "OpenAI doesn't have a moat." 15 years before that, they were saying "Dropbox is just a couple of cron jobs, it'll fail in a few months."
replies(1): >>42141113 #
180. hluska ◴[] No.42140604[source]
Nowhere near, but the market seems to have priced in that scaling would continue to have a near linear effect on capability. That’s not happening and that’s the issue the article is concerned with.
181. lcnPylGDnU4H9OF ◴[] No.42140605{3}[source]
> Altman has been telegraphing that he's eyeing the exit

Can you think of any specific examples? Not trying to express disbelief, just curious given that this is obviously not what he's intending to communicate so it would be interesting to examine what seemed to communicate it.

replies(1): >>42148862 #
182. twelve40 ◴[] No.42140608{4}[source]
> OpenAI has announced a plan to achieve artificial general intelligence (AGI) within five years, an ambitious goal as the company works to design systems that outperform humans.
183. LarsDu88 ◴[] No.42140653[source]
Curves that look exponential in virtually all cases turn out to be logarithmic.

Certain OpenAI insiders must have known this for a while, hence Ilya Sutskever's new company in Israel

184. HarHarVeryFunny ◴[] No.42140661[source]
Sure, there's going to be a lot of automation that can be built using current GPT-4 level LLMs, even if they don't get much better from here.

However, this is better thought of as "business logic scripting/automation", not the magic employee-replacing AGI that would be the revolution some people are expecting. Maybe you can now build a slightly less shitty automated telephone response system to piss your customers off with.

185. ◴[] No.42140666{3}[source]
186. brookst ◴[] No.42140669[source]
> Question for the group here: do we honestly feel like we've exhausted the options for delivering value on top of the current generation of LLMs?

Certainly not.

But technology is all about stacks. Each layer strives to improve, right up through UX and business value. The uses for 1µm chips had not been exhausted in 1989 when the 486 shipped in 800nm. 250nm still had tons of unexplored uses when the Pentium 4 shipped on 90nm.

Talking about scaling at the model level is like talking about transistor density for silicon: it's interesting, and relevant, and we should care... but it is not the sole determinant of what use cases can be built and what user value there is.

187. mikeyouse ◴[] No.42140676{6}[source]
Moore's law became less of a prediction and more of a product roadmap as time went on. It helped coordinate investment and expectations across the entire industry, so everyone involved had the same understanding of timelines and benchmarks. I fully believe more investment would've 'bent the curve' of the trend line, but everyone was making money and there wasn't a clear benefit to pushing the edge further.
replies(1): >>42141026 #
188. brookst ◴[] No.42140678{3}[source]
Hey look, it's Gordon Moore visiting us from 2005! :)
189. senko ◴[] No.42140679[source]
No.

The scaling laws may be dead. Does this mean the end of LLM advances? Absolutely not.

There are many different ways to improve LLM capabilities. Everyone was mostly focused on the scaling laws because that worked extremely well (actually surprising most of the researchers).

But if you're keeping an eye on the scientific papers coming out about AI, you've seen the astounding amount of research going on with some very good results, that'll probably take at least several months to trickle down to production systems. Thousands of extremely bright people in AI labs all across the world are working on finding the next trick that boosts AI.

One random example is test-time compute: just give the AI more time to think. This is basically what o1 does. A recent research paper suggests using it is roughly equivalent to an order of magnitude more parameters, performance-wise. (source for the curious: https://lnkd.in/duDST65P)
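
The simplest flavour of the idea, self-consistency via majority vote (not necessarily what o1 or the paper does), fits in a few lines; the `sample_answer` helper below is a placeholder for any LLM call with temperature > 0:

    from collections import Counter

    def sample_answer(prompt):
        """Placeholder: one stochastic completion from whatever LLM API you use."""
        raise NotImplementedError

    def answer_with_more_thinking(prompt, n_samples=16):
        # Spend extra inference-time compute by sampling many candidate answers
        # and returning the most common one (self-consistency / majority voting).
        votes = Counter(sample_answer(prompt) for _ in range(n_samples))
        return votes.most_common(1)[0][0]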

Another example that sounds bonkers but apparently works is quantization: reducing the precision of each parameter to 1.58 bits (i.e. only using the values -1, 0, 1). This uses 10x less space for the same parameter count (compared to the standard 16-bit format), and since AI operations are actually memory-limited, it directly corresponds to a 10x decrease in costs: https://lnkd.in/ddvuzaYp
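
A rough sketch of what the ternary idea looks like for a single weight tensor, assuming the absmean-style scaling used in the BitNet-type papers (the 10x figure comes from bit-packing, which is omitted here):

    import numpy as np

    def ternary_quantize(w, eps=1e-8):
        # Quantize a weight tensor to {-1, 0, +1} with one per-tensor scale.
        scale = np.abs(w).mean() + eps           # absmean scaling factor
        q = np.clip(np.rint(w / scale), -1, 1)   # each weight becomes -1, 0 or +1
        return q.astype(np.int8), scale          # int8 here; real kernels pack ~5 ternary values per byte

    def dequantize(q, scale):
        return q.astype(np.float32) * scale

    w = np.random.randn(256, 256).astype(np.float32)
    q, s = ternary_quantize(w)
    print(np.abs(w - dequantize(q, s)).mean())   # average quantization error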

(Quite apart from improvements like these, we shouldn't forget that not all AIs are LLMs. There's been tremendous advance in AI systems for image, audio and video generation, interpretation and manipulation, and they also don't show signs of stopping, and there's a possibility that a new or hybrid architecture for textual AI might be developed.)

AI winter is a long way off.

replies(2): >>42140877 #>>42142955 #
190. JumpCrisscross ◴[] No.42140680{3}[source]
> level of investment and profit they are looking for can only be justified by creating AGI

What are you basing this on?

IT outsourcing is a $500+ billion industry. If OpenAI et al can run even a 10% margin, that business alone justifies their valuation.

replies(2): >>42141388 #>>42144909 #
191. ishtanbul ◴[] No.42140687{4}[source]
Yes, but they aren't very autonomous. They can answer questions very well but can't use that information to further goals. That's what OpenAI seems to be implying >> very smart and agentic AI.
192. brookst ◴[] No.42140699[source]
And, as I've noted a couple of times in this thread, how many times have we heard that Moore's law is dead and compute has hit a wall?
replies(2): >>42141879 #>>42146122 #
193. afro88 ◴[] No.42140726[source]
> potential applications > if you ... > for example ...

Yes, there seems to be lots of potential. Yes, we can brainstorm things that should work. Yes, there are a lot of examples of incredible things in isolation. But it's a little bit like those YouTube videos showing amazing basketball shots in one try, when in reality lots of failed attempts happened beforehand. Except our users experience the failed attempts (LLM replies that are wrong, even when backed by RAG), and it's incredibly hard to hide those from them.

Show me the things you / your team have actually built that have decent retention and metrics concretely proving efficiency improvements.

LLMs are so hit and miss from query to query that if your users don't have a sixth sense for a miss vs a hit, there may not be any efficiency improvement. It's a really hard problem with LLM based tools.

There is so much hype right now and people showing cherry picked examples.

replies(7): >>42140844 #>>42140963 #>>42141787 #>>42143330 #>>42144363 #>>42144477 #>>42148338 #
194. vundercind ◴[] No.42140730{4}[source]
There’s a distinction in behavior of a human and a Chinese room when things go wrong—when the rule book doesn’t cover the case at hand.

I agree that a hypothetical perfectly-functioning Chinese room is, tautologically, impossible to distinguish from a real person who speaks Chinese, but that’s a thought experiment, not something that can actually exist. There’ll remain places where the “behavior” breaks down in ways that would be surprising from a human who’s actually paying as much attention as they’d need to be to have been interacting the way they had been until things went wrong.

That, in fact, is exactly where the difference lies: the LLM is basically always not actually “paying attention” or “thinking” (those aren’t things it does) but giving automatic responses, so you see failures of a sort that a human might also exhibit when following a social script (yes, we do that, you’re right), but not in the same kind of apparently-highly-engaged context unless the person just had a stroke mid-conversation or something—because the LLM isn’t engaged, because being-engaged isn’t a thing it does. When it’s getting things right and seeming to be paying a lot of attention to the conversation, it’s not for the same reason people give that impression, and the mimicking of present-ness works until the rule book goes haywire and the ever-gibbering player-piano behind it is exposed.

replies(2): >>42140997 #>>42142786 #
195. whiplash451 ◴[] No.42140747[source]
The main difference between GPT-5 and a PhD-level new hire is that the new hire will autonomously go out, deliver, and take on harder tasks with far less guidance than GPT-5 will ever require. So much of human intelligence is about interacting with peers.
replies(1): >>42140862 #
196. ben_w ◴[] No.42140770{5}[source]
Back when Intel regularly gave updates with 2x performance increases, people did make decisions based on the performance doubling schedule.
197. EGreg ◴[] No.42140790[source]
I want to stuff a transcript of a 3 hour podcast into some LLM API and have it summarize it by: segmenting by topic changes, keeping the timestamps, and then summarizing each segment.

I wasn't able to get it to do it with the Anthropic or OpenAI chat completion APIs. Can someone explain why? I don't think the 200K token window actually works. Is it looking sequentially, or is it really looking at the whole thing at once, or something?

198. rubiquity ◴[] No.42140801[source]
> Amodei has said companies will spend $100 million to train a bleeding-edge model this year

Is it just me or does $100 million sound like it's on the very, very low end of how much training a new model costs? Maybe you can arrive within $200 million of that mark with amortization of hardware? It just doesn't make sense to me that a new model would "only" be $100 million when AmaGooBookSoft are spending tens of billions on hardware and the AI startups are raising billions every year or two.

199. HarHarVeryFunny ◴[] No.42140807{4}[source]
> Let me modify that a little, because humans can't do things outside their training set either.

That's not true. Humans can learn.

An LLM is just a tool. If it can't do what you want then too bad.

replies(1): >>42147539 #
200. falcor84 ◴[] No.42140808{5}[source]
Oh, that was my intent, to support the grandparent's claim of "it's also pretty clear" - as in this is what people believe.

If I had evidence that it "is true" that AGI will be here in 5 years, I probably would be doing something else with my time than participating in these threads ;)

201. anonzzzies ◴[] No.42140827[source]
The current models are very powerful and we definitely haven't gotten the most out of them yet. We are getting more and more out of them every week when we release new versions of our toolkits. So if this is it, please make it faster and less energy-hungry. We'll be fine until the next AI spring.
202. BeefWellington ◴[] No.42140833{5}[source]
Yeah they're the shovel sellers of this particular goldrush.

Most other businesses trying to actually use LLMs are the riskier ones, including OpenAI, IMO (though OpenAI is perhaps the least risky due to brand recognition).

replies(2): >>42141129 #>>42144635 #
203. jihadjihad ◴[] No.42140844{3}[source]
> Except our users experience the failed attempts (LLM replies that are wrong, even when backed by RAG) and it's incredibly hard to hide those from them.

This has been my team's experience (and frustration) as well, and has led us to look at using LLMs for classifying / structuring, but not entrusting an LLM with making a decision based on things like a database schema or business logic.

I think the technology and tooling will get there, but the enormous amount of effort spent trying to get the system to "do the right thing" and the nondeterministic nature have really put us into a camp of "let's only allow the LLM to do things we know it is rock-solid at."
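
Concretely, the kind of narrow role we mean looks roughly like this sketch (the `ask_llm` helper and label set are placeholders): the LLM proposes a label from a closed set, and anything outside that set is rejected deterministically rather than acted on.

    ALLOWED_LABELS = {"invoice", "receipt", "contract", "other"}  # illustrative label set

    def ask_llm(prompt):
        raise NotImplementedError  # placeholder for whatever chat-completion call you use

    def classify(document_text):
        prompt = (
            "Classify the document into exactly one of: "
            + ", ".join(sorted(ALLOWED_LABELS))
            + ". Answer with the label only.\n\n"
            + document_text
        )
        label = ask_llm(prompt).strip().lower()
        # The LLM only classifies; out-of-set answers are never silently trusted.
        return label if label in ALLOWED_LABELS else "other"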

replies(2): >>42141270 #>>42141797 #
204. nuancebydefault ◴[] No.42140855{4}[source]
There is no single definition, let alone a way to measure, of self-awareness or of reasoning.

Because of that, the discussion of what AGI means in its broadest sense will never end.

So in fact such AGI discussion will not make anybody wiser.

replies(1): >>42141612 #
205. yalogin ◴[] No.42140858[source]
I do wonder how quickly LLMs will become a commodity AI instrument just like any other AI out there. If so, what happens to OpenAI?
206. ben_w ◴[] No.42140862{3}[source]
Human interaction with peers is also guidance.

I don't know how many team meetings PhD students have, but I do know about software development jobs with 15-minute daily standups, and that length of meeting at 120 words per minute, 5 days a week, 48 weeks per year, over a 3-year PhD is 1,296,000 words.

replies(1): >>42141677 #
207. ◴[] No.42140867{4}[source]
208. akomtu ◴[] No.42140872[source]
AI can be thought of as the 2nd stage of the creature that we call the Internet. The 1st stage, that we are so familiar with, is about gathering knowledge into a giant and somewhat organized library. This library has books on every subject imaginable, but its scale is so vast that no living human today can grasp it. This is why the originally connected network has started falling apart. Once this I becomes AI, all the books in the library will be melted together into one coherent picture. Once again, anyone anywhere on Earth will be able to access all the knowledge and our Babylon will stay for a little longer.
209. limaoscarjuliet ◴[] No.42140877{3}[source]
Scaling laws are not dead. The number of people predicting death of Moore's law doubles every two years.

- Jim Keller

https://www.youtube.com/live/oIG9ztQw2Gc?si=oaK2zjSBxq2N-zj1...

replies(2): >>42141464 #>>42142962 #
210. simonw ◴[] No.42140886[source]
Right. I've been saying for a while that if all LLM development stopped entirely and we were stuck with the models we have right now (GPT-4o, Claude 3.5 Sonnet, Gemini 1.5 Pro, Llama 3.1/2, Qwen 2.5 etc) we could still get multiple years worth of advances just out of those existing models. There is SO MUCH we haven't figured out about how to use them yet.
replies(2): >>42142404 #>>42142817 #
211. russellbeattie ◴[] No.42140888[source]
Go back a few decades and you'd see articles like this about CPU manufacturers struggling to improve processor speeds and questioning if Moore's Law was dead. Obviously those concerns were way overblown.

That doesn't mean this article is irrelevant. It's good to know if LLM improvements are going to slow down a bit because the low hanging fruit has seemingly been picked.

But in terms of the overall effect of AI and questioning the validity of the technology as a whole, it's just your basic FUD article that you'd expect from mainstream news.

replies(2): >>42141152 #>>42142789 #
212. nuancebydefault ◴[] No.42140889{5}[source]
There is no way of proving awareness in humans, let alone machines. We do not even know whether awareness exists or whether it is just a word that people made up to describe some kind of feeling.
replies(1): >>42142760 #
213. ◴[] No.42140895[source]
214. og_kalu ◴[] No.42140899{3}[source]
>Regardless of what Altman says, its been two years since OpenAI released GPT-4, and still no GPT-5 in sight.

It's been 20 months since 4 was released. 3 was released 32 months after 2. The lack of a release by now in itself does not mean much of anything.

replies(1): >>42141620 #
215. hatefulmoron ◴[] No.42140903{5}[source]
If you took the intersection of every human's abilities you'd be left with a very unimpressive set.

That also ignores the fact that the small set of humans capable of building programming languages and compilers is a consequence of specialization and lack of interest. There are plenty of humans that are capable of learning how to do it. LLMs, on the other hand, are both specialized for the task and aren't lazy or uninterested.

216. 23B1 ◴[] No.42140907[source]
The user interface for LLMs is stuck in C:\

That's where I'd focus.

replies(1): >>42141648 #
217. Der_Einzige ◴[] No.42140917{5}[source]
Brave New World was a utopia.
218. ericmcer ◴[] No.42140918[source]
I have tried a few AI coding tools and always found them impressive but I don't really need something to autocomplete obvious code cases.

Is there an AI tool that can ingest a codebase and locate code based on abstract questions? Like: "I need to invalidate customers who haven't logged in for a month" and it can locate things like relevant DB tables, controllers, services, etc.

replies(3): >>42144124 #>>42144132 #>>42145680 #
219. yk ◴[] No.42140936[source]
To a certain extent I think we are getting a better understanding of what LLMs can do, and my estimation for the next ten years is more like "best UI ever" rather than "LLMs will replace humanity". Now, the best UI ever is something that can certainly deliver a lot of value: 80% of all buttons in a car should be replaced by actually good voice control, and I think that is where we are going to see a lot of very interesting applications: Hey washing machine, this is two t-shirts and a jeans. (The washing machine can then figure out its program by itself; I don't want to memorize the table in the manual.)
replies(2): >>42141220 #>>42144879 #
220. ben_w ◴[] No.42140939{4}[source]
Indeed.

Even assuming the recent robot demo was entirely AI, the only single thing they demonstrated that would have been noteworthy was isolating one voice in a noisy crowd well enough to respond; everything else I saw Optimus do, has already been demonstrated by others.

What makes the uncertainty extra sad is that a remote-controllable humanoid robot is already directly useful for work in hazardous environments, and we know they've got at least that… but Musk would rather it be about the AI.

221. VeejayRampay ◴[] No.42140963{3}[source]
really agree with this, and I think it's been the general experience: people who want LLMs to be great (or who make money off them) kind of cherry-pick examples that fit their narrative, which LLMs are happy to supply because they produce amazing results some of the time, like the deluxe broken clock that they are (they're right many, many times a day)

at the end of the day though, it's not exactly reliable or particularly transformative when you get past the party tricks

222. machiaweliczny ◴[] No.42140970[source]
Long context is a scam. Claude is the best, but it still gets lost with longer contexts.
replies(2): >>42141050 #>>42141690 #
223. Der_Einzige ◴[] No.42140978{5}[source]
Due to WFH, the weed laws where tech workers live, and the fast tolerance building of cannabis in the body - I estimate that 10% of all code written by west coast tech workers is done “while high” and that estimate is likely low.
replies(1): >>42141577 #
224. wildermuthn ◴[] No.42140980[source]
Simply put, AGI requires more data: qualia.
225. nuancebydefault ◴[] No.42140997{5}[source]
I would argue maybe people also are not thinking but simply processing. It is known that most of what we do and feel happens automatically (subconsciously).

But even more, maybe consciousness is an invention of our 'explaining self'; maybe everything is automatic. I'm convinced this discussion is and will stay philosophical and will never reach a conclusion.

replies(1): >>42141089 #
226. bbor ◴[] No.42141020[source]
Great question. I'm very confident in my answer, even though it's in the minority here: we're not even close to exhausting the potential.

Imagine that our current capabilities are like the Model-T. There remain many improvements to be made upon this passenger transportation product, with RAG being a great common theme among them. People will use chatbots with much more permissive interfaces instead of clicking through menus.

But all of that’s just the start, the short term, the maturation of this consumer product; the really scary/exciting part comes when the technology reaches saturation, and opens up new possibilities for itself. In the Model-T metaphor, this is analogous to how highways have (arguably) transformed America beyond anyone’s wildest dreams, changing the course of various historical events (eg WWII industrialization, 60s & 70s white flight, early 2000s housing crisis) so much it’s hard to imagine what the country would look like without them. Now, automobiles are not simply passenger transportation, but the bedrock of our commerce, our military, and probably more — through ubiquity alone they unlocked new forms of themselves.

For those doubting my utopian/apocalyptic rhetoric, I implore you to ask yourself one simple question: why are so many experts so worried about AGI? They've been leaving OpenAI in droves, and that's ultimately what the governance kerfuffle there was about. Hinton, a Turing award winner, gave up $$$ to doom-say full time. Why?

My hint is that if your answer involves fewer than 1,000 specialized LLMs per unified system, then you're not thinking big enough.

replies(2): >>42141580 #>>42142701 #
227. epicureanideal ◴[] No.42141026{7}[source]
Or maybe it pushed everyone to innovate faster than they otherwise would’ve? I’m very interested to hear your reasoning for the other case though, and I am not strongly committed to the opposite view, or either view for that matter.
228. bbor ◴[] No.42141050{3}[source]
I have no data, but I whole-heartedly agree. Well, perhaps not “scam”, but definitely oversold. One of my best undergrad professors taught me the adage “don’t expect a model to do what a human expert cannot”, and I think it’s still a good rule of thumb. Giving someone an entire book to read before answering your question might help, but it would help way, way more to give them a few paragraphs that you know are actually relevant.
229. xpe ◴[] No.42141067[source]
> LLMs do search and copy/paste with idiom translation and some transliteration.

In general, this is not a good description about what is happening inside an LLM. There is extensive literature on interpretability. It is complicated and still being worked out.

The commenter above might characterize the results they get in this way, but I would question the validity of that characterization, not to mention its generality.

230. luckydata ◴[] No.42141088{5}[source]
does it mean people that can build languages and compilers are not humans? What is the point you're trying to make?
replies(1): >>42141178 #
231. vundercind ◴[] No.42141089{6}[source]
Yeah, I’m not much interested in “what’s consciousness?” but I do think the automatic-versus-thinking distinction matters for understanding what LLMs do, and what we might expect them to be able to do, and when and to what degree we need to second-guess them.

A human doesn't just confidently spew paragraphs of legit-looking but entirely wrong crap, unless they're trying to deceive or be funny. An LLM isn't trying to do anything, though; there's no motivation, it doesn't like you (it doesn't like anything; it doesn't do anything, one might even say). Sometimes it definitely will just give you a beautiful and elaborate lie simply because its rulebook told it to, in a context and in a way that would be extremely weird if a person did it.

232. amelius ◴[] No.42141113{4}[source]
> HN always says this

The meaning here is different. What I'm saying is that big companies like OpenAI will always strive to make a generic AI, such that anyone can do basically anything using AI. The big companies therefore will indeed (like you say) have a profitable business, but few others will.

233. xpe ◴[] No.42141114[source]
> They can't create anything that doesn't already exist.

I probably disagree, but I don't want to criticize my interpretation of this sentence. Can you make your claim more precise?

Here are some possible claims and refutations:

- Claim: An LLM cannot output a true claim that it has not already seen. Refutation: LLMs have been shown to do logical reasoning.

- Claim: An LLM cannot incorporate data that it hasn't been presented with. Refutation: This is an unfair standard. All forms of intelligence have to sense data from the world somehow.

234. fragmede ◴[] No.42141123{3}[source]
People go and live in a house to get recorded 24/7, to be on TV, for far more asinine situations, for way less money.
235. xpe ◴[] No.42141125[source]
> They've simply run out of data

Why do you think "they" have run out of data? First, to be clear, who do you mean by "they"? The world is filled with information sources (data aggregators for example), each available to some degree for some cost.

Don't forget to include data that humans provide while interacting with chatbots.

236. lokimedes ◴[] No.42141129{6}[source]
Or they become the Webvan/pets.com of the bubble.
replies(1): >>42141449 #
237. danjl ◴[] No.42141142[source]
There have been variations of this story going back several months now. It isn't really news. It is just building slowly.
238. danjl ◴[] No.42141152[source]
Actually, Moore's Law has been dead for quite a few years now. Since we hit the power wall.
239. handfuloflight ◴[] No.42141177{5}[source]
Don't see where your parent comment said or implied that the point was for being and life to emerge.
replies(1): >>42145965 #
240. fragmede ◴[] No.42141178{6}[source]
It means that's a really high bar for intelligence, human or otherwise. If AGI is "as good as a human", and the test is a trick task that most humans would fail at (especially considering the weasel requirement that it additionally has to be faster), why is that considered a reasonable bar for human-grade intelligence?
241. og_kalu ◴[] No.42141180{4}[source]
>But industries want something more concrete and prospectively-acheivable in their jargon, and so that's where AGI gets redefined as wide task suitability.

The term itself (AGI) in the industry has always been about wide task suitability. People may have added their ifs and buts over the years but that aspect of it never got 'redefined'. The earliest uses of the term all talk about how well a machine would be able to perform some set number of tasks at some threshold.

It's no wonder why. Terms like "consciousness" and "self-awareness" are completely useless. It's not about difficulty. It's that you can't do anything at all with those terms except argue around in circles.

242. xpe ◴[] No.42141206[source]
> Well, there have been no significant improvements to the GPT architecture over the past few years.

A lot hangs on what you mean by "significant". Can you define what you mean? And/or give an example of an improvement that you don't think is significant.

Also, on what basis can you say "no significant improvements" have been made? Many major players have published some of their improvements openly. They also have more private, unpublished improvements.

If your claim boils down to "what people mean by a Generative Pre-trained Transformer" still has a clear meaning, ok, fine, but that isn't the meat of the issue. There is so much more to a chat system than just the starting point of a vanilla GPT.

It is wiser to look at the whole end-to-end system, starting at data acquisition, including pre-training and fine-tuning, deployment, all the way to UX.

P.S. I don't have a vested interest in promoting or disparaging AI. I don't work for a big AI lab. I'm just trying to call it like I see it, as rationally as I can.

243. lokimedes ◴[] No.42141220{3}[source]
To each their own, but I don’t look forward to having my kids yelling, a podcast in my ears and having to explain to my tumbler that wool must be spun at 1000 RPM. Humans have varying preferences when it comes to communication and sensing, making our machine interactions favor the extroverted talkative exhibitionists is really only one modality.
244. chongli ◴[] No.42141243{4}[source]
Because it can't apply any reasoning that hasn't already been done and written into its training set. As soon as you ask it novel questions it falls apart. The big LLM vendors like OpenAI are playing whack-a-mole on these novel questions when they go viral on social media, all in a desperate bid to hide this fatal flaw.

The Emperor has no clothes.

replies(1): >>42141420 #
245. olalonde ◴[] No.42141257{3}[source]
I feel the test for AGI should be more like: "go find a job and earn money" or "start a profitable business" or "pick a bachelor degree and complete it", etc.
replies(3): >>42141334 #>>42141439 #>>42144147 #
246. sdesol ◴[] No.42141270{4}[source]
> "let's only allow the LLM to do things we know it is rock-solid at."

Even this is insanely hard, in my opinion. The one thing that you would assume an LLM excels at is spelling and grammar checking for the English language, but even the top model (GPT-4o) can be insanely stupid/unpredictable at times. Take the following example from my tool:

https://app.gitsense.com/?doc=6c9bada92&model=GPT-4o&samples...

5 models are asked if the sentence is correct and GPT-4o got it wrong all 5 times. It keeps complaining that GitHub is spelled like Github, when it isn't. Note, only 2 weeks ago, Claude 3.5 Sonnet did the same thing.

I do believe LLMs are a game changer, but I'm not convinced they are designed to be public-facing. I see LLMs as a power tool for domain experts, and you have to assume whatever they spit out may be wrong, and your process should allow for that.

Edit:

I should add that I'm convinced that not one single model will rule them all. I believe there will be 4 or 5 models that everybody will use and each will be used to challenge one another for accuracy and confidence.
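
Something like this sketch is what I mean by models challenging one another (the `ask` helper and model names are placeholders for whatever client each vendor provides):

    from collections import Counter

    MODELS = ["model-a", "model-b", "model-c"]  # stand-ins for the 4-5 models people would use

    def ask(model, prompt):
        raise NotImplementedError  # each vendor's chat API goes here

    def cross_check(prompt):
        answers = {m: ask(m, prompt) for m in MODELS}
        counts = Counter(answers.values())
        best, votes = counts.most_common(1)[0]
        confident = votes > len(MODELS) // 2   # majority agreement = higher confidence
        return best, confident, answers        # disagreements get surfaced to the domain expert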

replies(7): >>42141815 #>>42141930 #>>42142235 #>>42142767 #>>42142842 #>>42144019 #>>42145544 #
247. robrenaud ◴[] No.42141275[source]
> For example, combining a human-moderated knowledge graph with an LLM with RAG allows you to build "expert bots" that understand your business context / your codebase / your specific processes and act almost human-like similar to a coworker in your team.

I'd love to hear about this. I applied to YC WC 25 with research/insight/an initial researchy prototype built on top of GPT4+finetuning about something along this idea. Less powerful than you describe, but it also works without the human moderated KG.

248. ppeetteerr ◴[] No.42141284{3}[source]
The reason people are holding out is that the current generation of models is still pretty poor in many areas. You can have one craft an email, or review your email, but I wouldn't trust an LLM with anything mission-critical. The accuracy of the generated output is too low to be trusted in most practical applications.
replies(2): >>42142016 #>>42144223 #
249. revscat ◴[] No.42141298{5}[source]
Plenty of humans, unfortunately, are incapable of admitting limitations. Many years ago I had a coworker who believed he would never die. At first I thought he was joking, but he was in fact quite serious.

Then there are those who are simply narcissistic, and cannot and will not admit fault regardless of the evidence presented them.

replies(1): >>42142791 #
250. jkaptur ◴[] No.42141311{4}[source]
https://en.wikipedia.org/wiki/Osborne_effect
251. rodgerd ◴[] No.42141334{4}[source]
An LLM doing crypto spam/scamming has been making money by tricking Marc Andreessen into boosting it. So to the degree that "scamming gullible billionaires and their fans" is a job, that's been done.
replies(2): >>42141411 #>>42141664 #
252. readyplayernull ◴[] No.42141342[source]
> feels like the "digitization" era all over again

This exactly. And as history shows, no matter how much effort the current big LLM companies put in, they won't be able to grasp the best uses for their tech. We will see small players developing it even further. I'm thankful for the legendary blindness of these anticompetitive behemoths. Less than 2 decades ago: IBM Watson.

253. hatefulmoron ◴[] No.42141347[source]
I'm curious, what was the leap after GPT-4? What about the leaps after that, given a leap every 6 months?
replies(3): >>42142714 #>>42145027 #>>42145103 #
254. JohnMakin ◴[] No.42141362[source]
It's strange to me that's your takeaway. The reason the internet was overhyped in the 2000s is because it was, and it was also heavily overvalued. It took a massive correction and a seriously disruptive bubble burst to break the delusion and move on to something more sustainable.
replies(2): >>42141769 #>>42149330 #
255. HarHarVeryFunny ◴[] No.42141388{4}[source]
It seems you are missing a lot of "ifs" in that hypothetical!

Nobody knows how things like coding assistants or other AI applications will pan out. Maybe it'll be Oracle selling Meta-licenced solutions that gets the lion's share of the market. Maybe custom coding goes away for many business applications as off-the-shelf solutions get smarter.

A future where all that AI (or some hypothetical AGI) changes is work being done by humans to the same work being done by machines seems way too linear.

replies(1): >>42141592 #
256. bloppe ◴[] No.42141399[source]
> you can have LLMs create reasonable code changes, with automatic review / iteration etc.

Nobody who takes code health and sustainability seriously wants to hear this. You absolutely do not want to be in a position where something breaks, but your last 50 commits were all written and reviewed by an LLM. Now you have to go back and review them all with human eyes just to get a handle on how things broke, while customers suffer. At this scale, it's an effort multiplier, not an effort reducer.

It's still good for generating little bits of boilerplate, though.

replies(1): >>42142621 #
257. rsanek ◴[] No.42141411{5}[source]
source? didn't find anything online about this.
replies(1): >>42230225 #
258. hackinthebochs ◴[] No.42141420{5}[source]
>As soon as you ask it novel questions it falls apart.

What do you mean by novel? Almost all sentences it is prompted on are brand new and it mostly responds sensibly. Surely there's some generalization going on.

replies(1): >>42141945 #
259. yobid20 ◴[] No.42141423[source]
This was predicted. AI isn't going to get any better.
260. methodical ◴[] No.42141431{3}[source]
Ditto- I have a feeling the investors in his latest 2.3 quintillion dollar series Z round wouldn't be as happy if he'd have tweeted "there is a wall"
261. deegles ◴[] No.42141433{3}[source]
My big question is what is being done about hallucination? Without a solution it's a giant footgun.
replies(3): >>42143293 #>>42145814 #>>42148625 #
262. jedberg ◴[] No.42141439{4}[source]
Can most humans do that? Find a job and earn money, probably. The other two? Not so much.
263. HarHarVeryFunny ◴[] No.42141440{5}[source]
I'm not sure about that. NVIDIA seems to stay in a dominant position as long as the race to AI remains intact, but the path to it seems unsure. They are selling a general purpose AI-accelerator that supports the unknown path.

Once massively useful AI has been achieved, or it's been determined that LLMs are it, then it becomes a race to the bottom as GOOG/MSFT/AMZN/META/etc design/deploy more specialized accelerators to deliver this final form solution as cheaply as possible.

264. fragmede ◴[] No.42141442{5}[source]
But we don't know how human thinking works. Suppose for a second that it could be represented as a series of matrix math. What series of operations is missing from the process that would make you think it was doing some facsimile of thinking?
265. hackinthebochs ◴[] No.42141443{5}[source]
Right, but its behavior didn't change after you learned more about it. Why should that cause you to update in the negative? Why does learning how it work not update you in the direction of "so that's how thinking works!" rather than, "clearly its not doing any thinking"? Why do you have a preconception of how thinking works such that learning about the internals of LLMs updates you against it thinking?
replies(1): >>42142386 #
266. zeusk ◴[] No.42141449{7}[source]
Nvidia is more likely to become CSCO or INTC, but as far as I can tell that's still a few years off - unless of course there is weakness in the broader economy that accelerates the pressure on investors.
267. creativenolo ◴[] No.42141459{3}[source]
Great & motivational comment. Any pointers on where to start playing with the internals and sampling?

Doesn’t need to be comprehensive, I just don’t know where to jump off from.

replies(1): >>42144378 #
268. nyrikki ◴[] No.42141464{4}[source]
There are way too many personal definitions of what "Moore's Law" even is to have a discussion without deciding on a shared definition beforehand.

But Goodhart's law ("When a measure becomes a target, it ceases to be a good measure") directly applies here: Moore's Law was used to set long-term plans at semiconductor companies, and Moore didn't have empirical evidence it was even going to continue.

If you, say, arbitrarily pick CPU performance, or worse, single-core performance, as your measurement, it hasn't held for well over a decade.

If you hold minimum feature size without regard to cost, it is still holding.

What you want to prove usually dictates what interpretation you make.

That said, the scaling law is still unknown, but you can game it as much as you want in similar ways.

GPT-4 was already hinting at an asymptote on MMLU, but the question is whether that holds for real work, etc...

Time will tell, but I am seeing far less optimism from my sources, though that is just anecdotal.

269. sourcepluck ◴[] No.42141503{5}[source]
The short version of this is that I don't disagree with your interpretation of Searle, and my paragraphs immediately following the link weren't meant to be a direct description of his point with the Chinese Room thought experiment.

> while it cannot be dis-_proven_, the Chinese Room argument was meant to provide reasons against believing it

Yes, like Russell's teapot. I also think that's what Searle means.

> the "it can't be proven until it happens" part is misunderstanding: you won't know if it happens because the objective, externally available attributes don't indicate whether self-awareness (or indeed awareness at all) is present

Yes, agreed, I believe that's what Searle is saying too. I think I was maybe being ambiguous here - I wanted to say that even if you forgave the AI maximalists for ignoring all relevant philosophical work, the notion that "appearing human-like" inevitably tends to what would actually be "consciousness" or "intelligence" is more than a big claim.

Searle goes further, and I'm not sure if I follow him all the way, personally, but it's a side point.

270. Zopieux ◴[] No.42141514{5}[source]
I can't believe this still needs to be laid down years after the start of the GPT hype. Still, thanks!
271. creativenolo ◴[] No.42141522{3}[source]
> holding on of building things waiting for "that next big update", but there a so many small, annoying tasks that can be easily automated.

Also, we only hear about / see the examples that are meant to scale. Startups typically offer up something transformative, ready to soak up a segment of a market. And that's hard with the current state of LLMs. When you try their offerings, it's underwhelming. But there are richer, more nuanced, hard-to-reach fruits that are extremely interesting - it's just not clear where they'd scale in and of themselves.

272. jppope ◴[] No.42141532[source]
Just an observation: if the models are hitting the top of the S-curve, that might be why Sam Altman raised all the money for OpenAI... it might not be available once venture capitalists realize that the gains are close to being done.
273. bob1029 ◴[] No.42141540{3}[source]
I strongly believe this gets into an information theoretical constraint akin to why perpetual motion machines don't work.

In theory, yes you could generate an unlimited amount of data for the models, but how much of it is unique or valuable information? If you were to compress all this generated training data using a really good algorithm, how much actual information remains?

replies(3): >>42141792 #>>42141948 #>>42181780 #
274. m3kw9 ◴[] No.42141545[source]
Hold your horses. OpenAI just came out with o1-preview 2 months ago, showing what test-time compute can do.
275. rafaelmn ◴[] No.42141563[source]
>There is another emerging paradigm which is still small(er) scale but showing remarkable results. That's full multi-modal training with embodied agents (aka robots). 1x, Figure, Physical Intelligence, Tesla are all making rapid progress on functionality which is definitely beyond frontier LLMs because it is distinctly different.

Tesla has been selling this view for almost a decade now in self-driving - how their car fleet feeding back training data is going to make them leaders in the area. I don't find it convincing anymore.

replies(2): >>42143183 #>>42144438 #
276. warkdarrior ◴[] No.42141572{3}[source]
> Cool, but we already have robots doing this in 2d space (aka self driving cars) that struggle not to kill people. How is adding a third dimension going to help?

If we have robots that operate in 3D, they'll be able to kill you not only from behind or from the side, but also from above. So that's progress!

277. portaouflop ◴[] No.42141577{6}[source]
Do tech workers write better or worse code while high ?
replies(1): >>42143325 #
278. fire_lake ◴[] No.42141580{3}[source]
> Hinton, a Turing award winner, gave up $$$ to doom-say full time

This is a hint of something but a weak argument. Smart people are wrong all the time.

279. mtkd ◴[] No.42141590[source]
And that is potentially only going to worsen as:

1. more data gets walled-off as owners realise value

2. stackoverflow-type feedback loops cease to exist as few people ask a public question and get public answers ... they ask a model privately and get an answer based on last visible public solutions

3. bad actors start deliberately trying to poison inputs (if sites served malicious responses to GPTBot/CCBot crawlers only, would we even know right now?)

4. more and more content becomes synthetically generated to the point pre-2023 physical books become the last-known-good knowledge

5. governments and IP lawyers finally catch up

replies(1): >>42141909 #
280. JumpCrisscross ◴[] No.42141592{5}[source]
> you are missing a lot of "ifs" in that hypothetical

The big one being I'm not assuming AGI. Low-level coding tasks, the kind frequently outsourced, are within the realm of being competitive with offshoring with known methods. My point is we don't need to assume AGI for these valuations to make sense.

replies(2): >>42141668 #>>42145913 #
281. nomel ◴[] No.42141612{5}[source]
I agree there's no single definition, but I think they all have something current LLM don't: the ability to learn new things, in a persistent way, with few shots.

I would argue that learning is The definition of AGI, since everything else comes naturally from that.

The current architectures can't learn without retraining, fine tuning is at the expense of general knowledge, and keeping things in context is detrimental to general performance. Once you have few shot learning, I think it's more of a "give it agency so it can explore" type problem.

282. HarHarVeryFunny ◴[] No.42141620{4}[source]
By itself, sure, but there are many sources all pointing to the same thing.

Sutskever, recently ex. OpenAI, one of the first to believe in scaling, now says it is plateauing. Do OpenAI have something secret he was unaware of? I doubt it.

FWIW, GPT-2 and GPT-3 were about a year apart (2019 "Language models are Unsupervised Multitask Learners" to 2020 "Language Models are Few-Shot Learners").

Dario Amodei recently said that with current gen models pre-training itself only takes a few months (then followed by post-training, etc). These are not year+ training runs.

replies(1): >>42142076 #
283. ◴[] No.42141621[source]
284. devit ◴[] No.42141624[source]
It seems obvious to me that Common Crawl plus GitHub public repositories have more than enough data to train an AI that is as good as any programmer (at tasks not requiring knowledge of non-public codebases or non-public domain knowledge).

So the problem is more in the algorithm.

replies(1): >>42141675 #
285. in_a_society ◴[] No.42141632[source]
Expecting AGI from Reddit training data is peak "pray Mr Babbage".
replies(1): >>42145309 #
286. nomel ◴[] No.42141640{5}[source]
> Or being aware of how its utterances may come across to its interlocutor.

I think this behavior is being somewhat demonstrated in newer models. I've seen GPT-3.5 175B correct itself mid response with, almost literally:

> <answer with flaw here>

> Wait, that's not right, that <reason for flaw>.

> <correct answer here>.

Later models seem to have much more awareness of, or "weight" towards, their own responses, while generating the response.

replies(1): >>42142851 #
287. kenjackson ◴[] No.42141648{3}[source]
Voice for LLMs is surprisingly good. I'd love to see LLMs used in more systems like cars and in-home automation. Whatever cars use today, and Alexa in the home, is simply much worse than what we get with ChatGPT voice today.
288. moogly ◴[] No.42141651[source]
> you can have LLMs create reasonable code changes

Could you define "code changes" because I feel that is a very vague accomplishment.

289. jedberg ◴[] No.42141652{4}[source]
I will get excited when an LLM (or whatever technology is next) can solve tasks that 80%+ of adult humans can solve. Heck let's even say 80% of college graduates to make it harder.

Things like drive a car, fold laundry, run an errand, do some basic math.

You'll notice that two of those require some form of robot or mobility. I think that is key -- you can't have AGI without the ability to interact with the world in a way similar to most humans.

replies(1): >>42141904 #
290. bob1029 ◴[] No.42141654{4}[source]
This sounds like something more up the alley of linear genetic programming. There are some very interesting experiments out there that utilize UTMs (BrainFuck, Forth, et. al.) [0,1,2].

I've personally had some mild success getting these UTM variants to output their own children in a metaprogramming arrangement. The base program only has access to the valid instruction set of ~12 instructions per byte, while the task program has access to the full range of instructions and data per byte (256). By only training the base program, we reduce the search space by a very substantial factor. I think this would be similar to the idea of a self-hosted compiler, etc. I don't think it would be too much of a stretch to give it access to x86 instructions and a full VM once a certain amount of bootstrapping has been achieved.

[0]: https://arxiv.org/abs/2406.19108

[1]: https://github.com/kurtjd/brainfuck-evolved

[2]: https://news.ycombinator.com/item?id=36120286
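For anyone curious what the smallest version of this looks like, here's a toy sketch of my own (not taken from the papers above): a minimal BrainFuck interpreter with a step budget, plus a hill-climbing mutation loop standing in for a full GP population.

```python
import random

OPS = "+-<>.[]"  # no input op; programs only write output

def run_bf(code, max_steps=2000):
    # Tiny BrainFuck interpreter with a step budget; returns output bytes,
    # or None for programs with unbalanced brackets.
    stack, jumps = [], {}
    for i, c in enumerate(code):
        if c == "[":
            stack.append(i)
        elif c == "]":
            if not stack:
                return None
            j = stack.pop()
            jumps[i], jumps[j] = j, i
    if stack:
        return None
    tape, ptr, out, ip, steps = [0] * 64, 0, [], 0, 0
    while ip < len(code) and steps < max_steps:
        c = code[ip]
        if c == "+":   tape[ptr] = (tape[ptr] + 1) % 256
        elif c == "-": tape[ptr] = (tape[ptr] - 1) % 256
        elif c == ">": ptr = (ptr + 1) % len(tape)
        elif c == "<": ptr = (ptr - 1) % len(tape)
        elif c == ".": out.append(tape[ptr])
        elif c == "[" and tape[ptr] == 0: ip = jumps[ip]
        elif c == "]" and tape[ptr] != 0: ip = jumps[ip]
        ip += 1
        steps += 1
    return bytes(out)

def fitness(code, target=b"hi"):
    # Lower is better: distance between program output and the target string.
    out = run_bf(code)
    if out is None:
        return float("inf")
    padded = out.ljust(len(target), b"\0")[: len(target)]
    return sum(abs(a - b) for a, b in zip(padded, target)) + abs(len(out) - len(target))

def mutate(code):
    i = random.randrange(len(code))
    return code[:i] + random.choice(OPS) + code[i + 1:]

# Hill climbing as a stand-in for a full GP population loop (toy scale).
best = "".join(random.choice(OPS) for _ in range(60))
best_fit = fitness(best)
for _ in range(5000):
    cand = mutate(best)
    f = fitness(cand)
    if f <= best_fit:
        best, best_fit = cand, f
print(best_fit, run_bf(best))
```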

291. superjose ◴[] No.42141661[source]
I'm more in the camp that these techs don't need to be perfect, but they need to be practical enough.

And I think the latter is good enough for us to do exciting things.

replies(1): >>42141768 #
292. olalonde ◴[] No.42141664{5}[source]
That story was a bit blown out of proportion. He gave a research grant to the bot's creator: https://x.com/pmarca/status/1846374466101944629
293. HarHarVeryFunny ◴[] No.42141668{6}[source]
Current AI coding assistants are best at writing functions or adding minor features to an existing code base. They are not agentic systems that can develop an entire solution from scratch given a specification, which in my experience is more typical of the work that is being outsourced. AI is a tool whose full-cycle productivity benefit seems questionable. It is not a replacement for a human.
replies(2): >>42141727 #>>42141740 #
294. darknoon ◴[] No.42141675[source]
I think just reading the code wouldn't make you a good programmer; you'd need to "read" the anti-code, i.e. what doesn't work, by trial and error. Models' overconfidence that their code will work often leads them to fail in practice.
replies(1): >>42141971 #
295. eastbound ◴[] No.42141677{4}[source]
I have 3 remote employees whose work is consistently as bad as an LLM's.

That means employees who use LLMs are, on average, recognizably bad. Those who are good enough are also good enough to write the code manually.

To the point I wonder whether this HN thread is generated by OpenAI, trying to create buzz around AI.

replies(1): >>42141790 #
296. jedberg ◴[] No.42141680{4}[source]
When we test kids to see if they are gifted, one of the criteria is that they have the ability to say "I don't know".

That is definitely an ability that current LLMs lack.

297. kenjackson ◴[] No.42141685{4}[source]
> because we understand the rough biological processes that cause this

We don't have a rough understanding of the biological processes that cause this, unless you literally mean just the biological process and not how it actually impacts learning/intelligence.

There's no evidence that we (brains) have achieved AGI, unless you tautologically define AGI as our brains.

replies(1): >>42142139 #
298. Timber-6539 ◴[] No.42141687[source]
Direct quote from the article: "The companies are facing several challenges. It’s become increasingly difficult to find new, untapped sources of high-quality, human-made training data that can be used to build more advanced AI systems."

The irony here is astounding.

replies(2): >>42142698 #>>42145740 #
299. cruffle_duffle ◴[] No.42141690{3}[source]
In my experience, the reality of long context windows doesn’t live up to the hype. When you’re iterating on something, whether it's code, text, or any document, you end up with multiple versions layered in the context. Every time you revise, those earlier versions stick around, even though only the latest one is the "most correct".

What gets pushed out isn’t the last version of the document itself (since it’s FIFO), but the important parts of the conversation—things like the rationale, requirements, or any context the model needs to understand why it’s making changes. So, instead of being helpful, that extra capacity just gets filled with old, repetitive chunks that have to be processed every time, muddying up the output. This isn’t just an issue with code; it happens with any kind of document editing where you’re going back and forth, trying to refine the result.

Sometimes I feel the way to "resolve" this is to instead go back and edit some earlier portion of the chat to update it with the "new requirements" that I didn't even know I had until I walked down some rabbit hole. What I end up with is almost like a threaded conversation with the LLM. Like, I sometimes wish these LLM chatbots explicitly treated the conversation as if it were threaded. They do support basically my use case by letting you toggle between different edits to your prompts, but it is pretty limited and you cannot go back and edit things if you do some operations (eg: attach a file).

Speaking of context, it's also hard to know what things like ChatGPT add to its context in the first place. Many times I'll attach a file or something and discover it didn't "read" the file into its context. Or I'll watch it fire up a python program it writes that does nothing but echo the file into its context.

I think there is still a lot of untapped potential in strategically manipulating what gets placed into the context window at all. For example only present the LLM with the latest and greatest of a document and not all the previous revisions in the thread.
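A minimal sketch of what I mean (names and structure are just illustrative): rebuild the prompt each turn from the durable context plus only the newest draft, instead of letting stale revisions accumulate.

```python
def build_prompt(system, requirements, latest_draft, recent_turns, max_turns=6):
    # Durable context (requirements, rationale) stays; earlier drafts are
    # deliberately dropped so they don't crowd out the discussion.
    parts = [
        system,
        "Requirements and rationale:",
        *requirements,
        "Latest draft (previous revisions omitted):",
        latest_draft,
        "Recent discussion:",
        *recent_turns[-max_turns:],
    ]
    return "\n\n".join(parts)

# prompt = build_prompt(system_msg, reqs, drafts[-1], turns)  # hypothetical inputs
```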

replies(2): >>42142946 #>>42143130 #
300. mvdtnz ◴[] No.42141703[source]
Even if you're right (you're not) whatever "AI" looks like in 20+ years will have virtually nothing in common with these stupid statistical word generators.
301. JumpCrisscross ◴[] No.42141727{7}[source]
> they are not agentic systems that can develop an entire solution from scratch given a specification, which in my experience is more typcical of the work that is being outsourced

If there is one domain where we're seeing tangible progress from AI, it's in working towards this goal. Difficult projects aren't in scope. But most tech, especially most tech branded as IT, is not difficult. Not everyone needs an inventory or customer-complaint system designed from scratch. Current AI is good at cutting through that cruft.

replies(1): >>42142846 #
302. czhu12 ◴[] No.42141736[source]
If it becomes obvious that LLMs have a narrower set of use cases, rather than the all-encompassing story we hear today, then I would bet that the LLM platforms (OpenAI, Anthropic, Google, etc.) will start developing products to compete directly with applications that are supposed to be building on top of them, like Cursor, in an attempt to increase their revenue.

I wonder what this would mean for companies raising today on the premise of building on top of these platforms. Maybe the best ones get their ideas copied, reimplemented, and sold for cheaper?

We already kind of see this today with OpenAI's canvas and Claude artifacts. Perhaps they'll even start moving into Palantir's space and start having direct customer implementation teams.

It is becoming increasingly obvious that LLMs are quickly becoming commoditized. Everyone is starting to approach the same limits in intelligence, and is finding it hard to carve out margin from competitors.

Most recently exhibited by the backlash at Claude raising prices because their product is better. In any normal market this would be totally expected, but people seemed shocked that anyone would charge more than the raw cost it takes to run the LLM itself.

https://x.com/ArtificialAnlys/status/1853598554570555614

replies(1): >>42143796 #
303. senko ◴[] No.42141740{7}[source]
There are a number of agentic systems that can develop more complex solutions. Just a few off the top of my head: Pythagora, Devin, OpenHands, Fume, Tusk, Replit, Codebuff, Vly. I'm sure I've missed a bunch.

Are they good enough to replace a human yet? Questionable[0], but they are improving.

[0] You wouldn't believe how low the outsourcing contractors' quality can go. Easily surpassed by current AI systems :) That's a very low bar tho.

304. fragmede ◴[] No.42141745{4}[source]
It's not just marketing bullshit though. Microsoft is the counterparty to a contract with that claim. Money changes hands when that's been achieved, so I expect that if sama thinks he's hit it but Microsoft does not, we'll see it argued in a court of law.
305. kozikow ◴[] No.42141760{3}[source]
> "The theory behind these models so aggressively lags the engineering"

The problem is that 99% of theories are hard to scale.

I am not an expert, as I work adjacent to this field, but I see the inverse - dumbing down theory to increase parallelism/scalability.

306. delusional ◴[] No.42141764{5}[source]
If I understand that correctly you're converting file formats? That's not exactly "novel"
replies(1): >>42142072 #
307. imiric ◴[] No.42141768[source]
How practical can they be when current flagship models generate incorrect responses more than 50% of the time[1]?

This might be acceptable for amusing us with fiction and art, and for filling the internet with even more spam and propaganda, but would you trust them to write reliable code, drive your car or control any critical machinery?

The truly exciting things are still out of reach, yet we just might be at the Peak of Inflated Expectations to see it now.

[1]: https://openai.com/index/introducing-simpleqa/

308. jmward01 ◴[] No.42141769{3}[source]
I disagree that it was over hyped. It has transformed our society so much that I would argue it was vastly under-hyped. Sure, there were a lot of silly companies that sprang up and went away because they weren't sound, but so much of the modern economy is based on the internet that it is hard to say any business isn't somehow internet related today. You would be hard pressed to find any business anywhere that doesn't at least have a social media account. If 2000 was over-hyping things I just don't see it.
replies(2): >>42141844 #>>42142165 #
309. mvdtnz ◴[] No.42141772{3}[source]
I feel like accusing people of being "so dismissive" was strongly associated with NFTs and cryptocurrency a few years ago, and now it's widely deployed against anyone skeptical of very expensive, not very good word generators.
replies(1): >>42143107 #
310. akomtu ◴[] No.42141776{3}[source]
My understanding is that machine learning today is a lot like interpolation of examples in the dataset. The breakthrough of LLMs is due to the idea that interpolation in a 1024-dimensional space works much better than in a 2d space, as it would if we naively interpolated English letters. All the modern transformer stuff is basically an advanced interpolation method that uses a larger local neighborhood than just a few nearest examples. It's like the Lanczos interpolation kernel, using a 1d analogy. Increasing the size of the kernel won't bring any gains, because the current kernel already nearly perfectly approximates an ideal interpolation (a full-dataset DFT).

However, interpolation isn't reasoning. If we want to understand the motion of planets, we would start with a dataset of (x, y, z, t) coordinates and try to derive the law of motion. Imagine if someone simply interpolated the dataset and presented the law of gravity as an array of a million coefficients (aka weights). Our minds have to work with a very small operating memory that can hardly fit 10 coefficients. This constraint forces us to develop intelligence that compacts the entire dataset into one small differential equation. Btw, English grammar is the differential equation of English in a lot of ways: it gives the local rules for valid trajectories of words that we call sentences.
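A toy illustration of the interpolation-vs-law distinction (nothing to do with LLM internals, just the general point, with a sine standing in for the "law" and a polynomial fit standing in for the coefficient array): the fit reproduces the data beautifully inside the sampled range and falls apart outside it, because the coefficients encode the samples rather than the rule.

```python
import numpy as np

rng = np.random.default_rng(0)
x_train = np.sort(rng.uniform(0, 2 * np.pi, 200))
coeffs = np.polyfit(x_train, np.sin(x_train), deg=12)  # "a million coefficients", scaled down

x_inside, x_outside = np.pi / 3, 3 * np.pi
print(abs(np.polyval(coeffs, x_inside) - np.sin(x_inside)))    # small: interpolation works
print(abs(np.polyval(coeffs, x_outside) - np.sin(x_outside)))  # enormous: no law was learned
```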

311. archiepeach ◴[] No.42141787{3}[source]
To be fair, in the human-based teams I've worked with in startups, I couldn't show you products with decent retention.
312. ben_w ◴[] No.42141790{5}[source]
1. The person I'm replying to is hypothesising about a future, not yet existent, version, GPT5. Current quality limits don't tell you jack about a hypothetical future, especially one that may not ever happen because money.

2. I'm not commenting on the quality, because they were writing about something that doesn't exist and therefore that's clearly just a given for the discussion. The only thing I was adding is that humans also need guidance, and quite a lot of it — even just a two-week sprint's worth of 15 minute daily stand-up meetings is 18,000 words, which is well beyond the point where I'd have given up prompting an LLM and done the thing myself.

replies(1): >>42150308 #
313. cruffle_duffle ◴[] No.42141792{4}[source]
I sure hope there are some bright-eyed, bushy-tailed graduate students crafting up a theorem to prove this. Because it is absolutely a feedback loop.

... that being said, I'm sure there is plenty of additional "real data" that hasn't been fed to these models yet. For one thing, I think ChatGPT sucks so bad at Terraform because almost all the "real code" to train on is locked behind private repositories. There aren't many publicly available real-world Terraform projects to train on. Same with a lot of other similar languages and tools -- a lot of that knowledge is locked away as trade secrets and hidden in private document stores.

(that being said, Sonnet 3.5 is much, much, much better at Terraform than ChatGPT. It's much better at coding in general, but it's night and day for Terraform)

314. knicholes ◴[] No.42141793{4}[source]
No real reason. I just made it up. But that's kind of my reasonable expectation of longevity of a machine like a robotic lawnmower and battery life.
315. fireflash38 ◴[] No.42141794{5}[source]
If you've got a clearly defined input format and output format, sure, it seems like a good candidate for heavy LLM use. But I don't know if that's most people.
replies(1): >>42141811 #
316. nonameiguess ◴[] No.42141796[source]
Your hypothesis here is not exclusive of the hypothesis in this article.

Name your platform. Linux. C++. The Internet. The x86 processor architecture. We haven't exhausted the options for delivering value on top of those, but that doesn't mean the developers and sellers of those platforms don't try to improve them anyway and might struggle to extract value from application developers who use them.

317. ◴[] No.42141797{4}[source]
318. dmd ◴[] No.42141811{6}[source]
If it were ever clearly defined or even consistent from input to input I would be overjoyed.
319. SimianSci ◴[] No.42141815{5}[source]
> "I see LLM as a power tool for domain experts, and you have to assume whatever it spits out may be wrong, and your process should allow for it."

this gets to the heart of it for me. I think LLMs are an incredible tool, providing advanced augmentation of our already developed search capabilities. What advanced user doesn't want a colleague they can talk with about their specific domain?

The problem comes from the hyperscaling ambitions of the players who were first in this space. They quickly hyped up the technology beyond what it should have been.

replies(1): >>42145693 #
320. JohnMakin ◴[] No.42141844{4}[source]
pets.com was valued at $400 million based almost completely on its domain name. That's the classic example. People were throwing buckets of money at any .com that resolved to a site and almost all of it failed. I'm not sure how that doesn't meet the definition of over-hyped. It feels very similar to now. Not even to mention - the web largely doesn't consist of .com sites anymore, it's mostly a few centralized sites and apps.
replies(1): >>42143744 #
321. xanderlewis ◴[] No.42141849{5}[source]
Yep. falcor84: you’re thinking of the so-called ‘multilayer perceptron’ which is basically an archaic name for a (densely connected?) neural network. I was referring to traditional perceptrons.
replies(1): >>42142074 #
322. 77pt77 ◴[] No.42141861{4}[source]
ELIZA was probably more effective than most therapists.

Definitely cheaper.

323. xanderlewis ◴[] No.42141876{4}[source]
That’s the inspiration behind the idea, but it doesn’t seem to be working in practice.

It’s not true that any element, when duplicated and linked together, will exhibit anything emergent. Neural networks (in a certain sense, though not their usual implementation) are already built out of individual units linked together, so simply having more of these groups of units might not add anything important.

> research is already showing promising results of the performance of agent systems.

…in which case, please show us! I’d be interested.

324. moffkalast ◴[] No.42141879{3}[source]
Well according to Nvidia you can just ignore Moore's law and start requiring people to install multi kilowatt outlets just for their cards. Who needs efficiency amirite?
replies(1): >>42142664 #
325. 77pt77 ◴[] No.42141888[source]
> They can't create anything that doesn't already exist.

Just increase the temperature.

replies(1): >>42142543 #
326. moffkalast ◴[] No.42141893[source]
Altman on Twitter has always been less coherent than GPT-2.
327. ata_aman ◴[] No.42141904{5}[source]
So embodied cognition right?
328. 77pt77 ◴[] No.42141909{3}[source]
> more data gets walled-off as owners realize value

What's amazing to me is that no one is throwing accusations of plagiarism.

I still think that if the "wrong people" had tried doing this they would have been obliterated by the courts.

329. quantum_state ◴[] No.42141911[source]
Hope this will be a constant reminder that brute force can only get one so far, though it may still be useful where it works. With lots of intuition gained, it’s time to ponder things a bit more deeply.
replies(1): >>42141947 #
330. larodi ◴[] No.42141930{5}[source]
Those Apple engineers stated in a very clear tone:

- every time a different result is produced.

- no reasoning capabilities were categorically determined.

So this is it. If you want an LLM, brace for different results, and if this is okay for your application (say it’s about speech or non-critical commands), then off you go.

Otherwise simply forget this approach, particularly when you need reproducible, discrete results.

I don’t think it gets any better than that, and nothing so far indicates it will (with this particular approach to AGI or whatever the wet dream is)

replies(4): >>42141956 #>>42142010 #>>42142797 #>>42144428 #
331. cryptica ◴[] No.42141942[source]
It's interesting the way things turned out so far with LLMs, especially from the perspective of a software engineer. We are trained to keep a certain skepticism when we see software which appears to be working because, ultimately, the only question we care about is "Does it meet user requirements?" and this is usually framed in terms of users achieving certain goals.

So it's interesting that when AI came along, we threw caution to the wind and started treating it like a silver bullet... Without asking the question of whether it was applicable to this goal or that goal...

I don't think anyone could have anticipated that we could have an AI which could produce perfect sentences, faster than a human, better than a human but which could not reason. It appears to reason very well, better than most people, yet it doesn't actually reason. You only notice this once you ask it to accomplish a task. After a while, you can feel how it lacks willpower. It puts into perspective the importance of willpower when it comes to getting things done.

In any case, LLMs bring us closer to understanding some big philosophical questions surrounding intelligence and consciousness.

332. chongli ◴[] No.42141945{6}[source]
Novel as in requiring novel reasoning to sort out. One of the classic ways to expose the issue is to take a common puzzle and introduce irrelevant details and perhaps trivialize the solution. LLMs pattern match on the general form of the puzzle and then wander down the garden path to an incorrect solution that no human would fall for.

The sort of generalization these things can do seems to mostly be the trivial sort: substitution.

replies(2): >>42142079 #>>42142154 #
333. dmafreezone ◴[] No.42141947[source]
Maybe, if you want to relearn the bitter lesson.

http://www.incompleteideas.net/IncIdeas/BitterLesson.html

334. moffkalast ◴[] No.42141948{4}[source]
I make a lot of shitposts; how much of that is valuable information? Arguably not much. I doubt information value is a good way to estimate intelligence, because most people's daily ramblings would grade them useless.
335. marcellus23 ◴[] No.42141956{6}[source]
> Those Apple engineers

Which Apple engineers? Yours is the only reference to the company in this comment section or in the article.

replies(2): >>42142644 #>>42146113 #
336. lagrange77 ◴[] No.42141969{4}[source]
Good point!

I'm wondering whether it would count if one extended it with an external program that gives it feedback during inference (via another prompt) about the correctness of its output.

I guess it wouldn't, because these RAG tools kind of do that, and I haven't heard anyone calling those self-aware.

replies(1): >>42145102 #
337. krisroadruck ◴[] No.42141971{3}[source]
AlphaGo got better by playing against itself. I wonder if the pathway forward here is to essentially do the same with coding. Feed it some arbitrary SRS documents - have it attempt to develop them, including full code coverage testing. Have it also take on the roles of QA, stakeholders, red-team security researchers, and users who are all aggressively trying to find edge cases and point out everything wrong with the application. Have it keep iterating and learn from the findings. Keep feeding it new novel SRSs until the number of attempts/iterations necessary to get a quality product out the other side drops to some acceptable number.
338. verteu ◴[] No.42142010{6}[source]
(for reference: https://arxiv.org/pdf/2410.05229 )
339. saalweachter ◴[] No.42142016{4}[source]
Any email you trust an LLM to write is one you probably don't need to send.
replies(1): >>42142611 #
340. k__ ◴[] No.42142035[source]
But AGI is always right around the corner?

I don't get it...

341. sssilver ◴[] No.42142048[source]
One thing that makes the established AIs less ideal for my (programming) use-case is that the technologies I use quickly evolve past whatever the published models "learn".

On the other hand, a lot of these frameworks and languages have relatively decent and detailed documentation.

Perhaps this is a naive question, but why can't I as a user just purchase "AI software" that comes with a large pre-trained model to which I can say, on my own machine, "go read this documentation and help me write this app in this next version of Leptos", and it would augment its existing model with this new "knowledge".
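The usual workaround today, short of retraining, is retrieval: chunk the new docs and paste the relevant pieces into the prompt at question time. A bare-bones sketch of the idea (keyword overlap standing in for the embedding search real tools use; `split_markdown` is a hypothetical helper):

```python
def retrieve(question: str, doc_chunks: list[str], k: int = 3) -> list[str]:
    # Naive keyword-overlap retriever; real tools use embedding similarity.
    q = set(question.lower().split())
    return sorted(doc_chunks, key=lambda c: -len(q & set(c.lower().split())))[:k]

def build_prompt(question: str, doc_chunks: list[str]) -> str:
    context = "\n---\n".join(retrieve(question, doc_chunks))
    return f"Answer using only this documentation:\n{context}\n\nQuestion: {question}"

# chunks = split_markdown("leptos_docs.md")  # hypothetical helper that splits docs into chunks
# print(build_prompt("How do signals work in the new version?", chunks))
```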

replies(1): >>42144812 #
342. ◴[] No.42142063{3}[source]
343. llm_trw ◴[] No.42142072{6}[source]
This is exactly the type of novel work that LLMs are good at. It's tedious and has annoying internal logic, but that logic is quite flat and there are a million examples to generalise from.

What they fail at is code with high cyclomatic complexity. Back in the llama 2 finetune days I wrote a script that would break each node in the control flow graph down into its own prompt using literate programming, and the results were amazing for the time. Using the same prompts I'd get correct code in every language I tried.
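A rough sketch of the gating idea (not the original script, and the McCabe-style count here is approximate): measure complexity first, and only decompose functions that exceed a threshold before prompting.

```python
import ast

def cyclomatic_complexity(source: str) -> int:
    # Rough McCabe-style count: 1 + number of branch points.
    branches = (ast.If, ast.For, ast.While, ast.ExceptHandler, ast.BoolOp)
    return 1 + sum(isinstance(n, branches) for n in ast.walk(ast.parse(source)))

src = """
def handler(x):
    if x > 0:
        for i in range(x):
            if i % 2 and i > 3:
                return i
    return -1
"""
# Routing rule: hand low-complexity code to the model whole, decompose
# anything above the threshold into per-branch prompts first.
print(cyclomatic_complexity(src))  # 5 for this toy function
```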

344. falcor84 ◴[] No.42142074{6}[source]
While ReLU is relatively new, AI researchers have been aware of the need for nonlinear activation functions and building multilayer perceptrons with them since the late 1960s, so I had assumed that's what you meant.
replies(1): >>42142428 #
345. og_kalu ◴[] No.42142076{5}[source]
>Sutskever, recently ex. OpenAI, one of the first to believe in scaling, now says it is plateauing.

Blind scaling sure (for whatever reason)* but this is the same Sutskever who believes in ASI within a decade off the back of what we have today.

* Not like anyone is telling us any details. After all, OpenAI and Microsoft are still trying to create a $100B data center.

In my opinion, there's a difference between scaling not working and scaling becoming increasingly infeasible. GPT-4 used something like 100x the compute of GPT-3 (same going from 2 to 3).

All the drips we've had of 5 point to ~10x of 4. Not small, but very modest in comparison.

>FWIW, GPT-2 and GPT-3 were about a year apart (2019 "Language models are Unsupervised Multitask Learners" to 2020 "Language Models are Few-Shot Learners").

Ah sorry I meant 3 and 4.

>Dario Amodei recently said that with current gen models pre-training itself only takes a few months (then followed by post-training, etc). These are not year+ training runs.

You don't have to be training models the entire time. GPT-4 was done training in August 2022 according to OpenAI and wouldn't be released for another 8 months. Why? Who knows.

replies(1): >>42142274 #
346. moffkalast ◴[] No.42142079{7}[source]
Well the problem with that approach is that LLMs are still both incredibly dumb and small, at least compared to the what, 700T params of a human brain? Can't compare the two directly, especially when one has a massive recall advantage that skews the perception of that. But there is still some intelligence under there that's not just memorization. Not much, but some.

So if you present a novel problem it would need to be extremely simple, not something that you couldn't solve when drunk and half awake. Completely novel, but extremely simple. I think that's testable.

replies(1): >>42142156 #
347. JohnMakin ◴[] No.42142139{5}[source]
> We don't have a rough understanding of the biological processes that cause this,

Yes we do. We know how neurons communicate, we know how they are formed, and we have great evidence and clues as to how this evolved and how our various neurological systems interact with the world. Is it a fully solved problem? No.

> unless you literally mean just the biological process and not how it actual impacts learning/intelligence.

Of course we have some understanding of this as well. There's tremendous bodies of study around this. We know which regions of the brain correlate to reasoning, fear, planning, etc. We know when these regions are damaged or removed what happens, enough to point to a region of the brain and say "HERE." That's far, far beyond what we know about the innards of LLM's.

> here's no evidence that we (brains) have achieved AGI, unless you tautologically define AGI as our brains.

This is extremely circular because the current definition(s) of AGI always define it in terms of human intelligence. Unless you're saying that intelligence comes from somewhere other than our brains.

Anyway, the brain is not like a LLM, in function or form, so this debate is extremely silly to me.

replies(1): >>42143143 #
348. hackinthebochs ◴[] No.42142154{7}[source]
Why is your criterion for "on the path towards AGI" so absolutist? For it to be on the path towards AGI and not simply AGI, it has to be deficient in some way. Why do the current failure modes tell you it's on the wrong path? Yes, it has some interesting failure modes. The failure mode you mention is in fact very similar to human failure modes. We very much are prone to substituting the expected pattern when presented with a 99% match to a pattern previously seen. They also have a lot of inhuman failure modes as well. But so what, they aren't human. Their training regimes are very dissimilar to ours, so we should expect some alien failure modes owing to this. This doesn't strike me as a good reason to think they're not on the path towards AGI.

Yes, LLMs aren't very good at reasoning and have weird failure modes. But why is this evidence that it's on the wrong path, and not that it just needs more development that builds on prior successes?

replies(1): >>42142540 #
349. chongli ◴[] No.42142156{8}[source]
It’s not fair to ask me to judge them based on their size. I’m judging them based on the claims of their vendors.

Anyway the novel problems I’m talking about are extremely simple. Basically they’re variations on the “farmer, 3 animals, and a rowboat” problem. People keep finding trivial modifications to the problem that fool the LLMs but wouldn’t fool a child. Then the vendors come along and patch the model to deal with them. This is what I mean by whack-a-mole.

Searle’s Chinese Room thought experiment tells us that enough games of whack-a-mole could eventually get us to a pretty good facsimile of reasoning without ever achieving the genuine article.

replies(1): >>42142295 #
350. adamrezich ◴[] No.42142165{4}[source]
There were no smartphones in 2000, so the Web was overvalued at that point in time... until we all started carrying the Web in our pockets in the form of a portable rectangle.

Given that this is the case, why can't this be analogously true of “AI” as well? There's plenty of reason to believe that we're hitting a wall, such that, to progress further, said wall must be overcome by means of one or more breakthroughs.

replies(2): >>42142322 #>>42144553 #
351. malfist ◴[] No.42142235{5}[source]
I was using an LLM to help spot passive voice in my documents and it told me "We're making" was passive and I should change it to "we are making" to make it active.

Leaving aside that "we're" and "we are" are the same, it is absolutely active voice.

replies(1): >>42142538 #
352. mvdtnz ◴[] No.42142249[source]
> The best engineering minds have been focused on scaling transformer pre and post training for the last three years because they had good reason to believe it would work, and it has up until now.

Or because the people running companies who have fooled investors into believing it will work can afford to pay said engineers life-changing amounts of money.

replies(1): >>42149769 #
353. HarHarVeryFunny ◴[] No.42142274{6}[source]
> After all, Open AI and Microsoft are still trying to create a 100B data center.

Yes - it'll be interesting to see if there are any signs of these plans being adjusted. Apparently Microsoft's first step is to build optical links between existing data centers to create a larger distributed cluster, which must be less of a financial commitment.

Meta seem to have an advantage here in that they have massive inference needs to run their own business, so they are perhaps making less of a bet by building out data centers.

354. moffkalast ◴[] No.42142295{9}[source]
Well that's true and has been pretty glaring, but they've needed to do that in cases where models seem to fail to grasp some concept across the board, and not in cases where they don't.

Like, every time an LLM gets something right we assume they've seen it somewhere in the training data, and every time they fail we presume they haven't. But that may not always be the case, it's just extremely hard to prove it one way or the other unless you search the entire dataset. Ironically the larger the dataset, the more likely the model is generalizing while also making it harder to prove if it's really so.

To give a human example, in a school setting you have teachers tasked with figuring out that exact thing for students. Sometimes people will read the question wrong with full understanding and fail, while other times they won't know anything and make it through with a lucky guess. If LLMs (and their vendors) have learned anything it's that confidently bullshitting gets you very far which makes it even harder to tell in cases where they aren't. Somehow it's also become ubiquitous to tune models to never even say "I don't know" because it boosts benchmark scores slightly.

355. dangw ◴[] No.42142303[source]
where the fuck is simonw in this thread

xd

356. jmward01 ◴[] No.42142322{5}[source]
'smartphones' needed a reason to exist, the internet provided that. I doubt we would have had them without it. AI will drive whole new product categories that didn't exist that will then transform our society even more.
357. vundercind ◴[] No.42142386{6}[source]
If you didn’t know what an airplane was, and saw one for the first time, you might wonder why it doesn’t flap its wings. Is it just not very good at being a bird yet? Is it trying to flap, but cannot? Why, there’s a guy over there with a company called OpenBird and he is saying all kinds of stuff about how bird-like they are. Where’s the flapping? I don’t see any pecking at seed, either. Maybe the engineers just haven’t finished making the flapping and pecking parts yet?

Then on learning how it works, you might realize flapping just isn’t something they’re built to do, and it wouldn’t make much sense if they did flap their wings, given how they work instead.

And yet—damn, they fly fast! That’s impressive, and without a single flap! Amazing. Useful!

At no point did their behavior change, but your ability to understand how and why they do what they do, and why they fail the ways they fail instead of the ways birds fail, got better. No more surprises from expecting them to be more bird-like than they are supposed to, or able to be!

And now you can better handle that guy over there talking about how powerful and scary these “metal eagles” (his words) are, how he’s working so hard to make sure they don’t eat us with their beaks (… beaks? Where?), they’re so powerful, imagine these huge metal raptors ruling the sky, roaming and eating people as they please, while also… trying to sell you airplanes? Actively seeking further investment in making them more capable? Huh. One begins to suspect the framing of these things as scary birds that (spooky voice) EVEN THEIR CREATORS FEAR FOR THEIR BIRD-LIKE QUALITIES (/spooky voice) was part of a marketing gimmick.

replies(1): >>42142564 #
358. dgfitz ◴[] No.42142404{3}[source]
LLMs use historic data to help create useful current data. It works well sometimes.

I find that a human is able to solve a P=NP situation, and an LLM can’t quite yet do that. When they can the game changes.

359. xanderlewis ◴[] No.42142428{7}[source]
It was a deliberately historical example.
360. alexashka ◴[] No.42142441{4}[source]
Because AGI is magic and LLMs are magicians.

But how do you know a magician that knows how to do card tricks isn't going to arrive at real magic? Shakes head.

361. dheera ◴[] No.42142470{3}[source]
Exactly, I think the current crop of models is capable of solving a lot of non-first-world problems. Many of them don't need full AGI to solve, especially if we start thinking outside Silicon Valley.
362. lobochrome ◴[] No.42142491[source]
Isn’t this just the expected delay from the respin of Blackwell?
363. tymscar ◴[] No.42142508{3}[source]
How often did you check?
364. glial ◴[] No.42142513[source]
I think self-consistency is a critical feature of LLMs or any AI that's currently missing. It's one of the core attributes of truth [1], in addition to the order and relationship of statements corresponding to the order and relationship of things in the world. I wonder if some kind of hierarchical language diffusion model would be a way to implement this -- where text is not produced sequentially, but instead hierarchically, with self-consistency checks at each level.

[1] https://en.wikipedia.org/wiki/Coherence_theory_of_truth
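A sketch of what the hierarchical idea might look like in code; `llm` and `contradicts` below are stand-ins (stubbed so the sketch runs), not real APIs.

```python
def llm(prompt: str) -> list[str]:
    # Stand-in for a model call; returns canned sub-points for the sketch.
    return [f"sub-point {i} of: {prompt[-40:]}" for i in range(3)]

def contradicts(child: str, parent: str) -> bool:
    # Stand-in for a consistency check (e.g. an entailment/NLI model).
    return False

def expand(node: str, depth: int) -> list[str]:
    # Generate top-down: expand each node, reject children that contradict
    # their parent, then recurse on the survivors.
    if depth == 0:
        return [node]
    children = [c for c in llm(f"Expand into sub-points: {node}") if not contradicts(c, node)]
    return [node] + [line for c in children for line in expand(c, depth - 1)]

# outline = expand("Thesis: statements must be checked against each other", 2)
```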

365. sdesol ◴[] No.42142538{6}[source]
In the process of developing my tool, there are only 5 models (the first 5 in my models dropdown list) that I would use as a writing aide. If you used any other model, it really is a crapshoot with how bad they can be.
replies(1): >>42145279 #
366. ◴[] No.42142540{8}[source]
367. dcl ◴[] No.42142543{3}[source]
That just makes it more likely to sample less likely outcomes from the same distribution. No real novelty.
368. hackinthebochs ◴[] No.42142564{7}[source]
The problem with this analogy is that we know what birds are and what they're constituted by. But we don't know what thinking is or what it is constituted by. If we wanted to learn about birds by examining airplanes, we would be barking up the wrong tree. On the other hand, if we wanted to learn about flight, we might reasonably look at airplanes and birds, then determine what the commonality is between their mechanisms of defying gravity. It would be a mistake to say "planes aren't flapping their wings, therefore they aren't flying". But that's exactly what people do when they dismiss LLMs being presently or in the future capable of thinking because they are made up of statistics, matrix multiplication, etc.
369. purple-leafy ◴[] No.42142581[source]
Doesn’t sound cutting edge at all? Every man and his dog is doing a similar process
370. tippytippytango ◴[] No.42142591[source]
There’s only so much you can do when you train on the data instead of the processes that created that data.
371. Tagbert ◴[] No.42142611{5}[source]
Glib but the reality is that there are lots of cases where you can use an AI in writing but don’t need to entrust it with the whole job blindly.

I mostly use AIs in writing as a glorified grammar checker that sometimes suggests alternate phrasing. I do the initial writing and send it to an AI for review. If I like the suggestions I may incorporate some. Others I ignore.

The only times I use it to write is when I have something like a status report and I’m having a hard time phrasing things. Then I may write a series of bullet points and send that through an AI to flesh it out. Again, that is just the first stage and I take that and do editing to get what I want.

It’s just a tool, not a creator.

replies(1): >>42144444 #
372. Aeolun ◴[] No.42142621{3}[source]
If the last 50 commits were reviewed by an AI and it took that long for an issue to happen, I’d immediately mandate that all PRs are reviewed by an AI.
replies(1): >>42142743 #
373. Agingcoder ◴[] No.42142644{7}[source]
See the arXiv paper linked just above
374. wokwokwok ◴[] No.42142654{3}[source]
> even though they're not actually true for anything but the most trivial hello-world types of problems.

Um.

All the parent post said was:

> then try to find similar code on the web, you usually will.

Not identical code. Similar code.

I think you're really stretching the domain of plausibility to suggest that any code you write is novel enough that you can't find 'similar' code on the internet.

To suggest that code generated from a corpus is not going to be 'similar' to the code in the corpus is just factually and unambiguously false.

Of course, it depends on what you interpret 'similar' to mean; but I think it's not unfair to say a lot of code is composed of smaller parts of code that is extremely similar to other examples of code on the internet.

Obviously you're not going to find an example similar to your entire code base; but if you're using, for example, copilot where you generate many small snippets of code... welll....

replies(1): >>42142676 #
375. mrandish ◴[] No.42142661[source]
> An LLM hardly seems like something that will lead to self-awareness.

Interesting essay enumerating reasons you may be correct: https://medium.com/@francois.chollet/the-impossibility-of-in...

376. jmward01 ◴[] No.42142664{4}[source]
I'm not an Apple fan (as I type on a Mac that I am forced to use) but I gotta applaud their push for power efficiency. NVIDIA actually -does- have a few cards that really improve power efficiency, but then they generally hamstring them with a lack of memory. NVIDIA is really good at making their high-end cards the only viable choice, but I think that will backfire on them as people like me, who value quiet, cool and efficient over 25% faster inference, start taking any viable alternative that comes out.
replies(1): >>42143898 #
377. dmd ◴[] No.42142676{4}[source]
Ok, yes. There are other pieces of code on the internet that use a for loop or an if statement.

By that logic what you wrote was also composed that way. After all, you’ve used all words that have been used before! I bet even phrases like “that is extremely similar” and “generated from a corpus” and “unambiguously false”.

Again, I really find it hard to believe that anyone could make an argument like the one you’re making who has actually used these tools in their work for hundreds of hours, vs. for a couple minutes here or there with made up problems.

replies(1): >>42143823 #
378. rapjr9 ◴[] No.42142698[source]
Indeed, if you're thinking about AI polluting the data and replacing humans. However, it also seems likely in the near term that training will go to the source because of this: increasingly, humans will directly train AIs, as the robotics and self-driving car systems are doing, instead of training off the indirect data people create (watching someone paint rather than scanning paintings). So in essence we'll be training our replacements to take our tasks/jobs. Small tasks at first, but increasing in complexity over time. Someday no one may know how to drive a car anymore (or be allowed to, for safety). Later on no one may know how to write computer code (or be allowed to, for security reasons). Learning in each area mastered by AI will stop and never progress further, unless AI can truly become creative. Or perhaps (fewer and fewer) people will only work on new problems that require creativity. There are long-term risks to humanity's adaptability in this scenario. People would probably take those risks for the short-term gains.
replies(1): >>42144469 #
379. mrandish ◴[] No.42142701{3}[source]
> why are so many experts so worried about AGI?

FYI, I find this line of reasoning to be unconvincing both logically and by counter-example ("why are so many experts so worried about the Y2K bug?")

Personally, I don't find AI foom or AI doom predictions to be probable but I do think there are more convincing arguments for your position than you're making here.

replies(1): >>42142822 #
380. Der_Einzige ◴[] No.42142714{3}[source]
Sora was just one of the many…
replies(1): >>42142744 #
381. bloppe ◴[] No.42142743{4}[source]
There's a difference between an issue being introduced and being noticed.
replies(1): >>42145460 #
382. hatefulmoron ◴[] No.42142744{4}[source]
Your best example is something that doesn't even do the things that GPT-4 does, isn't available to use, and has seemingly only produced a few clips (some of which were edited).

If it were one of many, I think you would name something better.

383. layer8 ◴[] No.42142760{6}[source]
Awareness is exhibited in behavior. It's exactly due to the behavior we observe from LLMs that we don't ascribe awareness to them. I agree that it's difficult to define, and it's also not binary, but it's behavior we'd like AI to have and which LLMs are quite lacking.
replies(1): >>42178012 #
384. rco8786 ◴[] No.42142765[source]
I think there’s a long way to go also. I think people expected that AI would eventually be like a “point and shoot” where you would tell it to go do some complicated task, or sillier yet, take over someone’s entire job.

More realistically it’s like a really great sidekick for doing very specific mundane but otherwise non deterministic tasks.

I think we’ll start to see AI permeate into nearly every back office job out there, but as a series of tools that help the human work faster. Not as one big brain that replaces the human.

385. kristianp ◴[] No.42142767{5}[source]
> It keeps complaining that GitHub is spelled like Github, when it isn't

I feel like this is unfair. That's the only thing it got wrong? But we want it to pass all of our evals, even ones that perhaps a dictionary would be better at solving? Or even an LLM augmented with a dictionary.

replies(2): >>42143251 #>>42143364 #
386. foxglacier ◴[] No.42142786{5}[source]
> the “behavior” breaks down in ways that would be surprising from a human who’s actually paying as much attention as they’d need to be to have been interacting the way they had been until things went wrong.

That's an interesting angle. Though of course we're not surprised by human behavior because that's where our expectations of understanding come from. If we were used to dealing with perfectly-correctly-understanding super-intelligences, then normal humans would look like we don't understand much and our deliberate thinking might be no more accurate than the super-intelligence's absent-minded automatic responses. Thus we would conclude that humans are never really thinking or understanding anything.

I agree that default LLM output makes them look like they're thinking like a human more than they really are. I think mistakes are shocking more because our expectation of someone who talks confidently is that they're not constantly revealing themselves to be an obvious liar. But if you take away the social cues and just look at the factual claims they provide, they're not obviously not-understanding vs humans are-understanding.

387. NateEag ◴[] No.42142789[source]
> Go back a few decades and you'd see articles like this about CPU manufacturers struggling to improve processor speeds and questioning if Moore's Law was dead. Obviously those concerns were way overblown.

Am I missing something? I thought general consensus was that Moore's Law in fact did die:

https://cap.csail.mit.edu/death-moores-law-what-it-means-and...

The fact that we've still found ways to speed up computations doesn't obviate that.

We've mostly done that by parallelizing and applying different algorithms. IIUC that's precisely why graphics cards are so good for LLM training - they have highly-parallel architectures well-suited to the problem space.

All that seems to me like an argument that LLMs will hit a point of diminishing returns, and maybe the article gives some evidence we're starting to get there.

replies(1): >>42143562 #
388. layer8 ◴[] No.42142791{6}[source]
Being aware and not admitting are two different things, though. When you confront an LLM with a limitation, it will generally admit having it. That doesn't mean that it exhibits any awareness of having the limitation in contexts where the limitation is glaringly relevant, without first having confronted it with it. This is in itself a limitation of LLMs: In contexts where it should be highly obvious, they don't take their limitations into account without specific prompting.
389. rco8786 ◴[] No.42142797{6}[source]
There’s another option here though. Human supervised tasks.

There’s a whole classification of tasks where a human can look at a body of work and determine whether it’s correct or not in far less time than it would take for them to produce the work directly.

As a random example, having LLMs write unit tests.
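As a sketch of that division of labor (the prompt text and the `client.chat` call are illustrative, not a specific API): the model proposes the tests, and a human reviews and runs them, which is usually much faster than writing them from scratch.

```python
def unit_test_prompt(function_source: str) -> str:
    # Ask for tests a reviewer can quickly verify and run.
    return (
        "Write pytest unit tests for the function below. "
        "Cover normal cases, edge cases, and invalid input. "
        "Use one assertion per behavior and descriptive test names.\n\n"
        + function_source
    )

# proposed = client.chat(unit_test_prompt(source_of_parse_invoice))  # hypothetical client and source
# A human reviews the assertions, deletes anything wrong, then runs: pytest -q
```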

replies(1): >>42148431 #
390. tick_tock_tick ◴[] No.42142802{3}[source]
I ride in self-driving cars basically once a week in SF (Waymo). It has always felt safer than an Uber and makes way less risky maneuvers.
replies(1): >>42144609 #
391. gchamonlive ◴[] No.42142809[source]
We should put a model in an actual body and let it loose in the world to build from experiences. Inference is costly though, so the robot would interact during one period and update its model during another, flushing the context window (short-term memory) into its training set (long-term memory).
replies(2): >>42142858 #>>42142901 #
392. Bjorkbat ◴[] No.42142811[source]
I agree that existing benchmarks are no longer useful now that there's basically nothing left in them that seems to stump LLMs.

But when I hear that models are failing to meet expectations, I imagine what they're saying is that the researchers had some sort of eval in mind with room to grow and a target, and that the model in question failed to hit the target they had in mind.

Honestly, the problem with sentiments like these on Twitter is that you can't tell if they're being sincere or just making a snarky, useless remark. Probably a mix of both.

393. niobe ◴[] No.42142817{3}[source]
> There is SO MUCH we haven't figured out about how to use them yet.

I mean, it's pretty clear to me they're a potentially great human-machine interface, but trying to make LLMs - in their current fundamental form - a reliable computational tool... well, at best it's an expensive hack, and it's just not the right tool for the job.

I expect the next leap forward will require some orthogonal discovery and lead to a different kind of tool. But perhaps we'll continue to use LLMs as we know them now for what they're good at - language.

replies(1): >>42143045 #
394. bbor ◴[] No.42142822{4}[source]
Fair enough, well put to both of these responses! I’m certainly biased, and can see how the events that truly scare me (after already assessing the technology on my own and finding it to be More Important Than Fire Or Electricity) don’t make very convincing arguments on their own.

For us optimistic doomers, the AI conversation seems similar to the (early-2000s) climate change debate; we see a wave of dire warnings coming from scientific experts that are all-too-often dismissed, either out of hand due to their scale, or on the word of an expert in an adjacent-ish field. Of course, there’s more dissent among AI researchers than there was among climate scientists, but I hope you see where I’m coming from nonetheless — it’s a dynamic that makes it hard to see things from the other side, so-to-speak.

At this point I’ve pretty much given up convincing people on HackerNews, it’s just cathartic to give my piece and let people take it or leave it. If anyone wants to bring the convo down from industry trends into technical details, I’d love to engage tho :)

replies(1): >>42144209 #
395. vidarh ◴[] No.42142842{5}[source]
I do contract work on fine-tuning efforts, and I can tell you that most humans aren't designed to be public-facing either.

While LLMs do plenty of awful things, people make the most incredibly stupid mistakes too, and that is what LLMs need to be benchmarked against. The problem is that most of the people evaluating LLMs are better educated than most and often smarter than most. When you see any quantity of prompts input by a representative sample of LLM users, you quickly lose all faith in humanity.

I'm not saying LLMs are good enough. They're not. But we will increasingly find that there are large niches where LLMs are horrible and error prone yet still outperform the people companies are prepared to pay to do the task.

In other words, on one hand you'll have domain experts becoming expert LLM-wranglers. On the other hand you'll have public-facing LLMs eating away at tasks done by low paid labour where people can work around their stupid mistakes with process or just accepting the risk, same as they currently do with undertrained labor.

replies(3): >>42143411 #>>42143886 #>>42145953 #
396. ehnto ◴[] No.42142846{8}[source]
There have been off the shelf solutions for so many common software use cases, for decades now. I think the reason we still see so much custom software is that the devil is always in the details, and strict details are not an LLMs strong suit.

LLMs are, in my opinion, hamstrung at the starting gate with regard to replacing software teams, as they would need to understand complex business requirements perfectly, which we know they cannot. Humans can't either. It takes a business requirements/integration logic/code generation pipeline, and I think the industry is focused on code generation and not the integration step.

I think there needs to be a re-imagining of how software is built by and for interaction with AI if it were ever to take over from human software teams, rather than trying to get AI to reflect what humans do.

replies(1): >>42145940 #
397. layer8 ◴[] No.42142851{6}[source]
I'm assuming the "Wait" sentence is from the user. What I mean is that when humans say something, they also tend to have a view (maybe via the famous mirror neurons) of how this now sounds to the other person. They may catch themselves while speaking, changing course mid-sentence, or adding another sentence to soften or highlight something in the previous sentence, or maybe correcting or admitting some aspect after the fact. LLMs don't exhibit such an inner feedback loop, in which they reconsider the effect of the ouput they are in the process of generating.

You won't get an LLM outputting "wait, that's not right" halfway through their original output (unless you prompted them in a way that would trigger such a speech pattern), because no re-evaluation is taking place without further input.

replies(1): >>42177920 #
398. bbor ◴[] No.42142858[source]
There are people trying this, both in simulated spaces and real ones - look into the “embodiment” camp if interested to see how they’re doing! There’s many experts who think AGI is unreachable without this, and I think the unexpected intuitive capabilities of LLMs are great support for that thesis, albeit in a non-spatial way.

Kant describes two human “senses”: the intensive sense of time, and the extensive sense of space. In this paradigm, spatial experience would be inextricably tied to all forms of logic, because it helps train the cognitive faculties that are intrinsically tied to all complex (discriminative?) thought.

399. levocardia ◴[] No.42142881[source]
My interpretation of that tweet is "there is no DATA wall" meaning "we have so much more data we can ingest: all of youtube, all of spotify, all of twitch, every real-time webcam feed on the internet, RL agents playing every video game on steam, and we can extract so much more learning per unit data than we are now" which seems plausible enough to me.
400. jfoster ◴[] No.42142901[source]
That seems to be what Tesla is planning to do with Optimus.
401. RayVR ◴[] No.42142919[source]
I am definitely not an expert, nor do I have inside information on the directions of research that these companies are exploring.

Yes, existing LLMs are useful. Yes, there are many more things we can do with this tech.

However, existing SOTA models are large, expensive to run, still hallucinate, fail simple logic tests, fail to do things a poorly trained human can do on autopilot, etc.

The performance of LLMs is extremely variable, and it is hard to anticipate failure.

Many potential applications of this technology will not tolerate this level of uncertainty. Worse solutions with predictable and well understood shortcomings will dominate.

402. BobaFloutist ◴[] No.42142935{3}[source]
There already exists a robot that does the dishes, it's called a dishwasher.
replies(1): >>42145208 #
403. Bjorkbat ◴[] No.42142943[source]
It's kind of, I don't know, "weird", observing how there's all these news outlets reporting on how essentially every up-and-coming model has not performed as expected, while all the employees at these labs haven't changed their tune in the slightest.

And there's a number of reasons why, the most likely being that they've found other ways to get improvements out of AI models, so diminishing returns on training aren't that much of a problem. Or, maybe the leakers are lying, but I highly doubt that considering the past record of news outlets reporting on accurate leaked information.

Still though, it's interesting how basically every frontier lab created a model that didn't live up to expectations, and every employee at these labs on Twitter has continued to vague-post and hype as if nothing ever happened.

It's honestly hard to tell whether or not they really know something we don't, or if they have an irrational exuberance for AGI bordering on cult-like, and they will never be able to mentally process, let alone admit, that something might be wrong.

404. jeswin ◴[] No.42142944[source]
In my view, an escape hatch if we are truly stuck would be radical speed-ups (like Cerebras) in compute time. If we get outputs in milliseconds instead of seconds and at much lower costs, it would make backtracking viable. This won't allow AGI, but can make a new class of apps possible.
405. dr_kiszonka ◴[] No.42142946{4}[source]
I like the idea of context editing and threaded conversations. I think I have seen some alternative UIs on HN that support branching.
replies(1): >>42144810 #
406. mrandish ◴[] No.42142950[source]
Based on recent rumblings about AI scaling hitting a wall, of which this article - in a high-reach financial publication - is perhaps the most visible, I'm considering increasing my estimated probability we might see a major market correction next year (and possibly even a bubble collapse). (example: "CONFIRMED: LLMs have indeed reached a point of diminishing returns" https://garymarcus.substack.com/p/confirmed-llms-have-indeed...).

To be clear, I don't think a near-term bubble collapse is likely but I'm going from 3% to maybe ~10%. Also, this doesn't mean I doubt there's real long-term value to be delivered or money to be made in AI solutions. I'm thinking specifically about those who've been speculatively funding the massive build out of data centers, energy and GPU supply expecting near-term demand to continue scaling at the recent unprecedented rates. My understanding is much of this is being funded in advance of actual end-user demand at these elevated levels and it is being funded either by VC money or debt by parties who could struggle to come up with the cash to pay for what they've ordered if either user demand or their equity value doesn't continue scaling as expected.

Admittedly this scenario assumes that these investment commitments are sufficiently speculative and over-committed to create bubble dynamics and tipping points. The hypothesis goes like this: the money sources who've over-committed to lock up scarce future supply in the expectation it will earn outsize returns have already started seeing these warning signs of efficiency and/or progress rates slowing which are now hitting mainstream media. Thus it's possible there is already a quiet collapse beginning wherein the largest AI data center GPU purchasers might start trying to postpone future delivery schedules and may soon start trying to downsize or even cancel existing commitments or try to offload some of their future capacity via sub-leasing it out before it even arrives, etc. Being a dynamic market, this could trigger a rapidly snowballing avalanche of falling prices for next-year AI compute (which is already bought and sold as a commodity like pork belly futures).

Notably, there are now rumors claiming some of the largest players don't currently have the cash to pay for what they've already committed to for future delivery. They were making calculated bets they'd be able to raise or borrow that capital before payments were due. Except if expectation begins to turn downward, fresh investors will be scarce and banks will reprice a GPU's value as loan collateral down to pennies on the dollar (shades of the 2009 financial crisis where the collateral value of residential real estate assets was marked down). As in most bubbles, cheap credit is the fuel driving growth and that credit can get more expensive very quickly - which can in turn trigger exponential contagion effects causing the bubble to pop. A very different kind of "Foom" than many AI financial speculators were betting on! :-)

So... in theory, under this scenario sometime next year NVidia/TSMC and other top-of-supply-chain companies could find themselves with excess inventories of advanced node wafers because a significant portion of their orders were from parties who no longer have access to the cheap capital to pay for them. And trying to sue so many customers for breach can take a long time and, in a large enough sector collapse, be only marginally successful in recouping much actual cash.

I'd be interested in hearing counter-arguments (or support) for the impossibility (or likelihood) of such a scenario.

replies(2): >>42143601 #>>42150091 #
407. slashdave ◴[] No.42142955{3}[source]
> Everyone was mostly focused on the scaling laws because that worked extremely well

Also because it was easy, and expense was not the first concern.

408. BobaFloutist ◴[] No.42142958{3}[source]
Yes, because that already happened.
409. slashdave ◴[] No.42142962{4}[source]
Moore's law is doomed. At some point you start reaching the level of individual atoms. This is just physics.
replies(2): >>42143378 #>>42144700 #
410. slashdave ◴[] No.42142983[source]
> Tesla are all making rapid progress on functionality

The lack of progress with self driving seems to indicate that Tesla has a serious problem with scaling. The investment in enormous compute resources is another red flag (if you run out of ideas, just use brute force). This points to a fundamental flaw in model architecture.

411. Lonestar1440 ◴[] No.42143001[source]
No, we have not even scratched the surface of what current-gen LLMs can do for an organization which puts the correct data into them.

If indeed the "GPT 5!" arms race has calmed down, it should help everyone focus on the possible, their own goals, and thus what AI capabilities to deploy.

Just as there won't be a "Silver Bullet" next-gen model, the point about Correct Data In is also crucial. Nothing is 'free', not even if you pay a vendor or integrator. You, the decision-making organization, must decide whether to dedicate focus to putting data into your new AI systems or not.

It will look like the dawn of original IBM, and mechanical data tabulation, in retrospect once we learn how to leverage this pattern to its full potential.

412. hamburga ◴[] No.42143008[source]
I think there's a ton to be tapped based on the current state of the art.

As a developer, I'm making much more progress using the SOTA (Claude 3.5) as a Socratic interrogator. I'm brainstorming a project, give it my current thoughts, and then ask it to prompt me with good follow-up questions and turn general ideas into a specific, detailed project plan, next steps, open questions, and work log template. Huge productivity boost, but definitely not replacing me as an engineer. I specifically prompt it to not give me solutions, but rather, to just ask good questions.
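
A rough sketch of the prompt side, assuming the Anthropic Python SDK (the system prompt wording and model id are just illustrative, not what I literally use):

    import anthropic

    client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

    SOCRATIC = (
        "You are a Socratic interrogator for a software project. "
        "Never propose solutions. Reply only with follow-up questions that "
        "expose gaps, risks, and unstated assumptions in my plan."
    )

    reply = client.messages.create(
        model="claude-3-5-sonnet-20241022",  # whichever Sonnet snapshot you're on
        max_tokens=500,
        system=SOCRATIC,
        messages=[{"role": "user", "content": "Here's my current thinking on the project: ..."}],
    )
    print(reply.content[0].text)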

I've also used Claude 3.5 as (more or less) a free arbitrator. Last week, I was in a disagreement with a colleague, who was clearly being disingenuous by offering to do something she later reneged on, and evading questions about follow up. Rather than deal with organizational politics, I sent the transcript to Claude for an unbiased evaluation, and it "objectively" confirmed what had been frustrating me. I think there's a huge opportunity here to use these things to detect and call out obviously antisocial behavior in organizations (my CEO is intrigued, we'll see where it goes). Similarly, in our legal system, as an ultra-low-cost arbitrator or judge for minor disputes (that could of course be appealed to human judges). Seems like the level of reasoning in Claude 3.5 is good enough for that.

My mental model is always "low-risk search". https://muldoon.cloud/2023/10/29/ai-commandments.html

413. Dr_Birdbrain ◴[] No.42143016[source]
I don’t know how to square this with the recent statement by Dario Amodei (Anthropic CEO) on the Lex Fridman podcast saying that in his opinion the scaling hypothesis still has plenty of room to run.
replies(1): >>42143037 #
414. soheil ◴[] No.42143033[source]
We have not exhausted what html can do either. LLMs not getting smarter is orthogonal to their currently unexplored search space.
415. avs733 ◴[] No.42143037[source]
Hype gonna hype. I'm not saying he is wrong; I'm saying his opinion would be the same whether it's true or not, because his value depends on it being his opinion.
416. simonw ◴[] No.42143045{4}[source]
One of the biggest challenges in learning how to use and build on LLMs is figuring out how to work productively with a technology that - unlike most computers - is inherently unreliable and non-deterministic.

It's possible, but it's not at all obvious and requires a slightly skewed way of looking at them.

replies(1): >>42143433 #
417. ChildOfChaos ◴[] No.42143048[source]
Their contract with Microsoft allows them to break it when they achieve AGI, but doesn't fully define AGI.

Watch this be a power move to break from Microsoft's investment when ready rather than true AGI. Sam is laying the foundations here.

418. nomendos ◴[] No.42143050[source]
To clarify: in summary, so far LLMs can do a bit more than the inputs used for training. Example: https://dynomight.net/chess/ - and some coding solutions are a bit better than any single input alone - although if the solution requires more than "a bit more", LLMs start to hallucinate (spin their wheels). Time will tell whether LLMs can jump this "a bit more" barrier. (I can't tell for sure yet, but current knowledge and my NL tell me that if I had to place a bet, it would be that a new approach/design is needed.)
419. dr_kiszonka ◴[] No.42143106{3}[source]
Would you have any suggestions on how to play with the internals of these open models? I don't understand LLMs well, and would love to spend some time experimenting, but I don't know where to start. Are any projects more appropriate for neophytes?
420. SpicyLemonZest ◴[] No.42143107{4}[source]
I'm not sure what point you're making. It's true that people, including myself, were dismissive of cryptocurrency a few years ago; I think it's clear at this point that we were wrong, and it's not actually the case that the industry is a Ponzi scheme propped up by scammers like FTX.
421. woopwoop ◴[] No.42143108[source]
That's funny, because to me these headlines about how deep learning is over-hyped and hitting the wall remind me of headlines from ten years ago about how... deep learning is over-hyped and hitting the wall.
replies(1): >>42143277 #
422. summerlight ◴[] No.42143117[source]
I guess this is somewhat expected? The current frontier models probably already have exhausted most of the entropy in the training data accumulated over decades and the new training data is very sparse. And the current mainstream architectures are not capable of sophisticated searching and planning, essential aspects for generating new entropy out of thin air. o1 was an interesting attempt to tackle this problem, but we probably still have a long way to go.
423. kian ◴[] No.42143130{4}[source]
This is why I exclusively use the API to 'chat' with GPT -- complete control over the context presented.
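
Something like this, assuming the OpenAI Python SDK (model name and trimming rule are arbitrary) - the messages list is the whole context, so you decide exactly what the model sees each turn:

    from openai import OpenAI

    client = OpenAI()
    history = [{"role": "system", "content": "You are a terse assistant."}]

    def ask(prompt: str, keep_last: int = 6) -> str:
        history.append({"role": "user", "content": prompt})
        # The model only ever sees what you put here: system prompt + last N turns.
        context = history[:1] + history[1:][-keep_last:]
        resp = client.chat.completions.create(model="gpt-4o", messages=context)
        answer = resp.choices[0].message.content
        history.append({"role": "assistant", "content": answer})
        return answer
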
424. kenjackson ◴[] No.42143143{6}[source]
> Yes we do. We know how neurons communicate, we know how they are formed, we have great evidence and clues as to how this evolved and how our various neurological symptoms are able to interact with the world. Is it a fully solved problem? no.

It's not even close to fully solved. We're still figuring out basic things like the purpose of dreams. We don't understand how memories are encoded or even things like how we process basic emotions like happiness. We're way closer to understanding LLMs than we are the brain, and we don't understand LLMs all that well still either. For example, look at the Golden Gate Bridge work for LLMs -- we have no equivalent for brains today. We've done much more advanced introspection work on LLMs in this short amount of time than we've done on the human brain.

425. airstrike ◴[] No.42143148[source]
The gap between the virtual world of software and the brutally uncompromising nature of physical reality is wider than most people seem to accept.

It's almost like saying "we've already visited every place on Earth, surely Mars is just around the corner now"

426. GiorgioG ◴[] No.42143149[source]
It’s about time the hype starts to die down. LLMs are brilliant for small bits of grunt work in software. They are not, however, doing any actual reasoning.
427. Havoc ◴[] No.42143150[source]
The new Gemini just hit some good benchmarks.

This smells like it’s mostly based on OAI having a bit of bad luck with its next model rather than a fundamental slowdown / barrier.

They literally just made a decent-sized leap with o1.

replies(1): >>42143222 #
428. torguyvg46787 ◴[] No.42143183{3}[source]
The approaches are very limited, and it's essentially artificial artificial intelligence (and needs a lot of human teleop demos).

At CoRL last week, the progress has noticeably plateaued. Roboticists notably were pessimistic that scaling laws will apply to robotics because of the embodiment issues.

429. soheil ◴[] No.42143184{3}[source]
How is self-driving a 2D problem when you navigate a 3D world (please do visit hilly San Francisco sometime)? Not to mention additional dimensions like depth and velocity vectors, among others.
replies(1): >>42144401 #
430. Roger-L ◴[] No.42143212[source]
Yes, I personally think that training an "all-knowing" artificial intelligence is not as good as training n "experts" in a single field.
431. Bjorkbat ◴[] No.42143222[source]
Not meeting expectations != not better than the previous models.

The Information reporting was a bit more clear on this. Orion is better than GPT-4, it's just that they were expecting a leap in capabilities comparable to what we saw going from GPT-3 to GPT-4. In other words, they were expecting essentially a GPT-5, and Orion wasn't that good.

432. MBCook ◴[] No.42143251{6}[source]
Does it matter?

As a user I want it to be right, even if that contradicts the normal rules of the language.

433. echelon ◴[] No.42143277{3}[source]
That was before people could generate animation and music.
replies(1): >>42159607 #
434. sky2224 ◴[] No.42143286[source]
> do we honestly feel like we've exhausted the options for delivering value on top of the current generation of LLMs?

I know we absolutely have not, but I think we have reached the limit in terms of the Chatbot experience that ChatGPT is. For some reason the industry keeps trying to force the chatbot interface to do literally everything to the point that we now have inflated roles like "Prompt Engineers". This is to say that people suck at knowing what they want off the rip, and LLMs can't help with that if they're not integrated in technology in such a way where a solid foundation is built to allow the models to generate good output.

LLMs and other big data models have incredible potential for things like security, medicine, and the power industry to name a few fields. I mean I was recently talking with a professor about his research in applying deep learning to address growing security concerns in cars on the road.

The application is far from reaching the ceiling.

435. fsndz ◴[] No.42143287[source]
Sam Altman might be wrong then?

Learning from data is not enough; there is a need for the kind of system-two thinking we humans develop as we grow. It is difficult to see how deep learning and backpropagation alone will help us model that. For tasks where providing enough data is sufficient to cover 95% of cases, deep learning will continue to be useful in the form of 'data-driven knowledge automation.' For other cases, the road will be much more challenging. https://www.lycee.ai/blog/why-sam-altman-is-wrong

replies(1): >>42143344 #
436. MBCook ◴[] No.42143293{4}[source]
CAN anything be done? At a very low level they’re basically designed to hallucinate text until it looks like something you’re asking for.

It works disturbingly well. But because it doesn’t have any actual intrinsic knowledge it has no way of knowing when it made a “good“ hallucination versus a “bad“ one.

I’m sure people are working on piling things on top to try and influence what gets generated, or to catch and move away from errors other layers spot… but how much effort and how many resources will be needed to make it “good enough“ that people don’t worry about this anymore?

In my mind the core problem is people are trying to use these for things they’re unsuitable for. Asking fact-based questions is asking for trouble. There isn’t much of a wrong answer if you wanted to generate a bedtime story or a bunch of test data that looks sort of like an example you give it.

If you ask it to find law cases on a specific point you’re going to raise a judge‘s ire, as many have already found.

437. nutanc ◴[] No.42143294[source]
Let's keep aside the hype. Let's define more advanced AI. With current architectures, this basically means better copying machines (don't mean this in a bad way and don't want a debate on this. This is just my opinion based on my usage). Basically everything on the Internet has been crammed into the weights and the companies are finding it hard to do two things:

1. Find more data.

2. Make the weights capture the data and reproduce it.

In that sense we have reached a limit. So in my opinion we can do a couple of things.

1. App developers can understand the limits and build within the limits.

2. Researchers can take insights from these large models and build better AI systems with new architectures. It's ok to say transformers have reached a limit.

438. throw310822 ◴[] No.42143325{7}[source]
Should copilot be renamed to "designated driver"?
439. fnordpiglet ◴[] No.42143330{3}[source]
We have built quite a few highly useful LLM applications in my org that have reduced cost and improved outcomes in several domains - fraud detection, credit analysis, customer support, and a variety of other spaces. By and large they operate as cognitive load reducers but also handle through automation the vast majority of work, since in our uses false negatives are not as bad as false positives but the majority of things we analyze are not true positives (99.999%+). As such the LLMs do a great job at anomaly detection and allow us to do tasks that would be prohibitively expensive with humans, whose false positive and negative rates are considerably higher than the LLMs'.

I see these statements often here about “I’ve never seen an effective commercial use of LLMs,” which tells me you aren’t working with very creative and competent people in areas that are amenable to LLMs. In my professional network beyond where I work now I know at least a dozen people who have successful commercial applications of LLMs. They tend to be highly capable people able to build the end-to-end tool chains necessary (which is a huge gap) and understand how to compose LLMs in hierarchical agents with effective guard rails. Most ineffectual users of LLMs want them to be lazy buttons that obviate the need to think. They’re not - like any sufficiently powerful tool they require thought up front and are easy to use wrong. This will get better with time as patterns and tools emerge to get the most use out of them in a commercial setting. However, the ability to process natural language and use an emergent (if not actual) abductive reasoning is absurdly powerful and was not practically possible 4 years ago - the assertion that such an amazing capability in an information or decisioning system is not commercially practical is on its face absurd.
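
To make the guard-rail idea concrete, a rough sketch (not our actual system; the model, prompt, and threshold are all illustrative): a cheap first-pass call scores each case, and anything above a risk threshold is escalated to a human rather than closed by the model.

    import json
    from openai import OpenAI

    client = OpenAI()

    TRIAGE_PROMPT = (
        "Score this case for fraud risk. Reply with JSON only: "
        '{"risk": <number between 0 and 1>, "reason": "<one sentence>"}'
    )

    def triage(case_text: str) -> dict:
        resp = client.chat.completions.create(
            model="gpt-4o-mini",  # assumption: a cheap model is fine for a first pass
            response_format={"type": "json_object"},
            messages=[
                {"role": "system", "content": TRIAGE_PROMPT},
                {"role": "user", "content": case_text},
            ],
        )
        return json.loads(resp.choices[0].message.content)

    def route(case_text: str) -> str:
        score = triage(case_text)
        # Guard rail: the model never closes a risky case on its own;
        # the threshold is set so false negatives stay rare.
        if score["risk"] >= 0.2:
            return "escalate to analyst: " + score["reason"]
        return "auto-close"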

replies(3): >>42143387 #>>42143440 #>>42143506 #
440. asdfman123 ◴[] No.42143344[source]
If Sam Altman concluded that AI is reaching its limits, it probably wouldn't be a very good strategic decision for him to say it.
replies(1): >>42143570 #
441. sdesol ◴[] No.42143364{6}[source]
My reason for commenting wasn't to say LLMs suck, but rather that we need to get over the honeymoon phase. The fact that GPT-4o (one of the most advanced, if not the most advanced when it comes to non-programming tasks) hallucinated "Github" as the input should give us pause.

LLM has its place and it will forever change how we think about UX and other things, but we need to realize you really can't create a public-facing solution without significant safeguards, if you don't want egg on your face.

replies(1): >>42145712 #
442. XenophileJKO ◴[] No.42143378{5}[source]
You are missing the economic component: it isn't just how small a transistor can be, it was really about how many transistors you can get for your money. So even when we reach terminal density, we probably haven't reached terminal economics.
replies(1): >>42144282 #
443. andai ◴[] No.42143387{4}[source]
>compose LLMs in hierarchical agents with effective guard rails

Could you elaborate? Is this related to the "teams of specialized LLMs" concept I saw last year when Auto-GPT was getting a lot of hype?

444. tiahura ◴[] No.42143397[source]
For law, I use both and find that neither is clearly superior. I’ll often pick one to first draft, and then feed to the other for suggestions and my edits.
445. sdesol ◴[] No.42143411{6}[source]
> While LLMs do plenty of awful things, people make the most incredibly stupid mistakes too

I am 100% not blaming the LLM, but rather VCs and the media for believing the VCs. The sooner we get over the hype and people realize there isn't a golden goose, the better off we will be. Once we accept that LLMs are not perfect and not what we are being sold, I believe we will find a place for them that will make a huge impact. Unfortunately for OpenAI and others, I don't believe they will play as big of a role as they would like us to believe they will.

446. XenophileJKO ◴[] No.42143433{5}[source]
This really reminds me of a trend years ago to create probabilistic programming constructs. I think it was just a trend way ahead of its time. Typical software engineers tend to be very ill-suited to think in probabilities and how to build reasonably reliable systems around them.
447. topicseed ◴[] No.42143440{4}[source]
Do they build guardrails themselves or do they use an LLM guardrail API like Modelmetry or Langwatch?
448. bobsmooth ◴[] No.42143451{3}[source]
I've had generated code include comments so specific I was able to find the exact github repo where it came from.
449. amw-zero ◴[] No.42143483[source]
We might not have exhausted their applications, but everything I’ve witnessed them being used for has been extremely disappointing.

That is, other than me using them to bounce ideas off of and create small snippets of code.

450. nikkwong ◴[] No.42143487[source]
Didn’t Sam Altman just go on some podcast last week and tell the world that he thought “We know exactly what to do to be able to reach AGI now”? What’s going on, is he just posturing?
replies(2): >>42143527 #>>42149771 #
451. mhuffman ◴[] No.42143506{4}[source]
>We have built quite a few highly useful LLM applications in my org that have reduced cost and improved outcomes in several domains

Apps that use LLMs or apps made with LLMs? In either case can you share them?

>which tells me you aren’t working with very creative and competent people

> In my professional network beyond where I work now I know at least a dozen people who have successful commercial applications of LLMs.

Apps that use LLMs or apps made with LLMs? In either case can you share them?

No one doubts that you can integrate LLMs into an application workflow and get some benefits in certain cases. That has been what the excitement and promise was about all along. They have a demonstrated ability to wrangle, extract, and transform data (mostly correctly) and generate patterns from data and prompts (hit and miss, usually with a lot of human involvement). All of which can be powerful. But outside of textual or visual chatbots or CRUD apps, no one wants to "put up" a solid example that the top management of an existing company would sign off on. Only stories about awesome examples they and their friends are working on ... which often turn out to be CRUD apps or textual or visual chatbots. One notable standout is that generative image apps can be quite good in certain circumstances.

So, since you seem to have a real interest and actual examples of this, I am curious to see some that a real company would gamble the company on. And I don't mean some quixotic startup; I mean a company making real money now with customers, one that is confident in that app to the point they are willing to risk big. Because that last part is what companies do with other (non-LLM) apps. I also know that people aren't perfect and wouldn't expect an LLM to be; I just want to make sure I am not missing something.

452. whatshisface ◴[] No.42143527[source]
"We know exactly what we need to do to be able to reach it: figure out how."
453. russellbeattie ◴[] No.42143562{3}[source]
I wrote "a few decades".

The article you pointed out says the end came in 2016: Eight years ago.

My point is those types of articles have been popping up every few years since the 1990s. Sure, at some point these sort of predictions will be proven correct about LLMs as well. Probably in a few decades.

454. fsndz ◴[] No.42143570{3}[source]
I know right ?
455. whatshisface ◴[] No.42143601[source]
NVIDIA has a strong interest in the financial plausibility of their big orders and the correlation between counterparty risks, and didn't scale up production during the crypto bubble because they understood the dynamics you are describing.

On the other hand, selling to customers who can't pay but who look solvent to public investors sounds like the kind of short-termism nobody should be too surprised to be reading a book about in a few years...

456. hereme888 ◴[] No.42143654{3}[source]
Are we humans so different? Why do you wear what you wear? People emulate their older siblings, and so learn behavior. LLMs can create new programs, after having initially learned similar examples from others. Likewise for AI media.
457. zmmmmm ◴[] No.42143700[source]
> combining a human-moderated knowledge graph with an LLM with RAG allows you to build "expert bots" that understand your business context / your codebase / your specific processes and act almost human-like similar to a coworker in your team

It's been a while though; we've had great models now for 18-plus months. Why are we yet to see these types of applications rolling out on a wide scale?

My anecdotal experience is that almost universally, the 90-95% type accuracy you get from them is just not good enough. Which is to say, having something be wrong 10% or even 5% of the time is worse than not having it at all. At best, you need to implement applications like that in an entirely new paradigm that is designed to extract value without bearing the costs of the risks.

It doesn't mean LLMs can't be useful, but they are kind of stuck with applications that inherently mesh with human oversight (like programming etc). And the thing about those is that they don't really scale, because the human oversight has to scale up with whatever the LLM is doing.

458. dmix ◴[] No.42143744{5}[source]
Wasn't that mostly from public markets which never invested in tech before?
replies(1): >>42143896 #
459. Jensson ◴[] No.42143746{6}[source]
> (That’s not to say that humans don’t tend to lose some of their flexibility over their individual lifetimes as well.)

The lifetime is the context window, the model/training is the DNA. A human in the moment isn't generally intelligent, but a human over their lifetime is. The first is so much easier to try to replicate, but it's a bad target since humans aren't born like that.

460. Jensson ◴[] No.42143777{4}[source]
They understand sentences but not words.
replies(1): >>42144657 #
461. dmix ◴[] No.42143796[source]
Maybe in like 5yrs+. For now they will rake in billions from API usage alone, just with GPT4 and whatever 5 is.

Amazon and Google didn't mess with their core business by competing with the players using it until they REALLY ran out of ways to make money.

replies(1): >>42145330 #
462. dmix ◴[] No.42143806[source]
Meta said they won't be releasing their glasses because they are too expensive for even the highest end of the consumer market. That likely means another 5yrs minimum to get production costs down. It's no longer just about the technical capabilities. Similar to Waymo needing to figure out how to affordably scale up production of $75k LIDAR sensors to put on a million cars, which cost less than the sensors themselves, plus the whole service industry to maintain them when they break.
463. wokwokwok ◴[] No.42143823{5}[source]
> I really find it hard to believe

What's true and what's not true is not related to what you personally believe.

It is factually and unambiguously false to state that generated code is, in general, not similar to other code from the corpus it is trained on.

> And none of it appears anywhere else; I've checked.

^ Even if this statement is not false (I'm skeptical, but whatever), in general it would be false for most users of copilot.

None of it appears anywhere else? None of it? Really?

That's not true of the no-AI code base I'm working on.

It's very difficult to believe that would be true of a code base heavily written by copilot and the like.

It's probably not true, in general, for AI generated code bases.

We can have a different conversation about verbatim copied code, where an AI model generates a large body of verbatim copy from a training source. That's very unusual.

...but to say the generated code wouldn't even be similar? Come on.

That's literally what LLMs do.

replies(1): >>42146049 #
464. intended ◴[] No.42143886{6}[source]
I have a side point here - There is a certain schizoid aspect to this argument that LLMs and humans make similar mistakes.

This means that on one hand firms are demanding RTO for culture and teamwork improvements, while on the other they are OK with a tool that makes unpredictable errors like humans do, but can never be impacted by culture and teamwork.

These two ideas lie in odd juxtaposition to each other.

replies(1): >>42146209 #
465. infamouscow ◴[] No.42143896{6}[source]
There is a graveyard of hardware companies from the 70s, 80s, and 90s.
replies(1): >>42146078 #
466. llm_trw ◴[] No.42143898{5}[source]
Memory is king.

Anything that has more memory and adequate compute will win the coming AI wars.

At the rate at which power consumption is growing, now that the shortage of current-gen cards has started to work itself out, people are realizing they need a fleet of nuclear reactors to keep the data centers running. This is not something that's getting fixed with the coming generation; if anything it's getting worse.

467. rm_-rf_slash ◴[] No.42143930[source]
AI was overhyped in the 1950s with the perceptron. Machine learning advances in fits and starts. As soon as it looks like it’s out of steam something novel comes out. Circa 2010 all the effort was on perfecting SVMs, to the point where a one-percentage-point improvement on a computer vision task was a PhD thesis. Then all of a sudden AlexNet made neural nets look feasible and the game changed overnight.
468. nickpsecurity ◴[] No.42143954[source]
The brain solves that problem. It seems to involve memory and specialized regions. I found a few groups building hippocampus-like research models. One had content-addressable memory.

There was another one that claimed to get rid of hallucinations. They also said it takes 50-100 epochs for regular architectures to actually memorize something. Their paper is below in case people qualified to review it want to.

https://arxiv.org/abs/2406.17642

Like the brain, I believe the problem will be solved by a mix of specialized components working together. One of those components will be a memory (or series of them) that the others reference to keep processing grounded in reality.

replies(1): >>42144641 #
469. wanderingmind ◴[] No.42143983[source]
And yet the Anthropic CEO is still claiming PhD-level intelligence in the next couple of years to Lex Fridman. It's starting to feel like the whole crypto pump and dump again
470. eichi ◴[] No.42143995[source]
Scientific benchmark scores are not necessarily related to the rate of completion of tasks such as user persuasion. Software engineering is more important when the current state-of-the-art small language model is sufficient for the solution of our application.
471. solid_fuel ◴[] No.42144019{5}[source]
I wouldn't expect an LLM to be good at spell checking, actually. The way they tokenize text before manipulating it makes them fairly bad at working with small sequences of letters.

I have had good luck using an LLM as a "sanity checking" layer for transcription output, though. A simple prompt like "is this paragraph coherent" has proven to be a pretty decent way to check the accuracy of whisper transcriptions.
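
Roughly, the layer looks something like this (assuming openai-whisper for transcription and an OpenAI model for the check; the prompt wording and pass/fail handling are just one way to do it):

    import whisper
    from openai import OpenAI

    client = OpenAI()
    stt = whisper.load_model("base")

    def transcribe_checked(path: str) -> tuple[str, bool]:
        text = stt.transcribe(path)["text"]
        verdict = client.chat.completions.create(
            model="gpt-4o-mini",  # assumption: any cheap chat model works here
            messages=[{"role": "user", "content":
                       "Is this paragraph coherent? Answer only yes or no.\n\n" + text}],
        ).choices[0].message.content.strip().lower()
        # Anything the checker flags goes to a second pass (bigger model, human, etc.)
        return text, verdict.startswith("yes")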

replies(1): >>42144176 #
472. mycall ◴[] No.42144031[source]
We are just scratching the surface of what LLMs can do. Case in point, ESM3.

https://www.biorxiv.org/content/10.1101/2024.07.01.600583v1

473. james_marks ◴[] No.42144124{3}[source]
I haven’t seen quite that, but it’s an interesting question; like a semantic search.
474. fullstackchris ◴[] No.42144132{3}[source]
Cursor (Claude behind the scenes) can do that; however, as always, your mileage may vary.

I tried building a whole codebase inspector, essentially what you are referring to, with Gemini's 2 million token context window, but had trouble with their API when the payload got large. Just a 500 error with no additional info, so...

replies(1): >>42144885 #
475. eichi ◴[] No.42144147{4}[source]
This is people's true desire. Make something like that while handling criticisms and fitting products to the market.
476. eichi ◴[] No.42144173[source]
It's marketing using buzzword rhetoric. It's better to learn OOP if he truly thinks that. I also think OpenAI's PMF was to make the LLM application into a better argument machine.
477. sdesol ◴[] No.42144176{6}[source]
Yes this is a tokenization error. If you rewrite the sentence as shown below:

https://app.gitsense.com/?doc=905f4a9af74c25f&model=Claude+3...

Claude 3.5 Sonnet will now misinterpret "GitHub" as "Github"
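
You can see how casing changes the token split with any BPE tokenizer. A quick check with OpenAI's tiktoken (just a stand-in here - Anthropic's tokenizer differs, but it illustrates the same kind of effect):

    import tiktoken

    enc = tiktoken.get_encoding("cl100k_base")
    for word in ("GitHub", "Github", "github"):
        ids = enc.encode(word)
        # Different casings typically map to different token sequences,
        # which is why the model can treat them as different strings.
        print(word, ids, [enc.decode([i]) for i in ids])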

478. fullstackchris ◴[] No.42144209{5}[source]
I've written (and am writing) extensively on why I think AGI can't be as bad as everyone thinks, from a first-principles (i.e. physics and math) standpoint:

https://chrisfrewin.medium.com/why-llms-will-never-be-agi-70...

Still have like 2-3 big posts to publish.

Long story short, it's easy to get enamored with an agent spitting out tokens, but reality and engineering are far, far more complex than that (orders of magnitude).

479. EternalFury ◴[] No.42144221[source]
If GPT-5 had passed the A/B testing OpenAI likes to do, it would have been released already. Instead, it seems they are clearly concerned the audience would not find it superior enough to GPT-4. So, the bluff must go on until the right cards appear.
480. jeswin ◴[] No.42144223{4}[source]
Google (even now) isn't absolutely accurate either. That didn't stop it from becoming worth many billions.

> You can have it craft an email, or to review your email, but I wouldn't trust an LLM with anything mission-critical

My point is that an entire world lies between these two extremes.

replies(3): >>42145162 #>>42145790 #>>42152124 #
481. gitaarik ◴[] No.42144250{4}[source]
If we weren't, we (as in developers) wouldn't be needed, right?
482. slashdave ◴[] No.42144282{6}[source]
I didn't say we have currently reached a limit. I am saying that there obviously is a limit (at some point). So, scaling cannot go on forever. This is a counterpoint to the dubious analogy with deep learning.
483. grey-area ◴[] No.42144289[source]
The biggest weakness of generative AI to me is knowledge. It gives the impression of knowledge about the world without actually having a model of the world or any sense of what it does or does not know.

For example recently I asked it to generate some phrases for a list of words, along with synonym and antonym lists.

The phrases were generally correct and appropriate (some mistakes but that’s fine). The synonyms/antonyms were misaligned to the list (so strictly speaking all wrong) and were often incorrect anyway. I imagine it would be the same if you asked for definitions of a list of words.

If you ask it to correct it just generates something else which is often also wrong. It’s certainly superficially convincing in many domains but once you try to get it to do real work it’s wrong in subtle ways.

484. physicsguy ◴[] No.42144363{3}[source]
We’ve found that the text it generates in our RAG application is good, but it cocks up probably 5-10% of the time doing the inline references to the documents, which users think is a bug and which we aren’t able to fix. This is static rather than interactively generated, too.
485. wruza ◴[] No.42144378{4}[source]
As far as I understand, “sampling” here is controlled with (among other things) the top-k and temperature parameters in e.g. “text generation web ui”. You can probably find these in other frontends too.

This of course implies local models, and that you have a decent CPU + at least 64 GB of RAM to run models above 7B size.

https://github.com/oobabooga/text-generation-webui

https://huggingface.co/models?pipeline_tag=text-generation&s...
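
If you'd rather poke at those knobs in plain Python than through a frontend, a minimal sketch with Hugging Face transformers (the model id is just a small example; any causal LM works):

    from transformers import pipeline

    generate = pipeline("text-generation", model="Qwen/Qwen2.5-0.5B-Instruct")

    # temperature flattens or sharpens the next-token distribution;
    # top_k restricts sampling to the k most likely tokens.
    out = generate(
        "The main bottleneck for scaling LLMs is",
        do_sample=True,
        temperature=0.7,
        top_k=50,
        max_new_tokens=60,
    )
    print(out[0]["generated_text"])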

486. physicsguy ◴[] No.42144401{4}[source]
The visual and sensory input to the self-driving function are of the 3D world, but the car is still constrained to move along a 2D topological surface; it's not moving up and down other than by following the curvature of that surface.
replies(1): >>42144706 #
487. reissbaker ◴[] No.42144404[source]
Beyond just RAG, I'm fairly bullish on finetuning. For example, Qwen2.5-Coder-32B-Instruct is much better than Qwen2.5-72B-Instruct at coding... Despite simply being a smaller version of the same model, finetuned on code. It's on par with Sonnet 3.5 and 4o on most benchmarks, whereas the simple chat-tuned 72B model is much weaker.

And while Qwen2.5-Coder-32B-Instruct is a pretty advanced finetune — it was trained on an extra 5 trillion tokens — even smaller finetunes have done really well. For example, Dracarys-72B, which was a simpler finetune of Qwen2.5-72B using a modified version of DPO on a handmade set of answers to GSM8K, ARC, and HellaSwag, significantly outperforms the base Qwen2.5-72B model on the aider coding benchmarks.

There's a lot of intelligence we're leaving on the floor, because everyone is just prompting generic chat-tuned models! If you tune it to do something else, it'll be really good at the something else.

488. osigurdson ◴[] No.42144420[source]
This "running out of data" thing suggests that there is something fundamentally wrong with how things are working. A new driver does not need to experience 8000 different rabbit-on-road situations from all angles to know to slow down when they see one on the road. Similarly, we don't need 10,000 addition examples to learn how to add. It is as though there is no generalization in the models - fundamentally, it is just search.
replies(2): >>42144498 #>>42149778 #
489. osigurdson ◴[] No.42144428{6}[source]
I wonder if there is a moral hazard here? Apple doesn't really have much in terms of AI, so it is maybe more likely to have an unfavorable view.
replies(2): >>42146106 #>>42146354 #
490. _rm ◴[] No.42144433[source]
Well I have a question for you: do you think this format of AI can actually think?

I.e. can it ruminate on the data it's ingested, and rather than returning the response of highest probability, return something original?

I think that's the key. If LLMs can't ultimately do that, there's still a lot to be gained from utilising the speed and fluidly scalable resources of computers.

But like all the top tech companies know, it's not quantity of bodies in seats that matters but talent; the thing that's going to prevail is raw intelligence. If it can't think better than us, just process data faster and more voluminously but still needing human verification, we're on an asymptotic path.

491. Dunedan ◴[] No.42144438{3}[source]
While one could argue whether Tesla or another company is the leader in this space, don't all promising self-driving approaches rely on this paradigm?
492. osigurdson ◴[] No.42144444{6}[source]
>> have something like a status report and I’m having a hard time phrasing things

I believe the above suggested that this type of email likely doesn't need to be sent. Is anyone really reading the status report? If they read it, what concrete decisions do they make based on it. We all get in this trap of doing what people ask of us but it often isn't what shareholders and customers really care about.

replies(1): >>42169563 #
493. Timber-6539 ◴[] No.42144469{3}[source]
You are correct to state that over-reliance on AI as a data source will probably lead to society's intellectual atrophy. One could argue we have been on this path with other things, but more and more the whole thing looks to me like eating your own vomit and forcing a smile on your face.

AI will always have a specific narrow focus and will never ever be creative; the best AI proponents can hope for is that the hallucinations will drop to a less noticeable level.

494. easeout ◴[] No.42144475[source]
I'm happy to use LLM products for what they can do right now, while they're still cheap. Even though they're maintained by high investment that may never pay off, enshittification has not yet set in.
495. anilgulecha ◴[] No.42144477{3}[source]
LLMs are not hype.

In education at least, we've actively improved efficiency by ~25% across a large swath of educators (direct time saved) - agentic evaluators, tutors and doubt clarifiers. The wins in this industry are clear. And this is that much more time to spend with students.

I also know from 1-1 conversation with my peers in large-finance world, and there too the efficiency improvements on multiple fronts are similar.

replies(1): >>42145776 #
496. surrTurr ◴[] No.42144498[source]
I think you underestimate the amount of data a driver experiences in a single 5-minute drive.
replies(2): >>42144649 #>>42159546 #
497. spunker540 ◴[] No.42144553{5}[source]
I think everyone knew the internet would change everything and thus be very valuable. At the time the web was the primary manifestation of the internet. Domain names felt like an oil rush to carve up the internet. But it was actually a rush to carve up the web, and no one realized yet that things like Google search and app stores would make domain names far less valuable over time.
498. smusamashah ◴[] No.42144592[source]
It has to be a good thing to stop here. We can focus on improving what we have right now. The whole stack of models is an amazing innovation no matter what. It shouldn't hurt if we pause here for a while and try to build on this or improve this.

It will be like StableDiffusion 1.5. That model can now run on low-end devices, and lots of open research uses it to build something else or takes inspiration from it.

These LLMs can be used as a foundation to keep improving and building new things.

499. n_ary ◴[] No.42144609{4}[source]
Could be because Uber or taxi drivers are trying to make the most trips and maximize daily earnings, while Waymo does not have that rush and can take things slow…

Of course Waymo needs money, but if the car makes fewer trips compared to Uber/taxis, it does not suffer the same consequences.

We need to consider the human factor and how severely lacking it is in these robot/self-driving/LLM systems; drawing parallels is not a direction I am comfortable with.

End of the day, Tesla also sold half-baked self-driving that killed people; we should not forget that.

500. gizajob ◴[] No.42144635{6}[source]
I’d say it’s more about the fact that they make useful products rather than brand recognition.
501. Animats ◴[] No.42144641{3}[source]
Comments on that paper? PDF: [1]

What they are measuring, it seems, is whether LLMs can be built which will retrieve a reliable known correct answer on request. That's an information retrieval problem, and, in fact, they solve it by adding "Memory Experts" which are basically data storage.

It's not clear that this helps either with replies which require synthesizing disparate information, or with detecting that the training data does not contain the info needed to construct a reply.

[1] https://arxiv.org/pdf/2406.17642

replies(1): >>42147890 #
502. eslaught ◴[] No.42144649{3}[source]
I never get this argument.

I've seen a deer on a road maybe once. I've seen a rabbit on a road zero times. But I know what to do if I see one.

Is that because the "video" of my perception has many "frames"? Even if that's true at some level, I think it's massively missing the point. Yeah, so I saw that one deer from a lot of angles. But current AI training is like the equivalent of taking every deer that has ever been on camera in the history of the human species.

Somehow I'm still dramatically better at generalization than the AI. Surely that's an algorithm difference.

replies(1): >>42144889 #
503. youoy ◴[] No.42144657{5}[source]
What do you mean by that? We have the monosemanticity results [0]

[0] https://transformer-circuits.pub/2024/scaling-monosemanticit...

504. malthaus ◴[] No.42144682[source]
it's the equivalent of the "we overestimate the impact of technology in the short-term and underestimate the effect in the long run" quote.

everyone is looking at llm scores & strawberry gotchas while ignoring the trillions of market potential in replacing existing systems and (yes) people with the current capabilities. identifying the use cases, finetuning the models and (most importantly) actually rolling this out in existing organizations/processes/systems will be the challenge long before the base models' capabilities will be

it is worth working on those issues now and get the ball rolling, switching out your models for future more capable ones will be the easy part later on.

505. Earw0rm ◴[] No.42144700{5}[source]
The limits are engineering, not physics. Atoms need not be a barrier for a long time if you can go fully 3D, for example, but manufacturing challenges, power and heat get in the way long before that.

Then you can go ultra-wide in terms of cores, dispatchers and vectors (essentially building bigger and bigger chips), but an algorithm which can't exploit that will be little faster on today's chips than on a 4790K from ten years ago.

506. soheil ◴[] No.42144706{5}[source]
So based on your argument they actually operate in 1D since roads go in one direction and lanes and intersections are constrained to a predetermined curly line.
replies(1): >>42145327 #
507. datahack ◴[] No.42144723[source]
The next wave won’t be monolithic but network-driven. Orchestration has the potential to integrate diverse AI systems and complementary technologies, such as advanced fact-checking and rule-based output frameworks.

This methodological growth could make LLMs more reliable, consistent, and aligned with specific use cases.

The skepticism surrounding this vision mirrors early doubts about the early internet fairly concisely.

Initially, the internet was seen as a fragmented collection of isolated systems without a clear structure or purpose. It really was. You would gopher somewhere and get a file, and eventually we had apps like pine for email, but as cool as it was, it had limited utility.

People doubted it could ever become the seamless, interconnected web we know today.

Yet, through protocols, shared standards, and robust frameworks, the internet evolved into a powerful network capable of handling diverse applications, data flows, and user needs.

In the same way, LLM orchestration will mature by standardizing interfaces, improving interoperability, and fostering cooperation among varied AI models and support systems.

Just as the internet needed HTTP, TCP/IP, and other protocols to unify disparate networks, orchestrated AI systems will require foundational frameworks and “rules of the road” that bring cohesion to diverse technologies.

We are at the veeeeery infancy of this era and have a LONG way to go here. Some of the progress looks clear and a linear progression, but a lot, like the Internet, will just take a while to mature and we shouldn’t forget what we learned the last time we faced a sea change technological revolution.

replies(1): >>42145742 #
508. malthaus ◴[] No.42144724[source]
if my billion net worth were coupled to that being the case i'd tweet that as well
509. danielbln ◴[] No.42144791{5}[source]
We built planes, which works quite differently from birds, in the span of what, 100 years? I think we've long left evolution behind when building machines, thinking or otherwise, so I'm not sure why the powerful but inefficient evolutionary process is held to some gold standard here.
replies(1): >>42145542 #
510. kreyenborgi ◴[] No.42144810{5}[source]
gptel does this: https://github.com/karthink/gptel/?tab=readme-ov-file#extra-...

Here are the docs for an example of how it can look: https://news.ycombinator.com/item?id=42039895

511. danielbln ◴[] No.42144812[source]
Pretraining or even post-training is cumbersome, complex and expensive. What is easy and cheap is in-context learning, which is why I just pull in the documentation I need the LLM to know about into the LLM's context.
512. weweersdfsd ◴[] No.42144879{3}[source]
I think buttons should not be replaced, but rather augmented with voice control. I certainly want to be able to adjust the air conditioning or use my washing machine while listening to music or in an otherwise noisy environment.
513. disgruntledphd2 ◴[] No.42144885{4}[source]
I've played around with Claude and larger docs and it's honestly been a bit of a crapshoot, it feels like only some of the information gets into the prompt as the doc gets larger. They're great for converting PDF tables to more usable formats though.
514. visarga ◴[] No.42144889{4}[source]
You might personally have seen a deer just once, but human evolution, and animal evolution prior to that have practiced this skill a lot. AI doesn't have the advantage of evolutionary priors baked in, so it needs explicit walking through many combinations to infer its structure from data, and is remarkably efficient. GPT-4 'only' trained on the amount of language that 30,000 humans use in their lifetime.

But we have seen from AlphaGo that when training data is extensive, it can rediscover strategy on its own and even surpass us. It's not inherently worse than human learning.

replies(2): >>42145974 #>>42148631 #
515. kaycey2022 ◴[] No.42144901[source]
AI safety folks sure do look stupid now. :)
516. Barrin92 ◴[] No.42144909{4}[source]
if the AI business is a bit more mundane than Altman thinks and there are diminishing returns, the market is going to be even more commodified than it already is, and you're not going to make any margins or somehow own the entire market. That's already the case: Anthropic works about as well, there's other companies a few months behind, open source is like a year behind.

That's literally Zucc's entire play, in 5 years this stuff is going to be so abundant you'll get access to good enough models for pennies and he'll win because he can slap ads on it, and openAI sits there on its gargantuan research costs.

replies(1): >>42145951 #
517. fooker ◴[] No.42144934{5}[source]
If Intel could do that, they would be the one with a 3 trillion market cap. Not Nvidia.
518. zaptrem ◴[] No.42145027{3}[source]
O1, new Sonnet, all the music models and video models, the voice models like 4o, etc.
replies(1): >>42145866 #
519. _Algernon_ ◴[] No.42145080[source]
The next AI winter will be brutal
520. _Algernon_ ◴[] No.42145093[source]
I have yet to see LLMs provide a positive net value in the first place. They have a long way to go to make up for their negative uses, in the form of polluting the commons that is the web, propaganda use, etc.
521. littlestymaar ◴[] No.42145102{5}[source]
> if one would extend it with an external program, that gives it feedback

If you have an external program, then by definition it's not self-awareness ;). Also, it's not about correctness per se, but about the model's ability to assess its own knowledge (making a mistake because the model was exposed to mistakes in the training data is fine, hallucinating isn't).

replies(1): >>42150305 #
522. epups ◴[] No.42145103{3}[source]
Some important landmarks since GPT4 was first released (not in chronological order):

- Vast cost reduction (>10x)

- Performance parity of several open source models to GPT4, including some with far fewer parameters

- Much better performance, much larger context window in state-of-the-art closed source LLMs (Claude 3.5 Sonnet)

- Multimodality (audio and vision)

- Prototypes for semi-autonomous agents and chain-of-thought architectures showing promising avenues for progress

523. DiscourseFan ◴[] No.42145162{5}[source]
I would say that anything you write can come back to you in the future, so don’t blindly sign your name on anything you didn’t review yourself.
524. ogogmad ◴[] No.42145208{4}[source]
You still need to load it.
525. Vampiero ◴[] No.42145267{4}[source]
Here is an example of a task that I do not believe this generation of LLMs can ever do but that is possible for an average human: designing a functional trivia app.

There, you don't need to invoke Turing or compiler bootstrapping. You just need one example of a use case where the accuracy of responses is mission critical

replies(1): >>42146128 #
526. WA ◴[] No.42145279{7}[source]
OT: Your tool has a typo in the right hand side: "Claude 3.5 Sonnet Techincal writing checker"
replies(1): >>42147492 #
527. phil917 ◴[] No.42145286[source]
The more Sam Altman posts stuff like this, the more he comes across as a grifter hype man to me
528. kreyenborgi ◴[] No.42145309{3}[source]
If you are one of today's ten thousand, this is a reference to the original garbage-in, garbage-out quote: https://en.wikiquote.org/wiki/Charles_Babbage#Passages_from_...
529. physicsguy ◴[] No.42145327{6}[source]
The point is clearly that they don’t have a vertical axis of control, they can’t make the car fly up in the air unless they’re driving crazy taxi style
530. ookdatnog ◴[] No.42145330{3}[source]
OpenAI is losing far more billions than they are raking in. I don't think any generative AI company is even close to profitable at the moment.

https://www.cnbc.com/2024/10/30/microsoft-cfo-says-openai-in...

replies(1): >>42153556 #
531. Aeolun ◴[] No.42145460{5}[source]
Yeah, but if our current incidence rate is 1 per 5 and it suddenly goes down to 1 in 50, that’s a major improvement.
532. namaria ◴[] No.42145534{5}[source]
The premise of deep learning is the automated 'absorption' of knowledge.

If we're back to curating it by hand and imparting it by writing code manually, how exactly are these systems an improvement on the 80's idea of building expert systems?

533. namaria ◴[] No.42145542{6}[source]
It's not a gold standard. It just shows how difficult the problem really is.

Flying machines rest on the excess power of internal combustion. They have nothing to do with bird evolution.

replies(1): >>42148991 #
534. boredhedgehog ◴[] No.42145544{5}[source]
> I do believe LLM is a game changer, but I'm not convinced it is designed to be public-facing.

I think that, too, is a UX problem.

If you present the output as you do, as simple text on a screen, the average user will read it with the voice of an infallible Star Trek computer and be irritated by every mistake.

But if you present the same thing as a bunch of cartoon characters talking to each other, users might not only be fine with "egg in your face moments", as you put it, they will laugh about them.

The key is to move the user away from the idealistic mental model of what a computer is and does.

replies(2): >>42148581 #>>42157351 #
535. cageface ◴[] No.42145565{3}[source]
I find specific blog posts ChatGPT is cribbing from all the time when I use it. I think it depends a lot on exactly what you're asking it for.
536. fullstackchris ◴[] No.42145571{4}[source]
Comments like these are so prevalent and yet illustrate very well the lack of understanding of the underlying technology. Neural nets, once trained, are static! You'll never get dynamic "through-time" reasoning like you can with a human-like mind. It's simply the WRONG tool. I say human-like because I still think AGI could be achieved in some digital format, but I can assure you it won't be packaged in a static neural net.

Now, neural nets that have a copy of themselves, can look back at what nodes were hit, and change through time... then maybe we are getting somewhere

replies(1): >>42147035 #
537. corimaith ◴[] No.42145589[source]
Looks like you independently arrived at the original context that language models existed in, as interfaces to deeper knowledge systems in chatbots.

But the knowledge system here is doing the grunt of the work, and progressing past its own limitations goes right back to the pitfalls of the rules-based AI winter. That's not an engineering problem, it's a foundational mathematics problem that only a few people are seriously working on.

538. wccrawford ◴[] No.42145614[source]
Plus, they're "struggling"? Of course they are! It's cutting edge, and it's hard. If they weren't struggling, it would have been done long ago.
539. spacebanana7 ◴[] No.42145680{3}[source]
It's ugly, but I've had some success with uploading a few files from a project and a sketch of the schema. Then asking for new functionality.

ChatGPT and Claude seem to be pretty good at maintaining an implicit understanding of the codebase based on a subset of files.

540. netdevnet ◴[] No.42145693{6}[source]
Welcome to capitalism. The market forces will squeeze max value out of them. I imagine that Anthropic and OpenAI will in the future be fully downsized and acquired by their main investors (Microsoft and Amazon), and will simply become part of their generic and faceless AI & ML teams once the current downward stage of the hype cycle completes its course in the next 5-8 years.
replies(1): >>42167865 #
541. netdevnet ◴[] No.42145712{7}[source]
I believe the honeymoon phase has long been over. Even in the mainstream, last year was the year of AI. 2024 has seen nothing substantially good, and the only noteworthy thing is this article finally pushing into the public consciousness that we are past the AI peak, beyond the plateau, and the freefall has already begun.

LLM investors will be reviewing their portfolios and will likely begin declining further investments without clear evidence of profits in the very near future. On the other side, LLM companies will likely try to downplay this and again promise the Moon.

And on and on the market goes

542. mrweasel ◴[] No.42145740[source]
That's an interesting limitation. They can't make the LLMs (I still refuse to call them AIs) better with the current datasets available. So with the sum of all human knowledge, more or less, mixed in with the dumpster fire that is Internet comments, this is the best we can do with the current models.

I don't know much about LLMs, but that seems to indicate a sort of dead-end. The models are still useful, but limited in their abilities. So now the developers and researchers need to start looking for new ways to use all this data. That in some sense resets the game. Sucks to be OpenAI, billions of dollars spent on a product that has been matched or even outmatched by the competition in a few short years, not nearly enough time to make any of it back.

If there is a take away, it might be that it takes billions, if not trillions of dollars, to develop an AI and the result may still be less than what you hope for, and the investment really hard to recoup.

543. whyowhy3484939 ◴[] No.42145742[source]
You are definitely on to something here, but the difference is that the fundamental process was proven. It "just" needed to scale. That's hard and complex, but on a different level.

I don't think anyone doubted the nature of the technology. The bits were being sent. It's not like we were unsure of the fundamental possibility of transmitting information. The potential was shown very, very early on (Mother of all demos was in 1968). What we were and to some extent still are unsure of is the practical impact on society.

AI and LLMs in particular are not even at the mother of all demos level yet notwithstanding the grandiose claims and demos. There is no consensus on what these models are even doing. There is (IMO) justified skepticism surrounding the claims of reasoning and ability to abstract. We are in my opinion not yet at the "bits are being sent" stage.

replies(1): >>42150027 #
544. netdevnet ◴[] No.42145776{4}[source]
They are partially hype though. That's what people here are arguing. There are benefits, but their valuation is largely hype driven. AI is going to transform industries and humanity, yes. But AI does not mean LLM (even if LLM means AI). LLMs' raw potential was reached last year with GPT-4. From here on, the value will lie in exploiting the potential we already have to generate clever applications. Just like the internet provided a platform for new services, I expect LLMs to be the same but with a much smaller impact.
545. netdevnet ◴[] No.42145790{5}[source]
Why don't you give actual concrete, testable examples backed with evidence where this is the case? Put your skin in the game.
replies(1): >>42148527 #
546. netdevnet ◴[] No.42145814{4}[source]
what do you want done about it? Hallucination is an intrinsic part of how LLMs work. What makes a hallucination is the inconsistency between the hallucinated concept and the reality. Reality is not part of how LLMs work. They do amazing things but at the end of the day they are elaborate statistical machines.

Look behind the veil and see LLMs for what they really are and you will maximise their utility, temper your expectations and save yourself disappointment.

547. hatefulmoron ◴[] No.42145866{4}[source]
The music/video models are cool, but it's an apples-to-oranges comparison with GPT-4. I don't think there's really any comparison of intelligence or "advancedness" between those models and GPT-4.

I'm surprised to hear someone say that O1 and new Sonnet are "leaps", though. My impression of them is that they're qualitatively similar to GPT-4. Incremental improvements at best. I don't think the gap between GPT-4 and the new Sonnet is anywhere near as large as the gap between GPT-3 and GPT-4, for instance.

548. netdevnet ◴[] No.42145913{6}[source]
I don't know what your experience with outsourcing is, but people outsource full projects, not the writing of a couple of methods. With LLMs still unable to fully understand relatively simple stuff, you can't expect them to deliver a project whose specification (like most software projects) contains ambiguities that only an experienced dev can detect and ask deep questions about. LLMs are nowhere near that: being able to handle external uncertainty and turn it into certainty, to explain why technical decisions were made, to understand the purpose of a project and how the implementation matches it, to handle the overall uncertainties of writing code against other people's code. All this is stuff outsourced teams do well, but LLMs won't be anywhere near good enough for at least a decade. I am calling it.
549. ◴[] No.42145921[source]
550. netdevnet ◴[] No.42145940{9}[source]
This. Code is written by humans for humans. LLMs cannot compete no matter how much data you throw at them. In a world in which software is written by AI, the code likely won't be readable by humans. And that is dangerous for anything where people's health, privacy, finances or security is involved.
551. netdevnet ◴[] No.42145951{5}[source]
genius move by Mark, this could make them the google of LLMs
552. vidarh ◴[] No.42145953{6}[source]
Yikes, that was an unfortunate auto-correct and too late to edit. "LLM losers" was meant to be "LLM users".
replies(1): >>42148538 #
553. hatefulmoron ◴[] No.42145965{6}[source]
I think their point is that having complex interactions between simple things doesn't necessarily result in any great emergent behavior. You can't just throw gloopy masses of cells into a bucket, shake it about, and get a cat.
554. RivieraKid ◴[] No.42145974{5}[source]
Human DNA is just ~750 MB, and only a fraction of it is something that may be called "brain pre-training".
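For reference, a back-of-envelope sketch of where that figure comes from (purely illustrative, assuming the usual ~3.1 billion base-pair estimate and 2 bits per base):

    base_pairs = 3.1e9          # rough size of the human genome
    bits = base_pairs * 2       # A/C/G/T -> 2 bits per base
    megabytes = bits / 8 / 1e6  # ~775 MB uncompressed, i.e. "about 750 MB"
    print(f"{megabytes:.0f} MB")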
555. raxxorraxor ◴[] No.42146002[source]
Context is a strict limitation if you work with data analysis or knowledge bases. Embeddings work, but the products we get offered left and right mostly do not provide such capabilities at all. In that case most of these products remain decent chat bots.

For coding LLMs certainly are helpful, but I prefer local models instead of anything on offer right now. There is just much more potential here.

556. zkry ◴[] No.42146038[source]
There are a lot of comparisons that could be drawn: web 3.0, the internet, the dot com bubble, etc. but I think the most appropriate comparison would be to... AI in the past. No one doubts that there was a lot of value coming from that research. In fact a lot of it is incorporated in our every day life. But it didn't live up to its hype. I suspect the same will be true for this wave of AI (and perhaps an associated AI winter).
replies(1): >>42149466 #
557. dmd ◴[] No.42146049{6}[source]
This is like having an argument about whether airplanes can fly with someone who has never been in, piloted, or even really seen an airplane but is very, very sure of their understanding of how they can’t possibly work.

Among other things: it writes new, useful code daily in our local DSL, which appears nowhere on the internet and in fact didn't exist a few months ago.

558. zkry ◴[] No.42146078{7}[source]
a lot of which were founded on the promises of AI: Symbolics, Thinking Machines Corporation
559. larodi ◴[] No.42146106{7}[source]
No, sadly they're just voicing the opinion already voiced by (many) other scientists.

My master's was on text-to-SQL, and I can tell you hundreds of papers conclude that seq2seq and the transformer derivatives suck at logic, even when you approach logic the symbolic way.

We'd love to find that production rules of any sort emerge with transformer scale, but I've yet to read such a paper.

560. larodi ◴[] No.42146113{7}[source]
Sorry, I thought this was already discussed on HN in a major thread, and it was hard for me to copy-paste the link on mobile. Please excuse me.
561. ◴[] No.42146122{3}[source]
562. alainx277 ◴[] No.42146128{5}[source]
o1-preview managed to complete this in one attempt:

https://chatgpt.com/share/67373737-04a8-800d-bc57-de74a415e2...

I think the parent comment's challenge is more appropriate.

replies(1): >>42148745 #
563. vidarh ◴[] No.42146209{7}[source]
I think this goes exactly to the point that a whole lot of things become acceptable once they become cheap enough.
replies(1): >>42148086 #
564. fennecfoxy ◴[] No.42146354{7}[source]
It's also true that Apple hasn't really created any of these technologies themselves; afaik they're using a mostly standard LLM architecture (not invented by Apple) combined with task specific LORAs (not invented by Apple). Has Apple actually created any genuinely new technologies or innovations for Apple Intelligence?
565. hackinthebochs ◴[] No.42147035{5}[source]
The context window of LLMs gives something like "through time reasoning". Chain of thought goes even further in this direction.
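A minimal sketch of the idea (hedged: `complete()` is a stand-in for whatever LLM API you call, not a real library function; the point is that the model's own intermediate output is appended to the context it conditions on next):

    def complete(prompt: str) -> str:
        """Stand-in for a single LLM call; swap in your provider's client here."""
        return "(model output would go here)"

    question = "A train travels 2 hours at 60 km/h. How far does it go?"

    # Chain of thought: first elicit intermediate reasoning...
    scratchpad = complete(question + "\nThink step by step, but don't give the final answer yet.")

    # ...then feed that reasoning back in, so the final answer conditions on it.
    answer = complete(question + "\nReasoning so far:\n" + scratchpad + "\nNow give the final answer.")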
566. sdesol ◴[] No.42147492{8}[source]
Hey thanks! The error is in the config file. Will fix this.
567. Filligree ◴[] No.42147539{5}[source]
That’s… what I said, yes.
568. nickpsecurity ◴[] No.42147890{4}[source]
On the second paragraph, there’s been work that shows whether a model has memorized or is strongly replying to certain prompts. Something like that combined with a memory-equipped model would tell you if it might contain the info.

From there, you need multiple layers building on info it contains to synthesize a reply that might be good. Alternatively, an iterative process going a few rounds through a model, re-presenting the combo of results together, and it fuses them. All based on known data or what’s in the prompt with nothing else.

This is speculative based on a few things our own minds do.
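A rough sketch of that iterative, multi-round idea (again speculative and purely illustrative; `complete()` stands in for a model call, and the fusion step is just "re-present everything and ask again"):

    def complete(prompt: str) -> str:
        """Stand-in for a model call; replace with a real API client."""
        return "(model output would go here)"

    def iterate(question: str, rounds: int = 3) -> str:
        drafts = []
        for _ in range(rounds):
            context = "\n---\n".join(drafts)
            # Re-present the earlier drafts together with the question each round.
            drafts.append(complete(f"Question: {question}\nEarlier attempts:\n{context}\nImprove on these."))
        # Final pass fuses the accumulated drafts into one reply.
        return complete(f"Question: {question}\nFuse these drafts into one answer:\n" + "\n---\n".join(drafts))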

569. intended ◴[] No.42148086{8}[source]
Since this is a comparison, what has been made comparatively cheaper?
replies(1): >>42148416 #
570. jacobr1 ◴[] No.42148338{3}[source]
This is why we are only at the start of exploring the solution space. What applications don't require 100% accuracy? What tooling can we build that enables a human in the loop to choose between options? What options do we have for better testing or checking of accuracy? There is a lot more to be done to invent hybrid systems that use other types of models, novel training data, heuristics, or human workflows in novel ways that shore up the shortcomings... but in aggregate allow us to do new things. It will take many years for us to figure out where this makes the most sense.
571. jacobr1 ◴[] No.42148416{9}[source]
We aren't talking about skilled knowledge work on Silicon Valley campuses. We are talking about work that might already have been outsourced to some cube-farm in the Philippines. Or routine office work that probably could already have been automated away by a line-of-business app in the 1980s, but is still done in some small office in Tulsa because it doesn't make sense to pay someone to write the code when 80% of the work is managing the data entry that still needs to be done regardless.

This more marginal labor is going to be easier to replace. Plenty of the more "elite" type labor will be too, as it turns out to be more marginal than assumed. Glue and boilerplate programming work is already going this way; there is just so much more to do, and so much important work in figuring out what should be done, that it hasn't displaced programmers yet. But it will for some fraction. WYSIWYG-type websites for small business have come a long way and will only get better, so there will be less need for customization on the margin. Same for light design work (like "take my logo and plug it into this format for this charity tournament flyer").

replies(1): >>42150352 #
572. jacobr1 ◴[] No.42148431{7}[source]
Which is a good example, because accuracy can be improved significantly with even minor human guidance in tasks like unit tests. Human augmentation is extremely valuable.
573. jacobr1 ◴[] No.42148527{6}[source]
A support ticket is a good middle ground. This is probably the area of most robust enterprise deployment. Synthesizing knowledge to produce a draft reply with some logic either to automatically send it or have human review. There are both shitty and ok systems that save real money with case deflection and even improved satisfaction rates. Partly this works because human responses can also suck, so you are raising a low bar. But it is a real use case with real money and reputation on the line.
replies(1): >>42152090 #
574. tim333 ◴[] No.42148538{7}[source]
I thought you were maybe a bit rude there!
replies(1): >>42149423 #
575. tim333 ◴[] No.42148581{6}[source]
To be fair they usually have "ChatGPT can make mistakes. Check important info" type disclaimers.
replies(1): >>42149865 #
576. jacobr1 ◴[] No.42148625{4}[source]
Semantic search without LLMs is already making a dent. It still gives traditional results that need to be human processed, but you can get "better" search results.

And with that there is a body of work on "groundedness" that basically post-processes output to compare it against its source material. It can still result in logic errors and has a base error rate itself, but it can ensure you at least have clear citations for factual claims that match real documents; it doesn't fully ensure they are being referenced correctly (though that is already the case even with real papers produced by humans).

Also consider the baseline isn't perfection, it is a benchmark against real humans. Accuracy is getting much better in certain domains where we have good corpora. Part of assessing the accuracy of a system is going to be about determining if the generated content is "in distribution" of its training data. There is progress being made in this direction, so we could perhaps do a better job at the application level of making use of a "confidence" score of some kind, maybe even taking that into account in a chain-of-thought-like reasoning step.

People keep finding "obviously wrong" hallucinations that seem like proof things are still crap. But these systems keep getting better on benchmarks looking at retrieval accuracy, and the benchmarks keep getting better as people point out deficiencies in them. Perfection might not be possible, but consistently better than the average human seems in reach, and better than that seems feasible too. The challenge is that the class of mistakes might look different even if the overall error rate is lower.
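A toy version of that groundedness post-processing step (deliberately crude: real checks use an NLI model or a second LLM pass rather than word overlap, but the shape is the same, scoring every generated claim against the retrieved sources and flagging anything nothing supports):

    def support_score(claim: str, source: str) -> float:
        """Crude word-overlap proxy for 'does this source support this claim?'"""
        claim_words = set(claim.lower().split())
        source_words = set(source.lower().split())
        return len(claim_words & source_words) / max(len(claim_words), 1)

    def ground_check(claims, sources, threshold=0.5):
        for claim in claims:
            best = max(sources, key=lambda s: support_score(claim, s))
            if support_score(claim, best) >= threshold:
                yield claim, best   # supported: attach the source as a citation
            else:
                yield claim, None   # unsupported: flag for review or drop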

577. jaculabilis ◴[] No.42148631{5}[source]
> You might personally have seen a deer just once, but human evolution, and animal evolution prior to that have practiced this skill a lot.

Which pre-human animals evolved instincts for swerving a car to avoid a deer?

replies(1): >>42150433 #
578. Vampiero ◴[] No.42148745{6}[source]
Have you personally verified that the answers are not hallucinations and that they are indeed factually true?

Oh, you just asked it to make a trivia app that feeds on JSON. Cute, but that's not what I meant. The web is full of tutorials for basic stuff like that.

To be clear I meant that LLMs can't write trivia questions and answers, thus proving that they can't produce trustworthy outputs.

And a trivia app is a toy (one might even say... a trivial example), but it's a useful demonstration of why you wouldn't put an LLM into a system on which lives depend, let alone invest billions in it.

If you don't trust my words just go back to fiddling with your models and ask them to write a trivia quiz about a topic that you know very well. Like a TV show.

579. tim333 ◴[] No.42148862{4}[source]
Yeah, listening to him last week he seemed very unlike that https://www.youtube.com/watch?v=xXCBz_8hM9w&t=2324s
580. danielbln ◴[] No.42148991{7}[source]
The fact that it has nothing to do with evolution is exactly my point. We built something that can fly but has nothing to do with how birds fly. So we might be able to build an AGI that isn't based on biological mechanisms and/or evolutionary principles.
replies(1): >>42154431 #
581. ikrenji ◴[] No.42149330{3}[source]
you do realize all of the world's most valuable companies are either built on/for the internet or heavily leverage it
582. vidarh ◴[] No.42149423{8}[source]
Yeah, not my intent. I use LLMs a lot myself too...
583. tim333 ◴[] No.42149466{3}[source]
My recollection of AI in the past is that it was nothing like this.

If you look at the Wikipedia article 'History of artificial intelligence', for now it has 'AI boom' and '2024 Nobel Prizes', but everything earlier is kind of meh.

I remember sitting down with pen and paper to try to write a ChatGPT type chatbot 44 years ago and of course totally failing to get anywhere, but I've followed the goings on since and this is the first time this stuff is working well.

584. tim333 ◴[] No.42149715[source]
Try asking one to write a poem. You'll get a lot of stuff that didn't exist before.
585. slashdave ◴[] No.42149743{3}[source]
I hear what you are saying, but "innovation" is also often used to excuse some rather badly engineered concepts
586. slashdave ◴[] No.42149769{3}[source]
The improvements in transformer implementation (e.g. "Flash Attention") have saved gobs of money on training and inference; I'm guessing most likely more than the salaries of those researchers.
587. tim333 ◴[] No.42149771[source]
Yeah this https://www.youtube.com/watch?v=xXCBz_8hM9w&t=2324s

Not quite that wording. More we know which way to head. I think he's sincere.

588. slashdave ◴[] No.42149778[source]
Deep learning is the very opposite of generalization.
replies(1): >>42170301 #
589. tim333 ◴[] No.42149802[source]
Sort of. But vaguely.
590. tim333 ◴[] No.42149854[source]
AlphaGo which beat Lee Sedol was trained on human games. But then they produced AlphaZero which learned entirely from self play and got better than AlphaGo. So it goes.
replies(1): >>42152444 #
591. sdesol ◴[] No.42149865{7}[source]
As mentioned earlier, unless you have a 6th sense for what is wrong, you won't know. If the message was "make sure to double check our response" then they get a pass, but they know people will just say "why shouldn't i just use google."
592. datahack ◴[] No.42150027{3}[source]
I see this as entirely surmountable. We’re still making geometric progress in small model accuracy, and breakthroughs like test-time training and synthetic data are poised to deliver immediate gains in self-training performance.

Your point about skepticism being warranted when viewing this linearly is well taken. But this isn’t a linear path. The Internet, at its core, was about connecting computers to unlock the value of those connections—a transformative but relatively straightforward concept.

What we’re dealing with now is the training of cognitive digital intelligence. This is an inherently dynamic and breakthrough-oriented process, one that evolves in ways far less predictable or constrained than simple network effects. While the metaphor of connectivity is useful, it doesn’t fully capture the parallel, multi-dimensional approaches at play here.

Pessimism, in my view, is deeply unwarranted, especially given the history of technological progress. Time and again, advancements have proven to be far more impactful and beneficial than even the most optimistic predictions. Consider the projections for AI in 2017—most futurists undershot its actual progress by an order of magnitude.

This research clearly illuminates a path forward:

https://ekinakyurek.github.io/papers/ttt.pdf

Deeply appreciate your thoughtful comment.

593. tim333 ◴[] No.42150039[source]
Or maybe not https://149909199.v2.pressablecdn.com/wp-content/uploads/201... https://waitbutwhy.com/2015/01/artificial-intelligence-revol...
594. tim333 ◴[] No.42150091[source]
A financial crash is quite possible, even while AI keeps getting better. Most of the players are not making profits.
595. lagrange77 ◴[] No.42150305{6}[source]
Yes, but that's essentially my point. Where to draw the system boundary? The brain is also composed of multiple components and does IO with external components, that are definitely not considered part of it.
596. whiplash451 ◴[] No.42150308{6}[source]
They definitely tell you jack. GPTs have reached their glass ceiling as they've sucked up all available data and overfit to benchmarks.

Their models have tons of use cases, but OpenAI and Anthropic are now in a product/commercial play.

replies(1): >>42156007 #
597. intended ◴[] No.42150352{10}[source]
Ok.

Well, I can see the direction you are going. I am unconvinced though - it hasn't threaded the needle.

Reason being

1) They are doing both in cube farms in the PHP, RTO + replacement by GenAI.

2) In high tech, they are also trying to achieve these contradictory goals: RTO + increased GenAI capability to reduce manpower needs.

I can see a desire to reduce costs. I can't see how RTO to improve teamwork sits with using LLMs to do human work.

replies(1): >>42153386 #
598. osigurdson ◴[] No.42150433{6}[source]
I'm pretty sure that evolution would select out anything that could not generalize pretty quickly.
599. ppeetteerr ◴[] No.42152090{7}[source]
Keyword is "draft". You still need a person to review the response with knowledge of the context of the issue. It's the same as my email example.
600. ppeetteerr ◴[] No.42152124{5}[source]
Google became a billion-dollar company by creating the best search and indexing service at the time and putting ads around the results (that and YouTube). They didn't own the answer to the question.
601. wslh ◴[] No.42152444{3}[source]
That is just for chess, which is not comparable to societal/historical content, science, etc. Chess also has well-defined rules.
602. salad-tycoon ◴[] No.42153386{11}[source]
That's a lot of weight on RTO and why it's being implemented. A company is fully able to have you RTO, maybe even move, and fire you the next day/month/year, and desiring increased teamwork is not mutually exclusive with preparing for layoffs. Plus, I imagine at these companies there are multiple hands all doing things for their own purposes and metrics without knowing what the other hand is doing. Mid-level Jan's Christmas bonus depends on exit-interview metrics showing workers leaving due to lack of teamwork; Bob's bonus depends on quickly implementing the code.
603. dmix ◴[] No.42153556{4}[source]
When you make 4 billion in revenue you can generally figure out how to become profitable over time

High growth early days is a poor time to judge that

604. aniforprez ◴[] No.42154431{8}[source]
Planes don't fly radically differently than birds. Birds can flap their wings because they're light and small. Birds don't fly by flapping their wings, they flap their wings to fly. The flapping is to gain and maintain height, but beyond that they use the same principle to stay aloft. Birds also expend massive amounts of energy to flap and eat a lot of food to compensate. Large predatory birds try their best to glide as much as possible as a consequence. To carry a human, you need a proportionally larger machine, and the square-cube law would stop us from being able to flap plane-sized wings. Aside from that, birds and planes fly on the same Bernoulli's principle of fluid motion, and to compensate for being unable to take off from rest with wings, we made engines that provide thrust.

If AGI doesn't take the form of human-ish intelligence, then we'd never know it was intelligence. This means that the target is always a "visible" human like intelligence and that was gained through evolution and millions of years of experimentation and records. It will most certainly not take that long for human-like intelligence to form given our current progress but we would not recognise anything else.

605. ben_w ◴[] No.42156007{7}[source]
That's one possibility.

Rumours have been in abundance since GPT-4 came out due to the lack of clarity, but that lack of clarity seems to also exist within the companies themselves.

OpenAI and Anthropic certainly seem to be doing a lot of product stuff, but at the same time the only reason people have for saying OpenAI is not making a profit is all the money they're also spending on training new models — I've yet to use o1, it's still in beta and is only 2 months old (how long was Gmail in "beta", 5 years?)

I also don't know how much self-training they do, training on signals from the model's output and how users rate that output, only that (1) it's more than none, (2) some models like Phi-3 use at least some synthetic data[0], and (3) making a model to predict how users will rate the output was one of the previous big breakthroughs.

If they were to train on almost all their own output, estimating API prices as approximately actual costs, and given the claimed[1] public financial statements, that's on the order of a quadrillion (1e15) tokens, compared to the mere ~1e13 claimed for some of the larger models.
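Rough order-of-magnitude arithmetic, with made-up round numbers purely for illustration (not the actual figures):

    revenue_usd = 4e9              # assume a few billion dollars of usage (illustrative)
    usd_per_token = 4e-6           # assume a few dollars per million tokens (illustrative)
    tokens = revenue_usd / usd_per_token
    print(f"{tokens:.0e} tokens")  # ~1e15, i.e. on the order of a quadrillion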

[0] https://arxiv.org/abs/2404.14219

[1] I've not found the official sources nor do I know where to look for them, all I see are news websites reporting on the numbers without giving citations I can chase up

606. BlueTemplar ◴[] No.42157351{6}[source]
> It looks like you're writing unsubstantiated nonsense. Would you like to turn it all caps ?

clippy.gif

607. tim333 ◴[] No.42157364[source]
Working towards it more than on it.

People use the term in different ways. It generally implies being able to think like a human or better. OpenAI have always said they are working towards it, I think DeepMind too. It'll probably take more than an LLM.

It's economically a big deal because if it can out-think humans you can set it to develop the next improved model and basically make humans redundant.

608. qnleigh ◴[] No.42159546{3}[source]
A charitable interpretation of what you're saying is that humans produce lots of original data from their experiences of the world, like thinking about their experiences, imagining what they would have done differently, and perhaps even dreaming. I agree with the root comment that something is fundamentally missing, and probably it is the ability to iteratively learn from one's own experiences, test understanding, and recursively improve.

There are definitely teams working on applying reinforcement learning to LLMs. Maybe that will unlock new potential from finite training data.

609. qnleigh ◴[] No.42159607{4}[source]
And essays for homework assignments that would get a decent grade, art for the headings of blog posts, rewrites of an email to make it sound more professional, summaries of long documents, or just generally to create something that gives the semblance that you did a lot of work while actually having done very little work.

And yes of course hallucinations are a huge problem for most of these use cases, but they aren't stopping people from using them anyway. We have a new misinformation problem and it has no agenda. It's basically just white noise.

So my money is also on this changing the world dramatically, just not in the in uniformly positive way that the hype said it will.

610. MyFirstSass ◴[] No.42166964[source]
This is the most interesting comment in this highly autistic field.
611. parineum ◴[] No.42167865{7}[source]
> Welcome to capitalism. The market forces will squeze max value out of them.

What a ringing endorsement.

612. Tagbert ◴[] No.42169563{7}[source]
Considering that I do get questions and comments about the projects: yes, people are reading this.
613. pas ◴[] No.42170301{3}[source]
it's not that simple

"""

Intuitively, an overparameterized model will generalize well if the model’s representations capture the essential information necessary for the best model in the model class to perform well

"""

https://iclr-blogposts.github.io/2024/blog/double-descent-de...

614. snapcaster ◴[] No.42172995{4}[source]
At least it's a testable measurable definition. Everyone else seems to be down boring linguistic rabbit holes or nonstop goal post moving
615. nomel ◴[] No.42177920{7}[source]
> You won't get an LLM outputting "wait, that's not right" halfway through their original output

No, that's one contiguous response from the LLM. I have screenshots, because I was so surprised the first time. I've had it happen many times. This was (as I always use LLMs) via direct API calls. The first case it happened, it was with the largest Llama 3.5. It usually only happens one-shot, no context, base/empty system prompt.

> LLMs don't exhibit such an inner feedback loop

That's not true at all. Next-token prediction is based on all previous text, including the word that was just produced. It uses what it has said so far to decide what it will say next, within the same response, just as a Markov chain would.
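Schematically, the decoding loop looks like this (a sketch, not any particular library's API; the key point is that each step conditions on the entire sequence so far, including everything the model itself just produced):

    import random

    def next_token_distribution(tokens):
        """Stand-in for one forward pass of the model over the whole context."""
        return {"token_a": 0.6, "token_b": 0.4}   # placeholder probabilities

    def generate(prompt_tokens, max_new=10):
        tokens = list(prompt_tokens)
        for _ in range(max_new):
            probs = next_token_distribution(tokens)             # sees *all* prior tokens
            choices, weights = zip(*probs.items())
            tokens.append(random.choices(choices, weights)[0])  # new token joins the context
        return tokens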

616. ◴[] No.42178012{7}[source]
617. rocho ◴[] No.42181780{4}[source]
That's correct. I saw a paper recently that showed how LLM performance collapses when they are trained on synthetic data.
618. KETpXDDzR ◴[] No.42206600[source]
LLMs are glorified Markov chains in the end. They can't reason or think, even if they are good at pretending they can. What we need is a totally different approach IMO.
619. rodgerd ◴[] No.42230225{6}[source]
Goatseus Maximus is what you're after.