
1481 points sandslash | 14 comments
whilenot-dev ◴[] No.44320812[source]
I watched Karpathy's Intro to Large Language Models[0] not so long ago and must say that I'm a bit confused by this presentation, and it's a bit unclear to me what it adds.

1.5 years ago he saw tool use in agent systems as the future of LLMs, which seemed reasonable to me. There was (and maybe still is) potential for a lot of business cases to be explored, but every system is defined by its boundaries nonetheless. We still don't know all the challenges we face at those boundaries, whether they could be modelled into a virtual space, handled by software, and therefore also potentially by AI and businesses.

Now it all just seems to be analogies and what role LLMs could play in our modern landscape. We should treat LLMs as encapsulated systems of their own ...but sometimes an LLM becomes the operating system, sometimes it's the CPU, sometimes it's the mainframe from the 60s with time-sharing, a big fab complex, or even outright electricity itself?

He's showing an iOS app, which seems to be, sorry for the dismissive tone, an example of a better-looking counter. This demo app was in a presentable state after a day, and it took him a week to implement Google's OAuth2 stuff. Is that somehow exciting? What was that?

The only way I could interpret this is that it just shows the big divide we're currently in. LLMs are a final API product for some, but an unoptimized generative software model with sophisticated-but-opaque algorithms for others. Both are utterly in need of real-world use cases - the product side for fresh training data, and the business side for insights, integrations and shareholder value.

Am I all of a sudden the one lacking imagination? Is he just drinking the CEO Kool-Aid while still holding his investments in OpenAI? Can we at least agree that we're still dealing with software here?

[0]: https://www.youtube.com/watch?v=zjkBMFhNj_g

replies(5): >>44320931 #>>44321098 #>>44321426 #>>44321439 #>>44321941 #
1. bwfan123 ◴[] No.44320931[source]
> Am I all of a sudden the one lacking imagination?

No, the reality of what these tools can do is sinking in. The rubber is meeting the road and I can hear some screeching.

The boosters are in the five stages of grief, coming to terms with what was once AGI and is now a mere co-pilot, while the haters are coming to terms with the fact that LLMs can actually be useful in a variety of use cases.

replies(4): >>44320973 #>>44321074 #>>44321532 #>>44321573 #
2. acedTrex ◴[] No.44320973[source]
I actually quite agree with this; there is some reckoning happening on both sides. It's quite entertaining to watch, and a bit painful as well, as someone who is on the "they are useless" side and is noticing some very clear use cases where a value-add is present.
replies(2): >>44321120 #>>44321407 #
3. anothermathbozo ◴[] No.44321074[source]
> The reality of what these tools can do is sinking in

It feels premature to make determinations about how far this emergent technology can be pushed.

replies(1): >>44321337 #
4. natebc ◴[] No.44321120[source]
I'm with you. I give several of 'em a shot a few times a week (thanks Kagi for the fantastic menu of choices!). Over the last quarter or so I've found that the bullshit:useful ratio is creeping toward the useful side. They still answer like a high-school junior writing a five-paragraph essay, but a decade of sifting through blogspam has honed my own ability to cut through that.
replies(1): >>44321223 #
5. diggan ◴[] No.44321223{3}[source]
> but a decade of sifting through blogspam has honed my own ability to cut through that.

Now a different skill needs to be honed :) Add "Be concise and succinct without removing any details" to your system prompt and hopefully it will shape its output a bit better.
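For anyone who talks to these models through an API rather than a chat UI, the tip above amounts to prepending that instruction as a system message. A minimal sketch, using the common chat-completions message convention (role/content dicts); the helper name and history handling are my own illustration, adapt to whatever client you use:

```python
# Prepend a brevity instruction as a system prompt before the conversation.
SYSTEM_PROMPT = "Be concise and succinct without removing any details."

def build_messages(user_question, history=None):
    """Return a chat message list with the brevity system prompt first."""
    messages = [{"role": "system", "content": SYSTEM_PROMPT}]
    if history:
        messages.extend(history)  # prior turns, if any
    messages.append({"role": "user", "content": user_question})
    return messages

msgs = build_messages("Summarize the tradeoffs of RAG vs fine-tuning.")
print(msgs[0]["content"])  # the system instruction comes first
```

The resulting list is what you would pass as the `messages` argument to a chat-style completion endpoint; most providers honor the system role as a standing instruction for the whole conversation.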

6. Joel_Mckay ◴[] No.44321337[source]
The cognitive dissonance is predictable.

Now hold my beer, as I cast a superfluous rank to this trivial 2nd order Tensor, because it looks awesome wasting enough energy to power 5000 homes. lol =3

7. Joel_Mckay ◴[] No.44321407[source]
In general, the functional use cases traditionally covered by basic heuristics are viable for a reasoning LLM. These are useful for search, media processing, and language translation.

LLMs are not AI, and never were... and while the definition has been twisted by marketing BS, that does not mean either argument is 100% correct or in error.

The LLM is now simply a cult, and a rather old one, dating back to the Lisp machines of the 1960s.

Have a great day =3

replies(1): >>44321494 #
8. johnxie ◴[] No.44321494{3}[source]
LLMs aren’t perfect, but calling them a “cult” misses the point. They’re not just fancy heuristics; they’re general-purpose function approximators that can reason, plan, and adapt across a huge range of tasks with zero task-specific code.

Sure, it’s not AGI. But dismissing the progress as just marketing ignores the fact that we’re already seeing them handle complex workflows, multi-step reasoning, and real-time interaction better than any previous system.

This is more than just Lisp nostalgia. Something real is happening.

replies(1): >>44321690 #
9. pera ◴[] No.44321532[source]
Exactly! What skeptics don't get is that AGI is already here and we are now starting a new age of infinite prosperity, it's just that exponential growth looks flat at first, obviously...

Quantum computers and fusion energy are basically solved problems now. Accelerate!

replies(1): >>44321584 #
10. hn_throwaway_99 ◴[] No.44321573[source]
> The boosters are in 5 stages of grief coming to terms with what was once AGI and is now a mere co-pilot, while the haters are coming to terms with the fact that LLMs can actually be useful in a variety of usecases.

I couldn't agree with this more. I often get frustrated because I feel like the loudest voices in the room are so laughably extreme. On one side you have the "AGI cultists", and on the other you have the "But the hallucinations!!!" people. I've personally been pretty amazed by the state of AI (nearly all of this stuff was the domain of Star Trek just a few years ago), and I get tons of value out of many of these tools, but at the same time I hit tons of limitations and I worry about the long-term effect on society (basically, I think this "ask AI first" approach, especially among young people, will kinda turn us all into idiots, similar to the way Google Maps made it hard for most of us to remember simple directions). I also can't help but roll my eyes when I hear all the leaders of these AI companies going on about how AI will cause a "white collar bloodbath" - there are nuggets of truth in that, but these folks are just using scare tactics to hype their oversold products.

11. hn_throwaway_99 ◴[] No.44321584[source]
This sounds like clear satire to me, but at this point I really can't tell.
replies(1): >>44329395 #
12. Joel_Mckay ◴[] No.44321690{4}[source]
Sure, I have seen the detrimental impact on some teams, and it does not play out as marketers suggest.

The trick is that people see meaning in well-structured nonsense, without understanding that high-dimensional vector spaces simply abstract associative false equivalency with an inescapable base error rate.

I wager neuromorphic computing is likely more viable than LLM cults. The LLM subject is incredibly boring once you tear it apart, and less interesting than watching Opuntia cactus grow. Have a wonderful day =3

13. _se ◴[] No.44329395{3}[source]
Nah this one is just a lemming.
replies(1): >>44335063 #
14. pera ◴[] No.44335063{4}[source]
How dare you