
1481 points sandslash | 2 comments
whilenot-dev ◴[] No.44320812[source]
I watched Karpathy's Intro to Large Language Models[0] not so long ago and must say that I'm a bit confused by this presentation, and it's a bit unclear to me what it adds.

1.5 years ago he saw all the tool use in agent systems as the future of LLMs, which seemed reasonable to me. There was (and maybe still is) potential for a lot of business cases to be explored, but every system is defined by its boundaries nonetheless. We still don't know all the challenges we face at those boundaries, whether they could be modelled into a virtual space and handled by software, and therefore potentially also by AI and businesses.

Now it all just seems to be analogies and what role LLMs could play in our modern landscape. We should treat LLMs as encapsulated systems of their own ...but sometimes an LLM becomes the operating system, sometimes it's the CPU, sometimes it's the mainframe from the 60s with time-sharing, a big fab complex, or even outright electricity itself?

He's showing an iOS app, which seems to be, sorry for the dismissive tone, an example of a better-looking counter. The app was in a presentable state for a demo after a day, yet it took him a week to implement Google's OAuth2 integration. Is that somehow exciting? What was that?

The only way I could interpret this is that it just shows the big divide we're currently in. LLMs are a final API product for some, but an unoptimized generative software model with sophisticated-but-opaque algorithms for others. Both are utterly in need of real-world use cases - the product side for fresh training data, and the business side for insights, integrations, and shareholder value.

Am I all of a sudden the one lacking imagination? Is he just slurping the CEO Kool-Aid while still holding his investments in OpenAI? Can we at least agree that we're still dealing with software here?

[0]: https://www.youtube.com/watch?v=zjkBMFhNj_g

replies(5): >>44320931 #>>44321098 #>>44321426 #>>44321439 #>>44321941 #
1. demosthanos ◴[] No.44321439[source]
What you're missing is the audience.

This talk is different from his others because it's directed at aspiring startup founders. It's about how we conceptualize the place of an LLM in a new business. It's designed to provide a series of analogies, any one of which may or may not help a given startup founder break out of the tired, binary talking points they've absorbed from the internet ("AI all the things" vs "AI is terrible") in favor of a more nuanced perspective on the role of AI in their plans. It's soft and squishy rhetoric because it's not about engineering; it's about business and strategy.

I honestly left impressed that Karpathy has the dynamic range necessary to speak to both engineers and business people, but it also makes sense that a lot of engineers would come out of this very confused about what he's on about.

replies(1): >>44321622 #
2. whilenot-dev ◴[] No.44321622[source]
I get that - motivating young founders is difficult - and I think he has a charmingly geeky way of provoking some thoughts. But on the other hand: Why mainframes with time-sharing from the 60s? Why operating systems? LLMs to tell you how to boil an egg, seriously?

Putting my engineering hat on, I understand his idea of the "autonomy slider" as a lazy workaround for a software implementation that deals with one system boundary. He should inspire people to seek out unknown boundaries, not provide implementation details for existing ones. His MenuGen app would probably be better off using a web image search instead of LLM image generation. Enhancing deployment pipelines with LLM setups is something for the last generation of DevOps companies, not the next one.

Please mention, just once, the value proposition and the responsibilities that come with handling large quantities of valuable data - LLMs wouldn't exist without it! What makes quality data for an LLM? What about personal data?