
230 points taikon | 5 comments
1. zbyforgotp ◴[] No.42547305[source]
LLMs are not that different from humans: in both cases you have limited working memory and need to fit the most relevant context into it. This means that a new knowledge base built for LLMs should be useful for humans too. There should be a lot of cross-pollination between these tools.

But we need a theory of the differences too. Right now, how we differentiate the tools is kind of random. We need ergonomics for LLMs.

replies(2): >>42548228 #>>42548440 #
2. andai ◴[] No.42548228[source]
>ergonomics for LLMs

When I need to build something for an LLM to use, I ask the LLM to build it. That way, by definition, the LLM has a built-in understanding of how the system should work, because the LLM itself invented it.

Similarly, when I was doing some experiments with a GPT-4-powered programmer in the early days, I had to omit most of the context (just provide method stubs). During that time I noticed that most of the code written by GPT-4 was consistently the same, so I could omit the context because the LLM would already "know" (based on its mental model) what the code should be.

replies(2): >>42548382 #>>42548682 #
3. matthewsinclair ◴[] No.42548382[source]
> the LLM has a built-in understanding of how the system should work, because the LLM itself invented it

Really? I’m not sure that the word “understanding” means the same thing to you as it does to me.

4. photonthug ◴[] No.42548440[source]
> This means that if you have a new knowledge base for llms it should be useful for humans too. There should be a lot of cross pollination between these tools.

This is realistic, but for that very reason it's unfortunately going to be unpopular, because people expect magic / want zero effort.

5. EagnaIonat ◴[] No.42548682[source]
> the LLM has a built-in understanding of how the system should work,

That's not how an LLM works. It doesn't understand your question, nor the answer. It can only give you a statistically likely sequence of words that should follow what you gave it.