
196 points by zmccormick7 | 1 comment
aliljet No.45387614
There's a broad misunderstanding here. Context could be infinite, but the real bottleneck is understanding intent late in a multi-step operation. A human can effectively discard or disregard prior information as the narrow window of focus moves to a new task; LLMs seem incredibly bad at this.

Having more context while remaining unable to focus on the latest task is the real problem.

1. tom_m No.45404093
You don't want to discard prior information, though. That's the problem with small context windows. Humans don't forget the original request as they ask for more information or work through a long task. They may forget details along the way, but not the original goal and the important parts — barring comprehension issues, ADHD, etc.

This isn't a misconception. Context is a limitation. You can effectively have an AI agent build an entire application from a single prompt if it has enough (and the proper) context. Models with 1M-token context windows do better; models with small context windows often can't complete the task at all. I've tested this many, many times. It's tedious, but you can find the right model and the right prompts for success.
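The tension both commenters describe — shedding stale detail without losing the original goal — is exactly what agent frameworks try to handle when trimming conversation history to fit a context budget. As a minimal sketch (the `trim_history` function and the `len // 4` token estimate are hypothetical illustrations, not any particular framework's API; real systems use an actual tokenizer):

```python
def trim_history(messages, budget):
    """Drop the oldest intermediate messages until the conversation fits
    the token budget, but always keep the first two messages pinned:
    the system prompt and the original user request (the 'goal')."""
    # Crude token estimate for illustration: ~4 characters per token.
    est = lambda m: len(m["content"]) // 4
    pinned = messages[:2]   # system prompt + original request, never dropped
    rest = list(messages[2:])
    # Evict oldest intermediate messages first; recent turns survive longest.
    while rest and sum(map(est, pinned + rest)) > budget:
        rest.pop(0)
    return pinned + rest
```

The design choice mirrors the point above: a human-like agent can forget the middle of a long task, but forgetting the original request is fatal, so the goal is pinned regardless of how aggressively the middle is evicted.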