
425 points by sfarshid | 1 comment
beefnugs No.45005811
"At one point we tried “improving” the prompt with Claude’s help. It ballooned to 1,500 words. The agent immediately got slower and dumber. We went back to 103 words and it was back on track."

Isn't this the exact opposite of every other piece of advice we have gotten in a year?

Another piece of general feedback from just recently: someone said we need to generate 10 times, because one out of those will be "worth reviewing" (sketched below).

How can anyone be doing real engineering in such a process: pick the exact needle out of the constantly churning chaos-simulation engine, by whichever criterion applies (crashes least, closest to desire, human readable, random guess)?
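
For concreteness, the "generate 10 times and keep the one worth reviewing" workflow is essentially best-of-N sampling plus a cheap mechanical filter. A minimal Python sketch, assuming an OpenAI-style chat completions client; the model name, the example task, and the "at least parses" check are placeholders standing in for "crashes least", not anything the commenters specified:

    from openai import OpenAI  # assumes the `openai` Python package; any chat API works similarly

    client = OpenAI()  # reads OPENAI_API_KEY from the environment

    def generate_candidates(task: str, n: int = 10) -> list[str]:
        """Ask for n independent completions of the same task in one request
        ("generate 10 times")."""
        resp = client.chat.completions.create(
            model="gpt-4o-mini",  # placeholder model name
            messages=[
                {"role": "system", "content": "Return only raw Python code, no prose or fences."},
                {"role": "user", "content": task},
            ],
            n=n,                  # n independent samples in one call
            temperature=1.0,      # keep sampling diverse so candidates actually differ
        )
        return [choice.message.content or "" for choice in resp.choices]

    def crashes_least(candidates: list[str]) -> list[str]:
        """Crude mechanical filter: keep only candidates that at least parse as Python."""
        survivors = []
        for code in candidates:
            try:
                compile(code, "<candidate>", "exec")
                survivors.append(code)
            except SyntaxError:
                pass
        return survivors

    if __name__ == "__main__":
        task = "Write a Python function that parses ISO-8601 dates."  # placeholder task
        keep = crashes_least(generate_candidates(task))
        print(f"{len(keep)} of 10 candidates survived; review those by hand")

The filter only answers "does it parse"; the other criteria in the parenthetical (closest to desire, human readable) still land on the human reviewer.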

replies(5): >>45005876 #>>45006945 #>>45007356 #>>45009461 #>>45011229 #
1. xboxnolifes No.45011229
It's not the exact opposite of what I've been reading. Basically every person I've read who claims to have success with LLM coding has said that too long a prompt leads to too much context, which leads to the LLM diverging from working on the problem as desired.
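
To put rough numbers on the prompt-size point from the quoted post: ~100 words of prompt is on the order of 150 tokens, while ~1,500 words is roughly 2,000 tokens of standing instructions the model re-reads on every agent step. A minimal sketch of a budget check, assuming the `tiktoken` tokenizer; the encoding name and the 300-token budget are arbitrary placeholders:

    import tiktoken

    enc = tiktoken.get_encoding("cl100k_base")  # placeholder encoding; match it to your model

    def prompt_budget_check(system_prompt: str, budget_tokens: int = 300) -> int:
        """Report how many tokens a system prompt costs and warn when it creeps
        past a self-imposed budget."""
        n_tokens = len(enc.encode(system_prompt))
        n_words = len(system_prompt.split())
        print(f"{n_words} words -> {n_tokens} tokens")
        if n_tokens > budget_tokens:
            print(f"warning: over the {budget_tokens}-token budget; consider trimming")
        return n_tokens

    # e.g. prompt_budget_check(open("agent_prompt.txt").read())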