358 points | andrewstetsenko | 8 comments

taysix ◴[] No.44360808[source]
I had a fun result the other day from Claude. I opened a script in Zed and asked it to "fix the error on line 71". Claude happily went and fixed the error on line 91....

1. There was no error on line 91; it just did some inconsequential formatting on that line.

2. More importantly, it ignored the very specific line I told it to go to.

It felt like I was playing telephone with the LLM, which is so strange for purely text-based communication.

This was me trying to get better at using LLMs while coding, seeing whether I could "one-shot" some very simple things. Of course, doing this _very_ tiny fix myself would have been faster. It just felt weird, and it reinforces the idea that the LLM isn't actually thinking at all.

replies(4): >>44360819 #>>44360879 #>>44360917 #>>44363593 #
1. klysm ◴[] No.44360819[source]
LLMs probably have bad awareness of line numbers
replies(4): >>44360870 #>>44362858 #>>44368342 #>>44375001 #
2. mcintyre1994 ◴[] No.44360870[source]
I suspect if OP highlighted line 71 and added it to chat and said fix the error, they’d get a much better response. I assume Cursor could create a tool to help it interpret line numbers, but that’s not how they expect you to use it really.
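For example, the chat client could resolve a highlighted line into that line's text plus a few lines of surrounding context before handing it to the model, so the model anchors on content rather than on counting. A rough sketch in Python (hypothetical helper, not Cursor's or Zed's actual mechanism):

    # Hypothetical helper: turn a 1-based line number into the line text
    # plus surrounding context, so the model never has to count lines.
    from pathlib import Path

    def line_with_context(path: str, line_no: int, radius: int = 3) -> str:
        lines = Path(path).read_text().splitlines()
        if not 1 <= line_no <= len(lines):
            raise ValueError(f"{path} has {len(lines)} lines; no line {line_no}")
        start = max(0, line_no - 1 - radius)
        end = min(len(lines), line_no + radius)
        return "\n".join(f"{i + 1:>4} | {lines[i]}" for i in range(start, end))

    # The agent would splice the snippet into the prompt, e.g.
    # "Fix the error on this line:\n" + line_with_context("script.py", 71)
    # ("script.py" and 71 are stand-ins for the user's file and line)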
replies(1): >>44361092 #
3. recursive ◴[] No.44361092[source]
How is this better than just using a formal language again?
replies(1): >>44361793 #
4. svachalek ◴[] No.44361793{3}[source]
Who said it's better? It's a design choice. Someone can easily write an agent that takes instructions in any language you like.
replies(1): >>44362395 #
5. recursive ◴[] No.44362395{4}[source]
The current batch of AI marketing.
6. crackalamoo ◴[] No.44362858[source]
Not sure how tools like Cursor work under the hood, but this seems like an easy context-engineering problem to fix.
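One way to do it: number the lines before they ever reach the model, so "line 71" refers to visible text instead of the model's own unreliable counting. A rough sketch (an assumed approach, not necessarily what Cursor actually does):

    # Sketch of the context-engineering fix: prepend explicit line numbers
    # to the file contents that go into the model's context window.
    def numbered(source: str) -> str:
        return "\n".join(f"{i:>4}: {line}"
                         for i, line in enumerate(source.splitlines(), start=1))

    with open("script.py") as f:  # "script.py" is a stand-in path
        prompt = ("Fix the error on line 71. Here is the file, "
                  "with line numbers prepended:\n\n" + numbered(f.read()))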
7. ProllyInfamous ◴[] No.44368342[source]
I do not code/program, but I do read thousands of fiction pages annually. LLMs (Perplexity, specifically) have been my lifetime favorite book club member — I can ask anything.

However, I can't just say "on page 123..." I've found it's better to either provide the quote, or describe the context, and then ask how it relates to [another concept]. Or I'll say "at the end of chapter 6, Bob does X, then why Y?" (perhaps this is similar to asking a coding LLM to fix a specific function instead of a specific line?).

My favorite examples of this have been sitting with living authors and discussing their books; the creators are usually left jaw-dropped, particularly the unknowns.

Works for non-fiction, too (of course). But for all those books you didn't read in HS English classes, you can somewhat recreate all that class discussion your teachers always attempted to foster — at your own discretion/direction.

8. weatherlite ◴[] No.44375001[source]
That's the thing. We're expecting the tool to have a clear understanding of its own limitations by now and ask for better prompts (or say: I don't know, I can't etc). The fact it just does something wacky is not good at all to the consistency of these tools.