←back to thread

435 points crawshaw | 9 comments | | HN request time: 0.954s | source | bottom
Show context
_bin_ ◴[] No.43998743[source]
I've found sonnet-3.7 to be incredibly inconsistent. It can do very well but has a strong tendency to get off-track and run off and do weird things.

3.5 is better for this, ime. I hooked claude desktop up to an MCP server to fake claude-code less the extortionate pricing and it works decently. I've been trying to apply it for rust work; it's not great yet (still doesn't really seem to "understand" rust's concepts) but can do some stuff if you make it `cargo check` after each change and stop it if it doesn't.

I expect something like o3-high is the best out there (aider leaderboards support this) either alone or in combination with 4.1, but tbh that's out of my price range. And frankly, I can't mentally get past paying a very high price for an LLM response that may or may not be useful; it leaves me incredibly resentful as a customer that your model can fail the task, requiring multiple "re-rolls", and you're passing that marginal cost to me.

replies(3): >>43998797 #>>43999022 #>>43999599 #
1. agilebyte ◴[] No.43998797[source]
I am avoiding the cost of API access by using the chat/ui instead, in my case Google Gemini 2.5 Pro with the high token window. Repomix a whole repo. Paste it in with a standard prompt saying "return full source" (it tends to not follow this instruction after a few back and forths) and then apply the result back on top of the repo (vibe coded https://github.com/radekstepan/apply-llm-changes to help me with that). Else yeah, $5 spent on Cline with Claude 3.7 and instead of fixing my tests, I end up with if/else statements in the source code to make the tests pass.
replies(4): >>43999036 #>>43999080 #>>43999160 #>>44013021 #
2. harvey9 ◴[] No.43999036[source]
Guess it was trained by scraping thedailywtf.com
3. actsasbuffoon ◴[] No.43999080[source]
I decided to experiment with Claude Code this month. The other day it decided the best way to fix the spec was to add a conditional to the test that causes it to return true before getting to the thing that was actually supposed to be tested.

I’m finding it useful for really tedious stuff like doing complex, multi step terminal operations. For the coding… it’s not been great.

replies(2): >>43999202 #>>43999758 #
4. nico ◴[] No.43999160[source]
Cool tool. What format does it expect from the model?

I’ve been looking for something that can take “bare diffs” (unified diffs without line numbers), from the clipboard and then apply them directly on a buffer (an open file in vscode)

None of the paste diff extension for vscode work, as they expect a full unified diff/patch

I also tried a google-developed patch tool, but also wasn’t very good at taking in the bare diffs, and def couldn’t do clipboard

replies(1): >>43999210 #
5. nico ◴[] No.43999202[source]
I’ve had this in different ways many times. Like instead of resolving the underlying issue for an exception, it just suggests catching the exception and keep going

It also depends a lot on the mix of model and type of code and libraries involved. Even in different days the models seem to be more or less capable (I’m assuming they get throttled internally - this is very noticeable sometimes in how they try to save on output tokens and summarize the code responses as much as possible, at least in the chat/non-api interfaces)

6. agilebyte ◴[] No.43999210[source]
Markdown format with a comment saying what the file path is. So:

This is src/components/Foo.tsx

```tsx // code goes here ```

OR

```tsx // src/components/Foo.tsx // code goes here ```

These seem to work the best.

I tried diff syntax, but Gemini 2.5 just produced way too many bugs.

I also tried using regex and creating an AST of the markdown doc and going from there, but ultimately settled on calling gpt-4.1-mini-2025-04-14 with the beginning of the code block (```) and 3 lines before and 3 lines after the beginning of the code block. It's fast/cheap enough to work.

Though I still have to make edits sometimes. WIP.

replies(1): >>43999528 #
7. ◴[] No.43999528{3}[source]
8. christophilus ◴[] No.43999758[source]
Well, that’s proof that it used my GitHub projects in its training data.
9. never_inline ◴[] No.44013021[source]
Aider has a --copy-paste mode which can pass in relevant context to web chat UI and you can paste back the LLM answer.