
2127 points by bakugo | 2 comments
anotherpaulg:
Claude 3.7 Sonnet scored 60.4% on the aider polyglot leaderboard [0], WITHOUT USING THINKING.

Tied for 3rd place with o3-mini-high. Sonnet 3.7 has the highest non-thinking score, taking that title from Sonnet 3.5.

Aider 0.75.0 is out with support for 3.7 Sonnet [1].

Thinking support and thinking benchmark results coming soon.

[0] https://aider.chat/docs/leaderboards/

[1] https://aider.chat/HISTORY.html#aider-v0750

gwd:
Interesting that the "correct diff format" score went from 99.6% with Claude 3.5 to 93.3% for Claude 3.7. My experience with using claude-code was that it consistently required several tries to get the right diff. Hopefully all that will improve as they get things ironed out.
macNchz:
Reasoning models pretty reliably seem to do worse at exacting output formats/structured outputs—so far with Aider it has been an effective strategy to have o1 "think" about the issue at hand, and have Sonnet implement. Interested to try 3.7 in various combinations of reasoning effort.
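Aider supports this split natively via its architect mode, where one model plans the change and a second model writes the edits. A minimal sketch of the pairing described above, assuming a `.aider.conf.yml` in the project root (the model names are illustrative and depend on what your API keys expose):

```yaml
# .aider.conf.yml — architect/editor pairing sketch
architect: true            # first model reasons about the change
model: o1                  # "thinking" model used as the architect
editor-model: claude-3-5-sonnet-20241022  # model that emits the actual diffs
```

The same split can be requested per-invocation with `aider --architect --model ... --editor-model ...`.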
bugglebeetle:
It's funny, because I've also found myself doing exactly this with R1 + Sonnet 3.5 recently. Windsurf lets you have a chat-mode exchange with one model and then switch to another to implement. The reasoning models all seem pretty poorly suited to agentic workflows on their own, but work well when paired with Claude.