Tied for 3rd place with o3-mini-high. Sonnet 3.7 has the highest non-thinking score, taking that title from Sonnet 3.5.
Aider 0.75.0 is out with support for 3.7 Sonnet [1].
Thinking support and thinking benchmark results coming soon.
Has any effort been made to reduce data leakage of this test set? It sounds like these exercises were available on the internet pre-2023, so they'll probably be included in the training data for any modern model, no?
Tests that require thinking about the physical world are the most revealing.
My new favourite is:
You have 2 minutes to cool down a cup of coffee to the lowest temperature you can. You have two options:

1. Add cold milk immediately, then let it sit for 2 mins.
2. Let it sit for 2 mins, then add cold milk.

Which one cools the coffee to the lowest temperature, and why?
Phrased this way, without any help, all but the thinking models get it wrong. They get caught up in the idea that adding milk first cools it fastest and can't escape from that.
Watching people make up their own benchmarks for these things has confirmed one thing for me: the bias that makes people think they mostly have original thoughts is extremely strong. I find that if I have a "good" idea, someone has probably already thought of it, and maybe even written about it. Only about 0.01% of the time do I have an idea one might consider novel, and even that's probably my own bias and overstated. This example just confirms that these models don't really seem to reason, and have a really hard time with the kind of basic generalization humans manage from far fewer examples.