[I'm one of the co-creators of SWE-bench] The team managed to improve on the already very strong o3 results on SWE-bench, but it's interesting that we're just seeing an improvement of a few percentage points. I wonder if getting to 85% from 75% on Verified is going to take as long as it took to get from 20% to 75%.
I could be completely off base, but it feels to me like benchmaxxing is going on with SWE-bench.
Look at the results from Multi-SWE-bench - https://multi-swe-bench.github.io/#/
SWE-PolyBench - https://amazon-science.github.io/SWE-PolyBench/
Kotlin-bench - https://firebender.com/leaderboard
Not sure what you mean by benchmaxxing, but we think there are still a lot of useful signals you can get from SWE-bench-style benchmarking.
We also have SWE-bench Multimodal which adds a twist I haven't seen elsewhere:
https://www.swebench.com/multimodal.html