
612 points | meetpateltech | 1 comment
sho_hn No.42950830
Anyone have a take on how the coding performance (quality and speed) of the 2.0 Pro Experimental compares to o3-mini-high?

The 2 million token window sure feels exciting.

replies(2): >>42950892 #>>42956069 #
mohsen1 No.42950892
I don't know what those "needle in a haystack" benchmarks are really testing for, because in my experience dumping a large amount of code into the context doesn't work as you'd expect. It works better if you keep the context small.
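For reference, a needle-in-a-haystack benchmark works roughly like this: bury one "needle" fact at a chosen depth inside a long filler context, then ask the model to retrieve it. A minimal sketch of the prompt-construction side (the filler text and function names here are illustrative, not from any specific benchmark):

```python
def build_haystack(needle: str, n_filler: int, depth: float) -> str:
    """Build a needle-in-a-haystack prompt: n_filler filler sentences
    with one 'needle' fact inserted at a relative depth
    (0.0 = start of context, 1.0 = end of context)."""
    filler = [
        f"Filler sentence number {i} about nothing in particular."
        for i in range(n_filler)
    ]
    pos = int(depth * len(filler))
    filler.insert(pos, needle)
    return " ".join(filler)

# Bury the fact halfway into ~1000 sentences of filler.
haystack = build_haystack("The magic number is 42.", n_filler=1000, depth=0.5)
# A harness would then prepend this to "What is the magic number?"
# and score whether the model's reply contains "42".
```

The critique above is that this only measures retrieval of one isolated fact, which says little about reasoning over a whole codebase dumped into the context.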
replies(2): >>42950964 #>>42952255 #
cma No.42952255
Claude works well for me when I load code up to around 80% of its 200K-token context and then ask for changes. If the whole project can't fit, I try to at least get the headers in, and then the most relevant files. It doesn't seem to degrade. If you're using something like an AI IDE, a lot of the time it doesn't really give you the full 200K context.
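The strategy described (fill ~80% of a 200K-token budget, headers first, then the most relevant files) can be sketched as a greedy packer. This is a hypothetical helper, not from any tool; it approximates tokens as characters divided by 4, a rough heuristic rather than a real tokenizer:

```python
def pack_context(files: dict[str, str],
                 budget_tokens: int = int(200_000 * 0.8)) -> str:
    """Greedily pack source files into a token budget, headers first.

    Tokens are estimated as len(text) // 4 -- a crude stand-in for a
    real tokenizer, good enough for budgeting."""
    def est_tokens(text: str) -> int:
        return len(text) // 4

    # Headers first (key False sorts before True), then the rest
    # in the order the caller supplied (sorted() is stable).
    ordered = sorted(files.items(),
                     key=lambda kv: not kv[0].endswith((".h", ".hpp")))

    chunks, used = [], 0
    for path, text in ordered:
        cost = est_tokens(text)
        if used + cost > budget_tokens:
            continue  # skip files that would blow the budget
        chunks.append(f"// FILE: {path}\n{text}")
        used += cost
    return "\n\n".join(chunks)
```

Because the sort is stable, passing files in relevance order means the most relevant non-header files are packed first once the headers are in.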