612 points meetpateltech | 3 comments
sho_hn No.42950830
Anyone have a take on how the coding performance (quality and speed) of the 2.0 Pro Experimental compares to o3-mini-high?

The 2 million token window sure feels exciting.

replies(2): >>42950964, >>42952255
1. mohsen1 No.42950892
I don't know what those "needle in a haystack" benchmarks are really testing for, because in my experience dumping a large amount of code into the context doesn't work as you'd expect. It works better if you keep the context small.
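
For what it's worth, here's roughly what I mean by keeping it small — a toy Python sketch that pulls in only the files matching the task instead of the whole repo (the paths, keywords, and size cap are all made up):

  # Rough sketch: include only files that mention the task's keywords,
  # instead of dumping the whole repo. Paths/keywords are placeholders.
  from pathlib import Path

  def build_prompt(task, repo_root, keywords, max_chars=40_000):
      parts = [f"Task: {task}\n"]
      budget = max_chars - len(parts[0])
      for path in sorted(Path(repo_root).rglob("*.py")):
          text = path.read_text(errors="ignore")
          if not any(kw in text for kw in keywords):
              continue
          snippet = f"\n# file: {path}\n{text}"
          if len(snippet) > budget:
              continue  # skip anything that would blow the size cap
          parts.append(snippet)
          budget -= len(snippet)
      return "".join(parts)

  prompt = build_prompt(
      task="Fix the token-refresh race in the login flow",
      repo_root="./src",
      keywords=["refresh_token", "login", "Session"],
  )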
replies(2): >>42950964, >>42952255
2. airstrike No.42950964
I think the sweet spot is to include some context that's limited to the scope of the problem and use the longer context window to keep longer conversations going. I often go back to an earlier message in the thread and rewrite it with the understanding gained from the longer conversation, so I can keep managing the context window.
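
Something like this, if you squint — a toy sketch of the rewind-and-rewrite step (the message shape mirrors the usual chat APIs; the history and summary here are invented):

  # Toy sketch: cut the history at an earlier user turn and fold what was
  # learned since into that message, so the conversation restarts shorter.
  def rewind_and_rewrite(messages, turn_index, summary):
      original = messages[turn_index]["content"]
      rewritten = f"{original}\n\nWhat we've established since:\n{summary}"
      return messages[:turn_index] + [{"role": "user", "content": rewritten}]

  history = [
      {"role": "user", "content": "Why does the importer drop rows?"},
      {"role": "assistant", "content": "Possibly the CSV parser..."},
      {"role": "user", "content": "Here are the logs..."},
      {"role": "assistant", "content": "Looks like a retry issue..."},
  ]
  # Everything after turn 2 is discarded; its substance lives in the summary.
  shorter = rewind_and_rewrite(history, 2,
      "The bug is in the retry logic, not the parser; rows drop only on 429s.")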
3. cma No.42952255
Claude works well for me when I load code up to around 80% of its 200K context and then ask for changes. If the whole project can't fit, I try to at least get the headers in, followed by the most relevant files. It doesn't seem to degrade. If you're using something like an AI IDE, a lot of the time it doesn't really give you the full 200K context.
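
The loading step is basically a greedy fill — something like this untested sketch (the chars/4 token estimate and the relevance scores are stand-ins for whatever you actually use):

  # Untested sketch: pack headers first, then files by relevance, stopping
  # near 80% of a 200K-token window. chars/4 is a crude token estimate.
  from pathlib import Path

  BUDGET = int(200_000 * 0.8)  # leave headroom for instructions + reply

  def approx_tokens(text):
      return len(text) // 4

  def pack_context(repo_root, score):
      headers, others = [], []
      for p in Path(repo_root).rglob("*"):
          if p.is_file():
              (headers if p.suffix in (".h", ".hpp") else others).append(p)
      others.sort(key=lambda p: score.get(p.name, 0), reverse=True)
      used, parts = 0, []
      for p in headers + others:
          text = f"// file: {p}\n" + p.read_text(errors="ignore")
          cost = approx_tokens(text)
          if used + cost > BUDGET:
              continue  # skip files that would overflow the budget
          parts.append(text)
          used += cost
      return "\n".join(parts)

  # score is whatever relevance signal you have (grep hits, import graph, etc.)
  context = pack_context("./src", {"parser.c": 10, "main.c": 4})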