
612 points | meetpateltech
sho_hn No.42950830
Anyone have a take on how the coding performance (quality and speed) of the 2.0 Pro Experimental compares to o3-mini-high?

The 2 million token window sure feels exciting.

replies(2): >>42950892 #>>42956069 #
mohsen1 No.42950892
I don't know what those "needle in a haystack" benchmarks are actually testing for, because in my experience dumping a large amount of code into the context doesn't work as you'd expect. It works better if you keep the context small.
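One way to act on "keep the context small" is to rank candidate snippets by relevance to the question and include only the top few, rather than pasting the whole codebase. This is a minimal sketch with hypothetical helper names and naive keyword scoring; a real retrieval setup would use embeddings or an index.

```python
# Sketch: keep the prompt context small by ranking candidate snippets
# by keyword overlap with the question and keeping only the top few.
# score() and build_context() are hypothetical names, not a real API.

def score(question: str, snippet: str) -> int:
    """Count how many words from the question appear in the snippet."""
    words = set(question.lower().split())
    return sum(1 for w in words if w in snippet.lower())

def build_context(question: str, snippets: list[str], top_k: int = 2) -> str:
    """Keep only the top_k most relevant snippets instead of dumping everything."""
    ranked = sorted(snippets, key=lambda s: score(question, s), reverse=True)
    return "\n\n".join(ranked[:top_k])

snippets = [
    "def parse_config(path): ...",
    "def render_html(template): ...",
    "def load_config_defaults(): ...",
]
ctx = build_context("why does config parsing fail", snippets)
```

With `top_k=2`, only the two config-related snippets survive; the unrelated HTML renderer never reaches the model's context.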
replies(2): >>42950964 #>>42952255 #
airstrike No.42950964
I think the sweet spot is to include only context scoped to the problem, and to use the longer context window to keep longer conversations going. I often go back to an earlier message in the thread and rewrite it with the understanding gained from that longer conversation, so that I can keep managing the context window as I continue.
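The strategy above can be sketched in code: instead of letting a chat history grow without bound, fold what the conversation established back into a rewritten opening message and restart with a short history. The message format and the `compact` helper are assumptions for illustration; here the summary is supplied by hand, where in practice you might write it yourself or ask the model for one.

```python
# Sketch: compact a long chat history by rewriting the first user
# message with the understanding gained so far, then continuing from
# that single message. The dict message format is an assumption.

def compact(history: list[dict], summary: str) -> list[dict]:
    """Replace a long history with the original question plus a summary."""
    first_user = next(m for m in history if m["role"] == "user")
    rewritten = {
        "role": "user",
        "content": first_user["content"] + "\n\nContext so far:\n" + summary,
    }
    return [rewritten]

history = [
    {"role": "user", "content": "Refactor the parser module."},
    {"role": "assistant", "content": "The parser mixes IO with parsing."},
    {"role": "user", "content": "Split those concerns."},
    {"role": "assistant", "content": "Done; IO now lives in reader.py."},
]
summary = "Parsing and IO are now separate; IO lives in reader.py."
compacted = compact(history, summary)
```

The next request then starts from one compact message rather than the full transcript, which keeps token usage low even inside a very large context window.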