
555 points by maheshrijal | 1 comment
typs No.43707854
I’m not sure I fully understand the rationale for the newer mini versions (e.g. o3-mini, o4-mini) when earlier thinking models (e.g. o1) and smart non-thinking models (e.g. gpt-4.1) already exist. Does anyone here use these for anything?
replies(2): >>43707901 >>43707916
1. sho_hn No.43707901
I use o3-mini-high in Aider when I want a model that reasons but don't want to put up with the latency of the full (non-mini) o1.
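
To make that concrete, here is a minimal sketch, not Aider's internals, of what the same choice looks like against the OpenAI Python SDK: "o3-mini-high" corresponds to the o3-mini model called with reasoning_effort set to "high", trading some latency for deeper reasoning. The prompt is a made-up example.

    # Minimal sketch, assuming the OpenAI Python SDK's chat completions API
    # and its reasoning_effort parameter for o-series models.
    from openai import OpenAI

    client = OpenAI()  # reads OPENAI_API_KEY from the environment

    response = client.chat.completions.create(
        model="o3-mini",            # the small reasoning model
        reasoning_effort="high",    # the "-high" variant: more reasoning than low/medium, still faster than full o1
        messages=[
            {"role": "user", "content": "Refactor this function to remove the nested loop."},
        ],
    )

    print(response.choices[0].message.content)

That latency-versus-reasoning tradeoff is exactly what the comment above is describing: enough reasoning for coding tasks without waiting on the full-size model.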