
555 points maheshrijal | 3 comments
1. typs No.43707854
I’m not sure I fully understand the rationale for having newer mini versions (e.g. o3-mini, o4-mini) when previous thinking models (e.g. o1) and smart non-thinking models (e.g. gpt-4.1) exist. Does anyone here use these for anything?
replies(2): >>43707901 >>43707916
2. sho_hn No.43707901
I use o3-mini-high in Aider, where I want a model to employ reasoning but not put up with the latency of the non-mini o1.
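For context, pointing Aider at a particular model is done via its `--model` flag; a minimal sketch (exact model identifiers depend on your OpenAI account and Aider version):

```shell
# Assumes an OpenAI API key is set in the environment.
export OPENAI_API_KEY=sk-...   # placeholder, use your own key

# Run Aider with a small reasoning model rather than the larger o1.
aider --model o3-mini
```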
3. drvladb No.43707916
o1 is a much larger model that is more expensive to operate on OpenAI's end. Having a smaller, "newer" (roughly equating newer with more capable) model means you can match the performance of larger, older models while reducing inference and API costs.
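The cost argument is easy to make concrete. A rough sketch with hypothetical per-token prices (the numbers below are illustrative, not OpenAI's actual pricing):

```python
# Hypothetical prices in USD per 1M tokens, chosen only to illustrate
# the large-vs-small reasoning model cost gap.
PRICES = {
    "large-reasoning": {"input": 15.00, "output": 60.00},  # o1-class (illustrative)
    "small-reasoning": {"input": 1.10, "output": 4.40},    # o3-mini-class (illustrative)
}

def request_cost(model: str, input_tokens: int, output_tokens: int) -> float:
    """Cost in USD of a single API call, given per-1M-token prices."""
    p = PRICES[model]
    return (input_tokens * p["input"] + output_tokens * p["output"]) / 1_000_000

# A typical coding request: 10k tokens of context in, 2k tokens out.
large = request_cost("large-reasoning", 10_000, 2_000)
small = request_cost("small-reasoning", 10_000, 2_000)
print(f"large: ${large:.4f}  small: ${small:.4f}  ratio: {large / small:.1f}x")
```

With these made-up prices the smaller model is over 13x cheaper per call, so if it matches the larger model's quality on a task, the economics strongly favor it.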