gpt-5.2: $1.75 input / $0.175 cached input / $14.00 output (per 1M tokens)
gpt-5.1: $1.25 input / $0.125 cached input / $10.00 output (per 1M tokens)

>Input:
>$21.00 / 1M tokens
>Output:
>$168.00 / 1M tokens
That's the most "don't use this" pricing I've seen on a model.
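To put the multiple in concrete terms, here's a quick back-of-the-envelope comparison using the numbers pasted above. The request size (10k input / 5k output tokens) is an arbitrary example, not anything official:

```python
# Rough cost comparison at the per-1M-token prices quoted above.
# The 10k-input / 5k-output request is a made-up example size.

def request_cost(input_price, output_price, input_tokens=10_000, output_tokens=5_000):
    """Dollar cost of one request at the given per-1M-token prices."""
    return input_price * input_tokens / 1e6 + output_price * output_tokens / 1e6

standard = request_cost(1.75, 14.00)    # gpt-5.2 rates from the list above
pro      = request_cost(21.00, 168.00)  # Pro rates from the quoted pricing

print(f"standard: ${standard:.4f}, pro: ${pro:.4f}, ratio: {pro / standard:.0f}x")
# -> standard: $0.0875, pro: $1.0500, ratio: 12x
```

Same prompt, roughly an order of magnitude more money per call.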
General intelligence has gotten ridiculously less expensive. I don't know if it's because of compute and energy abundance, attention mechanisms becoming more efficient, or both, but we have to acknowledge the bigger picture and the relative prices.
Pro barely outperforms Thinking in OpenAI's published numbers, but comes at ~10x the price, with an explicit disclaimer that it's slow, on the order of minutes.
If the published performance numbers are accurate, it seems like it'd be incredibly difficult to justify the premium.
At least on the surface level, it looks like it exists mostly to juice benchmark claims.
Essentially a newbie trick that works really well but isn't efficient, yet still looks like an amazing breakthrough.
(if someone knows the actual implementation I'm curious)
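For what it's worth, the kind of trick I have in mind is plain best-of-n sampling: run the same prompt several times in parallel and have a judge pick the best answer. This is purely a guess at what Pro might be doing, not anything OpenAI has confirmed; the sketch below uses the standard chat completions API, the model name is a placeholder, and not every model supports n > 1:

```python
# Speculative sketch of best-of-n sampling. NOT OpenAI's actual Pro implementation.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

def best_of_n(prompt: str, model: str = "gpt-5.1", n: int = 8) -> str:
    # 1) Sample n independent answers (the expensive part: you pay for
    #    n full generations, which is roughly where a ~10x premium could come from).
    samples = client.chat.completions.create(
        model=model,
        messages=[{"role": "user", "content": prompt}],
        n=n,
    )
    answers = [c.message.content for c in samples.choices]

    # 2) One more call asking the model to judge which candidate is best.
    numbered = "\n\n".join(f"[{i}] {a}" for i, a in enumerate(answers))
    judge = client.chat.completions.create(
        model=model,
        messages=[{
            "role": "user",
            "content": f"Question:\n{prompt}\n\nCandidate answers:\n{numbered}\n\n"
                       "Reply with only the number of the best answer.",
        }],
    )
    try:
        return answers[int(judge.choices[0].message.content.strip())]
    except (ValueError, IndexError):
        return answers[0]  # fall back to the first sample if judging fails

print(best_of_n("Prove that the square root of 2 is irrational."))
```

You burn roughly n times the tokens for a modest accuracy bump, which would line up with the pricing above. But again, that's just my guess.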