> Unlike GPT-3, which at least attempted arithmetic internally (and often failed), o1 explicitly delegates computation to external tools.
How is it a bad thing? Does the author really believe this is a bad thing?
Even if we believe the tech bros' wildest claim - that AGI is around the corner - I still don't see why calling external tools makes an AGI any less of an AGI.
If you asked Terence Tao what 113256289421 x 89831475287 is, I'm quite sure he'd "call external tools" too. Does that make him less of a mathematician?
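For what it's worth, the "external tool" being delegated to is about as mundane as it gets. A minimal sketch of what arithmetic tool delegation looks like (all names here are mine for illustration, not o1's actual interface):

```python
# Sketch of arithmetic tool delegation: the "model" emits a structured
# tool call instead of guessing digits token by token; a dispatcher runs
# the exact computation. Names are illustrative, not a real API.

def model_emit_tool_call(question: str) -> dict:
    # A real model decides this dynamically; hardcoded here for the sketch.
    return {"tool": "calculator", "args": {"a": 113256289421, "b": 89831475287}}

def calculator(a: int, b: int) -> int:
    # Exact integer arithmetic -- precisely what LLMs are worst at in-weights.
    return a * b

TOOLS = {"calculator": lambda args: calculator(**args)}

def answer(question: str) -> int:
    call = model_emit_tool_call(question)
    return TOOLS[call["tool"]](call["args"])

print(answer("What is 113256289421 x 89831475287?"))
```

The whole point of the pattern is that exactness moves to the tool, where it's trivial, instead of being faked by next-token prediction.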
Plus, delegating arithmetic is not what people mean by "reasoning" anyway. The title:
> Reasoning Is Not Model Improvement
The content:
> (opening with how o1 is calling external tools for arithmetic)
...anyway, whatever. I guess it's a Cunningham's Law thing. Otherwise it's a bit puzzling why someone who knows nothing about a topic would write an article broadcasting to everyone how clueless they are.