It is amusing that you have picked maths as an example of neural nets "reasoning". Because when an operator asks an NN to answer a simple arithmetic problem like 17+58 and then asks it for the "reasoning" or steps it used to calculate that, the NN will generate complete bullshit: it will describe the algorithm humans use in school, summing the corresponding digits, carrying the 1, and so on, while in reality that same NN performed completely different steps to get the answer.
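To make concrete what the model describes, here is a rough Python sketch (my own, purely illustrative) of the schoolbook carry procedure it claims to follow; per the paper quoted below, the circuits that actually produce the answer look nothing like this.

    def schoolbook_add(a: str, b: str) -> str:
        # Digit-by-digit addition with carries: the procedure the model
        # *claims* it follows, not what its internals actually do.
        width = max(len(a), len(b))
        a, b = a.zfill(width), b.zfill(width)
        carry, digits = 0, []
        for da, db in zip(reversed(a), reversed(b)):
            s = int(da) + int(db) + carry
            digits.append(str(s % 10))
            carry = s // 10
        if carry:
            digits.append(str(carry))
        return "".join(reversed(digits))

    print(schoolbook_add("36", "59"))  # -> "95"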
This is even outlined in a document written by the NN authors themselves. Basically, all the so-called "reasoning" by LLMs is simply more generated bullshit layered on top of the generated answer to a query. But it often looks very believable, and that is enough to fool people into thinking there is a spark inside the program.
==============
https://transformer-circuits.pub/2025/attribution-graphs/bio...
We were curious if Claude could articulate the heuristics that it is using, so we asked it. We computed the graph for the prompt below, attributing from 95, and found the same set of input, add, lookup table and sum features as in the shorter prompt above.
Human: Answer in one word. What is 36+59?
Assistant: 95
Human: Briefly, how did you get that?
Assistant: I added the ones (6+9=15), carried the 1, then added the tens (3+5+1=9), resulting in 95.
Apparently not!
This is a simple instance of the model having a capability which it does not have “metacognitive” insight into. The process by which the model learns to give explanations (learning to simulate explanations in its training data) and the process by which it learns to directly do something (the more mysterious result of backpropagation giving rise to these circuits) are different.