The way it would solve that problem would look more like some combination of Hebbian learning and MuZero: it starts by exploring the space around it in terms of interactions, information gathering, information parsing, and forming associations, until it eventually understands that your task involves writing bytes to a file in a certain structure that, when executed, produces certain output, and understands the rules of that structure that make it give that output.
And it would be able to do this whether running as a model on your computer or as a robot typing on a keyboard, all from the same code.
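To make the Hebbian half of that concrete, here is a minimal sketch of that kind of association-forming update, assuming a simple linear layer and using Oja's variant of the Hebbian rule so the weights stay bounded. The learning rate and layer sizes are arbitrary, and the MuZero-style planning component (learned world model plus tree search) is omitted entirely:

```python
import numpy as np

# Minimal Hebbian sketch: weights strengthen between co-active units
# ("neurons that fire together wire together"). Uses Oja's rule, which
# adds a decay term so the weights don't grow without bound.
rng = np.random.default_rng(0)

n_inputs, n_outputs = 8, 4
W = rng.normal(scale=0.1, size=(n_outputs, n_inputs))
eta = 0.01  # learning rate (arbitrary, for illustration)

for _ in range(1000):
    x = rng.random(n_inputs)  # observed input pattern
    y = W @ x                 # linear response of the output units
    # Oja's rule per unit: dW_ij = eta * (y_i * x_j - y_i^2 * W_ij)
    W += eta * (np.outer(y, x) - (y**2)[:, None] * W)
```

The point of the sketch is only the shape of the learning signal: associations form from raw co-occurrence during exploration, with no task-specific labels anywhere in the loop.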
LLMs appear to "reason" because most people don't actually reason. A lot of people, even in technical fields, operate on a principle of information lookup: they look at the things they have been taught to do, figure out which known problem fits closest, and repeat the steps with a few modifications along the way. LLMs do pretty much the same thing. If you operate like this, then sure, LLMs "reason". But there is a reason LLMs are barely useful in actual technical work: under the hood, to make them do things autonomously, you have to write wrapper code and prompts that often take as long to write and fine-tune as the actual code itself.
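To illustrate that last point, here is a hypothetical sketch of the kind of wrapper loop you end up writing to get "autonomous" behavior out of an LLM. `call_llm`, the tool names, and the JSON protocol are all placeholders, not any real API:

```python
import json

def call_llm(prompt: str) -> str:
    """Placeholder for a real model call; expected to return JSON text."""
    raise NotImplementedError

# Hand-written tools the model is allowed to invoke.
TOOLS = {
    "read_file": lambda path: open(path).read(),
    "write_file": lambda path, text: open(path, "w").write(text),
}

def run_agent(task: str, max_steps: int = 10) -> str:
    history = [f"Task: {task}"]
    for _ in range(max_steps):
        # The model must be told, at every step, exactly what format
        # to answer in, or the loop breaks on the next line.
        prompt = "\n".join(history) + (
            '\nRespond with JSON: {"tool": ..., "args": [...]}'
            ' or {"done": true, "answer": ...}'
        )
        reply = json.loads(call_llm(prompt))  # crashes on malformed output
        if reply.get("done"):
            return reply["answer"]
        result = TOOLS[reply["tool"]](*reply["args"])
        history.append(f"{reply['tool']} -> {result}")
    return "gave up"
```

Nothing here is exotic, but every piece (the format instructions, the parsing, the error handling this sketch skips, the step limit) has to be specified and debugged by hand, which is exactly the point.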