LLMs don't model anything, but they are still very useful. In my opinion, the reason they are useful (aside from encoding a massive amount of information) is that language itself models reality, so we see a simulated modeling of reality as an artifact.
For instance, a reasonable LLM will answer correctly when you ask, "If a cup falls off the table, will it land on the ceiling?" But that isn't because the LLM can model the scenario with known rules the way a physics calculation, or even innate human instinct, might. Getting AI to do that sort of modeling effectively is much more complex than next-token prediction; even dividing reality into discrete units may be a challenge. Without this type of thinking, I don't see full AGI arising any time soon.
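To make the distinction concrete, here's a toy sketch of my own (an illustration of the contrast, not how any real LLM works): one function applies an explicit rule about the world, gravity pulls things down so the cup ends up on the floor, while the other just picks whichever continuation is most common in some made-up word statistics.

    # Toy contrast: explicit world model vs. next-token-style statistics.
    # All names and numbers here are invented for illustration.

    def physics_model(start_height_m: float) -> str:
        """Explicit model of the scenario: gravity accelerates objects
        downward, so anything that falls from above the floor lands on it."""
        gravity_points_down = True
        if gravity_points_down and start_height_m > 0:
            return "floor"
        return "stays put"

    def language_model(prompt: str) -> str:
        """Stand-in for next-token prediction: no simulation of the scenario,
        just the continuation that co-occurred most often with the prompt
        (frequencies are fabricated for this sketch)."""
        continuation_counts = {"floor": 9000, "ground": 800, "ceiling": 3}
        return max(continuation_counts, key=continuation_counts.get)

    print(physics_model(start_height_m=0.75))  # floor
    print(language_model("If a cup falls off the table it lands on the"))  # floor

Both return the right answer, but only the first is doing anything like modeling; the second just reflects regularities in text, which is why the output looks like reasoning about the world without being it.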
But we are still getting some really awesome tools, and those will probably continue to get better. They really are powerful, and a bit scary if you poke around.