I think a lot of maths is secretly much easier than it appears; it's just missing an explanation that makes the core idea easy to grasp and build upon.
For example, I've been meaning to write an explorable[0] that explains positional notation in any integer base (binary, hexadecimal, etc.) in a way that any child who can read a clock should be able to follow. Possibly teaching multiplication along the way.
Conceptually it's quite simple: imagine a counter that looks like an analog clock, but with the digits 0 to 9 and a +1 and a -1 button. We can use it to count between zero and nine, but if we add one to nine, we step back to zero. Oh no! Ok, but we can solve this by adding a second counter: whenever the first counter does a full circle, we increase the second by one. A full circle on the first counter is ten steps, so each step on the second counter represents ten steps. But what if the second counter also needs to do a full circle? No problem, just add a third! And so on.
So then the natural question is... what if we have fewer digits than 0 to 9? Like 0 to 7? Oh, we get octal numbers. Just 0 and 1 gives binary. And adding extra digits using letters from the alphabet gives hexadecimal.
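Here's a minimal sketch of that dial mechanism in Python (my own illustration, not from any existing tool; increment, dials, and base are names I made up, and each counter is one entry in a list, least significant dial first):

    def increment(dials, base=10):
        # Press the +1 button: bump the first dial; each time a dial
        # completes a full circle (goes past base - 1), carry one step
        # to the next dial, adding a new dial if we run out of them.
        dials = list(dials)
        i = 0
        while True:
            if i == len(dials):
                dials.append(0)  # need another counter? just add one
            dials[i] += 1
            if dials[i] < base:
                return dials
            dials[i] = 0  # full circle: back to zero, carry onwards
            i += 1

    # Counting to eight with only the digits 0 and 1 (binary):
    n = [0]
    for _ in range(8):
        n = increment(n, base=2)
    print(n)  # [0, 0, 0, 1], i.e. 1000 in binary

Swap in base=8 for octal or base=16 for hexadecimal; the carry rule never changes, which is the whole point.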
The core approach is just a very physical representation of base-10 positional notation, which hopefully makes the counting easy to do and follow. No "advanced" concepts like "base" or "exponentiation" are needed, but those abstractions are easy to put on top when kids get older.
I've asked around among friends who have kids: most of them learn to read clocks somewhere between four and six, and by the time they're eight they can all count to 100. So I'd expect that, at least in theory, this approach could make binary and hexadecimal numbers understandable at that age already.
EDIT: funnily enough, the article also mentions that, precisely thanks to positional notation, almost every adult can immediately answer the question "what is one billion minus one".
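And the -1 button is just the mirror image: a dial at 0 wraps back to base - 1 and borrows a step from the next one. A hedged sketch in the same made-up representation as above (assuming the counter currently shows something above zero):

    def decrement(dials, base=10):
        # Press the -1 button: a dial at 0 wraps back to base - 1 and
        # borrows one step from the next dial along.
        dials = list(dials)
        i = 0
        while dials[i] == 0:
            dials[i] = base - 1
            i += 1
        dials[i] -= 1
        while len(dials) > 1 and dials[-1] == 0:
            dials.pop()  # retire dials that are no longer needed
        return dials

    # One billion: a 1 preceded by nine zero-dials (least significant first):
    print(decrement([0] * 9 + [1]))  # nine 9s -> 999,999,999

Every zero-dial flips to 9 on the way to the single borrow, which is exactly why the answer is immediate to anyone who has internalized the notation.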
It took me some time, but now it's a lot better -- like a little game whose rules I somewhat know. I now accept that mathematicians often worry about maximal abstraction or about odd pathological corner cases. That acceptance lets me wade through the complexity without getting overwhelmed like I used to.
I didn't fall in love with math until Statistics, Discrete Math, Set Theory and Logic.
It was the realization that math is a language that can describe all the patterns of the real world, and that can help cut through bullshit and reckon real truths about it.
We were given the exact text of the final exam weeks in advance, and were allowed to do anything at all to prepare, including collaborating with the other students or asking other professors (who couldn't make heads or tails of it). The goal was to be able to answer 1 or 2 out of the 10 questions on the exam, and even if you couldn't you got a B+ at minimum.
I wish I had a better memory, but I believe one of the questions I successfully answered was to prove Post's Theorem using Turing machines? The problem is, I never used the knowledge from that class again, but to this day I still think about it. It would be amazing to go back and learn more about that fascinating intersection of philosophy and computer science.
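For reference, here's the standard statement I believe that question was about, in LaTeX (hedged recollection -- the exam's exact wording may well have differed): Post's Theorem ties the arithmetical hierarchy to iterated Turing jumps.

    % Post's Theorem: for every n >= 0 and every A \subseteq \mathbb{N},
    \[
      A \in \Sigma^0_{n+1}
        \iff A \text{ is recursively enumerable in } \emptyset^{(n)},
    \]
    % where \emptyset^{(n)} is the n-th Turing jump of the empty set.
    % A corollary: A \in \Delta^0_{n+1} \iff A \leq_T \emptyset^{(n)}.

The "using Turing machines" part presumably meant constructing explicit oracle machines for each direction.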
What I loved the most was that it combined hard math with the kind of esoteric metaphysical questions about mathematics that many practitioners despise because they feel it undermines their work. It turns out that when you go that deep, it's impossible not to touch on the headier stuff.