I would have thought that, as soon as Gödel published his first incompleteness theorem, the entire field of mathematics would have gone full throttle on trying to find more axioms. Instead, in the nearly a century since then, Gödel’s work has been treated more as an odd fact, largely confined to niche foundational studies, than as the seed of any mainstream program (I’m aware of Feferman, Friedman, etc., but my point is that there is significantly less research in this area than in most other areas of mathematics).
But why? Gödel's theorem does not depend on the number of axioms, only on the axiom set being recursively enumerable. Add as many new axioms as you like: as long as the resulting (consistent) theory is still recursively enumerable, it is incomplete for exactly the same reason.
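To make that concrete, here is a minimal LaTeX sketch of why piling on axioms cannot escape the theorem, using only the second incompleteness theorem applied to sound, recursively enumerable extensions of PA:

```latex
% Suppose $T$ is a consistent, recursively enumerable extension of PA.
% By G\"odel's second incompleteness theorem, $T \nvdash \mathrm{Con}(T)$.
% Starting from PA and repeatedly adding the (true) consistency statement
% therefore gives a strictly increasing chain of sound, recursively
% enumerable theories:
\[
  \mathsf{PA}
  \;\subsetneq\; \mathsf{PA} + \mathrm{Con}(\mathsf{PA})
  \;\subsetneq\; \mathsf{PA} + \mathrm{Con}(\mathsf{PA})
                 + \mathrm{Con}\bigl(\mathsf{PA} + \mathrm{Con}(\mathsf{PA})\bigr)
  \;\subsetneq\; \cdots
\]
% Every stage is again incomplete, so no recursively enumerable batch of
% new axioms can ever close the gap.
```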
In fact, both Gödel and Turing worked on this problem quite a bit. Gödel thought we might be able to find some sort of “meta-principle” that could guide us toward an ever-increasing hierarchy of more powerful axioms, and Turing’s work on ordinal progressions followed exactly this line of thinking. Feferman’s completeness theorem even showed that every true arithmetical sentence is provable somewhere along a suitable transfinite progression of theories obtained by iterating reflection principles. (Of course such a progression is not recursively axiomatizable, let alone finitely, but one can certainly extract useful finite axioms from it; the strength of PA, after all, is equivalent to the recursive iteration, up to ε_0, of ‘Q_{α+1} = Q_α + Con(Q_α)’, where Q_0 is Robinson arithmetic.)
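For readers who want that progression spelled out, here is one way to write it; this is only a sketch, assuming the usual convention of indexing along a recursive well-ordering of order type ε_0 and taking unions at limit stages:

```latex
% The consistency progression from the parenthesis above, written out.
\begin{align*}
  Q_0          &= \text{Robinson arithmetic } \mathsf{Q},\\
  Q_{\alpha+1} &= Q_\alpha + \mathrm{Con}(Q_\alpha),\\
  Q_\lambda    &= \bigcup_{\beta < \lambda} Q_\beta
                  \qquad \text{for limit ordinals } \lambda.
\end{align*}
% The claim above is that iterating this up to \varepsilon_0 yields a
% theory of the same arithmetical strength as PA.
```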
If you’re talking about every true sentence in the language of PA, then not all of them are derivable in PA. If you’re talking about the theorems of PA, then those miss infinitely many true statements in the language of PA.
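Concretely, a few standard witnesses: the first follows directly from the second incompleteness theorem, and the other two are natural combinatorial statements known to be independent of PA.

```latex
% True sentences in the language of PA that PA does not prove
% (each is provable in a suitable stronger system):
\[
  \mathrm{Con}(\mathsf{PA}) \ \text{(G\"odel II)}, \qquad
  \text{Goodstein's theorem (Kirby--Paris)}, \qquad
  \text{the Paris--Harrington theorem}.
\]
```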
Harvey Friedman’s “grand conjecture” is that virtually every theorem working mathematicians actually publish can already be proved in Elementary Function Arithmetic (a theory much weaker than PA, in fact). So the majority of mathematicians are not pushing the boundaries of the existing foundations of mathematics, although there is certainly plenty of activity regardless.
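For scale, here is a rough sketch of what EFA is, assuming the common identification of EFA with IΔ_0 + exp:

```latex
% A rough sketch of EFA, assuming the presentation as $\mathrm{I}\Delta_0 + \exp$:
%   * the basic axioms for $0$, successor, $+$, $\cdot$, $\le$,
%     plus an axiom making exponentiation total;
%   * induction restricted to $\Delta_0$ (bounded-quantifier) formulas.
% PA, by contrast, has induction for arbitrary first-order formulas and
% proves the consistency of EFA, so the gap in strength is strict:
\[
  \mathsf{EFA} \;=\; \mathrm{I}\Delta_0 + \exp \;\subsetneq\; \mathsf{PA},
  \qquad
  \mathsf{PA} \vdash \mathrm{Con}(\mathsf{EFA}).
\]
```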