While I am too old to say what is taught today, this is exactly the map vs. territory issue I mentioned in my other comment.
It is all there in what you would have been taught, but hidden, because we tend to avoid the hay-in-the-haystack problems and focus on the needles, since we don't have access to the hay.
As an example that can cause huge arguments: if for analysis you used Rudin, go look for an equality binary operator in that book. Since any construction only ever describes a measure-zero set of reals, it is in general impossible to decide the equality of two arbitrary real numbers. ZFC uses constructibility, Spivak uses Cauchy sequences, etc.
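To make that concrete, here is a minimal sketch (my own illustration, not from Rudin) of a real represented as a Cauchy-style sequence of rational approximations. The point is that you can semi-decide that two reals are *apart*, but a search can never certify equality; the `sqrt2` and `apart` helpers and the `max_n` cutoff are all hypothetical names for the sketch.

```python
from fractions import Fraction

def sqrt2(n):
    # Rational approximation of sqrt(2) to within 1/2**n, via bisection.
    lo, hi = Fraction(1), Fraction(2)
    while hi - lo > Fraction(1, 2**n):
        mid = (lo + hi) / 2
        if mid * mid < 2:
            lo = mid
        else:
            hi = mid
    return lo

def apart(x, y, max_n=64):
    # Semi-decision procedure: True if some precision n separates the
    # two approximation intervals (|x(n) - y(n)| > 2/2**n).  If x == y,
    # no such n exists, so we give up at max_n: equality is never
    # *confirmed* this way, only "not yet refuted".
    for n in range(1, max_n):
        if abs(x(n) - y(n)) > Fraction(2, 2**n):
            return True
    return False

one_half = lambda n: Fraction(1, 2)
print(apart(sqrt2, one_half))   # True: inequality detected at low precision
print(apart(sqrt2, sqrt2))      # False: equality was never certified, only unrefuted
```

The asymmetry in `apart` is the whole story: "not equal" is a needle you can find; "equal" is hay you can only fail to rule out.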
If you look at the paper[1] on the equivalence of PAC learnability and finite VC dimension, it is all there: framing learning as a decision problem, set shattering, etc.
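The shattering idea is simple enough to demo. Below is a toy sketch (my own, not from the paper): a hypothesis class shatters a point set if every +/- labeling of the set is realized by some hypothesis. I use one-sided thresholds `h_t(x) = [x >= t]` as the class; the helper names and the candidate-threshold list are assumptions of the sketch.

```python
from itertools import product

def labelings_realized(points, thresholds):
    # All labelings of `points` achievable by some threshold classifier.
    return {tuple(x >= t for x in points) for t in thresholds}

def shatters(points, thresholds):
    # Shattered iff every one of the 2**n labelings is realized.
    return labelings_realized(points, thresholds) == set(
        product([False, True], repeat=len(points)))

# A few thresholds around/between the sample points are enough to
# enumerate every behavior of this class on that sample.
ts = [-1.0, 0.5, 1.5, 3.0]

print(shatters([1.0], ts))        # True: a single point gets both labels
print(shatters([1.0, 2.0], ts))   # False: no threshold labels 1.0 positive and 2.0 negative
```

So the VC dimension of this class is 1, and the paper's result is that finiteness of exactly this quantity is what makes a class PAC-learnable.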
Richardson's theorem, especially in light of more recent papers, is another lens.
With the idea that all models are wrong but some are useful, it does seem that curricula tend to leave these concepts to postgraduate classes, often hiding or ignoring them; in descriptive complexity theory they should have been front and center, IMHO.
Assuming something is computable or learnable is important for finding pathological cases that are useful; just don't confuse the map with the territory.
We have enough examples to know the neuron model is wrong, but the proposed models we have found aren't useful yet, and what this post provides suggests that may always hold.
Physics and other fields make similar assumptions, like the assumption that Laplacian determinism is true, despite counterexamples.
Gödel, Rice, Turing, etc. may be proven wrong some day, but right now, halting ~= the frame problem ~= symbol grounding ~= system identification in the general case is the safe bet.
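For anyone who hasn't seen why halting anchors that chain, here is the classic diagonal argument, phrased as a Python sketch (my own framing; `make_paradox` and the oracle are hypothetical names). Any claimed total `halts(f, arg)` oracle can be fed a program built to contradict it:

```python
def make_paradox(halts):
    # `halts` is a hypothetical oracle: halts(f, arg) -> True iff f(arg) halts.
    def paradox(f):
        if halts(f, f):       # if the oracle says f(f) halts...
            while True:       # ...loop forever,
                pass
        return "halted"       # ...otherwise halt immediately.
    return paradox

# paradox(paradox) halts exactly when the oracle says it doesn't, so no
# total oracle can be right about it.  Demonstrate with one wrong oracle:
always_loops = lambda f, arg: False   # an oracle that always answers "loops forever"
paradox = make_paradox(always_loops)
print(paradox(paradox))               # prints "halted", refuting that oracle's answer
```

Whichever fixed answer an oracle gives about `paradox(paradox)`, the program does the opposite, which is the diagonalization behind Turing's proof.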
But that doesn't help get work done, nor does it help find the new math that might change things.
[1] https://dl.acm.org/doi/10.1145/76359.76371