It's complicated. You have to understand that when you ask an LLM something, you have the model itself, which is kind of like a function: put something in, get something out. However, you also pass an argument to that function: the context.
So, in a literal sense, no, they do not learn as they go: the model, that function, is unchanged by what you send it. But the context can be modified. So, in some sense, an LLM in an agentic loop that goes and reads some code from GitHub can include that information in the context it uses for later calls, so it will "learn" within the session.
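To put that in code (a toy sketch; `call_model` is a made-up placeholder, not any real API):

```python
# Rough sketch (call_model is a stand-in, not a real API): the model itself
# is frozen between calls; the only thing that changes is the context you
# pass in, so any "learning" within a session lives in that list.

def call_model(context: list[str], prompt: str) -> str:
    """Stand-in for an LLM call: same weights every time, output depends
    only on the arguments."""
    return f"answer using {len(context)} earlier items for: {prompt!r}"

context: list[str] = []

# "Agentic" loop: whatever the agent reads (say, code pulled from GitHub)
# gets appended to the context, so later calls can build on it.
for step in ["read the repo's README", "read src/main.py", "propose a fix"]:
    answer = call_model(context, step)
    context.append(f"{step} -> {answer}")

print(context[-1])  # later calls see everything accumulated so far
```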
> If the latter, what happens as less and less code gets written by human experts?
So, this is still a possible problem, because future LLMs will end up being trained on code written by earlier LLMs. Whether that actually turns out to be a problem remains to be seen; I don't have a good handle on the debates in this area, personally.
Verification for code would be a formal proof, and those are hard; with a few exceptions like seL4, most code does not have any formal proof. Games like chess and Go are much easier to verify. Math is in the middle: it also needs formal proofs, but most of math *is* the act of producing those proofs, and even then there are still unproven conjectures.
You can write proofs in Coq or Dafny, or model-check a specification with TLA+, to actually verify your code.
This will be required for any software where correctness matters.
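For a flavor of what a machine-checked proof about code looks like, here's a toy example in Lean 4 rather than the Coq/Dafny tools mentioned above; the function and theorem are illustrative, not from any real project:

```lean
-- Toy example: a tiny "program" plus a proof, checked by the compiler,
-- that it meets its specification.
def double (n : Nat) : Nat := n + n

-- Specification: the result is always even.
theorem double_even (n : Nat) : ∃ k, double n = 2 * k := by
  refine ⟨n, ?_⟩   -- claim the witness k = n
  unfold double    -- goal becomes n + n = 2 * n
  omega            -- linear arithmetic closes it
```

The point is just that the checker rejects the file if the proof doesn't go through, which is the same kind of cheap, mechanical verification that makes chess and Go easy targets.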