Presumably what is possible for a person with 6 months of experience is rather limited.
The idea as I understand it is that, with the help of AI, he built apps he would not have been able to write by himself. That means the apps may contain bugs that would be reasonable to fix for someone who had built them with their own knowledge, but that are too hard for the junior. This is a novel situation.
Just because everyone has problems sometimes does not mean all problems are the same, or of the same difficulty. If I were building Starship and ran into some difficult problem, I would most likely give up, as I am way out of my league. I couldn't build a model rocket; I know nothing about rockets. My situation would not be the same as that of a rocket engineer. All problems, all situations, and all people are not the same, and they are not made the same by AI, despite claims to the contrary.
These simplifications/generalisations ("we are all stochastic parrots", "we all make mistakes just like the LLMs do", "we all have bugs", "we all manage somehow") are absurd. Companies do not conduct interviews and promote some people over others out of a sense of whimsy. Experience and knowledge matter. We are not all interchangeable. If LLMs change this somehow, that is worth examining.
I can't believe that LLMs, or devs using LLMs, can suddenly do anything, without limitations. We are not all now equal to Linus and Carmack and the like.