And I'm not the only one saying this, but the bit about LLMs is likely throwing the baby out with the bathwater. Yes, the "AI-ification" of everything is horrible, and people are shoehorning it into places where it's not useful. But to say that every single LLM interaction is wrong or useless is just not true (though it might be if you limit yourself to only freely available models!). Using LLMs effectively is a skill in itself, and not one to be underestimated. Just because you failed to get one to do something it's not well-suited to doesn't mean it can't do anything at all.
Though I do agree with the conclusion (do things, make things) anyway.