I don't think LLMs are inevitable, but what this piece lacks (and that's fine; I like the point and the writing anyway) is a plausible alternative. LLMs might not be inevitable, but until something better comes along, why would they go away? Even if we assume people are completely delusional in thinking the models add any value, why would that change at any point in the future?