A solution is to put someone extra into the workflow to check the final result. That way, AI will actually create more jobs. Ha!
"This article will be posted on our prestigious news site. Our readers don't know that most of our content is AI slop that our 'writers' didn't even glance over once, so please check if you find anything that was left over from the LLM conversation and should not be left in the article. If you find anything that shouldn't stay in the article, please remove it. Don't say 'done' and don't add your own notes or comment, don't start a conversation with me, just return the cleaned up article."
And someone will put "Prompt Engineer" in their resume.
Not long after we invent a replicator machine, the entire Earth is gonna be turned into paperclips.
Or get software engineers to produce domain-specific tooling rather than having the domain rely on generic tooling, which leads to mistakes like this (although this is speculation... still, it seems to me the author of that article was using the vanilla ChatGPT client).
/s I am now thinking of setting up an "AI Consultancy" which will be able to provide both of these resources to those seeking such services. I mean, why have only one when both are available?
If a beginner writer thinks AI can write a better article than they can, it seems like they’ll just rely on the AI and never hone their craft.