People who put the effort into checking the output aren't necessarily checking more than style, but some of them will check substance too, so it will still help.
I've been prompting the bot to avoid its tics for as long as I've been using it for anything — call it 3 years now.
It's just a matter of reading and understanding the output, noticing patterns that are repetitious or annoying, and instructing the bot as such: "No. Fucking stop that."
If you're getting a lot of value out of LLM writing right now, either your quality was already garbage and you're just using it to increase volume, or you've let your quality crater.
Biggest one in this case, in my opinion: it's an extremely long article with awkward section headers every few paragraphs. I find any use of "The ___ Problem" or "The ___ Lesson" as a section header especially glaring, and more generally, the pile of superfluous headers of the form "The [oddly-constructed noun phrase]". I mean, googling "The Fire-Retardant Giants" literally returns only this specific article.
Or another one here: the historical stock price data is slightly wrong. For whatever reason, LLMs seem to make mistakes with that often, perhaps because they're operating on downsampled data. The initial red flag is that the first table claims Apple's split-adjusted peak close in 2000 was exactly $1.00.
There are plenty of accuracy issues in the written content as well, but they're not worth getting into.