E.g. imagine you could write a blog post with some genuine insight in a niche field, but you know traffic isn't going to get directed to your site. Instead, an LLM will ingest it and draw on the material when people ask about the topic, without giving credit. If you know that's what will happen, there's little incentive to write the post in the first place. You might think, "what's the point?"
Related to this: computers have been superhuman at chess for two decades, yet strong human players still get credit, recognition, and, I'd guess, satisfaction from reaching the level they do. Although the LLM situation is obviously on a whole other level.
I guess the main (valid) concern is that LLMs get so good at thinking that humans just can't come up with ideas as good as theirs, or execute those ideas as well... and then what? (Although that doesn't seem to be the case currently.)