To twist the Upton Sinclair quote: It's difficult to convince a man to believe in something when his company's valuation depends on him not believing it.
Sure, the average engineer probably isn't thinking in those explicit terms, but I can easily imagine a cultural miasma that leads people to avoid thinking of certain implications. (It happens everywhere, no reason for software development to be immune.)
> If you expose your data to an LLM
I like to say that LLMs should be imagined as javascript in the browser: You can't reliably keep any data secret, and a determined user can get it to emit anything they want.
On reflection, that understates the problem, since that threat-model doesn't raise sufficient alarm about how data from one user can poison things for another.