- How much do you trust major LLM providers (OpenAI, Anthropic, Google, etc.) with your data?
- For those working on or deploying LLM applications, what approaches do you take to maximize user privacy?
- Do you think end users are generally aware of where their data is going and how it's being used, or is this still an overlooked issue?
I'd love to hear your perspectives, experiences, or any best practices you recommend on privacy when deploying LLM-powered use cases.
I have zero trust in these companies on this count, and that's the main reason why I avoid using products that incorporate "AI".
This is like the early days of e-commerce, when people didn't trust buying things over the internet.
The LLM uploaded it to the internet and then "found" the exact picture, while claiming it didn't exist anywhere else. That source is still viewable online despite my immediately submitting a removal request. It wasn't private data, but I felt like I'd been had.
If you like bad analogies, why not go with a car analogy? At least this one's accurate:
I wouldn't trust Sam Altman any more than a used car salesman. The only difference is that Sam Altman has persuaded me to pay him to sell me as the product.
I do not use Apple devices, and I do not use Google services. While certainly not the majority view, I don't think that's an unusual position on HN.