
5 points by eniz | 1 comment

I'm increasingly curious about user privacy when it comes to hosted LLM usage. Many popular LLM apps and APIs require sending your prompts, messages, and potentially sensitive information to remote servers for processing—sometimes routed through multiple third-party providers.

- How much do you trust major LLM providers (OpenAI, Anthropic, Google, etc.) with your data?

- For those working on or deploying LLM applications, what approaches do you take to maximize user privacy?

- Do you think end users are generally aware of where their data is going and how it's being used, or is this still an overlooked issue?

I'd love to hear your perspectives, experiences, or any best practices you recommend for privacy when deploying LLM-powered applications.

1. greyjoyduck | No.44020112
It's kind of a nightmare right now, considering they're basically processing any data they can get their hands on.