5 points by eniz | 1 comment

I'm increasingly curious about user privacy when it comes to hosted LLM usage. Many popular LLM apps and APIs require sending your prompts, messages, and potentially sensitive information to remote servers for processing—sometimes routed through multiple third-party providers.

- How much do you trust major LLM providers (OpenAI, Anthropic, Google, etc.) with your data?

- For those working on or deploying LLM applications, what approaches do you take to maximize user privacy? (I sketch one idea below.)

- Do you think end users are generally aware of where their data is going and how it's being used, or is this still an overlooked issue?

I'd love to hear your perspectives, experiences, or any best practices you'd recommend for privacy when deploying LLM-powered applications.
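
To make the second question concrete, here's the kind of client-side mitigation I have in mind: scrub obvious PII from the prompt before it ever leaves the machine. This is a minimal sketch, not a real solution; the patterns and the redact() helper are illustrative only, and production redaction would need something far more robust (NER, allowlists, review of edge cases).

    import re

    # Scrub obvious PII locally, before the prompt is sent to any hosted LLM.
    # These patterns are illustrative only; real redaction has to handle far
    # more (names, addresses, account numbers, ...).
    PII_PATTERNS = {
        "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
        "PHONE": re.compile(r"\+?\d[\d\s().-]{7,}\d"),
        "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    }

    def redact(prompt: str) -> str:
        # Replace each match with a typed placeholder, e.g. [EMAIL].
        for label, pattern in PII_PATTERNS.items():
            prompt = pattern.sub(f"[{label}]", prompt)
        return prompt

    print(redact("Reach me at jane@example.com or +1 (555) 123-4567."))
    # Reach me at [EMAIL] or [PHONE].

The interesting part isn't the regexes, it's the placement: the scrub runs before any network call, so the hosted provider only ever sees the redacted text. Pairing that with whatever retention or no-training controls a provider offers seems like a reasonable baseline, but I'd like to hear what others actually do.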

1. fosco No.44001779
I think it's terrible. I asked one to find the source of a picture that had been sent to me; it was a quote image, and I didn't realize it had actually been made by the person who sent it.

The LLM uploaded the picture to the internet and then "found" that exact copy, claiming it didn't exist anywhere else. The uploaded copy is still viewable online despite me immediately submitting a removal request. The picture wasn't especially private, but I felt like I'd been had.