
138 points by FrasiertheLion | 3 comments

Hello HN! We’re Tanya, Sacha, Jules and Nate from Tinfoil: https://tinfoil.sh. We host models and AI workloads in the cloud while guaranteeing zero data access and retention. This lets us run open-source LLMs like Llama or DeepSeek R1 on cloud GPUs without you having to trust us—or any cloud provider—with private data.

Since AI performs better the more context you give it, we think solving AI privacy will unlock more valuable AI applications, just as TLS enabled e-commerce to flourish once people knew their credit card info couldn’t be stolen by someone sniffing internet packets.

We come from backgrounds in cryptography, security, and infrastructure. Jules did his PhD in trusted hardware and confidential computing at MIT, and worked with NVIDIA and Microsoft Research on the same, Sacha did his PhD in privacy-preserving cryptography at MIT, Nate worked on privacy tech like Tor, and I (Tanya) was on Cloudflare's cryptography team. We were unsatisfied with band-aid techniques like PII redaction (which is actually undesirable in some cases like AI personal assistants) or “pinky promise” security through legal contracts like DPAs. We wanted a real solution that replaced trust with provable security.

Running models locally or on-prem is an option, but can be expensive and inconvenient. Fully Homomorphic Encryption (FHE) is not practical for LLM inference for the foreseeable future. The next best option is using secure enclaves: a secure environment on the chip that no other software running on the host machine can access. This lets us perform LLM inference in the cloud while being able to prove that no one, not even Tinfoil or the cloud provider, can access the data. And because these security mechanisms are implemented in hardware, there is minimal performance overhead.

Even though we (Tinfoil) control the host machine, we do not have any visibility into the data processed inside the enclave. At a high level, a secure enclave is a set of cores that are reserved, isolated, and locked down to create a sectioned-off area. Everything that leaves the enclave is encrypted: memory and network traffic, but also peripheral (PCIe) traffic to other devices such as the GPU. This encryption is performed with secret keys that are generated inside the enclave during setup and never leave its boundaries. Additionally, a “hardware root of trust” baked into the chip lets clients check security claims and verify that all security mechanisms are in place.

Up until recently, secure enclaves were only available on CPUs. But NVIDIA confidential computing recently added these hardware-based capabilities to their latest GPUs, making it possible to run GPU-based workloads in a secure enclave.

Here’s how it works in a nutshell:

1. We publish the code that should run inside the secure enclave to GitHub, and a hash of the compiled binary to a transparency log called Sigstore.

2. Before sending data to the enclave, the client fetches a signed attestation document from the enclave, which includes a hash of the running code signed by the CPU manufacturer. The client verifies this signature against the hardware manufacturer’s keys to prove the hardware is genuine. It then fetches the hash of the source code from the transparency log (Sigstore) and checks that it matches the hash reported by the enclave. This gives the client verifiable proof that the enclave is running the exact code we claim (sketched in code below).

3. With the assurance that the enclave environment is what we expect, the client sends its data to the enclave, which travels encrypted (TLS) and is only decrypted inside the enclave.

4. Processing happens entirely within this protected environment. Even an attacker that controls the host machine can’t access this data.

We believe making end-to-end verifiability a “first class citizen” is key. Secure enclaves have traditionally been used to remove trust from the cloud provider, not necessarily from the application provider. This is evidenced by confidential VM technologies such as Azure Confidential VM allowing SSH access by the host into the confidential VM. Our goal is to provably remove trust both from ourselves (the application provider) and from the cloud provider.
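To make the verification flow concrete, here’s an illustrative Python sketch of the client-side check in steps 1–3. This is not our actual SDK code: the helper and field names are placeholders, and the manufacturer signature verification is stubbed out. See the verification guide linked below for the real flow.

```python
# Illustrative sketch of the client-side attestation check (steps 1-3).
# Placeholder names throughout; manufacturer signature verification stubbed.
import hashlib
import hmac


def verify_enclave(attestation_doc: dict, expected_code_hash: str) -> bool:
    """Return True if the enclave's reported code measurement matches the
    hash published to the transparency log and its signature checks out."""
    # Step 2a: verify the hardware manufacturer's signature over the document.
    # In reality this walks a certificate chain rooted in the vendor's keys;
    # here it is reduced to a stubbed boolean.
    if not attestation_doc.get("signature_valid"):
        return False

    # Step 2b: compare the enclave's reported code measurement against the
    # hash fetched from the transparency log (constant-time comparison).
    reported = attestation_doc.get("code_measurement", "")
    return hmac.compare_digest(reported, expected_code_hash)


if __name__ == "__main__":
    # Toy data: a "signed" document whose measurement matches the log entry.
    published_hash = hashlib.sha256(b"compiled enclave binary").hexdigest()
    doc = {"signature_valid": True, "code_measurement": published_hash}
    print("enclave verified:", verify_enclave(doc, published_hash))
```

Only once this check passes does the client open the TLS channel that terminates inside the enclave (step 3).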

We encourage you to be skeptical of our privacy claims. Verifiability is our answer. It’s not just us saying it’s private; the hardware and cryptography let you check. Here’s a guide that walks you through the verification process: https://docs.tinfoil.sh/verification/attestation-architectur....

People are using us for analyzing sensitive docs, building copilots for proprietary code, and processing user data in agentic AI applications without the privacy risks that previously blocked cloud AI adoption.

We’re excited to share Tinfoil with HN!

* Try the chat (https://tinfoil.sh/chat): It verifies attestation with an in-browser check. Free with limited messages; $20/month for unlimited messages and additional models.

* Use the API (https://tinfoil.sh/inference): OpenAI API-compatible interface, $2 / 1M tokens (there’s a short code sketch after this list).

* Take your existing Docker image and make it end-to-end confidential by deploying it on Tinfoil. Here’s a demo of using Tinfoil to run a deepfake detection service securely on people’s private videos: https://www.youtube.com/watch?v=_8hLmqoutyk. Note: This feature is not currently self-serve.

* Reach out to us at contact@tinfoil.sh if you want to run a different model or want to deploy a custom application, or if you just want to learn more!
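If you’re wondering what the API swap looks like in practice, here’s a minimal sketch using the standard openai Python client. The base URL, environment variable, and model name below are placeholders rather than documented values; see https://tinfoil.sh/inference for the real ones.

```python
# Minimal sketch of calling the OpenAI-compatible API with the standard
# openai Python client. base_url, env var, and model name are placeholders.
import os

from openai import OpenAI

client = OpenAI(
    base_url="https://api.tinfoil.sh/v1",   # placeholder endpoint
    api_key=os.environ["TINFOIL_API_KEY"],  # placeholder env var name
)

response = client.chat.completions.create(
    model="deepseek-r1",  # placeholder model name
    messages=[{"role": "user", "content": "Summarize this contract clause: ..."}],
)
print(response.choices[0].message.content)
```

The point is that an existing OpenAI-based pipeline shouldn’t need more than a base URL and key swap.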

Let us know what you think, we’d love to hear about your experiences and ideas in this space!

1. rkagerer
What's your revenue model?

The pricing page implies you're basically reselling access to confidential-wrapped AI instances.

Since you rightly open-sourced the code (AGPL) is there anything stopping the cloud vendors from running and selling access to their own instances of your server-side magic?

Is your secret sauce the tooling to spin up and manage instances and ease customer UX? Do you aim to attract an ecosystem of turnkey, confidential applications running on your platform?

Do you envision an exit strategy that sells said secret sauce and customers to a cloud provider or confidential computing middleware provider?

Ps. Congrats on the launch.

2. FrasiertheLion
>Since you rightly open-sourced the code (AGPL) is there anything stopping the cloud vendors from running and selling access to their own instances of your server-side magic?

Sure, they can do that. Despite being open source, CC mode on GPUs is quite difficult to work with, especially once you start thinking about secrets management, observability, etc., so we’d actually like to work with smaller cloud providers who want to offer this as a service and become competitive with the big clouds.

>Is your secret sauce the tooling to spin up and manage instances and ease customer UX?

Pretty much. Confidential computing has been around a while, and we still don’t see widespread adoption of it, largely because of the difficulty. If we’re successful, we absolutely expect there to be a healthy ecosystem of competitors, both cloud providers and startups.

>Do you envision an exit strategy that sells that secret sauce to a cloud provider or confidential computing middleware provider?

We’re not really trying to be a confidential computing provider so much as a verifiably private layer for AI, which means we will try to make integration points as seamless as possible. For inference, that meant OpenAI API-compatible client SDKs; we’ll eventually do the same for training/post-training, MCP/OpenAI Agents SDK, etc. We want our integration points to stay closely compatible with existing pipelines.

3. threeseed
> Confidential computing has been around a while, and we still don’t see widespread adoption of it, largely because of the difficulty

This is not the reason at all. Complexity and difficulty are inherent to large companies.

It's because it is a very low priority in an environment where, for example, there are tens of thousands of libraries in use, dozens of which will be in production with active CVEs. And there are many similar security and risk-management issues that companies have to deal with.

Worrying about the integrity of the hardware, or not trusting my cloud provider who already has all my data in their S3 buckets (encrypted with their keys), is not high on my list of concerns. And if it were, I would simply run on-premise anyway.