
4 points by safekeylab | 1 comment

Hey HN, I built SafeKey after handling patient data as an Army medic and then doing AI research at Cornell. Every time we tried to use LLMs with sensitive data, something leaked. Existing tools only covered text, at ~85% accuracy, and nothing worked across modalities.

SafeKey is an AI input firewall: it sits between your app and the model, redacting PII before data leaves your environment. What we built:

- PII Guard: 99%+ accuracy across text, images, audio, and video
- AI Guard: blocks prompt injection and jailbreaks (95%+ F1, zero false positives)
- Agent Security: protects autonomous AI workflows
- RAG Security: secures retrieval-augmented generation pipelines

Sub-30ms latency. Drop-in SDK for OpenAI, Anthropic, Azure, AWS Bedrock. Runs in your VPC or our cloud.
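To make the "input firewall" pattern concrete, here is a minimal Python sketch of the idea: redact PII in the prompt before handing it to whatever model client you already use. This is not the SafeKey SDK (which is private); the regex rules, function names, and the send_to_model callable are placeholder assumptions standing in for the real multimodal detection models.

    import re
    from typing import Callable

    # Illustrative-only rules; a production PII guard would rely on trained
    # detection models (and cover images/audio/video), not a few regexes.
    PATTERNS = {
        "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
        "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
        "PHONE": re.compile(r"\b\d{3}[-.\s]\d{3}[-.\s]\d{4}\b"),
    }

    def redact(text: str) -> str:
        # Replace detected PII spans with typed placeholders before the
        # text ever leaves your environment.
        for label, pattern in PATTERNS.items():
            text = pattern.sub(f"[{label}]", text)
        return text

    def firewalled_call(prompt: str, send_to_model: Callable[[str], str]) -> str:
        # The firewall sits between your app and the model provider:
        # redact first, then forward the sanitized prompt.
        return send_to_model(redact(prompt))

    if __name__ == "__main__":
        # Stand-in for an OpenAI/Anthropic/Bedrock call.
        echo_model = lambda p: f"model saw: {p}"
        print(firewalled_call(
            "Patient John Doe, SSN 123-45-6789, email jdoe@example.com",
            echo_model,
        ))

In the real product the redaction step would be the SDK's detection models rather than regexes, but the integration shape is the same: your existing client call stays put and the firewall wraps the input path.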

Would love feedback on the approach. Happy to answer questions.

Thanks, Sukin

vunderba | No.46137933
The only source-repository link on the site leads to a 404 GitHub page.

https://github.com/safekeylab

EDIT: Manually searching GitHub turns up https://github.com/sukincornell/safekeylab (assuming that is the correct one).

safekeylab | No.46138409
Thanks for flagging. We're not open source; the GitHub link shouldn't have been on the site, and I'm removing it now. We offer a private SDK for customers. If you want to test it, create an account on the website or ping me at sukin@safekeylab.com.