
111 points arnabkarsarkar | 1 comment

OP here.

I built this because I recently caught myself almost pasting a block of logs containing AWS keys into Claude.

The Problem: I need the reasoning capabilities of cloud models (GPT/Claude/Gemini), but I can't trust myself not to accidentally leak PII or secrets.

The Solution: A Chrome extension that acts as a local middleware. It intercepts the prompt and runs a local BERT model (via a Python FastAPI backend) to scrub names, emails, and keys before the request leaves the browser.
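To make the first layer concrete, here is a minimal sketch of regex-based scrubbing in Python (the extension itself does this in JavaScript; the patterns below are illustrative only, not the project's actual rule set):

```python
import re

# Illustrative patterns only -- the extension's real rules live in its repo.
PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
    # AWS access key IDs are 20 chars with a known prefix such as AKIA/ASIA.
    "AWS_KEY": re.compile(r"\b(?:AKIA|ASIA)[0-9A-Z]{16}\b"),
}

def scrub(text: str) -> str:
    """Replace each match with a typed placeholder so the prompt stays readable."""
    for label, pattern in PATTERNS.items():
        text = pattern.sub(f"[{label}]", text)
    return text
```

Typed placeholders like `[EMAIL]` (rather than blanking the text) keep enough context for the cloud model to reason about the prompt.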

A few notes up front (to set expectations clearly):

Everything runs 100% locally. Regex detection happens in the extension itself. Advanced detection (NER) uses a small transformer model running on localhost via FastAPI.
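For the NER layer, a HuggingFace token-classification pipeline run with `aggregation_strategy="simple"` returns entity spans as dicts with `entity_group`, `start`, and `end` fields. A sketch of how the backend might turn those spans into placeholders (the entity list here is hand-made for illustration; in practice it would come from `pipeline("ner", model="dslim/bert-base-NER", aggregation_strategy="simple")(text)`):

```python
def mask_entities(text: str, entities: list[dict]) -> str:
    """Replace detected entity spans with typed placeholders.

    `entities` follows the shape of a HuggingFace token-classification
    pipeline output with aggregation_strategy="simple":
    {"entity_group": "PER", "start": 0, "end": 11, ...}
    """
    # Replace right-to-left so earlier character offsets stay valid.
    for ent in sorted(entities, key=lambda e: e["start"], reverse=True):
        text = text[:ent["start"]] + f"[{ent['entity_group']}]" + text[ent["end"]:]
    return text
```

Replacing from the end of the string backwards avoids recomputing offsets after each substitution.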

No data is ever sent to my servers — the scrubbing itself happens entirely on your machine. You can verify this in the code and the DevTools network panel.

This is an early prototype. There will be rough edges. I’m looking for feedback on UX, detection quality, and whether the local-agent approach makes sense.

Tech Stack: Manifest V3 Chrome extension; Python FastAPI (localhost); HuggingFace dslim/bert-base-NER.

Roadmap / Request for Feedback: Right now, the Python backend adds some friction. I received feedback on Reddit yesterday suggesting I port the inference to transformer.js to run entirely in-browser via WASM.

I decided to ship v1 with the Python backend for stability, but I'm actively looking into the ONNX/WASM route for v2 to remove the local server dependency. If anyone has experience running NER models via transformer.js in a Service Worker, I’d love to hear about the performance vs native Python.

Repo is MIT licensed.

Very open to ideas, suggestions, or alternative approaches.

sciencesama ◴[] No.46232361[source]
Develop a Pihole-style adblock
replies(1): >>46238665 #
1. accrual ◴[] No.46238665[source]
I feel that's not really applicable here. Pihole has the advantage of funneling all DNS traffic (typically UDP/53) to a single endpoint, where it can make a decision about each request.

A user talking to an LLM is probably connecting directly to the service inside a TLS connection (TCP/443), so there's not much room to inspect the prompt at the layer a Pihole operates at (unless you MITM yourself).

I think OP has the right idea in approaching this from the application layer in the browser, where the contents of the page are available. But to me it feels like a stopgap: it fixes one specific scenario (private data copy/pasted into a web form) rather than being the kind of service-level solution some have proposed (swapping PII at the endpoint, or a client that pre-filters).