
397 points | Anon84 | 1 comment
mark_l_watson ◴[] No.45126243
I pay to use ProtonMail’s privacy-preserving Lumo LLM Chat, which has good web_search tooling. Lumo is powered by Mistral models.

I use Lumo a lot, and the results are usually good enough. To be clear, though, I do fall back on gemini-cli and OpenAI’s Codex a few times a week for coding.

I live in the US, but if I were a European, I would be all in on supporting Mistral. Strengthen your own country and region.

g-mork ◴[] No.45127520
I wonder what ProtonMail is doing internally. Mistral’s public API endpoints route via Cloudflare, just like apparently every other hosted LLM out there, including every Chinese model I’ve checked.
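For anyone who wants to repeat that check: a Cloudflare-fronted endpoint usually advertises itself in the response headers (`Server: cloudflare` and a `CF-RAY` trace ID). A minimal sketch using only the standard library (the heuristic and the example URL are assumptions, not anything Proton or Mistral document):

```python
import urllib.error
import urllib.request


def looks_like_cloudflare(headers: dict) -> bool:
    """Heuristic: Cloudflare-fronted responses typically carry a
    'Server: cloudflare' header and a 'CF-RAY' trace ID."""
    lower = {k.lower(): v for k, v in headers.items()}
    return lower.get("server", "").lower() == "cloudflare" or "cf-ray" in lower


def fetch_headers(url: str) -> dict:
    """Issue a HEAD request; 4xx/5xx error responses still expose
    headers, so capture those too."""
    req = urllib.request.Request(url, method="HEAD")
    try:
        with urllib.request.urlopen(req, timeout=10) as resp:
            return dict(resp.headers)
    except urllib.error.HTTPError as e:
        return dict(e.headers)


# Example (requires network access):
#   looks_like_cloudflare(fetch_headers("https://api.mistral.ai/"))
```

This only detects the common case; an operator can suppress the `Server` header, so a negative result isn't proof that no CDN sits in front.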
TranquilMarmot ◴[] No.45133251
https://proton.me/support/lumo-privacy

> Lumo is powered by open-source large language models (LLMs) which have been optimized by Proton to give you the best answer based on the model most capable of dealing with your request. The models we’re using currently are Nemo, OpenHands 32B, OLMO 2 32B, and Mistral Small 3. These run exclusively on servers Proton controls so your data is never stored on a third-party platform.