The deeper problem
1. Not portable – context is vendor-locked; nothing travels across tools.
2. Not relational – most memory systems store only the latest fact (“sticky notes”) with no history or provenance.
3. Not yours – your AI memory is sensitive first-party data, yet you have no control over where it lives or how it’s queried.
Demo video: https://youtu.be/iANZ32dnK60
Repo: https://github.com/RedPlanetHQ/core
What we built
- CORE (Context Oriented Relational Engine): An open source, shareable knowledge graph (your memory vault) that lets any LLM (ChatGPT, Cursor, Claude, SOL, etc.) share and query the same persistent context.
- Temporal + relational: Every fact gets a full version history (who, when, why), and nothing is wiped when you change it; old versions are simply timestamped and retired (see the sketch after this list).
- Local-first or hosted: Run it offline in Docker, or use our hosted instance. You choose which memories sync and which stay private.
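To make the version-history idea concrete, here's a minimal sketch in TypeScript. All names here (`FactVersion`, `TemporalFact`, `assert`, etc.) are illustrative assumptions, not CORE's actual API; check the repo for the real interface.

```typescript
// Sketch of the "temporal + relational" idea: facts are never overwritten
// in place; asserting a new value retires the old one instead of deleting it.

interface FactVersion {
  value: string;
  author: string;      // who asserted this version
  reason: string;      // why it changed (provenance)
  assertedAt: Date;    // when it became current
  retiredAt?: Date;    // set when a newer version supersedes it
}

class TemporalFact {
  private versions: FactVersion[] = [];

  // Assert a new value: retire the current version rather than wiping it.
  assert(value: string, author: string, reason: string): void {
    const now = new Date();
    const current = this.versions[this.versions.length - 1];
    if (current && !current.retiredAt) current.retiredAt = now;
    this.versions.push({ value, author, reason, assertedAt: now });
  }

  // Latest value only: the "sticky note" view most memory systems stop at.
  current(): string | undefined {
    return this.versions[this.versions.length - 1]?.value;
  }

  // Full audit trail: who changed what, when, and why.
  history(): readonly FactVersion[] {
    return this.versions;
  }
}

// Usage: two different LLM clients update the same fact; the old value
// is retired with provenance, not lost.
const employer = new TemporalFact();
employer.assert("Acme Corp", "chatgpt-session-1", "user stated in onboarding");
employer.assert("RedPlanetHQ", "cursor-session-9", "user mentioned a job change");
console.log(employer.current());         // "RedPlanetHQ"
console.log(employer.history().length);  // 2 — the Acme version is retired, not wiped
```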
Try it
- Hosted free tier (HN launch): https://core.heysol.ai
> This does not seem to be local and additionally appears to be tied to one SaaS LLM provider?
Also, we built CORE internally first for our main project, SOL, an AI personal assistant. While building a better memory for the assistant, we realised its importance and came to the view that memory should not be vendor-locked; it should be pluggable and belong to the user. Hence we built it as a separate service.