The deeper problem
1. Not portable – context is vendor-locked; nothing travels across tools.
2. Not relational – most memory systems store only the latest fact (“sticky notes”) with no history or provenance.
3. Not yours – your AI memory is sensitive first-party data, yet you have no control over where it lives or how it’s queried.
Demo video: https://youtu.be/iANZ32dnK60
Repo: https://github.com/RedPlanetHQ/core
What we built
- CORE (Context Oriented Relational Engine): An open source, shareable knowledge graph (your memory vault) that lets any LLM (ChatGPT, Cursor, Claude, SOL, etc.) share and query the same persistent context.
- Temporal + relational: Every fact gets a full version history (who, when, why), and nothing is wiped out when you change it; the old value is just timestamped and retired (rough sketch after this list).
- Local-first or hosted: Run it offline in Docker, or use our hosted instance. You choose which memories sync and which stay private.
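To give a rough sense of what "timestamped and retired" means in practice, here is a minimal Python sketch of a versioned fact store. The names (Fact, MemoryStore, assert_fact) are illustrative only, not CORE's actual schema or API.

```python
# Minimal sketch of temporal, versioned facts. Illustrative names, not CORE's API.
from dataclasses import dataclass, field
from datetime import datetime, timezone
from typing import List, Optional

@dataclass
class Fact:
    subject: str          # e.g. "user"
    predicate: str        # e.g. "prefers_editor"
    obj: str              # e.g. "Cursor"
    source: str           # provenance: who or what asserted this
    valid_from: datetime = field(default_factory=lambda: datetime.now(timezone.utc))
    retired_at: Optional[datetime] = None   # set on update instead of deleting

class MemoryStore:
    def __init__(self) -> None:
        self.facts: List[Fact] = []

    def assert_fact(self, subject: str, predicate: str, obj: str, source: str) -> Fact:
        """Retire any live fact with the same (subject, predicate), then add the new one."""
        now = datetime.now(timezone.utc)
        for f in self.facts:
            if f.subject == subject and f.predicate == predicate and f.retired_at is None:
                f.retired_at = now   # old value is kept, just timestamped and retired
        new = Fact(subject, predicate, obj, source, valid_from=now)
        self.facts.append(new)
        return new

    def history(self, subject: str, predicate: str) -> List[Fact]:
        """Full version history for one (subject, predicate) pair: who, when, what."""
        return [f for f in self.facts if f.subject == subject and f.predicate == predicate]
```

Asserting "user prefers_editor Cursor" after "user prefers_editor VS Code" keeps both facts; the older one simply carries a retired_at timestamp, so history and provenance survive every update.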
Try it
- Hosted free tier (HN launch): https://core.heysol.ai
My question is: what’s the value of explicitly storing semantics as triples when the LLM can infer the semantics at runtime?
- Efficient, precise retrieval through graph traversal patterns that flat text simply can't match ("find all X related to Y through relationship Z")
- Algorithmic contradiction detection by matching subject-predicate pairs across time, something LLMs struggle with across distant contexts (both are shown in the sketch after this list)
- Our goal is also to make the assistant more proactive, and triplets make that kind of pattern recognition easier and more effective
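To make the traversal and contradiction points concrete, here is a small, self-contained Python sketch over an in-memory triple list. It illustrates the idea only and is not CORE's implementation; all names are made up for the example.

```python
# Rough sketch: explicit triples make traversal and contradiction checks cheap.
from collections import defaultdict
from datetime import datetime, timezone
from typing import Dict, List, Set, Tuple

Triple = Tuple[str, str, str, datetime]  # (subject, predicate, object, asserted_at)

def now() -> datetime:
    return datetime.now(timezone.utc)

triples: List[Triple] = [
    ("alice", "works_on", "project_x", now()),
    ("project_x", "depends_on", "service_y", now()),
    ("alice", "timezone", "UTC+2", now()),
    ("alice", "timezone", "UTC-5", now()),  # later, conflicting assertion
]

def related(start: str, predicate: str) -> Set[str]:
    """Graph traversal: everything reachable from `start` by following `predicate` edges.
    With flat text you would re-read every note; here it is a simple walk over edges."""
    out: Set[str] = set()
    frontier = [start]
    while frontier:
        node = frontier.pop()
        for s, p, o, _ in triples:
            if s == node and p == predicate and o not in out:
                out.add(o)
                frontier.append(o)
    return out

def contradictions() -> Dict[Tuple[str, str], List[str]]:
    """Contradiction detection: same (subject, predicate) asserted with different objects.
    Matching keys algorithmically is cheap even when assertions are far apart in time."""
    by_key: Dict[Tuple[str, str], List[str]] = defaultdict(list)
    for s, p, o, _ in triples:
        by_key[(s, p)].append(o)
    return {k: v for k, v in by_key.items() if len(set(v)) > 1}

print(related("alice", "works_on"))   # {'project_x'}
print(contradictions())               # {('alice', 'timezone'): ['UTC+2', 'UTC-5']}
```

related() answers "find all X related to Y through relationship Z" with a structured walk instead of re-reading notes, and contradictions() just groups by (subject, predicate); in a real system you would restrict that check to predicates that should be single-valued.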
What do you think about these?