
111 points by Manik_agg | 2 comments

I keep running into the same problem: each AI app “remembers” me in its own silo. ChatGPT knows my project details, Cursor forgets them, Claude starts from zero… so I end up re-explaining myself dozens of times a day across these apps.

The deeper problem

1. Not portable – context is vendor-locked; nothing travels across tools.

2. Not relational – most memory systems store only the latest fact (“sticky notes”) with no history or provenance.

3. Not yours – your AI memory is sensitive first-party data, yet you have no control over where it lives or how it’s queried.

Demo video: https://youtu.be/iANZ32dnK60

Repo: https://github.com/RedPlanetHQ/core

What we built

- CORE (Context Oriented Relational Engine): An open source, shareable knowledge graph (your memory vault) that lets any LLM (ChatGPT, Cursor, Claude, SOL, etc.) share and query the same persistent context.

- Temporal + relational: Every fact gets a full version history (who, when, why), and nothing is wiped out when you change it—just timestamped and retired (see the sketch after this list).

- Local-first or hosted: Run it offline in Docker, or use our hosted instance. You choose which memories sync and which stay private.
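
To make the temporal + relational point concrete, here is a minimal sketch of what a versioned statement record and a fact revision could look like. The field names and the reviseFact helper are hypothetical illustrations, not CORE's actual schema or API.

```typescript
import { randomUUID } from "node:crypto";

// Hypothetical shape of a temporal statement; not CORE's actual schema.
interface Statement {
  id: string;
  subject: string;    // e.g. "project:alpha"
  predicate: string;  // e.g. "uses_database"
  object: string;     // e.g. "Postgres"
  source: string;     // who/what asserted it (tool, conversation, doc)
  createdAt: Date;    // when the fact was recorded
  retiredAt?: Date;   // set when a newer statement supersedes it; never deleted
}

// Changing a fact retires the current statement and appends a new one,
// so the full history (who said what, and when) stays queryable.
function reviseFact(
  graph: Statement[],
  next: Omit<Statement, "id" | "createdAt" | "retiredAt">,
): Statement[] {
  const now = new Date();
  const retired = graph.map(s =>
    s.subject === next.subject && s.predicate === next.predicate && !s.retiredAt
      ? { ...s, retiredAt: now }
      : s,
  );
  return [...retired, { ...next, id: randomUUID(), createdAt: now }];
}
```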

Try it

- Hosted free tier (HN launch): https://core.heysol.ai

- Docs: https://docs.heysol.ai/core/overview

adamkochanowicz No.44438195
For those asking how this is different from a simple text based memory archive, I think that is answered here:

--- Unlike most memory systems, which act like basic sticky notes showing only what’s true right now, C.O.R.E is built as a dynamic, living temporal knowledge graph:

Every fact is a first-class “Statement” with full history, not just a static edge between entities. Each statement includes what was said, who said it, when it happened, and why it matters. You get full transparency: you can always trace the source, see what changed, and explore why the system “believes” something. ---
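
Concretely, “tracing why the system believes something” can be read as a time-ordered lookup over those statements. A hypothetical sketch in the same spirit as the record above, not the project's actual API:

```typescript
// Hypothetical: reconstruct the history of one fact (subject + predicate),
// newest first, so you can see what was said, by whom, and when it changed.
function history<S extends { subject: string; predicate: string; createdAt: Date }>(
  graph: S[],
  subject: string,
  predicate: string,
): S[] {
  return graph
    .filter(s => s.subject === subject && s.predicate === predicate)
    .sort((a, b) => b.createdAt.getTime() - a.createdAt.getTime());
}
```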

1. ramoz No.44438722
I'm not sure the graph offers any clear advantage in the demonstrated use case.

It's overhead in coding.

The source is the doc. Raw text is as much of a fact as an abstracted data structure derived from that text (the abstraction is done by an external LLM, which is where provenance seems to break, btw: what other context is used to support that transcription, and why is it more reliable than a doc kept in the actual codebase?).

2. Manik_agg No.44438941
Hey - I agree that the demonstrated use case can be solved with a simple plan.md file in the codebase itself.

With this use case we wanted to showcase the shareable aspect of CORE. The main problem we wanted to address was "take your memory to every AI" so that you are not repeating yourself again and again.

The relational, graph-based aspect of CORE's architecture is overkill for simple fact recall. But if you want an intelligent memory layer about you that can answer what, when, and why, and that is accessible in all the major AI tools you use, then CORE makes more sense.
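
As a rough illustration of the what/when/why point, a temporal statement store also supports point-in-time queries that a flat plan.md cannot. The believedAt function below is a hypothetical sketch, not CORE's actual API:

```typescript
// Hypothetical: what did the memory believe about a subject at time t?
// A statement is live at t if it was created by then and not yet retired.
function believedAt<S extends { subject: string; createdAt: Date; retiredAt?: Date }>(
  graph: S[],
  subject: string,
  t: Date,
): S[] {
  return graph.filter(
    s =>
      s.subject === subject &&
      s.createdAt.getTime() <= t.getTime() &&
      (!s.retiredAt || s.retiredAt.getTime() > t.getTime()),
  );
}
```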