
111 points by Manik_agg | 1 comment

I keep running into the same problem: each AI app “remembers” me in its own silo. ChatGPT knows my project details, Cursor forgets them, Claude starts from zero… so I end up re-explaining myself dozens of times a day across these apps.

The deeper problem

1. Not portable – context is vendor-locked; nothing travels across tools.

2. Not relational – most memory systems store only the latest fact (“sticky notes”) with no history or provenance.

3. Not yours – your AI memory is sensitive first-party data, yet you have no control over where it lives or how it’s queried.

Demo video: https://youtu.be/iANZ32dnK60

Repo: https://github.com/RedPlanetHQ/core

What we built

- CORE (Context Oriented Relational Engine): An open source, shareable knowledge graph (your memory vault) that lets any LLM (ChatGPT, Cursor, Claude, SOL, etc.) share and query the same persistent context.

- Temporal + relational: Every fact gets a full version history (who, when, why), and nothing is wiped out when you change it; old versions are just timestamped and retired.

- Local-first or hosted: Run it offline in Docker, or use our hosted instance. You choose which memories sync and which stay private.
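The “temporal + relational” model above can be sketched roughly like this. This is a toy Python sketch, not CORE’s actual schema or API; every name here is made up for illustration. The key property it demonstrates is that updating a fact retires the old version instead of overwriting it:

```python
from dataclasses import dataclass
from datetime import datetime, timezone
from typing import Optional

@dataclass
class Fact:
    """One edge in the knowledge graph, with provenance."""
    subject: str
    predicate: str
    obj: str
    source: str                          # who asserted it (provenance)
    valid_from: datetime
    valid_to: Optional[datetime] = None  # None = still current

class TemporalGraph:
    """Facts are never overwritten: an update retires the old version."""

    def __init__(self):
        self.facts: list[Fact] = []

    def assert_fact(self, subject: str, predicate: str, obj: str, source: str):
        now = datetime.now(timezone.utc)
        # Retire any currently-valid fact for the same (subject, predicate).
        for f in self.facts:
            if f.subject == subject and f.predicate == predicate and f.valid_to is None:
                f.valid_to = now
        self.facts.append(Fact(subject, predicate, obj, source, now))

    def current(self, subject: str, predicate: str) -> Optional[str]:
        for f in self.facts:
            if f.subject == subject and f.predicate == predicate and f.valid_to is None:
                return f.obj
        return None

    def history(self, subject: str, predicate: str):
        """Full version history: (value, source, valid_from, valid_to) tuples."""
        return [(f.obj, f.source, f.valid_from, f.valid_to)
                for f in self.facts
                if f.subject == subject and f.predicate == predicate]
```

So if ChatGPT asserts your project language is Python and Cursor later corrects it to Rust, `current()` returns Rust while `history()` still shows the retired Python fact with its provenance and timestamps.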

Try it

- Hosted free tier (HN launch): https://core.heysol.ai

- Docs: https://docs.heysol.ai/core/overview

MarkMarine:
I’ve been thinking about how to do this well, how my memory actually works. I think what is happening is I’ve either got the facts now (that is easy to repro w/ a system like this) or I’ve got an idea that I could have the facts after working on retrieval. It’s like I’ve got a feeling or sense that somewhere in cold storage is the info I need so I kick off a background process to get it. Sometimes it works.

That second system, the “I know this…” system, is I think what’s missing from these LLMs. They have the first one: they KNOW things they’ve seen during training. What’s missing is the ability to build up the working set as they go, then get the “feeling” that they could know this if they did a little retrieval work. I’ve been thinking about how to repro that in a computer, where knowledge is 0|1 but could be slow to fetch.
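That two-system idea can be caricatured in code: a cheap, always-in-memory index of tiny summaries supplies the “I could know this” signal, and the slow cold-storage fetch only runs when that signal fires. Everything here (the class, the scoring, the 0.3 threshold) is hypothetical, just to make the shape concrete:

```python
def keyword_overlap(query: str, summary: str) -> float:
    """Cheap familiarity signal: what fraction of the summary the query touches."""
    q = set(query.lower().split())
    s = set(summary.lower().split())
    return len(q & s) / max(len(s), 1)

class TwoTierMemory:
    def __init__(self, summaries: dict[str, str], cold_store: dict[str, str]):
        self.summaries = summaries    # fast: tiny, always in memory
        self.cold_store = cold_store  # slow: full documents

    def feels_familiar(self, query: str, threshold: float = 0.3):
        """The 'tip of the tongue' check: score summaries only, never fetch."""
        scored = {k: keyword_overlap(query, s) for k, s in self.summaries.items()}
        best = max(scored, key=scored.get)
        return best if scored[best] >= threshold else None

    def recall(self, query: str):
        key = self.feels_familiar(query)
        if key is None:
            return None               # no feeling of knowing: don't even fetch
        return self.cold_store[key]   # kick off the slow retrieval
```

The point is that the expensive step is gated by a cheap metacognitive check, which is roughly the “background process” described above.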

Manoj58:
You've identified a fundamental gap: the meta-cognitive "I could retrieve this" intuition that humans have but LLMs lack.

Our graph approach addresses this:

- Structure knowledge with visible relationship patterns before loading details

- The retrieval system "senses" related information without fetching everything

- Temporal tracking prioritizes recent/relevant information

- Planned: recall-frequency tracking, so frequently accessed facts get higher weight
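One way the recency and recall-frequency weighting in that list could combine, purely as an illustration (the weights, the 30-day half-life, and the function itself are invented here, not CORE's actual values):

```python
import math
from datetime import datetime, timedelta, timezone

def score(relevance: float, last_seen: datetime, access_count: int,
          half_life_days: float = 30.0) -> float:
    """Rank a candidate fact by relevance, recency decay, and recall frequency."""
    age_days = (datetime.now(timezone.utc) - last_seen).total_seconds() / 86400
    recency = math.exp(-math.log(2) * age_days / half_life_days)  # halves every 30d
    frequency = math.log1p(access_count)       # diminishing returns on repeat recalls
    return relevance * (1.0 + 0.5 * recency + 0.25 * frequency)
```

With a scheme like this, a fact touched yesterday outranks an equally relevant one from last year, and facts the user keeps asking about slowly accumulate extra weight.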

In SOL (our personal assistant), we guide LLMs to use memory more effectively by providing structured knowledge boundaries. This creates that "I could know this if I looked" capability.