
111 points by Manik_agg | 1 comment

I keep running into the same problem: each AI app “remembers” me in its own silo. ChatGPT knows my project details, Cursor forgets them, Claude starts from zero… so I end up re-explaining myself dozens of times a day across these apps.

The deeper problem

1. Not portable – context is vendor-locked; nothing travels across tools.

2. Not relational – most memory systems store only the latest fact (“sticky notes”) with no history or provenance.

3. Not yours – your AI memory is sensitive first-party data, yet you have no control over where it lives or how it’s queried.

Demo video: https://youtu.be/iANZ32dnK60

Repo: https://github.com/RedPlanetHQ/core

What we built

- CORE (Context Oriented Relational Engine): An open source, shareable knowledge graph (your memory vault) that lets any LLM (ChatGPT, Cursor, Claude, SOL, etc.) share and query the same persistent context.

- Temporal + relational: Every fact gets a full version history (who, when, why), and nothing is wiped out when you change it; old versions are just timestamped and retired (see the sketch after this list).

- Local-first or hosted: Run it offline in Docker, or use our hosted instance. You choose which memories sync and which stay private.
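
To give a feel for the “temporal + relational” bit, here is a toy sketch. This is not CORE’s actual API or schema (see the repo for that); the names and structure are made up purely to illustrate how an update retires the old fact instead of overwriting it:

```python
from dataclasses import dataclass
from datetime import datetime, timezone


@dataclass
class Fact:
    """One edge in the graph: subject -[predicate]-> object, with provenance."""
    subject: str
    predicate: str
    obj: str
    source: str                       # who/what asserted it (provenance)
    valid_from: datetime
    valid_to: datetime | None = None  # None = still the current version


class TemporalGraph:
    """Toy temporal store: updates retire old facts instead of overwriting them."""

    def __init__(self) -> None:
        self.facts: list[Fact] = []

    def assert_fact(self, subject: str, predicate: str, obj: str, source: str) -> None:
        now = datetime.now(timezone.utc)
        # Retire (don't delete) any currently-valid fact for this subject/predicate.
        for f in self.facts:
            if f.subject == subject and f.predicate == predicate and f.valid_to is None:
                f.valid_to = now
        self.facts.append(Fact(subject, predicate, obj, source, valid_from=now))

    def current(self, subject: str, predicate: str) -> Fact | None:
        return next((f for f in self.facts
                     if f.subject == subject and f.predicate == predicate
                     and f.valid_to is None), None)

    def history(self, subject: str, predicate: str) -> list[Fact]:
        return [f for f in self.facts
                if f.subject == subject and f.predicate == predicate]


graph = TemporalGraph()
graph.assert_fact("manik", "works_on", "CORE", source="chatgpt-session")
graph.assert_fact("manik", "works_on", "SOL", source="cursor-session")

print(graph.current("manik", "works_on").obj)   # -> SOL (what is true now)
print(len(graph.history("manik", "works_on")))  # -> 2  (old fact kept, just retired)
```

The point: `current()` answers “what is true now”, while `history()` keeps the who/when/why trail, which is what a graph like this buys you over a flat key-value memory.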

Try it

- Hosted free tier (HN launch): https://core.heysol.ai

- Docs: https://docs.heysol.ai/core/overview

khaledh No.44438628
I love how we have come full circle. Anybody remember the "semantic web" (RDF-based knowledge graph)? It didn't take off because building and maintaining such a graph requires extensive knowledge-engineering work and tools. Fast forward a couple of decades and we have LLMs, which are basically autocomplete on steroids built on general knowledge, with the downside that they don't "remember" any facts unless you spoon-feed them the right context. We're now back to: "let's encode context knowledge as a graph and plug it into LLMs". Fun times :)
replies(2): >>44439118, >>44439551
cpard No.44439551
The problem with the semantic web was deeper: people had to agree on the semantics that would be formalized as triples, and getting people to agree on an ongoing basis is not an easy task.

My question is, what’s the value of explicitly storing semantics as triples when the LLM can infer the semantics at runtime?
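
To make the question concrete, a toy contrast (the entities and predicate name here are invented):

```python
# Option A: the semantics are fixed at write time as an explicit triple.
stored_triple = {
    "subject": "alice",
    "predicate": "prefers_language",
    "object": "rust",
    "asserted_at": "2024-06-01T12:00:00Z",
    "source": "onboarding chat",
}

# Option B: store the raw text and let the LLM infer the semantics at query time.
raw_note = "Alice mentioned she's been writing most of her new services in Rust lately."

# With A, "what language does Alice prefer, and since when?" is a cheap, deterministic
# lookup with provenance attached. With B, every query re-interprets the sentence,
# which costs tokens and may come out differently from run to run.
print(stored_triple["object"], stored_triple["asserted_at"])
```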

replies(2): >>44440246, >>44440787
khaledh No.44440246
Not much tbh. I'm using markdown files as a memory bank[1] for my projects and it works well without the need to structure them in a schema/graph. But I guess one benefit of this particular memory graph implementation is its temporal aspect: searchable facts can evolve over time; i.e. what is true now and how it got here.

[1] https://docs.cline.bot/prompting/cline-memory-bank
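
The pattern is basically "dump the notes into the prompt". This isn't Cline's actual implementation, just the shape of it, with a made-up directory layout:

```python
from pathlib import Path

# Hypothetical layout; Cline's real memory bank has its own file conventions.
MEMORY_DIR = Path("memory-bank")

def build_context(task: str) -> str:
    """Prepend every markdown memory file to the task prompt."""
    notes = [f"## {md.name}\n{md.read_text()}"
             for md in sorted(MEMORY_DIR.glob("*.md"))]
    return "\n\n".join(notes + [f"## Current task\n{task}"])

print(build_context("Add retries to the sync job"))
```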

replies(1): >>44440653
cpard No.44440653
That’s interesting! I’ll take a deeper look. Thanks for sharing