
111 points by Manik_agg

I keep running into the same problem: each AI app “remembers” me in its own silo. ChatGPT knows my project details, Cursor forgets them, Claude starts from zero… so I end up re-explaining myself dozens of times a day across these apps.

The deeper problem

1. Not portable – context is vendor-locked; nothing travels across tools.

2. Not relational – most memory systems store only the latest fact (“sticky notes”) with no history or provenance.

3. Not yours – your AI memory is sensitive first-party data, yet you have no control over where it lives or how it’s queried.

Demo video: https://youtu.be/iANZ32dnK60

Repo: https://github.com/RedPlanetHQ/core

What we built

- CORE (Context Oriented Relational Engine): An open source, shareable knowledge graph (your memory vault) that lets any LLM (ChatGPT, Cursor, Claude, SOL, etc.) share and query the same persistent context.

- Temporal + relational: Every fact gets a full version history (who, when, why), and nothing is wiped out when you change it; the old value is just timestamped and retired. (See the sketch after this list.)

- Local-first or hosted: Run it offline in Docker, or use our hosted instance. You choose which memories sync and which stay private.
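
To make "timestamped and retired" concrete, here is a minimal sketch in Python of a temporal triple store. All the names here (Fact, MemoryVault, assert_fact) are hypothetical illustrations, not CORE's actual schema or API: each fact carries provenance and a validity interval, and an update closes the old interval instead of deleting the row.

```python
# Hypothetical sketch of a temporal triple store (illustrative names only,
# not CORE's actual schema): updating a fact retires the previous version
# instead of overwriting it, so history and provenance are preserved.
from dataclasses import dataclass, field
from datetime import datetime, timezone
from typing import Optional

@dataclass
class Fact:
    subject: str
    predicate: str
    obj: str
    source: str                          # provenance: which app asserted it
    valid_from: datetime = field(default_factory=lambda: datetime.now(timezone.utc))
    valid_to: Optional[datetime] = None  # None means "currently valid"

class MemoryVault:
    def __init__(self) -> None:
        self.facts: list[Fact] = []

    def assert_fact(self, subject: str, predicate: str, obj: str, source: str) -> Fact:
        # Retire (not delete) any currently valid fact for the same
        # subject-predicate pair, keeping the full version history.
        now = datetime.now(timezone.utc)
        for f in self.facts:
            if f.subject == subject and f.predicate == predicate and f.valid_to is None:
                f.valid_to = now
        fact = Fact(subject, predicate, obj, source, valid_from=now)
        self.facts.append(fact)
        return fact

    def current(self, subject: str, predicate: str) -> Optional[Fact]:
        return next((f for f in self.facts
                     if f.subject == subject and f.predicate == predicate
                     and f.valid_to is None), None)

vault = MemoryVault()
vault.assert_fact("user", "works_on", "project-alpha", source="ChatGPT")
vault.assert_fact("user", "works_on", "project-beta", source="Cursor")  # retires alpha
print(vault.current("user", "works_on").obj)  # project-beta; alpha stays in history
```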

Try it

- Hosted free tier (HN launch): https://core.heysol.ai

- Docs: https://docs.heysol.ai/core/overview

khaledh No.44438628
I love how we have come full circle. Does anybody remember the "semantic web" (the RDF-based knowledge graph)? It didn't take off because building and maintaining such a graph requires extensive knowledge-engineering work and tooling. Fast forward a couple of decades and we have LLMs, which are basically autocomplete on steroids trained on general knowledge, with the downside that they don't "remember" any facts unless you spoon-feed them the right context. We're now back to: "let's encode context knowledge as a graph and plug it into LLMs". Fun times :)
cpard No.44439551
The problem with the semantic web was deeper: people had to agree on the semantics that would be formalized as triples, and getting people to agree on an ongoing basis is not an easy task.

My question is: what's the value of explicitly storing semantics as triples when the LLM can infer the semantics at runtime?

Manoj58 No.44440787
This is something we've brainstormed a lot about. While LLMs can infer semantics at runtime, we ended up biased toward explicit triples for these reasons:

- Efficient, precise retrieval through graph traversal patterns that flat text simply can't match ("find all X related to Y through relationship Z").

- Algorithmic contradiction detection by matching subject-predicate pairs across time, something LLMs struggle to do across distant contexts.

- Proactivity: our goal is also to make the assistant more proactive, and explicit triples make that kind of pattern recognition easier and more effective. (A minimal sketch of the first two points follows.)
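
Here is a minimal Python sketch of those first two claims, with hypothetical data and names (facts, by_obj_pred) that are illustrative only, not CORE's internals: indexing triples turns a relationship query into a dictionary lookup, and comparing objects for the same subject-predicate pair over time surfaces contradictions mechanically.

```python
from collections import defaultdict

# Each fact is a (subject, predicate, object, timestamp) tuple.
facts = [
    ("user",  "works_on", "project-alpha", 1),
    ("user",  "works_on", "project-beta",  5),  # same subject-predicate, later value
    ("alice", "reviews",  "project-beta",  6),
    ("bob",   "reviews",  "project-beta",  7),
]

# (1) Graph traversal: index by (object, predicate) so "who reviews
# project-beta?" becomes a lookup instead of an LLM pass over flat text.
by_obj_pred = defaultdict(list)
for s, p, o, t in facts:
    by_obj_pred[(o, p)].append(s)

print(by_obj_pred[("project-beta", "reviews")])  # ['alice', 'bob']

# (2) Contradiction detection: two facts sharing a subject-predicate pair
# but disagreeing on the object are a candidate contradiction; the newer
# value wins and the older one would be retired, not silently lost.
latest = {}
for s, p, o, t in sorted(facts, key=lambda f: f[3]):
    if (s, p) in latest and latest[(s, p)][0] != o:
        print(f"contradiction on {(s, p)}: {latest[(s, p)][0]!r} -> {o!r}")
    latest[(s, p)] = (o, t)
```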

What do you think about these?