
20 points by Manik_agg | 1 comment | source

Hi HN,

I keep running into the same problem: each AI app "remembers" me in its own silo. ChatGPT knows my project details, Cursor forgets them, Claude starts from zero... so I end up re-explaining myself dozens of times a day across these apps.

The deeper problem

1. Not portable – context is vendor-locked; nothing travels across tools.

2. Not relational – most memory systems store only the latest fact (“sticky notes”) with no history or provenance.

3. Not yours – your AI memory is sensitive first-party data, yet you have no control over where it lives or how it’s queried.

Demo video: https://youtu.be/iANZ32dnK60

Repo: https://github.com/RedPlanetHQ/core

What we built

- CORE (Context Oriented Relational Engine): An open source, shareable knowledge graph (your memory vault) that lets any LLM (ChatGPT, Cursor, Claude, SOL, etc.) share and query the same persistent context.

- Temporal + relational: Every fact gets a full version history (who, when, why), and nothing is wiped out when you change it—just timestamped and retired.

- Local-first or hosted: Run it offline in Docker, or use our hosted instance. You choose which memories sync and which stay private.
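To make the "temporal + relational" idea concrete, here is a minimal sketch (not CORE's actual implementation; the class and field names are assumptions for illustration) of a memory store where changing a fact retires the old version instead of overwriting it:

```python
from dataclasses import dataclass
from datetime import datetime, timezone
from typing import Optional

@dataclass
class Fact:
    subject: str
    predicate: str
    value: str
    author: str          # who asserted it
    created_at: datetime  # when
    retired_at: Optional[datetime] = None  # None = currently valid

class MemoryGraph:
    def __init__(self) -> None:
        self.facts: list[Fact] = []

    def assert_fact(self, subject: str, predicate: str, value: str, author: str) -> None:
        now = datetime.now(timezone.utc)
        # Retire (never delete) any currently-valid fact in the same slot.
        for f in self.facts:
            if f.subject == subject and f.predicate == predicate and f.retired_at is None:
                f.retired_at = now
        self.facts.append(Fact(subject, predicate, value, author, now))

    def current(self, subject: str, predicate: str) -> Optional[Fact]:
        for f in self.facts:
            if f.subject == subject and f.predicate == predicate and f.retired_at is None:
                return f
        return None

    def history(self, subject: str, predicate: str) -> list[Fact]:
        return [f for f in self.facts
                if f.subject == subject and f.predicate == predicate]
```

For example, after `assert_fact("user", "ui_library", "shadcn", "user")` followed by `assert_fact("user", "ui_library", "radix", "user")`, `current()` returns the new preference while `history()` still shows the retired one with its timestamp and author.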

Why this matters

- Ask “What’s our roadmap now?” and “What was it last quarter?” — timeline and authorship are always preserved.

- Change a preference (e.g. “I no longer use shadcn”) — assistants see both old and new memory, so no more stale facts or embarrassing hallucinations.

- Every answer is traceable: hover a fact to see who/when/why it got there.

Try it

- Hosted free tier (HN launch): https://core.heysol.ai

- Docs: https://docs.heysol.ai/core/overview

demondynamic ◴[] No.44432659[source]
Hey nice one! Where can I find the documentation for self-hosting?
replies(1): >>44432723 #
harshithmul ◴[] No.44432723[source]
Hey, thank you! You can find it here: https://github.com/RedPlanetHQ/core?tab=readme-ov-file#core-... It's a Docker Compose based setup.
replies(1): >>44433040 #
demondynamic ◴[] No.44433040[source]
What’s even the point of calling yourself open source if you can’t bother to support LLaMA models? Kinda defeats the purpose, don’t you think?
replies(1): >>44433208 #
harshithmul ◴[] No.44433208[source]
We are working on it. LLaMA models don't perform well yet at extracting facts from messages. We should have this out in the next couple of days.