
111 points by Manik_agg | 1 comment

I keep running into the same problem: each AI app “remembers” me in its own silo. ChatGPT knows my project details, Cursor forgets them, Claude starts from zero… so I end up re-explaining myself dozens of times a day across these apps.

The deeper problem

1. Not portable – context is vendor-locked; nothing travels across tools.

2. Not relational – most memory systems store only the latest fact (“sticky notes”) with no history or provenance.

3. Not yours – your AI memory is sensitive first-party data, yet you have no control over where it lives or how it’s queried.

Demo video: https://youtu.be/iANZ32dnK60

Repo: https://github.com/RedPlanetHQ/core

What we built

- CORE (Context Oriented Relational Engine): An open source, shareable knowledge graph (your memory vault) that lets any LLM (ChatGPT, Cursor, Claude, SOL, etc.) share and query the same persistent context.

- Temporal + relational: Every fact gets a full version history (who, when, why), and nothing is wiped out when you change it; the old fact is just timestamped and retired (see the sketch after this list).

- Local-first or hosted: Run it offline in Docker, or use our hosted instance. You choose which memories sync and which stay private.
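To make the temporal part concrete, here's a rough sketch of what a versioned statement can look like. Field names are illustrative, not our exact schema:

    // Illustrative shape of a versioned statement (not CORE's exact schema).
    interface Statement {
      subject: string;    // entity the fact is about, e.g. "user"
      predicate: string;  // relationship, e.g. "works_on"
      object: string;     // value or target entity, e.g. "project:core"
      validFrom: string;  // ISO timestamp when the fact was recorded
      validTo?: string;   // set when the fact is superseded; absent = still current
      source: string;     // provenance: which app or conversation produced it
    }

    // Because every client reads the same store, "current context" is simply
    // the set of statements that have not been retired.
    const currentContext = (facts: Statement[]) =>
      facts.filter(f => f.validTo === undefined);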

Try it

- Hosted free tier (HN launch): https://core.heysol.ai

- Docs: https://docs.heysol.ai/core/overview

ianbicking No.44438552
I've been building a memory system myself, so I have some thoughts...

Why use a knowledge graph/triples? I have not been able to come up with any use for the predicate or reason to make these associations. Simple flat statements seem entirely sufficient and more accurate to the source material.

... OK, looking a little more, I'm guessing it is a way to see when a memory should be updated; you can match on the first two items of the triple. In a sense you are normalizing the input and hoping that reveals an update or duplicate memory.

I would be curious how well this works in practice. I've spent a fair amount of effort trying to merge and deduplicate memories in a more ad hoc way, generally using the LLM for this process (giving it a new memory and a list of old memories). It would feel much more deterministic and understandable to do this in a structured way. On the other hand I'm not sure how stable these triples would be. Would they all end up attached to the user? And will the predicate be helpful to establish meaningful relationships, or could the memories simply be attached to an entity?
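To make that concrete, here is roughly what I imagine the structured matching to look like; the names are mine, not taken from the repo:

    // Rough sketch (my reading, not CORE's code): treat a new triple as a
    // potential update if an existing statement shares subject and predicate.
    type Triple = { subject: string; predicate: string; object: string };

    function findCandidateUpdates(incoming: Triple, existing: Triple[]): Triple[] {
      return existing.filter(
        s => s.subject === incoming.subject && s.predicate === incoming.predicate
      );
    }

    // (my house, needs repair, attic bath) and (my house, needs repair, roof)
    // share subject and predicate, so both surface as candidates even though
    // they describe different repairs, which is exactly the stability question
    // I'm raising.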

For instance I could list a bunch of facts related to my house: the address, which room I sleep in, upcoming and past repairs, observations on the yard, etc. Many (but not all) of these could best be represented as one "about my house" memory, with all the details embedded in one string of natural language text. It would be great to structure repairs... but how will that work? (my house, needs repair, attic bath)? Or (my house, has room, attic bathroom) and (attic bathroom, needs repair, bath)? Will the system pick one somewhat arbitrarily and then, seeing that past memory, replicate its structure?

Another representation that occurs to me for detecting duplicates and updates is simply "is related to entities". This creates a flatter database where there's less ambiguity in how memories are represented.
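Something like this, just as a sketch:

    // Sketch of the flatter alternative: keep the memory as free text and
    // only record which entities it mentions (names illustrative).
    interface FlatMemory {
      text: string;        // natural-language statement, kept verbatim
      entities: string[];  // entities the memory "is related to"
      createdAt: string;
    }

    const memory: FlatMemory = {
      text: "The attic bathroom bath needs repair before winter.",
      entities: ["my house", "attic bathroom"],
      createdAt: new Date().toISOString(),
    };

    // Duplicate/update detection then becomes: fetch memories that share an
    // entity and let the LLM reconcile the free text.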

Anyway, that's one area that stuck out to me. It wasn't clear to me where the schema for memories is in the codebase, I think that would be very useful to understanding the system.

Manoj58 No.44439148
Hey, I'm another co-founder of CORE. Great question about triples vs. flat fact statements! Your house example actually highlights why we went with a reified graph:

With fact statements, you'd need to decide upfront: is this one "about my house" memory or separate facts? Our approach lets you do both:

Representation flexibility: For your house example, we can model (house, needs repair, attic bath) AND connect it to (attic bathroom, has fixture, bath). The LLM extraction helps maintain consistency, but the graph structure allows both high-level and detailed representations simultaneously.
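Roughly, with made-up identifiers rather than our actual storage format:

    // Both granularities coexist because the "attic bathroom" entity is shared,
    // so the high-level and detailed statements stay connected in one graph.
    const statements = [
      { subject: "my house",       predicate: "has room",     object: "attic bathroom" },
      { subject: "attic bathroom", predicate: "has fixture",  object: "bath" },
      { subject: "my house",       predicate: "needs repair", object: "attic bath" },
    ];

    // Walking out from one entity reaches facts at either granularity.
    const about = (entity: string) =>
      statements.filter(s => s.subject === entity || s.object === entity);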

Updating and deduplication:

- We identify potential duplicates/updates by matching subject-predicate patterns.

- When new information contradicts old (e.g., a repair is completed), we don't delete; we mark the old statement invalid at timestamp X and create a new valid statement (sketched below).

- This maintains a complete history while still showing the current state.

- The structured format makes conflicts explicit rather than buried in text.
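A simplified sketch of that update flow (field names illustrative, not our exact schema):

    // Invalidate-then-append: nothing is deleted, the old statement simply
    // stops being "current".
    interface VersionedStatement {
      subject: string;
      predicate: string;
      object: string;
      validFrom: string;
      validTo?: string;
    }

    function applyUpdate(
      store: VersionedStatement[],
      incoming: { subject: string; predicate: string; object: string },
      now = new Date().toISOString()
    ): VersionedStatement[] {
      return [
        // retire any still-current statement with the same subject/predicate
        ...store.map(s =>
          s.subject === incoming.subject &&
          s.predicate === incoming.predicate &&
          s.validTo === undefined
            ? { ...s, validTo: now }
            : s
        ),
        // append the new current version
        { ...incoming, validFrom: now },
      ];
    }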

The schema isn't rigid - we have predefined types (Person, Place, etc.), but relationships form dynamically. This gives structure where helpful, but flexibility where needed.

In practice, we've found this approach more deterministic for tracking knowledge evolution while still preserving the context and nuance of natural language through provenance links.