111 points by Manik_agg | 2 comments

I keep running into the same problem: each AI app “remembers” me in its own silo. ChatGPT knows my project details, Cursor forgets them, Claude starts from zero… so I end up re-explaining myself dozens of times a day across these apps.

The deeper problem

1. Not portable – context is vendor-locked; nothing travels across tools.

2. Not relational – most memory systems store only the latest fact (“sticky notes”) with no history or provenance.

3. Not yours – your AI memory is sensitive first-party data, yet you have no control over where it lives or how it’s queried.

Demo video: https://youtu.be/iANZ32dnK60

Repo: https://github.com/RedPlanetHQ/core

What we built

- CORE (Context Oriented Relational Engine): An open source, shareable knowledge graph (your memory vault) that lets any LLM (ChatGPT, Cursor, Claude, SOL, etc.) share and query the same persistent context.

- Temporal + relational: Every fact gets a full version history (who, when, why), and nothing is wiped out when you change it; old versions are just timestamped and retired (see the sketch after this list).

- Local-first or hosted: Run it offline in Docker, or use our hosted instance. You choose which memories sync and which stay private.
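To make the “temporal + relational” point concrete, here is a minimal sketch of what a versioned fact record could look like. This is an illustration only, not CORE's actual schema; the names (`Fact`, `update_fact`) are hypothetical:

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone
from typing import Optional

@dataclass
class Fact:
    subject: str
    statement: str
    source: str                  # provenance: who or what asserted this
    created_at: datetime = field(default_factory=lambda: datetime.now(timezone.utc))
    retired_at: Optional[datetime] = None   # set on update instead of deleting

def update_fact(history: list[Fact], new: Fact) -> None:
    """Retire the currently active version; never wipe old ones."""
    for fact in history:
        if fact.retired_at is None:
            fact.retired_at = new.created_at
    history.append(new)
```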

Try it

- Hosted free tier (HN launch): https://core.heysol.ai

- Docs: https://docs.heysol.ai/core/overview

ianbicking No.44438552
I've been building a memory system myself, so I have some thoughts...

Why use a knowledge graph/triples? I have not been able to come up with any use for the predicate, or any reason to make these associations. Simple flat statements seem entirely sufficient and more accurate to the source material.

... OK, looking a little more, I'm guessing it is a way to see when a memory should be updated; you can match on the first two items of the triple. In a sense you are normalizing the input and hoping that exposes an update or duplicate memory.
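Concretely, I'd guess it amounts to keying the store on the first two items, so a repeated key signals an update or duplicate. A hypothetical sketch, not anything from the CORE codebase:

```python
# Memories as (subject, predicate, object) triples, keyed on the first two items.
store: dict[tuple[str, str], str] = {}

def add_triple(subject: str, predicate: str, obj: str) -> str:
    """Matching on (subject, predicate) makes update/duplicate detection deterministic."""
    key = (subject, predicate)
    if key in store:
        if store[key] == obj:
            return "duplicate"
        store[key] = obj          # e.g. ("user", "lives in", "Berlin") -> "Munich"
        return "update"
    store[key] = obj
    return "new"
```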

I would be curious how well this works in practice. I've spent a fair amount of effort trying to merge and deduplicate memories in a more ad hoc way, generally using the LLM for this process (giving it a new memory and a list of old memories). It would feel much more deterministic and understandable to do this in a structured way. On the other hand I'm not sure how stable these triples would be. Would they all end up attached to the user? And will the predicate be helpful to establish meaningful relationships, or could the memories simply be attached to an entity?

For instance I could list a bunch of facts related to my house: the address, which room I sleep in, upcoming and past repairs, observations on the yard, etc. Many (but not all) of these could be best represented as one "about my house" memory, with all the details embedded in one string of natural language text. It would be great to structure repairs... but how will that work? (my house, needs repair, attic bath)? Or (my house, has room, attic bathroom) and (attic bathroom, needs repair, bath)? Will the system pick one somewhat arbitrarily and then, seeing that past memory, replicate its structure?

Another representation that occurs to me for detecting duplicates and updates is simply "is related to entities". This creates a flatter database where there's less ambiguity in how memories are represented.
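To make the contrast concrete, here is how the two options might look side by side (purely hypothetical encodings):

```python
# Two equally plausible triple encodings of the same fact -- the ambiguity above:
option_a = [("my house", "needs repair", "attic bath")]
option_b = [("my house", "has room", "attic bathroom"),
            ("attic bathroom", "needs repair", "bath")]

# The flatter "is related to entities" alternative: one natural-language
# memory tagged with the entities it touches; dedup compares entity overlap.
flat_memory = {
    "text": "The bath in the attic bathroom needs repair.",
    "entities": ["my house", "attic bathroom"],
}
```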

Anyway, that's one area that stuck out to me. It wasn't clear to me where the schema for memories is in the codebase; I think that would be very useful for understanding the system.

replies(2): >>44439148 >>44440465
1. visarga No.44440465
I built a graph memory MCP tool as well. I don't use triplets; instead I generate nodes. A node is composed of (id, title, text), and the text can contain inline links, like @45, referencing past nodes. So it can create both a node and its relations in one tool call.
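Roughly this shape (a simplified sketch, not the actual tool code):

```python
import re
from dataclasses import dataclass

@dataclass
class Node:
    id: int
    title: str
    text: str   # may contain inline links like "@45" referencing past nodes

def linked_ids(node: Node) -> list[int]:
    """Extract the ids of past nodes referenced inline in the text."""
    return [int(m) for m in re.findall(r"@(\d+)", node.text)]

n = Node(46, "Attic repairs", "The bath needs fixing; house overview is in @45.")
assert linked_ids(n) == [45]
```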

My MCP has two tools: a search tool and a node-adding tool. The search tool uses embedding similarity to retrieve K nodes, then expands along their links and fetches another P nodes. By controlling K and P the LLM can use the graph as simple RAG, as a pure linked graph, or anywhere in between. In practice I use Claude, which is able to do deep searches. What it does not find in one call it locates in 4-5 calls.
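In sketch form, the search could work like this (simplified; assumes one embedding vector per node and reuses `Node`/`linked_ids` from the sketch above):

```python
import numpy as np

def search(query_vec: np.ndarray, nodes: dict[int, Node],
           vecs: dict[int, np.ndarray], k: int, p: int) -> list[Node]:
    """Retrieve K nodes by cosine similarity, then follow inline links for up
    to P more. p=0 behaves like plain RAG; small k with larger p walks the graph."""
    sims = {nid: float(v @ query_vec) / (np.linalg.norm(v) * np.linalg.norm(query_vec))
            for nid, v in vecs.items()}
    hits = sorted(sims, key=sims.get, reverse=True)[:k]
    expanded: list[int] = []
    for nid in hits:
        for lid in linked_ids(nodes[nid]):
            if lid in nodes and lid not in hits and lid not in expanded and len(expanded) < p:
                expanded.append(lid)
    return [nodes[i] for i in hits + expanded]
```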

The LLM will only add new ideas not already in the KB. It does the searching, filtering and writing. I am just directing this process. The KB can grow unbounded because when I need to add new nodes I first search the KB and find relevant nodes to link to without loading every node.
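And the search-before-write loop, in the same sketch style (`embed` stands in for whatever embedding call is actually used; the names are hypothetical):

```python
def add_idea(title: str, text: str, nodes: dict[int, Node],
             vecs: dict[int, np.ndarray], embed, dup_threshold: float = 0.95):
    """Search first; only write if the idea isn't already in the KB, and link
    the new node to what the search found -- no need to load every node."""
    qv = embed(text)
    related = search(qv, nodes, vecs, k=3, p=2)
    for r in related:
        sim = float(vecs[r.id] @ qv) / (np.linalg.norm(vecs[r.id]) * np.linalg.norm(qv))
        if sim > dup_threshold:
            return None                       # near-duplicate: skip the write
    links = " ".join(f"@{r.id}" for r in related)
    nid = max(nodes, default=0) + 1
    nodes[nid] = Node(nid, title, f"{text} {links}".strip())
    vecs[nid] = qv
    return nodes[nid]
```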

But one problem I see with these memory systems is that they can reduce interest in a topic once we put it in the KB.

replies(1): >>44440583
2. saxenauts No.44440583
I am building a graph memory too, and I agree with you. It is almost useless to generate triplets; instead I generate nodes that are usually statement strings, and they can extend up to a short paragraph.

I have strong opinions that memory should be a graph + vector hybrid. The vector store can index information as a cognitive fragment (e.g. all things related to my house) and keep editing it as a set of statements, while that node is associated with other nodes (e.g. my renovation plans, budgeting, etc.), because those are separate fragments. I am also using an LLM to consolidate and find new patterns across the connected memories.
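In sketch form, the hybrid I mean (names are illustrative, not my actual code):

```python
from dataclasses import dataclass, field

@dataclass
class Fragment:
    """A cognitive fragment: an editable set of statements about one topic,
    vector-indexed as a whole and linked to related fragments."""
    topic: str
    statements: list[str] = field(default_factory=list)
    links: set[str] = field(default_factory=set)   # topics of related fragments

house = Fragment("my house", ["I sleep in the attic bedroom."],
                 links={"my renovation plans", "budgeting"})
house.statements.append("The attic bath needs repair.")  # edit the fragment in place
# then re-embed " ".join(house.statements) so the vector index stays current
```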

> But one problem I see with these memory systems is that they can reduce interest in a topic once we put it in the KB.

Can you elaborate please?