198 points by alexmrv | 1 comment

Hey HN! I built a proof-of-concept for AI memory using Git instead of vector databases.

The insight: Git already solved versioned document management. Why are we building complex vector stores when we could just use markdown files with Git's built-in diff/blame/history?

How it works:

- Memories stored as markdown files in a Git repo
- Each conversation = one commit
- git diff shows how understanding evolves over time
- BM25 for search (no embeddings needed)
- LLMs generate search queries from conversation context

Example: Ask "how has my project evolved?" and it uses git diff to show actual changes in understanding, not just similarity scores.
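
A minimal sketch of that loop, using the rank-bm25 and GitPython packages from the stack listed below; the repo layout, tokenizer, and function names here are illustrative guesses, not DiffMem's actual code:

    # Sketch: BM25 retrieval over markdown memories in a Git repo.
    from pathlib import Path
    from git import Repo
    from rank_bm25 import BM25Okapi

    REPO_PATH = Path("memories")  # hypothetical repo location
    repo = Repo(REPO_PATH)

    # Index every markdown memory file with naive whitespace tokenization.
    docs = sorted(REPO_PATH.glob("**/*.md"))
    bm25 = BM25Okapi([d.read_text().lower().split() for d in docs])

    def search(query: str, k: int = 5) -> list[Path]:
        """Return the k best-matching memory files for an LLM-generated query."""
        scores = bm25.get_scores(query.lower().split())
        ranked = sorted(zip(scores, docs), reverse=True)
        return [doc for _, doc in ranked[:k]]

    def commit_conversation(summary: str) -> None:
        """One conversation = one commit: stage edited memories, then commit."""
        repo.git.add(all=True)
        repo.index.commit(summary)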

This is very much a PoC - rough edges everywhere, not production ready. But it's been working surprisingly well for personal use. The entire index for a year of conversations fits in ~100MB RAM with sub-second retrieval.

The cool part: You can git checkout to any point in time and see exactly what the AI knew then. Perfect reproducibility, human-readable storage, and you can manually edit memories if needed.
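
Reading a memory as it existed at an older commit, for instance, is a couple of lines of GitPython (the file name is invented for illustration):

    from git import Repo

    repo = Repo("memories")
    old = repo.commit("HEAD~50")  # any commit from the history
    # The file's contents as of that commit, without touching the worktree:
    snapshot = old.tree["project.md"].data_stream.read().decode()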

GitHub: https://github.com/Growth-Kinetics/DiffMem

Stack: Python, GitPython, rank-bm25, OpenRouter for LLM orchestration. MIT licensed.

Would love feedback on the approach. Is this crazy or clever? What am I missing that will bite me later?

1. lsb
Interesting! Text files in git can work for small sizes, like your 100MB.

That is what's known in FAISS as a "flat" index: just one thing after another. And of course you can query by primary key against the key-value store that is git, and do atomic updates, as you'd expect. In SQL land this is an unindexed column: you can do primary key lookups on the table, or you can scan through every row in order to find what you want.
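
In FAISS that flat scan looks like this (a sketch with random placeholder vectors):

    import faiss
    import numpy as np

    d = 1024                      # embedding dimensionality
    index = faiss.IndexFlatL2(d)  # "flat": exact brute-force scan
    index.add(np.random.rand(10_000, d).astype("float32"))

    query = np.random.rand(1, d).astype("float32")
    distances, ids = index.search(query, 10)  # visits every stored vector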

If you don't need fast query times, this could work great! You could also use SQL (maybe an AWS Aurora Postgres/MySQL table?) and stuff each fact and its embedding into a table, and get declarative relational queries (find me the closest 10 statements users A-J have made to embedding [0.1, 0.2, -0.1, ...] within the past day). Lots of SQL databases are getting embedding search (Postgres, SQLite, and more), which lets an embedding search happen in a few milliseconds instead of a few seconds.
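
A sketch of what that declarative query could look like against Postgres with the pgvector extension; the table, columns, and the truncated vector literal are all invented for illustration:

    import psycopg2

    conn = psycopg2.connect("dbname=memories")
    cur = conn.cursor()
    # pgvector's <-> operator is L2 distance; an ivfflat or hnsw index on
    # the embedding column is what gets this down to a few milliseconds.
    cur.execute("""
        SELECT user_id, statement
        FROM facts
        WHERE user_id BETWEEN 'A' AND 'J'
          AND created_at > now() - interval '1 day'
        ORDER BY embedding <-> %s::vector
        LIMIT 10
    """, ("[0.1, 0.2, -0.1]",))
    closest = cur.fetchall()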

It could be worth sketching out how to use SQLite for your application, instead of using files on disk: SQLite was designed to be a better alternative to opening a file (what happens if power goes out while you are writing a file? what happens if you want to update two people's records, and not get caught mid-update by another web app process?) and is very well supported by many language ecosystems.
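
The two-record update, for example, is a built-in transaction with Python's standard sqlite3 module (schema invented for illustration):

    import sqlite3

    conn = sqlite3.connect("memories.db")
    conn.execute("CREATE TABLE IF NOT EXISTS memories (person TEXT PRIMARY KEY, notes TEXT)")

    # Both updates land atomically: a crash or a concurrent reader sees
    # either the old state or the new one, never a half-applied update.
    with conn:
        conn.execute("UPDATE memories SET notes = ? WHERE person = ?", ("note a", "alice"))
        conn.execute("UPDATE memories SET notes = ? WHERE person = ?", ("note b", "bob"))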

Then, to take full advantage of vector embedding engines: what happens if my embedding is 1024 dimensions and each one is a 32-bit floating point value? Do I need to keep all of that precision? Is 16-bit okay? 8-bit floats? What about reducing the dimensionality? Is accuracy and recall good enough if I represent each dimension with an index into a palette of the best 256 floats for that dimension? What about representing each pair of dimensions with an index into a palette of the best 256 pairs of floats for those two dimensions? What about, instead of looking through every embedding one by one, we note that people talk about one of three different topics, keep one index for each of those major topics, and answer a nearest-neighbor query by first finding the closest topic (or maybe the closest two?) and then searching only those smaller indices? Each of these hypotheticals is literally a different index factory string in an embedding-search library called FAISS, and could easily be thousands of lines of code if you wrote it yourself.
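
Roughly how those hypotheticals map onto FAISS's index_factory strings, as I read them (the mapping is my own gloss):

    import faiss

    d = 1024
    flat = faiss.index_factory(d, "Flat")       # exact brute-force baseline
    fp16 = faiss.index_factory(d, "SQfp16")     # 16-bit scalar quantization
    int8 = faiss.index_factory(d, "SQ8")        # 8-bit scalar quantization
    pq1  = faiss.index_factory(d, "PQ1024")     # one 256-entry palette per dimension
    pq2  = faiss.index_factory(d, "PQ512")      # one 256-entry palette per pair of dims
    ivf  = faiss.index_factory(d, "IVF3,Flat")  # partition vectors into 3 "topic" clusters

    # Everything but "Flat" must be trained on sample vectors before use, e.g.:
    #   ivf.train(xb); ivf.add(xb)  # then probe the closest 1-2 clusters at query time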

It’s definitely a good learning experience to implement your own embedding database atop git! Especially if you run it in production! 100MB is small enough that anything reasonable is going to be fast.