
198 points | alexmrv | 1 comment

Hey HN! I built a proof-of-concept for AI memory using Git instead of vector databases.

The insight: Git already solved versioned document management. Why are we building complex vector stores when we could just use markdown files with Git's built-in diff/blame/history?

How it works:

- Memories are stored as markdown files in a Git repo
- Each conversation = one commit
- git diff shows how understanding evolves over time
- BM25 for search (no embeddings needed)
- LLMs generate search queries from conversation context

Example: ask "how has my project evolved?" and it uses git diff to show the actual changes in understanding, not just similarity scores.
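To make the loop concrete, here is a minimal sketch of the idea (not the DiffMem code itself): markdown memories in a Git repo, one commit per conversation, BM25 for retrieval, git diff for evolution. The file layout, function names, and naive tokenizer are assumptions for illustration.

```python
from pathlib import Path

from git import Repo                 # GitPython
from rank_bm25 import BM25Okapi      # rank-bm25

REPO_PATH = Path("memory-repo")      # hypothetical repo of markdown memories

def write_memory(repo: Repo, name: str, text: str, message: str) -> None:
    """Write or update one markdown memory file and commit it (one conversation = one commit)."""
    path = Path(repo.working_tree_dir) / f"{name}.md"
    path.write_text(text, encoding="utf-8")
    repo.index.add([f"{name}.md"])
    repo.index.commit(message)

def build_index(repo: Repo):
    """Tokenize every markdown file and build an in-memory BM25 index (no embeddings)."""
    docs = sorted(Path(repo.working_tree_dir).glob("**/*.md"))
    tokenized = [d.read_text(encoding="utf-8").lower().split() for d in docs]  # naive tokenizer
    return BM25Okapi(tokenized), docs

def search(bm25: BM25Okapi, docs, query: str, k: int = 3):
    """Return the top-k memory files for an (LLM-generated) search query."""
    scores = bm25.get_scores(query.lower().split())
    return sorted(zip(scores, docs), key=lambda p: p[0], reverse=True)[:k]

if __name__ == "__main__":
    repo = Repo.init(REPO_PATH, mkdir=True)
    write_memory(repo, "project", "The project uses a vector DB for retrieval.", "conversation 1")
    write_memory(repo, "project", "The project now uses BM25 over markdown files.", "conversation 2")

    bm25, docs = build_index(repo)
    print(search(bm25, docs, "how does retrieval work"))

    # "How has my project evolved?" -> show the actual change in understanding, not a similarity score.
    print(repo.git.diff("HEAD~1", "HEAD", "--", "project.md"))
```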

This is very much a PoC - rough edges everywhere, not production ready. But it's been working surprisingly well for personal use. The entire index for a year of conversations fits in ~100MB RAM with sub-second retrieval.

The cool part: You can git checkout to any point in time and see exactly what the AI knew then. Perfect reproducibility, human-readable storage, and you can manually edit memories if needed.
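For example, a sketch of that reproducibility angle, assuming the same hypothetical repo layout as above: walk a memory file's history and read its contents as of an earlier commit, without even touching the working tree.

```python
from git import Repo

repo = Repo("memory-repo")

# Walk the history of one memory file, newest first.
for commit in repo.iter_commits(paths="project.md"):
    print(commit.hexsha[:8], commit.committed_datetime, commit.message.strip())

# Read the file exactly as it was at an earlier commit (HEAD~1 here, just as an example).
old_blob = repo.commit("HEAD~1").tree / "project.md"
print(old_blob.data_stream.read().decode("utf-8"))
```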

GitHub: https://github.com/Growth-Kinetics/DiffMem

Stack: Python, GitPython, rank-bm25, OpenRouter for LLM orchestration. MIT licensed.

Would love feedback on the approach. Is this crazy or clever? What am I missing that will bite me later?

BenoitP (No.44970042):
I'm failing to grasp how it solves or replaces what vector DBs were created for in the first place (high-dimensional neighborhood search, where the space to be searched grows as distance^dimension).
OutOfHere (No.44970631):
The submission and the readme fail to explain the important thing, which is how BM25 is run. If it creates bags of words for every document on every query, that would be inefficient. If it reuses a BM25 index, it is not clear when that index is constructed, how it is updated, and how it is stored.
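(For illustration only, a minimal sketch of what reusing the index could look like: build the bags of words once, cache the index on disk keyed by the repo's HEAD commit, and rebuild only when the memory repo changes. This is an assumed design, not necessarily how DiffMem actually handles it.)

```python
import pickle
from pathlib import Path

from git import Repo
from rank_bm25 import BM25Okapi

CACHE = Path("bm25_cache.pkl")   # hypothetical cache location

def load_or_build_index(repo_path: str = "memory-repo"):
    repo = Repo(repo_path)
    head = repo.head.commit.hexsha

    if CACHE.exists():
        cached = pickle.loads(CACHE.read_bytes())
        if cached["head"] == head:           # repo unchanged -> reuse the stored index
            return cached["bm25"], cached["docs"]

    docs = sorted(Path(repo_path).glob("**/*.md"))
    tokenized = [d.read_text(encoding="utf-8").lower().split() for d in docs]
    bm25 = BM25Okapi(tokenized)
    CACHE.write_bytes(pickle.dumps({"head": head, "bm25": bm25, "docs": docs}))
    return bm25, docs
```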

Because BM25 ostensibly relies on word matching, there is no way it will extend to concept matching.