
198 points alexmrv | 2 comments

Hey HN! I built a proof-of-concept for AI memory using Git instead of vector databases.

The insight: Git already solved versioned document management. Why are we building complex vector stores when we could just use markdown files with Git's built-in diff/blame/history?

How it works:

- Memories stored as markdown files in a Git repo
- Each conversation = one commit
- git diff shows how understanding evolves over time
- BM25 for search (no embeddings needed)
- LLMs generate search queries from conversation context

Example: ask "how has my project evolved?" and it uses git diff to show actual changes in understanding, not just similarity scores.
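The "no embeddings" part is what keeps the stack small: BM25 is just term statistics. A minimal, self-contained sketch of the idea (not DiffMem's actual code — the project uses the rank-bm25 library; the example memory strings are made up):

```python
import math
from collections import Counter

def bm25_rank(query, docs, k1=1.5, b=0.75):
    """Rank docs for a query with plain BM25 -- no embeddings, no vector store."""
    tokenized = [d.lower().split() for d in docs]
    N = len(tokenized)
    avgdl = sum(len(d) for d in tokenized) / N
    # document frequency: how many docs contain each term
    df = Counter()
    for d in tokenized:
        for t in set(d):
            df[t] += 1
    scores = []
    for d in tokenized:
        tf = Counter(d)
        dl = len(d)
        s = 0.0
        for t in query.lower().split():
            if t not in tf:
                continue
            idf = math.log((N - df[t] + 0.5) / (df[t] + 0.5) + 1)
            s += idf * tf[t] * (k1 + 1) / (tf[t] + k1 * (1 - b + b * dl / avgdl))
        scores.append(s)
    # indices of docs, best match first
    return sorted(range(N), key=lambda i: scores[i], reverse=True)

memories = [
    "project kickoff: decided to use git for memory storage",
    "switched search from embeddings to bm25 after benchmarks",
    "weekly groceries list and errands",
]
print(bm25_rank("bm25 search embeddings", memories))  # best match is index 1
```

An LLM-generated query (the last bullet above) would just be the `query` argument here; the index over the markdown files is small enough to rebuild in memory on startup, which is consistent with the ~100MB figure below.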

This is very much a PoC - rough edges everywhere, not production ready. But it's been working surprisingly well for personal use. The entire index for a year of conversations fits in ~100MB RAM with sub-second retrieval.

The cool part: You can git checkout to any point in time and see exactly what the AI knew then. Perfect reproducibility, human-readable storage, and you can manually edit memories if needed.
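The time-travel property is plain git, nothing custom. A runnable sketch (throwaway repo, made-up memory contents) of "what did the AI know after conversation 1?":

```shell
set -e
dir=$(mktemp -d)
cd "$dir"
git init -q .
echo "user prefers python" > project.md
git add project.md
git -c user.email=a@b -c user.name=a commit -qm "conversation 1"
echo "user switched to rust" >> project.md
git add project.md
git -c user.email=a@b -c user.name=a commit -qm "conversation 2"
# diff shows how understanding evolved between the two conversations
git diff HEAD~1 HEAD -- project.md
# rewind the memory file to its state after conversation 1
git checkout -q HEAD~1 -- project.md
cat project.md   # back to "user prefers python"
```

Because the store is markdown, `cat`, `git blame`, and a text editor are the whole debugging toolkit.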

GitHub: https://github.com/Growth-Kinetics/DiffMem

Stack: Python, GitPython, rank-bm25, OpenRouter for LLM orchestration. MIT licensed.

Would love feedback on the approach. Is this crazy or clever? What am I missing that will bite me later?

1. rekttrader ◴[] No.44970011[source]
Using a vector db that's rebuilt in a post-commit hook is probably a reasonable improvement. With context lengths getting bigger, natural language is getting easier to work with, but you're leaving a bunch of gains on the floor without embeddings.
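A runnable sketch of that hook idea, with a plain-text queue file standing in for the actual embed-and-upsert step (the queue file name and commit contents are made up for the demo):

```shell
set -e
repo=$(mktemp -d)
cd "$repo"
git init -q .
# post-commit hook: record which markdown memories the new commit touched,
# so a separate worker can (re-)embed just those files into the vector db
cat > .git/hooks/post-commit <<'EOF'
#!/bin/sh
git diff-tree --root --no-commit-id --name-only -r HEAD -- '*.md' >> .reindex-queue
EOF
chmod +x .git/hooks/post-commit
echo "memory one" > project.md
git add project.md
git -c user.email=a@b -c user.name=a commit -qm "conversation 1"
cat .reindex-queue   # lists project.md, the file this commit changed
```

Git stays the source of truth for history and diffs; the vector index is a derived cache of the "now" state that can always be rebuilt from the repo.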

No shade on your project, this is an emerging space and we can all use novel approaches.

Keep it up!

replies(1): >>44970025 #
2. alexmrv ◴[] No.44970025[source]
Ooo nice — post-commit creation of the db for the "now" state, with links back to the files for diff/historical? Best of both worlds.