
224 points | jamesxv7 | 1 comment

First of all, this is purely a personal learning project for me, combining three of my passions: photography, software engineering, and family memories. I have a large collection of family photos and want to build an interactive experience for exploring them, à la Google Photos or Apple Photos.

My goal is to create a system with smart search capabilities, and one of the most important requirements is that it must run entirely on my local hardware. Privacy is key, but the main driver is the challenge and joy of building it myself (and, obviously, learning).

The key features I'm aiming for are:

Automatic identification and tagging of family members (local face recognition).

Generation of descriptive captions for each photo.

Natural language search (e.g., "Show me photos of us at the beach in Luquillo from last summer").

I've already prompted AI tools for a high-level project plan, and they provided a solid blueprint (e.g., Ollama with LLaVA, a vector DB like ChromaDB, and so on). Now I'm highly interested in the real-world human experience. I'm looking for advice, learning stories, and the little details that only come from building something similar.

What tools, models, and best practices would you recommend for a project like this in 2025? Specifically, I'm curious about combining structured metadata (EXIF), face recognition data, and semantic vector search into a single, cohesive application.
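On the "single, cohesive application" question: one common pattern is to pre-filter on structured metadata (EXIF dates, GPS, recognized names) and only then rank the survivors by embedding similarity. A toy sketch in plain Python, with made-up field names and 3-d stand-in vectors rather than any real schema or encoder:

```python
from dataclasses import dataclass
from datetime import date
import math

@dataclass
class Photo:
    path: str
    taken: date          # from EXIF DateTimeOriginal
    caption_vec: list    # caption embedding from some text encoder

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

def search(photos, query_vec, start, end, top_k=5):
    # Cheap structured pre-filter first, vector ranking second.
    pool = [p for p in photos if start <= p.taken <= end]
    pool.sort(key=lambda p: cosine(p.caption_vec, query_vec), reverse=True)
    return pool[:top_k]

photos = [
    Photo("beach.jpg", date(2024, 7, 10), [0.9, 0.1, 0.0]),
    Photo("snow.jpg", date(2024, 12, 24), [0.0, 0.9, 0.1]),
    Photo("pool.jpg", date(2024, 7, 20), [0.8, 0.2, 0.1]),
]
hits = search(photos, [1.0, 0.0, 0.0], date(2024, 6, 1), date(2024, 8, 31))
print([p.path for p in hits])  # ['beach.jpg', 'pool.jpg'] — snow.jpg is filtered out by date
```

A vector DB like ChromaDB bakes this pattern in (metadata `where` filters alongside embedding queries), so the hand-rolled filter above would collapse into one query call there.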

Any and all advice would be deeply appreciated. Thanks!

1. coffeecoders No.44426391
I have been building something like this but for personal use.

As of now, I use a SentenceTransformer model to chunk and embed files, BLIP for captioning ("Family vacation in Banff, February 2025"), and MTCNN with InsightFace for face detection. My index stores captions, face embeddings, and EXIF metadata (date, GPS) for queries like "show photos of us in Banff last winter." I'm working on integrating ChromaDB for faster searches.
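For the "last winter" part of such queries, the EXIF timestamp has to be parsed (it uses colons in the date part) and mapped to a season. A minimal stdlib sketch — the meteorological, northern-hemisphere season boundaries are a simplifying assumption:

```python
from datetime import datetime

def parse_exif_datetime(value):
    # EXIF DateTimeOriginal format: "YYYY:MM:DD HH:MM:SS"
    return datetime.strptime(value, "%Y:%m:%d %H:%M:%S")

# Meteorological seasons, northern hemisphere (assumption, not EXIF data).
_SEASONS = {12: "winter", 1: "winter", 2: "winter",
            3: "spring", 4: "spring", 5: "spring",
            6: "summer", 7: "summer", 8: "summer",
            9: "autumn", 10: "autumn", 11: "autumn"}

def season_of(dt):
    return _SEASONS[dt.month]

dt = parse_exif_datetime("2025:02:15 10:30:00")
print(season_of(dt))  # winter
```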

Eventually, I aim to store indexes as:

{
  "filename": "/Vacation/Banff/Wife.jpg",
  "chunk_id": 0,
  "text": "Family at Banff, February 2025",
  "caption_embedding": [0.1, 0.2, ...],
  "face_embeddings": [{"name": "NT", "embedding": [0.3, 0.4, ...]}, ...],
  "exif": {
    "DateTimeOriginal": "2025:02:15",
    "GPSCoordinates": "18.387, -65.992"
  }
}
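Given that schema, tagging a newly detected face comes down to comparing its embedding against the named `face_embeddings` entries and accepting the nearest match above a threshold. A sketch in plain Python — the 2-d vectors and the 0.6 threshold are toy stand-ins for the 512-d embeddings and the tuning a model like InsightFace would actually need:

```python
import math

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

def identify(face_vec, known_faces, threshold=0.6):
    # Return the best-matching name above the threshold, else None ("unknown").
    best_name, best_score = None, threshold
    for entry in known_faces:
        score = cosine(face_vec, entry["embedding"])
        if score > best_score:
            best_name, best_score = entry["name"], score
    return best_name

known = [
    {"name": "NT", "embedding": [0.3, 0.4]},
    {"name": "JV", "embedding": [-0.5, 0.1]},
]
print(identify([0.31, 0.41], known))  # NT
```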

I also built a UI (like Spotlight Search) to search through these indexes.

Code (in progress): https://github.com/neberej/smart-search