In practice, pure LLM suggestions often feel detached from the actual codebase: they miss intent, architectural constraints, and team conventions. What helped us was adopting a repo-aware evaluation approach with tooling that:

- Scans entire repos and generates architecture diagrams, dependency maps, and feature breakdowns (the scanning step is sketched below).
- Surfaces AI suggestions grounded in that context, so prompts don't float in isolation.
- Supports human-in-the-loop validation, making it easy to vet AI-generated PRs before merging.
- Tracks drift, technical debt, and cost per eval, so AI usage isn't a black box.
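To make the first bullet concrete, here's a minimal sketch of the scanning step, assuming a Python codebase: walk the repo, extract import edges into a dependency map, and render one file's neighborhood as prompt context. The `dependency_map` and `context_block` helpers (and the focus path in the usage example) are hypothetical illustrations, not any particular tool's API.

```python
# Sketch: build an import-level dependency map for a Python repo and
# render the neighborhood of one file as context for a prompt.
import ast
import pathlib


def dependency_map(repo_root: str) -> dict[str, list[str]]:
    """Map each module (as a relative path) to the modules it imports."""
    deps: dict[str, list[str]] = {}
    for path in pathlib.Path(repo_root).rglob("*.py"):
        try:
            tree = ast.parse(path.read_text(encoding="utf-8"))
        except (SyntaxError, UnicodeDecodeError):
            continue  # skip files that don't parse cleanly
        imports: list[str] = []
        for node in ast.walk(tree):
            if isinstance(node, ast.Import):
                imports.extend(alias.name for alias in node.names)
            elif isinstance(node, ast.ImportFrom) and node.module:
                imports.append(node.module)
        deps[str(path.relative_to(repo_root))] = sorted(set(imports))
    return deps


def module_name(rel_path: str) -> str:
    """Convert a relative file path like 'app/service.py' to 'app.service'."""
    return rel_path[:-3].replace("/", ".").replace("\\", ".")


def context_block(deps: dict[str, list[str]], focus: str) -> str:
    """Render what the focus file imports and which files import it."""
    focus_mod = module_name(focus)
    importers = [mod for mod, imps in deps.items() if focus_mod in imps]
    return "\n".join([
        f"Dependency context for {focus}:",
        f"  imports: {', '.join(deps.get(focus, [])) or '(none)'}",
        f"  imported by: {', '.join(importers) or '(none)'}",
    ])


if __name__ == "__main__":
    deps = dependency_map(".")
    # 'app/service.py' is a hypothetical focus file for illustration.
    print(context_block(deps, "app/service.py"))
```

The point of the sketch is the grounding loop, not the parser: the dependency neighborhood gets prepended to the prompt, so suggestions land with the file's real callers and callees in view rather than in isolation.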
The result isn't autopilot coding; it's contextual assistance that amplifies developer decisions. That aligns with Dohmke's point: use AI to accelerate, but keep the engineer firmly in the driver's seat.
Curious if others have tried similar repo‑aware AI workflows that don’t sacrifice control for speed?