
What to build instead of AI agents

(decodingml.substack.com)
233 points by giuliomagnifico | 1 comment
mindwok (No.44450569)
I'm not yet convinced (though I remain open to the idea) that AI agents are going to be a widely adopted pattern in the way people on LinkedIn suggest.

The way I use AI today is by keeping a pretty tight leash on it, à la Claude Code and Cursor. Not because the models aren't good enough, but because I like to weigh in frequently to provide taste and direction. Giving the AI more agency isn't necessarily desirable, because I want to be the one providing that taste.

Maybe that'll change as I use these tools more and new ergonomics reveal themselves, but right now I don't really want AI that's too agentic. Otherwise, I kind of lose connection to it.

replies(3): >>44450601 >>44450841 >>44451530
1. afc (No.44450841)
My thinking is that over time I can incrementally codify many of these individual "taste" components as prompts that each review a change and propose suggestions.

For example, a single prompt could tell an LLM to make sure a code change doesn't introduce mutability when the same functionality can be achieved with immutable expressions. Another could tell it to avoid useless log statements (with my specific description of what that means).
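A minimal sketch of what two such prompts might look like if written down as data; the names, wording, and JSON schema below are illustrative, not the commenter's actual prompts:

    # Hypothetical "taste" prompts kept as data, one per review concern.
    # Each asks for machine-readable output so a tool can collect the results.
    REVIEW_PROMPTS = {
        "prefer-immutability": (
            "Review the following diff. Flag any place where mutable state is "
            "introduced even though the same behaviour could be expressed with "
            "immutable values or expressions. Respond in JSON: "
            '{"findings": [{"line": <int>, "issue": <str>, "suggestion": <str>}]}'
        ),
        "no-useless-logs": (
            "Review the following diff. Flag log statements that add no "
            "diagnostic value (for example, logging a value already visible in "
            "an adjacent log line). Use the same JSON schema as above."
        ),
    }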

When I want to evaluate a code change, I run all of these prompts separately against it, collecting their structured output (via MCP). Of course, I incorporate this into my code agent to provide automated review iterations.
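A rough sketch of that evaluation loop, assuming the prompt table above and a generic call_llm wrapper around whatever model client is in use; the MCP-based structured output the commenter describes is approximated here as plain JSON:

    import json
    from typing import Callable


    def review_diff(
        diff: str,
        prompts: dict[str, str],
        call_llm: Callable[[str], str],  # stand-in for your model client
    ) -> dict[str, list[dict]]:
        """Run each "taste" prompt separately against one diff, collect findings."""
        results: dict[str, list[dict]] = {}
        for name, instructions in prompts.items():
            raw = call_llm(f"{instructions}\n\n--- DIFF ---\n{diff}")
            try:
                parsed = json.loads(raw)
                results[name] = parsed.get("findings", []) if isinstance(parsed, dict) else []
            except json.JSONDecodeError:
                # Malformed output from a reviewer prompt is surfaced as its own finding.
                results[name] = [{"issue": f"unparseable output from '{name}' prompt"}]
        return results

Because each prompt is run independently, one check can be tightened or replaced without touching the others.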

If something slips through and I feel the need to "manually" provide context, I add a new prompt (or figure out how to extend whichever one failed).