Isn't the point he's making:
>> Yet too many AI projects consistently underestimate this, chasing flashy agent demos promising groundbreaking capabilities—until inevitable failures undermine their credibility.
This is the problem with the 'MCP for Foo' posts that have been popping up recently.
Adding a capability to your agent that the agent can't use just gives us exactly that:
> inevitable failures undermine their credibility
It should be relatively easy for everyone to agree that giving agents an unlimited set of arbitrary capabilities will just make them terrible at everything; and that promising these capabilities will make them better is:
A) false
B) undermining the credibility of agentic systems
C) undermining the credibility of the people making these promises
...I get it, it is hard to write good agent systems, but surely, a bunch of half-baked function-calling wrappers that don't really work... like, it's not a good look, right?
It's just vibe coding for agents.
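For concreteness, here's a minimal sketch (names, schema, and endpoint are entirely hypothetical) of the kind of thin wrapper I mean: the tool is technically registered with the model, but the vague description and raw, unshaped output give it almost nothing to work with.

```python
# Hypothetical sketch of a "half-baked function-calling wrapper":
# a capability is exposed, but not in a form an agent can use well.
import json
import urllib.parse
import urllib.request


def search_issues(query: str) -> str:
    """Search issues."""  # vague description: the model can't tell when (or when not) to call this
    # No error handling, no retries, no pagination, no output shaping:
    # whatever the API returns gets dumped straight into the context.
    url = f"https://example.invalid/api/issues?q={urllib.parse.quote(query)}"  # placeholder endpoint
    with urllib.request.urlopen(url) as resp:
        return resp.read().decode()


# A tool schema like this is what actually gets handed to the model.
# Multiply it by a few dozen equally vague tools and tool selection
# degrades for everything, not just this one.
TOOL_SPEC = {
    "name": "search_issues",
    "description": "Search issues.",
    "parameters": {
        "type": "object",
        "properties": {"query": {"type": "string"}},
        "required": ["query"],
    },
}

print(json.dumps(TOOL_SPEC, indent=2))
```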
I think it's quite reasonable to say, if you're building a system now, then:
> The key to navigating this tension is focus—choosing a small number of tasks to execute exceptionally well and relentlessly iterating upon them.
^ This seems like exceptionally good advice. If you can't make something that's actually good by iterating on it until it works, then you're going to end up being a devin (i.e. an over-promised, over-hyped failure).