
Agent Client Protocol (ACP)

(agentclientprotocol.com)
270 points by vinhnx | 5 comments
mg ◴[] No.45074786[source]
I'm fine with treating AI like a human developer:

I ask AI to write a feature (or fix a bug, or do a refactoring) and then I read the commit. If the commit is not to my liking, I "git reset --hard", improve my prompt and ask the AI to do the task again.

I call this "prompt coding":

https://www.gibney.org/prompt_coding

This way, there is no interaction between my coding environment and the AI at all, just as working with a human developer does not involve them doing anything in my editor.
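
For concreteness, the loop is roughly this (a Python sketch; run_agent is a hypothetical stand-in for whatever tool turns the prompt into a commit, and the reset target assumes the agent commits its work):

    import subprocess

    def run_agent(prompt: str) -> None:
        """Hypothetical stand-in: invoke your coding agent's CLI or API here."""
        print(f"(agent would act on: {prompt!r})")

    def prompt_code(prompt: str) -> None:
        while True:
            run_agent(prompt)                                   # AI writes the feature and commits
            subprocess.run(["git", "show"], check=True)         # read the commit like any other review
            if input("good enough? [y/N] ").lower() == "y":
                return
            subprocess.run(["git", "reset", "--hard", "HEAD~1"], check=True)  # throw the attempt away
            prompt = input("improved prompt: ")                 # refine and ask again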

replies(2): >>45074878 #>>45076374 #
Disposal8433 ◴[] No.45074878[source]
> Nowadays, it is better to write prompts

Very big doubt. AI can help with a few very specific tasks, but hallucinations still happen, and making things up (especially APIs) is unacceptable.

replies(6): >>45074958 #>>45074999 #>>45075081 #>>45075111 #>>45079473 #>>45081297 #
1. wongarsu ◴[] No.45074999[source]
In languages with strong compile-time checks (like, say, Rust) the obvious problems can mostly be solved by having the agent try to compile the program as a last step, and most agents now do that on their own. In cases where that doesn't work (more permissive languages like Python, or HTTP APIs) you can have the AI write tests and execute them. Or ask the AI to prototype and test features separately before adding them to the codebase. Adding MCP servers with documentation also helps a ton.
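
As a rough sketch (Python; the cargo/pytest commands are just stand-ins for whatever checks a given project already has), the "verify as a last step" idea is nothing more than:

    import subprocess

    def collect_errors(workdir: str) -> list[str]:
        """Run the project's own checks and gather anything the agent should fix."""
        errors = []
        for cmd in (["cargo", "check"],   # compile-time checks catch hallucinated crates/methods
                    ["pytest", "-q"]):    # tests cover permissive languages and HTTP APIs
            result = subprocess.run(cmd, cwd=workdir, capture_output=True, text=True)
            if result.returncode != 0:
                errors.append(result.stdout + result.stderr)
        return errors

    # anything collected here gets fed back to the agent so it can repair its own output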

The real issues I'm struggling with are more subtle, like unnecessary code duplication, code that seems useful but is never called, doing the right work but in the wrong place, security issues, performance issues, not implementing the prompt correctly when it's not straightforward, implementing the prompt verbatim when a closer inspection of the libraries and technologies used reveals a much better way, etc. Mostly things you will catch in code review if you really pay attention. But whether that's faster than doing the task yourself greatly depends on the task at hand.

replies(2): >>45075560 #>>45084484 #
2. Disposal8433 ◴[] No.45075560[source]
> the obvious problems can mostly be solved by having the agent try to compile the program

The famous "It compiles on my machine." Is that where engineering is going? Spending $billions to get the same result as the laziest developer ever?

replies(1): >>45075949 #
3. wongarsu ◴[] No.45075949[source]
If it compiles on my machine, then the library and all called methods exist and are not hallucinated. If it runs on my machine, then the called external APIs exist and are not hallucinated.

That obviously does not mean that it's good software. That's why the rest of my comment exists. But "AI is hallucinating libraries/APIs" is something that can be trivially solved with good software practices from the 00s, and that the AI can resolve by itself using those techniques. It's annoying for autocomplete AI, but for agents it's a non-issue.
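
Concretely (a made-up Python example; the endpoint URL is invented for illustration), actually running the code turns "does this API even exist?" into an immediate pass/fail:

    import urllib.request
    import urllib.error

    def fetch_users() -> bytes:
        # if the agent invented this endpoint, executing the call surfaces that right away
        try:
            with urllib.request.urlopen("https://api.example.com/v2/users") as resp:
                return resp.read()
        except urllib.error.HTTPError as err:
            raise RuntimeError(f"endpoint rejected the call (HTTP {err.code}), possibly hallucinated") from err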

4. lsaferite ◴[] No.45084484[source]
The subtle bugs are horrible.

We used Claude Code the other day to add a new record type to an API and it was mostly right. CC decided (for some weird reason) to use a slightly different return shape on a list endpoint from the one used across the rest of the API. It changed two field names (count/items became total_count/data). This divergence was missed until the code was released because it 'worked' and had full tests and everything. But when the standard client lib code was used to access the API, it failed on the list endpoint. It didn't take long to discover the issue. Luckily, it was a new feature so nothing broke, but it was a very clear reminder that you have to be very thorough when reviewing coding agent PRs.
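
To illustrate the mismatch (hypothetical Python; only the count/items vs total_count/data field names come from the actual incident):

    # shape used everywhere else in the API, and what the shared client lib expects
    expected = {"count": 2, "items": [{"id": 1}, {"id": 2}]}

    # shape CC invented for the new list endpoint; its tests asserted against this, so they passed
    returned = {"total_count": 2, "data": [{"id": 1}, {"id": 2}]}

    def parse_list_response(body: dict) -> list[dict]:
        return body["items"]        # this is where the shared client lib code gives up

    parse_list_response(expected)   # fine
    parse_list_response(returned)   # KeyError: 'items', which is roughly how the failure showed up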

FWIW, I use CC frequently and have mostly positive things to say about it as a tool.

replies(1): >>45084563 #
5. ◴[] No.45084563[source]