
Agent Client Protocol (ACP)

(agentclientprotocol.com)
272 points by vinhnx
mg No.45074786
I'm fine with treating AI like a human developer:

I ask AI to write a feature (or fix a bug, or do a refactoring) and then I read the commit. If the commit is not to my liking, I "git reset --hard", improve my prompt and ask the AI to do the task again.

I call this "prompt coding":

https://www.gibney.org/prompt_coding

This way, there is no interaction between my coding environment and the AI at all, just as working with a human developer does not involve them doing anything in my editor.
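
A minimal sketch of that loop in shell ("agent" is a stand-in for whatever CLI you use; it's assumed to take a prompt and commit its own work, and the prompts here are made up):

    # One round of prompt coding: the agent commits, I review.
    agent "Add CSV export to the report page"
    git show                  # read the commit
    # Not to my liking? Discard it and retry with a sharper prompt.
    git reset --hard HEAD~1
    agent "Add CSV export to the report page, reusing the JSON exporter's streaming code"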

replies(2): >>45074878 #>>45076374 #
Disposal8433 No.45074878
> Nowadays, it is better to write prompts

Very big doubt. AI can help with a few very specific tasks, but hallucinations still happen, and making things up (especially APIs) is unacceptable.

replies(6): >>45074958 #>>45074999 #>>45075081 #>>45075111 #>>45079473 #>>45081297 #
salomonk_mur No.45075081
Hard disagree. LLMs are now incredibly good for any coding task (with popular languages).
replies(2): >>45075488 #>>45075893 #
quotemstr No.45075893
What's your explanation for why others report difficulty getting coding agents to produce their desired results?

And don't respond with a childish "skill issue lol" like it's Twitter. What specific skill do you think people are lacking?

replies(3): >>45078123 #>>45079566 #>>45082881 #
kevinmchugh No.45082881
In no particular order: LLMs seem, for some reason, to be worse at some languages than others.

LLMs only have so much context available, so it's harder to get good results in larger projects.

Some tools (e.g. a fast compiler) are very useful for giving agents good feedback. If you don't have a compiler, hallucinations get corrected more slowly.
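
A sketch of that feedback loop, again with a hypothetical "agent" CLI: run the compiler after each attempt and hand any errors straight back.

    # Compiler-in-the-loop: a hallucinated API fails to build,
    # and the error text becomes the next prompt.
    agent "Implement retry with exponential backoff in netutil.go"
    until go build ./... 2> build.log; do
        agent "The build failed. Fix these errors: $(cat build.log)"
    done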

Some people have schedules that facilitate long uninterrupted periods, so they see an agent work for twenty minutes on a task and think "well I could've done that in 10-30 minutes, so where's the gain?". And those people haven't understood that they could be running many agents in parallel (I don't blame people for not realizing this, no one I talk to is doing this at work).
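
One way to actually run agents in parallel without them stepping on each other is a git worktree per task (the "agent" CLI and the task files are hypothetical):

    # Each task gets its own branch and working copy, so the agents
    # can run concurrently against the same repository.
    for task in csv-export fix-login-bug refactor-config; do
        git worktree add "../wt-$task" -b "$task"
        ( cd "../wt-$task" && agent "Do the task described in docs/$task.md" ) &
    done
    wait    # review each branch when you're back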

People also don't realize they could have the agent working while they're asleep/eating lunch/in a meeting. This is why, in my experience, managers find agents more transformative than ICs do. We're in more meetings, with fewer uninterrupted periods.

People have an expectation that the agent will always one-shot the implementation, and don't appreciate it when the agent gets them 80% of the way there. Or they don't appreciate that it's basically free to try again if the agent goes completely off the rails.

A lot of people don't understand that agents are a step beyond just an LLM, so their attempts last year have colored their expectations.

Some people are less willing to work with the agent to make it better at producing good output, or simply don't know how. Your agent got logging wrong? Okay, tell it to read an example of good logging and to write a rule that will get it right next time.
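
As a concrete version of that last fix (the "agent" CLI is hypothetical, and the rules filename is an assumption; different tools read different files, e.g. AGENTS.md or CLAUDE.md):

    # Have the agent study a good example, then persist the lesson in a
    # rules file it rereads on every run.
    agent "Read src/billing/service.go as an example of correct logging,
    then append a rule to AGENTS.md describing how to choose log levels and fields"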