
Agent Client Protocol (ACP)

(agentclientprotocol.com)
270 points by vinhnx | 6 comments
mg ◴[] No.45074786[source]
I'm fine with treating AI like a human developer:

I ask AI to write a feature (or fix a bug, or do a refactoring) and then I read the commit. If the commit is not to my liking, I "git reset --hard", improve my prompt and ask the AI to do the task again.

I call this "prompt coding":

https://www.gibney.org/prompt_coding

This way, there is no interaction between my coding environment and the AI at all. Just like working with a human developer does not involve them doing anything in my editor.
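
Concretely, the loop can be as simple as this (a sketch: "ai" stands in for whichever agent or CLI you use, the feature is made up, and the reset target assumes the AI made exactly one commit):

    git checkout -b add-csv-export
    ai "Add CSV export to the report page"   # the AI edits files and commits
    git show HEAD                            # read the commit
    # Not to my liking? Discard it, improve the prompt, try again:
    git reset --hard HEAD~1
    ai "Add CSV export to the report page; reuse the existing ReportExporter instead of writing a new serializer"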

replies(2): >>45074878 #>>45076374 #
Disposal8433 ◴[] No.45074878[source]
> Nowadays, it is better to write prompts

Very big doubt. AI can help for a few very specific tasks, but the hallucinations still happen, and making things up (especially APIs) is unacceptable.

replies(6): >>45074958 #>>45074999 #>>45075081 #>>45075111 #>>45079473 #>>45081297 #
salomonk_mur ◴[] No.45075081[source]
Hard disagree. LLMs are now incredibly good for any coding task (with popular languages).
replies(2): >>45075488 #>>45075893 #
1. quotemstr ◴[] No.45075893{3}[source]
What's your explanation for why others report difficulty getting coding agents to produce their desired results?

And don't respond with a childish "skill issue lol" like it's Twitter. What specific skill do you think people are lacking?

replies(3): >>45078123 #>>45079566 #>>45082881 #
2. Eisenstein ◴[] No.45078123[source]
Thought experiment: you can ride a bike. You can see other people ride bikes. Some portion of people get on a bike and fall off, then claim that bikes are not useful for transportation. Specify what skill they are lacking without saying 'ability to ride a bike'.
replies(1): >>45078257 #
3. quotemstr ◴[] No.45078257[source]
For a bike? Balance, fine motor control, proprioception, or even motivation. You can always break it down.
replies(1): >>45079852 #
4. 80hd ◴[] No.45079566[source]
Not OP, but my two cents: probably laziness and a propensity for daydreaming.

I have extreme intolerance to boredom. I can't do the same job twice. Some people don't care.

This pain has caused me to become incredibly effective with LLMs because I'm always looking for an easier way to do anything.

If you keep hammering away at a problem - i.e. how to code with LLMs - you tend to become dramatically better at it than people who don't.

5. Eisenstein ◴[] No.45079852{3}[source]
Knowing those things won't help them acquire the skill. What will help them ride a bike is practicing riding a bike until they can do it.
6. kevinmchugh ◴[] No.45082881[source]
In no particular order: LLMs seem, for some reason, to be worse at some languages than others.

LLMs only have so much context available, so larger projects are harder to get good results in.

Some tools (eg a fast compiler) are very useful to agents to get good feedback. If you don't have a compiler, you'll get hallucinations corrected more slowly.

Some people have schedules that facilitate long uninterrupted periods, so they see an agent work for twenty minutes on a task and think "well I could've done that in 10-30 minutes, so where's the gain?". And those people haven't understood that they could be running many agents in parallel (I don't blame people for not realizing this, no one I talk to is doing this at work).

People also don't realize they could have the agent working while they're asleep/eating lunch/in a meeting. This is why, in my experience, managers find agents more transformative than ICs do. We're in more meetings, with fewer uninterrupted periods.

People have an expectation that the agent will always one-shot the implementation, and don't appreciate it when the agent gets them 80% of the way there. Or that it's basically free to try again if the agent went completely off the rails.

A lot of people don't understand that agents are a step beyond just an LLM, so their attempts last year have colored their expectations.

Some people are less willing to attempt to work with the agent to make it better at producing good output. They don't know how to do it. Your agent got logging wrong? Okay, tell it to read an example of good logging and to write a rule that will get it correct.
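
For example (a sketch, with made-up paths): an entry like the one below, added to whatever rules file your agent reads (AGENTS.md and CLAUDE.md are common conventions), is often enough to stop it repeating the mistake.

    ## Logging
    - Use the project logger (see src/logging_example.py for the expected pattern).
    - Never use print() for diagnostics; log at INFO for user-visible events and DEBUG for internals.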