
Claude Code now supports hooks

(docs.anthropic.com)
381 points by ramoz | 3 comments
b0a04gl No.44431774
> Before this you had to trust that Claude would follow your README instructions about running linters or tests, which was hit and miss at best. Now it's deterministic: a pre hook blocks bad actions, a post hook validates results.

> Hooks let you build workflows where multiple agents can hand off work safely: one agent writes code, another reviews it, another deploys it, with each step gated by verification hooks.
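
A minimal sketch of what such a pre hook can look like, assuming Python, the stdin JSON payload (tool_name, tool_input), and the exit-code-2 blocking convention described in the hooks docs; the denylist patterns here are purely illustrative. A script like this would be registered under the PreToolUse event in .claude/settings.json with a matcher for the Bash tool:

    #!/usr/bin/env python3
    """PreToolUse hook sketch: block risky shell commands before they run.

    Claude Code pipes the pending tool call to the hook as JSON on stdin;
    exiting with code 2 blocks the call and sends stderr back to the model.
    """
    import json
    import re
    import sys

    event = json.load(sys.stdin)
    if event.get("tool_name") == "Bash":
        command = event.get("tool_input", {}).get("command", "")
        # Illustrative denylist; a real policy would be project-specific.
        for pattern in (r"\brm\s+-rf\b", r"\bgit\s+push\s+--force\b"):
            if re.search(pattern, command):
                print(f"Blocked by pre hook: matches {pattern}", file=sys.stderr)
                sys.exit(2)  # 2 = block this tool call
    sys.exit(0)  # 0 = allow it to proceed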

replies(2): >>44432628, >>44433004
icoder No.44432628
This nicely describes where we're at with LLMs as I see it: they are 'fancy' enough to write code, yet at the same time they can't be trusted to do things that can be solved with a simple hook.

I feel that improvement currently comes mostly from slapping what feels to me like workarounds on top of something that may very well be a local maximum.

replies(4): >>44432685, >>44432867, >>44433289, >>44433819
oefrha No.44433289
> they are 'fancy' enough to write code, yet at the same time they can't be trusted to do things that can be solved with a simple hook.

Humans are fancy enough to write code, yet at the same time they can't be trusted to do things that can be solved with a simple hook, like running a formatter or linter. That's why we still run those in CI. So this is a meaningless statement.
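
(To make the comparison concrete: a minimal sketch of such a CI gate in Python, with ruff standing in for whatever formatter and linter the project actually runs.)

    #!/usr/bin/env python3
    """CI gate sketch: fail the build when format or lint checks fail.

    Assumes ruff is installed; the tool choice is illustrative.
    """
    import subprocess
    import sys

    checks = [
        ["ruff", "format", "--check", "."],  # report files that need formatting
        ["ruff", "check", "."],              # run the lint rules
    ]
    # Run every check so the log shows all failures, then gate on the worst.
    sys.exit(max(subprocess.run(cmd).returncode for cmd in checks))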

replies(1): >>44433527
RobertDeNiro No.44433527
One is a machine; the other is not. People have to stop comparing LLMs to humans. Would you hold a car to human standards?
replies(2): >>44433781, >>44433797
oefrha No.44433781
The machine just needs to be coded to run stuff (as shown in this very post). My coworkers can't be coded to follow procedures, and they still submit PRs that fail basic checks, sadly.
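
A minimal sketch of that idea as a post hook, assuming the same stdin contract as the pre hook sketch above; file_path is the documented input field for the edit tools, and ruff again stands in for any project formatter:

    #!/usr/bin/env python3
    """PostToolUse hook sketch: format any Python file Claude just edited."""
    import json
    import subprocess
    import sys

    event = json.load(sys.stdin)
    if event.get("tool_name") in ("Write", "Edit", "MultiEdit"):
        path = event.get("tool_input", {}).get("file_path", "")
        if path.endswith(".py"):
            result = subprocess.run(["ruff", "format", path])
            if result.returncode != 0:
                print(f"Formatter failed on {path}", file=sys.stderr)
                sys.exit(2)  # 2 = surface the failure to the model
    sys.exit(0)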
zhivota No.44433797
A self-driving car, yes.