
    Claude Code now supports hooks

    (docs.anthropic.com)
381 points ramoz | 14 comments
    1. b0a04gl ◴[] No.44431774[source]
>before this you had to trust that Claude would follow your readme instructions about running linters or tests. Hit and miss at best. Now it's deterministic: the pre hook blocks bad actions, the post hook validates results.

    >hooks let you build workflows where multiple agents can hand off work safely. one agent writes code another reviews it another deploys it. each step gated by verification hooks.

    replies(2): >>44432628 #>>44433004 #
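The gating b0a04gl describes maps onto Claude Code's hook events. A minimal sketch of a settings-file hook config, assuming the PreToolUse/PostToolUse event names and command-hook schema described in the linked docs; the matchers, script path, and lint command are illustrative, not from the docs:

```json
{
  "hooks": {
    "PreToolUse": [
      {
        "matcher": "Bash",
        "hooks": [
          { "type": "command", "command": "./scripts/guard.sh" }
        ]
      }
    ],
    "PostToolUse": [
      {
        "matcher": "Edit|Write",
        "hooks": [
          { "type": "command", "command": "npm run lint" }
        ]
      }
    ]
  }
}
```

The deterministic part is that these commands run unconditionally on matching tool calls, rather than relying on the model to remember a readme instruction.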
    2. icoder ◴[] No.44432628[source]
This nicely describes where we're at with LLMs as I see it: they are 'fancy' enough to be able to write code, yet at the same time they can't be trusted to do stuff which can be solved with a simple hook.

    I feel that currently improvement mostly comes from slapping what to me feels like workarounds on top of something that very well may be a local maximum.

    replies(4): >>44432685 #>>44432867 #>>44433289 #>>44433819 #
    3. Marazan ◴[] No.44432685[source]
Someone described LLMs in the coding space as stone soup. So much stuff is being created around them to make them work better that at some point it feels like you'll be able to remove the LLM from the equation.
    replies(1): >>44432761 #
    4. samrus ◴[] No.44432761{3}[source]
We can't deny the LLM has utility. You can't eat the stone, but the LLM can implement design patterns, for example.

I think this insistence on near-autonomous agents is setting the bar too high, which wouldn't be an issue if these companies weren't then insisting that the bar is set just right.

These things understand language extremely well; they've effectively solved NLP because that's what they model. But agentic behavior is modeled by reinforcement learning, and until that's in the foundation model itself (at the token-prediction level) these things have no real understanding of state spaces being a recursive function of action spaces and such. And they can't autonomously code or drive or manage a fund until they do.

    5. ramoz ◴[] No.44432867[source]
    Claude Code is an agent, not an LLM. Literally this is software that was released 4mo ago. lol.

A year ago, no provider was training LLMs in an environment modeled for agentic behavior, i.e. in conjunction with the software design of an integrated utility.

'Slapped-on workaround' is a very lazy way to describe this innovation.

    replies(1): >>44433109 #
    6. gwbas1c ◴[] No.44433004[source]
I wonder how hard it would be to create an alternate user account and have Claude run as that user instead.
    replies(1): >>44433952 #
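On Unix, the isolation gwbas1c suggests is mostly standard user management. A rough sketch, assuming a Linux box; the account name and project path are hypothetical:

```shell
# Hypothetical setup: confine the agent with an ordinary unprivileged account.
sudo useradd --create-home claude-agent              # dedicated account (hypothetical name)
sudo chown -R claude-agent:claude-agent /srv/project # make only this tree writable by it
sudo -u claude-agent -i                              # open a login shell as that user...
claude                                               # ...and launch Claude Code inside it
```

Everything outside /srv/project is then protected by ordinary file permissions rather than by trusting the agent to behave.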
    7. koakuma-chan ◴[] No.44433109{3}[source]
    > Literally this is software that was released 4mo ago.

    Feels like ages

    replies(1): >>44433823 #
    8. oefrha ◴[] No.44433289[source]
    > they are 'fancy' enough to be able to write code yet at the same time they can't be trusted to do stuff which can be solved with a simple hook.

Humans are fancy enough to be able to write code, yet at the same time they can't be trusted to do stuff which can be solved with a simple hook, like a formatter or linter. That's why we still run those on CI. This is a meaningless statement.

    replies(1): >>44433527 #
    9. RobertDeNiro ◴[] No.44433527{3}[source]
    One is a machine the other one is not. People have to stop comparing LLMs to humans. Would you hold a car to human standards?
    replies(2): >>44433781 #>>44433797 #
    10. oefrha ◴[] No.44433781{4}[source]
    The machine just needs to be coded to run stuff (as shown in this very post). My coworkers can’t be coded to follow procedures and still submit PRs failing basic checks, sadly.
    11. zhivota ◴[] No.44433797{4}[source]
    A self driving car, yes.
    12. iagooar ◴[] No.44433819[source]
Humans use tools, and so does AI. Does it make us any less valuable as humans that we use bicycles and hammers? Why would it be bad for an AI to use tools?
    13. ryandvm ◴[] No.44433823{4}[source]
    That's what a singularity feels like.
    14. symbolicAGI ◴[] No.44433952[source]
    A NYC bank has given its coding agents email addresses.