>hooks let you build workflows where multiple agents can hand off work safely: one agent writes code, another reviews it, another deploys it, each step gated by verification hooks.
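For concreteness, one of those verification gates can be a script about this small. A minimal sketch, assuming a pytest suite and a block-on-nonzero-exit convention (real hook protocols vary by tool):

```python
#!/usr/bin/env python3
# Hypothetical verification hook: run the test suite after an agent's step
# and exit nonzero to block the handoff if anything fails. "pytest" and the
# exit-code contract are assumptions for illustration, not any tool's API.
import subprocess
import sys

result = subprocess.run(["pytest", "-q"], capture_output=True, text=True)
if result.returncode != 0:
    # A nonzero exit tells the orchestrator to stop the pipeline and
    # surface the failure instead of handing off to the next agent.
    sys.stderr.write(result.stdout)
    sys.exit(1)
```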
I feel that, currently, improvement mostly comes from slapping what feels to me like workarounds on top of something that may very well be a local maximum.
I think this insistence on near-autonomous agents is setting the bar too high, which wouldn't be an issue if these companies weren't then insisting that the bar is set just right.
These things understand language perfectly; they've solved NLP because that's what they model extremely well. But agentic behavior is modelled by reinforcement learning, and until that's in the foundation model itself (at the token-prediction level), these things have no real understanding of the state space being a recursive function of the action space. And they can't autonomously code, or drive, or manage a fund until they do.
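To unpack the "state space as a recursive function of the action space" point: in an MDP the next state is s_{t+1} = f(s_t, a_t), so every action feeds back into the situation the next decision is made from, and errors compound over a trajectory. A toy sketch (the transition function here is made up purely for illustration):

```python
# Toy illustration of the recursion s_{t+1} = f(s_t, a_t): each action
# feeds back into the state the next decision is made from, which is
# the structure plain next-token prediction does not optimize for.
def step(state: int, action: int) -> int:
    # Made-up transition function for illustration only.
    return state + action

state = 0
for action in [1, -2, 3]:          # a fixed action sequence
    state = step(state, action)    # the new state depends on every prior action
print(state)                       # -> 2
```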
1y ago, no provider was training LLMs in an environment modeled for agentic behavior, i.e. in conjunction with the software design of an integrated utility.
"Slapped-on workaround" is a very lazy way to describe this innovation.
Feels like ages
Humans are fancy enough to write code, yet at the same time they can't be trusted to do things that can be solved with a simple hook, like a formatter or a linter; that's why we still run those on CI. So this is a meaningless statement.
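For what it's worth, the kind of hook in question really is this small. A sketch of a pre-commit-style gate, where black and ruff are stand-ins for whatever formatter/linter your CI actually runs:

```python
#!/usr/bin/env python3
# Sketch of a pre-commit-style hook: refuse the commit unless the formatter
# and linter pass. "black" and "ruff" are placeholder choices; the point is
# that the same checks CI runs get enforced locally first.
import subprocess
import sys

for cmd in (["black", "--check", "."], ["ruff", "check", "."]):
    if subprocess.run(cmd).returncode != 0:
        print(f"hook blocked commit: {' '.join(cmd)} failed", file=sys.stderr)
        sys.exit(1)
```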