
Building Effective "Agents"

(www.anthropic.com)
597 points by jascha_eng | 3 comments
1. adeptima No.42479285
The whole Agent thing can easily blow up in complexity.

Here are some challenges I personally faced recently:

- Durable Execution Paradigm: You may need the system to operate in a "durable execution" fashion, like Temporal, Hatchet, Inngest, and Windmill. Your processes need to run for months, survive upgrades, and restart cleanly. Links below

- FSM vs. DAG: Sometimes a Finite State Machine (FSM) is more appropriate than a Directed Acyclic Graph (DAG) for my use cases. FSMs support cyclic behavior, allowing repeated states or loops (e.g., in marketing sequences). An FSM done right is hard. If you need an FSM, you can't use most tools without "magic" hacks

- Observability and Tracing - it takes time to get everything wired up nicely in Grafana (Alloy, Tempo, Loki, Prometheus) or whatever you prefer. Switching attention between multiple systems is not an option, given limited attention span and skill gaps. Most "out of the box" functionality in new agent frameworks quickly becomes a liability

- Token/Inference Economy - tracking token consumption and identifying edge cases with poor token management is a challenge, similar to Ethereum's gas consumption issues. Building a billing system based on actual consumption on top of Stripe was a challenge. This is even 10x harder ... at least for me ;)

- Context Switching - managing context switching is akin to handling concurrency and scheduling with async/await paradigms, which can become complex. Simple prompts are OK, but once you start juggling documents, screenshots, or screen reading, it's another game.

What I like about all of the above: it's nothing new - the design patterns and architectures have been known for a while.

It's just hard to see it through the AI/ML buzzword storm ... but once you start looking at source code ... the fog clears.

Durable Execution / Workflow Engines

- Temporal https://github.com/temporalio - https://hn.algolia.com/?dateRange=all&page=0&prefix=false&qu...

- Hatchet https://news.ycombinator.com/item?id=39643136

- Inngest https://news.ycombinator.com/item?id=36403014

- Windmill https://news.ycombinator.com/item?id=35920082

Any comments and links on the above challenges and solutions are greatly appreciated!

replies(1): >>42480891 #
2. ldjkfkdsjnv No.42480891
Which do you think is the best workflow engine to use here? I've chosen Temporal. The engineering management and their background at AWS mean the platform is rock solid.
replies(1): >>42482037 #
3. adeptima No.42482037
IMHO Temporal and its team are great - it checks all the boxes on abstracting away queues, schedulers, and distributed state machines, plus the load balancers/gateways your workflows need.

After following the discussions and commits of Hatchet, Inngest, and Windmill, I have a feeling that in a few years' time all these systems will have 95% overlap in core features. They all influence each other.

The much bigger question is what price you will pay by introducing a workflow system like Temporal into your code base.

Temporal and co are not for real-time data pubsub.

If latency is an issue or you want to keep a small memory footprint, it's better to use something else.

The max payload is 2 MB, and it needs to be serializable. Event History has its own limitations. And it's Postgres write-heavy.

Getting the entire team on the same page isn't trivial either. If your team has strong Golang developers, like mine does, they might oppose it and argue that Temporal is an unnecessary abstraction.

Writing your own code is fun. Studying and reusing someone else's patterns is not so much. Check https://github.com/temporalio/samples-go

For now, I've decided to keep prototyping with Temporal and keep it running on my personal projects until I build strong use cases and discover all the edge cases.

A great side effect of exploring Temporal and its competitors is that you'll see better ways of structuring your code, especially around distributed state and decoupling execution.