
Building Effective "Agents"

(www.anthropic.com)
763 points | jascha_eng | 25 comments
simonw ◴[] No.42475700[source]
This is by far the most practical piece of writing I've seen on the subject of "agents" - it includes actionable definitions, then splits most of the value out into "workflows" and describes those in depth with example applications.

There's also a cookbook with useful code examples: https://github.com/anthropics/anthropic-cookbook/tree/main/p...

Blogged about this here: https://simonwillison.net/2024/Dec/20/building-effective-age...

replies(6): >>42475903 #>>42476486 #>>42477016 #>>42478039 #>>42478786 #>>42479343 #
1. Animats ◴[] No.42478039[source]
Yes, they have actionable definitions, but they are defining something quite different than the normal definition of an "agent". An agent is a party who acts for another. Often this comes from an employer-employee relationship.

This matters mostly when things go wrong. Who's responsible? The airline whose AI agent gave out wrong info about airline policies found, in court, that their "intelligent agent" was considered an agent in legal terms. Which meant the airline was stuck paying for their mistake.

Anthropic's definition: Some customers define agents as fully autonomous systems that operate independently over extended periods, using various tools to accomplish complex tasks.

That's an autonomous system, not an agent. Autonomy is about how much something can do without outside help. Agency is about who's doing what for whom, and for whose benefit and with what authority. Those are independent concepts.

replies(5): >>42478093 #>>42478201 #>>42479305 #>>42480149 #>>42481749 #
2. solidasparagus ◴[] No.42478093[source]
That's only one of many definitions for the word agent outside of the context of AI. Another is something that produces effects on the world. Another is something that has agency.

Sort of interesting that we've coalesced on this term that has many definitions, sometimes conflicting, but where many of the definitions vaguely fit into what an "AI Agent" could be for a given person.

But in the context of AI, Agent as Anthropic defines it is an appropriate word because it is a thing that has agency.

replies(1): >>42478308 #
3. simonw ◴[] No.42478201[source]
Where did you get the idea that your definition there is the "normal" definition of agent, especially in the context of AI?

I ask because you seem very confident in it - and my biggest frustration about the term "agent" is that so many people are confident that their personal definition is clearly the one everyone else should be using.

replies(3): >>42478826 #>>42478885 #>>42486107 #
4. Animats ◴[] No.42478308[source]
> But in the context of AI, Agent as Anthropic defines it is an appropriate word because it is a thing that has agency.

That seems circular.

replies(1): >>42478992 #
5. PhilippGille ◴[] No.42478826[source]
Didn't he mention it was the court's definition?

But I'm not sure that's true. The court didn't define anything; on the contrary, it only said (in simplified terms) that the chatbot was part of the website and that it's reasonable to expect the information on the website to be accurate.

The closest I could find to the chatbot being considered an agent in legal terms (an entity like an employee) is this:

> Air Canada argues it cannot be held liable for information provided by one of its agents, servants, or representatives – including a chatbot.

Source: https://www.canlii.org/en/bc/bccrt/doc/2024/2024bccrt149/202...

6. JonChesterfield ◴[] No.42478885[source]
Defining "agent" as "thing with agency" seems legitimate to me, what with them being the same word.
replies(1): >>42479373 #
7. Nevermark ◴[] No.42478992{3}[source]
It would only be circular if agency was only defined as “the property of being an agent”. That circle of reasoning isn’t being proposed as the formal definition by anyone.

Perhaps you mean tautological. In which case, an agent having agency would be an informal tautology. A relationship so basic to the subject matter that it essentially must be true. Which would be the strongest possible type of argument.

8. pvg ◴[] No.42479305[source]
AI people have been using a much broader definition of 'agent' for ages, though. One from Russell and Norvig's 90s textbook:

"Anything that can be viewed as perceiving its environment through sensors and acting upon that environment through actuators"

https://en.wikipedia.org/wiki/Intelligent_agent#As_a_definit...
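Under that textbook definition, an agent is just a mapping from percepts to actions; even a thermostat qualifies. A toy sketch of that reading (the function and names here are illustrative, not from the book):

```python
# Russell & Norvig's "anything that perceives and acts" definition,
# reduced to its skeleton: a percept goes in, an action comes out.
def thermostat_agent(percept: float) -> str:
    """percept: current temperature in Celsius -> action string."""
    if percept < 18:
        return "heat_on"
    if percept > 24:
        return "heat_off"
    return "idle"
```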

replies(1): >>42483983 #
9. simonw ◴[] No.42479373{3}[source]
That logic doesn't work for me, because many words have multiple meanings. "Agency" can also be a noun that means an organization that you hire - like a design agency. Or it can mean the CIA.

I'm not saying it's not a valid definition of the term, I'm pushing back on the idea that it's THE single correct definition of the term.

replies(2): >>42481146 #>>42481493 #
10. jeffreygoesto ◴[] No.42480149[source]
And "autonomous" is "having one's own laws".

https://www.etymonline.com/word/autonomous

11. Nekit1234007 ◴[] No.42481146{4}[source]
May I push back on the idea that a single word may mean (completely) different things?
replies(5): >>42481317 #>>42481479 #>>42481545 #>>42482230 #>>42485302 #
12. chrisweekly ◴[] No.42481479{5}[source]
What's the single, unambiguous definition of the word "cleave"?
13. Der_Einzige ◴[] No.42481493{4}[source]
Anything involving real agents likely does get your local spymaster interested. I assume all good AI work attracts the three letter types to make sure that the researcher isn’t trying to make AI that can make bioweapons…
14. ToValueFunfetti ◴[] No.42481545{5}[source]
Aloha! Indeed, the language is being cleaved by such oversights. You can be in charge of overlooking this issue, effective ahead of two weeks from now. We'll peruse your results and impassionately sanction anything you call out (at least when it's unravelable). This endeavor should prove invaluable. Aloha!
15. jcims ◴[] No.42481749[source]
>Anthropic's definition: Some customers define agents as fully autonomous systems that operate independently over extended periods, using various tools to accomplish complex tasks.

But that's not their definition, and they explicitly describe that definition as an 'autonomous system'. Their definition comes in the next paragraph:

"At Anthropic, we categorize all these variations as agentic systems, but draw an important architectural distinction between workflows and agents:

* Workflows are systems where LLMs and tools are orchestrated through predefined code paths.

* Agents, on the other hand, are systems where LLMs dynamically direct their own processes and tool usage, maintaining control over how they accomplish tasks."
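The architectural distinction reads concretely in code. A minimal sketch (using a stand-in `fake_llm` function, since the real model call is beside the point): in a workflow the code fixes the sequence of steps, while in an agent loop the model's own output decides which tool runs next.

```python
# Stand-in for a real LLM call; returns canned "decisions" so the
# sketch is self-contained and deterministic.
def fake_llm(prompt: str) -> str:
    if "search:" not in prompt:
        return "CALL search"   # the model asks for a tool
    return "DONE: answer"      # the model decides it is finished

def search_tool(query: str) -> str:
    return f"results for {query}"

# Workflow: the code path is predefined; the LLM only fills in steps.
def workflow(query: str) -> str:
    results = search_tool(query)            # step 1, always runs
    return fake_llm(f"search: {results}")   # step 2, always runs

# Agent: the LLM's output dictates which tools run, and when to stop.
def agent(query: str, max_steps: int = 5) -> str:
    context = query
    for _ in range(max_steps):              # loop until the model stops
        decision = fake_llm(context)
        if decision.startswith("DONE"):
            return decision
        context += f" | search: {search_tool(query)}"  # model requested the tool
    return "gave up"
```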

16. simonw ◴[] No.42482230{5}[source]
It's pretty clearly true.

Bank: financial institution, edge of a river, verb to stash something away

Spring: a season, a metal coil, verb to jump

Match: verb to match things together, noun a thing to start fires, noun a competition between two teams

Bat: flying mammal, stick for hitting things

And so on.

17. minasmorath ◴[] No.42483983[source]
That definition feels like it's playing on the verb, the idea of having "agency" in the world, and not on the noun, of being an "agent" for another party. The former is a philosophical category, while the latter has legal meaning and implication, and it feels somewhat disingenuous to continue to mix them up in this way.
replies(2): >>42484022 #>>42484981 #
18. AnimalMuppet ◴[] No.42484022{3}[source]
Interesting. The best agents don't have agency, or at least don't use it.

You can think of this in video game terms: Players have agency. NPCs are "agents", but don't have agency. But they're still not just objects in the game - they can move themselves and react to their environment.

replies(1): >>42486096 #
19. pvg ◴[] No.42484981{3}[source]
In what way is it 'disingenuous'? You think Norvig is trying to deceive us about something? I'm not saying you have to agree with or like this definition but even if you think it's straight up wrong, 'disingenuous' feels utterly out of nowhere.
replies(1): >>42486056 #
20. rcxdude ◴[] No.42485302{5}[source]
You're pushing up against the English language, then. 'let' has 46 entries in the dictionary (more if you consider obsolete usages).
21. minasmorath ◴[] No.42486056{4}[source]
It's disingenuous in that it takes a word with a common understanding ("agent") and then conveniently redefines or re-etymologizes the word in an uncommon way that leads people to implicitly believe something about the product that isn't true.

Another great example of this trick is "essential" oils. We all know what the word "essential" means, but the companies selling the stuff use the word in the most uncommon way, to indicate that the "essence" of something is in the oil, and then let the human brain fill in the gap and thus believe something that isn't true. It's technically legal, but we have to agree that's not moral or ethical, right?

Maybe I'm wildly off base here, I have admittedly been wrong about a lot in my life up to this point. I just think the backlash that crops up when people realize what's going on (for example, the airline realizing that their chat bot does not in fact operate under the same rules as a human "agent," and that it's still a technology product) should lead companies to change their messaging and marketing, and the fact that they're just doubling down on the same misleading messaging over and over makes the whole charade feel disingenuous to me.

replies(1): >>42486702 #
22. minasmorath ◴[] No.42486096{4}[source]
That's actually a great example of what I'm saying, because I don't think the NPCs are agents at all in the traditional sense of "One that acts or has the power or authority to act on behalf of another." Where would the NPC derive its power and authority from? There is a human somewhere in the chain giving it 100% of its parameters, and that human is ultimately 100% responsible for the configuration of the NPC, which is why we don't blame the NPC in the game for behaving in a buggy way, we blame the devs. To say the NPC has agency puts some level of metaphysical responsibility about decision making and culpability on the thing that it doesn't have.

An AI "agent" is the same way, it is not culpable for its actions, the humans who set it up are, but we're leading people to believe that if the AI goes off script then the AI is somehow responsible for its own actions, which is simply not true. These are not autonomous beings, they're technology products.

23. minasmorath ◴[] No.42486107[source]
I searched for the definition of "agent" and none of the results map to the way AI folks are using the word. It's really that simple, because we're marketing this stuff to non-tech people who already use words to mean things.

If we're redefining common words to market this stuff to non-tech people, and then we're conveniently not telling them that we redefined words, and thus allowing them to believe implicit falsehoods about the product that have serious consequences, we're being disingenuous.

24. pvg ◴[] No.42486702{5}[source]
with a common understanding ("agent") and then conveniently redefines or re-etomologizes the word in an uncommon way that leads people to implicitly believe something about the product that isn't true.

What is the 'product' here? It's a university textbook. Like, where is the parallel between https://en.wikipedia.org/wiki/Intelligent_agent and 'essential oils'.

replies(1): >>42486840 #
25. minasmorath ◴[] No.42486840{6}[source]
Oh, I have no issue with his textbook definition, I'm saying that it's now being used to sell products by people who know their normal consumer base isn't using the same definition and it conveniently misleads them into believing things about the product that aren't true.

Knowing that your target market (non-tech folks) isn't using the same language as you, but persisting with that language because it creates convenient sales opportunities due to the misunderstandings, feels disingenuous to me.

An "agent" in common terms is just someone acting on behalf of another, but that someone still has autonomy and moral responsibility for their actions. Like for example the airline customer service representative situation. AI agents, when we pull back the curtains, get down to brass tacks, whatever turn of phrase you want to use, are still ultimately deterministic models. They have a lot more parameters, and their determinism is offset by many factors of pseudo-randomness, but given sufficient information we could still predict every single output. That system cannot be an agent in the common sense of the word, because humans are still dictating all of the possible actions and outcomes, and the machine doesn't actually have the autonomy required.

If you fail to keep your tech product from going off-script, you're responsible, because the model itself isn't a non-deterministic causal actor. A human CSR on the other hand is considered by law to have the power and responsibility associated with being a causal actor in the world, and so when they make up wild stuff about the terms of the agreement, you don't have to honor it for the customer, because there's culpability.

I'm drifting into philosophy at this point, which never goes well on HN, but this is ultimately how our legal system determines responsibility for actions, and AI doesn't meet those qualifications. If we ever want it to be culpable for its own actions, we'll have to change the legal framework we all operate under.

Edit: Causal, not casual... Whoops.

Also, I think I'm confusing the situation a bit by mixing the legal distinctions between agency and autonomy with the common understanding of being an "agent" and the philosophical concept of agency and culpability and how that relates to the US legal foundations.

I need to go touch grass.