Building Effective "Agents"

(www.anthropic.com)
628 points jascha_eng | 3 comments
simonw ◴[] No.42475700[source]
This is by far the most practical piece of writing I've seen on the subject of "agents" - it includes actionable definitions, then splits most of the value out into "workflows" and describes those in depth with example applications.

There's also a cookbook with useful code examples: https://github.com/anthropics/anthropic-cookbook/tree/main/p...

Blogged about this here: https://simonwillison.net/2024/Dec/20/building-effective-age...

replies(6): >>42475903 #>>42476486 #>>42477016 #>>42478039 #>>42478786 #>>42479343 #
Animats ◴[] No.42478039[source]
Yes, they have actionable definitions, but they are defining something quite different than the normal definition of an "agent". An agent is a party who acts for another. Often this comes from an employer-employee relationship.

This matters mostly when things go wrong. Who's responsible? The airline whose AI agent gave out wrong info about airline policies found, in court, that their "intelligent agent" was considered an agent in legal terms. Which meant the airline was stuck paying for their mistake.

Anthropic's definition: Some customers define agents as fully autonomous systems that operate independently over extended periods, using various tools to accomplish complex tasks.

That's an autonomous system, not an agent. Autonomy is about how much something can do without outside help. Agency is about who's doing what for whom, and for whose benefit and with what authority. Those are independent concepts.

replies(5): >>42478093 #>>42478201 #>>42479305 #>>42480149 #>>42481749 #
pvg ◴[] No.42479305[source]
AI people have been using a much broader definition of 'agent' for ages, though. One from Russell and Norvig's 90s textbook:

"Anything that can be viewed as perceiving its environment through sensors and acting upon that environment through actuators"

https://en.wikipedia.org/wiki/Intelligent_agent#As_a_definit...
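For concreteness, that textbook definition reduces to a perceive-act loop. The `Thermostat` below is a hypothetical illustration of the minimal case (sensor = temperature reading, actuator = heater switch), not an example taken from Russell and Norvig:

```python
class Thermostat:
    """A trivially simple 'intelligent agent' in the textbook sense:
    it perceives its environment through a sensor and acts on that
    environment through an actuator."""

    def __init__(self, setpoint: float):
        self.setpoint = setpoint
        self.heater_on = False

    def perceive(self, environment: dict) -> float:
        # "Sensor": read the current temperature from the environment.
        return environment["temperature"]

    def act(self, temperature: float) -> str:
        # "Actuator": switch the heater based on the percept.
        self.heater_on = temperature < self.setpoint
        return "heat" if self.heater_on else "idle"


agent = Thermostat(setpoint=20.0)
action = agent.act(agent.perceive({"temperature": 18.5}))
```

Note there is nothing here about acting *on behalf of* anyone, which is exactly the gap the parent comment is pointing at: the definition is about the perceive-act coupling, not about principal-agent relationships.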

replies(1): >>42483983 #
minasmorath ◴[] No.42483983[source]
That definition feels like it's playing on the verb, the idea of having "agency" in the world, and not on the noun, of being an "agent" for another party. The former is a philosophical category, while the latter has legal meaning and implication, and it feels somewhat disingenuous to continue to mix them up in this way.
replies(2): >>42484022 #>>42484981 #
pvg ◴[] No.42484981[source]
In what way is it 'disingenuous'? You think Norvig is trying to deceive us about something? I'm not saying you have to agree with or like this definition but even if you think it's straight up wrong, 'disingenuous' feels utterly out of nowhere.
replies(1): >>42486056 #
minasmorath ◴[] No.42486056[source]
It's disingenuous in that it takes a word with a common understanding ("agent") and then conveniently redefines or re-etymologizes the word in an uncommon way that leads people to implicitly believe something about the product that isn't true.

Another great example of this trick is "essential" oils. We all know what the word "essential" means, but the companies selling the stuff use the word in the most uncommon way, to indicate the "essence" of something is in the oil, and then let the human brain fill in the gap and thus believe something that isn't true. It's technically legal, but we have to agree that's not moral or ethical, right?

Maybe I'm wildly off base here, I have admittedly been wrong about a lot in my life up to this point. I just think the backlash that crops up when people realize what's going on (for example, the airline realizing that their chat bot does not in fact operate under the same rules as a human "agent," and that it's still a technology product) should lead companies to change their messaging and marketing, and the fact that they're just doubling down on the same misleading messaging over and over makes the whole charade feel disingenuous to me.

replies(1): >>42486702 #
pvg ◴[] No.42486702[source]
> with a common understanding ("agent") and then conveniently redefines or re-etymologizes the word in an uncommon way that leads people to implicitly believe something about the product that isn't true.

What is the 'product' here? It's a university textbook. Like, where is the parallel between https://en.wikipedia.org/wiki/Intelligent_agent and 'essential oils'?

replies(1): >>42486840 #
minasmorath ◴[] No.42486840[source]
Oh, I have no issue with his textbook definition, I'm saying that it's now being used to sell products by people who know their normal consumer base isn't using the same definition and it conveniently misleads them into believing things about the product that aren't true.

Knowing that your target market (non-tech folks) isn't using the same language as you, but persisting with that language because it creates convenient sales opportunities due to the misunderstandings, feels disingenuous to me.

An "agent" in common terms is just someone acting on behalf of another, but that someone still has autonomy and moral responsibility for their actions. Take, for example, the airline customer service representative situation. AI agents, when we pull back the curtains, get down to brass tacks, whatever turn of phrase you want to use, are still ultimately deterministic models. They have a lot more parameters, and their determinism is obscured by many layers of pseudo-randomness, but given sufficient information we could still predict every single output. That system cannot be an agent in the common sense of the word, because humans are still dictating all of the possible actions and outcomes, and the machine doesn't actually have the autonomy required.
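A small sketch of that determinism point: "pseudo-random" sampling is fully reproducible once the seed and inputs are fixed, so in principle every output is predictable. The toy softmax sampler and logits below are hypothetical, not how any particular model is implemented:

```python
import math
import random

def sample_token(logits: dict[str, float], seed: int) -> str:
    """Sample one token from softmax(logits) using a seeded RNG."""
    rng = random.Random(seed)  # fixed seed => deterministic draw
    total = sum(math.exp(v) for v in logits.values())
    probs = {tok: math.exp(v) / total for tok, v in logits.items()}
    r = rng.random()
    cumulative = 0.0
    for tok, p in probs.items():
        cumulative += p
        if r <= cumulative:
            return tok
    return tok  # guard against float rounding at the tail

# Hypothetical logits for illustration only.
logits = {"yes": 2.0, "no": 1.0, "maybe": 0.5}
a = sample_token(logits, seed=42)
b = sample_token(logits, seed=42)
# Same seed, same logits: a == b every time, despite the "randomness".
```

In practice non-determinism in deployed models tends to come from unseeded RNGs and hardware-level floating-point ordering, not from anything inherent to the model itself.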

If you fail to keep your tech product from going off-script, you're responsible, because the model itself isn't a non-deterministic causal actor. A human CSR on the other hand is considered by law to have the power and responsibility associated with being a causal actor in the world, and so when they make up wild stuff about the terms of the agreement, you don't have to honor it for the customer, because there's culpability.

I'm drifting into philosophy at this point, which never goes well on HN, but this is ultimately how our legal system determines responsibility for actions, and AI doesn't meet those qualifications. If we ever want it to be culpable for its own actions, we'll have to change the legal framework we all operate under.

Edit: Causal, not casual... Whoops.

Also, I think I'm confusing the situation a bit by mixing the legal distinctions between agency and autonomy with the common understanding of being an "agent" and the philosophical concept of agency and culpability and how that relates to the US legal foundations.

I need to go touch grass.