There's also a cookbook with useful code examples: https://github.com/anthropics/anthropic-cookbook/tree/main/p...
Blogged about this here: https://simonwillison.net/2024/Dec/20/building-effective-age...
This matters mostly when things go wrong. Who's responsible? The airline whose AI chatbot gave out wrong information about its policies found, in court, that its "intelligent agent" was considered an agent in the legal sense. Which meant the airline was stuck paying for its mistake.
Anthropic's definition: Some customers define agents as fully autonomous systems that operate independently over extended periods, using various tools to accomplish complex tasks.
That's an autonomous system, not an agent. Autonomy is about how much something can do without outside help. Agency is about who's doing what for whom, for whose benefit, and with what authority. Those are independent concepts.
I ask because you seem very confident in it - and my biggest frustration about the term "agent" is that so many people are confident that their personal definition is clearly the one everyone else should be using.
But I'm not sure that's true. The court didn't define anything; on the contrary, it only said (in simplified terms) that the chatbot was part of the website, and that it's reasonable to expect the information on a company's website to be accurate.
The closest I could find to the chatbot being treated as an agent in legal terms (an entity akin to an employee) is this:
> Air Canada argues it cannot be held liable for information provided by one of its agents, servants, or representatives – including a chatbot.
Source: https://www.canlii.org/en/bc/bccrt/doc/2024/2024bccrt149/202...