There's also a cookbook with useful code examples: https://github.com/anthropics/anthropic-cookbook/tree/main/p...
Blogged about this here: https://simonwillison.net/2024/Dec/20/building-effective-age...
This matters mostly when things go wrong. Who's responsible? The airline whose AI agent gave out wrong information about its policies found, in court, that its "intelligent agent" was considered an agent in the legal sense. Which meant the airline was stuck paying for its mistake.
Anthropic's definition: Some customers define agents as fully autonomous systems that operate independently over extended periods, using various tools to accomplish complex tasks.
That's an autonomous system, not an agent. Autonomy is about how much something can do without outside help. Agency is about who's doing what for whom, and for whose benefit and with what authority. Those are independent concepts.
"Anything that can be viewed as perceiving its environment through sensors and acting upon that environment through actuators"
https://en.wikipedia.org/wiki/Intelligent_agent#As_a_definit...
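That textbook definition is easy to make concrete: anything that maps percepts (sensor readings) to actions (actuator commands) qualifies. A minimal sketch, using a hypothetical thermostat (all names here are illustrative, not from any library):

```python
class Thermostat:
    """An 'agent' in the textbook sense: it perceives its environment
    (a temperature reading) and acts on it (a heater command)."""

    def __init__(self, setpoint: float):
        self.setpoint = setpoint

    def act(self, percept: float) -> str:
        # Percept: current temperature from a sensor.
        # Action: command sent to an actuator (the heater).
        return "heat_on" if percept < self.setpoint else "heat_off"


agent = Thermostat(setpoint=20.0)
print(agent.act(18.5))  # -> heat_on
print(agent.act(21.0))  # -> heat_off
```

By that definition the bar is very low, which is exactly why arguing from the word "agent" alone settles nothing.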
Another great example of this trick is "essential" oils. We all know what the word "essential" means, but the companies selling the stuff use the word in its most uncommon sense, to indicate that the "essence" of something is in the oil, and then let the human brain fill in the gap and believe something that isn't true. It's technically legal, but we can agree that's not moral or ethical, right?
Maybe I'm wildly off base here; I've admittedly been wrong about a lot in my life up to this point. I just think the backlash that crops up when people realize what's going on (for example, the airline realizing that its chatbot does not in fact operate under the same rules as a human "agent," and that it's still a technology product) should lead companies to change their messaging and marketing. The fact that they keep doubling down on the same misleading messaging makes the whole charade feel disingenuous to me.
What is the 'product' here? It's a university textbook. Like, where is the parallel between https://en.wikipedia.org/wiki/Intelligent_agent and 'essential oils'.
Knowing that your target market (non-tech folks) isn't using the same language as you, but persisting with that language because it creates convenient sales opportunities due to the misunderstandings, feels disingenuous to me.
An "agent" in common terms is just someone acting on behalf of another, but that someone still has autonomy and moral responsibility for their actions; take the airline customer service representative, for example. AI agents, when we pull back the curtains, get down to brass tacks, whatever turn of phrase you want to use, are still ultimately deterministic models. They have a lot more parameters, and their determinism is obscured by many layers of pseudo-randomness, but given sufficient information we could still predict every single output. That system cannot be an agent in the common sense of the word, because humans are still dictating all of the possible actions and outcomes, and the machine doesn't actually have the autonomy required.
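The "deterministic under the hood" point can be demonstrated with a toy stand-in for a model's sampling step (the function name and setup here are my own illustration, not any real inference API): pseudo-randomness is fully reproducible once you pin the seed.

```python
import random


def sample_reply(seed: int) -> list:
    # Stand-in for a model's "pseudo-random" sampling step: given the
    # same seed and the same inputs, the outputs are fully reproducible.
    rng = random.Random(seed)
    return [round(rng.random(), 6) for _ in range(3)]


# Same seed -> identical "outputs": the randomness is deterministic.
assert sample_reply(42) == sample_reply(42)
# Different seed -> different outputs, but each run is still predictable.
assert sample_reply(42) != sample_reply(43)
```

Real inference stacks add floating-point nondeterminism and hidden server-side state, but none of that amounts to the kind of agency being discussed here.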
If you fail to keep your tech product from going off-script, you're responsible, because the model itself isn't a causal actor. A human CSR, on the other hand, is considered by law to have the power and responsibility of a causal actor in the world, so when they make up wild stuff about the terms of an agreement, you don't have to honor it for the customer, because the culpability sits with the person.
I'm drifting into philosophy at this point, which never goes well on HN, but this is ultimately how our legal system determines responsibility for actions, and AI doesn't meet those qualifications. If we ever want it to be culpable for its own actions, we'll have to change the legal framework we all operate under.
Edit: Causal, not casual... Whoops.
Also, I think I'm confusing the situation a bit by mixing the legal distinctions between agency and autonomy with the common understanding of being an "agent" and the philosophical concept of agency and culpability and how that relates to the US legal foundations.
I need to go touch grass.