> However, most examples of high agency agents operate in ideal environments which provide complete knowledge to the agent, and are ‘patient’ to erroneous or flaky interactions. That is, the agent has access to the complete snapshot of its environment at all times, and the environment is forgiving of its mistakes.
This is a long way around basic (very basic) existing concepts in classic AI: "fully" vs. "partially" observable environments and nondeterministic actions are all, if I'm not sorely mistaken, discussed in Russell and Norvig, the standard textbook for AI 101.
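For anyone who skipped AI 101, a toy sketch of the two distinctions the quote is re-deriving (everything here is illustrative, nothing from the article or the book verbatim):

```python
import random

class GridWorld:
    """1-D corridor; the agent tries to reach the rightmost cell."""

    def __init__(self, size=5):
        self.size = size
        self.pos = 0

    def step(self, action):
        # Nondeterministic action: 'right' occasionally fails, i.e. a
        # "flaky" interaction the environment does not forgive or retry.
        if action == "right" and random.random() < 0.8:
            self.pos = min(self.pos + 1, self.size - 1)
        elif action == "left":
            self.pos = max(self.pos - 1, 0)

    def full_observation(self):
        # Fully observable: the agent sees a complete snapshot of state.
        return {"pos": self.pos, "size": self.size}

    def partial_observation(self):
        # Partially observable: only a noisy local percept, so the agent
        # must maintain a belief about where it actually is.
        noise = random.choice([-1, 0, 1])
        return {"pos_estimate": max(0, min(self.size - 1, self.pos + noise))}


if __name__ == "__main__":
    env = GridWorld()
    for _ in range(6):
        env.step("right")
        print(env.full_observation(), env.partial_observation())
```

The article's "complete snapshot at all times" is just `full_observation()`; its "erroneous or flaky interactions" are just nondeterministic `step()` outcomes. Neither needed new vocabulary.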
Now, maybe the authors _do_ know this and chose to discuss it at a pre-undergrad level on purpose. Regardless of whether they don't know the basic concepts or think it's better to pretend they don't because their audience doesn't, this signals that folks working on or with the new AI wave (authors or audience) have not read the basic literature of classic AI.
The bastardization of the word 'agent' is related to this, but I'll stop here at the risk of veering into 'old man yells at cloud' territory, as if I'd never seen technical terms co-opted for hype.