gota (No.43656152):
This is not core to the article, but -

> However, most examples of high agency agents operate in ideal environments which provide complete knowledge to the agent, and are ‘patient’ to erroneous or flaky interactions. That is, the agent has access to the complete snapshot of its environment at all times, and the environment is forgiving of its mistakes.

This is a long way around basic (very basic) existing concepts in classic AI: "fully" vs. "partially" observable environments, nondeterministic actions - all of which are, if I'm not sorely mistaken, discussed in Russell and Norvig, the standard textbook for AI 101.
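
To make that distinction concrete, here's a toy sketch of my own - nothing from the article, and the refund scenario and field names are made up - of what a "complete snapshot" vs. a partial/flaky observation looks like in code:

    import random

    # Hypothetical full environment state (the "complete snapshot").
    state = {"order_status": "shipped", "refund_eligible": True}

    def fully_observable_policy(state):
        # Fully observable: the agent conditions on the whole state every turn.
        return "issue_refund" if state["refund_eligible"] else "escalate"

    def partially_observable_policy(observation):
        # Partially observable: the agent only sees whatever a (possibly flaky)
        # tool call or the user surfaced this turn.
        if observation is None:  # lookup failed / information missing
            return "ask_clarifying_question"
        return "issue_refund" if observation.get("refund_eligible") else "escalate"

    # A flaky lookup: sometimes nothing, sometimes only part of the state.
    observation = random.choice([None, {"order_status": "shipped"}, {"refund_eligible": True}])
    print(fully_observable_policy(state), partially_observable_policy(observation))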

Now, maybe the authors _do_ know and chose to discuss this at a pre-undergrad level on purpose. Regardless of whether they don't know the basic concepts or think it's better to pretend they don't because their audience doesn't - this signals that folks working on/with the new AI wave (authors or audience) have not read the basic literature of classic AI.

The bastardization of the word 'agent' is related to this, but I'll stop here at the risk of going too far into 'old man yells at cloud' territory, as if I'd never seen technical terms co-opted for hype.

replies (2): >>43657488, >>43658723
fergal_reid (No.43658723):
Yes, we're familiar with the terminology and framing of GOFAI. FWIW I read (most of) the 3rd edition of Russell and Norvig in my undergrad days.

However, the point we're trying to make here is at a higher level of abstraction.

Basically most demos of agents you see these days don't prioritize reliability. Even a Copilot use case is quite a bit less demanding than a really frustrated user trying to get a refund or locate a missing order.

I'm not sure putting that in the language of POMDPs is going to improve things for the reader, rather than just make us look more well-read.
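
(If anyone's curious what that would even look like: roughly, you'd maintain a belief - a distribution over hidden states - and update it after each observation. A toy sketch, with made-up states and numbers, and skipping the transition/prediction step:)

    def update_belief(belief, obs_likelihood):
        # belief: P(state); obs_likelihood: P(observation | state)
        posterior = {s: belief[s] * obs_likelihood.get(s, 0.0) for s in belief}
        total = sum(posterior.values())
        return {s: p / total for s, p in posterior.items()} if total else belief

    belief = {"order_lost": 0.3, "order_delayed": 0.7}  # prior over the hidden state
    obs = {"order_lost": 0.8, "order_delayed": 0.2}     # P(user says "it never arrived" | state)
    print(update_belief(belief, obs))  # belief shifts toward "order_lost"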

But your feedback is noted!