
265 points ctoth | 2 comments
mellosouls No.43745240
The capabilities of AI post-GPT-3 have become extraordinary and, in many cases, clearly superhuman.

However (as the article admits) there is still no general agreement on what AGI is, or how (or even whether) we can get there from here.

What there is, instead, is a growing and often naïve excitement that anticipates AGI as coming into view, and unfortunately that will be accompanied by hype-merchants desperate to be the first to "call it".

This article seems reasonable in some ways but unfortunately falls into the latter category with its title and sloganeering.

"AGI" in the title of any article should be seen as a cautionary flag. On HN - if anywhere - we need to be on the alert for this.

1. mrshadowgoose No.43746447
I've always felt that trying to pin down the precise definition of AGI is as useless as trying to pin down "what it means to truly understand". It's a mental trap for smart people that distracts them from focusing on the impacts of hard-to-define concepts like AGI.

AGI doesn't need to be "called", and there is no need for anyone to agree on its precise definition. But at some point we will cross that hard-to-define threshold, and the economic effects will be felt almost immediately.

We should probably be focusing on how to prepare society for those changes, and not on academic bullshit.

2. throwup238 No.43746556
It's definitely a trap for those who aren't familiar with the existing academic work in philosophy, cognition, and neuroscience. There are no definitive answers, but there are lots of relatively well-developed ideas and concepts that everyone here on HN seems completely ignorant of, even though some of them were developed by industry giants like Marvin Minsky.

Stuff like the society of mind (Minsky), embodied cognition (Varela, Rosch, and Thompson), connectionist or subsymbolic views (Rumelhart), multiple intelligences (Gardner), psychometric and factor-analytic theories (Carroll), and other work like Edwin Hutchins's distributed cognition. They're far from just academic wankery; there's a lot of useful stuff in there, but it's completely ignored by the AI crowd.