
174 points by Philpax | 1 comment
stared No.43722458
My pet peeve: talking about AGI without defining it. There’s no consistent, universally accepted definition. Without that, the discussion may be intellectually entertaining—but ultimately moot.

And we run into the motte-and-bailey fallacy: at one moment, AGI refers to something known to be mathematically impossible (e.g., due to the No Free Lunch theorem); the next, it’s something we already have with GPT-4 (which, while clearly not superintelligent, is general enough to approach novel problems beyond simple image classification).
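
For context, a rough paraphrase of the optimization version of that theorem (Wolpert & Macready, 1997), with notation simplified: for any two algorithms $a_1$ and $a_2$, performance summed over all possible objective functions $f$ on a finite domain is identical,

$$\sum_f P(d_m^y \mid f, m, a_1) = \sum_f P(d_m^y \mid f, m, a_2),$$

where $d_m^y$ is the sequence of cost values observed after $m$ evaluations. In other words, no algorithm can outperform all others across every possible problem; an "AGI" defined as universally optimal would have to do exactly that.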

There are two reasonable approaches in such cases. One is to clearly define what we mean by the term. The second (IMHO, much more fruitful) is to taboo your words (https://www.lesswrong.com/posts/WBdvyyHLdxZSAMmoz/taboo-your...)—that is, avoid vague terms like AGI (or even AI!) and instead use something more concrete. For example: "When will it outperform 90% of software engineers at writing code?" or "When will all AI development be in the hands of AI?".

1. pixl97 No.43722582
>There’s no consistent, universally accepted definition.

That's because of the "I" part. There is no complete description of intelligence that is accepted across the different practices of the scientific community.

"Concepts of "intelligence" are attempts to clarify and organize this complex set of phenomena. Although considerable clarity has been achieved in some areas, no such conceptualization has yet answered all the important questions, and none commands universal assent. Indeed, when two dozen prominent theorists were recently asked to define intelligence, they gave two dozen, somewhat different, definitions"