
265 points ctoth | 1 comment | source
mellosouls ◴[] No.43745240[source]
The capabilities of AI post-GPT-3 have become extraordinary and are, in many cases, clearly superhuman.

However (as the article admits) there is still no general agreement on what AGI is, or how we get there from here, or even whether we can.

What there is, instead, is a growing and often naïve excitement that anticipates AGI as coming into view, and unfortunately that will be accompanied by hype-merchants desperate to be the first to "call it".

This article seems reasonable in some ways but unfortunately falls into the latter category with its title and sloganeering.

"AGI" in the title of any article should be seen as a cautionary flag. On HN - if anywhere - we need to be on the alert for this.

replies(13): >>43745398 #>>43745959 #>>43746159 #>>43746204 #>>43746319 #>>43746355 #>>43746427 #>>43746447 #>>43746522 #>>43746657 #>>43746801 #>>43749837 #>>43795216 #
jjeaff ◴[] No.43745959[source]
I suspect AGI will be one of those things that you can't describe exactly, but you'll know it when you see it.
replies(7): >>43746043 #>>43746058 #>>43746080 #>>43746093 #>>43746651 #>>43746728 #>>43746951 #
1. DesiLurker ◴[] No.43746951[source]
My 2c on this: if you interact with any current LLM enough, you can mentally "place" its behavior and responses. When we truly have AGI+/ASI, my guess is it will be like the old adage of the blind men feeling and describing an elephant for the first time; we just won't be able to fully understand its responses. There would always be something left hanging, and eventually we'd just stop trying. That would be the point when the exponential improvement really kicks in.

Suffice it to say we are nowhere near that, and I don't even believe LLMs are the right architecture for it.