
265 points ctoth | 5 comments
1. tomrod No.43745336
I agree with Professor Mollick that the capabilities in specific task categories are becoming superhuman -- a precursor for AGI.

Until those capabilities extend to model self-improvement -- including the ability to adapt its own infrastructure, code, storage, etc. -- I think AGI/ASI are yet to be realized. My reference points are SkyNet, Travelers' "The Director", and Person of Interest's "The Machine" and "Samaritan." The ability to pursue a potentially inscrutable goal, combined with the self-agency to direct itself toward it, is true "AGI" in my book. We have many components we can reason are necessary, but it is unclear to me that we get there in the next few months.

replies(2): >>43745919 #>>43747327 #
2. airstrike No.43745919
I don't think we should take it as a given that these are truly precursors for AGI.

We may be going about it the wrong way entirely and need to backtrack and find a wholly new architecture, in which case current capabilities would predate AGI but not be precursors.

replies(1): >>43750713 #
3. esafak No.43747327
That's the kind of AGI we don't need. Please let Skynet stay fictional.
replies(1): >>43750705 #
4. tomrod No.43750705
Not saying I love the idea of an extant ASI, but we do need to define it clearly. I feel these self-capable examples highlight something about ASI capability that a basic API endpoint doesn't.
5. tomrod No.43750713
I call them precursors because we would anticipate an ASI being able to do these things. Perhaps "necessary conditions" would be a more appropriate term here.