265 points ctoth | 1 comment | | HN request time: 0.21s | source
tomrod ◴[] No.43745336[source]
I agree with Professor Mollick that the capabilities in specific task categories are becoming superhuman -- a precursor for AGI.

Until those capabilities are expanded for model self-improvement -- including the ability to adapt its own infrastructure, code, storage, etc. -- I think AGI/ASI are yet to be realized. My reference points are SkyNet, Traveler's "The Director," and Person of Interest's "The Machine" and "Samaritan." The ability to pursue a potentially inscrutable goal, with the self-agency to direct itself toward it, is true "AGI" in my book. We have many components we can reason are necessary, but it is unclear to me that we get there in the next few months.

replies(2): >>43745919 #>>43747327 #
airstrike ◴[] No.43745919[source]
I don't think we should take it as a given that these are truly precursors for AGI.

We may be going about it the wrong way entirely and need to backtrack and find a wholly new architecture, in which case current capabilities would predate AGI but not be precursors.

replies(1): >>43750713 #
1. tomrod ◴[] No.43750713[source]
I call them precursors because we would anticipate an ASI to be able to do these things. Perhaps "necessary conditions" would be a more appropriate term here.