207 points lexandstuff | 1 comment
ActorNightly ◴[] No.44477104[source]
If you are going to write anything about AGI, you should really prove that it's actually possible in the first place, because that question does not have a definite yes.
replies(3): >>44477147 #>>44477223 #>>44477246 #
mitthrowaway2 ◴[] No.44477246[source]
For most of us non-dualists, the human brain is an existence proof. That doesn't mean transformers and LLMs are the right implementation, but it's not really a question of proving it's possible when it's clearly supported by the fundamental operations available in the universe. So it's okay to skip to the part of the conversation you want to write about.
replies(3): >>44477386 #>>44477553 #>>44477701 #
habinero ◴[] No.44477386[source]
This is like saying "planets exist, therefore it's possible to build a planet" and then breathlessly writing a ton about how amazing planet engineering is and how it'll totally change the world real estate market by 2030.

And the rest of us are looking at a bunch of startups playing in the dirt and going "uh huh".

replies(1): >>44477523 #
mitthrowaway2 ◴[] No.44477523[source]
I think it's more like saying "Stars exist, therefore nuclear fusion is possible" and then breathlessly writing a ton about how amazing fusion power will be. Which is a fine thing to write about even if it's forever 20 years away. This paper does not claim AGI will be attained by 2030. There are people spending their careers on achieving exactly this; wouldn't they be interested in a thoughtful take on what happens after they succeed?