
322 points | atomroflbomber | 1 comment
lelag ◴[] No.36983601[source]
If 2023 ends up giving us AGI, room-temperature superconductors, Starships and a cure for cancer, I think we will be able to call it a good year...
replies(10): >>36983623 #>>36984116 #>>36984118 #>>36984549 #>>36986942 #>>36987008 #>>36987250 #>>36987546 #>>36987577 #>>36992261 #
azinman2 ◴[] No.36986942[source]
We’re not getting AGI anytime soon…
replies(6): >>36987177 #>>36987360 #>>36987472 #>>36987477 #>>36987541 #>>36987759 #
glimshe ◴[] No.36987360[source]
We could, we're not there yet but at the current rate we could be less than 10 years away from it.
replies(2): >>36987442 #>>36987857 #
azinman2 ◴[] No.36987442[source]
Maybe? Really hard to say. We haven't had any major advancements in planning; none of the current advances have done anything with memory (retrieval augmentation is a not-very-good hack, and fine-tuning doesn't qualify for AGI); perception is getting better but still has a ways to go; we don't have any foundational multi-modal models that can extend to arbitrary new modalities, like learning arbitrary new sensors; etc. etc. OpenAI does little toward a massage or chef robot, for example.

I think everyone is fooled by Searle's Chinese Room and the visual equivalents with Midjourney.

replies(1): >>36987771 #
Demotooodo ◴[] No.36987771[source]
Why is retrieval a hack?
replies(1): >>36993309 #
azinman2 ◴[] No.36993309[source]
Because you’re grabbing stuff in some nearby vector space and putting what can fit into context. This isn’t anywhere close to “intelligence.” You’re limited by context length, there’s no evolution or generalization, the vector space itself is just one facet of the problem, etc. etc.
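The mechanism being criticized can be sketched in a few lines. This is a toy illustration only, assuming hand-made 2-d embedding vectors and a word count as a stand-in for tokens; real systems use a learned embedding model and an approximate nearest-neighbor index, but the shape of the hack is the same: rank documents by vector similarity, then pack as many as fit into a fixed context budget.

```python
import math

def cosine(a, b):
    # Cosine similarity: how "nearby" two vectors are in the embedding space.
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb)

def retrieve(query_vec, docs, context_budget):
    """docs: list of (text, vector) pairs. Rank by similarity to the query,
    then greedily pack texts until the context budget is exhausted --
    the 'limited by context length' constraint from the comment above."""
    ranked = sorted(docs, key=lambda d: cosine(query_vec, d[1]), reverse=True)
    context, used = [], 0
    for text, _ in ranked:
        cost = len(text.split())  # crude stand-in for a token count
        if used + cost > context_budget:
            break  # whatever doesn't fit is simply dropped
        context.append(text)
        used += cost
    return context

# Toy corpus with hypothetical 2-d "embeddings".
docs = [
    ("cats are small felines", [0.9, 0.1]),
    ("stocks fell on tuesday", [0.1, 0.9]),
    ("tigers are large felines", [0.8, 0.2]),
]
print(retrieve([1.0, 0.0], docs, context_budget=8))
```

Nothing here learns or generalizes: the ranking is fixed by the embedding geometry, and anything past the budget is silently discarded, which is the gist of the objection.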