625 points | lukebennett | 1 comment
aaroninsf No.42139331
It's easy to be snarky at ill-informed and hyperbolic takes, but it's also pretty clear that large multi-modal models trained with the data we already have are eventually going to give us AGI.

IMO this will require not just much more expansive multi-modal training but also novel architectures (specifically, recurrent approaches), plus a well-known set of capabilities most current systems lack, e.g. the integration of short-term memory (the context window, if you like) into long-term "memory", episodic or otherwise.
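
Concretely, here's a toy sketch of what that consolidation step could look like. Everything in it (the EpisodicMemory class, treating eviction from the window as the consolidation trigger, the bag-of-words embedding) is an illustrative assumption, not any real system's architecture:

    import math
    from collections import Counter

    def embed(text: str) -> Counter:
        # Toy stand-in for a learned encoder: bag-of-words term counts.
        return Counter(text.lower().split())

    def cosine(a: Counter, b: Counter) -> float:
        # Counter returns 0 for missing keys, so this works on sparse vectors.
        dot = sum(count * b[term] for term, count in a.items())
        na = math.sqrt(sum(v * v for v in a.values()))
        nb = math.sqrt(sum(v * v for v in b.values()))
        return dot / (na * nb) if na and nb else 0.0

    class EpisodicMemory:
        """Consolidates chunks evicted from a bounded context window
        into a long-term store that can be recalled by similarity."""

        def __init__(self, window_size: int = 4):
            self.window: list[str] = []  # short-term memory (the "context window")
            self.episodes: list[tuple[Counter, str]] = []  # long-term episodic store
            self.window_size = window_size

        def observe(self, chunk: str) -> None:
            self.window.append(chunk)
            if len(self.window) > self.window_size:
                evicted = self.window.pop(0)
                # Consolidation: instead of discarding the evicted chunk,
                # index it so it can be retrieved later.
                self.episodes.append((embed(evicted), evicted))

        def recall(self, query: str, k: int = 2) -> list[str]:
            # Retrieve the k stored episodes most similar to the query.
            q = embed(query)
            ranked = sorted(self.episodes, key=lambda ep: cosine(q, ep[0]), reverse=True)
            return [text for _, text in ranked[:k]]

    mem = EpisodicMemory(window_size=2)
    for chunk in ["the cat sat down", "dogs bark loudly",
                  "rain fell all day", "the market closed up"]:
        mem.observe(chunk)
    print(mem.recall("cat", k=1))  # the evicted cat chunk resurfaces from long-term memory

The hard engineering questions are all hidden in that consolidation step: what to summarize, what to embed, and when to fold recalled episodes back into the window.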

But these are, as we say, mere matters of engineering.

replies(2): >>42139463 >>42139929
throwawa14223 No.42139929
Why is that clear? Why is that more probable than a second AI winter? What if there's no path from LLMs to anything else?