
131 points by xlinux | 1 comment
skeltoac No.42187440
I especially enjoyed the link to The Bitter Lesson by Rich Sutton, which I hadn't read before. Now I wonder what "discoveries" have been built into today's AI models and how they might come to be detrimental.

http://www.incompleteideas.net/IncIdeas/BitterLesson.html

janalsncm No.42188177
Probably LLM-maximalist ideas that posit infinite “scaling laws” for LLMs (they are empirical fits, not laws), leading to ridiculous conclusions such as the claim that building a $1 trillion cluster is the fastest way to AGI. People like Leopold Aschenbrenner are in this camp.
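
To put numbers on why they aren't laws: the published fits are power laws with an irreducible floor, so each extra order of magnitude of scale buys less. A toy sketch in Python, with constants roughly following the Chinchilla fit (Hoffmann et al. 2022); treat them as illustrative, not authoritative:

    # Chinchilla-style scaling "law": L(N, D) = E + A/N^alpha + B/D^beta
    # Constants roughly follow the Hoffmann et al. 2022 fit, but treat
    # them as illustrative, not authoritative.
    E, A, B = 1.69, 406.4, 410.7
    alpha, beta = 0.34, 0.28

    def loss(n_params, n_tokens):
        """Predicted pretraining loss for N parameters and D tokens."""
        return E + A / n_params**alpha + B / n_tokens**beta

    # Each 10x in parameters buys a smaller loss reduction, and nothing
    # ever drops below the irreducible term E. No "infinite scaling" here.
    for n in (1e9, 1e10, 1e11, 1e12):
        print(f"{n:.0e} params -> predicted loss {loss(n, 1e12):.3f}")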

Imagine if LLMs were the only way we had to play chess. You’d need a centralized server, and peak performance wouldn’t even best a grandmaster. You’d spend $1 trillion building a super cluster because that’s all you know.

<-- This is where AI is today.

And then some startup creates Stockfish: a chess engine that beats any LLM or grandmaster and runs on a smartphone.
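
For a sense of how little machinery the specialized approach needs: the classic kernel behind engines like Stockfish is just negamax search with alpha-beta pruning. A toy sketch (real engines add NNUE evaluation, move ordering, and much more; this uses the python-chess library and a bare material eval):

    # Toy negamax with alpha-beta pruning. Depth and evaluation are
    # deliberately minimal; this is a sketch, not a real engine.
    import chess  # pip install chess

    VALUES = {chess.PAWN: 1, chess.KNIGHT: 3, chess.BISHOP: 3,
              chess.ROOK: 5, chess.QUEEN: 9}

    def evaluate(board):
        """Material balance from the side-to-move's perspective."""
        score = 0
        for piece, value in VALUES.items():
            score += value * len(board.pieces(piece, board.turn))
            score -= value * len(board.pieces(piece, not board.turn))
        return score

    def negamax(board, depth, alpha=-1e9, beta=1e9):
        if depth == 0 or board.is_game_over():
            return evaluate(board)
        for move in board.legal_moves:
            board.push(move)
            score = -negamax(board, depth - 1, -beta, -alpha)
            board.pop()
            alpha = max(alpha, score)
            if alpha >= beta:
                break  # prune: the opponent won't allow this line
        return alpha

    print(negamax(chess.Board(), depth=3))

No data center required: tree search plus a cheap evaluation runs fine on a phone, which is the whole point of the analogy.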