
625 points | lukebennett | 2 comments
wslh No.42139668
It sounds a bit sci-fi, but since these models are built on data generated by our civilization, I wonder if there's an epistemological bottleneck requiring smarter or more diverse individuals to produce richer data. This, in turn, could spark further breakthroughs in model development. Although these interactions with LLMs help address specific problems, truly complex issues remain beyond their current scope.

With my user hat on, I'm quite pleased with the current state of LLMs. Initially, I approached them skeptically, using a hackish mindset and posing all kinds of Turing test-like questions. Over time, though, I shifted my focus to how they can enhance my team's productivity and support my own tasks in meaningful ways.

Finally, I see LLMs as a valuable way to explore parts of the world, accommodating the reality that we simply don’t have enough time to read every book or delve into every topic that interests us.

replies(1): >>42149854 #
1. tim333 No.42149854
AlphaGo, which beat Lee Sedol, was trained on human games. But DeepMind then produced AlphaZero, which learned entirely from self-play and surpassed AlphaGo. So it goes.
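To make "learned entirely from self-play" concrete, here is a toy sketch of the idea: nothing like DeepMind's actual system (which used deep networks and Monte Carlo tree search), just tabular Monte-Carlo value learning on single-pile Nim. The rules: players alternate taking 1 to 3 sticks, and whoever takes the last stick wins. No human games appear anywhere; both "players" are the same value table, improving by playing against itself.

```python
import random

ACTIONS = (1, 2, 3)

def train(start_pile=10, episodes=20000, epsilon=0.2, seed=0):
    """Self-play training loop: both sides share one value table."""
    rng = random.Random(seed)
    Q = {}   # Q[(pile, action)] -> mean return for the player to move
    N = {}   # visit counts, for incremental averaging
    for _ in range(episodes):
        pile, history = start_pile, []
        while pile > 0:
            legal = [a for a in ACTIONS if a <= pile]
            if rng.random() < epsilon:          # explore
                a = rng.choice(legal)
            else:                               # exploit the current table
                a = max(legal, key=lambda x: Q.get((pile, x), 0.0))
            history.append((pile, a))
            pile -= a
        # Whoever moved last took the last stick and won (+1);
        # the reward alternates sign walking back through the game.
        reward = 1.0
        for sa in reversed(history):
            N[sa] = N.get(sa, 0) + 1
            Q[sa] = Q.get(sa, 0.0) + (reward - Q.get(sa, 0.0)) / N[sa]
            reward = -reward
    return Q

def best_move(Q, pile):
    legal = [a for a in ACTIONS if a <= pile]
    return max(legal, key=lambda a: Q.get((pile, a), 0.0))

Q = train()
# Nim theory: leaving the opponent a multiple of 4 is winning,
# so from a pile of 5 the table should learn to take 1 stick.
move = best_move(Q, 5)
```

The point of the toy is the same as the AlphaZero observation in the comment: the data the learner trains on is generated by its own play, not by a human corpus, and it still converges on the game-theoretically correct moves.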
replies(1): >>42152444 #
2. wslh No.42152444
That is just for games like Go, which are not comparable to societal and historical content, science, etc. Go also has well-defined rules.