
65 points by binning | 3 comments | source
xnx ◴[] No.46338971[source]
AI will be a super-tutor for the curious and a tool to outsource all thinking for the incurious.
replies(6): >>46338983 #>>46339024 #>>46339093 #>>46339179 #>>46339227 #>>46339274 #
1. turtletontine ◴[] No.46339093[source]
I don’t necessarily think you’re wrong, but I’m skeptical that the curious will meaningfully learn from LLMs. There’s a huge gap between reading something and thinking “gee, that’s interesting, I’m glad I know that now,” and really doing the work to deeply understand it.

This is part of what good teaching is about! The most brilliant, engaged students will watch a lecture and think “wow, nice, I understand it now!” and as soon as they try the homework, they realize there are all kinds of subtleties they didn’t consider. That’s why pedagogically well-crafted assignments are so important: they force students to really learn, and they guide them along the way.

But of course, all this is difficult and time-consuming, while having a “conversation” with a language model is quick and easy. It will even write you flowery compliments about how smart you are every time you ask a follow-up question!

replies(1): >>46339183 #
2. tnias23 ◴[] No.46339183[source]
I find LLMs useful for quickly building mental models of unfamiliar topics. Instead of beating my head against the wall trying to figure out the mental model, I can beat my head against the wall on the next steps, like learning the lower-level details or the higher-level implications. Whatever is lost by not struggling through the mental model myself is easily outweighed by being able to spend that time applying myself elsewhere.
replies(1): >>46339315 #
3. wrs ◴[] No.46339315[source]
I’ve had some success trying to explain something to an LLM, having it correct me with its own explanation (which isn’t quite right either), correcting it with a revised explanation, and going round and round until I think I get it.

Sort of the Feynman method but with an LLM rubber duck.