
721 points bradgessler | 1 comment
don_neufeld No.44009004
Completely agree.

From all of my observations, the impact of LLMs on human thought quality appears largely corrosive.

I’m very glad my kid’s school has hardcore banned them. In some classes, students may only turn in work that was done in class, under the direct observation of the teacher. There has also been a significant increase in “on paper” work vs work done on computer.

Lest you wonder “what does this guy know anyways?”, I’ll share that I grew up in a household where both parents were professors of education.

Understanding the effectiveness of different methods of learning (my dad literally taught Science Methods) was a frequent topic. Active learning (creating things using what you’re learning about) is so much more effective than passive, reception-oriented methods. I think LLMs largely support the latter.

hammock No.44010768
> I’m very glad my kid’s school has hardcore banned them.

What does that mean? I’m curious.

The schools and university I grew up in had a “single-sanction honor code” which meant if you were caught lying or cheating even once you would be expelled. And you signed the honor code at the top of every test.

My more progressive friends at other schools that didn’t have an honor code happily pooh-poohed it as a repugnantly harsh, old-fashioned standard. But today I don’t see a better way of enforcing “don’t use AI” in schools.

garrickvanburen No.44010858
I don’t see the problem.

I’m not sure LLM output is distinguishable from Wikipedia or World Book.

Maybe? And if the question is “did the student actually write this?” (which is different from “do they understand it?”), there are lots of ways to assess whether a given student understands the material that don’t involve submitting typed text but still involve communicating clearly.

If we allow LLMs, like we allow calculators, just how poor LLMs are will become far more obvious.

bccdee No.44032057
Do you really not see the problem? A student who pastes an essay prompt into an input box and copies out the response has learned nothing. Even direct plagiarism from Wikipedia would typically need to be reworked; there will rarely be a Wikipedia page corresponding to your teacher’s specific essay prompt.

Students are also poor writers. LLM-generated essays can often be spotted in elementary school because they are written too well for that grade level. A good student will eventually surpass a chatbot, but not if they lean on it as a crutch while it’s still a stronger writer than they are.