←back to thread

323 points timbilt | 5 comments | | HN request time: 0.835s | source
Show context
wrp ◴[] No.42132026[source]
I have been working with colleagues to develop advice on how to adapt teaching methods in the face of widespread use of LLMs by students.

The first point I like to make is that the purpose of having students do tasks is to foster their development. That may sound obvious, but many people don't seem to take notice that the products of student activities are worthless in themselves. We don't have students do push-ups in gym class to help the national economy by meeting some push-up quota. The sole reason for them is to promote physical development. The same principle applies to mental tasks. When considering LLM use, we need to be looking at its effects on student development rather than on student output.

So, what is actually new about LLM use? There has always been a risk that students would sometimes submit homework that was actually the work of someone else, but LLMs enable willing students to do it all the time. Teachers can adapt to this by basing evaluation only on work done in class, and by designing homework to emphasize feedback on key points, so that students will get some learning benefit even though a LLM did the work.

Completely following this advice may seem impossible, because some important forms of work done for evaluation require too much time. Teachers use papers and projects to challenge students in a more elaborate way than is possible in class. These can still be used beneficially if a distinction is made between work done for learning and work done for evaluation. While students develop multiple skills while working on these extended tasks, those skills could be evaluated in class by more concise tasks with a narrower focus. For example, good writing requires logical coherence and rhetorical flow. If students have trouble in these areas, it will be just as evident in a brief essay as a long one.

replies(1): >>42132902 #
Eisenstein ◴[] No.42132902[source]
It is trivially easy to spot AI writing if you are familiar with it, but if it requires failing most of the class for turning in LLM generated material, I think we are going to find that abolishing graded homework is the only tenable solution.

The student's job is not to do everything the teacher says, it is to get through schooling somewhat intact and ready for their future. The sad fact is that many things we were forced to do in school were not helpful at all, and only existed because the teachers thought it was, or for no real reason at all.

Pretending that pedagogy has established and verified methodology that will result in a completely developed student, if only the student did the work as prescribed, is quite silly.

Teaching evolves with technology like every other part of society and it may come out worse or it may come out better, but I don't want to go back fountain pens and slide rules and I think in 20 years this generation won't look back on their education thinking they got a worse one than we did because they could cheat easier.

replies(3): >>42133174 #>>42133208 #>>42133655 #
low_tech_love ◴[] No.42133655[source]
As a (senior) lecturer in a university, I’m with you on most of what you wrote. The truth is that every teacher must immediately think: if any of their assignments or examinations involve something that could potentially be GPT-generated, it will be GPT-generated. It might be easy to spot such a thing, but you’ll be spending hours writing feedback while sifting through the rivers of meaningless artificially-generated text your students will submit.

Personally what I’m doing is to push the weight back at the students. Every submission now requires a 5-minute presentation with an argumentation/defense against me as an opponent. Anyway it would take me around 10-15 min to correct their submission, so we’re just doing it together now.

replies(1): >>42134408 #
1. jay_kyburz ◴[] No.42134408[source]
A genuine question, have you evaluated AI for marking written work?

I'm not an educator, but it seems to me like gippity would be better at analyzing a students paper than writing it in the first place.

Your prompt could provide the AI the marking criteria, or the rubric, and have it summarize how well the paper hits the important points.

replies(2): >>42134490 #>>42137437 #
2. Eisenstein ◴[] No.42134490[source]
I know I would have had a blast finding ways to direct the model into giving me top scores by manipulating it through the submitted text. I think that without a bespoke model that has been vetted, is supervised, and is constrained, you are going to end up with some interesting results running classwork through a language model for grading.
3. low_tech_love ◴[] No.42137437[source]
Never say never, but I do not plan on doing this. This sounds quite surreal: a loop where the students pretend to learn and I pretend to teach? I would… hm… I’ve never heard of such… I mean, this is definitely not how it is in reality… right…

(Jokes aside, I have an unhealthy, unstoppable need to feel proud of my work, so no I won’t do that. For now…)

replies(1): >>42140110 #
4. jay_kyburz ◴[] No.42140110[source]
I would have thought that the teaching comes before the test, and that the test is really just a way to measure how well the student soaked up the knowledge.

You could take pride in a well crafted technology that could mark an assignment and provide feedback in far more detail that you yourself could ever provide given time constraints.

I asked my partner about it last night, she teaches at ANU and she made some joke about how variable the quality of tutor marking is. At least the AI would be impartial and consistent.

I have no idea how well an AI can assess a paper against a rubric. Might be a complete waist of time, but if there were some teachers out there who wanted to do some tests, I would be interested in helping set up the tests and evaluating the results.

replies(1): >>42141384 #
5. wrp ◴[] No.42141384{3}[source]
In discussing how to adapt teaching methods, we have also looked at evaluation by LLM. The most talked about concern now is the unreliability of LLM output. However, say that in the future, accuracy of LLMs improves to the point that it is no longer a problem. Would it then be good to have evaluation by LLM?

I would say generally not, for two reasons. First, the teacher needs to know how the student is developing. To get a thorough understanding takes working through the student's output, not just checking a summary score. Second, the teacher needs to provide selective feedback, to focus student attention on the most important areas needing development. This requires knowledge of the goals of the teacher and the developmental history of the student.

I won't argue that LLM evaluation could never be applied usefully. If the task to be evaluated is simple and the skills to be learned are straightforward, I imagine that it could benefit the students of some grossly overloaded teacher.