
395 points pseudolus | 2 comments
1. chenzo44 No.43634094
Professor here. I set up a website hosting OpenWebUI for use in my b-school courses (UG and grad). The only way I've found to get students to stop using it to cheat is to push them to use it until they learn for themselves that it doesn't answer everything correctly. This requires careful, thoughtful assignment redesign. Every time I grade a submission with the hallmarks of AI generation, I find that it fails to cite course content and lacks depth. So I give them the grade they earn. So much hand-wringing about using AI to cheat... just uphold the standards. If they're so low that AI can easily game them, that's on the instructor.
2. lgessler No.43640148
Sure, this is a common sentiment, and it works for some courses. But for others (introductory programming, say) I have a hard time imagining an assignment that couldn't be one-shot by an LLM. What can someone with two weeks of Python experience do that an LLM can't? The other issue is that LLM capabilities are, for now, still improving, so it's anyone's guess whether this attitude is sustainable on a scale of years.
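
To make the intro-programming point concrete, here is a sketch of the kind of week-two Python exercise at issue (the assignment and file name are hypothetical, chosen for illustration); any current LLM produces something equivalent on the first attempt:

    # Hypothetical week-two assignment: count word frequencies in a text file.
    from collections import Counter

    def word_counts(path: str) -> Counter:
        """Return a Counter of lowercase words in the file at `path`."""
        with open(path, encoding="utf-8") as f:
            return Counter(f.read().lower().split())

    if __name__ == "__main__":
        # Print the ten most common words in a sample file (hypothetical name).
        for word, n in word_counts("sample.txt").most_common(10):
            print(f"{word}: {n}")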