> I think the issue is that it's so tempting to lean on AI.
This is not the root cause, it's a side effect.
Students cheat because of anxiety. Anxiety is driven by grades, because grades determine who passes and who fails. Detecting cheating is solving the wrong problem: if most grades did not directly affect pass/fail outcomes, students wouldn't be pressured to cheat. Evaluation and grades serve two purposes:
1. Determine the grade of a qualification, i.e. the result of education (sometimes called "summative")
2. Identify weaknesses to aid and optimise learning (sometimes called "formative")
The problem arises when these two are conflated, either by combining them and littering them throughout a course, or when the ratio between them is imbalanced, i.e. too much of #1. Then the pressure to cheat arises, the measure becomes the target, and the focus on learning is compromised. This is not a new problem: students already waste time trying to game grades through suboptimal learning activities like "cramming".
The funny thing is that everyone already knows how to solve cheating: controlled examination, which is practical to implement for #1 so long as you don't have a disruptive number of exams serving that purpose. This even appears in sci-fi: Spock takes a "memory test" on Vulcan in 2286 as a kind of "final exam", in a controlled environment with challenges posed by computers. It still uses a combination of proxy knowledge-based questions and puzzles, but that doesn't matter, because the environment is controlled.
What's needed is a separation of, and balance between, summative and formative grading. Then preventing cheating becomes almost easy, and students can focus on learning... cheating at tests throughout the course would actually have a negative effect on their final grade, because they would be undermining their own learning by breaking their own REPL.
LLMs have only increased the pressure, and this may end up being a positive thing for education.