
395 points by pseudolus | 1 comment | source
walleeee ◴[] No.43638779[source]
> Students primarily use AI systems for creating (using information to learn something new)

this is a smooth way to not say "cheat" in the first paragraph and to reframe creativity in a way that reflects positively on llm use. in fairness they then say

> This raises questions about ensuring students don’t offload critical cognitive tasks to AI systems.

and later they report

> nearly half (~47%) of student-AI conversations were Direct—that is, seeking answers or content with minimal engagement. Whereas many of these serve legitimate learning purposes (like asking conceptual questions or generating study guides), we did find concerning Direct conversation examples including:
>
> - Provide answers to machine learning multiple-choice questions
> - Provide direct answers to English language test questions
> - Rewrite marketing and business texts to avoid plagiarism detection

kudos for addressing this head on. the problem here (and the reason these are likely to be wedge technologies rather than democratizing ones) is not that they make grading harder or violate principles of higher education, but that they can disable people who might otherwise learn something
