
427 points JumpCrisscross | 6 comments
1. flappyeagle ◴[] No.41897914[source]
Rather than flagging it as AI, why don't we flag whether it's good or not?

I work with people in their 30s who cannot write their way out of a hat. Who cares whether the work is AI-assisted or not? Most AI writing is super dry, formulaic, and bad. If the student doesn't recognize this, give them a poor mark for having terrible style.

replies(4): >>41897946 #>>41898164 #>>41901366 #>>41901874 #
2. echoangle ◴[] No.41897946[source]
Because sometimes an exercise is supposed to be done under conditions that don't represent the real world. If an exam is calculator-free, you can't just use a calculator anyway on the grounds that you'll have one at work too. If the assignment is "write a text about XYZ, without using AI assistance", then using an AI is cheating. Cheating should have worse consequences than writing bad stuff yourself, so detecting AI (or simply not assigning unsupervised work) is still important.
3. Ekaros ◴[] No.41898164[source]
Because often the goal of assessing a student is not to check that they can generate output. It is to ensure they have retained a sufficient amount of the knowledge they are supposed to retain from the course and can regurgitate it in a sufficiently readable format.

Actually being able to generate good text is an entirely separate evaluation. And AI might have a place there.

4. kreyenborgi ◴[] No.41901366[source]
Traditional schoolwork has rewarded exactly the dry, formulaic ChatGPT-style language, while the free-thinking, explorative, and creative writing that humans excel at is at best ignored and more commonly marked down for irrelevant typos, lack of the expected structure, and too much personality showing through.
replies(1): >>41901742 #
5. FirmwareBurner ◴[] No.41901742[source]
Because judging the quality of "free thinking" outside of STEM is incredibly biased and subjective, depending on the person doing the judging, and could even get you in trouble for wrongthink (try debating the Israel vs. Palestine issue and see). That's why many school systems have converged on standardized boilerplate slop that's easy for people of average intellect and training to judge and, most importantly, easy for students to game, so that it's less discriminatory on race, religion, and socioeconomic background.
6. throwaway290 ◴[] No.41901874[source]
> Most AI writing is super dry, formulaic and bad.

An LLM can generate text that is as entertaining and whimsical as its training dataset gets, with no effort on your side.