395 points pseudolus | 1 comment | source
SamBam ◴[] No.43633756[source]
I feel like Anthropic has an incentive to minimize how much students use LLMs to write their papers for them.

In the article, I guess this would be buried in

> Students also frequently used Claude to provide technical explanations or solutions for academic assignments (33.5%)—working with AI to debug and fix errors in coding assignments, implement programming algorithms and data structures, and explain or solve mathematical problems.

"Write my essay" would be considered a "solution for academic assignment," but by only referring to it obliquely in that paragraph they don't really tell us the prevalence of it.

(I also wonder if students are smart, and may keep outright usage of LLMs to complete assignments on a separate, non-university account, not trusting that Anthropic will keep their conversations private from the university if asked.)

replies(3): >>43634021 #>>43634024 #>>43642992 #
ignoramous ◴[] No.43642992[source]
> feel like Anthropic has an incentive to minimize how much students use LLMs to write their papers for them

You're right.

Quite incredibly, they also do the opposite: they hype up / inflate the capabilities of their LLMs. For instance, they've categorised "summarisation" as "high-order thinking" ("Create", per Bloom's Taxonomy). It patently isn't. It's comical that they'd not only think so, but also publicly blog about it.

replies(1): >>43644703 #
xpe ◴[] No.43644703[source]
> Bloom's taxonomy is a framework for categorizing educational goals, developed by a committee of educators chaired by Benjamin Bloom in 1956. ... In 2001, this taxonomy was revised, renaming and reordering the levels as Remember, Understand, Apply, Analyze, Evaluate, and Create. This domain focuses on intellectual skills and the development of critical thinking and problem-solving abilities. - Wikipedia

This context is important: this taxonomy did not emerge from artificial intelligence or cognitive science. So its levels are unlikely to map to how ML/AI people assess the difficulty of various categories of tasks.

Generative models are, by design, fast (and often pretty good) at generation (creation), but this isn't the same standard Bloom had in mind with his "creation" category. Bloom's taxonomy might be better described as a hierarchy in which proper creation draws upon all the layers below it: understanding, application, analysis, and evaluation.
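
As a rough illustration of that hierarchy reading, here is a toy sketch (the level names are from the revised taxonomy quoted above; everything else is purely illustrative, not anything from Bloom or Anthropic):

    # Toy sketch: Bloom's revised taxonomy modeled as an ordered hierarchy,
    # where "Create" is taken to presuppose every level below it.
    BLOOM_LEVELS = ["Remember", "Understand", "Apply",
                    "Analyze", "Evaluate", "Create"]

    def prerequisites(level: str) -> list[str]:
        """Return the lower levels a given level draws upon."""
        return BLOOM_LEVELS[:BLOOM_LEVELS.index(level)]

    # "Create" sits at the top and draws on all five lower levels,
    # which is why fast text generation alone doesn't demonstrate it.
    print(prerequisites("Create"))
    # -> ['Remember', 'Understand', 'Apply', 'Analyze', 'Evaluate']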

replies(1): >>43644912 #
xpe ◴[] No.43644912[source]
Here is one key take-away, phrased as a question: when a student uses an LLM for "creation", are underlying aspects (understanding, application, analysis, and evaluation) part of the learning process?