https://www.purdue.edu/newsroom/2025/Q4/purdue-unveils-compr...
Where the actual news is:
> To this end, the trustees have delegated authority to the provost, working with deans of all academic colleges, to develop and to review and update continuously, discipline-specific criteria and proficiency standards for a new campuswide “artificial intelligence working competency” graduation requirement for all Purdue main campus students, starting with new beginners in fall 2026.
So the Purdue trustees have "delegated authority" to people at the University to make a new graduation requirement for 2026.
Who knows what will be in the final version.
I thought Purdue was a good school; gimmicks like this are usually the province of low-tier universities trying to get attention.
After more than a trillion dollars spent, LLMs can replace: (a) a new secretary with one week of experience (b) a junior programmer who just learned that they can install programs on a desktop computer, and (c) James Patterson.
That's the bright future that Purdue is preparing its students for.
Yes, AIs will be a huge thing...eventually...but LLMs are not AI, and they never will be.
But I like to think that actually learning the history was important and it certainly was a diversion from math/chemistry/physics. I liked Shakespeare, so reading the plays was also worthwhile and discussing them in class was fun. Yeah, I was bored to tears in medieval history, so AI could have helped there.
Professors can tailor lectures to narrower topics or advanced, current, or more specialized subjects. There may be less need to have a series of beginning or introductory courses--it's assumed learners will avail themselves of AI for the introductory material.
Pessimistically, AI literacy contributes to further erosion of critical thinking, lazy auto-grading, and inability to construct book-length arguments.
Part of this is very reasonable; AI is upending how students learn (or cheat), so adding a requirement to teach how to do it in a way that improves learning rather than just enhances cheating makes sense. The problem with the broad, top-down approach is it looks like what happens in Corporate America where there's a CEO edict that "we need a ____ strategy," and every department pivots projects to include that, whether or not it makes sense.
AI/ML isn't going to completely shift the world, but understanding how to do basic prompt engineering, validate against hallucinations, and know the difference between ChatGPT and GPT-4o is valuable for people who do not have a software background.
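For instance, "validate against hallucinations" can be taught with something as small as a second-pass self-check. A rough sketch using the OpenAI Python client (the model name and both prompts here are placeholders I made up, not anything from Purdue's curriculum):

    from openai import OpenAI

    client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

    def ask(prompt: str, model: str = "gpt-4o") -> str:
        resp = client.chat.completions.create(
            model=model,
            messages=[{"role": "user", "content": prompt}],
        )
        return resp.choices[0].message.content

    answer = ask("When was Purdue University founded, and by whom?")

    # Naive check: ask a second pass to flag claims it can't back up.
    # Catches some confabulation; no substitute for checking real sources.
    critique = ask(
        "List any factual claims in the following answer that you cannot "
        f"verify with high confidence:\n\n{answer}"
    )
    print(critique)

A non-programmer won't write this, but seeing it demystifies what "prompting" actually is: strings in, strings out.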
Gaining any kind of knowledge is a net win.
And I just know this is going to turn into a (pearl-clutching) AI Ethics course...
If you're going to try to fake being able to write, better to try to dupe any other professor than a professor of English. (source: raised by English majors)
Why do you think it wouldn't do the same for other fields? The purpose of writing essays in school is never to have the finished product; it's to learn and analyze the topic of the essay and/or to go through the process of writing and editing it.
"all as informed by evolving workforce and employer needs"
“At the same time, it’s absolutely imperative that a requirement like this is well informed by continual input from industry partners and employers more broadly."
Purdue is engaging in the oldest profession in the world. And the students pay for this BS.
This is not remotely the kind of thing that a school should be making a requirement at this time. The technology is changing way too fast to even be sure that basic fundamental skills related to it will remain relevant for as many as 4-5 years.
However, there's no reason to think any trick would be relevant even in a year. As LLMs get better, why wouldn't we just have them auto-rewrite prompts using the appropriate prompt-engineering tricks?
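That rewrite loop is trivial to build today. A rough sketch with the OpenAI Python client (the model name and the rewrite instructions are placeholders, not a real product):

    from openai import OpenAI

    client = OpenAI()

    REWRITE = (
        "Rewrite the user's request as a clearer prompt: state the task, "
        "the desired output format, and any constraints explicitly. "
        "Return only the rewritten prompt."
    )

    def auto_rewrite(raw_prompt: str) -> str:
        resp = client.chat.completions.create(
            model="gpt-4o",
            messages=[
                {"role": "system", "content": REWRITE},
                {"role": "user", "content": raw_prompt},
            ],
        )
        return resp.choices[0].message.content

    improved = auto_rewrite("make me a study plan for calc 2")

If a wrapper like that ends up baked into every chat UI, the prompt-engineering curriculum of 2026 is dead weight by 2027.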
For the same reason that elementary schools don't allow calculators in math exams.
You first need to understand how to do the thing yourself.
It's not unrealistic to be selecting for people with strong language skills and the ability to break tasks into discrete components and assemble them into a process, or the skill of being able to define what they do not know.
A lot of what makes a person good with an LLM also makes them good at general problem solving.
Purdue, not necessarily uniquely but specific to its charter, does a really good job of focusing on workforce development in its engineering programs. It is very highly focused on staffing and training, and less so on the science and research part, though that exists as well.
This tracks with what I would expect and is in line with what I think best practice should be.
Perhaps the world is going the direction of relying on an AI to do half the things we use our own brains for today. But to me that sounds like a sad and worse future.
I’m just rambling here. But at the moment I fail to see how current LLMs help people truly learn things.
That's why you don't understand the dismissive comments. The reality is that the technology sucks for actually doing anything useful. Mandating that kids work with a poor tool just because it's trendy right now is the height of foolishness.
What percentage of students who graduated in 2025 have no idea what machine learning is?
Forget Attention Is All You Need and transformers. What percentage can't define machine learning? What percentage have no idea what the question even means? A highly non-trivial percentage.
ChatGPT prompting 101 would obviously be stupid but there is more than enough material to do a fantastic AI 101 class.
You need exposure to philosophical ideas because you need words to be able to think about and describe the similarities and differences between computed language output and a lived experience. You need evolutionary biology to understand that AI is not going to catch up to a billion years of evolutionary progress in the next 6 months. You need ethics because AI is an invitation to ruin yourself through cheating, bullshitting your responsibilities, and generally failing to consider that improving yourself takes work.
But none of that actually requires using AI, which is what makes me suspicious that I would not see eye to eye with Purdue.
What I suspect they're thinking is "every employer wants to hire AI-human centaur employees, so we better make sure our students are the best AI-human hybrids they can be, because otherwise there will be no employers who would want them."
When I heard that today, it sounded like self-serving partnership, and, frankly, incompetence.
Yeah, yeah, you who know better than everyone: you already know what they're going to teach from this press release, you already know it all, and that's why you have no use for AI.
With little apology for breaking the HN civility rules. "They did it first."
“License your chat history” - most of us wouldn’t have any takers, but someone like you might.
(And I say this as someone who is really not a fan of how LLMs are being presented to the world at large)
Not really; you're the one accelerating "reach and pace" based on hype, and you'd naively expect a more educated approach at institutions that educate.