We certainly improve productivity, but that is not necessarily good for humanity. Could be even worse.
e.g.: my company already expects less time for some tasks, given that they _know_ I'll probably use some AI to do them. Which means I can humanly handle more context in a given week if the metric is "labour", but you end up with your brain completely melted.
I think people will get more utility out of education programs that allow them to be productive with AI, at the expense of foundational knowledge
Universities have a different purpose and have been tone deaf about why students have actually used them for the last century: the corporate sector decided university degrees were necessary, despite 90% of the cross-disciplinary learning being irrelevant.
It's not the university's problem, and they will outlive this meme of catering to the middle class's upward mobility. They existed before and will exist after.
The university may never be the place for a human to hone the skill of being augmented with AI, but a trade school or bootcamp or other structured learning environment will be, for those not self-starting enough to sit through YouTube videos and trawl Discord servers.
I think this is really still up for debate
We produce more output certainly but if it's overall lower quality than previous output is that really "improved productivity"?
There has to be a tipping point somewhere, where faster output of low quality work is actually decreasing productivity due to the efforts now required to keep the tower of garbage from toppling
No shit. This is anecdotal evidence, but I was recently teaching a university CS class as a guest lecturer (at a somewhat below-average university), and almost all the students were basically copy-pasting task descriptions and error messages into ChatGPT in lieu of actually programming. No one seemed to even read the output, let alone be able to explain it. "Foundational skills" were near zero, as a result.
Anyway, I strongly suspect that this report is based on careful whitewashing and would reveal 75% cheating if examined more closely. But maybe there is a bit of sampling bias at play as well -- maybe the laziest students just never bother with anything but ChatGPT and Google Colab, while students using Claude have a little more motivation to learn something.
In the article, I guess this would be buried in
> Students also frequently used Claude to provide technical explanations or solutions for academic assignments (33.5%)—working with AI to debug and fix errors in coding assignments, implement programming algorithms and data structures, and explain or solve mathematical problems.
"Write my essay" would be considered a "solution for academic assignment," but by only referring to it obliquely in that paragraph they don't really tell us the prevalence of it.
(I also wonder if students are smart, and may keep outright usage of LLMs to complete assignments on a separate, non-university account, not trusting that Anthropic will keep their conversations private from the university if asked.)
I built a popular product that helps teachers with this problem.
Yes, it's "hard to answer", but let's be honest... it's a very very widespread problem. I've talked to hundreds of teachers about this and it's a ubiquitous issue. For many students, it's literally "let me paste the assignment into ChatGPT and see what it spits out, change a few words and submit that".
I think the issue is that it's so tempting to lean on AI. I remember long nights struggling to implement complex data structures in CS classes. I'd work on something for an hour before I'd have an epiphany and figure out what was wrong. But that struggling was ultimately necessary to really learn the concepts. With AI, I can simply copy/paste my code and say "hey, what's wrong with this code?" and it'll often spot it (never mind the fact that I can just ask ChatGPT "create a b-tree in C" and it'll do it). That's amazing in a sense, but also hurts the learning process.
Perhaps Claude is disproportionately marketed to the STEM crowd, and the business students are doing the same stuff using ChatGPT.
"AI bubble seems close to collapsing" in response to an article about AI being used as a study aid. Does not seem relevant to the actual content of the post at all, and you do not provide any proof or explanation for this statement.
"God knows how many billions have been invested", I am pretty sure it's actually not that difficult to figure out how much investor money has been poured into AI, and this still seems totally irrelevant to a blog post about AI being used as a study aid. Humans 'pour' billions of dollars into all sorts of things, some of which don't work out. What's the suggestion here, that all the money was wasted? Do you have evidence of that?
"We still don't have an actual use case for AI which is good for humanity"... What? We have a lot of use cases for AI, some of which are good for humanity. Like, perhaps, as a study aid.
Are you just typing random sentences into the HN comment box every time you are triggered by the mention of AI? Your post is nonsense.
In the end the willingness to struggle will set apart the truly great software engineer from the AI-crutched. Now of course this will most of the time not be rewarded: when a company looks at two people and sees “passable” code from both, but one is way more “productive” with it (the AI-crutched engineer), they’ll initially appreciate that one more.
But in the long run they won’t be able to explain the choices made when creating the software, we will see the retraction from this type of coding when the first few companies’ security falls apart like a house of cards due to AI reliance.
It’s basically the “instant gratification vs delayed gratification” argument but wrapped in the software dev box.
I think that's a bit telling on their motivations (esp. given their recent large institutional deals with universities).
I felt that during my time in university. I absolutely loved reading and working through dense math text books but the moment there was a time constraint the struggle turned into chaos.
"The real danger lies in their seductive nature - over how tempting it becomes to immediately reach for the LLM to provide an answer, rather than taking a few moments to quietly ponder the problem on your own. By reaching for it to solve any problem at nearly an instinctual level you are completely failing to cultivate an intrinsically valuable skill - that of critical reasoning."
They were the first to adopt digital word processing, presentations, printing and now generative AI, even though in essence all of these would have been disproportionately more hand in glove for the humanities on a purely functional level.
It's just a matter of comfort with, and interest in, technology.
Whenever we have a new technology there's a response "why do I need to learn X if I can always do Y", and more or less, it has proven true, although not immediately.
For instance, I'm not too concerned about my child's ability to write very legibly (most writing is done on computers), spell very well (spell check keeps us professional), read a map to get around (GPS), etc.
Not that these aren't noble things or worth doing, but they won't impact your life too much if you're not interested in penmanship, spelling, or cartography.
I believe LLMs are different (I am still stuck in the moral panic phase), but I think my children will have a different perspective (similar to how I feel about memorizing poetry and languages without garbage collection). So how do I answer my child when he asks "Why should I learn to do X if I can just ask an LLM and it will do it better than me"
I guess I'd qualify to you as someone "AI crutched" but I mostly use it for research and bouncing ideas (or code complete, which I've mentioned before - this is a great use of the tool and I wouldn't consider it a crutch, personally).
For instance, "parse this massive log output, and highlight anything interesting you see or any areas that may be a problem, and give me your theories."
Lots of times it's wrong. Sometimes it's right. Sometimes, its response gives me an idea that leads to another direction. It's essentially how I was using Google + Stack Overflow ten years ago - see your list of answers, use your intuition, knowledge, and expertise to find the one most applicable to you, continue.
This "crutch" is essentially the same one I've always used, just in different form. I find it pretty good at doing code review for myself before I submit something more formal, to catch any embarrassing or glaringly obvious bugs or incorrect test cases. I would be wary of the dev that refused to use tools out of some principled stand like this, just as I'd be wary of a dev that overly relied on them. There is a balance.
Now, if all you know are these tools and the workflow you described, yea, that's probably detrimental to growth.
People who spent the past two years offloading their entry-level work onto LLMs are now taking 400-level systems programming courses and running face-first into a capability wall. I try my best to help, but there's only so much I can do when basic concepts like structs and pointer manipulation get blank stares.
> "Oh, the foo field in that struct should be signed instead of unsigned."
< "Struct?"
> "Yeah, the type definition of Bar? It's right there."
< "Man, I had ChatGPT write this code."
> "..."
> We found that students primarily use Claude to create and improve educational content across disciplines (39.3% of conversations). This often entailed designing practice questions, editing essays, or summarizing academic material.
Sure, throwing a paragraph of an essay at Claude and asking it to turn it into a 3-page essay could have been categorized as "editing" the essay.
And it seems pretty naked the way they lump "editing an essay" in with "designing practice questions," which are clearly very different uses, even in the most generous interpretation.
I'm not saying that the vast majority of students do use AI to cheat, but I do want to say that, if they did, you could probably write this exact same article and tell no lies, and simply sweep all the cheating under titles like "create and improve educational content."
Do you google if 5 is less than 6 or do you just memorize that?
If you believe that creativity is not based on a foundation of memorization and experience (which is just memorization) you need to reflect on the connection between those.
I agree in principle - the process of problem solving is the important part.
However I think LLMs make you do more of this because of what you can offload to the LLM. You can offload the simpler things. But for the complex questions that cut across multiple domains and have a lot of ambiguity? You're still going to have to sit down and think about it. Maybe once you've broken it into sufficiently smaller problems you can use the LLM.
If we're worried about abstract problem solving skills, that doesn't really go away with better tools. It goes away when we aren't the ones using the tools.
The issue is that, when presented with a situation that requires writing legibly, spelling well, or reading a map, WITHOUT their AI assistants, they will fall apart.
The AI becomes their brain, such that they cannot function without it.
I'd never want to work with someone who is this reliant on technology.
Does my opinion count?
They will clearly recognize other kids who did not have an AI to talk with at that stage when curiosity really blossoms.
They never could be arsed to learn how to input their assignments into Wolfram Alpha. It was always the ux/ui effort that held them back.
Bullshit! You cannot do second order reasoning with a set of facts or concepts that you have to look up first.
Google Search made intuition and deep understanding and encyclopedic knowledge MORE important, not less.
People will think you are a wizard if you read documentation and bother to remember it, because they're still busy asking Google or ChatGPT while you're happily coding without pausing
That being said, I agree with you, if you just ask ChatGPT to write a b-tree implementation from scratch, then you have not learned anything. So like all things in academia, AI can be used to foster education or cheat around it. There's been examples of these "cheats" far before ChatGPT or Google existed.
Like, Socrates may have been against writing because he thought it made your memory weak, but at least I, an individual, am perfectly capable of manufacturing my own writing implements with a modest amount of manual labor and abundantly-available resources (carving into wood, burning wood into charcoal to write on stone, etc.). But I ain't perfectly capable of doing the same to manufacture an integrated circuit, let alone a digital calculator, let alone a GPU, let alone an LLM. Anyone who delegates their thought to a corporation is permanently hitching their fundamental ability to think to this wagon.
Look, I agree with you, I'm just trying to articulate to someone why they should learn X if they believe an LLM could help them, and "an LLM won't always be around" isn't a good argument, because let's be honest, it likely will. This is the same thing as "you won't walk around all day with a calculator in your pocket so you need to learn math".
While it can be useful to use LLMs as a tutor when you're stuck, the moment you use one to provide a solution, you stop learning and the tool becomes a required stepping stone.
It's sort of the mental analog of weight training. The only way to get better at weightlifting is to actually lift weight.
So if you use them at that level you may learn the concepts at hand, but you won't learn _how to struggle_ to come up with novel answers. Then later in life when you actually hit problem domains that the LLM wasn't trained in, you'll not have learned the thinking patterns needed to persist and solve those problems.
Is that necessarily a bad thing? It's mixed:
- You lower the bar for entry for a certain class of roles, making labor cheaper and problems easier to solve at that level.
- For more senior roles that are intrinsically solving problems without answers written in a book or a blog post somewhere, you need to be selective about how you evaluate the people who are ready to take on that role.
It's like taking the college weed out classes and shifting those to people in the middle of their career.
Individuals who can't make the cut will find themselves stagnating in their roles (but it'll also be easier for them to switch fields). Those who can meet the bar might struggle but can do well.
Business will also have to come up with better ways to evaluate candidates. A resume that says "Graduated with a degree in X" will provide less of a signal than it did in the past
But when the answer flows out of thin air right in front of you with AI, you get the "oh duh" or "that makes sense" moments and not the "a-ha" moment that ultimately sticks with you.
Now does everything need an "a-ha" moment? No.
However, I think core concepts and fundamentals need those "a-ha" moments to build a solid and in-depth foundation of understanding to build upon.
The other part, I imagine, was largely entertainment and social, and memory is a good skill to build.
What I don't like are all the hidden variables in these systems. Even GPS, for example, is making some assumptions about what kind of roads you want to take and how to weigh different paths. LLMs are worse in this regard because the creators encode a set of moral and stylistic assumptions/dictates into the model and everybody who uses it is nudged into that paradigm. This is destructive to any kind of original thought, especially in an environment where there are only a handful of large companies providing the models everyone uses.
Yes, but that horse has long ago left the barn.
I don't know how to grow crops, build a house, tend livestock, make clothes, weld metal, build a car, build a toaster, design a transistor, make an ASIC, or write an OS. I do know how to write a web site. But if I cede that skill to an automated process, then that is the feather that will break the camel's back?
The history of civilization is the history of specialization. No one can re-build all the tools they rely on from scratch. We either let other people specialize, or we let machines specialize. LLMs are one more step in the latter.
The Luddites were right: the machinery in cotton mills was a direct threat to their livelihood, just as LLMs are now to us. But society marches on, textile work has been largely outsourced to machines, and the descendants of the Luddites are doctors and lawyers (and coders). 50 years from now the career of a "coder" will evoke the same historical quaintness as does "switchboard operator" or "wainwright."
It’s not that LLMs are particularly different; it’s that people are less able to determine when they are messing up. A search engine fails and you notice; an LLM fails and your boss, customer, etc. notices.
So if you're not bothering to learn how to farm, dress some wild game, etc, chances are this argument won't be convincing for "why should I learn calculus"
A human being should be able to change a diaper, plan an invasion, butcher a hog, conn a ship, design a building, write a sonnet, balance accounts, build a wall, set a bone, comfort the dying, take orders, give orders, cooperate, act alone, solve equations, analyze a new problem, pitch manure, program a computer, cook a tasty meal, fight efficiently, die gallantly. Specialization is for insects.
If the simpler thing in question is a task you've already mastered, then you're not losing much by asking an LLM to help you with it. If it's not trivial to you though, then you're missing an opportunity to learn.
This may eventually apply to all human labor.
I was thinking, even if they pass laws to mandate companies employ a certain fraction of human workers... it'll be like it already is now: they just let AI do most of the work anyway!
I sympathize, but it's impossible to remove all struggle from life. It's better in the long run to work through this than try to avoid it.
The problem with that take is this: it was never about the act of writing. What we lose, if we cut humans out of the equation, is writing as a proxy for what actually matters, which is thinking.
You'll soon notice the downsides of not-thinking (at scale!) if you have a generation of students who weren't taught to exercise their thinking by writing.
I hope that more people come around to this way of seeing things. It seems like a problem that will be much easier to mitigate than to fix after the fact.
A little self-promo: I'm building a tool to help students and writers create proof that they have written something the good ol fashioned way. Check it out at https://itypedmypaper.com and let me know what you think!
In my circle, I can't name a single person who doesn't heavily use these tools for assignments.
What's fascinating, though, is that the most cracked CS students I know deliberately avoid using these tools for programming work. They understand the value in the struggle of solving technical problems themselves. Another interesting effect: many of these same students admit they now have more time for programming and learning they “care about” because they've automated their humanities, social sciences, and other major requirements using LLMs. They don't care enough about those non-major courses to worry about the learning they're sacrificing.
GPT3 was pretty ass - yet some students would look you dead in the eyes with that slop and claim it as their own. Fast forward to last year when I complimented a student on his writing and he had to stop me - “bro this is all just AI.”
I’ve used AI to help build out frameworks for essays and suggest possible topics and it’s been quite helpful. I prefer to do the writing myself because the AIs tend to take very bland positions. The AIs are also great at helping me flesh out my writing. I ask “does this make sense” and it tells me patiently where my writing falls off the wagon.
AI is a game changer in a big way. Total paradigm shift. It can now take you 90% of the way with 10% of the effort. Whether this is good or bad is beyond my pay grade. What I can say is that if you are not leveraging AI, you will fall behind those that are.
I use Claude, a lot. I’ll upload the slides and ask questions. I’ve talked to Claude for hours trying to break down a problem. I think I’m learning more. But what I think might not be what’s happening.
In one of my machine learning classes, cheating is a huge issue. People are using LMs to answer multiple choice questions on quizzes that are on the computer. The professors somehow found out students would close their laptops without submitting, go out into the hallway, and use a LM on their phone to answer the questions. I’ve been doing worse in the class and chalked it up to it being grad level, but now I think it’s the cheating.
I would never cheat like that, but when I’m stuck and use Claude for a hint on the HW, am I losing neurons? The other day I used Claude to check my work on a graded HW question (breaking down a binary packet) and it caught an error. I did it on my own before and developed some intuition, but would I have learned more if I had submitted that and felt the pain of losing points?
Even in self-study, where the solutions are at the back of the text, we've probably all had the temptation to give up and just flip to the answer. Anthropic would be more responsible to admit that the solution manual to every text ever made is now instantly and freely available. This has to fundamentally change pedagogy. No discipline is safe, not even those like music where you might think the end performance is the main thing (imagine a promising, even great, performer who cheats themselves in the education process by offloading any difficult work in their music theory class to an AI, coming away learning essentially nothing).
P.S. There is also the issue of grading on a curve in the current "interim" period where this is all new. Assume a lazy professor, or one refusing to adopt any new kind of teaching/grading method: the "honest" students have no incentive to do it the hard way when half the class is going to cheat.
People who can't do simple addition and multiplication without a calculator (12*30 or 23 + 49) are absolutely at a disadvantage in many circumstances in real life and I don't see how you could think this isn't true. You can't work as a cashier without this skill. You can't play board games. You can't calculate tips or figure out how much you're about to spend at the grocery store. You could pull out your phone and use a calculator in all these situations, but people don't.
All school work must be done within the walls of the school.
What are we teaching our children? It’s ok to do more work at home?
There are countries that have no homework and they do just fine.
Basically, a student's marks depend mostly (only?) on what they can do in a setting where AI is verifiably unavailable. It means less class time for instruction, but students have a tutor in their pocket anyway.
I've also talked with a bunch of teachers and a couple admins about this. They agree it's a huge problem. By the same token, they are using AI to create their lesson plans and assignments! Not fully of course, they edit the output using their expertise. But it's funny to imagine AI completing an AI assignment with the humans just along for the ride.
The point is, if you actually want to know what a student is capable of, you need to watch them doing it. Assigning homework has lost all meaning.
IMO it's quite different than using a calculator or any other tool. It can currently completely replace the human in the loop, whereas with other tools they are generally just a step in the process.
Of course the problem is the much lower barrier for that to turn into cutting corners or full on cheating, but always remember it ultimately hurts you the most long term.
Because understanding how addition works is instrumental to understanding more advanced math concepts. And being able to perform simple addition quickly, without a calculator is a huge productivity boost for many tasks.
In the world of education and intellectual development it's not about getting the right answer as quickly as possible. It's about mastering simple things so that you can understand complicated things. And often times mastering a simple thing requires you to manually do things which technology could automate.
Flipped classroom is just having the students give lectures, instead of the teacher.
> Basically, a student's marks depend mostly (only?) on what they can do in a setting where AI is verifiably unavailable.
This is called "proctored exams" and it's been pretty common in universities for a few centuries.
None of this addresses the real issue, which is whether teachers should be preventing students from using AIs.
I'm the polar opposite. And I'm an AI researcher.
The reason you can't answer your kid when he asks about LLMs is because the original position was wrong.
Being able to write isn't optional. It's a critical tool for thought. Spelling is very important because you need to avoid confusion. If you can't spell no spell checker can save you when it inserts the wrong word. And this only gets far worse the more technical the language is. And maps are crucial too. Sometimes, the best way to communicate is to draw a map. In many domains like aviation maps are everything, you literally cannot progress without them.
LLMs are no different. They can do a little bit of thinking for us and help us along the way. But we need to understand what's going on to ask the right questions and to understand their answers.
On the other hand, my Master 2 students, most of whom learned scripting in the previous year, can't even split a project into multiple files after having it explained multiple times. Some have more knowledge and ability than others, but a significant fraction is just copy-pasting LLM output to solve whatever is asked of them instead of trying to do it themselves, or asking questions.
Why not? I mean that, quite literally.
I don't know how to make an ASIC, and if I tried to write an OS I'd probably fail miserably many times along the way but might be able to muddle through to something very basic. The rest of that list is certainly within my wheelhouse even though I've never done any of those things professionally.
The peer commenter shared the Heinlein quote, but there's really something to be said for /society/ of being peopled by well-rounded individuals that are able to competently turn themselves to many types of tasks. Specialization can also be valuable, but specialization in your career should not prevent you from gaining a breadth of skills outside of the workplace.
I don't know how to do any of the things in your list (including building a web site) as an /expert/, but it should not be out of the realm of possibility or even expectation that people should learn these things at the level of a competent amateur. I have grown a garden, I have worked on a farm for a brief time, I've helped build houses (Habitat for Humanity), I've taken a hobbyist welding class and made some garish metal sculptures, I've built a race car and raced it, and I've never built a toaster but I have repaired one (they're actually very electrically and mechanically simple devices). Besides the disposable income to build a race car, nothing on that list stands out to me as unachievable by anyone who chooses to do so.
The (as yet unproven) argument for the use of AIs is that using AI to solve simpler problems allows us humans to focus on the big picture, in the same way that letting a calculator solve arithmetic gives us flexibility to understand the math behind the arithmetic.
No one knows if that's true. We're running a grand experiment: the next generation will either surpass us in grand fashion using tools that we couldn't imagine, or will collapse into a puddle of ignorant consumerism, a la Wall-E.
1. Get people interested in my topics and removing fears and/or preconceived notions about whether it is something for them or not
2. Teach students general principles and the ability to go deeper themselves when and if it is needed
3. Give them the ability to apply the learned principles/material in situations they encounter
I think removing fear and sparking interest is a precondition for the other two. And if people are interested they want to understand it and then they use AI to answer questions they have instead of blindly letting it do the work.
And even before AI you would have students who thought they did themselves favours by going a learn-and-forget route or cheating. AI just makes it a little easier to do just that. But in any pressure situation, like a written assignment under supervision, it will come to light anyway whether someone knows their shit or not.
Now I have the luck that the topics I teach (electronics and media technology) are very applied anyways, so AI does not have a big impact as of now. Not being able to understand things isn't really an option when you have to use a mixing desk in a venue with a hundred people or when you have to set up a tripod without wrecking the 6000€ camera on top.
But I generally teach people who are in it for the interest and not for some prestige that comes with having a BA/MA. I can imagine this is quite different in other fields where people are in it for the money or the prestige.
> The issue is that, when presented with a situation that requires writing legibly, spelling well, or reading a map, WITHOUT their AI assistants, they will fall apart.
The parent poster is positing that for 90% of cases they WILL have their AI assistant, because it's in their pocket, just like a calculator. It's not insane to think that, and it's a fair point to ponder.
In fact, I've done a lot more thinking and had a lot more insights from talking than from writing.
Writing can be a useful tool to help with rigorous thinking. In my opinion, it is mostly about augmenting the author's effective memory to be larger and more precise.
I'm sure the same effect could be achieved by having AI transcribe a conversation.
Reminds me of the Nate Bargatze set where he talks about how, if he were a time traveler to the past, he wouldn't be able to prove it to anyone. The skills most of us have require this supply chain, and then we apply them at the very end. I'm not sure anyone in 1920 cares about my binary analysis skills.
I think the prevalence of these AI writing bots means schools will have to start doing things that aren’t scalable: in-class discussions, in-person writing (with pen and paper or locked down computers), way less weight given to remote assignments on Canvas or other software. Attributing authorship from text alone (or keystroke patterns) is not possible.
Children will lack the critical thinking for solving complex problems, and even worse, won't have the work ethic for dealing with the kinds of protracted problems that occur in the real world.
But maybe that's by design. I think the ownership class has decided productivity is more important than societal malaise.
In my opinion this is not true. Writing is a form of communicating ideas. Structuring and communicating ideas with others is really important, not just in written contexts, and it needs to be trained.
Maybe the way universities do it is not great, but writing in itself is important.
It's been my experience that LLMs are only better than me at stuff I'm bad at. It's noticeably worse than me at things I'm good at. So the answer to your question depends: can your child get good at things while leaning on an LLM?
I don't know the answer to this. Maybe schools need to expect more from their students with LLMs in the picture.
The problem is that there's a conflict of interest here. The extreme case proves it--leaving aside the feasibility of it, what if the only solution is a total ban on AI usage in education? Anthropic could never sanction that.
Not quite. Flipped classroom means more instruction outside of class time and less homework.
> This is called "proctored exams" and it's been pretty common in universities for a few centuries. None of this addresses the real issue
Proctored exams are part of it. In-class assignments are another. Asynchronous instruction is another.
And yes, it addresses the issue. Students can use AI however they see fit, to learn or to accomplish tasks or whatever, but for actual assessment of ability they cannot use AI. And it leaves the door open for "open-book" exams where the use of AI is allowed, just like a calculator and textbook/cheat-sheet is allowed for some exams.
I started way back in my 20s just figuring out how to write websites. I'm not sure where the camel's back would have broken.
It has, of course, been convenient to be able to "bootstrap" my self-reliance in these and other fields by consuming goods produced by others, but there is no mechanical reason that said goods should be provided by specialists rather than advanced generalists beyond our irrational social need for maximum acceleration.
This is not conjecture by the way. As a TA, I have observed that half of the undergraduate students lost the ability to write any code at all without the assistance of LLMs. Almost all use ChatGPT for most exercises.
Thankfully, cheating technology is advancing at a similarly rapid pace. Glasses with integrated cameras, WiFi and heads-up display, smartwatches with polarized displays that are only readable with corresponding glasses, and invisibly small wireless ear-canal earpieces to name just a few pieces of tech that we could have only dreamed about back then. In the end, the students stay dumb, but the graduation rate barely suffers.
I wonder whether pre-2022 degrees will become the academic equivalent to low-background radiation steel: https://en.wikipedia.org/wiki/Low-background_steel
Does that actually work? I'm long past having easy access to college programming assignments, but based on my limited interaction with ChatGPT I would be absolutely shocked if it produced output that was even coherent, much less working code given such an approach.
Torturing students with five paragraph essays, which is what “learning” looks like for most American kids, is not that great and isn’t actually teaching critical thinking which is most valuable. I don’t know any other form of writing that is like that.
Reading “themes” into books that your teacher is convinced are there. Looking for 3 quotes to support your thesis (which must come in the intro paragraph, but not before the “hook” which must be exciting and grab the reader’s attention!).
And what happens to those coders? For that matter--what happens to all the other jobs at risk of being replaced by AI? Where are all the high paying jobs these disenfranchised laborers will flock to when their previous careers are made obsolete?
We live in a highly specialized society that requires people take out large loans to learn the skills necessary for their careers. You take away their ability to provide their labor, and it now seriously threatens millions of workers from obtaining the same quality of life they once had.
I seriously oppose such a future, and if that makes me a Luddite, so be it.
So, I would say that while I wouldn't fully dispute your claim that attributing authorship from text alone is impossible, it isn't yet totally clear one way or the other (to us, at least -- would welcome any outside research).
Long-term -- and that's long-term in AI years ;) -- gaze tracking and other biometric tracking will undoubtedly be necessary. At some point in the near future, many people will be wearing agents inside earbuds that are not obvious to the people around them. That will add another layer of complexity that we're aware of. Fundamentally, it's more about creating evidence than creating proof.
We want to give writers and students the means to create something more detailed than they would get from a chatbot out-of-the-box, so that mimicking the whole act of writing becomes more complicated.
The quality of CS/Software Engineering programs varies that much.
Seeing how the world is based around consumerism, this future seems more likely.
HOWEVER, we can still course correct. We need to organize, and get the hell off social media and the internet.
Some will manage to remain in their field, most won't.
> Where are all the high paying jobs these disenfranchised laborers will flock to when their previous careers are made obsolete?
They don't exist. Instead they'll take low-paying jobs that can't (yet) be automated. Maybe they'll work in factories [1].
> I seriously oppose such a future, and if that makes me a Luddite, so be it.
Like I said, the Luddites were right, in the short term. In the long term, we don't know. Maybe we'll live in a post-scarcity Star Trek world where human labor has been completely devalued, or maybe we'll revert to a feudal society of property owners and indentured servants.
[1] https://www.newsweek.com/bessent-fired-federal-workers-manuf...
Given what I know of human nature, this seems improbable.
- LLMs are good enough to zero or few-shot most business questions and assignments, so n.questions is low VS other tasks like writing a codebase.
- Form factor (biased here); maybe threads-only aren't best for business analysis?
Surely this is sarcasm, but really your average schoolteacher is now a C student Education Major.
"productive struggle" is essential, I think, and it's hard to tease that out of models that are designed to be as immediately helpful as possible.
Being a well-rounded individual is great, but that's an orthogonal issue to the question of outsourcing our skills to machinery. When you were growing crops, did you till the land by hand or did you use a tractor? When you were making clothes did you sew by hand or use a sewing machine? Who made your sewing needles?
The (dubious) argument for AI is that using LLMs to write code is the same as using modern construction equipment to build a house: you get the same result for less effort.
>or maybe we'll revert to a feudal society of property owners and indentured servants.
We as the workers in society have the power to see that this doesn't happen. We just need to organize. Unionize. Boycott. Organize with people in your community to spread worker solidarity.
Mental math is essential for having strong numerical fluency, for estimation, and for reasoning about many systems. Those skills are incredibly useful for thinking critically about the world.
People who organize tend to be the people who are most optimistic about change. This is for a reason.
The clueless educational institutions will simply try to fight it, like they tried to fight copy/pasting from Google and like they probably fought calculators.
I’d also have sessions / days where I don’t use AI at all.
Use it or lose it. Your brain, your ability to persevere through hard problems, and so on.
All the things you mention have a certain objective quality that can be reduced to an approachable minimum. A house could be a simple cabin, a tent, a cave; a piece of cloth could just be a cape; metal can be screwed, glued or cast; a transistor could be a relay or a wooden mechanism etc. ...history tells us all that.
I think when there's a Homo ludens that wants to play, or when there's a Homo economicus that wants us to optimize, there might be one that separates the process of learning from adaptation (Homo investigans?)[0]. The process of learning something new could be such a subjective property that keeps a yet unknown natural threshold which can't be lowered (or "reduced") any further. If I were to be overly pessimistic, a hardcore luddite, I'd say that this species is under attack, and there will be a generation that lacks this aspect, but also won't miss it, because this character could have never been experienced in the first place.
[0]: https://en.wikipedia.org/wiki/Names_for_the_human_species#Li...
> Students primarily use AI systems for creating...
> Direct conversations, where the user is looking to resolve their query as quickly as possible
Aka cheating.
The biology of the human brain will not change as a result of these LLMs. We are imperfect and will tend to take the easiest route in most cases. Having an "all powerful" tool that can offload the important work of figuring out tough problems seems like it will lead to a society less capable in solving complex problems.
Don't ask me what the solution is. Maybe your product does it. If I knew, I'd be making a fortune selling it to universities.
Sure. Works in my IDE. "Create a linked list implementation, use that implementation in a method to reverse a linked list and write example code to demonstrate usage".
Working code in a few seconds.
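For context, the output is roughly this shape (a minimal sketch in Python, since the language wasn't specified above; the actual generated code will vary from run to run):

    # Singly linked list with a reverse() method, plus a small demo -
    # roughly what an assistant returns for the prompt quoted above.
    class Node:
        def __init__(self, value, next=None):
            self.value = value
            self.next = next

    class LinkedList:
        def __init__(self):
            self.head = None

        def append(self, value):
            # Add a value to the end of the list.
            node = Node(value)
            if self.head is None:
                self.head = node
                return
            cur = self.head
            while cur.next:
                cur = cur.next
            cur.next = node

        def reverse(self):
            # Reverse the list in place by flipping the next pointers.
            prev = None
            cur = self.head
            while cur:
                nxt = cur.next
                cur.next = prev
                prev = cur
                cur = nxt
            self.head = prev

        def to_list(self):
            out = []
            cur = self.head
            while cur:
                out.append(cur.value)
                cur = cur.next
            return out

    # Example usage
    lst = LinkedList()
    for v in (1, 2, 3, 4):
        lst.append(v)
    lst.reverse()
    print(lst.to_list())  # [4, 3, 2, 1]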
I'm very glad I didn't have access to anything like that when I was doing my CS degree.
It's also a quote from a character who's literally immortal and so has all the time in the world to learn things, which really undermines the premise.
Currently, I view LLMs as huge enablers. They helped me create a side-project alongside my primary job, and they make development and almost anything related to knowledge work more interesting. I don't think they made me think less; rather, they made me think a lot more, work more, and absorb significantly more information. But I am a senior, motivated, curious, and skilled engineer with 15+ years of IT, Enterprise Networking, and Development experience.
There are a number of ways one can use this technology. You can use it as an enabler, or you can use it for cheating. The education system needs to adapt rapidly to address the challenges that are coming, which is often a significant issue (particularly in countries like Hungary). For example, consider an exam where you are allowed to use AI (similar to open-book exams), but the exam is designed in such a way that it is sufficiently difficult, so you can only solve it (even with AI assistance) if you possess deep and broad knowledge of the domain or topic. This is doable. Maybe the scoring system will be different, focusing not just on whether the solution works, but also on how elegant it is. Or, in the Creator domain, perhaps the focus will be on whether the output is sufficiently personal, stylish, or unique.
I tend to think current LLMs are more like tools and enablers. I believe that every area of the world will now experience a boom effect and accelerate exponentially.
When superintelligence arrives—and let's say it isn't sentient but just an expert system—humans will still need to chart the path forward and hopefully control it in such a way that it remains a tool, much like current LLMs.
So yes, education, broad knowledge, and experience are very important. We must teach our children to use this technology responsibly. Because of this acceleration, I don't think the age of AI will require less intelligent people. On the contrary, everything will likely become much more complex and abstract, because every knowledge worker (who wants to participate) will be empowered to do more, build more, and imagine more.
I guess you can apply similar mechanics to reports. Some deeper questions and you will know if the report was self written or if an AI did it.
There's something irreplaceable about the absoluteness of words on paper and the decisions one has to do to write them out. Conversational speak is, almost by definition, more relaxed and casual. The bar is lower and as such, the bar for thoughts is lower, in order of ease of handwaving I think it goes: mental, speech, writing.
Furthermore there's the concept of editing, which I'm unsure how to carry out conversationally in a graceful manner. Being able to revise words, delete, move around, can't be done with conversation unless you count "forget I said that, it's actually more like this..." as suitable.
The fact that you can ask it for a solution for exactly the context you're interested in is amazing and traditional learning doesn't come close in terms of efficiency IMO.
If education (schools) were to adopt a teaching-AI (one that will give them the solution, but at least asks a bunch of questions first), maybe there is some hope.
Not to mention discernment and info literacy when you do need to go to the web to search for things. AI content slop has put everybody who built these skills on the back foot again, of course.
No, you see a plausible set of tokens that appear similar to how it's done, and as a beginner, you're not able to tell the difference between a good example and something that is subtly wrong.
So you learn something, but it's wrong. You internalize it. Later, it comes back to bite you. But OpenAI keeps the money for the tokens. You pay whether the LLM is right or not. Sam likes that.
Grandma will not be able to implement a simple add function in Python by asking ChatGPT and copy-pasting.
Universities aren’t here to hold your hand and give you a piece of paper. They’re here to build skills. If you cheat, you don’t build the skills, so the piece of paper is now worthless.
The only reason degrees mean anything is because the institutions behind them work very hard to make sure the people earning them know what they’re doing.
If you can’t research and write an essay and you have to “copy/paste” from Google, the reality is you’re probably a shit writer and a shit researcher. So if we just give those people degrees anyway, then suddenly so-called professionals are going to flounder. And that’s not good for them, or for me, or for society as a whole.
That’s the key here that people are missing. Yeah cheating is fun and yeah it’s the future. But if you hire a programmer, and they can’t program, that’s bad!
And before I hear something about “leveling up” skills. Nuh-uh, it doesn’t work that way. Skills are built on each other. Shortcuts don’t build skills, they do the opposite.
Using ChatGPT to pass your Java class isn’t going to help you become a master C++ day trading programmer. Quite the opposite! How can you expect to become that when you don’t know what the fuck a data type is?
We use calculators, sure. We use Google, sure. But we teach addition first. Using the most overpowered tool for block number 1 in the 500 foot tall jenga tower is setting yourself up for failure.
Keeping the curriculum fixed, there's already barely enough time to cover everything. Cutting the amount of lectures in half to make room for in-class homework time does not fix this fundamental problem.
It won't be long 'til we're at the point that embodied AI can be used for scalable face-to-face assessment that can't be cheated any easier than a human assessor.
I encourage you to take action to prove to yourself that real change is possible.
What you can do in your own life to enact change is hard to say, given I know nothing about your situation. But say you are a parent, you have control over how often your children use their phones, whether they are on social media, whether they are using ChatGPT to get around doing their homework. How we raise the next generation of children will play an important role in how prepared they are to deal with the consequences of the actions we're currently making.
As a worker you can try to organize to form a union. At the very least you can join an organization like the Democratic Socialists of America. Your ability to organize is your greatest strength.
Sure, I should probably practice benching 150lbs. That would be a good challenge for me and I would benefit from that experience. But 300lbs would crush me.
1. Dump the whole textbook into Gemini, along with various syllabi/learning goals.
2. (Carefully) Prompt it to create Anki flashcards to meet each goal.
3. Use Anki (duh).
4. Dump the day's flashcards into a ChatGPT session, turn on voice mode, and ask it to quiz me.
Then I can go about my day answering questions. The best part is that if I don't understand something, or am having a hard time retaining some information, I can immediately ask it to explain - I can start a whole side tangent conversation deepening my understanding of the knowledge unit in the card, and then go right back to quizzing on the next card when I'm ready.
It feels like a learning superpower.
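To make step 2 a bit more concrete: once the model has handed back (question, answer) pairs, something like the sketch below (Python, using the third-party genanki library; the library choice and all names here are my own assumptions, not part of the workflow above) can turn them into a deck you can import for step 3:

    # Sketch: convert LLM-generated Q/A pairs into an importable Anki deck.
    # Assumes the pairs were already extracted from the model's response.
    import genanki

    model = genanki.Model(
        1607392319,  # arbitrary but stable model id
        'LLM Study Card',
        fields=[{'name': 'Question'}, {'name': 'Answer'}],
        templates=[{
            'name': 'Card 1',
            'qfmt': '{{Question}}',
            'afmt': '{{FrontSide}}<hr id="answer">{{Answer}}',
        }],
    )

    deck = genanki.Deck(2059400110, 'Textbook - Chapter 3')  # placeholder deck id/name

    # qa_pairs would come from the Gemini/ChatGPT step; hard-coded here.
    qa_pairs = [
        ('What problem does a B-tree solve?',
         'Keeping lookups, inserts and deletes logarithmic on disk-friendly nodes.'),
    ]
    for question, answer in qa_pairs:
        deck.add_note(genanki.Note(model=model, fields=[question, answer]))

    genanki.Package(deck).write_to_file('chapter3.apkg')  # import this file into Anki

Either way, it's worth eyeballing the generated cards before drilling them, to make sure nothing was hallucinated.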
Academia needs to embrace this concept and not try to fight it. AI is here, it's real, it's going to be used. Let's teach our students how to benefit from its (ethical) use.
And even at work, the coworkers that don’t have a lot of general knowledge seem to work a lot harder and get less done because it takes them so much longer to figure things out.
So I don’t know… is avoiding the work of learning worth it to struggle at life more?
Let AI generate a short novel. The student is tasked to read it and criticize what's wrong with it. This requires focus and advanced reading comprehension.
Show 4 AI-generated code solutions. Let the student explain which one is best and why.
Show 10 AI-generated images and let art students analyze flaws.
And so on.
I think all of the above do one thing brilliantly: build self-confidence.
It's easy to get bullshitted if what you're able to hold in your head is effectively nothing.
This is something I see with other tools. Some people get highly dependent on things like advanced IDE features and don't care to learn how they actually work. That works fine most of the time but if they hit a subtle edge case they are dead in the water until someone else bails them out. In a complicated domain there are always edge cases out there waiting to throw a wrench in things.
And those people are wrong, in a similar way to how it's wrong to say: "There's no point in having very much RAM, as you can just page to disk."
It's the cognitive equivalent of becoming morbidly obese (another popular decision in today's world).
And I can tell you from experience that "letting a calculator solve arithmetic" (or more accurately, being dependent on a calculator to solve arithmetic) means you cripple your ability to learn and understand more advanced stuff. At best your decision turned you into the equivalent of a computer trying to run a 1GB binary with 8MB of RAM and a lot of paging.
> No one knows if that's true. We're running a grand experiment: the next generation will either surpass us in grand fashion using tools that we couldn't imagine, or will collapse into a puddle of ignorant consumerism, a la Wall-E.
It's the latter. Though I suspect the masses will be shoved into the garbage disposal rather than be allowed to wallow in ignorant consumerism. Only the elite that owns the means of production will be allowed to indulge.
As to writing, just the action of writing something down with a pen, on paper, has been proven to be better for memorization than recording it on a computer [1].
If we're not teaching these basic skills because an LLM does it better, how do we learn to be skeptical of the output of the LLM? How do we validate it?
How do we bolster ourselves against corporate influences when asking which of 2 products is healthier? How do we spot native advertising? [2]
[0]: https://www.nature.com/articles/531573a
[1]: https://www.sciencedirect.com/science/article/abs/pii/S00016...
[2]: Example: https://www.nytimes.com/paidpost/netflix/women-inmates-separ...
I’m still garbage at arithmetic, especially mental math, and it really hasn’t inhibited my career in any way.
this is a smooth way to not say "cheat" in the first paragraph and to reframe creativity in a way that reflects positively on llm use. in fairness they then say
> This raises questions about ensuring students don’t offload critical cognitive tasks to AI systems.
and later they report
> nearly half (~47%) of student-AI conversations were Direct—that is, seeking answers or content with minimal engagement. Whereas many of these serve legitimate learning purposes (like asking conceptual questions or generating study guides), we did find concerning Direct conversation examples including:
> - Provide answers to machine learning multiple-choice questions
> - Provide direct answers to English language test questions
> - Rewrite marketing and business texts to avoid plagiarism detection
kudos for addressing this head on. the problem here, and the reason these are not likely to be democratizing but rather wedge technologies, is not that they make grading harder or violate principles of higher education but that they can disable people who might otherwise learn something
Using ChatGPT doesn't dumb down your students. Not knowing how it works and where to use it does. Don't do silly textbook challenges for exams anymore - reestablish a culture of scientific innovation!
I don't know. I really feel like the auto-correct features are out to get me. So many times I want to say "in" yet it gets corrected to "on", or vice-versa. I also feel like it does the same to me with they're/their/there. Over the past several iOS/macOS updates, I feel like I've either gotten dumber and no longer do english gooder, or I'm getting tagged by predictive text nonsense.
You need to know how to do things so you know when the AI is lying to you.
On the one hand, I've caught some students red handed (ChatGPT generated their exact solution and they were utterly unable to explain the advanced Python that was in their solution) and had to award them 0s for assignments, which was heartbreaking. On the other, I was pleasantly surprised to find that most of my students are not using AI to generate wholesale their submissions for programming assignments--or at least, if they're doing so, they're putting in enough work to make it hard for me to tell, which is still something I'd count as work which gets them to think about code.
There is the more difficult matter, however, of using AI to work through small-scale problems, debug, or explain. On the view that it's kind of analogous to using StackOverflow, this semester I tried a generative AI policy where I give a high-level directive: you may use LLMs to debug or critique your code, but not to write new code. My motivation was that students are going to be using this tech anyway, so I might as well ask them to do it in a way that's as constructive for their learning process as possible. (And I explained exactly this motivation when introducing the policy, hoping that they would be invested enough in their own learning process to hear me.) While I still do end up getting code turned in that is "student-grade" enough that I'm fairly sure an LLM couldn't have generated it directly, I do wonder what the reality of how they really use these models is. And even if they followed the policy perfectly, it's unclear to me whether the learning experience was degraded by always having an easy and correct answer to any problem just a browser tab away.
Looking to the future, I admit I'm still a bit of an AI doomer when it comes to what it's going to do to the median person's cognitive faculties. The most able LLM users engage with them in a way that enhances rather than diminishes their unaided mind. But from what I've seen, the more average user tends to want to outsource thinking to the LLM in order to expend as little mental energy as possible. Will AI be so good in 10 years that most people won't need to really understand code with their unaided mind anymore? Maybe, I don't know. But in the short term I know it's very important, and I don't see how students can develop that skill if they're using LLMs as a constant crutch. I've often wondered if this is like what happened when writing was introduced, and capacity for memorization diminished as it became no longer necessary to memorize epic poetry and so on.
I typically have term projects as the centerpiece of the student's grade in my courses, but next year I think I'm going to start administering in-person midterms, as I fear that students might never internalize fundamentals otherwise.
1. You won’t always have an LLM. It’s the same reason I still have at least my wife’s phone number memorized.
2. So you can learn to do it better. See point 1.
I wasn’t allowed to use calculators in first and second grade when memorizing multiplication tables, even though a calculator could have finished the exercise faster than me. But I use that knowledge to this day, every day, and often I don’t have a calculator (my phone) handy.
It’s what I tell my kids.
Your post is based on the misguided idea that students actually care about some basic quality of their work.
I think most farmers would be somewhat capable on most of that list. Equations for farm production. Programming tractor equipment. Setting bones. Giving and taking orders. Building houses and barns.
Building a single-story building isn't that difficult, just time-consuming. Especially nowadays with YouTube videos and ready-made plans.
I would double check every card at the start though, to make sure it didn't hallucinate anything that you then cram in your brain.
Au contraire! It is quite wrong and was wrong then too. "Rote memorisation" is a slur for knowledge. Knowledge is still important.
Knowledge is the basis for skill. You can't have skill or understanding without knowledge, because knowledge is illustrative (it gives examples) and provides context. You can know abstract facts like "addition is abelian", but that is meaningless if you can't add. You can't actually program if you don't know the building blocks of code. You can't write a C program if you have to look up the function signature of read(2) and write(2) every time you need to use them.
You don't always have access to Google, and its results have declined precipitously in quality in recent years. Someone relying on Google as their knowledge base will be kicking themselves today, I would claim.
It is a bit like saying you don't need to learn how to do arithmetic because of calculators. It misses that learning how to do arithmetic isn't just important for the sake of being able to do it, but for the sake of building a comfort with numbers, building numerical intuition, building a feeling for maths. And it will always be faster to simply know that 6x7 is 42 than to have to look it up. You use those basic arithmetical tasks 100 times every time you rearrange an equation. You have to be able to do them immediately. It is analogous.
Note that I have used illustrative examples. These are useful. Knowledge is more than knowing abstract facts like "knowledge is more than knowing abstract facts". It is about knowing concrete things too, which highlight the boundaries of those abstract facts and illustrate their cores. There is a reason law students learn specific cases and their facts and not just collections of contextless abstract principles of law.
>For instance, I'm not too concerned about my child's ability to write very legibly (most writing is done on computers),
Writing legibly is important for many reasons. Note-taking is important, and often isn't (and sometimes can't be) done on a computer. It is also part of developing fine motor skills generally.
>spell very well (spell check keeps us professional),
Spell checking can't help with confusables like to/two/too, affect/effect, etc. and getting those wrong is much more embarrassing than writing "embarasing" or "parralel". Learning spelling is also crucial because spelling is an insight into etymology which is the basis of language.
>reading a map to get around (GPS), etc
Reliance on GPS means never building a proper spatial understanding. Many people that rely on GPS (or being driven around by others) never actually learn where anything is. They get lost as soon as they don't have a phone.
>but I think my children will have a different perspective (similar to how I feel about memorizing poetry and languages without garbage collection).
Memorising poetry is a different sort of thing--it is a value judgment not a matter of practicality--but it is valuable in itself. We have robbed generations of children of their heritage by not requiring them to learn their culture.
I feel AI has just revealed how poor the teaching is, though I don't expect any meaningful response from teaching establishments. If anything, AI will lead to bigger differences in student learning. Those who learn core concepts and how to think critically will become more valuable, and the people who just AI everything will become near worthless.
Unis will release some handbook policy changes to the press and will continue to pump out the bell curve of students and get paid.
It's remarkable that reading and writing, once the guarded domain of elites and religious scribes, are now everyday skills for millions. Where once a handful of monks preserved knowledge with their specialized scribing skills, today anyone can record history, share ideas, and access the thoughts of centuries with a few keystrokes.
The wheel moves on and people adapt. Who knows what the "right side" of history will be, but I doubt we get there by suppressing advancements and guaranteeing job placements simply because you took out large loans to earn a degree and a license.
I try to explain by shifting the focus from neurological to musculoskeletal development. It's easy to see that physical activity promotes development of children's bodies. So although machines can aid in many physical tasks, nobody is suggesting we introduce robots to augment PE classes. People need to recognize that complex tasks also induce brain development. This is hard to demonstrate but has been measured in extensive tasks like learning languages and music performance. Of course, this argument is about child development, and much of the discussion here is around adult education, which has some different considerations.
You really want to start with a smaller weight and increment it in steps as you progress. You know, like a class or something. And when you do those exercises, you really want to be lifting those weights yourself, and not rely on a spotter for every rep.
I don't think that can be caught.
SOCRATES: You know, Phaedrus, writing shares a strange feature with painting. The offsprings of painting stand there as if they are alive, but if anyone asks them anything, they remain most solemnly silent. The same is true of written words. You’d think they were speaking as if they had some understanding, but if you question anything that has been said because you want to learn more, it continues to signify just that very same thing forever. When it has once been written down, every discourse roams about everywhere, reaching indiscriminately those with understanding no less than those who have no business with it, and it doesn’t know to whom it should speak and to whom it should not. And when it is faulted and attacked unfairly, it always needs its father’s support; alone, it can neither defend itself nor come to its own support.
PHAEDRUS: You are absolutely right about that, too.
SOCRATES: Now tell me, can we discern another kind of discourse, a legitimate brother of this one? Can we say how it comes about, and how it is by nature better and more capable?
PHAEDRUS: Which one is that? How do you think it comes about?
SOCRATES: It is a discourse that is written down, with knowledge, in the soul of the listener; it can defend itself, and it knows for whom it should speak and for whom it should remain silent.
[link](https://newlearningonline.com/literacies/chapter-1/socrates-...)
If the product is not commoditized, then capital will absorb all the increased labor efficiency, while labor (and consumption) are sacrificed on the altar of profits.
I suspect your assumption is more likely. Voltaire's critique of 'the best of all possible worlds' and man's place in creating meaning and happiness, provides more than one option.
* due to either learning/concentration issues
* the fact that most lecturers are boring, dull, and unengaging
* and oftentimes you can learn better from other sources
making lectures longer doesn't fix a single one of these issues. It just makes students learn even less.
The number of visualizations I have made since ChatGPT was released has increased exponentially. I loathe looking up the documentation again and again to make a slightly non-standard graph. Now all of the friction is gone! Graphs and visuals are everywhere in my code!
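As an illustration of the kind of "slightly non-standard" graph meant here, a minimal sketch (assuming matplotlib; the data and the twin-axis choice are invented for the example):

    import matplotlib.pyplot as plt

    # invented data for the example
    days = list(range(1, 11))
    requests = [120, 180, 90, 200, 310, 280, 150, 330, 400, 370]
    error_rate = [0.02, 0.03, 0.01, 0.05, 0.08, 0.04, 0.02, 0.09, 0.11, 0.07]

    fig, ax1 = plt.subplots()
    ax1.bar(days, requests, color="steelblue", label="requests")
    ax1.set_xlabel("day")
    ax1.set_ylabel("requests")

    # the slightly non-standard bit: a second y-axis sharing the same x-axis
    ax2 = ax1.twinx()
    ax2.plot(days, error_rate, color="firebrick", marker="o", label="error rate")
    ax2.set_ylabel("error rate")

    fig.legend(loc="upper left")
    plt.tight_layout()
    plt.show()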
With previous technological advancements, humans had places to intellectually "flee", and in fact, previous advancements were often made for the express purpose of freeing up time for higher level pursuits. The invention of computers, for example, let mathematicians focus on much higher level skills (although even there an argument can be made that something has been lost with the general decrease in arithmetic abilities among modern mathematicians).
Large language models don't move humans further up the value chain, though. They kick us off of it.
I hear lots of people proselytizing wonderful futures where humans get to focus on "the problems that really matter", like social structures or business objectives; but there's no fundamental reason that large language models can't replace those functions as well. Unlike, say, a Casio, which would never be able to replace a social worker no matter how hard you tried.
Take rote memorization. It is hard. It sucks in so many ways (just because you memorized something doesn't mean you can reason using that information). Yet memorization also provides the foundations for growth. At a basic level, how can you perform anything besides trivial queries if you don't know what you are searching for? How can you assess the validity of a source if you don't know the fundamentals? How can you avoid falling prey to propaganda if your only knowledge of a subject is what is in front of your face? None of that is to say that we should dismiss search and depend upon memorization. We need both.
I can't tell you what to say to your children about LLMs. For one thing, I don't know what is important to them. Yet it is important to remember that it isn't an either-or thing. LLMs are probably going to be essential to manage the profoundly unmanageable amount of information our world creates. Yet it is also important to remember that they are like the person who memorizes but lacks the ability to reason. They may be able to impress people with their fountain of facts, yet they will be unable to make a mark on the world, since they lack the ability to create anything unique.
The person you're responding to is talking about it from an educational perspective though. If your fundamentals aren't solid, you won't know that exponentially smoothed reservoir sampling backed by a splay tree is optimal for your problem, and ChatGPT has no clue either. Trying things, struggling, and failing is crucial to efficient learning.
Not to mention, you need enough brain power or expertise to know when it's bullshitting you. Just today it was telling me that a packed array was better than my proposed solution, confidently explaining why, and not once saying anything correct. No prompt changes could fix it (whether restarting or replying), and anyone who tried to use less brainpower there would be up a creek when their solution sucked.
Mind you, I use LLMs a lot, including for code-adjacent tasks and occasionally for code itself. It's a neat tool. It has its place though, and it must be used correctly.
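To make the "fundamentals" point a couple of paragraphs up concrete: the plain version of the algorithm named there, classic reservoir sampling (Algorithm R), fits in a few lines. The exponentially smoothed, splay-tree-backed variant is a refinement of this and isn't shown:

    import random

    def reservoir_sample(stream, k):
        """Keep a uniform random sample of k items from a stream of unknown length."""
        reservoir = []
        for i, item in enumerate(stream):
            if i < k:
                reservoir.append(item)
            else:
                # item i survives with probability k / (i + 1)
                j = random.randint(0, i)
                if j < k:
                    reservoir[j] = item
        return reservoir

    print(reservoir_sample(range(1_000_000), 5))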
Or they don't.
Really? We invent LLMs, continue to improve them, and that's the end of our intellectual capability?
> a Casio, which would never be able to replace a social worker no matter how hard you tried
And LLMs can't replace a social worker no matter how hard you try today.
That is exactly how our ancestors built houses. Also a traditional wooden house doesn't look complicated.
> Not that these aren't noble things or worth doing, but they won't impact your life too much if you're not interested in penmanship, spelling, or cartography.
For me it is the second order benefits, notably the idea of "attention to detail" and "a feel for the principles". The principles of each activity being different: writing -> fine motor control, spelling -> word choice/connotation, map -> sense of direction, (my own insert here) money handling -> cost of things
All of them involve "attention to detail" because that's what any activity is - paying attention to it.
But having built up the experience in paying attention to [xyz], you can now be capable when things go wrong.
I.e. catching a disputable transaction on the credit card, noticing when the shop clerk says "no returns" even though their policy says otherwise, or un-losing yourself when the phone runs out of battery in the city.
Notably, you don't have to be trained for the details in traditional ways like writing the same sentence 100 times on a piece of paper. Learning can be fun and interesting.
Children can write letters to their friends well before they get their own phone. Geocaching/treasure hunts(hand drawn mud maps!)/orienteering for map use.
As for LLMs ... well, currently "attention to detail" is vital to spot the (handwave number) 10% of the time they go wrong. In the future LLMs may be better.
But if you want to be better than your peers at any given thing - you will need an edge somewhere outside of using an LLM. Yet still, spelling/word choice/connotations are especially linked to using an LLM currently.
Knowing how to "pay attention to detail" when it counts - counts.
However, I am going to hazard a guess that you still care about your child's ability to do arithmetic, even though calculators make that trivial.
And if I'm right, I think it's for a good reason—learning to perform more basic math operations helps build the foundation for more advanced math, the type which computers can't do trivially.
I think this applies to AI. The AI can do the basic writing for you, but you will eventually hit a wall, and if all you've ever learned is how to type a prompt into ChatGPT, you won't ever get past that wall.
----
Put another way:
> So how do I answer my child when he asks "Why should I learn to do X if I can just ask an LLM and it will do it better than me"
"Because eventually, you will be able to do X better than any LLM, but it will take practice, and you have to practice now."
The same way you answer - "Why should I memorise this if I can always just look it up"
Because your perceptual experience is built upon your knowledge and experiences. The entire way you see the universe is altered based on these things, including what you see through your eyes, what you decide is important and what you decide to do.
The goal of life is not always "simply to do as little as possible" or "offload as much work as possible". A lot of the time it includes struggling through the fundamentals so that you become a greater version of yourself. It is not the completed task that we desire; it is who you became while you did the work.
You should feel nothing. They knew they were cheating. They didn't give a crap about you.
Frankly, I would love to have people failing assignments they can't explain even if they did NOT use "AI" to cheat on them. We don't need more meaningless degrees. Make the grades and the degrees mean something, somehow.
Sure. But somebody has to know these things. For many jobs, knowing these things isn’t beneficial, but for others it is.
Sure, you might be able to get a job slinging AI code to produce CRUD apps or whatever. But since that’s the easy thing, you’re going to have a hard time standing out from the pack. Yet we will still need people who understand the concepts at a deeper level, to fix the AI messes or to build the complex systems AI can’t, or the systems that are too critical to rely on AI, or the ones that are too regulated. Being able to do those thing, or to just better understand what the AI is doing to get better more effective results, that will be more valuable than just blindly leaning on AI, and it will remain valuable for a while yet.
Maybe some day the AI can do everything, including ASICs and growing crops, but it’s got a long way to go still.
Well, you know, we'd all love to change the world...
Also, this kind of fatuous response leaves out the skill building required - how do students acquire the skill of criticism or analysis? They're doing all of the easier work with ChatGPT until suddenly it doesn't work and they're standing on ... nothing ... unable to do anything.
That's the insidious effect of LLMs in education: as I read here recently "simultaneously raising the bar for the skill required at the entry level and lowering the amount of learning that occurs in the preparation phase (e.g., college)".
That being said, buildings collapse a lot less frequently these days. House fires happen at a lower rate. Insulation was either nonexistent or present in much lower quantities.
I guess the point I'm making is that the lesson here shouldn't be "we used to make our houses, why don't we go back to that?" It also shouldn't be "we should leave every task to a specialist."
Know how to maintain and fix the things around your house that are broken. You don't need a plumber to replace the flush valve on your toilet. But maybe don't try to replace a load-bearing joist in your house unless you know what you're doing? The people building their own homes weren't engineers, but they had a lot more carpentry experience than (I assume) you and I.
How many teachers are offloading their teaching duties onto LLMs? Are they reading essays and annotating them by hand? If everything is submitted electronically, why not just dump 30 or 50 papers into a LLM queue for analysis, suggested comments for improvement, etc. while the instructor gets back to the research they care about? Is this 'cheating' too?
Then there's the use of LLMs to generate problem sets, test those problem sets for accuracy, come up with interesting essay questions and so on.
I think the only real solution will be to go back to in-person instruction with handwritten problem-solving and essay-writing in class with no electronic devices allowed. This is much more demanding of both the teachers and the students, but if the goal is quality educational programs, then that's what it will take.
And not watching lectures is not the same as not reviewing the material. I generally prefer textbooks and working through proofs or practice problems by hand. If I listen to someone describe something technical I zone out too quickly. The only exception seems to be if I'm able to work ahead enough that the lecture feels like review. Then I'm able to engage.
Outliers will still work hard and become even more valuable, AI won't affect them negatively. I feel non outliers will be affected negatively on average in ability to learn/think.
With no confirming data, I feel those who got that fancy education would do so in any other institution. Just those fancy institutions draw in and filter for intelligent types, not teach them to be intelligent as it's practically a pre-requisite.
It's the most efficient few-shot which beats the odds on any SotA model.
Even humans have gotten shocks like this. Things like the Black Death created social and economic upheavals that lasted generations.
Now, these are all biological examples. They don't map cleanly to technological advances, because human brains adapt much faster than immune systems that are constrained by their DNA. But the point is that complex systems can adapt and can seem to handle "anything," up until they can't.
I don't know enough about AI or LLM's to say if we're reaching an inflection point. But most major crises happen when enough people say that something can't happen, and then it happens. I also don't think that discouraging innovation is the solution. But I don't also want to pretend like "humans always adapt" is a rule and not a 300,000 year old blip on the timeline of life's existence.
For instance, your point about:
> reading a map to get around (GPS)
https://www.statnews.com/2024/12/16/alzheimers-disease-resea...
After reading the above it dawned on me that the human brain needs to develop spatial awareness and not using that capability of the brain very slowly shuts it off. So I purposefully turn off the gps when I can.
I think not fully developing each of those abilities might have some negative effects that will be hard to diagnose.
I think you're missing the point of my comment. I'm not saying that human knowledge is useless. I'm specifically arguing against the case that:
> The irreducible answer to "why should I" is that it makes you ever-more-increasingly reliant on a teetering tower of fragile and interdependent supply chains furnished by for-profit companies who are all too eager to rake you over the coals to fulfill basic cognitive functions.
My logic being that we are already irreversibly dependent on supply chains.
https://www.indeed.com/career/software-engineer/salaries
https://www.levels.fyi/t/software-engineer/locations/united-...
Kind of like how an ignorant electorate makes for a poor democracy, an ignorant consumer base makes for a poor free market.
It also seems like a waste of having an expert around to be doing something you could do at home without them.
Exams should increasingly be written with the idea in mind that students can and will use AI. Open book exams are great. They're just harder to write.
Or is pioneering 200 years ago an in-demand skillset that we should be picking up?
This frees you up to work on the crunchy unsolved problems.
I do agree that it's a fair point to ponder. It does seem like people draw fairly arbitrary lines in the sand around what skills are "essential" or not. Though I can't even entertain the notion that I shouldn't be concerned about my child's ability to spell.
Seems to me that these gains in technology have always come at a cost, and so far the cost has been worth it for the most part. I don't think it's obviously true that LLMs will be (or won't be) "worth it" in the same way. And anyways the tech is not nearly mature enough yet for me to be comfortable relying on it long term.
My wife is an accounting professor. For many years her battle was with students using Chegg and the like. They would submit roughly correct answers but because she would rotate the underlying numbers they would always be wrong in a provably cheating way. This made up 5-8% of her students.
Now she receives a parade of absolutely insane answers to questions from a much larger proportion of her students (she is working on some research around this but it's definitely more than 30%). When she asks students to recreate how they got to these pretty wild answers they never have any ability to articulate what happened. They are simply throwing her questions at LLMs and submitting the output. It's not great.
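A minimal sketch of that kind of per-student number rotation (the question template and the seeding scheme here are invented for illustration, not her actual setup):

    import random

    def make_variant(student_id: str):
        """Generate a per-student version of a depreciation question.

        Seeding on the student ID keeps the numbers different between students
        but reproducible at grading time.
        """
        rng = random.Random(student_id)
        cost = rng.randrange(20_000, 80_000, 1_000)
        salvage = rng.randrange(1_000, 5_000, 500)
        years = rng.choice([4, 5, 8, 10])
        question = (f"A machine costs ${cost}, has a salvage value of ${salvage}, "
                    f"and a useful life of {years} years. "
                    f"What is the annual straight-line depreciation?")
        answer = (cost - salvage) / years
        return question, answer

    q, a = make_variant("student-1234")
    print(q)
    print(f"Expected: {a:.2f}")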
The only homes built by our ancestors that you see are those that didn't collapse and kill whoever was inside, burn down, prove too unstable to live in, become too much of a burden to maintain and keep around, etc.
There's also the problem of developing critical thinking skills. It's not very comforting to think of a time where your average Joe relies on an AI service to tell what he should think and believe, when that AI service is ran, trained, and managed by people pushing radical ideologies.
I believe there is some truth to it. When you automate away some time-consuming tasks, your time and focus shift elsewhere. For example, washing clothes is no longer a major concern since washing machines became widespread. Software engineering also progressively shifted its attention to higher-level concerns, going from a point where writing/editing opcodes was the norm to a point where you can design and deploy a globally-available distributed system faster than it once took to build a single program.
Focusing on the positive traits of AI, having a way to follow the Socratic method with a tireless sparring partner that has encyclopedic knowledge of everything and anything is truly brilliant. The bulk of the people in this thread are probably disproportionately inclined to be self-motivated and self-taught in multiple domains, and having this sort of feature available makes worlds of difference.
This is the existential crisis that appears imminent. What does it mean if humanity, at large, begins to offload thinking (hence decision making), to machines?
Up until now we've had tools. We've never before been able to ask them "what's the right way to do X?". Offloading reasoning to machines is a terrifying concept.
It never gets it right, even after many reattempts in cursor. And even if it gets it right, it doesn't do the parallelization effectively enough - it's a hard problem to parallelize.
I myself am one of them, but I attribute that to the fact that this is a graduate version of an undergrad class I took two years ago (but have to take the grad version for degree requirements). Instead, I've been skimming the posted exercises and assessing myself which specific topics I need to brush up on.
Target the cheaters with pop quizzes. Prof can randomly choose 3 questions from assignments. If students cant get enough marks on 2/3 of them they are dealt a huge penalty. Students that actually work through the problems will have no problems with scoring enough marks on 2/3 of the questions. Students that lean irresponsibly on LLMs will lose their marks.
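A sketch of that selection step, assuming the assignment questions live in a simple list (the 3-question, 2-correct threshold is just the one proposed above):

    import random

    assignment_questions = [f"Question {i}" for i in range(1, 11)]  # stand-ins for real questions

    def pop_quiz(questions, n=3, need=2):
        """Pick n assignment questions at random; the student must get `need` of them right."""
        return random.sample(questions, n), need

    picked, need = pop_quiz(assignment_questions)
    print(f"Answer at least {need} of: {picked}")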
That's actually pretty doable. Almost every resource provides more context than just the exact thing you're asking. You build on that knowledge and continue asking. Nobody knows everything - we've been doing the equivalent of this kind of research forever.
> How can you assess the validity of a source if you don't know the fundamentals?
Learn about the fundamentals until you get to the level you're already familiar with. You're describing an adult outside of school environment learning basically anything.
Spaced repetition as it's more commonly known has been quite studied, and is anecdotally very popular on HN and reddit. Albeit more for some subject than others
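For anyone curious what that scheduling actually looks like, a much-simplified sketch loosely in the spirit of SM-2 (the constants are illustrative, not the published ones):

    def next_review(prev_interval_days, ease, quality):
        """One very simplified spaced-repetition step.

        quality: 0 (forgot) .. 5 (perfect recall). Illustrative constants only.
        """
        if quality < 3:
            return 1, ease                      # forgot: see it again tomorrow
        ease = max(1.3, ease + 0.1 - (5 - quality) * 0.08)
        if prev_interval_days == 0:
            return 1, ease
        if prev_interval_days == 1:
            return 6, ease
        return round(prev_interval_days * ease), ease

    interval, ease = 0, 2.5
    for q in [5, 4, 3, 5]:
        interval, ease = next_review(interval, ease, q)
        print(f"next review in {interval} day(s), ease {ease:.2f}")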
Does your product help teachers detect cheating? Because I hear none of them are accurate, with many false positives and ruined academic careers.
Are you saying yours is better?
Homework would still be assigned as a learning tool, but has no impact on your grade.
People's careers are going to be filled with AI. College needs to prepare them for that reality, not to get jobs that are now extinct.
If they are never going to have to program without AI, what's the point in teaching them to do it? It's like expecting them to do arithmetic by hand. No one does.
For every class, teachers need to be asking themselves "is this class relevant" and "what are the learning goals in this class? Goals that they will still need, in a world with AI".
Hmm, millions of humans are spending a bulk of their lives plugging away at numbers on a screen. We can replace this with an AI and free them up to do literally anything else.
No, let's not do that. Keep being slow ineffective calculators and lay on your death bed feeling FULFILLED!
If the expectation is X, and your tool gives you Y, then you’ve failed - no matter if you could have done X by hand from scratch or not, it doesn’t really matter, because what counts is whether the person checking your work can verify that you’ve produced X. You agreed to deliver X, and you gave them Y instead.
So why should you learn to do X when the LLM can do it for you?
Because unless you know how to do X yourself, how will you be able to verify whether the LLM has truly done X?
Your kid needs to learn to understand what the person grading them is expecting, and deliver something that meets those expectations.
That sounds like so much bullshit when you’re a kid, but I wish I had understood it when I was younger.
It's not true even though it's accepted. Rote memorization has a place in an education. It does strengthen learning and allow one to make connections between the things seen presently and things remembered, among other things.
So you make them take exams in-class, and you check their papers for mistakes and irresponsible AI use and punish this severely.
But actually using AI ought not to be punished.
It absolutely isn't.
I mean, arithmetic is the same way, right? Nobody should do arithmetic by hand, as you say. Kindergarten teachers really ought to just hand their kids calculators, tell them to push these buttons like this, and write down the answers. No need to teach them how to do routine arithmetic like 3+4 when a calculator can do it for them.
Imagine always having Tex autocorrect to Texaco
Don't all children learn by doing arithmetic by hand first?
All those have, at the base of them, the experience of being human, something an LLM does not and will never have.
I agree that AI could be an enormous educational aid to those who want to learn. The problem is that if any human task can be performed by a computer, there is very little incentive to learn anything. I imagine that a minority of people will learn stuff as a hobby, much in the way that people today write poetry or develop film for fun; but without an economic incentive to learn a skill or trade, having a personal Socratic teacher will be a benefit lost on the majority of people.
Oh, please, from the bottom of my heart as a teacher: go fuck yourselves.
And it's an opportunity for educators to raise the ambition level quite a bit. It indeed obsoletes some of the tests they've been using to evaluate students. But they too now have the AI tools to do a better job and come up with more effective tests.
Think of all that time freed up from having to actually read all those submitted papers. I can tell you from experience (I taught a few classes as a postdoc way back): not fun. At minimum you can just instantly fail the ones that are obviously poorly written, are full of grammatical errors, and feature lots of flawed reasoning. Most decent LLMs do a decent job of spotting that. Is using an LLM for that cheating if a teacher does it? I think that should just be expected at this point. And if it is OK for the teacher, it should be OK for the student.
If you expect LLMs to be used, it raises the bar for the acceptable quality level of submitted papers. They should be readable, well structured, well researched, etc. There really is no excuse for those papers not being like that. The student needs to be able to tell the difference. That actually takes skill to ask for the right things. And you can grill them on knowledge of their own work. A little 10 minute conversation maybe. Which should be about the amount of time a teacher would have otherwise spent on evaluating the paper manually and is definitely more fun (I used to do that; give people an opportunity to defend their work).
And if you really want to test writing skills, put students in a room with pen and paper. That's how we did things in the eighties and nineties. Most people did not have PCs and printers then. Poor teachers had to actually sit down and try to decipher my handwriting. Which even when that skill had not atrophied for a few decades, wasn't great.
LLMs will force change in education one way or another. Most of that change will be good. People trying to cheat is a constant. We just need to force them to be smarter about it. Which at a meta level isn't that bad of a skill to learn when you are educating people.
And teachers should use AIs too. Evaluating papers is not that hard for an LLM.
"Your a teacher. Given this assignment (paste /attach the file and the student's paper), does this paper meet the criteria. Identify flaws and grammatical errors. Compose a list of ten questions to grill the student on based on their own work and their understanding of the background material."
A prompt like that sounds like it would do the job. Of course, you'd expect students to use similar prompts to make sure they are prepared for discussing those questions with the teacher.
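For anyone who wants to wire that prompt up programmatically, a sketch assuming an OpenAI-style chat API via the official `openai` Python client (the model name and the file handling are placeholders):

    from openai import OpenAI

    client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

    def review_paper(assignment_text: str, paper_text: str) -> str:
        prompt = (
            "You're a teacher. Given this assignment and the student's paper, "
            "does the paper meet the criteria? Identify flaws and grammatical errors. "
            "Compose a list of ten questions to grill the student on, based on their "
            "own work and their understanding of the background material.\n\n"
            f"ASSIGNMENT:\n{assignment_text}\n\nPAPER:\n{paper_text}"
        )
        resp = client.chat.completions.create(
            model="gpt-4o-mini",  # placeholder; use whichever model you have access to
            messages=[{"role": "user", "content": prompt}],
        )
        return resp.choices[0].message.content

    # usage:
    # print(review_paper(open("assignment.txt").read(), open("paper.txt").read()))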
What does this mean?
I'm more worried about those who will learn to solve the problems with the help of an LLM, but can't do anything without one. Those will go under the radar, unnoticed, and the problem is, how bad is it, actually? I would say that a lot, but then I realize I'm pretty useless driver without a GPS (once I get out of my hometown). That's the hard question, IMO.
As a society, we should mandate universities to calculate the full score of a course based solely on oral or pen-and-paper exams, or computer exams only under strict supervision (e.g. screen-share surveillance). Anything less is too easy to cheat.
And most crucially let go of this need to promote at least X% of the students: those who pass the bar should get the piece of paper that says they passed the bar, the others should not.
This is a serious problem.
What's your next rant: know nead too learn two reed and right ennui moor? Because AI can do that for you? No need to think? "So, you turned 6 today? That over there is your place at the assembly line. Get to know it well, because you'll be there the rest of your life."
> For every class, teachers need to be asking themselves "is this class relevant" and "what are the learning goals in this class?
That's already how schools organize their curriculum.
"That I lived so much longer, just means, that I forgot much more, not that I know much more."
Memory might have a limited capacity, but of course, I doubt most humans use that capacity, or well, for useful things. I know I have plenty of useless knowledge ..
We use an airgapped lab (it has LAN and a local git server for submissions, no WAN) to give coding assessments. It works.
I don't see the former as that much of a problem.
I'm no Turing or Ramanujan, but my opinion is that knowing how the operations work, and, for example, understanding how the area under a curve is calculated, allows you to guesstimate whether numbers are close enough in magnitude to what you are calculating, without needing to be exact in figures.
It is shocking how often I have looked at a spreadsheet, eyeballed the number of rows and the approximate average of the numbers in there, and figured out there's a problem with a =choose-your-formula(...) getting the range wrong.
Automating thinking and self-expression is a lot more dangerous. We're not automating the calculation or the research, but the part where you add your soul to that information.
You still needed to know what to ask it, and how to interpret the output. This is hard to do without an understanding of how the underlying math works.
The same is true with LLMs. Without the fundamentals, you are outsourcing work that you can't understand and getting an output that you can't verify.
If there is a difference, then fundamentally LLMs cannot solve problems for you. They can only apply transformations using already known operators. No different than a calculator, except with exponentially more built-in functions.
But I'm not sure that there is a difference. A problem is only a problem if you recognize it, and once you recognize a problem then anything else that is involved along the way towards finding a solution is merely helping you solve it. If a "problem" is solved for you, it was never a problem. So, for each statement to have any practical meaning, they must be interpreted with equivalency.
At best what you can learn specifically regarding critical thinking are some rules of thumb such as "compare at least three sources" and "ask yourself who benefits".
I think it makes a very relevant point for us as well. The value of doing the work yourself is in internalizing and developing one's own cognition. The argument for offloading to the LLM sounds to me like arguing one should bring a forklift to the gym.
Yes, it would be much less tiresome and you'd be able to lift orders of magnitude more weights. But is the goal of the gym to more efficiently lift as much weight as possible, or to tire oneself and thus develop muscles?
You'll acquire advanced knowledge and skills much, much faster (and sometimes only) if you have the base knowledge and skills readily available in your mind. If you're learning about linear algebra but you have to type in every simple multiplication of numbers into a calculator...
This is at the root of the Dunning-Kruger effect. When you read an explanation, you feel like you understand it. But it's an illusion, because you never developed the underlying cognition; you just saw the end result.
Learning is not about arriving at the result, or knowing the answers. These are by-products of the process of learning. If you just shortcut to the end by-products, you get the appearance of learning. And you might be able to play the system and come out with a diploma. But you didn't actually develop cognitive skills at all.
Edit: And how can you critically assess if that research is any good? To do it well you need... domain knowledge.
I think they are in the right path here
> they've automated their humanities, social sciences, and other major requirements using LLMs.
This worries me. If they struggle with these topics but don't see the value in that struggle, that is their prerogative to decide for themselves what is important to them. But I don't think more technically apt people with low verbal reasoning skills, little knowledge of history, sociology, psychology, etc., are a net positive for society. So many of the problems with the current tech industry come from the tendency to think everything is just a technical problem while being oblivious to the human aspects.
In some of the CS tests, coding by hand sucks a bit but to be honest, they're ok with pseudo code as long as you show you understand the concepts.
You as the employer are liable; a human has real reasoning abilities and real fears about messing up, so the likelihood of them doing something absurd like telling a customer that a product is 70% off and not losing their job is effectively nil. What are you going to do with the LLM, fire it?
Data scientists and people deeply familiar with LLMs, to the point that they could fine-tune a model to your use case, cost significantly more than a low-skilled employee, and depending on liability just running the LLM may be cheaper.
As for an accounting firm (one example from above), as far as I know, in most jurisdictions the accountant doing the work is personally liable; who would be liable in the case of the LLM?
There is absolutely a market for LLM augmented workforces, I don't see any viable future even with SOTA models right now for flat out replacing a workforce with them.
I have the base knowledge and skill readily available to perform basic arithmetic, but I still can't do it in my mind in any practical way because I, for lack of a better description, run out of memory.
I expect most everyone eventually "runs out of memory" if the values are sufficiently large, but I hit the wall when the values are exceptionally small. And not for lack of trying – the "you won't always have a calculator" message was heard.
It wasn't skill and knowledge that was the concern, though. It was very much about execution. We were tested on execution.
> If you're learning about linear algebra but you have to type in every simple multiplication of numbers into a calculator...
I can't imagine anyone is still using a four function calculator. Certainly not in an application like learning linear algebra. Modern calculators are decidedly designed for linear algebra. They need to be given the rise of things like machine learning that are heavily dependent on such.
Doing things that could be in principle automated by AI is still fundamentally valuable, because they bring two massive benefits:
- *Understanding what happens under the hood*: if you want to be an effective software engineer, you need to understand the whole stack. This is true of any engineering discipline really. Civil engineers take classes in fluid dynamics and material science classes although they will mostly apply pre-defined recipes on the job. You wouldn't be comfortable if the engineer who signed off on the blueprints of dam upstream of your house had no idea about the physics of concrete, hydrodynamic scour, etc.
- *Having fun*: there is nothing like the joy of discovering how things work, even when a perfectly fine abstraction hides those details underneath. It is a huge part of the motivation for becoming an engineer. Even assuming that Vibe Coding could develop into something that works, it would be a very tedious job.
When students use AI to do the hard work on their behalf, they miss out on those. We need to be extremely careful with this, as we might hurt a whole generation of students, both in terms of their performance and their love of technology.
Some people argue that it doesn't matter if there are mistakes (it depends which, actually) and that with time it will cost nothing.
I argue that if we give up learning and let the LLM do the assignments, then what is the extent of my knowledge and my value to be hired in the first place?
We hired a developer and he did everything with ChatGPT: all the code and documentation he wrote. At first it was all bad, because out of the infinity of possible answers ChatGPT doesn't pinpoint the best one in every case. But does he have enough knowledge to understand that what he did was bad? We need people with experience who have confronted hard problems and found their way out. How else can we challenge and critique an LLM's answer?
I feel students' value is diluted, leaving them at the mercy of the companies providing the LLM, and we might lose some critical knowledge and critical thinking from the students in the process.
That's after all the implication from your assessment that there would be no good data.
This is what isn't explained or understood properly (...I think) to students; on the surface you go to college/uni to learn a subject, but in reality, you "learn to learn". The output that you're asked to submit is just to prove that you can and have learned.
But you don't learn to learn by using AI tools. You may learn how to craft stuff that passes muster, gets you a decent grade and eventually a piece of paper, but you haven't learned to learn.
Of course, that isn't anything new, loads of people try and game the system, or just "do the work, get the paper". A box ticking exercise instead of something they actually want to learn.
That sounds like setting-up your child for failure, to put it bluntly.
How do you want to express a thought clearly if you already fail at the stage of thinking about words clearly?
You start with a fuzzy understanding of words, which you delegated to a spellchecker, added to a fuzzy understanding of writing, which you've delegated to a computer, combined with a fuzzy memory, which you've delegated to a search engine, and you expect that not to impact your child's ability to create articulate thoughts and navigate them clearly?
To add irony to the situation, the physical navigation skills have, themselves, been delegated to a GPS..
Brains are like muscles, they atrophy when not used.
Reverse that course before it's too late, or suffer (and have someone else suffer) the consequences.
This is entirely your opinion. We don't know how the brain learns nor do we know if intelligence can be "taught"
>If a house builder built a house for a man but did not secure/fortify his work so that the house he built collapsed and caused the death of the house owner, that house builder will be executed.
If even professionals got it wrong so often that there had to be a law for it... yeah, maybe it is not that simple.
As a comment upthread said, let them cheat on the take home as much as they want to, they're still going to fail the exam.
My wife is a secondary school teacher (UK), teaching KS3, GCSE, and A level. She says that most of her students are using Snapchat LLM as their first port of call for stuff these days. Many of the students also talk about ChatGPT but she had never heard of Claude or Anthropic until I shared this article with her today.
My guess would be that usage is significantly higher across all subjects, and that direct creation is also higher. I'd also assume that these habits will be carried with them into university over the coming years.
It would be great to see this as an annual piece, a bit like the StackOverflow survey. I can't imagine we'll ever see similar research being written up by companies like Snapchat but it would be fascinating to compare it.
In other words, is the school certification meant to distinguish those who genuinely learnt, or was it merely meant to signal (and thus, those who used to copy pre-LLM are going to do the same, and thus reach the same level of certification regardless of whether they learnt or not)?
Spotify CEO is channeling The Two Bobs from Office Space: "What are you actually doing here?" Just in a nastier way, with a kind of prisoner's dilemma on top. If you can get by with an agent, fine, you won't bother him. If you can't, why can't you? Should we replace you with someone who can, or thinks they can?
Spotify CEO is not his employees' friend.
But those who traditionally learnt arithmetic have had this training, which _enables_ higher order thinking.
Being reliant on AI to do this means they would not have had that same level of training. It could prevent them from being able to synthesize new patterns or recognize them (and so if the AI also cannot do the same, you get stagnation).
They will likely sell some version of this "Clio" to managers, to make it easier for them to accept this very intimate insight into the businesses they manage.
what's the point of the teacher then? Courses could entirely be taught via LLM in this case!
A student's willingness to learn is orthogonal to the availability of cheating devices. If a student is willing, they will know when to leverage the LLM for tutoring, and when to practise without it.
A student who's unwilling cannot be stopped from cheating via LLM now-a-days. Is it worth expending resources to try prevent it? The only reason i can think of is to ensure the validity of school certifications, which is growing increasingly worthless anyway.
When Wikipedia first appeared, many schools/teachers explicitly rejected Wikipedia as a citable source for essays. And obviously, plenty of kids just plagiarized Wikipedia articles for their essay topics (and were easily caught at the time).
With the advent of LLM, this sort of pseudo-learning is going to be more and more common. The unsupervised tests (like online tests, or take home assignments) cannot prevent cheating. The end result is that students would pass, but without _actually_ learning the material at all.
I personally think that perhaps the issue is not with the students, but with the student's requirement for certification post-school. Those who are genuinely interested would be able to leverage LLM to the maximum for their benefit, not just to cheat a test.
However, I really don't need to implement some weird algorithm myself every time (ideally I am using a well-tested library), but the point is that you learn it to be able to, and also to be able to modify or compose the algorithm in ways the LLM couldn't easily do.
Depends on the country and educational system I suppose, but I do believe professors in many places get in trouble for failing too many students. It's right there in the phrasing.
If most students pass and some fail, that's fine. Revenue comes in, graduates are produced, the university is happy.
If most students fail, revenue goes down, less students might sign up, less graduate, the university is unhappy.
It's a tragedy-of-the-commons situation, because some professors will be happy to pass the majority of students regardless of merit. Then the professors who don't become the problem; clearly there's something wrong with them.
Likewise, if most universities are easy and some are really hard, they might not attract students. The US has this whole prestige thing going on, that I haven't seen all that much in other countries.
So if the students overall get dumber because they grow up over relying on tools, the correction mechanism is not that they have to work harder once the exam approaches. It's that the exam gets easier.
There is no single human alive that can understand or build a modern computer from top to bottom. And this is true for various bits of human technology, that's how specialized we are as a species.
You would learn more if you told Claude not to give outright answers, but to generate more problems in the areas where you are weak for you to solve. The reduction in errors as you go along will be the positive reinforcement that works long term.
Remember, language is a natural skill all humans have. So is counting (a skill that may not even be unique to humans).
However, writing is an artificial technology invented by humans. Writing is not natural in the sense that language itself is. There is no part of the brain we're born with that comes ready to write. Instead, when we learn to write, other parts of our brain that are associated with language and hearing and vision are co-opted into the "writing and reading parts".
Teaching kids math using writing and symbolism is unnatural and often an abstraction too far for them (initially). Introducing written math is easier and makes more sense once kids are also learning to read and write - their brains are being rewired by that process. However, even a toddler can look at a pile of 3 objects and a pile of 5 objects and know which one is more, even if they can't explicitly count them using language - let alone read and write.
> Literally all progress we've made is due to ever increasing specialization.
Then we don't really need plural examples, right?
Anyway - language, wheel, fire, tool-making, social constructs like reciprocity principle - I think gave us some progress as a species and a society.
And the same goes for art. You do not become a master of an art by looking at art, or even by watching someone draw...
I'm running into similar issues trying to use LLMs for logic and reasoning.
They can do it (surprisingly well, once you disable the friendliness that prevents it), but you get a different random subset of correct answers every time.
I don't know if setting temperature to 0 would help. You'd get the same output every time, but it would be the same incomplete / wrong output.
Probably a better solution is a multi phase thing, where you generate a bunch of outputs and then collect and filter them.
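A sketch of that multi-phase idea, sampling several completions and keeping only the answers that show up consistently (the `ask_llm` helper is hypothetical and stands in for whatever client you use; majority voting is one possible filter among many):

    from collections import Counter

    def ask_llm(prompt: str, temperature: float = 1.0) -> str:
        """Hypothetical: one call to whatever model/client you use, returning one completion."""
        raise NotImplementedError

    def sample_and_filter(prompt: str, n: int = 10, min_votes: int = 3):
        answers = [ask_llm(prompt).strip() for _ in range(n)]
        counts = Counter(answers)
        # keep only answers that several independent samples agree on
        return [a for a, c in counts.items() if c >= min_votes]

    # usage (once ask_llm is filled in):
    # print(sample_and_filter("List all legal moves from this position ...", n=20))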
It's like some people learn knowledge by TikTok, some just waste time on it.
Just think of the time everybody will save! Instead of wasting effort learning or teaching, we'll be free to spend our time doing... uh... something! Generative AI will clearly be a real 10x or even 100x multiplier! We'll spiral into cultural and intellectual oblivion so much faster than we ever thought possible!
> You should feel nothing. They knew they were cheating. They didn't give a crap about you.
Most of us (a) don't feel our students owe us anything personally and (b) want our students to succeed. So it's upsetting to see students pluck the low-hanging, easily picked fruit of cheating via LLMs. If cheating were harder, some of these students wouldn't cheat. Some certainly would. Others would do poorly.
But regardless, failing a student and citing students for plagiarism feel bad, even though basically all of us would agree on the importance and value of upholding standards and enforcing principles of honesty and integrity.
Even in mankind's beginning specialization existed in the form of hunter and gatherer. This specialization in combination with team work brought us to the top of the food chain to a point where we can strive beyond basic survival.
The people making spacecraft (designing and building, another example of specialization) don't need to know how to repair or build a microwave to heat their food.
Of course everybody still needs to know basic knowledge (how to turn on microwave) to get by.
The dangers I've found personally are more around how it eases busywork, so I'm more inclined to be distracted doing that as though it delivers actual progress.
You can solve stuff like:
> If you walk 1 mile in 7 minutes, how fast are you walking in kilometers per hour?
$ units -t "1 mile / 7 minutes" "kilometers per hour"
13.7943771428571
You need some basic knowledge to even come up with "1 mile / 7 minutes" and "kilometers per hour". There are examples where you need much more advanced knowledge, too, meaning it is not enough to just have a calculator. For example, in thermodynamics, when dealing with gas laws, you cannot simply convert pressure, volume, and temperature from one unit to another without taking into account the specific context of the law you're applying (e.g., the ideal gas law or real gas behavior). Or say you want to convert 1 kilowatt-hour (kWh) to watts (W). A kilowatt-hour is energy; to get power (in watts), which is energy per unit time, you have to divide by a time.
You cannot do:
$ units -t "1 kWh" "W"
conformability error
3600000 kg m^2 / s^2
1 kg m^2 / s^3
You have to have some knowledge, so you could do:
$ units -t "1 kWh" "J"
1 kWh = 3600000 J
$ units -t "3600000 joules / 3600 seconds" "W"
3600000 joules / 3600 seconds = 1000 W
To sum it up: in many cases, without the right knowledge, even the most accurate tool will only get you part of the way there. It applies to LLMs and programming, too; thus, I am not worried. We will still have copy-paste "programmers" and actually knowledgeable ones, as we have always had. The difference is that you can use LLMs to learn, quite a lot, but you cannot use a calculator alone to learn how to convert 1 kWh to W.
You're right.
Quite incredibly, they also do the opposite, in that they hype-up / inflate the capability of their LLMs. For instance, they've categorised "summarisation" as "high-order thinking" ("Create", per Bloom's Taxonomy). It patently isn't. Comical they'd not only think so, but also publicly blog about it.
The only thing I care about is the ratio between those two things and you decide to group them together in your report? Fuck that
It's technically allowed on an individual basis, but the economics don't work for any institution to attempt to raise its bar.
If institutions X and Y grant credential Z, and X starts failing a third of its students, who would apply to go there?
You used to _actually_ need to do the arithmetic, now you just need to understand when a calculator is not giving you what you expected. (Not that this is being taught either, lol)
You can get to the higher order thinking sooner than if you spent years grinding multiplication tables.
I'm not sure how you get from pre-agricultural humans developing fire, to dentists building cars.
I don't doubt that after fire was 'understood', there was specialisation to some degree, probably, around management of fire, what burns well, how best to cook, etc.
But any claim that fire was the result of specialisation seems a bit hard to substantiate. A committee was established to direct Thag Simmons to develop a way to .. something involving wood?
Wheel, the setting of broken bones, language etc - specialisation happened subsequently, but not as a prerequisite for those advances.
> Even in mankind's beginning specialization existed in the form of hunter and gatherer. This specialization in combination with team work brought us to the top of the food chain to a point where we can strive beyond basic survival.
Totally agree that we advanced because of two key capabilities - a) persistence hunting, b) team / communication.
You seem to be conflating the result of those advancements with "all progress", as was GP.
> The people making spacecraft (designing and building, another example of specialization) don't need to know how to repair or build a microwave to heat their food.
I am not, was not, arguing that highly specialised skills in modern society are not ... highly specialised.
I was arguing against the lofty claim that:
"All progress we've made is due to ever increasing specialization."
Noting the poster of that was responding to a quote from a work of fiction - claiming it was awful - that the author had suggested everyone should be familiar with (among other things) 'changing a diaper, comfort the dying, cooperate, cook a tasty meal, analyse a problem, solve equations' etc.
If you're suggesting that you think some people in society should be exempt from some basic skills like those - that's an interesting position I'd like to see you defend.
> Of course everybody still needs to know basic knowledge (how to turn on microwave) to get by.
FWIW I don't have a microwave oven.
An agent can't do it. It can help you like a calculator can help you, but it can't do it alone. So that means you've become the programmer. If you want to be the programmer, you always could have been. If that is what you want to be, why would you consider hiring anyone else to do it in the first place?
> Spotify CEO stated on X that before asking for more headcount they have to justify not being able to do the job with an agent.
It was Shopify, but that's just a roundabout way to say that there is a hiring freeze due to low sales (no doubt because of tariff nonsense seizing up the market). An agent, like a calculator, can only increase the productivity of a programmer. As always, you still need more programmers to perform more work than a single programmer can handle. So all they are saying is that "we can't afford to do more".
> The company will and is just using the agent as well …
In which case wouldn't they want to hire those who are experts in using agents? If they, like Shopify, have become too poor to hire people – well, you're screwed either way, aren't you? So that is moot.
Programmer here. The answer is 100% no. The programmers who think they're saving time are racking up debts they'll pay later.
The debts will come due when they find they've learned nothing about a problem space and failed to become experts in it despite having "written" and despite owning the feature dealing with it.
Or they'll come due as their failure to hone their skills in technical problem solving catches up to them.
Or they'll come due when they have to fix a bug that the LLM produced and either they'll have no idea how or they'll manage to fix it but then they'll have to explain, to a manager or customer, that they committed code to the codebase that they didn't understand.
>FWIW I don't have a microwave oven.
That was just an example. You still know how to use one, hence it's basic knowledge. Seems like this discussion boils down to semantics.
Then they're on a video call and their vocabulary is wildly different, or they're very clearly a recent immigrant struggling with basic sentence structure, such that there is absolutely zero chance their discussion-forum persona is actually who they are.
This has happened at least once in every class, and invariably the best classes in terms of discussion and learning from other students are the ones where the people using AI to generate their answers are failed or drop the course.
I'm in a 100%-online grad school, but they proctor major exams through local testing centers, and every class is at least 50% based on one or more major exams. It's a good way to allow LLM use - they're available, and trying to stop it is a fool's errand - while still requiring people to understand the underlying concepts in order to pass.
There is no graded homework, the coursework is there only as a guide and practice for the exams.
So you can absolutely use LLMs to help you with the exercises or to help understand something, however if you blindly get answers you will only be fooling yourself as you won't be able to pass the exams.
As a society we need to be okay with failing people who deserve to fail and not drag people across the finish line at the expense of diluting the degrees of everyone else who actually put in effort.
Coaching the student on their learning journey, kicking their ass when they are failing, providing independent testing/certification of their skills, answering questions they have, giving lectures, etc.
But you are right, you don't have to wait for a teacher to tell you stuff if you want to self educate yourself. The flip side is that a lot of people lack the discipline to teach themselves anything. Which is why going to school & universities is a good idea for many.
And I would expect good students that are naturally curious to be using LLM-based tools a lot to satisfy their curiosity. And I would hope good teachers would encourage that instead of just trying to fit students into some straitjacket based on whatever the bare-minimum standards say they should know, which of course is what a lot of teaching boils down to.
Where will all those new students find a job if:
- they did not learn much because the LLM did the work for them
- there are no new jobs because we are more productive?
This assumes, of course, an institution is actively trying to raise the academic bar of its student population. Most schools are emphatically not trying to do this and are focused more on just increasing enrollment, getting more tax dollars, and hiring more administrators.
But I have a feeling that if it's that easy to cheat through life, then it's just as easy to eliminate that job being performed by a human and negate the need to worry about cheating. So I have a feeling it will work for only a very short amount of time.
Another feeling I have is that we'll see mandatory in-person exams involving a locked-down terminal presenting the user with a problem to solve. Might be a whole service industry waiting to be born - verifying that the human on the other end is real and competent. Of course, anything is corruptible. Weird future of rapidly diminishing trust.
To be clear the students almost certainly aren't using ChatGPT to write their thesis for them from scratch, but rather to edit and improve their bad first drafts.
Never in the history of humans have we been content with stagnation. The people who used to do manual calculations soon joined the ranks of people using calculators and we lapped up everything they could create.
This time around is no exception. We still have an infinite number of goals we can envision a desire for. If you could afford an infinite number of people you would still hire them. But Shopify especially is not in the greatest place right now. They've just come off the COVID wind-down and now tariffs are beating down their market further. They have to be very careful with their resources for the time being.
> - they did not learn much because the LLM did the work for them
If companies are using LLMs as suggested earlier, they will find jobs operating LLMs. They're well poised for it, being the utmost experts in using them.
> - there are no new jobs because we are more productive?
More productivity means more jobs are required. But we are entering an age where productivity is bound to be on the decline. A recession was likely inevitable anyway and the political sphere is making it all but a certainty. That is going to make finding a job hard. But for what scant few jobs remain, won't they be using LLMs?
1. During final exams, directly in front of professors: Check
2. During group projects, with potentially unaligned team-members: Check
3. By professors using "detection" selectively to target students based on prohibited grounds: Check
4. By professors for marking and feedback: Check.
And so, the problem is clearly the institutions. Because none of these are real problems unless you stopped giving a shit.
Good luck when you point out that your marked feedback is a hallucination and the professor targets you for pointing that out.
How do you do that if you can't do arithmetic by hand though? At most, when working with integers, you can count digits to check if the order of magnitude is correct.
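A rough sketch of that kind of check (numbers invented for illustration):

    4,137 × 286 ≈ (4×10^3) × (3×10^2) ≈ 1.2×10^6
    calculator shows 1,183,182 → seven digits, plausible
    calculator shows 118,318   → off by a factor of ten, retype it

That estimate still takes a bit of mental arithmetic, just not the long-hand algorithm.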
This model won't work for subjects that rely on students writing reports. But yes, universities frequently accept that failure rates for some courses will be high, especially for engineering and the sciences.
most normal-difficulty undergraduate assignments are now doable reliably by AI with little to no human oversight. this includes both programming and mathematical problem sets.
for harder problem sets that require some insight, or very unstructured larger-scale programming projects, it wouldn't work so reliably.
but easier homework assignments serve a valid purpose to check understanding, and now they are no longer viable.
I'm not saying some degree of specialization isn't desirable in the world, just that it's overrated.
Do we need a modern computer?
"students must learn to avoid using unverified GenAI output. ... misuse of AI may also constitute academic fraud and violate their university’s code of conduct."
This context is important: this taxonomy emerged from neither artificial intelligence nor cognitive science. So its levels are unlikely to map to how ML/AI people assess the difficulty of various categories of tasks.
Generative models are, by design, fast (and often pretty good) at generation (creation), but this isn't the same standard that Bloom had in mind with his "creation" category. Bloom's taxonomy might be better described as a hierarchy in which proper creation draws upon all the layers below it: understanding, application, analysis, and evaluation.
That is if learning-to-become-a-contributing-member-of-society doesn't become obsolete anyway.
Split the entire coursework into two parts:
part 1 - students are prohibited from using AI. Have the exams be on physical paper rather than digital ones requiring use of a laptop/computer. I know this adds burden to correcting and evaluating these answers, but I think this provides a raw answer to someone's understanding of the concepts being taught in the class.
part 2 - students are allowed, and even encouraged, to use LLMs. They are evaluated based on the overall quality of the answer, keeping in mind that a non-zero portion of it was generated using an LLM. Here the credit should be given to the factual correctness of the answer (and to whether the student is capable of verifying the LLM output).
Have the final grade be some form of weighted average of a student's scores in these 2 parts.
note: This is a raw thought that just occurred to me while reading this thread, and I have not had the chance to ruminate on it.
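To make the weighting concrete, a minimal sketch of what that could look like (the 0.6/0.4 weights are arbitrary placeholders, not a recommendation):

    final = 0.6 × score(part 1, no AI, on paper) + 0.4 × score(part 2, AI allowed, graded on correctness/verification)
    e.g. 70 in part 1 and 90 in part 2 → 0.6×70 + 0.4×90 = 78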
Another problem is there is so much in technology, I just can't remember everything after years of exposure to so many spaces. Not being able to recall information you used to know is frustrating and having AI to remind you of details is very useful. I see it as an amplifying tool, not a replacement for knowledge. I'm sure there are some prolific note taking memory tricksters out there but I'm not one of them.
I frequently forget information over time and it's nice to have a tool to remind me of how UDP, RTP, and SIP routing work when I haven't been in the comm or network space for a while.
FWIW, exams testing rote learning without the ability to look up things would have been much easier. It was really stressful to sit down and make major changes to your project to satisfy new unit tests, which often targeted edge cases and big O complexity to crash your code.
Keep your responses short and to the point. Use the Socratic method when appropriate.
When enumerating assumptions, put them in a numbered list. Make the list items very short: full sentences not needed there.
---
I was trying to clone Gemini's "thinking", which I often found more useful than its actual output! I failed, but the result is interesting, and somewhat useful.
GPT 4o came up with the prompt. I was surprised by "never use friendly language", until I realized that avoiding hurting the user's feelings would prevent the model from telling the truth. So it seems to be necessary...
It's quite unpleasant to interact with, though. Gemini solves this problem by doing the "thinking" in a hidden box, and then presenting it to the user in soft language.
Most students would find getting their hands dirty in this way more valuable than reading about something from start to end.
What percentage of the dumbest will be boosted? What makes a person dumb? If they are productive and friendly, isn't that more important?
What percentage of the dumbest will fall farther or abandon heavy learning even earlier?
Were they wrong? People who rely too much on a calculator don't develop strong math muscles that can be used in more advanced math. Identifying patterns in numbers and seeing when certain tricks can be used to solve a problem (versus when they just make a problem worse) is a skill that ends up being beyond their ability to develop.
They really should modify it to take out that whole loop where it apologizes, claims to recognize its mistake, and then continues to make the mistake that it claimed to recognize.
My wife is surprisingly good at remembering routes, she'll use the GPS the first time, but generally remembers the route after that. She still isn't good at knowing which direction is east vs west or north/south, but neither am I.
This is not the root cause, it's a side effect.
Students cheat because of anxiety. Anxiety is driven by grades, because grades determine failure. Detecting cheating is solving the wrong problem. If most of the grades did not directly affect failure, students wouldn't be pressured to cheat. Evaluation and grades have two purposes:
1. Determine grade of qualification i.e result of education (sometimes called "summative")
2. Identify weaknesses to aid in and optimise learning (sometimes called "formative")
The problem arises when these two are conflated, either by combining them and littering them throughout a course, or when there is an imbalance in the ratio between them, i.e. too much of #1. Then the pressure to cheat arises, the measure becomes the target, and the focus on learning is compromised. This is not a new problem; students already waste time trying to undermine grades through suboptimal learning activities like "cramming".
The funny thing is that everyone already knows how to solve cheating: controlled examination, which is practical to implement for #1, so long as you don't have a disruptive number of exams filling that purpose. This is even done in sci-fi: Spock takes a "memory test" in 2286 on Vulcan as a kind of "final exam" in a controlled environment with challenges from computers - it's still using a combination of proxy knowledge-based questions and puzzles, but that doesn't matter, because it's a controlled environment.
What's needed is a separation of, and balance between, summative and formative grading. Then preventing cheating is almost easy, and students can focus on learning... cheating at tests throughout the course would actually have a negative effect on their final grade, because they would be undermining their own learning by breaking their own REPL.
LLMs have only increased the pressure, and this may end up being a positive thing for education.
Honestly it seems like we're doing both most of the time. It's hard to only optimize resources for boosting the dumbest without taking them away from the brightest.
Not really. While doing something to ensure that students are actually learning is important, plenty of the smartest people still don't always test well. End of semester exams also tend to not be the best way to tell if people are learning along the way and then fall off part way through for whatever reason.
Realistically it comes down to the idea that being an educated individual that knows how to think is important for being successful, and learning in school is the only way we know to optimize for that, even if it's likely not the most efficient way to do so.
My folk theory of education is that there is a sequence you need to complete to truly master a topic.
Step 1: You start with receptive learning, where you take in information provided to you by a teacher, book, AI or other resource. This doesn't have to be totally passive. For example, it could take the form of Socratic questioning to guide you towards an understanding.
Step 2: Then you digest the material. You connect it to what you already know. You play with the ideas. This can happen in an internal monologue as you read a textbook, in a question and answer period after a lecture, in a study group conversation, when you review your notes, or as you complete homework questions.
Step 3: Finally, you practice applying the knowledge. At this stage, you are testing the understanding and intuition you developed during digestion. This is where homework assignments, quizzes, and tests are key.
This cycle can occur over a full semester, but it can also occur as you read a single textbook paragraph. First, you read (step 1). Then you stop and think about what this means and how it connects to what you previously read. You make up an imaginary situation and think about what it implies (step 2). Then you work out a practice problem (step 3).
Note that it is iterative. If you discover in step 3 a misunderstanding, you may repeat the loop with an emphasis on your confusion.
I think AI can be extremely helpful in all three stages of learning--in particular, for steps 2 and 3. It's invaluable to have quick feedback at step 3 to understand if you are on the right trail. It doesn't make sense to wait for feedback until a teacher's aide gets around to grading your HW if you can get feedback right now with AI.
The danger is if you don't give yourself a chance to struggle through step 3 before getting feedback. The amount of struggle that is appropriate will vary and is a subtle question.
Philosophers, mathematicians, and physicists in training obviously need to learn to be comfortable finding their way through hairy problems without any external source of truth to guide them. But this is a useful muscle that arguably everyone should exercise to some extent. On the other hand, the majority of learning for the majority of students is arguably more about mastering a body of knowledge than developing sheer brain power.
Ultimately, you have to take charge of your own learning. AI is a wonderful learning tool if used thoughtfully and with discipline.
I can't even imagine how learning is impacted by the (ab)use of AI.
It would be pretty neat if there was an LLM that guides you towards the right answer without giving it to you. Asking questions and possibly giving small hints along the way.
I would happily support automation to free myself, and others, from having to work full-time. But we live in a capitalist society, not Star Trek. Automation doesn't free up people from having to work; it only places them in financial crisis.
Where is the existing work these people would take up? If it doesn't exist yet, then how do you suppose people will support themselves in the meantime?
What if the new work that is created pays less? Do you think people should just accept being made obsolete to take up lower paying jobs?
The social contract lives and dies by what the populace is willing to accept. If you push people into a corner by threatening their quality of life, don't be surprised if they push back.
I think you can prompt them to do that, but that doesn't solve the issue of people not being willing to learn versus just jumping to the answer, unless someone made a school-approved one that forced it to behave that way.
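For what it's worth, a rough sketch of what such a prompt could look like (wording invented here, not taken from any school-approved tool):

    You are a tutor. Never give the final answer or write the solution yourself.
    Respond with questions, small hints, or pointers to the relevant concept.
    If the student asks for the answer outright, ask what they have tried so far
    and which step they are stuck on.

It still depends entirely on the student not just opening a vanilla chat in another tab, of course.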
At first glance, knowing how to spell a word and understanding a word should be perfectly orthogonal. How could it not be? Saying that it is not so would imply that civilizations without writing would have no thought or could not communicate through words, which is preposterous.
And yet, once we start delegating our thinking, our spelling and our writing to external black boxes, our grasp on those words and our grasp of those words become weaker. To the point that knowing how to spell a word might become a much bigger part, relatively, of our encounter with those words, as we are doing less conceptual thinking about those words and their meaning.
And therefore, I argue that, in a not too far-fetched extremum, understanding a word and knowing how to spell a word might not be fully orthogonal.
High schools are a long way off from that level of education. I took AP CS in high school and it was a joke by comparison. Of course YMMV. The best high school CS course might be better than the worst university-level offerings. We would always have know-it-all students who learned Java in high school. They either appreciated the new perspective on the fundamentals and did well, or they blew off the class and failed when it got harder.
Same with education, for example you can financially force people to learn, say, computer science instead of liberal arts. Even when they don't like it. It's harder, less efficient, but possible.
If I have accidentally lifted too much weight, I want a spotter that can immediately give me relief. But yes, you're right. If I am always getting a spot, then I'm not really lifting my own weight and indeed not making any gains.
I think the question was, "I'm stuck on this code, and I don't see an obvious answer." Now the lazy student is going to ask for help prematurely. But that doesn't preclude ChatGPT's use to only the lazy.
If I'm stuck and I'm asking for insight, I think it's brilliant that ChatGPT can act as a spotter and give some immediate relief. No different from asking a tutor. Yes, maybe ChatGPT gives away the whole answer when all you needed was a hint. That's the difference between pure human intelligence and the glorified search engine that is AI.
And quite probably, this could be a really awesome way in which AI learning models could evolve in the context of education. Maybe ChatGPT doesn't give you the whole answer, instead it can just give you the hint you need to consider moving forward.
Microsoft put out a demo/video of a grad student using Copilot in very much this way. Basically the student was asking questions and Copilot was giving answers that were in the frame of "did you think about this approach?" or "consider that there are other possibilities", etc. Granted, mostly a marketing vibe from MSFT, but this really demonstrates a vision for using LLMs as a means for true learning, not just spoiling the answer.
I run it locally and read the raw thought process; I find it very useful (it can be ruthless at times) to see this before it tacks on the friendliness.
Then you can see its planning process for adding the warmth/friendliness: "but the user seems proud of... so I need to acknowledge..."
I don't think Gemini's "thoughts" are the raw CoT process, they're summarized / cleaned up by a small model before returned to you (same as OpenAI models).
I concur that semantics a) have overtaken this thread, and b) are part of my complaint with OP when they claimed all historical progress was the result of specialisation.
The one chapter that stood out very clearly, especially in a college setting, was how inefficient flash cards were compared to other methods, like taking a practice exam instead.
There are a lot of executive summaries on the book and I've posted comments in support of their science backed methods as well.
It's also something I'm personally testing myself this year regarding programming since I've had great success doing their methods in other facets of my life.
I agree that it's not that different from asking a tutor, though, assuming it's a personal tutor whom you are paying so they will never refuse to answer. I've never had access to someone like that, but I can totally believe that if I did, I would graduate without learning much.
Back to ChatGPT: during my college times I've had plenty of times when I was really struggling, I remember feeling extremely frustrated when my projects would not work, and spending long hours in the labs. I was able to solve this myself, without any outside help, be it tutors or AI - and I think this was the most important part of my education, probably at least as important as all the lectures I went to. As they say, "no pain, no gain".
That said, our discussion is kinda useless - it's not like we can convince college students to stop using AI. The bad colleges will pass everyone (this already happens), the good colleges will adapt (probably by assigning less weight to homework and more weight to in-class exams). Students will have another reason to fail the class: in addition to the classic "I spent the whole semester partying/playing computer games instead of studying", they will also say "I never opened the books and had ChatGPT do all the assignments for me, why am I failing the tests?"
Then they suddenly become kinda stricter in high school, where your results decide if you can go to university and which.
But I've been to one of the top technical universities and compared to Italy it was very easy. It was obvious the goal was to try and have everyone pass. Still people managed to fail or drop out anyway, although not in the dramatic numbers I saw in Italy for math exams.
I also worked for the faculty for the better part of my university studies, and I know that ultimately changing the status quo is most likely impractical. There are not enough resources to continuously grade open-ended assignments for so many students and they probably need the pedagogical pressure to learn fundamentals. Still makes me a bit bitter from time to time.
Everywhere in human society. A "job" is literally when you do something that someone needs, so that in exchange they do something that you need. And in human society, because of AI, neither people's needs, nor the ability to satisfy them, nor the possibility of exchanging them will suddenly disappear. So the jobs will be everywhere.
>Do you think people should just accept being made obsolete to take up lower paying jobs?
Let's start with the fact that on average all jobs will become higher paying because the amount of goods produced (and distributed) will increase. So the more correct answer to this question is "What choice will they have?".
AI will make the masses richer, so society will not abandon it. Subsidize their obsolete well-paid jobs to make society poorer? Why would anyone do that? So the people replaced by AI will go to work in other jobs. Sometimes higher paying, sometimes lower.
If we are talking about real solutions, the best alternative they will have is to form a cult like the Amish did (God bless America and capitalism), in which they can pretend that AI does not exist and live as before. The only question in this case is whether they will find willing participants, because for most, participation in such a cult will mean losing the increase in income provided by the introduction of AI.
And if all needs are already satisfied... Why worry about work? The purpose of work is to satisfy needs. If needs are satisfied, there is no need for work.
Maybe it’s fewer people, yes, but it’ll take quite a leap forward in AI ability to replace all the specialists we will continue to require, especially as the less-able AI makes messes that need to be cleaned up.
LLMs can't produce intellectual rigour. They get fine details wrong every time. So indeed, using ChatGPT to do your reasoning for you produces inferior results. By normalising non-rigorous yet correct-sounding answers, we drive down expectations.
To take a concrete example: if you tell a student to implement memcpy with ChatGPT, it will just give an answer that copies uint64-sized words. The student has not thought from first principles (copy byte by byte? improve performance? how to handle alignment?). This lack of insight, in return for immediate gratification, will bite later.
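To make the contrast concrete, here is a hand-written sketch (mine, not ChatGPT output) of the two approaches; the second one is where the questions about alignment and leftover tail bytes actually show up:

    #include <stddef.h>
    #include <stdint.h>
    #include <string.h>

    /* First-principles version: copy one byte at a time. Slow, but obviously correct. */
    void *memcpy_naive(void *dst, const void *src, size_t n)
    {
        unsigned char *d = dst;
        const unsigned char *s = src;
        while (n--)
            *d++ = *s++;
        return dst;
    }

    /* Word-at-a-time version: faster, but now you have to think about alignment
       (dereferencing a misaligned uint64_t* is undefined behaviour on some targets,
       hence the memcpy-based loads/stores, which compilers turn into plain word
       moves) and about the 0-7 tail bytes when n isn't a multiple of 8. */
    void *memcpy_words(void *dst, const void *src, size_t n)
    {
        unsigned char *d = dst;
        const unsigned char *s = src;
        while (n >= sizeof(uint64_t)) {
            uint64_t w;
            memcpy(&w, s, sizeof w);   /* safe unaligned load  */
            memcpy(d, &w, sizeof w);   /* safe unaligned store */
            s += sizeof w;
            d += sizeof w;
            n -= sizeof w;
        }
        while (n--)                    /* remaining tail bytes */
            *d++ = *s++;
        return dst;
    }

Getting from the first version to the second yourself is exactly where the insight lies; pasting the assignment into a chatbot skips it.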
It's maybe not a problem for non-STEM fields where this kind of rigour and insight is not required to excel. But in STEM fields, we write programs and prove theorems for insight. And that insight, and the process of obtaining it, is gone with AI.
The students who want to learn, will learn. For the students who just want the paper so they can apply for jobs, we ought to give them their diploma on the first day of class, so they can stop wasting everybody's time.
It does seem similar in structure to Gemini 2.0's output format with the nested bullets though, so I have to assume they trained on synthetic examples.
This remains to be seen. Inequality is worse now than it was 20 years ago despite technology progressing. This is true across income and wealth.
No, I am not assuming that. "Together" is not required. It's simply that the combination of needs, the ability to satisfy them, and the ability to exchange creates jobs. And none of this will be thwarted by AI.
>More likely is that the property owning class acquire AI robots to provide cheap labor
Doesn't matter. Either your everyday person will be able to afford this cheap AI labor for themselves (in which case there is no problem that needs solving), or, if AI labor is unaffordable for them, they will create jobs for other people (there will be jobs on the market everywhere).
No, that's just logic. AI doesn't thwart the ability of people to satisfy their needs (getting richer).
>Inequality is worse now than it was 20 years ago despite technology progressing.
And people are still richer than ever before (setting aside the policies that thwart society's ability to satisfy each other's needs, which have nothing to do with technology).
There is a measurable decrease in critical thinking skills when people consistently offload the thinking about a problem to an LLM. This is where the primary difference is between solving problems with an LLM vs having it solved for you with an LLM. And, that is cause for concern.
Two studies on impact of LLMs and generative AI on critical thinking:
https://www.mdpi.com/2075-4698/15/1/6
https://slejournal.springeropen.com/articles/10.1186/s40561-...
Failure to do the homework made class time useless, the material was difficult, and the instructors were willing to give out failing grades. So doing the homework was vital even when it wasn't graded. Perhaps that can also work well here in the context of AI, at least for some subjects.
Students being unable or unwilling to learn that knowledge or acquire those skills should mean they don't get that degree, they don't get those jobs, and they go work in fast food or a warehouse.
"Just give them the degree" is quite literally the worst possible solution to the problem.
When I was in college, students were paying for homework solved by other students, teachers, and so on.
In the article "Evaluating" is marked at 5.5% where creating is 39.8%. Students are still evaluating the answers.
My point is that just got easier to go in any direction. The distribution range is wider, is the mean changing?
(And I am aware of the irony in failing to communicate when mentioning that studying writing is important to be good at communication.) Maybe I should have also cited this part:
> writing as a proxy for what actually matters, which is thinking.
In my opinion, writing is important not (only) as a proxy for thinking, but as a direct form of communicating ideas. (Also applies to other forms of communication though.)