
323 points timbilt | 1 comment
ratedgene No.42129665
I was talking at length today with a teacher who works with me about the impact LLMs are having on students' attitudes toward learning.

When I was young, I refused to learn geography because we had map applications. I could just look it up. I did the same for anything I could, offload the cognitive overhead to something better -- I think this is something we all do consciously or not.

That attitude seems to be the case for students now, "Why do I need to do this when an LLM can just do it better?"

This led us to the conclusion:

1. How do you construct challenges that AI can't solve?

2. What skills will humans need next?

We talked about "critical thinking", "creative problem solving", and "comprehension of complex systems" as the next step, but even as we discussed this, we wondered: how long will it be until more models or workflows catch up?

I think this should lead to a fundamental shift in how we work WITH AI in every facet of education. How can a human be a facilitator and shepherd of the workflows in such a way that can complement the model and grow the human?

I also think there should be more education around basic models and how they work as an introductory course to students of all ages, specifically around the trustworthiness of output from these models.

We'll need to rethink education and what we really desire from humans to figure out how this makes sense in the face of traditional rituals of education.

Der_Einzige No.42130036
The correct answer (and you'd see it if folks paid attention to the constant LinkedIn "AI researcher/ML Engineer job postings are up 10% week over week" banners) is to aggressively reorient education in society toward how to use AI systems.

This rustles a TON of feathers to even broach as a topic, but it's the only correct one. The AI engineer will eat everything, including your educational system, in 5-10 years. You can either swim against the current and be eaten by the sharks, or swim with it and survive longer. I'll make sure my kids are learning AI-related concepts from the very beginning.

This was also the correct way to handle the calculator era. We should have made most people get very good at using calculators and doing "computational math", since that's the vast majority of real-world math most people have to do. Imagine a world where statistics was primarily taught with Excel/R instead of on paper. It'd be better, I promise you!
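To make the "computational statistics" framing concrete, here is a minimal sketch of the kind of exercise that workflow implies: a 95% confidence interval for a mean, computed by tool rather than by hand (Python standing in for the Excel/R workflow the comment imagines; the sample data is made up for illustration):

```python
import math
import statistics

# Made-up sample measurements, purely illustrative.
data = [12.1, 11.8, 12.5, 12.0, 11.9, 12.3, 12.2, 11.7]
n = len(data)

mean = statistics.mean(data)
sem = statistics.stdev(data) / math.sqrt(n)  # standard error of the mean
t = 2.365  # two-sided 95% t critical value for n - 1 = 7 degrees of freedom

lo, hi = mean - t * sem, mean + t * sem
print(f"{mean:.2f} +/- {t * sem:.2f}  ->  [{lo:.2f}, {hi:.2f}]")
```

The point of the exercise is interpreting the interval, not grinding through the square roots by hand.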

But instead, we have to live in a world of luddites and authoritarians, who invent wonderful miracle tools and then tell you not to use them because you must struggle. The tyrant in their mind must be inflicted upon those under them!

It is far better to spend one class period teaching the rote long-multiplication technique and then focus on word problems and applications of it (via calculator) than to literally steal children's time and make them hate math by forcing them through times tables again and again. Luddites are time thieves.

achierius No.42131030
> The correct answer, and you'd see it if folks paid attention to the constant linkedin "AI researcher/ML Engineer job postings are up 10% week over week" banners

This does not really lend great credence to the rest of your argument. Yes, LinkedIn is hyping the latest job trend. But study after study shows that the bulk of engineers are not doing ML/AI work, even after a year of LinkedIn putting up those banners -- and if there were even 2 ML/AI jobs at the start of such a period, then sustained 10% week-over-week growth would imply that the entire population of the earth would be in the field within a few years.

Clearly that is not the case. So either those banners are total lies, or your interpretation of exponential growth (if something grows exponentially for a bit, it must keep growing exponentially forever) is disconnected from reality. And at that point, it's worth asking: what other assumptions about exponential growth might be wrong in this worldview?
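As a sanity check on that arithmetic (the starting head count of 2 and the population figure are illustrative assumptions, not figures from the thread):

```python
import math

# How long sustained 10% week-over-week growth takes to exceed
# the world population, starting from a tiny base.
start = 2                  # assumed starting number of ML/AI jobs
rate = 1.10                # 10% growth per week
world_population = 8.1e9   # rough 2024 figure, illustrative

# Smallest integer number of weeks with start * rate**weeks >= world_population.
weeks = math.ceil(math.log(world_population / start) / math.log(rate))
print(weeks)       # number of weeks
print(weeks / 52)  # roughly how many years that is
```

The exponent overwhelms any plausible starting base within a handful of years, which is exactly why a banner's short-term growth rate can't be extrapolated.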

Perhaps by "AI engineer" you (like many publications nowadays) just mean "someone who works with computers"? In that case I could understand your point.