When I was young, I refused to learn geography because we had map applications; I could just look anything up. I did the same wherever I could, offloading the cognitive overhead to something better. I think this is something we all do, consciously or not.
That attitude seems common among students now: "Why do I need to do this when an LLM can just do it better?"
This led us to two questions:
1. How do you construct challenges that AI can't solve?
2. What skills will humans need next?
We talked about "critical thinking", "creative problem solving", and "comprehension of complex systems" as the next step, but even as we discussed them, the question lingered: how long until newer models or workflows catch up?
I think this calls for a fundamental shift in how we work WITH AI in every facet of education. How can a human act as a facilitator and shepherd of these workflows, in a way that complements the model while growing the human?
I also think there should be an introductory course for students of all ages on how these models actually work, focused especially on how much their output can be trusted.
We'll need to rethink education, and what we really want from humans, to figure out how any of this makes sense in the face of the traditional rituals of schooling.