ratedgene No.42129665
I was talking at length today with a teacher who works with me about the impact LLMs are having on students' attitudes toward learning.

When I was young, I refused to learn geography because we had map applications; I could just look it up. I did the same with anything I could, offloading the cognitive overhead to something better -- I think this is something we all do, consciously or not.

That attitude seems common among students now: "Why do I need to do this when an LLM can just do it better?"

This led us to two questions:

1. How do you construct challenges that AI can't solve?

2. What skills will humans need next?

We talked about "critical thinking", "creative problem solving", and "comprehension of complex systems" as the next step, but even as we discussed them, we wondered: how long until newer models or workflows catch up?

I think this should lead to a fundamental shift in how we work WITH AI in every facet of education. How can a human be a facilitator and shepherd of these workflows in a way that complements the model and grows the human?

I also think there should be more education on what these models are and how they work, taught as an introductory course to students of all ages, with particular attention to the trustworthiness of their output.
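To make that concrete, here's a minimal sketch (in Python; the bigram table and its probabilities are invented for the demo) of the kind of intro-course example I have in mind: a toy next-word predictor. Real LLMs do the same thing at vastly larger scale -- they generate statistically plausible continuations rather than looking up verified facts, which is exactly why their output needs checking:

    import random

    # Toy next-word model: each word maps to possible continuations with
    # probabilities (all invented for this demo). An LLM does the same
    # thing at enormous scale: predict a plausible next token.
    bigram = {
        "the":     [("capital", 0.6), ("largest", 0.4)],
        "capital": [("of", 1.0)],
        "of":      [("France", 0.7), ("Spain", 0.3)],
        "France":  [("is", 1.0)],
        "Spain":   [("is", 1.0)],
        "is":      [("Paris", 0.6), ("Madrid", 0.2), ("Lyon", 0.2)],
    }

    def next_word(word):
        # Sample one continuation according to its probability.
        choices, weights = zip(*bigram[word])
        return random.choices(choices, weights=weights)[0]

    word, sentence = "the", ["the"]
    while word in bigram:  # stop when no continuation is known
        word = next_word(word)
        sentence.append(word)
    print(" ".join(sentence))

Run it a few times: because the model only looks one word back, it will happily print "the capital of Spain is Paris". Fluent output and correct output are two different things, and that's the lesson I'd want every student to internalize early.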

We'll need to rethink education, and what we really want from humans, to figure out how this makes sense in the face of the traditional rituals of education.

visarga No.42137623
> How can a human be a facilitator and shepherd of these workflows in a way that complements the model and grows the human?

Humans must use what the AI doesn't have: physicality. We have hands and feet; we can do things in the world, while AI just responds to our prompts from the cloud. So the human will have to test ideas in reality -- validate them, run experiments. AI can ideate; we need to use our superior access to the world and our lifelong context to keep it on the right track.

We also have another unique quality: we can be punished; we are accountable. AI cannot be meaningfully punished for wrongdoing -- what can you do to an algorithm? But a human can assume responsibility for an AI in critical scenarios. When there is a lot of value at stake, we need someone who can be accountable for the outcome.