Do you think we are helping K-12 students by letting AI do hallucinated thinking for them? What incredible "AI skills" will they be missing out on if we restrict exposure? How to type things into a text box and adjust your question until you get what you want?
We are creating a massive competency gap by treating AI exposure as somehow more dangerous than social media, which we've already allowed to reshape adolescent development with inarguably negative educational value.
AI is already redefining job requirements and academic expectations. Students who first encounter these tools in college will be competing against peers who've had years to develop working usage patterns and build domain-specific applications.
It is not that superficial. It's more like having multiple competent search engines instead of a Google monopoly, and learning which is good for what. We already do that with software, where we mix and match. How many people using ChatGPT have any idea of the model nuances? They still just type in the box and get answers. Loads of people don't even know about Claude. Give them three separate apps with the exact same chat mode and they will figure out which works better for what - that doesn't take years of use while the brain is still developing. More like a few weeks for an adult.
> knowing when to use it vs when not to, developing judgment about AI content that goes beyond simple fact retrieval
Yes, and that requires growing up exercising independent critical thought, not getting used to accepting AI output at face value - which is what is happening in schools all around us, right now.
> We are creating a massive competency gap by treating AI exposure as somehow more dangerous than social media, which we've already allowed to reshape adolescent development with inarguably negative educational value.
One bad thing doesn't justify another.
> AI is already redefining job requirements and academic expectations. Students who first encounter these tools in college will be competing against peers who've had years to develop working usage patterns and build domain-specific applications.
And what are those usage patterns they have had years to develop in school? Typing in a chat box? Sure, some enterprising and talented students may go beyond that, but frankly, if you have those traits, you'll beat the crap out of mediocre competition in no time. We have two sets of people - those who are "software developers" (who think like one, whether they actually work as one or not) and the rest, who just want to ask a question and move on. Are we saying those two groups will converge?
If so, it depends on the state of AI at that point. If it is more or less the same but just better, with jobs requiring a more agentic type of automation, sure, that can require some learning on how to use it. But that still means breaking a problem down into discrete steps and managing the feedback loop. That requires critical thinking, and you are still typing instructions in plain text. Also, you need to be knowledgeable enough to figure out where the AI messed up.
If we are talking about much more advanced AI approaching AGI levels, the jobs we are worrying about will be gone, and you'll have basically a handful of advanced AI-centric jobs left for which very few will qualify anyway. That is a much bigger problem, and it can't be fixed by just letting more people use AI.