
222 points dougb5 | 5 comments
justaguitarist No.45126131
I'm a sysadmin for a public school district, and the admins are working on rolling out Gemini for students and staff. I've shared all the studies I can find on cognitive decline associated with LLM use, but it seems to be falling on deaf ears.
EvanAnderson No.45131087
I do contract network admin work for a K-12 school district and I'm hearing the same thing from the in-house sysadmin about his administration staff. The district superintendent is very enthusiastic about getting LLM tools into the hands of students and teachers. The in-house sysadmin and I are both horrified at what we're enabling.
1. Jimmc414 No.45134620
Respectfully, do you think you are helping K-12 students by withholding exposure to an AI world they will soon be expected to be competitive in?
2. noisy_boy No.45136587
When Google came around, it took me about ten minutes to figure out how to use it. Further, when I saw things in the search results that didn't make sense or were plainly wrong, I had the pre-Google critical faculty to question them.

Do you think we are helping K-12 students by letting AI do hallucinated thinking for them? What incredible "AI skills" will they miss out on if we restrict exposure? How to type things into a text box and adjust the question until you get what you want?

3. Jimmc414 No.45139887
The Google comparison is superficial. The skill needed is understanding what different modes of AI can and can't do across different domains, knowing when to use it versus when not to, and developing judgment about AI content that goes beyond simple fact retrieval.

We are creating a massive competency gap by treating AI exposure as somehow more dangerous than social media, which we've already allowed to reshape adolescent development despite its inarguably negative educational value.

AI is already redefining job requirements and academic expectations. Students who first encounter these tools in college will be competing against peers who've had years to develop working usage patterns and build domain-specific applications.

4. EvanAnderson No.45143265
I'm all for students learning the technology that makes LLMs work. That would go a long way, I suspect, toward students understanding what problems LLMs are a good fit for.

Likewise, having students write software that calls an LLM API sounds like it would give them insight into what LLMs are good for.
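
For illustration, a minimal sketch of what that kind of exercise might look like, assuming the OpenAI Python SDK and an API key in the environment; the model name and prompts are placeholders, not an endorsement of any particular product:

    # Minimal sketch: a student program calling an LLM as an API.
    # Assumes `pip install openai` and OPENAI_API_KEY set in the
    # environment; "gpt-4o-mini" is a placeholder model name.
    from openai import OpenAI

    client = OpenAI()

    response = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[
            {"role": "system", "content": "Answer in one short paragraph."},
            {"role": "user", "content": "Explain what a hallucination is in an LLM."},
        ],
    )
    print(response.choices[0].message.content)

Even a toy like this makes the failure modes concrete: the student sees the model confidently answer whatever it is asked, and then has to decide how to check the answer.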

The act of just "conversing" with an LLM doesn't seem like much of a skill. I find it hard to reconcile the idea that one needs training or experience to use an LLM with how LLM products are being advertised to the "everyman".

I simply don't buy that there's skill associated with using LLMs as an end user beyond the skills that you'd use for checking the validity of any other source. (Granted, everybody is pretty terrible at that anyway.) If anything, the LLM should be treated with more skepticism and subjected to more fact checking than human-created or curated sources.

The level of public LLM adoption tells me that they're not hard to use. The companies who make them are doing their best to make them useful for everyone. Any "moat" created by having "skills" associated with using an LLM will be drained. The companies want them to be useful to everyone, not just to people with "skills".

re: social media

Personally, I see "social media" as vastly more deleterious than LLMs alone. ("Social media" and LLMs, together, are a force-multiplier of badness.)

I already don't think there should be a place in schools for "social media" as a curricular subject. I'd be appalled if administrators approached "social media" as a part of the curriculum with the enthusiasm I'm seeing for LLMs.

5. noisy_boy No.45145087
> The Google comparison is superficial. The skill needed is understanding what different modes of AI can and can't do across different domains, knowing when to use it versus when not to, and developing judgment about AI content that goes beyond simple fact retrieval.

It is not that superficial. It's more like actually having multiple competent search engines instead of a Google monopoly, and learning which is good for what. We already do that with software, where we mix and match. How many people using ChatGPT have any idea of model nuances? They still just type in the box and get answers. Loads of people don't even know about Claude. Give them three separate apps with the exact same chat mode and they will figure out which works better for what. That doesn't take exposure during the years when the brain is still developing; more like a few weeks for an adult.

> knowing when to use it versus when not to, and developing judgment about AI content that goes beyond simple fact retrieval

Yes, that requires growing up with independent critical thought, not getting used to accepting AI results at face value - which is what is happening all around us in schools, right now.

> We are creating a massive competency gap by treating AI exposure as somehow more dangerous than social media, which we've already allowed to reshape adolescent development despite its inarguably negative educational value.

One bad thing doesn't justify another.

> AI is already redefining job requirements and academic expectations. Students who first encounter these tools in college will be competing against peers who've had years to develop working usage patterns and build domain-specific applications.

And what are those usage patterns they will have had years to develop in school? Typing in a chat box? Sure, some enterprising and talented students may go beyond that, but frankly, if you have those traits you'll beat the crap out of the mediocre competition in no time. We have two sets of people: those who think like software developers (whether or not they actually work as one) and the rest, who just want to ask a question and move on. Are we saying those two groups will converge?

If so, it depends on the state of the AI at that point. If it is more or less the same as today, just better, with jobs requiring more agentic automation, then sure, that can require some learning. But that still comes down to breaking a problem into discrete steps and managing the feedback loop, which requires critical thinking, and you are still typing instructions in plain text. You also need to be knowledgeable enough to figure out where the AI messed up.
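
To make "discrete steps and a feedback loop" concrete, here is a hypothetical sketch in Python; ask_llm and check are stand-ins supplied by the caller, not any real library's API:

    # Hypothetical agentic feedback loop. The caller supplies the model
    # call (ask_llm) and the validator (check) -- e.g. run the tests or
    # compare against sources. These are placeholders for illustration.
    from typing import Callable, Tuple

    def solve(task: str,
              ask_llm: Callable[[str], str],
              check: Callable[[str], Tuple[bool, str]],
              max_rounds: int = 3) -> str:
        draft = ask_llm(f"Attempt this task: {task}")
        for _ in range(max_rounds):
            ok, feedback = check(draft)
            if ok:
                return draft
            # Feed the failure back in; a human still judges the outcome.
            draft = ask_llm(f"Task: {task}\n"
                            f"Previous attempt was rejected: {feedback}\n"
                            f"Revise and try again.")
        raise RuntimeError("no passing draft within the round limit")

The hard parts are exactly the ones that predate LLMs: writing a check() worth trusting, and knowing when to stop.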

If we are talking about much more advanced, AGI-level AI, the jobs we are worrying about will be gone, and you'll be left with basically a handful of advanced AI-centric jobs for which very few will qualify anyway. That is a much bigger problem, and it can't be fixed by just letting more people use AI.