As noble as the goal sounds, I think it's wrong.
Software is just a tool. Much like a hammer, a knife, or ammonium nitrate, it can be used for both good and bad.
I say this as someone who has spent almost 15 years writing software in my free time and publishing it as open source: building software and allowing anyone to use it does not automatically make other people's lives better.
A lot of my work has been used for bad purposes, or what some people would consider bad purposes: cheating on tests, cheating in games, accessing personal information without permission, and in one case my work contributed to someone's doxxing. That's because as soon as you publish software, you lose control over it.
But at least with open source software, every person can use it to the same extent, so if the majority of people are good, the result is likely to be more positive than negative.
With what is called AI today, only the largest corporations can afford to train the models. That means the models are controlled by people whose incentives are entirely different from those of the general working population, many of whom have quite obvious antisocial personality traits.
At least 2 billion people live in dictatorships. AI has the potential to become a tool of mass surveillance and total oppression from which those countries will never recover: just as models can detect that a woman is pregnant before she knows it herself, they will detect a dissenter long before dissent turns into resistance.
I don't have high hopes for AI becoming a force for good, and teaching people how toy models work, as fun as it is, is not gonna change that.