Conversely, a lot of very bad things led to good things. Worker rights advanced greatly after the plague: a lot of people died, but that also meant there was a shortage of labour.
Similarly, WWII advanced women's rights, because women were needed to keep vital infrastructure running.
Good and bad things have good and bad outcomes; much of what defines whether something is good or bad is the balance of those outcomes. But it would be foolhardy to classify anything as universally good or bad. Accept the good outcomes of the bad; address the bad outcomes of the good.
20 years ago we all thought that the Internet would democratize information and promote human rights. It did democratize information, and that has had both positive and negative consequences. Political extremism and social distrust have increased. Some of the institutions that kept society from falling apart, like local news, have been dramatically weakened. Addiction and social disconnection are real problems.
That's because someone, somewhere, invested money in training the models. You are given cooked fish, not fishing rods.
humans serving technology == evil
it's the power structure that determines the morality of technology. & power structures are a technology in and of themselves.
it follows that power structures which serve humans are good, and power structures that control humans are evil.
how do the things you create interact with humans and our power structures?
This is one of the deepest ironies of our era.
The scary thing about AI is that people might end up with the right to do problematic things that were previously infeasible.
The rise of steam engines did. The printing press and electric motors did the opposite.
It's not hard to understand the difference: it comes down to the minimum size of an economically useful application. If that size is large, the technology creates elites; if it's small, it democratizes society.
LLMs by their nature have enormous minimum sizes, and those sizes promise to grow by orders of magnitude.