
627 points cratermoon | 3 comments
gyomu ◴[] No.44461457[source]
Broadly agreed with all the points outlined in there.

But for me the biggest issue with all this — one I don't see covered in here, or only in passing — is what all of this is doing to beginners and the learning pipeline.

> There are people I once respected who, apparently, don’t actually enjoy doing the thing. They would like to describe what they want and receive Whatever — some beige sludge that vaguely resembles it. That isn’t programming, though.

> I glimpsed someone on Twitter a few days ago, also scoffing at the idea that anyone would decide not to use the Whatever machine. I can’t remember exactly what they said, but it was something like: “I created a whole album, complete with album art, in 3.5 hours. Why wouldn’t I use the make it easier machine?”

When you're a beginner, it's totally normal to not really want to put in the hard work. You try drawing a picture, and it sucks. You try playing the guitar, and you can't even get simple notes right. Of course a machine where you can just say "a picture in the style of Pokémon, but of my cat" and get a perfect result out is much more tempting to a 12 year old kid than the prospect of having to grind for 5 years before being kind of good.

But up until now, you had no choice but to keep making crappy pictures and playing crappy songs until you actually start to develop a taste for the effort, and a few years later you find yourself actually pretty darn competent at the thing. That's a pretty virtuous cycle.

I shudder to think where we'll be if the corporate-media machine keeps hammering the message "you don't have to bother learning how to draw, drawing is hard, just get ChatGPT to draw pictures for you" to young people for years to come.

replies(16): >>44461502 #>>44461693 #>>44461707 #>>44461712 #>>44461825 #>>44461881 #>>44461890 #>>44462182 #>>44462219 #>>44462354 #>>44462799 #>>44463172 #>>44463206 #>>44463495 #>>44463650 #>>44464426 #
maegul ◴[] No.44461502[source]
Agreed!

The only silver lining I can see is that a new perspective may be forced on how well or badly we’ve facilitated learning, usability, generally navigating pain points and maybe even all the dusty presumptions around the education / vocational / professional-development pipeline.

Before, demand for employment/salary pushed people through. Now, if actual and reliable understanding, expertise, and quality are desirable, maybe paying attention to how well the broader system cultivates and harnesses those attributes can be of value.

Intuitively though, my feeling is that we’re in some cultural turbulence, likely of a truly historical magnitude, in which nothing can be taken for granted and some “battles” were likely lost long ago when we started down this modern-computing path.

replies(1): >>44461579 #
bruce511 ◴[] No.44461579[source]
To be fair, LLMs are just the most recent step in a long road of doing the same thing.

At any point of progress in history you can look backwards and forwards and the world is different.

Before tractors, a man with an ox could plough x field in y time. After tractors he can plough much larger areas. The nature of farming changes. (Fewer people needed to farm more land.)

The car arrives, horses leave. Computers arrive, the typing pool goes away. Typing was a skill, now everyone does it and spell checkers hide imperfections.

So yeah LLMs make "drawing easier". Which means just that. Is that good or bad? Well I can't draw the old fashioned way so for me, good.

Cooking used to be hard. Today cooking is easy, and very accessible. More importantly good food (cooked at home or elsewhere) is accessible to a much higher % of the population. Preparing the evening meal no longer starts with "pluck 2 chickens" and grinding a kilo of dried corn.

So yeah, LLMs are here. And yes things will change. Some old jobs will become obsolete. Some new ones will appear. This is normal, it's been happening forever.

replies(3): >>44461670 #>>44461719 #>>44461769 #
thankyoufriend ◴[] No.44461719[source]
The difference between GenAI and your examples is a theft component. They stole our data - your data - and used it to build a machine that diverts wealth to the rich. The only equitable way for GenAI to move forward is if we all own a share of it, since it would not exist in its current form without our data. GenAI should be a Universal Basic Asset.
replies(3): >>44461819 #>>44464615 #>>44469742 #
thedevilslawyer ◴[] No.44469742[source]
Today you can become an owner in AI by buying AI company stock. You are enabled by the system that we have.
replies(2): >>44474299 #>>44498618 #
1. thankyoufriend ◴[] No.44474299[source]
You realize that not everybody has the means to invest in stocks, right? Artists are so commonly poor, that there's even a trope called the "starving artist." I have noticed a distinct lack of empathy in the broader discussion about GenAI's impact on the working class. The argument is always made that this isn't new, that people have to retrain when new technology displaces them. Ok, sure. But the speed of this displacement is very new and it happened basically overnight. How do you expect these displaced people to sustain themselves during the retraining period? There's only so many McJobs, and it's not as easy as you think to get one right now. I just watched someone with a college degree apply to everything for 2 months before landing one. There's also the deeply-held belief that people are only valuable if they work, which I think many of us subconsciously believe, but that's pretty messed up if you reflect on it and follow it to its logical conclusion.

There's also the principle of the matter that we shouldn't have to pay for a share of something that was built using our collective unpaid labor/property without our consent.

replies(1): >>44487067 #
2. thedevilslawyer ◴[] No.44487067[source]
In a capitalist system - there's no other answer.

In a more socialist context, UBI is an answer.

In a more communist context, taking over all labs for the people is an answer.

In a dictatorial context, banning AI is an answer.

What are you recommending?

replies(1): >>44498628 #
3. johnnyanmac ◴[] No.44498628[source]
I'll take socialism.

More subtly, I'll modify the dictatorial context to require payment to any sources an AI uses, and strong enforcement against infringements by AI. The core problem with a capitalistic society is that money tends to bubble up to the top and then stay there. The goal of regulation should partly be to ensure that money is not incentivized to sit at the top in stocks.