
Playing in the Creek

(www.hgreer.com)
346 points by c1ccccc1 | 9 comments
1. DrSiemer No.43651677
So many articles and comments claim AI will destroy critical thinking in our youth. Is there any evidence that this widely shared conviction is even remotely true?

To me it just seems like the same old knee-jerk Luddite response people have had, since the dawn of time, to any powerful new technology that challenges the status quo. The calculator did not erase math wizards, the television did not replace books, and so on. It just made us better, faster, more productive.

Sometimes there is an adjustment period (we still haven't figured out how to deal with short dopamine hits from certain types of entertainment and social media), but things will balance themselves out eventually.

Some people may go full-on WALL-E, but I for one will never stop tinkering, and many of my friends won't either.

The things I could have done if I had had an LLM as a kid... I think I've learned more in the past two years than ever before.

replies(5): >>43651964 #>>43651975 #>>43652300 #>>43652603 #>>43657305 #
2. iNic No.43651964
I don't think you got the point of the article. It is saying that we as wise humans know (sometimes) when to stop optimizing for a goal, due to the negative side effects. AIs (and, as some other people have pointed out, corporations) do not naturally have this line in their heads, and we must draw such lines carefully and with purpose for these superhuman beings.
3. dsign No.43651975
> Ai will destroy critical thinking in our youths

I don't think that's the argument the article was making. It was, to my understanding, a more nuanced question about whether we want to destroy or severely disturb systems at equilibrium by letting AI systems infiltrate our society.

> Sometimes there is an adjustment period (we still haven't figured out how to deal with short dopamine hits from certain types of entertainment and social media), but things will balance themselves out eventually.

One can zoom out a little bit. The issue didn't start with social media, nor with AI. "Star Wars: A New Hope" is, to my understanding, an incredibly good film. It came out in 1977, and it's a great story made to be appreciated by the masses. In trying to achieve that goal, it really wasn't intellectually challenging. We have continued downhill from there, and now we are at 16-second stingers on TikTok and YouTube. So, the way I see it, things are not balancing out. Worse, people in the USA elected D.J. Trump because somehow they couldn't understand how this real-world Emperor Palpatine was the bad guy.

4. Tistron No.43652300
I would expect people today to be quite a lot worse at mental arithmetic than we used to be before calculators. And worse at memorizing things than before writing.

We have tools to help us with that, and maybe it isn't a big loss? And they also bring new arenas and abilities.

And maybe in the future we will be worse at critical thinking (https://news.ycombinator.com/item?id=43484224), and maybe that isn't a big loss either? It is hard to imagine what new abilities and arenas will emerge. I do think, though, that losing critical thinking would be worse than losing memory or mental arithmetic. Then again, we are probably far less good at it than we think we are.

5. hacb No.43652603
> The calculator did not erase math wizards

The major difference is that in order to use a calculator, you need to know and understand the math you're doing. It's a tool you can work with. I always had a calculator for my math exams and I always had bad grades :)

You don't have to know how to program to ask ChatGPT to build yet another app for you. It's a substitute for your brain. My university students get good grades on their take-home exams, but can't spot an off-by-one error in a three-line Go for loop during an in-person exam.
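
For illustration, a minimal made-up sketch of the kind of bug I mean (my own example, not an actual exam question):

    package main

    import "fmt"

    func main() {
        nums := []int{1, 2, 3}
        // Off-by-one: <= runs the loop one index past the end of the slice.
        for i := 0; i <= len(nums); i++ { // should be i < len(nums)
            fmt.Println(nums[i]) // panics: index out of range [3] with length 3
        }
    }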

replies(1): >>43656731 #
6. DrSiemer No.43656731
This is incorrect. You very much need to know how to program to make an AI build an app for you. Language models are not capable of creating anything new without significant guidance and at least some understanding of the code, unless you're asking them to create projects that tutorials have been written about. AI in its current form is also just "a tool you can work with".

Like with the calculator, why would you need to be able to calculate things on paper if you can just have a machine do it for you? Same goes for more advanced AI: what's the point of being able to do things without them?

Not to offend, but in my opinion that's nothing more than a romantic view of what humans "should be capable of". 10 years from now we can all laugh at the idea of people defending doing stuff without AI assistance.

replies(1): >>43657122 #
7. hacb No.43657122
Of course an AI will not "magically" code an app the way ten developers would over a year; I don't think we disagree on this.

However, it allows you to do things you don't understand. I'm again taking examples from what I see at my university (n=1): almost all students deliver complex programming projects involving multi-threading, but can't answer a basic quiz about the same language in person. And by basic question I mean "select, among the propositions listed below, the correct keyword used to declare a variable in Golang". I'm not kidding: at least one-third of the class gets that one wrong.
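
(For the record, the answer is Go's `var` keyword; inside a function the short `:=` form also works. A trivial sketch:)

    package main

    import "fmt"

    func main() {
        var total int = 42 // declaration with the var keyword
        count := 1         // short form, only valid inside functions
        fmt.Println(total, count)
    }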

So yeah, maybe we as a society agree that those people will not be software engineers, but prompt engineers. They'll send instructions to an agent that displays text in a strange and cryptic language, and maybe when they press "Run" the lights will be green. But as a professional, why should I hire them once they've earned their diplomas? They are far from ready for the professional world, can't debug systems without using LLMs (and maybe those LLMs can't help them because the company's context is too large and specific), and most importantly they are far less capable than freshly graduated engineers from a few years back.

> 10 years from now we can all laugh at the idea of people defending doing stuff without AI assistance.

I hope so, but unfortunately I'm quite pessimistic. Expertise and the capacity to focus are dying, and we are relying more and more on artificial "intelligence" and its biases. But time will tell.

replies(1): >>43658209 #
8. fragmede No.43657305
> The calculator did not erase math wizards

But it did. Quick, what's 67 * 49? A math wiz would furrow their brow for a second and be able to spit out an answer, while the rest of us have to pull out a calculator. When you're doing business in person and have to move numbers around, having to stop and use a calculator slows you down. If you don't have a role where that's useful, then it's not a needed skill and you don't notice it's missing, like riding a horse, but that doesn't mean the skill itself wouldn't be useful to have.
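
(The wiz's trick, for what it's worth, is to round up and correct:

    67 * 49 = 67 * 50 - 67 = 3350 - 67 = 3283

No furrowed brow required once you've practiced it.)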

9. DrSiemer No.43658209
Isn't it irrelevant that students don't have the answer to a basic quiz, though? In a real-life situation, they can just _ask an LLM_ if they need to know something.

I don't believe having this option will make people a lot less functional. Sure, some may slip through the cracks by faking it, but we'll soon develop different metrics to judge somebody's true capabilities. Actually, we'll probably create AI for that as well.

As a professional, you hire people who get things done. If that means hiring skilled LLM users who do not fully understand what they produce, but whose output consistently works about as often as classic dev output does, and who deliver it in a fraction of the time... you would be crazy _not_ to hire them.

It's true that inexperienced developers will probably generate massive tech debt during the period when AI is good enough to produce code but not yet good enough to fish out hidden bugs. It will soon surpass humans at that skill too, though, and can then quickly clean up all the spaghetti.

Over the last two years, my knowledge of how to perform and automate repetitive, predictable tasks has gradually worn away, replaced by a higher-level understanding of software architecture, which I use to guide language models to a desired outcome. For those who want to learn, LLMs excel at explaining code. For this, and plenty of other subjects, they are the greatest learning tool we have ever had! All it takes is a curious mind.

We are in a transitional time, and we simply need to figure out how to deal with this new technology, warts and all. It's not like there is an alternative scenario; it's not going to go away...