
323 points by timbilt | 24 comments
wcfrobert ◴[] No.42131165[source]
Lots of interesting debates in this thread. I think it is worth placing writing/coding tasks into two buckets. Are you producing? Or are you learning?

For example, I have zero qualms about relying on AI at work to write progress reports and code up some scripts. I know I can do it myself but why would I? I spent many years in college learning to read and write and code. AI makes me at least 2x more efficient at my job. It seems irrational not to use it. Like a farmer who tills his land by hand rather than relying on a tractor because it builds character or something. But there is something to be said about atrophy. If you don't use it, you lose it. I wonder if my coding skill will deteriorate in the years to come...

On the other hand, if you are a student trying to learn something new, relying on AI requires walking a fine line. You don't want to over-rely on AI because a certain degree of "productive struggle" is essential for learning something deeply. At the same time, if you under-rely on AI, you drastically decrease the rate at which you can learn new things.

In the old days, people were fit because of physical labor. Now people are fit because they go to the gym. I wonder if there will be an analog for intellectual work. Will people be going to "mental" gyms in the future?

replies(9): >>42131209 #>>42131502 #>>42131788 #>>42132365 #>>42133145 #>>42133517 #>>42133877 #>>42134499 #>>42136622 #
sbuttgereit ◴[] No.42131788[source]
"But there is something to be said about atrophy. If you don't use it, you lose it. I wonder if my coding skill will deteriorate in the years to come..."

"You don't want to over-rely on AI because a certain degree of "productive struggle" is essential for learning something deeply."

These two ideas are closely related - really just different aspects of the same basic frailty of the human intellect. Understanding that, I think, can really inform how you use these tools in work (or life) and where the lines need to be drawn for your own personal circumstances.

I can't say I disagree with anything you said and think you've made an insightful observation.

replies(2): >>42132052 #>>42132729 #
1. margalabargala ◴[] No.42132052[source]
In the presence of sufficiently good and ubiquitous tools, knowing how to do some base thing loses most or all of its value.

In a world where everyone has a phone/calculator in their pocket, remembering how to do long division on paper is not worthwhile. If I ask you "what is 457829639 divided by 3454", it is not worth your time to do that by hand rather than plugging it into your phone's calculator.
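
For the record, the point is precisely that a tool answers it instantly - a quick sketch, assuming a shell with bc installed:

    echo '457829639 / 3454' | bc            # integer quotient: 132550
    echo '457829639 % 3454' | bc            # remainder: 1939
    echo 'scale=2; 457829639 / 3454' | bc   # two decimal places: 132550.56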

In a world where AI can immediately produce any arbitrary 20-line glue script that you would otherwise have had to think about and remember bash array syntax for, there's no real reason to remember bash array syntax.
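
For instance, the kind of glue script meant here - a minimal sketch with made-up paths, the sort of thing you'd otherwise have to look up array syntax for:

    #!/usr/bin/env bash
    # Hypothetical example: gather matching logs into an array and compress them.
    set -euo pipefail
    shopt -s nullglob                    # expand to an empty array if nothing matches
    files=( /var/log/myapp/*.log )       # glob expansion into a bash array
    echo "found ${#files[@]} log files"  # ${#files[@]} is the array length
    for f in "${files[@]}"; do           # quoted expansion preserves spaces in names
        gzip -k -- "$f"                  # -k keeps the original file (GNU gzip)
    done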

I don't think we're quite at that point yet but we're astonishingly close.

replies(6): >>42132323 #>>42132462 #>>42132743 #>>42132908 #>>42133330 #>>42133666 #
2. aquariusDue ◴[] No.42132323[source]
This is both inspiring and terrifying at the same time.

That being said, I usually prefer to do something the long, manual way, sometimes writing the process down, and afterwards search for easier ways to do it. Of course this makes sense on a case-by-case basis depending on your personal context.

Maybe crosswords and the like will undergo a renaissance, and we'll see more interesting developments like Gauguin[0], which is a blend of Sudoku and math.

[0] https://f-droid.org/en/packages/org.piepmeyer.gauguin/

3. treflop ◴[] No.42132462[source]
Wait until AI prints out something that doesn't work and you can't figure out how to fix it because you don't know how it works so you do trial and error for 3 hours.

The difference is that you can trust a good calculator. You currently can't trust AI to be right. If we get to a point where the output of AI is trustworthy, that's a whole different kind of world altogether.

replies(5): >>42132517 #>>42132519 #>>42132754 #>>42133035 #>>42137450 #
4. Mengkudulangsat ◴[] No.42132517[source]
Hopefully that happens rarely enough that when it does, we can call upon highly paid human experts who still remember the art of long division.

5. margalabargala ◴[] No.42132519[source]
For replacement like I described, sure. But it will be very useful long before that.

AI that writes a bash script doesn't need to be better than an experienced engineer. It doesn't even need to be better than a junior engineer.

It just needs to be better than Stack Overflow.

That bar is really not far away.

replies(1): >>42133592 #
6. ◴[] No.42132743[source]
7. kamaal ◴[] No.42132754[source]
>>The difference is that you can trust a good calculator. You currently can't trust AI to be right.

Well that is because you ask a calculator to divide numbers. Which is a question that can be interpreted in only one way. And done only one way.

Ask AI for the smallest possible for loop or if statement it can generate, and now you have the pocket-calculator equivalent of programming.

replies(1): >>42133306 #
8. nkrisc ◴[] No.42132908[source]
> If I ask you "what is 457829639 divided by 3454"

And if it spits out 15,395,143 I hope you remember enough math to know that doesn’t look right, and how to find the actual answer if you don’t trust your calculator’s answer.
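
A rough order-of-magnitude check catches it: 457,829,639 is about 4.6 x 10^8 and 3,454 is about 3.5 x 10^3, so the quotient should land near 1.3 x 10^5 - roughly 130,000, nowhere near 15 million. A second tool confirms (assuming bc is available):

    echo '457829639 / 3454' | bc   # 132550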

replies(1): >>42133131 #
9. alasdair_ ◴[] No.42133035[source]
>The difference is that you can trust a good calculator.

I found a bug in the iOS calculator in the middle of a master's degree exam. The answer changed depending on which way the phone was held. (A real bug - I reported it and they fixed it.) So knowing the expected result matters even when using the calculator.

10. Baeocystin ◴[] No.42133131[source]
Sanity-checking expected output is one of the most vital skills a person can have. It really is. But knowing the general shape of the thing is different from knowing any particular algorithm, don't you think?

replies(1): >>42133339 #
11. abduhl ◴[] No.42133306{3}[source]
>> Well that is because you ask a calculator to divide numbers. Which is a question that can be interpreted in only one way. And done only one way.

Is it? What is 5/2+3?

replies(1): >>42133649 #
12. antasvara ◴[] No.42133330[source]
The value isn't in the rote calculation itself, but in the intuition that doing it gives you.

So yes, it's pretty useless for me to manually divide arbitrarily large numbers. But it's super useful for me to be able to reason around fractions and how that division plays out in practice.

Same goes for bash. Knowing the exact syntax is useless, but knowing what that glue script does and how it works is essential to understanding how your entire program works.

That's the piece I'm scared of. Through tutoring I've seen enough kids who just plug numbers into their calculator arbitrarily. They don't have any clue when a number is off by a factor of 10, or what a reasonable calculation looks like. They don't really have a sense for when something is "too complicated" either, since the calculator does all of the work.

replies(2): >>42138484 #>>42139684 #
13. bruce511 ◴[] No.42133339{3}[source]
This gets to the root of the issue. The use case, the user experience, and thus the outcome are remarkably different depending on your current ability.

Using AI to learn things is useful because it helps you get terminology right and helps you Google well. For example, say you need to find a Windows API: you can describe what you want and get the name, then Google how it works.

As an experienced user you can get it to write code. You're good enough to spot errors in the code and basically just correct as you go. 90% right is good enough.

It's the in-between space which is hardest. You're an inexperienced dev looking to produce, not learn. But you lack the experience and knowledge to recognise the errors, or bad patterns, or whatever. Using AI you end up with stuff that's 'mostly right' - which in programming terms means broken.

This experience difference is why there's so much chatter about usefulness. To some groups it's very useful. To others it's a dangerous crutch.

14. treflop ◴[] No.42133592{3}[source]
You’re moving the goalposts. Your original post was saying that you don't need to know the fundamentals.

It was not about whether AI is useful or not.

replies(1): >>42138412 #
15. TSP00N3 ◴[] No.42133649{4}[source]
There is only one correct way to evaluate 5/2+3. The order of operations is PEMDAS[0]: you divide before adding. Maybe you are thinking that 5/(2+3) is the same as 5/2+3, which is not the case. Improper math syntax doesn’t mean there are two potential answers, but rather that the person who wrote it did so improperly.

[0] https://www.mathsisfun.com/operation-order-pemdas.html
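
Worked out under that rule, for the record (a minimal check, assuming bc is available - note that plain integer division in a shell would truncate 5/2 to 2):

    echo 'scale=1; 5/2 + 3' | bc     # division first: 2.5 + 3 = 5.5
    echo 'scale=1; 5/(2+3)' | bc     # grouping changes it: 5/5 = 1.0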

replies(2): >>42134379 #>>42135867 #
16. eyegor ◴[] No.42133666[source]
I honestly don't think anyone can remember bash array syntax after a two-week break. It's exactly the kind of arcane nonsense that LLMs are perfect for. The only downside is that if the fancy autocomplete model messes it up, we're going to be in bad shape once Steve retires, because half the internet will be an ouroboros of AI-generated garbage.
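
For reference, roughly the syntax in question - the sort of thing that evaporates after that two-week break:

    arr=(one two three)      # define an array
    echo "${arr[0]}"         # index (zero-based): one
    echo "${arr[@]}"         # all elements: one two three
    echo "${#arr[@]}"        # length: 3
    arr+=("four")            # append an element
    echo "${arr[@]:1:2}"     # slice of two elements from index 1: two three
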
17. Moru ◴[] No.42134379{5}[source]
Maybe the commenter means the difference between a simple calculator that evaluates everything in the order you type it and one that can figure out the correct order of operations. We used the simpler ones in school when I was young. The new fancy ones were quite something after that :)
18. abduhl ◴[] No.42135867{5}[source]
So we agree that there is more than one way to interpret 5/2+3 (a correct and an incorrect way) and therefore that the GP statement below is wrong.

“Which is a question that can be interpreted in only one way. And done only one way.”

The question for calculators is then the same as the question for LLMs: can you trust the calculator? How do you know if it’s correct when you never learned the “correct” way and you’re just blindly believing the tool?

replies(2): >>42136303 #>>42138449 #
19. kamaal ◴[] No.42136303{6}[source]
>>How do you know if it’s correct when you never learned the “correct” way and you’re just blindly believing the tool?

This is just splitting hairs. People who use calculators interpret it in only one way. You are making a different and broader argument: that words and symbols can have various meanings, hence anything can be interpreted in many ways.

While those are fun arguments to make, they are not relevant to the practical use of calculators or LLMs.

20. visarga ◴[] No.42137450[source]
> Wait until AI prints out something that doesn't work and you can't figure out how to fix it because you don't know how it works so you do trial and error for 3 hours.

This is basically how AI research is conducted. It's alchemy.

21. margalabargala ◴[] No.42138412{4}[source]
I'm not changing goalposts, I was responding to what you said about AI spitting out something wrong and you spending 3 hours debugging it.

My original point about not needing fundamentals would obviously require AI to, y'know, not hallucinate errors that take three hours to debug. We're clearly not there yet. The original goalposts remain the same.

Since human conversations often flow from one topic to another, in addition to the goalpost of "not needing fundamentals" in my original post, my second post introduced a goalpost of "being broadly useful". You're correct that it's not the same goalpost as in my first comment, which is not unexpected, as the comment in question is also not my first comment.

22. margalabargala ◴[] No.42138449{6}[source]
> So we agree that there is more than one way to interpret 5/2+3 (a correct and an incorrect way) and therefore that the GP statement below is wrong.

No. There being "more than one way" to interpret implies the meaning is ambiguous. It's not.

There's not one incorrect way to interpret that math statement; there are infinitely many. For example, you could interpret it as a poem about cats.

23. margalabargala ◴[] No.42138484[source]
I totally agree.

The neat thing about AI-generated bash scripts is that the AI can comment its code.

So the user can 1) check whether the comment for each step matches what they expect to be done, and 2) have a starting point for debugging if something goes wrong.
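
Something like this, say - a hypothetical sketch of what a commented AI-generated script might look like (GNU date and tar assumed, paths made up):

    #!/usr/bin/env bash
    # Hypothetical example: archive yesterday's logs for myapp.
    set -euo pipefail
    # 1. Work out yesterday's date as YYYY-MM-DD (GNU date).
    day=$(date -d 'yesterday' +%F)
    # 2. Collect the matching log files into an array; empty if none match.
    shopt -s nullglob
    logs=( /var/log/myapp/*"$day"*.log )
    # 3. Only archive if there is something to archive.
    if [ "${#logs[@]}" -gt 0 ]; then
        tar -czf "backup-$day.tar.gz" -- "${logs[@]}"
    fi

Each numbered comment is something the user can check against what they expected that step to do.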

24. behringer ◴[] No.42139684[source]
Go ahead and ask ChatGPT how that glue script works. You'll be incredibly satisfied with its detailed insights.