
108 points bertman | 4 comments
1. BiraIgnacio No.43820889
Great post, and Naur's paper is excellent. What I can't stop thinking about are the many other cases where something should not be, because its being is less than ideal, and yet it insists on being. In other words, LLMs should not be able to largely replace programmers, and yet they might.
replies(2): >>43822268, >>43823122
2. codr7 No.43822268
Might, potentially; it's all wishful thinking.

I might one day wake up and find my dog to be more intelligent than me; it's not very likely, but I can't prove it to be impossible.

It's still useless.

3. lo_zamoyski No.43823122
In some respects, perhaps in principle they could. But what is the point of handing off the entire process to a machine, even if you could?

If programming is a tool for thinking and modeling, with execution by a machine as a secondary benefit, then outsourcing these things to LLMs contributes nothing to our understanding. By analogy, we do math because we wish to understand the mathematical universe, so to speak, not because we just want some practical result.

To understand, to know, are some of the highest powers of the human person. Machines are useful for enabling certain work or alleviating tedium so we can focus on the important stuff, but handing off understanding and knowledge to a machine (if it were possible, which it isn't) would be one of the most inhuman things you could do.

replies(1): >>43826956
4. BiraIgnacio No.43826956
As a software engineer, I really hope that will be the case :) Thanks for the reply!