
631 points cratermoon | 5 comments
1. rednafi No.44462068
Software programming used to be a blue-collar thing in the early days, when hardware wiring was all the rage.

Then it became hip, and people would hand-roll machine-specific assembly code. Later on, it became too onerous when CPU architecture started to change faster than programmers could churn out code. So we came up with compilers, and people started coding at a higher level of abstraction. No one lamented the lost art of assembly.

Coding is just a means to an end. We’ve always searched for better and easier ways to convince the rocks to do something for us. LLMs will probably let us jump another abstraction level higher.

I too spent hours in the early days hunting for the right PHP or Perl snippet to do something. My hard-earned bash-fu is mostly useless now. Am I sad about it? Nah. Writing bash always sucked; who am I kidding? Same with regex: I never learned it properly, and it doesn’t appeal to me. So I’m glad these whatever machines are helping me with the grunt work.

There are sides of programming I like, and implementation isn’t one of them. Once upon a time, I couldn’t have cared less about the binary streams ticking through the CPU. Now I’m excited about the prospect of not having to think as much about “higher-level” code and jumping even higher.

To me, programming is more like science than art. Science doesn’t care how much profundity we find in the process. It moves on to the next thing for progress.

replies(2): >>44462193 #>>44462571 #
2. eddiewithzato No.44462193
LLMs will not be doing that. I wish they could, but they just spit out whatever without verifying anything. Even in Cursor, where the agent tells you to run the test script it generated to verify the output, it just says “yep, seems fine to me!”.

In its current state, AI in my workflow is a decent replacement for a search engine and Stack Overflow. But it has far greater pitfalls, as OP pointed out (it just assumes its code is always 100% accurate and will “fake” APIs).

replies(1): >>44462639 #
3. archagon No.44462571
LLMs are not an abstraction. If anything, they are the opposite of an abstraction.
4. wiseowise No.44462639
That’s where you, human, come into the scene.
replies(1): >>44463261 #
5. eddiewithzato No.44463261
And that’s where I end up wasting more time investigating and fixing issues, rather than creating a solution ;)

I only use AI for small problems rather than letting it orchestrate entire files.