
399 points nomdep | 9 comments
waprin ◴[] No.44295040[source]
To some degree, traditional coding and AI coding are not the same thing, so it's not surprising that some people are better at one than the other. The author is basically saying that he's much better at traditional coding than at AI coding.

But it's important to realize that AI coding is itself a skill you can develop. It's not just "pick the best tool and let it go": managing prompts and managing context has a much higher skill ceiling than many people realize. You might prefer manual coding, but you might also just be bad at AI coding, and you might come to prefer it if you got better at it.
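
To make "managing context" concrete, here's a rough sketch of one piece of it (every name here is made up; count_tokens is a crude stand-in for a real tokenizer): rather than dumping the whole repo into the prompt, you choose and trim what the model sees under a token budget.

    # Made-up sketch of context management: pick the files most relevant
    # to the task and trim them to a token budget, instead of pasting
    # everything. count_tokens is a crude stand-in for a real tokenizer.

    def count_tokens(text: str) -> int:
        return len(text) // 4  # rough: ~4 characters per token

    def build_context(task: str, files: dict[str, str], budget: int = 6000) -> str:
        # Naive relevance: count how many task words appear in each file.
        words = set(task.lower().split())
        ranked = sorted(files.items(),
                        key=lambda kv: sum(w in kv[1].lower() for w in words),
                        reverse=True)
        parts, used = [], count_tokens(task)
        for path, body in ranked:
            cost = count_tokens(body)
            if used + cost > budget:
                continue  # skip files that would blow the budget
            parts.append("### " + path + "\n" + body)
            used += cost
        return task + "\n\n" + "\n\n".join(parts)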

With that said, I'm still very skeptical of letting the AI drive the majority of the software work, despite meeting people who swear it works. I personally am currently preferring "let the AI do most of the grunt work but get good at managing it and shepherding the high level software design".

It's a tiny bit like drawing vs photography, and if you look through that lens it's obvious that many people who draw might not like photography.

replies(5): >>44295112 #>>44295146 #>>44295705 #>>44295759 #>>44296665 #
mitthrowaway2 ◴[] No.44295705[source]
The skill ceiling might be "high", but it's not like investing years of practice to become a great pianist. The most experienced AI coder in the world has about three years of practice working this way, much of which is already obsolete because the models have changed to the point where some lessons learned on GPT-3.5 don't transfer. There aren't teachers with decades of experience to learn from, either.
replies(2): >>44296283 #>>44296884 #
1. dr_dshiv ◴[] No.44296283[source]
It’s mostly attitude that you are learning. Playfulness, persistence and a willingness to start from scratch again and again.
replies(1): >>44296305 #
2. suddenlybananas ◴[] No.44296305[source]
>persistence and a willingness to start from scratch again and again.

i.e. continually gambling and praying the model spits something out that works instead of thinking.

replies(3): >>44296577 #>>44296609 #>>44297265 #
3. HPsquared ◴[] No.44296577[source]
Most things in life are like that.
4. tsurba ◴[] No.44296609[source]
Gambling is where I end up if I’m tired and try to get an LLM to build my hobby project for me from scratch in one go, not really bothering to read the code properly. It’s stupid and a waste of time. Sometimes it’s easier to get started this way though.

But more seriously, in the ideal case, refining a prompt after the LLM misunderstands it because of ambiguity in your task description is actually the meaningful part of the work in software development. It is exactly about defining the edge cases and converting into language what it is that you need for a task. Iterating on that is not gambling.
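
As a made-up illustration of that iteration, each prompt revision below pins down an edge case the previous wording left ambiguous; it's the same work as writing a spec:

    # Invented example: each revision removes an ambiguity the previous
    # prompt left open, exactly like tightening a spec.

    prompt_v1 = "Write a function that splits a CSV line into fields."

    # The model split on commas inside quotes, exposing an unstated rule:
    prompt_v2 = (prompt_v1 +
                 " Commas inside double-quoted fields must not split the field.")

    # Reviewing that output surfaced the next edge cases:
    prompt_v3 = (prompt_v2 +
                 ' A doubled quote inside a quoted field is an escaped quote,' +
                 ' e.g. "a""b" parses as the single field a"b.' +
                 " An empty line yields an empty list.")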

But of course if you are not doing that, and are instead just trying to coax a "smarter" LLM along with (hopefully now-deprecated) "prompt engineering" tricks, then you are building yourself a skill that can become useless tomorrow.

5. chii ◴[] No.44297265[source]
Why is the process important? If they can continuously trial-and-error their way to a good output/result, then it's a fine outcome.
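
A sketch of what I mean (generate_candidate is a hypothetical stand-in for an LLM call; the test run is what keeps the retries from being blind gambling):

    # Hypothetical sketch: retry generation until the test suite passes.
    import subprocess

    def passes_tests(source: str) -> bool:
        with open("candidate.py", "w") as f:
            f.write(source)  # drop the candidate where the tests import it
        result = subprocess.run(["pytest", "-q"], capture_output=True)
        return result.returncode == 0

    def trial_and_error(generate_candidate, max_attempts: int = 5):
        for attempt in range(max_attempts):
            candidate = generate_candidate(attempt)  # e.g. re-prompt with feedback
            if passes_tests(candidate):
                return candidate  # first version that demonstrably works
        return None  # every attempt failed; a human has to think after all
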
replies(1): >>44297465 #
6. suddenlybananas ◴[] No.44297465{3}[source]
Why is thinking important? Think about it a bit.
replies(1): >>44297582 #
7. chii ◴[] No.44297582{4}[source]
Is it more important for a chess engine to be able to think, or to win by brute-force search over enough of the game tree?

If the outcome is indistinguishable from one produced by "thinking" rather than brute force, why would it matter how the outcome was achieved?
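
To make the analogy concrete, here is brute force in miniature: a plain negamax search that never "understands" the game; it only enumerates moves and scores leaf positions (move generation and evaluation are left abstract):

    def negamax(state, depth, moves, apply_move, evaluate):
        # Best score for the side to move, found by exhaustive search.
        legal = moves(state)
        if depth == 0 or not legal:
            return evaluate(state)  # static score from the mover's viewpoint
        best = float("-inf")
        for move in legal:
            # Opponent's best reply, negated back to the mover's viewpoint.
            score = -negamax(apply_move(state, move), depth - 1,
                             moves, apply_move, evaluate)
            best = max(best, score)
        return best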

replies(1): >>44298044 #
8. suddenlybananas ◴[] No.44298044{5}[source]
Maybe, if programming were a well-defined game like chess, but it's not.
replies(1): >>44298218 #
9. chii ◴[] No.44298218{6}[source]
The grammar of a programming language is just as well defined. And the well-definedness of the "game" isn't required for my argument anyway.

Your concept of thinking is the classic rhetoric: as soon as some "AI" manages to achieve something it previously couldn't, it's no longer AI, just some xyz process. It happened with chess engines, with AlphaGo, and with LLMs. The implication is that human "thinking" is somehow unique, and only an AI that replicates it can be considered to be "thinking".