    Nobody knows how to build with AI yet

    (worksonmymachine.substack.com)
    526 points by Stwerner | 14 comments
    lordnacho No.44616832
    I'm loving the new programming. I don't know where it goes either, but I like it for now.

    I'm actually producing code right this moment, where I would normally just relax and do something else. Instead, I'm relaxing and coding.

    It's great for a senior guy who has been in the business for a long time. Most of my edits nowadays are tedious. If I look at the code and decide I used the wrong pattern originally, I have to change a bunch of things to test my new idea. I can skim my code and see a bunch of things that would normally take me ages to fiddle with. The fiddling is frustrating, because I feel like I know what the end result should be, but there's some minor BS in the way, which takes a few minutes each time. It used to take a whole Stack Overflow search plus a think; recently it became a Copilot hint; and now... Claude simply does it.

    For instance, I wrote a mock stock exchange. It's the kind of thing you always want to have, but because the pressure is on to connect to the actual exchange, it is often a leftover task that nobody has done. Now, Claude has done it while I've been reading HN.
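
    A mock exchange in this spirit can be quite small: essentially a single-symbol limit order book with price-time priority. A minimal sketch in Python (the class name and interface here are illustrative assumptions, not the commenter's actual project):

        import heapq
        import itertools

        class MockExchange:
            """Single-symbol limit order book with price-time priority."""

            def __init__(self):
                self.bids = []            # max-heap via negated price: (-price, seq, [qty])
                self.asks = []            # min-heap: (price, seq, [qty])
                self.fills = []           # trade log of (price, qty)
                self._seq = itertools.count()

            def submit(self, side, price, qty):
                book = self.asks if side == "buy" else self.bids
                # An incoming buy matches asks at or below its limit;
                # an incoming sell matches bids at or above its limit.
                while qty > 0 and book:
                    key, _, rest = book[0]
                    best = key if side == "buy" else -key
                    if (side == "buy" and best > price) or (side == "sell" and best < price):
                        break
                    traded = min(qty, rest[0])
                    self.fills.append((best, traded))
                    qty -= traded
                    rest[0] -= traded
                    if rest[0] == 0:
                        heapq.heappop(book)
                if qty > 0:               # rest the remainder on our own side of the book
                    key = -price if side == "buy" else price
                    own = self.bids if side == "buy" else self.asks
                    heapq.heappush(own, (key, next(self._seq), [qty]))

        ex = MockExchange()
        ex.submit("sell", 100.5, 10)
        ex.submit("buy", 101.0, 4)        # crosses the book: fills 4 @ 100.5
        print(ex.fills)                   # [(100.5, 4)]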

    Now that I have that, I can implement a strategy against it. This is super tedious. I know how it works, but when I implement it, it takes me a lot of time that isn't really fulfilling. Stuff like making a typo, or forgetting to add the dependency. Not big brain stuff, but it takes time.
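
    With the mock exchange in place, a strategy harness against it can stay just as small. For example, a toy market maker (again purely illustrative, reusing the MockExchange sketch above):

        def quote_both_sides(ex, mid, spread=0.5, size=5):
            """Toy market maker: rest a bid below and an offer above a mid price."""
            ex.submit("buy", mid - spread, size)
            ex.submit("sell", mid + spread, size)

        ex = MockExchange()
        quote_both_sides(ex, mid=100.0)
        ex.submit("buy", 101.0, 3)        # an aggressive buyer lifts our offer
        assert ex.fills == [(100.5, 3)]   # filled at our quoted ask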

    Now I know what you're all thinking. How does it not end up with spaghetti all over the place? Well. I actually do critique the changes. I actually do have discussions with Claude about what to do. The benefit here is he's a dev who knows where all the relevant code is. If I ask him whether there's a lock in a bad place, he finds it super fast. I guess you need experience, but I can smell when he's gone off track.

    So for me, career-wise, it has come at exactly the right time: a few years after I reached a level where the little things were getting tedious, and after all the architectural elements had come together and been investigated manually.

    What junior devs will do, I'm not so sure. They somehow have to jump to the top of the mountain, but the stairs are gone.

    replies(15): >>44616871 #>>44616935 #>>44617102 #>>44617254 #>>44618137 #>>44618793 #>>44621101 #>>44621200 #>>44621741 #>>44621995 #>>44622452 #>>44622738 #>>44623119 #>>44624925 #>>44624959 #
    1. ikerino No.44617254
    Hot take: Junior devs are going to be the ones who "know how to build with AI" better than current seniors.

    They are entering the job market with sensibilities for a higher level of abstraction. They will be the first generation of devs who went through high school and college building with AI.

    replies(5): >>44617657 #>>44621434 #>>44621827 #>>44622643 #>>44625127 #
    2. stefan_ No.44617657
    Where did they learn sensibility for a higher level of abstraction? AI is the opposite: it will do what you prompt and never stop to tell you it's a terrible idea, and you'll have to work all the way down into the details yourself to learn that the big picture it chose for you was faulty from the start. Convert some convoluted bash script to run on Windows because that's what the office people run? Get strapped in for the AI PowerShell ride of your life.
    replies(2): >>44618114 #>>44628619 #
    3. ikerino No.44618114
    How is that different from how any self-taught programmer learns? Dive into a too-big idea, try to make it work, and learn from that experience.

    Repeat that a few hundred times and you'll have some strong intuitions and sensibilities.

    replies(2): >>44620758 #>>44621285 #
    4. pessimizer No.44620758{3}
    The self-taught programmer's idea was coded by someone who is no smarter than they are. It will never confuse them, because they understand how it was written. They will develop along with the projects they attempt.

    The junior dev who has agents write a program for them may not understand the code well enough to really touch it at all. They will make the wrong suggestions to fix problems caused by inexperienced assumptions, and will make the problems worse.

    i.e. it's because they're junior and not qualified to manage anybody yet.

    The LLMs are being thought of as something to replace juniors, not to assist them. It makes sense to me.

    5. skydhash No.44621285{3}
    > Dive into a too-big idea, try to make it work and learn from that experience.

    Or... just pick up that book, watch a couple of videos on YouTube, and avoid all that trying.

    6. refactor_master No.44621434
    Like the iPad babies and digital natives myth? I don’t think that really went anywhere. So a new generation of… native prompters? Ehhh.
    7. booleandilemma No.44621827
    Do you think that kids growing up now will be better artists than people who spent time learning how to paint because they can prompt an LLM to create a painting for them?

    Do you think humanity will be better off because we'll have humans who don't know how to do anything themselves, but they're really good at asking the magical AI to do it for them?

    What a sad future we're going to have.

    replies(1): >>44626715 #
    8. heavyset_go No.44622643
    This is the same generation that falls for online scams more than their grandparents do[1].

    [1] https://www.vox.com/technology/23882304/gen-z-vs-boomers-sca...

    replies(1): >>44622713 #
    9. jansan No.44622713
    It may be the same generation, but it's probably not the same people.
    replies(1): >>44623028 #
    10. alternatex No.44623028{3}
    I think the argument is that growing up with something doesn't necessarily make you good at it. It rings especially true with higher-level abstractions. The upcoming generation is bad with tech because tech has become more abstract: more of a product, and less something to tinker with and learn about. Tech just works now and requires little assistance from the user, so little is learned.
    replies(1): >>44623163 #
    11. Terr_ No.44623163{4}
    Yeah, I have a particular rant about this with respect to older generations believing "kids these days know computers." (In this context, probably people under 18.)

    The short version is that they mistake confidence for competence, and the younger consumers are more confident poking around because they grew up with superior idiot-proofing. The better results are because they dare to fiddle until it works, not because they know what's wrong.

    12. MITSardine No.44625127
    I think this disregards the costs associated with using AI.

    It used to be that you could learn to program on a cheap old computer that most families could afford. It might have run slower, but you still had all the same tooling that's found on a professional's computer.

    To use LLMs for coding, you either have to pay a third party for compute power (and access to models), or you have to provide it yourself (and use freely available ones). Both are (and IMO will remain) expensive.

    I'm afraid this builds a moat around programming that will make it less accessible as a discipline. Kids won't just tinker their way into a programming career as they used to, if it takes asking for mom's credit card from minute 0.

    As for HS + college providing a CS education using LLMs, spare me. They already don't do that when all it takes is a computer room with free software on it. And I'm not advocating for public funds to be diverted to LLM providers either.

    13. bawana No.44626715
    More reasons for humans not to birth more humans.
    14. CamperBob2 No.44628619
    > AI is the opposite: it will do what you prompt and never stop to tell you it's a terrible idea

    That's not true at all, and hasn't been for a while. When using LLMs to tackle an unfamiliar problem, I always start by asking for a comparative review of possible strategies.

    In other words, I don't tell it, "Provide a C++ class that implements a 12-layer ABC model that does XYZ," I ask it, "What ML techniques are considered most effective for tasks similar to XYZ?" and drill down from there. I very frequently see answers like, "That's not a good fit for your requirements for reasons 1, 2, and 3. Consider UVW instead." Usually it's good advice.

    At the same time I will typically carry on the same conversation with other competing models, and that can really help avoid wasting time on faulty assumptions and terrible ideas.
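
    A minimal sketch of that survey-then-drill-down flow, using the OpenAI Python SDK purely as an illustration (the commenter names no specific model or API, so the model and prompts below are assumptions):

        from openai import OpenAI

        client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

        # Step 1: survey the strategy space before asking for any code.
        messages = [{"role": "user", "content":
                     "What ML techniques are considered most effective for task XYZ? "
                     "Compare their trade-offs for my requirements."}]
        survey = client.chat.completions.create(model="gpt-4o", messages=messages)
        messages.append({"role": "assistant", "content": survey.choices[0].message.content})

        # Step 2: only after the comparative review, drill down into an implementation.
        messages.append({"role": "user", "content":
                         "Given that comparison, sketch an implementation of the best fit."})
        answer = client.chat.completions.create(model="gpt-4o", messages=messages)
        print(answer.choices[0].message.content)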