    53 points cmpit | 22 comments
    1. artemsokolov ◴[] No.41918225[source]
    1972: Using Anything Other Than Assembly Will Make You a Bad Programmer

    1995: Using a Language with a Garbage Collector Will Make You a Bad Programmer

    2024: Using AI Generated Code Will Make You a Bad Programmer

    replies(14): >>41919060 #>>41919523 #>>41919644 #>>41919894 #>>41920479 #>>41920712 #>>41920753 #>>41920815 #>>41920819 #>>41920944 #>>41922549 #>>41923314 #>>41929277 #>>41934480 #
    2. dwaltrip ◴[] No.41919060[source]
    Generalized version: Using tool to do thing makes you bad at doing thing without tool.

    I get where this is coming from and it is true sometimes (e.g. my favorite example is Google maps). But it’s quite silly to assume this for all tools and all skill sets, especially with more creative and complex skills like programming.

    Wise and experienced practitioners will stay grounded in the fundamentals while judiciously adding new tools to their kit. This requires experimentation and continual learning.

    The people whose skills will be impacted the most are those who didn’t have strong fundamentals in the first place, and only know the craft through that tool.

    Edit: forgive my frequent edits in the 10 minutes after initially posting

    3. m463 ◴[] No.41919523[source]
    2035: Using simplified english will make you a bad prompt engineer.
    replies(1): >>41920580 #
    4. stonethrowaway ◴[] No.41919644[source]
    > 1972: Using Anything Other Than Assembly Will Make You a Bad Programmer

    > 1995: Using a Language with a Garbage Collector Will Make You a Bad Programmer

    Both of these remain true today, which is why we always interview people at one layer below the requirement of the job, so they know what they're doing.

    Writing C/C++? Know what the output looks like. Using GC-based languages? Know the cleanup cycle (if any).

    I would wager the third also holds true.

    replies(1): >>41919965 #
    5. perihelion_zero ◴[] No.41919894[source]
    Plato: Writing will make people forgetful.
    replies(1): >>41952108 #
    6. giraffe_lady ◴[] No.41919965[source]
    It's not true but I appreciate you leaving all of the people with experience actually building usable software for me to hire.
    7. morkalork ◴[] No.41920479[source]
    Right, but having used assembler and C/C++ before has made me a better programmer, even if I choose to work with a higher-level language day to day because I'm more productive.
    8. ted_bunny ◴[] No.41920580[source]
    2069: Ascension will unmake you
    9. shusaku ◴[] No.41920712[source]
    Don’t forget copy and pasting from stack overflow! I have to do it every time matplotlib cuts off part of my saved figure when using tight layout.
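    For what it's worth, the commonly copy-pasted fix for that particular clipping issue is passing `bbox_inches="tight"` to `savefig` rather than relying on tight layout alone. A minimal sketch (assuming the headless Agg backend so it runs without a display):

    ```python
    # Sketch of the usual workaround for matplotlib clipping labels on save.
    import matplotlib
    matplotlib.use("Agg")  # headless backend; assumption for this example
    import matplotlib.pyplot as plt

    fig, ax = plt.subplots()
    ax.plot([0, 1, 2], [0, 1, 4])
    ax.set_xlabel("x")
    ax.set_ylabel("a fairly long y-axis label that tends to get cut off")
    # bbox_inches="tight" recomputes the bounding box at save time,
    # so labels outside the axes are kept in the saved figure.
    fig.savefig("figure.png", bbox_inches="tight")
    ```
    
    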
    10. luckman212 ◴[] No.41920753[source]
    2044: "What's a programmer?" the child asks his father innocently. "Well you see son, humans used to be needed to instruct the machines on what to do..."
    replies(1): >>41923298 #
    11. colincooke ◴[] No.41920815[source]
    To me the issue with AI generated code, and what is different than prior innovations in software development, is that it is the wrong abstraction (or one could argue not an abstraction at all).

    Most of SWE (and much of engineering in general) is built on abstractions -- I use NumPy to do math for me, React to build a UI, or Moment to do date operations. All of these libraries offer abstractions that give me high leverage on a problem in a reliable way.

    The issue with the current state of AI tools for code generation is that they don't offer a reliable abstraction, instead the abstraction is the prompt/context, and the reliability can vary quite a bit.

    I would feel like one hand is tied behind my back without LLM tools (I use both co-pilot and Gemini daily), however the amount of code I allow these tools to write _for_ me is quite limited. I use these tools to automate small snippets (co-pilot) or help me ideate (Gemini). I wouldn't trust them to write more than a contained function as I don't trust that it'll do what I intend.

    So while I think these tools are amazing for increasing productivity, I'm still skeptical of using them at scale to write reliable software, and I'm not sure if the path we are on with them is the right one to get there.
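    The contrast the parent draws can be sketched in a few lines: a library abstraction comes with a contract that holds on every call, which is exactly what a prompt does not provide (illustration only, using NumPy since the comment names it):

    ```python
    # A library call is a deterministic abstraction:
    # same input, same output, every single call.
    import numpy as np

    a = np.array([1.0, 2.0, 3.0])
    assert np.mean(a) == 2.0            # the contract holds
    assert np.mean(a) == np.mean(a)     # and it is repeatable

    # A prompt to an LLM carries no such contract: the "abstraction"
    # is natural language, and the output can vary between calls.
    ```
    
    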

    replies(1): >>41921633 #
    12. BillLucky ◴[] No.41920819[source]
    Haha, so funny
    13. intelVISA ◴[] No.41920944[source]
    2 out of 3 of those maxims are Reasonably Correct?
    14. danielmarkbruce ◴[] No.41921633[source]
    It isn't an abstraction. Not everything is an abstraction. There is a long history of tools which are not abstractions. Linters. Static code analysis. Debuggers. Profiling tools. Autocomplete. IDEs.
    replies(1): >>41930457 #
    15. PeterStuer ◴[] No.41922549[source]
    You forgot:

    - using a debugger will make you a bad programmer

    - using an IDE will make you a bad programmer

    - using Google will make you a bad programmer

    - using StackOverflow will make you a bad programmer

    Hint: It's not the tools, it's how you use them.

    16. euroderf ◴[] No.41923298[source]
    "Yup, really! It used to be the other way around."
    17. rsynnott ◴[] No.41923314[source]
    > Using a Language with a Garbage Collector Will Make You a Bad Programmer

    If garbage collectors only did the correct thing 90% of the time, and non-deterministically did something stupid the other 10%, then, er, yeah, it very much would!

    There's a reason that conservative GCs for C didn't _really_ catch on... (It would be unfair to call them as broken as an LLM, but they certainly have their... downsides.)

    18. CivBase ◴[] No.41929277[source]
    I can trust that a higher level language will produce the correct assembly code.

    I can trust that a garbage collector will allocate and cleanup memory correctly.

    I cannot trust that an AI will generate quality code. I have to review its output. As someone who has been stuck doing nothing but review other people's code for the last few months, I can confidently say it would take me less time to code the solution myself than to read, digest, provide feedback for, and review changes for someone else's code. If I cannot write the code myself, I cannot accurately review its output. If I can write the code myself, it would be faster (and more fulfilling) to do that than review output from an AI.

    replies(1): >>41931462 #
    19. SaucyWrong ◴[] No.41930457{3}[source]
    I can’t tell if this is an argument against the parent or just a semantic correction. Assuming the former, I’ll point out that every tool classification you’ve mentioned has expected correct and incorrect behavior, and LLM tools…don’t. When LLMs produce incorrect or unexpected results, the refrain is, inevitably, “LLMs just be that way sometimes.” Which doesn’t invalidate them as a tool, but they are in a class of their own in that regard.
    replies(1): >>41931211 #
    20. danielmarkbruce ◴[] No.41931211{4}[source]
    It's not a semantic issue.

    Yeah, they are generally probabilistic. That has nothing to do with abstraction. There are good abstractions built on top of probabilistic concepts, like RNGs, crypto libraries, etc.
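    A concrete instance of that point, sketched with the stdlib `secrets` module: the underlying entropy source is probabilistic, yet the interface guarantees are deterministic (the names and guarantees here are Python's, chosen just for illustration):

    ```python
    # secrets.token_hex draws from an OS entropy source (probabilistic),
    # but its *interface* contract is fixed: token_hex(n) always returns
    # a hex string of exactly 2*n characters.
    import secrets

    token = secrets.token_hex(16)
    assert len(token) == 32
    assert all(c in "0123456789abcdef" for c in token)
    ```

    That's what makes it a reliable abstraction despite the randomness underneath: the caller can depend on the contract without caring how the bits were produced.
    
    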

    21. namaria ◴[] No.41934480[source]
    And we evidently have a fantastic software landscape with very few buggy products/features being introduced, superb security and transparent interoperability across diverse software domains nowadays.
    22. musicale ◴[] No.41952108[source]
    Seems accurate.

    On the other hand, it's nice to not have to have people memorize every book in the library every generation.