    258 points signa11 | 34 comments
    kirubakaran ◴[] No.42732804[source]
    > A major project will discover that it has merged a lot of AI-generated code

    My friend works at a well-known tech company in San Francisco. He was reviewing his junior team member's pull request. When asked what a chunk of code did, the team member matter-of-factly replied "I don't know, chatgpt wrote that"

    replies(16): >>42733064 #>>42733126 #>>42733357 #>>42733510 #>>42733737 #>>42733790 #>>42734461 #>>42734543 #>>42735030 #>>42735130 #>>42735456 #>>42735525 #>>42735773 #>>42736703 #>>42736792 #>>42737483 #
    1. alisonatwork ◴[] No.42734461[source]
    I have heard the same response from junior devs and external contractors for years, either because they copied something from StackOverflow, or because they copied something from a former client/employer (popular one in China), or even because they just uncritically copied something from another piece of code in the same project.

    From the point of view of these sorts of developers they are being paid to make the tests go green or to make some button appear on a page that kindasorta does something in the vague direction of what was in the spec, and that's the end of their responsibility. Unused variables? Doesn't matter. Unreachable code blocks? Doesn't matter. Comments and naming that have nothing to do with the actual business case the code is supposed to be addressing? Doesn't matter.
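The smells listed here are exactly the kind a trivial mechanical check can catch. A rough sketch using Python's `ast` module (illustrative only; real linters like pyflakes do this far more thoroughly):

```python
import ast

def find_smells(source: str) -> list[str]:
    """Flag two trivial smells: statements after a top-level return
    (unreachable) and names assigned but never read in a function."""
    smells = []
    for func in ast.walk(ast.parse(source)):
        if not isinstance(func, ast.FunctionDef):
            continue
        # Unreachable code: anything after a return in the function body.
        seen_return = False
        for stmt in func.body:
            if seen_return:
                smells.append(f"{func.name}: unreachable code at line {stmt.lineno}")
            if isinstance(stmt, ast.Return):
                seen_return = True
        # Unused variables: names stored but never loaded.
        assigned, loaded = set(), set()
        for node in ast.walk(func):
            if isinstance(node, ast.Name):
                if isinstance(node.ctx, ast.Store):
                    assigned.add(node.id)
                elif isinstance(node.ctx, ast.Load):
                    loaded.add(node.id)
        for name in sorted(assigned - loaded):
            smells.append(f"{func.name}: unused variable '{name}'")
    return smells

SAMPLE = """\
def handler(request):
    unused = request
    return 42
    print("never runs")
"""
print(find_smells(SAMPLE))
```

The point being: these issues cost nothing to detect, so there is no excuse for them surviving review.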

    I have spent a lot of time trying to mentor these sorts of devs and help them to understand why just doing the bare minimum isn't really a good investment in their own career not to mention it's disrespectful of their colleagues who now need to waste time puzzling through their nonsense and eventually (inevitably) fixing their bugs... Seems to get through about 20% of the time. Most of the rest of the time these folks just smile and nod and continue not caring, and companies can't afford the hassle of firing them, then you open LinkedIn years later and turns out somehow they've failed up to manager, architect or executive while you're still struggling along as a code peasant who happens to take pride in their work.

    Sorry, got a little carried away. Anywho, the point is LLMs are just another tool for these folks. It's not new, it's just worse now because of the mixed messaging where executives are hyping the tech as a magical solution that will allow them to ship more features for less cost.

    replies(14): >>42734514 #>>42734610 #>>42734635 #>>42734989 #>>42735105 #>>42735171 #>>42735362 #>>42735765 #>>42735948 #>>42736401 #>>42736870 #>>42736880 #>>42737897 #>>42738468 #
    2. bryanrasmussen ◴[] No.42734514[source]
    >Unused variables? Doesn't matter. Unreachable code blocks? Doesn't matter. Comments and naming that have nothing to do with the actual business case the code is supposed to be addressing? Doesn't matter.

    Maybe I am just supremely lucky, but while I have encountered people like that (in the coding part), it is somewhat rare in my experience. These comments on HN always make it seem like it's at least 30% of the people out there.

    replies(1): >>42734641 #
    3. ojbyrne ◴[] No.42734610[source]
    I have been told (at a FAANG) not to fix those kinds of code smells in existing code. “Don’t waste time on refactoring.”
    replies(1): >>42735309 #
    4. devsda ◴[] No.42734635[source]
    > then you open LinkedIn years later and turns out somehow they've failed up to manager, architect or executive while you're still struggling along as a code peasant

    That's because they come across as result-oriented, go-getter kinds of people, while the others will be seen as uptight individuals. Unfortunately, management, for better or worse, self-selects for the first kind.

    LLMs are only going to make it worse. If you can write clean code in half a day and an LLM can generate a "working" spaghetti mess in a few minutes, management will prefer the mess. This will be the case for many organizations where software is just an additional supporting expense and not a critical part of the main business.

    5. alisonatwork ◴[] No.42734641[source]
    I think even though these types of developers are fairly rare, they have a disproportionate negative impact on the quality of the code and the morale of their colleagues, which is perhaps why people remember them and talk about it more often. The p95 developers who are more-or-less okay aren't really notable enough to be worth complaining about on HN, since they are us.
    replies(1): >>42734825 #
    6. ryandrake ◴[] No.42734825{3}[source]
    And, as OP alluded to, I bet these kinds of programmers tend to “fail upward” and disproportionately become eng managers and directors, spreading their carelessness over a wider blast radius, while the people who care stagnate as perpetual “senior software engineers”.
    replies(1): >>42735681 #
    7. arkh ◴[] No.42734989[source]
    What you describe is the state of most devops.

    Copy / download some random piece of code, monkey around to change some values for your architecture and up we go. It works! We don't know how, we won't be able to debug it when the app goes down but that's not our problem.

    And that's how you end up with bad examples or a lack of exhaustive options in documentation, most tutorials being a rehash of some quickstart, and people telling you "just use this helm chart or ansible recipe from some github repo to do what you want". What do those things really install? Not documented. What can you configure? Check the code.

    Coming from the dev world it feels like the infrastructure ecosystem still lives in a tribal knowledge model.

    replies(2): >>42735359 #>>42735513 #
    8. KronisLV ◴[] No.42735105[source]
    > I have spent a lot of time trying to mentor these sorts of devs and help them to understand why just doing the bare minimum isn't really a good investment in their own career not to mention it's disrespectful of their colleagues who now need to waste time puzzling through their nonsense and eventually (inevitably) fixing their bugs... Seems to get through about 20% of the time. Most of the rest of the time these folks just smile and nod and continue not caring, and companies can't afford the hassle of firing them, then you open LinkedIn years later and turns out somehow they've failed up to manager, architect or executive while you're still struggling along as a code peasant who happens to take pride in their work.

    For them, this clearly sounds like personal success.

    There are also a lot of folks who view programming just as a stepping stone on the path to becoming well-paid managers, and who couldn't care less about all of the stuff the nerds talk about.

    Kind of unfortunate, but oh well. I also remember helping someone with their code back in my university days: none of it was indented, things that probably shouldn't have been on the same line were, and their answer was that they didn't care in the slightest about how it worked, they just wanted it to work. Same reasoning.

    replies(1): >>42735397 #
    9. beAbU ◴[] No.42735171[source]
    Do other companies not have static analysis integrated into the CI/CD pipeline?

    We by default block any and all PRs that contain funky code: high cyclomatic complexity, unused variables, bad practice, overt bugs, known vulnerabilities, inconsistent style, insufficient test coverage, etc.

    If that code is not pristine, it's not going in. A human dev will not even begin the review process until at least the static analysis light is green. Time is then spent mentoring the greens as to why we do this, why it's important, and how you can get your code to pass.

    I do think some devs still use AI tools to write code, but I believe that the static analysis step will at least ensure some level of forced ownership over the code.

    replies(3): >>42735216 #>>42736224 #>>42738050 #
    10. lrem ◴[] No.42735216[source]
    Just wait till AI learns how to pass your automated checks without getting any better at the semantics. Unused variables bad? Let’s just increment/append whatever every iteration, etc.
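A concrete (hypothetical) sketch of the check-appeasing code this predicts: `checksum` is touched every iteration, so a naive unused-variable rule is satisfied, yet it contributes nothing to the result.

```python
def total(xs):
    """Sum a list -- with lint-appeasing noise left in."""
    checksum = 0       # exists only so the linter sees it "used"
    acc = 0
    for x in xs:
        checksum += 1  # semantically dead: value is never read for output
        acc += x
    return acc
```

The check goes green; the reader still has to puzzle out that half the function is dead weight.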
    replies(1): >>42735337 #
    11. dawnerd ◴[] No.42735309[source]
    To be fair, sometimes it just isn't worth the company's time.
    12. whatevertrevor ◴[] No.42735337{3}[source]
    And then we'll need AI tools to diagnose and profile AI generated code to automagically improve performance.

    I can't wait to retire.

    13. whatevertrevor ◴[] No.42735359[source]
    I'm ashamed to say this is me with trying to get Linux to behave tbh.

    I like fully understanding my code and immediate toolchain, but my dev machine kinda feels like it's held together with duct tape.

    replies(1): >>42735775 #
    14. oytis ◴[] No.42735362[source]
    > Most of the rest of the time these folks just smile and nod and continue not caring, and companies can't afford the hassle of firing them, then you open LinkedIn years later and turns out somehow they've failed up to manager, architect or executive while you're still struggling along as a code peasant who happens to take pride in their work.

    Wow. I am probably very lucky, but most of the managers, and especially the architects, I know are actually also exceptional engineers. A kind of exception was a really nice, helpful and proactive guy who just happened not to be a great engineer. He was still very useful for being nice, helpful and proactive, and was promoted for that. "Failing up" to management would actually make a lot of sense for him; unfortunately, he really wanted to code.

    15. anal_reactor ◴[] No.42735397[source]
    I used to be fascinated about computers, but then I understood that being a professional meeting attender pays more for less effort.
    replies(2): >>42735469 #>>42737124 #
    16. KronisLV ◴[] No.42735469{3}[source]
    I still like it, I just acknowledge that being passionate isn't compatible with the corpo culture.

    Reminds me of this: https://www.stilldrinking.org/programming-sucks

    replies(1): >>42738057 #
    17. sofixa ◴[] No.42735513[source]
    I disagree. A lot of DevOps is using abstractions, yes. But using a Terraform module to deploy your managed database without reading the code and checking all options is the same as using a random library without reading the code and checking all parameters in your application. People skimping on important things exist in all roles.

    > people tell you "just use this helm chart or ansible recipe from some github repo to do what you want". What those things really install? Not documented. What can you configure? Check the code.

    I mean, this is just wrong. Both Ansible roles and Helm charts have normalised documentation. Official Ansible modules include docs with all possible parameters, and concrete examples of how they work together. Helm charts also come with a file which literally lists all possible options (values.yaml). And yes, checking the code is always a good idea when using third-party code you don't trust. Which is it you're complaining about: that DevOps people don't understand the code they're running, or that you have to read the code? It can't be both, surely.

    > Coming from the dev world it feels like the infrastructure ecosystem still lives in a tribal knowledge model.

    Rose tinted glasses, and bias. You seem to have worked only with good developer practices (or forgotten about the bad), and bad DevOps ones. Every developer fully understands React or the JS framework du jour they're using because it's cool? You've never seen some weird legacy code with no documentation?

    replies(1): >>42736115 #
    18. bryanrasmussen ◴[] No.42735681{4}[source]
    Maybe they care more about quality as they become managers etc. Quality takes effort; maybe they don't like making the effort themselves but do like making other people make it.
    19. Cthulhu_ ◴[] No.42735765[source]
    You can lead a horse to water, etc. What worked for me wasn't so much a mentor telling me xyz was good / bad, but metrics and quality gates - Sonar (idk when it was renamed to sonarqube or what the difference is) will flag up these issues and simply make the merge request unmergeable unless the trivial issues are fixed.

    Because that's the frustrating part; they're trivial issues, unreachable code and unused variables are harmless (on paper), just a maintenance burden and frustrating for whoever has to maintain it later on. But because they're trivial, the author doesn't care about them either. Trivial issues should be automatically fixed and / or checked by tooling, it shouldn't cost you (the reviewer) any headspace in the first place. And it shouldn't need explanation or convincing to solve either. Shouldn't, but here we are.

    But yeah, the next decade will be interesting. I'm not really using it in my code yet because, idk, the integration broke again or I keep forgetting it exists. But we integrated a tool in our gitlab that generates a code review, both summarizing the changes and highlighting the risks/issues, if any. I don't like that, but the authors of merge requests aren't writing proper merge request descriptions either, so I suppose an AI-generated executive summary is better than nothing.

    20. Cthulhu_ ◴[] No.42735775{3}[source]
    Oof, same to be honest. It doesn't help that at some point Apache changed its configuration format, and that all of these tools seem to have reinvented their configuration file format. And once it's up, you won't have to touch it again for years (at least in my personal server use case; I've never done enterprise-level ops work beyond editing a shell script or CI pipeline).
    21. 0xEF ◴[] No.42735948[source]
    The LLMs are not just another tool for these folks, but for folks who should not be touching code at all. That's the scary part. In my field (industrial automation), I have had to correct issues three times now in the ladder logic on a PLC that drives an automation cell that can definitely kill or hurt someone in the right circumstances (think maintenance/repair). When asked where the logic came from, they showed me the tutorials they feed to their LLM of choice to "teach" it ladder logic, then had it spit out answers to their questions. Safety checks were missed, needless to say, which thankfully only broke the machines.

    These are young controls engineers at big companies. I won't say who, but many of you probably use one of their products to go to your own job.

    I am not against using LLMs as a sort of rubber duck to bounce ideas off of, or maybe to get you thinking in a different direction for the sake of problem solving, but letting them do the work for you without understanding how to check the validity of that work is maddeningly dangerous in some situations.

    22. arkh ◴[] No.42736115{3}[source]
    > Rose tinted glasses, and bias. You seem to have worked only with good developer practices (or forgotten about the bad), and bad DevOps ones. Every developer fully understands React or the JS framework du jour they're using because it's cool? You've never seen some weird legacy code with no documentation?

    Not really. I'm mainly in code maintenance, so good practices are usually those the team I join can add to old legacy projects. Right now I'm trying to modernize a web of 10-20 old ad-hoc apps. But good practices are known to exist and are widely shared, even between dev ecosystems.

    For everything ops and devops, it looks like there are islands of knowledge which are not shared at all, at least from a newbie point of view. Take telemetry, for example: people who worked at Google or Meta all rave about the mythical tools they got to use in-house and how they cannot find anything equivalent outside... and yes, when you check what is available "outside", it looks less powerful, and all those solutions feel the same. So you've got the FAANG islands of tools and ways of doing things, the big-box commercial offerings and their armies of consultants, and then the open-source and freemium ways of doing telemetry.

    replies(1): >>42736323 #
    23. liontwist ◴[] No.42736224[source]
    I think it’s a good thing to use such tools. But no amount of tooling can create quality.

    It gives you an illusion of control. Rules are a cheap substitute for thinking.

    24. sofixa ◴[] No.42736323{4}[source]
    > For everything ops and devops it looks like there are like islands of knowledge which are not shared at all

    Very strongly disagree; if anything, it's the opposite. Many people read the knowledge shared by others and jump to thinking it's suitable for them as well. Microservices and Kubernetes got adopted by everyone and their grandpa because big tech uses them, without any consideration of whether they are suitable for each org.

    > At least when coming with a newbie point of view. Like for example with telemetry: people who worked at Google or Meta all rave about the mythical tools they got to use in-house and how they cannot find anything equivalent outside... and yes when you check what is available "outside" it looks less powerful and all those solutions feel like the same. So you got the FAANG islands of tools and way to do things, the big box commercial offering and their armies of consultants and then the OpenSource and Freemium way of doing telemetry.

    The latter two are converging with OpenTelemetry and Prometheus and related projects. Both ways are well documented, and there are a number of projects and vendors providing alternatives and various options. People can pick what works best for them (and it could very well be open source but hosted for you, cf. Grafana Cloud). I'm not sure how that's related to "islands of knowledge"... observability in general is one of the most widely discussed topics in the space.

    25. quietbritishjim ◴[] No.42736401[source]
    It's definitely worse with LLMs than with StackOverflow. You don't need to fully understand a StackOverflow answer, but you at least need to recognise whether the question could be applicable. An LLM makes the decisions completely for you, and if it doesn't work you can even get it to figure out why for you.

    I think young people today are at severe risk of building up what I call learning debt. This is like technical debt (or indeed real financial debt). They're getting further and further, through university assignments and junior dev roles, without doing the learning that we previously needed to. That's certainly what I've seen. But, at some point, even LLMs won't cut it for the problem they're faced with and suddenly they'll need to do those years of learning all at once (i.e. the debt becomes due). Of course, that's not possible and they'll be screwed.

    replies(1): >>42736953 #
    26. ben_w ◴[] No.42736870[source]
    > I have spent a lot of time trying to mentor these sorts of devs and help them to understand why just doing the bare minimum isn't really a good investment in their own career not to mention it's disrespectful of their colleagues who now need to waste time puzzling through their nonsense and eventually (inevitably) fixing their bugs... Seems to get through about 20% of the time.

    I've seen that, though fortunately only in one place. Duplicated entire files, including the parts to which I had added "TODO: deduplicate this function" comments, rather than changing access specifiers from private to public and subclassing.

    By curious coincidence, 20% was also roughly the percentage of lines in the project which were, thanks to him, blank comments.

    27. redeux ◴[] No.42736880[source]
    > Most of the rest of the time these folks just smile and nod and continue not caring, and companies can't afford the hassle of firing them, then you open LinkedIn years later and turns out somehow they've failed up to manager, architect or executive while you're still struggling along as a code peasant who happens to take pride in their work.

    I’ve heard this sentiment several times over the years, and what I think a lot of people don’t realize is that these folks are just playing a different game than you. Their crappy code is a feature, not a bug, because they’re expending their energy on politics rather than coding. In corporations, politics is a form of work, but it’s not work that many devs want to do. So people will say the uncaring dev is doing poor work, but really they’re just not seeing the real work being done.

    I’m not saying this is right or wrong, it’s just an observation. Obviously this isn’t true for everyone who does a poor job, but if you see that person start climbing the ladder, that’s the reason.

    replies(1): >>42737556 #
    28. ben_w ◴[] No.42736953[source]
    > With LLMs, it makes the decisions completely for you, and if it doesn't work you can even get it to figure out why for you.

    To an extent. The failure modes are still weird. I've tried this kind of automation loop manually to see how good it is, and while it can, as you say, produce functional mediocre code*… it can also get stuck in stupid loops.

    * I ran this until I got bored; it is mediocre code, but ChatGPT did keep improving the code as I wanted it to, right up to the point of boredom: https://github.com/BenWheatley/JSPaint

    29. oblio ◴[] No.42737124{3}[source]
    Pays more for less effort and frequently less risk. Just make sure to get enough headcount to go over the span of control number.
    30. stcroixx ◴[] No.42737556[source]
    The kind of work you're describing doesn't benefit the company, it benefits the individual. It's not what they were hired to do. The poor quality code they produce can be a net negative when it causes bugs, maintenance issues, etc. I think it's always the right choice to boot such a person from any company once they've been identified.
    31. ChrisMarshallNY ◴[] No.42737897[source]
    I have incorporated a lot of SO code. I never incorporate it until I understand exactly what it does.

    I usually learn it, by adapting it to my coding style, and documenting it. I seldom leave it untouched. I usually modify in one way or another, and I always add a HeaderDoc comment, linking to the SO answer.

    So far, I have not been especially thrilled with the AI-generated code that I've encountered. I expect things to improve, rapidly, though.

    32. ericmcer ◴[] No.42738050[source]
    That is a softball question for an AI: "this block of code is throwing these errors, can you tell me why?"
    33. epiccoleman ◴[] No.42738057{4}[source]
    That is an all time favorite that I've come back to many times over the years. It's hard to choose just one quote, but this one always hits for me:

    > You are an expert in all these technologies, and that’s a good thing, because that expertise let you spend only six hours figuring out what went wrong, as opposed to losing your job.

    34. svilen_dobrev ◴[] No.42738468[source]
    > failed up to manager...

    See, everything around us can be a tool. Sticks, screwdrivers, languages, books, phones, cars, houses, roads, software, knowledge, ..

    Through my rosy glasses, this line stops at people (or maybe at life-forms?). People are not tools and should not be treated as such.

    But that is not the case in reality. So anyone for whom other people are tools will fail (or fall) upwards (or will be pulled there), sooner or later.

    sorry if somewhat dark..