    258 points signa11 | 21 comments
    kirubakaran ◴[] No.42732804[source]
    > A major project will discover that it has merged a lot of AI-generated code

    My friend works at a well-known tech company in San Francisco. He was reviewing his junior team member's pull request. When asked what a chunk of code did, the team member matter-of-factly replied, "I don't know, ChatGPT wrote that."

    replies(16): >>42733064 #>>42733126 #>>42733357 #>>42733510 #>>42733737 #>>42733790 #>>42734461 #>>42734543 #>>42735030 #>>42735130 #>>42735456 #>>42735525 #>>42735773 #>>42736703 #>>42736792 #>>42737483 #
    DowsingSpoon ◴[] No.42733737[source]
    I am fairly certain that if someone did that where I work, security would be escorting them off the property within the hour. This is NOT okay.
    replies(5): >>42733887 #>>42733897 #>>42734054 #>>42734331 #>>42734746 #
    1. bitmasher9 ◴[] No.42733897[source]
    Where I work we are actively encouraged to use more AI tools while coding, to the point where my direct supervisor asked why my team's usage statistics were lower than the company average.
    replies(1): >>42733926 #
    2. dehrmann ◴[] No.42733926[source]
    It's not necessarily the use of AI tools that's the problem (though the licensing parts are an issue); it's that someone submitted code for review without knowing how it works.
    replies(3): >>42733954 #>>42734138 #>>42735136 #
    3. masteruvpuppetz ◴[] No.42733954[source]
    I think we should have reached, or have already reached, a place where AI-written code is acceptable.
    replies(3): >>42734014 #>>42734055 #>>42734506 #
    4. bigstrat2003 ◴[] No.42734014{3}[source]
    Whether or not it's acceptable to submit AI code, it is clearly unacceptable to submit code that you don't even understand. If that's all an employee is capable of, why on earth would the employer pay them a software engineer's salary instead of hiring someone to do the exact same thing for minimum wage?
    replies(1): >>42734337 #
    5. dpig_ ◴[] No.42734055{3}[source]
    What a god-awful thing to hear.
    6. xiasongh ◴[] No.42734138[source]
    Didn't people already do this before, copying and pasting code off Stack Overflow? I don't like it either, but this issue has always existed; perhaps it's just more common now.
    replies(3): >>42734276 #>>42734384 #>>42734669 #
    7. hackable_sand ◴[] No.42734276{3}[source]
    Maybe it's because I'm self-taught, but I have always accounted for every line I push.

    It's insulting that companies are paying people to cosplay as programmers.

    replies(2): >>42734882 #>>42735003 #
    8. userbinator ◴[] No.42734337{4}[source]
    Or even replace them with the AI directly.
    9. rixed ◴[] No.42734384{3}[source]
    Or importing a new library that hasn't been audited. Or compiling it with a compiler that hasn't been audited. Or running it on silicon that hasn't been audited.

    We can draw the line in many places.

    I would take generated code that a rookie obtained from an LLM and copied without fully understanding it, but thoughtfully tested, over something he authored himself and submitted for review without enough checks.

    replies(2): >>42734895 #>>42735242 #
    10. bsder ◴[] No.42734506{3}[source]
    The problem is that "AI" is likely whitewashing the copyright from proprietary code.

    I asked one of the "AI" assistants to do a very specific algorithmic problem for me, and it did, complete with unit tests that just so happened to hit all the exact edge cases you would need to test for with that algorithm.

    The "AI assistant" very clearly regurgitated the code of somebody. I, however, couldn't find a particular example of that code no matter how hard I searched. It is extremely likely that the regurgitated code was not open source.

    Who is liable if I incorporate that code into my product?
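
    The comment doesn't say which algorithm was involved, so here is a purely hypothetical sketch of what a suspiciously complete, edge-case-exact test suite might look like, using binary search as a stand-in (all names below are invented for the example):

        import unittest
        from bisect import bisect_left

        def binary_search(items, target):
            """Return the leftmost index of target in sorted items, or -1 if absent."""
            i = bisect_left(items, target)
            return i if i < len(items) and items[i] == target else -1

        class TestBinarySearch(unittest.TestCase):
            # The boundary cases a thorough (or regurgitated) suite would hit:
            def test_empty(self):
                self.assertEqual(binary_search([], 1), -1)

            def test_single_element_hit_and_miss(self):
                self.assertEqual(binary_search([5], 5), 0)
                self.assertEqual(binary_search([5], 3), -1)

            def test_first_and_last_positions(self):
                self.assertEqual(binary_search([1, 2, 3], 1), 0)
                self.assertEqual(binary_search([1, 2, 3], 3), 2)

            def test_duplicates_return_leftmost(self):
                self.assertEqual(binary_search([1, 2, 2, 2, 3], 2), 1)

        if __name__ == "__main__":
            unittest.main()

    A suite like this doesn't prove copying by itself; the commenter's point is that hitting every one of these boundaries unprompted reads more like recall than synthesis.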

    replies(2): >>42734886 #>>42734888 #
    11. noisy_boy ◴[] No.42734669{3}[source]
    Now there is even less excuse for not knowing what the code does, because the same ChatGPT that gave you the code can explain it too. That wasn't a luxury in the copy/paste-from-StackOverflow days (though explanations of varying depth were available there too).
    replies(1): >>42735026 #
    12. guappa ◴[] No.42734882{4}[source]
    I've seen self-taught programmers and graduates alike do that.
    13. kybernetikos ◴[] No.42734886{4}[source]
    It sounds like you don't believe that AI can produce correct new work, but it absolutely can.

    I've no idea whether in this case it directly copied someone else's work, but I don't think that its writing good unit tests is evidence that it did; that's it doing what it was built to do. And your searching and failing to find a source is only weak evidence that it did not.

    replies(1): >>42745251 #
    14. guappa ◴[] No.42734888{4}[source]
    According to Microsoft: "the user".

    There are companies that scan code to see whether it matches known open source code. However, they probably just scan GitHub, so they won't even have a lot of the big projects.
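
    Neither the comment nor Microsoft names a specific technique, but scanners like these are commonly built on code fingerprinting. A toy sketch of the shingle-hashing idea (all names here are invented for the example):

        import hashlib

        def shingle_hashes(code: str, k: int = 5) -> set:
            """Hash every k-token window of the source (a toy fingerprint)."""
            # Real scanners normalize identifiers, whitespace, and comments first.
            tokens = code.split()
            return {
                hashlib.sha1(" ".join(tokens[i:i + k]).encode()).hexdigest()
                for i in range(max(0, len(tokens) - k + 1))
            }

        def similarity(a: str, b: str) -> float:
            """Jaccard overlap of two fingerprint sets: 1.0 means identical shingles."""
            ha, hb = shingle_hashes(a), shingle_hashes(b)
            return len(ha & hb) / len(ha | hb) if ha | hb else 0.0

    With a sketch like this, a high similarity score only means the code matches something in the index; code that was never published on GitHub would sail through, which is the gap being pointed out.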

    15. yjftsjthsd-h ◴[] No.42734895{4}[source]
    > We can draw the line in many places.

    That doesn't make those places equivalent.

    16. ascorbic ◴[] No.42735003{4}[source]
    It's probably more common among self-taught programmers (and I say that as one myself). Most go through an early stage of copying chunks of code and seeing if they work. Maybe not blindly copying it, but still copying code from examples or whatever. I know I did (except it was 25 years ago from Webmonkey or the php.net comments section rather than StackOverflow). I'd imagine formally educated programmers can skip some (though not all) of that by having to learn more of the theory first.
    replies(1): >>42735395 #
    17. ascorbic ◴[] No.42735026{4}[source]
    Yes, and I think the mistakes that LLMs commonly make are less problematic than Stack Overflow's. LLMs seem to most often either hallucinate APIs or use outdated ones, and those mistakes are easier to detect because the code just doesn't work. They're not perfect, but they seem less inclined to generate the bad practices and security holes that are the bread and butter of Stack Overflow. In fact, they're pretty good at identifying those sorts of problems in existing code.
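
    A minimal illustration of that contrast, assuming a hallucinated method name (invented here) on Python's sqlite3, next to the kind of Stack Overflow-style bug that runs fine until exploited:

        import sqlite3

        conn = sqlite3.connect(":memory:")
        conn.execute("CREATE TABLE users (name TEXT)")
        conn.execute("INSERT INTO users VALUES ('alice')")

        # Hallucinated-API failure mode: loud and immediate. sqlite3
        # connections have no fetch_all_rows method, so this blows up
        # the first time the code path runs.
        try:
            conn.fetch_all_rows("SELECT * FROM users")  # hypothetical method
        except AttributeError as e:
            print("fails loudly:", e)

        # Copy-paste failure mode: quiet and dangerous. This passes every
        # happy-path test, but the string interpolation is an SQL injection
        # hole that no exception will ever flag.
        def find_user(name):
            cur = conn.execute(f"SELECT * FROM users WHERE name = '{name}'")
            return cur.fetchall()

        print(find_user("alice"))          # works as expected
        print(find_user("' OR '1'='1"))    # dumps the whole table

    The first failure announces itself on the first run; the second passes every happy-path test.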
    18. johnisgood ◴[] No.42735136[source]
    I use AI these days, and I know how things work; there really is a huge difference. That knowledge helps me get the AI to write code faster and the way I want it, code I could write myself, just more slowly.
    19. whatevertrevor ◴[] No.42735242{4}[source]
    That's a false dichotomy. People can write code themselves and thoroughly test it too.
    20. hackable_sand ◴[] No.42735395{5}[source]
    If people are being paid to copy and run random code, more power to them. I wouldn't have dreamt of getting a programming job until I was literate.
    21. bsder ◴[] No.42745251{5}[source]
    There is no way on this planet that an LLM "created" the exact unit tests needed to catch all the edge cases; it would take even a human quite a bit of thought to catch them all.

    If you change the programming language, the unit tests disappear and the "generated" code loses the nice abstractions. It's clearly regurgitating the Python code and "generating" the code for other languages.