
258 points signa11 | 4 comments
kirubakaran ◴[] No.42732804[source]
> A major project will discover that it has merged a lot of AI-generated code

My friend works at a well-known tech company in San Francisco. He was reviewing his junior team member's pull request. When asked what a chunk of code did, the team member matter-of-factly replied "I don't know, chatgpt wrote that"

replies(16): >>42733064 #>>42733126 #>>42733357 #>>42733510 #>>42733737 #>>42733790 #>>42734461 #>>42734543 #>>42735030 #>>42735130 #>>42735456 #>>42735525 #>>42735773 #>>42736703 #>>42736792 #>>42737483 #
DowsingSpoon ◴[] No.42733737[source]
I am fairly certain that if someone did that where I work, security would be escorting them off the property within the hour. This is NOT okay.
replies(5): >>42733887 #>>42733897 #>>42734054 #>>42734331 #>>42734746 #
bitmasher9 ◴[] No.42733897[source]
Where I work we are actively encouraged to use more AI tools while coding, to the point where my direct supervisor asked why my team’s usage statistics were lower than company average.
replies(1): >>42733926 #
dehrmann ◴[] No.42733926[source]
It's not necessarily the use of AI tools that's the problem (though the license parts are an issue); it's that someone submitted code for review without knowing how it works.
replies(3): >>42733954 #>>42734138 #>>42735136 #
masteruvpuppetz ◴[] No.42733954[source]
I think we have already reached, or should reach, a point where AI-written code is acceptable.
replies(3): >>42734014 #>>42734055 #>>42734506 #
1. bsder ◴[] No.42734506[source]
The problem is that "AI" is likely whitewashing the copyright from proprietary code.

I asked one of the "AI" assistants to do a very specific algorithmic problem for me and it did. And included unit tests which just so happened to hit all the exact edge cases that you would need to test for with the algorithm.

The "AI assistant" very clearly regurgitated the code of somebody. I, however, couldn't find a particular example of that code no matter how hard I searched. It is extremely likely that the regurgitated code was not open source.

Who is liable if I incorporate that code into my product?

replies(2): >>42734886 #>>42734888 #
2. kybernetikos ◴[] No.42734886[source]
It seems like you don't believe that AI can produce correct new work, but it absolutely can.

I've no idea whether in this case it directly copied someone else's work, but I don't think that it writing good unit tests is evidence that it did - that's it doing what it was built to do. And you searching and failing to find a source is weak evidence that it did not.

replies(1): >>42745251 #
3. guappa ◴[] No.42734888[source]
According to microsoft: "the user".

There are companies that scan code to check whether it matches known open-source code. However, they probably just scan GitHub, so they won't even cover a lot of the big projects.
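Such scanners typically work by fingerprinting rather than exact matching, so that renamed variables and reflowed whitespace still register as a hit. Here's a minimal sketch of one common approach, k-gram hashing with winnowing (the technique behind tools like MOSS); the function names and parameters are illustrative, not any vendor's actual API:

```python
# Illustrative sketch of code-similarity fingerprinting via winnowing.
# All names/parameters here are hypothetical, not a real scanner's API.
import hashlib

def kgram_hashes(text: str, k: int = 5) -> list[int]:
    """Hash every k-character substring of normalized source text."""
    norm = "".join(text.split()).lower()  # drop whitespace and case
    return [
        int(hashlib.md5(norm[i:i + k].encode()).hexdigest(), 16)
        for i in range(len(norm) - k + 1)
    ]

def winnow(hashes: list[int], window: int = 4) -> set[int]:
    """Keep the minimum hash in each sliding window: the fingerprint."""
    return {
        min(hashes[i:i + window])
        for i in range(len(hashes) - window + 1)
    }

def similarity(a: str, b: str) -> float:
    """Jaccard overlap of the two fingerprints (0.0 to 1.0)."""
    fa, fb = winnow(kgram_hashes(a)), winnow(kgram_hashes(b))
    return len(fa & fb) / len(fa | fb) if fa | fb else 0.0
```

A real scanner would apply this over a corpus of known open-source files, which is exactly why coverage matters: a fingerprint match can only be found in code the vendor has actually indexed.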

4. bsder ◴[] No.42745251[source]
There is no way on this planet that an LLM "created" the exact unit tests needed to catch all the edge cases; it would take even a human quite a bit of thought to catch them all.

If you change the programming language, the unit tests disappear and the "generated" code loses the nice abstractions. It's clearly regurgitating the Python code and "generating" the code for other languages.