
258 points by signa11 | 43 comments
kirubakaran ◴[] No.42732804[source]
> A major project will discover that it has merged a lot of AI-generated code

My friend works at a well-known tech company in San Francisco. He was reviewing his junior team member's pull request. When asked what a chunk of code did, the team member matter-of-factly replied "I don't know, chatgpt wrote that"

replies(16): >>42733064 #>>42733126 #>>42733357 #>>42733510 #>>42733737 #>>42733790 #>>42734461 #>>42734543 #>>42735030 #>>42735130 #>>42735456 #>>42735525 #>>42735773 #>>42736703 #>>42736792 #>>42737483 #
1. DowsingSpoon ◴[] No.42733737[source]
I am fairly certain that if someone did that where I work, security would be escorting them off the property within the hour. This is NOT okay.
replies(5): >>42733887 #>>42733897 #>>42734054 #>>42734331 #>>42734746 #
2. phinnaeus ◴[] No.42733887[source]
Are you hiring?
3. bitmasher9 ◴[] No.42733897[source]
Where I work we are actively encouraged to use more AI tools while coding, to the point where my direct supervisor asked why my team’s usage statistics were lower than company average.
replies(1): >>42733926 #
4. dehrmann ◴[] No.42733926[source]
It's not necessarily the use of AI tools (though the license parts are an issue); it's that someone submitted code for review without knowing how it works.
replies(3): >>42733954 #>>42734138 #>>42735136 #
5. masteruvpuppetz ◴[] No.42733954{3}[source]
I think we should have reached (or have already reached) a point where AI-written code is acceptable.
replies(3): >>42734014 #>>42734055 #>>42734506 #
6. bigstrat2003 ◴[] No.42734014{4}[source]
Whether it's acceptable or not to submit AI code, it is clearly unacceptable to submit code that you don't even understand. If that's all an employee is capable of, why on earth would the employer pay them a software engineer's salary versus hiring someone at minimum wage to do the exact same thing?
replies(1): >>42734337 #
7. bigstrat2003 ◴[] No.42734054[source]
To be fair I don't think someone should get fired for that (unless it's a repeat offense). Kids are going to do stupid things, and it's up to the more experienced to coach them and help them to understand it's not acceptable. You're right that it's not ok at all, but the first resort should be a reprimand and being told they are expected to understand code they submit.
replies(2): >>42734253 #>>42734271 #
8. dpig_ ◴[] No.42734055{4}[source]
What a god-awful thing to hear.
9. xiasongh ◴[] No.42734138{3}[source]
Didn't people already do this before, copying and pasting code off Stack Overflow? I don't like it either, but this issue has always existed; perhaps it is just more common now.
replies(3): >>42734276 #>>42734384 #>>42734669 #
10. DowsingSpoon ◴[] No.42734253[source]
I understand the point you’re trying to get across. For many kinds of mistakes, I agree it makes good sense to warn and correct the junior. Maybe that’s the case here. I’m willing to concede there’s room for debate.

Can you imagine the fallout from this, though? Each and every line of code this junior has ever touched needs to be scrutinized to determine its provenance. The company now must assume the employee has been uploading confidential material to OpenAI too. This is an uncomfortable legal risk.

How could you trust the dev again after the dust is settled?

Also, it raises further concerns for me that this junior seems to be genuinely, honestly unaware that using ChatGPT to write code would at least be frowned upon. That's a frankly dangerous level of professional incompetence. (At least they didn't try to hide it.)

Well, now I'm wondering what the correct way would be to handle a junior doing this with ChatGPT, and how to handle similar kinds of mistakes such as copy-pasting GPL code into a proprietary code base, copy-pasting code from Stack Overflow, sharing snippets of company code online, and so on.

replies(4): >>42734298 #>>42734496 #>>42734700 #>>42734745 #
11. LastTrain ◴[] No.42734271[source]
Kids, sure. A university-trained professional who's paid like one? No.
replies(1): >>42734412 #
12. hackable_sand ◴[] No.42734276{4}[source]
Maybe it's because I'm self-taught, but I have always accounted for every line I push.

It's insulting that companies are paying people to cosplay as programmers.

replies(2): >>42734882 #>>42735003 #
13. thaumasiotes ◴[] No.42734298{3}[source]
> Also, it raises further concerns for me that this junior seems to be genuinely, honestly unaware that using ChatGPT to write code wouldn’t at least be frowned upon.

Austen Allred is selling this as the future of programming. According to him, the days of writing code into an IDE are over.

https://www.gauntletai.com/

replies(2): >>42734542 #>>42735281 #
14. userbinator ◴[] No.42734331[source]
In such an environment, it would be more common for access to ChatGPT (or even most of the Internet) to be blocked.
15. userbinator ◴[] No.42734337{5}[source]
Or even replace them with the AI directly.
16. rixed ◴[] No.42734384{4}[source]
Or importing a new library that's not been audited. Or compiling it with a compiler that's not been audited? Or running it on silicon that's not been audited?

We can draw the line in many places.

I would take generated code that a rookie obtained from an LLM and copied without understanding all of it, but has thoughtfully tested, over something he authored himself and submitted for review without enough checks.

replies(2): >>42734895 #>>42735242 #
17. raverbashing ◴[] No.42734412{3}[source]
You have high expectations of the current batch of college graduates.

(And honestly, it's not like past graduates were much better, but they didn't have ChatGPT.)

replies(1): >>42734513 #
18. manmal ◴[] No.42734496{3}[source]
> The company now must assume the employee has been uploading confidential material to OpenAI too.

If you think that’s not already the case for most of your codebase, you might be in for a rough awakening.

19. bsder ◴[] No.42734506{4}[source]
The problem is that "AI" is likely whitewashing the copyright from proprietary code.

I asked one of the "AI" assistants to do a very specific algorithmic problem for me and it did. And included unit tests which just so happened to hit all the exact edge cases that you would need to test for with the algorithm.

The "AI assistant" very clearly regurgitated the code of somebody. I, however, couldn't find a particular example of that code no matter how hard I searched. It is extremely likely that the regurgitated code was not open source.

Who is liable if I incorporate that code into my product?

replies(2): >>42734886 #>>42734888 #
20. The_Colonel ◴[] No.42734513{4}[source]
A cynical take would be that the current market conditions allow you to filter out such college graduates and only take the better ones.
replies(1): >>42735199 #
21. manmal ◴[] No.42734542{4}[source]
Responding to the link you posted: apparently, the future of programming is 100-hour weeks? Naive me was thinking we could work less and think more with these new tools at our disposal.
replies(2): >>42734637 #>>42734898 #
22. ojbyrne ◴[] No.42734637{5}[source]
Also, you'd think with their fancy AI coding they could update their dates to the future, or at least disable the page for a past-dated session.
23. noisy_boy ◴[] No.42734669{4}[source]
Now there is even less excuse for not knowing what it does, because the same ChatGPT that gave you the code can explain it too. That wasn't a luxury available in the copy/paste-from-Stack-Overflow days (though explanations of varying depth were available there too).
replies(1): >>42735026 #
24. ujkiolp ◴[] No.42734700{3}[source]
Unless you work for hospitals or critical infrastructure, this reaction is overblown and comical.
25. guappa ◴[] No.42734745{3}[source]
I've seen seniors and above do that.

They never cared about respecting software licenses until Biden said they must. Then they started to lament and cry.

26. dyauspitr ◴[] No.42734746[source]
Why? I encourage all my devs to use AI, but they need to be able to explain what it does.
27. guappa ◴[] No.42734882{5}[source]
I've seen self taught and graduates alike do that.
28. kybernetikos ◴[] No.42734886{5}[source]
This seems like you don't believe that AI can produce correct new work, but it absolutely can.

I've no idea whether in this case it directly copied someone else's work, but I don't think writing good unit tests is evidence that it did; that's it doing what it was built to do. And your searching and failing to find a source is only weak evidence that it did not.

replies(1): >>42745251 #
29. guappa ◴[] No.42734888{5}[source]
According to Microsoft: "the user".

There are companies that scan code to see whether it matches known open-source code. However, they probably just scan GitHub, so they won't even have a lot of the big projects.

30. yjftsjthsd-h ◴[] No.42734895{5}[source]
> We can draw the line in many places.

That doesn't make those places equivalent.

31. guappa ◴[] No.42734898{5}[source]
Seems people didn't read the link and are downvoting you, possibly because they don't understand what you're talking about.
replies(1): >>42735002 #
32. manmal ◴[] No.42735002{6}[source]
Thanks, added context.
33. ascorbic ◴[] No.42735003{5}[source]
It's probably more common among self-taught programmers (and I say that as one myself). Most go through the early stage of copying chunks of code and seeing if they work. Maybe not blindly copying it, but still copying code from examples or whatever. I know I did (except it was 25 years ago from Webmonkey or the php.net comments section rather than StackOverflow). I'd imagine formally-educated programmers can skip some (though not all) of that by having to learn more of the theory at first.
replies(1): >>42735395 #
34. ascorbic ◴[] No.42735026{5}[source]
Yes, and I think the mistakes that LLMs commonly make are less problematic than Stack Overflow. LLMs seem to most often either hallucinate APIs, or use outdated ones. They're easier to detect when they just don't work. They're not perfect, but seem less inclined to generate the bad practices and security holes that are the bread and butter of Stack Overflow. In fact they're pretty good at identifying those sort of problems in existing code.
35. johnisgood ◴[] No.42735136{3}[source]
I use AI these days and I know how things work; there really is a huge difference. It helps me get the AI to write code faster and the way I want it, something I could do myself, just more slowly.
36. solatic ◴[] No.42735199{5}[source]
And how do you propose filtering them out? There's a reason college students are using LLMs: they're getting better grades for less effort. I assume you're not proposing selecting students with worse grades on purpose?
replies(2): >>42735282 #>>42737575 #
37. whatevertrevor ◴[] No.42735242{5}[source]
That's a false dichotomy. People can write code themselves and thoroughly test it too.
38. whatevertrevor ◴[] No.42735281{4}[source]
Without prior knowledge, that reads like a scam?

A free training program with a promise of a guaranteed high-paying job at the end: where have I heard that before? Seems like their business model is to churn people through these sessions and then monetize whatever shitty chatbot app they build during the training.

replies(1): >>42743619 #
39. The_Colonel ◴[] No.42735282{6}[source]
I wouldn't hire based on grades.

I think what the junior did is a reason to fire them (then you can try again with better selection practices). Not because they used code from an LLM, but because they didn't even try to understand what it was doing. That says a lot about their attitude to programming.

40. hackable_sand ◴[] No.42735395{6}[source]
If people are being paid to copy and run random code, more power to them. I wouldn't have dreamt of getting a programming job until I was literate.
41. LastTrain ◴[] No.42737575{6}[source]
One way to filter them out, relevant to this thread, would be to let them go if they brazenly turned in work they did not create and do not understand.
42. thaumasiotes ◴[] No.42743619{5}[source]
No, their business model is getting placement fees for whoever they graduate from the program.

Considering this was a sponsored link on HN, endorsed by Y Combinator, I'd say you have a ridiculous threshold for labeling something a "scam", except to the degree that the companies committing to hire these people are pretty unlikely to get whatever they were hoping to get.

43. bsder ◴[] No.42745251{6}[source]
There is no way on this planet that an LLM "created" the exact unit tests needed to catch all the edge cases; it would take even a human quite a bit of thought to catch them all.

If you change the programming language, the unit tests disappear and the "generated" code loses the nice abstractions. It's clearly regurgitating the Python code and "generating" the code for other languages.