My friend works at a well-known tech company in San Francisco. He was reviewing his junior team member's pull request. When asked what a chunk of code did, the team member matter-of-factly replied "I don't know, chatgpt wrote that"
Can you imagine the fallout from this, though? Each and every line of code this junior has ever touched needs to be scrutinized to determine its provenance. The company now must assume the employee has been uploading confidential material to OpenAI too. This is an uncomfortable legal risk.
How could you trust the dev again after the dust is settled?
Also, it raises further concerns for me that this junior seems to be genuinely, honestly unaware that using ChatGPT to write code would at the very least be frowned upon. That’s a frankly dangerous level of professional incompetence. (At least they didn’t try to hide it.)
Well, now I’m wondering what the correct way would be to handle a junior doing this with ChatGPT, and how to handle similar kinds of mistakes: copy-pasting GPL code into a proprietary code base, copy-pasting code from Stack Overflow, sharing snippets of company code online, and so on.
It's insulting that companies are paying people to cosplay as programmers.
Austen Allred is selling this as the future of programming. According to him, the days of writing code into an IDE are over.
We can draw the line in many places.
I would rather take code a rookie obtained from an LLM and copied without fully understanding it, but has thoughtfully tested, than something he authored himself and submitted for review without adequate checks.
(and honestly it's not like the past graduates were much better, but they didn't have chatgpt)
I asked one of the "AI" assistants to solve a very specific algorithmic problem for me, and it did. It even included unit tests that just so happened to hit exactly the edge cases you would need to test for with that algorithm.
The "AI assistant" very clearly regurgitated the code of somebody. I, however, couldn't find a particular example of that code no matter how hard I searched. It is extremely likely that the regurgitated code was not open source.
Who is liable if I incorporate that code into my product?
I've no idea whether in this case it directly copied someone else's work, but I don't think its writing good unit tests is evidence that it did - that's it doing what it was built to do. And your searching and failing to find a source is only weak evidence that it did not.
That doesn't make those places equivalent.
A free training program with a promise of a guaranteed high paying job at the end, where have I heard that before? Seems like their business model is probably to churn people through these sessions and then monetize whatever shitty chatbot app they build through the training.
I think what the junior did is grounds to fire them (then you can try again with better selection practices). Not because they used code from an LLM, but because they didn't even try to understand what it was doing. That says a lot about their attitude toward programming.
Considering this was a sponsored link on HN, endorsed by Y Combinator, I'd say you have a ridiculous threshold for labeling something a "scam", except to the degree that the companies committing to hire these people are pretty unlikely to get whatever they were hoping to get.
If you change the programming language, the unit tests disappear and the "generated" code loses the nice abstractions. It's clearly regurgitating memorized Python code and merely "generating" the versions for other languages.