My friend works at a well-known tech company in San Francisco. He was reviewing his junior team member's pull request. When asked what a chunk of code did, the team member matter-of-factly replied, "I don't know, ChatGPT wrote that."
It's interesting to see the underlying anxiety among devs, though. I think in the back of their minds they know the models will get better and better and could someday reach staff-engineer level.
Perhaps take the intern's LLM-generated code and have the LLM do the code review. Keep reviewing the code with the LLM until the intern gets it right.
I do agree that I feel much more productive with LLMs, though. Just being able to rubber-duck my ideas with an AI and talk through code is extremely helpful, especially because I'm a solo dev/freelancer and don't usually have anyone else to do that with. And even though I don't typically use the code they give me, it's still helpful to see what the AI is thinking and explore that.
On the other hand, I use d3.js for data visualization, which has had a stable API for years, likely has hundreds of thousands of examples that are small and contained in a single file, and has many blog posts, O'Reilly books, and tutorials. The LLMs create perfect, high-quality data visualizations. Any request to change one, such as adding dynamic sliders or styling tooltips, is handled without errors or bugs. People who do data visualization will likely be the first to go. :(
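To be concrete, here's a rough sketch of the kind of self-contained snippet I mean, a bar chart with a styled tooltip. It assumes d3 v7 is loaded globally (e.g. from https://d3js.org/d3.v7.min.js); the data, sizing, and tooltip styling are all made up for illustration:

    // Tiny single-file d3 bar chart with a styled tooltip (d3 v7).
    const data = [{label: "A", value: 30}, {label: "B", value: 80}, {label: "C", value: 45}];
    const width = 400, height = 200, margin = 40;

    // Band scale for categories, linear scale for values.
    const x = d3.scaleBand()
        .domain(data.map(d => d.label))
        .range([margin, width - margin])
        .padding(0.2);
    const y = d3.scaleLinear()
        .domain([0, d3.max(data, d => d.value)])
        .range([height - margin, margin]);

    const svg = d3.select("body").append("svg")
        .attr("width", width)
        .attr("height", height);

    // A plain absolutely-positioned div, styled inline, used as the tooltip.
    const tooltip = d3.select("body").append("div")
        .style("position", "absolute")
        .style("padding", "4px 8px")
        .style("background", "#333")
        .style("color", "#fff")
        .style("border-radius", "4px")
        .style("opacity", 0);

    // Draw the bars and show/move/hide the tooltip on mouse events.
    svg.selectAll("rect")
        .data(data)
        .join("rect")
        .attr("x", d => x(d.label))
        .attr("y", d => y(d.value))
        .attr("width", x.bandwidth())
        .attr("height", d => y(0) - y(d.value))
        .attr("fill", "steelblue")
        .on("mouseover", (event, d) => {
            tooltip.style("opacity", 1).text(`${d.label}: ${d.value}`);
        })
        .on("mousemove", (event) => {
            tooltip.style("left", `${event.pageX + 10}px`)
                   .style("top", `${event.pageY - 20}px`);
        })
        .on("mouseout", () => tooltip.style("opacity", 0));

Everything lives in one file with no build step, which is exactly why the training data is saturated with examples like it.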
I am concerned that new libraries will not gain traction because the LLMs haven't been trained on them. We will be able to implement all the popular libraries, languages, and techniques quickly; however, innovation might stall if we rely on these machines stuck in the past.