My friend works at a well-known tech company in San Francisco. He was reviewing his junior team member's pull request. When asked what a chunk of code did, the team member matter-of-factly replied, "I don't know, ChatGPT wrote that."
It's interesting to see the underlying anxiety among devs, though. I think there's a place in the back of their minds that knows the models will get better and better, and could someday reach staff-engineer level.
I'm thinking something like: the models get small enough, you fine-tune them on your codebase, you add fuzzing and rewriting.
It may not be bug-free, but could it become self-healing, with any remaining bugs confined to a minimal set of known locations described in natural language? Or instead of x engineers, one engineer feeds the skeleton to ChatGPT 20 or something, and instead of giving you the result immediately, it works iteratively. That would still be cheaper than x devs.
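For what it's worth, here's a minimal sketch of what that generate-fuzz-rewrite loop could look like. Everything in it is an assumption: CodeModel, run_fuzzer, and self_healing_loop are hypothetical names standing in for a fine-tuned small model and a fuzzing harness you'd supply yourself, not any real API.

```python
from typing import Protocol

class CodeModel(Protocol):
    """Hypothetical interface for a small model fine-tuned on your codebase."""
    def generate(self, skeleton: str, feedback: list[str] | None = None) -> str: ...

def run_fuzzer(code: str) -> list[str]:
    """Placeholder for a fuzzing/test harness: returns human-readable
    failure reports for `code`, empty if nothing breaks."""
    raise NotImplementedError  # wire up your own harness here

def self_healing_loop(model: CodeModel, skeleton: str, max_rounds: int = 10) -> str:
    """Generate code from a skeleton, fuzz it, feed the failures back
    to the model, and repeat until the fuzzer finds nothing (or the
    iteration budget runs out)."""
    code = model.generate(skeleton)       # first draft from the skeleton
    for _ in range(max_rounds):
        failures = run_fuzzer(code)       # known bug locations, in natural language
        if not failures:
            break                         # self-healed, as far as the fuzzer can tell
        # Feed the failing cases back so the model rewrites only the broken spots.
        code = model.generate(skeleton, feedback=failures)
    return code
```

The key point being that the loop trades immediacy for correctness: each round narrows the known failures instead of handing you a one-shot answer.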
One man's bug is another man's feature