
491 points by todsacerdoti | 1 comment
acedTrex:
Oh hey, the thing I predicted in my blog post titled "yes i will judge you for using AI" happened lol

Basically I think open source has traditionally relied HEAVILY on hidden competency markers to judge the quality of incoming contributions. LLMs turn that entire concept on its head by presenting code that carries those competency markers but none of the backing experience. It is a very, very jarring experience for experienced individuals.

I suspect that virtual or in-person meetings and other forms of social proof independent of the actual PR will become far more crucial for making inroads in large projects in the future.

SchemaLoad:
I've started seeing this at work with coworkers using LLMs to generate code reviews. They submit comments that are well above their skill level, which almost tricks you into thinking the comments are correct, since only a very skilled developer would make these suggestions. Then you ultimately end up wasting tons of time proving how the suggestions are wrong, spending far more of it than the person who pasted them spent generating them.
beej71:
I'm not really in the field any longer, but one of my favorite things to do with LLMs is ask for code reviews. I usually end up learning something new, and a good 30-50% of the suggestions are useful. That hit rate isn't skillful enough to earn it the title of "code reviewer", though, so I certainly wouldn't foist its suggestions on someone else.