
491 points todsacerdoti | 1 comment
acedTrex ◴[] No.44383211[source]
Oh hey, the thing I predicted in my blog post titled "yes i will judge you for using AI" happened lol

Basically, I think open source has traditionally relied HEAVILY on hidden competency markers to judge the quality of incoming contributions. LLMs turn that entire concept on its head by producing code that bears the markers of competence but none of the backing experience. It is a very, very jarring experience for experienced individuals.

I suspect that virtual or in-person meetings, and other forms of social proof independent of the actual PR, will become far more crucial for making inroads into large projects in the future.

replies(3): >>44383293 #>>44383732 #>>44384776 #
SchemaLoad ◴[] No.44383293[source]
I've started seeing this at work, with coworkers using LLMs to generate code review comments. They submit feedback that is way above their skill level, which almost tricks you into thinking it's correct, since only a very skilled developer would make those suggestions. Then you end up wasting tons of time proving the suggestions wrong, far more time than the person pasting them spent generating them.
replies(5): >>44383324 #>>44383343 #>>44383723 #>>44383791 #>>44384027 #
Groxx ◴[] No.44383723[source]
By far the largest review-effort PRs of my career have come in the past year, due to mid-sized LLM-built features. Multiple rounds of signoffs from others saying "lgtm" with only minor style comments, only for me to finally read it and see that no, it is not even remotely acceptable: we have several use cases, built by the same team, that would fail immediately if this were merged, to say nothing of the thousands of other users who might also be affected. These are things the reviewers have experience with and didn't think about, because they got stuck in the "looks plausible" rut rather than asking "is it correct?"

So it goes back for changes. It returns the next day with complete rewrites of large chunks. More "lgtm" from others. More incredibly obvious flaws, race conditions, the works.
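
To make that concrete, here is a hypothetical Go sketch (not from the actual PRs, all names invented) of the kind of race that sails through a "looks plausible" skim. Every line reads fine in isolation, but the shared map is written from multiple goroutines with no lock:

    // Hypothetical sketch: plausible-looking code hiding a data race.
    package main

    import "sync"

    type cache struct {
        data map[string]int // shared, unguarded state
    }

    func (c *cache) warm(keys []string) {
        var wg sync.WaitGroup
        for _, k := range keys {
            wg.Add(1)
            go func(k string) {
                defer wg.Done()
                c.data[k] = len(k) // concurrent map write: fatal at runtime
            }(k)
        }
        wg.Wait()
    }

    func main() {
        c := &cache{data: make(map[string]int)}
        c.warm([]string{"a", "bb", "ccc", "dddd"})
    }

The race detector (go run -race) or any real traffic flags this immediately; a style-level skim does not.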

And then round three repeats mistakes that came up in round one, because LLMs don't learn.

This is not a future style of work that I look forward to participating in.

replies(1): >>44384634 #
tobyhinloopen ◴[] No.44384634[source]
I think a future with LLM coding requires many more tests, covering both the happy paths and the failure paths.
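
As a sketch of what that looks like (everything here is hypothetical, compressed into a single file for illustration): a table-driven test that exercises the failure paths alongside the happy one:

    // Hypothetical single-file example (would normally live in a
    // _test.go file): the function under test plus a table-driven
    // test covering the happy path and two failure paths.
    package parse

    import (
        "fmt"
        "strconv"
        "testing"
    )

    func ParsePort(s string) (int, error) {
        n, err := strconv.Atoi(s)
        if err != nil || n < 1 || n > 65535 {
            return 0, fmt.Errorf("invalid port %q", s)
        }
        return n, nil
    }

    func TestParsePort(t *testing.T) {
        cases := []struct {
            name    string
            in      string
            want    int
            wantErr bool
        }{
            {"happy path", "8080", 8080, false},
            {"empty input", "", 0, true},
            {"out of range", "70000", 0, true},
        }
        for _, tc := range cases {
            t.Run(tc.name, func(t *testing.T) {
                got, err := ParsePort(tc.in)
                if (err != nil) != tc.wantErr {
                    t.Fatalf("err = %v, wantErr %v", err, tc.wantErr)
                }
                if !tc.wantErr && got != tc.want {
                    t.Fatalf("got %d, want %d", got, tc.want)
                }
            })
        }
    }

The failure cases are exactly the ones a "looks plausible" review skips over, and the ones generated code most often gets wrong.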
replies(2): >>44385427 #>>44387195 #
zelphirkalt ◴[] No.44387195[source]
I think the issue is people taking mental shortcuts and thus no longer properly thinking about design decisions or the bigger picture: the overall concepts of the software.