
494 points by todsacerdoti | 1 comment
acedTrex No.44383211
Oh hey, the thing I predicted in my blog post titled "yes i will judge you for using AI" happened, lol.

Basically, I think open source has traditionally relied HEAVILY on hidden competency markers to judge the quality of incoming contributions. LLMs turn that entire concept on its head by presenting code that has the competency markers but none of the backing experience. It is a very, very jarring experience for experienced individuals.

I suspect that virtual or in-person meetings, and other forms of social proof independent of the actual PR, will become far more crucial for making inroads into large projects in the future.

SchemaLoad No.44383293
I've started seeing this at work with coworkers using LLMs to generate code reviews. They submit comments that are way above their skill level, which almost tricks you into thinking they are correct, since only a very skilled developer would make these suggestions. And then you ultimately end up wasting tons of time proving the suggestions wrong — far more time than the person pasting them spent generating them.
diabllicseagull No.44383343
Funny enough, I've had coworkers who similarly had a hold of the jargon but without any substance. They always turned out to be time sinks for the others doing the useful work. AI imitating that type of drag on the workplace is kinda funny, ngl.
heisenbit No.44384855
Probabilistic patterns strung together are something different from an end-to-end, intention-driven, solidly linked chain of thought that is grounded, as if on pylons, in relevant context at critical points.