
489 points todsacerdoti | 1 comments | | HN request time: 0.314s | source
acedTrex ◴[] No.44383211[source]
Oh hey, the thing I predicted in my blog titled "yes i will judge you for using AI" happened lol

Basically I think open source has traditionally relied HEAVILY on hidden competency markers to judge the quality of incoming contributions. LLMs turn that entire concept on its head by producing code that carries the competency markers but none of the backing experience. It is a very jarring experience for experienced reviewers.

I suspect that virtual or in person meetings and other forms of social proof independent of the actual PR will become far more crucial for making inroads in large projects in the future.

replies(3): >>44383293 #>>44383732 #>>44384776 #
stevage ◴[] No.44384776[source]
> Basically I think open source has traditionally HEAVILY relied on hidden competency markers to judge the quality of incoming contributions.

Yep, and it's not just code. Student essays, funding applications, internal reports, fiction, art...everything that AI touches has this problem that AI outputs look superficially similar to the work of experts.

replies(2): >>44385407 #>>44385413 #
whatevertrevor ◴[] No.44385407[source]
I have learned over time that the actually smart people worth listening to avoid jargon beyond what is strictly necessary, and talk in simple terms with specific goals, improvements, or changes in mind.

If I'm having to reread something over and over just to understand what they're even trying to accomplish, odds are it's either AI-generated or an attempt at sounding smart rather than being constructive.