
490 points todsacerdoti | 1 comment
acedTrex No.44383211
Oh hey, the thing I predicted in my blog titled "yes i will judge you for using AI" happened lol

Basically I think open source has traditionally HEAVILY relied on hidden competency markers to judge the quality of incoming contributions. LLMs turn that entire concept on its head by producing code that carries the markers of competence without any of the backing experience. It is a very, very jarring experience for experienced individuals.

I suspect that virtual or in-person meetings and other forms of social proof independent of the actual PR will become far more crucial for making inroads in large projects in the future.

replies(3): >>44383293 #>>44383732 #>>44384776 #
stevage No.44384776
> Basically I think open source has traditionally HEAVILY relied on hidden competency markers to judge the quality of incoming contributions.

Yep, and it's not just code. Student essays, funding applications, internal reports, fiction, art... everything that AI touches has this problem: AI output looks superficially similar to the work of experts.

replies(2): >>44385407 #>>44385413 #
danielbln No.44385413
The trajectory so far has been that AI outputs are increasingly converging not just on the superficial appearance of expert output but on its quality. We are obviously not there yet, and some might say we never will be. But if we do get there, there is a whole new conversation to be had.
replies(1): >>44387241 #
zelphirkalt No.44387241
I suspect there are at least one or two more significant discoveries needed, in architecture and in the general way these models work, before they become actual experts. Maybe they will never get there, and instead we will discover how to better incorporate facts and reasoning, rather than just ingesting billions of training data points.