
491 points | todsacerdoti | 1 comment
wyldfire No.44382903
I understand where this comes from, but I think it's a mistake. I agree it would be nice if there were "well settled law" regarding AI and copyright, but there are probably relatively few rulings and next to zero legislation on which to base a position.

In addition to a policy of rejecting contributions from AI, I think it may make sense to point out places where AI-generated content can be used. For example, how much of the QEMU project's (copious) CI setup is really content that's critical to protect? What about ever-more interesting test cases or environments that could be enabled? Something like "contribute those things here instead, and make judicious use of AI there, with these kinds of guard rails..."
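
To make the guard-rail idea concrete, here's a minimal sketch of what such a CI check could look like: it rejects commits that disclose AI assistance when they touch core code paths, while leaving CI and test directories open to it. The path prefixes and the "AI-Assisted:" commit trailer are invented for illustration; QEMU has no such convention today.

    #!/usr/bin/env python3
    # Hypothetical guard-rail check (illustration only): commits that
    # disclose AI assistance via an "AI-Assisted:" trailer may touch
    # CI/test paths but not core code. Paths and trailer are invented.
    import subprocess
    import sys

    PROTECTED = ("hw/", "target/", "accel/")  # core code: human-only
    # Everything else (e.g. .gitlab-ci.d/, tests/) tolerates disclosed AI use.

    def changed_files(commit):
        out = subprocess.check_output(
            ["git", "show", "--name-only", "--pretty=format:", commit],
            text=True)
        return [f for f in out.splitlines() if f]

    def is_ai_assisted(commit):
        trailers = subprocess.check_output(
            ["git", "log", "-1", "--format=%(trailers)", commit],
            text=True)
        return "AI-Assisted:" in trailers

    def check(commit):
        if not is_ai_assisted(commit):
            return True
        bad = [f for f in changed_files(commit) if f.startswith(PROTECTED)]
        for f in bad:
            print(f"{commit}: AI-assisted commit touches protected path {f}")
        return not bad

    if __name__ == "__main__":
        results = [check(c) for c in sys.argv[1:]]
        sys.exit(0 if all(results) else 1)

A CI job would run it over the commits in a series, e.g. python3 ai_policy_check.py $(git rev-list origin/master..HEAD). The point isn't this exact mechanism, just that "where is AI acceptable" can be made machine-checkable rather than left to reviewer vibes.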

replies(5): >>44382957 #>>44382958 #>>44383166 #>>44383312 #>>44383370 #
dclowd9901 No.44382957
What's the risk of not doing this? Better code but slower velocity for an open source project?

I think that particular brand of risk makes sense for this particular project, and the authors don't seem particularly negative toward GenAI as a concept; they just seem wary of going through a "one-way door" with it.

replies(1): >>44384090 #
1. mrheosuper No.44384090
>Better code but slower velocity for an open source project

Better code and "AI-assisted coding" are not mutually exclusive.