
493 points by todsacerdoti | 1 comment
wyldfire ◴[] No.44382903[source]
I understand where this comes from, but I think it's a mistake. I agree it would be nice if there were "well settled law" regarding AI and copyright, but there are relatively few rulings and next to zero legislation on which to base their position.

In addition to a policy of rejecting contributions from AI, I think it may make sense to point out places where AI-generated content can be used. For example: how much of the QEMU project's (copious) CI setup is really content that's critical to protect? What about ever-more interesting test cases or environments that could be enabled? Something like "contribute those things here instead, and make judicious use of AI there, with these kinds of guard rails..."
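To make that concrete, here is a minimal sketch of what one such guard rail might look like: a pre-merge check that only accepts commits declared as AI-assisted when they are confined to low-risk areas like CI config and tests. The directory names, the "Generated-by:" trailer, and the overall policy are hypothetical illustrations, not anything QEMU actually does.

    #!/usr/bin/env python3
    # Hypothetical pre-merge check: commits declared as AI-assisted are only
    # accepted when they touch files under "low-risk" directories such as CI
    # config and test cases. All names below are illustrative, not QEMU policy.
    import subprocess
    import sys

    ALLOWED_PREFIXES = (".gitlab-ci.d/", "tests/")   # hypothetical low-risk areas
    TRAILER = "Generated-by:"                        # hypothetical disclosure trailer


    def commit_message(rev: str) -> str:
        # Full commit message body for the given revision.
        return subprocess.run(
            ["git", "log", "-1", "--format=%B", rev],
            capture_output=True, text=True, check=True,
        ).stdout


    def changed_files(rev: str) -> list[str]:
        # Paths touched by the commit.
        out = subprocess.run(
            ["git", "diff-tree", "--no-commit-id", "--name-only", "-r", rev],
            capture_output=True, text=True, check=True,
        ).stdout
        return [line for line in out.splitlines() if line]


    def main(rev: str = "HEAD") -> int:
        if TRAILER not in commit_message(rev):
            return 0  # not declared as AI-assisted; normal review rules apply
        outside = [f for f in changed_files(rev)
                   if not f.startswith(ALLOWED_PREFIXES)]
        if outside:
            print(f"{rev}: AI-assisted commit touches restricted paths:")
            for f in outside:
                print(f"  {f}")
            return 1
        return 0


    if __name__ == "__main__":
        sys.exit(main(sys.argv[1] if len(sys.argv) > 1 else "HEAD"))

Run against a single revision (e.g. in CI for each commit in a series), it rejects disclosed AI-assisted commits that stray outside the whitelisted directories while leaving everything else to normal human review.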

replies(5): >>44382957 #>>44382958 #>>44383166 #>>44383312 #>>44383370 #
1. hinterlands ◴[] No.44383370[source]
I think you need to read between the lines here. Anything you do carries legal risk, but this particular risk seems acceptable to many of the world's largest and richest companies. QEMU isn't special, so if they're taking this position, it's most likely because they don't want to deal with LLM-generated code for some other reason, and they're happy to use legal risk as cover to avoid endless arguments on mailing lists.

We do that in corporate environments too. "I don't like this" -> "let me see what lawyers say" -> "a-ha, you can't do it because legal says it's a risk".