
324 points | rntn | 1 comment
ankit219 ◴[] No.44608660[source]
Not just Meta: 40 EU companies urged the EU to postpone the rollout of the AI Act by two years due to its unclear nature. This code of practice is voluntary and goes beyond what is in the act itself. The EU published it in a way that suggests there would be less scrutiny if you voluntarily sign up for the code of practice. Meta would face scrutiny on all ends anyway, so it does not seem to be a plausible case to sign something voluntary.

One of the key aspects of the act is that a model provider is responsible if downstream partners misuse the model in any way. For open source, that is a very hard requirement to meet[1].

> GPAI model providers need to establish reasonable copyright measures to mitigate the risk that a downstream system or application into which a model is integrated generates copyright-infringing outputs, including through avoiding overfitting of their GPAI model. Where a GPAI model is provided to another entity, providers are encouraged to make the conclusion or validity of the contractual provision of the model dependent upon a promise of that entity to take appropriate measures to avoid the repeated generation of output that is identical or recognisably similar to protected works.

[1] https://www.lw.com/en/insights/2024/11/european-commission-r...

replies(8): >>44610592 #>>44610641 #>>44610669 #>>44611112 #>>44612330 #>>44613357 #>>44617228 #>>44620292 #
m3sta ◴[] No.44612330[source]
The quoted text makes sense when you understand that the EU provides a carve-out for training on copyright-protected works without a license. It's quite an elegant balance they've suggested, despite the challenges it fails to avoid.
replies(1): >>44613883 #
Oras ◴[] No.44613883[source]
Is that true? How can they decide to wipe out the intellectual property of an individual or entity? It's not theirs to give away.
replies(3): >>44613962 #>>44614016 #>>44616465 #
elsjaako ◴[] No.44613962[source]
Copyright is not a God-given right. It's an economic incentive created by government to make desired behavior (writing and publishing books) profitable.
replies(3): >>44614270 #>>44616163 #>>44617440 #
klabb3 ◴[] No.44616163[source]
Yes, 100%. And that's why throwing copyright selectively in the bin now, when there's an ongoing massive transfer of wealth from creators to mega corps, is so surprising. It's almost as if governments were only protecting the economic interests of creators when the creators were powerful (e.g. movie studios), going after individuals for piracy and DRM circumvention. Now that the mega corps are the ones pirating, at scale, they get a free pass through a loophole designed for individuals (fair use).

Anyway, the show must go on, so we're unlikely to see any reversal of this. It's a big experiment and not necessarily anything that will benefit even the model providers themselves in the medium term. It's clear that the "free for all" policy on grabbing whatever data you can get is already having chilling effects: from artists and authors not publishing their works publicly, to the locking down of the open web with anti-scraping. We're basically entering an era of adversarial data management, with incentives to exploit others for data while protecting the data you have from others accessing it.

replies(4): >>44616552 #>>44616611 #>>44616704 #>>44617293 #
ramses0 ◴[] No.44616552[source]
You've put into words what I've been internally struggling to voice. Information (on the web) is a gas: it expands once it escapes.

In limited, closed systems it may not escape, but all it takes is one bad (or hacked) actor and its privacy is gone.

In a way, we used to be "protected" because it was "too big" to process, store, or access "everything".

Now, especially with an economic incentive to vacuum up literally all digital information, and with many works being "digital first" (even a word processor vs a typewriter, or a PDF sent to a printer instead of lithographed metal plates)... is this the information Armageddon?