
324 points rntn | 5 comments
ankit219 ◴[] No.44608660[source]
Not just Meta: 40 EU companies urged the EU to postpone the rollout of the AI Act by two years due to its unclear nature. This code of practice is voluntary and goes beyond what is in the act itself. The EU published it in a way that suggests there would be less scrutiny if you voluntarily sign up for the code of practice. Meta would face scrutiny on all ends anyway, so it does not seem plausible for it to sign something voluntary.

One of the key aspects of the act is how a model provider is responsible if the downstream partners misuse it in any way. For open source, it's a very hard requirement[1].

> GPAI model providers need to establish reasonable copyright measures to mitigate the risk that a downstream system or application into which a model is integrated generates copyright-infringing outputs, including through avoiding overfitting of their GPAI model. Where a GPAI model is provided to another entity, providers are encouraged to make the conclusion or validity of the contractual provision of the model dependent upon a promise of that entity to take appropriate measures to avoid the repeated generation of output that is identical or recognisably similar to protected works.

[1] https://www.lw.com/en/insights/2024/11/european-commission-r...

replies(8): >>44610592 #>>44610641 #>>44610669 #>>44611112 #>>44612330 #>>44613357 #>>44617228 #>>44620292 #
t0mas88 ◴[] No.44610641[source]
Sounds like a reasonable guideline to me. Even for open source models, you can add a license term that requires users of the open source model to take "appropriate measures to avoid the repeated generation of output that is identical or recognisably similar to protected works"

This is European law, not US. Reasonable means reasonable and judges here are expected to weigh each side's interests and come to a conclusion. Not just a literal interpretation of the law.

replies(4): >>44613578 #>>44614324 #>>44614949 #>>44615016 #
deanc ◴[] No.44613578[source]
Except that it’s seemingly impossible to protect against prompt injection. The cat is out of the bag. Much like a lot of other legislation (e.g. the cookie law, or being responsible for user-generated content when millions of items are posted per day), it’s entirely impractical, albeit well-meaning.
replies(1): >>44613667 #
lcnielsen ◴[] No.44613667[source]
I don't think the cookie law is that impractical? It's easy to comply with by just not storing non-essential user information. It would have been completely nondisruptive if platforms agreed to respect users' defaults via browser settings, and then converged on a common config interface.

It was made impractical by ad platforms and others who decided to use dark patterns, FUD and malicious compliance to deceive users into agreeing to be tracked.
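A minimal sketch of the "respect browser defaults" idea described above, assuming the Global Privacy Control header (`Sec-GPC`, one real-world attempt at such a common interface) as the signal; the handler and cookie name here are hypothetical:

```python
# Hypothetical server that honors a browser-level privacy default:
# it only sets a non-essential tracking cookie when the request does
# NOT carry the Sec-GPC: 1 opt-out header.
from http.server import BaseHTTPRequestHandler, HTTPServer

def wants_tracking(headers) -> bool:
    # "Sec-GPC: 1" signals the user's browser-wide opt-out.
    return headers.get("Sec-GPC") != "1"

class Handler(BaseHTTPRequestHandler):
    def do_GET(self):
        self.send_response(200)
        if wants_tracking(self.headers):
            # Non-essential cookie; skipped for opted-out users.
            self.send_header("Set-Cookie", "analytics_id=abc123")
        self.end_headers()
        self.wfile.write(b"ok")

# HTTPServer(("", 8080), Handler).serve_forever()  # run the sketch
```

Had platforms converged on something like this, no per-site consent banner would be needed for users whose browser already expresses a preference.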

replies(3): >>44613785 #>>44613896 #>>44613989 #
jonathanlydall ◴[] No.44613896[source]
I recently received an email[0] from a UK entity with an enormous wall of text talking about processing of personal information, my rights and how there is a “Contact Card” of my details on their website.

But with a little bit of reading, one could ultimately summarise the enormous wall of text simply as: “We’ve added your email address to a marketing list, click here to opt out.”

The huge wall-of-text email was designed to confuse and obfuscate as much as possible while still letting them claim they weren’t breaking personal data protection laws.

[0]: https://imgur.com/a/aN4wiVp

replies(1): >>44614190 #
1. tester756 ◴[] No.44614190[source]
>The huge wall of text email was designed to confuse and obfuscate as much as possible with

It is pretty clear

replies(1): >>44614293 #
2. johnisgood ◴[] No.44614293[source]
Only if you read it. Most people do not read it, same with ToSes.
replies(1): >>44614671 #
3. octopoc ◴[] No.44614671[source]
If you ask someone if they killed your dog and they respond with a wall of text, then you’re immediately suspicious. You don’t even have to read it all.

The same is true of privacy policies. I’ve seen some companies have very short policies I could read in less than 30s, those companies are not suspicious.

replies(2): >>44615333 #>>44617435 #
4. 1718627440 ◴[] No.44615333{3}[source]
That's true because of EU privacy regulation: it makes companies write a wall of text before doing something suspicious.
5. johnisgood ◴[] No.44617435{3}[source]
I do not disagree. It could indeed be made shorter than usual, especially if you are not malicious.