
297 points rntn | 2 comments
ankit219 ◴[] No.44608660[source]
Not just Meta: 40 EU companies urged the EU to postpone the rollout of the AI Act by two years due to its unclear nature. This code of practice is voluntary and goes beyond what is in the act itself. The EU published it in a way that suggests there would be less scrutiny if you voluntarily sign up for the code of practice. Meta would face scrutiny on all ends anyway, so it does not seem to be a plausible case for signing something voluntary.

One of the key aspects of the act is that a model provider is responsible if downstream partners misuse the model in any way. For open source, that's a very hard requirement[1].

> GPAI model providers need to establish reasonable copyright measures to mitigate the risk that a downstream system or application into which a model is integrated generates copyright-infringing outputs, including through avoiding overfitting of their GPAI model. Where a GPAI model is provided to another entity, providers are encouraged to make the conclusion or validity of the contractual provision of the model dependent upon a promise of that entity to take appropriate measures to avoid the repeated generation of output that is identical or recognisably similar to protected works.

[1] https://www.lw.com/en/insights/2024/11/european-commission-r...

replies(7): >>44610592 #>>44610641 #>>44610669 #>>44611112 #>>44612330 #>>44613357 #>>44617228 #
dmix ◴[] No.44610592[source]
Lovely when they try to regulate a burgeoning market before we have any idea what the market is going to look like in a couple years.
replies(8): >>44610676 #>>44610940 #>>44610948 #>>44611033 #>>44611210 #>>44611955 #>>44612758 #>>44614808 #
remram ◴[] No.44610676[source]
The whole point of regulating it is to shape what it will look like in a couple of years.
replies(8): >>44610764 #>>44610961 #>>44611052 #>>44611090 #>>44611379 #>>44611534 #>>44611915 #>>44613903 #
olalonde ◴[] No.44610961{3}[source]
You're both right, and that's exactly how early regulation often ends up stifling innovation. Trying to shape a market too soon tends to lock in assumptions that later prove wrong.
replies(2): >>44612297 #>>44613233 #
mycall ◴[] No.44612297{4}[source]
Depends what those assumptions are. If the goal is protecting humans from gross negligence by AI, then the assumptions are predetermined to side with ordinary humans (just one example). Let's hope logic and an understanding of the long-term situation precede the arguments in the rulesets.
replies(1): >>44612400 #
dmix ◴[] No.44612400{5}[source]
You're just guessing as much as anyone. Almost every generation in history has had doomers predicting the fall of their corner of civilization from some new thing: religious schisms, printing presses, radio, TV, advertisements, the internet, etc. You can look at some of the earliest writings by English priests in the 1500s predicting social decay and the destruction of society, which sound exactly like social media posts in 2025 about AI. We should at a minimum understand the problem space before restricting it, especially given that policy is extremely slow to change (see: copyright).
replies(1): >>44612608 #
1. esperent ◴[] No.44612608{6}[source]
I'd urge you to read a book like Black Swan, or study up on statistics.

Doomers have been wrong about completely different doom scenarios in the past (+), but that says nothing about this new scenario. If you're doing statistics in your head about it, you're wrong. We can't use scenarios from the past to make predictions about completely novel scenarios like thinking computers.

(+) although they were very close to being right about nuclear doom, and may well be right about climate change doom.

replies(1): >>44616602 #
2. rpdillon ◴[] No.44616602[source]
I'd like you to expand on your point about understanding statistics better. I think I have a very good understanding of statistics, but I don't see how it relates to your point.

Your point is fundamentally philosophical: that you can't use the past to predict the future. But that's actually a fairly reductive point in this context.

GP's point is that simply making an argument for why everything will fail is not sufficient to make it true. So we need to see something significantly more compelling than a bunch of arguments about why it's going to be really bad before we believe it, since we always get arguments about why things are really, really bad.