https://artificialintelligenceact.eu/introduction-to-code-of...
It’s certainly onerous. I don’t see how it helps anyone except for big copyright holders, lawyers and bureaucrats.
Essentially, the goal is to establish a series of thresholds that result in significantly more complex and onerous compliance requirements, for example when a model is trained past a certain scale.
Burgeoning EU companies would be reluctant to cross any one of those thresholds and have to deal with sharply increased regulatory risks.
On the other hand, large corporations in the US or China are currently benefiting from a Darwinian ecosystem at home that allows them to evolve their frontier models at breakneck speed.
Those non-EU companies will then be able to enter the EU market with far more polished AI-based products and far deeper pockets to face any regulations.
My issue with this is that it doesn't look like America's laissez-faire stance on these issues has helped Americans much. Internet companies have gotten absolutely humongous and have given rise to a new class of techno-oligarchs who are now funding anti-democracy campaigns.
I feel like getting slightly less performant models is a fair price to pay for increased scrutiny of these powerful private actors.
If Europe wants leverage, the better plan is to tell ASML to cut off the supply of chipmaking equipment.