> One of the key aspects of the act is how a model provider is responsible if the downstream partners misuse it in any way
AFAICT the actual text of the act[0] does not mention anything like that. The closest to what you describe is part of the chapter on copyright in the Code of Practice[1]; however, the code does not add any new requirements to the act (it is not even part of the act itself). What it does is present one way (not necessarily the only one) to comply with the act's requirements. As a relevant example, the act requires providers to respect machine-readable opt-out mechanisms when training but doesn't specify which ones, while the code of practice explicitly mentions respecting robots.txt during web scraping.
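To illustrate what respecting that opt-out can look like in practice, here is a minimal sketch (my own illustration, not something the code of practice prescribes; the crawler name is made up) of a scraper checking robots.txt with Python's standard urllib.robotparser before fetching a page:

    from urllib import robotparser, request
    from urllib.parse import urlparse

    USER_AGENT = "ExampleTrainingDataBot"  # hypothetical crawler name, purely illustrative

    def fetch_if_allowed(url: str) -> str | None:
        """Fetch a page only if the site's robots.txt allows our user agent."""
        parsed = urlparse(url)
        rp = robotparser.RobotFileParser()
        rp.set_url(f"{parsed.scheme}://{parsed.netloc}/robots.txt")
        rp.read()  # download and parse the site's robots.txt

        if not rp.can_fetch(USER_AGENT, url):
            return None  # opt-out respected: do not scrape this page

        req = request.Request(url, headers={"User-Agent": USER_AGENT})
        with request.urlopen(req, timeout=10) as resp:
            return resp.read().decode("utf-8", errors="replace")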
The part of the code about copyright-infringing outputs is actually this (Measure 1.4):
> (1) In order to mitigate the risk that a downstream AI system, into which a general-purpose AI model is integrated, generates output that may infringe rights in works or other subject matter protected by Union law on copyright or related rights, Signatories commit:
> a) to implement appropriate and proportionate technical safeguards to prevent their models from generating outputs that reproduce training content protected by Union law on copyright and related rights in an infringing manner, and
> b) to prohibit copyright-infringing uses of a model in their acceptable use policy, terms and conditions, or other equivalent documents, or in case of general-purpose AI models released under free and open source licenses to alert users to the prohibition of copyright infringing uses of the model in the documentation accompanying the model without prejudice to the free and open source nature of the license.
> (2) This Measure applies irrespective of whether a Signatory vertically integrates the model into its own AI system(s) or whether the model is provided to another entity based on contractual relations.
Keep in mind that "Signatories" here means whoever signed the Code of Practice: obviously, if I make my own AI model and do not sign that code of practice myself (but still follow the act's requirements), someone picking up my AI model and signing the Code of Practice themselves doesn't obligate me to follow it too. That'd be like someone releasing a plugin for Photoshop under the GPL and then demanding Adobe release Photoshop's source code.
As for open-source models, "(1b)" above is quite clear: for open-source models that want to use this code of practice (which they do not have to!), all they have to do is mention in their documentation that users should not generate copyright-infringing content with them.
In fact, the act has a lot of exceptions for open-source models. AFAIK Meta's beef with the act is that the EU AI Office (or whatever it is called, I do not remember) does not recognize Meta's models as open source, so they do not get to benefit from those exceptions, though I'm not sure about the details here.
[0] https://eur-lex.europa.eu/legal-content/EN/TXT/HTML/?uri=OJ:...
[1] https://ec.europa.eu/newsroom/dae/redirection/document/11811...