The EU AI Act’s phased implementation continues: from 2 August 2025, the AI Act’s rules on general-purpose AI (GPAI) will apply (becoming enforceable in respect of new GPAI models from August 2026, while providers of GPAI models already on the market have until August 2027 to comply). This milestone follows the recent publication of (non-binding) guidelines[1] developed by the European Commission to clarify the scope of the obligations on providers of GPAI models (the “GPAI Guidelines”), and the release of the final version of the General-Purpose AI Code of Practice.[2]
Recap of the Rules on GPAI Models
The AI Act adopts a tiered framework for GPAI models, distinguishing between GPAI models generally and those that pose “systemic risk” due to their capabilities, scale, or potential impact. For providers of all GPAI models, the Act imposes transparency and accountability measures. Providers must:
- Draw up and maintain technical documentation of the model, including for its training and testing processes and the results of any model evaluations.
- Disclose information and documentation to downstream providers of AI systems who intend to integrate the GPAI model into their AI systems.
- Put in place a copyright policy to comply with EU copyright law, including to identify and comply with opt-outs with respect to text and data mining as contemplated under the 2019 EU Copyright Directive[3].
- Draw up and make publicly available a summary of content used for training, according to a common minimal baseline template recently published by the AI Office (an illustrative sketch of the template’s structure follows this list).[4]
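For illustration only, the template’s three main sections (general information, list of data sources, and relevant data processing aspects; see footnote 4) could be mirrored in a simple data structure along the following lines. Every field name below is hypothetical rather than drawn from the template itself.

```python
# Hypothetical outline mirroring the template's three main sections; the
# field names are illustrative and are not taken from the template itself.
training_content_summary = {
    "general_information": {
        "provider": "ExampleCo",            # hypothetical provider
        "model_name": "example-model-v1",   # hypothetical model identifier
        "modalities": ["text", "image"],
    },
    "data_sources": [
        {"type": "publicly available web data", "description": "crawled web text"},
        {"type": "licensed datasets", "description": "third-party corpora"},
    ],
    "data_processing": {
        "tdm_opt_outs_respected": True,     # rights reservations honoured
        "unlawful_content_measures": "filtering applied before training",
    },
}
```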
In addition, providers of GPAI models that pose “systemic risk”[5] must evaluate and test such models (including through adversarial testing such as red teaming), assess and mitigate possible systemic risks arising from development or use of such models, and report to the AI Office and other relevant authorities any serious incident[6] involving such models. Providers must also apply adequate cybersecurity measures in respect of such models.
But what is a GPAI model?
GPAI models are defined broadly[7] under the AI Act based on certain characteristics, including that the model displays significant generality and is capable of competently performing a wide range of distinct tasks. The AI Act does not identify specific criteria against which potential providers can assess whether their model is a GPAI model.
However, the GPAI Guidelines indicate that the Commission will use the amount of computational resources used to train a model (which must be greater than 10²³ floating-point operations (FLOPs)), together with whether that model can generate language, text-to-image, or text-to-video outputs,[8] as an indicative criterion for this assessment – although the Commission’s approach may evolve in future to reflect the state of the art.
The GPAI Guidelines go on to explain that models trained to generate language are typically more capable of competently performing a wider range of tasks (because such models are able to use language to communicate, store knowledge and reason). Models that generate images and video may also be categorised as GPAI models if they enable flexible content generation to support a wide range of tasks.[9]
The definition of a GPAI model in the AI Act is, however, still important: even if a model meets the Commission’s indicative criterion, it will not be a GPAI model if it does not display significant generality or can competently perform only a narrow set of tasks (and vice versa).[10]
The GPAI Guidelines also elaborate on the circumstances in which a GPAI model will be classified as a model that has “high-impact capabilities” and, accordingly, poses “systemic risk”, and how providers can contest such classification (including how to rebut the presumption under the AI Act that a model has high-impact capabilities if its cumulative training compute exceeds 10²⁵ FLOPs[11]).
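To make the two compute thresholds concrete, the sketch below restates them in code. This is an illustration only: the function names and structure are our own, and only the numeric thresholds (10²³ and 10²⁵ FLOPs) and the rebuttable nature of the Article 51(2) presumption come from the AI Act and the GPAI Guidelines.

```python
# Illustrative only: function names and structure are hypothetical; the
# thresholds come from the GPAI Guidelines and Article 51(2) of the AI Act.

INDICATIVE_GPAI_THRESHOLD_FLOPS = 1e23   # indicative criterion for GPAI status
SYSTEMIC_RISK_PRESUMPTION_FLOPS = 1e25   # presumption of high-impact capabilities


def meets_indicative_criterion(training_flops: float,
                               generates_text_image_or_video: bool) -> bool:
    """First-pass screen only: a model meeting this criterion may still fall
    outside the GPAI definition if it lacks significant generality, and a
    model below the threshold may still qualify."""
    return (training_flops > INDICATIVE_GPAI_THRESHOLD_FLOPS
            and generates_text_image_or_video)


def presumed_high_impact(cumulative_training_flops: float) -> bool:
    """Rebuttable presumption under Article 51(2) of the AI Act."""
    return cumulative_training_flops > SYSTEMIC_RISK_PRESUMPTION_FLOPS
```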
Who is a provider of a GPAI model?
The Commission provides guidance in the GPAI Guidelines as to who will qualify as a “provider”[12] of a GPAI model under the AI Act. The GPAI Guidelines consider the roles and responsibilities of different actors in the AI value chain and discuss when each of those actors should be considered the provider of the GPAI model.
In particular, the GPAI Guidelines examine the following scenarios:
Downstream modification of a GPAI model
If a downstream actor fine-tunes or modifies an existing GPAI model in a way that leads to a significant change in the model’s generality, capabilities or systemic risk, then the Commission will consider that downstream modifier to be the provider of the modified GPAI model.[13]
The Commission will assume that there has been a change of this type when the training compute used for the relevant modification is more than a third of the training compute of the original model (or, if the downstream modifier does not know this value, the threshold applied will be a third of 10²⁵ FLOPs for GPAI models with systemic risk and a third of 10²³ FLOPs for other GPAI models). Importantly, the Commission notes that it is “not necessary” for every modification of a GPAI model to lead to the downstream modifier being considered the provider of the modified GPAI model.[14]
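The one-third assumption above might be expressed as follows. This is a minimal sketch under the stated assumptions, with hypothetical names throughout.

```python
# Hypothetical helper illustrating the Commission's one-third assumption for
# downstream modifications described above; the names are our own.
from typing import Optional


def modification_threshold_flops(original_training_flops: Optional[float],
                                 original_has_systemic_risk: bool) -> float:
    """Training compute for a modification above which the downstream
    modifier is assumed to become the provider of the modified model."""
    if original_training_flops is not None:
        return original_training_flops / 3
    # Fallback where the original model's training compute is unknown:
    return (1e25 if original_has_systemic_risk else 1e23) / 3
```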
GPAI models integrated into AI systems by downstream actors
When an upstream actor develops a GPAI model and makes it available for the first time to a downstream actor on the EU market, that upstream actor is considered the provider of the model (and the downstream actor is likely to be the provider of the AI system into which the GPAI model is integrated).[15]
The GPAI Guidelines also consider the scenario where an upstream actor develops a GPAI model and makes it available for the first time to a downstream actor outside the EU market, and that downstream actor integrates the model into an AI system and places the AI system on the EU market. In that case, the upstream actor will still be considered the provider of the model, unless it has clearly excluded the distribution and use of the model on the EU market (in which case the downstream actor will be the provider of the model for the purposes of the AI Act).[16]
Collaborative or consortium development
Where a GPAI model is developed by or for a collaboration or consortium, the provider of the model will usually be the coordinator of the collaboration or consortium (or the collaboration or consortium itself, depending on the specific case).[17]
Could an exemption apply?
Providers of AI models released under a “free and open-source licence” are exempt from certain GPAI requirements under the AI Act, although these exemptions are not available for GPAI models with systemic risk.[18]
The GPAI Guidelines outline the conditions that must be fulfilled for these exemptions to apply – these are:
Licence Conditions
The model must be released under a licence that allows users to obtain the model without payment or restriction (save for reasonable safety and security measures, such as user verification) and to use the model for any purpose. Users must also be granted unrestricted rights to adapt or fine-tune the model, and to redistribute the model (or any modified version of it).[19]
It is worth noting that the GPAI Guidelines contemplate that licensors may include specific, safety-orientated terms that restrict usage in high-risk applications or domains, provided these are proportionate and based on objective criteria.
Lack of Monetisation
No monetary compensation should be required for access, use, modification, and distribution of the AI model. The Commission provides various examples of forms of monetisation in this context. These include dual licensing models, bundling the model licence with a mandatory service package (e.g., for support and maintenance), and means of access that create vendor lock-in (e.g., where the model is exclusively hosted by the provider on a platform that requires users to pay for access).[20]
Public Availability of Parameters
Finally, the GPAI Guidelines elaborate on the requirement in the AI Act that the model’s parameters (including the weights), information on its architecture, and information on model usage must be made publicly available.[21] The GPAI Guidelines indicate that this information should include, as a minimum, information about the model’s input and output modalities, capabilities, and limitations, as well as the technical means (e.g. instructions for use, infrastructure, tools) required for the model to be integrated into AI systems.[22]
How are the rules likely to be enforced?
The AI Office is tasked with supervising and enforcing the obligations on GPAI model providers under the AI Act.[23] The GPAI Guidelines include commentary on the scope of this mandate and the Commission’s expectations as regards the level of engagement of GPAI model providers with the AI Office.[24] Notably, the AI Office appears to expect informal cooperation and active engagement from GPAI model providers, including proactive reporting on compliance measures, beginning as early as the training phase for relevant models.
This is also where the GPAI Code of Practice comes in.[25] Once the code is endorsed by Member States and the Commission, GPAI model providers who elect to sign up may use adherence to the code as a means of demonstrating compliance with the AI Act (although adherence will not give rise to a presumption of conformity).
The final version of the GPAI Code of Practice addresses transparency, copyright and safety and security – at a high level, the requirements in the code commit signatories to the following:
- Transparency: preparation, maintenance and retention, and provision to relevant stakeholders (including the AI Office, national competent authorities and downstream providers), of model documentation. Broadly, this includes all information in the Model Documentation Form.
- Copyright: the preparation and implementation of a copyright policy to comply with EU copyright law, ensuring that training data is lawfully accessible and not obtained by circumventing technological protection measures (such as paywalls), excluding persistently infringing websites from web-crawling, and compliance with state-of-the-art rights reservations by copyright holders (including by ensuring web-crawlers read and follow robots.txt protocols; a minimal example of such a check follows this list). Signatories are encouraged to make a summary of the copyright policy publicly available.
- Safety and Security: the adoption of a robust safety and security framework, including continuous risk assessment throughout the model’s lifecycle, developing and deploying risk mitigation measures and security controls, external evaluations to facilitate post-market monitoring, and reporting of serious incidents to the AI Office and national competent authorities.
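On the robots.txt point flagged in the Copyright commitment above, the following is a minimal sketch, assuming a Python-based crawler; a real compliance pipeline would also need to handle other machine-readable rights reservations.

```python
# Minimal robots.txt check using the Python standard library; the
# user-agent string and function name are hypothetical examples.
from urllib.parse import urlparse
from urllib.robotparser import RobotFileParser


def allowed_to_crawl(url: str, user_agent: str = "example-training-crawler") -> bool:
    """Return True only if the site's robots.txt permits fetching this URL."""
    parsed = urlparse(url)
    parser = RobotFileParser()
    parser.set_url(f"{parsed.scheme}://{parsed.netloc}/robots.txt")
    parser.read()  # fetches and parses robots.txt (network call)
    return parser.can_fetch(user_agent, url)
```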
Signing up to the code is voluntary – providers of GPAI models can still demonstrate compliance in other ways. The GPAI Guidelines do, however, suggest that adherence to the code – assuming it is assessed by the Commission and Member States as adequate – is a ‘straightforward way of demonstrating compliance’ and signatories will ‘benefit from increased trust from the Commission and other stakeholders’.[26]
Earlier this summer, there had been speculation that implementation of the AI Act’s rules on GPAI might be delayed as industry awaited Commission guidance on key related concepts. The GPAI Guidelines and the final GPAI Code of Practice were subsequently published in quick succession, arriving just ahead of the deadline. It remains to be seen whether the guidance is sufficiently detailed (and was delivered early enough) to meaningfully assist operators in the AI value chain in interpreting the rules.
[1] See the full text of the Guidelines for providers of general-purpose AI models here.
[2] See the full text of the GPAI Code of Practice here.
[3] Directive (EU) 2019/790 of the European Parliament and of the Council of 17 April 2019 on copyright and related rights in the Digital Single Market; full text here.
[4] The template consists of three main sections: general information, list of data sources, and relevant data processing aspects. According to the explanatory notice published alongside the template (see full text here), the summary should cover data used in all stages of the model training, from pre-training to post-training, including model alignment and fine-tuning, covering all sources and types of data. However, other input data used during the model’s operation (e.g. through retrieval augmented generation) are not required to be recorded in the template, unless the model actively learns from this input data.
[5] i.e., a risk that is specific to the high-impact capabilities of GPAI models, having a significant impact on the Union market due to their reach, or due to actual or reasonably foreseeable negative effects on public health, safety, public security, fundamental rights, or the society as a whole, that can be propagated at scale across the value chain (Article 3(65) of the EU AI Act).
[6] i.e., an incident or malfunctioning of an AI system that directly or indirectly leads to any of the following: (a) the death of a person or serious harm to a person’s health, (b) a serious and irreversible disruption of the management or operation of critical infrastructure, (c) the infringement of obligations under Union law intended to protect fundamental rights and (d) serious harm to property or the environment (Article 3(49) of the EU AI Act).
[7] Article 3(63) of the EU AI Act.
[8] GPAI Guidelines, para. 17.
[9] GPAI Guidelines, para. 19.
[10] GPAI Guidelines, para. 20.
[11] Article 51(2) of the EU AI Act.
[12] Article 3(3) of the EU AI Act defines a “provider” as a natural or legal person, public authority, agency or other body that develops an AI system or a general-purpose AI model or that has an AI system or a general-purpose AI model developed and places it on the market or puts the AI system into service under its own name or trademark, whether for payment or free of charge.
[13] GPAI Guidelines, para. 62.
[14] GPAI Guidelines, para. 61.
[15] GPAI Guidelines, para. 57.
[16] GPAI Guidelines, paras. 58-59.
[17] GPAI Guidelines, para. 51.
[18] Articles 53(2) and 54(6) of the EU AI Act.
[19] GPAI Guidelines, paras. 76-84.
[20] GPAI Guidelines, paras. 85-89.
[21] Articles 53(2) and 54(6) of the EU AI Act.
[22] GPAI Guidelines, paras. 90-92.
[23] Articles 88 and 89 of the EU AI Act.
[24] GPAI Guidelines, paras. 101-108.
[25] See the full text of the GPAI Code of Practice here.
[26] GPAI Guidelines, para. 94.
