European Commission publishes guidelines on obligations for general-purpose AI models under the EU AI Act

The European Commission (Commission) adopted its long-anticipated guidelines on the scope of obligations for general-purpose artificial intelligence (GPAI) models under Regulation (EU) 2024/1689 (AI Act) (Guidelines) on July 18, 2025. The Guidelines closely follow the publication of the Commission’s and AI Office’s GPAI Code of Practice (Code), which outlines several measures that providers of GPAI models can take to comply with their obligations under the AI Act. Further information on the Code, including key provisions for organizations, is outlined in DLA Piper’s summary of the Code.

What are the Guidelines?

The Guidelines are part of a broader package of guidance tied to the obligations of GPAI model providers that entered into application on August 2, 2025. The Guidelines outline the Commission’s interpretation of the obligations for GPAI model providers, set out in Chapter V of the AI Act. While they are non-binding, the Guidelines provide valuable insight into the likely approach that national regulators will take in interpreting and enforcing the provisions of the AI Act.

The Guidelines provide clarity to operators, including providers, downstream providers, and deployers, across several key considerations in the application of the AI Act’s GPAI model obligations. These include:

  • What constitutes a GPAI model
  • When a GPAI model poses a systemic risk
  • When a modification converts a downstream party into a GPAI model provider
  • Nuances regarding exceptions for open-source models
  • Enforcement and transition periods

These key elements are outlined in further detail below.

It is important to note that while the Guidelines offer valuable direction to providers and downstream parties (such as organizations integrating models into their AI systems), ambiguity remains, particularly with respect to enforcement timeframes and grandfathering provisions. Therefore, organizations are encouraged to carefully assess their models’ compliance status against additional insights provided by the Commission and the AI Office.

Definition and scope of GPAI models

A key component of the Guidelines is the further detail they provide on what is considered a GPAI model.

Article 3(63) of the AI Act defines a GPAI model as

an AI model, including where such an AI model is trained with a large amount of data using self-supervision at scale, that displays significant generality and is capable of competently performing a wide range of distinct tasks regardless of the way the model is placed on the market and that can be integrated into a variety of downstream systems or applications, except AI models that are used for research, development or prototyping activities before they are placed on the market.

On publication of the final text of the AI Act, this definition drew heavy criticism for relying on vague characteristics that could be broadly interpreted.

The Guidelines note that it is not feasible to provide a precise list of characteristics that would clearly indicate when a model meets this definition. Instead, they adopt a narrower interpretation, focusing on whether the model:

  • Is trained with at least 10²³ FLOPs (floating-point operations)
  • Can generate language (text or audio), text-to-image, or text-to-video outputs, and
  • Is capable of competently performing a wide range of distinct tasks.

The Guidelines concede, however, that models below this threshold may still qualify as GPAI models, and vice versa, depending on whether they demonstrate “significant generality,” which remains undefined. Illustrative examples are given of models likely to fall outside the scope, such as models built for narrowly defined tasks (eg, weather modeling). While this vagueness is likely deliberate, to accommodate rapid developments in the technology, it perpetuates the uncertainty around what constitutes a GPAI model and requires organizations to closely assess the capabilities of their models for compliance purposes.

The Guidelines emphasize the AI Act’s intent to classify GPAI models based on their general applicability and broad capabilities. However, the Commission envisages that these criteria may evolve over time to keep pace with the development of the technology, as has already been demonstrated: earlier draft guidelines on GPAI models proposed a lower compute threshold of 10²² FLOPs.

Transformations of models trained with more than 10²³ FLOPs, such as distilled or quantized versions, must take into account the FLOPs used to create the original model, not just the compute expended on the distillation or quantization itself. This means that smaller or more computationally efficient variants of larger models will still be considered GPAI models if the larger models from which they were derived met the relevant test, as the sketch below illustrates.
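To illustrate how cumulative compute accounting might work, the following is a minimal Python sketch. The 6 × parameters × training tokens approximation for training FLOPs is a common rule of thumb from the machine learning scaling literature, not a methodology prescribed by the Guidelines, and all figures are hypothetical.

```python
# Minimal sketch of cumulative compute accounting for a distilled variant.
# The 6 * N * D estimate of training FLOPs is a common rule of thumb from
# the ML scaling literature, not a Commission methodology; all figures
# below are hypothetical.

GPAI_THRESHOLD_FLOPS = 1e23  # indicative classification threshold


def estimated_training_flops(parameters: float, training_tokens: float) -> float:
    """Rough training-compute estimate: ~6 FLOPs per parameter per token."""
    return 6 * parameters * training_tokens


# Hypothetical base model: 8B parameters trained on 2.5 trillion tokens.
base_flops = estimated_training_flops(8e9, 2.5e12)  # ~1.2e23 FLOPs

# Distilled variant: per the Guidelines, the compute used to create the
# original model counts towards the variant's cumulative total.
distilled_cumulative = base_flops + 5e21

print(f"Base model:        {base_flops:.2e} FLOPs")
print(f"Distilled variant: {distilled_cumulative:.2e} FLOPs (cumulative)")
print("Meets compute indicator:", distilled_cumulative >= GPAI_THRESHOLD_FLOPS)
```

Note that the compute figure is only an indicator: as discussed above, a model below the threshold may still display the requisite “significant generality,” and one above it may not.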

GPAI models with systemic risk

The Guidelines also clarify the criteria under which a GPAI model is classified as posing systemic risk.

Article 51(1) of the AI Act states that a GPAI model poses a systemic risk if it

  • Has certain “high-impact capabilities,” or
  • Is designated as a GPAI model with systemic risk by the Commission.

High-impact capabilities are presumed if the GPAI model’s cumulative training compute exceeds 10²⁵ FLOPs.
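As a simple illustration of how the two compute indicators interact, the hypothetical Python sketch below performs a first-pass classification based on cumulative training compute alone (the function and figures are illustrative and not drawn from the Guidelines):

```python
GPAI_THRESHOLD_FLOPS = 1e23           # indicative GPAI classification threshold
SYSTEMIC_RISK_THRESHOLD_FLOPS = 1e25  # presumption of high-impact capabilities


def classify_by_compute(cumulative_training_flops: float) -> str:
    """Illustrative first-pass classification on compute alone.

    Compute is only an indicator: actual generality and capabilities
    govern classification, and the systemic risk presumption is
    rebuttable with sufficient evidence.
    """
    if cumulative_training_flops >= SYSTEMIC_RISK_THRESHOLD_FLOPS:
        return "GPAI model presumed to pose systemic risk (rebuttable)"
    if cumulative_training_flops >= GPAI_THRESHOLD_FLOPS:
        return "Indicatively a GPAI model"
    return "Indicatively outside scope, absent significant generality"


print(classify_by_compute(3e25))  # hypothetical frontier-scale training run
print(classify_by_compute(5e23))  # hypothetical mid-scale model
```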

The Guidelines outline that organizations may contest the systemic risk classification if they demonstrate, based on the model’s capabilities, that the model does not present a systemic risk despite meeting the compute threshold. Importantly, the presumption is rebutted only where an organization clearly demonstrates that the GPAI model does not pose a systemic risk, which can be difficult: it may require showing that various potential emergent properties have not manifested, and will not manifest, in the resulting model. It is not sufficient to demonstrate that a systemic risk is mitigated through appropriate measures, although such measures may form part of the risk mitigation plan for the model outlined under Article 55(1) of the AI Act.

The criteria for rebutting the presumption are qualitative and discretionary, and the burden of providing sufficient evidence rests with the organization seeking to rebut it. The Commission retains broad interpretative leeway in determining whether a model’s capabilities match or exceed those of “the most advanced models” – itself a standard left deliberately broad on the basis that it is expected to evolve.

Downstream modifications

The Guidelines also attempt to clarify when a modification is significant enough that it results in a new model, making the modifying organization a GPAI model provider in its own right.

Downstream actors who modify a GPAI model in a way that leads to a “significant change” in the model’s generality, capabilities, or systemic risk may be considered providers of a separate model. The Commission outlines that the degree of modification required to amount to a “significant change” is to be assessed on a case-by-case basis.

A significant change will generally be presumed if the modification uses more than one-third of the original model’s training compute. The Guidelines acknowledge that this threshold may be difficult to apply where the original compute is unknown, though they note that few current modifications are expected to meet it. Given the lack of specificity around what amounts to a “significant change” to a model’s generality, capability, or systemic risk, providers and downstream operators are encouraged to carefully consider whether work on a model forms part of the original model’s lifecycle or gives rise to a new model.

We have seen examples of institutions fine-tuning open-weights pre-trained models where that fine-tuning rapidly had a material impact on model capabilities, even though the tuning dataset was not particularly large and the compute expended was nowhere near one-third of that used to create the underlying pre-trained model. It therefore seems possible that a “significant change” to a GPAI model’s generality or capability could occur well before the one-third compute threshold is reached, as illustrated below.
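To make the one-third presumption concrete, below is a hypothetical Python sketch of how a downstream organization might screen a planned modification; the handling of an unknown original compute figure is our own illustration of the difficulty the Guidelines acknowledge:

```python
from typing import Optional

MODIFICATION_FRACTION = 1 / 3  # presumption threshold in the Guidelines


def presumed_significant(
    modification_flops: float,
    original_training_flops: Optional[float],
) -> Optional[bool]:
    """Apply the one-third compute presumption.

    Returns None when the original model's training compute is unknown
    (a gap the Guidelines acknowledge), in which case a case-by-case
    assessment of generality, capabilities, and systemic risk is needed.
    """
    if original_training_flops is None:
        return None
    return modification_flops > MODIFICATION_FRACTION * original_training_flops


# Hypothetical fine-tune: 1e21 FLOPs on top of a 1.2e23 FLOPs base model.
print(presumed_significant(1e21, 1.2e23))  # False: under 1% of base compute
print(presumed_significant(1e21, None))    # None: original compute unknown
```

As the fine-tuning examples above suggest, a negative result under this screen does not settle the question: a capability-level assessment may still indicate that a significant change has occurred.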

New models and their providers (the downstream organization making the significant change) will be immediately subject to the requirements of the AI Act and will not benefit from the grandfathering provisions outlined in Article 111(3) of the AI Act (see the enforcement timeline below).

Open-source exemptions

The Guidelines clarify that GPAI models may be exempt if they are released under a genuinely free and open-source license; are not monetized (eg, through accessory support or professional services offered for a fee); and publicly disclose model weights, architecture, and usage information. For instance, imposing usage limitations, requiring supplementary licensing, or restricting public access to model parameters (including weights) would take a model outside the open-source exemption.

Enforcement and the transitional period

The Guidelines clarify that providers placing GPAI models on the market before August 2, 2025, have until August 2, 2027, to comply with their obligations under the AI Act. Models that are nearly ready for use but not actually made available on the market before that date will not benefit from the extension.

The Guidelines suggest that providers expecting compliance difficulties after August 2, 2025, should proactively inform the AI Office of how and when they intend to comply. Given the narrow timeline associated with the grandfathering mechanism, the ambiguities surrounding enforcement present a challenge for providers seeking to assess the compliance risk of a later deployment.

Next steps for operators

In preparation for the GPAI model obligations of the AI Act taking effect, organizations should consider the following steps:

  • Model providers should determine whether:
    • Any models would be categorized as a GPAI model or GPAI model with systemic risk
    • Any exemptions to the classifications apply, or
    • They may benefit from the transitional grandfathering provisions that extend compliance timelines for existing GPAI models.

  • Downstream operators should assess whether their modifications to existing GPAI models may result in them becoming model providers.
  • GPAI model providers should review and update documentation, copyright policies, and training data summaries to align with applicable obligations under Chapter V of the AI Act.
  • GPAI model providers should consider whether adherence to the Code may streamline compliance with applicable obligations.

Find out more

DLA Piper’s team of AI lawyers, data scientists, and policy experts helps organizations navigate the complex workings of their AI systems and comply with current and developing regulatory requirements. We continuously monitor updates and developments arising in AI and their impact on industry across the world.

For more information on AI and the emerging legal and regulatory standards, visit DLA Piper’s focus page on AI.

Gain insights and perspectives that will help shape your AI strategy through our AI ChatRoom series.

For further information or if you have any questions, please contact any of the authors.
