Background
On 19 November 2025, the European Commission presented its much-anticipated Digital “Omnibus” package[1] intended to ease the administrative and compliance burden facing European businesses. Executive Vice-President of the Commission Henna Virkkunen stated that “[f]rom factories to start-ups, the digital package is the EU’s answer to calls to reduce burdens on our businesses.”[2]
The package is part of a legislative initiative to simplify and consolidate the EU’s Digital Rulebook,[3] the body of EU legislation governing digital and emerging technologies, cybersecurity, and data. Developed by the Commission in response to calls from industry for greater legal clarity and alignment in enforcement approaches, the Digital Omnibus focuses on areas “where it was clear that the regulatory objectives can be achieved at a lower administrative cost”[4] across the board.
The package includes two key legislative proposals:[5]
- amendments to simplify and streamline a range of digital legislation, including rules relating to cybersecurity, data protection and data governance;[6] and
- a separate proposal to simplify and smooth the implementation of certain provisions of the EU AI Act.[7]
Key reform proposals
1. Single-entry point for incident reporting; uniform requirements for personal data breach notification
The Commission proposes to set up a centralised platform, to be established by the EU Agency for Cybersecurity (ENISA), where organisations can report cyber incidents under multiple frameworks (including NIS2, GDPR, DORA, and CRA) by way of a single filing. Prior to enabling notification of incidents in this way, ENISA would be expected to pilot and test the system to ensure it is fit for purpose for each relevant framework. Other sector-specific incident reporting obligations, such as those under the aviation and energy regimes, would be brought under the same platform in due course.
Amendments have also been proposed to streamline and relax the personal data breach notification rules under GDPR. The proposals increase the threshold for notifying a data breach to regulators (the GDPR currently requires notification to regulators “unless the personal data breach is unlikely to result in a risk” [our emphasis], whereas the proposal only requires notification to regulators for a breach that is “likely to result in a high risk…” [our emphasis]) and extend the notification deadline to 96 hours (up from 72). The European Data Protection Board would prepare a common template for data breach notifications under GDPR.
2. Unified cookie consent framework
In a move to update the EU’s cookie policy framework under the ePrivacy Directive, the Commission proposes that the processing of personal data on and from terminal equipment should be governed solely by the GDPR where the subscriber or user of the service is a natural person (AI agents accessing a corporate subscriber’s terminal equipment may, by contrast, still be bound by the legacy ePrivacy Directive consent requirements). While consent would remain the general rule for storing personal data on, or accessing personal data from, a natural person’s electronic device (e.g., through cookies), the Commission proposes certain exceptions where consent would not be required – notably, for using first-party cookies to create aggregated information about the usage of an online service for audience measurement, where this is carried out by the data controller of that online service solely for its own use.
Changes to cookie-banner requirements have also been proposed: users must be able to reject cookies via a single-click button, and, once rejected, cookie consent must not be requested again for 6 months.
The proposed amendments could also reduce enforcement fragmentation by making the GDPR’s “one stop shop” mechanism applicable to the oversight of cookies used to collect personal data. The flipside is that this also triggers GDPR-level fines, which would mean a significant increase compared to the maximum fines available for breach of the ePrivacy Directive in some member states.
3. Amendment of the definition of “personal data” under GDPR
The proposal would amend the definition of “personal data” by clarifying that (pseudonymised) information will not be considered personal data for anyone who does not have the “means reasonably likely to be used” to (re-)identify the natural person to whom the information relates. The Commission would also adopt implementing acts to help specify when data resulting from pseudonymisation no longer constitutes personal data.
While this is being characterised as merely codifying recent case law on the “relative” (as opposed to “absolute”) approach to defining personal data, it could be a highly impactful change in practice: the concept of personal data not only delineates the scope of application of GDPR, but also of several key obligations under other regulations in the Digital Rulebook (such as DSA and DMA).
4. Other “innovation-friendly” amendments to the GDPR
Other proposed amendments appear to be aimed specifically at reconciling GDPR with emerging technologies, in particular AI training and use, by addressing some of the most pressing data protection compliance questions faced by AI developers and deployers.
A new provision would be introduced expressly recognising that processing of personal data in the context of the development or operation of an AI system or model can be based on “legitimate interests” under GDPR, subject to potential exceptions under EU and national law and the usual “necessity” requirement and balancing test. This would be a helpful clarification, confirming how many in the industry already operate today. However, the proposal would also introduce certain conditions which may not always be straightforward to implement in practice, including providing data subjects with “an unconditional right to object.”
Similarly, “[i]n order not to disproportionately hinder the development and operation of AI”, a new (albeit narrow) exception would be introduced to GDPR to allow for the incidental processing of sensitive data (so-called “special categories of personal data”) in the context of the development or operation of an AI system. Several stringent conditions would apply, including requirements to implement measures to avoid the collection and processing of sensitive data throughout the entire lifecycle of the AI system, to remove any “residual” sensitive data from datasets, and to prevent such data from being disclosed through outputs of the AI system.
Providers of high-risk AI systems would benefit from an additional (slightly broader) exception under the AI Act, allowing for the “exceptional” processing of sensitive data for bias detection and correction, but only if various conditions are satisfied (including that use of synthetic or anonymised data would be inadequate for such purpose). Where “necessary and proportionate”, the same exception would be available to deployers of high-risk AI systems and to providers of other (non-high-risk) AI systems and AI models.
Finally, the proposals would introduce a definition of “scientific research” into the GDPR, which would expressly include research for commercial purposes. This change could make it easier for AI developers to re-purpose personal data for AI training, and will also be welcomed by companies active in other R&D-intensive industries that perform research through the processing of personal data at scale, such as pharma and biotech.
5. Exemptions under GDPR for automated decision-making clarified
Amendments have been proposed to clarify the circumstances in which decisions that have legal or similarly significant effects can be based solely on automated processing, including profiling. In particular, this would be allowed when the decision is necessary for entering into or performing a contract with the data subject (regardless of whether the decision could have been taken by a human through non-automated means). Explanatory commentary provided by the Commission does, however, suggest that where several equally effective automated processing solutions exist, the controller should still “use the less intrusive one”.
6. Refusal of “abusive” data subject access requests
The legislative package introduces a new ground for data controllers to refuse (or charge a reasonable fee to respond to) “abusive” data subject access requests under GDPR made “for purposes other than the protection of their data.” The proposal also provides additional context as to the types of requests that may be viewed as an “abuse” of the rights of access granted to data subjects (e.g., overly broad and undifferentiated requests, and requests made with the intent to cause damage or harm to the controller). If adopted, it remains to be seen how impactful this change would be in practice – for example, whether it would mark the decline of the use of data subject access requests as a pre-litigation discovery tool in non-data protection-related disputes.
7. Rules for high-risk AI and certain transparency obligations delayed
Entry into application of the AI Act requirements for high-risk AI systems is set to be delayed. Depending on the category of high-risk AI system, there would be a 6- or 12-month transition period after supporting tools and measures, such as harmonised standards and guidelines (presently still under development by the Commission), become available. This would, however, be subject to a long-stop date (either 2 December 2027 or 2 August 2028, depending on the type of AI system), after which the rules would enter into application in any case.
The Commission also proposes to push out the grace period for compliance with transparency obligations under Article 50(2) of the AI Act to 2 February 2027 for AI systems (including general-purpose AI systems) that generate synthetic audio, image, video or text and which are placed on the market before 2 August 2026.
8. Supervisory mandate of the AI Office expanded
The proposal would centralise and expand the AI Office’s oversight of AI systems, granting it exclusive supervisory and enforcement competences in respect of AI systems based on general-purpose AI models when the model and the system are developed by the same provider.[8] The AI Office’s mandate would also extend to AI systems that constitute, or are integrated into, very large online platforms (VLOPs) or very large search engines (VLOSEs) designated as such under the DSA; although the first point of entry for the assessment of such AI systems would still be the risk assessment, mitigating measures and audit obligations prescribed under the DSA.
9. Greater flexibility for organisations to tailor AI compliance
The proposals give organisations greater flexibility to tailor their AI compliance approaches. Providers would no longer be required to register an AI system in the EU database for high-risk AI systems if they assess that the system is not genuinely high-risk based on how it is used (according to the criteria in Article 6(3) of the AI Act). Providers of high-risk AI systems would also be able to tailor their post-market monitoring systems, instead of being required to implement the elements mandated by the Commission for that purpose.
The Commission also proposes removing the mandatory obligation on businesses to promote AI literacy. Primary responsibility for AI literacy would instead shift to the Commission and Member States, who would be required to “encourage” providers and deployers (on a non-binding basis) to take measures to ensure sufficient AI literacy of their staff.
10. Safeguards for trade secrets strengthened; rules for data sharing narrowed under the EU Data Act
The package introduces an additional ground for data holders to refuse the disclosure of trade secrets under the EU Data Act: if they can demonstrate a high risk of unlawful acquisition, use or disclosure of trade secrets to third-country entities, or EU entities under the control of such entities, which are subject to weaker protections than those available under EU law. The Commission’s explanatory commentary indicates that data holders may take account of various factors when making this assessment (including insufficient legal standards, poor enforcement, limited legal recourse, strategic misuse of procedural tactics to undermine competitors, or undue political influence); but any such refusal would in any event need to be clear, proportionate and tailored to the specific circumstances of each case.
The proposals also narrow the circumstances when data holders would be required to make data available to public sector bodies, limiting this only to scenarios where data is required to respond to a public emergency (and provided further that the granularity and volume of data requested and the frequency of access is proportionate and duly justified in the context of the relevant emergency).
11. Rules for cloud switching under the EU Data Act adjusted
To help mitigate the cost and administrative burden of renegotiating existing contracts, the proposals introduce targeted exemptions to the Data Act cloud switching rules for custom-made data processing services (the majority of features and functionalities of which have been adapted by the provider to meet the specific needs of the relevant customer) and data processing services provided by small mid-caps and SMEs. Services of this type (other than infrastructure-as-a-service) provided under contracts concluded on or prior to 12 September 2025 would not generally be subject to the switching rules.
Furthermore, the amendments clarify that providers of data processing services (other than infrastructure-as-a-service) may impose “proportionate early termination penalties” in fixed term contracts (as long as these do not constitute an obstacle to switching).
Next steps
The proposals now require approval from the European Parliament and the Council before they are passed into law. The measures outlined above therefore remain subject to what are likely to be complex and politicised negotiations between the EU institutions in the coming months. At the same time, political pressure is mounting to act swiftly to reinvigorate innovation in Europe and catch up in the AI race. Many of the rules affected by these initiatives are already (or will soon be) in force, so companies should reassess now the impact of these proposals on their compliance strategies and operations, even as the details are ironed out through the legislative process.
This article was republished by NYU’s Compliance & Enforcement Blog.
[1] The full press release can be found here.
[2] The full version of Executive Vice-President Virkkunen’s remarks is available here.
[3] See more information on the Digital Rulebook here.
[4] See the 2025 EU Commission Call for Evidence on the Digital Omnibus (Digital Package on Simplification) here.
[5] Two further initiatives accompany these two Omnibus proposals to form the complete Digital package: (i) a Data Union Strategy to facilitate data access for AI development and (ii) European Business Wallets to provide companies with a single digital identity for cross-border transactions.
[6] The full text of the Proposal for Regulation on simplification of the digital legislation can be found here.
[7] The full text of the Digital Omnibus on AI Regulation Proposal can be found here.
[8] Note, however, that sectoral authorities will continue to be responsible for the supervision of AI systems related to products covered by the Union harmonisation legislation listed in Annex 1 of the AI Act.
