EU legislators are moving forward with new legislation regulating access to and use of data generated through the use of connected products or related services (under the Data Act) and artificial intelligence (under the AI Act). Both could become law as early as the end of 2023, although with grace periods built in. Taken together with other EU digital and privacy regulations – such as the Digital Services Act, Digital Operational Resilience Act and General Data Protection Regulation – the interlocking effect of such regulation will be challenging to navigate. This will not just be an issue for EU-headquartered companies to grapple with: both the proposed Data Act and the AI Act will apply to any business selling into the EU market, wherever established.
Users of Connected Products Must Have Access to the Data
The EU Data Act targets the commercially valuable data collected by certain digital products that obtain, generate or collect data concerning their use or environment (broadly, the ‘internet of things’) and any related service data. Under the Data Act, manufacturers will be required to ensure portability of such product-generated data to users and third parties. Policymakers believe this will boost the EU’s data economy by unlocking the sharing of such data, optimizing its accessibility and use, and fostering competition in sectors characterised by a concentration of a small number of manufacturers.
One of the aims of the Data Act is to stop customers being locked into a particular service provider because it holds all the data generated by a connected product. For example, airlines would have access to performance data about their planes currently held by the manufacturer, allowing them to explore potentially cheaper contracts for repair and maintenance with a third party. These data portability requirements would apply to manufacturers of “connected products”, providers of related services, and data holders (that is, anyone who has the right or obligation, in accordance with the Data Act, to use and make available data, including, where contractually agreed, product data or related service data which it has retrieved or generated during the provision of a related service).
In addition, the Data Act includes separate provisions (Chapter VI) aimed at making it easier to switch between data processing services. These provisions generally relate to servers and cloud infrastructure that hold or process vast amounts of data on other companies’ behalf. Providers of these data processing services may, depending on the nature of the data processing services they offer, be required to take all reasonable measures to ensure that their customers, after switching away from their service, achieve “functional equivalence” in the use of comparable services with the new host – including by providing adequate information, technical support and, where appropriate, the necessary tools for the switch.
The Data Act makes clear that it isn’t intended to override any EU laws providing for the protection of intellectual property; this principle is reflected in the procedures set forth in the legislation as regards sharing of data in a scenario whereby this may impact, for example, trade secrets. For example, the Data Act is explicit that providers of data processing services, in satisfying the relevant obligations under Chapter VI of the Data Act, shall not be required to disclose or transfer digital assets protected by intellectual property rights, or that otherwise constitute a trade secret, to a customer or to another provider.
On 9 November 2023, the European Parliament adopted the final text of the Data Act. The Data Act will enter into force on the 20th day following its publication in the Official Journal of the European Union, and will apply from 20 months after its entry into force.
Regulating Uses of AI
The Artificial Intelligence (AI) Act, appropriately, is still evolving. The trilogue negotiation stage of the legislative process began in June 2023. We understand that the discussions between the participating institutions are focused on three key areas: (i) the scope of the exceptions to the prohibition on use by law enforcement of remote biometric identification in public spaces, (ii) the division of responsibility for enforcement of the legislation at EU and Member State-level, and (iii) perhaps attracting the most scrutiny, the inclusion of specific obligations for providers of foundation models. Pending alignment between EU member states on the scope of the legislative measures to address these areas and approval of a final text, the precise implications of the AI Act remain unknown.
But in broad outline, the regulation proposes to regulate machine-based systems on a sliding scale, according to the level of risk posed by use of such systems:
- Some uses of AI would be deemed to pose an unacceptable risk, and therefore be banned outright: if it were to underpin a social credit score, for instance.
- Others, such as AI deployed in products subject to certain EU health and safety harmonization legislation (e.g., medical devices) or used in university admissions or grading, would be designated as high risk and subject to robust regulation.
- AI posing limited risk, such as certain AI systems intended to interact with individuals, would attract transparency obligations: the public should generally be told they are interacting with an AI system.
- Finally, minimal risk AI systems such as spam filters would not be regulated at all, although providers are encouraged to sign up to non-binding codes of conduct.
Most of the Act’s requirements would attach to systems in the high-risk bucket. Such systems would have to undergo a conformity assessment before being placed on the market, and bear a ‘CE’ mark. But providers would also have ongoing obligations to monitor incidents and malfunctions post-deployment; how exactly this will work and be enforced in practice is not altogether clear.
The AI Act’s obligations would largely attach to providers and, to a lesser extent, users, importers and distributors of AI systems. Various supply chain participants could be considered a provider, including a company that supplies “under its own name or trademark” a high-risk AI system. Companies should not assume the rules are only the concern of backend developers – especially as penalties will be assessed as a percentage of global turnover.
Not all have welcomed the proposed regulation. Earlier this year, over 150 senior executives of European companies, including from established enterprises such as Siemens, Merck and Airbus, signed an open letter arguing that the AI Act would “jeopardize Europe’s competitiveness and technological sovereignty”. Rather than imposing “bureaucratic” rules on a rapidly changing set of technologies, the signatories argued, the legislation should “confine itself to stating broad principles”.
Consistent with this approach, the G7 Leaders last month published the Hiroshima Process International Code of Conduct for Organizations Developing Advanced AI Systems, which is voluntary guidance based on guiding principles intended to promote safe, secure and trustworthy AI. It is not entirely clear how the Code of Conduct will supplement the existing and emerging regulatory requirements applicable to development and use of AI in G7 countries in practice, but a perceived advantage of voluntary codes of conduct is that market participants’ experiences in adopting such measures can be taken into account when creating binding rules. It is critical that co-legislators and rule-makers take care to ensure legal certainty with respect to how any binding (or non-binding) measures will apply – particularly in areas of overlap between different sets of rules.
The European Parliament is, however, keen to set binding guardrails – and soon. Its proposed amendments to the text of the AI Act, which have been the subject of scrutiny and debate in the trilogue negotiations, include specific measures to regulate foundation models (e.g., with respect to risk mitigation, data governance, and registration of those types of models in an EU-wide database), and generative AI models, including a requirement to disclose publicly a summary of copyrighted data used to develop and train any such model. 
Parliament has proposed a two-year grace period for compliance, whereas the Council, made up of representatives of national governments, proposes three. The final trilogue meetings of the year are expected to take place in early December 2023. If alignment is reached and a compromise text agreed, the AI Act could come into force – starting the grace period countdown – as early as the end of 2023. If not, in light of the European Parliament elections to be held mid-2024, enactment of the AI Act could be significantly delayed (potentially even until after the elections).