On 5 September 2024, the EU, UK and US joined seven other states[1] in signing the Council of Europe Framework Convention on Artificial Intelligence and Human Rights, Democracy and the Rule of Law (“Treaty”) – the first international treaty governing the safe use of artificial intelligence (“AI”).[2] The Treaty remains subject to ratification, acceptance or approval by each signatory and will enter into force on the first day of the month following a period of three months after the date on which five signatories, including at least three Council of Europe member states, have ratified it. Any state worldwide is eligible to join the Treaty, subject to the unanimous approval of the signatories, and to commit to complying with its provisions. The Treaty is expected to have a positive impact on international cooperation in addressing AI-related risks.

Executive Summary

  • The Treaty sets a baseline across ratifying signatories (“Parties”) to adopt or maintain appropriate domestic legislative, administrative, or other measures to ensure that activities within the lifecycle of AI systems are not only fully consistent with human rights, democracy and the rule of law, but also respect seven broad principles: (1) human dignity and individual autonomy, (2) transparency and oversight, (3) accountability and responsibility, (4) equality and non-discrimination, (5) privacy and personal data protection, (6) reliability and (7) safe innovation.
  • Although the likelihood of enforcing the Treaty against any of its Parties (if they subsequently fail to adopt or maintain the relevant measures to which they have committed) seems remote, the Treaty’s creation of an inter-state forum for oversight of Parties’ compliance increases public pressure on Parties (and, perhaps, on states that have not yet signed) to adhere to the Treaty’s baseline standards.
  • Thus, the main takeaway for companies developing and deploying AI systems is that the Treaty likely foreshadows increased regulatory convergence in Q4 2024 and FY 2025, and possibly future legislation in signatory states, to increase cross-border harmonisation in line with the Treaty’s broad, unifying principles.

Context

AI development is characterised by a collective action problem (or prisoners’ dilemma): developers understand that AI can be misused, and that such misuse may even give rise to systemic risks. Yet no individual developer has an incentive to limit the capabilities of its technology, for fear that competitors will exploit the areas or capabilities it leaves untapped. The result may leave every developer, and society, worse off. The Treaty reflects a view that this collective problem is best resolved through worldwide coordination and cooperation setting parameters for AI development and use.

The Treaty is the outcome of two years’ work by an intergovernmental body consisting of the Council of Europe member states, 11 non-member states and the EU, as well as representatives of other international organisations,[3] the private sector, civil society and academia.[4] It was prepared taking into account existing international legal and policy instruments on AI, including the OECD AI Principles, the G7 Hiroshima Process International Guiding Principles for Organizations Developing Advanced AI Systems[5] and the EU AI Act (see Section F below on its interplay with the EU AI Act).[6] The Treaty is accompanied by a detailed explanatory report (“Explanatory Report”) aimed at facilitating the understanding of the Treaty’s provisions.[7]

Below we set out a summary of our main takeaways from the Treaty and Explanatory Report.

A. Key Concepts

    The Treaty sets out obligations on its Parties to adopt or maintain certain measures in relation to activities within the lifecycle of AI systems. Taking each element in turn:

    • Adopt or maintain. The Explanatory Report clarifies that the intention behind this phrasing is to give Parties the flexibility to fulfil their obligations either by adopting new measures or by applying existing ones, such as legislation and mechanisms that predate the Treaty’s entry into force.
    • AI system.  The Treaty draws the definition of AI system from that adopted by the OECD – a “machine-based system that, for explicit or implicit objectives, infers, from the input it receives, how to generate outputs such as predictions, content, recommendations or decisions that may influence physical or virtual environments.”[8] This largely aligns with the definition of AI system in the EU AI Act, which was also based on the OECD’s definition.[9]
    • Lifecycle.  The Explanatory Report clarifies that the reference to the AI lifecycle is aimed at ensuring a comprehensive and future-proofed approach to AI-related risks and provides the following non-exhaustive list of examples of relevant activities: (1) planning and design, (2) data collection and processing, (3) development of AI systems, including model building and/or fine-tuning existing models for specific tasks, (4) testing, verification and validation, (5) supply/making the systems available for use, (6) deployment, (7) operation and monitoring, and (8) retirement.

    B. Principles Related to Activities Within the Lifecycle of AI Systems

    While Chapter II of the Treaty includes high-level general obligations on the protection of human rights, integrity of democratic processes, and respect for the rule of law, Chapter III of the Treaty sets out seven common principles that each Party should implement with respect to AI systems in a manner appropriate to its domestic legal system. The principles are as follows:

    1. Human Dignity and Individual Autonomy [10]

      “Each Party shall adopt or maintain measures to respect human dignity and individual autonomy in relation to activities within the lifecycle of artificial intelligence systems.”

      According to the Explanatory Report, this means that (a) activities within the lifecycle of AI systems should not lead to the dehumanisation of individuals, undermine their agency, reduce them to mere data points or anthropomorphise AI systems in a way that interferes with human dignity, and (b) individuals should have control over the use of AI technologies and their resultant impact on their lives, such that their agency and autonomy are not diminished.

      2. Transparency and Oversight[11]

      “Each Party shall adopt or maintain measures to ensure that adequate transparency and oversight requirements tailored to the specific contexts and risks are in place in respect of activities within the lifecycle of artificial intelligence systems, including with regard to the identification of content generated by artificial intelligence systems.”

      The Explanatory Report clarifies that transparency means that the decision-making processes and general operation of AI systems should be understandable and accessible to appropriate AI actors and relevant stakeholders. Relevant measures to ensure transparency should be assessed on a case-by-case basis but may include, as appropriate (see the illustrative sketch after this list):

      • recording key considerations such as data provenance, training methodologies, validity of data sources, documentation and transparency on training, testing and validation data used, risk mitigation efforts, and processes and decisions implemented;
      • providing information about the system – e.g., purpose(s), known limitations, assumptions and engineering choices made during design, features, details of the underlying models or algorithms, training methods and quality assurance processes (including to ensure explainability of AI systems, how they work and interpretability of how they make predictions or decisions);
      • providing information about the data used to create, train and operate the system, the protection of personal data and the processing of information, the types and level of automation used to make consequential decisions, and the risks associated with the use of the AI system (including to facilitate the possibility for parties with legitimate interests, including copyright holders, to exercise and enforce their intellectual property rights); and
      • measures with regard to the identification of AI-generated content, including techniques such as labelling and watermarking (subject to their availability and proven effectiveness, the generally acknowledged state of the art and the specificities of different types of content, with a view to avoiding the risk of deception and enabling a distinction between human-generated content and AI-generated content that may be considered deceptive, such as deepfakes).
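      By way of illustration only (neither the Treaty nor the Explanatory Report prescribes any particular format or tooling), the sketch below shows how a developer might capture some of this transparency information in a structured, machine-readable record and apply a simple disclosure label to generated content. All names and fields are hypothetical.

```python
# Illustrative sketch only: a hypothetical transparency record loosely
# mirroring the documentation items listed in the Explanatory Report.
# Neither the Treaty nor the Report prescribes any such schema.
from dataclasses import dataclass, asdict
import json


@dataclass
class TransparencyRecord:
    system_name: str
    purpose: str
    known_limitations: list[str]
    data_provenance: list[str]        # sources of training/validation data
    training_methodology: str
    risk_mitigations: list[str]
    ai_generated_label: str = "AI-generated"  # label applied to outputs


def label_output(record: TransparencyRecord, content: str) -> str:
    """Prefix generated content with a disclosure label (labelling being
    one identification technique the Explanatory Report mentions)."""
    return f"[{record.ai_generated_label}] {content}"


record = TransparencyRecord(
    system_name="example-assistant",
    purpose="customer-support drafting",
    known_limitations=["may produce inaccurate answers"],
    data_provenance=["licensed corpus v2", "filtered public web crawl"],
    training_methodology="fine-tuning of a pre-trained language model",
    risk_mitigations=["output filtering", "human review of escalations"],
)

print(json.dumps(asdict(record), indent=2))  # machine-readable documentation
print(label_output(record, "Here is a draft reply..."))
```

      Labelling of this kind is only one of the identification techniques mentioned above; watermarking instead embeds the disclosure signal in the content itself.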

      The Explanatory Report also acknowledges that promoting the use of technical standards, open-source licences and the collaboration of researchers and developers supports the development of more transparent AI systems in the long run.

      Oversight, on the other hand, refers to mechanisms, processes and frameworks designed to monitor, evaluate and guide activities within the lifecycle of AI systems. These include legal, policy and regulatory frameworks, recommendations, ethical guidelines, codes of practice, audit and certification programmes, bias detection and mitigation tools, oversight bodies, committees and competent authorities (e.g., data protection authorities), continuous monitoring and auditing of developing capabilities, public consultations and technical standards.

      3. Accountability and Responsibility[12]

      “Each Party shall adopt or maintain measures to ensure accountability and responsibility for adverse impacts on human rights, democracy and the rule of law resulting from activities within the lifecycle of artificial intelligence systems.”

      The Explanatory Report clarifies that this principle refers to the need to provide mechanisms to enable individuals, organisations and entities responsible for the activities within the lifecycle of AI systems to be answerable for any adverse impacts on human rights, democracy or the rule of law resulting from such activities. Parties to the Treaty should establish new, or adapt existing, frameworks and mechanisms to give effect to this principle, including judicial and administrative measures, civil, criminal and other liability regimes and, in the public sector, administrative and other procedures.

      This principle emphasises the need for clear lines of responsibility and the ability to trace actions and decisions back to specific individuals or entities in a way that recognises the diversity of the relevant actors and their roles and responsibilities.

      4. Equality and Non-Discrimination[13]

      “(1) Each Party shall adopt or maintain measures with a view to ensuring that activities within the lifecycle of artificial intelligence systems respect equality, including gender equality, and the prohibition of discrimination, as provided under applicable international and domestic law. (2) Each Party undertakes to adopt or maintain measures aimed at overcoming inequalities to achieve fair, just and equitable outcomes, in line with its applicable domestic and international human rights obligations, in relation to activities within the lifecycle of artificial intelligence systems.”

      According to the Explanatory Report, the Treaty requires Parties to consider appropriate regulatory, governance, technical or other solutions to address the different ways through which bias can intentionally or inadvertently be incorporated into AI systems at various stages throughout their lifecycle.

      The Explanatory Report provides the following as examples of well-documented types of potential bias in relation to AI systems which should be considered under this principle (a minimal detection sketch follows the list):

      • potential bias of the algorithm’s developers;
      • potential bias built into the model upon which the systems are built;
      • potential biases inherent in the training data sets used, or in the aggregation or evaluation of data;
      • biases introduced when such systems are implemented in real-world settings, or as AI evolves through self-learning, due to errors and deficiencies in determining the working and learning parameters of the algorithm; and
      • automation or confirmation biases.
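      As a minimal, purely illustrative sketch of what a bias-detection check of the kind contemplated by this principle (and by the oversight tooling described in Section B.2) might involve, the following computes one common fairness metric, the demographic parity difference, over a system’s decisions. Real bias audits apply many metrics and require domain context; all data below is invented.

```python
# Illustrative sketch only: a single, simple bias-detection check
# (demographic parity difference). Real audits use many metrics.

def demographic_parity_difference(outcomes, groups):
    """Difference in positive-outcome rates between two groups.

    outcomes: 0/1 decisions produced by an AI system
    groups:   parallel group labels, e.g. "A" or "B"
    """
    rate = {}
    for g in set(groups):
        decisions = [o for o, gg in zip(outcomes, groups) if gg == g]
        rate[g] = sum(decisions) / len(decisions)
    a, b = sorted(rate)               # deterministic ordering of the groups
    return rate[a] - rate[b]


# Invented decisions from a hypothetical screening model:
outcomes = [1, 0, 1, 1, 0, 0, 1, 0]
groups   = ["A", "A", "A", "A", "B", "B", "B", "B"]
print(f"demographic parity difference: "
      f"{demographic_parity_difference(outcomes, groups):+.2f}")  # +0.50
```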

      5. Privacy and Personal Data Protection[14]

      “Each Party shall adopt or maintain measures to ensure that, with regard to activities within the lifecycle of artificial intelligence systems: (a) privacy rights of individuals and their personal data are protected, including through applicable domestic and international laws, standards and frameworks; and (b) effective guarantees and safeguards have been put in place for individuals, in accordance with applicable domestic and international legal obligations.”

      The Explanatory Report notes that, at the domestic level, most of the states which negotiated the Treaty have dedicated personal data or privacy protection laws (e.g., in the EU, the GDPR) and often specialised authorities responsible for the proper supervision of such laws.

      According to the Explanatory Report, individual privacy rights to be recognised and protected across jurisdictions include:

      • limiting access to an individual’s life experiences and engagements;
      • maintaining secrecy of certain personal matters;
      • maintaining a degree of control over personal information and data;
      • protecting personhood (individuality or identity, dignity, individual autonomy); and
      • protecting matters of intimacy and physical, psychological or moral integrity.

      While the Treaty points out these commonalities, it is not intended to endorse or require any particular regulatory measures in any given jurisdiction and hence it remains high-level with respect to this principle.

      6. Reliability[15]

      “Each Party shall take, as appropriate, measures to promote the reliability of artificial intelligence systems and trust in their outputs, which could include requirements related to adequate quality and security throughout the lifecycle of artificial intelligence systems.”

      This principle focuses on the potential role to be played by standards, technical specifications, assurance techniques and compliance schemes in evaluating and verifying the trustworthiness of AI systems and transparently documenting and communicating evidence for this process.

      The Explanatory Report acknowledges that standards could provide a reliable basis for sharing common expectations about certain aspects of a product, process, system or service, with a view to building justified confidence in the trustworthiness of an AI system whose development and use comply with those standards. Similarly, assurance and compliance schemes are important both for securing compliance with rules and regulations and for facilitating the assessment of more open-ended risks where rules and regulations alone do not provide sufficient guidance to ensure that a system is trustworthy.

      7. Safe Innovation[16]

      “With a view to fostering innovation while avoiding adverse impacts on human rights, democracy and the rule of law, each Party is called upon to enable, as appropriate, the establishment of controlled environments for developing, experimenting and testing artificial intelligence systems under the supervision of its competent authorities.”

      According to the Explanatory Report, the purpose of this provision is not to stifle innovation but to recognise that innovation is shaped as much by regulation as by its absence. The failure to create an environment in which responsible innovation can flourish risks suppressing it and opening the market to more reckless approaches.[17]

      The Explanatory Report gives as examples regulatory sandboxes, special regulatory guidance and no-action letters that clarify how regulators will approach the design, development or use of AI systems in novel contexts. These approaches have the following advantages:

      • help identify potential risks and issues associated with AI systems early in the development process;
      • facilitate knowledge-sharing among private entities, regulators, and other stakeholders;
      • make it possible to learn about the opportunities and risks of an innovation at an early stage, provide evidence for regulatory learning purposes, and may provide flexibility for regulations and technologies to be tested;
      • may allow regulators to experiment with different regulatory approaches and evaluate their effectiveness;
      • boost public and industry confidence; and
      • allow AI organisations to work closely with regulators to understand and meet compliance requirements.

      C. Assessment and Mitigation of Risks and Adverse Impacts[18]

      The Treaty also includes a general obligation on each Party, taking into account the principles described above, to adopt or maintain measures for the identification, assessment, prevention and mitigation of risks posed by AI systems by considering actual and potential impacts on human rights, democracy and the rule of law.

      Adverse impacts of AI systems on human rights, democracy, and the rule of law and the measures taken to address them should be documented. This should inform the relevant risk management measures to be implemented, which should be graduated and differentiated, as appropriate (see the toy sketch after this list), and:

      • take due account of the context and intended use of AI systems, in particular risks to human rights, democracy, and the rule of law;
      • take due account of the severity and probability of potential impacts;
      • consider, where relevant, the perspectives of stakeholders, in particular persons whose rights may be impacted;
      • apply iteratively throughout the lifecycle of the AI system;
      • include monitoring for risks and adverse impacts to human rights, democracy, and the rule of law;
      • include documentation of risks, actual and potential impacts, and the risk management approach; and
      • require, where appropriate, testing of AI systems before making them available for first use and when they are significantly modified.
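      As a toy illustration only (the Treaty prescribes no scoring method), the sketch below ranks hypothetical AI-related risks by severity and probability and maps higher scores to stronger mitigation measures, reflecting the graduated and differentiated approach described above. All thresholds and tiers are invented.

```python
# Toy illustration only: a hypothetical risk register applying a graduated,
# severity-times-probability approach. The Treaty prescribes no such method.
from dataclasses import dataclass


@dataclass
class Risk:
    description: str
    severity: int      # 1 (minor) .. 5 (severe impact on rights)
    probability: int   # 1 (rare)  .. 5 (almost certain)

    @property
    def score(self) -> int:
        return self.severity * self.probability

    def mitigation_tier(self) -> str:
        # Graduated response: stronger measures for higher scores.
        if self.score >= 15:
            return "pre-deployment testing + continuous monitoring"
        if self.score >= 8:
            return "documented mitigations + periodic review"
        return "document and monitor"


register = [
    Risk("discriminatory credit scoring", severity=5, probability=3),
    Risk("chatbot gives outdated opening hours", severity=1, probability=4),
]

for r in sorted(register, key=lambda r: r.score, reverse=True):
    print(f"{r.score:>2}  {r.description}: {r.mitigation_tier()}")
```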

      Measures may explicitly include a ban on certain uses of AI systems that are considered incompatible with respect for human rights, the functioning of democracy or the rule of law.

      D. Remedies and Procedural Safeguards[19]

      The Treaty also includes a chapter on remedies and procedural safeguards, which is intended to complement each Party’s applicable international and domestic legal regime of human rights protection and to be implemented by each Party applying its existing frameworks to the context of AI systems.

      In particular, Chapter IV requires each Party, to the extent remedies are required by its international obligations and consistent with its domestic legal system, to adopt or maintain measures to ensure the availability of accessible and effective remedies for violations of human rights resulting from the activities within the lifecycle of AI systems, including:

      • transparency measures to ensure that relevant information regarding AI systems which have the potential to significantly affect human rights and their relevant usage is documented, provided to bodies authorised to access that information and, where appropriate and applicable, made available or communicated to affected persons;
      • measures to ensure that the information referred to above is sufficient for the affected persons to contest the decision(s) made or substantially informed by the use of the system, and, where relevant and appropriate, the use of the system itself; and
      • an effective possibility for persons concerned to lodge a complaint with competent authorities.

      Exceptions, limitations or derogations from such transparency measures are, however, possible in the interest of public order, security and other important public interests, as provided for by applicable international human rights instruments and where necessary to meet those objectives.

      It also requires each Party to ensure that (a) where an AI system significantly impacts upon the enjoyment of human rights, effective procedural guarantees, safeguards and rights, in accordance with the applicable international and domestic law, are available to persons affected thereby, and (b) as appropriate, persons interacting with AI systems are notified that they are interacting with such systems rather than with a human. The Explanatory Report clarifies that the obligation to notify persons that they are interacting with an AI system is aimed at avoiding the risk of manipulation and deception.

      The Explanatory Report notes that, where an AI system substantially informs or takes decisions impacting on human rights, effective procedural guarantees should, for instance, include human oversight, including ex ante or ex post review of the decision by humans.[20] Where appropriate, such human oversight measures should guarantee that the AI system is subject to built-in operational constraints that cannot be overridden by the system itself and is responsive to the human operator, and that the natural persons to whom human oversight has been assigned have the necessary competence, training and authority to carry out that role.
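      As a purely illustrative sketch (all names are hypothetical), the snippet below combines the two ideas just described: a disclosure notice telling users they are interacting with an AI system, and an ex ante human-review gate that sits outside the AI system, so that decisions significantly affecting rights cannot take effect without approval by a human operator.

```python
# Illustrative sketch only: an AI-interaction disclosure plus an "ex ante"
# human-review gate of the kind the Explanatory Report describes.
# All names are hypothetical.

AI_DISCLOSURE = "You are interacting with an AI system, not a human."


def decide_with_oversight(ai_decision: str, affects_rights: bool,
                          human_approve) -> str:
    """Return the final decision. The gate lives outside the AI system,
    so the system itself cannot override it; consequential decisions
    require explicit approval by a trained human operator."""
    if not affects_rights:
        return ai_decision
    if human_approve(ai_decision):    # ex ante review of the decision
        return ai_decision
    return "referred for full human decision"


print(AI_DISCLOSURE)
print(decide_with_oversight("deny application", affects_rights=True,
                            human_approve=lambda decision: False))
```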

      E. Conference of the Parties and Effective Oversight Mechanisms[21]

      To ensure its effective implementation, the Treaty provides for a follow-up mechanism in the form of a Conference of the Parties, composed of Party representatives. The Conference of the Parties is tasked with identifying any problems relating to reservations, considering possible supplementation or amendment of the Treaty, making specific recommendations on its interpretation, and facilitating the exchange of information, the friendly settlement of disputes and cooperation with relevant stakeholders. Each Party is required to report to the Conference of the Parties within the first two years of becoming a Party and “periodically” thereafter.

      Beyond the Conference of the Parties, the Treaty requires that each Party establish or designate one or more effective mechanisms to oversee compliance with the Treaty’s obligations. The provision emphasises the need for the Parties to review their already existing mechanisms as applied to activities within the lifecycle of AI systems. The Treaty leaves it to the Parties’ discretion to expand, reallocate, adapt, or redefine their existing functions or, if appropriate, set up entirely new structures or mechanisms.

      F. Interplay with EU AI Act[22]

      As noted in the European Commission’s press release on its signing of the Treaty, the Commission regards the Treaty as fully compatible with EU law in general, and the EU AI Act in particular.[23]

      The influence of the EU AI Act on the Treaty is visible in its drafting. In particular, the Treaty contains a number of concepts from the AI Act, including:

      • A focus on human-centric AI, consistent with human rights, democracy, and the rule of law. Consistent with the Treaty, the purpose of the AI Act (as set out in Article 1 thereof) is to improve the functioning of the internal market, support innovation, and promote the uptake of human-centric and trustworthy AI, while ensuring a high level of protection of health, safety, and fundamental rights (including democracy, the rule of law and environmental protection) against the potential harmful effects of AI systems in the EU.
      • A risk-based approach and risk management obligations. Like the AI Act, the Treaty recognises that different AI systems may pose differing levels of risks depending on their nature and use case. Accordingly, the Treaty requires Parties to adopt or maintain graduated and differentiated measures that take account of “the context and intended use” of AI systems. The Treaty further acknowledges that Parties may choose to implement risk assessments at different levels, such as at regulatory level (by prescribing different categories of risk classification) and/or at operational level (by focusing on certain predefined categories of AI systems in line with the graduated and differentiated approach to keep the burden and obligations proportionate to the risks).[24]
      • Key principles for trustworthy AI. The Treaty’s principles largely overlap with principles underlying the obligations of the AI Act (e.g., transparency, robustness, safety, data governance and protection). For example, requirements with respect to transparency for AI-generated content and in interactions with AI systems – as detailed in the Explanatory Report with respect to the transparency and oversight principle of the Treaty (see Section B.2 above) – are substantially the same as the transparency obligations under Chapter IV of the AI Act.
      • Strengthened documentation, accountability, remedies and oversight. The Treaty’s various obligations with respect to record-keeping, provision of information and documentation, accountability frameworks and remedies recall obligations under the AI Act on providers of high-risk AI systems (and, to an extent, GPAI models) in relation to technical documentation, transparency, registration, record-keeping and human oversight, which are aimed at facilitating the relevant supervisory authorities’ review and assessment of AI-related risks and compliance.
      • Support for safe innovation through regulatory sandboxes. Like the AI Act, the Treaty suggests that states establish, as a way to promote safe innovation, regulatory sandboxes as controlled environments for developing, experimenting with and testing AI systems under the supervision of their competent authorities.

      Accordingly, in the EU, the Commission has confirmed that the Treaty’s commitments have been implemented by means of the AI Act.

      Conclusion

      The Treaty is intended as a step towards international cooperation and standardisation on AI safety. Although it remains to be seen which states will join and how each Party will decide to address its obligations under the Treaty, it sets a common baseline that each Party must respect to ensure that activities within the entire lifecycle of AI systems are fully consistent with human rights, democracy and the rule of law, and the Explanatory Report provides additional guidelines and factors to be considered in implementing that baseline.

      In particular, although certain signatories to the Treaty (such as the UK) have opted, in their own domestic approach to AI, for “context-specific guidance” rather than new formal regulation like the EU AI Act,[25] the Treaty aims to bring a degree of uniformity to their approaches to AI by bringing them closer to the principles underlying the AI Act (as detailed in Section F above).


      [1] The additional signatories were Andorra, Georgia, Iceland, Israel, Norway, the Republic of Moldova and San Marino.

      [2] Council of Europe Framework Convention on AI and Human Rights, Democracy and the Rule of Law (CETS No. 225).

      [3] For example, the Organisation for Economic Co-operation and Development (OECD) and the EU, represented by the European Commission, whose delegation also included representatives of the European Union Agency for Fundamental Rights (FRA) and the European Data Protection Supervisor (EDPS).

      [4] Argentina, Australia, Canada, Costa Rica, the Holy See, Israel, Japan, Mexico, Peru, USA and Uruguay.

      [5] See our blog post, ‘G7 Leaders Publish AI Code of Conduct: A Common Thread in the Patchwork of Emerging AI Regulations Globally?’ at Cleary AI and Technology Insights (clearyiptechinsights.com).

      [6] See our blog post, ‘The AI Act has been published in the EU Official Journal’.

      [7] Explanatory Report to the Council of Europe Framework Convention on Artificial Intelligence and Human Rights, Democracy and the Rule of Law.

      [8] Article 2 of the Treaty.

      [9] According to Article 3(1) of the AI Act, an “AI system” is defined as “a machine-based system that is designed to operate with varying levels of autonomy and that may exhibit adaptiveness after deployment, and that, for explicit or implicit objectives, infers, from the input it receives, how to generate outputs such as predictions, content, recommendations, or decisions that can influence physical or virtual environments.”

      [10] Article 7 of the Treaty.

      [11] Article 8 of the Treaty.

      [12] Article 9 of the Treaty.

      [13] Article 10 of the Treaty.

      [14] Article 11 of the Treaty.

      [15] Article 12 of the Treaty.

      [16] Article 13 of the Treaty.

      [17] Paragraphs 90 and 91 of the Explanatory Report (on Article 13 of the Treaty).

      [18] Chapter V of the Treaty.

      [19] Chapter IV and Chapter VI of the Treaty.

      [20] Cf. Article 22 of the GDPR on data subjects’ right not to be subject to a decision based solely on automated processing which produces legal effects concerning them or similarly significantly affects them. Even where such automated decisions are exceptionally permitted, the GDPR requires the data controller to implement suitable safeguards, including at least the right to obtain human intervention on the part of the controller, to express the data subject’s point of view and to contest the decision.

      [21] Article 23 of the Treaty.

      [22] For more information on the AI Act, see our blog post ‘The AI Act has been published in the EU Official Journal’ at Cleary AI and Technology Insights.

      [23] Commission signed the Council of Europe Framework Convention on Artificial Intelligence and human rights, democracy and the rule of law (5 September 2024).

      [24] See Section C “Assessment and Mitigation of Risks and Adverse Impacts” above.

      [25] See our blog post ‘A “pro-innovation” agenda: the UK Government’s Approach to AI and Digital Technology’.