AI governance and regulation in the European Union

The EU’s approach to artificial intelligence centres on excellence and trust, aiming to boost research and industrial capacity and ensure fundamental rights. The European Commission has proposed three inter-related legal initiatives that will contribute to building trustworthy AI:

  1. A European legal framework for AI to address fundamental rights and safety risks specific to AI systems;
  2. EU rules to address liability issues related to new technologies, including AI systems;
  3. A revision of sectoral safety legislation (e.g. Machinery Regulation, General Product Safety Directive).

High-Level Expert Group on AI – Ethics Guidelines for trustworthy AI

The European Commission’s High-Level Expert Group on AI published the Ethics Guidelines for Trustworthy AI in 2019. The guidelines introduce three necessary (but not sufficient) components of trustworthy AI, which should be met throughout an AI system’s entire life cycle: the system should be lawful, ethical and robust. The guidelines present a framework addressing the ethics and robustness aspects.

Chapter I (Foundations of Trustworthy AI) identifies the ethical principles and their correlated values that must be respected in the development, deployment and use of AI systems. Key ethical principles include respect for human autonomy, prevention of harm, fairness and explicability. Chapter II (Realising Trustworthy AI) provides guidance on how Trustworthy AI can be realised by listing seven requirements that AI systems should meet:

  1. human agency and oversight;
  2. technical robustness and safety;
  3. privacy and data governance;
  4. transparency;
  5. diversity, non-discrimination and fairness;
  6. environmental and societal well-being; and
  7. accountability.

Chapter III (Assessing Trustworthy AI) provides a concrete and non-exhaustive Trustworthy AI assessment list aimed at operationalising the key requirements set out in Chapter II.
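
To make the checklist idea concrete, here is a minimal Python sketch of how the seven Chapter II requirements could be tracked per AI system. The class and method names are invented for illustration, and the actual assessment list is considerably more detailed.

    from dataclasses import dataclass, field

    # The seven requirements from Chapter II of the Ethics Guidelines.
    REQUIREMENTS = (
        "human agency and oversight",
        "technical robustness and safety",
        "privacy and data governance",
        "transparency",
        "diversity, non-discrimination and fairness",
        "environmental and societal well-being",
        "accountability",
    )

    @dataclass
    class TrustworthyAIAssessment:
        """Tracks which requirements have been assessed for one AI system."""
        system_name: str
        reviewed: dict = field(
            default_factory=lambda: {r: False for r in REQUIREMENTS}
        )

        def mark_reviewed(self, requirement: str) -> None:
            """Record that one requirement has been assessed."""
            if requirement not in self.reviewed:
                raise KeyError(f"unknown requirement: {requirement!r}")
            self.reviewed[requirement] = True

        def outstanding(self) -> list:
            """Return the requirements not yet assessed."""
            return [r for r, done in self.reviewed.items() if not done]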

The EU AI Act

The AI Act, proposed by the European Commission in April 2021, is the world’s first legal framework for AI, with potential approval in 2022 and potential entry into force in 2023. Once the act passes, there will be a two-year implementation period. Systems existing at the time of implementation are exempt from the act’s requirements unless they subsequently undergo a significant change in purpose or design. The AI Act will apply “where the output produced by the system is used in the Union”, irrespective of where the provider and user are located. The regulation will not apply to AI systems developed and used solely for scientific research, as long as such activities do not lead to placing an AI system on the market or into service in the EU.

The proposed AI Act is risk-based and classifies AI systems into four risk categories; a screening is required to determine the level of regulation (a sketch of such a screening follows the list below):

  • Unacceptable risk: These systems are prohibited. They include social scoring systems, subliminal manipulation techniques, systems that exploit children or persons with mental disabilities, etc.

  • High risk: These are permitted subject to compliance with AI requirements and conformity assessments.

  • Limited risk: These are permitted, but subject to information/transparency requirements.

  • Minimal/no risk: These are permitted with no restrictions, and application of the AI requirements is voluntary.
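
As a rough illustration of the screening, the Python sketch below maps a system’s practices and field of use to one of the four tiers. The tier labels, trigger sets and function name are simplifications invented for this example, not an authoritative reading of the act.

    from enum import Enum

    class RiskTier(Enum):
        UNACCEPTABLE = "prohibited"
        HIGH = "permitted subject to conformity assessment"
        LIMITED = "permitted subject to transparency requirements"
        MINIMAL = "permitted, AI requirements voluntary"

    # Illustrative triggers taken from the examples in the text above.
    PROHIBITED_PRACTICES = {
        "social scoring",
        "subliminal manipulation",
        "exploitation of vulnerable groups",
    }
    HIGH_RISK_FIELDS = {
        "biometric identification",
        "critical infrastructure",
        "education and vocational training",
        "employment",
        "essential services",
        "law enforcement",
        "migration, asylum and border control",
        "administration of justice",
    }

    def screen(practices, high_risk_field=None, transparency_obligation=False):
        """Map an AI system to a risk tier (illustrative only)."""
        if PROHIBITED_PRACTICES & set(practices):
            return RiskTier.UNACCEPTABLE
        if high_risk_field in HIGH_RISK_FIELDS:
            return RiskTier.HIGH
        if transparency_obligation:
            return RiskTier.LIMITED
        return RiskTier.MINIMAL

For example, screen({"social scoring"}) returns RiskTier.UNACCEPTABLE, while screen(set(), high_risk_field="law enforcement") returns RiskTier.HIGH.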

Two categories of high-risk AI are defined:

  1. Safety components of regulated products, which are subject to third-party assessment under the relevant sectoral legislation.
  2. Stand-alone AI systems in eight fields, namely:
    • Biometric identification of natural persons
    • Management and operation of critical infrastructure
    • Educational and vocational training (to determine access, assessment, admission)
    • Employment, workers management and access to self-employment
    • Access to and enjoyment of essential private services and public services and benefits
    • Law enforcement
    • Migration, asylum and border control management
    • Administration of justice and democratic processes

For high-risk AI systems there are requirements on the risk management system (Article 9), data and data governance (Article 10), technical documentation (Article 11), record keeping (Article 12), transparency (Article 13), human oversight (Article 14), and accuracy, robustness and cybersecurity (Article 15).
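
Paraphrasing those cross-references as data, a small (purely illustrative) mapping might look like this:

    # Articles 9-15 and the high-risk requirements they cover; the labels
    # paraphrase the text above rather than the official article titles.
    HIGH_RISK_REQUIREMENTS = {
        9: "risk management system",
        10: "data and data governance",
        11: "technical documentation",
        12: "record keeping",
        13: "transparency",
        14: "human oversight",
        15: "accuracy, robustness and cybersecurity",
    }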

A conformity assessment is required for high-risk AI systems. For seven of the eight high-risk fields, an internal conformity assessment is permitted. For high-risk AI systems relating to the biometric identification and categorisation of natural persons, the provider may choose internal conformity assessment only if harmonised standards or common specifications have been applied; otherwise the conformity assessment must be conducted by a Notified Body (a third-party assurance provider).
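
The routing between internal assessment and a Notified Body can be sketched as a small decision function. The field name and return strings below are invented for illustration and assume the eight fields listed earlier.

    def assessment_route(high_risk_field: str, standards_applied: bool) -> str:
        """Return who may perform the conformity assessment (sketch only).

        standards_applied: whether harmonised standards or common
        specifications have been applied to the system.
        """
        if high_risk_field == "biometric identification":
            if standards_applied:
                # The provider may still opt for internal assessment here.
                return "internal conformity assessment permitted"
            return "assessment by a Notified Body required"
        # The other seven high-risk fields allow internal assessment.
        return "internal conformity assessment permitted"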