AI Act of the European Union

On 1 August 2024, the Artificial Intelligence Act entered into force. The regulation was proposed on 21 April 2021 by the European Commission and adopted on 13 March 2024 by the European Parliament. It aims to create a single regulatory and legal framework for AI systems operating in the European Union. For its governance, it creates, among other bodies, the AI Office within the Commission.

Regulation and characteristics

The new regulation classifies applications according to their risk and characteristics, ranging from systems considered to pose unacceptable risks (e.g. social scoring systems and manipulative AI), which are prohibited, to minimal-risk systems, which are not regulated (including many AI applications currently available in the EU single market, such as AI-enabled video games and spam filters). Most of the text deals with high-risk AI systems, which are regulated, and, to a lesser extent, with limited-risk AI systems, whose developers and deployers must ensure that end users are aware that they are interacting with AI (e.g. chatbots and deepfakes).

General Purpose Artificial Intelligence (GPAI) systems, which are those that have the capability to serve a variety of purposes, both for direct use and for integration into other AI systems, are also regulated.

After entry into force, the following implementation deadlines apply:

  • 6 months for prohibited AI systems.
  • 12 months for GPAI systems.
  • Up to 36 months for AI systems considered high risk.

Prohibited systems

The following Artificial Intelligence systems or applications are prohibited:

  • Deploying subliminal, manipulative, or deceptive techniques to distort behaviour and impair informed decision-making, causing significant harm.
  • Exploiting vulnerabilities related to age, disability, or socio-economic circumstances to distort behaviour, causing significant harm.
  • Biometric categorisation systems inferring sensitive attributes (race, political opinions, trade union membership, religious or philosophical beliefs, sex life, or sexual orientation), except for the labelling or filtering of lawfully acquired biometric datasets or when law enforcement categorises biometric data.
  • Social scoring, i.e., evaluating or classifying individuals or groups based on social behaviour or personal traits, causing detrimental or unfavourable treatment of those people.
  • Assessing the risk of an individual committing criminal offences solely based on profiling or personality traits, except when used to augment human assessments based on objective, verifiable facts directly linked to criminal activity.
  • Compiling facial recognition databases by untargeted scraping of facial images from the internet or CCTV footage.
  • Inferring emotions in workplaces or educational institutions, except for medical or safety reasons.
  • ‘Real-time’ remote biometric identification (RBI) in publicly accessible spaces for law enforcement, except when:
    • searching for missing persons, abduction victims, and people who have been human trafficked or sexually exploited;
    • preventing substantial and imminent threat to life, or foreseeable terrorist attack; or
    • identifying suspects in serious crimes (e.g. murder, rape, armed robbery, trafficking in narcotics or illegal weapons, organised crime, and environmental crime).

High-risk systems

An AI system is always considered high risk if it profiles individuals, i.e., if it uses automated processing of personal data to assess various aspects of a person’s life, such as work performance, economic situation, health, preferences, interests, reliability, behaviour, location or movement.

The regulation also lists specific high-risk use cases, such as non-prohibited biometric identification systems, safety components in critical infrastructure, systems determining access to educational institutions, and systems assessing the risk of a person becoming a victim of crime, among others.

According to the regulation, high-risk AI providers must meet several requirements, among others:

  • Establish a risk management system throughout the life cycle of the system.
  • Conduct data governance, ensuring that training, validation and testing datasets are relevant, sufficiently representative and, to the best extent possible, free of errors and complete according to the intended purpose.
  • Draw up technical documentation to demonstrate compliance and provide the authorities with the information necessary to assess compliance.
  • Design the system to allow for human oversight to be applied in its deployment.
  • Ensure appropriate levels of accuracy, robustness and cybersecurity, and establish a quality management system.

General Purpose AI (GPAI) systems

In this particular case, providers of such systems are required to put in place a policy to comply with the EU Copyright Directive and to publish a sufficiently detailed summary of the content used for training the GPAI model.

Providers of GPAI models that pose systemic risks must also assess and mitigate those risks and ensure an adequate level of cybersecurity protection.

Reactions

Beyond the general criticism and praise the regulation has received, some large US companies have recently reacted negatively to it, warning that the EU will be left behind in the AI technology race if it does not relax the measures.

Some examples that will not initially be available in the EU include the multimodal Llama 3.2 models from Meta (the parent company of Instagram, WhatsApp and Facebook), Advanced Voice Mode (the voice version of ChatGPT) and Apple Intelligence, the latter two based on OpenAI’s technology.

As has happened before, technology companies tend to react negatively to any regulatory framework that they consider may affect their business, although it is also true that fragmented regulation between countries makes simultaneous worldwide roll-outs difficult. However, it seems unlikely that these companies will rule out a market such as the European one in the short or medium term, so they will probably end up adapting their products to the EU’s legislative framework.

Author: Carles Molina Martínez, European Patent Attorney at Curell Suñol, SLP.

Photo by Steve Johnson on Unsplash