Over the last few years, AI has spilled over from strictly scientific use into the mainstream. Security systems, biometric technologies, education, law enforcement, public administration, employment: all of these sectors now interact with AI, as does every citizen through chatbots such as ChatGPT or Gemini. The explosion of potential uses, and therefore of potential users, called for regulation in the EU. In April 2021, the European Commission proposed the first EU regulatory framework for AI. Its aim is to analyse and classify AI systems according to the risk they pose to users, which in turn determines the level of regulation they face. After three years of work, the final version of the text was adopted by the European Parliament on 13 March 2024 and entered into force on 1 August 2024. This text is considered the centrepiece of the EU framework on AI, supported by other pre-existing acts, which is why we will study its most important points.
Read the full text: Regulation (EU) 2024/1689 – EN – EUR-Lex (europa.eu)
Defining in order to regulate
For a technology as complicated and diverse as AI, the first task of the Act was to give AI a definition and a way of classifying it. Article 3(1) defines it as follows: "'AI system' means a machine-based system that is designed to operate with varying levels of autonomy and that may exhibit adaptiveness after deployment, and that, for explicit or implicit objectives, infers, from the input it receives, how to generate outputs such as predictions, content, recommendations, or decisions that can influence physical or virtual environments." This definition is deliberately broad, encompassing a wide range of technologies, including machine learning, logic- and knowledge-based approaches, and other methods that can perform tasks typically requiring human intelligence. From this very wide definition, AI systems are classified according to the risk they represent. The Act defines three categories: prohibited AI systems, general-purpose AI, and high-risk AI systems.
After providing a definition of AI systems, the Act also gives guidance on how they should be designed. The regulation emphasises that AI should be human-centric, meaning that it should be designed and used to improve human well-being. AI systems should also be trustworthy, ethical, and aligned with EU core values. The Act supports the development of AI systems that serve as tools for people rather than replacing them, with the ultimate aim of benefiting society at large.
A risk-based approach
From such a broad definition emerges the need for a more thorough classification, so that every AI system can be regulated effectively. For each class of systems, the Act sets a deadline by which its provisions start to apply.
Prohibited systems – 6 months
Prohibited AI systems are handled in Chapter II, Article 5 of the AI Act. They can be described as systems that threaten people's integrity or privacy. This includes the deployment of subliminal, manipulative, or deceptive techniques, the exploitation of vulnerabilities, social scoring, and biometric categorisation systems, to cite only a few. Exceptions can be made to allow their use in the very specific cases described in the article, to ensure the protection and security of individuals and when not using the tools would cause harm. In these narrowly framed cases, an impact assessment must be carried out beforehand, and any operation must be conducted under the control of a judicial authority.
Because of the danger they pose, the Act applies to these systems six months after its entry into force.
General Purpose AI – 12 months
In Chapter V, a general-purpose AI (GPAI) model is described as "an AI model, including where such an AI model is trained with a large amount of data using self-supervision at scale, that displays significant generality and is capable of competently performing a wide range of distinct tasks regardless of the way the model is placed on the market and that can be integrated into a variety of downstream systems or applications". GPAI notably includes generative AI. This inclusion shows the EU's ability to adapt: generative AI was far less developed in 2021 and was added to the Act later, in response to the boom in these technologies.
According to Article 53, providers of these models must draw up technical documentation and information to ensure transparency. They must also respect the Copyright Directive. Finally, the EU pays particular attention to models presenting systemic risk. Under Article 51, a GPAI model is presumed to present systemic risk when the cumulative amount of compute used for its training is greater than 10²⁵ floating-point operations (FLOPs). Such models must be notified to the Commission, which will assess whether they represent a systemic risk.
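To make that order of magnitude concrete, here is a minimal back-of-the-envelope sketch in Python. It assumes the common scaling-law heuristic that training compute is roughly 6 × parameters × training tokens; this heuristic, the function names, and the example figures are illustrative assumptions on our part, not anything the Act prescribes (the Act only sets the 10²⁵ FLOP presumption threshold).

```python
# Back-of-the-envelope check against the Article 51 compute presumption.
# The "6 * N * D" approximation of training FLOPs is a common heuristic
# from the scaling-law literature, NOT something the AI Act prescribes;
# all names and example figures below are illustrative assumptions.

SYSTEMIC_RISK_THRESHOLD_FLOPS = 1e25  # Article 51(2): cumulative training compute

def estimated_training_flops(n_params: float, n_tokens: float) -> float:
    """Approximate total training compute: ~6 FLOPs per parameter per token."""
    return 6.0 * n_params * n_tokens

def presumed_systemic_risk(n_params: float, n_tokens: float) -> bool:
    """True if the estimate exceeds the 10**25 FLOP presumption threshold."""
    return estimated_training_flops(n_params, n_tokens) > SYSTEMIC_RISK_THRESHOLD_FLOPS

# Example: a hypothetical 70-billion-parameter model trained on 15 trillion
# tokens lands at about 6.3e24 FLOPs, just under the threshold.
print(estimated_training_flops(70e9, 15e12))   # 6.3e+24
print(presumed_systemic_risk(70e9, 15e12))     # False
```

In this illustrative example, doubling either the parameter count or the token count would push the estimate past 10²⁵ FLOPs and trigger the notification duty.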
High-risk AI – 24 to 36 months
Chapter III handles high-risk AI systems, which are defined in Article 6. These systems are the ones falling within the use cases classified in Annex III of the Act, including, for instance, non-banned biometrics, safety components in the management and operation of critical infrastructure, and law enforcement. AI systems are always considered high-risk if they profile individuals. To help providers determine whether their systems qualify as high-risk, the Commission will provide guidance with a list of practical examples 18 months after the Act's entry into force.
These systems are subject, among other obligations, to the establishment of a risk-management system, the drawing-up of technical documentation, and the design of record-keeping capabilities. This serves the transparency requirement championed by the EU.
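As a purely illustrative simplification (the legal test in Article 6 and Annex III is far more nuanced), the classification logic described above might be sketched as follows; the use-case labels and the function are hypothetical placeholders, not taken from the Act:

```python
# Purely illustrative sketch of the high-risk decision flow described above.
# This is NOT the legal test: the labels below are invented placeholders,
# and Article 6 / Annex III contain many conditions omitted here.

ANNEX_III_USE_CASES = {
    "non_banned_biometrics",
    "critical_infrastructure_safety_component",
    "law_enforcement",
}

def is_high_risk(use_case: str, profiles_individuals: bool) -> bool:
    """Rough check: profiling is always high-risk; otherwise look at Annex III."""
    if profiles_individuals:
        return True  # systems profiling individuals are always high-risk
    return use_case in ANNEX_III_USE_CASES

print(is_high_risk("law_enforcement", profiles_individuals=False))  # True
print(is_high_risk("spam_filter", profiles_individuals=False))      # False
```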
Sharing responsibility among stakeholders
The responsibility to adapt the systems, to provide documentation for transparency, and to ensure that the systems are subject to human oversight falls on both providers and deployers, though most obligations rest with providers. These stakeholders are defined as follows in the Act:
- ‘provider’ means a natural or legal person, public authority, agency or other body that develops an AI system or a general-purpose AI model or that has an AI system or a general-purpose AI model developed and places it on the market or puts the AI system into service under its own name or trademark, whether for payment or free of charge
- ‘deployer’ means a natural or legal person, public authority, agency or other body using an AI system under its authority except where the AI system is used in the course of a personal non-professional activity
Deployers and users are also involved in implementing the Act. Obligations across stakeholders include ensuring transparency, maintaining records, conducting conformity assessments, and providing accurate information about an AI system's capabilities and limitations. Market surveillance is to be established to ensure that stakeholders and their systems comply with these requirements, and national authorities are empowered to monitor, investigate, and enforce the rules.
Innovation & Regulation: a balance to strike
The risk of such a regulation is that a rigid legal framework could stifle innovation. To avoid this trap, Chapter VI of the AI Act is devoted to "Measures in Support of Innovation". It contains provisions to support innovation, particularly aimed at small and medium-sized enterprises (SMEs) and startups, and seeks to ensure that the regulatory environment encourages the development of AI systems that are safe and trustworthy. Specific measures, such as regulatory sandboxes, help innovators test AI technologies under regulatory supervision without being immediately subject to the full weight of the regulation. The whole challenge is to strike a balance between protecting the public interest and encouraging innovation.
A framework completed by existing laws
The AI Act is a legal breakthrough: it is the first act to provide such a complete framework on AI. Nevertheless, it operates alongside existing EU laws on data protection (e.g. the GDPR), copyright (the Copyright Directive), and consumer protection, ensuring that AI systems do not infringe on the rights and remedies provided under these laws. Moreover, the regulation does not override national laws that pursue other legitimate public-interest objectives.
Even complemented by national regulation, the AI Act aims to harmonise the legal framework across all EU member states, preventing fragmentation through consistent obligations and standards for AI systems, regardless of where in the EU they are developed, marketed, or used.
Conclusion
This Act and its associated regulations position the EU as a global leader in the ethical and safe development of AI. The regulation should set a benchmark for other regions and countries, influencing global standards and practices in AI governance. Its firm stance addresses both the challenges and the opportunities posed by AI, while fostering innovation and ensuring trustworthy systems for users. Already a milestone, the Act seems well placed to adapt to upcoming technologies.
Author: Eugenie Descour (EOS trainee)
Reviewed: Elodie Reuge, Anna Pomortseva (EOS)