Overview of the AI Regulation Part 2

B) Risk-based approach of the AI Regulation

As already described in the Overview of the AI Regulation Part 1, the AI Regulation follows a risk-based approach. This means that the degree of regulation depends on the severity of the risks posed by an AI application. To assess which requirements must be met, an affected organisation must first determine what type of AI system is involved. AI systems are generally divided into four categories:

– AI systems used for prohibited practices under Art. 5(1) of the AI Regulation

– high-risk AI systems under Art. 6 of the AI Regulation

– AI systems with limited risk under Art. 50 of the AI Regulation

– AI systems with minimal risk under Art. 95 of the AI Regulation

Additional requirements apply to AI systems with a general purpose; more on this under e) below.
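For readers approaching the Regulation from an engineering perspective, this four-tier structure can be pictured as a triage from the strictest to the most lenient category. The following Python sketch is purely illustrative: the enum values, the classify_risk_tier function and its boolean inputs are assumptions made for this example and do not appear in the Regulation itself.

```python
from enum import Enum

class RiskTier(Enum):
    """Illustrative mapping of the four categories described above."""
    PROHIBITED = "Art. 5 - prohibited practices"
    HIGH_RISK = "Art. 6 - high-risk systems"
    LIMITED_RISK = "Art. 50 - transparency obligations"
    MINIMAL_RISK = "Art. 95 - voluntary codes of conduct"

def classify_risk_tier(is_prohibited_practice: bool,
                       is_high_risk: bool,
                       interacts_with_humans: bool) -> RiskTier:
    # Hypothetical triage order: the strictest applicable tier wins.
    if is_prohibited_practice:
        return RiskTier.PROHIBITED
    if is_high_risk:
        return RiskTier.HIGH_RISK
    if interacts_with_humans:
        return RiskTier.LIMITED_RISK
    return RiskTier.MINIMAL_RISK
```

A real legal assessment can of course not be reduced to three booleans; the sketch only illustrates that the categories are checked in order of severity and that one tier applies as a baseline, with the general-purpose obligations described under e) stacking on top where relevant.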

a) Prohibited practices

Prohibited practices are listed in Art. 5 of the AI Regulation. The legislator assumes that the risk these systems pose to affected persons is too serious for their use to be permitted. For example, the placing on the market, putting into service and use of AI for subliminal manipulation beyond a person's awareness are prohibited if the manipulation is material and is intended to cause physical or psychological harm to that person or another person.

b) High-risk systems

The regulation of high-risk AI systems makes up a large part of the AI Regulation. The European legislator has deliberately not included a precise definition of high-risk systems in the law; instead, it aims to remain as adaptable as possible and to avoid setting excessively narrow limits. The points of reference are therefore distributed across Art. 6 and Art. 7 of the AI Regulation. According to Art. 6(1) of the AI Regulation, a high-risk system exists if the AI system is used as a safety component of a product, or is itself a product, that is subject to certain EU harmonisation legislation. In addition, Annex III contains a catalogue of life situations and applications that are classified as high-risk, and Art. 7(1) of the AI Regulation authorises the EU Commission to add further use cases to this catalogue in the future. For example, AI systems intended to be used for the recruitment or selection of natural persons, in particular for placing targeted job advertisements, analysing and filtering applications and evaluating applicants, have been classified as high-risk systems.

Requirements for high-risk systems

Art. 8 et seq. of the AI Regulation define the compliance requirements for high-risk AI systems. The central provision is likely to be Art. 9 of the AI Regulation, which requires the establishment of a risk management system covering the entire life cycle of the AI system. The risk analysis must take into account the risks to health, safety and fundamental rights that the AI system poses when used in accordance with its intended purpose.
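As a rough illustration of what "covering the entire life cycle" can mean in practice, the following sketch models a minimal risk register in Python. The LifecyclePhase values and the RiskEntry fields are assumptions chosen for this example; Art. 9 of the AI Regulation does not prescribe any particular data structure.

```python
from dataclasses import dataclass, field
from enum import Enum, auto

class LifecyclePhase(Enum):
    # Hypothetical phases; Art. 9 requires the risk management process
    # to be continuous and iterative across the entire life cycle.
    DESIGN = auto()
    DEVELOPMENT = auto()
    DEPLOYMENT = auto()
    OPERATION = auto()
    DECOMMISSIONING = auto()

@dataclass
class RiskEntry:
    """One identified risk to health, safety or fundamental rights."""
    description: str
    phase: LifecyclePhase
    affected_interest: str   # e.g. "health", "safety", "fundamental rights"
    mitigation: str
    residual_risk_acceptable: bool

@dataclass
class RiskManagementSystem:
    system_name: str
    risks: list[RiskEntry] = field(default_factory=list)

    def open_risks(self) -> list[RiskEntry]:
        # Risks whose residual level is not yet acceptable still
        # require further mitigation measures.
        return [r for r in self.risks if not r.residual_risk_acceptable]
```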

c) AI systems with limited risk

Art. 50 of the AI Regulation sets out transparency obligations for both providers and operators of AI systems with limited risk. Users must be informed that they are interacting with an AI so that they can adjust their behaviour accordingly. According to Art. 50(1) of the AI Regulation, AI systems intended to interact with natural persons must be designed in such a way that the persons concerned recognise that they are interacting with an AI, unless this is obvious to a reasonably well-informed and observant person.
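By way of illustration only, a chatbot provider could meet this obligation with something as simple as a standing disclosure shown before the first reply. The helper below is a hypothetical sketch, not a form of words required by Art. 50 of the AI Regulation.

```python
# Hypothetical disclosure text; the Regulation prescribes the duty
# to inform, not any specific wording.
AI_DISCLOSURE = (
    "Please note: you are interacting with an AI system, "
    "not a human being."
)

def start_conversation(first_model_reply: str) -> str:
    # Prepend the disclosure so the user is informed before the
    # actual interaction begins.
    return f"{AI_DISCLOSURE}\n\n{first_model_reply}"

print(start_conversation("Hello! How can I help you today?"))
```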

d) AI systems with minimal risk

For AI systems that neither fall under Art. 50 of the AI Regulation nor constitute high-risk systems, codes of conduct can be followed voluntarily in accordance with Art. 95 of the AI Regulation. According to the legislator, this is intended to strengthen public trust in AI applications.

e) Special provisions for general-purpose AI systems

For general-purpose AI systems, additional obligations apply in accordance with Art. 51 et seq. of the AI Regulation; these must be fulfilled in addition to the requirements of the respective risk level.

It should be noted that these additional obligations apply exclusively to providers of so-called GPAI (general-purpose artificial intelligence) models; operators of such systems are not affected by them. A GPAI model is an AI model that, including where it has been trained with a large amount of data using self-supervision at scale, displays significant generality and is capable of competently performing a wide range of distinct tasks, regardless of how the model is placed on the market. It can be integrated into a variety of downstream systems or applications; excepted are AI models that are used for research, development or prototyping purposes before being placed on the market.

A well-known example of a GPAI model is currently ChatGPT, or more precisely the GPT models on which it is built. Companies that intend to use AI systems, or are already using them, therefore have a number of aspects to consider, and it is strongly recommended that they prepare accordingly by setting up an AI compliance framework.