European Union officially approves landmark AI legislation

The new law categorizes AI systems according to risk

The European Union (EU) has officially approved landmark legislation on artificial intelligence, the first comprehensive law of its kind in the world.

The Council of the European Union unanimously adopted the Artificial Intelligence (AI) Act, which will enter into force 20 days after its publication in the EU’s Official Journal. This law aims to harmonize rules on artificial intelligence across the EU and establish a global standard for AI regulation.

The AI Act follows a risk-based approach, meaning the higher the risk of harm to society, the stricter the rules. The legislation is designed to foster the development and uptake of safe and trustworthy AI systems by private and public actors in the EU’s single market. It also aims to ensure respect for the fundamental rights of EU citizens and to stimulate investment and innovation in artificial intelligence in Europe. The AI Act applies only to areas within the scope of EU law and provides exemptions for systems used exclusively for military and defence purposes and for research.

The new law categorizes AI systems according to risk. AI systems presenting limited risk will be subject to light transparency obligations. In contrast, high-risk AI systems will be authorized but must meet stringent requirements to gain access to the EU market. Certain AI systems, such as those involving cognitive behavioural manipulation and social scoring, will be banned due to their unacceptable risk. Additionally, the law prohibits the use of AI for predictive policing based on profiling and the use of biometric data to categorize people by race, religion, or sexual orientation.

The AI Act also addresses general-purpose AI (GPAI) models. GPAI models that do not pose systemic risks will be subject to limited requirements, focused primarily on transparency. However, GPAI models with systemic risks must comply with stricter regulations to ensure they do not harm individuals or society.

To ensure proper enforcement, the law establishes several governing bodies: an AI Office within the European Commission to enforce the common rules across the EU; a scientific panel of independent experts to support enforcement activities; an AI Board, with representatives from the member states, to advise and assist in the consistent application of the AI Act; and an advisory forum for stakeholders to provide technical expertise to the AI Board and the Commission.

The AI Act sets significant penalties for violations, calculated as a percentage of the offending company’s global annual turnover from the previous financial year or a predetermined amount, whichever is higher. Small and medium-sized enterprises (SMEs) and start-ups will face proportional administrative fines.
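To make the “whichever is higher” rule concrete, the following minimal Python sketch computes a fine as the larger of a turnover-based amount and a fixed amount. The tier names, percentages, and fixed sums are illustrative assumptions for the example only, not figures quoted from the regulation.

# Minimal sketch of the "whichever is higher" penalty rule described above.
# The percentages and fixed amounts below are illustrative assumptions,
# not values taken from the text of the AI Act.

ILLUSTRATIVE_TIERS = {
    # violation tier: (share of global annual turnover, fixed amount in EUR)
    "prohibited_practice": (0.07, 35_000_000),
    "other_obligation": (0.03, 15_000_000),
    "incorrect_information": (0.015, 7_500_000),
}

def fine_for(tier: str, global_annual_turnover_eur: float) -> float:
    """Return the higher of the turnover-based and fixed penalty for a tier."""
    pct, fixed_amount = ILLUSTRATIVE_TIERS[tier]
    return max(pct * global_annual_turnover_eur, fixed_amount)

# Example: with EUR 2 billion in turnover, a "prohibited practice" fine would be
# max(0.07 * 2e9, 35e6) = EUR 140 million under these illustrative figures.
print(f"{fine_for('prohibited_practice', 2_000_000_000):,.0f}")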

Entities providing public services must assess the fundamental rights impact before deploying high-risk AI systems. The regulation also mandates increased transparency regarding the development and use of high-risk AI systems. High-risk AI systems, as well as certain users of such systems that are public entities, must be registered in the EU database for high-risk AI systems. Users of emotion recognition systems must inform individuals when they are exposed to such technologies.

The AI Act provides an innovation-friendly legal framework and promotes evidence-based regulatory learning. The law includes provisions for AI regulatory sandboxes, which allow for the controlled development, testing, and validation of innovative AI systems in real-world conditions.

The legislative act will be signed by the presidents of the European Parliament and the Council and published in the EU’s Official Journal in the coming days. The new regulation will enter into force 20 days after publication and will apply two years after its entry into force, with some exceptions for specific provisions.