As AI systems increasingly touch upon every aspect of our lives, the European Union (EU) seeks to mitigate the risks associated with its rapid adoption through clear, risk-based regulations. On December 9th, 2023, the EU Parliament and the Council reached a provisional agreement on the AI Act to safeguard EU citizens’ rights and safety. With the Act’s effective approval on the horizon around the second half of 2024, businesses will face stringent requirements and severe financial penalties for non-compliance. 

 

This blog post, by Jean-Sébastien Nahon, Senior Manager of Application & Blockchain Security at Kudelski Security, focuses on the rules that the AI Act covers and what Kudelski Security proposes you need to do to prepare.

 

The Differences Between an AI System and Traditional Software

The European Commission defines artificial intelligence as “systems that display intelligent behavior by analyzing their environment and taking actions – with some degree of autonomy – to achieve specific goals.” However, this broad term requires a narrower scope to accurately understand its implications. A more common definition of an “AI system” would define it as a computer system that can perform tasks that require human intelligence. For example, an AI system can perform reasoning, language processing, perception, and other human cognitive feats.

Unlike traditional software, artificial intelligence systems often use machine learning (ML) algorithms to analyze a set of data and make decisions without explicit programming. The key concern is that AI technology learns and develops on its own, posing a significant risk to the population if left completely unsupervised.

 

Why Does the EU Want to Regulate AI?

Nowadays, many governments and organizations are mass-adopting AI technology without fully understanding its potential risks and without having a clearly defined regulatory framework for its appropriate use. Under these circumstances, AI deployment poses a significant threat, raising the likelihood of violating the fundamental human rights of EU citizens and the values upheld by the European Union.

To exacerbate the problem, competent EU authorities currently have limited oversight, resources, and regulatory frameworks to enforce the proper use of artificial intelligence. There are three main arguments why AI must be regulated:

  1. Unclear and complicated rules about AI systems are preventing businesses from adequately using them.
  2. If people don’t trust AI, it will slow down its growth and adoption in Europe and make the European Union less competitive worldwide.
  3. Different rules in various countries are making it hard to have a single AI market across the world, which could threaten the EU’s control over its own digital space.

 

What is the EU AI Act?

The European Union Artificial Intelligence Act is a risk-based regulatory framework set to control the development, market adoption, and general use of AI technology. This newly introduced legal framework marks a major move in European AI regulation and is expected to have worldwide effects — much like the General Data Protection Regulation (GDPR) did in shaping data privacy laws in regions outside the European Union. The five key pillars of the AI Act are to ensure EU citizens:

  1. Safety
  2. Transparency
  3. Traceability
  4. Non-discrimination
  5. Respect

If these five elements are guaranteed, then AI is very likely not to pose a threat to EU citizen rights. Most importantly, the EU’s risk-based regulation categorizes AI systems into four groups: unacceptable-risk, high-risk, limited-risk, and minimal-risk systems.

Unacceptable-risk AI systems

AI systems that fall under this category are strictly prohibited, as they pose an imminent threat to EU citizen safety and rights. Some examples include:

  • AI software used for social scoring, such as ranking individuals by their actions, socio-economic status, or personal characteristics.
  • AI computer programs that use biometric identification such as facial recognition, fingerprinting, and other biometric methods.
  • Systems that incorporate AI for manipulating and materially distorting a person’s behavior beyond their consciousness, either causing or being reasonably likely to cause that person or another person any physical or psychological harm.

High-risk AI systems

This level of risk is where most of the requirements will apply. The European Union classifies high-risk AI systems into two main groups: one includes AI used in EU-regulated products like automobiles and medical devices, while the other involves the deployment of AI in critical social and economic areas of the population, such as infrastructure, law enforcement, education, employment, and immigration. Therefore, businesses will be required to assess compliance with the AI Act and follow various measures before launching AI systems into production:

  • Risk management
  • Data governance
  • Technical documentation
  • Audits and logs of every AI system
  • User transparency policies
  • Human oversight
  • Quality management systems to ensure compliance
  • Duty of information
  • Cooperation with authorities

Worth mentioning, performance and compliance with EU Act standards must also be monitored throughout the systems’ entire lifecycle, ensuring they remain safe and aligned with fundamental rights at all times.

Limited-risk AI systems

Limited-risk AI systems must adhere to certain transparency requirements. This means that whenever users engage with AI applications, like chatbots, they should be informed that their interaction is with an AI computer system rather than a human. Additionally, such systems must provide users the choice to decide if they wish to continue using these applications after an initial interaction. Examples of these systems include general-purpose AI (GPAI) designed for generating limited-risk content and code, like chatbots and deep fakes.

Minimal-risk AI systems

The AI Act permits the unrestricted use of artificial intelligence that poses minimal risk, which mostly includes AI-driven video games, spam filters, etc. Most AI systems in operation within the EU are classified under this category. However, it’s important to note that these will still be subject to audits and must provide precise information to EU authorities upon request.

 

General Purpose AI

The European Union’s AI Act addresses the complexities and broad applications of General Purpose AI (GPAI) systems, recognizing their potential to be adapted for a wide range of uses, including both high-risk and non-high-risk applications. Specifically, the Act identifies GPAI as AI systems designed to be used across various sectors and for multiple purposes, thereby requiring a flexible yet cautious regulatory approach. To manage the inherent risks and ensure safe integration into society, the Act proposes a dynamic framework that emphasizes the importance of continuous monitoring and assessment of GPAI applications. This involves stringent compliance requirements for developers and deployers, focusing on transparency, accountability, and data governance to safeguard fundamental rights and prevent harm.

The EU AI Act mandates that entities utilizing GPAI systems adhere to high standards of ethical usage, including clear documentation of the AI’s decision-making processes, robust data protection measures, and mechanisms to address and rectify any adverse impacts. This approach reflects the EU’s commitment to foster innovation while protecting citizens and maintaining ethical standards, balancing the potential benefits of GPAI with the need for oversight to prevent misuse and ensure these technologies contribute positively to society.

When using GPAI in a high risk application some specific controls must be enforced including :

  • Model evaluation
  • Assess and mitigate systemic risk
  • Conduct adversarial testing (red teaming of AI system)
  • Report serious incidents to the EU Commission
  • Ensure a proper cybersecurity level
  • Report on their energy efficiency

 

When Will The EU AI Act Come Into Force?

At present, the EU AI Act is still not formally approved. However, the expectation is that the final version will come into force in the second half of 2024, with progressive enforcement until 2026. Since initial discussions by the EU Council in October 2020, a provisional agreement was published in December 2023, with recent draft versions being leaked. The Act is still in the drafting stage and has not yet been approved due to the complexity of the legislation, which requires detailed implementation plans and the consent of various EU authorities and governing bodies. However, if your organization has already started its AI program, we strongly recommend that you start assessing your compliance.

 

Who Falls Under the Artificial Intelligence Act?

The Act applies to all artificial intelligence systems impacting people in the European Union. Those who are in scope include:

  • Providers and deployers of artificial intelligence systems, regardless of their location, where the output of the AI system is used within the European Union.
  • Importers and distributors placing artificial intelligence systems on the EU market.
  • Manufacturers placing products with artificial intelligence systems on the EU market under their own name or trademark.

Those who do not fall under this Act include public authorities in non-EU countries, AI systems used for purposes outside the scope of EU law (military or defense), AI systems developed for research purposes, and AI systems that are used in testing environments and not yet launched into production.

 

Penalties and Compliance Requirements

Strict fines will be imposed upon entities that do not comply with the EU AI Act, and the highest respective amount listed below will be applied. Although the final decision hasn’t been approved yet, the known penalties are:

€35,000,000 or 7% of the total worldwide annual turnover in the previous financial year for failure to comply with the unacceptable-risk definitions listed in Article 5 of the Act.
€15,000,000 or 3% of the total worldwide annual turnover in the previous financial year for non-compliance with the obligations set forth for providers of high-risk systems.
€7,500,000 or 1.5% of the total worldwide annual turnover in the previous financial year for providing inaccurate, incomplete, or misleading information to notified bodies and competent EU national authorities in response to a request.

As an exception, there will be proportionate caps on administrative fines for small and medium enterprises (SMEs).

 

How to Assess the Business Impact of AI

Organizations deploying artificial intelligence must manage several risks. Based on experience from previous EU regulation implementations such as the General Data Protection Regulation (GDPR), your company should consider addressing the most critical risks and adequately preparing before the Act is finally released. This can be achieved by identifying AI assets, validating AI objectives, properly identifying and managing risks, and securing AI solutions. The most important risks and considerations include:

Business risks: including financial and legal risks that may result from inadequately implementing AI systems — as these could potentially generate false, inaccurate, or nonsensical information. You should consider:

  • What are our specific goals in adopting AI?
  • How will users interact with AI?
  • Are we using AI for business-critical processes?
  • What is the financial risk if the AI system does not perform as expected?
  • Does the AI system comply with current legal and regulatory frameworks?

Safety risks: particularly when AI controls systems like autonomous vehicles, health diagnoses, and critical infrastructure such as energy and telecommunications. You should consider:

  • What are the potential physical safety risks associated with the AI system, and how are we mitigating them?
  • How does the AI system handle unexpected scenarios or edge cases?
  • What would happen if the AI system led to failures in critical infrastructure?
  • What are the repercussions of an AI misdiagnosis in healthcare?

Ethical risks: that arise from AI’s decision-making and performance capabilities, covering tasks traditionally carried out by humans. You should consider:

  • How can our organization ensure the fairness of the AI model?
  • Can we ensure the privacy of the data needed for training models?
  • How do we explain decisions made by AI?
  • How can we prevent harmful behavior to humans and society at large?

Cyber risks: the threat of unauthorized access, data breaches, or malicious attacks on AI systems, compromising data security, system integrity, and user privacy. You should consider:

  • What measures are implemented to protect AI from cyber threats?
  • How can AI-related data be guaranteed in terms of both quality and security?
  • How do security objectives align with overall enterprise goals?
  • How do we handle and remediate security incidents related to AI?

 

How Kudelski Security Can Help

Kudelski Security is a world leader in cybersecurity solutions and services. We help organizations navigate an increasingly complex cyber environment to reduce business risk and become resilient against threats. If you want further details, the latest version of the Act under approval is available here, and our latest webinar regarding EU AI regulation is accessible here.

If you want to know what practical, preparatory steps your company should take regarding your AI program, the specific risks of AI, what governance you should set up or how to prepare for compliance with the newly introduced EU AI Act, get in touch with me at info@kudelskisecurity.com

Was this article helpful?