Artificial Intelligence is here, and so is its regulation

AI Act - EU Regulation 2024/1689

Introduction

Artificial intelligence is revolutionizing the business world, offering new opportunities while simultaneously introducing challenges related to security and regulatory compliance. To address these needs, the European Union has adopted the AI Act (EU Regulation 2024/1689), a legislative intervention aimed at ensuring that the development, placing on the market, and use of AI systems are safe, reliable, and ethical.

Let's analyze together how the regulation is structured in terms of objectives and requirements, paying particular attention to the cybersecurity implications for permitted AI systems. You will discover how you can not only comply with the new provisions but also turn them into a competitive advantage for your business.

Regulation overview

The core of the AI Act lies in the intent to create a uniform regulatory framework throughout the EU to ensure the responsible development and use of artificial intelligence. The main objectives of the Regulation are:

  • Promote ethical and transparent AI: The adoption of AI systems must respect fundamental rights, in line with the values of the European Union, protecting health, safety, and the environment.
  • Ensure user safety: The regulation defines stringent requirements obliging AI providers, distributors, importers, and deployers to adopt robust risk management processes.
  • Harmonize rules within the Union: The regulation prevents individual Member States from imposing unilateral restrictions on the development and use of AI systems, thereby promoting the free movement of innovative goods and services.

The regulatory model adopted is based on a classification of AI systems according to the level of risk associated with their use:

  • Unacceptable risks include practices banned due to their potential danger, such as social scoring by public authorities.
  • High-risk systems are subject to stringent regulatory obligations and are the focus of this article; a typical example is AI software for personnel selection.
  • Limited-risk systems require transparency obligations and adequate information for users, like a chatbot for customer service.
  • Minimal or no-risk systems are subject to less stringent controls, a typical example being a spam filter in an email inbox.
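To make the tiering concrete, the following is a minimal sketch of how an organization might inventory its AI systems by risk tier. The RiskTier enum and the example system names are illustrative assumptions, not terminology from the regulation itself.

```python
from enum import Enum

class RiskTier(Enum):
    """Risk tiers defined by the AI Act, from most to least restricted."""
    UNACCEPTABLE = "prohibited practice (Art. 5)"
    HIGH = "stringent obligations"
    LIMITED = "transparency obligations"
    MINIMAL = "no specific obligations"

# Hypothetical internal inventory mapping each AI system to its tier.
ai_inventory = {
    "social-scoring-engine": RiskTier.UNACCEPTABLE,
    "cv-screening-model": RiskTier.HIGH,
    "support-chatbot": RiskTier.LIMITED,
    "spam-filter": RiskTier.MINIMAL,
}

for system, tier in ai_inventory.items():
    print(f"{system}: {tier.name} -> {tier.value}")
```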

Who is involved? Key roles

The AI Act clearly defines the different actors involved in the AI value chain and their respective responsibilities:

  • Providers: Any natural or legal person who develops or commissions an AI system and places it on the market under their own name or trademark. Providers are primarily responsible for the compliance of their AI systems with the AI Act.
  • Deployers: Any natural or legal person using an AI system in the course of their professional activities, excluding personal non-commercial use. Deployers must ensure that AI systems are used safely and in compliance with the regulation.
  • Distributors and Importers: Play a role in the supply chain and have specific obligations under the regulation.

Implementation timelines

The implementation of the regulation will follow a phased approach, with different deadlines for compliance depending on the specific provisions. It is crucial for organizations to start preparing as soon as possible, as implementing compliant systems requires significant time and resources.

  • August 1, 2024: Formal entry into force of the AI Act (EU Regulation 2024/1689)
  • February 2, 2025: Application of the prohibitions on unacceptable-risk AI systems (Art. 5)
  • May 2, 2025: Deadline for codes of practice for general-purpose AI to be ready (Art. 56)
  • August 2, 2025: Application of the rules for general-purpose AI models (e.g., those behind ChatGPT) and of the penalty regime
  • August 2, 2026: General application of the regulation for all risk categories (24 months from entry into force)
  • August 2, 2027: Application of the obligations for high-risk systems embedded in products regulated under Annex I (36 months from entry into force)
  • August 2, 2028: First official review of the regulation by the European Commission

Key implementation requirements: what must companies do? (focus on high-risk systems)

For companies developing or implementing AI systems considered high-risk, compliance with the regulation involves adopting specific obligations that affect the entire product lifecycle. Here are the main requirements:

1. Risk management systems

Companies must establish a continuous risk management process. This means assessing, monitoring, and mitigating risks at every stage of development and implementation. A proactive approach allows for the timely identification of vulnerabilities and potential threats, ensuring the system's operational safety.
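As a purely illustrative sketch, a continuous risk management process can be thought of as a living risk register that is scored and reviewed over time. The fields, scales, and escalation threshold below are our assumptions, not values prescribed by the AI Act.

```python
from dataclasses import dataclass, field
from datetime import date

@dataclass
class Risk:
    description: str
    likelihood: int          # 1 (rare) .. 5 (almost certain), assumed scale
    impact: int              # 1 (negligible) .. 5 (severe), assumed scale
    mitigation: str
    last_reviewed: date

    @property
    def score(self) -> int:
        return self.likelihood * self.impact

@dataclass
class RiskRegister:
    risks: list[Risk] = field(default_factory=list)

    def needs_escalation(self, threshold: int = 12) -> list[Risk]:
        """Return risks whose score exceeds an assumed escalation threshold."""
        return [r for r in self.risks if r.score >= threshold]

register = RiskRegister([
    Risk("Training data drift degrades accuracy", 4, 3,
         "Scheduled re-validation against fresh data", date(2025, 1, 15)),
    Risk("Adversarial inputs bypass screening model", 2, 5,
         "Input sanitization and robustness testing", date(2025, 1, 15)),
])

for risk in register.needs_escalation():
    print(f"ESCALATE (score {risk.score}): {risk.description}")
```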

2. Data governance and quality

Data is the engine of AI systems. It is crucial that data used for training, validation, and testing is:

    • Relevant and representative
    • Free of bias
    • Of high quality 

Effective data governance, documented at every stage of the process, is essential to ensure reliable results that comply with regulatory requirements.
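By way of example, some of these checks can be automated. The sketch below assumes a hypothetical pandas training set for a CV-screening model; the column names and the 20% representation floor are illustrative assumptions only.

```python
import pandas as pd

# Hypothetical training set for a CV-screening model.
df = pd.DataFrame({
    "years_experience": [3, 7, 1, 12, 5, None],
    "gender": ["F", "M", "M", "M", "F", "M"],
    "hired": [1, 1, 0, 1, 0, 0],
})

# 1. Completeness: flag columns with missing values.
missing = df.isna().mean()
print("Share of missing values per column:\n", missing[missing > 0])

# 2. Representativeness: check that no group in a protected
#    attribute falls below an assumed 20% representation floor.
shares = df["gender"].value_counts(normalize=True)
underrepresented = shares[shares < 0.20]
if not underrepresented.empty:
    print("Underrepresented groups:", dict(underrepresented))

# 3. Outcome balance: a crude proxy check for label bias.
print("Positive-label rate by group:\n", df.groupby("gender")["hired"].mean())
```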

3. Technical documentation and activity logging

Detailed technical documentation forms the "heart" of compliance. Companies are required to:

    • Draft and maintain updated manuals, technical specifications, and operating procedures.
    • Implement logging systems that automatically record every event and anomaly, facilitating control and any compliance reviews.
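A minimal sketch of what such logging might look like, using Python's standard logging module to emit JSON-formatted audit records. The event fields are assumptions about what a compliance review might need, not a schema mandated by the regulation.

```python
import json
import logging

class JsonFormatter(logging.Formatter):
    """Emit each log record as a single JSON line for easy auditing."""
    def format(self, record: logging.LogRecord) -> str:
        return json.dumps({
            "timestamp": self.formatTime(record),
            "level": record.levelname,
            "event": record.getMessage(),
            **getattr(record, "context", {}),
        })

handler = logging.StreamHandler()  # in production, an append-only store
handler.setFormatter(JsonFormatter())
logger = logging.getLogger("ai_system.audit")
logger.addHandler(handler)
logger.setLevel(logging.INFO)

# Hypothetical events around one inference call.
logger.info("prediction", extra={"context": {
    "model_version": "cv-screen-1.4", "input_id": "req-8812",
    "output": "shortlisted", "confidence": 0.91,
}})
logger.warning("anomaly", extra={"context": {
    "input_id": "req-8813", "reason": "confidence below threshold",
}})
```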

4. Transparency and information for users

Clearly and comprehensively informing users about the functioning of AI systems and their related risks is an essential requirement. Such transparency fosters trust and allows for more informed interaction with the technology.

5. Human oversight

Despite the automation offered by AI systems, human intervention remains indispensable. Human supervision ensures that any errors can be corrected promptly, maintaining continuous control over the operation and output of high-risk systems. Examples:

  • A doctor uses an AI system to support diagnosis, but is required to manually validate the result before communicating it to the patient.
  • A personnel selection officer must review the automated decisions of a CV screening system to avoid discrimination.
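The sketch below illustrates one possible human-in-the-loop gate for the CV-screening example: the system refuses to act on a recommendation until a named reviewer has validated it. The data model and flow are illustrative assumptions.

```python
from dataclasses import dataclass

@dataclass
class Decision:
    candidate_id: str
    ai_recommendation: str   # e.g. "shortlist" or "reject"
    confidence: float
    reviewed_by: str | None = None
    final_outcome: str | None = None

def release(decision: Decision) -> str:
    """Refuse to act on an AI recommendation that no human has reviewed."""
    if decision.reviewed_by is None or decision.final_outcome is None:
        raise PermissionError("Human validation required before release")
    return decision.final_outcome

d = Decision("cand-042", ai_recommendation="reject", confidence=0.62)

# A recruiter reviews and overrides the low-confidence AI recommendation.
d.reviewed_by = "hr.officer@example.com"
d.final_outcome = "shortlist"
print(release(d))  # -> "shortlist"
```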

6. Accuracy, robustness, and cybersecurity

One of the crucial aspects of the AI Act concerns system resilience:

    • Resilience to errors and attacks: Systems must be designed to withstand manipulation attempts or calculation errors.
    • Protection from cyberattacks: Cybersecurity measures must be implemented to safeguard data confidentiality, integrity, and availability.
    • Prevention of specific attacks: For example, countering data poisoning, model inversion, and adversarial attacks is a priority to ensure the overall security of the system.
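As a toy illustration of the robustness idea, the sketch below probes a stubbed classifier's stability under small random input perturbations. The model, the perturbation budget, and the acceptance threshold are all assumptions; real testing would also cover targeted adversarial attacks.

```python
import numpy as np

rng = np.random.default_rng(0)

def model_predict(x: np.ndarray) -> int:
    """Stand-in for a real classifier: sign of a fixed linear score."""
    w = np.array([0.8, -0.5, 0.3])
    return int(x @ w > 0)

def stability_under_noise(x: np.ndarray, epsilon=0.05, trials=200) -> float:
    """Fraction of small random perturbations that leave the label unchanged."""
    base = model_predict(x)
    hits = sum(
        model_predict(x + rng.uniform(-epsilon, epsilon, size=x.shape)) == base
        for _ in range(trials)
    )
    return hits / trials

x = np.array([0.2, 0.1, -0.4])
score = stability_under_noise(x)
print(f"Prediction stable in {score:.0%} of perturbed trials")
if score < 0.95:   # assumed acceptance threshold
    print("Flag for deeper adversarial testing")
```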

7. Conformity assessment

Before being placed on the market, every high-risk system must undergo a rigorous conformity assessment. This procedure ensures that the system meets all regulatory requirements, including those related to cybersecurity, and that it can be put into service without risk to users.
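Purely as an illustration, a team might track its readiness with a simple internal pre-assessment checklist like the one sketched below. The items mirror the requirements discussed above, but the structure and pass criteria are our assumptions, not an official procedure.

```python
# Hypothetical internal pre-assessment checklist; each entry records
# whether evidence exists for a requirement discussed above.
checklist = {
    "risk management system documented": True,
    "data governance procedures in place": True,
    "technical documentation up to date": False,
    "automatic event logging enabled": True,
    "user transparency information published": True,
    "human oversight procedures defined": True,
    "robustness and cybersecurity testing performed": False,
}

gaps = [item for item, done in checklist.items() if not done]
if gaps:
    print("Not ready for conformity assessment. Open gaps:")
    for g in gaps:
        print(" -", g)
else:
    print("All internal checks passed; proceed to formal assessment.")
```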

How our consultancy can make a difference with the AI Act

Understanding and effectively implementing the requirements of the AI Act is a complex and multidimensional challenge. This is where our cybersecurity support stands out, transforming a potential regulatory barrier into a competitive advantage.

1. Assessment and Gap Analysis specific to the AI Act

Through an in-depth analysis of your AI systems, we identify any gaps with respect to regulatory requirements – with particular attention to security aspects. This gap analysis allows for the definition of a targeted action plan to address critical issues and ensure full compliance.

2. Design and implementation of cybersecurity measures for AI systems

We offer our expertise to:

  • Protect training data and models: By implementing solutions that ensure confidentiality, integrity, and high availability.
  • Counter specific attacks: By adopting countermeasures against data poisoning, model inversion, and adversarial attacks.
  • Create secure architectures: By designing frameworks and infrastructures capable of resisting cyberattacks.
  • Implement secure logging mechanisms: Essential for continuously monitoring system operation and responding in real-time to any anomalies.

3. Support in data governance for AI

We offer consultancy to structure and optimize data governance practices, ensuring compliance with the quality and security standards provided by the regulation. This activity is fundamental to minimizing risks and ensuring responsible data management.

4. Creation of required documentation and policies

Our team supports you in drafting the technical documentation and specific policies for AI systems. Accurate documentation not only facilitates compliance checks but also represents a strategic asset for risk management.

5. Staff training and awareness

We develop targeted training programs to educate staff on the specific risks associated with the use of AI and on best practices for secure system management. Continuous training is key to maintaining a high level of security awareness in a rapidly evolving context.

6. Consultancy for conformity assessment and continuous risk management

We assist your company in preparing for conformity assessments, ensuring that every aspect – from cybersecurity to risk management – is adequately covered. We also support the implementation of a continuous risk management system, capable of adapting to new threats and regulatory changes.

7. Testing the robustness of AI systems

As with traditional IT systems, AI systems can be tested to assess their resilience to modern cyberattacks. Drawing on the latest research into attacks on AI, we use ethical hacking techniques to emulate real-world attacks until weaknesses are found, helping you resolve them before attackers can exploit them.
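To give a flavour of this kind of testing, here is a toy white-box evasion probe: it greedily perturbs the most influential input feature of a stubbed linear classifier until the decision flips. The model, weights, and perturbation budget are assumptions; real engagements use far more sophisticated attack techniques.

```python
import numpy as np

W = np.array([0.8, -0.5, 0.3])   # hypothetical model weights (white-box test)

def model_predict(x: np.ndarray) -> int:
    """Stand-in for the system under test: sign of a linear score."""
    return int(x @ W > 0)

def evasion_probe(x: np.ndarray, step: float = 0.05, max_steps: int = 40):
    """Greedily push the most influential feature until the decision flips."""
    original = model_predict(x)
    i = int(np.argmax(np.abs(W)))             # feature with the largest weight
    # Move the score toward the opposite class.
    direction = -np.sign(W[i]) if original == 1 else np.sign(W[i])
    x_adv = x.astype(float).copy()
    for n in range(1, max_steps + 1):
        x_adv[i] += direction * step
        if model_predict(x_adv) != original:
            return x_adv, n * step
    return None, max_steps * step

x = np.array([0.2, 0.1, -0.4])
adv, cost = evasion_probe(x)
if adv is not None:
    print(f"Decision flipped with total perturbation {cost:.2f}: {adv}")
else:
    print("No flip found within budget; the model resisted this naive probe")
```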

Conclusion: preparing today for the AI of tomorrow

The AI Act (EU 2024/1689) is not just a regulatory obligation, but an opportunity to build a future where artificial intelligence is safe, reliable, and capable of promoting innovation and growth. Complying with the new requirements means preparing today to successfully face the challenges of tomorrow.

Cybersecurity emerges as a fundamental pillar in this process: protecting data, ensuring business continuity, and countering cyber threats are essential elements for transforming regulation into a competitive advantage.

Matteo Panozzo, July 9, 2025