The AI Act and its implications for organisations

The iDanae Chair (where iDanae stands for intelligence, data, analysis and strategy in Spanish) for Big Data and Analytics, created within the framework of a collaboration between the Polytechnic University of Madrid (UPM) and Management Solutions, has published its 4Q24 quarterly newsletter on the AI Act and its implications for organisations.


 


The AI Act and its implications for organisations



1. Introduction

Artificial intelligence (AI) has revolutionized the global market, driving technological advancements across a wide range of industries. From automating routine tasks to enhancing decision-making through complex algorithms, AI has become a key engine of innovation and productivity. Its ability to analyse vast amounts of data, learn from patterns, and adapt to new situations has enabled businesses to unlock new levels of efficiency and competitiveness. This growing influence has also been recognized at the highest levels, with the 2024 Nobel Prize in Physics awarded to John Hopfield and Geoffrey Hinton for their foundational contributions to AI, highlighting the transformative impact of machine learning on modern technology. 

 

As AI continues to advance and permeate various sectors of society, the need for robust regulatory frameworks has become increasingly apparent. Recognizing the profound impact that AI technologies can have on individuals, businesses, and societies, in April 2021 the European Union proposed the AI Act, the first comprehensive regulation aimed at ensuring the protection of the fundamental rights of individuals and the responsible development and use of AI. This landmark regulation not only seeks to define the roles and responsibilities of those involved in AI systems, but also categorizes these systems by level of risk, with tailored requirements designed to protect public safety, ethical standards, and human rights. At the same time, the regulation seeks to reduce administrative and financial burdens for businesses, in particular small and medium-sized enterprises (SMEs). 

The AI Act is part of a wider package of policy measures to support the development of trustworthy AI, which also includes the AI Innovation Package and the Coordinated Plan on AI. Together, these measures guarantee the safety and fundamental rights of people and businesses when it comes to AI. They also strengthen uptake, investment and innovation in AI across the EU.  

The AI Act entered into force on 1 August 2024 and becomes fully applicable two years later, with some exceptions: prohibitions take effect after six months, the governance rules and the obligations for general-purpose AI models become applicable after 12 months, and the rules for AI systems embedded into regulated products apply after 36 months. To facilitate the transition to the new regulatory framework, the Commission has launched the AI Pact, a voluntary initiative that seeks to support the future implementation and invites AI developers from Europe and beyond to comply with the key obligations of the AI Act ahead of time. 

However, this approach to AI is not homogeneous across jurisdictions. For example, in the United States, several documents have been published with principles and guidance for AI to be used in a responsible and safe way (such as the 2022 AI Bill of Rights, the 2022 AI risk management framework, the 2023 Executive Order on Principles for the Development and Use of Safe and Secure AI, or the associated documents published in 2024 to interpret the implementation of the Executive Order). In China, several state agencies issued a set of interim administrative measures for the development of generative AI services in 2023. In Latin America, work is underway to adopt regulations addressing some aspects of the use of AI systems (such as the initiatives presented in Mexico for the ethics of Artificial Intelligence in 2023, the regulation to promote the use of AI in favour of economic and social development in Peru in 2023, or the law on the development, promotion and ethical and responsible use of artificial intelligence in Brazil, approved at the end of 2024). However, none of these regulations, principles or initiatives have the coercive and comprehensive character of the European AI Act (except for the Brazilian law, which has similarities with the European one). This Act therefore places the EU at the forefront of regulation, although it may increase differences in the competitive environment with other jurisdictions.

In this whitepaper, the key elements of the regulation are reviewed, the implications for organisations are explored, and a use case is developed to help understand some practical aspects described in the document. 
 

2. The AI Act: a summary of main requirements

In this chapter, the main requirements of the AI Act for AI systems are reviewed, outlining the new requirements that organizations need to follow to use AI responsibly. 

Definition of an AI system and identification of roles 

According to the AI Act, an AI system refers to a machine-based system designed to operate with varying levels of autonomy, which may exhibit adaptiveness after deployment. For explicit or implicit objectives, it processes input to infer how to generate outputs, such as predictions, content, recommendations, or decisions, that can influence physical or virtual environments. In other words, AI systems do not necessarily respond to a set of predefined instructions; they learn, adapt, and evolve in ways that can directly shape outcomes in the real world. 

The AI Act also provides other important definitions that clarify crucial roles in AI-related projects, such as the provider and deployer roles. A provider is a natural or legal person that develops an AI system or general-purpose AI model and places it on the market, either under its own name or trademark, for payment or free of charge. A deployer, on the other hand, is a natural or legal person that uses, under its authority, an AI system created by another entity. The distinction between these two roles is key in the regulation, as each one has its own responsibilities. 

The scope of this regulation is limited to the European Union: it concerns providers and deployers based in the EU. However, providers outside the European Union must also satisfy the requirements established in the AI Act if they wish their systems to be placed on the market or used inside the Union.  

Classification of AI systems 

The EU AI Act establishes a classification for artificial intelligence systems based on the level of risk they pose to security, fundamental rights, and general welfare. The main risk categories in which these systems are classified are detailed below (see figure).

 


1. Prohibited AI Systems 

The AI Act bans AI systems deemed to pose unacceptable risks to individuals or society. These include technologies designed to subconsciously manipulate behavior or exploit vulnerabilities, such as targeting children or people with disabilities to influence decisions. It also prohibits discriminatory profiling, where individuals are categorized based on characteristics for unethical purposes, and social scoring systems, which judge or restrict individuals based on personal or personality characteristics. 

Real-time remote biometric identification is also banned by the AI Act, in order to protect citizens' privacy and fundamental rights. Only in properly justified cases, such as duly authorised law-enforcement and judicial activities, will these sorts of practices be allowed. 

2. High-Risk AI Systems

High-risk AI systems are those that, if they fail or are used incorrectly, can significantly impact the safety, health, or fundamental rights of individuals. While these systems are permitted, they must comply with strict regulations regarding conformity, safety, and transparency before deployment. 

These systems are found across various sectors, including medicine, critical infrastructure management, education, or employment. For example, in education, high-risk AI can affect student evaluations and admissions processes, making it crucial to ensure these systems are fair and unbiased. Similarly, in employment, AI is increasingly utilized for recruitment and performance assessments, raising concerns about potential discrimination and the need for transparent practices. Additionally, high-risk AI plays a significant role in essential services, law enforcement, migration management, and the justice system, where its impact on safety and fundamental rights is closely monitored. 

3. Rest of AI Systems (limited or no risk)

AI systems which are not classified under the categories mentioned above are deemed low risk. For general-purpose systems not classified as high risk, the AI Act defines the notion of systemic risk. A General-Purpose AI model is deemed to have systemic risk if its reach is sufficiently large, either by handling vast amounts of data or by affecting a large number of people. AI models with great levels of complexity, specifically those whose cumulative training compute exceeds 10^25 floating-point operations, are also considered to have systemic risk. 
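
As a purely illustrative aid (not part of the Act), the threshold can be related to a rough estimate of training compute. The sketch below uses the common approximation of 6 floating-point operations per parameter and per training token, an assumption borrowed from the scaling-law literature rather than a method prescribed by the regulation.

    # Rough, illustrative check of the AI Act's 10^25 FLOP systemic-risk threshold.
    # The 6 * parameters * tokens rule of thumb is an approximation from the
    # scaling-law literature, not a formula defined in the AI Act itself.
    SYSTEMIC_RISK_THRESHOLD_FLOP = 1e25

    def estimated_training_flop(n_parameters: float, n_training_tokens: float) -> float:
        """Approximate cumulative training compute (forward and backward passes)."""
        return 6.0 * n_parameters * n_training_tokens

    def presumed_systemic_risk(n_parameters: float, n_training_tokens: float) -> bool:
        """True if the estimated training compute exceeds the 10^25 FLOP threshold."""
        return estimated_training_flop(n_parameters, n_training_tokens) > SYSTEMIC_RISK_THRESHOLD_FLOP

    # Hypothetical example: a 100-billion-parameter model trained on 20 trillion tokens
    # yields 6 * 1e11 * 2e13 = 1.2e25 FLOP, above the threshold.
    print(presumed_systemic_risk(1e11, 2e13))  # True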

Requirements for High-risk AI systems 

Whenever a given AI system is deemed high risk, several requirements must be met by all parties and agents that are involved in the development and use of the system. Below is a summary of the main regulatory requirements these systems must meet according to the AI Act. 

All high-risk AI systems must have a detailed risk management plan. Its main objective is to assess the risks the AI system may pose while being used for its intended purpose and to adopt measures to manage and mitigate them. This plan should be continuously reviewed and updated and should consider as many scenarios as possible, to ensure its completeness and overall effectiveness. 

AI systems rely on data for training, validation and testing. The quality of this data is directly correlated with the performance of the system. For high-risk AI systems, extra care must be taken to ensure the quality of the data they use. This affects all stages where data is involved: its collection, preparation, structuring and updating. One of the most relevant points is to ensure that the data being used is as complete as possible, free of errors and free of bias.  

Given the complexity of many AI systems, it is essential that high-risk AI systems provide complete and accessible technical documentation before being deployed. This documentation must cover all components, including the data used for development, a detailed description of how the system works, any pre-trained models it utilizes, and the risk management plan. The goal is to make the system as understandable as possible for deployers, with clear instructions on usage, its intended purpose, accuracy, and potential risks or malfunctions.

To help providers create this documentation, the EU will offer standardized forms outlining the required information. Small companies, including start-ups, will benefit from a simplified version of these forms, making it easier for them to provide the necessary details in an accessible way. 

The new regulation outlines the importance of this transparency for high-risk AI systems. Not only should providers of high-risk AI systems present their risk management plan to the EU authorities; anybody who uses a high-risk AI system should also be aware of its intended purpose, its accuracy, its potential risks and how to interpret the output that the system yields. To this end, the systems should be designed to be as accessible as possible, and their providers should make themselves reachable by providing contact information to their users. 

The speed at which AI models grow in terms of accuracy, efficiency and capabilities is staggering. Every few months, the most important tech companies introduce updates and upgrades to their models that significantly improve their overall performance. However, they are all bound to make mistakes every now and then, even with the simplest of tasks. This is unlikely to be resolved any time soon, especially considering the probabilistic nature of almost all advanced AI models. Therefore, it is always advisable for AI users to have the option to monitor or analyze the output given by an AI system before validating it. With the AI Act, this will be mandatory for high-risk AI systems.  

The legislation states that these systems must be built so as to facilitate this procedure: their output must be readable and clear to the user. Moreover, these systems should point out any tendencies or biases of the underlying model, especially for models designed to create recommendations for decisions to be taken by natural persons. It should always be possible to modify or override the output given by a high-risk AI system, and the user should always have the possibility to stop the operation of a high-risk AI system, bringing it to a halt in a safe state. 

High-risk AI systems must include detailed cybersecurity plans, as they could be targeted by cyber-attacks due to their potential impact on health, fundamental rights, or decision-making. While many AI systems already have security measures in place, the AI Act requires high-risk systems to incorporate specific actions within their risk management plan to ensure they are prepared for security breaches. 

These security measures also cover internal risks, such as biases, inconsistencies, or failures within the model itself. High-risk AI systems must be designed to detect these issues and include backup plans to address and correct them. 

Requirements for General-Purpose AI systems 

The set of requirements that General-Purpose AI systems must implement is considerably smaller than for high-risk ones. However, this does not mean that these systems are exempt from the AI Act. They must apply certain security and accessibility measures, which become more restrictive when dealing with General-Purpose AI systems with systemic risk. 

Providers of General-Purpose AI models must draw up technical documentation of the model, which should include a basic description, its intended use, information on how the model has been trained or tested and the data it has used. This information must be accessible to the EU authorities upon request and be kept constantly updated. 

Moreover, providers of General-Purpose AI models must provide information on how the model works to its deployers. The idea behind this is that users who intend to use these models or integrate them into their own AI systems have sufficient information to use them correctly and avoid any potential risk of misuse. This information, therefore, must be somewhat more technical than the documentation previously mentioned, and must include details about the format of input and output data, the software or hardware required for its use and the overall model architecture. It should be clear from these instructions how a deployer can integrate the model, what data it requires and how to train and test it. 
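
By way of illustration, the sketch below shows the kind of integration information a provider could summarise for deployers; all field names and values are hypothetical assumptions for the example, not terms defined in the AI Act.

    # Hypothetical summary of the integration information a General-Purpose AI
    # model provider could share with deployers. Field names and values are
    # illustrative assumptions, not wording required by the AI Act.
    model_integration_info = {
        "model_name": "example-gpai-model",
        "intended_use": "general-purpose text generation",
        "architecture": "decoder-only transformer, 7B parameters",
        "input_format": "UTF-8 text prompt, up to 8,000 tokens",
        "output_format": "UTF-8 text completion",
        "hardware_requirements": "1 GPU with at least 24 GB of memory for inference",
        "software_requirements": ["python>=3.10", "provider inference runtime"],
        "training_and_testing": "summary of training data sources and evaluation benchmarks",
        "known_limitations": ["may produce factual errors", "limited multilingual coverage"],
    }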

All General-Purpose AI models are required to provide this information, except those that are free and released under an open-source licence, that is, models whose parameters, architecture and overall development are open to the public and can be modified by the user. 

Requirements for General-Purpose AI Systems with systemic risk 

If a General-Purpose AI system is considered to have systemic risk, besides the requirements just mentioned above, it must also satisfy additional measures to avoid any potential threats coming from this detected risk. 

Since these models may be very complex, they must be properly and exhaustively evaluated to assess their overall performance. This testing should also be performed under “adversarial conditions”, that is, conditions under which the AI system might incur potential risks, with a view to identifying and mitigating systemic risks. 

Whenever these risks are identified, they should be communicated immediately to the competent authorities, together with corrective measures to address them. The final user or deployer should always consider these possible risks when using the model. 

AI Transparency 

The use of AI is becoming more and more common in our everyday lives: so common that there may be situations where the final user is not aware that they are interacting with an AI system. This can be quite dangerous, especially in an era of misinformation, fake news and biased recommendations. 

The AI Act sets out to regulate this by ensuring that, whenever a user is dealing with an AI system, it is made clear that they are interacting with an AI system. This affects all forms of AI model output, whether it is text, audio, image or video. Developers of the model are obliged to ensure that the user can easily see that the output is artificially generated. 
Additionally, other restrictions apply to biometric categorisation and emotion recognition: whenever these sorts of models are applied, the people affected should always be made aware of that fact, so that they can exercise their rights in accordance with the rest of the European regulation on this matter. 
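
As a minimal sketch of what this obligation could look like in practice (the disclosure wording and field names are assumptions, not text mandated by the Act), an application could attach an explicit label to every generated output:

    # Minimal sketch: attaching an explicit disclosure to AI-generated content so
    # the end user can see it is artificially generated. The wording and field
    # names are illustrative assumptions, not requirements spelled out in the Act.
    def label_ai_output(content: str, model_name: str) -> dict:
        """Wrap model output with a human-readable AI-generation disclosure."""
        return {
            "content": content,
            "disclosure": "This content was generated by an AI system.",
            "generator": model_name,
        }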

The most notable exception to these transparency policies concerns security measures. When trying to detect, prevent or investigate a criminal offence, some of these restrictions may be lifted. However, competent authorities must properly document why this is required and must have the corresponding judicial and administrative permissions to do so. A great deal of the technical detail of the regulation involves explaining the conditions under which authorities can lift these restrictions, as the EU has a strong interest in preserving fundamental rights regarding its citizens' privacy and overall security. 

 

3. Implications of the AI Act for organisations

The AI Act may pose different questions with regard to its implications for organisations: How can an organization adapt to this regulation? What steps must be followed to comply with it? How can a successful integration of AI be achieved? 

These questions may be addressed by creating and implementing a detailed plan for the adoption of AI into the organization, considering the new regulatory framework established in the AI Act.  

AI Integration plan: 4 steps to successfully integrate AI into an organization 

1. Definition of an AI Strategy and the development of an AI framework

The first step is to develop an AI strategy, which should define the organization’s position on the use of AI, establish the scope of its application, and set the desired degree of AI adoption across all levels of the organization.  

This strategy should materialise in the definition of an AI framework and an adoption plan, aligned to the elements previously defined in the strategy, detailing milestones, stakeholders and budget needs. This framework would at least include the review and potential evolution of the following components: 

  • Governance and organization: the definition of roles, functions, responsibilities and committees in the AI framework. Update of the three lines of defence model. The need to upgrade the skills of each area to ensure its ability to carry out the allocated responsibilities. Decisions on the creation of specific structures for modelling and for supporting business needs. 
  • Policies and procedures: definition and update of policies and procedures affected by the incorporation of AI: model development and validation, use, ethics, model risk management, operational risk, vendor policies, data protection and cybersecurity, etc.  
  • Technology and data needs: review of the IT needs, the structure, and the environments to be created (sandboxes, development, deployment, etc.), to ensure sound performance in the construction and use of AI models and systems. This involves selecting cloud infrastructure, computational capacity and storage needs, while managing costs and vendor risks. Specialized hardware, distributed computing, and Big Data infrastructures could be leveraged for efficiency. The framework must support the evolution from legacy systems, enabling seamless integration and maintenance across development, testing, and production environments (e.g., MLOps). 
  • Use case definition and selection / prioritization: long list of use cases; definition of a detailed benefit-cost-risk analysis; systems to be used; means for monitoring performance and results. 
  • Corporate culture: defining and ensuring a full plan for upskilling, reskilling, and disseminating the AI culture across the whole organization, to ensure an effective, responsible and secure use of AI. Tailored training programs should cover technical capabilities, ethical considerations, and the practical implications of AI in daily operations. 

2. Operationalisation

The defined framework needs a proper operationalisation, which includes at least the following aspects: 

  • The adjustment and modification of policies, or creation of new ones, that must be approved at the appropriate level. 
  • The creation of methodologies and tools for the actual implementation of the framework: 
    • Methodologies for the design and creation of AI systems and their validation (including concepts such as bias and fairness analysis and treatment, interpretability, robustness, etc.) 
    • Elements for the management of the models (the tiering and risk classification process, the process for testing compliance with the regulation, expected documentation, the monitoring process, indicators and KPIs, adapting existing model risk management and inventory tools, etc.); a sketch of such an inventory record is included after this list 
    • Processes and tools for measuring and managing risks emerging or amplified by the use of AI 
    • Reporting needs (monitoring dashboards, internal reporting, external documentation to be sent to the EU in case of high-risk models, etc.) 
  • The low-level definition and setting-up of the IT infrastructure, the architecture, and data needs. Implementation of key features, such as security measures, real-time data handling, and optimized system interactions.  
  • Integration with the data framework: setting comprehensive processes for acquiring and managing diverse, high-quality data, ensuring reliability, traceability, ethical use, and regulatory compliance.  
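
As an illustration of what these model-management elements could look like in practice, the sketch below shows a hypothetical model inventory record; every field name and value is an assumption made for the example, not a structure required by the AI Act.

    # Hypothetical entry of an internal AI model inventory supporting the
    # operationalisation of the framework (tiering, regulatory classification,
    # documentation and monitoring KPIs). All fields and values are illustrative.
    model_inventory_entry = {
        "model_id": "example-model-001",
        "owner": "business unit responsible for the use case",
        "risk_tier": "high",
        "ai_act_classification": "high-risk (Annex III)",
        "intended_purpose": "short description of the approved use",
        "documentation": ["technical file", "risk management plan", "instructions for use"],
        "monitoring_kpis": {"accuracy": None, "bias_alerts": None},  # filled in by monitoring
        "last_validation_date": None,
        "reporting": ["internal dashboard", "documentation for EU authorities if required"],
    }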

3. Testing phase

Application of the implemented framework, using the tools and environment to build a limited but representative set of use cases, to ensure that all elements have been properly set up, and to fine-tune those elements that may require adjustment. This phase could be carried out in parallel with the operationalisation.  

4. Full deployment and change management

Once the framework has been tested, it can be deployed to the whole organisation. To ensure a smooth integration and deployment of the framework into the organization, a strategic approach to change management has to be considered. A crucial first step is developing a stakeholder engagement process. This involves identifying key internal and external stakeholders, understanding their expectations, and actively involving them in the AI adoption process. Transparent communication is vital, as it fosters trust and helps address concerns about AI's impact on roles, workflows, and the broader organizational strategy. Regular updates on the progress of AI projects, their objectives, and their potential benefits can help build alignment across all levels of the organization.  

In addition, beyond initial training, continuous improvement mechanisms should be in place, encouraging feedback from users and adapting AI systems and processes to meet evolving needs. By cultivating a culture of learning and adaptability, organizations can maximize the value of AI and ensure its successful integration into their operations.

The AI Act in action: application through a use case 

In this section, the AI Act's requirements for developing an AI model are exemplified through the construction of an automated AI system. 

The AI system considered is a CV data extractor. Its primary use is to extract specific information from CVs: name, personal information, education, work experience, English level, etc. This automatically extracted information could be used, for example, to filter job candidates according to the specific requirements of a given position. 

The functionality of the system is simple: via a webpage, a human resources employee can upload several CVs from different candidates and click a button that executes the data collection.

Then, an AI model reads the content of each CV and extracts the desired information. Once this data is collected, the application automatically generates a CSV file with the extracted information arranged in columns.  
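
A minimal sketch of such an extraction step is shown below, assuming a generic large language model behind a placeholder call_llm function; the field names, the prompt and the helper names are illustrative assumptions, not part of any specific product.

    # Minimal sketch of the CV data extractor. `call_llm` is a placeholder for
    # whichever commercial general-purpose model API the organisation uses;
    # the field names and prompt wording are illustrative assumptions.
    import csv
    import json

    FIELDS = ["name", "email", "education", "work_experience", "english_level"]

    def call_llm(prompt: str) -> str:
        """Placeholder for a request to the underlying general-purpose AI model."""
        raise NotImplementedError("Connect this to the provider's API.")

    def extract_cv_fields(cv_text: str) -> dict:
        """Ask the model to return the requested fields as JSON and parse the answer."""
        prompt = (
            "Extract the following fields from the CV below and reply only with "
            f"a JSON object with the keys {FIELDS}.\n\nCV:\n{cv_text}"
        )
        return json.loads(call_llm(prompt))

    def export_to_csv(cv_texts: list, path: str = "candidates.csv") -> None:
        """Run the extraction over a batch of CVs and write one row per candidate."""
        with open(path, "w", newline="", encoding="utf-8") as f:
            writer = csv.DictWriter(f, fieldnames=FIELDS)
            writer.writeheader()
            for cv_text in cv_texts:
                writer.writerow(extract_cv_fields(cv_text))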

1. Classification of the system

To address the impact that the AI Act has on this CV data extractor, the role that the company plays in the use of the AI system must first be assessed. Since the company has not built the model itself, but uses one created by another agent in its application, the company is a “deployer” of the AI system. Therefore, it must comply with the specific obligations that deployers have under the AI Act, excluding the ones that are reserved only for providers. However, the AI system used is an existing and commercially available large language model. Since such a model is not specifically designed for CV processing, the intended purpose of the system can be considered to have been modified. Article 25 states that “Any distributor, importer, deployer or other third-party shall be considered to be a provider of a high-risk AI system for the purposes of this Regulation and shall be subject to the obligations of the provider under Article 16, in any of the following circumstances: they modify the intended purpose of an AI system, including a general-purpose AI system […]”. Therefore, since this application modifies the intended purpose of the general-purpose model, the organization shall also comply with the obligations for providers. 

A determining factor in deciding which requirements an AI system must satisfy is its risk level. Is this AI system prohibited? Is it high-risk? Or can it simply be treated as a General-Purpose AI system? 

Even though it can have a considerable impact on important decisions such as job applications, this AI system is not designed to harm or discriminate against people. Its main purpose is to automate part of the recruitment process by extracting concise information from CVs. Therefore, it is not amongst the prohibited AI practices, and the company can operate this application subject to adapting it to the new regulation. 

However, this application can have a direct influence on employment. Article 6(2) states that all systems referred to in Annex III qualify as high-risk AI systems. The fourth paragraph of Annex III describes the relevant category of high-risk AI systems: 

“4. Employment, workers management and access to self-employment:  (a) AI systems intended to be used for the recruitment or selection of natural persons, in particular to place targeted job advertisements, to analyze and filter job applications, and to evaluate candidates.“ 

This matches almost entirely the scope of the application built by the company. Hence, this application should be considered a high-risk AI system.  

2. Compliance with the AI Act

Some of the requirements from the AI Act are analysed to show an example of practical implications on this specific use case. 

  • Data Privacy: The application is built to extract sensitive information, such as age or email addresses, which implies the obligation of ensuring data privacy. Therefore, the implementation of the system should address this aspect from a formal and technological point of view, and the functionality should be designed to ensure proper use (for example, restricting the type of information that can be extracted, removing the option of downloading the information or taking screenshots, securing the storage of the information, restricting access rights, including the system in the process for managing private information, preventing the use of the system for processing documents other than CVs, etc.). In addition, other European regulations regarding data privacy, such as the General Data Protection Regulation (GDPR), should be considered.  
  • Transparency: the transparency requirement implies that it should be made clear and declared to the candidate that the data are treated using an AI system, and to the user that the output has been generated by an AI system.
  • Human Oversight: With regard to the requirement of human oversight, the output generated by the AI system needs to be supervised before a relevant decision is made based on it. A possible solution to address this topic could be the following (a minimal sketch of such a review step is included after this list): 
    • The first interaction of the user with the system would be an interface where the user can upload the CV and execute the AI model that extracts the desired information from the CV.
    • Once the analysis is finished, the interface should show a view of both the CV used and a table with editable fields containing the extracted data. This way, the user can check directly on the same screen whether the information extracted by the model matches the original CV and correct possible mistakes in the same field. 
    • A confirmation button of the correctness and review performed by the user could also be added. 
  • After the review of all the requirements and the adaptation of the system, the application should be compliant with the new regulation established in the AI Act. However, regular monitoring of the state of the system should still be implemented. The company should remain in contact with the AI system provider to check for updates or changes in the model and monitor the overall performance of the AI system it is using, checking for potential biases or recurring performance errors.  
    If the company detects any malfunctioning of the model, in particular if it could lead to any kind of discrimination (for example, based on social conditions, such as origin, race, or social status), it should be well documented and reported to the provider, and the use of the system should be immediately halted until either the AI model is changed or the issues are solved. 
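
As referenced in the human-oversight point above, the following is a minimal sketch of the review step: the reviewer sees the extracted fields, can correct them, and must explicitly confirm the review before the record is used. The data structures and field names are illustrative assumptions only.

    # Minimal sketch of the human-oversight step: extracted fields are shown to a
    # reviewer, can be corrected, and must be explicitly confirmed before use.
    # The structure and field names are illustrative assumptions.
    from dataclasses import dataclass, field

    @dataclass
    class CandidateRecord:
        cv_text: str                                      # original CV shown to the reviewer
        extracted: dict                                   # output of the AI extraction step
        corrections: dict = field(default_factory=dict)   # reviewer edits, per field
        reviewed: bool = False                            # set only after explicit confirmation

        def apply_correction(self, field_name: str, value: str) -> None:
            """The reviewer overrides a field extracted by the model."""
            self.corrections[field_name] = value

        def confirm_review(self) -> dict:
            """Explicit confirmation that a human has checked and validated the output."""
            self.reviewed = True
            return {**self.extracted, **self.corrections}

    # Usage: records that have not been confirmed should never feed a hiring decision.
    record = CandidateRecord(cv_text="...", extracted={"name": "Jane Doe", "english_level": "B2"})
    record.apply_correction("english_level", "C1")
    final_fields = record.confirm_review()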


Conclusions

The AI Act represents a significant step by the European Union in regulating the development and use of artificial intelligence technologies. It clearly defines the key components and actors involved in AI systems and classifies these systems according to their potential risk. In addition, it establishes requirements for general-purpose models. The regulations are tailored to the risk level, with a particular focus on addressing safety and ethical concerns for high-risk systems. 

Additionally, the AI Act provides a clear framework for developers, offering practical guidance on how to navigate the complexities of compliance. While it aims to mitigate the risks associated with AI, the regulation also fosters responsible innovation, encouraging the development of AI systems that can improve citizens' lives and contribute to societal progress. 

In this publication, the main regulatory aspects of the AI Act have been covered, summarizing the requirements that must be met by EU-based AI systems based on their risk level. Moreover, a possible AI adoption plan has been explained, which any organization could follow to incorporate AI use cases into their business model, ensuring compliance with the new regulatory framework established in the AI Act. Additionally, a concrete example has been presented to illustrate how an AI tool can be adapted to comply with the AI Act, explaining its conflicts with the regulation and how they could be resolved.

In conclusion, the need for regulation in AI development has never been more critical. As AI technologies evolve, ensuring they are used responsibly and ethically is essential to protect individuals' rights and maintain societal trust. The AI Act is a huge step towards this goal. 


The newsletter is now available for download on the Chair's website in both Spanish and English.