AI Pact: an evolving initiative supporting early compliance with the EU AI Act
The AI Pact, an initiative promoted by the AI Office to encourage and support businesses in proactively anticipating the application of key provisions of the AI Act (i.e., Regulation (EU) 2024/1689; for further details on the AI Act, please refer to our previous contribution, available here, in Italian), is achieving significant results. The text of the pledges, initially drafted by the AI Office in May 2024, was shared with the initiative’s participants in September to gather their feedback and suggestions.
Against this backdrop, less than two months after the entry into force of the AI Act, the European Commission announced that over a hundred organizations have already signed the AI Pact. Signatories include several multinational corporations, among them major players such as Amazon, Booking.com, Google, Microsoft, and Samsung, as well as numerous European SMEs operating in a variety of sectors, including information technology, telecommunications, healthcare, banking, automotive, and aeronautics.
But what is the AI Pact and what does it entail?
The AI Pact is based on two pillars:
- Pillar I: gathering and exchanging with the AI Pact network. In this context, participants contribute to the creation of a collaborative community, sharing their experiences, knowledge, and best practices. This includes, for example, webinars organized by the AI Office aimed at providing participants with a better understanding of the AI Act and their responsibilities.
- Pillar II: facilitating and communicating company pledges. The purpose of this Pillar is to provide a structured framework to foster the early implementation of some of the measures of the AI Act, encouraging organizations to share the processes and practices they have adopted to anticipate compliance with the regulation. The preparation of specific operational models also falls within this Pillar.
The AI Pact sets out a series of voluntary commitments for organizations that decide to align with the provisions of the AI Act before they become formally mandatory. These commitments, presented in the form of “pledges”, are not legally binding and do not impose any legal obligation on participants. They outline concrete actions aimed at meeting the requirements of the AI Act, with the goal of mitigating, as early as possible, the risks that AI may pose to health, safety, and fundamental rights.
Specifically, by taking part in the initiative, organizations undertake three “core” commitments (and may also choose to strive to meet the other commitments set out in the AI Pact), which focus mainly on transparency obligations and on the requirements for high-risk AI systems:
- adopt an AI governance strategy, to foster the uptake of AI in the organization and work towards future compliance with the AI Act;
- map the AI systems developed or deployed in areas considered high-risk under the AI Act (such as biometrics, critical infrastructures, employment, etc.);
- promote awareness and AI literacy among their staff, i.e., the skills, knowledge, and understanding that allow providers, deployers, and affected persons to make an informed deployment of AI systems, as well as to gain awareness about the opportunities and risks of AI and the possible harm it can cause.
Non-core pledges
As stated above, signatories may also undertake to make their best efforts to comply with, or contribute to the achievement of, specific additional commitments. The AI Pact distinguishes these pledges according to the organization’s role, drawing a distinction between providers and deployers of AI systems.
Specifically, AI system providers, where relevant and where possible, may:
- put in place processes to identify, throughout the entire life cycle of the AI system, possible risks to health, safety, and fundamental rights arising from the use of AI systems;
- develop policies to ensure high-quality training, validation, and testing datasets for AI systems;
- implement logging features to allow traceability appropriate for the intended purpose of the system;
- inform deployers about how to appropriately use AI systems, their capabilities, limitations, and potential risks;
- implement concrete measures to ensure human oversight;
- adopt specific policies and processes aimed at mitigating risks associated with the use of AI systems;
- design AI systems intended to interact directly with individuals so that the individuals concerned are informed that they are interacting with an AI system;
- design generative AI systems so that AI-generated content is marked as artificially generated or manipulated;
- provide means for deployers to clearly and distinguishably label AI-generated content, including deepfakes and texts published to inform the public on matters of public interest.
As for deployers, where relevant and where possible, they may:
- map the possible risks to fundamental rights of persons that may be affected by the use of AI systems;
- implement concrete measures to ensure human oversight;
- clearly and distinguishably label AI-generated content, including deepfakes and texts published to inform the public on matters of public interest;
- inform users that they are interacting with an AI system;
- provide clear explanations to users when a decision made about them is prepared, recommended, or taken by AI systems and has an adverse impact on their health, safety, or fundamental rights;
- when deploying AI systems in the workplace, inform workers’ representatives and the affected workers.