AI Act: the Council of the European Union finally approved the first worldwide rules on AI

This past May 21st, the Council of the European Union approved the Regulation on artificial intelligence (the “AI Act”), which is now officially the first comprehensive law in the world on artificial intelligence (“AI”). As such, it may set a global standard for AI regulation in other jurisdictions, just as Regulation (EU) 2016/679 (the “GDPR”) has done for data privacy.

After being signed by the presidents of the European Parliament and of the Council, the AI Act will be published in the Official Journal of the European Union in the coming days and will enter into force twenty (20) days after publication. Generally speaking, the new regulation will apply two (2) years after its entry into force, with some exceptions for specific provisions.

The AI Act is a regulatory framework that aims to ensure that AI systems are safe and trustworthy and that they respect the law and the European Union’s fundamental rights and values. The AI Act’s objectives, however, are not limited to enhancing governance and the effective enforcement of existing law on fundamental rights and safety: it also promotes investment and innovation, notably through the so-called AI regulatory sandboxes. Indeed, the new legal framework is accompanied by further measures to support AI innovation in the European Union, such as the Coordinated Plan on AI.

The AI Act applies only to areas within the scope of European Union law and provides exemptions, for instance for systems used exclusively for military or defense purposes, as well as for those used solely for research purposes.

The risk-based approach

The new legislative framework follows a risk-based approach: the higher the risk posed by the AI system, the stricter the requirements and obligations to be fulfilled.

More specifically, the AI Act categorizes the risks into four levels (unacceptable, high, limited, and minimal) and establishes different rules accordingly. For example, an AI system presenting only limited risk is subject to very light transparency obligations (set forth in Article 50 of the AI Act). A high-risk AI system, on the other hand, is subject to a set of requirements and obligations it must meet in order to gain access to the European Union market. In this case, the AI Act provides for increased transparency regarding both the development and the use of high-risk AI systems. Moreover, prior to deploying a high-risk AI system, deployers that are public entities, private entities providing public services, or banks and insurance companies must carry out a fundamental rights impact assessment.

For some other uses of AI – such as cognitive behavioral manipulation, predictive policing, emotion recognition in the workplace and in educational institutions, and social scoring – the risks are deemed unacceptable, and those systems are therefore banned from use.

Moreover, the AI Act also addresses general-purpose AI models (“GPAI”), which may or may not pose systemic risk and which, accordingly, may or may not be subject to stricter rules, such as transparency and risk-mitigation obligations.

But when will it come into effect?

According to Article 113 of the AI Act, the new legal framework shall enter into force on the 20th day following its publication in the Official Journal of the European Union and shall apply from 24 months after the date of its entry into force. However:

  • the general provisions set forth in Chapter I (i.e., subject matter, scope, definitions, and AI literacy) and the provisions regarding prohibited AI practices set forth in Chapter II shall apply from 6 months after the date of its entry into force;
  • the provisions regarding (i) the notifying authorities and notified bodies referred to in Section 4 of Chapter III (High-risk AI systems), (ii) the rules for GPAI models (set forth in Chapter V), (iii) the new European governance bodies – i.e., the AI Office, the scientific panel of independent experts, the AI Board with member states’ representatives, and the advisory forum for stakeholders – and the national governing bodies (set forth in Chapter VII), and (iv) penalties (set forth in Chapter XII) shall apply from 12 months after the date of its entry into force, with the exception of Article 101, dealing with fines for providers of GPAI models;
  • the provisions regarding high-risk AI systems set forth in Annex I and referred to in Article 6(1) of the AI Act, together with the corresponding obligations, shall apply from 36 months after the date of its entry into force.

In addition, under Article 111, the AI Act provides deadlines within which AI systems already placed on the market or put into service must comply with the requirements and obligations set out therein. More specifically, without prejudice to the application of the provisions set forth in Article 5 (Prohibited AI practices) from 6 months after the date of entry into force of the AI Act:

  • AI systems which are components of the large-scale IT systems established by the legal acts listed in Annex X, and which have been placed on the market or put into service before 36 months after the date of entry into force of the AI Act, shall be brought into compliance with the AI Act by 31 December 2030;
  • the AI Act shall apply to operators of high-risk AI systems, other than the systems referred to above, that have been placed on the market or put into service before 24 months after the date of its entry into force, only if, as from that date, those systems are subject to significant changes in their design. In the case of high-risk AI systems intended to be used by public authorities, the providers and deployers of such systems shall take the necessary steps to comply with the requirements of the AI Act within 6 years from the date of its entry into force.

Lastly, Article 111 requires providers of GPAI models that have been placed on the market before 12 months after the date of entry into force of the AI Act to take the necessary steps to comply with the obligations of the AI Act within 36 months from the date of its entry into force.