Prohibition of emotion recognition at work and its implications in light of the AI Act

The entry into application of Article 5 of Regulation (EU) 2024/1689 (the “AI Act”) represents a significant milestone in the regulation of artificial intelligence (“AI”) within the European Union. This provision, titled “Prohibited AI practices”, bans specific AI practices that may cause significant harm to individuals or infringe upon their fundamental rights. The legislator has identified seven particularly critical practices, the use of which is now prohibited:

  • manipulation of behavior through deception or exploiting individuals’ vulnerabilities;
  • social scoring systems;
  • assessing or predicting the risk of an individual committing a criminal offence, based solely on profiling or personality traits;
  • non-targeted scraping of online material or CCTV footage to create or expand facial recognition databases;
  • emotion recognition in the workplace and educational institutions;
  • biometric categorization aimed at deducing “sensitive” characteristics (e.g. political opinions or religious beliefs);
  • real-time remote biometric identification in publicly accessible spaces for law enforcement purposes.

Failure to comply with these prohibitions triggers the most severe sanctions provided under the AI Act: fines of up to €35 million or 7% of total worldwide annual turnover, whichever is higher.
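
To make the “whichever is higher” mechanism concrete, here is a minimal sketch in Python; the function name and the €2 billion turnover figure are invented for illustration only:

    # Illustrative only: the "whichever is higher" fine cap for prohibited
    # practices under the AI Act. The turnover figure is a made-up example.

    def max_fine_eur(worldwide_annual_turnover_eur: float) -> float:
        """EUR 35 million or 7% of worldwide annual turnover, whichever is higher."""
        return max(35_000_000.0, 0.07 * worldwide_annual_turnover_eur)

    # For a company with EUR 2 billion in worldwide annual turnover, the 7%
    # limb (EUR 140 million) exceeds the fixed EUR 35 million ceiling:
    print(f"EUR {max_fine_eur(2_000_000_000):,.0f}")  # EUR 140,000,000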

Aware of the need to ensure a uniform and consistent interpretation of these prohibitions, the European Commission has approved, pursuant to Article 96 of the AI Act, specific guidelines on prohibited AI practices (the “Guidelines”). Although the Guidelines are non-binding and have yet to be formally adopted, they serve as a key reference for competent authorities, as well as for providers and deployers of AI systems, in complying with the regulatory requirements.

While a thorough examination of the Guidelines is necessary for a complete understanding, this analysis focuses specifically on the scope of the prohibition on emotion recognition in the workplace, highlighting its legal basis, practical implications, and possible exceptions.

The prohibition on emotion recognition in the workplace

The AI Act prohibits the use of AI systems for emotion recognition in the workplace and in educational institutions, except for medical or safety reasons. Outside the scope of this prohibition, AI systems intended for emotion recognition are classified as high-risk and must comply with the regulatory obligations established for that category.

Affective computing, commonly referred to as emotion recognition technology, relies on a complex set of data collection, analysis, and interpretation techniques to infer an individual’s emotional state. Its applications range from neuromarketing and the entertainment industry to education, personnel selection, and healthcare. Its use in the workplace, however, raises significant concerns, as it may infringe upon fundamental rights, particularly privacy, human dignity, and freedom of self-determination.

In particular, emotion recognition in the workplace is applied in an environment characterized by a significant power imbalance between employer and employee, raising concerns about pervasive monitoring and behavioral influence. For instance, AI systems may be deployed to track employees’ level of attention or stress, or to assess their mood based on facial expressions or voice tone. Without adequate safeguards, such tools can be highly intrusive and discriminatory.

Scope of the prohibition

The Guidelines clarify that the prohibition on emotion recognition applies to AI systems designed to infer or identify emotions through an individual’s biometric data. Specifically, a distinction is made between:

  • emotion identification: directly associating a specific expression with a predefined emotional state (e.g., detecting anger based on a facial expression);
  • emotion inference: conducting complex analyses of biometric and other data to indirectly deduce emotions (e.g., determining an employee’s stress level by analyzing keystroke speed); see the illustrative sketch below.
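
To make this distinction concrete in technical terms, the following deliberately simplified sketch in Python illustrates what “emotion inference” might look like: an emotional state is deduced indirectly from behavioral data rather than read directly off an expression. The thresholds, labels, and logic are hypothetical and invented for illustration:

    # Hypothetical, deliberately simplified sketch of "emotion inference":
    # an emotional state (stress) is deduced indirectly from behavioral data
    # (keystroke timing). Thresholds and labels are invented for illustration.

    from statistics import mean, pstdev

    def infer_stress(keystroke_intervals_ms: list[float]) -> str:
        """Toy inference: fast, erratic typing is (naively) read as stress."""
        avg = mean(keystroke_intervals_ms)
        jitter = pstdev(keystroke_intervals_ms)
        if avg < 120 and jitter > 40:
            return "stressed"  # an inferred emotional state
        return "calm"

    # By contrast, "emotion identification" maps an observed expression
    # directly to a predefined emotion, e.g. {"frown": "angry"}.
    print(infer_stress([60.0, 70.0, 180.0, 65.0, 75.0]))  # -> "stressed"

Under the Guidelines, both variants fall within the prohibition when deployed in the workplace.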

Regarding the concept of “emotions”, the Guidelines specify that the prohibition does not extend to detecting physical states (e.g., pain or fatigue) or to merely identifying expressions, gestures, or clearly visible movements, unless these are used to deduce emotions. For example, observing that a person is smiling does not constitute “emotion recognition”, whereas concluding that a person is happy does (e.g., an AI system deducing from body gestures, a frown, or the lack of a smile that an employee is unhappy or angry with customers).

In the workplace, the prohibition applies regardless of the nature of the employment relationship (employee, contractor, intern, etc.) and extends to the personnel selection phase. AI systems used in recruitment to analyze candidates’ emotional state, emotion-monitoring technologies used during virtual meetings or video calls, and AI-integrated cameras capable of detecting employees’ emotions are therefore all prohibited.

Exception to the prohibition: medical or safety reasons

The AI Act explicitly provides for an exception to the prohibition, limited to the use of emotion recognition systems in workplaces and educational institutions for medical or safety reasons. The Guidelines specify that this exception must be interpreted restrictively:

  • therapeutic uses should be limited to CE-marked medical devices, excluding general wellness monitoring applications (e.g., AI systems detecting burnout or depression at work);
  • safety reasons should be understood as relating solely to the protection of life and health.

Labor law and data protection considerations

Even if an AI system qualifies for an exception under the AI Act, it must still comply with the constraints imposed by Regulation (EU) 2016/679 (the “GDPR”) and, in Italy, by Article 4 of Law No. 300/1970, as amended by Legislative Decree 151/2015 (the “Italian Workers’ Statute”), which governs employer monitoring through remote surveillance tools.

For example, an employer using a camera system to monitor employees’ emotions solely for training purposes, or a supermarket or bank deploying a similar system to detect suspicious behavior (e.g., identifying a potential robbery suspect), must ensure that the system does not entail continuous monitoring of employees and that adequate security measures are in place.

Even where an AI system falls within the permitted exceptions, employers must:

  • ensure transparency through privacy notices and company policies (typically, policies on the use of IT tools), communicated appropriately to employees;
  • demonstrate the necessity and proportionality of the system, ensuring that no less invasive alternatives exist to achieve the same objective;
  • conduct, if required, a data protection impact assessment (“DPIA”) under Article 35 GDPR;
  • comply with the guarantees and conditions of Article 4 of the Italian Workers’ Statute, which requires legitimate grounds (e.g., organizational or production needs, workplace safety, or the protection of company assets), a prior collective agreement or administrative authorization, and compliance with transparency obligations.

Final considerations: what should organizations do?

In light of the AI Act and the Guidelines, and to avoid severe sanctions, organizations must first map and classify the AI systems they provide and/or deploy in order to identify any prohibited practices, such as emotion recognition in the workplace. Beyond this preliminary assessment, they must evaluate the associated risks and determine whether a system qualifies for an exception under the regulation, such as medical or safety reasons.
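
As a purely illustrative aid, the sketch below (in Python) shows one way such an internal mapping and classification register might be structured. The field names and the simplified decision logic are assumptions made for this example, not a substitute for a case-by-case legal assessment:

    # Hypothetical sketch of the "map and classify" step: a minimal internal
    # register of AI systems with a naive flag for the emotion recognition
    # prohibition. Field names and decision logic are simplified assumptions.

    from dataclasses import dataclass

    @dataclass
    class AISystemRecord:
        name: str
        infers_emotions_from_biometrics: bool  # core trigger of the prohibition
        used_at_work_or_in_education: bool
        medical_or_safety_reason: bool         # the narrow exception

        def classification(self) -> str:
            if (self.infers_emotions_from_biometrics
                    and self.used_at_work_or_in_education):
                if self.medical_or_safety_reason:
                    return "high-risk (exception; GDPR and Workers' Statute still apply)"
                return "PROHIBITED (emotion recognition at work)"
            return "assess under the AI Act's general risk framework"

    inventory = [
        AISystemRecord("video-call mood tracker", True, True, False),
        AISystemRecord("driver stress monitor (accident prevention)", True, True, True),
    ]
    for system in inventory:
        print(f"{system.name}: {system.classification()}")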

Regardless of whether an AI system falls within an exception, companies must ensure compliance with the other applicable rules, including the GDPR and the Italian Workers’ Statute, implementing appropriate safeguards to protect workers’ personal data and fundamental rights.