
Artificial intelligence in HR processes: the boundaries of Italian employment law

Thursday 25 April 2024

Elena Ryolo

Legance, Milan

eryolo@legance.it

Daniele Dellacasa

Legance, Milan

ddellacasa@legance.it

Artificial intelligence (AI) stands as one of the most captivating yet potentially dangerous technologies of our time. It promises efficiency, rapid achievement of results and objective evaluations. However, challenges such as dehumanisation, bias and partial unreliability lurk just around the corner. In this context, AI systems are today a reality in several industries and sectors, and the human resources (HR) area is clearly impacted.

Numerous multinational groups, driven by the desire to harness technology for enhanced efficiency and improved decision-making, have either already adopted or are currently considering the adoption of AI-driven systems for HR processes.

From our observations in Italy, companies are starting to consider the use of AI in the recruitment process. Software capable of scanning CVs, evaluating candidates (including through emotion/facial recognition) and matching job requirements with applicants’ skills and experience is being assessed by several large players. AI-driven platforms aimed at personnel training and performance measurement are also being adopted in certain areas. The food and on-demand delivery sectors are likewise witnessing the widespread use of AI: the automation of virtually every aspect of employment, from recruitment and day-to-day management to performance evaluation and termination, requires minimal (or no) human intervention.

However, the implementation and usage of AI systems do not always align with the current framework of Italian employment law, which imposes significant limitations, especially when it comes to safeguarding workers’ rights, transparency, union involvement and non-discrimination.

Legal framework affecting AI in HR-related processes

As we navigate the implementation of AI systems, various rules and principles of Italian law – as interpreted by the courts – must necessarily be considered.

Remote monitoring

According to section 4 of the Italian Workers’ Statute, a cornerstone of Italian legislation dating back to 1970 and only partially updated a few years ago, the use of technological systems or devices that potentially allow employers to remotely monitor employees’ working activities is permissible only if (1) these systems or devices are required to satisfy organisational, production-related or security needs; and (2) their use is agreed upon with the unions or authorised by the competent local Labour Office. The authorisation may be denied if the systems, although aimed at satisfying employers’ legitimate needs, enable monitoring in a manner that excessively infringes on employees’ freedom and dignity, such as through continuous ‘real-time’ oversight. Non-compliance with these requirements constitutes anti-union conduct and also entails the application of criminal sanctions.

As an exception, the above limitations do not apply (so that no agreement or authorisation is needed) to instruments which are used ‘to perform the employees’ duties’, despite their capability to remotely monitor employees.

AI systems rely fully on massive data processing. This makes it possible – if not likely – that they would fall under the section 4 limitations, since they would potentially be able to monitor employees’ working activity. At the same time, it seems hard to imagine that any AI system would be seen as a tool used ‘to perform the employees’ duties’, a circumstance which would allow derogation from the above limitations.

Given how fast these technologies and their adoption are evolving, however, this question may need to be revisited in the not-so-distant future.

Data collection beyond professional attitudes

Under section 8 of the same Workers’ Statute, employers are prohibited from inquiring, directly or indirectly, into employees’ political, religious or trade union beliefs, or any other matters irrelevant to assessing professional aptitude, both during recruitment and throughout the employment relationship. This prohibition stands even if express consent is given by the candidates or employees. Violations trigger criminal liability for employers, underscoring the law’s intent to separate the aspects of employees’ personal lives that are relevant to their job performance from those that are not.

This rule poses a significant limitation on AI-driven tools which collect or process data that a court might deem not immediately relevant for assessing professional aptitude, such as the emotion/facial recognition performed by recruiting software.

Transparency and union involvement

In August 2022, Italy transposed the EU Directive on transparent and predictable working conditions,[1] which requires employers to disclose a set of detailed information, including on the use of any automated decision-making or monitoring systems, thus anticipating the communication obligations under the forthcoming EU Directive for Platform Workers.

In its initial formulation, the Italian legislation covered automated systems that involve human intervention as well; however, an amendment enacted in May 2023 clarified that only ‘fully’ automated systems need to be disclosed.

Information on such systems – where they are intended to provide indications relevant to the recruitment, management or termination of employment or the assignment of tasks or duties, or to affect monitoring, performance assessment or the fulfilment of contractual obligations – must be structured and detailed, and addressed to both employees and the works council (or, in its absence, the external trade unions).

Failure to comply entails the enforcement of administrative sanctions and constitutes anti-union conduct.

Considering how broad the scope of this legislation is, it is very likely that most AI-driven systems in the HR field would trigger the above information obligations unless a reasonable degree of human intervention can be demonstrated. In general, the obligation to provide transparent and understandable explanations of AI-driven decisions seems particularly challenging with complex AI models.

Non-discrimination

Italian labour law stringently prohibits workplace discrimination, both directly and indirectly. Despite AI’s potential to mitigate human bias, poorly designed or unmonitored systems can perpetuate or exacerbate discriminatory practices.

Worker classification

The classification of workers (as employees or independent contractors) in Italy hinges on the nature of their work and the degree of autonomy or concrete control exercised by the employer. The role of AI-driven systems in organising people’s work can influence this classification, as demonstrated by case law (including recent decisions on platform workers) in which the organisation dictated by a platform/software, and the control exercised through it, have been considered indicators of subordination.

Data protection

The EU General Data Protection Regulation (GDPR)[2] is pivotal for AI systems processing personal data in HR processes, emphasising transparency, data minimisation and explicability. The handling of sensitive personal data by AI systems necessitates lawful processing bases, stringent security measures and adherence to individuals’ rights under the GDPR, including the rights (1) to receive transparent, clear and ‘intelligible’ information, including on the purposes of processing; (2) to access, and request erasure of, personal data; and (3) not to be subject to decisions based solely on automated processing, including profiling, which produce legal effects concerning them or similarly significantly affect them.

Some key points to consider going forward

The increasing adoption of AI in HR processes in Italy presents both opportunities and challenges. While AI promises to revolutionise HR through enhanced efficiency and accuracy, it also raises complex legal considerations. Key points for a possible approach under the current legal framework might include:

  1. Union involvement/consultation – this would be a legal requirement in most cases, and the unions’ informed and cooperative approach will be decisive for an agreed use of AI systems, allowing companies to be at the forefront of technological development while fully respecting employees’ rights.
  2. Human oversight – this should always be present to help reduce the risk of error, discrimination and bias, ensuring that AI acts as a tool for human decision-making rather than as a substitute; this includes the need to regularly review and assess AI tools and their impact on employees and work organisation.
  3. Corporate awareness – employers (and HR personnel in particular) must be aware of the legal implications the use of AI can have; incorporating AI systems into HR processes requires a thorough legal review to identify potential liabilities, safeguard the organisation against legal challenges and ensure that the deployment of AI aligns with applicable laws. This is particularly true considering the possible consequences, which – in the most serious cases – may also include sanctions of a criminal nature and reputational damage (eg, in the case of discriminatory processes).
  4. Employees’ training – this is necessary so that employees can understand the basics of AI technologies, their benefits and potential pitfalls, and are thus ready to accept a future where AI plays a significant role.
  5. AI systems’ design – it is crucial that providers design AI for HR processes to minimise data access and only process information which is strictly necessary.
  6. Data protection – ensuring compliance with the GDPR (but also with the decisions of the local Data Protection Authority) when integrating AI systems into HR processes, safeguarding the privacy and security of employee data.

The forthcoming EU Artificial Intelligence Act, and the significant investments in this area recently promised by the Italian government, including through the establishment of an AI-dedicated authority, suggest that there will be an increasing focus on this matter in Italy in the future, hopefully leading to balanced solutions where the (necessary) legal restrictions do not unreasonably inhibit the (inevitable) development of AI in the workplace.


[1] Directive (EU) 2019/1152 on transparent and predictable working conditions in the European Union [2019] OJ L186/105.

[2] Regulation (EU) 2016/679 on the protection of natural persons with regard to the processing of personal data and on the free movement of such data, and repealing Directive 95/46/EC [2016] OJ L119/1.