Managing the legal and health risks of workplace AI

Neil Hodge

Wednesday 23 November 2022

The Covid-19 pandemic accelerated the deployment of artificial intelligence tools in the workplace, for example for employee monitoring. In-house teams must mitigate the legal risks to their organisations and ensure that employees are protected, as In-House Perspective reports.

Employers have embraced the opportunities afforded by artificial intelligence (AI)-based technologies to help boost performance, but these tools can come at a high human price if deployed without appropriate safeguards.

Since the pandemic forced many staff to work remotely, employers have been using a range of technological tools to monitor worker output and wellbeing. These include monitoring internet access, webcam observation, keystroke logging, time-tracking devices and even covert audio recording.

However, there are concerns that such monitoring is excessive and intrusive. Further, where AI is used to assess workloads, it may push employees harder than any line manager would, demanding more tasks, completed more quickly, because the data says such workloads are technically possible – especially if the impact on workers’ physical and mental wellbeing is left out of the calculation.

In August the European Agency for Safety and Health at Work (EU-OSHA) released a report examining the risks and opportunities presented by AI-based worker management systems. The report found that AI can enable better monitoring of hazards and of employees’ mental health. However, it listed dangers, including that AI usage can ‘dehumanise’ workers by giving them the sense that they have little control over their jobs. The technology can also create an unhealthy, pressurised environment with little transparency about how decisions are made or how they can be challenged.

The report also found that AI usage can create mistrust, limit worker participation and blur the boundary between work and private life. It can also cause serious physical and mental harm, including musculoskeletal and cardiovascular disorders and anxiety. The report suggests employers should pursue a strong ‘prevention through design’ approach from the start.

In summer 2021 food delivery companies Deliveroo and Foodinho were both fined by the Italian data protection authority, the Garante. The Garante said that Deliveroo collected a disproportionate amount of personal data from its riders, in violation of the EU General Data Protection Regulation. This data was used for the automated rating of each rider’s performance and for the assignment of work, with the Garante finding that Deliveroo was not transparent enough about how such algorithms worked.

Foodinho was fined because, among other things, the workings of its automated algorithmic system for evaluating worker performance were not sufficiently transparent and did not ensure accurate results.

In the UK, in response to concerns over staff safety and data protection, the Information Commissioner’s Office (ICO) issued draft guidance in October to help ensure employers’ monitoring of staff performance doesn’t turn into surveillance or harassment. It reminds companies they must make workers aware of the nature, extent and reasons for monitoring, and ensure it’s proportionate. ‘Just because a form of monitoring is available, it does not mean it is the best way to achieve your aims,’ it says.

Anurag Bana, a senior project lawyer in the IBA’s Legal Policy & Research Unit, says ‘there needs to be an appropriate level of human oversight for any AI worker management system to protect employees’ and that ‘there should also be an algorithmic impact assessment procedure before any system is installed’. He believes that a human rights due diligence exercise in respect of AI systems is essential in order that ‘automated decision-making does not produce harmful outcomes and workers can challenge how decisions are made to ensure transparency and accountability’.

Bana says that employers need to demonstrate a duty of care to employees regarding AI use. ‘Providing information to employees about how and why AI is being used is not enough,’ he explains. ‘There needs to be consultation with staff about the business reasons for using AI and how it will positively impact them. You need to have employees’ buy-in before you start monitoring their performance in this way. You also should have an ethical framework in place that protects employees’ health and safety – it may be a good idea to conduct an assessment/check compliance against the ISO 45003 guidelines, which look at employees’ psychological health and safety at work.’

Johan Hübner, Chair of the IBA Artificial Intelligence and Robotics Subcommittee and a partner at Swedish law firm Delphi, says there are several aspects to ensuring that AI-generated decisions don’t discriminate against employees and that employees don’t suffer ill health by being pushed too hard. The most important, he says, is to ensure that the AI has been sufficiently trained with complete and non-biased data, and that all final decisions are made in collaboration with humans. Hübner also notes that ‘excessive monitoring can lead to higher levels of employee stress and increased ill health among employees’.
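
Neither Hübner nor the EU-OSHA report prescribes a particular technical mechanism, but the two safeguards he names – unbiased data and human involvement in final decisions – can be sketched in code. The following is a minimal, hypothetical Python illustration: the function names, the grouped score data and the 0.1 gap threshold are all assumptions made for the example, not anything drawn from the article’s sources.

```python
from statistics import mean

def scores_balanced(scores_by_group: dict[str, list[float]],
                    max_gap: float = 0.1) -> bool:
    """Return True if average AI scores across groups stay within max_gap
    of each other; a deliberately crude proxy for spotting biased output."""
    averages = [mean(s) for s in scores_by_group.values()]
    return max(averages) - min(averages) <= max_gap

def finalise(ai_recommendation: str, human_review) -> str:
    """No AI recommendation becomes a decision without human sign-off."""
    return human_review(ai_recommendation)

# Hypothetical scores grouped by a protected attribute, kept for audit only.
scores = {"group_a": [0.82, 0.78, 0.80], "group_b": [0.55, 0.60, 0.58]}
if not scores_balanced(scores):
    print("Score gap across groups exceeds threshold: audit the model before use.")

# The AI proposes; a named human disposes.
decision = finalise("reassign weekend shift",
                    human_review=lambda rec: f"{rec} (approved by line manager)")
print(decision)
```

A real deployment would use proper fairness metrics and a documented review workflow; the point here is only the shape of the safeguard – a statistical check on outputs, plus a mandatory human step before any decision takes effect.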

Where AI is used to allocate tasks, it’s ‘important to ensure that all dimensions of the allocated tasks are included in the AI-generated decision,’ adds Hübner. Among other things, the AI needs to consider the number of tasks allocated to each employee, their difficulty and how long each task will take. ‘Otherwise, the risk is that some employees become overworked while other employees are underworked, which could lead to ill-health in either scenario,’ says Hübner. Preventing this requires, again, that the AI is trained using adequate data and that humans are involved in decision-making.
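
Hübner doesn’t set out a specific allocation mechanism, but his point about weighing all dimensions of a task can be made concrete. In the hypothetical Python sketch below, the field names, the difficulty weighting and the load cap are illustrative assumptions: tasks are allocated by weighted effort (duration times difficulty) rather than raw count, and any task that would overload every employee is escalated to a human planner instead of being assigned anyway.

```python
from dataclasses import dataclass, field

@dataclass
class Task:
    name: str
    hours: float       # estimated time to complete (assumed field)
    difficulty: float  # e.g. 1.0 = routine, 2.0 = demanding (assumed scale)

@dataclass
class Employee:
    name: str
    tasks: list = field(default_factory=list)

    @property
    def load(self) -> float:
        # Effort rather than raw task count: duration weighted by difficulty.
        return sum(t.hours * t.difficulty for t in self.tasks)

def allocate(tasks, staff, max_load: float = 40.0):
    """Assign each task to the least-loaded employee; any task that would
    push everyone past max_load is handed back for a human planner to
    resolve rather than silently overloading someone."""
    needs_human = []
    for task in sorted(tasks, key=lambda t: t.hours * t.difficulty, reverse=True):
        candidate = min(staff, key=lambda e: e.load)
        if candidate.load + task.hours * task.difficulty <= max_load:
            candidate.tasks.append(task)
        else:
            needs_human.append(task)
    return needs_human

staff = [Employee("Ana"), Employee("Ben")]
tasks = [Task("audit", 12, 2.0), Task("filing", 3, 1.0), Task("report", 8, 1.5)]
leftover = allocate(tasks, staff, max_load=20.0)
for e in staff:
    print(e.name, round(e.load, 1), [t.name for t in e.tasks])
print("escalated to human planner:", [t.name for t in leftover])
```

The escalation path is the significant design choice: rather than quietly overloading the least-busy person, the system hands the edge case back to a human, in line with the human-in-the-loop principle Hübner describes.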

Ida Nordmark, an associate at Delphi, says organisations could face fines and damages claims for employee injury or sickness caused by AI use. For example, under Swedish labour law a company that causes an employee’s ill health or injury through its use of AI is responsible for bearing the costs of that employee’s rehabilitation. The company may also face additional costs from hiring workers to provide cover. In more serious cases, there’s a risk that a company may be fined by the regulator for causing an employee’s illness in the workplace or be required to pay damages due to discrimination. However, ‘the most obvious risk is not legal, but reputational,’ says Nordmark.

To avoid potential legal claims, in-house lawyers should be involved in the procurement and contracting process to ensure the AI technologies their companies are contracting from suppliers ‘meet sufficiently high requirements’ throughout the contract’s lifecycle, concludes Felix Makarowski, a senior associate at Delphi.