Ethical implications of artificial intelligence at the Indian workplace

Tuesday 12 December 2023

Avik Biswas
IndusLaw, Bangalore
avik.biswas@induslaw.com

Ivana Chatterjee
IndusLaw, Bangalore
ivana.chatterjee@induslaw.com

Advika Madhok
IndusLaw, Bangalore
advika.madhok@induslaw.com

Introduction

When Charles Babbage designed his difference engine in 1822, little did he know that his work would ultimately pave the way for artificial intelligence (AI). Today, AI has already started influencing and governing key aspects of our lives. The ability of machines to perform cognitive tasks such as thinking, perceiving, learning, problem solving and decision making[1] is astounding. AI, though not a new phenomenon, has become very topical of late owing to its public reach through openly accessible AI systems and its capability to automate tasks, improve efficiency and enhance productivity by exhibiting intelligent behaviour. Over the years, the Indian government has also focused its attention on the development and use of AI in different sectors through its AI leadership brand – ‘AIforAll’.[2]

In India, AI has made its presence felt in both the government and the private sector, and it is being used by companies in their day-to-day operations. AI can automate tasks, freeing up time for employees to focus on more complex and creative work, and can analyse data at a scale that is difficult for humans to match, thereby providing valuable insights into business operations and customer behaviour. The use of AI by companies is not limited to complex business decisions or core business functions; it is also significant in day-to-day administrative and internal functions. While AI has the potential to revolutionise the way we work, it also raises certain ethical and legal concerns that will need to be addressed and managed as we progress.

Concerns regarding use of AI at the workplace

AI has the potential to transform the way businesses function, even when it comes to internal human resources (HR) and administrative operations. Today, machine learning is considered the most advanced and promising method for workplace and workforce management. Employers in India are integrating AI tools into their HR functions for various purposes, such as analysing resumes, screening candidates, managing and monitoring employees’ performance and health, and conducting internal training. While such practices are meant to streamline internal processes and make operations seamless for the employer, this use has also received severe backlash due to a lack of algorithmic transparency and a lack of clarity on accountability and liability.

Bias and discrimination in hiring practices

Bias during resume screening

It is a fact that an algorithm is only as good as the data it is trained on. Many employers train algorithms on legacy data to produce future results. For example, to screen the resumes of potential hires, an employer may use the resumes of its high-performing employees so that the candidates shortlisted for further evaluation have similar skill sets. While this improves efficiency at work, it carries a real risk of bias and discrimination. For instance, the resumes fed to the AI tool may predominantly belong to candidates of a particular gender, caste or geographic location, or even from a particular educational institution. By using such data to screen resumes, the AI tool may well shortlist only candidates of a particular gender, caste or qualification, while other resumes go overlooked. Such practices could lead to discriminatory behaviour on the part of the organisation, resulting in unintended consequences and violations of applicable law.
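
By way of a purely hypothetical illustration (the candidate data, institutions and scoring rule below are invented and do not reflect any particular vendor’s tool), the following sketch shows how a screening rule learned solely from legacy ‘high performer’ resumes can end up ranking candidates by resemblance to the historical pool rather than by skill:

```python
# Hypothetical illustration: a screening rule learned purely from legacy
# "high performer" resumes reproduces the skew present in that data.
from collections import Counter

# Invented legacy data: past high performers, mostly from one institution.
legacy_high_performers = [
    {"institution": "Institute A", "skills": {"python", "sql"}},
    {"institution": "Institute A", "skills": {"python", "ml"}},
    {"institution": "Institute A", "skills": {"sql", "ml"}},
    {"institution": "Institute B", "skills": {"python", "sql"}},
]

# The "model" simply prefers resumes that look like the legacy pool.
preferred_institutions = Counter(p["institution"] for p in legacy_high_performers)

def screening_score(candidate):
    # Similarity to the legacy pool: institution weighted heavily, skills lightly.
    institution_weight = preferred_institutions.get(candidate["institution"], 0)
    skill_overlap = max(
        len(candidate["skills"] & p["skills"]) for p in legacy_high_performers
    )
    return 2 * institution_weight + skill_overlap

applicants = [
    {"name": "P", "institution": "Institute A", "skills": {"python"}},
    {"name": "Q", "institution": "Institute C", "skills": {"python", "sql", "ml"}},
    {"name": "R", "institution": "Institute A", "skills": {"sql"}},
]

# Candidate Q is the strongest on skills but is outranked by candidates who
# merely share the historical pool's dominant institution.
for a in sorted(applicants, key=screening_score, reverse=True):
    print(a["name"], screening_score(a))
```

In this sketch, the candidate with the broadest skill set is outranked by two candidates who simply share the legacy pool’s dominant institution, which is precisely the kind of skew that human review is needed to catch.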

Discrimination during the interview

During virtual hiring interviews, some employers now use AI tools to observe a candidate’s behaviour, confidence levels and attention span to evaluate their suitability for a role. There may be times when a well-suited candidate exhibits low confidence or lapses in concentration during an intensive interview, or lives with anxiety or ADHD (attention deficit hyperactivity disorder), because of which they may appear unsure or under-confident. It is more than likely that the AI tool’s evaluation will be unfavourable for such candidates.

While the use of machine intelligence brings objectivity to decision-making, it is critical for employers to have sufficient checks and human intervention in place.

Use of AI for performance management

Over the years, technology has been used to manage and track employees’ performance at work. In recent times, however, there has been significant reliance on AI for tracking and evaluating employees’ productivity and performance. Such evaluation is typically based on an employee’s hours of work, the time taken to complete given tasks, the number of leaves availed, the time taken to respond to customer concerns, and the inputs and feedback provided by stakeholders. It is concerning that much of this evaluation has been found to be inaccurate, as it tends to ignore employees’ health conditions, disabilities or other limitations. Employees have also reported additional stress and anxiety because their performance on customer-related work is partly evaluated by an automated system using data to which they do not have access.
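
As a simplified, hypothetical sketch (the metrics and weights below are invented for illustration and are not drawn from any actual product), the following shows how a purely metric-driven score can penalise an employee whose numbers reflect certified medical leave rather than poor performance:

```python
# Hypothetical illustration: a composite performance score computed only from
# raw activity metrics, with no room for context such as approved medical leave.

def performance_score(hours_worked, avg_task_days, leave_days, avg_response_hours):
    # Invented weights; a higher score is "better" under this automated rule.
    return (
        0.4 * (hours_worked / 160)      # share of a notional 160-hour month
        - 0.3 * avg_task_days           # slower task completion lowers the score
        - 0.2 * leave_days              # every day of leave is penalised equally
        - 0.1 * avg_response_hours      # slower customer responses lower the score
    )

# Employee A: full month, quick turnaround on tasks and customer queries.
score_a = performance_score(hours_worked=160, avg_task_days=2,
                            leave_days=1, avg_response_hours=4)

# Employee B: identical quality of output, but ten days of certified medical leave.
score_b = performance_score(hours_worked=100, avg_task_days=2,
                            leave_days=10, avg_response_hours=4)

# The automated rule ranks B well below A even though the gap is entirely
# attributable to a health condition.
print(round(score_a, 2), round(score_b, 2))
```

The gap between the two scores is produced entirely by the medical absence, which is why the article argues for a human review step alongside any such automated scoring.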

While it is certainly important for organisations to have machine intelligence in place to manage their employees’ performance, it is also crucial to bring in subjectivity at the right time, so that an employee’s performance can be analysed on a case-by-case basis.

Privacy rights

AI tools collect, process and analyse large amounts of employee data for tasks pertaining to workforce management. Such data regularly includes sensitive personal information (such as passwords, financial information and biometric information, as defined under the ‘Information Technology (Reasonable Security Practices and Procedures and Sensitive Personal Data or Information) Rules, 2011’), which is used and processed by employers for various purposes. One of the most prominent concerns is that not everybody in the ecosystem is aware of exactly how such personal data is being used or processed by machine learning systems. In this context, there are of course very clear and stringent benchmarks set out by the Information Technology Act 2000 and the ‘Information Technology (Reasonable Security Practices and Procedures and Sensitive Personal Data or Information) Rules, 2011’.
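
As a minimal sketch of one mitigating practice (the field names, categories and function below are assumptions made for illustration, not a statement of what the SPDI Rules require), an employer could strip sensitive fields from employee records before they reach an analytics or machine-learning pipeline:

```python
# Hypothetical sketch: remove fields falling within sensitive categories before
# an employee record is shared with an analytics or machine-learning pipeline.
# The field names and categories are invented; this is not a compliance checklist.

SENSITIVE_FIELDS = {"password", "bank_account", "medical_history", "biometric_id"}

def redact_for_analytics(record: dict) -> dict:
    # Keep only non-sensitive fields; sensitive values never leave the HR system.
    return {k: v for k, v in record.items() if k not in SENSITIVE_FIELDS}

employee_record = {
    "employee_id": "E-1042",
    "department": "Sales",
    "tenure_years": 3,
    "bank_account": "XXXXXX",
    "medical_history": "confidential",
}

print(redact_for_analytics(employee_record))
# {'employee_id': 'E-1042', 'department': 'Sales', 'tenure_years': 3}
```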

Harassment and sexual harassment

The Covid-19 pandemic has transformed the way employers interact and collaborate with their workforce. Hybrid and remote working arrangements have remained relevant in the post-Covid world. Many employers are using virtual reality platforms to interact with their employees and conduct their operations. There have been various reports of workplace harassment and sexual harassment faced by employees on such virtual reality platforms, which have gone unnoticed and unaddressed. Unfortunately, a large proportion of corporations today do not have effective internal policies in place (whether existing or standalone) to regulate conduct on virtual reality platforms, which has contributed to such unfortunate instances at the workplace. There is a critical need for such employers to amend their existing policies, or implement new ones altogether, to ensure the safety of their employees on these platforms.

Conclusion

Undoubtedly, unregulated and unsupervised use of AI at the workplace raises grave ethical concerns. Currently, there is no standalone employment legislation in India that addresses concerns regarding the use of AI at the workplace. The Factories Act 1948 addresses only a fraction of such concerns by stipulating certain safety standards in factory premises where robots are installed to carry out hazardous tasks.[3] While the Indian government has released several guidelines to regulate the use of AI in various sectors[4] and has taken multiple initiatives in this regard (eg, the Ministry of Electronics and Information Technology has constituted multiple committees[5] to develop a policy framework for regulating AI), the need of the hour is perhaps robust legislation or a directive. In the absence of appropriate regulation, the entire ecosystem is unwittingly dependent on individual employers having adequate internal policies that address the concerns arising out of the use of AI.

 

Notes

[1] NITI Aayog, Report on National Strategy for Artificial Intelligence (2018), available at: www.niti.gov.in/sites/default/files/2023-03/National-Strategy-for-Artificial-Intelligence.pdf.

[2] ‘India’s Artificial Intelligence Strategy: AI for All’ (15 October 2019), available at: https://indbiz.gov.in/indias-artificial-intelligence-strategy-ai-for-all.

[3] Section 25 of the Factories Act, 1948 provides: ‘Self-acting machines – No traversing part of a self-acting machine in any factory and no material carried thereon shall, if the space over which it runs is a space over which any person is liable to pass, whether in the course of his employment or otherwise, be allowed to run on its outward or inward traverse within a distance of [forty-five centimetres] from any fixed structure which is not part of the machine.’

[4] ‘India’s Artificial Intelligence Strategy: AI for All’ (15 October 2019) – as n2 above; and NITI Aayog, Report on National Strategy for Artificial Intelligence (2018) – as n1 above.