Fighting bias in the age of machine learning and artificial intelligence

Adrio Rivadi
ZICO Law, Jakarta
adrio.rivadi@zicoholdings.com

Predictive analytics are becoming increasingly normal in modern life – they can determine what we eat today, what we buy tomorrow, what we watch on TV and so on. Beyond this, predictive analytics can affect lives through subjective decision-making, particularly in areas such as hiring and loan approvals. Here, AI can either eliminate pre-existing bias or perpetuate it, depending on how it is programmed. So how can we safeguard AI from this?

According to Christina Blacklaws of Blacklaws Consulting, an IBA Council member and former president of the Law Society of England and Wales, ‘research shows from stab vests, to mobile phones, to crash test dummies that the world is designed for men. Women are the global majority but by no means are they the dominant force.’

Systemic bias is evident in the statistics on female leadership. In the legal industry, only 31 per cent of partners are female, while the challenges and obstacles experienced by female lawyers are similar across the board, irrespective of country, race or culture – the expectations placed on women appear to be the same everywhere. Gender stereotypes remain ever-present and prejudice women, particularly those with children. This is reflected in statistics such as the UK gender pay gap, which favours men by 17.3 per cent despite equal pay having been enshrined in law for the past 50 years.

The potential for this to worsen grows with our increasing dependency on AI, as bias can be hardwired into machine algorithms. The IT workforce employs approximately one million people, but only 18 per cent are female. The way that AI is programmed to learn is crucial: if a machine learns from flawed and biased data, then the output of the machine process will also be biased. When this occurs in areas such as predictive criminal analysis, the bias becomes self-reinforcing through the algorithm.
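To make this mechanism concrete, here is a minimal sketch in Python using scikit-learn. The data is entirely synthetic and the variable names are illustrative – this is not drawn from any real system – but it shows how a model trained on biased historical hiring decisions reproduces that bias:

import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 10_000

# Synthetic applicants: one genuine qualification signal and a protected attribute.
skill = rng.normal(size=n)
gender = rng.integers(0, 2, size=n)  # 0 = female, 1 = male (illustrative coding)

# Biased historical labels: past decision-makers favoured men regardless of skill.
hired = (skill + 0.8 * gender + rng.normal(scale=0.5, size=n)) > 0.5

# Train on the flawed history; the model faithfully learns the bias.
X = np.column_stack([skill, gender])
model = LogisticRegression().fit(X, hired)

# Two applicants with identical skill but different gender get different odds.
same_skill = np.array([[1.0, 0.0], [1.0, 1.0]])
print(model.predict_proba(same_skill)[:, 1])  # hire probability: female v male

The model has done nothing wrong in a technical sense – it has simply digested the past – which is precisely why the bias becomes self-reinforcing once its outputs feed back into future decisions.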

As a society, we are hesitant to question algorithms, trusting that their capabilities surpass those of human analysis given their capacity to digest past data and produce the most desired outcome. As the example above shows, however, this is not necessarily true: algorithms must still be audited to ensure that outcomes are not only desired but also fair.
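One simple form such an audit can take is a comparison of outcome rates across groups – a ‘demographic parity’ check. The sketch below (again in Python, using hypothetical decision data) illustrates the idea:

import numpy as np

def parity_gap(decisions: np.ndarray, group: np.ndarray) -> float:
    """Difference in positive-outcome rates between group 1 and group 0."""
    return decisions[group == 1].mean() - decisions[group == 0].mean()

# Hypothetical approval decisions for eight applicants, four from each group.
decisions = np.array([1, 1, 1, 0, 1, 0, 0, 0])  # 1 = approved, 0 = rejected
groups = np.array([1, 1, 1, 1, 0, 0, 0, 0])     # protected attribute

print(f"parity gap: {parity_gap(decisions, groups):.0%}")  # 75% v 25% approval

A wide gap does not prove discrimination on its own, but it flags the system for further scrutiny; a fuller audit would also compare error rates between groups and examine the training data itself.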

As is typical of the relationship between law and technology, regulation lags behind innovation, with the adverse consequences surfacing only later. That said, attention on this subject is building and the regulatory framework is slowly developing, with data privacy regulations such as the European Union’s General Data Protection Regulation as a first step, imposing notice requirements – though this may not be sufficient. In the EU, guidelines on AI decision-making have been established: a decision cannot be made solely by AI, and human involvement and intervention are required as the final authority. There is a long road ahead in terms of regulation, with much improvement needed – varying levels of concern about the subject mean that patchy regulations and uncertain enforcement will remain the case for the foreseeable future.

An interesting way to look at the regulation of algorithms is to first consider the complete lifecycle of an algorithm, which generally includes: the creation stage, encompassing the design, implementation, analysis and testing phases; the deployment stage, encompassing the input, process and output phases; and finally, the decision based on the algorithm’s output. Solutions from a regulatory standpoint generally deal only with the latter stages of this process.

Paulina Silva, of Chilean firm Carey and Chair of the IBA Internet Business Subcommittee, said: ‘only after the discrimination has occurred will the regulation be triggered’.

Limitations of traditional anti-discrimination regulations still apply: proving discrimination is very difficult, and enforcement remains flawed. Proposed legislation such as the US Algorithmic Accountability Act of 2019 is interesting in that it would require companies subject to the Act to identify any bias resulting from their systems and to fix it.

Gender equality falls even further behind in Asia. Imbalances in male-female education levels and child marriage still mar many Asian countries and cultures, making it even harder for women to enter the workforce due to lack of education and child-rearing duties. This is also reflected in legislation across the region, where only four Asian countries have domestic violence laws.

Stefanie Yuen Thio, a managing partner at TSMP Law in Singapore, said ‘if we treat women like chattels, how are you going to treat your female colleagues? How are you going to empower your wife to go and work?’

In most Asian cultures, household and childcare responsibilities tend to fall to women. The pandemic has made this worse: female workers who usually rely on child-minders no longer have that option due to various health and movement restrictions. The result is that more and more women are leaving, or considering leaving, the workforce because of household responsibilities.

Stepping away from robots and algorithms, perhaps the real cure lies in education, equipping and empowering: instilling equal capabilities in women as well as men; preparing women for the expectations of household and career, with the tools to manage both; and empowering them with the confidence to learn and try new things, even when they are tentative at first.

Perhaps human bias needs to be addressed first, before it spills over into algorithmic bias.
