Regulators anxious about the role of AI in global finance

Jonathan Watson

It may be the latest big trend in the finance industry, but artificial intelligence (AI) is making regulators nervous. The Financial Stability Board (FSB), the international body that monitors and makes recommendations about the global financial system, has become the latest institution to voice its concerns in a report on the role of AI in finance.

Published in November, its report warns that market power could rest with only a few large AI technology firms supplying the industry, which in turn could lead to financial stability risks.

It also says major suppliers of AI and machine learning tools to financial services firms may ‘fall outside the regulatory perimeter or may not be familiar with applicable law’ – suggesting that software developers and regulators need to develop a better understanding of each other.

The report doesn’t claim that AI is entirely a bad thing. ‘The use of AI and machine learning [whereby computers are given the ability to learn without being explicitly programmed] in financial services may bring key benefits for financial stability in the form of efficiencies in the provision of financial services and regulatory and systemic risk surveillance,’ says the report. It can help, for example, with ‘more efficient processing of information on credit risks and lower-cost customer interaction’.

Andrus Ansip, the European Commissioner in charge of driving the European Union’s ‘digital single market’ project, also highlighted the possibilities of AI in a recent speech. ‘There is huge potential in technologies like AI, distributed ledgers, cloud computing and Internet of Things applications,’ he said. ‘These kinds of disruptive technologies promise cost-efficiencies as well as new opportunities for the financial sector.’

However, the use of AI does come with risks. One issue for the FSB is that AI and machine learning services are increasingly being offered by a small handful of large technology firms. ‘There is the potential for natural monopolies or oligopolies,’ the report adds. ‘These competition issues… could be translated into financial stability risks if and when such technology firms have a large market share in specific financial market segments.’ If one of them were to face major disruption or insolvency, there would be major repercussions in the world of finance.

Josh Hogan is head of the Financial Services Regulatory Group at McCann Fitzgerald and Young Lawyers Liaison Officer for the IBA Banking Law Committee. ‘While technological innovation in finance is of itself nothing new, there is a sense that the nature, scale and speed of what is happening now could be unusually disruptive,’ he says.

Hogan thinks that if we get to the stage where there is an over-concentration of market power in the hands of a few large technology firms central to AI, then the natural response of governments will be to intervene – either to break up the firms themselves (for example, on competition law grounds), to regulate them, or to impose additional requirements on regulated entities such as banks using the services of the AI firms.

‘I think it’s inconceivable that a large dependency could be allowed to develop on a few technology firms where the financial system would become vulnerable to difficulties with those firms,’ he says. ‘We already have ample evidence from the financial crisis of why it’s important to avoid a dependency on firms that are “too big to fail”.’

Chris Holder, a partner at Bristows and incoming Co-Chair of the IBA Technology Law Committee, believes it’s almost impossible to predict how the AI industry will evolve. ‘Most of the developments in this area are being driven by companies with fewer than ten employees,’ he says. ‘If they are successful, they may be acquired by much larger providers. But technology changes very quickly. IBM was around for years, then Microsoft became the dominant player, then Google. Who knows what will happen next? The dangers that come with large, dominant companies have always existed. It’s no different now.’

Holder believes that much of the anxiety about AI is based on a fundamental misunderstanding of what it really means. ‘There is a huge amount of hype and the term is being misused,’ he says. ‘AI at the moment is just a set of algorithms that interrogates data sets. It’s software – very clever software – that looks at data and spits out answers.’

Machines are not, as many mistakenly believe, making decisions or providing advice, he adds. ‘They are just interrogating data sets. They are only as good as the information that’s put in and the person who tries to interpret the results that come out.’
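Holder’s characterisation – software that looks at data and spits out answers, only as good as the information put in – can be illustrated with a deliberately minimal sketch. The data set, labels and method below are hypothetical and chosen for simplicity (a one-nearest-neighbour lookup), not drawn from any real system:

```python
# A minimal sketch of "software that interrogates a data set":
# a 1-nearest-neighbour credit-risk classifier over a toy, invented
# data set. The output simply echoes whichever past example the new
# applicant most resembles - it is only as good as the examples fed in
# and the person interpreting the result.

# (income in thousands, debt ratio %) -> previously observed outcome
HISTORY = [
    ((95, 10), "low risk"),
    ((80, 20), "low risk"),
    ((30, 60), "high risk"),
    ((25, 75), "high risk"),
]

def classify(applicant):
    """Label a new applicant with the outcome of its closest historical example."""
    def distance(point):
        return sum((a - b) ** 2 for a, b in zip(point, applicant))
    nearest = min(HISTORY, key=lambda row: distance(row[0]))
    return nearest[1]

print(classify((90, 15)))  # resembles the low-risk examples -> "low risk"
print(classify((28, 70)))  # resembles the high-risk examples -> "high risk"
```

If the historical examples were biased or mislabelled, the classifier would reproduce those flaws faithfully – which is precisely the ‘garbage in, garbage out’ point Holder is making.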

The FSB report also warns that major suppliers of AI and machine learning tools to financial services firms may ‘fall outside the regulatory perimeter or may not be familiar with applicable law and regulation’. In other words, software developers working on the latest technology may lack a sufficient understanding of regulation, while regulators don’t fully understand the technological developments taking place.

When machines do actually start making decisions for people and providing advice, that is the point when financial services regulators will have to get involved, says Holder. ‘Machines will have to be regulated. A lot of regulators are looking at this and trying to understand what AI is doing.’

Hogan says that while, in theory, the law and regulation governing financial services can and should be technology-neutral, there are certain features of AI that will probably mean special rules are needed. ‘For example, the updated Markets in Financial Instruments Directive regime (MiFID II), which will apply in the EU from January 2018, will introduce specific rules relating to algorithmic trading,’ he says. This is trading that uses computer algorithms to automatically determine parameters of orders such as whether to initiate the order, the timing, price, or how to manage the order after submission, with limited or no human intervention.
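The kind of code the MiFID II algorithmic-trading rules target can be sketched in a few lines. Everything here – the function name, thresholds and order fields – is illustrative only, not a real trading system: the point is that the software itself decides whether to initiate an order and on what terms, with no human in the loop:

```python
# Hypothetical sketch of an algorithmic trading rule of the kind MiFID II
# addresses: the code determines the order parameters (whether to trade,
# side, price, size) automatically from market data. All names and
# thresholds are invented for illustration.

def decide_order(last_price, moving_average, position):
    """Return an order dict, or None, based purely on the input data."""
    if last_price < 0.98 * moving_average and position <= 0:
        # Price has dipped 2% below trend: the algorithm initiates a buy,
        # choosing the timing, limit price and size itself.
        return {"side": "buy", "limit": last_price, "size": 100}
    if last_price > 1.02 * moving_average and position > 0:
        # Price has risen 2% above trend: exit the existing position.
        return {"side": "sell", "limit": last_price, "size": position}
    return None  # otherwise: no order is submitted

print(decide_order(97.5, 100.0, 0))   # dip below trend -> a buy order
print(decide_order(100.0, 100.0, 0))  # near trend -> no order (None)
```

Because every decision point sits in code rather than with a trader, regulators’ concern is with how such rules are tested, monitored and switched off – which is what MiFID II’s algorithmic-trading provisions require firms to demonstrate.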

International bodies and regulators, for the most part, appear to be adopting a wait-and-see approach, adds Hogan. ‘An example of current regulatory thinking can be seen in an August 2017 discussion paper from the European Banking Authority. A key point it makes is that existing risks that are currently deemed to be immaterial may be amplified through the use of financial technology.’