The ethics of new technologies in the legal profession



Carlos Valls Martinez
Augusta Abogados, Barcelona
c.valls@augustaabogados.com

 

The development of new technologies derived from computer science, such as artificial intelligence (AI), robotics and nanotechnology, has broadened our horizons in an extraordinary manner in terms of work. This technological progress can and should also be incorporated into the legal profession.

In the face of the expectations of what can be achieved through technology, some authors have urged caution (notably, Evgeny Morozov[1]). Several commentators have been particularly cautious in the field of autonomous AI systems, including the AI researcher Steve Omohundro;[2] he commented in 2014 that, if autonomous AI systems are programmed to operate in a strictly rational way, they can end up acting in an antisocial and harmful manner unless designed with great care. If a computer is programmed as a self-learning autonomous system to beat any human opponent at chess, it could transcend the mere game and protect itself even from being unplugged, choosing to neutralise the human being who tried to unplug it. Other authors (e.g. Eliezer Yudkowsky[3]) share Omohundro's opinion, insisting that the machines of the future must be conceived and created to be ‘friendly’ (i.e. benign), avoiding any risk of them becoming ‘unfriendly’ (i.e. hostile).

Indeed, in the academic community there is a growing call for realism and scientific reasoning about the possibilities of new technologies: since it has so far not been possible to demonstrate how far one can go – conscious machines, feeling machines or moral machines – there is a very fine line between science and mere belief (Vincent Müller[4]).

The specificity of legal services

The IBA's Professional Ethics Committee[5] has stated at various conferences and in articles that the specificity of our profession is a key element in assessing the degree to which new technologies should be incorporated into our daily work.

In particular, there remains a question over whether we should follow the trends or maintain a cautious position. This requires a careful look at the consequences that a particular application may involve, especially in the social context in which we are immersed (for example, if it can generate negative externalities).

This specificity of our profession is predicated on the existence of three factors that differentiate us from other professional services. Firstly, lawyers are in a position of asymmetry of information with respect to their clients, who cannot verify the quality of the service (a fact highlighted by the Harvard business professor Ashish Nanda[6]): clients cannot know, for example, whether the recommendation to initiate a lawsuit or to reach a settlement is in their best interests (or in the interest of the law firm advising them). This leads clients to place their trust in the professional; after all, the situation will be similar even if they ask for a second or third opinion. Consequently, the quality of the lawyer's recommendation cannot be checked ex ante (like that of the advertising creative), nor ex post (as can be done with the private banker or the architect).

Secondly, the legal profession plays a part in defending the client's interests against other citizens or against the public administration. It is an essential part of the rule of law, on which proper social coexistence depends. The social service that the legal profession renders in a democratic society demands prudence when considering the profession in its purely business aspect. Furthermore, the ethical need resulting from the asymmetry of information also leads us to think that there should be limits on considering our profession exclusively as a business: even more so if we consider that, for most users, going to a lawyer may be an isolated experience in the course of their life.

A third factor (not always properly assessed) is that our profession is the only one that in most cases faces an opposing party who will try to question the work we do, or neutralise it, by defending opposing interests.

New technologies and professional ethics:[7] raising issues to be considered in their application

Given the characteristics mentioned above, which require highly ethically motivated professionals for the proper functioning of social coexistence, the embrace of technological solutions should be framed within the pursuit of excellence, without putting any of the three factors at ethical risk.

In other words, such technological achievements should not denature the profession, since we would then be facing a different type of service (and any substitution of the profession should necessarily be debated in parliaments, following Ulrich Beck,[8] who demanded that technical issues of great social significance should not be confined to discussions between scientists and removed from political debate).

We do not yet know for sure what technological advancements will be achieved. For example, according to the mathematician Hannah Fry,[9] the reality of the autonomous car is far from being accomplished because of the extraordinary complexity of forecasting rare events, such as a child crossing the road unexpectedly. Therefore, we reaffirm that new developments and their application to the legal profession should be matched to the essential characteristics of our activity by formulating the right questions, in a threefold area:

  • the nature of our service (to avoid its denaturalisation);
  • the supply structure (to avoid the inefficiencies or risks derived from the creation of oligopolies); and
  • the perception of our service by the citizens.

Some of these issues are listed below.

New technologies and the nature of the profession

In relation to the essence of legal services, and taking AI as an example, we must ask ourselves the following questions:

  1. Will advice involving AI or other advanced technological solutions be considered more reliable than purely human advice?
  2. Will there be real transparency in the algorithms used by the software provider?
  3. Will we tend towards autonomous AI systems that replace a significant part of our work?
  4. Will we know how to assess the risks of such autonomous systems?
  5. Will lawyers also have to be trained in computer science, and may this accentuate the risk of professional denaturalisation?
  6. Will we incorporate robot judges, or robot lawyers appearing before human judges, and what social consequences can we predict from this?

Embracing technological solutions and the effects on supply structure

We will also have to analyse the effect of the integration of advanced technological solutions in the supply structure of technology providers and of the supply of legal services themselves.

We should ask ourselves whether there is a possibility that our sector will become capital intensive and, if so, whether we will have to design policies that allow easy access to the profession (by facilitating the corresponding financing) for any potential professional.

Alternatively, will we simply accept the consolidation of large legal operators? If an oligopolistic structure emerges in the supply of technological solutions, how will we avoid the risk of oligopolistic prices? Will we have free access to analyse their algorithms and the corresponding biases? And will such biases be discovered as they are applied (via trial and error in the market), or should the algorithms be standardised prior to use?

Moreover, how will potential conflicts of interest of the computer systems themselves be resolved (can the same AI solution be used for both plaintiff and defendant)?

The recent ITechLaw report on Responsible AI,[10] published during 2019 with the ambition of establishing a global framework, reflects on a number of additional issues such as the responsibility of professional secrecy, the security and reliability of AI, and industrial and intellectual property derived from the use of AI systems.

Risk of devaluing the perception of the added value of our services?

Finally, the very debate on the applicability to our profession of new technologies may inadvertently generate a risk of deterioration in the perception of our services by the public.

On the one hand, the technological promises of the future may foster the idea that we are currently going through a provisional stage of imperfect, and therefore less reliable, human intervention. On the other hand, there may be a temptation, driven by impatience, to downplay the added value that we provide by treating most of our activity as standardisable in order to make it more suitable for information technology (IT) processing. We must not forget that this ethical risk has its counterpart in its exact opposite: that we lawyers continue to treat as value-added activities tasks that are essentially standardised.

It is possible that the purpose of professional ethics in this area is to distinguish, at any given time, which tasks of the legal profession have become standard and which have not, in order to preserve and protect ‘human’ added value. This differentiation must be carried out on the basis of professional and ethical criteria, not exclusively business criteria, for the benefit of the rule of law, which we must abide by.



[1] Evgeny Morozov, La locura del solucionismo tecnológico (To Save Everything, Click Here) (Madrid: Katz, 2015).

[2] Steve Omohundro, ‘Autonomous technology and the greater human good’ (2014) 26:3 Journal of Experimental & Theoretical Artificial Intelligence, p 303.

[3] Eliezer Yudkowsky, ‘Artificial intelligence as a positive and negative factor in global risk’, in Nick Bostrom and Milan Ćirković (eds), Global Catastrophic Risks (Oxford: Oxford University Press, 2008).

[4] Vincent C Müller, ‘Risks of general artificial intelligence’ (2014) 26:3 Journal of Experimental & Theoretical Artificial Intelligence, p 297.

[5] International Bar Association, www.ibanet.org.

[6] Ashish Nanda, ‘The Essence of Professionalism: Managing Conflicts of Interest’, Harvard Law School, No 9-903-120, rev 29 December 2003.

[7] De la Torre distinguishes between professional ethics and deontology, pointing out that the latter has a concrete or explicit character, comprising the rules that apply to professionals at a given moment and whose non-compliance may constitute an infringement, while professional ethics consists not only in the application of general moral principles to the context of each profession, but also in identifying the internal goods that each of these activities should provide to society, what goals should be pursued and, therefore, what values and habits should be incorporated in each profession. Citing Augusto Hortal, de la Torre concludes that, without the ethical perspective, deontology is left without a reference horizon: that is to say, ethics proposes, but also asks for motivations. Francisco Javier de la Torre Díaz, Ética y deontología jurídica (Ethics and Legal Deontology) (Madrid: Dykinson, 2000), p 105.

[8] Ulrich Beck, La sociedad del riesgo (Risikogesellschaft - Auf dem Weg in eine andere Moderne) (Barcelona: Paidós, 1998).

[9] Hannah Fry, Hola mundo (Hello World: How to be Human in the Age of the Machine), (Barcelona: Blackie Books, 2019).

[10] Charles Morgan, Responsible AI: A Global Policy Framework (McLean: ITechLaw, 2019).