The risks and rewards of robotics

Arthur Piper, IBA Technology Correspondent
Thursday 18 March 2021

Advances in technology have created new possibilities for the use of robotics by medical practitioners. Global Insight considers the legal issues inherent in such technologies and the approaches being taken to overcome them.

A robotics revolution is under way in the UK’s National Health Service (NHS). In February 2020, Western General Hospital in Edinburgh and Milton Keynes University Hospital NHS Foundation Trust became the first European adopters of CMR Surgical’s Versius robot.

During operations with the machine, surgeons use two video game controller-style joysticks to perform keyhole surgery with the robot’s three or four independent arms. The system’s 3D visualisation lets them see inside the body while they operate controllers that mimic human hand movements. Surgery is more precise and less painful, and recovery times are shorter. Long procedures are also less strenuous for all concerned.


Lacking clear category definitions, regulators have often focused solely on individual technologies […] This threatens to create fragmented legal frameworks


While robot-assisted surgery is not new, this latest generation of machines promises to greatly broaden the procedures that can be tackled. Newcastle upon Tyne Hospitals NHS Foundation Trust has recently extended the use of robotics to eight surgical specialisms.

But a major obstacle has been – and remains – expense. A single robot can cost between £1m and £2m, and a study by Columbia University Irving Medical Center calculated that their use could add around $3,000 to the price of each operation.

The pandemic may be changing that. Surgeons are reportedly able to conduct procedures safely while maintaining social distancing – and the costs are beginning to make more sense because shorter hospital stays partly offset the initial expense.

No benchmarks

If proof were needed that adoption is gathering pace, the NHS Supply Chain – which manages the sourcing, delivery and supply of healthcare and food products to the NHS and to healthcare organisations in England and Wales – launched a new robotics framework at the height of the pandemic in December 2020.

The framework aims to standardise compliance procedures for purchasing. According to Antonia Marks, an NHS Supply Chain Procurement Director, this will create a smoother route for robots into the NHS and make prices more competitive.

Because such robots are complex, the legal issues are not likely to be clear cut. When the French social theorist Bruno Latour began writing about technologies in the 1990s, he hit upon a useful explanatory idea that shows why this is the case. Robotic surgery is made up of many actors; some are human, others are not. This ‘actor network’ comprises cutting devices, electronic systems, software, software engineers, computer screens, robotic arms, the patient, surgeons, nurses, the hospital building and so on.

If something goes wrong, ascribing liability means unpicking that network to see where the blame lies. And this can be particularly difficult with novel technologies, which can act unpredictably.

For example, one of the known issues with some robotic devices used in surgery is latency – the delay between the movements of the surgeon’s hand and the response of the instrument inside the patient. Under ordinary circumstances, latency is not a problem, and the machines can even compensate for slight unsteadiness in a surgeon’s hands.
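
That compensation and the delay are two sides of the same trade-off: smoothing out tremor means the instrument follows the hand with a slight lag. The sketch below is purely illustrative – it is not any manufacturer’s actual control code – and assumes a simple exponential-moving-average filter applied to a made-up hand trajectory, with all names and parameters hypothetical, to show how stronger smoothing produces a measurable delay.

```python
# Purely illustrative sketch, not any manufacturer's control code: it assumes
# an exponential-moving-average filter as a stand-in for the motion smoothing
# a surgical robot might apply to a surgeon's hand input. Smoothing damps
# high-frequency tremor, but it also makes the instrument respond slightly
# after the hand moves - the latency discussed above.
import math


def smooth(samples, alpha=0.05):
    """Exponential moving average: lower alpha = stronger smoothing, more lag."""
    filtered = []
    state = samples[0]
    for x in samples:
        state = alpha * x + (1 - alpha) * state
        filtered.append(state)
    return filtered


def first_crossing(times, signal, threshold=0.5):
    """Return the first time at which the signal exceeds the threshold."""
    return next(t for t, x in zip(times, signal) if x > threshold)


# Hypothetical hand trajectory sampled at 1 kHz: a deliberate step movement
# at t = 0.5 s, plus a small 10 Hz tremor.
dt = 0.001
times = [i * dt for i in range(2000)]
hand = [(0.0 if t < 0.5 else 1.0) + 0.02 * math.sin(2 * math.pi * 10 * t) for t in times]
tool = smooth(hand)

lag_ms = (first_crossing(times, tool) - first_crossing(times, hand)) * 1000
print(f"illustrative lag between hand and instrument: {lag_ms:.0f} ms")
```

Running the sketch prints a rough millisecond figure for the lag under these made-up parameters; lowering alpha (stronger smoothing) makes the instrument trail the hand for longer.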

Yet when a sudden, potentially life-threatening problem occurs during a procedure, even a small delay in the surgeon’s ability to respond can cause more physical damage than in non-assisted surgery. This raises questions over where liability could lie in the case of an accidental death or serious injury.

In addition, standard guidelines for implementation are often missing or patchy. In 2015, Stephen Pettitt underwent the UK’s first robot-assisted heart valve surgery at the Freeman Hospital in Newcastle upon Tyne. He suffered multiple organ failure and died days after the procedure. Coroner Karen Dilks, who heard the case in 2018, said that the retired music teacher would have had a 98-99 per cent chance of survival following a conventional operation.

In this case, one of the main contributing factors was that the surgeon had not undertaken adequate training. In fact, the coroner said that there was an ‘absence of any benchmark’ for training on new technologies in healthcare. In addition, two technical experts – known as proctors – left part way through the operation. They ought to have been on hand to advise on complications relating to the operation.

These examples illustrate that the range of potential risk factors for complex systems is extremely broad.

Implement first, regulate later

It is a fact of technological innovation that novel ‘actor networks’ come into existence before the legal and ethical frameworks that attempt to regulate their uses.

Maria Pia Sacco and Anurag Bana are both Senior Project Lawyers in the IBA’s Legal Policy and Research Unit (LPRU). Sacco is the Chair of the Association’s Working Group on AI, while Bana is a group member. Bana acknowledges that there are likely to be no quick conceptual fixes in such a complex area. In the field of artificial intelligence (AI), for instance, it is difficult to disentangle all of the potential elements in a complex technology in a clear-cut way, Sacco says. That is partly because there is no universally agreed definition of AI, and partly because the boundaries between data, software and (sometimes) hardware are blurred by their close interdependence.

Lacking clear category definitions, regulators have often focused solely on individual technologies such as facial recognition, nanotechnologies or driverless cars. This approach threatens to create fragmented legal frameworks around the world that may overlook potentially bigger issues in favour of more localised – sometimes knee-jerk – reactions to a specific technology (see Automated facial recognition technology comes of age, in IBA Global Insight August–September 2019).

Ethical guidelines, on the other hand, have taken a more general approach, in which broad groups of technologies – such as AI – have been treated with little acknowledgement of their real-life applications, Sacco suggests.

The LPRU is trying to square this circle with its recent work on AI and its potential impact on human rights. Instead of focusing on a purely ethics-based approach, it has suggested that businesses need to undertake human rights due diligence in their AI projects. That would mean mapping the potential impacts of those technologies not just onto ethical guidelines, but also onto legal instruments, to foster both accountability and liability.

‘Due diligence is flexible and dynamic and can take account of both the broad features of technologies and the way risks may change, as well as how they are deployed in individual cases and sectors,’ says Sacco.

Done properly, due diligence would help to define the relationships between manufacturers, designers, software writers, training providers and end users – and even, in the healthcare sector, between government and care providers. Working together, they would be able to map risks across the entire network and potentially avoid some of the more harmful impacts of the technology.

It is an approach that may both broaden understanding of the complex network of actors that advanced technologies – such as surgical robots – entail, and help ensure that the benefits of such life-saving devices are not outweighed by their potentially hidden risks.

Arthur Piper is a freelance journalist and can be contacted at arthurpiper@mac.com.

The LPRU’s work on AI and its potential impact on human rights is available here.

Header pic: Shutterstock.com / Corona Borealis Studio