Science & Tech
19 February 2022

Zoom trials and kitten lawyers: inside the e-justice revolution

Virtual courts have come a long way since the Texan advocate who couldn’t take a cat filter off his face. But will remote and AI-assisted justice improve access – or threaten human rights?

By Laura Spinney

On 18 May 2020, Judge Emily Miskel heard a run-of-the-mill case concerning a disputed insurance payout for wind and hail damage to a building. Normally she would have expected little public interest, but that day around 1,200 people watched the proceedings in Collin County, Texas – not from inside the courtroom, which was closed by the pandemic, but via YouTube. The case had been billed as the world’s first virtual jury trial, with all participants communicating via Zoom.

Forty-year-old Miskel studied engineering before she took up law, and she is a natural problem solver. But it had been a battle to persuade a conservative legal establishment to agree to the experiment: there were concerns over access, and fears that a virtual environment would dilute the formality of the court – and hence its legitimacy. In the end, there seemed little alternative. Collin County handles a high volume of cases, and judges know that when official channels are blocked (in this case by Covid-19), people can be tempted to dispense justice themselves. Meanwhile, their backlog was growing. “To shut down even for a week is hard. If I took a month off, I’d never catch up,” says Miskel.

Miskel’s virtual trial, part of a larger sea change in the way justice is being delivered, was deemed a success – even earning her a prize for judicial excellence. “The jurors felt that it was a better experience,” she tells me. The lawyers, meanwhile, were pleasantly surprised by how forthcoming people were during jury selection, something they attributed to the platform. “You don’t have to stand up in front of a group of strangers,” Miskel explains. “You’re in a comfortable environment on your own device.”

The world has been moving towards “e-justice” for decades now, but Covid-19 has accelerated a shift towards virtual platforms and an increasing use of artificial intelligence (AI). From the spring of 2020, most countries moved their courts online and the advantages of e-filing legal documents became obvious – especially when it was thought, wrongly, that contaminated surfaces were a major path of viral transmission (the virus can be transmitted via surfaces, but this is rare). Within six months, lawyers who had been viscerally opposed to video hearings were expressing high levels of satisfaction.

The author and legal scholar Richard Susskind, who advises the lord chief justice of England and Wales on technology, tells me that “minds have been changed. Many people are of the view that we will never go back.” This is a good thing, he argues, when too few people have access to justice. A 2020 survey by the Legal Services Board showed that 31 per cent of Britons who had a legal problem did not get adequate help, and elsewhere the problem is worse. The Organisation for Economic Co-operation and Development (OECD) estimates that 5.1 billion people, or two-thirds of the world’s population, lack meaningful access to justice – a figure Susskind thinks is shameful. He sees technology as the key to improving access – to making justice faster, cheaper and more intelligible to the layperson. So far there has been automation without transformation, he says, but a revolution is coming: “by 2030 I think we’ll see very substantial change. The Twenties is the decade during which artificial intelligence will have great impact on the courts.”

That doesn’t mean robot judges, Susskind explains, but it does mean redefining court as a service rather than a place. It means online legal help for users, and courts that operate remotely – synchronously via audio or video, or asynchronously via a secure online platform where documents can be exchanged. It means legal professionals outsourcing routine tasks to AI tools that do them better, such as transcribing speech or handwriting. And it means a higher standard of technical competence will be required in court – as illustrated by the case of the Texan lawyer who proved unable to remove a kitten filter from his face during a remote hearing in February 2021.

With increased efficiency come challenges: concerns have been raised about governments expanding their powers of surveillance, and the potential erosion of the right to a fair trial. One of the more controversial developments has been “predictive justice”, or the mining of legal and other data to guide judicial decisions. This is already happening in the UK: since 2013, the Offender Assessment System (OASys) has used a combination of actuarial risk-assessment tools and human judgment to gauge the risk of re-offending, or of an offender harming themselves or others. Similar systems are used in the US, Spain, the Netherlands and elsewhere. Algorithms have been shown to demonstrate bias, but then so have judges and juries: which will prove easier to detect and correct?

The e-justice revolution has been spearheaded by China and the US, which have embraced the technology in different ways. In the decentralised US court system, change has been piecemeal and bottom-up. The first remote hearings were held in the Eighties with closed-circuit television, and e-filing has been accepted by some courts since the late Nineties. But evolution has been uneven, as Jim McMillan of the US National Center for State Courts in Williamsburg, Virginia, explains: “the willingness of courts to change is often greatly affected by their customers – the legal professionals – and the tech industry that serves them.”

China did not recognise electronic evidence until 2012, but since then has embraced all aspects of e-justice with the enthusiastic backing of its highest court, the Supreme People’s Court. Both countries have harnessed AI not only for time-consuming tasks such as transcription, but also for more sophisticated and sensitive ones such as mining data for informative patterns – social trends in the types of crime being committed, say, or patterns in judicial decision-making. “We may not have entered the era of robo-judges yet,” McMillan says, “but we’ve certainly entered the era of robo-clerks.”

Europe (and the UK, which remains roughly aligned) has yet to embrace AI to the same extent. Gregor Strojin, who chairs the Council of Europe’s Ad hoc Committee on Artificial Intelligence, tells me this comes down to two factors: Europe’s modest technology industry compared with those of the US and China; and constitutional checks and balances that are the product of the continent’s brushes with authoritarianism. In general, Europe places a higher value on privacy than the US, he says, and a higher value on individual rights than China. “It is obvious that technology can provide efficiency,” Strojin says. “But we’ve seen in World War Two that mass killing can be efficient as well.”

China’s courts may have been the best prepared when the pandemic hit. In spring 2021, the legal scholar Wang Zhuhao of the China University of Political Science and Law described in an American journal how the Supreme People’s Court had been aggressively pushing the development of specialised legal AI. This forms part of an “intelligent court” project – an online platform linking judges across China and generating large amounts of data on their activity. The tech is being developed in collaboration with universities and industry, and blockchain services are gaining ground: these ensure the provenance and security (“chain of custody”) of e-evidence. Three “internet courts” were already in operation, dedicated to the resolution of internet-related civil disputes (one recent case involved the infringement of the Peppa Pig copyright by two Chinese companies). Remote hearing rooms have also been built in jails for use in criminal cases.

Experimental AI designed to support judges in criminal cases was also in development. A tool called “System 206”, used in Shanghai since 2019, learns from past case data to evaluate whether evidence meets the required standard of proof; it can also identify evidence that was obtained unlawfully – for example, if its speech transcription reveals that illegal interrogation methods were used. Such tools – as well as more recent ones that suggest charges and sentences based on case descriptions – are still too error-prone to be routinely useful, Wang says, but they are improving all the time. “Media reports that legal AI has replaced human judges aren’t true,” he tells me. “But judges are AI-assisted.”

This has relieved pressure on overworked lawyers, Wang says, and increased public trust, thanks to the portals built into parts of the intelligent court. It has helped that the supreme court has released its own analyses of court data: in 2019, the chief justice Zhou Qiang revealed that 74 per cent of divorces in China were filed by women, often after only three years of marriage – not, as was widely believed, by men suffering a “seven-year itch”.

But there are concerns, too. China has a poor record on human rights, and its supreme court is a branch of its government. This means that increased access to justice has been mirrored by increased surveillance potential, and possible further infringements of individual rights. Some courts serve e-documents via mobile phone, for example, using software that locks the phone if the owner doesn’t confirm receipt. Another concern is the judiciary’s new reliance on tech giants, which are frequently implicated in the kind of dispute the internet courts were set up to resolve.

Some of these issues are specifically Chinese; others aren’t. Predictive justice is a divisive issue everywhere. Companies such as Solomonic in the UK and the Los Angeles-based Gavelytics now routinely mine judicial data for trends that might help lawyers litigate more successfully. Solomonic extracts general trends; you can learn what proportion of cases like yours settled out of court, for example, and how long it took. The company’s co-founder, London-based barrister Gideon Cohen, sees it as a form of “levelling up”: better-informed litigants are more likely to settle out of court, he tells me, making the process cheaper and quicker. Meanwhile, looser US privacy laws mean that a company like Gavelytics can “reveal the behaviours and tendencies of judges, law firms, lawyers, and litigants”. As a result, Gavelytics claims, its clients are better placed to “win more motions, win more cases and win more business”.

Should we worry about litigants gaming the system in this way? Geoffrey Vos, who as master of the rolls heads the civil justice system in England and Wales, sees increased transparency as a benefit for everyone. “If we’re going to run the justice system well, we need to know what happens within it,” he tells me. “It’s a public process open to public scrutiny, and so it should be.” In Texas, Emily Miskel agrees. Judges are human and prone to error, she says, citing a widely publicised, decade-old study suggesting that they are more lenient after lunch than before. She’s happy to have her decisions publicly analysed, because if it means litigants can game her, it also means she can improve. “If people uncover a bias that I’m not aware I have, then I can fix it,” Miskel says. Others are far more sceptical. In 2019, France banned the publication of statistics on judges’ decisions, threatening up to five years in jail for anyone who transgressed.

Algorithms can be biased too, sometimes disastrously so. In January 2021, the Dutch government was forced to resign after an algorithm it used to detect misuse of the benefit system mistakenly identified more than 20,000 parents as fraudsters, a disproportionate number of whom came from immigrant backgrounds. Even after it resigned, the government refused to disclose the algorithm, on the grounds that doing so would allow real fraudsters to exploit the system. If judicial decision-making is to be supported by AI, Strojin tells me, then the people affected need to know when it is happening. At the moment, there is little or no accountability. “We need an obligation for states to disclose what tools they are using,” Strojin says.

For Griff Ferris, a London-based legal and policy officer with the human rights non-governmental organisation (NGO) Fair Trials, this should be a minimum requirement. Last September, Fair Trials released a report in which it called for more safeguards around many legal applications of AI in Europe, and a complete prohibition on those involving the prediction of future behaviour – including re-offending. The report cautions against an over-reliance on flawed data: “[the] data used to create, train and operate AI and ADM [automatic decision-making] systems is reflective of systemic, institutional and societal biases which result in Black people, Roma, and other minoritised ethnic people being overpoliced and disproportionately detained and imprisoned.” Fair Trials argues that, far from improving efficiency, AI tools bake injustice into the process and infringe fundamental rights – to privacy, liberty, a fair trial. “There is a deep unfairness in the use of systems which aim to predict crimes someone has yet to commit,” Ferris says. “It’s completely incompatible with the presumption of innocence.”

One pre-pandemic concern looks as if it may have been unfounded. Some feared that people would be disenfranchised due to a lack of digital access but, anecdotally at least, the opposite seems to have happened. Once litigants were relieved of travelling, and of the formality and inconvenience of physical court appearances, Miskel tells me she started seeing a broader cross-section of society. Jury panels also tended to be more diverse, and were perceived to be more representative. “More people have access to the technology of a smartphone than can afford a working car,” she says.

For Richard Susskind, the pandemic ushered in a global experiment in how best to deliver justice, and as in any experiment, data needs to be collected. Evidence-based justice won’t be black or white, traditional or virtual, he says: it will mean working out which cases are suited to which formats. In a jury trial, say, is it better that jurors are separated – each one tuning in from their respective home – or together in a space removed from the rest of the trial, as in the Scottish trials that, starting in September 2020 and continuing today, see jurors gather in cinemas?

Miskel is in favour of what she calls “behavioural informality” if it improves the flow of information, but insists on the importance of legal formality – for example, the procedures for ensuring that people affected by a lawsuit are informed about it: “that cannot be relaxed, ever.” She thinks the right balance can be struck, but here she and other modernising judges run headlong into an old debate over the importance of a court’s “majesty”.

A traditional courtroom is laden with symbols of the power the state has invested in it, and its proceedings are highly ritualised. Whether majesty is required, or mere solemnity will do, is a point of divergence; but most lawyers agree that those rituals remind those present of what’s at stake, and that they bear personal responsibility for the outcome. They also shape public perceptions as to whether justice has been done. “People need to perceive the procedures as fair and objective,” says Strojin. “There is a certain ritual that helps with that.”

In 2020, the Institute for Criminal Justice Reform (ICJR) in Jakarta reported that almost twice as many people were sentenced to death in Indonesia after the pandemic forced hearings online as in the same period the previous year (87 people in 2020, against 48 in 2019). ICJR researcher Iftitah Sari tells me that, while it’s hard to determine the causes of the increase, it is particularly concerning given the evidence that virtual justice can harm a defendant’s right to a fair trial – at least in Indonesia.

“The biggest concern is access to a lawyer, which becomes restricted due to prison lockdown,” Sari says. “Also, during an online trial, defendants are unable to physically examine the evidence submitted by the prosecutor.” Her fears have been echoed by a number of NGOs, including Harm Reduction International, whose 2021 annual report highlighted death sentences delivered by Zoom or WhatsApp in Indonesia and Singapore. These trials were often hampered by frozen screens, an inability to confer with lawyers, and insecure links that meant the public, including family, were unable to watch proceedings. “The use of virtual platforms to conduct criminal proceedings… can expose the defendant to significant violations of their fair trial rights and impinge on the quality of the defence,” the report concluded.

As more trials move online, which rituals will remain essential to upholding justice, and which will become redundant? It’s a question people were asking even before the pandemic, says criminologist Meredith Rossner of the Australian National University in Canberra. In Kenya, Zimbabwe and other African countries, lawyers have questioned why they still wear reminders of colonial history in the form of wigs and gowns. In Australia, the coat of arms in Queensland’s Supreme Court has been replaced with a work by the well-known Indigenous artist Sally Gabori.

In Italy and Austria, however, judges still sit in front of a crucifix, “reminding viewers of a brutal form of capital punishment practised by the Romans”, as Rossner and colleagues put it in an academic paper last year. And when, months before the pandemic, a video hearing system was piloted in the UK, a coat of arms was always visible.

“The shift to the virtual is a moment of opportunity to rethink traditional procedures and ways of interacting that are potentially alienating for laypeople,” says Rossner, who was involved in the UK pilot. Susskind agrees that the pandemic experiment represents a turning point. He urges lawyers to follow the evidence, industrialising what works and discarding what doesn’t. “The reputation of the justice system is at stake,” he warns.

The online shift appears to be improving access already, and helping to clear backlogs. In the first few months of the pandemic, both magistrates’ courts and the crown court in the UK were reduced to hearing urgent cases only. But more than 3,000 virtual courtrooms have been set up since March 2020, via a national cloud video platform, and in the following year 990,000 virtual hearings were held, compared with 630,000 face-to-face ones.

The number of outstanding cases has dropped by around 80,000 in the magistrates’ courts since a peak in July 2020, a Ministry of Justice spokesperson tells me; meanwhile the caseload in the crown court has stabilised. The economic and social distress caused by long delays in the justice system shouldn’t be underestimated, Geoffrey Vos says, which is partly why most civil judges view the shift to virtual hearings positively: “as time went on, most, if not all, became very comfortable with it.”

But if the modernisers are confident that the bulk of court business can be taken care of online, they hesitate when it comes to serious crimes such as murder, rape and armed robbery – which in England and Wales tend to come before the crown court. It’s only a matter of time before China holds a virtual criminal trial, Wang tells me, but the technology isn’t up to it yet. Miskel thinks it is likely to be the last thing to give in the US, where the constitution requires that a person face their accuser, and the Supreme Court has ruled – for now – that this means they share a physical space (this is not a requirement in the UK, or in many other countries).

The backlogs in serious criminal cases should force a rethink, Susskind argues. (The Ministry of Justice reports that nearly 60,000 criminal cases were outstanding at the crown court in 2020-21, the latest period measured, compared with 42,000 in 2016-17.) Tech is likely to be part of the answer, he says, but the procedures need an overhaul, too.

Rossner agrees. She’s optimistic that justice can be improved when it comes to serious crime – and that AI will one day play its part. But depriving someone of liberty or life is the most serious thing a court does, she cautions: “I don’t think that criminal trials is where we want to experiment with an imperfect technology and an imperfect procedure.” With two-thirds of the world’s population denied access to justice, an e-justice revolution is long overdue – but it can’t come at the price of an irreparable mistake.
