

18th August 2021

Professor Suzanne Rab on how AI is changing justice and the workplace – and why it threatens humanity

Advances in technology, brought to the fore during and in the wake of COVID-19, have reignited the debate about how such developments may remove barriers connected with access to justice. The rise of artificial intelligence or “AI” promises significant advances for humankind. As both a barrister specialising in human rights and an educator, I see the opportunities and the challenges. One area as yet underexplored is whether our humanity is being lost in this process.

The technological advances I observe build on AI as a discrete field, which has its origins in a workshop organised by John McCarthy at Dartmouth College in 1956. The aim of the workshop was to explore how machines could be used to simulate human intelligence. Various disciplines contribute to AI, including computer science, economics, linguistics, mathematics, statistics, evolutionary biology, neuroscience and psychology. A useful starting point is the definition offered by Russell and Norvig in 2010, which describes AI as computers or machines that seek to act rationally, think rationally, act like a human, or think like a human (see Box A below).

Artificial Intelligence (AI) is characterised by four features:

Acting rationally: AI is designed to achieve goals via perception and taking action as a result

Thinking rationally: AI is designed to logically solve problems, make inferences and optimise outcomes

Acting like a human: This form of intelligence was later popularised as the ‘Turing Test’, which involves a test of natural language processing, knowledge representation, automated reasoning and learning

Thinking like a human: Inspired by cognitive science, Nilsson defined AI as “that activity devoted to making machines intelligent, and intelligence is that quality that enables an entity to function appropriately and with foresight in its environment”  
Box A. What exactly is Artificial Intelligence?

So how does the above apply to the law? An effective civil justice system supports and upholds the rule of law, where the law must be fair, accessible and enforceable. Yet, as things stand, there are well-documented barriers to accessing justice. In England and Wales, the Legal Services Research Centre (LSRC) commissioned a series of surveys between 2001 and 2011, inviting more than 5,000 participants to explore whether they had experienced problems in accessing justice. Cost is a major barrier: the LSRC found that less than 30 per cent of individuals who recognised that they had a legal problem sought formal advice (LSRC, 2011). There are other, non-financial barriers, including mental health problems, immigration status and discrimination.

Technological Breakthroughs

AI and other advances in technology have been used extensively in legal practice. They provide opportunities to deliver and access legal services in ways previously unimaginable, and represent the nearest that the legal world has come to science fiction.

Predictive analysis draws on big data to forecast the outcome of a case and to advise clients whether to proceed, effectively substituting for an individual lawyer’s experience, assessment and intuition. The term ‘big data’ has been coined for the aggregation, analysis and increasing value of vast exploitable datasets of unstructured and structured digital information. Decisions founded on such tools could produce outcomes which are much cheaper than pursuing cases with limited prospects of success.
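To make the idea concrete, the sketch below shows, in purely illustrative terms, how a case-outcome predictor of this kind might be trained on past decisions. The file name, the case features and the choice of a simple logistic regression are assumptions made for illustration; they do not describe any particular legal-tech product.

```python
# Illustrative sketch only: a toy case-outcome predictor.
# The CSV file, column names and features are hypothetical.
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# Hypothetical dataset of past reported cases: one row per case,
# with coded features and whether the claimant succeeded.
cases = pd.read_csv("reported_cases.csv")
features = ["claim_value", "court_level", "area_of_law", "days_to_hearing"]
X = pd.get_dummies(cases[features], columns=["court_level", "area_of_law"])
y = cases["claimant_won"]  # 1 if the claimant succeeded, 0 otherwise

# Hold out a test set so the model is scored on cases it has not seen.
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=0
)

model = LogisticRegression(max_iter=1000)
model.fit(X_train, y_train)

# The model outputs a probability of success for a new dispute, which a
# lawyer might weigh alongside experience and intuition, not instead of them.
print("Held-out accuracy:", model.score(X_test, y_test))
```

Even in this toy form, the sketch illustrates the limitation discussed next: the model can only learn from the reported cases it is given, and inherits whatever gaps and biases that dataset contains.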

However, this is unlikely to be a silver bullet. Using predictive analysis to assess whether a case is likely to succeed may be inaccurate for a number of reasons. First, so many cases are decided out of court that predictive analysis based on reported cases will cover only a small subset of actual disputes. The accuracy and value of AI also depend on how the software is programmed, and machines learn bias from past experience, which can distort the data collected. Relying on predictive analysis to advise clients whether to proceed (potentially saving time and money if a case is unlikely to succeed) may therefore be flawed for want of a statistically significant dataset. Secondly, inconsistencies in algorithms could mean that critical data is not being collected. Thirdly, the software may not be able to work out the finer subtleties and variations involved in some cases; relying on predictive analysis to advise clients in such cases may be flawed because it misses the ‘human factor’.

Virtual solutions allow cheaper access to alternative dispute resolution (ADR), and a number of innovations can be observed in online mediations, arbitrations and hybrid early neutral evaluations. Advances in technology have also unleashed automated document generation and information provided via chatbots, offering free or cheaper access to legal information.

New means of searching for law are emerging. ROSS Intelligence was developed to free up lawyers’ time so they could devote it to other tasks, potentially pro bono. DoNotPay represents another channel for delivering free legal advice. This chatbot was invented in the UK by Joshua Browder; by March 2017 it had helped users overturn 200,000 parking fines in London and New York. There are, however, practical limitations to chatbots in more complex areas of law, and lawyers may be unable to audit the accuracy of forms submitted online (and update them when required).

New Opportunities

While it may be difficult to contemplate, at least at present, that machines will replace lawyers, developments in technology have the potential to reshape some parts of legal practice. Although this raises a number of legal, moral and ethical issues, the phenomenon opens up new vistas and opportunities. For consumers, these innovations allow greater and more diverse access to legal services.

Given the need to be well versed in technology to secure effective outcomes, it may be asked whether, and to what extent, it would be useful in technology-led dispute resolution for members of the judiciary to undertake legal technology programmes. Related to this is the question of how the judiciary can leverage the support of law schools to develop such executive learning programmes.

COVID-19 has shown that the legal sector lags in digitisation, despite its ambition to enter the digital age. Law schools that have developed online learning will be able to transfer their head start to support the judiciary, but there also needs to be investment in systems. While that is happening, support can be given through legal technology skills training. This will help at the skills level and also assist in overcoming any technology phobia or reticence. On the whole, in the author’s view, the experience in England and Wales has been positive in terms of the alacrity with which the judiciary has embraced technology.

A related issue in terms of capacity-building and skills adoption concerns access to the underlying technology and infrastructure. The ideal of high-speed internet access within and across jurisdictions is not universal. COVID-19 has revealed the disparities in access to affordable, consistent and reliable internet within and between nations. As the daughter of a diaspora, I do not forget my roots in the Indian subcontinent. It is not only the judiciary: most lawyers and clients in India do not have access to high-speed internet. Where courts do not have the infrastructure for online hearings, this simply means that trials do not take place, adding to backlogs. There are anecdotal examples of cases being filed using WhatsApp. The judiciary and practitioners can perhaps work not just with law schools but also with engineering and software departments to pilot online filing software, and then build relevant executive programmes around it.

Humanising Legal Education and Practice in a World of Hi-Tech

Information, and access to information, are critical to knowledge acquisition, education and human development. Lockdown and social distancing during and in the wake of COVID-19 have meant that information technology devices have taken on a new or increased significance. Computers have kept the wheels of business and social discourse turning, and for many they have been the main or only source of information on everything from the weather to the availability and safety of vaccines.

This umbilical attachment to technology in the quest for knowledge and connection raises questions about the need for a new equilibrium between protecting individual freedoms and wider national interests in the context of the global digital information society. AI is being used in almost every area of life, from fintech to robotics and telecoms (see Box B on AI and fintech, Box C on AI and robotics, and Box D on AI and telecoms).

AI and Fintech

Box B. AI and financial services

AI and Robotics

Box C. AI and Robotics

AI and Telecoms

Box D. AI and Telecoms

A balance has to be struck with sensitivity to respect for human rights including private and family life, home and correspondence, the peaceful enjoyment of possessions, freedom of thought, conscience and religion, and freedom of expression among other rights.  Freedom of expression includes the right to receive and impart information and freedom from discrimination in the exercise of such rights, while recognising that the exercise of these rights carries duties and responsibilities.

The European Convention on Human Rights and other international instruments set out minimum conditions for the legitimacy of any interference with individual rights. Broadly speaking, any interference with fundamental rights must be prescribed by law and necessary in a democratic society in the interests of national security, public safety or the economic wellbeing of the country, for the prevention of disorder or crime, for the protection of health or morals, or for the protection of the rights and freedoms of others.

It is hard to dispute that there has been a seismic shift in the development of technology both before and throughout COVID-19. This shift has in some respects mitigated some of the worst shocks of dealing with the immediate emergency, yet it raises a question as to how, if at all, it has affected our humanity.

December 2018 heralded The TransHuman Code in Shenzhen, China. This was described as: “informing and engaging all citizens of the world about the dynamic influences of technology in our personal, communal and professional lives, The TransHuman Code was formed to redefine the hierarchy of our needs and how we will meet them in the future”. Further endorsement followed with “The TransHuman Code Davos Gathering of Minds” at the World Economic Forum in January 2019. This event introduced the world’s first digital “person” and first digital book signing. In May 2019, the Organisation for Economic Co-operation and Development (OECD) published the first internationally agreed principles on human-centred, trustworthy AI, reflecting the democratic values of OECD members.

Information and knowledge (whether formal education or ‘fake news’), built on minimal or cheap labour, where it does not reflect the cherished values of the rule of law and fundamental rights and where it is used for oppression or excessive profit, is a threat to our humanity. While the internet knows no geographic boundaries, human rights protection in this borderless hi-tech world remains largely a matter for individual states, and this is perhaps the next existential threat beyond COVID-19.

If you want to know more about these summary findings, and further research projects in the area, as well as upcoming publications, contact Suzanne Rab (E. srab@serlecourt.co.uk; M. +44(0) 7557 046522).

Professor Suzanne Rab is a barrister at Serle Court Chambers specialising in regulatory and education law. She is Professor of Commercial Law at Brunel University London, a law lecturer at the University of Oxford, and Visiting Professor at Imperial College London. She is an expert panel member of the UK Regulators Network, a member of Council of the Regulatory Policy Institute and a non-executive director of the Legal Aid Agency.
