This was a fascinating conference organised by the Law Society which I attended on 21 June 2016.
Professor Katie Atkinson (pictured below), Head of Liverpool University’s Department of Computer Science, said that across ‘a body of case law’ covering 32 cases, the programs had a 96% success rate, getting only one case wrong. Atkinson said she saw the technique as a ‘decision support tool’ to help make reasoning ‘faster, more efficient and consistent’, assimilating data over time ‘so it will be there to help and support with the reasoning’.
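To give a flavour of what a ‘decision support tool’ of this kind might do, here is a minimal sketch of factor-based case matching: precedents are represented as sets of legal factors with known outcomes, and a new case takes the outcome of its most similar precedent. This is purely illustrative, it is not the University of Liverpool’s actual system, and all factor names and outcomes are invented.

```python
# Toy sketch of precedent matching for outcome prediction.
# NOT the Liverpool research system; factors are hypothetical.

def jaccard(a: set, b: set) -> float:
    """Similarity between two factor sets, from 0.0 to 1.0."""
    return len(a & b) / len(a | b) if (a | b) else 0.0

def predict(precedents: list[dict], case_factors: set) -> str:
    """Return the outcome of the precedent most similar to the new case."""
    best = max(precedents, key=lambda p: jaccard(p["factors"], case_factors))
    return best["outcome"]

# Hypothetical decided cases, each reduced to a set of factors.
precedents = [
    {"factors": {"duty_owed", "breach", "loss_shown"}, "outcome": "claimant"},
    {"factors": {"duty_owed", "no_loss"},              "outcome": "defendant"},
    {"factors": {"breach", "no_loss"},                 "outcome": "defendant"},
]

print(predict(precedents, {"duty_owed", "breach", "loss_shown"}))  # -> claimant
```

A real system would of course use far richer case representations and learned weights, but the basic idea, reasoning from the facts of decided cases to a predicted outcome, is the same.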
Law Society President Jonathan Smithers said: ‘The avenues for rights and redress used to be narrow, used to be restricted to people, but they are widening. This expansion, however, does not necessarily mean that robots will be recognised as legal persons or that they will automatically have rights.’
Smithers said it was essential that the law consider the legal effects of current uses of machine-learning technology. Questions of tort liability must be answered for technologies such as drones, the digital currency Bitcoin and driverless cars, he added.
‘There are increased pressures on developers and companies to make new forms of technology available more quickly in order to maximise commercial opportunities. Inevitably this brings risks that need to be identified and must be mitigated.’
But even though lawyers will be needed to provide ‘sound, robust and evidence-based’ answers to questions, they cannot become complacent, Smithers warned. ‘The new uses of machine learning and artificial intelligence show that technology has evolved from science fiction to science fact. Unless we keep up with the pace of technology, unless we show leadership and take action in this field, unless we show determination and imagination in this sector, our legal system may not be fit for purpose,' he said.
Some may well argue that it is ridiculous to suggest that an algorithm can properly analyse a set of facts and make judgments based on case law.
Will this also mean that there will no longer be a need for an appeal system, as the ‘iJudge’ will presumably be infallible? And will the parties then want Artificial Intelligence Advocates too, to skew the presentation of the facts and confuse the judges’ algorithms...?
Whatever happens, lawyers are having to adapt very quickly.
Computer programs can already match judges in decision-making, a conference highlighting the growing use of artificial intelligence in law heard last night. A poll of more than 300 attendees at the Law Society’s Robots and Lawyers conference found that 48% of respondents’ firms already use some form of artificial intelligence (AI), though only 4% agreed that lawyers will eventually be replaced by robots.

However, research conducted by the University of Liverpool suggests a decision-making algorithm could be as effective at dispensing justice as a judge. Professor Katie Atkinson, head of the university’s department of computer science, said the university had researched whether its computer programs could replicate the reasoning that judges go through.

Law Society President Jonathan Smithers told the event that although machine-learning and artificial intelligence may not strictly be human, their uses, applications and results ‘must still be subject to the rule of law’. Smithers highlighted several examples where the concept of ‘legal person’ had been extended in law to less traditional entities, from companies to, in some jurisdictions, animals and features of the natural environment.