Artificial Intelligence in the Legal Profession
WULR Team
Hallucinations and citations
Published January 29, 2025
Analysis by Logan Espinosa
With the rapid expansion of Artificial Intelligence (AI) across today’s workforce and growing concerns about its ethical implications, AI has become a topic of discussion in many fields. Within the legal profession, AI has many potential uses and may streamline the workflow of firms; however, its adoption raises ethical questions.
AI has expanded into nearly every industry and has proven to be something firms cannot ignore. Market strategist and law firm consultant Heather Sutty says, “The legal services industry has radically changed in the last decade.” The market has become denser, tougher, and more fragmented. Firms are looking for ways to get ahead of their competitors, and, in the age of AI, “the key to survival is flexibility” (1). According to Bloomberg Law, AI has been used in law firms for decades, primarily to “parse data and query documents.” With the rise of generative AI such as ChatGPT, however, many firms have begun to take precautions. Some have placed restrictions or even total bans on the use of generative AI, citing concerns about algorithmic bias, hallucinations (instances in which an AI chatbot confidently provides false information), inaccuracies, and confidentiality (2). These concerns are not just hypothetical; they have played out in the real world.
A 2023 lawsuit between Colombia-based Avianca Airlines and Roberto Mata, who was allegedly “struck by a metal serving cart” while on a flight in 2019, illustrates the danger. When preparing a response, Mata’s lawyers cited Varghese v. China Southern Airlines and Shaboon v. Egypt Air as precedent for his claim. However, these cases did not exist (3). This is an instance of AI hallucination, and it shows the potential dangers of using generative AI in law. While AI may have many applications in law, its proper place in the legal profession remains uncertain.
There have been instances where AI has proven beneficial. In February 2019, the High Court of England and Wales used a robot mediator to conclude a dispute between two parties concerning “approximately £2000, representing the outstanding payments requested by the client’s coach after completing a personal counseling course” (4). This was the first use of a robot mediator, named “Smartsettle ONE,” which uses AI to resolve disputes. The tool allowed the dispute to be resolved in less than an hour. Applied more broadly, such systems could give parties faster and more efficient mediation.
AI mediators may serve the court system well, but what about firms? Little case law has dealt with this question; however, one case has recognized the use of computer-assisted review. Da Silva Moore v. Publicis Groupe et al. appears to be the first case in which a court approved the practice, specifically “tools that use sophisticated algorithms to enable the computer to determine relevance, based on interaction with (i.e. training by) a human reviewer.” Judge Andrew Peck, who wrote the opinion, stated, “In my opinion, computer-assisted coding should be used in those cases where it will help ‘secure the just, speedy, and inexpensive.’” This may be where AI finds its place in law: in cases where it helps lawyers and judges reach a fair outcome. Judge Peck goes on to write that computer-assisted review should not be used in every case, but only in those where it may “save the producing party (or both parties) significant amounts of legal fees in document review” (5).
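To make the mechanism concrete, here is a minimal sketch of the general idea behind the kind of computer-assisted review described in Da Silva Moore: a model learns relevance from documents a human reviewer has already coded, then scores the remaining collection. The sample documents, labels, and the use of Python’s scikit-learn library are illustrative assumptions, not a depiction of any specific tool referenced in this piece.

```python
# Minimal sketch of computer-assisted (technology-assisted) review.
# Assumptions: a handful of hand-coded example documents and the scikit-learn
# library; neither reflects any product or system discussed in the article.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

# Documents a human reviewer has already coded: 1 = relevant, 0 = not relevant.
reviewed_docs = [
    "email discussing the disputed contract terms",
    "memo on the marketing budget for next quarter",
    "draft settlement agreement for the contract dispute",
    "cafeteria menu for the week of the deposition",
]
reviewer_labels = [1, 0, 1, 0]

# Learn a relevance model from the reviewer's coding decisions.
vectorizer = TfidfVectorizer()
features = vectorizer.fit_transform(reviewed_docs)
model = LogisticRegression().fit(features, reviewer_labels)

# Score the unreviewed collection so likely-relevant documents surface first,
# which is how the tool reduces the volume of manual document review.
unreviewed_docs = [
    "follow-up email about the contract payment schedule",
    "invitation to the office holiday party",
]
scores = model.predict_proba(vectorizer.transform(unreviewed_docs))[:, 1]
for doc, score in sorted(zip(unreviewed_docs, scores), key=lambda pair: -pair[1]):
    print(f"{score:.2f}  {doc}")
```

Commercial review platforms are of course far more sophisticated, but the human-in-the-loop training step shown here is the core of what the opinion describes as relevance determined “based on interaction with (i.e. training by) a human reviewer.”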
Case law thus suggests that AI, in the form of computer-assisted review, has a role in sorting through vast amounts of data. This offers one answer to the question of how to incorporate AI into the legal profession: it can help lawyers sift through large volumes of documents and give them tools to do their jobs effectively. The benefits, however, do not come without risk. Concerns about algorithmic bias, hallucinations, inaccuracies, and confidentiality have led some firms to advise against AI. As the current AI boom continues, the place of artificial intelligence systems in law will only come into sharper question in the coming years.
References
1. Anja Oskamp and Marc Lauritsen, “AI in Law Practice? So Far, Not Much,” Artificial Intelligence and Law 10, no. 4 (2002): 227–36, https://doi.org/10.1023/a:1025402013007.
2. Bloomberg Law, “AI Tools for Lawyers: A Practical Guide,” August 2023, pro.bloomberglaw.com/insights/technology/ai-in-legal-practice-explained/#how-is-ai-used-in-law, accessed November 5, 2024.
3. Molly Bohannon, “Lawyer Used ChatGPT in Court—and Cited Fake Cases. A Judge Is Considering Sanctions,” Forbes, June 8, 2023, www.forbes.com/sites/mollybohannon/2023/06/08/lawyer-used-chatgpt-in-court-and-cited-fake-cases-a-judge-is-considering-sanctions/.
4. Anja Oskamp and Marc Lauritsen, “AI in Law Practice? So Far, Not Much,” Artificial Intelligence and Law 10, no. 4 (2002): 227–36, https://doi.org/10.1023/a:1025402013007.
5. Andrew Peck, Da Silva Moore v. Publicis Groupe et al. (2012), law.justia.com/cases/federal/district-courts/new-york/nysdce/1:2011cv01279/375665/96/, accessed November 8, 2024.



