Reaction to AI “Hallucination” Case: Food for Thought on Leveraging AI-Powered Tools in Legal Practice

  • April 02, 2024
  • Emma Huang

At the end of February this year, Canada saw its first AI “hallucination” case, in which an artificial-intelligence-powered tool generated a citation to non-existent case law and that citation was submitted to a court. The incident has sparked conversations about the place of technology in law.

I am a strong advocate for technology in law because our practice has benefitted greatly from technological advancement. Technology makes our practice more efficient, and thus more cost-effective for our clients. I am told that legal research used to mean wading through shelves of hard copies. Today, by comparison, searchable legal databases allow us to quickly locate precedents and even pinpoint discussions of nuanced issues. I am sure the “Ctrl + F” shortcut is a good friend to many colleagues.

Technology also promotes access to justice. For example, in the past, geographic location could be a barrier to participating in legal proceedings, because not everyone can afford to travel. Today, with virtual hearing arrangements, that cost can be reduced or even eliminated. Further, software supporting virtual hearings usually comes with, or allows plug-ins for, text-to-speech and speech-to-text functions. Those functions can help address communication challenges and make courts more accessible.

As helpful as it can be, technology is not perfect. However, I would like to think that the problems lie not in the technology itself but in how it is used. As with any other tool we wield, knowing how technology works and where its limits lie is important. Indeed, under the Law Society of Ontario’s Rules of Professional Conduct, lawyers are now required to develop technological competence.

I am inclined to believe that the lawyers caught in the AI “hallucination” incidents never intended to deceive the courts or their colleagues, but that they did not know what to look out for when adopting AI as a new tool to improve the efficiency and quality of their work. Based on my brief experience with legal technology, here is some food for thought on how we may avoid the pitfalls of AI:

Provide quality input. AI-powered tools mostly function on an input-output model. In computer science, the expression “garbage in, garbage out” reflects the idea that if users provide poor-quality information, AI-powered tools will produce similarly poor-quality responses. Therefore, to make an AI-powered tool’s responses as useful and reliable as possible, we should pay attention to the information we feed into the tool when asking it questions.

Refrain from supplying confidential information. Although we prefer quality input, we may want to pause and consider whether that quality information can in fact be put into an AI. When information is supplied to an AI, it likely goes into the database supporting the AI and will likely continue to be used to train that AI or to respond to other inquiries. That is to say, unless the AI has relevant built-in restrictions, we lose control of the information once it is entered. If that information includes client or file information, there may be confidentiality implications.

Be suspicious. Studies show that AI has learnt to lie, and even to manipulate human emotions. In the AI “hallucination” cases, AI fabricated case law. In another incident, an AI pretended to be a visually impaired person and gained the sympathy and help of a real human being in order to bypass a test designed to stop non-human entry to an online platform. Studies have also shown that AI has learnt to discriminate. All of these examples show that we cannot blindly trust AI responses. Verification is indispensable. As with any work, tools assist us, but they do not discharge our duties and responsibilities.

About the author

Emma Huang is currently articling at Torys LLP, with a primary focus on civil and commercial litigation, international arbitration, and tax controversies. She graduated magna cum laude from the University of Ottawa’s English common-law programme. During law school, she was a Technoship fellow with the Centre for Law, Technology and Society. She also has experience providing legal and policy support to the federal government on issues including digital compliance and regulatory technology.

Any article or other information or content expressed or made available in this Section is that of the respective author(s) and not of the OBA.