How Adjudicators Can Handle AI Submissions That Include Hallucinations

November 10, 2025 | Sarah Schumacher, Counsel to the Chair, Workplace Safety and Insurance Appeals Tribunal

A recent decision from the British Columbia Workers’ Compensation Appeal Tribunal (BC WCAT), A2501051, 2025 CanLII 97422, may be of interest to the broader public sector adjudicator community on the issue of how to handle artificial intelligence (AI) submissions that include legal citation hallucinations.

In this case, the worker had filed submissions on whether exceptions existed to allow a prohibited action complaint to be filed late. In reviewing the submission, the Deputy Registrar noted problems, in particular: “The policy cited by the worker is not a current policy and does not say what the worker says it does. The cases he has cited either do not exist, or do not have anything to do with what he has cited them for.”

The Deputy Registrar found that it appeared the worker’s “submission was created, at least partly, with the use of artificial intelligence. It is widely known that large language-based artificial intelligence models can prepare lengthy submissions that sound like they were written by a person with expertise. However, these models can also 'hallucinate' legal cases, meaning they make them up.”

The Deputy Registrar noted that the BC WCAT’s Manual of Rules of Practice and Procedure does not prevent parties from relying on artificial intelligence: “However, parties have an obligation in WCAT’s Code of Conduct not to put forth information that is known to be untrue. Given the known limitations of artificial intelligence, in my view this obligation includes, at minimum, an obligation to make sure any cases, laws or policies cited in a submission created by artificial intelligence relate to the issue they are being cited for.”

The Deputy Registrar further noted at paragraph 19 that:

Tribunal decision-makers have an obligation to provide sufficient reasons for their decisions, but decision-makers at some tribunals (see for example AQ v. BW, 2025 BCCRT 907) have concluded that this duty does not include the obligation to respond to submissions concocted by artificial intelligence which have no basis in law. Therefore, parties who rely on artificial intelligence should be aware that their arguments may not be addressed if they are not based in law. [emphasis added]

The adjudicator’s handling of AI hallucinations is in keeping with the Federal Court’s Notice to the Parties and the Profession on The Use of Artificial Intelligence in Court Proceedings, dated May 7, 2024, which recommends that, in addition to declaring when content submitted to the Court was AI-generated, the following principles should guide the use of AI in documents submitted to the Court:

Caution: The Court urges caution when using legal references or analysis created or generated by AI, in documents submitted to the Court. When referring to jurisprudence, statutes, policies, or commentaries in documents submitted to the Court, it is crucial to use only well-recognized and reliable sources. These include official court websites, commonly referenced commercial publishers, or trusted public services such as CanLII.

"Human in the loop": To ensure accuracy and trustworthiness, it is essential to check documents and material generated by AI. The Court urges verification of any AI-created content in these documents. This kind of verification aligns with the standards generally required within the legal profession.

Neutrality: The Court confirms that the inclusion of a Declaration, in and of itself, will not attract an adverse inference by the Court. Similarly, any use of AI by parties and interveners that does not generate content that falls within the scope of this Notice will not attract any adverse inference. Parties and interveners will continue to be held to the existing standards under the Federal Courts Rules. In this regard, the party signing a document submitted to the Court bears responsibility for the accuracy and veracity of its contents. The primary purpose for the Declaration is simply to notify the other party or parties, as well as the Court, that AI has been used to generate content.

Key takeaways for adjudicators include:

  • Check citations and policies in AI-generated submissions.
  • Be aware that while adjudicators do not need to address baseless AI-generated arguments, they must provide reasons for not doing so.
  • Use resources from professional bodies (e.g., Law Society of Ontario resources) for guidance on responsible AI use.
  • Be guided by the rules in place at your own Tribunal regarding the use of AI.

Any article or other information or content expressed or made available in this Section is that of the respective author(s) and not of the OBA.