Engaging Experts To Inform The Law On Artificial Intelligence: A Candid Conversation With Mark Daley, Chief AI Officer at Western University

  • December 11, 2023
  • Sigma Khan, associate at Henein Hutchison Robitaille LLP

The legal sphere’s newfound fascination with artificial intelligence (“AI”) is ubiquitous, as lawyers and legislators alike attempt to coherently leverage and govern this powerful technology. Those of the political and legal persuasion are not always the first to admit when a concept confounds us, but attempting to bring AI within the limits of a system of rules requires admitting fallibility and treading carefully. Given the technology’s inherent complexity and its far-reaching capacity to encroach on all aspects of life, acknowledging the inadequacy of a siloed approach to making sense of AI is a societal necessity.

Being cautious requires a foundational understanding of AI and an interdisciplinary approach to its governance. It is a well-established principle in the law of evidence that an expert is necessary when the court must draw a conclusion requiring knowledge or expertise beyond the experience of an ordinary person. In the same way, it is in the interest of the legal community to draw on experts outside the law to make comprehensive sense of how we should perceive AI in its early stages.

To that end, I had the opportunity to discuss AI developments with one such expert, Mark Daley, who holds a PhD in Computer Science and was recently appointed Western University’s inaugural Chief AI Officer. Daley’s appointment is nearly unprecedented: Western University became one of the first academic institutions in the world to appoint a designated executive to the President’s team to grapple exclusively with AI and related technologies.

To gain technical visibility into the underpinnings of AI for the purposes of legal integration, Daley and I conversed about a range of topics, including accountability, adaptable culpability, democratization, and the longevity of humanness.

ACCOUNTABILITY

With Daley’s work being primarily in the academic setting, I wanted to know whether AI is increasingly being adopted by students, whether such a trend would call for an overhaul of academic policies, and, if so, how institutions can ensure procedural fairness. Daley notes that “the short-term answer is easy: no one is having a conversation about a ballpoint pen or cheating while using a ballpoint pen. Like the ballpoint pen, AI is still just a technology, and the principle of human accountability is still relevant. The legal tradition is already founded on the idea that you, as an individual, exercise human agency. Just because your action is being aided by a particular technology does not change the foundational principle. We do not need to have a separate policy about cheating with AI; the University already has a policy pertaining to academic misconduct. We do not need a separate policy, because this is still about personal agency and accountability. A human is still choosing to cheat, whether they do it using AI or a ballpoint pen.”

This logic of accountability for acts committed with the aid of AI applies in the broader socio-legal setting. Regulators, legislators, and lawyers need to think about how to frame AI: whether there is a heightened degree of fault when a powerful technology is leveraged to commit an act whose intention remains human, or whether the technology is no factor at all because the act is human regardless of whether AI was used to execute it. As companies, schools, businesses, and governments increasingly put internal AI policies in place, one workable framework is to refuse to disassociate human accountability from AI wherever procedural fairness becomes operative.