Regulating Generative Artificial Intelligence: Balancing Innovation and Risks*

  • June 23, 2023
  • Roland Hung, Torkin Manes LLP

Introduction

In a matter of months, generative artificial intelligence (“AI”) has been eagerly adopted by the public, thanks to programs like ChatGPT. The increasing use (or proposed use) of generative AI by organizations presents a unique challenge for regulators and governments across the globe: striking a balance between fostering innovation and mitigating the risks associated with the technology. This article summarizes some of the key legislation and proposed legislation around the world that attempts to strike that balance.

AI Regulation in Canada

  1. Current Law

While Canada does not have an AI-specific law yet, Canadian lawmakers have taken steps to address the use of AI in the context of so-called “automated decision-making.” Québec’s private sector law, as amended by Bill 64 (the “Québec Privacy Law”), is the first piece of legislation in Canada to explicitly regulate “automated decision-making”. The Québec Privacy Law imposes a duty on organizations to inform individuals when a decision is based exclusively on automated decision-making. 

Interestingly, this duty to inform individuals about “automated decision-making” also appears in Bill C-27, the bill to overhaul Canada’s federal private sector privacy legislation. Bill C-27 imposes obligations on organizations around automated decision systems. Organizations that use personal information in automated decision systems to make predictions about individuals are required to:

  • Deliver a general account of the organization’s use of any automated decision system to make predictions, recommendations or decisions about individuals that could have significant impacts on them; and
  • Retain the personal information related to the decisions for sufficient periods of time to permit the individual to make a request for access.

In addition to the privacy reforms, the third and final part of Bill C-27 introduces Canada’s first-ever AI-specific legislation, which is discussed in the next section.

  2. Bill C-27: The Digital Charter Implementation Act

On June 16, 2022, Canada’s Minister of Innovation, Science and Industry (“Minister”) tabled the Artificial Intelligence and Data Act (“AIDA”), Canada’s first attempt to formally regulate certain artificial intelligence systems as part of the sweeping privacy reforms introduced by Bill C-27.

Under AIDA, a person (which includes a trust, a joint venture, a partnership, an unincorporated association, and any other legal entity) who is responsible for an AI system must assess whether that system is a “high-impact system”. Any person who is responsible for a high-impact system must then, in accordance with (future) regulations:

  1. Establish measures to identify, assess and mitigate risks of harm or biased output that could result from the use of the system (“Mitigation Measures”);
  2. Establish measures to monitor compliance with the Mitigation Measures;
  3. Keep records in general terms of the Mitigation Measures (including their effectiveness in mitigating any risks of harm/biased output) and the reasons supporting whether the system is a high-impact system;
  4. Publish, on a publicly available website, a plain language description of the AI system and how it is intended to be used, the types of content that it is intended to generate, and the recommendations, decisions, or predictions that it is intended to make, as well as the Mitigation Measures in place and other information prescribed in the regulations (there is a similar requirement applicable to persons managing the operation of such systems); and
  5. As soon as feasible, notify the Minister if use of the system results or is likely to result in material harm.

It should be noted that “harm” under AIDA means physical or psychological harm to an individual, damage to an individual’s property, or economic loss to an individual.

If the Minister has reasonable grounds to believe that the use of a high-impact system by an organization or individual could result in harm or biased output, the Minister has a variety of remedies at their disposal. 
