AI Content Labels – Private Sector Policy and Canada

May 12, 2025 | Warren Urquhart, Governance Counsel at Toronto Hydro

You may have noticed AI content labels in your newsfeed. These labels are useful for several reasons:

(1) The inherent value of knowing what’s “real”;

(2) The importance of informing the viewer so they can determine authenticity; and

(3) The technical value of letting developers know whether the content an LLM is being trained on is itself AI-generated (a filtering step sketched below).
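To make point (3) concrete, consider how a training pipeline could act on such labels. The sketch below is purely illustrative: the record schema and its `source_type` field are assumptions for the example, not any real pipeline's format.

```python
# Hypothetical sketch: excluding labelled AI-generated records from an
# LLM training corpus. The "source_type" field is an illustrative
# assumption, standing in for whatever provenance label a platform
# attaches via self-disclosure or automatic detection.
from typing import Iterable, Iterator

def human_authored_only(records: Iterable[dict]) -> Iterator[dict]:
    """Yield only records that are not labelled as AI-generated."""
    for record in records:
        if record.get("source_type") != "ai_generated":
            yield record

corpus = [
    {"text": "An original essay.", "source_type": "human"},
    {"text": "A synthetic article.", "source_type": "ai_generated"},
]
print([r["text"] for r in human_authored_only(corpus)])  # ['An original essay.']
```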

Private Sector – Meta AI Labelling

Meta’s AI-content labelling approach rests on (1) self-labelling by creators and (2) automatic labelling based on standards set by industry partners. How visible the label is in a viewer’s newsfeed depends on the role generative AI played in the content’s creation.

Meta’s initial news release is a living document that traces the evolution of its policy. After feedback from its Oversight Board, Meta conceded that its original “manipulated media” policy was too narrow to address the increasingly realistic AI-generated content proliferating online.

Meta therefore developed a twofold approach to identifying non-human-generated content: automatic detection and creator self-disclosure. If Meta’s systems detect AI content, they label it “Made with AI” or “Imagined with AI”. Creators and posters, of course, can also self-label.
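Meta’s exact detection pipeline is not public, so the sketch below only illustrates one industry-standard signal it has pointed to: provenance metadata embedded in a file, such as the IPTC “Digital Source Type” value that marks fully AI-generated media. The marker URI and the file name here are assumptions for illustration; production systems also rely on invisible watermarks and are far more robust.

```python
# Minimal, illustrative sketch of metadata-based AI-content detection:
# scan a file's raw bytes for the IPTC Digital Source Type URI used in
# embedded XMP metadata to mark fully AI-generated media. This is an
# assumption-laden toy, not Meta's actual method.
AI_MARKER = b"http://cv.iptc.org/newscodes/digitalsourcetype/trainedAlgorithmicMedia"

def has_ai_provenance_marker(path: str) -> bool:
    """Return True if the file's embedded metadata contains the marker."""
    with open(path, "rb") as f:
        return AI_MARKER in f.read()

print(has_ai_provenance_marker("example.jpg"))  # hypothetical file name
```

Because such metadata is easy to strip, a signal like this can only ever be one input among several, which is part of why Meta pairs automatic detection with creator self-disclosure.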

Meta’s policy currently sets a high bar for removal that applies to both human-made and AI-made content: a post is removed only if it violates Meta’s Community Standards or certain other policies. Posts that fall short of that bar but still breach a policy, even if determined to be false or altered, are merely demoted in users’ feeds and labelled.[i]

Canada and AI Labelling

So far, Canada has no federal AI labelling laws on the books. However, there are suggestive sources that signal what more formal laws and regulations may eventually look like.

Innovation, Science and Economic Development Canada released a report this year from its Consultation on Copyright in the Age of Generative Artificial Intelligence.[ii] The report, based on stakeholder feedback solicited in 2023-2024, recorded support for labelling AI-generated content, particularly from the cultural industries.

The impetus for labelling was twofold: protecting rights holders and letting readers identify the source of the content in front of them. For consumers, it was thought that labelling content as AI-generated or human-made would empower them to choose “human-authored content” and help combat misleading deepfakes. The stakeholder feedback may (1) give the private sector direction to label its content and (2) show where policy-makers will go after the most recent federal election.

Canada’s Voluntary Code of Conduct on the Responsible Development and Management of Advanced Generative AI Systems is another force. While it does not explicitly mention AI “labels”, the Code’s “Transparency” principle calls for managers of generative systems to “Ensure that systems that could be mistaken for humans are clearly and prominently identified as AI systems.”[iii]

The closest AI labelling has come to codification is the perpetually stalled Artificial Intelligence and Data Act. However, with Parliament’s recent prorogation, Canada remains without a firm AI regulatory framework.

Conclusion

Canada is likely a few years away from any official regulation of AI content labelling. Until then, the policies of social media companies and suggestive guidance from regulators will fill the gap.


[i] “Our Approach to Labeling AI-Generated Content and Manipulated Media”, Meta, originally published April 5, 2024; updated with context and policy changes on May 10, 2024, July 1, 2024, and September 12, 2024.

[ii] “What We Heard Report – Consultation on Copyright in the Age of Generative Artificial Intelligence”, Innovation, Science and Economic Development Canada, 2025, p. 14.

[iii] Voluntary Code of Conduct on the Responsible Development and Management of Advanced Generative AI Systems, Innovation, Science and Economic Development Canada, 2023.

 

Any article or other information or content expressed or made available in this Section is that of the respective author(s) and not of the OBA.