Bold Idea: The current labelling approach used by social media platforms isn't working. More effective methods must be implemented to improve trust and transparency online.
Executive Summary
It’s official: when it comes to online content, seeing is no longer believing. Rapid advancements in artificial intelligence (AI) technology have made it possible to create hyper-realistic synthetic images, videos, and audio, commonly called “deepfakes” when created for malicious purposes. As content-generating tools inevitably grow in sophistication, fake content will become virtually indistinguishable from real content. This has serious implications for misinformation, trust in the media, and even our democracy.
As a result, navigating online platforms and information is becoming more complex, and users face greater challenges than ever in determining whether what they are seeing is real or synthetic.
Because of this, Canadian residents want tools and strategies that help them confidently evaluate the authenticity of the content they encounter. Research from the Dais’ Survey of Online Harms in Canada 2024 found that 82 percent of Canadian residents want online platforms to label synthetic media and deepfakes so they know what they are looking at.
In response to the proliferation of AI-generated content, major platforms have begun rolling out strategies designed to label or otherwise highlight this content. But are these measures effective? Without a clear understanding of their effectiveness, it is difficult to know whether they help build public trust or are simply window-dressing.
To investigate this question, we conducted a novel survey experiment examining Canadian residents’ impressions of AI-generated content in a mock social media environment (Facebook), applying a variety of labelling approaches.
This report offers an overview of the existing literature on AI-generated content, provides a scan of the current regulatory landscape surrounding AI-generated content on social media platforms, and shares the results of our survey on Canadian residents’ experience with synthetic and deepfake media online.
Key Findings:
Small AI-generated content labels have no meaningful effect on user trust or sharing behaviour, making them functionally ineffective.
Full-screen labels are the most effective. Only a full-screen label that blocks AI-generated content until it is manually removed significantly reduces exposure and improves perceptions of effective labelling. However, no social media platform currently uses this label type.
Nearly half (47 percent) of Canadian residents see deepfakes at least weekly, double the figure from our previous study, the Survey of Online Harms in Canada.
One in five Canadian residents see synthetic media or deepfakes multiple times a day. Seventy percent encounter deepfakes at least a few times a month.
Canadian residents over 60 do not have a solid grasp of deepfakes. Older Canadian residents are more likely to be unsure if content is real or synthetic, suggesting difficulties in identifying AI-generated material. While most Canadian residents are at least somewhat familiar with deepfakes, younger individuals (16-29) and men of any age are significantly more aware than older demographics.
Exposure to deepfakes is highest among TikTok users (78 percent) and Instagram users (76 percent), while daily exposure is highest among YouTube users. This finding highlights social media as a key vector for synthetic content.
Canada’s governance approach remains largely voluntary and has stalled. Despite the growing presence of generative AI, the legislative efforts that would apply to AI-generated content on social platforms, Canada’s Artificial Intelligence and Data Act (Bill C-27) and the Online Harms Act (Bill C-63), were halted by the prorogation of Canada’s Parliament.