Canadians want social media regulation. Parliament needs to get on with it

Illustration: a person looks at their phone and sees it as a source of harm.

As global outrage grew over horrific stories of deepfaked, sexualized images of women and children on the X social platform, created by users with Grok AI, the company responded: it would turn photo editing off, but only in jurisdictions where such content is illegal.

In Canada, this revealed a hard truth. Without regulation of digital platforms, there is little policymakers can do to address this kind of harmful content or grotesque corporate behaviour.

With Parliament back in session in Ottawa, fixing this gap must be a top priority.

Social media connects people and serves as a digital public square, but the absence of legal consumer protections online has become unacceptable.

Our research finds that Canadians of all ages use social platforms like YouTube, Facebook, Instagram, TikTok and Snapchat nearly ubiquitously. They report growing exposure to many types of harmful content: false information, hate speech, fraud, online harassment, AI-generated deepfakes, violent content, and intimate images shared without consent. Exposure is higher for younger Canadians and marginalized groups.

The impacts are increasingly clear.

As social scientist and Anxious Generation author Jonathan Haidt noted last month, there are “mountains of evidence” that social media is harming young people, including Meta’s own research, made public by whistleblowers. In Canada, one-fifth of youth aged 12 to 17 report negative mental health effects related to their online activity, including compulsive Internet use, negative mood, diminished self-worth, body dysmorphia, self-harming behaviours, and cybervictimization.

Social media also poses a threat to Canadians’ civic health and democracy. Our study finds higher social media use linked to greater belief in mis- and disinformation. With journalistic news blocked on Meta’s platforms, Canadians are increasingly getting their information from social media sites where algorithms polarize the conversation and influencers mediate it.

The ad-driven business model provides little incentive for corporations to address these harms. The manipulative design of the platforms is a feature, not a bug: they employ “dark patterns” and algorithmic amplification to exploit user vulnerabilities and keep people engaged so their data can be harvested and ads served.

Even a couple of years ago, there was a sense that social media companies were working in good faith to address harmful content and systemic risks. But with the rollback of trust-and-safety efforts and content moderation teams at Facebook, Instagram, X and other platforms, the era of “self-regulation” is over.

No wonder, then, that Canadians overwhelmingly support government regulation. Seven in 10 support public oversight requiring online platforms to behave responsibly in protecting users, with near-universal support for more specific measures. For instance, 88% support requiring platforms to quickly remove child sexual abuse material and report it to police. Coalitions and campaigns like Safer Online Spaces have come together across party lines to demand action.

The core of the previous Parliament’s Online Harms Act, Bill C-63, was sound: civil society, youth, experts, and even opposition parties supported its duty-of-care and transparency framework. As European regulators’ strong response to X demonstrates, holding global tech platforms accountable requires an independent oversight body: a Digital Safety Commission at arm’s length from government.

A new bill should strengthen youth protections: age-appropriate design standards, opt-out rights from algorithmic feeds, prohibitions on data collection, restrictions on or elimination of advertising, and youth-specific algorithmic and content safeguards. New provisions for AI are essential, including bringing AI chatbots into scope as a regulated service.

As the government has floated in a trial balloon, policymakers should certainly consider an age minimum as we watch how the under-16 ban unfolds in Australia. But let’s be clear: such a ban can be part of the package, not an alternative to comprehensive regulation that protects all Canadians.

The platforms will fight regulation through aggressive lobbying, deceptive arguments about limiting speech, and the mobilization of political allies in the Trump Administration. We’ve seen this playbook before. Rather than a reason to play it safe for fear of trade implications, a robust bill would give Canada an additional point of leverage in forthcoming negotiations.

More fundamentally, setting the rules for our online spaces is core to bolstering Canada’s digital sovereignty. To use Prime Minister Carney’s words from Davos, it’s about “reducing the leverage that enables coercion.”