
Survey of Online Harms in Canada 2025

May 2025

BOLD IDEA
Canadians are reporting higher exposure to online hate and misinformation, which continues to corrode our democracy and shared citizenship, and more Canadians support government intervention to mitigate these harms.


Much of public life now unfolds in a digital public square that is evolving faster than ever. Social media platforms connect people and enable information flows at previously unimaginable scale, with Facebook alone claiming over three billion monthly active users.1

Yet, longstanding and worsening issues like online harassment and disinformation have been compounded by a new array of challenges driven by rapid advances in generative artificial intelligence (AI). These technologies not only amplify traditional harms—such as the spread of politically charged misinformation and hate speech—but also introduce novel risks, including hyper-realistic deepfakes, and the proliferation of non-consensual explicit content. The digital public square has become less safe for Canadians.

Further, Canadians’ relationship with social media presents a striking paradox: even as platforms like YouTube, Facebook, and TikTok become dominant sources of news and information, public trust in these platforms remains extremely low.

Despite widespread use, Canadians report high exposure to online harms—ranging from hate speech and identity fraud to misinformation and non-consensual image sharing—especially among youth and marginalized communities. While some users try to manage these risks themselves, most find the platform tools to do so ineffective.

At this critical juncture for online safety policy, the report presents the findings of our sixth Survey of Online Harms in Canada. Since 2019, the Dais has been committed to understanding Canadians’ use of social media platforms, the extent to which they are experiencing harms on platforms, and their openness to different types of solutions and policy interventions. Conducted in February 2025, the findings of this survey cover ongoing trends in shifting media habits, increased experiences with online harms, and reveal a strong desire by Canadian residents to see action on this issue.

The findings present a clear message: while Canadians rely on digital platforms more than ever, they increasingly recognize the harms that come with that reliance—and are calling for stronger, systemic interventions to address them.

A strong majority of Canadians support robust government regulation to curb harmful content, with especially urgent calls for measures like removing child sexual abuse material, labelling AI-generated content, and increasing parental controls. 

We asked Canadian residents how often they use a specific set of platforms.

  • Facebook and Facebook Messenger are the two platforms most used on a daily basis, with Instagram and WhatsApp close behind. YouTube is the most popular social media platform overall.
  • While use of Bluesky and (Meta-owned) Threads is growing, X/Twitter remains more popular than both, despite declining usership.
  • Daily use of OpenAI’s platform ChatGPT has nearly doubled, from four percent to seven percent. The platform is most popular with users under 30, as 17 percent of these respondents report using it daily.

We asked respondents to select the most common news sources from a list of options. 

  • Traditional news sources such as television (53 percent), news websites (40 percent) and radio (38 percent) remain the most popular news sources for Canadian residents.
  • Facebook (28 percent) and Instagram (20 percent) are still popular sources for news.
  • Canadian residents aged 60 and older are more likely to use traditional media for news, while those under 30 rely most on YouTube, Instagram, and TikTok.

We asked survey respondents to rate their trust in a select set of Canadian corporations. 

  • Canadian Tire, CBC/Radio-Canada, and CTV rate highest on the trust scale.
  • X, OpenAI, and TikTok rate lowest, with TikTok at the bottom of the trust scale.
  • While Facebook was identified as the most popular social media platform for news (tied with YouTube), it was also the least trusted news source.
  • Canadian health-care providers were most trusted to deliver digital services, while cryptocurrency exchanges were least trusted.

We asked respondents about their experiences with online harms.

  • Misinformation was the most common reported harm, with three-quarters of Canadian residents having spotted news or current events they immediately suspected to be false at least a few times a year.
  • US politicians and celebrities were the most frequently cited topics of misinformation content.
  • False news and deepfake images were the second most common forms of online harm.
  • Respondents who indicated news websites, news alerts on phones and radio as their most regular news sources scored best on the misinformation index.
  • Those who indicated WhatsApp, Facebook, Instagram, and YouTube as news sources scored lowest on the misinformation index.
  • Deepfake exposure is rising, with 67 percent of respondents reporting seeing it online at least a few times a year—up from 60 percent of users just last year.
  • The frequency of exposure to hate speech, identity fraud and impersonation, and the promotion of violence has increased by at least three percentage points over the last three years.
  • For racialized Canadian residents, recent immigrants, those living with disabilities, and 2SLGBTQ+ residents, encountering hate speech is anywhere from 50 percent to 100 percent more common than for those who do not belong to these groups.

We asked respondents who they think is responsible for causing and fixing online harms. 

  • Canadian residents consistently report that users of social platforms are most at fault for causing the rise in harmful content.
  • Nearly half of respondents (47 percent) said that online platforms are responsible for fixing these harms and another 21 percent say that government or political leaders are.
  • Sixty-eight percent of respondents believe reducing the amount of hate speech, harassment, and false information online is more important than protecting freedom of expression.
  • Sixty-nine percent believe the government should intervene to require online platforms to act responsibly.

Introduction


This report presents the findings of the Survey of Online Harms in Canada 2025, the sixth national study conducted by the Dais to examine Canadians’ experiences with social media platforms, exposure to online harms, and attitudes toward regulation and solutions.

The report begins by situating the findings in the current regulatory and geopolitical context, outlining stalled federal legislation, evolving global frameworks, and recent rollbacks in platform safety practices.

Next, the report explores the online media environment in Canada—tracking how Canadians use social media platforms, where they get their news, and how those trends vary by age and political orientation. This section also assesses levels of trust in news sources, tech platforms, and digital services.

The heart of the report analyzes Canadians’ exposure to online harms, including misinformation and disinformation, synthetic media, hate speech, fraud, and harassment. It reveals which groups are most affected and highlights the growing impact of generational divides in media consumption and harm exposure.

The final sections examine what Canadians think should be done about these harms. This includes their views on individual responsibility, and the perceived (in)effectiveness of tools like blocking and reporting. Our findings reveal Canadians’ strong support for system-level solutions. Potential solutions range from fact-checking tools and platform accountability measures to content labelling, bot removal, and age-based restrictions.

Overview of Current Regulatory Landscape


Despite widespread public concern about online harms affecting both children and adults, Canada does not yet have a national legal framework for online safety on social platforms. After a multi-year process and extensive consultation, the federal government tabled Bill C-63—An Act to enact the Online Harms Act—in February 2024, which sought to establish a legal framework for governing online harms in seven categories of illegal content, place duties on social platforms, and create a Digital Safety Commissioner as regulator.

An opposition private member’s bill presented another framework more narrowly focused on the protection of minors, with alternative regulatory enforcement mechanisms. Related technology policy legislation was also proposed, notably Bill C-27, which would have updated Canada’s private sector privacy regulation for a digital era and introduced a new regulatory framework for AI through the Artificial Intelligence and Data Act (AIDA). With the prorogation of parliament in January 2025, however, these pieces of legislation including C-63 were stalled prior to passage.2

At the same time, significant realignment is underway in geopolitics and the digital policy landscape. The new administration in the United States has signalled a strong resistance to platform regulation—and to trust and safety policy efforts by platforms that might be perceived as limiting free speech. Platforms such as X and Meta’s suite (including Facebook and Instagram) have rolled back or eliminated online safety efforts, including content moderation and fact-checking, in line with this approach.

Concurrently, other peer jurisdictions are moving ahead with robust social media regulation, including the implementation of the European Union’s Digital Services Act and the United Kingdom’s Online Safety Act, with regulatory enforcement ramping up in 2025. Australia’s new social media ban for children under 16 offers another example.

Methodology 


The Media Environment


In recent years, the online landscape has been changing, with platforms that focus on short-form video like TikTok, as well as AI chatbots like ChatGPT quickly gaining popularity. To understand respondents’ media habits, we asked them a series of questions about the platforms they use and interact with online.

To understand online platform-use trends, we asked Canadian residents how often they use a specific set of platforms (see Figure 1), with options ranging from “multiple times an hour” to “never using the platform.” In general, Canadians’ platform usage falls into three categories: those who use platforms habitually (i.e. daily users), those who use platforms at all (i.e. at least a few times a year), and those who don’t use a given platform.

Consistently, Meta-owned platforms rank highest in habitual usage—of the top five platforms from the list we tested, Meta platforms occupy four spots (see Figure 2). Facebook and Facebook Messenger are the two most regularly used platforms daily, with Instagram and WhatsApp not far behind.

With the exception of WhatsApp, each of these platforms is used daily by more than a third of Canadians, and more than half say they use Facebook daily. The only other platform that is as popular on a daily basis is YouTube, used by 36 percent of Canadians every day. However, YouTube is the most popular platform overall when looking at who has ever used each platform—only seven percent of Canadian respondents say they never use YouTube.


Figure 3 also shows that the Meta-owned platforms have continued to grow in Canada: at the high end, Facebook use has increased from 75 percent of all Canadian residents in 2019 to 84 percent in 2025, and, at the low end, WhatsApp use has risen from 34 to 49 percent.

This year, we also asked about the use of several alternatives to Twitter/X that have emerged in recent years, in particular Bluesky and (Meta-owned) Threads. Figure Y shows that overall X/Twitter use has declined continually since 2019, falling from 44 percent to only 32 percent now (although daily use of X has remained more consistent). Both Threads and Bluesky now have significant and growing total usership (18 percent report using Threads, while nine percent report using Bluesky). However, neither Bluesky nor Threads has approached the habitual use that X retains: 13 percent of Canadian respondents use X daily, while only three percent use Threads and two percent use Bluesky daily.

New research in Canada suggests that content moderation changes on X have spurred some users to seek out alternative platforms.3 Our survey suggests that Bluesky in particular has filled the role of X for some Canadian residents on the left of the political spectrum. Among right-leaning Canadian residents, 19 percent say they use X daily, while only two percent say they use Bluesky daily. Among left-leaning respondents, 12 percent use X daily and four percent use Bluesky daily. In other words, twice as many left-leaning Canadian residents use Bluesky daily as right-leaning residents, while about a third fewer use X daily.

Since 2024, we have also tracked the use of ChatGPT, the chatbot built on large language models (LLMs) and operated by OpenAI. In that period, usage of ChatGPT has increased, both in terms of daily habitual use and in terms of total users. Last year, only four percent of respondents said they use ChatGPT every day; now seven percent use it daily, nearly double in the span of a year. Similarly, in 2024, 24 percent said they had ever used ChatGPT, a figure that has now increased to 34 percent of Canadian residents. This usage is strongly concentrated among younger respondents. Among the youngest group under 30, 17 percent say they use ChatGPT daily, as do 11 percent of those 30 to 44, compared to only three percent of those 45 to 59 and one percent of those over 60.

The use of these platforms varies significantly across different age groups. Figure 4 shows that the two most common platforms for Canadian residents in general—Facebook and Facebook Messenger—are disproportionately popular among the oldest survey respondents, and much less popular among Canadian residents under 30. Facebook is only the third most popular platform (by daily use) within this age group, and Facebook Messenger is fifth. These are the only two platforms that are less popular among younger cohorts than older ones. At the same time, TikTok and Snapchat both jump up in the rankings with 40 percent and 39 percent of the youngest cohort using the platforms daily respectively.

Despite the impact of social media on the journalism and news landscape in Canada, traditional media sources still dominate where Canadians say they get their news. Figure 5 shows that the most popular news sources currently are television news, news websites, and news on the radio, with search engines coming in fourth. This has remained consistent from previous iterations of this survey (see Figure 6). 

Facebook and Instagram have remained popular sources for news among Canadian residents, though their parent company Meta has opted to block access to traditional news sources on its platforms in response to Canada’s Online News Act (Bill C-18). More than a quarter of Canadian residents say that they get news from Facebook (28 percent), and another 20 percent say they get news from Instagram. However, this may not reflect how much news is being consumed on Facebook—other studies have shown that engagement with news on the platform has decreased significantly since the news ban went into place.4

Figure 7 shows the differences in news sourcing across age groups. Canadians fall broadly into two categories: older respondents are far more likely to use traditional media sources for news, while younger respondents get their news from a wider range of sources, including more social media platforms.

This means that those aged 60 and up are the most likely to say they get their news directly from TV, news websites, and print newspapers, while those 45 to 59 are tied with the oldest group as the most likely to get news on the radio.

Conversely, YouTube, Instagram and TikTok are much more popular as news sources for those under 30. This is most stark with TikTok, where nearly 30 percent of young people say they get news from the platform compared to barely one percent of those 60 and older.

Some sources are less polarized by age—in particular, search engines, news alerts on phones, and messages from friends or family are consistently popular across age groups.

In many cases, younger respondents who get news through social media are seeing the same content as those who get it directly from TV or news websites. Clips from television and direct links to news publishers often make the rounds on platforms like X, TikTok, and YouTube (and even, in some cases, on Meta platforms despite the news link ban). However, as we discuss below, there is evidence to suggest that relying on social media as a source of information increases the chance of believing misinformation.


There is also significant polarization in the use of news sources across the political spectrum, as noted in Figure 8. Generally, respondents who say they fall on the left of the spectrum are more likely to say they get news from traditional media, while respondents on the right of the political spectrum say they get news from social media.

In recent years, there has been considerable discussion of declining trust in Canadian institutions,5 and in response to that concern we have been trying to understand the nature of those shifts. Since 2023, we have been tracking trust in a range of prominent corporations in Canada, both media companies and consumer brands, asking Canadians how much they trust these entities to act in the best interest of the public.

Figure 9 shows clear differences in trust levels. Retail brand Canadian Tire and national broadcaster CBC/Radio-Canada form the top tier when it comes to high trust, with nearly half of Canadian residents rating their trust in these corporations between 7 and 9 (on a 9-point scale). Canadian media companies like CTV, Global News, and the Globe and Mail also enjoy high levels of trust. For other Canadian corporations, including Bell Canada, Loblaws, and Shell Canada, only about a quarter of Canadians report that they highly trust them to act in the public interest. It is worth noting that this survey was conducted before the escalation of trade concerns between Canada and the United States, which has spurred Canadians to rally around domestic companies and products.6

There is also a cluster of corporate entities clearly at the bottom of the scale. This group includes social media companies (Meta, X, and TikTok), as well as OpenAI and Ticketmaster. None of these companies received high trust ratings from more than 20 percent of Canadian respondents, and, for X and TikTok, a majority report low trust (below 3 on the 9-point scale).
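As a simple illustration of how responses on the 9-point scale translate into the high-trust and low-trust shares discussed above, the sketch below bins a set of hypothetical ratings into the bands used in this report (7 to 9 as high trust, 3 or below as low trust). The ratings and labels in the example are invented for illustration, not survey data.

```python
# Illustrative sketch: collapsing 9-point trust ratings into the
# high-trust (7-9) and low-trust (3 or below) bands discussed above.
# The ratings below are invented examples, not survey data.

def trust_shares(ratings: list[int]) -> dict[str, float]:
    """Return the share of respondents falling into each trust band."""
    n = len(ratings)
    return {
        "high (7-9)": sum(r >= 7 for r in ratings) / n,
        "middle (4-6)": sum(4 <= r <= 6 for r in ratings) / n,
        "low (1-3)": sum(r <= 3 for r in ratings) / n,
    }

# Ten hypothetical respondents rating one corporation.
example_ratings = [9, 8, 7, 6, 5, 5, 4, 3, 2, 1]
for band, share in trust_shares(example_ratings).items():
    print(f"{band}: {share:.0%}")
```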

Across all companies in our three years of tracking, these scores are very consistent. In general, trust in corporations has remained steady from 2023 through to 2025. The exception is X, where the share of Canadian residents with low trust in the corporation has risen from 40 percent to 50 percent in 2025 (with the share who have high trust remaining steady at about 12 percent).

In general, those who are on the right of the political spectrum are more likely than those on the left to trust corporations to act in the public interest. This is mirrored in our survey findings, with Figure 10 showing that nearly every corporation is more trusted by right-wing respondents than left-wing respondents. Mainstream news organizations generally see the smallest differences in trust, with CBC/Radio-Canada being the only corporation with significantly higher levels of trust among left-leaning respondents.

Figure 11 shows which sources respondents most trust and distrust for news. In general, traditional news sources are the most trusted among Canadians: nearly a third say they most trust news on TV, followed by news websites and news on the radio. Together, these three represent the most trusted news sources for 52 percent of all Canadian residents. This is consistent with the 2024 survey results.

Distrust is spread evenly across a range of platforms. Facebook was the source most often identified as least trusted, named by 12 percent of respondents, although nearly half of respondents did not pick any source. Despite Facebook remaining one of the most popular news sources in Canada (used by 28 percent of respondents), it is the least trusted social platform for news.

In total, 45 percent of respondents say either none of the above or that they don’t know. This likely reflects the fact that most people self-select away from platforms they would not trust for news; only in the most extreme cases does someone regularly use a platform for news while also distrusting it.

We also asked respondents about their trust in a range of institutions, specifically in their ability to offer secure and responsible digital services (see Figure 12). Generally, trust is highest in digital services provided by banks and health-care providers, while all levels of government are clustered in the middle. Start-up businesses and cryptocurrency exchanges are at the bottom of the list, but for different reasons. With start-up businesses, nearly one-quarter of Canadian residents are unsure how to rate their trust. With cryptocurrency exchanges, however, nearly half of Canadians rank their trust as 3 or below on the 9-point scale.

As we saw with trust in corporations, those on the right of the political spectrum are also more likely to trust private corporations with digital services, while those on the left are more likely to trust the federal government and health-care providers. The exception is provincial governments, which are more trusted by respondents on the right.

Online Harms


Canadian residents are exposed to a wide range of potential harms through their use of digital platforms—from exposure to fake news, to hate speech, identity fraud and beyond.

Figure 13 shows that the most common harm that respondents notice in posts, links, images, or videos on online platforms is false news (i.e. misinformation). Three-quarters of Canadian residents have spotted news or current events that they immediately suspected as false at least a few times a year. Nearly as many reported seeing false news they initially believed to be true and later discovered to be false (71 percent). Similarly, 67 percent have seen synthetic media or deepfake images or videos online. Exposure to hate speech against an identifiable group and identity fraud are both nearly as high, while exposure to speech promoting physical violence is slightly lower with just over half of Canadian residents (51 percent) reporting having seen it at least a few times per year.

This section will dive into Canadian respondents’ exposure to each of these harms online and highlight the groups most vulnerable to or at risk of each harm.

The top category of online harm experienced by Canadian residents is false information about news and current events. Generally, “misinformation” is false information that spreads without any intent to deceive. This is contrasted with “disinformation,” which is false information intentionally spread to cause damage to individuals, organizations, countries, or any other group. While misinformation is usually unintentional, it can still cause harm because it spreads virally on social platforms. Figure 13 shows that 12 percent say they see false news multiple times a day, and a large majority are exposed to it at least a few times a year.

This has consistently been true over time (see Figure 14). While it was largely steady between 2022 and 2024, there has been a slight increase in its prevalence. While in 2022 only 32 percent saw false news (and instantly identified it as fake) multiple times a week, that has since risen to 38 percent in 2025. Similarly, in 2022 only 16 percent of respondents believed fake news at least a few times a week but that has grown to 23 percent now.

Respondents who said they had seen examples of false news online were asked to describe an example of what they had seen. The most commonly cited examples relate to US politicians, and in particular US President Donald Trump—a topic that experienced a significant jump from 11 percent in 2024 to 22 percent in 2025. Conversely, examples related to celebrities fell from 22 percent in 2024 to 16 percent in 2025.

Because this survey was conducted around the time of the Los Angeles wildfires, that event featured prominently, since false information related to it was circulating online. This included fake photos of the Hollywood sign burning, misattribution of blame, and other false stories. Some of these, like the fake burning Hollywood sign, were likely disinformation (circulated with the intent to deceive), while others were simply misinformation spreading more organically. The episode reflects the speed at which misinformation and disinformation circulate around breaking news events. There is nothing about the wildfire disaster that made it particularly susceptible to false claims; all major current events face the same risk of false information spreading quickly.

Topic | Percent
US politicians | 22
Celebrity | 16
Canadian politicians | 13
Scam/fraud | 10
Los Angeles wildfires | 6
AI | 4
Israel/Palestine | 4


As noted, the more pernicious form of fake news online is untrue information that Canadian residents actually believe—this comes in the form of misinformation or disinformation. While this distinction is important, for the purpose of this research we only assess belief in false information, regardless of whether it was spread intentionally or not.

To measure belief in false information (or misinformation) we ask respondents to assess whether or not eight different statements are true (see Figure 15). These statements reflect a range of beliefs that have circulated online recently, from climate denial to 15-minute cities (the belief that governments are intentionally limiting the free movement of Canadians) to COVID-19-related conspiracies. The list also includes one flipped statement as a control: “the number of natural disasters is increasing due to climate change” is true, so rating it as false is a sign of belief in misinformation.

Aside from the flipped statement, belief is highest in a version of the “great replacement theory” (the false belief that there is an intentional plan in place to replace native-born populations in Western countries with immigrants). This is a more recent development; when the statement was first tested in 2022, only 16 percent said it was either definitely or somewhat true (see Figure 16).

That has now risen to over one-quarter of Canadians (28 percent) including 10 percent who say it is definitely true. This rise has coincided with significant public discourse in Canada surrounding immigration over the past year.7 The next most common belief is that climate change is a natural phenomenon. While it has grown since 2022 from 22 percent to 25 percent now, it has grown more slowly than the great replacement theory.

While belief in the other statements has remained consistent over time, there has been some softening in beliefs surrounding the war in Ukraine. In 2022, 73 percent of Canadian residents identified the statement “Ukraine nationalism is a neo-Nazi movement, so Russia invaded Ukraine to protect people” as untrue. That figure has dropped to only 66 percent in 2025 with the share of residents unsure rising from 20 to 25 percent.

Question: How much truth do you think there is to each of the following statements?

We group Canadian residents together based on the number of statements they correctly identify as not true (or true in the case of the single flipped statement). This means that respondents who are unsure about any statement were treated as having not correctly identified that statement. The result is a scale ranging from 0 (no statements correctly identified) to 8 (every statement correctly identified).

Doing this, as shown in Figure 17, we find that just over half (52 percent) of Canadian residents fall into the “low misinformation” group which includes anyone who scored at least 6 out of 8. A further 35 percent fall into the middle group, scoring between 3 and 5, and the remaining 13 percent of Canadian residents are classified as the “high misinformation” group—correctly identifying at most two statements out of the eight total.

This represents a slight but statistically insignificant increase in overall belief in misinformation since 2024. We cannot compare this against previous iterations of this survey due to changes in the statements used.
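To make the construction of the index concrete, the sketch below scores a single respondent and assigns them to one of the three groups described above. The statement labels, response codes, and sample answers are illustrative assumptions rather than the actual survey instrument.

```python
# Minimal sketch of the misinformation-index scoring described above.
# The statement labels, response codes, and sample answers are
# illustrative assumptions, not the actual survey instrument.

FALSE_STATEMENTS = [f"statement_{i}" for i in range(1, 8)]  # seven false statements
FLIPPED_STATEMENT = "statement_8"  # the one true (control) statement

def score_respondent(responses: dict[str, str]) -> int:
    """Count correctly identified statements, from 0 to 8.

    A false statement counts only if rated "false"; the flipped statement
    counts only if rated "true". An "unsure" answer never counts as correct.
    """
    score = sum(1 for s in FALSE_STATEMENTS if responses.get(s) == "false")
    if responses.get(FLIPPED_STATEMENT) == "true":
        score += 1
    return score

def classify(score: int) -> str:
    """Map a 0-8 score to the report's three groups."""
    if score >= 6:
        return "low misinformation"
    if score >= 3:
        return "middle"
    return "high misinformation"

# Example: a respondent who correctly rejects five false statements,
# is unsure about two, and correctly accepts the flipped statement.
answers = {f"statement_{i}": "false" for i in range(1, 6)}
answers.update({"statement_6": "unsure", "statement_7": "unsure", "statement_8": "true"})
print(score_respondent(answers), classify(score_respondent(answers)))  # 6 low misinformation
```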

Belief in misinformation is not spread evenly across the population. In general, older, wealthier, and better-educated Canadian residents tend to correctly identify more statements (scoring higher on the misinformation index), while younger, lower-income, and less-educated Canadian residents score lower (see Figure 18). This has been consistently true across the years of this survey.

While there are not massive regional differences, Quebec has consistently stood out as a region where residents tend to perform better on the misinformation index. In Quebec, 58 percent scored at least a 6 on this scale, 6 points higher than the national average.

Consistent with both past results from this survey and work by other groups like EKOS8, those on the right of the political spectrum tend to believe more misinformation than those on the left.

We can also compare this misinformation index with where Canadian residents say they get their news. For this comparison, we use the average number of correctly identified statements, which for the entire survey population is 5.23 out of 8.

In general, respondents who say they get news from traditional sources such as news websites, news alerts on phones, the radio, and other similar traditional media sources tend to have the highest average number of correct responses. Those who read news websites in particular have the highest overall scores on the misinformation index. See Figure 19.
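The sketch below illustrates this comparison: computing the average number of correctly identified statements among respondents who name each news source, and comparing it against the overall average. The respondents, scores, and source labels are made-up examples, not the survey data.

```python
# Illustrative sketch of the comparison described above: the average number
# of correctly identified statements among respondents who name each news
# source, compared against the overall average. Respondents can select more
# than one source, so a person may contribute to several averages.
# All respondents, scores, and labels below are made-up examples.

from statistics import mean

respondents = [
    {"score": 7, "news_sources": ["news websites", "radio"]},
    {"score": 6, "news_sources": ["news websites", "Facebook"]},
    {"score": 4, "news_sources": ["Facebook", "YouTube"]},
    {"score": 3, "news_sources": ["WhatsApp"]},
]

overall_average = mean(r["score"] for r in respondents)

scores_by_source: dict[str, list[int]] = {}
for r in respondents:
    for source in r["news_sources"]:
        scores_by_source.setdefault(source, []).append(r["score"])

for source, scores in sorted(scores_by_source.items()):
    print(f"{source}: {mean(scores):.2f} ({mean(scores) - overall_average:+.2f} vs. overall)")
```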

While most social platforms tend to be associated with worse scores, Reddit and Threads both have users who score above average on the misinformation index. While Threads is new to the survey this year, Reddit has consistently been an outlier from other social platforms.

Other social platforms do worse on this measure. WhatsApp has the lowest average score, and all of the most popular platforms, including Facebook, Instagram, YouTube, and TikTok, score below average on the misinformation index. The general trend is that relying on social media networks as a source of news is associated with higher rates of belief in misinformation.

ChatGPT also falls at the bottom of the list. This is consistent with last year’s results, despite the fact that ChatGPT as a platform has added new features to its base model allowing the tool to pull in up-to-date information. However, this data does not allow us to disentangle whether the platform causes the low misinformation index score or if those who are already prone to misinformation are more likely to use AI tools like ChatGPT.

Recently, concerns about the increasing ease of creating deepfakes have become front and centre in the minds of many Canadians, as well as policymakers concerned with the integrity of informational ecosystems. With cheap or free generative AI tools capable of generating increasingly realistic images, video, and audio easily available and becoming better with every passing year, Canadians are now seeing more synthetic media than ever.

Now, 67 percent of Canadian residents report seeing deepfakes or synthetic media online at least a few times a year—up from 60 percent of Canadians just last year (see Figure 20). This includes 30 percent who now report seeing synthetic media multiple times a week (and nine percent who are seeing it daily).

While most of the conversation surrounding deepfakes focuses on their threat as vehicles for political misinformation, the most common type of deepfake that Canadian residents report seeing relates to celebrities. A close second, however, is deepfakes of Canadian politicians.

Topic | Percent
Celebrities | 24
Canadian politicians | 23
AI images and videos | 12
Fake news | 7
Something from social media (not specific) | 7
Environment | 6
Animals | 4
Explicit content | 4
Scam/fraud | 3

Canadians’ exposure to hate speech, identity fraud and impersonation, and the promotion of violence has grown over the past three years. Although the overall number of Canadians reporting exposure to these harms has remained largely stable (with the exception of impersonation), the frequency of exposure has increased.

Figure 21 shows that the share of Canadians who say they see hate speech against an identifiable group at least a few times a week has grown from 22 percent in 2022 to 27 percent in 2025. Similarly, the share who see content promoting physical violence has grown from 15 percent to 18 percent over the same period.

When it comes to identity fraud or impersonation, there has been growth both in overall exposure and in frequency of exposure. In 2025, 22 percent reported seeing this type of content multiple times a week compared to 19 percent in 2022, but overall exposure to fraud at least a few times a year has also grown from 59 percent to 63 percent.

While most Canadians are exposed to these harms online, fewer report personal experiences of being targeted. In 2025, 12 percent of Canadian residents say they have been targeted by hate speech online, eight percent have been targeted with online harassment causing them to fear for their safety, and five percent report having had intimate images shared online without consent (see Figure 22).

The incidence of targeted hate speech is significantly higher for many equity-deserving communities. For racialized Canadian residents, recent immigrants, those living with disabilities, and 2SLGBTQ+ residents, hate speech is anywhere from 50 percent to 100 percent more common (see Figure 23). Women experience slightly lower rates of targeted hate speech.

The topic of online hate speech has changed significantly since 2024. Last year, online hate speech related to the Israel-Palestine conflict was most prevalent. In 2025, the most cited topic of online hate speech relates to newcomers to Canada, with many respondents specifically citing hate speech toward Indian immigrants. Israel/Palestine remains a significant topic of online hate speech reported by Canadians (17 percent), with many identifying exposure to the related topics of antisemitism (eight percent) and Islamophobia (three percent).

Topic | Percent
Newcomers | 18
Israel/Palestine | 17
Trump | 11
Anti-2SLGBTQIA+ | 10
Racism | 9
Antisemitism | 8
Canadian politicians | 6
Sexism | 6

The Generational Divide


A consistent theme in this research on the online media environment and exposure to online harms has been the significant generational divide between the youngest and oldest Canadian residents.

For older generations, Facebook is the mainstay of social media. While YouTube and Instagram are both also used by many older respondents, Facebook is the most popular, with a majority of those over 60 still using the platform (and even higher usage for those 45 to 59 and those 30 to 44). However, older respondents do not rely on Facebook for news. Instead, many still turn to traditional media—from television to radio and even print newspapers in the oldest generation.

The pattern is very different for the youngest generation (those 29 and under). While Facebook is still very popular, it is far from the go-to social platform. Younger respondents use a much more diverse range of online platforms daily, from Instagram to TikTok to WhatsApp, all of which are about as popular as Facebook (and, in the case of Instagram, significantly more so). Beyond that, even the smaller platforms have a large user base in the youngest generation: nearly a fifth of those 29 and under say they use Discord daily, 22 percent use X daily, and 20 percent use Reddit daily.

Young people also rely much more heavily on these social media platforms for news. For this youngest cohort, Instagram and YouTube are the two most popular news sources, significantly more than for those in any other age group. Young people also rely on other platforms like X or TikTok for news. Yet young people still use traditional media: a third say they get news from TV and another third get news from news websites.

The varied media ecosystems that different generations live in mean that each has very different experiences with online harms. Looking at all potential online harms, younger Canadian residents are far more exposed. Figure 24 shows that for everything from seeing hate speech online to having intimate images shared without their consent, younger respondents are around 50 percent more likely to have experienced any given harm.

Additionally, the platforms that younger Canadian residents are using, both for entertainment and for news, are associated with stronger belief in false information. For all five of the platforms associated with the worst scores on the misinformation index, young respondents were significantly more likely to say they used the platform and significantly more likely to say they got information from the platform.

Solutions


Given the prevalence of harmful online content on social media platforms, Canadians are seeking solutions. In this section, we explore who respondents believe is at fault for the rise in harmful content, how interventions to address harmful content are working, and what types of system-level policy interventions they believe governments should pursue.

Canadians consistently report that users of social platforms are most at fault for causing the rise in harmful content. In 2025, 43 percent of Canadians say users of social platforms are most responsible for causing the rise in harmful content, while only 20 percent attribute fault to the platforms themselves and 11 percent to government or political leaders. A quarter of Canadians do not think anyone is responsible or are unsure of who is to blame (see Figure 25).

When asked who is responsible for fixing it, these numbers are reversed. Nearly half (47 percent) say that online platforms are responsible and another 21 percent say that government or political leaders are responsible. Only 14 percent say that users are responsible for fixing the rise in harmful content.

Users are taking a number of actions to address their own exposure to harmful online content and help improve the state of social media platforms.

Blocking and reporting

The primary tools available to users online are blocking and reporting accounts and content they deem harmful.

Four in ten Canadian residents say they have blocked, reported, or flagged an account for being either fake or automated. Another quarter have reported, flagged, or blocked an account for sharing illegal content (see Figure 26). A smaller number of Canadian residents (eight percent) say they have also reported illegal online activity to police.

However, Canadians believe the impact of these tactics is limited. Only 37 percent of those who have blocked, reported, or flagged a user online rate this action as effective (and only 10 percent as very effective). By comparison, 29 percent in total called it ineffective, including 14 percent who say it was not at all effective. See Figure 27.

Fact checking

When it comes to misinformation, disinformation, and deepfakes, users are often told to fact-check what they see online to make sure it is correct. As noted above, 70 percent of Canadian residents have seen false information online and immediately recognized that it was false. However, fewer Canadians report that they fact-check what they see. Figure 28 reveals that only 61 percent of Canadians say they have ever fact-checked something they saw online, despite persistent warnings not to believe everything you see online. More concerning, those who believe the most misinformation are also less likely to fact-check online content. Among the group most likely to report belief in misinformation, only 40 percent report ever fact-checking information using other sources—more than 20 points lower than the 61 percent of Canadian residents overall who report fact-checking.

Given the perceived limitations of user-side solutions to harmful accounts and content and the responsibility that Canadians attribute to platforms and governments to address these issues, we also asked about attitudes toward regulatory interventions for online platforms.

Platform regulation

In general, respondents support government action to regulate online platforms to limit online harms, even when presented with tradeoffs. As Figure 29 shows, two-thirds of Canadians (68 percent) believe that reducing the amount of hate speech, harassment, and false information online is more important than protecting freedom of expression. Similarly, about two-thirds believe the government should intervene to require online platforms to act responsibly and reduce the amount of harmful content on their platforms, and a similar share believe government should intervene to reduce the intentional spread of false information because it is a threat to Canadian democracy. These attitudes have held steady since 2022, with support for all three pro-regulation statements consistently between 64 percent and 70 percent across all three waves of the survey.

Question: For each of the following pairs of statements, please indicate which of the following best describes your perspective.

Canadians' attitudes are not consistently aligned with either pro- or anti-regulation positions. Only about half of Canadians agree with all of the pro-regulation statements, while only about 15 percent agree with every anti-regulation statement. The remaining third of Canadian residents agree with some pro-regulation and some anti-regulation statements.

When asked about specific interventions that place requirements on online platforms to address online harms, Canadians are overwhelmingly supportive.

Figure 30 presents a range of potential regulatory requirements for online platforms, with nearly every proposed approach supported by more than three-quarters of Canadian residents. The highest levels of support are for requirements that aim to safeguard children or limit the spread of non-consensual intimate images. The proposed intervention with the lowest level of support (still 65 percent) would allow the government to order platforms to take action during times of crisis, including blocking or promoting specific content. Levels of support have been consistent or growing over time (see Figure 31).

Banning TikTok

Canadians were also asked about whether they support banning TikTok in Canada, in response to concerns about the potential risk of surveillance or interference by the Chinese state given the company’s ownership structure. The Government of Canada has already banned the use of TikTok on government-issued devices, and the United States has taken significant steps toward banning or forcing the sale of the platform (although, more recently, that has been pulled back).

As shown in Figure 32, a majority of respondents (52 percent) support banning TikTok in Canada, with only 19 percent opposed and 29 percent who are either neutral on the issue or don’t know yet. This is largely consistent with our findings in 2024, when 57 percent of Canadians supported either fully banning TikTok or banning it for users under 18 years of age.

Unsurprisingly, given the demographics of those who use TikTok, younger Canadians are significantly more likely to oppose a proposed ban than older groups. Among respondents in the 16 to 29 age group, only 32 percent support banning TikTok while 35 percent are opposed to it. At the opposite end of the spectrum, only nine percent of respondents over 60 years of age oppose banning TikTok while 67 percent support it.

Conclusion


The findings of this survey continue to present a paradox. Canadians’ use of social media platforms like YouTube and Facebook is nearly ubiquitous and has been rising on an array of newer platforms. Canadian residents’ media diet increasingly consists of news and information from social media rather than directly from news organizations themselves—with some of the most used platforms, run by Meta, actually blocking Canadian news content.

Yet Canadians’ levels of trust in online platforms—notably the Meta platforms, X, and TikTok—are extremely low. The people who use these platforms for news and current events are more likely to be exposed to false and misleading information than those who use traditional media sources. And Canadians continue to report high levels of exposure to harms like hate speech, identity fraud, and non-consensual sharing of intimate images. Exposure to these harms is more common for younger Canadians, as well as for other marginalized communities. These harms are likely to be amplified by synthetic media and deepfakes produced by new and rapidly advancing generative AI technologies.

What is to be done to address these online harms on social platforms?

Canadians believe that platforms should be primarily responsible for fixing these problems. Some Canadians are employing user-side tactics offered by platforms, like blocking and reporting harmful content and accounts to the platforms, or independently fact checking the sources of information they are seeing. Yet a plurality of respondents do not perceive these tools and tactics to be effective. And those with the highest levels of belief in misinformation are least likely to use them.

Canadians strongly support the regulation of online platforms. Even when offered a tradeoff, about two-thirds of respondents want governments to regulate platforms to reduce the amount of hate speech, harassment and false information; to require platforms to act responsibly to reduce harmful content; and to address the intentional false spread of information that threatens democracy. This has been consistently true for the years we have tracked these attitudes.

When presented with specific policy interventions to impose requirements on social platforms, Canadians are even more supportive. The highest levels of support are for requiring the removal of the most egregious types of harmful content, such as child sexual abuse materials and non-consensual sharing of intimate images. But there is also strong support for increasing parental controls, quickly removing fraudulent accounts and bots, addressing hate speech, labelling synthetic media and deepfakes, and many others. No proposal has less than 65 percent support from Canadian residents.

In sum, Canadians are very clearly calling for government action to address online harms.


1

Christina Newberry, “2025 Facebook Statistics Every Marketer Needs,” Hootsuite, February 19, 2025, https://blog.hootsuite.com/facebook-statistics.

2

Christopher Ferguson and Dongwoo Kim, “Prorogation’s Digital Impact: Canada’s Digital Bills Set to Die on the Order Paper,” Privacy & Cybersecurity Law Bulletin, Fasken, January 14, 2025, https://www.fasken.com/en/knowledge/2025/01/prorogations-digital-impact

3

Christopher Ross, Aengus Bridgman, Saewon Park and Sejal Davla, “Like it or Not: The Changing Canadian Information Ecosystem,” Media Ecosystem Observatory, March 11, 2025, https://meo.ca/work/like-it-or-not-the-changing-canadian-information-ecosystem

4

Sara Parker, Christopher Ross, Zeynep Pehlivan and Aengus Bridgman, “Old News, New Reality: A Year of Meta’s News Ban in Canada,” Media Ecosystem Observatory, August 1, 2024, https://meo.ca/work/old-news-new-reality-a-year-of-metas-news-ban-in-canada

5

J. Steinburg, Trust in Canada: Recent Trends in Measures of Trust, Trust in Research Undertaken in Science and Technology (TRUST) Scholarly Network, University of Waterloo, April 2024, https://uwaterloo.ca/trust-research-undertaken-science-technology-scholarly-network/sites/default/files/uploads/documents/trust-in-canada-recent-trends-in-measures-of-trust-april-2024.pdf.

6

Mario Toneguzzi, “Canadian Consumers Rally Behind ‘Buy Canadian’ Movement,” Retail Insider, March 19, 2025, https://retail-insider.com/retail-insider/2025/03/canadian-consumers-rally-behind-buy-canadian-movement

7

Keith Neuman, Canadian Public Opinion About Immigration and Refugees – Fall 2024, Environics Institute, October 17, 2024, https://www.environicsinstitute.org/projects/project-details/canadian-public-opinion-about-immigration-and-refugees—fall-2024

8

“The Politics of Resentment: Disinformation and Mistrust are the Critical Sorters of the Electorate,” EKOS Politics, September 28, 2023, https://www.ekospolitics.com/index.php/2023/09/the-politics-of-resentment/.