Survey of Online Harms in Canada 2023

Sam Andrey | March 2023

Funded by the Government of Canada

Executive Summary

Social media platforms have, in many ways, become a new public square, where people in Canada and around the world connect and engage in society and our democracy. Increasingly, though, the platforms and their algorithms have also been weaponized by ill-intentioned actors to spread conspiracy theories and extremism, and to target marginalized people with hate and harassment.

As the Government of Canada considers new legislation to tackle online safety, the latest Survey of Online Harms in Canada — the fourth conducted by the Leadership Lab since 2019 — provides up-to-date insights on Canadians’ experiences with harmful online content, and their views on the role of government and platforms in addressing those harms. This new survey was conducted online in late October 2022 with a representative sample of 2,000 people in Canada aged 16 and older.

Key Findings

  • Canadians’ use of social media platforms remains high and is growing. Platforms are increasingly being used as a source for news, particularly among younger Canadians, for whom Instagram is the most widely used news source.
  • 72% believe Canadians have been exposed to more harmful content, such as hate speech, harassment and false information, over the past few years.
  • 10% of Canadians reported being targets of online hate speech, and 8% said they were targets of online harassment that caused them to fear for their safety; these proportions were approximately twice as high among Canadians who are racialized, have a disability or identify as LGBTQ2S+.
  • About 15% of Canadians have a high degree of belief in misinformation. This group is less trusting of mainstream news, more likely to trust social media and use it for news, and less likely to fact check.

Canadians are ready for action to address harmful online content

  • Canadians’ trust in social media platforms to act in the best interest of the public continues to fall. Only one in ten have a high degree of trust in Facebook, TikTok or Twitter.
  • Two-thirds believe government should require online platforms to act responsibly and reduce the amount of harmful content on their platforms.
  • Strong support for requiring platforms to remove various categories of illegal or harmful content has increased significantly since 2021.
  • Over 80% of Canadians support requirements for platforms to quickly remove reported illegal content, block automated or bot accounts, label information verified as false, and provide tools for users to fact check or search for the authenticity of online content.


More Key Findings

Online Harms and Misinformation

  1. Four in ten Canadians are exposed to online hate speech at least monthly. Sixteen percent of survey respondents reported being exposed to online hate speech on a weekly basis, 20% monthly, and 5% daily. (Figure 9)
  2. Exposure to online hate remains higher among marginalized communities. Ten percent of Canadians reported being targets of online hate speech, and 8% said they were targets of online harassment that caused them to fear for their safety. These proportions were approximately twice as high among Canadians who are racialized, have a disability or identify as LGBTQ2S+.
  3. The frequency of exposure to online misinformation persists. Over half (56%) of respondents reported seeing information about the news or current events that they immediately suspected to be false at least a few times a month — this proportion remains unchanged from 2019. Examples of misinformation cited include COVID-19 and vaccines, the Russian invasion of Ukraine, and Canadian and American elections. (Page 13)
  4. Right-leaning respondents had the highest rates of belief in false information. Twenty-seven percent of those who identify on the right of the political spectrum scored low (0-2) on identifying eight misinformation statements, compared with only 15% of centre and 6% of left-leaning respondents. Those with high belief in misinformation were less likely to trust mainstream news, more likely to trust social media and use it for news, and less likely to fact check. (Figure 10)
  5. Canadians want intervention. Over 80% of Canadians support requirements for platforms to quickly remove reported illegal content, block automated or bot accounts, label information verified as false, and provide tools for users to fact check or search for the authenticity of online content. (Page 22)

Trends in online activity, news sources and trust levels

  1. Use of social media as a news source is rising. From 2021 to 2022, Canadians’ use of Facebook to access news grew by 8% and Instagram by 4%. (Figure 3)
  2. Trust levels in social media platforms are steadily declining. From 2021 to 2022, high-trust scores for Twitter fell 10%, while scores for Facebook and TikTok both decreased by 9%. (Figure 6)
  3. Instagram is the most popular online news source for young Canadians, and TikTok is growing. At 44%, Instagram is the most popular news source for Canadians aged 16-29. TikTok is an emerging news source, cited by 21% of those aged 16-29 and by 30% of those aged 16-23. (Figure 4)
  4. Trust levels in Canadian mainstream media have slightly declined. At 48%, 43% and 34% respectively, CBC/Radio-Canada, CTV and The Globe and Mail had significantly higher levels of trust than social media platforms; however, trust has declined slightly from 2021, when trust levels for these outlets were 52%, 49% and 36%, respectively. Low trust in these three outlets has approximately doubled since 2019 among the 16% who identify on the right of the political spectrum, from an average of 13% to 25%, while remaining stable among those on the left and in the centre at 10%. (Figure 7)
  5. Legacy media remain the top news sources, but use is declining. At 58%, 44% and 39% respectively, television, news websites and radio continue to be the top news sources for people in Canada; however, each saw declines in 2022 following a surge in use during the pandemic. (Figure 3)

Introduction

A relatively small number of technology companies have succeeded in largely consolidating and privatizing the online public square.1 Large online platforms are increasingly used by people in Canada to connect with friends and family, stay up-to-date with the news, and engage in civic and democratic discourse. The impacts of this new public square are continuing to evolve and are still not fully understood. However, evidence has mounted over recent years about its negative effects on Canadians’ safety, social cohesion and democracy: hate speech and harassment that target marginalized groups are on the rise; disinformation, both foreign and domestic, is fueling radicalization and extremism; and real-world violence, including sexual abuse and child exploitation, is unfortunately an increasing reality.2, 3, 4, 5, 6

As a result, there have been growing calls for public policy changes to mitigate these harms and rebuild our public square.7, 8, 9 At the same time, legitimate concerns have been raised regarding censorship — and that any changes may unreasonably limit our rights and freedoms, particularly the right to free expression.10, 11

In September 2021, we released Rebuilding Canada’s Public Square, which provided the results from our past three surveys on this topic, as well as advice on how to improve the Government of Canada’s proposal at the time to tackle some categories of online harm. Since then, the federal government established an expert advisory group on online safety, which provided advice on how to design a stronger legislative and regulatory framework to address harmful content online. The group rightly suggested a move away from a monitoring and 24-hour takedown model for harmful content, which could drive over-censorship, toward more of a systems approach that places duties on platforms to act responsibly, increase transparency and mitigate their systemic risks.

Public policy on online safety should be informed by evidence about Canadians’ experience with online harms, as well as Canadians’ views on the appropriate role of government in addressing those harms. This report is intended to provide up-to-date evidence about online harms and Canadians’ views on how to govern online platforms — and provide advice on how to do so in a manner that protects and advances Canadians’ fundamental rights and freedoms.

1 Nyabola, N., Owen, T. & Tworek, H., 2022

2 Canadian Race Relations Foundation, 2021

3 Garneau, K. & Zossou, C., 2021

4 Bridgman, A. et al., 2022

5 Humphreys, A., 2020

6 Statistics Canada, 2022

7 Housefather, A., 2019

8 Public Policy Forum, 2022

9 Tusikov, N., 2022

10 Carmichael, D. & Laidlaw, E., 2021

11 Canadian Heritage, 2022


Canadians’ Experiences on Online Platforms

Overall Use of Platforms

Use of online platforms among people in Canada remains high and is growing, with 98% using at least one major platform. While YouTube remains the most widely used platform, Meta’s four major platforms (Facebook, Messenger, Instagram and WhatsApp) have all increased in overall use since 2019 (Figure 1). TikTok is continuing to experience the fastest rate of growth among Canadians, nearly tripling its reach from 10% to 29% in two years, and is now used by more than half of those aged 16-29 (Figure 2). Platforms with noteworthy decreases in overall use since 2019 include Pinterest (-7 percentage points) and Twitter (-6 percentage points).

Figure 1: Use of Online Platforms in Canada

Figure 2: Use of Online Platforms by Age in Canada

Platforms As a News Source

While legacy media from television (58%), news websites (44%) and radio (39%) continue to be the top news sources for people in Canada overall, each experienced declines in 2022 from surges in use during the pandemic (Figure 3). In contrast, the use of online platforms as a source of news continues to grow, with 32% citing Facebook, 24% using YouTube and 16% naming Instagram.

Younger Canadians, in particular, are using online platforms for news — those aged 16-29 use Instagram (44%), Facebook (36%) and YouTube (36%) for news at rates comparable to or greater than legacy media, such as TV (43%), news websites (34%) and radio (27%). The use of Instagram for news among younger Canadians has grown especially sharply, from 19% in 2019 (+25 percentage points), compared to Facebook (+4) and YouTube (+8). TikTok is also an emerging news source, with 21% of those aged 16-29 citing the platform, growing to 30% among those aged 16-23.

Those who use any one of Facebook, YouTube or Instagram for news are more likely to also use the other two for news, and are less likely to use legacy media (TV, radio and news websites).

Figure 3: Reported Sources for News and Current Events in Canada

Figure 4: Reported News Sources – by Age in Canada

In addition to consuming news on social media, a significant proportion of Canadian residents actively engage with news and politics on these platforms. In this survey, 35% of respondents indicated that they have commented on or posted links, videos or images about news or politics; 17% have joined an online group about an issue or cause with people they didn’t know; and 13% said they have participated in a government or political consultation or engagement online.

Trust in Platforms

Canadians’ trust in online platforms continues to decline. When asked to assess their trust in various organizations to act in the best interest of the public on a scale of 1-9 (with 1 being the lowest level of trust and 9 the highest), TikTok, Facebook and Twitter had the lowest levels of trust (Figure 5).

Figure 5: Canadians’ Trust to Act in the Best Interest of the Public

Figure 6: Canadians’ Trust in Social Media Platforms Continues to Fall

This finding is consistent with our past surveys, where social media platforms had lower levels of trust than oil companies, telecommunication providers and mainstream media. Trust levels in social media platforms declined significantly from 2021, with just one in ten Canadians now having high trust (Figure 6). Trust in TikTok fell especially sharply, and it has replaced Facebook as the least trusted organization.

Figure 7: Canadians’ Trust in Media Slipping Slightly

In comparison, Canadians’ trust in mainstream media outlets remained relatively high, with just one in eight having low trust in CBC, CTV, Global News and The Globe and Mail. Consistent with global trends,12 trust levels in Canadian media have fallen slightly (Figure 7), and it is worth noting that the decline is concentrated among those on the right end of the political spectrum (7-9 on a 9-point scale). Low trust in CBC, CTV and The Globe and Mail has approximately doubled since 2019 among the 16% who identify on the right, from an average of 13% to 25%, while remaining stable among those on the left and in the centre at 10%.

12 Newman, N. et al., 2022

Exposure to Online Harms

Canadians perceive exposure to harmful content as frequent and increasing, though self-reported rates of exposure on online platforms have not risen significantly in recent years.

We first asked respondents if they think there has been a change in the amount of harmful content, such as hate speech, harassment and false information, that Canadians have been exposed to over the past few years. Just over 70% thought that Canadians have been exposed to more harmful content, with only 6% thinking it has declined (Figure 8). We then asked how frequently they see a range of harmful content on online platforms, such as false information, hate speech, identity fraud and promotion of violence (Figure 9).

Figure 8: Canadians’ Perceived Change in Exposure to Harmful Content

Figure 9: Reported Exposure to Online Harms in Canada

Misinformation

Over half (56%) of respondents reported seeing information about the news or current events that they immediately suspected to be false at least a few times a month — this proportion is unchanged from 2019. In addition, 37% reported seeing information about the news or current events that they believed to be true and later found was false at the same frequency. Both exposure levels were significantly higher among those who use Facebook, YouTube or Instagram for news, with monthly exposure levels of 64% (immediately suspect) and 47% (later find to be false).

Respondents were prompted to provide a recent example of false information that they saw on online platforms, which 47% did (n=688).

Figure 10: Canadians’ Belief Levels in Eight Misinformation Statements

Survey respondents were also asked how much truth they thought there was to eight statements of misinformation on a range of topics, including COVID-19, climate change, immigration and the Russian invasion of Ukraine (see Methodology section for more detail). A majority of Canadians (53%) correctly identified at least 75% of the misinformation statements (i.e., 6-8 statements correctly identified, categorized as low belief in misinformation) (see Figure 10).

In total, 15% of respondents correctly identified 25% or fewer of the false statements (i.e., 0-2 statements correctly answered, categorized as high belief in misinformation). This group was more likely to identify on the right end of the political spectrum (7-9 on a 1-9 scale; correlation coefficient of 0.5). Weak correlations were also observed with lower household income (0.1) and education levels (0.1), with no significant differences in belief in misinformation by gender or overall frequency of social media use.
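
To make this categorization concrete, here is a minimal sketch (illustrative only, not the survey's analysis code) that groups a respondent by the number of misinformation statements correctly identified, using the thresholds described above; the function name and sample scores are hypothetical.

```python
# Illustrative only: groups a respondent by how many of the eight misinformation
# statements they correctly identified, using the report's thresholds
# (0-2 correct = high belief, 6-8 correct = low belief in misinformation).
def belief_category(correct_count: int) -> str:
    if not 0 <= correct_count <= 8:
        raise ValueError("expected a score out of eight statements")
    if correct_count <= 2:
        return "high belief in misinformation"
    if correct_count >= 6:
        return "low belief in misinformation"
    return "moderate belief in misinformation"

# Hypothetical scores for a handful of respondents.
for score in [8, 7, 4, 2, 6, 1, 5]:
    print(score, "->", belief_category(score))
```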

Figure 11: Low Trust in Organizations by Belief in Misinformation

High believers in misinformation were significantly less likely to trust authoritative information sources to act in the best interest of the public, reporting three times the level of low trust (1-3 on a 9-point scale) in the CBC, CTV, Global News and The Globe and Mail, compared to low believers in misinformation (Figure 11). In contrast, high believers were less likely to have low trust in Facebook, Twitter and TikTok, compared to other Canadians.

Figure 12: News Sources by Belief in Misinformation

High believers in misinformation were significantly less likely to report using legacy media (TV, radio, news websites, print newspapers) than other Canadians, and more likely to use Facebook, YouTube and Instagram for news (Figure 12).

Figure 13: Average Number of Correct Identification of Misinformation by News Source

Another way of looking at this relationship is the average number of correct identifications of misinformation statements by news source (Figure 13). Those who use legacy media (TV, news websites, radio, newspapers), as well as Reddit and Twitter, for news believe in misinformation at significantly lower levels, while those who use Instagram, YouTube, Facebook, TikTok, Snapchat, Messenger and WhatsApp have significantly higher belief levels. While most Canadians reported using more than one source for news (83%), and these groups are not mutually exclusive, it is also worth noting that the 17% who said they rely on only one source had a significantly lower average number of correct answers (4.3) compared to 5.2 overall.
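
As an aside for readers curious how an average-by-source comparison like Figure 13 is assembled, the sketch below computes mean correct identifications per news source; the respondent records here are hypothetical stand-ins, not the survey microdata.

```python
from collections import defaultdict

# Hypothetical respondent records: news sources used and number of the eight
# misinformation statements correctly identified (0-8). Not the survey data.
respondents = [
    {"sources": ["TV", "news websites"], "correct": 7},
    {"sources": ["Facebook", "Instagram"], "correct": 3},
    {"sources": ["radio", "Twitter"], "correct": 6},
    {"sources": ["TikTok", "Facebook"], "correct": 4},
]

totals = defaultdict(lambda: [0, 0])  # source -> [sum of scores, respondent count]
for record in respondents:
    for source in record["sources"]:
        totals[source][0] += record["correct"]
        totals[source][1] += 1

# Average number of correct identifications per news source (cf. Figure 13).
for source, (score_sum, count) in sorted(totals.items()):
    print(f"{source}: {score_sum / count:.1f}")
```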

Both of these findings echo previous research that found a relationship between consuming news on social media platforms and the propensity to believe in conspiracy theories.13 A noteworthy 10% of high believers also said they use no news sources (even with the ability to write in other sources), compared to just 3% overall.

13 Stecula, D., Pickup, M., & van der Linden, C., 2020

Figure 14: Lower Rates of Fact Checking Among High Believers in Misinformation

High believers in misinformation were also significantly less likely to say they had fact checked something they saw online, and more likely to say they frequently encounter news they believed to be true and later found out was false (Figure 14).

Hate Speech and Harassment

People in Canada reported relatively frequent exposure to online hate speech, though the proportion reporting that they see hate speech online at least a few times per month fell from 48% in 2019 to 41% in 2022. Consistent with past results, self-reported rates of exposure to hate were significantly higher for marginalized communities: 48% among Canadians with disabilities, 50% among racialized Canadians, 58% among LGBTQ2S+ Canadians and 67% among those who have lived in Canada for less than 10 years.

Respondents were prompted to provide a recent example of online hate speech that they saw, which 45% (n=576) did.

About one-third cited examples that may not qualify as hate speech, a finding that provides important context for how survey respondents think about their exposure to online hate speech. For example, 10% cited harmful comments from politicians, 7% cited harmful comments directed at politicians, 7% cited harmful comments regarding COVID-19 restrictions or the convoy, and 5% cited harmful comments directed at celebrities.

Figure 15: Rates of Online Hate Targets

When asked if they had ever been targeted with online hate speech that deliberately promoted hatred against a group they identify with, one in ten Canadians said they had; these proportions were significantly higher among marginalized communities (Figure 15).

Overall, 4% reported having their intimate images shared online without their consent. Responses were consistent across genders, though the rate rose to 9% among those aged 16-29.

Another 8% reported having been targeted with online harassment that caused them to fear for their safety. Responses were, again, consistent across genders, but higher among younger people (14% among those aged 16-29), those with disabilities (14%) and LGBTQ2S+ Canadians (17%). In addition, 6% said they have reported someone to the police for illegal activity online.

One in four (25%) respondents said they have reported or flagged an account to an online platform for sharing illegal content — a proportion consistent with 2019 findings. In addition, 41% said they had blocked or reported an account to an online platform for being fake or automated. Of those who blocked or reported content, 39% rated the effectiveness of the process as high (7-9 on a 1-9 scale), 33% as moderate (4-6) and 28% as low (1-3). This relatively positive assessment is quite similar to the findings from 2019, with a slight increase in those rating effectiveness as low, from 23% to 28%. Those who had been personally targeted online with harassment, intimate image abuse or hate speech provided the same overall assessment, with 39% rating the reporting process as highly effective, underlining the importance of these notice-and-action mechanisms for online safety.


Canadians’ Perspectives on Platform Governance

A Role for Government

The findings from our latest survey continue to show that most Canadians are prepared for government intervention to address harmful online activity. We first asked participants to identify who is most responsible for contributing to a rise in harmful online content, and then who should be most responsible for fixing it (Figure 16). While a plurality (48%) still believe platform users are most responsible for contributing to the increase in harmful online content, a clear majority (51%) now believe that platforms should be most responsible for fixing the issue, up 16 percentage points from 2019.

Figure 16: Perspectives on Most Responsible for Contributing and Fixing Rise in Harmful Online Content

Figure 17: Canadians’ Perspectives on Platform Governance

We then asked participants to choose, from among three pairs of statements, which best described their perspective. Findings showed that approximately two out of three Canadians favour platform intervention (Figure 17); these levels have remained largely stable since 2021. A new statement was added this year to assess views on the role of government in addressing disinformation — again, two out of three indicated that the spread of disinformation is a threat to Canadian democracy and needs to be addressed by government.

Finally, we asked about support for various options to provide recourse to Canadians regarding decisions of large online platforms to remove illegal content, such as hate speech or the promotion of violence. Again, about two out of three (63%) supported a public entity: either a Digital Safety Commissioner that could audit and order platforms to remove illegal content (51%), or an independent ombudsman to help users navigate platform appeals, and investigate and make recommendations regarding platform compliance (38%). (Figures are non-cumulative, as multiple options could be selected.) About half (46%) also supported requiring online platforms to have an appeal mechanism for decisions about content or user removal. Only 5% of respondents said no action should be taken, while 11% were not sure.

Support for Action on Online Harms

While Canadians’ views on who is best positioned to lead governance in this space were somewhat mixed, their support with respect to specific policy actions on harmful content was overwhelming (Figure 18). Support levels exceeded 80% for nearly every proposed action. Opposition did not exceed 5%, except with respect to requiring warning labels on false information (7%); applying Canadian laws (8%); and allowing the government to order platforms to take certain actions, such as block or promote certain content or services, during times of crisis with risk of imminent harm, like a terrorist event or public health emergency (15%).

Figure 18: Canadians’ Support for Action on Online Harms

Support for many actions has increased significantly since we last asked the question in March 2021. Strong support for requiring platforms to quickly remove impersonation, violent content, hate speech, repeatedly shared false information and bot accounts increased by between 16 and 25 percentage points. It is worth noting that this increase was largely driven by a decrease in the proportion who were neutral on these actions in 2021, as opposition has not changed significantly. The increase in support was also driven largely by those on the left and centre of the political spectrum. Those on the right of the political spectrum had smaller overall increases in support, though a majority of them still supported each proposed action.

Views on Private Online Content

A particularly challenging element of designing platform governance is defining which services are in scope for regulatory action. The Government of Canada previously expressed that it intends to exclude from regulation “services that enable persons to engage only in private communications,” and the scope of such an exclusion was a divisive topic among the Expert Advisory Group on Online Safety. Many platforms offer both public and private communication functions, such as large groups, private pages and user-defined ‘close friend’ lists, a situation that adds complexity to the process of defining ‘private communications’.

We have also previously outlined the range of online harms Canadians experience on private messaging platforms in our report Private Messages, Public Harms and advocated that the Government of Canada follow the EU in setting minimum standards for private platforms of a significant size, such as user notice-and-action mechanisms for harmful content and transparency requirements, while not requiring private content scanning or harming encryption. We believe such standards would enable harm reduction, promote greater understanding of online harms, and mitigate the risk of an incentive for companies to create more closed platforms as a means of avoiding new content moderation obligations.

We sought to build on this work by understanding Canadians’ perspectives on the topic. We first asked respondents whether their various social media profiles were set to public or private; we believe this is the first such representative data available in Canada. The majority of respondents said that their social media accounts are set to private, with the lowest proportions reported for Twitter (Figure 19).

Figure 19: Canadians’ Privacy Settings on Social Media Accounts

We then asked half of the survey respondents which types of online spaces they thought should be required to remove illegal content like hate speech or the promotion of violence. A significant majority of 87% supported content moderation on public pages/profiles, while smaller majorities supported it for private groups (61%) and private pages/profiles (59%). Support fell below a majority (40%) for private messaging groups (Figure 20).

Figure 20: Canadians’ Support for Illegal Content Moderation on Different Platform Types

We asked the other half of respondents an open-ended question about what they thought makes an online space ‘private’ within the meaning of the Criminal Code of Canada, which makes it illegal to communicate hate speech “other than in private conversation.” Consistent with the previous question, 36% indicated that they thought no online space should be exempt from illegal content moderation. However, a majority (53%) felt that direct messaging should be considered private, with many describing it as content whose specific recipients are determined at the time of sending, distinguishing it from private groups where prior content remains accessible to newly added members. About 20% also mentioned that content accessible beyond a limited number of users should no longer be considered private, with 12% indicating that only conversations between two users should be considered private.

Conclusion

These results collectively paint a clear picture: Canadians are ready for action to reduce online harms. They believe that the reach of harmful content is growing. Many marginalized groups report being victims of hate and harassment. A growing proportion of Canadians is using social media platforms for news and using traditional media less, a shift that is associated with belief in a range of conspiratorial misinformation.

Canadians’ first instinct is that the platforms are the most responsible for fixing these issues. But trust in the platforms to do the right thing has fallen to new lows. Two-thirds of Canadians believe it is the role of government to require online platforms to act responsibly and reduce the amount of harmful content on their online spaces. A majority recognize that such an approach may have some trade-offs with the critical right to free expression, but believe that these are threats to our democracy that must be addressed. After all, being subject to algorithmically amplified hate and harassment also impinges on the right of free expression.

While there is no consensus among Canadians on the specific regulatory approach to make this happen, there is near-unanimous support for actions that would quickly remove reported illegal content, label information verified as false, and provide tools for users to fact check or search for the authenticity of online content. Support for action has increased significantly since early 2021 — perhaps the convoy protests served as a reminder that these threats carry tangible risks here at home. Canadians also understand the complexity created by the diversity of public and private online platforms, and are prepared for content moderation on private groups and profiles, but a majority expect direct messaging to be treated differently.

Canadians do not believe the status quo of online discourse is working. A public policy approach that promotes platform responsibility at a systems level, while maintaining democratic and sovereign oversight and accountability for action, is likely to meet the expectations of Canadians. Such an approach can also add to, rather than detract from, an emerging global movement of democracies undertaking similar platform governance efforts.

