
Submission on the Proposed Artificial Intelligence and Data Act

November 2023

Executive Summary


In October 2023, the Dais and the Centre for Media, Technology and Democracy organized a multi-stakeholder roundtable with over 30 participants from academia, civil society organizations, and industry. In hosting this event, we aimed to build on previous analyses of the proposed AI and Data Act (AIDA), including a report1 co-published by the Centre for Media, Technology and Democracy at McGill University, the Center for Information Technology Policy at Princeton University, and the Cybersecure Policy Exchange (now part of the Dais) at Toronto Metropolitan University, by inviting experts to deliberate on a set of proposed amendments to the AIDA.

During this roundtable, experts expressed great concern about the lack of public consultation during the drafting stages of the AIDA. Failure to meaningfully engage across sectors is a key issue that permeates the areas of concern addressed in the following report. It is our belief that the House of Commons Standing Committee on Industry and Technology (INDU) has opportunities to better engage stakeholders and the public in the AIDA amendment process, and should take this responsibility seriously. In order to move forward, we urge the government and Parliamentarians to pay special attention to how they can engage with the Canadian population more rigorously, especially members of marginalized communities, before the AIDA becomes law.

With regard to the AIDA itself, we highlight five areas of concern discussed at the roundtable that ought to be considered by INDU in its deliberations:

Area of Concern 1: Definitions in Scope 

The issue: The AIDA does not define “high-impact systems”.
Proposed amendment: Set out the factors to be used in deciding which systems are in scope, deem a minimum set of high-impact systems, and provide the ability to add others by regulation.

Area of Concern 2: Systems in Scope  

The issue: The AIDA aims to regulate only “high-impact” systems, leaving out broader harms associated with all AI systems.
Proposed amendment: Broaden how AI systems are categorized beyond “high-impact”; establish minimum transparency and accountability requirements for systems that pose “lower” levels of impact; and prohibit “unacceptable impact” AI systems.

Area of Concern 3: Institutions in Scope

The issue: The AIDA does not apply to public institutions. 
Proposed amendment: Public sector use of AI requires legislation.

Area of Concern 4: Harms in Scope

The issue: The scope of harms in AIDA is limited to individuals, excluding harms towards population groups or communities. 
Proposed amendment: Broaden the scope of harms to include the impact of harms on population groups or communities.

Area of Concern 5: Regulatory Oversight Model

The issue: The AIDA’s requirement that the ISED Minister appoint an Artificial Intelligence and Data Commissioner creates issues of regulatory independence, including a severe lack of accountability in oversight. 
Proposed amendment: Establish the AI and Data Commissioner as independent from the Minister, ideally through a parliamentary appointment, with sufficient resources and processes to support their function.

Introduction


In October 2023, the Dais and the Centre for Media, Technology and Democracy organized a one-day joint multi-stakeholder roundtable discussion with over 30 participants from academia, civil society organizations, and industry. Our goal was to build on our previous analysis2 of the proposed AI and Data Act (AIDA) by inviting experts to respond to and engage with our proposed recommendations. Discussions from the roundtable have informed the contents of this submission.

While the regulation of AI is a pressing and important issue, the AIDA came as a surprise to the broader policy community. The lack of public consultation prior to its introduction, in addition to the limited deliberative opportunities since, have made it difficult for civil society stakeholders, scholars, subject matter experts, and equity-deserving communities to engage with and propose improvements to the legislation. In contrast, the legislative process undertaken for the proposed AI Act in Europe included many channels for deliberation that rendered the drafting process significantly more transparent than the AIDA’s closed-door development in Canada. In fact, this compromised legislative process led 45 of Canada’s leading civil society organizations and experts to sign an open letter calling on Minister Champagne to separate the AIDA from Bill C-27 in order to give it the attention necessary to make improvements.3

During our roundtable discussions, participants expressed great concern about the lack of public consultation during the drafting stages of the AIDA. Failure to engage across sectors is a key issue that permeates all areas of concern addressed in the following report. Many participants reiterated their agreement with recommendations from the ISED Public Awareness Working Group, citing the need for public consultation, especially given that such venues provide necessary constructive deliberation to address the problems highlighted herein.4 It is clear not just from this roundtable, but also from a series of other notable public calls by civil society actors, academics, and industry leaders, that Canadians want to be engaged in the regulatory process for AI governance in the country, believe it is an important area of the digital economy that needs to be appropriately legislated, and find it necessary to work together to support the drafting of a stronger version of the AIDA. The committee has an opportunity to engage the public more meaningfully in this process. In order to move forward, we urge the government and Parliamentarians to pay special attention to how they can engage with the Canadian population more rigorously, especially members of marginalized communities, before the AIDA becomes law. We also encourage an agile and consultative approach to future regulation and legislative reviews.

If the AIDA moves forward despite the inadequate public engagement undertaken throughout its drafting, we outline below five areas of concern with the proposed Act, informed by our roundtable discussions, that we hope can be addressed.

Definitions in Scope

ISSUE: 

The AIDA does not define “high-impact systems.”

PROPOSED AMENDMENT: 

Set out the factors to be used in deciding which systems are in scope, deem a minimum set of high-impact systems, and provide the ability to add others by regulation.

The Act uses the term “high-impact systems” to describe the category of AI technologies that it aims to regulate. By applying its requirements only to “high-impact systems,” the AIDA implicitly creates a hierarchy of systems based on impact, consequently leaving out other AI systems capable of causing harm. Moreover, the Act does not provide a definition of high-impact systems, nor does it provide factors for determining the hierarchy, instead leaving it to regulation to determine what those systems are.

In our first report, we noted that both the definition of AI and the lack of clarity regarding high-impact systems were key problems with the AIDA. These concerns were also echoed by scholars and civil society organizations in their analyses of the AIDA.5 While we now have greater clarity on the government’s direction as a result of the Minister’s letter to INDU, concerns about the exclusionary scope of the AIDA remain.

The proposed change to the definition of AI and its increased consistency with the EU’s approach were welcomed by participants, particularly the choice to define AI by its applications rather than in aspirational terms. In addition, the proposed change to deem systems in scope rather than have businesses self-evaluate whether their AI systems are “high impact” is a step in the right direction. However, during the roundtable discussions, participants also expressed numerous concerns about this proposed framework.

With regard to the proposed classes of systems, there were concerns that the level of specificity may be an over-correction that excludes several important classes, such as AI systems used in financial services or immigration. Furthermore, the high-impact approach omits how the design and development of AI systems may generate various types of harms.6 For instance, to build facial recognition systems with sufficiently high accuracy rates for market use, companies have to develop training datasets with millions of images. When Facebook developed its own system, DeepFace, the platform used 4 million images from 4,000 users without seeking their consent. These privacy violations led to a $5 billion penalty from the US Federal Trade Commission (FTC).7 As another example, there are a growing number of reports concerning the working conditions of those developing AI systems.8 Content moderators who flag violence, child abuse, and other explicit content online, whether for social media platforms or to train automated systems, suffer from anxiety, depression and post-traumatic stress disorder due to their exposure to horrific content.9 While the AIDA prohibits the use of illegally obtained personal information for the development of AI systems, harms beyond privacy violations are currently not within the scope of the Act and must be taken into account.10

Furthermore, there were also specific concerns about the proposed classes of systems. For example, the Minister’s letter suggested that AI systems’ processing of biometric data to identify “an individual’s behavior or state of mind” would be considered high-impact.11 Researchers have debunked such practices as pseudoscience that can reinforce systemic forms of discrimination such as racism and sexism.12 In light of the absence of specific provisions relating to the protection of biometric data in the CPPA, this opens the door to harm from systems that should be prohibited. Under the proposed framework, the burden of proof for harm would be placed on the individuals discriminated against by these systems, which would prove difficult if they were subjected to these systems without their awareness and/or if they lack the resources necessary to object.

In light of the AIDA setting a hierarchy of systems based on impact, we propose that the AIDA be amended to at least set out the factors that must be used to decide which systems are in scope, such as the extent to which risks or harms are unaddressed by existing regulatory functions, and to deem a minimum set of high-impact systems, with the ability to add others by regulation.


Systems in Scope     

ISSUE:

The AIDA aims to regulate only “high-impact” systems, leaving out broader harms associated with all AI systems.

PROPOSED AMENDMENT: 

Broaden how AI systems are categorized beyond only “high-impact”; establish minimum transparency and accountability requirements for systems that pose “lower” levels of impact; and prohibit “unacceptable impact” AI systems.

The AIDA’s regulations apply mainly to one category of AI systems – those deemed “high-impact.” However, this leaves other types of AI systems that do not fall under this category outside the scope of regulation. This approach is at odds with the EU’s AI Act, which aims to regulate all types of AI systems. The EU’s AI Act introduces four categories of AI systems:

  1. Those that pose unacceptable risk, which the legislation bans (e.g., systems used for manipulation through subliminal techniques to cause harm; social scoring by public authorities; and real-time remote biometric identification systems in publicly accessible spaces).
  2. Those that pose a high risk, which are subject to various requirements before being put on the market (e.g., systems used for biometric identification, education and training, and legal interpretation).
  3. Those with limited risk, which are intended to interact with natural persons or that generate ‘deep fake’ images or videos, and which are required to be designed in such a way that users are informed they are interacting with AI, unless this is obvious from the circumstances and the context of use.
  4. Those with minimal risk, which are not subject to new requirements.

During roundtable discussions, participants expressed concerns about limiting the scope of harms to “high-impact systems” only, citing that harms and risks are present in all types of AI systems. The context in which systems are deployed matters and factors into the level of risk present. There will also inevitably be a lag between the development of new systems and the evaluation of their harms. To address this, broadening the AIDA’s scope should include the establishment of minimum transparency and accountability requirements for systems that pose “lower” levels of impact, while being mindful of overly onerous reporting requirements, particularly for small and medium businesses.

Furthermore, akin to the EU AI Act’s ban on systems that pose unacceptable risk, we propose that the AIDA include explicit prohibitions on the design, development, and use of systems that pose unacceptable risks to individuals and communities. This may include developing factors that help identify systems that “exploit vulnerable groups based on their age (such as children) or physical and mental disabilities, as well as systems that are used by public authorities for social scoring purposes that lead to detrimental or unfavourable treatment that is unjustified.”13 This would strengthen the AIDA’s purpose “to prohibit certain conduct in relation to artificial intelligence systems that may result in serious harm to individuals or harm to their interests”.14

Institutions in Scope    

ISSUE: 

The AIDA does not apply to public institutions. 

PROPOSED AMENDMENT: 

Public sector use of AI requires legislation.

The AIDA’s proposed application to the private sector is based on the federal trade and commerce power. The law would not apply to the use of systems by federal departments and Crown corporations, or to those under the direction of the Department of National Defence (DND), the Canadian Security Intelligence Service (CSIS), the Communications Security Establishment (CSE), or any other federal or provincial department or agency prescribed by regulation. As a result, many actor categories are exempt, including the use of AI systems by police, immigration, and security actors, despite law enforcement and the work of state actors becoming increasingly data-driven, relying on AI systems to identify people or places based on perceived levels of threat.15

The federal government has attempted to provide some direction to public actors by requiring certain federal government institutions that use AI systems to follow the Directive on Automated Decision-Making.16 However, the Directive contains gaps, including its inapplicability to internal activities of the government (e.g., AI in hiring), and its unenforceability under law.17 It is therefore ill-equipped to address the public safety and human rights risks inherent to AI systems and creates inconsistencies in the development and deployment of systems by public and private actors. 

Minister Champagne’s proposed list of high-impact systems includes a number of systems currently in use by state actors, including the use of biometric technologies by the RCMP18 and the CBSA,19 and the use of AI-driven hiring services by the DND.20 As a result, the exclusion of public sector institutions from the AIDA creates regulatory gaps and sets a double standard. While the public and private sectors have historically been regulated separately, as exemplified by our country’s privacy legislation, this does not mean that government use of AI should be exempt from accountability and scrutiny. In the absence of legislation applying to government use of AI systems, we are opening the door to human rights violations by public actors.

Unlike the EU’s approach, the AIDA fails to position the Canadian government as leading by example through responsible legal bans and guardrails for its own development and use of AI. The current structure of the bill, including its Commissioner being an ISED department official, makes it ill-suited to provide oversight of public sector AI deployment. While we acknowledge the AIDA’s private sector scope, we urge Parliament to understand the importance of developing AI legislation applicable to both the public sector and political parties, with adequate public consultation and engagement.

Harms in Scope     

ISSUE:

The scope of harms in the AIDA is limited to individuals, excluding harms towards population groups or communities. 

PROPOSED AMENDMENT: 

Broaden the scope of harms to include the impact of harms on population groups or communities.

The AIDA’s definition of harm is limited to individuals while failing to adequately capture the impact of harms caused by AI systems toward population groups or communities. Further, the AIDA’s focus on “high-impact systems” leaves a potential gap in regulating collective harms caused by all AI systems. 

The AIDA describes its intent as protecting Canadians from “biased outputs” and “harm.” It provides that high-impact systems must identify, assess, and mitigate risks of (a) harm, or (b) biased output on grounds prohibited in the Canadian Human Rights Act. Both “biased outputs” and “harm” are described in the AIDA as being limited to individuals. The Act also defines “harm” narrowly, focusing specifically on “physical or psychological harm to an individual; damage to an individual’s property; or economic loss to an individual.”21

Yet, harms of AI systems can occur at broader group and community levels. Depending on the context of the system in question, harm to individuals can also be difficult to prove, and only evident when assessed at a population level (e.g., the racial profiling of racialized groups, or the political profiling seen in the Cambridge Analytica scandal). Moreover, other types of collective harms produced by manipulative and exploitative AI systems would likely fall outside the scope of regulation, including election interference or collective harms to children or persons with disabilities. The AIDA could be improved if it were to mirror the federal government’s Directive on Automated Decision-Making, which includes considerations of broader risks towards “individuals or communities.”22

During roundtable discussions, participants noted that other significant harms of AI that affect individuals and communities may go unnoticed, including: 

  • Workplace harms of AI: algorithmic wage theft, harassment, and unhealthy work conditions. 
  • Environmental harms of AI: resource-intensive data centres needed to power algorithmic systems, and the use of AI to create and disseminate climate disinformation. 
  • Harms of AI-created and disseminated disinformation: deepfakes, and false and misleading content. 
  • Harms of AI beyond Canadian borders: inhumane working conditions in data mining and content moderation. 

Participants noted that while determining the harms of AI at a group or community level may be difficult depending on the system and context in question, it is nonetheless critical because of the clear historical evidence of collective harms caused by AI systems. 

Additionally, the focus on biased outputs frames harm narrowly as a matter of computational limitations. Legislative and regulatory interventions should also aim to prevent harms through non-technical means, since de-biasing systems can be an insufficient measure (e.g., experts have not reached consensus on how to de-bias a wide variety of systems).

Participants suggested that infusing more rights-based language into the AIDA may help to capture these types of collective harms. This is seen to some extent in the EU AI Act, which refers to high-risk systems in terms of “harm to the health and safety or a risk of adverse impact on fundamental rights.”23 

Regulatory Oversight Model     

ISSUE: 

The AIDA’s requirement that the ISED Minister appoint an Artificial Intelligence and Data Commissioner creates issues of regulatory independence and lack of oversight. 

PROPOSED AMENDMENT: 

Establish the AI and Data Commissioner as independent from the Minister, ideally through a parliamentary appointment, with sufficient resources and processes to support their function.

In our initial report, we noted that the proposed regulatory framework in the AIDA creates potential issues related to independence of the AIDA Commissioner. Similar concerns were expressed throughout the roundtable discussions, including the need for regulatory independence, clarification of the nature of the role of the regulator, and the importance of large-scale capacity building and cross-regulatory collaboration to support the Commissioner’s responsibilities.

The AIDA proposes to establish an Artificial Intelligence and Data Commissioner to assist the Minister with administration and enforcement powers. This senior public servant role is designated by the Minister. ISED has defended this approach, citing AI as a rapidly evolving area requiring policy development and administration to work in close collaboration.24

However, the Commissioner’s powers are afforded to them directly by the ISED Minister, who has the competing roles of championing the economic benefits of AI while regulating its risks. This could make it difficult for the Commissioner to be critical in their policy interventions, responding instead to the needs and interests of the Minister. Roundtable participants echoed these concerns around the lack of independence, with one describing it as “the most glaring travesty” of the AIDA, as it contrasts with the OECD’s guidance on regulatory independence and with the Companion Document.25 Some participants thus expressed the need for an independent regulatory body responsible for auditing and regulating, while others suggested merging this designated role into the existing Privacy Commissioner’s responsibilities to take advantage of existing expertise and infrastructure.

Capacity building was also identified as necessary to support the execution of the Commissioner’s responsibilities, which were said to exceed the capacity of a single individual and to require a broader diffusion of responsibilities for efficient implementation. Executing the AIDA with a horizontal distribution of powers between multiple departments, both in the process of regulatory drafting and in the implementation of responsibilities, would ensure built-in accountability and input from other sectoral regulators and ministries. It is imperative that the legislation foster cooperation between different regulators in order to adapt to the evolving and broadening scope of AI systems.

The AIDA’s regulatory approach could be strengthened to address these oversight deficiencies. We propose appointing a fully independent regulatory commissioner, ideally through a parliamentary appointment, or, failing that, through a Governor in Council (GIC) appointment. In either case, the separation of administration and enforcement from direct government control would foster more impartial decision-making for both system developers and those affected by them, while allowing the government to continue to develop policy through legislation and regulations. This decision would be a foundational component in an overall strategy to establish a more arms-length regulatory model. Furthermore, this independent commissioner would need an office appropriately resourced with the policy and technical expertise needed to keep up with the fast-paced evolution of AI. 

Legislated Processes Used to Federally Appoint a Regulator


  • Parliamentary appointments: Require consultation with all party leaders and then approval of the House of Commons and/or the Senate; appointees are accountable to Parliament rather than the government (e.g., Auditor General, Privacy Commissioner).
  • Governor in Council (GIC) appointments: Require the responsible minister to make a recommendation to Cabinet for approval after an open selection process, followed by formal appointment by the Governor General (e.g., Competition Commissioner, CRTC Chair, Standards Council CEO).
  • Ministerial appointments: Require the approval of the responsible minister (e.g., Director responsible for the Investment Canada Act).

The AIDA currently does not propose a complaint process for individuals or groups. Rather, the Minister must have “reasonable grounds” to investigate an organization, while the Act remains silent on how these grounds would be established. We propose that the ability for individuals or groups to make complaints to an independent AI and Data Commissioner be specifically included, as well as the ability for the Commissioner to conduct pre-emptive audits. Roundtable participants also shared the need to strengthen whistleblower protections. In order to keep institutions and appointed leaders accountable, workers should be protected from any potential repercussions if they need to disclose sensitive and incriminating information. Strengthening public servant and whistleblower protections would create another form of internal accountability for the proper execution of the Commissioner’s responsibilities.

Finally, the AIDA provides the ability to require that an audit be conducted should there be reason to believe that contraventions of the law have occurred. The audit can be performed internally by the company in question, or by a third party the company hires, at the company’s own expense. As such, the companies subject to oversight would effectively administer it themselves. This is problematic, as research shows that the quality of regulatory audits is poor when the auditee selects and compensates the auditor.26 Further, allowing companies to choose their auditors in the context of AIDA enforcement opens the door to conflicts of interest, cronyism, and corruption. AI auditing is not yet a professionally codified process, nor is it clear what a professional approach should contain or even which discipline(s) should oversee it (e.g., computer science, engineering, statistics, actuarial science). Roundtable participants stressed the importance of supporting regulatory audits with robust standards development.

Roundtable participants who consented to having their names included:


Abu Kamat, Council of Canadian Innovators
Andrew Clement, University of Toronto
Bianca Wylie, Digital Public & Tech Reset Canada
Brenda McPhail, McMaster University
Daniel Konikoff, Canadian Civil Liberties Association
Jake Hirsch-Allen, Lighthouse Labs, Mission Impact Academic, Readocracy
Jon Penney, Osgoode Hall Law, Citizen Lab, BKX Harvard, CCRI Advisory Board
Matt Hatfield, OpenMedia
Matthew Mendelsohn, Toronto Metropolitan University
Paul Samson, Centre for International Governance Innovation
Renjie Butalid, Montreal AI Ethics Institute
Rob Davidson, Information and Communications Technology Council
Sarah Gagnon-Turcotte, Conseil de l’innovation du Québec
Vicky Hailey, VHG
Wendy Chun, Simon Fraser University

André Côté, the Dais at Toronto Metropolitan University
Angus Lockhart, the Dais at Toronto Metropolitan University
Christelle Tessono, the Dais at Toronto Metropolitan University
Helen Hayes, Centre for Media, Technology and Democracy at McGill University
Joe Masoodi, the Dais at Toronto Metropolitan University
Julian Lam, Centre for Media, Technology and Democracy at McGill University
Katie Gibson, the Dais at Toronto Metropolitan University
Marium Hamid, the Dais at Toronto Metropolitan University
Mark Hazelden, the Dais at Toronto Metropolitan University
Nina Solomun, Centre for Media, Technology and Democracy at McGill University
Phaedra de Saint-Rome, Centre for Media, Technology and Democracy at McGill University
Sam Andrey, the Dais at Toronto Metropolitan University
Sonja Solomun, Centre for Media, Technology and Democracy at McGill University
Tiffany Kwok, the Dais at Toronto Metropolitan University

The varied perspectives of roundtable participants greatly informed this submission. Agreeing to be listed as a participant is not an endorsement of the contents of this report. The statements and recommendations are the sole responsibility of the Dais and the Centre for Media, Technology and Democracy.

1

Tessono, Christelle, Yuan Stevens, Momin M. Malik, Sonja Solomun, Supriya Dwivedi, and Sam Andrey. AI Oversight, Accountability and Protecting Human Rights. Cybersecure Policy Exchange, 2022. https://www.cybersecurepolicy.ca/aida.

2

Tessono, Christelle, Yuan Stevens, Momin M. Malik, Sonja Solomun, Supriya Dwivedi, and Sam Andrey. AI Oversight, Accountability and Protecting Human Rights. Cybersecure Policy Exchange, 2022. https://www.cybersecurepolicy.ca/aida.

3

International Civil Liberties Monitoring Group. “Canadians deserve to be protected from AI overreach, but Bill C-27’s Artificial Intelligence and Data Act is not up to the task.” September 25, 2023. https://iclmg.ca/aida-not-up-to-task/.

4

Learning Together for Responsible Artificial Intelligence: Report of the Public Awareness Working Group. Ottawa: Innovation, Science and Economic Development Canada, 2022.

5

See Women’s Legal Education and Action Fund. “Submission to the House of Commons Standing Committee on Industry & Technology.” 2023. https://www.ourcommons.ca/Content/Committee/441/INDU/Brief/BR12579508/br-external/WomensLegalEducationAndActionFund-e.pdf; Brandusescu, Ana, and Renée Sieber. “Canada’s Artificial Intelligence and Data Act: A missed opportunity for shared prosperity.” 2023. https://www.ourcommons.ca/Content/Committee/441/INDU/Brief/BR12636987/br-external/Jointly4-e.pdf; Bailey, Jane, Jacquelyn Burkell, and Brenda McPhail. “Submissions on Bill C-27 The Digital Charter Implementation Act.” 2023. https://www.ourcommons.ca/Content/Committee/441/INDU/Brief/BR12605252/br-external/Jointly3-e.pdf; International Civil Liberties Monitoring Group. “Brief on Bill C-27.” 2023. https://www.ourcommons.ca/Content/Committee/441/INDU/Brief/BR12598706/br-external/InternationalCivilLibertiesMonitoringGroup-e.pdf; Attard-Frost, Blair. “Generative AI Systems: Impacts on Artists & Creators and Related Gaps in the Artificial Intelligence and Data Act.” 2023. https://www.ourcommons.ca/Content/Committee/441/INDU/Brief/BR12541028/br-external/AttardFrostBlair-e.pdf.

6

For discussion on intellectual property theft from generative AI systems, see Attard-Frost, Blair. “Generative AI Systems: Impacts on Artists & Creators and Related Gaps in the Artificial Intelligence and Data Act”. 2023. https://www.ourcommons.ca/Content/Committee/441/INDU/Brief/BR12541028/br-external/AttardFrostBlair-e.pdf.

7

Fair, Lesley. “FTC’s $5 Billion Facebook Settlement: Record-Breaking and History-Making.” Federal Trade Commission, July 24, 2019. https://www.ftc.gov/business-guidance/blog/2019/07/ftcs-5-billion-facebook-settlement-record-breaking-and-history-making.

8

See Gray, Mary L., and Siddharth Suri. Ghost Work: How to Stop Silicon Valley from Building a New Global Underclass. Boston: Houghton Mifflin Harcourt, 2019; Li, Hanlin, Nicholas Vincent, Stevie Chancellor, and Brent Hecht. “The Dimensions of Data Labor: A Road Map for Researchers, Activists, and Policymakers to Empower Data Producers,” 1151–61, 2023; Newton, Casey. “The Secret Lives of Facebook Moderators in America.” The Verge, February 25, 2019. https://www.theverge.com/2019/2/25/18229714/cognizant-facebook-content-moderator-interviews-trauma-working-conditions-arizona.

9

Williams, Adrienne, Milagros Miceli, and Timnit Gebru. “The Exploited Labor Behind Artificial Intelligence.” Noema Magazine, 2022. https://www.noemamag.com/the-exploited-labor-behind-artificial-intelligence/.

10

The CPPA would apply here as well; with regard to the AIDA, see s. 38.

12

Stark, Luke and Jevan Hutson. “Physiognomic Artificial Intelligence”, Fordham Intell. Prop. Media & Ent. L.J. 32, no. 4 (2022). https://ir.lawnet.fordham.edu/iplj/vol32/iss4/2.

13

Tessono, Christelle, Yuan Stevens, Momin M. Malik, Sonja Solomun, Supriya Dwivedi, and Sam Andrey. AI Oversight, Accountability and Protecting Human Rights. Cybersecure Policy Exchange, 2022. https://www.cybersecurepolicy.ca/aida, p. 17.

14

AIDA, s.4(b)

15

Lyon, David and David Murakami Wood. Big Data Surveillance and Security Intelligence. UBC Press, 2020. https://www.ubcpress.ca/big-data-surveillance-and-security-intelligence.

16

Directive on Automated Decision-Making, 2019. https://www.tbs-sct.canada.ca/pol/doc-eng.aspx?id=32592.

17

Scassa, Teresa. “Comments on the Third Review of Canada’s Directive on Automated Decision-Making.” Teresa Scassa, May 17, 2022.  https://www.teresascassa.ca/index.php?option=com_k2&view=item&id=354.

18

Office of the Privacy Commissioner of Canada. “PIPEDA Findings #2021-001: Joint Investigation of Clearview AI, Inc. by the Office of the Privacy Commissioner of Canada, the Commission d’accès à l’information Du Québec, the Information and Privacy Commissioner for British Columbia, and the Information Privacy Commissioner of Alberta,” February 2, 2021. https://www.priv.gc.ca/en/opc-actions-and-decisions/investigations/investigations-into-businesses/2021/pipeda-2021-001/.

19

Public Safety Canada. “Facial Verification at the Border,” January 19, 2022. https://www.publicsafety.gc.ca/cnt/trnsprnc/brfng-mtrls/prlmntry-bndrs/20211015/16-en.aspx.

20

Cardoso, Tom. “National Defence Skirted Federal Rules in Using Artificial Intelligence, Privacy Commissioner Says.” The Globe and Mail, February 7, 2021. https://www.theglobeandmail.com/canada/article-national-defence-skirted-federal-rules-in-using-artificial/.

21

Artificial Intelligence and Data Act (AIDA) s.5 in Bill C-27, An Act to enact the Consumer Privacy Protection Act, the Personal Information and Data Protection Tribunal Act and the Artificial Intelligence and Data Act and to make consequential and related amendments to other Acts, 1 sess., 44th Parliament, 2022.

22

Directive on Automated Decision-Making, 2019. https://www.tbs-sct.canada.ca/pol/doc-eng.aspx?id=32592.

23

Accessible Law. “Article 7: Amendments to Annex III.” https://artificialintelligenceact.com/title-iii/chapter-1/article-7/

24

Innovation, Science and Economic Development Canada. “The Artificial Intelligence and Data Act (AIDA) – Companion Document.” March 13, 2023. https://ised-isde.canada.ca/site/innovation-better-canada/en/artificial-intelligence-and-data-act-aida-companion-document.

25

OECD. “Creating a Culture of Independence: Practical Guidance Against Undue Influence.” https://www.oecd.org/gov/regulatory-policy/Culture-of-Independence-Eng-web.pdf.

26

Raji, Inioluwa Deborah, Peggy Xu, Colleen Honigsberg, and Daniel Ho. “Outsider Oversight: Designing a Third Party Audit Ecosystem for AI Governance.” In Proceedings of the 2022 AAAI/ACM Conference on AI, Ethics, and Society, 557–71. Oxford United Kingdom: ACM, 2022. https://doi.org/10.1145/3514094.3534181.