Submission to the Consultation on Canada’s Renewed AI Strategy

In 2017, Canada became the first country in the world to launch a national artificial intelligence (AI) strategy. The Pan-Canadian AI Strategy cemented Canada’s place at the forefront of the generative AI revolution. Yet eight years later, Canada’s leadership in AI research has not translated into broad-based innovation or productivity gains across the economy. AI adoption remains limited outside of large firms, competition for top talent remains fierce, and public trust in AI systems and their governance is low.

The next phase of Canada’s AI strategy must move beyond short-term market competitiveness toward long-term national resilience. That requires building transparent and trusted institutions, developing human capital, and investing in reliable infrastructure and an innovation ecosystem that enables homegrown AI champions to flourish. It requires international collaboration with values-aligned democracies, and thinking beyond the present hype cycle toward long-term investments, such as in energy.

This submission from the Dais, informed by over 25 reports on AI adoption, skills, compute, and governance, outlines a roadmap for a renewed national AI strategy, centred on three interrelated pillars: responsible AI adoption built on trust and effective governance; talent and skills to create and use AI; and building and securing Canadian AI sovereignty.


AI adoption across Canada’s economy remains shallow and uneven. The vast majority of small and medium enterprises (SMEs), and most public sector and nonprofit organizations, lack the knowledge, resources, or incentives to effectively integrate AI into their operations. 

At the same time, adoption at any cost should not be the goal. A forward-looking AI strategy must focus on responsible adoption by creating governance frameworks that are transparent, accountable, and aligned with values that build public trust.

To diffuse the benefits of AI widely across Canada’s economy, the government should:

  1. Develop and promote AI adoption programming to leaders of organizations with low AI adoption to address knowledge gaps. 
  2. Prioritize responsible AI adoption efforts in Canada’s nonprofit sector, with targeted investment support to build and scale initiatives. 
  3. Establish data and analysis infrastructure to generate economy-wide AI intelligence on AI adoption, productivity, workforce and other key trends.
  4. Launch firm-level research to study, evaluate and share AI implementation best practices and productivity-enhancing applications.
  5. Continue to address core digital modernization challenges in the public sector, and build clear, public AI strategies to transparently set goals and expectations. 
  6. Task existing government institutions with assessing how their respective regulatory regimes can be applied to AI, and identify gaps.
  7. Address overlapping categories of harms through related legislative proposals for privacy and online safety, with a focus on immediate challenges like deepfakes, child and youth protection, and limiting the spread of disinformation. 
  8. Empower new and existing regulatory bodies to oversee AI-related harms and risks under new and existing laws.
  9. Prioritize coordination of Canada’s AI governance policies and mechanisms under international agreements with values-aligned allies. 
  10. Support continued development of AI technical standards in Canada and internationally.
  11. Institutionalize participatory governance as part of the new national AI strategy and its activities.

Amid fierce global competition, Canada has struggled to attract and retain top talent. Meanwhile, the broader non-technical workforce lacks the digital literacy it needs to confidently, and productively, work with AI. A renewed AI strategy must approach talent and skills as critical national infrastructure.

To build and bolster an AI-ready workforce, the government should:

  1. Grow Canada’s AI research and talent, with a focus on increased women’s participation and redressing compensation gaps.
  2. Advance equity to expand Canada’s talent pool through sustained funding for Women and Gender Equality Canada to tackle barriers to participation. 
  3. Establish capacity for generating and sharing ongoing labour market intelligence about AI’s impacts.
  4. Support further research and analysis on evolving skills and competency requirements for Canadians, in preparation for continued AI diffusion and use.
  5. Commit to developing an AI literacy strategy for Canadians at all ages and stages, with supporting investment.
  6. Support the development of a Pan-Canadian AI Literacy Framework for K-12 education.

Limited domestic compute capacity and heavy reliance on foreign cloud providers threaten Canada’s competitiveness and sovereignty. While isolationism is not, and cannot be, the goal, advancing digital sovereignty will ensure that AI development aligns with Canada’s national interests, environmental goals, and democratic values.

To protect Canada’s competitive edge, the government must:

  1. Develop and codify a formal definition of digital and AI sovereignty to inform the AI strategy.
  2. Focus resources to build sovereign compute capacity for Canada’s highest strategic-priority applications.
  3. Use the Major Projects Office to advance the strategy, through nation-building AI and digital infrastructure projects, and sustainable energy projects.
  4. Use Public AI or AI Commons as potential models for domestic infrastructure and international collaboration.
  5. Launch a Canadian AI grand challenge that clearly articulates areas where the government believes that Canadian-made AI solutions must exist.
  6. Launch matched private-public investment funds that invest in Canadian companies, at all stages, that are building solutions in critical AI usage areas.
  7. Expand shared compute and data resources to build commercial solutions that tackle AI grand challenges.

These recommendations present a balanced vision for AI in Canada, one that drives innovation while upholding the values of accountability, equity, and shared prosperity. By embedding transparency, inclusion, and security at the core of its AI strategy, Canada can not only accelerate technological progress, but also shape a future in which AI strengthens democratic institutions, expands opportunity, and serves the collective good for generations to come.

Introduction


The 2017 Pan-Canadian AI Strategy, the first national AI strategy in the world, outlined goals and introduced investments for growing Canada’s capacity in foundational AI research, and positioned Canadian talent at the forefront of the generative AI revolution. On these initial aims, it has largely been a success. 

Yet, Canada’s AI ambitions have stalled in other areas. First, in building broader public trust in the technology. Second, in retaining AI talent in Canada and creating homegrown AI scale-ups with global reach. Third, in diffusing AI to businesses and institutions across the broader economy in a productivity-enhancing way. Fourth, in ensuring our priorities shape a path of AI innovation that benefits society through shared prosperity.

This year, with the appointment of Canada’s first Minister for AI and Digital Innovation, and at a moment of unprecedented interest and investment in AI globally, Canada has the opportunity to introduce a renewed AI strategy that outlines a broader set of objectives that reflect national values and interests, and a wider array of Canadian viewpoints. This consultation, and the parallel task force process, are important inputs to this strategy.

A bold, forward-looking national AI strategy must look beyond short-term hype cycles and market gains, and instead focus on building up economic, educational, and governance institutions on a sustainable and long-term path. The goal should not be AI adoption at all costs, but to empower Canadians to actively build and safely use trustworthy AI; give innovators confidence to grow AI businesses in a healthy innovation ecosystem; secure AI compute infrastructure, to power innovation and adoption while withstanding economic shocks; address emerging harms from AI that are a clear and present danger to people, institutions and democracy; and assert and advance Canada’s sovereign interests in response to a new economic and national security reality.

The following submission leverages over 25 reports that the Dais has published over the last five years to contribute insights and recommendations for shaping this new strategy. Mapped to seven of the government’s eight priorities outlined in the consultation, we structure our submission around three pillars: responsible AI adoption built on trust and effective governance; talent and skills to create and use AI; and building and securing Canadian AI sovereignty.

Responsible AI Adoption Built on Trust and Effective Governance


AI deployment requires high upfront costs, from investments in tools and applications, to computing capacity, and worker training and reskilling. Alongside firm-level uncertainty about AI’s benefits for business processes and productivity, achieving broad adoption depends on addressing common organizational barriers and capacity challenges. It also requires efforts to build public and workforce trust that AI will be developed and deployed responsibly. 

In this section, we describe the state of AI adoption in the Canadian economy, and outline recommendations for the strategy to advance adoption. We then outline why public trust must be central to Canada’s AI agenda, and propose mechanisms for trust-building through AI governance.


THEME 1

Private sector adoption

Despite the hype about AI, Canada’s private sector has been slow in adopting the technology. By 2021, our research, Automation Nation, found that just under 4 percent of Canadian firms had incorporated AI into at least one business function. By 2023, after the launch of ChatGPT and the mainstreaming of generative AI, this rate had only increased to nearly 7 percent. Importantly, adoption is highly concentrated in large companies (25.9 percent compared to 5.9 percent in small firms in 2023) and uneven across industries: information and cultural industries have the highest rate of adoption (25.6 percent), followed by professional and technical services (14.7 percent), and finance and insurance (10 percent). 

The productivity impacts of firm adoption have also been muted to date. Our research, Waiting for Takeoff, analyzed firm-level AI adoption in the 2020 to 2022 period (preceding the mainstreaming of generative AI) and found that businesses that had adopted AI did not experience short-term productivity gains. More recent work focusing on the productivity impact of generative AI reached the same conclusion.

Our research identifies two key adoption barriers. First, the vast majority of AI non-adopter firms cite not having any use for the technology as the primary factor for non-adoption. Second, companies that offer ICT training (to both technical and non-technical staff) are much more likely to adopt AI. We thus believe that knowledge gaps are the key barrier to adoption. AI is not adopted because key decision makers (and line employees) have insufficient levels of AI literacy to navigate AI integration processes, which can then hinder uptake within a company’s broader digital culture, as shown in our report Picking Up Speed.

Finally, while much discussion on AI adoption focuses on for-profit companies, the absence of focus on Canada’s nonprofit sector needs to be addressed in the new strategy. Canada’s nonprofit sector, representing over 170,000 organizations, is critical to the country’s economy and social fabric. In a sector that is often under-resourced and faces digital skills and technology gaps, AI offers significant potential, as outlined in our reports Canada’s Nonprofit Tech Workforce and The Demand for Digital Skills in Canadaʼs Nonprofit Sector. Pilot initiatives offering bespoke training, frameworks and support, like the Responsible AI Adoption for Social Impact (RAISE) program delivered by the Dais with partners and support from DIGITAL, offer great scale-up potential.

Public sector adoption

The government of Canada has prioritized public sector AI adoption to improve operations and achieve spending targets through efficiencies. The G7 statement on AI specifically mentioned the launch of the Rapid Solutions Lab to “develop innovative and scalable solutions to the barriers we face in adopting AI in the public sector.”

Conversations on AI adoption in the public sector cannot occur outside of broader conversations on the government’s digital maturity. Once a leader in the United Nations’ E-Government rankings, Canada’s ranking tumbled to 47th place by 2024. Our study of digital transformation in the federal government, Byte-Sized Progress, found persistent culture and talent barriers to incorporating new technologies.

Therefore, we believe the new AI strategy must build upon existing digital government efforts and directives on AI use in government, including the AI Strategy for the Federal Public Service, a recently-updated Directive on Automated Decision-Making for government use, and Digital Competencies for Public Servants, and focus on effectively addressing AI within these directives.

Our recent study Adoption Ready? found a much higher concentration of federal public sector workers in highly AI-exposed occupations, composed of tasks that can be automated, than across the general Canadian workforce (59% compared to 29%). We don’t believe that 59% of federal jobs can or should be automated, but it highlights the potential scale of impact that decisions around AI adoption in government may have. 

AI adoption may present opportunities, but also the potential for disruption to public servants and government services that necessitates discussions on workforce and operational planning. There have already been examples of the risks that flawed or rushed AI implementations present. In our report, we outline a series of action items for effective and responsible public sector deployment, including publishing clear, plain-language strategies; equipping workers with AI tools, training and governance frameworks; introducing low-risk applications that pose smaller threats to workers; rolling out job classification realignments to reflect changing jobs; and longer-term workforce planning, undertaken with workers and labour unions.

In sum, the renewed AI strategy must tackle the issue of adoption from a three-pronged approach: helping non-adopters identify useful applications and trial AI use responsibly; assisting advanced adopters to optimize for longer-term productivity benefits; and supporting workers (and other impacted groups) through the disruption and transition that AI will bring. This should be coupled with longitudinal efforts to track AI intelligence around macro adoption trends, micro company-level best practices, productivity outcomes, workforce impacts and disruptions, and implications for policymaking. At the organization level, this should generate a body of evidence that documents both the modalities and factors associated with productivity-enhancing AI implementation.

Recommendations

  1. Develop and promote AI adoption programming to leaders of organizations with low AI adoption to address knowledge gaps. This could involve scaling and adapting initiatives such as the Regional Artificial Intelligence Initiative (RAII) and AI Assist Program, and expanding programs that promote applied research projects between colleges and/or polytechnics and SMEs on topics related to adoption. 
  2. Prioritize responsible AI adoption efforts in Canada’s nonprofit sector, with targeted investment support to scale and build on early sector-wide initiatives.
  3. Establish data and analysis infrastructure to generate economy-wide AI intelligence on AI adoption, productivity, workforce and other key trends and impacts, through Statistics Canada, with academic and think tank partners. This should include extending and expanding critical analysis tools, such as the Survey of Digital Technology and Internet Use (SDTIU), to capture key firm-level factors (e.g. use of off-the-shelf AI solutions versus those built in-house).
  4. Launch directed firm-level research to study, evaluate and share AI implementation best practices and productivity-enhancing applications in business and institutional settings. This could build on efforts by granting councils, Mitacs, and through partnerships with firms, academic researchers and think tanks.
  5. Continue to address core digital modernization challenges, including setting clear, public AI strategies to transparently set goals and expectations; provide tools, training, and space for workers to experiment with AI; and foster proactive discussions with public sector unions about long-term workforce planning. 

THEME 2

Public trust must be at the centre of Canada’s renewed AI strategy. Building public trust requires ensuring that Canadians feel confident that harms from AI will be promptly and effectively addressed if they arise. Trust is also essential for Canada’s AI innovators, and business and institutional adoption plans to succeed, as workers, clients and other stakeholders must have confidence that use of the technologies will not cause harm, worsen service quality, or be used as a careless excuse to eliminate jobs.

It is notable, then, that Canadians’ trust in AI technologies and AI companies is low. Research by KPMG and Edelman finds that Canadians rank among the most skeptical of AI worldwide. In the Dais’ 2025 Survey of Online Harms, OpenAI was among the least trusted companies to act in the Canadian public interest, rivalling social platforms Facebook and TikTok. This mistrust likely stems from harms Canadians experience with AI content online, including exposure to deepfakes, AI-enabled disinformation, and fraud. They are also concerned about irresponsible AI deployment, such as the rush by companies to release video generation tools that were immediately used to create harmful content, or reports of youth suicide and mental health crises exacerbated by AI chatbot use.

The primary narrative advanced by large AI companies and their supporters is that AI safety and regulation stifle innovation and limit growth, and that the companies have sufficient internal infrastructure to address user safety. However, our research shows self-governance is clearly insufficient. For example, in Human or AI?, we demonstrated that synthetic content labelling strategies (small, unobtrusive labels) were ineffective in informing users that content was AI-generated. In the same report, we demonstrated that approaches that do effectively flag synthetic content reduce engagement, creating a central conflict of interest for companies to self-govern.

Trusting companies to act responsibly without public input is not only ineffective, but inconsistent with Canadian interests, values of human rights, or public attitudes. Our survey found a strong majority of Canadians (nearly 70 percent) support government intervention to require tech platforms to act responsibly and reduce harmful content. In this section, we outline possible actions that the government can take to create pathways for Canadians to build trust in AI systems.

Effective AI Governance as the Basis for Public Trust-Building

While the government has signalled that it will not prioritize the re-introduction of a bespoke bill to establish a regulatory framework for AI, as previously attempted in the Artificial Intelligence and Data Act (AIDA), there are many other tools to establish AI governance. Regulatory clarity, coherence and international cooperation will encourage companies to adopt more AI and digital technologies, not less.

Effective AI governance requires strong and independent oversight, and clear accountability and enforcement mechanisms. Existing and proposed regulators and enforcement agencies (the Privacy Commissioner, the Competition Bureau, and proposed Online Safety Regulator) should be resourced to fulfill their investigative and enforcement mandates that new legislation would introduce. Further, to ensure coordination and consistent enforcement when AI systems impact multiple sectors with overlapping jurisdictions, we need to build dedicated technical capacity across departments and foster formal cooperation among regulators, such as through bodies like the Canadian Digital Regulators Forum.

The scope of AI systems cuts across multiple sectors, including health, environmental protection, heritage, finance, and national security, making it unrealistic for a single commissioner, agency, or law to manage all aspects of governance. To address the growing body of AI harms, the strategy should empower existing government institutions to assess how current governance regimes apply to AI, and to identify areas where new measures are needed. The United Kingdom’s pro-innovation 2023 framework could be used as a model to enable a faster and coordinated pathway to AI regulation, building on existing institutional capacities rather than creating entirely new structures. Building public trust in AI does not require novel or disruptive approaches; instead, it requires reimagining how existing systems can respond to our new environment.

The government can address overlapping categories of harm through related legislative proposals. A new privacy and data protection bill, the latest effort to update the existing PIPEDA law after Bill C-27 (including the Consumer Privacy Protection Act and AIDA) failed to pass in the last Parliament, could update privacy protections relating to private sector AI use. It should include provisions for regulatory enforcement of the new Children’s Privacy Code that the Privacy Commissioner of Canada is introducing, which could be essential in governing the design and data capture of tools like AI chatbots geared to children and teens. A reintroduced Online Safety Act would also put requirements on social platform harms that AI is amplifying. 

Canada should consider where these legal frameworks align with international agreements, with a focus on working with values-aligned democracies. For instance, Canada should ratify the Council of Europe Framework Convention on AI and Human Rights, Democracy and the Rule of Law, of which it is a signatory; assess the impacts of the European Union’s AI Act as it takes effect; and engage in multilateral processes like the G7 Hiroshima AI Process reporting framework and other coordinated actions on AI. 

Canada should continue efforts to establish technical standards for AI and intersecting digital technologies. This includes both domestic standards, such as those developed by the Digital Governance Council, and the Standards Council of Canada's continued coordination of Canadian participation in international standards setting processes. Once developed, AI standards adoption could be required under legal frameworks (as Europe is doing), or common practice through procurement processes of governments and large enterprises (as many are doing for cybersecurity standards).

Finally, trust is built through the public processes of passing laws and setting guardrails. Previous efforts have failed in part because they lacked genuine public consultations. When the federal government tabled Bill C-27 in 2022, over 60 civil society organizations, corporations, unions and academics decried the lack of public consultation, including around the inclusion of AIDA with little warning. This was echoed at the Dais’ multi-stakeholder roundtable on the proposed AIDA bill. Similar concerns surfaced when this very AI consultation was announced, with the process criticized as unrepresentative, unnecessarily hurried, and favouring industry actors. Building public trust requires developing avenues for Canadians to inform and actively shape policy. 

A participatory democratic approach within Canada’s AI strategy could mean creating ongoing, meaningful channels for public involvement that extend beyond traditional consultations or surveys. This could include online and in-person deliberative assemblies, citizens’ panels, and community-based advisory boards that bring together diverse voices to debate trade-offs, set priorities, evaluate emerging risks and build consensus on values. It could also involve co-design processes with historically marginalized communities, participatory technology assessments, and transparent mechanisms for public feedback on AI deployment in sectors such as health, education, and public safety. 

Recommendations

  1. Task existing government institutions with assessing how their respective regulatory regimes can be applied to AI and identify gaps, enabling a faster, coordinated, and adaptive regulatory approach built on existing systems.
  2. Address overlapping categories of harms through related legislative proposals for privacy and online safety, with a focus on immediate challenges like deepfakes, child and youth protection, and limiting the spread of disinformation.
  3. Empower new and existing regulatory bodies to oversee AI-related harms and risks under new and existing laws, and strengthen cross-regulatory coordination by creating formal mechanisms for collaboration among privacy, human rights, competition, and sectoral regulators to ensure accountable AI governance.
  4. Prioritize international coordination of Canada’s AI governance policies and mechanisms under international agreements and with values-aligned allies like the European Union.
  5. Support continued development of AI technical standards in Canada and internationally, and consider establishing requirements under Canadian legal regimes or through government and large enterprise procurement processes to encourage broader adoption, and facilitate technological and regulatory interoperability.
  6. Institutionalize participatory governance as part of the new national AI strategy and its activities, which can include, but is not limited to, online and in-person citizens’ assemblies and community advisory boards, to give Canadians a real voice in shaping Canada’s approach to AI policy.

Talent and Skills to Create and Use AI 



THEME 3

Canada’s ability to strengthen AI research and talent hinges on coordinated efforts that link competitive compensation with workforce development strategies, greater participation of women in the tech sector, and investments in digital skills and infrastructure across communities. As we outlined in our Inclusive Innovation Monitor, Canada will only reap the full benefits of innovations like AI when we achieve a fairer distribution of opportunities for people to participate in, and benefit from, AI-driven economic activity and growth. Yet, there are key challenges in Canada’s tech and AI workforce. Addressing these challenges will not just benefit Canada’s tech sector, but all industries that depend on tech workers.

Canada cannot afford to lose any AI talent, and must tackle persistent issues in gender representation in the tech sector. As of 2021, Canada had 977,155 workers in highly technical occupations, representing around 5% of the country’s workforce. Notably, only one in five tech workers were women, a number that has stayed consistent for 20 years. As a result, any new strategy that grows Canada’s AI and tech talent must also strive to increase the representation of women.

Addressing transnational tech worker pay gaps would reduce the risk of brain drain. Our research in Mind the Gap found that tech workers based in the United States earn an average of 46 percent more than tech workers based in Canada. In light of recent changes to US immigration policies, and the emerging trend of fewer Canadians seeking American job opportunities than in previous decades, Canada is better positioned than ever to retain Canadian talent and compete for international talent. 

Canada can provide tech workers with clearer, more reliable career paths by leveraging social advantages such as public health insurance, political stability and public safety. Reducing the volatility of our immigration policies when it comes to high skilled streams can also better position Canada to compete for global talent.

The public sector, in particular, struggles to compete against the private sector on salaries. While public sector jobs offer a number of non-salary benefits, the pay gap is so significant that only a small percentage of tech workers choose to work in the public service. Tech workers within the federal government earn nearly $10,000 less than their counterparts outside of government. Though the number of tech workers in Ottawa is growing, the government stands to benefit from tech workers beyond the National Capital Region by creating more remote or hybrid positions. 

Recommendations

  1. Grow Canada’s AI research and talent, with a focus on increased women’s and diverse workforce participation in AI and tech occupations, redressing Canada’s significant compensation gap with the US, and realigning immigration policies to attract top global AI talent.
  2. Advance equity to expand Canada’s talent pool through sustained funding for Women and Gender Equality Canada to sufficiently support programs addressing barriers in the workplace that contribute to low participation rates among women and other marginalized groups in the AI and tech workforce.

THEME 4

Canada’s education and employment programs need to incorporate AI literacy and skills development, and emphasize hybrid skills. For Canadians to feel prepared to engage with AI, digital literacy should begin early and be tailored to all ages. Ensuring that Canadians feel confident about how and when they can use AI can foster greater trust in the technology. Without relevant education opportunities, we risk fostering a mistrustful public who feel they have limited understanding of AI, and limited power in shaping AI’s impact on their lives. We see four strategies to avoid this future. 

First, Canada needs labour market intelligence about AI’s impacts. This should include: 

  • Longitudinal analysis of workforce trends, including AI’s impacts on specific categories of workers and jobs (e.g. youth and entry level roles).
  • Tracking of impacts at the worker level, with analysis (see Right Brain, Left Brain, AI Brain) on the AI exposure of Canadian occupations, from teachers and lawyers to software developers, presenting a directional picture of which jobs and tasks AI is most likely to augment or eliminate in future.

Intelligence on labour market trends should act as the compass for government policy and AI training investment, education and training providers’ pedagogy and learning material, and large companies’ workforce planning processes. 

Second, as AI technologies continue to evolve, education and training providers must ensure their learning is agile to reflect changing skills demands. Despite narratives about the surging demand for digital and AI skills, Dais research found that employers most often seek general digital skills (e.g. Microsoft Excel), paired with non-technical abilities, such as communication and teamwork. Educational programming should not only include AI and general digital literacy, but also the human skills that prepare workers to critically engage with AI systems. As the OECD reports, these skills, such as communication and problem solving, can buffer against shocks triggered by technological change, economic cycles, or unexpected events like the COVID-19 pandemic. 

Third, the AI strategy should prioritize AI literacy for Canadians at all ages. Early evidence suggests that Canada is lagging in equipping citizens and workers for this new AI age, with an international survey by KPMG and the University of Melbourne placing Canada 44th among nations in AI training. There is a clear need for a national AI literacy strategy focused on educating and equipping Canadians at all stages, from students, to working Canadians, to seniors in later life. Canada can build on existing digital skills strategies and international examples, but the time to act boldly is now. 

Fourth, the starting point for AI literacy should be young Canadians in K-12 education. Digital-native students quickly emerged as the top users of generative AI tools like ChatGPT. The Dais’ Screen Break project has found high levels of awareness among teachers across the country on the need to equip students with AI literacy. But they have little capacity to do so as a result of inadequate training, insufficient classroom resources and AI literacy tools, and an absence of guidance from provincial education ministries and school boards. The Dais, Media Smarts, Digital Moment and others offer curriculum-linked digital and AI literacy tools for educators, but lack the resources to scale these initiatives. Future opportunities to scale programs include adaptations of the new OECD and European Commission’s AI Literacy Framework for Primary and Secondary Education, as well as UNESCO’s AI Competencies for Students and Teachers.

Recommendations

  1. Establish capacity for generating and sharing ongoing “labour market intelligence” about AI’s impacts, with a focus on longitudinal analysis of trends and disrupted workers and jobs, occupational AI exposure, and AI training and skills development models. Statistics Canada should partner with academic researchers and think tanks, who in turn should collaborate closely and at scale with the private sector, including SMEs and scale-ups.
  2. Support further research and analysis on evolving skills and competency requirements for Canadians in preparation for continued AI diffusion and use, with the aim of informing education and training providers, employers, and workforce systems about adapting learning programs and curriculum for AI resilience.
  3. Commit to developing an AI literacy strategy for Canadians at all ages and stages with supporting investment, which could be modelled on existing digital literacy initiatives like CanCode, and developed with provinces and territories and input from education and employment service system leaders, social sector digital literacy providers, and a coalition of other stakeholder groups. 
  4. Support the development of a Pan-Canadian AI Literacy Framework for K-12 education, adapted from international best practices and developed for use by provincial and territorial education systems through a coalition of K-12 sector organizations and content and curriculum partners.

Building and Securing Canadian AI Sovereignty 


A recent Dais Commentary made the case that America’s AI Action Plan, released in Summer 2025, should be Canada’s wake-up call on AI and digital sovereignty. The US strategy asserts the goal of “achiev[ing] and maintain[ing] unquestioned and unchallenged global technological dominance,” while a section on international AI diplomacy and security commits to exporting the United States’ “full AI technology stack”—hardware, models, software, applications, standards—to countries in America’s so-called AI Alliance. Failure to go along means becoming a “rival.” 

In practical terms, this is about extending and solidifying the control that the US’ already-dominant big tech companies hold over the most transformative technologies of our time. This is a major threat to the economic security and digital sovereignty of Canada, Europe and other Western alliance nations, and the latest signal that Canada needs a bold, new national strategy for AI and the digital economy.

For Canada to make gains towards digital sovereignty through its AI strategy, we should take steps to reduce dependence on foreign technology and infrastructure. To be clear, it is not in Canada’s interest to pursue digital isolationism. However, our current pursuit of increased integration of AI in the economy risks Canada becoming beholden to foreign technology companies and governments. 

There are two pillars to securing Canadian sovereignty within the context of AI. The first involves identifying and safeguarding Canada’s unique contributions in the AI value chain. The second involves ensuring that there are Canadian alternatives for critical areas where AI technology is deployed, such as in defence and in health. 

The following section explores approaches to advance Canadian sovereignty in AI by cementing our place in the AI production infrastructure value chain, and by enabling Canadian companies to commercialize AI products.


THEME 5

The first pillar to secure Canadian sovereignty involves identifying and investing in the value chain of AI-enabling infrastructure that ensures Canada’s national and economic security, and leverages Canada's unique competitive advantages. This means avoiding investing in areas where we lack clear competitive advantages (such as the design and manufacturing of advanced GPUs that a small number of companies in the US, Taiwan, and the Netherlands have largely dominated).

The Dais’ Can Canada Compute? study found that Canada lags behind all G7 countries in publicly owned compute resources. While it’s critical to address this aggregate compute capacity challenge, sovereign compute for strategic and security needs should be distinguished from general-purpose compute access for companies and organizations seeking to adopt and commercialize AI. The discussion here focuses on building sovereign AI compute (see the Dais’ AI compute consultation submission, as well as our discussion in Theme 6 of this submission for analysis on access).

An initial step must be clearly defining AI sovereignty for Canada. Global AI giants wield enormous power in controlling AI models and infrastructure, as well as the terms of AI deployment. OpenAI’s recent offer to assist in Canada’s sovereign AI efforts tests the definition of sovereignty itself. It is therefore essential that the strategy introduce a clear and operationalizable definition of digital, cloud, and AI sovereignty. It should clarify how sovereign principles should apply to data governance, infrastructure ownership, and value chain control, among others, to guide AI strategy commitments and investments. 

With sovereignty defined, the strategy should then establish Canadian-owned and -operated AI computing capacity for Canada’s highest strategic priority AI applications (e.g. national and energy security), to ensure firms and public institutions are not dependent on foreign providers and are insulated from shifting geopolitics. These key strategic AI systems could be identified and developed through a Grand Challenges process (discussed in Theme 6). 

The strategy should direct identification of Canada’s strategic AI applications, and the creation of capabilities for forecasting the associated compute demands. There are a few pathways for building sovereign compute (including some detailed in our study, and actioned through the Budget 2024 Canadian Sovereign AI Compute Strategy). These include:

  • Centralized government procurement or public investment, subsidizing AI compute from existing domestic enterprise cloud computing providers under commercial requirements that assure sovereign ownership, control, and access (e.g. the federal government’s recent cloud procurement from ThinkON, and partnerships with Canadian telecom companies).
  • Strategic international partnerships for shared AI compute infrastructure: jointly purchasing at scale, under assured access commitment, with values-aligned partners like the European Union, UK, Japan and South Korea (e.g. exploring participation in the European High-Performance Computing Joint Undertaking).
  • Investment in domestic AI supercomputing infrastructure, expanding Canada’s network of 10 high-performance domestic AI supercomputing sites. 

At the same time, Canada holds a competitive advantage over many other nations (including the US) in our capacity to design, build, and operate reliable and sustainable energy sources that can address the immense power needs of data centres, from hydroelectric power to nuclear to comparatively more sustainable natural gas. This aligns closely with the government’s nation-building agenda, and allows for infrastructure that recognizes AI as a technology encompassing modalities beyond generative AI, with vastly different energy and infrastructure needs.

Additionally, nation-building infrastructure investments should prioritize Canada’s AI sovereignty and security. The new Major Projects Office could play a central role in new AI and digital infrastructure projects. This should include public benefit AI investments (termed “public AI”), advanced through coalitions of public, private and nonprofit actors to build foundational technologies that spur an ecosystem of AI applications, both commercial and non-commercial, with civic purpose. Digital sovereignty through collaboration can pool resources across diverse groups, from artists to companies, nonprofits, and alliance partners such as the EU. Existing models such as AI Commons and Public AI Network can serve as a guide.

In addition, Canada should expand investment in large-scale electricity projects to power AI infrastructure, such as new hydroelectric capacity, expanded CANDU reactors, and small modular nuclear reactors, prioritizing clean, reliable energy generation. A forward-looking approach should centre energy sovereignty and sustainable infrastructure, ensuring that AI growth is powered by Canadian innovation and clean energy.

Absent a clear definition of sovereignty and a guiding framework, Canada risks misaligning its AI strategy with its security needs and comparative economic advantages, diverting resources toward unrealistic infrastructure goals, and deepening dependence on foreign solutions, undermining AI commercialization and adoption domestically. 

Recommendations

  1. Develop and codify a formal definition of digital and AI sovereignty to inform the AI strategy, clarifying where digital infrastructure, the protection of digital assets, and other aspects of the AI value chain require technological independence, and where there are opportunities for collaboration with allied countries. 
  2. Focus resources on building sovereign compute capacity for Canada’s highest strategic priority applications, used to design, deploy and operate critical use cases (such as national defence), potentially identified through a Grand Challenge process (as explored in Theme 6 on scaling AI). Three parallel pathways for building compute are identified (detailed in Can Canada Compute?).
  3. Use the Major Projects Office to advance the national AI strategy, through both nation-building AI and digital infrastructure projects, and alignment to investments in clean, sustainable energy projects to cement Canada’s competitive edge in energy production that powers AI data centres.
  4. Ensure public involvement, ownership, and international collaboration as a model for sovereign AI, modelled on existing efforts like Public AI, or the proposed AI Commons, by focusing on unique comparative advantages like ethical data, soft power and open source AI development.

THEME 6

The second pillar to protect Canadian AI sovereignty is by ensuring the existence of strong Canadian alternatives for key AI technologies. This depends on building a strategy that supports homegrown AI champions in critical usage areas, creates a supportive and equitable entrepreneurship ecosystem, expands access to the foundational infrastructure that underpins AI innovation, and addresses critical infrastructure gaps.

Efforts to help commercialize Canadian-made AI technology must focus on scale-ups, not just start-ups. Scale-ups are firms that grow quickly and have an outsized productivity contribution. Importantly, they are able to leverage research and development more effectively than smaller firms, and are more likely to export (see our previous work, Into the Scale-up Verse). Commercialization strategies must focus on company growth, not just creating start-ups.

A recent Dais convening (see From Potential to Performance) of industry experts revealed a prevailing view that existing innovation programming must be modernized with a focus on retaining top AI talent and intellectual property (IP). Canada's AI infrastructure and AI application start-ups and scale-ups need globally competitive public support for IP creation, talent retention, and commercialization efforts. Existing innovation support programs such as the Scientific Research & Experimental Development tax credit, the Industrial Research Assistance Program, the Canada Infrastructure Bank, the Strategic Innovation Fund, and Innovative Solutions Canada need tailored and responsive processes for Canadian AI enterprises. 

In addition to leveraging broad-based commercialization measures, the AI strategy needs to contend with how policies can help grow Canadian companies building different layers of the AI stack: infrastructure, data, foundational models, compute, and applications for use in critical areas (such as healthcare, security and defence, and supply chain management). As it is not feasible for Canada to excel in building every layer of the stack, Canada needs to identify critical and priority areas. The next AI strategy should clearly outline these priority usage areas with clear public-interest rationale, where no strong Canadian alternatives to foreign AI solutions exist. 

The government should launch a Grand Challenge program to identify strategic AI use cases that lack Canadian solutions, and create matched private-public investment funds to build domestic capacity in those usage areas. Modelled on moonshot policy ideas, a grand challenge could leverage existing incubator and accelerator infrastructure and programs like Lab2Market, Creative Destruction Lab, and others.

To ensure that AI industry development in Canada drives inclusive growth rather than deepening existing divides, policymakers should prioritize actions that expand shared infrastructure, close equity gaps in access and participation, and strengthen trust through responsible and transparent adoption.

Recommendations

  1. Launch a Canadian AI Grand Challenge that clearly articulates areas where the government believes Canadian-made AI solutions must exist. The challenges should have clear, bold, measurable, and time-limited goals.
  2. Launch matched private-public investment funds that invest in Canadian companies at all stages that are building solutions in critical AI usage areas without a clear Canadian alternative, as identified in a Canadian AI Grand Challenge.
  3. Prioritize access to public compute and data resources for companies that receive funding through a Canadian AI Grand Challenge, via the existing Canada Compute Access Fund.

Conclusion


The analysis and recommendations in this submission reflect the Dais’ mission to build Canada’s digital economy by striking the balance between growth and guardrails. In that light, it proposes a new national AI strategy that aims to drive innovation and prosperity while building public trust, safety, accountability, and sovereignty. 

Canada’s position as the first country to launch a national AI strategy gives it a strong foundation on which to lead the next phase of responsible AI development. Canada must not lose sight of the fact that AI is just one pathway to a prosperous and equitable future for Canada. We must ensure that we don’t sacrifice or compromise that goal in our pursuit of artificial intelligence adoption.