Rising privacy, safety risks for kids, teens on AI platforms
AI-based chatbots and online platforms pose new and more invasive privacy risks to minors, TMU study shows.
TORONTO, CANADA, March 11, 2025 – Human-like interactions with generative AI (genAI) platforms, such as conversational chatbots, encourage youth to trust these systems and share personal information about their lives, behaviour and relationships, all of which is at risk of data collection, manipulation and exploitation.
A new research report, (Gen)eration AI: Safeguarding youth privacy in the age of generative artificial intelligence, from the Dais at Toronto Metropolitan University, shows that the rapid rise of genAI use has exposed youth to unprecedented privacy risks, increasing their vulnerability to mental illness, addiction and other harms. GenAI is a type of AI that can create new content and ideas, including conversations, stories, images, videos and music.
Specific concerns outlined in the report include:
- Mental health and addiction risks from human-like relationships with AI chatbots
- Surveillance concerns as AI in schools tracks student behaviour and geolocation data
- AI-fueled bullying through peer-generated content
- Academic over-reliance on generative AI for assignments
“Dark patterns” are also a notable risk: these design tactics subtly encourage users to share more data, such as secrets they may not share with their friends or parents, or to stay engaged longer than intended. This can be particularly exploitative for minors, who often unknowingly consent to the collection of their data in order to unlock more interactive features on a platform.
Right now, there is insufficient information about how companies and platforms process data about youth, and no legislation in place to protect minors’ data on genAI platforms.
Using a mixed-method research approach, including a literature review, expert interviews, and case studies, the report assesses existing policy frameworks, identifies legislative and regulatory gaps in protections, and recommends best practices for policymakers, technologists, educators, and parents to mitigate risks and enhance privacy for minors.
“Children and teens are engaging with genAI platforms that are more human-like than ever before, making them more vulnerable to unprecedented privacy and safety risks. We encourage policymakers, technologists and educators to read this report to understand what concrete safeguards can be implemented now to mitigate harm,” said Christelle Tessono, policy research assistant at the Dais and report co-author.
For more insight into the privacy and safety risks facing minors who use genAI platforms, and what technologists, policymakers, parents and educators can do to mitigate them, please see the full report, (Gen)eration AI: Safeguarding youth privacy in the age of generative artificial intelligence.
For media interviews, contact Nina Rafeek Dow, communications and marketing lead at the Dais, nina.rafeek@torontomu.ca.
About the Dais at Toronto Metropolitan University
The Dais is a public policy and leadership think tank at Toronto Metropolitan University, working at the intersection of technology, education and democracy to build shared prosperity and citizenship for Canada. Visit us at dais.ca.