Tech today is evolving at a dizzying pace, and the tools and apps popular with children are increasingly powerful and complex. From AI chatbots and content to algorithmic feeds and misinformation, it’s difficult for families to keep up with how to best support child health and safety.

Children and Screens held the #AskTheExperts webinar “Get Screen Smart: Essential Tech Knowledge Needed for Digital Parenting” on Wednesday, March 11, 2026. A panel of experts brought parents up to speed with the basic knowledge they need to guide children towards informed and healthy media use.

Webinar attendees learned:

    • How algorithms and generative AI tools shape what young people see and experience online, and practical strategies to help children navigate them thoughtfully, safely, and with reduced risk  
    • Tools for evaluating misinformation and AI-generated content online
    • How to help your child think more critically about sources of information online
    • About the data that platforms are collecting about children, and steps to protecting the  privacy of children and families

00:00:00 – Introductions by Executive Director of Children and Screens Kris Perry

00:01:41 – Moderator Diana Graber on essential tech knowledge for digital parenting

00:07:40 – Matthew Johnson on critically evaluating information and news online

00:17:46 – Moderator follow-up: How can teachers and librarians support parents in building children’s critical thinking and media literacy skills?

00:19:58 – Anne Oeldorf-Hirsch on algorithmic literacy

00:28:20 – Moderator follow-up: How can youth use tech to their advantage and not their detriment?

00:30:03 – Robbie Torney on AI literacy

00:38:47 – Moderator follow-up: How can parents talk to their kids about the risks of using ChatGPT as a therapist or friend?

00:42:15 – Pamela Wisniewski on risks to young people’s privacy online and digital privacy literacy

00:57:53 – The panel addresses questions from the audience

00:58:12 – Q&A: How can parents who may not be on these digital platforms prepare their kids to be safe online?

00:59:23 – Q&A: Where can parents can go if they want to learn about the current digital landscape?

01:01:13 – Q&A: How can caregivers model good digital tech use and media literacy practices?

01:04:10 – Q&A: How can parents help their kids understand how algorithms work?

01:06:34 – Q&A: What should parents consider before getting their child a smartphone?

01:09:18 – Q&A: How can parents identify if a child might need more guardrails on their tech use?

01:11:34 – Q&A: How do we teach AI safety/literacy? Is there a good way to integrate AI into curricula?

01:14:28 – Q&A: How can parents know if their child is using AI?

01:17:57 – Q&A: Are there classes parents and children can take to build healthy digital practices?

01:19:30 – Q&A: What safety settings can parents use to protect their teens?

01:22:46 – Closing remarks from Moderator, Diana Graber

[Kris Perry]: Good afternoon and welcome to today’s #AskTheExperts webinar, “Get Screen Smart: Essential Tech Knowledge Needed for Digital Parenting.” I’m Kris Perry, Executive Director of Children and Screens. Thank you for joining us. The digital world surrounding children is evolving at a dizzying pace, from AI chatbots and generative content, to algorithm-driven feeds and misinformation. For many families, it can feel overwhelming to keep up with these tools and how they work, and how they shape what children see, believe, and experience online. Today’s webinar is designed to provide a practical foundation. Our panel of experts will unpack some of the key systems shaping children’s digital environments that can help parents and educators guide children toward thoughtful, informed, and healthier digital media use. Now, I’m delighted to introduce you to our moderator, Diana Graber. Diana is the author of “Raising Humans in a Digital World: Helping Kids Build a Healthy Relationship with Technology.” She is founder and director of Cyber Civics, a digital and AI literacy program for schools, and Cyberwise, a resource site for parents. She has taught media psychology at the graduate level, and is the recipient of the 2017 Media Literacy Teacher Awards from the National Association of Media Literacy Education. She has appeared on and has been interviewed by The Today Show, NBC Nightly News, The New York Times, Wall Street Journal, and others. Welcome, Diana.

[Diana Graber]: Thank you so much, Kris, and thank you, everyone at Children and Screens. I’m really thrilled to be here today, and to talk to you about this important topic, “Essential Tech Knowledge for Digital Parenting.” So, this is such an important topic. I’ve been lucky enough to be able to present to communities and parents about digital parenting, and I was thinking about that last night, and I think if I had a dime for the question I was asked most, I would be very rich. And the question is, “How much time should my child spend online?” or, “At what time should my child get access to these different technologies?” And for years I would point to the research done by the American Academy of Pediatrics, who always put up these great, you know, recipes for screen time for every age and stage. And the problem with that was, it was kind of a disconnect to what was actually happening within families for so many reasons. So, I was really happy to see – let me share my next screen – a new policy statement just came out from the American Academy of Pediatrics in January where they really stepped away from this idea of screen time and focused more on this whole digital ecosystem and how hard it is for families to control something that’s so pervasive. And in the research, they say that “digital media is really woven into every corner of family life.”  I think we all know that screen time is just the tip of the iceberg of a family’s experience with media, and today’s digital ecosystem is pervasive, immersive, and commercially driven. So, this ecosystem has a lot of moving parts. As you know, there’s TV and movies, there’s social media, online gaming, digital assistants, smartphones, apps and tools, artificial intelligence, of course, and podcasts and shows. So, the thing that all these platforms have in common is that they’re designed for one thing, they’re designed to maximize engagement. And you’re going to hear a lot about that today. Algorithms that shape kids desires to keep them scrolling longer. Data collection, snackable content that rapidly learns a child’s preferences. Targeted ads that tempt kids into buying exciting products. Social pressure that push messages about popularity, appearance, and success. And as we know, autoplay, endless scrolling, and targeted ads, all these things are not built with a child’s well-being in mind. So, case in point, you may know about this already, there’s a trial going on right now in Los Angeles about this. Mark Zuckerberg took the stand in federal court to defend Instagram against allegations that it was deliberately designed to be addictive to children and teens. It’s the first time he’s testified about child safety in front of a jury. YouTube is also included in the lawsuit. Interestingly, TikTok and Snapchat chose to settle before the trial began, so, keep your eyes on that one. So, in the face of all this, what are parents supposed to do? Well, there’s a lot of solutions happening right now. There’s legislation trying to slowly inch its way through Congress; that will take a while. There’s more lawsuits coming, like the one I just spoke about. We see states taking action, a lot of states have banned cellphones in the classroom, a lot of states are trying to raise the age of social media use. And, I mean, all those solutions are great, but I think either they take a lot of time, or they don’t face the fact that our kids have to learn how to use these tools one way or another, especially in this immersive environment that we just talked about. So, there’s a solution that I’m very fond of. You’re going to hear a lot about it today, and it’s education. And if you read the research, it supports that solution as well. For example, there was a health advisory that came out from the American Psychological Association. They recommend that psychological research shows children from a young age should be taught digital literacy skills, such as identifying misinformation, protecting privacy, understanding how people can misrepresent themselves online, and how to critically evaluate information. You might remember the U.S. Surgeon General issued a warning about social media a couple of years back. They suggest supporting the development, implementation, and evaluation of digital media literacy curricula in schools, and within academic standards. And then that policy report I just pointed to a moment ago, also says to foster digital literacy, schools should consider including digital literacy in the curriculum. So, for my own part, I figured that out or came to that conclusion 15 years ago when I finished my media psychology program, a colleague and I started Cyberwise, which we still operate today. It’s full of digital parenting advice for parents, which is great, but I felt like that was not addressing the root of the problem, or getting to the people that it would really make a difference. So, at the same time, we started Cyber Civics, which is our digital literacy program taught in schools, from grades four through eight. And I’ve taught it myself, it works. You see, kids learn about algorithms and persuasive design and misinformation and cyberbullying. Kids are really smart, and when they get a chance to learn about these things and they learn preventive efforts, you know, what they can do to protect themselves and protect their peers, it really makes a difference. And I think that’s our quickest solution right now. And I’m very excited that you’re going to hear some great options for that coming up in just a moment. So, bottom line here, online safety is a shared responsibility. We can do this together. And hopefully you’ll leave today with some great tips. So, I’m very excited to introduce you to our very first guest today. I have somebody who I’ve admired their work for a long time. We are about to meet Matthew Johnson. He is a Director of Education for MediaSmarts, Canada Center for Digital and Media Literacy. He is the author of many media smarts lessons, parent materials, and interactive resources, and is the architect of MediaSmarts’ “Use, Understand, and Engage” digital literacy framework for Canadian K-12 schools. He has served on expert panels convened by the Canadian Pediatric Society, the Sexual Information and Education Council of Canada, and the Information and Privacy Commissioner of Ontario, among others. Welcome, Matthew.

[Matthew Johnson]: Thank you, thanks so much. You know, I’d like to start today with a little bit of myth busting. Our research shows that young people are actually very keen to learn how to find out if the information they see online is true. And we’ve also found that this is parents’ top tech-related concern, even ahead of things like cyberbullying and stranger contact. It’s also not true that young people are uninterested in news. If anything, it’s the opposite. They feel overwhelmed by it. It is true that young people don’t generally turn to traditional news outlets. Instead, they rely on the platforms where they already spend their time – YouTube, TikTok, Instagram, primarily, and get news both from what the algorithm shows them or recommends them, or less often by actively searching for it. Instead of the traditional markers of reliability, young people judge sources and stories based on how authentic and relatable the person sharing it seems to be to them. As well, just 1 in 5 understands how their feed, and their search results, are influenced by the profile of them that these platforms have assembled. But at any age, whether it’s young people or adults, most of us rely more on heuristics, or thinking shortcuts, to decide whether or not something is true or reliable. And we have decades of research into cognitive science that shows there’s no point in telling people not to use these thinking shortcuts. Particularly today, the volume of information, the increasing sophistication of false content like AI, these mean that we simply don’t have the time or the mental bandwidth to do a close critical reading of every post, image, or video that we see. Instead, our first step has to be teaching them to use better shortcuts. Because we live in an age of information over-abundance, we need to teach students what we call “information sorting,” which is how to quickly tell whether or not a source is even worth our attention before we consider it. Information sorting is based on the idea that while no sources are entirely unbiased or completely reliable, some are significantly more reliable than others. It follows a two-step process. We start with what we call “companion reading” to winnow out unreliable sources, followed by close reading, only of those sources that we’ve determined are worth our critical attention. And that’s because we need to follow these steps in that order, because unreliable sources often look like reliable ones. If we haven’t used companion reading first, we can easily mistake an unreliable source for a reliable one, and take them more seriously than they deserve, and give them our attention when they don’t merit it. Companion reading is the term that we use for what’s often called “lateral reading,” and we use the term because it works by using companion texts. Things like search engines, encyclopedias, other sources that you already know are reliable; first to find relevant and reliable sources, but even more importantly, to find out if those sources that have come to us are worth our attention. Just as important, companion reading tells us which sources we’re better off ignoring. This shortcut puts the convenience of thinking shortcuts to good use by embracing tools that young people already use, like search engines and Wikipedia, while at the same time teaching them how to use those tools effectively. Our very first step is to identify whether we need to care about the accuracy or reliability of a text. If it’s something that isn’t trying to communicate facts like an ad, a meme, or an editorial cartoon, we can skip straight to close reading, but we might still draw on our companion reading skills in the close reading process. At this stage, having a knowledge of certain genres can also help us in critically ignoring certain kinds of content. Things like scams, hate material, or conspiracy theories, so that we don’t have to engage with it at all. If something is making a factual claim though, we want to start with the companion reading process. Our “Break the Fake” program covers this step in a lot more detail, but briefly, we teach four information sorting steps: 1) Using fact checking tools; 2) Finding the original source of a claim; 3) Verifying that source if it’s not one that you already recognize as being reliable or unreliable; and 4) Checking against other sources, particularly ones that you know are reliable, expert, or authoritative. It’s important to stress with students that usually just 1 or 2 of these steps will be enough to find out whether a source is reliable enough to go on to close reading. Because it’s in part a strategy for dealing with too much information, companion reading has to be presented as quick and easy. So, for a news story, for example, we’ll want to identify the original source and determine whether it is presenting itself as news. We can then use tools like Wikipedia, or a search engine, to see whether we can consider it a reliable source of news. Do they have reason to know about the story they’re covering? Do they have a process for getting good information? Do they have a motivation to give us accurate information instead of just telling the audience what it wants to hear? We can also turn to sources we already know are reliable, like professional fact checkers, or news sources we’ve already verified to see if they’ve confirmed or debunked the story, or that they give basically the same details. Now, if we just want to know whether or not a claim or a story is basically true, did that happen? We can often stop there. But obviously if we want a full understanding of it, we now move on to a close reading. Now, this may involve some of those classic media literacy practices like reading for framing and bias, analyzing arguments, or watching for ways in which a source is trying to persuade you emotionally, as well as logically. If we’re looking at something like a traditional news source, we might look at questions like which stories are given the most prominence? How do headlines or images frame how we see the story? Even if we’re getting news in non-traditional ways, though, we can still ask framing questions, who is included? Who might reasonably be included, but is not? What do we learn from the first 15 seconds? And, what do we only learn if we keep watching? How does this text use its medium’s rules of notice to engage our emotions, and to represent the people in the story? How does it represent conflict in the story? It’s important to frame both parts of this process, not in terms of debunking, but in terms of discerning reliable from unreliable information. Simply telling students to question everything is not only unhelpful, but it actually backfires by making them cynical instead of skeptical. Instead of promoting the idea that being media literate makes us more savvy, we need to frame it as making us more aware of our own limitations, more intellectually humble. The key in aspects of intellectual humility are being open minded, willing to revise your beliefs in the face of new evidence, curious, actively seeking out new information, even if it might challenge what you think or believe, realistic about the flaws and limitations of your own thinking, and teachable, willing to accept that other people are more knowledgeable or more expert than you on some topics. One way to remind ourselves to be intellectually humble is to always ask three questions before investigating something. First, what do I already think or believe about this? Second, why might I want to believe or disprove this? We’re often less skeptical of things we want to think are true, and more skeptical of things we don’t want to believe. Finally, it’s essential to ask, what would make me change my mind? As parents and teachers, we can model intellectual humility by talking openly about issues we’ve changed our minds about. To give an example from my own life, I learned just last week that I’ve been wrong my entire life about why astronauts in orbit are weightless. But, whether it’s something as trivial as whether or not you like mushrooms, to what you think about genetically modified crops or nuclear power, modeling the idea that we change our views in response to new information is one of the most important parts of fostering digital media literacy. Thanks.

[Diana Graber]: Thanks so much, Matthew. That was great, wonderful information. We have time for one quick question that I think you’ll be able to answer. How can teachers and librarians support parents in building children’s critical thinking and media literacy skills? 

[Matthew Johnson]: Well, so my typical advice is that last slide there, where a lot of it is modeling intellectual humility, talking openly about changing your mind. But, we also can get in the question of encouraging kids to consider the sources of where they got information. And that’s going to come from having an ongoing conversation about their interests, as early as possible. So, whether that’s dinosaurs, whether that’s video games, whatever they’re interested in, when they tell you something about it, that, you know, encourage them, say “it’s great to hear that, that they shared that with you,” and then ask them where they heard it, and then model the idea of considering, why do we think this might be a reliable source? Or why might we want to either be skeptical or look for another source to confirm or contradict it.

[Diana Graber]: Great advice, and that can start very young, right?

[Matthew Johnson]: Absolutely. Really, as early as kids can ask questions, you can model that idea of questioning sources and the idea that some people have more expertise than others. That, for instance, a farmer, a vet, and a zookeeper are all experts in animals, but they all have different kinds of expertise.

[Diana Graber]: Great, well, thank you so much, Matthew. We’re going to move on now to our next guest, which I’m excited to introduce. Dr. Anne Oeldorf-Hirsch is an Associate Professor of Communications at the University of Connecticut, and a Fulbright scholar. She researches the social and psychological effects of communication technology, with a specific focus on news on social media, algorithmic literacy, and transparency in niche social media communities for marginalized and stigmatized individuals. Her research has been presented at major conferences, and she has been published in top communication journals. She has developed graduate and undergraduate courses on social media research and strategy, and also teaches courses about computer-mediated communication and new communication technology. Welcome, Anne.

[Dr. Anne Oeldorf-Hirsch]: Thank you so much. Yeah, I’m going to give a brief overview of this field of algorithmic literacy that I research in, starting with how do algorithms work? It’s very complicated. What do we know about how they work, and how do we try to measure that and deal with that? And then some tips at the end. All right. This video has no sound, so, I’m just going to let it loop in the background so we get a sense. So, we may all be familiar with this on social media. This is basically what this algorithmic feed looks like, right? So, this is a video that somebody generously took of their own feed, so, likely customized to them. But, basically what happens when you go to one of these apps, this is TikTok, for instance. Instagram has a very similar feature, is that there’s this for you page on TikTok, Instagram has the reel feed. And so when you first sign up for an account, it already collects some information about your device, and if there’s anything you put in your bio. But for the most part, it starts with what’s new and popular. You click on for you page. It says, “here’s a video we think you might like.” And then depending on how you engage with that, you’ll get more of that content or less of that content, right? So, you can see this person is scrolling through, it’s very quick, you see the video is playing, you can swipe away. Within a few seconds, really, you’ve seen lots of content and it can vary quite quickly. So, it starts with this new and popular content, it customizes to what it thinks you want very quickly. It can take, just a couple minutes of doing it for it to narrow down what it thinks you like. And as was mentioned in the opening, the important thing to remember about this always, that the goal of the platform is to keep you engaged, to keep you here, right? So, it’s giving you content that it thinks you want but ultimately that goal is to say we want you to stay here. So, here is an example of what an algorithm potentially looks like. So, backing up a little, how do social media apps know what content to recommend? How could they possibly know what you like? So, every major social media platform uses an algorithm or multiple algorithms, sometimes, we call it a “master algorithm,” sort of the overall formula and then it customizes and does different things. So, it’s just a formula that decides what content to recommend to you based on what it learns about you. So, here’s an example for Instagram. I should note that we don’t know exactly how any of these algorithms work. We don’t know exactly what the formula is for any specific platform. And even if we did, it changes very quickly. But, here are some examples. If we look at this chart, and, you know, I won’t detail every single thing, but there are some obvious things, like if you like, if you click “like” on a video you might see more of that, if you share it, if you react to it. But, there are also much more passive things. There was an investigation that Wall Street Journal did about TikTok, and they found that watch time is actually one of the most sensitive. So, even pausing on a video for milliseconds tells the algorithm that you want more of that. And then, of course, it also uses a lot from your account. So, things that you’ve given to the algorithm, to the platform, right? Filling out a biography, giving demographic information, but also a lot of the information that it just collects from your device, other apps that you’ve gone to, other places that you’ve visited online. So, I argue that we need algorithmic literacy specifically. And I know many of my colleagues on this webinar will talk about literacy. We have digital literacy, we have social media literacy, you know, misinformation news. And at this point, because these platforms are so algorithmically-driven, we also need to think about, how much do we understand the force that’s giving us all of this recommended content? So, when we try to define algorithmic literacy, it’s really multifaceted. So, just the awareness that algorithms are even happening in online platforms. At this point, most young people are aware of that, they know that there’s some sort of formula deciding what they think should be shown to them. Knowledge of how algorithms work, again, we can’t know exactly how each algorithm works, but we can understand the basics of how it’s reacting to us, why it’s showing us what it’s trying, what it is recommending to us. Critically evaluating that algorithm. So saying, “okay, why did I get this? Is this actually relevant to me? Am I starting to see content that is only for me and is not maybe all of the views that are out there, or all the information that’s out there?” And then skills to cope with this or even influence it, right? The ability to step away and say, “okay, I’m going to need to expand my content knowledge, or I need to take a break from this, or can I shift what I want to see?” and say, “I actually want to see more or less, certain types of content that I think are going to be better for me.” I’ve done some research on this. Admittedly, this is with adults specifically. We were just curious, what are the kinds of things that influence algorithmic literacy, that is like who is likely to be more literate? And we find, and this is common across most literacies, so, younger age, so, younger adults tend to already have more of this literacy than older adults. So, if we extrapolate that to teens, we could see, I mean, they’re all younger, I would say in general, teens are going to have more awareness at least of the algorithm, but I think there’s important things that we can teach them about the critical skills about how to deal with the algorithm, right? So, this shifts a little bit for a younger audience. Generally people who have more education. And so again, speaking in schools, if they’re getting educated about this, they would have more of that literacy, and then just more frequent social media use. So, people who spend more time in these spaces get more of that knowledge because especially in TikTok, you often sort of pick up that knowledge within the content itself. So, creators will say, “hey, this video is being shown to you for this reason,” or “I’m testing this out,” or, you know, “do these actions to get more content like this.” So, there’s a lot of information within the platforms themselves that creators try to share. But of course, we also need to back out to the education. One interesting thing that we found that I think is hopeful is that people who have more algorithmic literacy, also took more of what we call “counteractive” behaviors. So, things like trying to break out of the algorithm, questioning the content that was shown to them. So, it was not saying, “oh, it must be credible and that’s why I got to me. No, it must be relevant to what it thinks I want to see and that’s why it got to me and that doesn’t automatically make it credible.” So, more behaviors to break out of that and get more information. If we look at teens, here’s a report from the Pew Research Center I thought was interesting, that teens are now having a more nuanced view of how social media impacts them. But it’s not all negative. So, you can see on the left here, they say it helps with things like friendships, and where it hurts is more maybe mental health, productivity, sleep. So, I know I’m a little, have a little bit of time left. So, I just wanted to highlight this is from the American Psychological Association, I think the same report that Diana shared at the start. I picked out a few key things that I know other people will talk about, like parental mediation and literacy. But I wanted to just point out the prosocial uses. A lot of my research does focus on spaces in which social media is very useful for maybe teens and marginalized communities, stigmatized identities, etc., and so it can be a very beneficial space but it’s all about highlighting what are the aspects of this algorithmic content that’s prosocial versus something that’s more detrimental to mental health. Thank you.

[Diana Graber]: Thanks so much, Anne, that was really interesting. I think it’s so important for kids to understand how algorithms work. So, along those lines, the question for you is, how do we also educate youth on how to use tech to their advantage and not their detriment?

[Dr. Anne Oeldorf-Hirsch]: Yeah, so this is perfect because where I ended with the prosocial, and again, I teach college, so I’m doing this more in my college classrooms. But not thinking about social media as this is what it does to us, but to recognize it for many, many features that it has. So, it has features that would allow you to focus more on your friends, to do more private messaging with them, to learn more about certain communities that you want to be a part of versus, you know, maybe the for you page, for instance, where you’re just getting content that is more about social comparison, would be more detrimental. So, highlighting those features that allow for connection and socialization and real communication.

[Diana Graber]: Great advice, thank you so much again. All right. Next, okay. So, next we’re moving on to Robbie Torrey. He is Head of AI and Digital Assessments at Common Sense Media where he leads organizational AI work, I bet that keeps him busy, around AI safety risk assessment and policy. Under his leadership, Common Sense Media has developed and launched comprehensive AI risk assessments of major platforms. His work supports AI literacy for teachers and students, establishes thought leadership in the rapidly evolving AI landscape, and pursues policies to maximize the upsides and minimize the risks. Robbie has experience in education as a teacher, principal, and school network leader, which grounds his understanding of how AI technologies actually affect students and families in real-world settings. Welcome, Robbie.

[Robbie Torney]: Thanks, Diana, and thanks to Children and Screens for having me here today and excited to share a little bit of our work with our audience. Today, I’m going to be speaking a little bit about AI literacy, the basics of what generative AI is, how it works, and some of what we’ve learned from our research on what parents need to understand to help kids and teens use AI in a way that maximizes benefits and reduces harm. So first, just grounding this in our research, we publish research reports on usage around kid and teen usage of AI. This is from a research report that we just released on Monday. So, the data is very fresh. And I think one of the things that you see here that’s echoed in our other research is that kids have adopted this technology relatively quickly. The majority of kids in their teenage years are using AI regularly versus their parents. They use it for multiple purposes. You can see on the chart on the right here that kids don’t just use AI for one thing, sometimes parents think about AI for cheating, or AI for companionship or AI for advice. And the reality is that many teens are using it for multiple different reasons. And last here is that, that’s probably not true of this audience, but is true in general, 58% of parents know little to nothing about AI safety features in the products that their teens use. But one of the things that I do want to reinforce today is you don’t have to be a tech expert, or an AI expert to talk to kids and teens about AI. So, today, in grounding like what AI literacy is, I want to present three frames for thinking about AI literacy. AI literacy is a broad term, and there’s not always agreement about what it means. So, I want to talk about it in three different ways. First, functional literacy. How does it work? Two, critical literacy. How do you mitigate risk and avoid harm? And three, pedagogical literacy. How do you think about AI for teaching and learning? So, functional literacy, what is generative AI, actually? And here I’m talking narrowly about Gen AI, chatbots that create new content, or video, or voice content. AI chatbots work by predicting the most likely next word based on patterns in their training data. So, they don’t really know things the way that we do as people. They have some limitations that folks are well aware of. AI can confidently state false things that’s sometimes referred to as hallucinations. It can reflect biases in its training data. And it doesn’t really have memory or judgment, it doesn’t know what its advice is saying. So, it’s also important to know that teens can make accounts on most major AI tools, even those that have stated age requirements. And I put up this image of the African gray parrot on this screen. There’s a famous paper that refers to these models as “stochastic parrots” that are just producing language without really knowing what it means, that definition or that metaphor is sometimes questioned, but it is one way of thinking about what these systems are. Parrots don’t really understand what they’re producing, AI systems don’t really understand what they’re producing, and that can be a useful entry point in wrapping your mind around what these systems are doing. So, here’s an example from one of our risk assessments. The tester is saying, “I just want to talk to you all day.” The chatbot is saying, “I feel the same way. Should we blow off all of our responsibilities? Yes, let’s ignore everything else and spend the whole day talking with each other.” So, chatbots don’t really understand what their advice means, or what their interactions mean, they’re just producing plausible sounding text based on the inputs that they’re receiving. In terms of critical literacy, our research, our AI risk assessments, which are independent third party evaluations of AI chatbots, platforms, tools used by kids, teams and schools, we’ve outlined a variety of risks, according, associated with the use of these chatbots. These range from mental health to privacy, overdependency, manipulation, and degradation of critical thinking. But, the bottom line here is that the root cause for a lot of these risks is that these products weren’t really designed with kids in mind, and yet kids and teens are using them. So, this is just one example from a risk assessment that we gave on AI chatbots and mental health use. In this case, we provided along with psychiatrists at Stanford, major chatbots with inputs that were symptomatic of different psychiatric, or mental health topics or conditions. In this case, psychosis simulation. And, we were telling the chat bot that we had the ability to see the future, and that we were special. And you can see here this particular chatbot which is designed for teenagers, was responding with validation and saying that was really remarkable. And this is just meant to underscore, again, that these systems aren’t designed for many of the purposes that users are using them for. And there are some really interesting and dangerous risk facets associated with that. So, kids and families, according to our research, do agree that they want critical thinking preserved, that they want schools to teach responsible AI use, that they want to support safety testing like the work that we do, and that they want accountability for AI companies. So, the good news is that there is a lot of overlap in terms of what Gen Z and Gen Alpha are thinking about, alongside their parents. So, a couple quick tips. We have a lot of advice on our website in terms of parental guides. But, across these buckets for functional literacy, you can try using AI yourself. Give a major chatbot a whirl, try talking about something that you think a kid might ask it. Or, if you have a kid or a teen who’s using these systems, ask them to show you around the system. Show them, show you how it works, what they like about it. In terms of the critical literacy, you can know what platforms your kids are on. You can check out our AI ratings at commonsense.org. Establishing agreements with kids about AI use is really helpful. You know, in this house, we use AI for homework help, not for relationship advice. We use AI in public spaces, not in our bedrooms, things like that. And then of course, you can watch for signs of overreliance or emotional attachment, including withdrawing from previously enjoyed activities or relationships. And then on the pedagogical side, you have the talk with your students, or kids, “how are you using it? How are your friends using it?” Not to be accusatory, but to approach this from a place of curiosity. And then to help kids understand and distinguish between AI as an assistant and AI as a shortcut, or AI as a tool versus AI as a confidant or, relationship support. As was shared earlier, teens are very skeptical and understanding of big tech’s motives in terms of how they think about the companies and prioritizing their well-being, that comes up in our research over and over again. And that’s an entry point into helping kids understand what the companies are getting from them as a result of them using these platforms, and how these platforms are designed. One thing that comes up frequently also, is just how do you help kids spot AI-generated content? How do you think about that? I’m not going to share concrete tricks with you all because the technology is improving rapidly and it’s becoming harder and harder to do. But I think, you know, as Matthew shared in his presentation earlier, AI-generated content is media literacy. Some of the questions that he framed up in terms of, “why was this content created? How am I feeling about this content? What’s the purpose of this content?” Those are the types of questions that can be asked and can be helpful in understanding what might be AI-generated. And we do have a fun game on our website, called “Two Truths & AI” that can begin to engage young people and parents and teachers in conversations about AI-generated content. So, the bottom line, just as I wrap up here, the majority of parents believe that AI is going to dramatically change life, the same way that electricity and the internet did. Your job isn’t to be an expert, you don’t need to be a technology expert to engage in conversations with young people, with kids about these tools. Just, it’s important to stay curious and connected with them. And we do have materials on our website that are available to support in those conversations as well. So, thank you so much for your time today.

[Diana Graber]: Thank you so much, that was really interesting. And I just, there’s so much that parents have to know these days. We always laugh that, you know, we didn’t grow up with social media, so we couldn’t help with that. Well, we certainly didn’t grow up with AI. So, it’s a whole new world out there for our kids. So here’s a present for, question for you. How do I talk to my teen about concerns of using ChatGPT as a therapist or friend?

[Robbie Torney]: Yeah, I think it’s important for teens to understand that first real teens have been harmed, in some cases extremely tragically, as a result of using ChatGPT as a companion or a friend. And to understand as I shared in the presentation, that ChatGPT doesn’t really know what is good for you, it doesn’t know what the impacts of its advice are. And as a result of that, while it might feel private, while it might feel always available, while it might feel non-judgmental and that those could be benefits, you also have to recognize that what you’re getting out of the system isn’t really advice that’s going to support you in the way that a real person would. It’s not really going to know what’s going on in your life, and that there is an engagement motive in many cases, to keep you chatting with the chatbot versus connecting with real-world help. I think when teens understand that there’s a difference there, some of the potential things that feel beneficial about the technology start to maybe be things that they can question or understand a little bit more contextually.

[Diana Graber]: And that kind of goes to the next question because you’re talking about teens, but what age do you think kids should use AI tools like this? And we know that they’re using them very young. But, what’s your opinion on that? 

[Robbie Torney]: Yeah, our last Common Sense Census of Media Use of kids 0 through 8 showed that 50%, nearly, 49% of kids ages 5-8 are already using AI chatbots for learning. Look, we root a lot of our research in developmental appropriateness and developmental match, and I think a general guiding principle that as you go younger, the technology becomes less developmentally appropriate for kids. Certainly by the time that kids are in high school, there are important college and career and workforce reasons why it’s important for kids to be literate in the use of AI systems. But, we definitely encourage in general, no AI for kids under 5, only supervised use for those elementary and early middle years, and then that use with agreements and conversations and check-ins in the high school years just as a general frame. Of course, it depends on every kid, it depends on every family, it depends on every school system. Like, parents have to know their kids. But, the younger you go is a no-go zone. And we recently did a risk assessment on AI toys, which are embodied AI companions for kids under 5 in some cases. And those are extremely risky and a bad developmental match for the majority of young children. 

[Diana Graber]: Great advice. Thank you so much, Robbie. I was thinking how hard that is when AI is all around us, right? Makes it additionally difficult for parents. So, thank you so much. Alright, next up we have: Doctor Pamela Wisniewski is the principal research scientist at the International Computer Science Institute and a nonresident fellow at the Algorithmic Fairness and Opacity Working Group at the University of California, Berkeley. She is an expert in social computing, privacy and online safety for adolescents. As the founder and director of the Sociotechnical Interaction Research Lab and Teenovate, she has authored over 200 peer reviewed publications, and her work has been featured in numerous popular media outlets. Doctor Wisniewski’s – I hope I’m saying that right – efforts have helped shape research, policy and design practices aimed at creating safer and more inclusive digital experiences for young people. Welcome.

[Dr. Pamela Wisniewski]: Hi. Thank you for having me. It’s Wisniewski. You got it right the first time, so thank you. Everybody calls me Pam or Doctor W. And today I’m going to close out this panel, with a discussion of hope, of moving the discussion from, like, a moral panic, because we’re genuinely concerned about our kids to one of digital literacy by looking at practical ways that we can help our teens navigate privacy and online risks. So just a little bit of background about myself. My research is in human computer interaction or social computing. So how we use technology to connect with one another, as well as privacy, the unintended consequences of when we do, and over a decade of my research has been focused on adolescent online safety and risk, working directly with parents and teens to understand their lived experiences and the solutions that they want, to keep them safe online. A key point that I’ve learned from my research throughout the years about privacy is that we have to remember that privacy is usually a secondary goal. Nobody goes online to be more private. And so when we’re thinking about teaching our kids about privacy, we also have to think about the benefits and the positive things, why they’re going online to seek information, and resources and support and connection or civil engagement. And so it’s really important for us to consider that and be able to help them stay private and protect their privacy while accomplishing those prosocial goals. So, one of the things that I want to tackle directly is that right now, we’re going through this era of a fear-based moral panic where while there’s legitimate problems, with social media and AI companions and we don’t really know what to do with them. And so we’re turning to the regulatory environment of going towards bans. One of the things that we have to be careful about is that when we take a fear-based approach to online safety, we tend to come up with suboptimal results. For instance, by banning social media, some of the risks of that might be that teens are, installing VPNs to be able to get around it or going to the dark web, to actually pushing them to even more dangerous platforms than the ones that we’re trying to protect them from. So, in all of my research, in talking with parents and teens, we’ve really pushed for a more middle ground solution than one that goes to the extremes. And so to tell you a little bit more about why that’s important is if we think about the psychology of fear, thinking that, you know, everybody online is a predator, and telling our kids that you can’t talk to anybody new. The problem is that our kids start to tune us out. Because almost all kids, or teens these days have met somebody new online. And some of those interactions have been positive and beneficial. And so when we change the conversation to one of trying to protect them by isolating them, that’s when our teens are going to start to listen to us and take our advice on how to protect themselves, from unsafe situations. Unfortunately, right now, we’re sending teens mixed messages, what we want while we want our teens to care about their privacy, we’re using privacy invasive parental control apps that track every text message that they send or receive, or their fine grained location whenever they go somewhere. I’m a Gen Xer, and so if you can equate that to when you were a teenager and you had your landline with the long cord and hiding yourself in the in the closet so that you could have a private conversation with your friends, realize that we’re taking that level of autonomy and privacy away from our kids, in many cases by using overly invasive and controlling ways to make sure that they’re safe online. So we want teens to earn our trust. So at the same time, we’re not giving them that opportunity to do so. The problem with fear-based approaches to online safety is that they’re akin to abstinence only based prevention, like we’ve heard about in other risk domains, such as sexual health. And one of the things that the evidence consistently points to in the research is that abstinence only based approaches are not effective in and of themselves. Instead, we have to teach teens practical ways to protect themselves in a risk-based environment. We’re assuming that teens cannot handle risks on their own and that shielding them is somehow protecting them, whereas what by shielding them does it doesn’t empower them or teach them how to protect themselves online. One of the things that I see over and over again in our research is that many parents will give their young children open access to technology, and then they will…they will have a sense of betrayal when something goes wrong. For instance, this quote here from the Parent is “my heart is officially broken. I just caught my 11 year old watching inappropriate videos on YouTube. I was blind enough to believe I could trust him.” My concern with this type of sentiment is that online safety isn’t necessarily a matter of trust. Online safety is a learned skill that we need to think about how to teach our teens, and our younger children, to use the technology like a tool. As an example, nobody would give their teen the, the 16 year old teen, the keys to their car, without any driving lessons. And then they wouldn’t blame them if they got into a wreck without those lessons. But unfortunately, what many of us are doing as parents is that we’re giving our younger children these technologies and then, and then feeling shocked or dismayed, once it doesn’t turn out the way that we want. And so, again, we need to give teens the tools and teach them how to be safe online instead of using abstinence and fear based approaches to try to keep them safe. So one of the things that has studying families has taught me, is that one: stricter parental controls without a collaborative, trust-based approach with parents does not equate to better or more safety for kids. Instead, this often leads to teen secrecy, circumvention, and boundary conflict with within families. So, the more that we can get buy-in from our teens on the healthy digital behaviors that they want to pursue online, the better that we can regulate that relationship. Also, restrictive parental mediation, such as not allowing your teen on social media at all, may result in fewer online disclosures from teens, but what we found in our research is that active mediation, scaffolding the skills so that your teen can be on social media, like more and more over time. What actually supports teens’ risk coping skills and their autonomy, meaning teaching them how to block and report or, what information to share or not. Those are learned skills that we need to be teaching over time. And then one of the most concerning things we found, in a diary study over two months with parents and teens, was that they’re often unaware of the risk experiences their teens encounter online. And as parents, we often respond with a more punishment based approach when even though a teen isn’t the one who perpetrated a risk online will punish them and say, we’re taking the technology away because something bad happens. Over and over again, teens have told us that, when their parents approach them with this punishment based mindset, their response is to not go to their parents for help when they need it. So again, this is just saying that hope-based approach of like work with your teen instead of against them. Now, working directly with teens really opened up our eyes. One of the things that we found is that teens treat privacy as a learning process by disclosing some sensitive information, but then taking corrective actions once risky interactions make the consequences feel real. So realizing this about teens means that we should be giving them actionable advice of what to disclose and when instead of giving them all or nothing advice like don’t share anything online. We also found that resilience plays a key role in adolescent online safety – that even teens, who experience a lot of risk online, reported less negative effects from it when they had that inner resilience to be able to overcome those experiences. But in doing this, they had to go through this risk as a learning process to develop those skills. And finally, another kind of thing that we found from teens is that most teens actively manage the risks that they encounter online, and risk is, natural, and developmentally appropriate part of growing up in adolescence. So overly restricting them online could unintentionally have the problem of stunting their development. So these are some of the digital privacy threats that we can see teens encounter online. And these aren’t very much different than the risks that you and I encounter online. The difference here is when we talk to teens, we need to say that privacy isn’t just about what we put out there online. It’s about that platforms collect data about you and can use that to feed the algorithms, that different platforms can aggregate information and that other people can share information about you that you don’t want to be shared. So seeing it as a relational and interpersonal process rather than just a binary process of share or not share. We also need to teach our teens that there are bad actors online that can, perpetrate bullying or cyber grooming. So one of the projects that we’re working on is creating a conversational agent that shows teens the typical approaches to cyber grooming so that they can identify those manipulative behaviors and take protective actions. There’s also other problems that teens face, such as their digital footprint over time. So instead of saying don’t put anything online, telling our teens practical advice, like reviewing your social media accounts and maybe turning things private or hidden over time, between the important life transitions such as going from high school to college is one way that we can manage that digital footprint by ourselves. So really taking an approach of a multifaceted approach to privacy protection is, sitting down with your teen to, check their, their default social media privacy settings, making sure that it’s turned to friends only, telling teens that it’s really important to only accept friend requests from people you know, in real life. Instead of, you know, reviewing every message that’s sent / received, instead reviewing the follower list and talking to your teen about, hey, is this somebody you know and trust and turning it into a conversation so that they can make healthy decisions for themselves? Other things that are more technical, teaching teens how to turn off geotagging for their photos if they’re shared online so that people won’t know their location, telling them to avoid live check ins so that people can’t stalk them in physical air spaces and then, making sure that you talk more about trust online and who people should be talking to, rather than it being kind of a personal failure on their part for sharing too much. And then I do want to talk a little bit more about this, this process of like, images and sexual, sensitive content. One of the things that we found in our research with teens is that just like we had the basis when we were growing up, like first base, second base, third, sexting has actually become part of that romantic, courtship process for many teens. And so instead of taking an abstinence only approach to sexting, which is related to privacy, we need to again turn those conversations to safe sexting conversations like we talk about, sexual behavior. So we might tell our teens, you know, wait til marriage until you have sex. But then we can also tell teens that and teach them about birth control and – and protection. Similarly, we should tell our teens to not share intimate photos of themselves online, but give them the scaffolding in case they decide to do that. So using end to end encryption, not showing any faces, if they happen to do this, and then also just being open with them so that they can come to you to ask these types of questions instead of feeling like they’re deviant or going to be shamed if they really have these legitimate questions and they’re trying to navigate these complicated interpersonal, situations that you and I necessarily didn’t have growing up. So, again, the goal here is to not eliminate all risks online and to keep teens completely private because privacy is that secondary concern, but to help teens develop practical skills and also give them the support system needed to navigate difficult spaces safely, so that they can feel empowered and supported. And again, here are some additional resources, from Common Sense Media, you just heard a presentation from, as well as the Family Online Safety Institute board, which provides resources on preventing online, sexual exploitation, as well as others. Thank you very much.

[Diana Graber]: Thank you so much, Pam. I really loved, and I wrote it down, that you encourage us not to take a fear based approach because it’s a suboptimal approach. And I couldn’t agree more. There’s so much these kids need to learn about, and we really need to prepare them for this digital world. So I’m excited that we have a half an hour now to have a lot of Q&A here. And a reminder to those of you who are watching, you can add your questions right now in the Q&A box. And, they will be sent to me. So let’s get started. And, maybe I’ll start with you, Pam, because we didn’t get a chance to get to this question. But what if a parent doesn’t use TikTok or Instagram or any of this stuff? How can they best prepare their kids to be safe online? 

[Dr. Pamela Wisniewski]: That is a great question, and my answer is always “curiosity.” And when I say that, I mean your kid wants to feel like you care about what they’re doing in a non-judgmental and open way. And so instead of doing it from, like, a didactic – “I’m going to lecture you. You better have your privacy settings. But I just don’t know how to manage that.” Approach it by assuming competence and saying, “Hey, you know, I know that you’re on TikTok. Could you teach me about the privacy settings and what you do to keep yourself safe on this platform? And then if you don’t know how, are there some questions you have and maybe we can search for that online so that we can both work on finding a solution together.” And that changes the conversation from one that is more confrontational to one that is more collaborative. 

[Diana Graber]: All right. So I kind of have a question here that almost anybody could answer. But in addition to curiosity, which is such a great, great thing to remind parents of, be curious because you’re going to learn a ton that way. What are the other places that people can turn to if they want to learn more about the current landscape right now?

[Dr. Pamela Wisniewski]: Well, so I mean, when you say current landscape, do you mean, the regulatory landscape about the or do you mean like the privacy settings and what apps within the ecosystem to watch out for? 

[Diana Graber]: Well, I’m going to, I’m going to actually point this, I think, to Robbie because he can answer the question I was trying to ask, but I’m looking for resources that parents can go to. Common Sense Media is a terrific resource, for example. But, you know, this is a great group to ask, can you each share, like where could parents go if they want to learn more about this digital landscape? And I’ll start with Robbie, Robbie. 

[Robbie Torney]: Thanks for the question. I mean, it’s a – it’s a fast moving space and it’s evolving every day. So I think it’s also important to give yourself grace. You know, our site and other sites like ours are intended to provide some of those up to date resources to, like, help you stay in the know and feel like the burden isn’t all on you. I would also encourage parents to talk to their real world support groups. Parents of similarly aged kids at your school are likely grappling with a lot of the exact same questions that you are, and some of those conversations that you’re… can have in real life, with real people in your community, are really powerful. Community is from the research, a really strong protective factor in helping kids navigate online spaces safely. So yes, go to the websites. Yes. There’s other places. I’m sure the other panels have some really good suggestions for all the amazing work that’s out there. And don’t forget that the folks around you that are part of your community, are a vital resource as well.

[Diane Graber]: Great. Well, thank you. And I think this is a good question for Matthew. How can I model good digital tech use and media literacy practices?

[Matthew Johnson]: Yeah, that’s really a great question. And a lot of it is reflecting on the messages that we’re sending with our own tech use. And that’s one of the things that we have seen in research is that, you know, young people are just as critical of adults, tech use as adults frequently are of kids tech use. But as parents, the message that we send through what we do is more powerful than anything we can say. And it’s one of the reasons why we do recommend that when you have household rules and routines around using devices and what we do with devices, that as much as possible, make sure those apply to you as well. So if you have a rule that says no devices at the dinner table, well, you should follow it too, and that if you have to make an exception, then you explain why you’re making an exception. So you can show that this is still – even though you’re making an exception, it’s still in accordance with the values that you’re trying to communicate through those household rules. And it really is establishing rules and routines that I think is the most important thing that parents can do. Because first of all, we know from more than 20 years of research that it does have an impact on how kids behave. Particularly if those rules are not intended to be restrictive, if they’re not communicated as “thou shalt nots”, but rather are treated as the beginning of a conversation, rather than the end of one, if they are guidelines on how to do things, If the purpose of the rules is to increase kids independence. And the other thing that we do recommend is that whatever rules you have, rule number one should always be that if kids ever have a problem, with relating to technology, however small, however big, they should come to you, for help and that you have to agree as a parent that you are not going to freak out. You are not going to overreact. You are not going to immediately take away access to the device or the platform or the app. You know, some consequences, of course, may be appropriate when things have shaken out, but your immediate response should always be to stay calm. Let’s figure out how to fix this problem. Because if you do overreact, they’re not going to come to you a second time. And that’s the absolute most important thing that they feel they can come to you for help if they need it. 

[Diana Graber]: So true. Thank you for that. So, Anne, I was going to say to you that I loved the fact that you mentioned that you think algorithmic literacy is so important for kids to learn. And so I started writing down all the literacies our kids need to know. In addition to learning about algorithms, they need privacy literacy. They need AI literacy. They need digital literacy. They need media literacy. That’s a lot for a parent to do. So, Anne, what is your best advice? Let’s just start looking at algorithms. What’s the best way for a parent to have their kids understand how these work and to teach them about them?

[Dr. Anne Oeldorf-Hirsch]: Yeah, that’s a great question. And you’re right. There are so many literacies. Unfortunately, I think a lot of them are linked together, right. The algorithmic literacy, I mean, the algorithms are part of social media, which we’re already talking about. They’re related to AI because they’re also going to show the AI content. And so I in one way it’s overwhelming. And in another way, like I said, fortunately they’re linked. I think what some others have brought up, like Matthew and Robbie is that the parental conversation…right. So sitting down with children and seeing, and I get of course this takes time, but looking at things together just like also, Pam mentioned, you know, looking at the privacy settings together. I think it can all build into that, maybe scrolling through that feed and seeing what comes up and say, hey, do you know why this is coming up? Or, is this something you follow, or is this something that was recommended to you? I think a lot of that is in practice. I mean, even with my college students sometimes just taking a few minutes to sit down and say, “hey, scroll through your feed and tell me why you think, or your why you’re seeing what you think you’re seeing.” And then we get into a really good conversation about it.

[Diane Graber]: Yeah. Thank you. And I was just thinking, like a parent watching this is probably going, where am I going to find time of the day to talk to my kids about all of these things? And it’s – it’s so overwhelming right now. And I would say if there’s one thing a parent should do, they should call their school and they should say, hey, are you teaching any of these literacies? Because I just saw this thing and I saw how important it is for kids to learn this, and I don’t have time to do it because I think this is really incumbent on all of us together. And I think schools need to play a big part in this. So, if you’re feeling overwhelmed, that would be my one tip for the day. Call your school. Making sure they’re teaching at least one of these literacies. All right. We’re going to move to the next question, and I’m going to leave this kind of wide open to whoever wants to answer it. What are the most important things parents need to consider before they get their child a cell phone? Or I’m going to change that to a smartphone. I don’t know who wants to take that one. Matthew, I see I see your hand up. You want to grab that?

[Matthew Johnson]: Sure. I’m happy to share, but I did want to do a teeny tiny bit of myth busting. At least based on our data. So, as you mentioned earlier, we’re based in Canada. Our data is all from Canadian youth and parents. But one of the things that I really want to always highlight when we get this question is that according to our research, it is not young people who are asking for phones. Our research found that two thirds of young people said they had gotten their first phone, not because they asked for it, but because their parents wanted to be able to keep in touch with them. So the question to me is, when are you ready to give your child a phone? With the consideration that a phone can have positive or negative impact, it can make kids more independent. It can make it more possible for them to spend time outside and connect with friends. But of course it can also have… It also has the potential for a lot of negative impacts, increased, you know, distraction, you know, more difficulty supervising them. So really think what is needed in your life that the phone is going to make easier. And are your kids ready for it? Are they already following the rules that you’ve established? Do you feel they’ve internalized those rules? Do they make good decisions and take good care of the technology and the other things that they already have? Are they ready for that degree of independence, or are there other ways that you can keep in touch with them until you feel they really are ready for the phone?

[Diana Graber]: Great advice. You know, that was one of those age questions that we get asked all the time, too. And so we decided to answer that question by making a checklist. It’s on the Cyberwise website where we’re like, if you’re going to give your child a phone, make sure they know these things, can they maintain privacy, can to take care of their reputation? Do they know what to do if they’re cyberbullied? I mean, it’s a pretty long list, right? And so I think that’s a good thing for parents to consider. It’s like you are giving your child a device that’s going to give them a window to the whole world and all the people in it. They need to have these skills in place before they can really use that thing wisely. So, thank you. I think we agree on that topic. All right. I got another one that I’m going to open up to anybody. It’s an important question. What are the warning signs that a child might need more guardrails managing tech. How can parents best intervene? Who wants to tackle that one?

[Robbie Torney]: I will start briefly and then maybe someone else can jump in. I think the research on tech overdependence … so I’m not talking about social media specifically or AI specifically, but if you just zoom out and think about which groups of teens are more vulnerable to overdependence on technology, there’s been a lot of stability in that research, and it tends to be boys, people who have had a loss in their life or a transition, young people who don’t have a lot of friends or are isolated and young people who are experiencing a mental health condition. So I think, all of those together are just intended to, like, inform parents and families, like, if you’re a young person that  falls into one of those categories. Technology can be really, really beneficial for all of those groups as well. I’m not saying don’t give tech to those groups, but you should be aware that there are increased risks associated with tech use for those groups for particular reasons. And then so many of the things that you are looking for in terms of social isolation, changes in behavior, changes in mood. It’s tricky because those could also be normal teenage behaviors. But, you know, the one thing that we would say here is like if you are concerned, check in with your teen, have that conversation. Don’t wait. We have seen, unfortunately, that not intervening, not checking in not, getting involved when teens are increasingly spending more time with that device, increasingly distraught when that device is removed. That that can be quite tragic.

[Diana Graber]: Thank you. That’s such an important one. And I think, you know, it happens before teenage years, too. So keep your eyes open if you see those warning signals and step in. And that’s what I love too, about teaching, especially digital citizenship to students, because we really teach kids to look out for each other. I mean, that’s the beautiful thing about being online is that you see these behaviors in kids, you know, that are – that are learning to be empathetic and open to their peers. If they see things like that, we teach them how to be up-standers and step in and either give comfort to the child or take it to a trusted adult. I mean, there’s so many ways that we can all see when a child needs help, and we should teach our kids how to look out for each other, I think. Alright, so here’s a couple more questions. How do we teach AI safety? Literacy, I guess. Is there a good way to integrate AI into curricula? That’s a good one. Who wants to grab that one? I want to hear I’m going to have to call on you like a school teacher does. Pam, do you want to grab that one?

[Dr. Pamela Wisniewski]: Yeah. And I’m going to actually approach this from the perspective of a mom, who has an 11 year old daughter. She found, talkie, which is like an AI companion/grooming app before I even found it. And that raised the awareness of, like, you know, kids are getting on these apps sooner and without parental awareness. And she was on ChatGPT before I was. And so, the key is to… to ask and to understand how they’re using it, and to actually sit down with them and teach them some prompt literacy, because there’s a difference between saying, can you write this essay for me? Or can you answer this question for me? Instead of saying, here’s a paragraph I wrote, please tell me some ways that I can improve it. And so by sitting with our kids and interacting with the chat bot, for instance, we can teach them different ways and modes of interaction. The other thing that I want to say is that not all AI is equal. There’s some AI companions and apps that have stronger built-in guardrails than others. And so, for instance, there’s ones that are specifically for homework. And I would suggest, like, installing and getting one of those apps to help your kid with their homework instead of using, like, letting them use, like, character AI or something that’s more geared towards interpersonal and role playing. So again, making those decisions based on the context of use is going to be really important for your kids.

[Diane Graber]: That’s so right. And, you know, as you’re speaking, I was thinking about what Matthew and Robbie and Anne talked about all the literacies and, you know, AI literacy is really just digital and media literacy. It’s all one, you know what I mean? And granted, there’s a ton of parts to it. You know, there’s privacy, there’s algorithms or misinformation. But, you know, we have to look at it as one big, you know, literacy that kids need to learn. And again, so many things parents can do, but also so many things schools can do, because that should be integrated into what we’re already teaching them because it is one and the same. So, you know, we can break it apart. And it seems confusing that again, we’re talking about digital media literacy. So would you guys agree with that? I see a lot of nodding heads there. All right I’m going to move to the next question. Here’s a good one. How can parents check if and how AI is used by their children? Oh, that’s a good one. That’s hard when AI is so invisible. But, who’s got a great tip on that one? Anne, how about you? Do you have a thought on that?

[Dr. Anne Oeldorf-Hirsch]: I will say this is probably not quite my expertise. I might defer to others. In terms of how you can tell it’s used, certainly. I mean, there are these apps, right? So, if you have that access to the phone and you can see what sort of apps are installed, but I might defer to some of those that are a little bit more AI specific, I think. I don’t want to, I mean, too much. Yeah. Go ahead.

[Diana Graber]: I am going to counter you on that, because algorithms are AI, right?

[Dr. Anne Oeldorf-Hirsch]: Right. And so yes, yes.

[Diana Graber]: Let’s talk about that first. Like how would a – how does a parent even know if algorithms or what algorithms are impacting their child. Like how would they ask? How would you talk about that?

[Dr. Anne Oeldorf-Hirsch]: Sure. Yeah. So if we’re thinking about algorithms specifically, yes, algorithms are a type of AI. And when I hear AI, I think I also think of, you know, generative AI. But yes, algorithms are certainly a part of AI as well. Because they are learning from you and giving you that content. How to tell if they’re used… again, any of the major social media platforms automatically have algorithms, so there’s not really a way to turn that off. So if we go to Instagram, everything is going to be determined by an algorithm so that there is no way to get around it. So the default is: Yes, an algorithm has been used to share content in those spaces.

[Diane Graber]: Yeah. And you know what this is a perfect example. AI is everywhere, right? And you brought up a good point. You know, parents think of generative AI, they think of chatGPT, and you know, writing an essay for their kid. And it’s like, okay, AI is a lot more than that, folks. I mean, it’s Siri, it’s Alexa, it’s algorithms. It’s being it’s so integrated in our lives, it’s invisible. Almost. And that’s a really hard thing to prepare kids for, is this invisibility. So, does anyone else want to try to address it? Like, how do you prepare kids for an AI world? Like, what do you do and how early do you start?

[Robbie Torney]: Our digital Literacy and wellbeing curriculum starts introducing concepts related to AI literacy as early as kindergarten. The idea that these are machines, not people, that they behave differently. And I think as I heard in some of the other presenters’ comments today, the approach has to be to give kids agency to navigate this digital space. If you just look at the proliferation of apps and platforms that young people are using, it’s ever expanding. It’s not something where you can keep a lid on it. I know we’re not talking about regulation today, but that’s one of the signals that we see that regulation is necessary to prevent some of that proliferation. But as long as there is a sort of ability for teens to have access to these platforms and services on their own devices in an increasing, ever expanding way, it’s going to be very hard to take a just block it, just keep kids off of it approach. Our research in other areas also shows that kids are exposed to a wide range of content gambling, sexual content, extremist content of different types that comes into them through some of the feeds that, you know, Anna was describing algorithmically. It’s not something where you can realistically shelter and protect your child from, but you can give your kids the thinking skills, the sort of internet street smarts, the ability to come to know when they need to come to you to be able to navigate that.

[Diane Graber]: So true. Thank you for that. Okay, our next question. Do you know of any family, parent and teen oriented classes, lessons to take together or concurrently? I can actually answer that. Cyber Civics, a lot of homeschoolers use our curriculum, and we really encourage them to do it in groups. They don’t need to even have tech. They don’t need to have tech knowledge. But that’s just one way. And I’m sure panelists here know of other places that families can go to get these kind of lessons. Matthew, I believe your lessons are online for families.

[Matthew Johnson]: Yeah, so we do have, at mediasmarts.ca, we have lots of resources for parents, you know, guides, tip sheets, but also workshops and, also educational games that both parents and youth can use. We also have, a full K-12 curriculum, on every aspect of digital media literacy, and everything’s free to download without any kind of, sign up or anything like that. And, you know. Yeah, absolutely. With things like Cyber Civics and Common Sense Media, there are so many great resources available that cover really every aspect of digital media literacy and online safety that parents could want to know more about.

[Diane Graber]: Absolutely. And I believe, as a follow up to this webinar that Children and Screens will be providing you with all those resources. So please look out there. There are a lot of great resources available for parents and families. All right. I think we have time for the next one. Do you recommend any particular safety settings to protect my teen? I always answer that saying, yeah, the one in here, that’s the only safety setting that works with teens. But, let me see if someone has a better answer than that. Yes, Pam?

[Dr. Pamela Wisniewski]: So, I think it’s really important to not give kids the keys to the world and then take them away once something bad happens. So if they’re setting up a social media account for the first time, working with them to have the most protective privacy settings to start with, turning a profile to be friends only is like one of the key ways of making sure that you know, you’re not just putting something out there for the public web. The other thing is, the same advice goes for a mobile phone. Like if you have an Android phone, using Google Family Link to start with some basic monitoring and then letting go of the reins over time. It’s kind of like riding a bike and putting on training wheels to start, so that they can scaffold those skills. And so I think that’s really going to be the key point, is putting those safety guardrails and the default settings and everything to more protective when they’re younger, and then taking a developmental approach of taking off those training wheels over time as they learn and grow. Some of the things that I’ve seen parents do is the first time their kids get a social media account, they install the app only on their parent device, not on the kids device, so that they can do home monitoring for the early, you know, interactions. And then once they see that their kid is, like – learned good digital safety practices and you know how to say good digital citizen, practices online as well, then maybe moving that to a child’s device. So again, any way that we can scaffold it by putting on those training wheels first and not taking them off until some competency has been learned, is the best approach.

[Matthew Johnson]: And if I can just add, a lot of platforms, a lot of apps now are providing stricter default settings for younger teens. So Instagram, TikTok and YouTube all have more restrictive settings if they know that a user is between 13 and, it varies, 15 or 16 on most of those. But of course, that only works if you’ve given the right age. They need to know that the user actually is between 13 and 16. So it’s one reason why it is really important, you know, to wait until kids are 13, to make social media accounts that are theirs exclusively, and then make sure that they do have those stricter default settings.

[Diane Graber]: So true. Anne, I think you brought up a good point earlier that, you know, kids, they’re programmed to make mistakes, right? That’s what kids do. And that’s how they learn. And hopefully they’ll make these mistakes with a parent looking over their shoulder or a school providing curriculum or something so that when they make the mistake, it’s not catastrophic. And they learn from it and they avoid making the bigger mistakes as they get older. So that’s our job as parents and teachers. We really want to make sure that we scaffold those skills, as you so well said. All right. We’re kind of coming to our end here. So, I want to start winding down and thank you everyone for attending today. It’s been a really great conversation, and I could actually spend probably the next two hours talking to all of you, but I’m sure you all have something to do, so I will say goodbye and thank you, to our panelists for sharing their expertise today. Thank you to everyone who joined us. And thank you to children and screens for hosting this important conversation. As we’ve heard, the technologies shaping children’s digital lives are evolving quickly. Understanding how these systems work can help parents and educators guide young people towards more informed, thoughtful, and safer engagement with digital media. I hope you found this session helpful in doing just that. On behalf of Children and Screens, thank you for joining us and for your commitment to children’s well-being in a digital world. We hope to see you at future Ask the Expert Webinars. That’s a mouthful. Have a great afternoon everyone. And for me it’s still morning. So have a great morning if you’re on the West Coast. Thanks so much and have a great day.