Generative AI technologies like ChatGPT are changing the nature of online communication, work, creativity, and learning at a dizzying pace. What are the implications for children’s rights and safety of unfettered access to these powerful and largely unregulated technologies? What are the impacts on children’s social and cognitive development in this new, turbocharged Internet?

Children and Screens hosted the #AskTheExperts webinar “AI and Children: Risks and Opportunities of the Enhanced Internet” on December 6, 2023 from 12pm-1:30pm ET via Zoom. A panel of policy and child development specialists explored the new frontiers of the AI-enhanced internet, outlined its challenges and opportunities for children, and provided parents and families with a primer on guiding children’s use of generative AI technology with thoughtful dialogue and understanding.


  • Naomi Baron, PhD

    Professor Emerita of Linguistics, American University
  • Steven Vosloo, MCom

    Digital Foresight and Policy Specialist, UNICEF
  • Ying Xu, PhD

    Assistant Professor of Learning Sciences and Technology, University of Michigan
  • Christine Bywater, MA

    Assistant Director, Center To Support Excellence in Teaching (CSET), Project Lead, CRAFT, Stanford Graduate School of Education
  • Tracy Pizzo Frey, MBA

    Senior Advisor, Common Sense Media

ChatGPT’s public launch in 2022 immediately and radically changed the landscape of digital experiences. Though generative AI has been in existence for well over a decade, the past year has seen rapid development of generative AI models and their accessibility to the general public, raising both excitement and concern. Caregivers and educators have been particularly curious about potential uses of GenAI and its possible unintended consequences for young people. In this #AskTheExperts webinar, an interdisciplinary panel of experts explored this topic, addressing questions such as: How does AI impact children’s development, well-being, and safety? How are children interacting with AI, and what do they understand? How do we ensure these tools are safe?

00:00 Introduction

Kris Perry, MSW, Executive Director of Children and Screens: Institute of Digital Media and Child Development, introduces the webinar and panel moderator Naomi Baron, PhD, Professor Emerita of Linguistics at American University. Baron provides a brief definition and history of artificial intelligence as well as a concise overview of key terms for the webinar.

09:33 Ying Xu, PhD

Ying Xu, PhD, Assistant Professor of Learning Sciences and Technology at the University of Michigan, describes the impacts and implications of AI on holistic child development. She begins by summarizing important aspects of child development and shares research on how children interact with AI. She then describes how these interactions can impact children’s social and academic development, depending on the child’s age. She discusses evidence on children’s trust in AI tools, and ends by emphasizing that joint parent-child engagement while interacting with AI is crucial to correct potentially inaccurate information from conversational agents and ensure children engage with AI in a safe and healthy manner.

22:15 Steven Vosloo

Steven Vosloo, Technology, Policy and Innovation Specialist at UNICEF in the Global Office of Research and Foresight, discusses the importance of considering children’s rights in any conversation about artificial intelligence. He outlines several basic rights of children (as detailed in the Convention on the Rights of the Child), and how these can be enabled or constrained by generative AI. He emphasizes that the younger generation is a driving force in the adoption of generative AI tools, and he argues that children should be centered in technology’s design – not an afterthought. Next, Vosloo summarizes concerning features and risks of AI for children and shares resources from UNICEF’s AI for Children project including their policy guide, which can be a tool for researchers, advocates, educators and more.

34:03 Christine Bywater

Christine Bywater, Assistant Director of CSET at Stanford’s Graduate School of Education, provides an overview of AI literacy and what it means for youth, parents, and educators. She emphasizes the importance of understanding youth’s mindset with new tools like AI, and describes the affordances and risks new AI tools may pose in education settings. She shares the findings of a national survey on AI knowledge among adults, and suggests that a first step is to develop basic AI literacies for parents, educators, and researchers so that they may become better critical consumers and ethical designers. She shares resources from the CRAFT curriculum, and ends by encouraging parents and educators to dispel fear around AI, speak to students about bias within AI, and develop more confidence around using AI.

47:27 Tracy Pizzo Frey

Tracy Pizzo Frey, Senior Advisor for Common Sense Media, provides a primer on Common Sense Media’s AI ratings initiative. She discusses the history of the initiative and the eight AI principles that serve as its guiding values. She outlines the review process and summarizes the findings of the first ten platform reviews, including tools such as ChatGPT, Khanmigo, Bard, and DALL-E. She concludes with her top three takeaways and resources for talking with teens about AI.

01:00:29 Panel Q and A

Baron brings the panelists together for a group discussion addressing questions submitted by the audience. Panelists share insights on involving children in the use of AI and setting time limits on AI, ensuring children recognize that AI is still human-made and not “alive”, and empowering parents and educators to use their voice and influence for change. Other topics reviewed include co-designing curricular elements and technology, AGI, EdTech, and the risks and benefits around the accessibility of AI.

[Kris Perry]: Hello and welcome to today’s Ask the Experts webinar, AI and Children: Risks and Opportunities of the Enhanced Internet. I am Kris Perry, Executive Director of Children and Screens: Institute of Digital Media and Child Development. Artificial intelligence. Those two words elicit a wide range of reactions, from excitement, inspiration and awe to fear and concern. What is AI? In our last webinar, we explored algorithms and the role they play behind the scenes across digital platforms. Today, we take a deeper look at generative AI, the technology behind tools that can produce content, including text, images, audio and video, whether it’s a chatbot, a smart assistant, an interactive display, or social robots. What do we know about these technologies and the ways that children are interacting with them? How do children’s understanding and perceptions of AI affect their experiences and the associated impacts on social development and learning? How do we ensure that these tools are safe and developmentally appropriate for children? These are some of the questions that our outstanding interdisciplinary panel of experts will tackle today with their shared experience in education, child development, global policy and AI literacy. Now, I am pleased to introduce today’s moderator, Dr. Naomi Baron. Naomi is Professor Emerita of Linguistics at American University in Washington, DC. She is a former Guggenheim Fellow, Fulbright Fellow, visiting scholar at the Stanford Center for Advanced Study in the Behavioral Sciences, and a member of Children and Screens’ National Scientific Advisory Board. For more than 30 years, Baron has been studying the effects of technology on language, including the ways we speak, read, write and think. And she is the author of several books, including her latest, “Who Wrote This? How AI and the Allure of Efficiency Threaten Human Writing.” Welcome, Naomi.


[Dr. Naomi Baron]: Thank you so much, Kris. As Kris rightly says, AI is everywhere, on everyone’s mind. And that means also in everyone’s concerns. Just a little bit of what’s been going on relatively recently. One year and one week ago, OpenAI introduced a product called ChatGPT. I bet you’ve heard of it. The impact was explosive: more than a million users within the first two months. Now more to the present. Two weeks ago there was a big shake-up, you might have heard about it, at OpenAI, where Sam Altman, who was the CEO and co-founder of the company, was fired by the board. Then Microsoft hired him over the weekend. Then OpenAI rehired him back again as CEO. If you read the New York Times, you may have seen for the last couple of days front-page stories on AI that have a lot of issues going on. But what is AI? Very simply put, it’s using computers and software programs to perform the kind of cognitive work that humans do. That sounds like a good idea. But what happens if AI can do everything we can do? Another term you may have heard of recently, AGI, artificial general intelligence, is not innocuous. It’s the notion that AI programs could do all of the cognitive work that humans do and maybe even do it better than us. So the question is, is this possible? And if we have it, we need to think: do we want it? Should governments regulate it? A lot of the shake-up that’s going on at OpenAI is over the question of how much you release a technology that’s really powerful, in fact, maybe more powerful than we humans know how to control. And the reason Sam Altman was fired at first from OpenAI is he basically was interested more in the commercialization than the safety issues. Okay, generative AI. Everyone talks about the G part of ChatGPT, generation. What is that all about? If you look at the history of work on AI, much of it focused on language, understanding it and generating it.

There’s something called natural language processing, which is what AI researchers did a lot of for many, many decades. They still do. And that has two parts to it: natural language understanding and natural language generation. But it’s because of the kinds of software we have now and the concepts we have now that generation has really taken off. A really quick, lightning-fast history of how we got to where we are today. The fundamental idea behind generative AI technology has actually been around for now a smidge more than a decade. AI itself has an even longer history, starting in 1950, when Alan Turing published a paper called “Computing Machinery and Intelligence.” The first sentence in that paper is, ‘I propose to consider the question, can machines think?’ The term artificial intelligence came in 1956 at a conference at Dartmouth College. But it was in 2012 that a computer scientist from the University of Toronto, Geoffrey Hinton, and some of his graduate students started using what are called neural networks, and then these chips from NVIDIA called graphics processing units, to figure out how to really well identify images in the ImageNet competition. And it’s that technology that really became the basis of what we can now do in generative AI. Incidentally, one of the graduate students working with Geoffrey Hinton, Ilya Sutskever, is now the chief scientist at OpenAI. 2017, the so-called transformer architecture. Think about that GPT. What’s that G for? Generative. What’s the P for? Pre-trained. We’ll leave that in peace. The T is the transformer. It’s a particular type of algorithmic approach. That was 2017. 2019, OpenAI produces something called GPT-2. There was sort of a GPT-1, but we only talk about the 2, which combined this transformer architecture with what’s known as a large language model, which is lots and lots and lots, basically, of language data all put together.
And with these algorithms from the transformer architecture, you predict from this huge dataset what the next word is likely to be. 2020, GPT-3, you can see how this is progressing for OpenAI, came out and was licensed to lots of different companies for producing generative applications, but a small number of people used it, so most people weren’t aware of the power of this kind of technology. November 30th, 2022, we know what happened. ChatGPT was released, which really was an older version of the technology. But what it did is it made available to a lot of people, for free, the ability to use a chatbot, which, being free and really interesting and fun, millions of people took up. Now we have GPT-4, which, by the way, has scored in the 90th percentile on the multistate bar exam, the 93rd percentile on the SAT reading and writing, and the 99th percentile on the GRE verbal. There are lots of uses of generative AI, not just for language, but in computer coding and art and biological models for producing new drugs. Amazon is using this kind of technology to figure out what to put in which warehouse and how to do its delivery routes. But today’s conversation is about generative AI and education. The original response when ChatGPT came out was a huge fear: students are going to cheat. But over the past year, we’ve realized there are many other things we need to think about. We need to understand what kinds of experiments we could try out. We don’t have the answers yet. The technology is not done developing. But among the questions that we know are important to think about are: how do we understand what it means for children to make sense of AI? How do we address the risks of AI to children and the need for safety? How do we develop useful pedagogical designs that we can share with others? And what kinds of AI evaluation resources might we develop? We’re incredibly fortunate that today we have four experts addressing each one of these questions.
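The next-word prediction Baron describes can be sketched, very loosely, with a toy frequency model. This is a hedged illustration only: real large language models use transformer neural networks trained on vast datasets, not simple word counts, and the tiny corpus and function names here are invented for the example.

```python
# Toy next-word predictor: count which word follows each word in a tiny
# corpus, then predict the most frequently observed continuation. A real
# LLM is far more powerful, but the core task -- "given what came before,
# guess the next word" -- is the same.
from collections import Counter, defaultdict

corpus = (
    "children learn by asking questions . "
    "children learn by playing . "
    "children learn with others ."
).split()

# Tally each word's observed successors.
follows = defaultdict(Counter)
for current_word, next_word in zip(corpus, corpus[1:]):
    follows[current_word][next_word] += 1

def predict_next(word):
    """Return the most frequently observed continuation, or None."""
    candidates = follows.get(word)
    return candidates.most_common(1)[0][0] if candidates else None

print(predict_next("children"))  # -> learn ("learn" follows all 3 times)
print(predict_next("learn"))     # -> by ("by" appears twice, "with" once)
```

Scaling this idea up, with contexts of thousands of words and neural networks in place of count tables, is roughly what the transformer architecture made practical.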
Therefore, it’s my pleasure to introduce the first of our speakers, Dr. Ying Xu, who is an assistant professor in Learning Sciences and Technology at the University of Michigan. Her research is centered on designing and evaluating intelligent media. She has extensive experience collaborating with public media, schools, families and community organizations to develop media in ways that are safe and educationally beneficial for children. Ying, it’s all yours.


[Dr. Ying Xu]: Thank you for the introduction and for having me here today to be part of this very timely and interesting discussion around AI and children. So before we actually dive into AI, let’s think a little bit about what holistic child development actually means. As you know, growing from an infant to a young adult involves many complex and interrelated processes. It goes beyond just learning and cognitive development. It also involves physical and mental health, social skills and other non-cognitive aspects. But at its core, what drives this development is children’s opportunities to learn, to play and to engage in social interactions with others. And these others used to be only their parents, teachers, siblings and peers. With the addition of AI, children can now learn, play and engage with these technological others. And in these eight minutes, I’m going to give you a very quick snapshot of how these new forms of interaction can influence some of these developmental outcomes. So the first question is, how do children actually interact with AI? Researchers have been observing children in their homes and schools and have found that children interact with AI in two different ways. One is to ask specific fact-based questions to gather information. And the other one is less common, but children sometimes engage in personal or socially oriented conversations with AI. So here is a 30-second video I filmed just a couple of years ago in my own study that kind of showcases these two types of interactions.


[Video Recording]: ‘Okay, you go, do you know what’s six plus six?’ ‘The answer is 12.’ ‘12! That thing is smart. That thing is really smarter than me.’ ‘Hey Google, what’s your favorite princess?’ ‘It’s hard to pick a favorite.’ ‘Hey Google, my favorite princess is Elsa.’ ‘Okay, I’ll remember that. My favorite princess is Elsa.’


[Dr. Ying Xu]: These interactions might have implications for children’s social development.

One question people often have is whether talking with AI would make their children impolite. Children develop their social etiquette through interactions with others who model the appropriate behaviors, but given that AI might not always follow our social norms, children might make demands on AI using impolite language or even insult the AI that they are conversing with. People worry that children will carry this dynamic over into their real-life interactions with people. There is indeed tentative evidence that children can pick up linguistic routines through their conversations with AI. Knowing this, a lot of tech companies have started to implement measures to encourage children to use polite language. For example, if a child uses the word please, Alexa will now respond, ‘Thanks for asking so nicely.’ This could be a step in the right direction, but it also poses the risk of obscuring, at least from children’s perspectives, the boundaries between AI and humans. So another, perhaps related, question involves children’s trust in, or ability to critically evaluate, AI-generated information. We often talk about how AI like ChatGPT could generate inaccurate information; however, this issue is actually not unique to AI, as humans are subject to errors as well.

So researchers compared children’s trust of humans versus AI, and they found that children use similar strategies when judging the reliability of both. Often, their judgment is based on whether that informant has provided accurate information in the past and on the perceived expertise of the source. However, it appears that some children may be better at utilizing these strategies to calibrate their trust than others. This capability is likely affected by their background knowledge in the relevant subject area of the conversation, and probably a more sophisticated understanding of AI mechanisms, which we could call AI literacy. Enhancing AI literacy is an educational goal that could be taught, and we’ll probably hear more about this from the other panelists. So let’s move on and talk a little bit about the implications of children’s interaction with AI for academic development. The big question is, when students turn to AI for assistance with homework and assignments, are they engaging in the learning process or are they sidestepping it? The general sentiment is that the impact depends on the timing of the learning objectives. For younger learners, particularly those in elementary and middle school, the priority is to develop foundational skills. Relying on AI too early for tasks meant to develop these foundational skills could potentially hinder their development.

However, as students progress, especially when they are preparing for the workforce, the integration of AI tools may be beneficial. So that brings us to the crucial question regarding the role of AI in children’s learning. For AI to be a valuable tool, it shouldn’t just provide easy answers; rather, it should guide children in their journey of sense-making, inquiry and discovery. There is evidence that when AI is designed to guide children through the learning process, it can be quite effective. For instance, in a collaborative project with PBS Kids, AI was integrated into children’s television shows, enabling children to verbally interact with their favorite characters while watching STEM-related episodes. The AI’s dialogues are designed to prime children to engage in observation, prediction, pattern finding and problem solving.

I’m proud of this research project, and we have carried out multiple studies. One consistent finding is that when having interactions with the media character, children comprehend the science concepts better and are more motivated to think about the problems than children who watched the broadcast version that does not have the AI-assisted interactions. Lastly, however, even with the best design intentions, it is still important for parents and teachers to stay involved when children interact with AI. Children sometimes face difficulties in making their speech understood by voice systems, and this could lead to frustration. Moreover, when children interact with AI, there is a possibility that some responses might be inappropriate for children. So given these challenges, it is important to encourage engagement from parents or other caregivers, just like when children use other media technologies. This perspective has led to a growing call to shift from designing AI solely for children’s individual use to creating solutions that enhance existing social interactions. For instance, AI could be designed to support parent-child interactions by suggesting conversation starters or questions parents can ask their children during shared activities like reading. This approach might not only mitigate the risks associated with unsupervised AI interaction, but could also leverage technology to strengthen interpersonal bonds and enhance learning experiences. So that’s my eight-minute summary on AI and young kids.


[Dr. Naomi Baron]: Thank you so much. Well, I think you make me think about the balance between the importance of real social interaction, which we’ve learned a lot about, if we hadn’t known already, during the pandemic, and what kind of pedagogy can be effective from what kinds of sources. But the question that I’d like to ask you, if you don’t mind, is when should kids start interacting with AI chatbots like ChatGPT to answer questions, to learn? Is there an age when it’s too early? How would you advise parents and educators to go forward here?


[Dr. Ying Xu]: Yeah, this is a super interesting question, and we could approach it from two perspectives: the developmental stage of the child and the capability of the chatbot. The answer really lies in the synergy between these two aspects. So children are naturally curious and they always ask ‘why’ questions, but their ability to ask good questions, which are questions that have meaningful answers, develops over time. This development might depend on their language skills and understanding of the world, as well as their awareness of what others know or don’t know. So you could imagine, the better a child can formulate a question, the more effectively a chatbot can respond. Now, from the chatbot’s perspective, it is worth noting that ChatGPT in particular requires the user, at least as of now, to be at least 13 years old, with parental consent needed for those between 13 and 18. So ChatGPT might not be something that is designed for very young kids. But there are other chatbots, like the kids’ versions of the Amazon Echo Dot, that are designed specifically for young children. Yet even with these child-friendly chatbots, we still face a number of challenges. First, can the chatbot understand children’s questions? Second, is the content provided by the chatbot accurate and suitable for children? And third is the language issue: can a chatbot convey the information in a manner that children can understand? So the bottom line is, since smart speakers and other tools are already available and present in children’s homes, it could be quite difficult to strictly limit children’s access to them. So I believe that with guidance and support from parents and other caring adults, it can be fine for even very young children to engage in asking questions through chatbots. We could consider this an additional learning experience for children.
However, it is very important for parents to be aware of the information provided by the chatbot and, if necessary, rephrase or supplement it based on the child’s needs.


[Dr. Naomi Baron]: You’ve given us a lot to think about, and you remind me, when you’re talking about the importance of how a question is asked, that’s exactly the issue of prompt engineering that’s coming up for adults using ChatGPT or other kinds of chatbots for more sophisticated purposes. Thank you so much, Ying, for your suggestions. Our second speaker is Steven Vosloo, who’s a technology, policy and innovation specialist at UNICEF Innocenti, Global Office of Research and Foresight. He works at the intersection of children, emerging tech and policy, covering issues such as children and AI, digital disinformation, the metaverse, neurotechnology and digital equality. Steven will be speaking with us about AI safety and children’s rights, the importance of safety by design, and regulatory considerations. Steven, the floor is yours.


[Steven Vosloo]: So, thank you. And hopefully my points will follow well from Ying’s excellent presentation around how children and AI interact. I wanted to just zoom out a little bit and, like you said, now look at some of the risks and concerns that we have. But first, this is just a quick reminder of children’s rights, and this is the North Star for UNICEF and for anybody working with children, really. The Convention on the Rights of the Child from 1989 is the most ratified U.N. treaty of all time. And so you can see that these aren’t all children’s rights, but they are the ones that often interact with digital technologies. So at the bottom, the right to be protected, obviously, from discrimination and exploitation; the right to play, which is actually a UN right, which is amazing; the right to receive quality education, obtain information, freedom of thought, etc. And all of these can be enabled by technology, including AI and generative AI, or they can be constrained. And even though I’m talking about the risks, we very much believe in, and UNICEF has done work on, all of the opportunities for upholding these rights through AI and generative AI. I want to quickly share just three data points from two reports that I’ve recently seen. The one at top left is from Ofcom in the UK, and this is very recent. It shows that Gen Z is really driving the adoption of generative AI: four-fifths of online teenagers aged 13 to 17 now use generative AI tools, as do 40% of the younger 7-to-12-year-old cohort, and the most popular tool among 7-to-17-year-olds is Snapchat’s AI. So that’s the one. The other study that you may have seen was commissioned by FOSI and released recently at their event. And there are two data points that I think are interesting; the one is on the right.

So it was research done with parents and teenagers in the US, Germany and Japan. And here we can see that the teens and the parents used generative AI to pretty much the same degree, which is unusual. Usually that does not happen with technology; usually parents are way behind, like with social media or gaming. This is different from what they found in the UK, which is also interesting. And in this data point at the bottom, they asked the teens in particular to select the top two ways they would be most interested in using generative AI in the future. And this is how many selected emotional support: at least a third, or even up to half in Japan. So we know that children use technology. Children and youth are the largest online cohort of any age group. And yet we also have this, and this is what one of the participants said; this is from a workshop on AI that we ran with adolescents in Brazil a few years ago. The technology, we know, is often not designed with children in mind, and yet they’re at the forefront of using it, which raises interesting questions and opportunities, but also concerns. In 2021, we released this policy guidance on AI for children. It’s from before generative AI, but as you’ll see, many of the same principles still apply. And so I want to share this as a resource: one, because it’s still very applicable, and two, because at the website you will find not only the guidance but also a bunch of case studies that we did with organizations that applied the AI principles with children. And then more recently we have a paper on generative AI, just a short, brief piece on risks and opportunities. So, let’s look at some of these risks, these concerns, these harms. Some of these come from the policy guidance, and others we have thought of since ChatGPT, which, as you said, Naomi, is not new, but it really put this on everyone’s radar, the public’s radar.
So this idea of systemic and automated discrimination and exclusion through bias. An obvious example you can think of now: if you go to many of the text-to-image generators and you say, give me an image of a CEO, you’ll get a white male. And that’s because of the data it’s been trained on, which is obviously deeply exclusionary for young people of color, for girls who want to get into IT. So that’s one example of exclusion. The other is the limitations on children’s opportunities and development from AI content, and the three examples we can give are these. First, persuasive misinformation and disinformation, and I’ve used the word persuasive because increasingly it’s impossible to tell whether a text, or an image, was generated by a human or by AI. Second, it can skew children’s worldview. And again, this is about the data that’s been used in the models. Because much of the data comes from the global North, from the US, from Europe, or from the cohort of people who have created Internet content, a lot of it doesn’t represent the worldview of people in the global South, and obviously this is an issue for UNICEF, which is a global organization. And so children don’t see themselves represented in the stories or in the examples that these tools give to help with their homework. We’ve also seen research that says when children use large language models to help them write an essay, the bias in the model, and this is unintentional bias, does shape the positions that they take in writing that essay and whether they’re for something or against it. And so, over time, this could nudge and change how you see the world. I often think that previously there was a concern that algorithms have moved the minority on the edges to extreme positions.
But perhaps the new danger is that the majority in the middle, or a majority of users, get moved to the middle with a more generic, homogenous worldview that doesn’t represent everyone. And then lastly, inappropriate emotional support. We don’t have to spend too much time on that, but if you have a chatbot and you’re asking it for help with really serious concerns at home or personal issues, and it gives you the wrong support, that’s deeply problematic. We know about infringement on data protection and privacy rights. That’s not a new issue, but where it becomes new in the age of generative AI is in more intimate experiences. So you could be asking a large language model not just for help with your science project; you might say you want to study science, and it says, well, tell me about yourself and how we can shape your career and your learning path. And that’s fine, but your conversations are much more intimate and personal. So what happens to that data, and what kinds of risks does that put children at? And then we are seeing some cases of AI-generated child sexual abuse material. These are obviously real harms, not just concerns, and they are putting a real strain on law enforcement, which is trying to prevent this terrible problem, is already strained, and now also has to make decisions around whether an image shows a real child or an AI-generated child. The last point I’ve put here is exacerbating the digital divide. Obviously the digitization of society is not the same for every child. So what kinds of opportunities do children in countries or socioeconomic situations different from wealthier nations miss out on? Which risks are they more exposed to? And lastly, I just want to say that any of these could have not only present but lifelong effects on children. And so that’s why child users are really important and different from adult users.
We also don’t know the long-term impacts. This is all pretty new, so the kind of research that Ying and the other panelists are doing is crucial to understanding the positive and the negative impacts. I won’t run through all of these, but these are some of the requirements for child-centered AI that came from the policy guidance. There are nine of them, and they really speak to this point of safety by design. All of these risks can be mitigated in the design of AI products and services and the data that goes into them, and in government regulation, which needs to be appropriate to uphold all children’s rights.

And so we give suggestions to both companies and governments around the kinds of policies that should be set and some of the practices that can be followed. I just want to point out there’s a really nice paper from the eSafety Commissioner in Australia, listed at the bottom. They’ve done a lot of work on safety by design, and it’s a really interesting paper on generative AI and safety by design. Thank you.


[Dr. Naomi Baron]: Thank you, Steven. You have so much information and so many ideas that we wish we could go on longer. I’m going to hold the question that I have for you until the general discussion, if that’s okay. I promise to come back to it, or you remind me if I don’t. At this point, though, I’d like to move to our third speaker, Christine Bywater. Christine serves as the Assistant Director of the Center to Support Excellence in Teaching at the Stanford Graduate School of Education, where she designs and facilitates professional learning experiences for K-12 educators. Additionally, Christine is project lead for CRAFT (you’ll hear about what that is), an initiative at Stanford’s Graduate School of Education dedicated to developing co-designed curricula and resources for high school educators to assist students in developing their AI literacy skills. Christine will be talking with us about AI literacy for youth, its importance and impacts, examples from CRAFT, and perspectives from educators on the inclusion of AI literacy in schools. Christine.


[Christine Bywater]: Thank you, Naomi. Also thank you to Children and Screens for having me, and to Steven and Ying for the wealth of information you’ve shared so far. I’m looking forward to sharing a little bit of our perspective from the work we’ve been doing at Stanford around AI literacy. Before we get started, in addition to my professional bio that Naomi shared, it is important to know that I’m also a parent of young children, so I thought I’d start with a little story. The other day (this is true, I swear), I have an eight-year-old and a five-year-old, and my eight-year-old said, ‘Mom, how does Siri know everything?’ For those of you who are educators or have parents who are educators, you probably know that my answer was, ‘Well, what do you think? How do you think Siri hears us and responds?’ My five-year-old said, ‘I don’t know. Who cares? Wait, I think it’s fairies.’ And my eight-year-old thought a little bit more and said, ‘You know, I think there’s actually a room of people who hear what we say, figure out the answer, and have Siri tell us.’ The point of sharing these stories is that as we think about these tools and start to talk about literacy, it’s important to remember that children are naturally curious. They think about the world around them in incredible ways and are constantly doing their own sense-making. Whether we hear that sense-making when they ask questions that show how they’re interpreting the world, or whether it’s happening internally, it is happening. So when we think about how and why to have conversations about AI, it’s really important to start with this asset lens, knowing that kids are already thinking about these questions.
So, considering that, I want to lay out some stances that I, as an educator (I’ve been a high school classroom teacher), and the researchers I work with take as we consider what knowing about AI means for teachers, kids, parents, and families. We know that artificial intelligence can provide great affordances and also pose extreme risks.


We also know that schools do not have room to just add whole new AI classes and curricula, nor do we think this would serve an authentic integration of educating students. We also believe that knowledge about artificial intelligence is a matter of equity: some will be using it to optimize their learning experiences while others are left behind, and as Steven shared, this is going to expand the digital divide. And we know that teachers are constantly being asked to deal with technology that is continuously changing, on top of everything else we ask them to do in support of students’ learning. So that is our stance. We’ve also learned over the last couple of years that AI literacy is lacking, but understandably so. We did a study with high school students which showed that they’re generally not familiar with what algorithmic bias means and how it shows up in these AI systems. When we’ve worked with teachers, we’ve overwhelmingly found that teachers are very unsure about what counts as AI, just like many of us are. We had a teacher attend a session where we did some work around facial recognition, and they said, ‘I wanted to attend a session about AI, but this session was about Face ID.’ We also know the Allen Institute administered a nationwide survey and found that only 16% of adults received a passing score. So the need for AI literacy is critical, and this is different from other technologies that have come into our society because the potential benefits and the potential harms are so impactful. So how are we defining AI literacy? We see AI literacy as the ability to recognize and use AI technologies with a basic understanding of their actual capabilities, benefits, and risks. We also include individual awareness of the rights and responsibilities we have with respect to the presence of AI in our everyday lives.
Developing AI literacy is critical to what you might previously have heard called 21st century skills. The reason we approach it from a literacy perspective is that this doesn’t mean everyone has to be a coder. Rather, these efforts should familiarize people with what AI tools can do and what they can’t do. It’s really important to remember that teachers, parents, and students all share many of the same digital life challenges, particularly when it comes to technologies that are evolving so rapidly.

It’s important, as you think about this, to be co-conspirators in this work: develop AI literacy alongside those you are teaching, learn alongside them, co-design with them, so that we all can become critical consumers and develop future ethical designers. I want to share a few quotes from teachers we’ve been working with over the past couple of years around AI literacy, and then I’ll share some of that CRAFT work we’re doing. One high school English teacher said, ‘At my school there are a couple of other teachers who are excited about AI, but the rest are worried the sky is falling. Keeping kids from cheating is not the issue. We don’t need to fear this technology. Let’s unpack the bias, bring it into the light, teach it, and show that we can mitigate it.’ A third grade teacher shared, ‘Students of age should have the opportunity to be inquisitive and communicate openly about what they already know or might understand about AI. It is such an important tool. It can be utilized to support student learning by serving as assistive technology for marginalized student groups and populations. It can be instrumental for feeding the imagination, creativity, and curiosity of young minds. They are essential in defining and determining how AI evolves from the present day into the future: the minds of tomorrow.’ A 10th grade English teacher shared, ‘It’s imperative for students to understand how it works. As the upcoming generation, young students need to understand its role and purpose in today’s world. If we do not offer opportunities for students to question and be critical of AI, then there is potential for misuse and for misconceptions to form about it.’ So, out of the Graduate School of Education at Stanford, we have a resource called CRAFT, which stands for Curriculum Resources About Artificial Intelligence For Teaching. We really like acronyms in higher ed.
These are co-designed, multidisciplinary resources and lesson plans that teachers can use within their teaching time. We’ve been working with teachers to co-design them; they’ve been implementing them in classrooms and then sharing feedback with us. One of the things we emphasize, because this technology is rapidly changing, is that it is vital to co-design these curriculum resources alongside teachers. They’re the ones who know what’s happening on the ground with students and in the field, and it’s really important to work with them to co-design resources that are beneficial to them when teaching students. But for those of us who are not educators, or those coming into this space, I thought it would be helpful to share some general AI literacy goals. When we develop these lessons and resources, what goals do we have for students and what goals do we have for teachers? You can see the first two, general knowledge and responsible use, are the same for students. Like I’ve said, it’s really important that we talk about and start to teach how to be a critical consumer of AI: understand the responsibilities, the harms, the risks, including bias, fairness, and misinformation. For educators, operating from uncertainty and a willingness to learn together with their students is a really important goal. And then, broaden participation in the conversation. Bring students into the conversation; let them tell you what they think and how they can help us in co-designing this literacy world. And lastly, I want to leave you with some rules of engagement. If you find yourself as a parent who wants to have conversations with schools, or a family member who wants to have conversations in the community, here are some general rules of engagement around building AI literacy. One: recognize students as positively intentioned actors with AI.
It is so valuable that we view them through an asset lens; they are critical to this AI literacy world. Two: acknowledge teachers’ value as important actors in shaping student use of AI, and advocate for time and resources to support their development. We ask teachers to do a lot, so when we are adding to their workload, it’s really important that we advocate for the time and resources they need. Three: identify strategies for building supportive norms around AI use. This can be done in a school community, in a classroom, in your household: transparency about instructional time, thinking about equity, and please, invite students in. Four: identify deterrents to AI use, such as cost and privacy concerns, some of the things Steven and Ying have both shared with us. And lastly, it’s really important to investigate how data is collected and how data is being used. Don’t just openly and willingly use a tool; ask schools how privacy and security are assessed before adopting it. There are some resources we have that I know will be shared with you all. I appreciate your time and listening, and I look forward to more thoughtful conversations.


[Dr. Naomi Baron]: I’m very glad to talk not just about student issues but about teacher issues. I remember back in the 1980s when Apple computers, the Apple II and Apple II Plus, were brought into schools and the teachers didn’t know how to use them, through no fault of their own.


And they basically sat in closets, and we had a lot of hardware purchased with very little positive educational result. So we do have to figure out how, in their incredibly overloaded days, teachers can get engaged with, not just get up to speed with, this kind of technology. But my question for you comes back to students, and to teachers making the assignments. Should ChatGPT be a go-to in the classroom for assignments, and is it a source of reliable information?


[Christine Bywater]: Yeah, good question. I won’t answer it directly, but I promise I will provide an answer. Always start by centering the learning goals, right? What do I want students to learn? And with technology use, why am I using this tool? What does this provide students, not only in learning the content, but what do I want them to learn about the tool itself?

So with AI, we want to build those literacy skills: critical thinking, evaluating things like bias and misinformation. The first things to consider from an educator’s perspective, beyond the learning goals, are the things Ying shared, like privacy: how old do I have to be to sign into this platform? What sort of data is it collecting? What are my school’s rules about this? But once all that is settled, there are some amazing ways you could use this in class. We’ve seen a lot of teachers use it in contrasting cases: empowering students to say, I asked ChatGPT this prompt, here are the different answers it gave me. What do you all think? What did it do well? What did it not do well? What’s missing? Asking those critical questions helps students build the literacy of evaluating what it produces. Another way to think about it: it’s also a really good starter for students in writing. There’s an article I shared about how you can use ChatGPT to improve student writing, not just to prevent cheating; it’s helpful for getting students to start with an outline and build from there. And so my short answer to your question is that ChatGPT is about as reliable as the Internet. You need a critical eye to be able to assess where the information came from, who wrote it, and what bias is built in. And if we do that with kids, we have more opportunity to utilize the power while mitigating the risk.


[Dr. Naomi Baron]: I’d like to take what you just said and circulate it to the tens of thousands of teachers who are trying to figure out how to make these decisions. Thank you very much for your ideas.


[Christine Bywater]: Thank you. 


[Dr. Naomi Baron]: And now we turn to our fourth speaker, Tracy Pizzo Frey. Tracy leads the Common Sense AI Ratings and Reviews Program. She has an extensive background in the applied use of advanced technologies in both public and private sector organizations around the globe. Tracy spent 11 years at Google, where she founded and led Google Cloud’s responsible AI work, before leaving to launch and lead Restorative AI, a services-based company that helps organizations ensure their creation, use, and adoption of AI tools, systems, and products contribute to the future that we all deserve. Tracy is also the co-founder and managing partner at Uncommon Impact Studio. Tracy is going to be talking with us about the Common Sense AI ratings initiative and recommendations for how caregivers, educators, or even older youth themselves can critically evaluate AI tools before and during use. Welcome, Tracy.


[Tracy Pizzo Frey]: Thanks so much, Naomi, and thanks for having me and for being here with all these incredible presenters. I’m just pulling up my slides here. So, I think we’ve talked a lot about what artificial intelligence is and what it is going to do, or is capable of doing, in the future. And I just want to re-emphasize that it’s predicted to be the largest disruptive change in our lifetime, even larger than the change of the Internet itself. And while it has incredible potential, as we’ve talked about, it also has risks, which we’ve also discussed. These technologies, as Naomi shared in the beginning, are not new, but the sudden explosion has brought with it more questions, often, than answers. What is this technology? What can it do? What can’t it do, and what shouldn’t it do? And this is why Common Sense has really stepped in here. Common Sense is committed to creating clarity, trust, and understanding through its AI initiatives, building on the trusted voice that Common Sense Media has across consumer media ratings, privacy ratings, and edtech ratings. Our AI initiatives include the product ratings and reviews, literacy curricula, original research, and more. But I want to start by grounding all of this work in our set of AI principles. First of all, as we’ve talked about throughout this conversation, artificial intelligence isn’t magic. It’s math that trains computers to do tasks they’ve been programmed for with super specific rules. And while this technology is exciting and powerful, it’s not perfect. That’s why these reviews are grounded in eight principles about what Common Sense believes AI should do. These represent Common Sense’s values for AI, and they create a shared understanding that we use as our guide and our path forward. And this is something I think is critically important for any organization to do: set out its goals and values around AI.
So that as you are evaluating, you can use these as a benchmark to test against what makes sense for you. The most successful AI is built with responsibility, ethics, and inclusion by design. And what this also really means, and I cannot stress this enough, is that technical excellence alone is not enough for AI systems, because they are sociotechnical. That means the technology cannot be separated from the humans and the human-created processes that inform, develop, and shape its use.

And that’s why the product reviews we’ve created are contextual. They take into account the societal landscape in which the products will be used, and they actively seek out what information might be missing or invisible to an AI system. There are a few top-line rules here that I think are important to stress. One is that no product is values- or morals-neutral. A couple of examples: for many, many decades, adhesive bandages, Band-Aids, were made in one skin tone color, my skin tone. Now, a Band-Aid is designed to hide a cut, so anyone with a darker skin tone than mine, which is most people in the world, would essentially walk around with a flag, until a few short years ago, when Band-Aids began to be made in a wider variety of skin tones. They’re still very difficult to find, I will say. Another example is automobiles: until 2011, crash test dummies weren’t designed with stereotypical female bodies, and this meant that women were injured in car accidents at a far greater rate than men. So outside of the world of AI, the reality is that values and morals are integrated into product design everywhere. It becomes really challenging with AI because of that sociotechnical nature. One thing that I always stress is that there is no ethical checklist, and this is really complicated, because it would be really great if there were one that we could all look at and say, this is okay, this is not okay. The reality is that everything needs to be evaluated at the intersection of the technology, the data, the use, and how it’s applied.

And this also means that there are no edge cases. So what do we do in this kind of complicated world? The way we are approaching it is with this idea of creating nutrition labels for AI. The intent is that they describe a product’s opportunities, considerations, and limitations in a clear and consistent way, putting the information you need at your fingertips. So how have we done this? I’m not going to spend too much time on this, but our approach is twofold. (I’m sorry, are you seeing this slide? Because it’s not showing on my view that you’re seeing it, but I will just keep going, and somebody can let me know if not.) One part is gathering information from companies, if they choose to share it with us. The other is doing our own sociotechnical assessment with a group of experts who come from a wide range of expertise around AI, children, and technology, to discuss and pressure-test against each of our principles, as well as other frameworks for evaluating technology from an ethical standpoint. What I want to do here is also talk a little bit about generative AI, and chatbots in particular, because I know that is a large part of the conversation we’re having today. The thing that I really want to stress, and this ties into a lot of what our other presenters have talked about, is that essentially what an AI chatbot is doing is guessing the next most likely words through a lot of really complicated and fast math. This is what allows a chatbot, when you input ‘it was a dark and stormy…’, to be more likely to say ‘night’ than something like ‘algebra’. And what that means is that it’s able to generate responses that are correct a lot of the time, because, as Christine said, it is about as reliable as the Internet.
And I would actually say it’s probably a little less reliable than the Internet, because it is putting together words in a novel way, and that is what leads to it being wrong a lot of the time, in what is commonly referred to as hallucinations. What’s really important to note is that large language models, which are the backbone of generative AI chatbots, cannot reason. They cannot think, they cannot feel, they cannot problem-solve, even though it might seem like that is what they are doing. They do not have an inherent sense of right, wrong, and truth. And this is what is really, really important. One minute? Sure. It sounds like we might be having some technical difficulties.
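[Editor’s note: the next-word guessing described above can be illustrated with a toy sketch. This is only an illustration, not how production chatbots work: real large language models use neural networks trained on vast corpora, not raw word counts, but the core idea of predicting the most likely next token is the same. The tiny corpus and function names here are invented for the example.]

```python
from collections import Counter, defaultdict

# Count which word follows each word in a tiny toy corpus,
# then "generate" by picking the most frequent follower.
corpus = (
    "it was a dark and stormy night . "
    "the night was dark . "
    "it was a stormy night ."
).split()

followers = defaultdict(Counter)
for current, nxt in zip(corpus, corpus[1:]):
    followers[current][nxt] += 1

def predict_next(word):
    """Return the word most often observed after `word` in the corpus."""
    return followers[word].most_common(1)[0][0]

print(predict_next("stormy"))  # -> night
```

In this corpus, "stormy" is followed by "night" every time, so "night" wins; nothing in the counts encodes truth or reasoning, only frequency, which is why such systems can produce fluent but wrong output.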


[Dr. Naomi Baron]: You are still on. Go ahead.


[Tracy Pizzo Frey]: Okay, you’re telling me one minute left. Sorry, I appreciate that. I will just skip ahead here and talk about our first ten reviews. You can read about these online, but I think the important takeaway is that the products that did the best in these reviews really married responsible AI design with very, very careful and curated data selection. That leads to some of our key takeaways: more data doesn’t make for better AI. In fact, the more data an AI tool scrapes from the Internet, the riskier it can be, often because it is then designed to be used in a myriad of ways, as opposed to being specifically designed for a particular purpose. We also found that ethical practices really vary from application to application, and just because there is a process for transparency reporting or risk mitigation doesn’t actually mean that a product is safe and responsible to use. Consumers, and children especially, must understand that generative AI applications in particular are best used for creative exploration and are not designed to give factual answers to questions or truthful representations of reality, even if they do that a fair bit of the time. And as we pointed out earlier, particularly in the context of text-to-image generators, there is a massive concern around unfair bias and harmful stereotypes that can be generated in these contexts, and I would really urge caution and concern. And with that, I will end these slides.


[Dr. Naomi Baron]: I was fascinated by the term sociotechnical. I think it should become the word of the year; we should tell the world that’s the word they should have. And the notion of a nutrition label is wonderful, because the general public now understands and widely uses those labels to better our health. I have a question, though, and it has to do with resources. There’s so much good work Common Sense has done over the years in giving resources to parents and educators. What kinds of resources are available for talking with teens about the dangers of AI?


[Tracy Pizzo Frey]: Yeah, it’s a great question, and I would be remiss, obviously, and I certainly come from my own biased perspective here, if I didn’t say please take a look at the reviews that we’ve done. I think there’s a lot of information in there that can help parents and educators in speaking with teens about the dangers of AI. And I would really like to point to Christine’s work. There’s a lot of great work happening in the context of AI literacy, and Common Sense has also created some AI literacy lessons and resources, which are also available through Khan Academy. Khan Academy is assembling a sort of consortium of information that ranges from more technical literacy to some of the more responsible-AI concerns. And that information doesn’t need to be constrained to educators; it’s accessible. So those are some good resources as well.


[Dr. Naomi Baron]: Those are wonderful starting points for us all. At this point, what I’d like to do is open the floor. And by that I mean, unfortunately, I can’t take all the questions that I know some people are asking, but I can take some of the earlier questions that Children and Screens has been collecting and have a broader discussion with all of the panelists on some of the issues that I know are on the minds of the people attending this webinar today. In fairness to Steven, I’ll start with the question that I didn’t have time to ask him earlier, and I’ll say parenthetically that it builds on some of the really great suggestions that Christine shared with us: in what ways can we involve children in the decision-making about AI use when it comes to figuring out what’s right for them?


[Steven Vosloo]: Thanks, Naomi. I’m sorry, I obviously took too long earlier. My apologies.


[Dr. Naomi Baron]: Not to worry.

[Steven Vosloo]: So this is a great question, and I have two parts to the answer. The first, before we get into the how, is just zooming out to make the point, as a principle, that children should always be involved in this process. And I’m not just saying this because I work at UNICEF. It’s really their world; they’re at the forefront of interacting with AI systems. And I really like what Christine was saying: they actually give us really good ideas about how to make sense of and how to navigate this space. It’s also their right. Children have a right to participate and to have views on issues that affect them.


And part of that principle, I must say again before we get to the actual process or format, is that engagement needs to be ongoing and meaningful. We’ve seen that sometimes children are consulted as a one-off event, and that’s not really what we have in mind. We need to walk this journey together. And I think it really depends on how much you do and what the particular context is. But what we’ve done before, for example for the image I showed earlier, is hold workshops with adolescents in five countries, some in person, and then COVID happened, so some moved online, and both online and in-person were really interesting. We really talked about how adolescents feel about AI: what excites them, what worries them, what they want to know more about, which was extremely valuable. So that was the workshop format, and we published the methodology; that worked really well for us. The other example I would give is the Government of Scotland, which in its national AI strategy adopted the UNICEF policy guidance. One of the requirements we have is inclusion of and for children, and they really took that to heart. They’ve worked with the Scottish Children’s Parliament to survey children around Scotland, young children between six and nine, I think it is, and also held workshops with kids to really find out in depth what questions they have and where they need help. So I think any of the great child-engagement modalities could work, and there are some great resources out there, but we just need to do it, we need to do it well, and we need to embed it as an approach going forward.


[Dr. Naomi Baron]: Thank you so much, Steven. I see a couple of questions have just come in that I’m going to try to get to, and they will preempt other questions I had queued up. One is a really important one about how we ensure… and this one is for everybody on the panel, so just jump in when you’re ready. How do we ensure that children recognize that artificial intelligence tools are human-made and not alive? How do we stop children from anthropomorphizing? Or, for young children, does it matter? I added that piece of the question myself.


[Tracy Pizzo Frey]: Well, I’ll start here. I think we are in a real challenge with this in particular, because the responsibility for that needs to also come from the creators of these tools, and right now there are no rules about what they do or do not need to disclose. There need to be such rules so that we can have some consistency and some reassurance that children will know when they are interacting with AI and when they are not. Another challenge here that I want to put on the table is that, particularly with chatbots, even when they may say at times ‘I am a virtual assistant’ or ‘a virtual friend’, that language alone is really challenging because it can be very confusing. Are you virtual? Are you a friend? They use words like ‘I think’ and ‘I feel’, and even that framing of how a conversation can happen is a real challenge, one that I think needs a lot of policy behind it in order to make it better.


[Dr. Naomi Baron]: One of the points you raised, or implied, is that this is a problem for all of us. You’ll remember the Screen Actors Guild and Writers Guild strikes, which were in considerable part over AI issues and whether AI was going to take away their livelihoods by reproducing their likenesses or writing in their place. The settlements, as best I understand what’s been revealed, require disclosure when something is AI-made. In principle, adults should be better at making sense of that than you can expect a five-, six-, seven-, or even 15-year-old child to be. But do we really feel we know the difference between human-made and technology-made? There’s so much that comes at us on social media, so much that is generated for us, and there are deepfakes, and we’ve come to say, ‘I don’t know if it’s real,’ rather than distinguishing what is human-made from what is technology-generated. We’ve already, as adults, significantly given up on that as an issue. So for children, it’s going to be an even greater problem. Let me go to another question that just recently got asked, which has to do with co-design issues, or what I’d like to call cart-before-the-horse issues. Namely, the technology is overwhelmingly commercially driven. There are some really good souls out there developing things not specifically for commercial purposes, but most of the technology in AI, as I said at the very beginning of my remarks, has capitalism, money, and profit as its base. Therefore, the products get developed, and the rest of us, including parents and educators, have to play catch-up to figure out: how would I make use of this? What kinds of guardrails should I put on? And so forth.

In principle, one of the ways to address this problem is to say we have to co-design, not just the particular curricular elements, but maybe the technology itself. I have some colleagues who are designing the way children’s digital books should look, to make them smart digital books, not the kind that just distract children from learning from the text. And that means becoming real designers of technology, not just of curricular units. So what suggestions do you have for how we can avoid playing catch-up with capitalism, namely the tech giants, and instead produce technology that has the right kinds of uses for children, for schools, and for parents? Christine, do you want to take a crack at that?


[Christine Bywater]: Sure, we could probably have a whole other webinar about it. It’s a hard challenge, but the piece I can say, for everyone listening, is don’t discount your power, right? You, as an educator, as a school district administrator, hold more power than you think when you are having these conversations with the technology companies, choosing the platforms you use in the school, advocating for features to be added in service of student learning, and finding the right partners. There are edtech organizations who I think really do deeply care about students and teachers, and it’s about elevating those partnerships and highlighting what’s possible. A good tangible example I’ll add, and then I’ll let the other panelists jump in, is Zoom. As soon as the pandemic hit, we were all asked to use a platform like Zoom to conduct classes. And to Zoom’s credit, they heard and listened to what educators needed and added features that were conducive to learning. Right? So I think finding those partnerships is really valuable. And again, don’t discount your power. We hold a lot of power in the school ecosystem.


[Dr. Naomi Baron]: Does anyone want to add?


[Steven Vosloo]: If I may, I think that’s a really great point, Christine. It’s a very interesting moment we’re at, because, having spent some time in edtech in my career, you get classic edtech products: this helps a middle schooler at this grade learn science or math. Now we have general-purpose technologies being used in learning. Of course, the Internet has always been an example of that, but this is a much more intimate, much more creative process. So the question is how, and Christine, you’ve partly answered it, but we always need to hold products and services not intended for education to education-level quality and accountability. And it is going to take collective voices to say: this is how your product is being used. Think about the bias, think about what it’s doing, think about the consequences, and create more appropriate products and services. I think we do have to push back. It’s not an accusatory thing, but we do need to hold general-purpose technologies to education standards if that’s how they’re being used.


[Dr. Naomi Baron]: And the good news is that some of the technology is now becoming more accessible to people without technology backgrounds, so the chances that a larger swath of people will develop the kinds of products directed to children are, I think, getting better. Let’s hope.


[Tracy Pizzo Frey]: Naomi, if I may, just really quickly, just to add on to what Steven was saying.

So I’ve worked at a number of technology companies and have been an educator myself, and at every single one of those companies, educators were the number one users of new technologies. Oftentimes technology companies don’t talk about this, right? But educators are always out in front, experimenting with new technologies. And we are way beyond the time for those companies to design with that in mind.


[Dr. Naomi Baron]: Right. Let’s hope we build more partnerships to help make that happen. Another question came in, and Ying, I’m going to ask if you could talk about this with us: at what age can children understand what AI is and how it works? I know that’s not a simple question, but at some point, you know, it’s like asking, is there a Santa Claus? At some point, you probably decide… not the real thing. So what about AI? At what age is it reasonable for a parent, or a preschool or lower school teacher, to say, let’s make sure we’re clear about this, and to assume that children are mature enough cognitively and socially to understand what you’re saying?


[Dr. Ying Xu]: Yeah, that’s a very interesting question. I don’t have a complete or full answer, but I do have two interesting data points. I work with preschool-age children a lot, from three to five, all the way up to eight. In one of my studies I asked them directly: what is AI? Most of the children said, I don’t know what AI is. But those who did provide answers compared AI with a robot; they thought it is a smaller version of a robot. So that’s one interesting data point. The other data point: we had kids talk with voice assistants, and after they finished the interaction, I asked them to draw what they thought was inside the smart speaker. The findings were quite interesting. Some of the younger kids, especially the three-year-olds, actually thought there was a human hiding inside the smart speaker and talking with them. But kids older than six all knew very clearly that it is some sort of device, some sort of technology, just like my dad’s phone or just like a speaker. What we need to think about is how to move beyond that and teach them more precisely what AI is. It seems that at preschool age, even if we don’t intentionally teach them, they develop this rough idea of what is human and what is technology. But to Christine, Steven, and Tracy’s point, we do need to push them forward to think more deeply about what AI actually entails.


[Dr. Naomi Baron]: Good things to think about. And now I have a big thing to think about: what I call the elephants, plural, in the room for all of us. There’s a lot of talk today about whether there are long-term existential threats of AI to us, that is, will artificial general intelligence get to be so smart, as it were, that it outsmarts us? But a lot of discussion today is also about near-term threats. We’ve heard about bias issues, disinformation, hallucination, knowing what is real. So the questions that actually came from some of you in the audience are: in the near term, besides all those other threats, are some of the things that humans have done as teachers, or maybe as therapists, about to be replaced by AI? And robots, obviously, are a form of AI, as we know. What does that mean for education? What does that mean for our social and psychological health? The floor is open.


[Christine Bywater]: I can start. My answer is: let’s hope not. And a reminder, particularly to teachers and therapists: if you think back to a teacher who really meant something to you, or a therapist who really helped improve your well-being, it was the relational, human component of that person. That’s what teaching and learning is all about, right? It is how I, as a teacher, connect with you as a student and understand your identities, your interests, your experiences, and bring all of that into my classroom in service of learning. And if we start to move away from that, we’ve lost what teaching and learning is for and whom it serves.


And if we do move away from that, we have a lot more things to worry about. I don’t know, we can start to get into our doomsday shelters. But that’s what I would remind us, and hopefully the creators of the technology, about.


[Dr. Naomi Baron]: This reminds me of one project using AI in the context of therapy. We know there are lots of programs out there now, computer programs you can sign onto to have an AI therapist. But the smart experiment I saw used AI to analyze encounters between therapists and patients, to see which ones seemed to have productive outcomes, to analyze the language and the interaction, and then train the therapist: what have we learned about what makes for successful therapy that you might want to incorporate into your own human interaction? But other comments on teachers, for example, and what this means? Because, let’s go to Khan Academy for the moment. Khan Academy has created, and is still refining, something called Khanmigo, which is a personalized learning tool. It doesn’t give you all the answers; it’s not designed for that. This is Khan, after all, who’s very smart about the way he runs things. It’s there to do personalized learning. And the question is: is this a good thing? Is it a bad thing? How much of it do we want? We know that if you have a classroom of 20 or 30 or 40 students, and now thinking about universities, you can’t attend to each person’s needs individually. So is there a place for this kind of technology? And if so, at what point do we cross into not needing teachers at all?


[Tracy Pizzo Frey]: I’ll jump in here, because I think a lot of this really mirrors the early rise of edtech, where there were a lot of very similar concerns and a lot of, I’ll say, unhelpful language coming from technology companies: well, now teachers are going to become coaches and we’re not going to need them to be teachers anymore, right? The reality, to Christine’s point, is that at the end of the day, it is the human interaction that makes the difference, and AI can be a really powerful augmenter. Artificial intelligence is a horrible name for what this technology actually can do. Think about it more as: how might this help augment aspects of what is happening in a classroom? How might it help create more equity across needs and learning styles? And use it in that fashion. At the end of the day, my belief is that is what will win anyway, because that is what will produce the most successful outcomes. But we all can play a part in ensuring that that is where we end up.


[Dr. Naomi Baron]: So our agenda is now set for us. Speaking of agendas, we now have the lightning round of sound bites: 25 seconds each. What is the last thing you would like to leave the participants of this webinar with? We’ll go in the order of the presentations. Christine.


[Christine Bywater]: Sure, yeah. Sorry, I thought you said order and I was going to go… that’s fine. My 25-second sound bite is: joint engagement, co-design, and including students and kids in these conversations.


[Dr. Naomi Baron]: Good. You’ve now ceded a few seconds to Ying.


[Dr. Ying Xu]: Yeah. Hopefully I can finish in 20 seconds. Just to follow up on the discussion of Khanmigo: the relational component between human tutors and students is irreplaceable.


[Dr. Naomi Baron]: You summed that up. Steven.


[Steven Vosloo]: Thank you. Just to really make the point that children are at the forefront of AI and the future of AI, today and obviously into the future. So we really need to get this right. The framing of AI doomers (hit the brakes) versus AI boomers (hit the accelerator) is not that helpful. We obviously need both, and we need to strike that balance of responsible AI that’s safe and ethical while also leveraging every opportunity that we have now.


[Dr. Naomi Baron]: And Tracy.


[Tracy Pizzo Frey]: Putting children at the center means that we will focus on the benefits AI can bring as well as the risks and harms that are occurring today, which is a far more helpful thing to do than focusing on a theoretical existential risk in the future.


[Dr. Naomi Baron]: Thank you. And I’m going to take my twenty seconds; I’ll try to compress it into ten. To quote, believe it or not, Nancy Reagan, who, when asked how we stop kids from taking drugs, said, “Just say no.” One of the things we get to decide as educators is what wonderful things we can do with the technology, but also when to decide: no, this is not what we want to use in this particular context, with this particular age of child, with this particular type of child. The technology is going to keep racing ahead; there’s no question about it. But we get to decide. We are in the driver’s seat. As educators and parents, we get to decide how we want to use it. Thank you all for an incredibly interesting conversation. Let me hand the microphone, as it were, back to Kris Perry.


[Kris Perry]: Thank you to all of our panelists for this informative discussion about such a timely and important topic. Thank you also to our audience for joining us. To learn more about The Institute and all things digital media and child development, please visit our newly redesigned website. Follow us on these platforms and subscribe to our YouTube channel. This is our last Ask the Experts webinar of the year, but we hope you will join us again in 2024. Until then, we encourage you to check out the Learn and Explore section of our website for more resources and content. Thank you, and we wish you all a happy, safe, and joyous holiday season.