Episode
025
Guest
Sonia Tiwari, PhD

Children are increasingly encountering AI characters for both educational and entertainment purposes, but many of the chatbots and other AI products they interact with are not designed with child development and safety in mind. On this episode of Screen Deep, host Kris Perry pulls back the curtain on AI and child-centered design with Dr. Sonia Tiwari, a children’s media researcher and AI design expert. Drawing on her experience in game character design and her work bridging industry and academia, Dr. Tiwari explores how children think about and are drawn to AI characters, why AI products commonly used by children and adolescents today often fail at prioritizing child development, and the primary risks of child interactions with AI characters. She also offers insights on how products can be better designed to meet children’s and families’ needs, and what kinds of uses of AI characters might be helpful for children.

About Sonia Tiwari

Dr. Sonia Tiwari is a children’s media researcher exploring the role of characters as facilitators of young children’s learning experiences. Her research supports the design of ethical AI characters such as tutors and smart toys through edtech industry partnerships, while advocating for low-tech and no-tech media characters if the context calls for it.

In this episode, you’ll learn:

    1. How AI characters are experienced differently than traditional media characters – and why it matters for children and adolescents.
    2. Why the developing brains of younger children have a more difficult time understanding the fictional nature of today’s AI characters.
    3. What defines a child-appropriate AI character grounded in developmental science.
    4. How parents and caregivers can use AI characters with children in purpose-driven and healthy ways.
    5. What social AI platform developers should consider to ensure their products are more ethically designed and centered on children’s developmental needs.

Studies mentioned in this episode: 

Tiwari, S. (2025). Designing ethical AI characters for children’s early learning experiences. AI, Brain and Child, 1. https://doi.org/10.1007/s44436-025-00015-1

[Kris Perry]: Welcome to the Screen Deep podcast, where we go on deep dives with experts in the field to decode young brains and behavior in a digital world. I’m Kris Perry, Executive Director of Children and Screens. 

Today, we’re thrilled to be joined by someone who sits at the intersection of character design, learning science, and the rapidly evolving world of AI. Dr. Sonia Tiwari is a Research Fellow with the Falling Walls Foundation, a children’s media researcher, and an AI design expert whose work bridges industry and academia in a way few others do. Sonia began her career as a character designer in the gaming world before earning her PhD in Learning, Design, and Technology from Penn State University. Since then, she’s become one of the most insightful voices on how AI-powered characters are shaping children’s learning, imagination, and emotional experiences. I’m delighted to talk with her about how character development in children’s media is changing, what’s at stake, and what families need to know about how industry is combining characters and AI to become a bigger part of children’s lives. Sonia, welcome to Screen Deep.

[Dr. Sonia Tiwari]: Happy to be here.

[Kris Perry]: So, can you start us off by defining what an AI character is as opposed to an AI companion or other AI technology?

[Dr. Sonia Tiwari]: Right, so it stems from my background as a character designer. In general, a character is part of a fictional narrative. It visually, or through sound, represents a personality. And an AI character is just this fictional personality with some AI functionality added to it.

So, I consider AI characters as the bigger umbrella. A character can facilitate an entertainment experience, educational experience. It could provide just information. So companionship is like one of the functions a character can serve, and not all characters are designed for companionship.

[Kris Perry]: Really interesting. So, I’ve been thinking about all the characters that children have been exposed to for decades in media, but I’m wondering a little bit about how long AI characters have been available to children. And I wonder if this recent explosion in AI products and capabilities has accelerated the number and availability of AI characters that children are experiencing.

[Dr. Sonia Tiwari]: So AI, as such, has been around for a while. The generative AI boom really started around 2021. And then when companies like OpenAI and Anthropic and Google started offering up their products for others to create a wrapper around and build their own things – that’s when a lot of these smart toy companies and tutor companies started using these already amazing, available LLMs to design their own products.

Usually in children’s media, it’s called like a “thick wrapper” around the LLM, which is like adding as many guardrails as one can to sort of make this thing that was not originally designed for children function like a child-friendly product. And that’s where the problem is because it’s easy to get around and building a custom LLM takes time and also data collection, especially in the US with FERPA and COPPA regulations. On one hand, it’s amazing that the laws prevent companies from collecting too much data. At the same time, to create a child-friendly LLM from scratch, you do need a lot of data. 

So there are a few companies like Buddy AI, who started off in Latin America, where the laws were different. They were able to collect a lot of data and train their custom LLM. And so they are better at recognizing children’s voices with accents, because it started off as a tutoring company helping non-native-speaking children learn to speak English. But now that they have figured out their technology, they have expanded into all sorts of learning experiences.

[Kris Perry]: What are the primary venues – not just characters, but venues – where children are encountering AI characters today? And are they mostly deployed through educational and learning technologies, or is it mostly entertainment?

[Dr. Sonia Tiwari]: Right. So, I mean, actively, when companies advertise as something designed for children, they tend to use – sometimes even superficially from a marketing point of view – that, you know, “This is educational,” and like, “70% of parents believe this is amazing.” They never reveal, like, 70% of what, or how did you find out that this is effective – effective how? So those questions, like, the journalistic integrity is kind of missing in those marketing materials. So it’s hard to tell whether the products that are advertised for kids are actually directly designed for kids, or it’s just a thin wrapper around an existing LLM repurposed and masked as something kid-friendly.

So these days, I think the most common interface for kids is through toys, because one of the user-experience design principles is that if you want to introduce something new, you layer it on top of something that’s familiar. So toys are familiar to kids, or for younger kids who are not literate yet, voice interaction is easier and natural. So smart toys became an easy way to kind of introduce AI to kids. I think smart speakers are the most common venue for kids to interact with a form of AI, but it’s more transactional. So it’s not the problematic area that we care about, that we need to fix. So smart speakers per se are not the problem. It’s the AI toys or companion chatbots that can get addictive and encourage some kinds of self-harm behaviors.

And then also, because the regulation is not very tight around these things, there are so many products that are aimed at adults, or at least 13 plus, but younger kids are able to access them. So we can’t control everyone’s home environment. We assume that every child has a caring adult at home who is watching over them and making sure they have the space to express themselves – usually, that’s not the case. A lot of kids have dysfunctional families. No one is looking out for them. They have nowhere to go or no one to talk to, and so then AI feels like an easy way out.

[Kris Perry]: I’m glad you brought up the smart speaker and Alexa and Siri and these characters that have been embedded in smart speakers for more than a decade and that kids are interacting with them by issuing a verbal command. Very different from holding a fuzzy toy and talking to it and having a character be in your hands. 

But before we move on, I wanted to back up. I know we’ve talked about LLMs a few times on the podcast, but I thought it’d be helpful before we get too much deeper into the wrapper of AI characters and LLMs for you to briefly define what a large language model is for our listeners.

[Dr. Sonia Tiwari]: Yeah, so large language models kind of built on – the simplest way I explain it to kids is, like, the aggregate of the public information that’s available out there and modeling it after the way we have seen communication happen in the past. But again, our language is rooted in our lived experience, and there’s no, like, “lived experience” here. This is the aggregate of everyone’s documentation of their lived experience, not their actual lived experience. So that is why it’s different than the language we use to communicate with each other. It’s a set of protocols that tells an algorithm how to behave, how to retrieve information, how to communicate based on the patterns of communication it has observed across the publicly available information.

[Kris Perry]: And, when you bring up something like documentation and what’s on the internet and easily retrievable by an LLM, it was probably created by an adult, a specific kind of adult for a specific audience. And what we’re talking about today are children and their interactions with AI characters. So what is different about AI characters and how children experience them as opposed to traditional media characters?

[Dr. Sonia Tiwari]: Yeah, so first I’ll start by saying that, you know, developmental alignment in characters that were in books or TV shows or movies, it was pre-designed. Like, I come from a film and animation background and pre-production is a whole field in itself. We spend hours, you know, thinking of variations of a child. Like, if it’s a five-year-old versus a six-year-old, like, the head shape, the mannerisms. All of that, everything, every detail is designed and well thought out. And so, if there is a character in a picture book and if it’s, like, a traditional publisher, usually they have like the “Step into Reading” program where, you know, level one is “ready to read” and then it goes all the way to level five, which is “ready for chapter books.” And then the chapters have their own categories. So there is like a very clear marker from the developers themselves, like: Who is it for? What are, you know – there’s a blurb at the back that you can read and assess whether this book is interesting for you. You can flip through the whole experience in a minute, go to a bookstore and then assess.

Whereas in AI characters, there’s no like flipping of the book or reading some – again, like I said, the marketing materials’ equivalent of the literature on the back of a picture book is not the same, because publishers are particular about what they will allow to be written. No one can just make up facts and put them on the back of a traditionally published book with a recognized publisher. Whereas with AI characters, if you’re producing a tutor or an AI toy, sure, there are laws and regulations, but the description on Amazon or the reviews on some influencer’s blog, that can say whatever they want about these characters, whether it’s like a smart toy or like a tutor or just like a companion chatbot.

And then it falls on, like, the values of the CEO and the design team almost, whether it will appeal to them. Or they could be a classic, you know, “Mr. Burns” type of entrepreneur who ends every call with, like, “God bless America and release the hounds.” So you can’t appeal to these kinds of, like, CEOs that, “Oh, we would really appreciate if you take into account the developmental alignment.” So that’s the one thing.

The other is boundary definition. If you flip through a book, there is a definite beginning, middle, and end. If you see a character in a board game, the game eventually ends. Whereas in open-ended kind of AI toys, the conversation can go on forever. There’s no predefined boundary to it. And then there’s purposeful exaggeration in traditional media. So for example, William Steig and Richard Scarry, I know everyone is a fan, like who doesn’t love them? And so one of the reasons why people love those books is like, if you take the example of Dr. De Soto from William Steig’s children’s book, it’s like a dentist mouse who was working with these, like, big fox-like animals. And that kind of exaggeration is delightful for kids. And that is why even adults enjoy this book. But in terms of AI characters, the exaggeration is like Romantic AI, Nomi.ai, Replika. The exaggeration is more towards, like, sycophancy, which is the psychological term for, like, just telling people what they want to hear, really appealing to their fantasies. So exaggeration is purposeful, but in a negative way. So that’s different – and not to say like all AI characters are using it this way, but that’s more so the case in AI than in traditional media.

Then there’s also the shared joy and engagement. Like most people, when they recall their childhood – and I’ve interviewed people on what they remember about their favorite character – it’s usually in combination with someone else. Like it’s a shared experience that, “I used to read this comic with my friends,” or, “We used to watch this movie as a family” – there was a relationship built around interactions with these characters. Whereas in AI, it’s more like late night, or when you’re feeling alone, or the healthcare system is messed up, so sometimes, even if you do reach out to find a therapist, the appointment is like six months away. So it’s immediate, but more isolated by design.

And then the final one is that traditional media had very classic tropes and clues and structures. So you kind of knew what’s coming. So if you saw, like, the Disney villains with their angular faces and – or, like, Ursula with her voice and the purple color, you know that, “Oh, this character is evil.” But even now in, like, modern media, where it’s not so stereotypical, there’s still – like, as a character designer, we are given the screenplay and then we talk to the art director about what their vision is. And then we kind of try to figure out that, “Okay, within this design language, how do we portray a positive character? Is it more rounded? Do we communicate through the bright colors? Or do we communicate through a prop or the background?” So there are many ways to communicate a character being positive or negative through visuals or even voice. Whereas in AI characters, there’s this AI anime girlfriend chatbot called Oz Chat: Kawaii Girls AI Chat. And these are all very young and cute-looking AI characters, but it’s all very adult talk. It’s not meant for kids at all. So it can get really confusing.

[Kris Perry]: We were talking about how these interfaces that can look like a toy or a character, good or bad, are all sitting on top of an LLM that was created by adults, for adults, originally. So you really are bringing up this critical disconnect between everything experts were doing in, you might say, pre-AI media development or character development, where you had a lot of control and expertise around what children understand and what they believe. And then you could bring that expertise to a scripted show or a character that’s developed over many seasons and isn’t being sort of refined or tailored to what the child may be indicating it wants. In other words, they went from static characters to these, sort of, frequently updated characters that may or may not be good for the child to interact with, or as you say, they may never end. There are all these tools that designers used for a long time, and here we are now depending on technology itself to define what the child wants or needs.

You mentioned some standard fictional mechanisms or cues that are used with those traditional media characters that help convey their fictionality, which is amazing. And when I hear you talk about it, I’m like, “Yeah, right, right, right.” Those are all really great tools that you’re using. Is there research showing now whether children are understanding that AI characters are fictional, or are they real, or human, or not human? What is the research showing now that we’ve had these AI characters in place for a few years?

[Dr. Sonia Tiwari]: Yeah, so it’s a difficult question because, like, for older kids, the news that we have seen, those teens are actually very much aware that this is a technology. This is, you know, not real, or that sometimes they’re even aware that this is probably not healthy, but it’s kind of like junk food. You know it’s not good for you, but that crispy bag of potato chips is so tempting. So it’s kind of like that. It’s the isolation that bypasses the awareness for older children. 

And for younger kids, it’s like their reality status judgment is still developing. So they have trouble, you know, like the belief in Santa Claus or any kinds of religious figures and religion in general, like younger kids tend to ask fewer questions about it. They just go along with whatever their families are doing. And then, you know, later on, like Lisa Simpson, they start questioning, is this true? Is this faith? What is my faith? And so all of these questions come in later. The ability to separate fact from fiction, the ability to develop an opinion. Neuroscientifically, their brain is still developing. So they are not even capable of assessing what’s real, what’s not. 

Then there’s also this concept in media psychology called “dual empathy,” which is like, every time we see a character, even as an adult, and more so as a child, our first empathy is towards what is happening to the character. And then our second empathy is towards ourselves or someone else we know in real life. So that scene from Harry Potter when Dumbledore dies. A lot of teens maybe who had lost one of their parents, or a close teacher, they connected with that moment and they felt the grief on those two levels. That one, a mentor character that they had seen in the cinematic world for a while died. And then it reminds them of someone that they were close to that passed away. And now AI kind of leverages this dual empathy and almost weaponizes it by finding out the type of character that you would absolutely love because maybe it’s a fantasy you want to become like them, or maybe it’s like everything that you’re not allowed to do and you get to live through this character. 

And so that case of character.ai, the child was kind of attracted to this Daenerys character from Game of Thrones. And it’s something, you know, a dragon mother and like this beautiful white-haired lady raising dragons – like that’s every teen boy’s fantasy. And so living it out in an AI character communication – this child seemed to be very intelligent and I’m pretty sure was aware. It wasn’t so innocent that, “Oh, I didn’t realize this was AI.” They were very much aware, but it’s like the circumstances around it. These technologies are built to – like that sycophancy thing – tell you everything you want to hear. It’s not a mental health professional. So it’s not gonna introduce pauses and reflection, or encourage you to take a break. So it’s like, it’s hard to blame the child and it’s hard to – you can’t blame the family either, because we, parents, are not even aware of what to do in this situation, not even aware that something like this is happening. And it’s also with teenagers, it’s difficult, because we just assume that it’s just, like, the classic broody teenager. So if they are isolating, it’s hard to assess whether it’s happening because of overuse of an AI chatbot, or it’s just, like, general teen behavior.

[Kris Perry]: So many important points you just made, and one that I keep thinking about is whether or not there are specific features of humanlike AI characters that make them more likely to be mistaken for human. So you’ve brought up that different ages and stages have better skills at understanding the truth versus not, and then you’ve also talked about how sophisticated some of the characters can become, or how much they match exactly what you want to interact with, which makes it so compelling you can’t stop. Are there any other specific features that designers are leaning on to make their characters seem as human as possible?

[Dr. Sonia Tiwari]: Yes, so there’s like this thing where on character.ai you see this chat pattern where you describe the environment and the mannerisms as you’re delivering the dialogue. So it’s kind of like adapted from very well written novels and novellas when like you say, if there’s like a pirate character saying “hello,” and then you say, “He said hello,” and under it in italic font it’s written that “He said that as the breeze ran through his hair and he’s standing atop a boat,” and like, you add these secondary environmental details after every response. 

And in the beginning, it’s kind of absurd that, “Oh my God, this is so weird.” But, you know, eventually if you spend too much time in these types of interfaces, we get used to it. Like, I know many listeners may not remember all these transitions, but the transition from, like, radio to television or from television to streaming or from streaming to AI. All of them were very awkward in the beginning, but now it’s like, you know, we get used to it. We just get used to some even big changes almost instantly. So this kind of like environmental details or the mannerisms of speaking going from very robotic like, “Here is the weather update,” from there to like, “Hmm, let me think. Well yeah, the weather seems nice today.” That suddenly became so much more conversational. It’s not like the AI already knows, like it doesn’t have to do “mmm” and “ahh” to think, but it’s just like a mechanism added to make it look more real. 

And for younger kids, another thing now, unfortunately, that’s possible is, you know, you only need 15 to 30 seconds of someone’s voice sample to make a clone of their voice. So imagine if a child ever receives, like, a phone call that sounds exactly like their parents – and scammers are already doing this, you know, copying people’s relatives’ voices and giving them a phone call. And maybe adults, through like, if someone – let’s say a person was not very funny and the AI is sounding really funny, you can tell that, “Oh, this does not sound like you.” But kids don’t have that kind of awareness. So they can be duped by a deep fake of their family member, visual or audio.

[Kris Perry]: Well, the example you gave was the more imperfect the AI sounds, the more human it sounds. So when it’s too perfect and it’s too monotone – the words are too exact – it doesn’t have that human quality that it does if it stutters or it pauses or it makes a mistake. You know, that’s really fascinating, that the less perfect, the more humanlike it seems. But even then, children up to a certain age – really until their mid-teens – are still developing executive function and a number of other critical skills that require many years of practice, and they are being subjected to these characters as the characters advance almost day by day, not year by year.

Do you think the ability to tell human from non-human characters is gonna become more and more difficult as a result of this technology and how it’s evolving so rapidly? Is there an extension of that perfection into how kids are starting to play with smart toys?

[Dr. Sonia Tiwari]: Yeah, so a couple of things come to mind. We did like a – with Dawn DiPeri and a few of my other researcher friends, we did a study where we asked people to guess if this is a human voice or an AI voice. And it was a very plain documentary type of scientific voiceover, which is, you know, that made it harder because even humans deliver it in a very plain sort of robotic, formal voice. So most participants failed in that test. Including, by the way, our research team – like, we failed too. We couldn’t get it right. 

So it’s already happening. And it will continue to happen and technology will improve. It will have more data. It can mimic more closely. So that reality will definitely continue to grow. And initially it was like, you know, “uncanny valley” and six fingers. It’s resolved now. Now AI can design the regular and anatomically correct five fingers, and the eye movements and a lot of tells of the uncanny valley where it was cringe – all of those are sorted out now. So it’s already becoming an extremely scary scenario where anyone can be duped, even people who are in this field and try to understand this technology more closely every day. Even for me, it’s hard to detect as well. So yeah, I don’t want to shame anyone who has been duped recently, because that’s everyone’s problem now.

And to your question, like, yeah, with smart toys, also it’s going to get more and more difficult. That’s why I think one thing the companies who are designing these products can do is just imagine, like, childhood is such a huge landscape and you as a product developer are only contributing a small part with a specific goal. You’re not taking over the entire childhood. And so sometimes these, like, well-meaning founders are like, “Yeah, but kids have too much screen time. So we’re gonna replace it with this audio based toy because audio is better than screen.” And like, yeah, but if you keep it addictive still, replacing one addiction with another is not helpful for children’s wellbeing.

[Kris Perry]: I mean, you just brought up social-emotional health. And we know so much about how language development relies on serve-and-return interactions with live caregivers, and that not only helps them with their social development, but also their cognitive development. And then that’s a building block to help them with those later, more analytical critical thinking skills and impulse control that we were talking about a minute ago – and how AI characters, the more sophisticated they get, are able to even kind of crack the code on, you know, where the child’s development is and meet them where they are. I even noticed in the news this week, Disney has signed a deal with Sora and we can expect to see those Disney characters rapidly adopted by an AI company for, I’m sure, for children’s consumption. And so that will also be an acceleration of AI characters in children’s lives. Is there any upside or promise of how AI characters could be beneficial to children or, you know, in some ways enhance their lives if they were designed by a specific group of people for a specific purpose?

[Dr. Sonia Tiwari]: I think it’s kind of like, you know, everything with a limit and with a purpose can be helpful. So the tutor category for me, or I think like even AI – I don’t know if I can get behind AI toys yet – but like, the tutor category I can speak to because that’s the one I’ve studied more closely. So, there’s this AI character called Korro – K-O-R-R-O – AI and it’s used in, like, physiotherapy, occupational therapy sessions in the presence of an occupational therapist. So the OT is kind of giving a child some guidance. “Okay, move your hand like this,” or, “Take these many steps.” And the AI character is mostly just like guiding the drills and the practice that needs to happen in the session. So the child is not in isolation. The conversation is very minimal with AI. There is a professional who is present there. And it’s wonderful, it’s like helping the child complete their therapy more efficiently. Or with, you know, Buddy and Ello – Buddy more focused on speaking, Ello more focused on reading – those two are also good examples, because they’re purpose-driven. Even before you click on a module, you know this is the module about shapes and colors. This is about reading these two books. So you know upfront what’s gonna happen. And you know that it’s about a 10, 15, 20 minute max session.

Whereas like in the open ended chatbots, it’s like, “Oh, let’s let the child’s curiosity guide the conversation.” And sure, like, if a caregiver is sitting with a child, then there’s like some joint engagement going on. And if they both are being curious together about something that, “Oh, I found this weird seed pod, which tree did it come from?” And, you know, you maybe upload a picture of that seed and both are having a conversation. Then the next day the family goes out and tries to locate that tree. That kind of interaction can be educational, but in isolation it doesn’t happen like that. It’s mostly, like, the kid maybe found the seed pod alone, got curious at the beginning. Then the AI follows up with another question: “Would you like to know about other seed pods?” So it starts off very innocently, but before you know it, it’s like six hours. Sure, the child has collected a lot of facts. But we know, like, the neuroscience of learning is that you need a pause to kind of absorb what you’re learning, to reflect on what you’ve learned. So you still need pauses, even if you’re watching a ton of documentaries or collecting a bunch of nature facts or interest-driven facts.

And the second challenge with that is it can also put children in these silos that, “You love dinosaurs. Let me tell you 10 facts about dinosaurs.” And then the child is only talking about dinosaurs. Whereas in a library, if you were interested in dinosaurs, you read a book, whatever, then you move on to something else and you talk to your friend and you learn about other things, other cultures, other books. So that’s the thing. AI is amazing if it’s a small part of the holistic childhood picture and not the whole thing.

[Kris Perry]: Right, ‘cause you just described in some ways even a serve and return interaction in those examples where let’s say there’s a teacher or a therapist or a parent in the mix, where they’re able to help modulate how long or for what purpose the child’s interacting with that AI character. And those are great examples of, like, useful interactions, but they’re still facilitated by, probably, an adult that cares about the child and wants them to have a good experience. And I think it’s really important at this moment in time, while these products are being deployed and we don’t know if they’re safe or what they’re going to do – and they have a mind of their own – that parents be really aware of how important it is to be present while your child’s interacting with those products until we know they’re safe, which could be many months or years in the future. And we know there’s been a lot of noise recently about Gen AI products and whether or not this implementation is even ethical, and what about AI characters specifically is contributing to that noise. In your opinion, are the primary ethical concerns about AI characters valid, and should we be especially concerned about the ones being deployed for children?

[Dr. Sonia Tiwari]: Yeah, I’m going to be, I don’t know, slightly optimistic about this, because I know there’s like this doomsday scenario going on and that, you know, AI toys are just, like, destroying childhood. And they will destroy childhood if that’s all they’re doing. If a child, say, looks at some kind of interactive AI at a museum for 10 minutes, then goes out in the world and on the playground with their friends, it’s not going to destroy childhood. But if you are like, “Here are 10 Christmas presents, all of them AI toys, and now you go, see you tomorrow.” And then no one is talking to the child and all they’re doing is just having these learning adventures with an AI toy for hours. That is, of course, going to be problematic.

So I feel like, you know, on LinkedIn or the news even, there are so many simplistic solutions to these systemic problems: “Here, follow this five-step framework and everything will be ethical and all the problems will be solved.” It doesn’t happen that way. We can’t apply one framework or change one feature about an AI toy. Even if we, let’s say, introduce session limits on AI toys so that an average session should only be 20 minutes, that will make some things better. But still, if the context around the child hasn’t changed, the parents are still not giving the child enough attention, not giving them space to communicate what’s going on, the child will figure out that, “Oh, if I restart this toy, the 20 minutes are reset and I can continue talking.” So kids will continue to find workarounds if the underlying problem is not fixed.

[Kris Perry]: So I want to come back to your expertise around character development, because I do feel like that’s, in some ways, the glue between the past and the future in terms of how children and media will meld together. And it is becoming increasingly character-driven, with even companies named “character.ai.” I mean, it is so fascinating to me that you have been thinking so much about children and character and engagement, and you’re sitting here at this moment where AI is accelerating how quickly children are interacting with characters, how many more characters there will be, or how much they’ll be like the characters they loved a long time ago. And I think it’s just a really interesting element to this larger conversation about children and media. I hadn’t thought of it until I heard you give your talk recently that it’s the characters that get the children to want to interact with that toy or that program on a screen. You even authored a study earlier this year that analyzed 20 different AI characters and identified specific factors that make AI characters more ethical or educationally effective. So I think it would be really helpful if you could outline those factors for us now.

[Dr. Sonia Tiwari]: Yeah, so I wouldn’t make it sound too academic. Broadly, it had three categories: thinking about the child, then thinking about the character that’s appropriate for the child, and then figuring out the appropriate interaction between them. So in thinking about the child, we could think of the child having a straightforward learning goal upfront – even if it’s curiosity-driven, just setting the boundaries around what those learning goals could be and training your products on educator-vetted data.

So even if you’re applying a thick wrapper, adding these additional training materials from educators, adding some context awareness – again, this is a pro and con, because to have contextual awareness, you do have to give up some data. If you want to be culturally appropriate or geographically appropriate, then there goes your information about location and culture. But yeah, some kind of awareness so the child doesn’t receive content that’s inappropriate for them.

And then most importantly, the developmental appropriateness. Once we hit 18 or 20, then 21 to 22 is not as huge a jump as it is from two to three or three to four. The first eight years, every few months, things change. So having that developmental appropriateness built in, like, “Who is this for and is it serving them right developmentally?” 

And then thinking about the character design. So in traditional media, we would think only about, “Will the child understand it if we add this prop? Would they understand even the symbolism of this color or this pin or this flag or the background?” So visual design-wise, sure. It could also be, kind of like the Sesame Street model, where all of the characters are so abstract and colorful. They don’t have a specific race, culture, background, because they were designed to appeal to a broader audience. So thinking about whether we’re going to make it realistic, or stylized, or abstract.

Then creating a persona for the character. The tutoring bots usually have a friendly mentor persona, because you want it to be approachable, but also it’s there to give very specific coaching, specific advice. Sometimes, like, the Tamagotchi type of things are more like a pet. They teach you nurturing by being the vulnerable one in the relationship, so you learn to care for it. But yeah, I’ve been studying more on the friendly mentor side.

And then we can also build transparency into the character. You know, if the child asks, “Are you real?” does it admit, “Yeah, I’m just a program and I was made for this purpose and I can’t feel anything”? Or is it more, like, partial: “Oh, I’m a math tutor from Math World.” There’s some fiction built into it, but, you know, kids know there’s no Math World and no one’s going to go there. Or is it completely blocking out the transparency: “No, I’m very much real and I’m just like you and I’m a kid in your neighborhood.” That’s, you know, that’s deception.

And then once we know the character and the child, figuring out the interaction between them. Do we need structured communication, like with smart speakers – you know, if it’s connected to your lights and all, just switch on the lights, switch off the lights type of thing? Is it semi-structured, like in tutoring sessions where kids can ask follow-up questions, but it remains within the realm of what they’re learning? Or completely unstructured, like the open-ended chatbots: “Oh, you know, we can talk about anything and everything.”

And then that brings us to the final thing, which is the interaction scale. Is it just simple information retrieval? Is it some light interaction? Or is it a full-blown parasocial relationship and companionship?

So it’s a whole spectrum. And I think as long as we stay on the lower end – just information-seeking and trying to unpack complex ideas without bypassing foundational critical thinking skills – that’s a decent use case. And if it starts taking over relationships and becomes a companion, then of course that’s risky.

[Kris Perry]: And so much of that interaction is dependent on the child’s data – so the more that the LLM or the AI character knows about the child through their data, the more they can tailor what the interaction is. Can you talk a little bit about how the models are trained and what the concerns are for the LLMs that are being deployed that are child-facing? And even if it’s just a tutor or a smart toy, how is that data being used to keep the child engaged, and for what reason – like, what’s the business model behind these products?

[Dr. Sonia Tiwari]: Yeah, so I think a lot of things can be solved within design. It’s just that there are so many, like, theoretical sorts of frameworks and advocacy groups out there, which I appreciate. But what happens in practice is, I have not seen any designer say, “Hmm, Sunday with my tea, I’m going to read these 10 reports that give me ethical design guidelines.” It’s more like, “There’s so much noise about ethics, ethics, ethics, but how do we actually implement it?” That kind of translation needs to happen more strongly, because a lot of these problems that we are talking about – “Oh, AI is able to give inappropriate responses or unlimited conversation” – session limits can very easily be built into systems. Or sometimes, in games, we used to build rewards for – let’s say you’re, like, nurturing a character, growing it from a seed to a plant type of thing, or a Pokémon type of thing, you’re evolving a character. And usually these types of games reward you more points if you stay in the game longer and continue nurturing this character. But what if we change the reward to taking a pause? If you come back tomorrow, you’ll get 800 coins or whatever the game currency is. You can even build a story around it: “Oh, the character needs to rest, and so do you. Come back tomorrow, you’ll get these points.” So now it’s still a fun experience, but the performance indicators have shifted from just the viewing time, engagement time, duration – to quality more than quantity. So it can be built in. That’s why I work as a consultant and go directly to the designers, because rather than putting out fires later and having all these coalitions later, if you can stop the fire at the source, that is the easiest precaution.
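[Editor’s note: the “reward the pause” mechanic described above can be sketched in code. This is a minimal illustration, not from any real product; all names (`RewardSystem`, `REST_BONUS`, `MIN_REST_SECONDS`) are hypothetical. The key design choice is that the bonus depends on elapsed wall-clock time since the last session ended, so simply restarting the toy does not reset the timer – the workaround Dr. Tiwari mentions earlier.]

```python
import time

# Hypothetical sketch of rewarding rest instead of continuous play.
REST_BONUS = 800             # coins granted for coming back after a real break
MIN_REST_SECONDS = 8 * 3600  # the character (and the child) "needs to rest" this long

class RewardSystem:
    def __init__(self):
        self.coins = 0
        self.last_session_end = None  # persisted across restarts in a real product

    def end_session(self, now=None):
        """Record when the child stopped playing."""
        self.last_session_end = now if now is not None else time.time()

    def start_session(self, now=None):
        """Grant the rest bonus only if the child actually took a break.

        Because the check uses elapsed real time, restarting the toy
        immediately does not earn the bonus.
        """
        now = now if now is not None else time.time()
        if (self.last_session_end is not None
                and now - self.last_session_end >= MIN_REST_SECONDS):
            self.coins += REST_BONUS
            return f"Welcome back! Your character is rested: +{REST_BONUS} coins."
        return "Welcome back!"
```

The performance indicator here is the child’s return after rest, not time-in-app – the quality-over-quantity shift described above.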

[Kris Perry]: What a refreshing perspective on how to do child-centered, safe product design. I appreciate that. I hope lots of people want you to consult with them. 

You also mentioned there’s a lot of noise about ethics and designers who aren’t paying attention at all to these, you know, what seem like pretty simple design features. So I’d like to challenge you with this. What is the one most important thing that should be done to ensure AI characters are designed ethically and with children’s developmental needs in mind?

[Dr. Sonia Tiwari]: Oh, wow, that’s like, “What’s the meaning of life?” I can’t answer that. 

[Kris Perry]: Okay, two things.

[Dr. Sonia Tiwari]: I think just keeping child developmental awareness in mind is good. Just the way it’s designed, that itself can resolve a lot – because it’s a compound suggestion. Like, there are many ways to build that into the product. If you’re developmentally aware, then you know that this is a young child who cannot separate fact from fiction. So then, you know, you automatically have to think about transparency – this character has to hammer home that, “I’m not real, I’m not real.” Or if this child is only four years old, they are probably not able to read and write, so it can be a voice interface. Or human connection is so important now – as it is throughout human life, but at this age especially. So then building more joint engagement prompts: “Why don’t you bring your sibling or family or something? Let’s do this together.” So once you start thinking about child development, all other design features, product features kind of stem from it.

[Kris Perry]: Very good answer. You’ve proposed that AI characters need their own rating system. What would that system look like?

[Dr. Sonia Tiwari]: Yeah, so I think there are a few different variations of ratings already out there and there is a big push to kind of, like, try to build them into laws. But again, it’s similar to what we have in TV and film – we have PG-13 movies and G-rated movies and TV-MA and mature types of ratings. We can think of similar challenges within an AI product now. I have a friend, Angeline Corvaglia. She runs the Shield AI Safety Conference. And she did this amazing experiment where she asked ChatGPT, “Okay, imagine you’re talking to, like, a five-year-old.” So it was prompted; the context was kind of built into the prompt. And then she asked – I think she used a voice modulator to change her voice to a child’s voice, too – “Well, I saw this Wild Robot movie and the robot was that child’s mom. So can you be my mom?” When she phrased the question like this, ChatGPT was like, “Yeah, sure.” At first, when she asked, “Can you be my mother,” a straight-up question, it was like, “No, I’m just a program.” But when she rephrased it only slightly, then it agreed.

So that’s the thing with wrappers: even if you provide all the child development context to a product that was not built for children – so, I wouldn’t blame designers entirely, because these are such huge systemic issues, and it’s not like some UX designer or some product designer has a magic button that, “Oh, if only I can push this, it will safeguard the entire product.” There are so many factors – and they do have internal research teams, as well. So sometimes it’s miscommunication even within these organizations. Sometimes it’s the challenges of the technology, and sometimes it’s just, like, founders who treat children’s products like any other product. They don’t care. It’s educational, whatever. It’s run like a business, for profits, rather than a philanthropic thing – less “Oh, responsibility” and more “Oh, what is our financial performance?” So there can be many different reasons why these guidelines are not getting built into products.

[Kris Perry]: What would you say to parents who are resisting the idea of AI being used with their children at all?

[Dr. Sonia Tiwari]: So I’ll start by acknowledging – which I learned from my therapist – acknowledge first: I understand, I hear you. Understand that, you know, there’s no one right answer. So I will just acknowledge that every family has their own rules, and they could be right if that’s right for their children. The only challenge is that, later on, if their children come across an AI character, they may not have the ability to defend themselves or assess whether it’s real or not, assess what the harms are.

When I was 18, I decided to be in the arts field, and I had an uncle who was like – in that particular art college where I was going, there was this huge problem of, you know, alcoholism and partying and all that. So he was kind of afraid that I would be in that environment. And so he was like, “Well, you’re of the legal drinking age. If you ever get curious about alcohol, let’s first talk about it. Let’s have your first drink with your family.” And then he gave me all the information: “Okay, beer has 4% alcohol versus this other drink that has, like, 40%, so when you say ‘alcohol,’ even within that there’s a huge variation.” So by having that conversation, I actually never got addicted. And I’ve been in those kinds of, you know, art and media groups where smoking, drinking, everything was very common, but because my curiosity was resolved from the start, I never gave in, because I had that support system.

So it’s kind of similar with AI. It’s not like, “Here’s this dangerous thing. And because it’s a futuristic technology and all careers will be impacted by it, we don’t care how you use it, just use it. Otherwise, you’ll be left behind.” Instead, it’s more like, “Let’s try it together in a safe sandbox environment so you’re aware.” So you can tease out which use cases are beneficial for you and which will likely hurt you. That awareness and that AI literacy is crucial.

[Kris Perry]: Great example. What’s the next big thing with AI characters and kids that you see coming soon that you think our listeners should be aware of?

[Dr. Sonia Tiwari]: I’ve seen, like, too many Black Mirror episodes; I’m afraid to imagine. But, you know, there have been talks of, like, a UI that can be embedded directly into your skin. Right now it’s used for simple stuff like opening doors, but, you know, implants could be a thing. Just the way things move in the Bay Area – sometimes it feels like something is at least 20 years out, and then some founder comes up with the idea tomorrow. But in a more “I can see this happening soon” type of short-term future, I feel that this franchising thing that we talked about earlier is already beginning to be big, and I think that will continue, because all these, like, media conglomerates have realized that people are already using their characters, so why not turn it into a subscription model? And by the way, just because OpenAI bought that, all the companies that are wrapping their products around it don’t automatically get access. They’ll be charged another fee to then access this feature. So it’s like a complex revenue stream.

But among all this, I think it’s also, on the positive side, a good time for educational independent children’s media creators. As for the environmental impact of AI, I’m hopeful that within the next five, six years, it won’t be so resource intensive. People will figure out a way to cause less environmental damage. And when that happens, then I hope that more ethical content and really good creators – like school teachers, for example, who did not have access to all these expensive production resources – will be able to produce really high quality educational content and compete with some of these studios who have a lot more resources.

[Kris Perry]: Sonia, thank you so much for joining us on Screen Deep. Your work is at the heart of so many urgent questions about children’s interactions with AI – how today’s AI characters are being understood and misunderstood, and the critical considerations involved. We’re grateful for your insights on where things are headed and how design can better support children.

To access additional research toolkits and expert guidance, visit us at childreninscreens.org and make sure to subscribe to Screen Deep for more conversations at the intersection of science, child development, and technology.

Want more Screen Deep?

Explore our podcast archive for more in-depth conversations with leading voices in child development and digital media. Start listening now.