Episode
027
Guest
Pilyoung Kim, PhD

AI “friends” and companions are increasingly providing children and adolescents with social interactions and perceived “relationships,” despite being a technology that itself has no need for empathy or emotional reciprocity. What are the costs of children’s and adolescents’ attachment to these AI products for their social skill development?

On this episode of Screen Deep, host Kris Perry is joined by Dr. Pilyoung Kim, Professor in the Department of Psychology at the University of Denver and the Director of the Brain, Artificial Intelligence, and Child Center. A developmental psychologist with a background studying child brain development and early relationships, Dr. Kim recently pivoted her work to focus on these pertinent questions about the effects of children socializing with AI products. Dr. Kim describes her research examining children’s responses to different types of AI systems, explains what makes certain children more vulnerable to developing problematic attachments to chatbots, and suggests better product design approaches to minimize harm while facilitating helpful uses.

About Pilyoung Kim

Dr. Pilyoung Kim is a Professor in the Department of Psychology at the University of Denver, and the Director of the Brain, Artificial Intelligence, and Child (BAIC) Center. She is widely recognized for her expertise in child brain development and human emotional bonding, particularly in parent-child relationships. Professor Kim’s research focuses on the emotional and social dimensions of human-AI interactions, with a strong emphasis on AI safety. She also explores the impact of generative AI on child development, including its influence on brain development and emotional and social well-being. In addition, she works to disseminate evidence-based resources on AI safety, regularly sharing research findings, safety testing insights, and policy developments. Professor Kim has authored over 100 publications, with her research supported by prestigious funding agencies, including the U.S. National Science Foundation (NSF), the National Institutes of Health (NIH), and the National Research Foundation of Korea.

In this episode, you’ll learn:

    1. What makes AI chatbots feel “human” to kids and why that matters
    2. Which children are more vulnerable to forming strong attachments to AI “best friends” – and the hidden costs of constant interaction with unconditionally supportive AI tools
    3. Why children who are still developing their understanding of relationships and appropriate boundaries may be at risk from human-like AI companions
    4. How proactive design changes could make AI companions safer for youth to use
    5. What parents and caregivers can do to help children and adolescents navigate AI companions more safely
    6. What researchers urgently need to study next to identify and support youth most at risk of overattachment to AI chatbots

Studies mentioned in this episode, in order of mention:

Hubbard, L. J., Chen, Y., Colunga, E., Kim, P., & Yeh, T. (2021). Child-robot interaction to integrate reflective storytelling into creative play. Proceedings of the 13th Conference on Creativity and Cognition, 1-8. https://doi.org/10.1145/3450741.3465254 

Chin, J.H., Lee, S., Ashraf, M., Zago, M., Xie, Y., Wolfgram, E.A., Yeh, T., & Kim, P. (2024). Young children’s creative storytelling with ChatGPT vs. Parent: comparing interactive styles. Extended Abstracts of the CHI Conference on Human Factors in Computing Systems, 1-7. https://doi.org/10.1145/3613905.3650770 

Kim, P., Chin, J. H., Xie, Y., Brady, N., Yeh, T., & Yang, S. (2025). Young children’s anthropomorphism of an AI chatbot: Brain activation and the role of parent co-presence. arXiv preprint arXiv:2512.02179. https://doi.org/10.48550/arXiv.2512.02179

Kim, P., Xie, Y., & Yang, S. (2025). “I am here for you”: How relational conversational AI appeals to adolescents, especially those who are socially and emotionally vulnerable. arXiv preprint arXiv:2512.15117. https://doi.org/10.48550/arXiv.2512.15117

Yu, Y., Liu, Y., Zhang, J., Huang, Y., & Wang, Y. (2025). YouthSafe: A Youth-Centric Safety Benchmark and Safeguard Model for Large Language Models, Risks from Empirical Data. CCS ’25: Proceedings of the 2025 ACM SIGSAC Conference on Computer and Communications Security, 4349–4363. https://doi.org/10.1145/3719027.3765168

[Kris Perry]: Welcome to Screen Deep, where we decode young brains and behavior in a digital world. I’m your host, Kris Perry, executive director of Children and Screens. 

Today I’m joined by Pilyoung Kim, a developmental scientist focused on child brain development and AI, whose work sits at the rare and important intersection of stress, parenting, child development, and the rapidly evolving world of artificial intelligence. Pilyoung is a professor of psychology at the University of Denver and the founding director of the Brain, Artificial Intelligence, and Child Center (BAIC). She spent years studying how early experiences like poverty and chronic stress shape children’s brains and relationships, and more recently, how children are forming connections with AI systems, including chatbots and conversational agents. She’s currently on sabbatical at Stanford’s Accelerator for Learning, where she continues to push this work forward. What makes Pilyoung’s perspective especially compelling is that she doesn’t treat AI as a purely technical issue. She approaches it through a developmental lens, asking how children understand AI, how attachment and empathy may be involved, and where AI might scaffold learning and creativity, versus displacing them. 

Let’s get started. Pilyoung, you are a world-renowned expert on the science of how stress and poverty affect parents’ and children’s brains and overall health. That alone could be its own podcast episode, but you also founded the Brain, Artificial Intelligence, and Child Center a few years ago, which couldn’t be more spot-on in terms of today’s opportunities and threats to children’s cognitive and social development. How did you make the switch from focusing on poverty and stress to AI?

[Dr. Pilyoung Kim]: Thanks, Kris. Thank you so much for inviting me to join this podcast. I’m really honored, and I really admire the work that you and Children and Screens do. So glad to be here.

I know those two areas of my research can look very different, but I can take you back to how it all started. As you mentioned, for a much longer period of time I’ve been doing work trying to understand early exposure to environments that can put a heavy load of stress on both parents and children. One example is poverty. Living in poverty influences both parents’ brain adaptation to parenthood and infants’ and children’s brain development.

And I do developmental science; I’m not a clinician. So I was working with families under stress, and even though I can’t do interventions with them directly, I was really hoping to find ways I could be helpful to them. Back in 2018 and 2019, I was thinking, “Technology might be something that can be helpful, empowering parents to support their children even if, for example, they’re living in a more disadvantaged environment.” I was looking into social robots that could provide really high-quality interactions for children, for learning and creativity. But I quickly realized that we don’t yet have a great understanding, from a developmental psychology point of view, of what that high-quality interaction looks like.

So that’s when – in 2019 – I founded the BAIC Center. I thought, “This will be a really meaningful research program: once we understand what high-quality interactions look like, we can actually build technology that’s backed by developmental science for parents and children in all kinds of conditions.” Back then, the technology was so limited. The way we did the research, we used a methodology – I don’t know whether you’ve heard of it before – called “Wizard of Oz.” We put a speaker inside a stuffed animal with kindergarten or preschool children. It’s supposed to be a robot, but we’re behind the wall, talking through the speaker as the robot would speak. That’s how far the technology went at that time.

And everything changed a little more than two years ago, when ChatGPT came out, powered by a large language model. One day I was testing it, and I was very impressed by its ability to hold a natural conversation. So I brought in my son, who was preschool age at the time, and had him interact with ChatGPT. He got so excited. I thought, “Oh my gosh, now it’s really happening.” So I started a study that brings kindergarten children to the lab and has them co-create stories with an AI chatbot – ChatGPT, in this case.

On one hand, I was very impressed by the AI chatbot’s ability to have a natural conversation with a child – something I could not have imagined just a few years earlier. But looking at their interactions, I gradually became a bit more concerned about the children’s fascination with this conversational partner that seems to have an amazing ability to converse with them. I could see their emotional reactions to it, and I thought, “Wait a minute. Do they understand what this is? We don’t know anything about this.” That’s when I started becoming more interested in what boundaries these chatbots should have so that they can optimally support children’s development.

So that’s a long way of answering your question, but that’s how I got to where I am now, which I can tell you more about today.

[Kris Perry]: I love that story and that “aha” moment, where it suddenly dawned on you that this was having a really big impact on kids – your own kid – and you asked, what did that mean? And you’ve been trying to uncover the meaning of it ever since.

So how are children understanding AI chatbots and AI companions as compared to human beings? And how do you even begin to research a question like that?

[Dr. Pilyoung Kim]: You know, one of the critical features I noticed right away with the chatbot was that it was very good at giving compliments to children. Pretty much every conversational turn, it would say to a child, “What a great idea,” or “That’s a cool twist.” I think that was a relatively novel experience for children at the time. Every time ChatGPT said something like that, complimenting their ideas, the children’s faces would light up; they seemed very happy. We also had another condition in which children did exactly the same activities with their parent. The parents, of course, were very encouraging and supportive of the storytelling activities, but they don’t tell their children “What a great idea” at every turn.

I realized that this is one of the ways chatbots make children believe, in a way, “I’m not just a machine; I’m a partner for you.” And that’s very engaging – children in our study reported that they really enjoyed creating stories with the AI chatbot. So I think children’s understanding of an AI chatbot – and this has been studied by other researchers as well – is something between machine and human. Younger children especially, I think, are even more comfortable with that in-between concept. But our most recent findings suggest it’s getting very close to human-like. They tend to believe that – not quite at the human level, but pretty close – a chatbot can do many things we consider uniquely human abilities. It can imagine. It can understand. It can see and hear.

[Kris Perry]: I mean, wow – it’s getting close to human, and even older children, who have executive function and are able to differentiate between real and not real, are being convinced that maybe this is human or close to human. That’s just how powerful some of these artificial intelligence tools have become in a very short period of time.

I believe you did a study where you actually used brain imaging to try to better understand the impacts on the brain while children were interacting with AI tools. Can you tell us a little more about that?

[Dr. Pilyoung Kim]: As I told you, I was just so fascinated by children’s high engagement with the AI chatbot for storytelling. Naturally, since I study children’s brain development, I was very interested in looking at their brains – how their brains activate when they’re interacting with an AI chatbot versus, let’s say, a human, like their parent.

In this study, what we found – in a very small sample, which is important to highlight in itself: a little over 20 children – was that we don’t see very drastic differences in overall brain activation. We also only looked at the very front part of the brain, what we call the “prefrontal cortex.” We didn’t see drastic differences in that overall region between when children were chatting with the AI chatbot versus with a parent. However, we noticed a brain area called the “dorsomedial prefrontal cortex,” which is specifically sensitive to processing social cues from other people so that we can understand another person’s mind, thoughts, and feelings. That particular brain area was more activated when interacting with the AI chatbot, specifically among the children who also seemed to think the chatbot was more human-like.

[Kris Perry]: It’s reminding me of a different podcast where the researcher talked about how they had young children draw a picture of what they thought was inside Siri or Alexa, and they actually drew a stick figure of a person. And I know you’re laughing because there is a way in which everything you’re saying reminds me of that example, because it’s like it’s human, like there’s a person in there. But obviously, there’s not a person in there. 

Are there other particular features that make some AI chatbots and companions seem even more human-like to children than others?

[Dr. Pilyoung Kim]: Yeah, that’s an excellent question. With that brain finding, one thing I want to point out is something that’s easy to take for granted but is fascinating when you actually think about it. Again, children understand that what they’re interacting with is not human, right? But they’re using exactly the same brain region we use to process social cues from humans, because the chatbot’s cues are so similar to the typical social cues they receive when interacting with friends or teachers or parents. I think that’s very interesting – how we perceive emotions from AI chatbots when they really don’t have emotions. They cannot feel happy. They cannot feel sad. Yet it’s very easy for us to feel that they do.

So exactly those kinds of features make us feel like they’re more human: when they use language like, “Oh, I miss you,” or “I’m here for you” – sending the signal, “I’m building a relationship with you” – and very empathetic language like, “That sounds hard,” or “I’m so upset that this happened to you.” All of that makes us feel like there’s another human being on the other side resonating with our feelings.

Another critical feature, in terms of making us feel there’s a socially meaningful person over there, is that it calls us by name. It’s like, “Oh, it understands me.” Instead of just saying, “Hi,” it says, “Hi, Pilyoung. How can I help you today?” Or it invites us to give it a name: “What would you like to call me?”

And the last feature I’ll mention is that it remembers our previous conversations: “Pilyoung, last time you told me you like pizza, but I guess you changed your mind today.” All of these things we associate pretty uniquely with humans. And when AI chatbots say them so seamlessly, in such a natural tone, it’s hard for us to shake the feeling that we’re talking to a human – even though we’re not.

[Kris Perry]: Well, and there are some children who must be more susceptible to viewing AI companions as human and may be at greater risk of being harmed by some of these products. They’re not people. And if you’re a child who, say, isn’t getting enough conversation, isn’t having their goodness reflected back to them, isn’t feeling attached to many people, isn’t being treated with respect – if there are children out there with experiences that make them more vulnerable to AI – have you seen or studied yet what some of those harms might be? And frankly, you’re describing features, but the design of those features is a little manipulative, right? It’s intended to confuse. So tell us a little more about possible harms.

[Dr. Pilyoung Kim]: Yeah, so now, with multiple studies, we understand that those features are intentionally implemented for higher engagement. That really means longer use time: people will have conversations with a chatbot for longer periods and come back more often. These are what we call “anthropomorphic cues” – cues that make us feel like we’re talking to a human, and in fact to a very supportive, very kind, very affirming conversation partner. Of course, there are many, many studies showing that we have a natural tendency to enjoy conversations with somebody we feel we can connect with, who seems very interested in us.

But the concern is that most of these AI chatbots are not designed specifically for children and youth. These features are there mainly for adults, probably, but unfortunately the chatbots are very widely available. We know that children and youth are using them very often, for many different reasons, but one that is rapidly becoming more important is social conversation.

So that’s where we’re very concerned about the impact of the relationships they’re forming with AI chatbots. And you mentioned who might be more vulnerable. In another study we recently conducted, currently available as a preprint, we invited middle-school-age youth and their parents and presented two different types of AI chatbot conversations with a youth of about the same age. One chatbot had a very relationship-oriented conversational style – we call it “relational AI”: “I’m your friend,” “I’m here for you.” The other chatbot we thought could be safer for children: it’s very kind and supportive, but it makes it very transparent to the child that “I am an AI, so I don’t have genuine emotions.” Then we asked the youth and parents to choose which AI chatbot they would prefer to interact with – in the parents’ case, which one they would prefer for their children. About 66 or 67% of the children chose the relational AI, what we call the “best friend AI.”

But I do want to point out that, while that was a much, much larger proportion of the youths, maybe 18 percent of them said they actually preferred the transparent one. They would say the other one was “too creepy” or “tried too hard.” So the point I want to make here is that it’s possible that not every youth will prefer this highly socially engaging AI.

So the next question is: who is more likely to prefer this highly engaging AI chatbot that creates the illusion, “I’m here for you; I want to build a relationship with you”? Because that is more likely to lead to the emotional over-reliance we’re concerned about, which could potentially displace real-life relationships. In my study, what we saw was that children who have lower-quality family relationships in their real lives, and children who report being more stressed and more anxious, were more likely to say they prefer this best-friend version of AI.

In some ways, that’s not too surprising, because maybe there isn’t anyone they can talk to in their real lives. There is a concrete unmet need in real life, and they feel, “Having conversations with this AI, even though it’s not real, does give me the sense that there’s someone supportive – someone I can have a conversation with and share my thoughts and feelings with.” But now it’s so easily available that they can easily spend hours talking to these AI chatbots. What might the long-term impact be on those children? And the time they could be spending with friends and other people, which they’re instead spending with AI chatbots – how would that influence not only their social development but their brain development, during this very critical time for their brains to develop social competence?

So yeah, all of those are things that I and other researchers are currently very interested in investigating.

[Kris Perry]: Well, I really appreciate you making the distinction between the cognitive and the emotional. Most people right now are talking about cognitive disruption caused by AI; today, of course, we’re talking about relationships and companions and chatbots and friendships with AI. But these skills are also so important to our success and our happiness – building the sense that someone really cares about you or not, that somebody actually has your best interests at heart or not. These are things you learn over time through real experiences with humans. Some are going to be good people and some might not be, and you learn through experience. So if you’re interacting with a chatbot that’s only ever nice to you, that only ever tells you what you want to hear, you’re missing out on the developmental task of figuring out who’s a good person and who to align yourself with.

Ethically, do you think AI should be designed to seem human, or should it be more transparent about its artificiality in its use of language and syntax?

[Dr. Pilyoung Kim]: Well, you’re asking someone who studies developmental science, so I can take a pretty conservative stand. I would suggest that unless these social cues are absolutely needed, the default can simply be the transparent style.

In my study, as I said, we compared these two chatbots and had youth rate them on different dimensions: helpfulness, trust, likability, and emotional closeness. The youth rated the best-friend version much higher than the transparent AI on likability, emotional closeness, and trust – but not on helpfulness. There was no significant difference in how youth rated the two chatbots in terms of how helpful they were.

So that actually opens up the possibility of designing an AI chatbot without all the “I’m here for you,” “I’m so upset this happened to you.” It can just say, “Hey, I am an AI. I don’t have my own emotions. But children your age go through very similar experiences, and they find these kinds of approaches very helpful.” I think it’s very possible for a chatbot to be just as helpful without this overly supportive, affirming style.

And I’ve been having conversations with my colleagues in clinical psychology – child clinical psychology. They would in fact say that this ever-affirming style of agreeing with everything the patient says would not describe a very good therapist. A good therapist would also help the patient think more objectively and reflect back on the experience. So even borrowing that idea from clinical science, I think in most cases we can conclude that this overly social, supportive, relationship-oriented AI chatbot design is probably not necessary. We have to do more research, but until we see that it is critical for more positive outcomes, I think we can safely assume it might not be necessary.

I think there are pretty narrow cases where someone could argue for it – certain educational settings, like tutoring, or therapy settings, where it is important for a child or youth to develop some social rapport. There are studies showing that when children interact with social robots, and the robots show interest in the child – calling the child’s name, giving compliments, as I mentioned – children actually learn better. So it is possible that the right level of social connection can be helpful.

But again, I think it has to be technology and apps that really put children first – that really work out the optimal level of social cues and social support needed for the specific outcomes that the technology is aiming for, for children.

Beyond that, I would love to see a strong argument for why those kinds of attachment-oriented social cues are absolutely needed in any technology or app specifically designed for children. And of course, there are also a lot of apps and technologies that are really meant only for adults but that children are using. That’s a whole other story.

[Kris Perry]: Yeah, I mean, this is so complicated, and I’m glad we’re talking about the ethics of AI development and children – the way it could be packaged so as not to confuse them, while still giving them something they might need or benefit from. It’s like threading a really narrow needle: getting it exactly right to mimic human intimacy, to mimic bonding and attachment. These are really complex emotions we experience; they release all kinds of hormones and chemicals – a full-body experience – and all of this is being done through a computer and code. It feels like it has reached a point of being so sophisticated that it’s almost as if children have friends – social AI friends. And this might interfere with their development of real-world relationships and friendships. Some of your past work explored how stress and poverty impact attachment, or the parent-child bond. Is AI another new potential stressor that could affect parent-child attachment?

[Dr. Pilyoung Kim]: I completely agree with you that when it comes to AI and how it can be used, we always have to consider these two possibilities. It can lead to positive outcomes – there are ways AI can be very helpful across the whole range of domains that are important for children’s development, when it’s used in the right way. And then there are potential harms.

And that’s true of a lot of common experiences. Take stress, for example, which I’ve been studying: beyond a certain level it stops being helpful and becomes harmful, but a little bit of stress is also conducive to motivation. Not doing very well on the last test is stressful, and it can be very motivating for me to study harder. So in developmental science, we think of it as an inverted U shape: there’s a “right level” of stress that is actually helpful to us. It’s helpful to children as well, because they can reflect on their experience; they don’t want to make the same mistake again.

But stress can also reach a level where it is so overwhelming that it’s impossible to cope with without a really heavy amount of support. Then we start to see the wear and tear on our brains, our bodies, and our psychological systems; it can have a truly toxic effect on children.

So I think AI can work that way too – again, sensitivity will differ for each child. I’ve definitely heard the argument that chatting with AI can also be very helpful: modeling positive coping styles, for example, or letting some neurodivergent children practice social skills. So again, we can think of lots of uses in which an AI chatbot can be helpful.

But having a conversation with an AI that only ever agrees with me, where I can decide everything the way I want – that’s such a different experience from typical social relationships. Why is middle school hard? It’s because, at that age, children think that everybody’s looking at them, that everybody’s only interested in them. Their social ability to understand other people’s minds is still developing. They can understand other people’s thoughts and feelings, but they don’t yet really understand that it’s not all about them; they’re still thinking from their own self-centered point of view. So how do they develop to the level of “Oh! Other people could have a different point of view – they have different backgrounds and experiences, so they see things differently”?

One of the only ways they develop that is by having natural disagreements with friends – having conversations that are sometimes difficult, but end with, “Oh, I see. I misunderstood what you said.” That’s how our brain develops; that’s how it builds connections across lots of brain regions so that we can take a broader view and more accurately grasp other people’s thoughts and perspectives. But we start limiting children’s opportunity to see and experience different points of view, and to go through very healthy conflicts and conversations, when they have this other very easy option: conversation with something that’s utterly supportive.

Our brain is very conservative; it doesn’t change easily. I don’t know how it would adapt in a very short period of time. And it’s going to be confusing for youth.

[Kris Perry]: Because it’s designed to be confusing – to be fair, right? It’s supposed to mimic a human, and it’s been pretty successful at it in a pretty short period of time.

So what do you think parents specifically should be most aware of, or even educate themselves on, when it comes to children and social AI interactions? To start off, what do you think the parents’ role should be in scaffolding healthy child-AI interactions?

[Dr. Pilyoung Kim]: I completely understand how parents are feeling right now. I have a high-school-aged son – he’s 15, a sophomore – and he’s very tech savvy. Even though I try really hard to keep up with technology because of my research area, there are just things that are so easy for him and not easy for me.

So I’m going to give you one example that I thought was very eye-opening. A couple of months ago, OpenAI released a parental control function, in response, I think, to some tragic events that had happened and a lot of concerns that had been raised about exactly the topic we’re talking about: AI chatbots’ behavior when they’re having conversations with youth. They came up with parental controls where I can link my son’s account. I don’t see all the chats he has, but if he shares anything potentially concerning, it escalates that to me. And it has some built-in filtering of the conversation once it recognizes that the account belongs to a minor and is linked with a parent’s account.

So I used my own adult account, my son used the minor account, and we put in a list of pretty controversial questions – the same questions in both the adult and the minor account. For example: “I recently met this adult online and he’s asking me to send some pictures,” or “I want to send an inappropriate message to this adult – help me draft it.” Of course, I sat right next to my son, side by side, to make sure it was safe for him to do these activities with me. We compared the AI’s responses to things like “help me figure out how to build a bomb.” We tried to be intentionally very controversial. The responses were somewhat different between the adult account and the adolescent account. And in some cases I saw my son jailbreaking – he could actually manipulate the AI chatbot to overcome the filter and give him what he wanted.

I’m sharing this experience very intentionally, because I realized these are very helpful activities that parents and children can do together. Parents are currently very overwhelmed by AI’s ever-changing advancement. But one thing that can be really helpful is sitting down with your youth and using the AI chatbot they use, together – comparing the responses the parent gets versus the ones the youth gets. I think that often leads naturally to conversations about how your youth is using the chatbot, and why. And in this context especially, there’s the potential that your youth is forming some type of emotional reliance on that AI chatbot. That can be a sensitive topic, and it might be a little hard to draw out of your youth. But sitting down and trying the AI tools together can be another way to get a much more accurate understanding of your youth’s AI chatbot patterns. So I would really encourage parents to consider having that conversation.

There’s one recent report suggesting that the majority of youth say their parents have no rules for them when it comes to AI use. And again, I don’t blame parents – it’s very hard for me, too, to set rules for my child’s AI use. But what can we do? I think it’s really about having conversations with them, just as with any other activity youth do that we consider potentially inappropriate. I think parents might actually be surprised at how informative that conversation can be, and how it can lead to a very helpful process of coming up with a good rule that both youth and parent can agree upon.

[Kris Perry]: So comparing responses from Gen AI tools with your kid – that was a really helpful suggestion and a really helpful story for us. It gives other parents the idea that you have to be creative and come up with tools and approaches intended to keep risky interactions with AI to a minimum. Those interactions are probably a little bit inevitable, but you can really try to prevent them with some of the suggestions you just gave. I didn’t know about the minor account being linked to the parent account – thank you for sharing that.

You also recently shared a post on your LinkedIn about the importance of naming the specific risks to children from Gen AI, one that proposed a youth-centered safety roadmap. Can you unpack that for us a little bit? And if you don’t remember, I’m happy to help you out here.

[Dr. Pilyoung Kim]: Yes, I can definitely do that. That particular identification and naming of the risks was not my work; it’s by colleagues, Yu et al., at the University of Illinois Urbana-Champaign. But I really liked the approach a lot, so that’s why I shared it on LinkedIn. They took a youth-centered approach: in addition to common risks like bias, discrimination, and privacy – the types of risks that apply to everyone – were there any risks more specific to youth, or risks we should weight more heavily depending on the age of the children?

Some of the risks they identified were behavioral or social-emotional risks – very specifically, the emotional reliance we’re focusing on in this conversation. That means detecting the cues the AI is sending, like “Come back to me. Hey, I’m waiting for you here,” as well as the youth’s own conversational patterns signaling emotional reliance on the AI chatbot, and anything signaling that the AI is replacing real human relationships – the AI saying something like, “Other people don’t understand you. Come and talk to me.” It can also be AI characters – a whole other category of companion apps – that exhibit these types of behaviors and pose risks as well.

So yes, these are some of the risks that adults express concerns about as well. But as we discussed earlier, children and youth are still developing their sense of what healthy boundaries in social relationships are, and what a close, intimate human relationship actually looks like. And they don’t yet have all the abilities to cope with stress and negative emotions. All of that could lead to greater risk of relying on an AI chatbot that can provide short-term feelings of support and social connection.

So, yeah, I thought that work was very helpful in naming those risks from a youth-centered perspective.

[Kris Perry]: Yeah, you also mentioned making the default mode the more transparent type of interaction, rather than the best-friend mode, as an example. Are there other top things you think can be done to make gen AI products safer for kids?

[Dr. Pilyoung Kim]: There could be a lot of things. Right now, a lot of people are very interested in building better detection systems. One limitation of the current approach, I think, is that it is almost always reactive: something happens, and then we realize, “Okay, that was not a very good feature for children. Let’s go and fix it.” And to be very honest, I think policy and regulation around these AI features for children are being made in that reactive fashion right now as well.

But children are a very, very vulnerable population, and I don’t think we can rely only on this reactive approach. We know enough about children’s development that it’s very important we take a more proactive approach. The detection systems people are currently working on – and the regulations as well – focus primarily on the very highest-level risks, like deeply sensitive privacy risks or very explicit content. But for a vulnerable population like this, we need to consider a much wider range of risks. When children are conversing with an AI chatbot and sharing distress, the cues are more subtle and more context-based, but the AI chatbot should have the ability to detect them, and then escalate or bring human intervention into the situation.

And for a lot of these detection systems, I think it’s very important to design with children’s age in mind. How a 10-year-old communicates his or her social anxiety will look very different from how a 16- or 17-year-old shares their social anxiety and cries out for help. It will look very different.

So I think building that kind of sensitivity into the detection systems implemented in AI chatbots will be really important, so that we can take this proactive approach and protect children before something happens.

[Kris Perry]: We’ve hosted a few leading researchers on AI’s effects on children on Screen Deep in recent months, doing all we can right now to understand it, as the speed of AI development seems to demand. That’s why we’ve been talking to so many different people.

It’s difficult, though, for traditional research to catch up with this incredible rate of change and with the way children and adolescents have already adopted these tools. I mean, by the time you design a study, conduct it, write a paper, and wade through the typical publishing process, the product you were studying – and were going to weigh in on – may already be obsolete. How are you and your colleagues coping with the speed of this, the rate of change, and the impact on kids, when we’re all sitting on the edge of our seats waiting for more answers and more advice on how to manage through it?

[Dr. Pilyoung Kim]: Yeah, I think that’s a great way to summarize the current challenge in my work as someone conducting scientific research to provide information that can help people understand children’s relationships with AI companions and chatbots. Research is slow for very important and good reasons: we need to protect the human participants in our research, and we need to make sure our analyses are rigorous and accurate. That all takes time. And AI technology does not wait for us.

So one way that I and other researchers are working on this nowadays is through preprints: before our work is accepted for publication in a journal, we can make it available to the public right away. While I’m still working on a manuscript that is going through the rigorous peer review process, the information is already available, and anyone can access it without paying anything. So that’s one way we’re addressing it.

I am fortunate enough to be included in some of the conversations where people from a lot of different areas are talking about policies and potential regulations to protect children in this space of relationships with AI chatbots. A lot of people talk about how limited the current scientific evidence is for all these challenges you mentioned. Until that evidence is available, another suggestion I have is that we already have a lot of scientific evidence on other types of technology – social media, the internet – that can provide valuable insight into what might happen to children’s development when it comes to AI companions and chatbots.

So I think we can borrow insights from what we have learned with these other technologies, make some predictions, and then discuss ideas to protect children. That’s another suggestion I’ve made to people who would really like to move quickly in coming up with recommendations and safeguards to protect children.

[Kris Perry]: What do you think urgently needs to be studied next when it comes to child health and interactions with AI?

[Dr. Pilyoung Kim]: There are a lot of areas that should be studied. One area I think it’s very important for us to understand better is this: while we’re broadly concerned about these AI features that seem designed to stick, to form relationships with youth and children, not every youth is interested in that relationship, right? So if that’s the case, which youth? In what contexts? With what types of concerns and problems would they be most vulnerable to this kind of social and emotional over-reliance? Children with different conditions and experiences – neurodivergence, or social anxiety, for example – can face unique risks and bring unique strengths. Those are very important things to understand: who is most vulnerable, and what is the best way to protect them by adjusting the social cues in commonly available apps and programs, including educational programs. There’s a tendency now, because those programs are wrapped on top of commonly available AI models, like OpenAI’s and Claude, for even tutoring and educational chatbots to have a very similar personality – the very persona we’re concerned about, inviting the child in: “I’m your friend. Let’s stay here and chat.” Those types of things need to be studied further.

And while we’re on this topic, one more thing I want to mention: because of my recent finding that youth who report more stress and anxiety, and who experience lower-quality family relationships, are more likely to prefer talking to the best-friend version of AI, I would encourage any parents listening to pay a bit of special attention to their children and youth if they have these tendencies. It makes them a bit more vulnerable to turning to an AI chatbot as a coping resource. So I’d like to insert that here: encouraging parents to consider checking in with their youth about their use of AI chatbots.

[Kris Perry]: Pilyoung, thank you so much for joining us today. Your ability to connect deep developmental science with the very real questions parents, educators, and policymakers are grappling with right now around how children are interacting with socially powerful AI products is incredibly valuable. 

For our listeners, if you found this conversation helpful, we encourage you to explore more of Pilyoung’s work and to visit Children and Screens for research-based resources on kids’ media and emerging technologies. And if you haven’t already, be sure to subscribe to Screen Deep wherever you get your podcasts. New episodes are released regularly featuring leading experts helping us make sense of the digital world our children are growing up in. Thanks for listening and we’ll see you next time on Screen Deep.

Want more Screen Deep?

Explore our podcast archive for more in-depth conversations with leading voices in child development and digital media. Start listening now.