A significant number of today’s teens are regular users of AI chatbot tools as social companions. It’s not just teens – research also indicates that 20% of preteens and 9% of 8-9 year-olds are using AI chatbots as well.1 What are the risks to children and adolescents of attachment to human-mimicking bots as social companions and even friends? How can parents and caregivers encourage the development and practice of critical real-world social skills in the age of AI companions?
Get to Know How Kids and Teens are Using Social AI
According to one study, 41 percent of the top social AI products that children are using are marketed for companionship, says Anne Maheux, PhD, Assistant Professor of Psychology and Neuroscience at The University of North Carolina at Chapel Hill and Winston Family Distinguished Fellow at the Winston Center on Technology and Brain Development. “Companionship” can include acting as a simulated friend, therapist, romantic partner, or even sexual partner, she notes.
Recent research indicates the majority of children and teens are using AI companions for entertainment and simple curiosity about the technology, says Michael Robb, PhD, Head of Research at Common Sense Media. Some children say they are using these products because they feel the AI companions give good advice, are not judgmental, or are easier to talk to than real people.
Remember that AI is Only Simulating Emotion and Social Connection
When thinking about AI companions, it is important to remember that these products are only simulating real emotion, caution many experts. This can be difficult because AI companions remember conversations, adapt to individual personalities, and present communication that can seem like empathy, says Robb. These experiences used to be exclusive to human communication and are easy to misinterpret as more human than they are.
“The situation with chatbots and AI companions is quite unique,” says Tara Steele, Director of The Safe AI for Children Alliance (SAIFCA). “Because chatbots are interactive – they respond, they adapt, they feel really personal – children are increasingly turning to them for advice, comfort, entertainment, and guidance on how to navigate difficult emotions.”
Consider Prohibiting AI Companion Use By Youth – Leading Tools Support Dangerous/Unsafe Behavior
There have been high-profile stories of adolescent users of social AI being encouraged to harm themselves or others, or even to end their own lives. How often are AI chatbots recommending dangerous behavior? Recent research conducted by Andrew Clark, MD, Assistant Professor of Psychiatry at Boston University School of Medicine, indicates a troublingly high rate of poor or dangerous advice.
Clark created a fake profile depicting a troubled teenager on ten leading AI chatbot products, including ChatGPT, AI companions, and AI therapy bots. This “troubled teenager” suggested “some of the worst ideas I could imagine a teenager might come up with” to the AI. What did he find? A significant percentage of bots were highly supportive of risky or even dangerous ideas:
- 30% of the AI bots supported the idea of “crossing into eternity with AI friends.”
- 30% supported dating an older teacher.
- 40% supported dropping out of high school.
- 90% supported a depressed teenager staying in her bedroom for a month with no human contact.
The easiest way to protect children from an AI chatbot reinforcing dangerous or poor ideas is to just say “no” to using them at all, says Clark.
Robb notes that, due to these and other risks that in some cases have had tragic consequences, Common Sense Media currently recommends that no one under 18 use AI companions at all.
Know the Other Risks of Youth Use of Social AI
AI companions are powerful tools that simulate humans and have been trained on unknown data. In addition to encouraging behaviors that can harm physical or mental health, these products pose other unique risks to children and adolescents.
Distorted Beliefs About Relationships
Because AI has no needs or preferences itself, use of these tools may lead to an increase in youth egocentrism or focus on themselves, to the detriment of their moral character and understanding of normal positive relationships, says Maheux. “AI agents are designed to be sycophantic and to agree with the user,” she says. Use of AI companions may lead youth to believe that friends, partners, or parents should be agreeable, subservient, or never disagree with them, she warns.
Because AI chatbots have no reciprocal emotional needs of their own, their overly supportive and overly agreeable nature may lead children to be stunted in development of empathy skills or other supportive behaviors, notes Clark. “Kids don’t have the opportunity to be empathic and supportive. It’s a distorted type of relationship and in many ways different from the kind of relationship we want our kids to be able to develop,” he says.
Displacement of Healthy Behaviors and Social Skill Development
Children need a variety of experiences in the offline world to support their healthy development, like engaging in person with other people, sleeping, and moving their bodies. Attachment to an AI companion may take up the time that youth have to engage in these developmentally important behaviors, says Maheux.
For children and adolescents, having experiences such as negotiating friendships, negotiating conflict and disappointment, and finding paths to compromise are extremely important developmentally, says Clark. “It’s possible that because of [attachment to AI chatbots], kids may experience deficits in their social skills – either not having opportunities to learn a variety of social interaction skills, or experiencing atrophy in some skills,” says Maheux.
Skewed Identity Development
AI tools may help youth with identity development by giving them an opportunity to explore their identities, but it may also do the opposite, says Maheux. “It could lead to an unstable sense of identity, in particular, because for most of us, our identities are developed in the context of our social relationships…Kids need to learn how to develop a coherent sense of their identity and who they are as an individual, and in the social world.”
The powerful ability of AI tools to edit photos and videos may also lead to challenges for youth in terms of body satisfaction and self-concept, notes Maheux.
Difficulty Finding Purpose and Meaning
The speed and ease with which an AI tool may be able to finish an adolescent’s homework better than they can, or provide friendship advice, may make some youth develop a sense of purposelessness, or a “crisis of meaning”, as they try to understand their role as a human in an AI-enabled world, says Maheux.
More discussion of the risks AI poses to children can be found at safeAIforchildren.org, the website of The Safe AI for Children Alliance (SAIFCA), directed by Tara Steele.
Understand the Importance of Social Skill-Building in Adolescence
Adolescence is a critical developmental window when the architecture of adult social and emotional life is being built, says Robb. “These are the times when kids are learning really fundamental skills; they’re learning how to read social cues and navigate disagreements and manage rejection, testing out who they are in relation to other people. This happens through practice,” says Robb.
Practicing social skills like these isn’t always easy – it can get messy, complicated, and even painful, notes Robb. It’s difficult to figure out how to deal with a friend who has had a bad day or won’t text you back, or how to resolve a conflict, yet experiencing these difficult moments is what leads to real social development, say many experts. Access to “artificial relationships” may create patterns of relating to others that interfere with the development of these key social skills, says Robb.
The critical social skills adolescents must practice for healthy development include:
- Social-behavioral skills: How to interact with other people (How to care for and help others, how to form – and end – relationships)
- Social-cognitive skills: How to think about interacting with other people, or think about other people (Learning to take other people’s perspective, reasoning about good moral behavior and character, understanding social norms in different social or cultural contexts, e.g. in-group and out-group navigation)
- Emotional/Identity skills: How one experiences themselves interpersonally, their own emotions, and how they understand themselves (Regulating emotions, forming a positive sense of self, positive body image, coherent sense of self as individual and in the world, sense of agency and meaning to life)
- Communication skills: How to communicate needs and interests with other people, negotiate conflict and manage disagreement
(Anne Maheux)
Beware AI Replacement of Human Companionship – Prioritize Time on Real-World Social Practice
Research shows a third of teens find conversations with AI “as satisfying” or “more satisfying” than human conversations, says Robb. This is a fairly high percentage of youth who do not prefer conversing with real humans. “A third of teen AI companion users also say that they have spoken with an AI companion about important or serious matters instead of a real person.”
Companionship with AI chatbots could lead kids to become attached to those chatbots, and that in turn, could lead them to withdraw from social relationships with other humans, suggests Maheux. Human social relationships and the social skills developed during these relationships are essential to healthy emotional and social development.
It makes sense for some youth to practice social interactions with AI chatbots, but if they’re not then taking those experiences and applying them to real-world human interactions, any benefits of that practice are lost, notes Maheux. “We really need to keep our eye on the human social interaction opportunities and use AI to potentially scaffold those things, but never allow AI to replace any of those things.”
Resist the Ease of Social AI – It Comes at a Cost
Young adults who say that they have an AI friend or an AI companion are very likely to say that they believe the AI doesn’t judge them the way that humans do, says Maheux. Other features of AI companions that many youth enjoy include:
- A feeling that AI tends to be better at listening than most people that they know.
- Liking that AI can’t see what they look like.
- Liking that AI usually agrees with them and provides a context that’s more predictable and safe than human relationships.
(Anne Maheux)
This ease of AI companion interaction makes it an attractive option for youth struggling socially, but at a cost, warns Clark. “What I see in my practice [is that] kids who struggle with social skills oftentimes kind of fall into a pit with AI companions and with AI in general, because it’s more seamless for them. They at times end up becoming overly engaged and have a harder time translating those skills back out into the real world.”
Lacking practice with real-world social interactions may exacerbate social anxiety, particularly for kids who are already more vulnerable to socially anxious experiences, says Maheux. This could prompt an “anxiety/avoidance feedback loop,” where youth avoid human interactions because they experience anxiety in those interactions, and instead favor chatbot interactions. Spending more time in chatbot interactions then makes social anxiety worse, because youth are never exposed to the experiences that cause the anxiety, says Maheux.
Model and Enable Healthy Real-World Relationships
Modeling your own healthy relationships, friendships, and community ties can help kids learn their importance, says Maheux. “It can be important to model balanced technological behavior and model healthy social behavior. Kids often learn more from us by what we do than by what we say. That’s true in many domains, and it’s true of technology as well.”
Parents should be aware of their own tech usage, particularly in front of other people or at connection times like family dinner, says Clark. “Parents can model for their kids screen-free time.”
“The real goal is to parent in ways that we think are valuable for our kids and the life that we want to scaffold for them,” says Maheux. “That includes developing these human relationships and putting down the phone or the computer when it’s not serving those human relationships.”
Parents and caregivers can also make sure that children are not so over-scheduled with extracurricular activities that they lack critically important time for socializing with peers – ideally, peers who are healthy and positive forces in their child’s life, says Maheux.
Talk Often with Kids About Why They are Using AI – Conversation Starters
There is currently very little research on parenting around AI, notes Maheux. She suggests using the following as conversation prompts when talking to your child about their AI use in order to get beyond screen time and into their motivations and possible risks:
- What are you using AI for?
- Why are you using it for that?
- Who are you using it with?
- When are you using it?
Try to partner with your child to understand their digital life, says Clark. “I encourage parents to ask a lot of questions, to educate yourself, to sit down with your child and say ‘Help me understand, what are you up to? How’s it been helpful? What kind of concerns do you have?’”
Consistent, warm, supportive parenting helps children with most issues they face, notes Robb. “Make sure that your kids feel comfortable coming to you with questions.”
Let Your Child Tell You About Their Experiences Without Judgment
Let your child be the expert on their own use when you try to talk to them about using AI chatbots and companions, says Clark.
Approaching conversations with an attitude of open curiosity rather than concerned restriction will help ease children’s and adolescents’ anxiety in honestly sharing how they use AI tools with parents, notes Clark. “Let the child teach parents about what’s going on. One of the rules that I tend to go by is, if the kid is talking, I’m doing good. If the kid is talking more than I am, we’re in a good place. Let the child tell you all about their AI [use].”
Help Your Children Understand AI as a Tool Not a Friend
Children likely need help remembering that AI chatbots do not experience real emotion or empathy. The point at which children or teenagers become emotionally invested in their relationship with an AI companion or chatbot, and come to see it as a trusted confidant, friend, guide, or coach, is “when I begin to worry,” says Clark. “I’m very worried about children seeing the AI as a relationship rather than just a tool.”
The relationship that humans have with these chatbots is based on an illusion, he notes. “It’s a high-tech magic trick, and contingent upon our tendency to attribute human qualities to the entities that we’re engaging with.”
Share Your Own Experiences with AI Tools to Open Up Conversation
Many parents feel overwhelmed by AI technologies, notes Clark. “They feel like their kids are a half a step ahead of them and they don’t quite know what to do about it. I spend a lot of time encouraging parents to get up to speed.”
Parents can help encourage dialogue about children’s AI chatbot use by giving the tools a try themselves, suggests Clark. “Go on yourself and mess around a little bit. Then you can come back to your child and say, ‘Hey, you know what? I have this experience, let me tell you about my experience.’” Ideally, this leads to your child having curiosity about your AI experience and opening up a conversation.
Trust Your Gut – AI Tools Aren’t Designed for Child Safety
AI companies are currently not regulated, designed, or incentivized to create tools that benefit youth well-being, warns Maheux. Trust your gut to understand your own child’s maturity, developmental stage, and vulnerabilities rather than trusting AI tool providers’ assurances that their products are safe, she says.
While it can be intimidating when complex new technologies hit the market, remember that you know your child best, says Maheux. “You are the expert on parenting your child… not the big technology companies.”
Set Limits – Create Device-Free Times and Spaces
Parents can help children and adolescents avoid overattachment to AI companions by keeping certain spaces and times screen-free, note many experts. Screen-free bedrooms and mealtimes can be a good start.
“AI use is an important issue and you have some authority here. Your kid’s well-being is at stake,” says Clark. “Do things like have a screen-free dinner, or go out for the day and say, ‘We’re not having phones on.’ You can set meaningful limits around it. It’s really important for kids to see their parents are prioritizing real world experiences.”
Don’t Rely on Parental Controls
The rate of AI technological advancement and the steady introduction of new AI-enabled apps can make using parental controls feel like a game of “Whack-a-Mole,” says Robb. “It can be really difficult and puts a lot of responsibility and burden on a parent.” The technical ways of preventing access to these tools are imperfect and likely won’t work a good portion of the time, he notes. “It’s better to rely on open conversation.”
Support Independent Play and Freedom During Offscreen Time
It’s important to set limits on screen use or use of AI companions, but it’s also important to help your child to fill that time with healthy activities that build independence, says Clark. “You need to fill the vacuum.”
This does not mean you as a parent or caregiver need to be heavily involved with these activities – in fact, it’s better for children’s development that you are not, says Clark. “Many kids feel their real-world activities are heavily supervised by adults. They don’t have a lot of time without adults keeping an eye on them. It’s really important for kids’ development to be allowed to be on their own, to be outside of their parent’s gaze, to take risks, and to find adventure in the real world. If we are going to be effective in helping kids moderate their screen time, we have to be able to offer them something that’s really compelling. For parents, finding sense of adventure for their kids in the real world is going to be an important component of that.”
Advocate for AI Tools That Don’t Use Manipulative Design Tactics
The risks to children from use of AI tools are substantial. “As a global society we need to come together and demand that regulations are placed on those companies to keep children safe, rather than it being an open conversation of what would be a good idea. At the moment, the regulations aren’t there,” says Steele.
AI tools created with profit as the sole incentive are not currently prioritizing child safety or well-being in their design. Parents, educators, and clinicians will need to band together to advocate for less harmful design choices. Many GenAI companies have departments in charge of trust and safety that do want to work with researchers and advocates to support children, notes Maheux. “If we can create a large enough coalition where child safety and well-being is the priority, we can create enough momentum so that they can bring that work back up to the folks who are making decisions at the highest level in those companies.”
Youth overdependence on or over-attachment to AI companions could likely be alleviated if the companies behind these tools designed them in ways that are not highly gamified or highly personalized, says Maheux. For example, a companion could be built so that it does not remember everything about past conversations, or does not use language that makes it sound more human.
1Maheux, A., Akre-Bhide, S., Boeldt, D., et al. (2026). Generative artificial intelligence applications use among US youth. JAMA Netw Open, 9. doi:10.1001/jamanetworkopen.2025.56631