The introduction of free, publicly available generative artificial intelligence technologies (aka Gen AI) like ChatGPT has spurred experimentation and use of these powerful tools by curious users of all ages. While many have found generative AI tools useful for certain tasks and as productivity time-savers, there is significant concern about the implications of children using generative AI during critical periods of social learning and skill development, as well as about deployment of these technologies without a strong body of research into their potential risks.

Children and Screens convened a panel of researchers, child development specialists, and policymakers for a webinar on the emergence of generative AI technology and the risks and opportunities it poses for children and families.

What is new about generative AI?

The idea of artificial intelligence is not new. Since 1950, when Alan Turing was weighing the prospects of computers performing feats of human intelligence, the field of artificial intelligence has matured, aided by developments in computer hardware and programming models. Many common functions of today’s internet rely on AI, including online search and Google Translate.

Naomi Baron, PhD, Professor Emerita of Linguistics, American University, explains that the major leap in the last few years has been the emergence of a programming scheme called the generative pretrained transformer (GPT), coupled with large language models trained on massive collections of text. Those models are used to predict what the next word in a piece of writing is likely to be. While this approach was developed for producing (“generating”) new written language, the same principle underlies generating images and computer code.
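
To make the prediction idea concrete, the toy Python sketch below counts which word most often follows each word in a tiny sample sentence, then uses those counts to guess a continuation. This is only an illustration: real large language models rely on neural networks trained on vastly larger text collections, and the sample text and names here are invented for the example.

```python
from collections import Counter, defaultdict

# Toy next-word predictor: tally which word follows each word in a sample
# text, then "predict" the most frequent follower. Large language models do
# something far more sophisticated, but the core task is the same.
corpus = "the cat sat on the mat and the cat slept on the mat".split()

following = defaultdict(Counter)
for current_word, next_word in zip(corpus, corpus[1:]):
    following[current_word][next_word] += 1

def predict_next(word):
    """Return the word most often seen after `word`, or None if unseen."""
    candidates = following.get(word)
    return candidates.most_common(1)[0][0] if candidates else None

print(predict_next("the"))  # "cat" (ties broken by first occurrence)
print(predict_next("sat"))  # "on"
```

Even this miniature version shows why generative AI mirrors its training data: it can only predict word sequences resembling those it has already seen.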

Generative AI existed before the release of ChatGPT in November 2022, but relatively few people were aware of it. The difference with ChatGPT was that it went viral, and within weeks, students began experimenting. Since then, educators and parents have been grappling with how to respond. Does use of generative AI lead children to cheat on school essays? To become more creative? As the number of generative AI tools multiplies, and as the tools become increasingly sophisticated, the challenges only increase.

How are children using AI?

In some countries, Generation Z youth are driving the adoption of generative AI, according to Steven Vosloo, Digital Foresight and Policy Specialist at UNICEF. Vosloo cites research from Ofcom (the UK’s Office of Communications) indicating that 80% of online teens ages 13-17 are now using generative AI tools, as are 40% of 7-12 year olds.

In what ways are youth using AI technologies? “Researchers have been observing children in their homes and schools and have found that children primarily interact with AI in two different ways. One is to ask specific fact-based questions to gather information. The other one is less common, but children sometimes engage in personal or social-oriented conversations with AI,” says Ying Xu, PhD, Assistant Professor of Learning Sciences and Technology, University of Michigan.

Uses – and sometimes misuses – of AI vary with age, as well as with whether individuals are employing AI for personal exploration or for school assignments. “When most people talk about generative AI, they are thinking of chatbots that can produce new text or images. However, the same underlying programming scheme now also drives familiar writing functions like spellcheck, grammar and style programs, and predictive texting,” says Baron. A survey of high school students in the US suggests much higher usage of generative AI for personal purposes than for schoolwork.

What are the risks to children from AI?

Overtrust in AI-provided Information

Tracy Pizzo Frey, MBA, Senior Advisor at Common Sense Media, cautions adults and children alike to remember what AI is in essence – “It’s math that trains computers to do tasks that have been programmed with super-specific rules. While this technology is exciting and it’s powerful, it’s not perfect,” she says.

Children use comparable strategies when judging the reliability of AI as they do with humans, says Xu, basing their judgment on whether the informant has provided accurate information in the past as well as on the level of expertise they perceive their source to have. “However, it appears that some children may be better at utilizing the strategies to calibrate the trust than others,” she says. Youth with more background knowledge of the topic under discussion, or a more sophisticated understanding of how AI works, will be better at making these judgments. Conversely, children with lower AI literacy may be prone to trusting information received from AI without critically evaluating its quality.

Bias Reinforcement

Gen AI technologies generate responses based on the data they have been trained on – and can therefore reinforce certain cultural biases. Vosloo cites one example: “If you go to many of the text-to-image generators and you say ‘Give me an image of a CEO,’ you’ll get a white male. And that’s because of the data that it’s been trained on, which is obviously deeply exclusionary for young people of color, or for girls,” he says.

Similarly, much of the information used for training AI systems has come from historically wealthier, more developed nations, leaving out content, representation, and worldviews from many other countries and cultures. Pizzo Frey notes that “no product is values or morals neutral,” including AI, yet the nature of AI obscures the underlying values and morals contained in the data used to train the technology.

Research has shown that unintentional bias in the large language models children use to write essays shapes the positions they take in those essays, says Vosloo. Over time, this effect could drive broader changes in how children see the world around them.

Reshaped Social Skills

“Children develop their social etiquette through interactions with others who model the appropriate behaviors,” says Xu, “but given that AI might not always follow our social norms, children might make demands on AI using impolite language or even insulting the AI that they are conversing with.” This style of communication with AI may carry over into real-life interactions with humans; Xu cites preliminary evidence indicating that children can and do pick up linguistic routines through their interactions with AI.

Some AI products are now incorporating measures to encourage children to use polite language, says Xu. While such measures may be a step in the right direction, “it also poses a risk of obscuring, at least from the children’s perspectives, the boundaries between AI and humans,” she says.

Underdevelopment of Foundational Learning Skills

What happens when students turn to AI tools like ChatGPT for assistance with homework? “Are they engaging in the learning process or are they sidestepping it?” asks Xu, who says that the impacts depend on the timing of the learning objectives. “For younger learners, particularly those in elementary and middle school, the priority is to develop foundational skills. Relying on AI too early for tasks meant to develop these foundational skills could potentially hinder their development.” Older children and young people preparing for the workforce, who have already developed strong foundational skills, may benefit more from the integration of AI tools into their education.

“For AI to be a valuable tool, it shouldn’t just provide easy answers, but rather it should guide children in their journey of sense-making, inquiry, and discovery,” says Xu. There is evidence that when AI is specifically designed to guide children through the learning process, it can be quite effective, she says. However, the most widely available Gen AI systems are not designed for child use, and Xu notes that as of late 2023, OpenAI (the maker of ChatGPT) requires users to be at least 13 years old, with parental consent needed for those between 13 and 18.

Baron adds that the spectrum of writing functions today’s AI can serve should not be underestimated. Besides producing new text, writing summaries, or constructing point-by-point arguments, AI is also increasingly taking over basic editing: everything from capitalization, punctuation, and spelling, to grammar and style. “Parents, educators, and students need to think carefully through which of these skills are important to be able to handle on your own and when it’s OK to cede control to a computer program,” she says.

Persuasive Misinformation and Disinformation

Increasingly, it can be “impossible” to determine whether text or images have been generated by AI or by humans, says Vosloo. Children’s worldviews may become skewed by content that has been manipulated without their knowledge.

Pizzo Frey urges that “consumers and children especially must understand, with Generative AI in particular, that these applications are best used for creative exploration, and are not designed to give factual answers to questions or truthful representations of reality, even if they do that a fair bit of the time.” She advocates for the development of guidelines to create “consistency and reassurance” so that children will know when they are interacting with AI and when they are not.

How should I help my children use AI technologies safely and responsibly?

Co-use

“Even with the best design intentions, it is still important for parents and teachers to stay involved when children interact with AI,” says Xu. There is always the risk that some responses from Gen AI technology might be inappropriate for young children, she says. In addition, young children may face difficulties making their speech understood by voice systems, leading to frustration. “Given these challenges, it is important to encourage engagement from parents or other caregivers, just like when children use other media technologies,” says Xu.

With guidance from parents or other caring adults, “it can be fine for even very young children to engage in asking questions through chatbots,” says Xu. “We could consider this as an additional learning experience for children. However, it is very important for parents to be aware of the information provided by the chatbot and, if necessary, rephrase or supplement it based on the child’s needs.”

Recognize the Limits of AI

One common misconception about AI is that the more data that feeds the product, the better the quality of the outputs. This is not true, says Pizzo Frey. “More data doesn’t make for a better AI. In fact, the more data that an AI tool scrapes from the Internet, the riskier it can be. And that is often because it is then designed to be used in a myriad of ways, as opposed to specifically designed for a particular purpose. We also found that ethical practices really vary from application to application, and just because there is a process for transparency reporting or risk mitigation doesn’t actually mean that a product is safe and responsible to use.”

AI systems are sociotechnical, says Pizzo Frey. That means that “technical excellence alone is not enough” for assessing the quality and impact of AI systems because the technology “cannot be separated from the humans and the human-created processes that inform and develop and shape its use.”

Embrace Curiosity

“It’s important to remember that children are naturally curious,” says Christine Bywater, Associate Director at the Center to Support Excellence in Teaching (CSET). “They think about the world around them in incredible ways and are constantly doing their own sense-making.” When having conversations with kids about AI, Bywater says it is important to keep this natural curiosity in mind and remember that children are likely already thinking about these questions themselves.

Children are at the forefront of AI integration into society, both today and in the future, says Vosloo. “So we really need to get this right.” It’s not helpful to take an approach that either catastrophizes the risks or glorifies the opportunities of AI. “We obviously need to strike that balance of responsible AI that’s safe and ethical, but also leveraging every opportunity that we have now.”

Understand the Importance of AI Literacy

Bywater defines AI literacy as not only the ability to recognize and use AI technologies with a basic understanding of their actual capabilities, benefits, and risks, but also as personal awareness of rights and responsibilities with respect to the presence of AI in our individual lives. However, she notes that “we’ve learned from the last couple of years that AI literacy is lacking. We did a study with high school students that showed they’re generally not familiar with what algorithmic bias means and how it shows up in these AI systems. When we’ve worked with teachers, we’ve overwhelmingly found that teachers are very unsure about what counts as AI, just like many of us are.”

Having AI literacy doesn’t mean one has to become a computer coder or technologist; rather, it means developing a familiarity with what AI tools can and cannot do, says Bywater. Particularly for educators, Bywater urges collaboration with youth in the work of developing AI literacy. “For educators, it’s operating from uncertainty and the willingness to learn together with students that is a really important goal.”

Build AI Literacy Skills

Bywater provides some “general rules of engagement” around building AI literacy in schools and homes. 

  1. Recognize students and youth as positively intentioned actors with AI and be open to their perspectives. “It is so valuable that we view them through an asset lens – they are critical to this AI literacy world.”
  2. Acknowledge teachers’ value as important actors in shaping student use of AI and advocate for time and resources to support their development. “We ask teachers to do a lot, so when we are asking them to add to their workload, it’s really important that we advocate for the time and resources for them to develop that.”
  3. Identify strategies for building supportive norms around AI use at home and in school communities and classrooms. Encourage thinking about transparency of instructional time, equity concerns around access, and deterrents to use, including cost and privacy concerns.
  4. Investigate how data is collected and how it is being used. Ask schools how privacy and security are assessed before allowing students to use an AI tool.
  5. Realize your power as a parent, educator, or administrator: “You hold more power than you think when you are choosing the platforms you use in the school, advocating for things to be added in service of student learning, and finding the right partners.”

For information on the risks and considerations of specific products, online tools and product reviews are being developed to help assess individual AI products – see the resources section below.

Don’t Undervalue the Human Connection

As society continues to grapple with the best way to integrate the power of AI in the workplace and classroom, it’s important to remember the unique value that human interactions can provide. The best teaching and learning is “all about” the relational human component, says Bywater. “If you think back to a teacher that really meant something to you, or a therapist that really helped improve your well-being, it was the relational human component of that person,” she says. “It is how I, as a teacher, connect with you as a student and understand your identities, your interests, your experiences, and bring all of that into my classroom in service of learning. And if we start to move away from that, we’ve lost what teaching and learning is for and for whom it serves.”

Related Webinar

This tip sheet is based on the webinar “AI and Children: Risks and Opportunities of the Enhanced Internet,” held on December 6, 2023. Watch a recording of the event, read the transcript, and view other content in Learn and Explore.