The breakneck pace of AI development and the growth of computing power have created a new age of algorithmic content delivery online, where youth (and adults) are all too easily funneled into rabbit holes, echo chambers, and media-consumption marathons. But what are algorithms, and how do they work? How can parents and caregivers protect themselves and their children from the harms of hidden mechanisms designed to maximize screen time, hijack attention, and minimize creative thought and exploration?

Children and Screens held “Algorithms 101: Youth and AI-Driven Tech” on November 15, 2023. Panelists with expertise in computer science, human-computer learning, communications, and mental health surveyed the current state of data-driven media online, how algorithms work to increase and solidify bias, and what families need to know to develop essential skills to cope with the growing influence of algorithmically-delivered content on youth’s development, preferences, and minds.

Speakers

  • Imran Ahmed

    Founder and CEO, Center for Countering Digital Hate
    Moderator
  • Ranjana Das, PhD

    Professor in Media and Communication, University of Surrey
  • Amy Ogan, PhD

    Associate Professor of Learning Science, Human-Computer Interaction Institute, Carnegie Mellon University
  • Elvira Perez Vallejos, PhD

    Professor of Mental Health and Digital Technologies, The University of Nottingham
  • Motahhare Eslami, PhD

    Assistant Professor of Computer Science, Human-Computer Interaction Institute & Software and Societal Systems Department, Carnegie Mellon University

Algorithms are omnipresent in the digital world: the movie that’s flagged for you on streaming services, the videos served to you on social media, the ads you see when scrolling through news sites. None of these is a coincidence. Rather, they are designed to entice users to spend more of their time on online platforms. But how do these algorithms work? And how do they impact our children? Our panelists set out to explore that in this #AsktheExperts webinar.
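To make that mechanism concrete, here is a deliberately simplified, hypothetical sketch of engagement-based feed ranking. The posts, the predicted-engagement scores, and the one-line scoring rule are invented for illustration only; real platform ranking systems are far more complex, rely on many more signals, and are proprietary.

```python
# A minimal, hypothetical sketch of engagement-based feed ranking.
# Real platforms use large machine-learned models over many signals;
# the posts and scores below are invented for illustration only.

posts = [
    {"id": 1, "topic": "cooking",    "predicted_engagement": 0.20},
    {"id": 2, "topic": "outrage",    "predicted_engagement": 0.65},
    {"id": 3, "topic": "friends",    "predicted_engagement": 0.35},
    {"id": 4, "topic": "conspiracy", "predicted_engagement": 0.70},
]

def build_timeline(posts):
    """Order posts purely by predicted engagement (a proxy for time on platform).

    Note what is *not* in this objective: accuracy, safety, or wellbeing.
    Whatever is predicted to keep the user scrolling rises to the top.
    """
    return sorted(posts, key=lambda p: p["predicted_engagement"], reverse=True)

for post in build_timeline(posts):
    print(post["id"], post["topic"], post["predicted_engagement"])
```

The point of the sketch is the objective: if the only quantity being maximized is predicted engagement, content that provokes strong reactions can outrank everything else, which is the dynamic the panelists return to throughout the webinar.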

0:00 Introduction

Kris Perry, MSW, Executive Director of Children and Screens: Institute of Digital Media and Child Development, introduces the webinar and panel moderator, Imran Ahmed, founder and CEO of the Center for Countering Digital Hate. Ahmed begins by providing a high-level overview of what an algorithm is and how it works. He then offers examples of how algorithms used on social media platforms can contribute to toxicity online and promote harmful content to young people. Drawing on his experience and reports from CCDH, Ahmed emphasizes how important it is to understand the use, function, and harms of algorithms on social media platforms in order to advocate for better.

13:08 Elvira Perez Vallejos, PhD

Elvira Perez Vallejos, PhD, Professor of Mental Health and Digital Technologies at the University of Nottingham, shares research on young people’s perceptions of algorithms, addressing both risks and benefits. Vallejos considers the impact of algorithms on child development, specifically cognitive development, relationships, and physical health. Vallejos concludes with a summary of what we can do to advocate for change and recommendations on how to ensure that children interact with algorithmic content in a critical and knowledgeable manner.

30:12 Motahhare Eslami, PhD

Motahhare Eslami, PhD, Assistant Professor of Computer Science at the Human-Computer Interaction Institute & Software and Societal Systems Department at Carnegie Mellon University, explores the impact of algorithmic bias. Eslami explains how algorithms reinforce bias, delivering content that is often more negative toward racial and ethnic minorities, prioritizes Western ideals, and perpetuates gender norms. Eslami offers specific recommendations for caregivers to help empower youth to navigate and challenge algorithmic bias in their lives.

43:38 Amy Ogan, PhD

Amy Ogan, PhD, Associate Professor of Learning Science at the Human-Computer Interaction Institute at Carnegie Mellon University, discusses the importance of AI literacy in empowering youth to make informed decisions online. She identifies three misconceptions about youth that can be barriers to increasing youth AI literacy, and shares research that challenges these notions, underscoring the importance of involving youth in shaping the future of AI decision-making. She urges educators and caregivers to foster children’s critical consciousness of AI biases and potential risks too.

59:32 Ranjana Das, PhD

Ranjana Das, PhD, Professor of Media and Communication at the University of Surrey, UK, shares research about parental perceptions of algorithms. She dives into four significant dimensions of parents’ algorithmic literacies: algorithmic awareness, technical competencies, critical capacities, and championing parents’ and children’s best interests. Das recommends that parents and caregivers consider the four aspects of parents’ algorithmic literacies and find what works for them, concluding that parents can use everyday moments to talk with their children about algorithms and their influence on our everyday lives.

01:12:08 Panel Q&A

Ahmed brings the panelists together for a brief group discussion addressing questions submitted from the audience. Panelists discuss possibilities for blocking and retraining algorithms, ways to set time limits with technology, how to advocate for legislation around algorithms, and how young people are actively using existing tools. 

[Kris Perry]: Hello, and welcome to today’s Ask the Experts webinar, “Algorithms 101: Youth and AI-Driven Tech.” I am Kris Perry, Executive Director of Children and Screens: Institute of Digital Media and Child Development. Algorithms are all around us in the digital world: the next video or post as you scroll social media, shows and movies flagged for you on streaming platforms, search results on Google, the ads you see in your favorite game, show, or platform, and maybe even the next worksheet, resource, or learning intervention recommended for a student. Though not always obvious, algorithms are often driving the landscape of what and who we interact with online every day. And children are no exception. For today’s webinar, we’ve brought together an interdisciplinary panel with expertise in computer science, human-computer learning, communications, and youth mental health to discuss how youth interact with algorithms and what risks and opportunities those experiences may present for development, safety, and well-being. They will also discuss the impact of encoded bias and the role of algorithmic literacy for youth and caregivers alike. To get us started, I am pleased to introduce you to today’s moderator, Imran Ahmed. Imran is the founder and CEO of the Center for Countering Digital Hate (US/UK). He is an authority on social and psychological malignancies seen on social media, such as identity-based hate, extremism, disinformation, and conspiracy theories. He regularly appears in the media and in documentaries as an expert in how bad actors use digital spaces to harm others and benefit themselves, as well as how and why bad platforms allow them to do so. Imran also advises politicians around the world on policy and legislation. Welcome, Imran.

 

[Imran Ahmed]: Thanks so much, Kris. It’s a real pleasure to be here with you all today. My name is Imran Ahmed. I am Chief Executive and founder of the Center for Countering Digital Hate. I set up CCDH seven years ago when I was working in British politics. I’m British, the accent giving it away. I was seeing, simultaneously, the rise of a virulent antisemitism on the left of our political movement, but also, working on the EU referendum, the rise of conspiracism and hate towards Black people and Muslims in the UK, and then the assassination of my colleague Jo Cox, a 35-year-old mother of two and member of Parliament for Batley and Spen, by a far-right terrorist who had been radicalized, in part, online. And what I wanted to understand was why bad actors were so good at using these spaces online. I spent three years studying the way that bad actors operate, the way that they drip, drip disinformation to try and color the lens through which people see the world and then activate them in moments of crisis.

But what we realized over time was that the platforms were integral to why the problem was occurring. And in particular, you know, that magical word that all of us talk about but many of us don’t really know how to easily define: an algorithm. So what is an algorithm? I mean, at its core, it is fundamentally just a mathematical formula. It’s just a way of ordering things, in particular on social media: a way of ordering the information that’s on there. If you think about it, a social media company is basically a business that takes in lots of speech from lots of different people, and data points, you know, who you are, your cell phone number, etc., what you look at, what you’re interested in, and then it just orders that speech in a way that creates a timeline. And the timeline is the product. The product isn’t you being able to post. The product is the timeline. And why do I know that’s the product? Because that’s where the ads are. And 98% of the revenues of a company like Meta come from the ads. The ads are quite often specific to your interests, but they’re also, you know, put onto a timeline that’s designed to keep you scrolling. Why do they want to keep you scrolling? Because you can see more ads. And how do they keep you scrolling? Well, they do that through a series of psychological mechanisms that keep you addicted, that put you in a state where you need to see what’s next. And quite often, sadly, disinformation and hate are actually more interesting for people. Content that is harmful is more interesting because it is the forbidden fruit. It’s what we want to argue over, what we want to talk about, the things that trigger us psychologically as individuals. The misconception about social media platforms is that they just give you what you want to see. They don’t. They give you what they want you to see based on your unique psychology: what keeps you addicted based on your unique psychology. Let me give you an example of how this works in practice. A couple of years ago, I had the great honor of meeting a man called Ian Russell. Ian’s daughter, Molly, was 14 years old when she took her own life in England, and Ian, being a man of great integrity and resolve, decided that he wanted to find out why, and he wouldn’t leave any stone unturned. Now, one of the things that he realized was that her social media consumption was worrying, and so he persuaded a coroner in the UK to actually force Meta and Pinterest to hand over the data that she’d been consuming.

And what they realized was that the platforms had so systematically overwhelmed her with content telling her that it was normal, that if you hurt inside, you hurt yourself outside, and if you really hurt inside, you kill yourself, that it was understandable she had concluded this was normal. They normalized the idea of self-harm through the frequency with which the content was delivered to her. We did a study a year ago called Deadly by Design, in which we set up accounts as 13-year-old girls on TikTok in four different countries and then recorded what content they were served. A brand new account, no information beyond that it’s a 13-year-old girl: within 2.6 minutes, suicide content; within 8 minutes, eating disorder content, every 39 seconds on average. And here’s what was really pernicious. We named half the accounts with a normal girl’s name, like Susan, and the other half with a name like Susan Lose Weight, to give an indication that there was something psychologically there. Those vulnerable accounts got 12 times the malignant content of the other accounts. So the platform recognizes vulnerability. The algorithms are sophisticated enough, based on so much data captured from billions of interactions, that they know those people will be triggered by and addicted to that kind of content. We did a report recently on steroid-like drugs, looking at how young men are being told, again on TikTok, that they’re not good enough the way they are. That the real measure of a man is not being kind and, you know, financially responsible, which are probably the only two reasons my wife might have married me, but being strong, physically powerful, massive muscles, Captain America physique. And they were being told that the only way to do that is through steroids. By the way, the videos were linking to web stores where you could buy these illegal steroids and then telling them that when the package is delivered to your home, just tell your parents they’re vitamins. Now, about a year ago, we started talking with the Entertainment Industry Foundation in Hollywood about a PSA which is running across the US right now. I want to play it for you very quickly. You might recognize the voices. Laura Linney, the Emmy- and Golden Globe-winning actress, I am told. I don’t know quite what that means. Here we go. I’m just going to play it for you now.

 

[Video Clip]: Within 15 seconds of logging onto social media, the algorithm has your daughter in its crosshairs. It sends her a steady flow of images telling her she isn’t fit enough, pretty enough. They invade her brain, causing body dysmorphia, anxiety, depression leading to the worst rates of eating disorders, self-harm and suicide we have ever known. All while she’s sitting right next to you on her phone. Congress knows, but it refuses to act. Don’t let her suffer the secret pain alone. Use your voice. Demand a plan. Join us at the Center for Countering Digital Hate. Protectingkidsonline.org. Because it’s up to you to protect your children from social media nightmares. Join us for her, for your daughter. 

 

[Imran Ahmed]: That PSA has played around the country for a few months now. I’m told it’s had close to a billion views, and it has been played, you know, tens of thousands of times across the US. And it is important that we are talking about these algorithms, because two thirds of our American teens, and my children will be American too, use TikTok. One in six say they watch it almost constantly. On average, they spend more than 90 minutes a day there. And what I’m scared about, what my wife Liz and I talk about all the time, is: well, what will we do? What could we do that could counteract 90 minutes of programming a day by an algorithm that is purely commercially motivated? You know, we did some surveys on conspiracy theories recently. 49% of adults agreed with four or more of the conspiracy theories we asked them about; among 13- to 17-year-olds it was 60%. 34% of adults thought that Jews control the world; among 14- to 17-year-olds it was 43%. 59% of 14- to 17-year-olds use social media for four-plus hours a day. Those are really disturbing numbers. Those are the sorts of numbers where, in Europe, we know what happens when that many people think that Jews are a danger to our society.

It leads to terror beyond imagination. And my worry is about the future of our society, our children, and our democracy. We know through other studies that when people watch one conspiracy video, they’re shown other types. We did a study called Malgorithm: if you watch content about COVID or anti-vax conspiracies, Instagram starts feeding you antisemitic and QAnon conspiracies. And we know that parents worry about this. 68% of adults and 83% of children acknowledge that online harms have a real-world impact. 74% want platforms to build products according to safety by design. We need legislation. We need change. And that’s what we’re here to talk about a little bit today: to find out more about some of the solutions that are out there. You know, I will be listening just as intently as everyone else, because this is something that’s relevant to my life, too. And I feel as powerless as anyone else does when faced with the notion of a child who wants to be on social media when we know how harmful it can be. So I’m absolutely delighted to be introducing our first speaker, Dr. Elvira Perez Vallejos (please correct me if I got that wrong), a professor of mental health and digital technologies at the University of Nottingham, where her multidisciplinary work crosses boundaries between the School of Computer Science and the School of Medicine. Currently, she is the director of responsible research and innovation for the Responsible AI UK ecosystem, as well as RRI lead for the UKRI Trustworthy Autonomous Systems Hub. She is the youth lead of the MRC Digital Youth Programme, and she specializes in assessing the impact that technology has on the mental wellbeing of groups with protected characteristics, so children, young people, older adults, applying co-design and participatory methods, e.g. youth juries. I’m delighted to be able to introduce Elvira to you all.

 

[Dr. Elvira Perez Vallejos]: Thanks so much. Thanks for having me. It’s a pleasure. I’ll share my screen and present some work we did a few years ago with children and young people. So, as you correctly said, algorithms are pieces of code that are intentionally designed to make, let’s say, the Internet more efficient, to personalize searches. When we look at social media, what they do, or what they intend, is to present, for example, feeds that are more in line with your preferences, with who you are. Some people may not know that if you do a search in Google and you are logged in, the results will be different for you than for me, because over time Google creates an image, a persona, a profile of who you are, and then the algorithm will personalize the results. So what we did a couple of years ago was to ask young people about their awareness of what they think an algorithm is. And we also asked them to reflect on the impact that algorithms may have on their wellbeing and on their lives. What I will be presenting today is based on two specific pieces of work, and I will provide the links. The first one is a paper that looks at the impact of algorithmic decision-making processes on young people’s well-being. The second one is a report that looks at the impact of screen time on education and wellbeing, again in children and young people. In the first study, we engaged with around 200 young people, and we created these youth juries where we presented scenarios that had previously been co-produced with young people. And we asked them to discuss, to tell us what they thought. So we presented examples: TikTok, for instance, which is able to personalize some of the content, or YouTube, the videos you watch. And they said that actually there are lots of benefits. It’s convenient. Sometimes it is just easier to search in Google than to do any of the more complicated bits of research to find a specific answer. So it’s easy, it’s convenient, it’s there. It is also entertaining. It takes your mind off things if you’ve had a bad day, and it’s also good that it has personalized content, so it’s important and relevant for you as a user. So there are some benefits, but unfortunately there are lots of harms. The main issue is around privacy. Young people are concerned about how algorithms are able to collect so much data about a person. They are worried about over-engagement. TikTok is a perfect example: TikTok has been designed to capture your attention, and it is not fair to put the responsibility on children and young people to be able to disengage. And I will talk about responsibility in a second. The design is over-engaging, and that’s something that is almost impossible to escape. So there is an issue with that design; it’s unethical to create these over-engaging interfaces for children and young people. There are also issues around trust. For example, when you are constantly interacting with a system that you don’t trust 100%, that causes a sense of discomfort, and that again affects your wellbeing. Unfortunately, there’s also a lot of the harmful content that you described. And another main issue is that when you are looking at your phone, you are not moving; you are inactive, just sedentary. And children should be running around, should be very active. So it has very important consequences for their physical health and obviously for their mental health.
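To illustrate the personalization Perez Vallejos describes here, the following is a minimal, hypothetical sketch of the same set of results being re-ranked against two different user profiles. The profiles, result tags, and overlap-based score are invented for illustration; real search and feed personalization relies on far richer signals and models.

```python
# Hypothetical sketch: the same results re-ranked for two different inferred profiles.
# Profiles, result tags, and the overlap score are invented for illustration only.

results = [
    {"title": "Easy weeknight pasta recipe", "tags": {"cooking", "food"}},
    {"title": "Pro gaming highlights",       "tags": {"gaming", "esports"}},
    {"title": "10-minute home workout",      "tags": {"fitness", "health"}},
]

profiles = {
    "user_a": {"cooking", "food", "health"},   # inferred interests for one logged-in user
    "user_b": {"gaming", "esports"},           # inferred interests for another
}

def personalize(results, interests):
    """Rank results by how many tags overlap with the user's inferred interests."""
    return sorted(results, key=lambda r: len(r["tags"] & interests), reverse=True)

for user, interests in profiles.items():
    print(user, [r["title"] for r in personalize(results, interests)])
```

The same query, filtered through two different inferred profiles, produces two differently ordered result lists, which is why, as Perez Vallejos notes, a logged-in search returns different results for different people.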
So, how can I move my presentation to the next slide? Perfect. Overall, when we think about whether algorithms are good or bad, we know, and there’s a lot of evidence, that there is a massive impact on the developmental milestones of children. Cognitively, and unfortunately we all do this: we parents will give a child a phone or tablet when they are still very young, because it’s convenient and they will entertain themselves. COVID had a massive impact; children learned to be in front of a screen for hours and hours. So it has a massive impact on the way children socialize, on the way children develop language and writing skills. It also, as you said, has a massive impact on their physical health, because they are very inactive, but also on relationships and emotions. Relationships are extremely complicated; you have to constantly negotiate, and children nowadays are interacting less face to face with other children. They do that in school, but they are spending, again, too much time in front of their screens. So whose responsibility is it? It is completely unfair to put all that responsibility on the parents, and it’s also not a good idea to put that responsibility on the children. So whose responsibility is it? Obviously parents have to have the skills, the digital literacy, to understand and to put time limits in place, but it is not fair to design technology that is over-engaging, give it to parents, give it to children, and then expect that they are going to be able to self-regulate. Is it the responsibility of the platforms, of the companies that are designing these services and creating these algorithms? The answer is yes. And there are lots of interesting pieces of regulation that are trying to basically legislate against these practices as unethical. And what I’m hoping is that maybe in ten years we will look back, there will have been a lot of changes, and we will be surprised at the current situation right now, because it’s been very, very harmful to children and young people. So what can we do in the meantime? We need to lobby politicians. We need to ensure that there are policy interventions that really address both the benefits and the harms. We need policy; we need technical, regulatory, and also very strong and well-defined education strategies, and we need them urgently. In the UK, we have the Online Safety Bill, which is designed to protect children and young people from harms. Unfortunately, companies are so powerful that this safety bill is not having the effect it was designed for. So when we ask young people what we should do, young people are telling us that government has lots of responsibility, and lots and lots of work to do, to ensure digital products comply with the needs of the user. We need more transparency about the data that these companies gather from children and young people simply through their accessing these services. There is a complete power imbalance; the user should be the owner of that data and should have more control over how that data is being used, because it is used to create algorithms that are more and more engaging. And much of this is a duty of care: government should protect children and young people and should ensure that their well-being is a priority. So there are also recommendations for tech companies, and these again are recommendations coming from children and young people who are 14 to 17 years old.
What they would like to see is companies being incentivized to create more positive content and remove harmful content. They want to see opportunities in the design to ensure that wellbeing is central, for example adding a button that says “I’m distressed about this content,” so it can be removed very fast. They want to see more transparency about harms. They want to understand the impact of specific content, because sometimes they don’t understand the short-, medium-, and long-term impact it can have on their wellbeing. And again, age appropriateness is crucial: many children and young people say they would love to have an Internet, to have social media, that is kind to children, where they feel protected and safe. There are also recommendations for education. For example, there is a lot of work to do on bringing more evidence and more research to build coping strategies for children, so they are able to disengage and have more control.

And we also have to improve the training and education they receive around data literacy, A.I. literacy, etc. So this is the end of my talk, but I also wanted to mention that screen time is not necessarily a good index. Children nowadays have to interact with social media and the Internet, and they can be creative, they can socialize; there are lots of benefits we cannot take away. This is the way it is, so what we need to ensure is that when they engage, they do it creatively and safely, they feel in control and safe, and it’s an experience that is going to support their wellbeing and mental health. Algorithms are not going anywhere, so the hope is to regulate them and ensure that they are kind and supportive. And it is kind of sad that we are in this situation, but I want to bring hope: hopefully there will be regulation and legislation, and in five or ten years everything will be different. Thank you.

 

[Imran Ahmed]: Thank you so much. That was really helpful. And thank you for the note of hope at the end. I share your hope. Technology is a wonderful thing. You know, life is much more interesting being able to access all this information, being able to communicate with people. I moved to the US in the middle of the pandemic; how else would I have received love, friendship, soccer, been able to see what the people I care about are doing, if not through social media? And so it is so sad that there are all these really negative aspects to them. Let’s just talk about the positive aspects. Everyone talks about banning the algorithms, and, you know, anyone that spends time working on technology knows that you can’t actually ban algorithms; they are a fundamental part of how you operate a system. What’s the good side to algorithms? Why do we use them in the first place?

 

[Dr. Elvira Perez Vallejos]: So algorithms are essential. The amount of data that is out there is massive, so without algorithms we would never be able to select, filter, and customize searches, preferences, your news feed. If you like cooking, you’d like to receive more information about cooking, or beauty. The important thing is that if the structures and systems governing those algorithms are built responsibly, then we can trust the algorithms.

Algorithms are neutral; they are simply tools. They are not in themselves good or bad; it’s how they are being used, the data that feeds them, and also the intentions, the commercial intentions, behind them. So my hope comes from initiatives that promote responsible innovation, responsible research, and EDI (equality, diversity, inclusion): more democratic and more transparent ways to build algorithms. So hopefully soon companies will have to follow very strict regulations and ensure that algorithms are safe, because the benefits are massive: entertainment, information sharing. It can be an extremely creative part of our lives, and many people, many young people, use these platforms without any issues. So we cannot forget the group that actually benefits from use; they create amazing music and videos and share and connect with other people. So algorithms are not necessarily good or bad; it’s just, unfortunately, the way many companies use them to engage children to a point where it is unhealthy.

 

[Imran Ahmed]: Thank you. And I think, you know, you really make the point that algorithms are a necessary part of modern life, in particular in big data systems, and that’s what our Internet is based on. However, what we don’t understand is how the algorithms that underpin the most popular platforms in the world, owned by large corporations with commercial imperatives, are constructed: what the logic is inside them, what they are oriented to deliver. It’s transparency that’s really necessary. You know, the European Union has the Digital Services Act, and they’ve got their new algorithm study center; I think it’s in Seville, in Spain. And the UK Online Safety Act, which became law a couple of weeks ago, you know, that may change things as well. But in the US, we have nothing. We have an absolute black box for American parents, where they’re not being protected because legislators have not even done the basic thing of forcing a bit of transparency onto these algorithms that are so important in our day-to-day lives. And that’s why I’m so excited to introduce our next speaker. We’ve got Dr. Motahhare Eslami, who’s an assistant professor in the School of Computer Science at Carnegie Mellon University, in the Human-Computer Interaction Institute (HCII) and the Software and Societal Systems Department (S3D). Motahhare’s research goal is to investigate the existing accountability challenges in algorithmic systems and to empower the users of algorithmic systems, particularly those who belong to marginalized communities and those whose decisions impact marginalized communities, to make transparent, fair, and informed decisions in interaction with algorithmic systems. Thanks so much.

 

[Dr. Motahhare Eslami]: Thank you very much, Imran, for the introduction. I appreciate that. So I’m going to talk, as Imran mentioned, about how we can empower our users, particularly youth, as a large group of algorithmic system users, to be able to deal with, navigate, and challenge the algorithmic bias that exists in their everyday use. And I’m going to start with this example. This looks old; it’s from more than ten years ago, when Google search, if you searched for “Black girls,” would sometimes show you inappropriate results, like sexual content, which was not the case for “white girls.” And it did a similar thing if you searched for “three Black teenagers”: it would show you mug shots, while a search for white teenagers would show friendly, lovely pictures. And you might think, oh, we’re past that time, these systems have improved, the more overt racial, sexual, or misogynist biases have been removed. It’s not the case. This is the case of the whistleblower at Facebook a couple of years ago, about how these social media platforms, particularly the way the algorithms work, have negative impacts on girls, similar to what we saw at the beginning of the panel. And I want to talk more about how these harms take effect, because, as we just discussed, algorithms are not bad or good in themselves; they are just a way to make things more efficient and powerful, and they connect us to the world. I’m a computer scientist, so I love working with algorithms and code and programs, but we want to see what happens. And one of the reasons these things happen is that the algorithms just learn from us, from the world that we are in, and amplify things. So where before you might encounter one biased person in the street and just get past them and move on, now algorithms can exacerbate the biases of many biased people, the racist people, the sexist people, and show that on platforms like social media. And youth might not know that these algorithms are not necessarily telling the truth. It’s not about what the world should look like; it’s just exacerbating the biased world we already have. Another example is how this can affect kids in higher-stakes domains. One study showed that racism and gender injustice are embedded in algorithmically driven technology such as the child and family screening tools some cities use to try to predict the risk of harm to a child, such as child abuse. And research has shown that these algorithms are racially biased: they would flag more Black families for child abuse when that’s not actually the case. So what does it mean? When we talk about who is responsible, we talk about all these corporations, the systems’ developers, the domain experts dealing with these algorithmic systems, but we are also talking about youth, the people who are encountering these systems every day. It has become a part of their life. And as much as I’d love to live in an ideal world where we have responsible organizations and a lot of research and work going into these systems before they are released to make sure we are protecting our youth, it’s not the case.
So we are already in this world: how can we prepare our youth to navigate not just a biased world, but now an algorithmically biased world? So I’m going to talk about youth as stakeholders in this algorithmically driven world and how they deal with bias. When the pandemic hit about three years ago, the UK government, as many of you might know, faced the challenge of a teaching staff shortage, so they couldn’t get the grades done on time for the kids, particularly the high schoolers who wanted to get into college and needed their grades. So the UK education department decided to use an algorithmically driven system to predict people’s grades based on their performance up to that time, before the pandemic. And I don’t think they had bad or malicious intentions; they were like, okay, let’s just use the system to keep things going, we don’t want these kids stopped from applying to colleges and so on. But the problem that no one thought about is how this algorithm was going to make its predictions. The results showed that kids from lower socioeconomic or vulnerable backgrounds actually got lower grades from the system. And what was interesting here is that the kids took this into their own hands. They started a protest outside the Department for Education for many days. They talked about all the problems with these algorithms. They said, these algorithms don’t know me; they are getting it wrong. And the good part, even though this was an unfortunate incident, was that the UK government had to ditch the algorithmic exam results and go back to human evaluations for these kids. It doesn’t mean that everything was reverted; some of these kids had already gotten college rejections, and the algorithm’s imprint definitely stayed, but these kids did something that maybe we adults didn’t think to do. They realized that this algorithm was not really distributing the grades fairly: like, I’m seeing that I did pretty well in school compared with my classmates, but I’m getting lower grades. So what it shows is: can we have youth as stakeholders who actually learn to deal with these biases? You know, you talk to your kids about biases in society; sooner or later we talk about, hey, there are racist people out there, there are sexist people out there. How can we help them know that these algorithmic systems can be as biased as humans, or even more so, and not look at them as objective systems? So we ran a study with some of my colleagues, and Amy Ogan is going to continue talking about this in the next part. The goal was to understand whether kids or youth really can recognize these biases. What we did was show them some results of algorithmically curated content and ask them whether they were fair. And we intentionally included some nuanced biases. For example, this is a search for “wedding.” If you look at it, it is particularly Western, these are all heterosexual couples, and there’s not much interracial marriage here. And the kids really noticed those things. They were really good. They said: only young people, white marrying white, Black marrying Black, no gay or lesbian couples. So they are really better than maybe we think they are. And then we also talked about what these algorithmic results mean. For example, this is the search for “computer programmer.”
They talked about how there are more men in computer programming. Does that mean that the algorithmic results should show that, or should they portray an ideal world? So they are capable of these nuanced considerations. What we are doing right now is building a tool called We Audit, whose goal is to engage everyday users like us, including youth, to learn about biases in these systems, to learn that these algorithms can be as harmful as the mistakes people make. And we also help them find these issues and become part of the larger advocacy effort to bring awareness and action against algorithmic bias. And I want to end with a note about what to do as parents and education professionals; some of us here are researchers, and we can have multiple roles here. How can we empower youth advocacy and action around inequity in algorithmic systems? Again, we have other stakeholders, and they have a big role to play, but we also need to prepare our youth. And I think we all need to know A.I., like we know our math. Algorithmic systems need to be learned about, but with their flaws in view: two plus two is always four, but that’s not how algorithmic systems work. So A.I. literacy is the first stage, and my colleague Amy Ogan is going to talk about how to bring A.I. literacy to kids, parents, and families, to help youth be better users of algorithmic systems. And with that, I’d be happy to take questions.
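The kind of audit Eslami describes, looking at the top results for a query such as “wedding” and asking who is and is not represented, boils down to a counting exercise over a result set. The sketch below is not the We Audit tool; it is an invented illustration with made-up labels showing what such a representation tally might look like in code.

```python
from collections import Counter

# Hypothetical, hand-applied labels for the top image results of a query like "wedding".
# The labels are invented for illustration; a real audit would use actual results
# and careful human annotation.
top_results = [
    {"couple": "white, heterosexual"},
    {"couple": "white, heterosexual"},
    {"couple": "white, heterosexual"},
    {"couple": "Black, heterosexual"},
    {"couple": "white, heterosexual"},
]

def representation_tally(results):
    """Count how often each labeled group appears in the top-ranked results."""
    return Counter(r["couple"] for r in results)

for group, count in representation_tally(top_results).items():
    print(f"{group}: {count} of {len(top_results)}")
# Groups that never appear (interracial or same-sex couples in this made-up set)
# are as informative as the groups that dominate the tally.
```

The value of the exercise is less in the code than in the questions it forces: what is over-represented, what never appears at all, and whether the results reflect the world as it is or as it should be.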

 

[Imran Ahmed]: That was absolutely fascinating. Thank you for that. The thing that really came to mind is that we’re doing some work at the moment on A.I., on how generative A.I. systems, those A.I. systems that create new content from nothing more than a simple prompt, encode the biases of the content that’s fed into them. And, you know, unfortunately, an A.I. system is only as smart as the content it’s given. If the content it’s given is nonsense, well, some of these A.I. platforms are literally just taking everything that’s written on the Internet, including a lot of absolutely bonkers stuff. What are your concerns about A.I. and how it might hardcode some of these algorithmic biases further into our society?

 

[Dr. Motahhare Eslami]: That’s a really good question. And I know the next series is going to be about this, about generative A.I., and I encourage our audience to watch that, because I’m looking forward to it and I think it’s going to be very informative, because kids are now also users of generative A.I. and ChatGPT. We actually ran a study where we found that kids really trust this type of generative A.I. You know, they think it’s going to give you the right answers. It does not. I don’t know if you know, but generative A.I. and ChatGPT can’t do multiplication or even simple math problems, because that’s not how they’re built. And still people trust it. The difference is that what we’ve talked about so far is what I call curative A.I., meaning you give some information to it and it’s going to rehash it, choose from it, and show it to you. But generative A.I. is now a new world, because the potential is limitless; you can just generate indefinitely. And then the problem is that in that generation of content, no one, and I’m going to emphasize no one, not even the companies building it, has control over what it’s going to generate. And that’s the problem that we need to help our youth understand. The biases we’ve had so far reflected society; this can actually go to another level. I still don’t know; so far we’ve said, oh, these are the existing biases we’ve seen in these systems, but I expect, unfortunately, that in five years you will see new biases coming out, even things you didn’t think about. So that’s how A.I. is becoming more of an opportunity, but at the same time, as the opportunity grows, the limitations and the challenges grow too.

 

[Imran Ahmed]: Thank you. And one of the things I always say is that if we hadn’t called it artificial intelligence, people would realize that sometimes A.I. can be very stupid. It reminds me sometimes of one of those young men who’s been to a very good private school, is very good at blagging, but actually knows nothing, so is just a very competent idiot, which is a term that has also been used to describe me at times.

 

[Dr. Motahhare Eslami]: Yeah, that’s a very good description.

 

[Imran Ahmed]: Coming up next is not a competent idiot; it is a confident, clever person. And thank you, Motahhare, that was wonderful. I’m delighted to introduce Dr. Amy Ogan, who is an Associate Professor of Learning Sciences at the Human-Computer Interaction Institute at Carnegie Mellon. She’s an educational technologist with degrees in computer science and Spanish and a Ph.D. in human-computer interaction supported by an Institute of Education Sciences (IES) fellowship. She’s received many awards and fellowships to study the use of educational technologies in emerging economies across many international sites. Delighted to introduce Amy.

 

[Dr. Amy Ogan]: Great. Thank you so much for the introduction, and I’m very pleased to be talking after my colleague Motahhare, who has introduced all of the work that we’ve been doing together so well. So I’m going to continue on from that discussion of algorithmic bias and how youth engage with it to talk about ideas around algorithmic literacy. As Elvira said, of course we would feel that it’s the responsibility of companies to create fairer A.I. However, as Motahhare said, it’s also important that we don’t sit around and just wait for that to happen. So what are the things that we can do about it? Well, we can help young people develop A.I. literacy. Now, a few decades ago, we recognized that we had to develop new concepts of digital literacy to help children learn to use computer-based tools. But A.I. literacy goes beyond that: it supports individuals’ agency, giving them the ability to make important, informed decisions about what happens with that technology and how that A.I. impacts their lives, as well as to better advocate for their rights and participate in those critical conversations around A.I. that Motahhare was talking about just a minute ago. So if it’s such an important part of their current and future engagement with technology, why is it that young people as stakeholders are not really included in learning about and even shaping the future of responsible A.I.? Of course, there are many reasons for this, but there are three in particular that currently prevent schools and parents from engaging young people in developing A.I. literacy, and in our labs we’ve been working very hard on these issues, understanding those barriers and finding ways to overcome them. So what I’ll tell you about today is the work that we’ve been doing, in particular with our Ph.D. student, Jaemarie Solyst, who runs large workshops with children in many different settings and with different demographics, to look at how children engage with A.I. and what we might do to increase their A.I. literacy. So the first barrier that we often see is that people may believe that children do not have enough technical knowledge. They don’t know how to program, they may not understand the technical details of what an algorithm is and exactly how it works, and so maybe they aren’t ready to engage in A.I. literacy. But through our work, we’ve seen that children as young as 11, and even younger, can easily identify bias in A.I. examples, and Motahhare showed you some of those. And that was true even for youth with very low prior experience, who may not have done any programming, who have not really engaged with technical devices, who didn’t have phones: even those with low prior exposure to technology were sensitive to and articulate about bias in A.I. In fact, many of our learners, even without being prompted to, were able to differentiate between those who build the technology, the coders or developers, and those who help design technology. And many of them stated that they specifically wanted to be in an empowered designer role; they wanted to be the one who called the shots when creating futuristic A.I. technologies. And what we saw when we had them engage in design activities was that they were really amazing at bringing their own identities into the technology designs they came up with. So here’s just one quick example of two learners who created some A.I.-
powered robotics ideas, and they designed their robots to have hairstyles like theirs, even though they had never seen an example like this anywhere in their real lives. Another one of our learners talked about A.I. that would encourage boys to be kinder; she told us about her experiences of feeling discriminated against by boys in sports class. So they were able to take these ideas of bias and harm that they had experienced in real life and bring them into the design of technologies in these activities. The second barrier that we sometimes see with parents and schools is the idea that kids don’t quite yet have the ethical or moral reasoning they might need to recognize algorithmic bias or unfairness. In fact, what we saw was that learners were easily able to quickly identify bias, harm, or unfairness and express it in terms of equality: things should be equal, everyone should get treated the same. That’s the idea of equality. What we also know is that with discussion, with prompts for deeper thinking, they were able to move from that to more complex ideas, like thinking about equity rather than equality: should everyone get exactly the same, or should people who need something different get more, or get a different version? And they were also able to think about things like consent, and the unfairness of collecting data without telling people, a more nuanced idea of what it means to be fair, unbiased, or unharmful. Our third barrier, and this one may come particularly from some sets of parents, is that young people need protection from serious topics. And I’m not denying that this is true; we want our children to remain children, we want to keep them in a safe environment. But what we also know is that children are exposed to serious topics all of the time, and this happens earlier and more frequently for some demographic groups of children than for others. They are already confronted with these issues. And in fact, we did see, when we ran our workshops and activities with children, that they are affected by this exposure to algorithmic bias; it brings up strong emotions for them. So here is one example of an A.I. prediction that we showed to children. We typed “why are Asians” into Google, left the rest of the search blank, and it brought up a set of autocomplete answers; the top one was “why are Asians so good at math?” This is a question your child might well enter into a search engine: they hear something, they ask Google about it. And the immediate reaction from our children was shock: this is too stereotypical, this is racist. Even at 11 years old, this is something that they’re thinking about. And where we take the conversation from there is the idea of critical consciousness. This is an idea that stems from Black feminism, and it’s the ability to recognize and critique systems of oppression, but also to take action against oppressive systems. And this is something that does not come automatically, but it can be cultivated through reflection and analysis. And so that’s what we engage in with our A.I. literacy programs, to help foster this critical consciousness in the context of A.I. for things that children are already seeing in the world and thinking about. So we found that girls who engaged in these activities were able, with the support of facilitators, to clearly express their ideas and articulate how it made them feel and why.
They were able to show resilience and informed opinions supported by their emotions. So if we have these three barriers that we know can be overcome, what do we do about them? Well, Motahhare, in the last talk, showed you one A.I. literacy activity that you might engage in with your child as a parent or a school, and that is the idea of algorithm auditing. I’m not going to go into too much detail there, but we’re really excited for the version of We Audit that children can engage with to come out. In the meantime, it’s something that parents can walk through with their children. A simpler activity that we have used with success in many of our research studies and workshops is to engage young children in imagination activities. Here’s a really simple one: tell us a story about a fair artificial intelligence or an unfair artificial intelligence. What happened? Why was it fair? Why was it unfair? How did the person using the technology feel, or how did they react? This is a really easy activity for young children to engage in. Here’s one that goes a little bit deeper, asking them to look at actual examples of artificial intelligence. Very simply: look at this example of artificial intelligence; do you notice anything about it? Is there anything unfair about it? We do this in our workshops; we show them about eight different examples, and in fact Motahhare showed you some of those: the idea of weddings, food, you know, a rich doctor walking on a street, what does that look like, and that’s from a generative A.I. example. And then a final activity that we often run is to ask young children to design their own technology, something that they would like to have in the world, something that would make their world better. We ask them to just draw it out, to craft with some materials, to build out an A.I. technology that they think would help the world, and here is just a very simple workbook that we have for girls to do this activity with: what my A.I. technology idea is called and what it does. And then we ask them first to list the beneficial things it might do.

But the important part of this activity is asking them to think about the risks or the harms that they might actually introduce by having this A.I. built into their technology, and this one is a real challenge for them, because oftentimes children think: if I’ve built something, then it must be good; or, if a developer with good intentions built something, then it must be good.

So this is the tricky bit where the conversation with parents, with peers, with teachers becomes important, to talk about how intentions are not necessarily the same thing as outcomes. But all of these are ways that we can, in very simple terms, without needing any technical background or understanding exactly how A.I. and algorithms work, engage young people in collaborative learning and engagement. We also find that parental endorsement is really key. As we said, many parents have concerns about showing their children serious topics, and so engagement with parents, which is something our next speaker will talk about, is really essential. And our third facet of this is connecting to their own lived experiences, like in my final workshop activity about how they could actually build an A.I. that would help in their community. So these are some really great ways that we think we can help prepare young people for a future in which algorithms are all over the place, and ideally one where they’re helping us live better lives. And we’ve got lots of awesome people who have helped us with this work. Thanks, everyone.

 

[Imran Ahmed]: Thank you so much, Amy. That was really, really fascinating. Look, you know, what’s really interesting to me is that there are some really creative solutions here, ways that young people can be involved in them, and also some advice for parents on how they can take part as well. You know, part of my job is making sure that our legislators have the backs of parents and of young people. But here’s a question for you: what sort of educational interventions, systemic educational interventions, would you recommend to support youth agency and children’s rights online?

 

[Dr. Amy Ogan]: Yes. We believe very strongly that as more and more schools introduce computer science or programming as a core concept that children and young people need in order to live in a world in which technology is everywhere, A.I. literacy has to be a core component of that. So it is not enough to learn to program. Many people will go on and never actually be a programmer, but every child should learn A.I. literacy. And the brilliant thing is, while in order to introduce a programming component into your school you often need a specialist, somebody who is trained in computer science and in how to teach it, the types of activities that we showed here are really easy. We’ve trained lots of facilitators to run them who have no technical background whatsoever. So I would advocate for A.I. literacy being a component, absolutely, of any digital literacy programming or computer science activities that children are doing in schools.

 

[Imran Ahmed]: That’s incredibly helpful. Thank you so much, Amy. We’ve got one more presenter today, so I’m delighted to move on to Professor Ranjana Das. She’s a professor in Media and Communications in the Department of Sociology at the University of Surrey in the United Kingdom. Professor Das researches users; she started her career researching audiences and users, and her current research interests span technology use, user-centric research on algorithms, datafication, and broader digital technologies. Very often, she dovetails these interests with her interest in families, parenting, and parenthood. She’s currently completing her fifth research book, Parents Talking Algorithms, due out in 2024 with Bristol University Press, and between 2023 and 2025 she’s leading a Leverhulme research grant and a British Academy grant, both on various aspects of parents, parenting, and technologies. And there couldn’t be anyone better to speak to us now. So thanks, Ranjana.

 

[Dr. Ranjana Das]: Thank you very much for having me today. It was fascinating to listen to the previous speakers, and there are so many dovetails, so let me get right into it. I’m going to talk about four dimensions of parents’ algorithm literacies from a project that I have been doing over the course of 2023. I’ve been listening to parents up and down England, parents of kids aged between two weeks and 18 years old, really listening to the various markers and dimensions of their literacies with algorithms. I’m going to put a QR code on the screen now, and there’s a link in the chat to the paper this draws from, because I’m not going to talk about the background research literature today, just my findings. If you want to dig into those references, that paper is the one to check out. So I’m going to begin with a really powerful quote about something called “additional comments boxes” that I heard from a mum called Nandini, a mum of Indian origin with a seven-year-old and a five-year-old. This mum was speaking to me about algorithms in the public domain and about her kids’ futures. And Nandini was telling me: surely there’s going to be an additional comments box, right? I mean, if there’s an algorithm determining grades or something in my kids’ futures, it can’t just be based on a man, an Oxford-educated, middle-class white man, putting some things in, setting an algorithm up, and not allowing for possible deviations. And that became a metaphor for me, those additional comments boxes, because it seemed parents, mums and dads, were asking for human intervention, while sometimes not saying outright that it is human beings who are actually behind tech and making those decisions. But there was this sort of strong call for that additional comments box with a real person, and that really stayed with me throughout my fieldwork.

So I'm going to talk today about parents' algorithm literacies, and I wanted to point out that it's not really a new concept, because we've had things like media literacy and digital literacies, and I quite like to see algorithm literacy as part of that conversation about how we understand, engage with, and create with media and technologies. I also want to really highlight that it's important we don't place all the responsibility on individuals, parents, carers, kids, communities, to learn various things while big, powerful platforms pass on and evade responsibility. But I do think literacies in general are a really useful concept to keep in mind if we feel tempted to think that this is big, powerful media versus completely powerless, inactive audiences and users, which is totally not the case. So I think literacies do work as an important idea.

I'm going to talk today about four dimensions of parents' algorithm literacies that I saw and heard parents demonstrate and talk about as we went along. And here we are talking about everyday, boring, mundane acts of parenting, online shopping, watching other parents' kids online and feeling bad about your own kid, just everyday acts of parenting and where algorithms come in, and also algorithms in your kids' lives, your kids' lives on social media, and algorithms in your kids' futures. The four dimensions I wanted to talk about are: one, being aware that they're there, that algorithms are in the room with you; two, parents' technical competencies with algorithms; three, parents' critical capacities with algorithms; and four, their ability to champion their best interests and their kids' best interests. And I'm going to talk about a few examples in each of these categories, with some unreadable tables, but bear with me.

First dimension: being aware that algorithms are there in the room with you. Being aware of personalized search results. Being aware that the feed you see is curated for you and is not chronological. Being aware that the rankings of search results you see when you, I don't know, look for cots online or water bottles online are possibly different from person to person. Being aware that the news you see on that news aggregator is possibly different from somebody else's news aggregator. Being aware of why certain recommendations keep coming up, nudging you to go and buy that outfit for your birthday, and noticing things like that. And I argue that just being aware that they're there might shape parents' abilities to critically interpret and resist some of the pressures around parenting and some of the messages around what you should be buying for and doing with your kids, what your kids should be doing, and so on.

This relates also to technical competencies. I saw many mums and dads doing things like leaving fewer data traces, not hitting like on something, not scrolling too slowly on something, deliberately searching Amazon for something to try to throw off the Amazon algorithm. Really playful things that they weren't quite presenting as "yes, I'm trying to train the algorithm," but doing these little things here and there, clicking, not clicking, scrolling quickly or not, almost trying to change the journey of their data.
And I argue that these technical competencies are really important as well, because they shape the visibility and invisibility of your kids online on social media platforms; they shape the journey of your data. These little things might seem minor and really informal, but they amounted to something, something big enough for me to say that this was a really key component of how parents interface with algorithms. Third, really important again: parents' critical capacities with algorithms.

So what are these? Some of the markers I found in my research with parents were really understanding the if/then logic of algorithms, that rule and formula behind them; understanding the commercial purposes and intentions of platforms when you're engaging with them; and understanding that the commercial purposes behind private organizations are different from, say, the purposes behind public organizations. That then lays the foundation for being able to resist some of these pressures and some of these kinds of surveillance. And you might say, well, that links to those technical things they might do, right, like not leaving too many data traces, trying to throw off the Amazon search algorithm, or really critiquing what a news recommendation algorithm does. And absolutely, these aren't watertight dimensions, but the critical capacities that scholars of media and digital literacy have always talked about apply here as well. It's really understanding: aha, that "if" is making that "then" on my feed; I've been searching about X or Y or Z, and maybe that's why I keep seeing something.

And I had some really powerful conversations with a mum who, like me, is a mum of color and who possibly has to have that big talk at home with her kids of color, just like I do, about growing up in England. This mum kept telling me in our conversation that her YouTube is full of kids of color being attacked, and she's genuinely worried, for good reason. But you can also see that recursivity, that looping there, of this mum's genuine anxiety, the many conversations at home, and more and more and more videos.

So these aspects, being aware of algorithms, knowing how to do little technical, playful things with them, and being critical about that if/then logic, coexist and go hand-in-hand to add up to that fourth and final dimension: championing your best interests and championing your kids' best interests. And here I found so many examples of parents going and talking to the nursery that their kid goes to and saying, hey, this app that you've made mandatory, you've got rid of all the handwritten notes and the handovers at the door and we now all have this fancy app, where's the data going? Can I ask for my kid's face to be blurred out, please? Can you let me download my data? These sorts of conversations some parents were feeling able to have and some parents were not, some feeling like, well, what the school tells me about technology seems really dated, but what do I know?

You know, I don't know much. So it's that feeling of being able to actually engage with institutions, and by institutions I don't mean anything big and grand. It's that moment of being able to go and talk to your kid's nursery, that moment of being able to go and talk to your kid's teacher at school, engaging in conversation with the institutions involved with children. These don't have to be technology-related organizations, but they do employ technology. And this, I found, opened up possibilities for parents to engage with the myriad institutions involved in childhood and care. And those many little conversations, that little message, that little email asking for more clarity, asking for a leaflet to be edited, asking for more details about that app, they really matter. So I'm going to end there. The paper is in the chat, and if you have any questions, please ask.
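
[Editor's note: to make the "if/then" logic and the recursive looping Dr. Das describes concrete, below is a minimal, purely illustrative Python sketch of a toy recommender. The catalog, topic names, and ranking rule are invented for demonstration and do not represent any real platform's algorithm: if a child engages with a topic, then more of that topic is ranked to the top, which invites yet more engagement.]

```python
# Purely illustrative: a toy "if/then" recommender, not any platform's real code.
# If a child has engaged with a topic, then items on that topic rank higher,
# which produces more engagement with that topic -- the looping "recursivity"
# described above.

from collections import Counter

# Hypothetical catalog of (item, topic) pairs, invented for this sketch.
CATALOG = [
    ("minecraft_video_1", "minecraft"),
    ("minecraft_video_2", "minecraft"),
    ("news_clip_1", "news"),
    ("craft_tutorial_1", "crafts"),
]

def rank_feed(engagement: Counter) -> list[str]:
    """Rank items so that topics with more past engagement appear first."""
    ranked = sorted(CATALOG, key=lambda pair: engagement[pair[1]], reverse=True)
    return [item for item, _topic in ranked]

engagement = Counter()
for step in range(3):
    feed = rank_feed(engagement)
    top_item = feed[0]
    top_topic = dict(CATALOG)[top_item]
    engagement[top_topic] += 1            # the "if": the child watches the top item
    print(step, feed, dict(engagement))   # the "then": that topic keeps rising
```

Running the loop shows one topic quickly dominating the feed after a single engagement, which is the self-reinforcing dynamic the example above illustrates.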

 

[Imran Ahmed]: Thank you so much, Ranjana. That was fascinating. I wanted to ask: one of the things you were talking about was retraining algorithms, and these are really opaque algorithms, and people really are feeling in the dark. One of the things we did when we put together the PSA I talked about earlier on... the reason for the PSA was that a friend of mine who's a mom said to me that when she was young, there was a PSA which said, "It's 10 p.m., do you know where your kids are?" Now she knows where her kids are; her kids are right next to her. But she doesn't know who's with her kids, because they're on their cell phones. She doesn't know what algorithms are shaping her kids' lives or how to communicate with them about it. How can parents explain to their kids, young kids, what an algorithm is and when they're interacting with one? What are the ways in which you do that?

 

[Dr. Ranjana Das]: So I found these conversations happened around mealtimes sometimes, sometimes in the car. One of the parents I spoke to told me, well, if she's in the car with me, then I'm not going anywhere else and she's not going anywhere else, and we've got to have those conversations there. So I think there's something there about not making a huge deal out of it, not making a big event out of it, but really using those everyday moments to have the sort of dinner-table conversations that perhaps fall into a gap: schools offer the "let's teach you how to code" side, but these wider, more critical conversations possibly don't happen. And in my research, I found that the best conversations were actually had in between, on the school run, picking up a sibling from a dance class, waiting outside a swimming pool, because people have devices with them, and taking that moment to really talk to your kid: oh well, you see, that's why that particular Minecraft video is coming up on your feed, and why that particular YouTuber is coming up on your feed. I think those informal moments have real power in them. And I saw these parents, none of whom were tech experts by any stretch, using moments by the pool, moments in the car, moments with a sandwich, trying to have those if/then conversations. And I think that was really powerful and fascinating.

 

[Imran Ahmed]: Thank you so much, Ranjana. I think we're having a group discussion and Q&A now, so if all of our participants can come back, that would be wonderful. We've got a few questions that have been sent in, and I'm just going to throw them out; whoever answers first wins. So, a question that's come in: what are the most effective strategies a parent can use to protect their child? And are there any specific software tools or products that can be used to block algorithms?

 

[Dr. Elvira Perez Vallejos]: I can share not only my professional perspective, but also my experience as a parent. I have two teenage girls, 13 and 15, so I have time limits. I have rules, so they cannot take their phones to their bedrooms at night when it's time to go to sleep. And I do some content moderation, so they will not be accessing content that is not appropriate for their age; some of our phones allow for that. But I think the key is to be able to talk to your child, to provide that safe space where you are not judgmental and you can ask them questions, in case they unfortunately come across or witness anything upsetting. For example, I asked them explicitly if they saw any content from the Gaza-Israel conflict, and they said yes, and they were able to describe how they felt, and we had an important conversation around that topic. So I think the crucial point is for the child, if they witness something distressing, to be able to talk to you, and then through that conversation they understand that there are things they can do to protect themselves, because unfortunately there is harmful content out there. So talk to your child, create a relationship of safety where they can talk to you, and then there are some practical limits you can put in place.

 

[Imran Ahmed]: Thank you. Does anyone else have any ideas? 

 

[Dr. Ranjana Das]: I'm sorry, yeah, I just wanted to add that the thing Elvira said about talking about it, it might sound like something we always say, but it really is important to keep that conversation channel going. And for me, I have an eight-year-old and a three-year-old, it's not specifically about any one technology. I think we have to bear in mind that this is an ongoing, unfolding, in-progress conversation, because they'll change, they will get older, the world will change, the world will get more complex, and of course there will be a new wave of new technology, right? But I think if you've got that relationship and you chat about these things openly, perhaps even share some research findings in an age-appropriate way, and perhaps gamify a few things. I remember last year we had a whole thing about, you know, let's play Internet detective, where he would come and show me any pop-ups and adverts that he got.

And, you know, he got a whole point for it and a little sticker on a chart. Different families have different methods, but I think it’s that ongoing conversation. It’s not a conversation that ends or is about one particular thing, but that’s an ongoing, lifelong conversation where you grab those informal moments and have fun chats about technology. Really key.

 

[Imran Ahmed]: I think, now that I've moved to America, I've lived here for three and a half years, I started doing therapy and I've learned the word vulnerability, which I didn't know as a British person. And apparently communication is involved. I should say we've got a website, protectingkidsonline.org, and the parents we've got there, like Ian, the father of Molly Russell, and I wrote about how you create a permanent two-way conversation based on vulnerability, honesty, and openness, in which kids can teach parents. So it's really symmetrical: kids are teaching parents what they're seeing online and how these apps work as users, and parents are helping them contextualize their experiences and the content there, to understand that something they see frequently actually isn't normal, isn't something they would see frequently in the world around them. And especially right now, with everything that's happened with Israel-Palestine. CCDH does a lot of extremism and counterterrorism work, and we've spent the last five weeks looking at babies, and it hasn't been very pleasant. That can be very, very disturbing to a child, but that's the kind of rabbit hole that kids can go down, and it is very disturbing. I wanted to ask: we've talked a lot about parents and what kids can do, and we've talked a bit about what educational institutions can do as well. But how can parents advocate for more legislation, or more effort by government, to enforce the ethical development and use of algorithms?

 

[Dr. Motahhare Eslami]: Can you repeat the last part? I got lost there.

 

[Imran Ahmed]: So, just: how can parents advocate for more legislation and more effort by governments to enforce the ethical development and use of algorithms? What's missing that only government can do to help us with this challenge of dealing with a world in which algorithms are everywhere?

 

[Dr. Motahhare Eslami]: I think, unfortunately, as I mentioned, a lot of this is going to come from advocacy efforts or community-driven efforts. Here in the city of Pittsburgh, in the United States, we had the case of a predictive policing system that, unfortunately, was actually built in collaboration with some researchers, whose intention, I'm sure, was to do their best to find areas where crime could happen, but it definitely affected Black communities, including youth. There was this concern about over-policing, and predictive policing is like a sci-fi movie: you're going to predict which locations are going to have crimes, which is ethically under question. So what happened is that a lot of youth here, students among many other community members who were worried about this, brought it to the legislative level, or at least raised their concerns.

And I don't think this is necessarily the right way, because we don't want to put people who are already vulnerable under more vulnerability and more labor just to defend their own rights. I think parents need to help kids become advocates for that, but at the legislative level, it's more about awareness. I think as more and more adults become aware of these systems and their challenges, they can have a better say.

 

But again, as I said, unfortunately we are at a stage where we need more advocates to push this through.

 

[Dr. Elvira Perez Vallejos]: But it is a massive challenge as a parent to influence law. With some of the research we've done, we've used that evidence: there are sometimes parliamentary inquiries that ask for information, for evidence, and we present that. But as a parent, it's really difficult. I guess you need to talk to the MPs, the representatives of your locality, and talk to them about it. But it's very difficult, because the asymmetries of power are massive. Recently in the UK there was an A.I. summit, and it was a very clear example of how powerful these massive corporations, Google, Alphabet, are. They are so powerful, and innovation seems to be a synonym for prosperity and the economy. So I think there's a fear of regulating too much, because the investment in A.I. is massive. It's a very interesting time.

 

[Imran Ahmed]: So just one final question for the group. I mean, some of the most popular platforms do advertise user controls and tools to increase agency for users. Do we know if young people are actually using these features and whether the tools actually change the experience in a meaningful way? 

 

[Dr. Elvira Perez Vallejos]: They do. For example, with Facebook, they're very savvy about their privacy settings. So from my experience working with children, the answer is yes, definitely.

 

[Imran Ahmed]: And I think TikTok, in response to the study we did, Deadly by Design, now allows you to reset the algorithm. That speaks a little bit to what I think a couple of you have said about the ability to reprogram or change the way the algorithm understands you. Is that something you think parents should be doing on a regular basis, for example, with their kids' accounts: resetting the algorithm so they're not being sucked into rabbit holes based on their previous experiences and interaction patterns?

 

[Dr. Motahhare Eslami]: I can comment on that more in terms of the general ways people manipulate algorithms, particularly on social media, where feeds are curated by algorithmic filtering. I've seen people intentionally confuse the algorithm or try to go broad, for example to avoid getting into echo chambers or rabbit holes, as you mentioned, by following people with different opinions, by trying to look at different content. Sometimes we had people who said, okay, sometimes I like something to tell that friend that I like it, but then I hide it later to tell the algorithm that I don't want to see more of this. So we definitely have ways to impact it. And I think it would be important to also teach kids about this. It's like every other tool you work with: you teach someone how to drive a car and what to do when emergencies happen. So people definitely can do that, and I suggest it. It's hard to get out of your echo chamber; it's hard to follow things you don't necessarily agree with. But if you see it's best for you, or for your child, not to get into conspiracy theories, I think that's the way people have done it.
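
[Editor's note: as a purely illustrative sketch of the kind of "retraining" tactics described here, the toy Python below shows how explicit signals such as liking, hiding, or deliberately following different content could shift a simple engagement-weighted topic score. The signal names and weights are assumptions for demonstration, not any platform's actual ranking system.]

```python
# Purely illustrative: how explicit feedback signals might shift a simple
# engagement-weighted topic score. The signals and weights below are assumptions
# for demonstration, not any platform's actual ranking logic.

SIGNAL_WEIGHTS = {"like": 2.0, "follow": 1.5, "watch": 1.0, "hide": -3.0}

def topic_scores(history: list[tuple[str, str]]) -> dict[str, float]:
    """Sum signal weights per topic; higher scores mean more of that topic is shown."""
    scores: dict[str, float] = {}
    for topic, signal in history:
        scores[topic] = scores.get(topic, 0.0) + SIGNAL_WEIGHTS[signal]
    return scores

# "Like it for the friend, then hide it to tell the algorithm" roughly nets out,
# while deliberately following and watching different content lifts that topic.
history = [
    ("conspiracy", "like"),
    ("conspiracy", "hide"),
    ("science", "follow"),
    ("science", "watch"),
]
print(topic_scores(history))  # {'conspiracy': -1.0, 'science': 2.5}
```

In this sketch, the hide signal outweighs the earlier like, so the net effect is less of that topic, which mirrors the strategy described in the answer above.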

 

[Imran Ahmed]: Look, thank you. We're going to come to an end soon, but I just wanted to give everyone a chance for a final thought, one takeaway from today that you'd like to communicate to our audience out there. We'll go in the order of the speakers from today, starting with Elvira.

 

[Dr. Elvira Perez Vallejos]: So, again, I want to close with a hopeful message. I think there is awareness nowadays of the consequences: I think we are all part of a massive social experiment. I have a feeling that with all the new tech that appeared out of Silicon Valley, the motto was: break things, just try it, say sorry afterwards rather than not doing it, innovate, disrupt. That culture is changing, because now we can see the harm it has caused. Now we're in a second phase, with generative A.I., and it feels like we're doing the same again: it has been deployed without assessing or evaluating the potential risks, for example to education and jobs. So we are again running a second wave of a massive social experiment. My hope is that we are learning, and there are really interesting frameworks for responsible innovation and responsible research, and I really hope we promote them within the research community, but also within the tech community, because I hope people have a conscience, and computer scientists and developers can decide to change things, and they have the power to change things.

I'm simply hoping, from history and the evidence we can see around us, that things are going to change, because they have to change.

 

[Imran Ahmed]: Thank you. Motahhare.

 

[Dr. Motahhare Eslami]: I want to follow up on what Elvira said. I know it's challenging; I'm a parent, and I know how hard it is dealing with all the screens and all this curated content. But one thing I think we need to learn is that with any new technology that comes along, our kids and youth are going to be its audience. I give the example of cars a lot: when cars came, we had to ask whether teenagers could drive them and what safety mechanisms we could put into effect. I think this is the same, but the problem here is that it's very invisible, and that is the hard part. We need to learn about that. You can see if your kid hits a wall with a car and think about what to do next; with algorithms, that invisibility makes it harder and more complex. But as parents, educational experts, researchers, whoever is working in this domain, we are responsible for making these algorithmic systems as safe as we can for our kids.

 

[Imran Ahmed]: Thank you. Amy.

 

[Dr. Amy Ogan]: I'll just go back to how we started this Q&A: that communication is so critical. And again, while we want governments and companies to be leading the charge on revising and improving their algorithms, it's so important that parents, carers, and teachers feel comfortable, even if they don't have any technical knowledge, just opening up a conversation. Asking your children: what are you doing? Can we think about how these algorithms work? How do you think it works? How might we be impacted by these things? It doesn't take anything professional-level to have those sorts of open-ended conversations, but it might take time. So leaving those channels of communication open and being willing to engage in those discussions is, to me, the most important thing. And I find that optimistic as well: it's going to improve our society overall in the long run if we're able to do that.

 

[Imran Ahmed]: Thank you, Amy. And Ranjana.

 

[Dr. Ranjana Das]: So, two final thoughts. The first aligns with what colleagues have just said: grab those moments that you have in your family, study your own family. If there are moments in the car, moments by the pool, moments on the school run where you think you can have those little chats, keep having those little chats, because that relationship really matters, and you don't need to be a tech expert for that. Play little games, set down rules if you have to, but have that space. The second thing, and I don't know if any educators are listening to us, is that one of the things that comes up in my research again and again is that there's a gulf in what schools say about technology, between "yay, we're going to teach you how to code" and "we're going to talk about one big danger: you don't know who you're talking to online." There's so much more to talk about in terms of risks, in terms of big data, algorithms, and datafication, that isn't getting covered in those leaflets that are sent home about technology. And I think schools and educators possibly need to think about how they speak about technology, and to diversify and expand that offering beyond "let's learn how to code" and "stranger danger," because there are other things that aren't getting talked about.

 

[Imran Ahmed]: Thank you so much. It's been really, really amazing. If I could add one final thing, it would be this: we don't necessarily want legislators writing algorithms themselves, and certainly their use in some areas like policing and education has been controversial. But one thing that is uncontroversial is that we have the right to know about the algorithms that are shaping our lives and our kids' lives. Real transparency of those algorithms, and meaningful accountability to well-informed lawmakers, in the US in particular, is crucial if we're going to have a culture of safety by design, a culture of safe algorithms that enhance our society, help our children, and make our society better, rather than the way they operate today. And the only way we're going to get that is if they get off their bums, stop fighting each other, literally, in Congress, and start talking and legislating for parents and for our society. But thank you so much, all of you. It's been such a pleasure to meet you all. And I'm handing back over to Kris.

 

[Kris Perry]: Thank you to all of our panelists for this in-depth and informative dialogue today. Thank you also to our audience for tuning in and submitting your questions; this was a lively conversation. To learn more about the Institute and all things digital media and child development, please visit our website, childrenandscreens.org, follow us on these platforms, and subscribe to our YouTube channel. We hope you will join us again for the next Ask the Experts webinar on Wednesday, December 6th, as we take a look at generative A.I. technologies in "A.I. and Children: Risks and Opportunities of the Enhanced Internet." Thank you.