Social media algorithms and other digital media are contributing to increased online and offline polarization and extremism for both youth and adults, from political conspiracy theories to white nationalism and climate change debates. What can parents, educators, and communities do to increase awareness and prevent children from falling into polarizing echo chambers or interacting in radical online spaces?

Children and Screens’ #AskTheExperts webinar “Taken to Extremes: Online Radicalism, Polarization and Youth” was held on Wednesday, March 15, 2023 at 12:00pm ET via Zoom. A panel of investigative researchers, thought leaders, and advocates with expertise in online extremism shared examples of the digital mechanisms by which extremists target and capture the attention of youth, how media impacts polarization around the world, and what steps could be taken to limit the features of digital media that contribute most to extremism in youth.

Speakers

  • Brian Hughes, PhD

    Associate Director, PERIL
    Moderator
  • Paul Barrett, JD

    Deputy Director and Senior Research Scholar, NYU Stern Center for Business and Human Rights
  • Jennie King

    Head of Climate Research and Policy; Head of Civic Action and Education, Institute for Strategic Dialogue
  • Alex Newhouse

    Senior Research Fellow, Emerging Technologies, Center on Terrorism, Extremism, and Counterterrorism, Middlebury Institute of International Studies at Monterey
  • Katie Paul

    Director, Tech Transparency Project

Digital mechanisms are contributing to the growing radicalization of today’s youth and young adults as they navigate online spaces. What can parents, educators, and communities do to increase awareness and prevent children from falling into polarizing echo chambers or interacting in dangerous radical online spaces? In this “Ask the Experts” webinar, a panel of investigative researchers, thought leaders, and advocates with expertise in online extremism share examples of the digital tools extremists use to target and capture the attention of youth and how unregulated online platforms and algorithms auto-deliver extreme content to youth around the world. These experts also provide recommendations for how governments might regulate these platforms to reduce polarization and how caregivers and educators can teach youth to think critically and interact safely in digital spaces.

00:00 Introduction

Kris Perry, MSW, Executive Director of Children and Screens: Institute of Digital Media and Child Development, introduces the webinar and panel moderator, Brian Hughes, PhD, Research Assistant Professor in the American University School of Public Affairs and co-founder and Associate Director of the Polarization and Extremism Research and Innovation Lab (PERIL) at American University. Dr. Hughes briefly explains how polarization and extremism look different in the digital age, how youth are targeted online, and how the topic will be addressed throughout the webinar.

05:08 Katie A. Paul, MA

Katie Paul, Director of the Tech Transparency Project (TTP), co-director and co-founder of the Antiquities Trafficking and Heritage Anthropology Research (ATHAR) Project, and founding member of the Alliance to Counter Crime Online (ACCO), summarizes the landscape of extremism and polarization on big tech platforms like Facebook and YouTube. She discusses how specific tools, such as algorithms, inaccurately moderated ads, and auto-generated community pages, amplify extremist and harmful content to all users, including youth.

19:23 Paul Barrett, JD

Paul Barrett, JD, Deputy Director of the Center for Business and Human Rights at New York University’s Stern School of Business and adjunct professor at the NYU School of Law, examines the “regulatory Wild West” and how the lack of federal regulation allows self-regulating social media companies to regularly promote harmful ideologies to the unsuspecting. Barrett delves into current debates and Supreme Court cases about Section 230 of the 1996 Communications Decency Act, and discusses the possible implications for Congress, industry and society.

33:54 Alex Newhouse, MA, MS

Alex Newhouse, Deputy Director of Middlebury Institute’s Center on Terrorism, Extremism, and Counterterrorism, discusses the role that video games may play in the radicalization of youth. To begin, Newhouse reviews potential benefits of video games. He explains the gamification of violence and how propaganda circulates in extremist networks on gaming platforms, and why some youth are more vulnerable to these influences than others. He briefly notes what we still do not know about this space, and what red flags parents can watch out for.

50:43 Jennie King

Jennie King, Head of Climate Research and Policy at the Institute for Strategic Dialogue (ISD), explains how to teach youth about digital citizenship and critical thinking in online spaces, including how to connect the dots between different layers that can alter what is encountered online. She offers practical tips and concrete examples for how caregivers can approach conversations with young people about hate, extremism and violence online.

01:06:00 Q&A

Dr. Hughes leads our panelists through a thought-provoking group discussion addressing questions submitted from the audience. They explore how to encourage intergenerational, community-wide dialogue and efforts to limit radicalism, the complexities of regulatory systems in the United States, impacts of “artificial” spaces, and alternative activities or social movements that may promote positive youth development.

[Kris Perry]: Welcome, everyone. I am Kris Perry, executive director of Children and Screens: Institute of Digital Media and Child Development, and I am pleased to welcome you to today’s Ask the Experts webinar, “Taken to Extremes: Online Radicalism, Polarization and Youth.” Today’s topic is one of global importance and one that can generate a great amount of fear in parents. Today, children’s lives are often stretched across both the online and offline worlds, and online spaces are becoming increasingly divisive, promoting hateful and extreme speech, ideology, and behaviors. Both national and international organizations are calling for more attention to be paid to the role of digital spaces in increasing polarization within our societies, and particularly to the risk they may pose in radicalizing youth. To address this very topic, we have brought together a panel of researchers, thought leaders, and advocates with expertise in online extremism to share their knowledge and recommendations for protecting youth online. They will share examples of the digital mechanisms by which extremists target and capture the attention of youth, how media is impacting polarization around the world, and what steps could be taken to limit the features of digital media that contribute most to extremism in youth. The panel has reviewed your pre-submitted questions, and you may use the Q&A box at the bottom of your Zoom screen to submit any additional questions you may have throughout the webinar. The panel will answer as many as they are able through their presentations and group discussion across the next 90 minutes. The webinar is being recorded and will be posted on our YouTube channel in the coming days. All registrants will receive a link to our channel, where you can also view all of our previous webinars. Now, I am pleased to introduce you to today’s moderator, Dr. Brian Hughes. Dr. Hughes is a research assistant professor in the American University School of Public Affairs Program of Justice, Law and Criminology. He is also the co-founder and associate director of the Polarization and Extremism Research and Innovation Lab (PERIL), where he develops studies and interventions to reduce the risk of radicalization to extremism. Welcome, Brian.

 

[Dr. Brian Hughes]: Thank you so much, Kris. It’s wonderful to be here. I’m so happy to be speaking with all these incredible scholars and practitioners, and very grateful for our audience who have joined us, too. I’m just going to set the stage a little bit before we begin, to try to put some context to this issue. I think that when we speak about radicalization and polarization online, particularly as it pertains to youth, it’s important to get a sense of the impact on our society. And when we speak about this, it’s important to keep at the forefront of our minds, and at the front of the conversation, the impact that this has on the targeted individuals and the targeted communities who bear the brunt and carry the heaviest burden of this issue in our society. We should also keep in mind the impact that this has on families and on our communities: the way that radicalization, polarization, and extremism in youth can tear communities and families apart, sow mistrust, and damage our democratic process. And then, of course, we need to think about the impact that this has on radicalized youth. Radicalization is not a pathway with good outcomes. It is a process of development that leads to negative life outcomes. The digital world is a bit different from the pre-digital world, especially when it comes to radicalization, extremism, and youth. In the days before computers, a young person who exhibited many of the vulnerabilities that we associate with radicalization would have to be very unlucky to meet the kind of recruiter or propagandist who could channel those vulnerabilities in the direction of political violence, extremism, or hate. Nowadays, however, it’s frankly impossible not to encounter that kind of propaganda. Any time we log on to our digital devices, we encounter extremist propaganda, hate, and their various cognates. And this is especially true of spaces that are frequented by youth. Extremist organizations and individuals understand that there are spaces and places online where youth are especially reachable and where vulnerable youth are more easily identified. They consciously target those spaces, and they consciously target those youth. And we see the outcomes of this every day, from a chronic, pervasive atmosphere of anxiety, hostility, and suspicion in our communities to acute outbreaks of deadly violence. So our panelists today are going to be looking at this problem from several different angles to help us understand it and to help us approach some solutions to it. I’m very happy to introduce our first speaker, Katie A. Paul. Katie Paul is an anthropologist and investigative researcher who serves as the director of the Tech Transparency Project, where she specializes in tracking extremism, disinformation, and criminal activity on online platforms such as Facebook. She also serves as co-director and co-founder of the Antiquities Trafficking and Heritage Anthropology Research (ATHAR) Project, and she is a founding member of the Alliance to Counter Crime Online, where she focuses on trafficking, terror finance, and organized crime facilitated by big tech platforms. Katie, thank you so much. We can’t wait to hear from you, so take it away, please.

 

[Katie Paul]: Brian, thank you so much. I think you really set the tone appropriately regarding the fact that, unlike 20 years ago, most people are able to easily encounter extremists and recruiters online. And we’re seeing this not just in the United States with domestic extremism, but internationally, with terrorist groups like ISIS capitalizing on social media to amplify their messaging. For those of you who aren’t familiar with the Tech Transparency Project, TTP is an information and research hub for journalists, academics, policymakers, and members of the public that explores the power, influence, and effects of the major tech companies that dominate our everyday lives. Our goal is to give the public a seat at the table with these often opaque companies. Now, of course, we all know there’s bad, harmful content online. That’s part of the reason we’re here. And big tech companies like Facebook and YouTube often try to argue that they are simply a mirror to society. But over the years, TTP’s research, and particularly our research on extremism, has shown that platforms are dangerous by design: targeting youth, amplifying extremism, and profiting in the process. We know that the primary revenue driver of these tech platforms is ads. They’ve told us that themselves. And as such, companies like Facebook have repeatedly looked the other way, often profiting from extremism. Not only have we seen Facebook profit from militia gear, but we’ve also found that following the insurrection, Facebook was running ads for ammo and tactical gear alongside posts about the insurrection and suggested pages from militia groups. The screen grab you see here was taken shortly after January 6th, using an account that we established that explicitly follows militia and extremism, so you can see what the ad targeting is doing here. Now, despite Facebook’s very public claims back in August 2020 to ban militia and what it calls “militarized social movements,” TTP found that in April 2021, months after the insurrection and nearly a year after Facebook’s announcements about banning these movements, the platform still had an interest-based ad category targeting people interested in militia. We’ve also found that Facebook doesn’t adequately scan its ads. The company claims that all ads on the platform go through a review process, but in 2020, 2021, and 2022, we found Facebook running multiple recruitment ads for militia groups. These are militia that had a presence on the platform despite allegedly being banned. And not only that, the company was profiting as these groups ran explicit recruitment ads. It’s not just militia movements, either. In August of this past year, we found that Facebook was running ads in searches for white supremacist groups that were explicitly banned by the platform itself. Alarmingly, this included multiple ads for Black churches. So not only has Facebook monetized searches for banned white supremacist groups, but it’s offering up targets for extremists, running ads for Black churches in searches for variations of the Ku Klux Klan and other designated white supremacist groups on Facebook. And, of course, it’s not just Facebook that profits from militia content. YouTube is also profiting. In a report that we did in 2022, we found that YouTube was running ads on recruitment videos for groups like the Three Percenters, and it was monetizing creators of militia content through Super Chat.
That means that both YouTube and the militia content creators have been making money in the process. But obviously, to continue to profit, platforms need to keep you and your child engaged. And to do that, they need to continue to feed you more content that you would want to see on the platform, which is the driving purpose behind the algorithms that can be so harmful in this process. We all recall the threats to kidnap Governor Whitmer. One of the notable things about this case was that it was allegedly organized by individuals who met through a Michigan militia group on Facebook. Facebook very publicly claimed that it had been working with the FBI and removed the group that was involved. But here we show, well after that announcement, not only was that militia still active, but Facebook’s algorithm recommended yet another Michigan militia group that was also part of the plot against Governor Whitmer. And it’s not just recommending harmful content. Facebook’s algorithms actually create content for extremists. TTP found that Facebook had been auto-generating pages for a range of extremist groups, from domestic hate groups like the Proud Boys page you see here, which was actually created by Facebook on January 6th, 2021, to internationally designated terrorist groups like ISIS. We recently released a report, in regard to the Gonzalez case, showing that for years Facebook has been auto-generating ISIS pages, despite the fact that the company claims its content moderation technology specifically targets ISIS and Al Qaeda. And it’s not just those groups. Facebook’s auto-creation algorithms have done similar things with groups related to the militant Boogaloo movement, and its recommendation algorithms in 2020 were aggressively pushing a myriad of Boogaloo groups. Some of you may recall that there was recently a lawsuit against Facebook by the surviving family of an officer who was killed by a member of the Boogaloo movement, alleging that Facebook’s recommendation algorithms pushed his killer into extremism by recommending content like this. And despite repeated warnings and its claims to be investing in security and safety, which we see may not be the case now that Facebook has laid off another 10,000 people, we’ve repeatedly found the platform failing to address this movement in particular, which is one of the few that it has explicitly banned by name. This issue is continuing today. The man pictured in the video here was Steven Carrillo, the killer at the center of that lawsuit regarding the Boogaloo movement and his inspiration to kill. This video and the propaganda around it were captured last month in our research, and we shared it exclusively with Vice. Now, Facebook, of course, removed the content after it was brought to their attention, but it’s yet another example of the failure to proactively moderate this kind of content. Notably, this is a meme-based movement, which means it’s largely borne by millennials and Gen Zers. But you can’t take the meme nature for granted. For example, this meme that you see here, which includes a SpongeBob cartoon, is actually instructions for making homemade napalm using materials you can find in your home. It was posted in 2022 by yet another Boogaloo group, despite the fact that all such content is banned on the platform.
The Boogaloo movement in particular is concerning when it comes to youth because, unlike the Three Percenters or other domestic extremist groups, this is a movement that was born online and uses the Internet, particularly large platforms, to organize. Facebook has been at the center of that because of the platform’s reach, and we’ve even seen Boogaloo members say, “Why do we use Facebook when they continue to censor us?” Another answered, “That’s how we reach the normies.” They try to gather as many people as possible and then shift them to smaller, more fringe platforms. But amid all of these issues of pushed content and unchecked algorithms, let’s keep in mind one of the platforms’ goals in creating the content that reaches youth and vulnerable populations: profit. Tech companies are targeting the impressionable population of children with the goal of creating lifetime consumers for their products. The Facebook Files made clear that the companies are explicitly targeting children, and even platforms like YouTube that have kids-focused content have had violating and concerning content on them. We know platforms like Instagram are deliberately designed to target youth, and despite all of that, we found it very easy, using a profile created for a 14-year-old, to find illicit content like opioids on the platform. Here, we found that it takes two clicks to get connected with a drug dealer, but it took five clicks on Instagram’s platform to actually figure out how to log out. And as I mentioned, YouTube Kids, which is supposed to carry some of the company’s most curated content to protect people from harm, and is geared toward ages 2 to 12, had content like this, teaching you how to build your own gun concealment shelf. This is just another example of how, despite the companies’ claims that they really hone their protections on content for children, as on YouTube Kids, there often isn’t any real effort at moderation, while at the same time the companies are profiting from failing to enforce their policies on these harmful groups. I know that’s a lot to cover in just a few minutes, but it’s important to note that your members of Congress are paying attention. We’ve seen dozens of letters from members of Congress on both sides of the aisle regarding the need for platforms to stop targeting children and the need to rein in tech. Right now, the platforms are operating in a regulatory Wild West that I’m sure Paul will be discussing a little further. But there is something you can do in this rare instance of bipartisanship, and that is call or write your elected officials: your representatives, senators, and state attorneys general. Individuals on both sides of the aisle are coming together to figure out how to rein in tech, and it’s a matter of the public putting pressure on them to make sure that actually happens.

 

[Dr. Brian Hughes]: Katie, thank you so much for that talk and presentation. I think you laid out the stakes so clearly, and also showed that this is not just an organically occurring problem. There are choices being made, both commercially and legislatively, that impact this for better and for worse. We have to stay on track, so I only have time to ask you one question, and it actually comes from the audience: can you explain a little bit what auto-generation is, and how it factors into the profit model that is creating these problems you identify?

 

[Katie Paul]: We’ve been following the auto-generation issue for several years, and this is something that is unique to Facebook and not the other platforms. Those of you who were on Facebook about a decade ago will remember that in your profile you could list all of your interests and activities, like hiking and biking and traveling. At some point, Facebook realized that these were data points that it could use to target people for ads, and it created something called community pages. So one day, all of the things that you had put in your profile suddenly linked to pages that had been auto-generated by the platform. Facebook was doing this because it needed more digital real estate to offer for advertisements, and it was offering ads up on those pages. Now, this was over a decade ago, and we’ve seen no edits or adjustments to address this issue. Facebook, like many things with Silicon Valley, didn’t consider all of the problems that could come along with it. So quickly we saw terrorist groups like ISIS and Al Qaeda utilizing this feature. Essentially, if you list in your profile that you are a sniper for the Islamic State and there is no existing page for the Islamic State, Facebook creates one. It’s still doing it to this day. And what’s important about that is that Section 230, which the companies have largely used as a shield, only protects platforms in the case of third-party-generated content. But auto-generated content is first-party-created, which means Facebook is making it itself and amplifying these groups that thrive on propaganda. It’s still unclear why the company has not made any adjustments, despite the obvious vulnerabilities and legal liability. We suspect that it may be because this feature is so deeply baked into their code from over a decade ago that the cost of undoing it would be more than they are willing to put in. But of course, the company is the one that would need to weigh in on that.

 

[Dr. Brian Hughes]: Fascinating. And you mentioned Section 230. That’s a perfect segue into our next speaker, who’s going to talk a little bit about the legal and regulatory landscape. We’ll turn now to Paul Barrett. Paul Barrett is the deputy director of the Center for Business and Human Rights at New York University’s Stern School of Business. He joined the center in September of 2017 after working for more than three decades as a journalist and author focusing on the intersection of business, law, and society. At the Center for Business and Human Rights, Paul has focused primarily on researching and writing a series of reports on the role and obligations of the social media industry in a democracy. Paul, we’re so grateful to have you with us. Please take it away.

 

[Paul Barrett]: All right. Well, thank you very much for that introduction, and I think you’ve offered a perfect segue into what I’m going to talk about. Katie referred to a regulatory Wild West as far as social media is concerned, and I think that is an accurate description. Unlike most other industries, whether it’s the equity markets, which are overseen by the SEC, or the broadcast and cable markets, which are overseen by the FCC, and you could go on and on, there isn’t a federal agency at the national level that oversees the social media industry. In fact, the main law, which you’ve referred to, Section 230 of the Communications Decency Act of 1996, is a deregulatory law. It is a law that does not restrict what social media platforms can do. It protects them against what is essentially regulation by litigation, by saying that content posted on a social media platform by a third party cannot be the basis of a lawsuit against the platform itself. You have to seek any damages from the person or entity that posted the material. The second part of the law says that a platform cannot be held liable for taking down objectionable material. The purpose behind this law, which was a valid purpose at the outset of the Internet industry, was to protect a burgeoning new commercial area where companies were starting up to offer various services online, and some of them were being sued for defamation as people posted negative material about other people. Congress decided that if this industry was going to get aloft, and if it was to deliver on the promise of enhancing and promoting free speech and other types of expression, then it had to be protected against liability lawsuits, because if it were not, it would not be feasible for the platforms to preemptively prevent potentially defamatory or otherwise damaging material from being posted by their many users. Now, 25 years later, the landscape of social media and the Internet in general has of course changed dramatically. We have these huge companies like the ones Katie was referring to, Facebook, which is part of Meta, and YouTube, which is part of Google, and these companies now have pervasive influence and play a significant role in our day-to-day lives and in how our democracy is managed. As we speak right now, Section 230 is the subject of a potentially very important Supreme Court case called Gonzalez v. Google, and it involves the kind of extremism Katie was referring to, and that I think this gathering is generally devoted to, namely terrorism executed by ISIS. At issue in that case is a YouTube video, or a set of videos, promoting ISIS and the kind of terrorism it has been responsible for. The plaintiffs in that case were the family and others related to a blameless young woman who was killed in 2015 in an ISIS attack in Paris. And the question, phrased most broadly in the case before the Supreme Court, is: should it be easier for those relatives of a terrorism victim, or anybody else, to sue YouTube and its parent company Google over content that has been posted not in the first instance by YouTube, but by some other party, in this case ISIS? And you can immediately see the tension here created by Section 230.
Section 230 promotes free speech, but at a considerable social cost, in that you have not just speakers who are disseminating positive, constructive views and so forth, but also speakers who are disseminating dangerous and damaging views, like ISIS. Of course, other protections of free speech, like the First Amendment of the Constitution, similarly have social costs that go along with them. You can’t guarantee that all speakers are going to be the sort of speakers you’d ideally like to have, and the Supreme Court will have to address this question: should it be easier to go after the platforms in court? More specifically, the plaintiffs in this case put the question to the court, and this relates again to some of the material that Katie was talking about, of whether it makes a difference in this kind of lawsuit that the platform in question didn’t simply passively host material, but arguably promoted the material by recommending it, which in the case of YouTube means placing it in the “up next” category that runs to the right alongside the video you’re watching. It’s difficult to know how the Supreme Court will resolve this case. Interestingly, during the oral arguments some weeks ago, the lawyer for the plaintiffs, the relatives of the terrorism victim, had a quite visibly difficult time making his argument in the face of a lot of skepticism from members of the court. But as someone who’s watched the Supreme Court closely for many years, I can underscore that it’s perilous to try to read into the comments at an oral argument how the court will come out in the end. This, however, we can say: if the Supreme Court announces a new rule that makes it easier to sue platforms, and we therefore get some degree of regulation via litigation, I think efforts across the street in Congress to curtail Section 230 will be diminished. I think a lot of the air will go out of efforts to legislatively restrict the platforms. If, on the other hand, Google prevails, I think you will see renewed efforts in Congress to use legislation to curb Section 230, and again, this means curbing the degree of protection that Section 230 now provides to the platforms. A lot of the debate in Congress has focused on Section 230. And while there is agreement across the board among Democrats and Republicans that something should be done to restrict or rein in social media, there’s been so much partisan animosity in this debate that Congress has for years been unable to come up with a consensus on how to do this via Section 230. I’m afraid I’ll disappoint people who want to see more restrictions on platforms, but I would predict that in the new Republican-controlled House, it will be very difficult for the Republicans in charge to reach any kind of agreement with Democrats, and I think it is unlikely that we’ll see legislative change. Nevertheless, let me offer an alternative approach to regulation that I think has more promise, or at least should be considered as a broader approach alongside reform of Section 230, and that is enhancing the consumer protection authority that the FTC already enjoys, to provide sustained oversight of social media.
I think that the FTC, under its existing statutory authority to regulate “unfair and deceptive” commercial practices (that’s the terminology in the statute), could impose certain transparency requirements, and could further require that the platforms demonstrate that they have in place procedurally adequate content moderation. That might address some of the concerns we have about how platforms currently self-regulate, which most people outside of Silicon Valley agree has been inadequate. I’d be happy to expand on my ideas about transparency and procedurally adequate content moderation if you’d like me to, but I think for now, I’ll leave it there.

 

[Dr. Brian Hughes]: Well, I would, as a matter of fact. First, thank you, Paul, for explaining a very complex topic so clearly for all of us. But I would like to hear more about a procedural approach to this. How could the FTC succeed in reining in some of these excesses where Congress has failed, and where the Supreme Court may opt not to act?

 

[Paul Barrett]: All right. Well, what I want to emphasize at the outset, and I’ll emphasize it again at the end of my answer, is that Congress’s room to maneuver here is limited, both in the lane that I’m proposing it operate in and anywhere else. These opportunities to regulate are limited because the business in question is one that traffics in expression, and the First Amendment simply prevents Congress or regulatory agencies from getting involved in the substance of setting policies for these businesses, let alone in actually participating in or overruling decisions about how to enforce those policies. We shouldn’t want Congress to have that authority. We shouldn’t want the FTC to be telling Facebook, you can post this, but you can’t post that. And the reason for that is: just imagine that a president comes into power whom you don’t like, whichever party you’re in, and that president then inherits the ability to dictate what appears on Facebook. You shouldn’t want that any more than you would want them to dictate what appears on the front page of The Washington Post or The New York Times. So what can the FTC do? I think the FTC can require these businesses to engage in the type of disclosure that many other industries engage in: to describe, for example, how the algorithms operate that control recommendations, or that control the creation of pages that Katie was referring to in connection with Facebook. Here, regulators would not have to get involved in what exactly the companies should be doing, but simply require these currently largely mysterious automated systems to be described. For example, what are the top 20 criteria that the companies use in operating these algorithms? What is valued, what is prioritized, and what isn’t valued? In so doing, we would come to have a much greater understanding of how and why certain content gets promoted while other content ends up so far down your feed that no one ever sees it. That’s the transparency piece. The procedurally adequate content moderation piece is oversight that would ensure that the platforms fulfill the promises they are making. All of these major platforms make a large number of promises about policies they’ve written and how they enforce those policies. The FTC could say that where you have made promises to consumers, you need to follow through. So you need to have, for example, an adequate automated system, and you need to have the number and type of human content moderators who could even conceivably provide the kind of screening that you’re currently promising your users you offer.

 

[Dr. Brian Hughes]: Thanks for that. You know, it seems like in a lot of our work, regardless of what discipline we happen to be working in, when we’re looking at this issue of tech platforms and objectionable content or objectionable social dynamics, this issue of transparency, and of the trustworthiness of these platforms’ spokespeople, is really one of the major barriers to doing good research and therefore to setting good policy. We’re looking at a black box, and sometimes we’re trying to figure out what’s in the box based on the symptoms that are happening outside the box. We’ll hear assurances here and there from people within the companies, and we’ll hear objections from whistleblowers. But I think, as Katie pointed out, there’s a very good chance that even the people working in these companies don’t fully understand what’s in the box. So it would seem that incentives, carrots or sticks, to get at that information would be very valuable for everyone. And again, thank you so much, Paul. I have to move on now, or rather, I should say, we get to hear from Alex Newhouse. Alex Newhouse is the deputy director of CTEC, the Center on Terrorism, Extremism and Counterterrorism at Middlebury, and he specializes in mixed-methods analysis of online extremism. His areas of focus include militant accelerationism, Christian nationalism, eco-fascism, conspiracy theories, and the exploitation of games by violent extremists. And it’s this last topic that he’s here to discuss with us today. So, Alex, thank you so much. What can you tell us about video games, extremism, and youth?

 

[Alex Newhouse]: Yeah, thank you, Brian. I’ll just set the stage here a little bit: I will probably scare the crap out of a lot of you. I don’t intend to do that too badly, so hopefully we can contextualize this and make sure that we don’t all come out of this terrified of video games to the depths of our being, which sometimes happens. I’m going to pull up the slide that I have. A quick caveat here: as Brian mentioned, my specializations tend to focus on far-right extremism, and my research on games and extremism is focused primarily on that. That’s not to say that far-right extremism is the only threat that intersects with video games and digital gaming, but it’s the one that I focus on the most, so that’s something to keep in mind. In pursuit of not scaring everyone and not turning everyone off of video games altogether, I do want to start with a discussion of why video games are actually, on average, quite a positive force in the world. I myself am an avid gamer. I’ve spent, you know, hundreds, thousands of hours over the course of my life playing all manner of digital games. It means something; games are very important to me. And I’ve actually worked in the games industry in the past, both at PlayStation on various teams and at a video game website doing reporting. So games are, in my view, incredibly beneficial in a lot of different ways, and there are quite a few psychological studies showing that video games can have positive effects on people’s behavior and well-being. Some of the studies show that games can satisfy some basic psychological needs: they can connect people to one another, they can provide them with a sense of meaning, and they can create opportunities for friendships, for long-lasting relationships, that can be incredibly positive. We are also increasingly realizing that they can be uniquely situated to help players process grief, trauma, and loneliness in a more productive way. There are video games that are increasingly being used in some forms of therapy for this particular reason. They can also provide motivation for striving to better oneself outside of the game. The reason for this is that studies show we actually live out a sort of idealized form of our own identity when we interact with a video game, and what this means is that there can be a reinforcing effect on one’s behavior outside the game: because you’re living out this sort of idealized version of yourself in-game, that actually pushes you and encourages you to live a better life and improve yourself outside of it. And the last thing I’ll mention here is that studies have suggested over and over that there is no direct link between violence in games and violence in real life. This is an important thing to keep in mind. What we know pretty definitively at this point is that playing Call of Duty or Grand Theft Auto does not make you more likely to carry out violence in the real world. The problem, though, is that all of these forces that can be beneficial in most people, that can drive positive change in the majority of the population, can also have a potentially negative effect in a small minority of the people who are playing video games.
One way to illustrate this: basically, everyone’s behavior is informed by their own internal sense of identity and by factors that come from the environment. And what we are realizing is that games can have a pretty significant impact on this equation, on the way one looks at the world, the way one processes incoming stimuli, and ultimately the way one interacts both in-game and out of game. So you can think about video games as potentially one element in the full equation of a person’s behavior. It is not as simple as “playing violent video games increases the potential for violence,” but there are increasing indications that video games can shift people in certain directions that they might already have been predisposed toward. And we know that games have an impact like this because of the recent history of extremist violence, especially white supremacist violence. What we have seen is that white supremacist violence in particular has become increasingly gamified. What that means is that attackers, and extremists who idolize those attackers, have increasingly used the language, the aesthetics, and the mechanics of video games in carrying out and celebrating mass-casualty violence in pursuit of extremist ideologies. The best example of this is the Christchurch, New Zealand shooting from a handful of years ago. This was the first example we had of an ideological mass shooter live-streaming a video of their attack, and what that meant is that the livestream of the attack basically looked identical to a first-person shooter video game. Now, you might be thinking that this somewhat complicates what I was mentioning about there being no link between violent video games and violence in the real world. But what we’re arguing here, and what researchers have learned over the past few years, is that Call of Duty and other types of violent video games aren’t sparking the violence. They are being used as a sort of visual choreography; they’re being used as a sort of shared language for the celebration of that violence and for the carrying out of that violence. And one of the really shocking impacts of this is that the live streams of these shootings, starting with Christchurch and now going on to a handful of others, have become increasingly illustrative of the mechanics of first-person shooters. This ranges from the way the lens, the GoPro mounted on the shooter’s helmet, distorted the image, to the way the shooter adorned his weapon with icons and logos, something that comes from Call of Duty. All of these things together contribute to this gamification process, which we think has both a dehumanizing effect on the victims and a sort of self-perpetuating impact on how that violence is processed in extremist communities. Another way we know this is happening is that these attacks are received in a way that is also gamified. On the left here of this slide, we have a real example of a leaderboard created by an extremist community. It looks like a first-person shooter leaderboard from a video game, but instead it celebrates the quote-unquote “high scores” of various ideological mass shooters over the past ten to fifteen years.
In this middle part here, we have a 4chan post of someone who is actually criticizing the Poway synagogue shooter from a few years ago for only having a kill/death/assist ratio of 1/0/3. This kill/death/assist language comes straight from video game communities. And then finally, at the bottom here, we have a quote from the Buffalo shooter’s manifesto from last year saying, “I probably wouldn’t be as nationalistic if it weren’t for Blood and Iron on Roblox.” Again, this shows that these shooters, these attackers, and these extremist communities are adopting the language, the aesthetics, and the mechanics of games in the perpetuation of violence. So we know this is happening. We know that games are having an impact on the ways that violence is perpetrated by mass shooters. We also know that increasing numbers of extremists are creating social networks in games and on gaming platforms. This is an example from a video game marketplace called Steam. It’s basically the one main stop for anyone who’s trying to buy a game on a PC, and it also has a number of different forums and social networking features embedded within it. As you can see, a user named Mars posted a swastika on this person’s profile. AstroZulia, down here, is the user who owns this profile. What we found is that this AstroZulia user was an ex-leader of a proscribed terrorist organization, operating openly on Steam, interacting with other people, and sharing extremist propaganda on Steam. What’s notable about this is that Steam is a video-games-first platform, which is indicative of the fact that these extremist networks are both exploiting video game platforms and integrating them into their normal organizational procedures. This next example is a favorites pane from a Roblox profile, and what it shows is that this Roblox user is using the space where they can highlight their favorite pieces of content to stitch together a propaganda poster for an extremist organization called Patriot Front. You can see the Patriot Front logo cropped here under the Patriot Front tab and the Liberty tab. It’s a fasces, the old fascist symbol, superimposed on top of the American flag. What we found here, again, is that these Roblox users are increasingly exploiting the actual mechanics of Roblox to propagandize, to create extremist networks for communication and mobilization, and to look for potential new recruits. We also know that these extremist networks are increasingly organizing on these platforms, so this is becoming a substantial problem in how we understand the growth of far-right extremism worldwide. So we know these two things are happening. One of the problems is that we don’t fully understand why this is happening, but we have some nascent ideas of what’s going on. First, we have some initial psychological studies suggesting that video games might have a uniquely strong impact on certain psychological vulnerabilities to radicalization, relative to other types of social media. The reason we think this is happening is that games can invert the way that social relationships are created. If you think about it, in a multiplayer game, for instance, you’re put into a situation where you have to trust your teammate before you know them.
So you end up having this sort of implicit trust relationship created before you have any other type of relationship built on top of it. And that’s inverted from the normal order: you meet someone, you get to know them, and then you trust them later. What we found statistically is that playing games more often throughout a week, so increasing the number of hours played per week, can have an impact on a bunch of different vulnerability indicators for radicalization, including expressions of authoritarianism, white nationalism, and Machiavellianism, and something called identity fusion, which is basically subsuming your own individual identity into the group, so that you fully associate yourself with the group you’re interacting with. This is really important to note, and it’s also important to note that the relationship we’re looking at here is the amount of time someone is playing games, not the type of game, but the amount of time they’re engaging with it. Unfortunately, we still don’t know a lot. This research is still in its very nascent stages, and we’re trying to get a better handle on it. We do not yet know how widespread these trends are, how this exploitation differs between different types of games, whether extremist propaganda that uses game aesthetics is actually effective in the extremist sense, or what types of mitigations game developers can implement. But we’re working on all of these things, and we hope to have some good numbers and good conclusions soon. And then finally-

 

[Dr. Brian Hughes]: Alex, I’m going to have to cut in there, in order to keep this conversation on time with the audience Q&A at the end.

 

[Alex Newhouse]: Perfect.

 

[Dr. Brian Hughes]: We definitely want to hear more about this as we continue on here. But I wanted to ask a question that could steer this toward the issue of youth engagement specifically, because that will help segue us into Jennie’s discussion. You showed the propaganda that’s being spread on Steam and on Roblox, and Roblox is a game specifically for children, if I’m not mistaken. Can you talk a little bit about how recruitment and propagandizing are directly and intentionally targeted at young people using these games? What platforms should parents be concerned about? How does this process work? Just any insight you can give to the parents or educators in the audience.

 

[Alex Newhouse]: Yeah, absolutely. That’s a great question. The first bit of context here is that the extremists that I focus on, these types of highly militant white supremacists, are often adolescents themselves. The people creating the propaganda are likely between the ages of 13 and 21. The terrorist organization that I mentioned was posting on Steam was, as we’ve tracked, founded by a 13-year-old. So this is already being carried out by adolescents, and thus the types of propaganda and the platforms that they choose are already geared toward adolescent audiences. We know, for instance, that Roblox is a big target for this type of radicalization; they use popular game modes in Roblox to do this type of recruitment and radicalization. We know, for instance, that Fortnite is used to bond people together within extremist organizations: they will talk in Discord and say, “Let’s go hop into a Fortnite game,” and they’ll go do that. And I’ve seen extremists playing Among Us, Minecraft, all of these different games that are highly popular at the adolescent stage. Those are the ones that extremists are targeting, because they are the ones that adolescents use. It’s really as simple as that: the same trends that are seen among adolescents generally are the ones that extremists focus on.

 

[Dr. Brian Hughes]: That’s a fascinating point that I think is worth underlining. From what you are saying, it sounds like this isn’t a question of some 40-year-old lifetime neo-Nazi movement guy saying, “I’m going to learn how to play Roblox, I’m going to learn how to play Minecraft, so I can get in here and exploit some children and recruit them and radicalize them.” It’s adolescents themselves who are out on these platforms acting as the propagators of extremist ideology and this sort of violent ideation. I think that is a very worthwhile point to bear in mind. Thank you for that. I could talk to you about video games all day, and we have sometimes. But we are going to move on to our final speaker, and after that we are going to open it up to a more free-wheeling conversation and question-and-answer, so audience, do get your questions in. Put them in the Q&A, and we will do our best to answer as many as we can. Before that, I am very happy to introduce our final speaker, Jennie King. Jennie King is the head of climate research and policy at the Institute for Strategic Dialogue. Until January, she also served as head of education and civic action, co-authoring the curriculum “Be Internet Citizens.” She currently advises the “Making Sense of Media” monitoring and evaluation working group within Ofcom, the UK regulator with devolved responsibility for actioning the government’s national media literacy agenda. So, Jennie, please tell us a little bit about how caregivers and educators can do better at teaching youth about these problems.

 

[Jennie King]: Well, thank you for having me. I think the first thing I’d like to do is for everyone just to take a breath, because that was a lot of very overwhelming information about the toxicity on the Internet and the diversity of online harms that not only young people, but anyone who wants to interact with digital platforms, may well encounter or be exposed to. From my experience of working both with formal education systems, with regulators and governments, and also with, you know, everyday people, people who have kids, people who work with younger demographics, there is this sense that they themselves lack the confidence, or the connective tissue of a shared vocabulary, for talking to their own children or to their students about these topics. The kinds of spaces that these young people are occupying may be entirely alien to their own experience, and as a result, they feel like they lack the tools in their armory to engage in a good-faith way, in a way that feels relevant and contemporary, and that, as a result, they can’t support safer, more inclusive experiences online for the young people they most care about. What I want to emphasize from the start is that I don’t believe that’s the case. And I also don’t believe that you need the level of granular expertise that has been incredible to see today in the other speakers in order to provide a constructive sounding board for young people, helping them to navigate whichever online communities they’re a part of in a safer way. So that’s the first thing I want to say: please don’t despair. Of course, there are trends that reach a critical mass of exposure, such that you might feel it is worth understanding more about them. At the moment, extreme misogyny and the manosphere, as it’s called, or incel ideology, and how that kind of rhetoric is becoming more and more prevalent, particularly in classrooms and among young men, might be something where you think, okay, I would like to understand more about that; I want to raise my literacy. Equally, you know, when conspiracies like QAnon or the New World Order start to penetrate mainstream media and get that level of oxygen, at that point you might think, okay, I need to understand a little bit more about the specifics of this if I’m going to engage in good faith with my child or with my students. But what I would say in general is: you’re never going to be able to keep up with the evolving landscape online, and if you’re constantly holding yourself to that standard, you’re going to feel eternally on the back foot. A few years ago, TikTok didn’t exist. Two years ago, you know, deepfake audio and really sophisticated deepfake imagery were not endemic online. Six months ago, ChatGPT didn’t exist, and the Bold Glamour filter on TikTok had not been launched. So it’s very difficult to constantly anticipate the new trends. Instead, I would really encourage people to focus on the core principles and competencies that help young people, or any people, to engage with information and to reflect on their own experience as consumers of content, and on the way they engage with that content, in the safest and most constructive way, rather than on the specifics of whichever trend is, you know, generating crisis or panic amongst the general public at a given time. So we have a theory of change for the work that we’ve done, in particular the Be Internet Citizens curriculum.
And I guess it takes you on a journey from basic e-safety through to what we call digital citizenship. For everybody on this call, it’s really a case of where you feel you’re most able to engage, while also recognizing that all three of these things are important. At the very basic level, you have: what are the tools, or what is the lens, that we can help young people to apply when they come across anything online, whether it’s hateful content, legal-but-harmful content, disinformative content, you know, all the way through to the more extreme and radicalizing end of the spectrum? But beyond that, you don’t want it to be just a passive exercise. It’s also about creating environments, and about the role that each individual plays in structuring the online communities they’re a part of, and the fact that, by being a consumer of content, you are more often than not also a producer of content, in what you repost and what you tweet and the comments that you add to other people’s videos. All of that contributes to the energy of a particular space, and also to its level of inclusivity and the nature of the discourse that takes place in that particular environment. So, you know, when we talk about digital citizenship, what we’re really encouraging is a conversation around what it means to be a good member of society in environments where you may never meet the people that you’re engaging with face to face.

And that sounds like a kind of self-explanatory idea. But when we think about citizenship offline, we have very specific tenets of social behavior that are built into our way of thinking. So being a good citizen might mean that you pay your taxes, or you vote in elections, or you pick up litter in your local area, or you are friendly with your neighbors. But we don’t really have a parallel framework or rubric for thinking about what it means to be a member of society when that society takes place entirely in this kind of mediated online format. So, you know, aside from anything else, one of the main things that we encourage parents and educators and practitioners to do is to have that conversation. What kind of web do you want to be a part of as a young person? What is your manifesto for the Internet? And how do you as an individual contribute to building that kind of environment, both in the way that you police your peers’ behavior and in the way that you police your own behavior? I’m not going to delve into this diagram. It’s mainly a nice visual aid to have in the background since, you know, it’s got colors on it. But the reason I wanted to show it is to emphasize the point that it’s not always about things that you can teach; there are also huge elements of this that have to do with how young people feel, what they are encouraged to do, and how they feel part of a cultural moment. So, you know, there has been more and more attention paid in recent years to creating digital literacy or media literacy curricula that can be delivered through formal education environments. And that is absolutely essential; it’s really important that we integrate that into our systems. But it’s not the end of the journey. You could spend an entire year teaching young people about what mis- and disinformation mean, what the motivation is behind that content, you know, radicalization pathways, all of these things. And it doesn’t mean that the next time they log on, they’re actually going to change their behavior. And so another thing that I think people can do really easily is to try and highlight positive examples, both in their own lives and in broader culture, that make young people feel like, okay, there is an incentive for me to think about these issues, to engage with them proactively. Because more often than not, what we’re seeing play out in the mainstream media is more of the toxicity, and it invites the finger-pointing: look, the adults are just as bad. The adults are all falling prey to fake news. The adults are driving us-versus-them narratives. The adults are leaning in to the most polarizing and divisive forms of rhetoric. So why are you then coming to us and preaching about the fact that we need to be aware of hate speech or, you know, we need to be resistant to extremist ideologies?

There’s a kind of fundamental disconnect there. So there’s all of these things about encouraging young people to find ways that they can enact the things that they care about and believe in in the digital space, in the same way that they might do by volunteering offline or, you know, contributing to a cause. So I just wanted to lay out what we’ve highlighted in this diagram, which is the distinction between things that can be taught, things that can be facilitated, and things that can be encouraged, and the fact that all of them have equal weight in really tackling this at a systemic level. The next thing is, most of the speakers on the call so far have spoken about extremism, and extremism is to an extent the most downstream part of this problem, right? It’s what happens at the end of a journey. But in the curricula that we write, we’re really keen to emphasize that extremism doesn’t exist in a vacuum. Disinformation, hate, and extremism form a very complicated and interconnected web in the online space, and it’s very difficult to talk about one in a way that is isolated from the others. So if you don’t feel like you have the confidence or the understanding to really delve into the weeds of a specific extremist ideology, I would encourage thinking much more about the upstream elements of this, which is: what is the purpose of information online? Is it there to inspire? Is it there to teach? Is it there to create belonging and identity? Is it there to polarize? Is it there to generate advertising revenue? You know, thinking about the relationship between the individual and what they’re consuming, and then how a lot of these extremist ideologies are based in fundamental misunderstandings that are driven by algorithms, that are driven by online filter bubbles and echo chambers. And I’ve put up some of the key definitions that we use, showing that these are layers, right? You start out with bias or misinformation; over time that can drive people into these online echo chambers where their opinions are constantly being reinforced and there’s never any challenge; then you get identity fusion, as Alex was saying, where you’re subsumed into these kind of tribal communities that take on a sort of cultish sentiment. And over time that means that hate speech is not challenged, or you don’t even have a perception of what hate speech is, because everybody is echoing the same things. And then the final, I think, pro tip that I would give, and this is the next two slides, is rather than talking about these things in the abstract, or trying to, you know, sit down and have a conversation with your kids about ISIS, which isn’t necessarily going to feel particularly resonant to their everyday lives, instead encourage them to think about some of these principles as they relate to them. So, you know, us versus them: yes, we can talk about religion, we can talk about sexual politics, we can talk about nationality. But what about the things that are more likely to occur in their online experiences, like stan culture in music, or the tribal affiliations that they have to a sports team, or the way that, you know, political rhetoric is so divided? What are the kinds of things that they might come across when they’re on a streaming platform? And these are just some scenarios that we use: you know, sit down and ask them, if you were on a gaming platform and somebody posted a swastika, what would you do?
And we give these scenarios to parents and teachers, and some of them are absolutely horrified and say, well, shouldn’t they just go to the police?

But most of these things don’t cross the threshold for criminal prosecution, and they are absolutely routine, everyday experiences. So rather than being terrified of them, confront them head on: you have a friend who uses a racial slur; you’re on a platform and someone comments with a homophobic joke; someone posts a meme that mocks people with disabilities. What do you do? You walk through those scenarios with them, and be humble about how you might act in those particular encounters. Then the final thing that I wanted to say is: look for the positive role models. You know, I think Alex mentioned this by saying that the vast majority of people who engage with gaming platforms are getting something positive.

It’s contributing something constructive to their lives. And that’s true for the Internet as a whole. It’s very easy for us to focus on the worst and darkest pockets of it and go down the rabbit hole. But there are clearly huge benefits to being part of an online community and to having a personalized web. And so, understanding who the influencers are that your students or your children idolize online, and really unpacking that with them: Why do you like them? How do they communicate? How do they create spaces that are inclusive for people? How do they challenge ideas in ways that you find interesting? And sort of understanding who the people are that they see as gatekeepers or as trusted intermediaries, and which parts of that behavior they would like to model themselves on. That goes a long way toward instilling them with values, which will then produce resilience against some of the biggest online harms. And I think I’ll stop there.

 

[Dr. Brian Hughes]: Thank you so much, Jennie. I think that was a wonderful note to end on, in terms of the practical ways that parents, and really anyone in the community, can work toward positive change. And I think you brought up something that’s just essential, which is that radicalization and especially violence are the absolute worst-case outcomes of these dynamics. But these dynamics also bear a kind of chronic weight on society. And there’s really no easy distinction between the polarization, the hostility, the ambient misogyny, and other forms of supremacy that we live with in our societies, and the more acute expressions that we can easily identify as extremist or illegal or violent. So I’m actually going to open things up to a conversation, but, Jennie, I’ll give you the opportunity to respond first. I’d love to hear from everyone what the whole-of-community solution, or what one whole-of-community solution, might be to the problems that you specifically addressed in your presentation. I think we had some questions in the Q&A about legislation; very curious to hear about that. Maybe some stuff about moderation and tech policy. You know, I’ll just shut up and let you all answer the question. Jennie, why don’t you go first?

 

[Jennie King]: So one of the key principles that I think is important for intergenerational discussion around this is getting away from the concept that there are neatly delineated heroes and villains in this, particularly when it comes to the softer edge, the mis- and disinformation space, and the idea that there are stupid people who are susceptible to that kind of content and then there are those who are able to do critical thinking. As adults, I think it means coming to those conversations with humility and saying, “Oh, here’s a time that I fell prey to something that I thought was very legitimate and wasn’t. It wasn’t, you know, the worst example of, quote unquote, fake news, but it was just something that I didn’t bother to do lateral reading around, I didn’t check the sources on, and I accidentally shared it and, you know, provided oxygen to that idea. And I realize now that that was really unhelpful.” It’s important to have the awareness that even with things like conspiracy theories, which then lead into extremist paths, there are good reasons why average people are susceptible to those ideas. They often have to do with, you know, being in moments of social and economic crisis, being in moments where people feel acutely vulnerable themselves, where they are driven by fear or by a sense of trying to create patterns out of chaos. And, again, it’s not some shadowy group of others who are going to fall prey to that. We all have the capacity to go down that path if we come across enough of that kind of information, or if we’re in enough of a moment of personal crisis. So coming to all of these conversations from a position of both openness and, for want of a better word, empathy with why these ideas are appealing: I think that’s very important when adults and young people are trying to engage in the same discussion around these ideas.

 

[Dr. Brian Hughes]: Yeah. Empathy versus sympathy. I mean, it’s so essential to understand why people can find emotional and social gratification from this, and it’s not the same as excusing it. And Katie, Paul, Alex, I’d love to hear from you: what does a whole-of-society or whole-of-community approach to the problems you identified look like?

 

[Katie Paul]: I think one thing that’s worth highlighting, and it follows on what Jennie was saying, is that, first of all, kids are going to be the most well versed in the technology. A lot of kids now know that the algorithms drive things toward them, or know that seeing something online doesn’t mean it’s true. But you do have an older generation that’s not as familiar with that, and that generation is raising a generation of young people. And, you know, we do see, particularly on Facebook and these other platforms, that extremist content really creates filter bubbles of what people see. And when someone is arguing from what seems to be a parallel universe of truth, their digital world may be the universe that they’re fed. And so I think overall education matters. We’ve been very Western-centric on this, but it’s important to remember how dominant platforms, especially Meta’s platforms, are in the developing world. Making sure people have digital literacy and an understanding of algorithms, of how content is fed to you and pushed at you, can do a lot to dissuade people from buying into something just because it comes across their screen. That is something that seems to be missing every time I get on a group. You know, my research on the Boogaloo movement showed that they are a little more aware of this and have learned how to manipulate those algorithms, but also that they are less susceptible than older generations to, you know, the junk and scams that get pushed to them. We’ve even seen people offer up, you know, they say, my kid’s not on Facebook, but he wants to join your Three Percenter group. How can I connect him? And I think understanding the impact of these algorithms on our everyday lives is very important for the broader population in, you know, taking a first step toward fighting how these affect people’s views of the world.

 

[Dr. Brian Hughes]: Yeah, you know, I think that the effects of algorithms are a really great example of how these issues affect us as kind of chronic, low-to-mid-level social pathologies. You know, for the audience, I’ll take my privileges as the moderator here to recommend an excellent book by Max Fisher called The Chaos Machine, which just came out. It’s accessible reading; it’s not dense academic jargon. It’s about the ways that these platforms facilitate certain kinds of interaction that aren’t always positive. But Paul or Alex, I’d love to hear from either or both of you. What do you think?

 

[Paul Barrett]: Sure, well, I guess I want to offer something sobering to go along with what I think are very helpful exhortations for how we as a society should react to these problems. And the sobering fact is that in the United States, anyway, and I think this goes for the EU and some other parts of the world as well, government is not going to solve this problem for us. There should not be a fantasy that, you know, once the Republicans and Democrats figure out how to behave better in Congress, we’ll just march through a governmental response to all of this. A lot of, not all, but a lot of the harmful content we’ve been describing is not illegal, at least in the United States. It is illegal in some cases in some European countries, and it is illegal, I guess, in some other countries in other parts of the world. But that’s crucial to emphasize. The First Amendment protects free expression very broadly. The EU doesn’t have a First Amendment per se, but the value of free speech is still quite high in the EU as well. So they’re doing more in a regulatory sense. People should look up the Digital Services Act, which is a very solid first step in the direction of mandating transparency on the part of platforms, internal risk analysis, and so forth. The U.S. would be well advised to move in a similar direction. But government regulation will be at its most effective when it creates incentives for better self-regulation, however discouraged people might be now by the inadequacy of the self-regulation the platforms have exercised. In the end, it is really these platforms that are in a position to limit the kind of damage that we’re talking about, both because of constitutional constraints and because of technological constraints. The government, you know, unless it shuts down the platforms, is not going to be able to monitor the flow of billions of posts a day. It just ain’t happening. A final sobering thought is that we should also be talking about the next problems, because they’re already here. We’re talking a lot about Facebook, and we should be; Facebook is huge and has had a huge impact on societies around the world. But Facebook has kind of topped out, and, you know, it may well be that two-dimensional social media has kind of plateaued. The next steps are going to be things like what Alex is talking about as we turn our attention to gaming, which is a really important issue that our center is going to publish a report on in the coming months. But there’s also going to be 3-D, you know, immersive social media; unless Mark Zuckerberg and others are completely wrong, we’re going to grapple with the metaverse. And of course, in the last few months, we’ve forgotten about the metaverse and everyone is excited about generative AI, which is going to present yet more new problems for all of this. And it’s not something that can really be resisted. So the conversation is even more complicated than it seems if you just take a snapshot of what we have today.

[Dr. Brian Hughes]: Yeah, yeah, certainly. Alex, over to you. Do you have any thoughts on this question of whole-of-society, whole-of-community solutions?

 

[Alex Newhouse]: Yeah, I have a couple quick things to add. So the first is that my sort of magic wand, what I would change if I had all the power in the world, would be to just fund after-school activities, rec centers, libraries, et cetera. I think we often discount just the shared social impact of kids having things to do; there are studies from Iceland, for instance, where that kind of investment massively reduced drug addiction rates. We know that COVID-19 was a mass radicalization event specifically because it isolated people and made them alienated from each other. So the thing I would encourage everyone on this call to do is just think about that. Encourage your local community, vote for state policies that fund the rec centers, that fund after-school activities, that generally create those sorts of third spaces that give kids opportunities to be kids, to interact with one another, and to get a bigger diversity of experience. But that might also mean an expansion of the diversity of experiences in the digital world as well. This isn’t necessarily just getting someone off the computer; it’s getting someone off of a hyper-fixation on a single community on the computer. The second thing I’ll say is that this is going to require a recognition from all of us, from the community as a whole, that the youth will always be much more intimately familiar with the technology than any of us are. Like, I’m 28 and I’ve already been outstripped by, like, ten magnitudes by the knowledge that current adolescents and young adults have about the new social media that is coming about. And recognizing that is incredibly important, because what it means is that you can switch intervention and education, the sort of capacity building that adults are giving to adolescents, from “here’s how to use the social platform” to “here are the behaviors you might encounter on these social platforms.” Because the one place where adults do have more experience is with the sort of diversity of human behaviors, how exploitation looks, those kinds of things. That is really what the education needs to be focused on, in my opinion.

 

[Dr. Brian Hughes]: Yeah, you know, well, our time is getting short here. But I think there’s a very interesting point that’s been made over and again about youth being more technologically savvy than adults. And I want to throw out a potentially complicating question to that, because, you know, in my own research, and this is just anecdotal, but in my experience, yes, youth are much more savvy consumers and users of digital technology. But there seems to be a little bit less understanding that these platforms are artificial spaces that facilitate some modes of behavior and discourage other modes of behavior, that make some ways of interacting easy and make other modes of interacting impossible. And so I just wonder, and we can talk about this or we can just let it sit there, but is that understanding something that comes with age, or is this a feature of the way that consumer technology functions nowadays, where there’s this increasing tendency to disguise the ways that these spaces are artificial and the ways that engineering and design choices alter how we interact socially on them? I’m not sure. As I said, I only have anecdotal data, but it’s an interesting thing to observe. And then maybe just one final question before I pass it back to Kris. You know, Alex mentioned this, but I’d like to hear from everyone, maybe just very quickly: can we just be on the Internet less? Is that an unrealistic goal? Is a social movement to spend less time online something that’s even worth discussing? Would it make these issues better even if we could pull it off?

 

[Jennie King]: I think it’s important to recognize that there are a number of people who can’t cultivate connection of that kind in their lived environment, for a number of different reasons. Perhaps they have an identity that isn’t going to be supported by their family or by their school, and they are only able to find belonging and connection in the online space. Maybe they’re very isolated geographically, and being part of online communities gives them exposure to a transnational community of interest, a community of feeling, and really broadens their horizons in a way that their resources or their socioeconomic means would never otherwise enable. So I don’t want to create a complete dichotomy between online lives and offline lives, as though only seeing people face to face is a meaningful form of connection. But what I do think we’ve very much lost in public life are the community centers that were formerly the pillars of your lived environment and that have been completely supplanted or commercialized. So, you know, now lots of people don’t go to religious institutions, they don’t have social clubs or centers, they’re not involved in after-school programs, so they don’t have the kind of interfacing and social skills that can then be complemented or enhanced by the online world. So I don’t think it’s an either-or, but I think we have to reengage with the people that are around us and find ways to apply basic contact theory, you know, getting people from different profiles and walks of life to cohabit, or even to be involved in programs together where diversity is the byproduct but not the ultimate goal. You know, sports clubs that happen to have mixed sets of kids all doing stuff together. That is an essential complement to the online world. But I don’t think, you know, just saying less time online is going to solve the problem unless you’re finding an alternative for people in their real lives.

 

[Dr. Brian Hughes]: Excellent. Thank you, thank you for that. Katie, I think your hand was up first.

 

[Katie Paul]: You know, I think in terms of the social media aspect, we are seeing some young people kind of pushing back on the idea of social media; now it’s cool to not be on it. But at the same time, every other aspect of our lives is dominated by technology. Your medical records are digital now. If you want to engage with a company that’s giving you bad customer service, you get on Twitter, because it’s more responsive than their customer service line. Social media is required in some parts of the world. For instance, I was recently in Egypt, and most companies don’t actually have websites; they use Facebook pages in lieu of paying for a domain. So there’s infrastructure and economy in large parts of the world that require the use of social media, particularly in the post-pandemic world, and that’s probably not going anywhere anytime soon. So that need for these companies, the way that we live and operate, paired with what Jennie was just discussing, means that it’s unlikely that we will be less online. I mean, even right now we have companies like Google pushing Chromebooks and YouTube accounts to teachers in schools, getting kids online more at a young age. So we have to balance the fact that this technology is going to be part of our lives; it’s here to stay whether or not we like it. And so it’s a matter of being as informed as possible about how that technology works and how it affects us. We have a tool on our website, a Google detector, that lets you see if you can get Google out of your life, and in large part you kind of can’t, because the majority of what you see, even if you visit a website that has nothing to do with social media, is going to involve Google, because you’ve been tracked all over the Internet. So I think it’s important to remember the necessity of technology in the current era and balance that with how we use that technology and what our knowledge of its function is.

 

[Dr. Brian Hughes]: Thank you, Katie. Yes, no, I completely agree with everything that’s been said so far. Paul, I think you have the last word on this question before we hand it back to Kris.

 

[Paul Barrett]: Well, yeah, I mean, I completely agree too, and maybe even passionately agree, with the notion of the inevitability of technology at this point, and the fact that we’re going to be dealing with new permutations and problems that are not evident yet. And yet, I don’t know why it never occurred to me to put it the way Alex put it: that, you know, what we need is more bicycles. I mean, kids riding around aimlessly on bicycles to while away the hours, that’s a really good way to grow up. And, you know, maybe we should be pressuring the social media titans to sponsor not programs that keep kids online more, but bicycles for kids who can’t afford them. So I think that’s what parents should be doing: finding good, constructive alternatives to their kids being glued constantly to the screen.

 

[Dr. Brian Hughes]: I think that’s an excellent place to leave it. Thank you all so much for all of your expertise and all of your insight. Jennie, Katie, Paul, Alex, so grateful for your presence here today. Now I’m going to pass it back to Kris, who is going to wrap things up for us. But again, thank you so much to the audience, and thank you so much to our speakers. It was wonderful getting a chance to talk with you all.

 

[Kris Perry]: Thank you so much, Brian. And I echo your thanks to Katie, Paul, Alex, and Jennie for participating in the panel today, for sharing this unbelievably timely and important information and these recommendations with all of us, and for helping to translate the research for parents. Thank you also to our Zoom audience for joining us. Once again, the recording will be available on YouTube soon, where you can review today’s content and share it with others who might be interested. As you leave, you will be prompted to complete a feedback survey; please take a few minutes to share your thoughts and ideas about today’s workshop and make suggestions for future events. To learn more about this and other topics related to child development and digital media, check out our website at childrenandscreens.com, follow us on the platforms, and subscribe to our YouTube channel. We hope you’ll join us again for our next webinar on April 11th, The Social Brain on Screens, which will break down the social cognitions that help us better understand ourselves, others, and the relationships between us, as well as the impacts of digital media throughout child development. Thank you.