Policy Voices is a recurring newsletter feature that spotlights thought leaders, policymakers, and advocates whose work intersects with the mission of Children and Screens. Through short, curated Q&As, this section elevates informed voices helping to shape the policy environment around children’s digital well-being. These individuals bring deep expertise, practical insight, and real-world experience that enriches the broader conversation about how research, policy, and practice can better support children and families.

Thomas McBrien is an attorney, advocate, and policy expert focused on tech accountability, consumer protection, privacy, and online safety. He is Counsel at the Electronic Privacy Information Center (EPIC), where his work encompasses a broad range of issues such as online platform regulation, surveillance pricing, automated decision-making systems, and the Fourth Amendment. Tom’s work includes filing amicus briefs in landmark cases involving the intersection of consumer rights, the First Amendment, and Section 230; advising lawmakers on writing effective and constitutional laws; litigating on behalf of consumers; conducting research; and producing public education materials. Tom graduated from the New York University School of Law in 2021 and the University of Michigan in 2015.

Q: What got you involved in issues of platform governance?

Platform governance is an exciting area to work in because of all the hard questions it poses. Individuals are often caught between enormously powerful tech companies on one side and government actors on the other, neither of which is inherently incentivized to prioritize users’ speech rights, privacy rights, or safety. Advocates have the difficult but rewarding job of proposing legal and policy frameworks that balance the complicated tradeoffs among these values. Seeing—and experiencing—the harmful ways that companies have developed technology over the past couple of decades has felt like a betrayal of the promises these companies made and inspired me to get involved in this work. I have been shocked to read whistleblower reports about tech executives intentionally designing their platforms in ways they knew harmed users, especially kids. They deliberately borrowed from the gambling industry to boost engagement, all in the name of more attention, more data collection, and—ultimately—more profit, leaving more rewarding and enjoyable platform design frameworks by the wayside. But by the same token, some proposals to police this bad behavior can leave too much power and discretion to government actors, who have their own incentives that don’t always align with individuals’. Figuring out how to use the law to protect and empower users is a difficult but exciting task.

Q: Debates about regulating social media and AI often center on Section 230 of the Communications Decency Act. What is Section 230?

Section 230 is a law passed by Congress to protect users’ speech online. It does so by giving the companies that carry users’ speech a defense against lawsuits that, if successful, would force companies to either censor users or shut down. So, when a company faces a lawsuit of the kind Section 230 prohibits, it can get the case dismissed.

Congress passed Section 230 in 1996 to incentivize online platforms to moderate content and to protect the online speech environment. Congress was responding to how courts were applying existing media tort law principles to tech platforms such as online forums. Section 230 reads, “No provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider.” The legal details about how to apply this somewhat unclear language are complicated, so I would recommend reading EPIC’s amicus brief in an ongoing Ninth Circuit case that traces the history in detail: https://epic.org/wp-content/uploads/2025/07/Coalition-Amicus-Brief-CA-v.-Meta.pdf

For an abbreviated version of that story, Congress was responding to a combination of court decisions that created a situation called “the moderator’s dilemma.” In the moderator’s dilemma, if a company decided to moderate content (e.g., by removing pornography or hate speech), then it assumed legal responsibility for anything harmful that any of its users posted and that it did not remove, even if it was not aware of the harmful content. That means a company could be sued over something it could not realistically control: what any given user posted at any given time. Policing this adequately is impossible because so many people post so much content on these platforms, and legal doctrines such as defamation rely on very detailed analyses of the context, truthfulness, and other aspects of a statement. A company that did decide to moderate content would therefore either have to impose extremely strict moderation policies to steer users away from any controversial topics, which would harm speech online, or eventually just shut down. If a company instead declined to moderate content (thus leaving its website full of junk), it was legally responsible only for harmful content it actually knew about.

Congress wanted websites to moderate content and recognized that the moderator’s dilemma was a huge disincentive to doing so. It passed Section 230 to tell courts: “If a company decides to moderate content, don’t hold it automatically responsible for every harmful thing one of its users posts.”

Q: Why does Section 230 play such a central role in tech governance?

Section 230 plays a central and controversial role in tech governance for four main reasons. 

First, it does something very important: preventing lawsuits against online speech intermediaries (like social media websites) that would harm speech by imposing the moderator’s dilemma. 

Second, while Section 230’s scope is narrowly cabined to the moderator’s dilemma, it is powerful: no state laws or lawsuits that violate Section 230 are permitted, no matter the legal theory or cause of action. This allows companies to raise it in all sorts of surprising cases, such as arguing that they are immune from prosecution under the Clean Air Act for providing an online marketplace for devices that contravene environmental protections.

Third, Section 230’s language is slightly unclear because its drafters borrowed its language from media tort law, where definitions differ slightly but meaningfully from their everyday usage. Companies have seized on this ambiguity to attempt—sometimes successfully—to secure overly broad interpretations of the law in court that shield the companies from accountability without protecting users’ speech. 

Fourth, Section 230 has outsized importance because of a mythology that it’s the only thing protecting companies from weak or speech-endangering claims. Section 230’s focus is on claims that endanger speech by imposing the moderator’s dilemma. But there are other kinds of bad claims out there: ones that threaten companies’ or users’ speech rights in ways that don’t implicate the moderator’s dilemma, or ones that fault companies for designing their platforms in ways that don’t actually violate the law. Companies constantly try to use Section 230 to dispose of these claims because doing so would expand the law’s scope and shield them from accountability. Even many members of the public take the bait, forgetting that other legal mechanisms are more appropriate to dispose of those claims. 

For example, in the Supreme Court case Gonzalez v. Google, Google claimed it needed Section 230 to dispose of a weak legal claim alleging that it materially supported terrorists by deploying content recommender algorithms that served pro-ISIS videos. This started an enormous fight over whether Section 230 should have prohibited this claim (it shouldn’t have), with impassioned arguments that Section 230 was crucial because, otherwise, companies could not deploy recommendation algorithms and online information-sharing would crumble. But the Supreme Court simply, and reasonably, ruled that deploying a recommender algorithm didn’t constitute material support of terrorism and dismissed the case. Section 230 was unnecessary, but if the Court had ruled that content recommenders were per se immune under Section 230, we would not be able to hold companies accountable today for the harmful ways they have deployed these algorithms.

Q: In your view, how have courts interpreted Section 230 over time? Are you seeing meaningful shifts against social media companies, including the current California lawsuit? What stands out to you about how it is being argued in court?

Two things have shifted as courts have interpreted Section 230 over time. First, courts have developed more precise and narrow interpretations of Section 230 as they have grappled with the statute’s history and language. We traced this evolution in our amicus brief mentioned above. Second, courts have had to consider how to apply the law in the face of the changing nature of online platforms. Simple online forums no longer predominate: now we have online marketplaces, social media websites, generative AI platforms, etc. This change has led to lawsuits that focus less on whether companies carry harmful speech and more on whether companies engage in harmful non-speech design practices, such as the California multi-district litigation about addictive platforms. I have a lot of confidence that these suits will largely survive Section 230 challenges and result in some real reforms.

Q: From your perspective, how has Section 230 influenced recent legislative efforts to regulate social media? Are there particular proposals that you believe meaningfully engage with or misunderstand the statute?

By and large, legislators seem to understand the limits that Section 230 places on regulatory power. Most of the bills we see avoid Section 230 issues by focusing on companies’ own harmful conduct, such as the way they design their services, not on forcing a duty on companies to monitor for and remove harmful user content.

Occasionally, we do see bills that attempt to directly regulate companies’ approaches to user speech in ways that are unwise. These come in two main forms. Some bills will use the language of design to really target content. Imagine a law that says, “A company shall not design its platform in a way that permits distribution of content about Topic X.” While written in terms of design, this type of law would still trigger Section 230 by forcing companies to monitor for and intervene when user speech relates to Topic X, re-creating the moderator’s dilemma.

Other bills propose a targeted rollback of Section 230 for content relating to a specific harmful topic, such as political violence. While such a rollback would be within Congress’s power, it would be unwise because it does not address the bad incentives posed by the moderator’s dilemma. It does not matter whether a company is liable for all harmful user speech or only for harmful user speech about specific topics: the vast quantities of speech that these platforms carry and the complicated, contextual nature of speech itself mean it is impossible for companies to catch every harmful instance of user-generated speech.

Q: How has Section 230 informed EPIC’s work and strategic priorities?

One of EPIC’s strategic priorities has been filing amicus briefs to advocate for a narrow interpretation of Section 230 and educating the public about the law. While Congress often talks a big game about reforming or repealing Section 230, so far that has mostly been bluster, so courts are one of the most important forums to nudge toward good interpretations and fight bad interpretations of the law as it is written. Federal appeals courts and state supreme courts have been the main arenas because the Supreme Court has consistently refused to take up cases involving the proper interpretation of the law.

Section 230 is also very influential in our policy advocacy. Lawmakers often come to EPIC asking us to help them write laws that do not violate Section 230. We provide them with advice in these situations and we ensure that the model bills we write and advocate for do not violate Section 230.

Q: Looking at the history of the statute, do you think Section 230 needs reform? If so, what kinds of changes would you support and why?

I think Section 230 needs clarification, or—to borrow a term from Yael Eisenstat at Cybersecurity for Democracy—“modernization,” more than it needs to be fundamentally rewritten. The moderator’s dilemma is not a thing of the past. Platforms still carry far more speech than they can moderate perfectly, and content moderation is still important. So the core of Section 230 is still important. But we’ve seen time and again how the law’s legal terminology leaves it open to misinterpretation in ways that harm users and society for tech companies’ benefit. So, my ideal would be amending the law to retain its protections while making its scope clearer to judges and society writ large.

That being said, there are large unanswered questions about whether technological advances have rendered some parts of the law obsolete. I don’t have all the answers, but the good news is that there is a large and growing community of legal, technological, and business experts who are trying in good faith to understand and apply Section 230 correctly, so I am optimistic about our ability to properly reform the law if the chance presents itself.
