Amid yet another federal funding fight, national child online safety legislation (and much of digital governance more broadly) has ground to a halt. Budget brinkmanship has once again consumed Congress, sidelining efforts to advance bills like the Kids Online Safety Act (KOSA) and COPPA 2.0 despite overwhelming bipartisan support. At the same time, momentum is shifting outside of Congress, as landmark social media litigation begins to test the limits of platform liability and expose what companies knew about the risks their products posed to young users. Together, these developments underscore a pivotal moment for kids’ online safety—one shaped as much by courtrooms as by Capitol Hill.
Why Has Kids’ Online Safety Stalled?
After a December House Subcommittee markup, kids’ online safety bills appeared to have a narrow path forward. The Senate was poised to pass a funding package in January, but the chances of that evaporated following a series of high-profile national incidents. After a brief government shutdown, Congress passed a budget funding DHS for two more weeks, keeping lawmakers preoccupied with yet another budget debate and pushing lower-priority legislation aside.
Meanwhile, other legislation continues to move, raising questions about why kids’ online safety bills have failed to advance. To understand why, we have to look back to 2024. The Kids Online Safety Act (KOSA) and the Children and Teens’ Online Privacy Protection Act (COPPA 2.0) passed the Senate in a 91–3 vote and advanced out of the House Energy and Commerce Committee to the full House of Representatives. However, Speaker Mike Johnson (R-LA) never brought the bills to the House floor for a vote. House Republicans have long expressed concerns about KOSA, citing potential infringements on First Amendment rights and risks of censorship—concerns that ultimately dragged down COPPA 2.0 alongside KOSA. Constitutional objections clearly remain for House leadership. During a December 2025 Commerce, Manufacturing, and Trade Subcommittee hearing that included debate over KOSA and COPPA 2.0, Republican members repeatedly emphasized that any solutions addressing kids’ online safety must withstand constitutional challenges.
Now, KOSA is stalled in the Senate, too. Both COPPA 2.0 and KOSA were reintroduced in the Senate Commerce, Science, and Transportation Committee, now chaired by Senator Ted Cruz (R-TX). KOSA’s Senate challenges date back to May 2025, during debate over the House budget reconciliation bill. That bill included a moratorium on enforcement of state AI regulations, which ultimately carried over into the Senate budget bill. We covered the rise and fall of the moratorium in our May and August issues. After an attempted compromise, Senator Marsha Blackburn (R-TN)—KOSA’s sponsor and co-author—eliminated the AI moratorium through an eleventh-hour amendment. As chair of the Commerce, Science, and Transportation Committee, Cruz controls which bills are debated before advancing to the full Senate. While COPPA 2.0 successfully advanced out of his committee, KOSA has not received a markup despite having more than 75 co-sponsors. COPPA 2.0’s fate now appears tied to KOSA’s, and Cruz has given no clear indication of whether or when KOSA will receive a vote.
Social Media Lawsuits
In a historic development, a major lawsuit against Snap, TikTok, Google, and Meta began in January. The lawsuit, filed by a plaintiff identified as K.G.M., is expected to set a precedent for similar cases. Matt Bergman, founding attorney of the Social Media Victims Law Center and a member of the Children and Screens Scientific Advisory Board, is representing approximately 1,200 plaintiffs, including K.G.M.
In the lawsuit, the plaintiff alleges that major social media companies intentionally designed addictive platforms that drove K.G.M. to compulsive use and contributed to her depression and anxiety. Snap and TikTok have already settled. The case carries significant implications for similar lawsuits and for platform liability more broadly.
Why this trial is important:
At the risk of oversimplifying, this trial matters for two key reasons: it sets a precedent for similar cases, and it makes internal social media company documents available to the public.
K.G.M.’s lawsuit is what’s known as a “bellwether” trial—a test case selected when many lawsuits involve the same companies and similar claims. More than 1,000 related cases are currently awaiting trial, and both individuals and state governments have previously attempted to sue social media companies. Historically, online platforms have benefited from protections under Section 230 of the Communications Decency Act, which shields intermediaries from liability for content created by others, with limited exceptions such as participation in federal crimes.
Courts have interpreted Section 230 broadly, extending protections to everything from individuals forwarding emails to comment sections on websites, and shielding social media companies from liability for legal content hosted on their platforms. As a result, Section 230 has been defended as essential to the existence of the modern internet, while also criticized as overly broad and as granting companies carte blanche. In recent years, however, courts have begun to narrow their interpretation of Section 230 protections. Whereas companies historically relied on Section 230 even for their design choices, cases since 2024 have started to chip away at its scope. The fact that K.G.M.’s case was allowed to proceed to trial is itself significant, signaling that Section 230 protections have limits. The outcome of this trial will help determine whether business decisions that harm users are shielded by Section 230 or fall outside its protections—either opening the door to a wave of lawsuits against social media platforms or leading to the dismissal of similar claims.
Regardless of the outcome, the trial is already having consequences. A trove of previously sealed documents has been released, and the Tech Oversight Project has reported on some of the most revealing disclosures. Based on a review of the available documents, several points emerge clearly:
- Recruiting teens to platforms was a major operational goal of social media companies, in part to secure lifetime or long-term users and profit.
- Employees and internal research raised concerns that platforms could be addictive.
- Employees and internal research questioned whether tools designed to help users control their usage or mitigate mental health harms were sufficient—or effective at all.
- Top-down decisions frequently overruled these concerns.
- The impact of platforms on mental health was a major internal concern and research focus.
- Based on the unsealed documents, companies focused more on adding new safety features or launching public education campaigns than on changing platform designs and features known to be risky.
We will continue to provide updates as the trial unfolds.