
As 2025 drew to a close, it became increasingly clear how deeply children’s well-being is shaped by political and regulatory forces beyond their control. A record-breaking federal government shutdown stalled child online safety legislation while disrupting the lives of millions of families who depend on federal programs. In its wake, Congress moved with unusual speed to advance a sweeping but controversial package of online safety bills, sparking sharp debate over scope, constitutionality, and the preemption of state protections. Senators reintroduced a bill aimed at expanding platform transparency and researcher access to data. At the same time, tensions between federal and state authority over AI regulation intensified, even as states and international regulators continued to press forward with their own child-focused digital safeguards. Together, these developments reveal a rapidly shifting policy landscape in which decisions made in Washington, state capitols, and abroad are converging to shape the digital environments children inhabit every day.

The Government Shutdown

The longest government shutdown in U.S. history ended on November 12, 2025, after 43 days. During the shutdown, the Senate continued to hold hearings, but the House adjourned as lawmakers struggled to reach a deal. The shutdown not only delayed all child online safety legislation; it also affected the millions of children whose parents were furloughed or fired, and any child relying on a federally funded program, including the more than 16 million children whose SNAP benefits were disrupted. The entire ordeal is a stark reminder of the ecosystem children grow up in but have little control over.

The House Tackles Online Safety

After the shutdown ended, Congress intensified its digital governance efforts, particularly regarding child online safety. On November 18, the House Subcommittee on Oversight & Investigations held a hearing on the risks and benefits of AI chatbots. Witnesses unanimously agreed on the need for transparency across the board, advocated for progress with integrity that prioritizes safeguards, and called for an honest conversation about the strengths, weaknesses, and gray areas of chatbots.

That hearing was followed by a December 2 legislative hearing by the Commerce, Manufacturing, and Trade (CMT) subcommittee. Witnesses responded to a package of 19 draft kids online safety bills, including the House versions of the Kids Online Safety Act (KOSA), the Children and Teens’ Online Privacy Protection Act (COPPA 2.0), and Sammy’s Law. 

The House drafts were often narrower in scope than their Senate counterparts. Republican leadership positioned this as a multipronged approach to child online safety, with each bill addressing unique harms and risks, creating a complementary suite of bills rather than a few comprehensive bills that overlap and may even compete with one another. They also argued the draft package was designed to withstand constitutional challenges, stressing that a law is useless if it is struck down in court. Democratic lawmakers criticized this approach as defanging many of the more comprehensive bills, like KOSA and COPPA 2.0. Core criticisms centered on limits to KOSA’s duty of care provisions (no longer called a “duty of care” in the House draft) and the removal of COPPA’s constructive knowledge standard, which would require companies to apply COPPA protections if they had enough information to infer that a user is underage. One of the biggest flashpoints, however, was the vast preemption provisions in the House package, which risk nullifying new and longstanding state laws protecting children’s data and regulating technology and digital media companies.

Just nine days later, 18 of the 19 draft bills were formally introduced and debated, among them KOSA and COPPA 2.0. The bills were introduced unchanged from their draft language, reigniting the same criticisms raised at the earlier legislative hearing. All of the bills advanced to the full Energy and Commerce Committee, though some only along party lines. The markup had been promised back in June, but that alone cannot account for how rapidly the bills moved once the drafts were released. Eighteen bills in a single package is far more than usual, let alone advancing them from drafts to official legislation so quickly; moving them forward unchanged despite this degree of pushback suggests leadership did substantial work behind the scenes, likely promising more extensive revisions at full committee markup in exchange for the necessary votes. The rush implies leadership wanted to advance the package as far as possible before the December recess, probably so the break could be used to negotiate those revisions.

Transparency and Accountability Have Another Go in the Senate

Just this month, Senators Coons and Cassidy reintroduced the Platform Accountability and Transparency Act (PATA). PATA stands out in a field of digital governance bills that aim to restrict user access or govern platform practices. Instead, the legislation would require platforms to provide qualified, independent researchers with access to critical data, subject to strict privacy and security safeguards; protect researchers acting in good faith; establish meaningful reporting requirements; and mandate public disclosures to help users make informed decisions. Although PATA is fairly noncontroversial, it has been introduced in every Congress since 2021 and has repeatedly stalled, often overshadowed by bills deemed higher priority and slowed by ongoing disagreements over how data access and research oversight should be structured. Read our statement on PATA here.

AI Regulations and State Laws

On the same day as the House markup, the President signed an executive order intended to challenge and deter state laws regulating AI and calling for a national standard for AI regulation. The order is just the latest step in the Trump administration’s push to replace disparate state AI laws with a national standard, which began back in May when the House included a moratorium on state AI laws; the moratorium was subsequently defeated in the Senate.

Through the remainder of fall and the beginning of winter, state governments slowed their child online safety efforts. All were dealing with the ramifications of the government shutdown, but many had already concluded a flurry of legislation and other policy activity, some of which we detailed in our October issue. New York made headlines when the Attorney General released her proposed rules for the SAFE for Kids Act, which was signed into law in 2024. The first-of-its-kind law restricts the use of personalized feeds and nighttime notifications for minors, and the rules will determine how companies must comply with these requirements, including age-verification and parental-consent methods. Children and Screens submitted comments, co-signed by ParentsTogether and the Center for Countering Digital Hate.

Digital Governance Abroad

Internationally, regulators have been testing their authority under recently implemented laws. The European Commission published age-verification guidelines to inform compliance with the Digital Services Act (DSA) and released the preliminary findings of an investigation into Meta and TikTok for violations of the DSA’s researcher data access requirements. Social media platforms are preparing for the implementation of Australia’s social media ban for people under 16, which goes into effect December 10, and Norway intends to raise its minimum age for social media to 15. These moves underscore a broader debate in the EU over age limits on social media.
