Source: $375 Million. One verdict. Are years of reckoning finally arriving? by Kirra Pendergast
From New Mexico to global regulation, expectations of platform responsibility are shifting toward design, risk and prevention.
Written by Anna Hayes & Kirra Pendergast
Seven and a half hours.
That is all it took for twelve ordinary people to see through what thirty years of legal abstraction, lobbying, child online safety spin, reputation management, and strategic delay had worked furiously to keep hidden behind a fog. Even more damning is that the fog was not maintained by platforms and policymakers alone, but by a wider culture of reassurance that has too often translated a structural crisis into the comforting language of parental controls and digital resilience. Too often, what is sold as education is little more than industry-friendly sedation for adults while children carry the risk.
There is a strange feeling that falls over a courtroom when a jury returns after a short deliberation. And in the case we are writing about, seven and a half hours is short enough to cast confusion, anxiety, and the unease of a door closing on something that can no longer be argued away.
New Mexico just closed that door on Meta.
The jury found the company liable for misleading users about the safety of its platforms and for endangering children. They imposed the maximum penalty available: US$5,000 per violation, totalling US$375 million. And then they went home, presumably to their own children, their own families, their own digital lives, having done in seven and a half hours what regulators, legislators, and advocates had been trying to do for the better part of two decades. Seven and a half hours for twelve ordinary people to look at the evidence and reach a conclusion that the most powerful technology company in the world had spent years and extraordinary resources insisting was not possible to reach.
That they knew. That the machine knew. And that, knowing, it chose to do nothing. This is not a story about one long-overdue verdict. It would be convenient for Meta, for the industry, for anyone who prefers the comfort of exceptions to frame this as an outlier. One ambitious state attorney general. One rogue jury. One newsworthy moment. But that framing is a lie.
This verdict has arrived in the middle of a Los Angeles trial where Meta and YouTube stand accused of intentionally designing addictive features that harmed a young woman’s mental health. That case is one of three bellwether trials, test cases selected specifically because their outcomes will shape the trajectory of thousands of lawsuits still working their way through the American legal system. Thousands. Not dozens. Thousands of families. Thousands of children. Thousands of harms alleged, documented, and filed. The New Mexico case was built on a foundation that the tech industry has spent decades trying to make invisible: the fact that the damage done to children online is not accidental. It is, in critical and documented ways, a foreseeable output of deliberate design.
The state’s attorneys put the product itself on trial. Infinite scroll. Autoplay video. Encrypted messaging. Features presented to the public as innovations, as user empowerment, connection, joy, love, and the inevitable march of progress, finally challenged in a court of law as instruments of harm, built into platforms that executives knew were being used to exploit children at scale. Meanwhile the man behind every crack in US law sails the globe on a yacht the size of a small island. All at the expense of the at least 100,000 children who, by internal estimates shown to the jury, received sexually abusive content from adults on Facebook and Instagram every single day.
Not every month. Every day. A number so vast and so specific that it collapses the word “accidental” entirely. You cannot stumble into a harm you have quantified.
Meta’s own head of content policy, Monika Bickert, watching the rollout of encrypted messaging, said — in writing, on the record — “We are about to do a bad thing as a company. This is so irresponsible.” She warned that encryption would blind the platform to child exploitation, to terrorist planning, to the very categories of harm that the company was publicly promising to fight. Meta proceeded anyway.
That is not negligence in the ordinary sense. It is the decision, made with knowledge and written documentation, to accept a foreseeable harm in pursuit of a strategic objective. The EU’s Digital Services Act, in its assessment of systemic risk to children, is unambiguous that platforms must consider how design choices, including algorithmic systems, interface structures, and recommender engines, can exploit the vulnerabilities of minors. The UK’s Age Appropriate Design Code says the same. Australia’s Online Safety Act places the same obligation at the door of the service provider, not the user, not the child, not the parent reaching for their phone at midnight wondering where their teenager has gone.
The regulatory architecture of the developed world has been moving, steadily and with increasing urgency, toward a single conclusion. That conclusion is this: if you build a system, you are responsible for what the system does. Not what you intended. Not what you advertised. What it does.
New Mexico’s jury just wrote that conclusion into a US$375 million cheque. Not nearly enough, but it is a start. Does this mean the Section 230 shield is finally cracking? We live in hope! For twenty-nine years, Section 230 of the Communications Decency Act has been the invisible wall behind which American tech companies have sheltered. The law, designed in a moment when the internet was young and the stakes were theoretically modest, says that platforms are not publishers and cannot be held liable for what their users post.
It was, in its time, a reasonable idea to protect a fledgling spoke in the US economic wheel. Now it is something else entirely. The New Mexico judge rejected Meta’s Section 230 defence. Not on the question of content, but on the question of conduct. The state was not suing Meta for what users posted. It was suing for what Meta built, what Meta chose, what Meta knew, and what Meta continued to deploy after the knowledge was internally documented and internally ignored. That means the legal question is no longer whether a platform hosts harmful content. It is whether the platform’s systems (its algorithms, its design, its defaults, its business model) created the conditions in which harm was not just possible but predictable.
This is precisely the analytical framework that regulators across the globe have been building toward. The DSA’s systemic risk assessment obligations. The UK Online Safety Act’s duty of care. Australia’s Safety by Design principles. These frameworks all ask the same foundational question that the New Mexico jury asked in seven and a half hours. Did you know? Did you design it anyway? Did you profit while it happened?
Section 230 was never designed to answer those questions in the platform’s favour. It was designed for a different internet, with different stakes, and different children. Children who, in 1996, were not spending six, eight, ten hours a day inside algorithmically optimised systems built to hold their attention at any developmental cost.
There is a phrase that recurs in corporate crisis communications: “We take the safety of our users, especially young people, very seriously.” The phrase positions the company as a force for good responding to challenges it did not anticipate and does not welcome. It casts harm as something that arrived from outside: from bad actors, from an unruly internet, from the inherent complexity of scale.
Meta’s spokesperson, when the New Mexico lawsuit was first filed in 2023, said the company used “sophisticated technology,” hired “child safety experts,” and worked with law enforcement “to help root out predators.” That statement now sits in the trial record alongside the internal estimate of 100,000 children harmed daily, alongside the content policy chief’s warning that the company was “about to do a bad thing,” alongside the evidence from the attorney general’s own undercover investigation in which accounts posing as children under 14 were quickly sent sexually explicit material and contacted by adults seeking more. The gap between public statement and private knowledge is not a gap. It is a canyon. And the New Mexico verdict has just illuminated every inch of it.
This matters beyond Meta. It matters for every platform that is currently reviewing its legal exposure, calibrating its public language, and deciding whether to invest meaningfully in child safety or continue to invest in the appearance of child safety while managing the litigation risk on the other side of the spreadsheet. The cost of that calculation has just changed. Dramatically.
New Mexico did not happen in isolation. It happened at the precise moment when the international regulatory architecture for child online safety has reached a kind of critical mass.
Australia has passed a Social Media Age Delay law, among the strictest in the world in restricting platform access for children under 16. The EU’s Digital Services Act now requires very large online platforms to conduct systemic risk assessments that explicitly cover risks to children, including addictive design, algorithmic harm, and the exploitation of minors’ inexperience. The UK’s Online Safety Act imposes a duty of care that is enforceable, auditable, and backed by the power to fine companies up to 10% of global annual turnover. Ofcom is not waiting.
Across the Middle East, South Asia, and Latin America, child-focused digital regulation is accelerating. The UAE is developing a child-centred model that distributes responsibility across platforms, service providers, and caregivers. Brazil’s LGPD contains specific child protections. South Africa’s POPIA treats children as a special category deserving heightened safeguards.
The direction of travel is unmistakable. The speed is accelerating.
Big Tech has operated, for thirty years, under a different standard to every other product on earth. Not because the harm was smaller; in many cases it was vastly larger and more pervasive. But because the harm was mediated by screens and networks, because Section 230 provided legal cover, because the platforms were culturally new and the law moved slowly, and because the children most affected were, in the language of engagement metrics, also the users most valuable to retain.
That era is ending. The New Mexico verdict is one marker of its end. The European enforcement actions are another. The Australian legislation is another. The thousands of lawsuits in American courts are another. The senators who have testified about algorithms connecting children with predators are another.
The Meta machine knew. For years, in documented memos and risk assessments and internal research reports, the machine knew. And the question that every regulator, every lawyer, every board member, every trust and safety professional now has to answer is not whether platforms caused harm to children. It is what comes next, now that the knowing is a matter of record.
The path from here — for platforms, for regulators, for the compliance professionals and trust and safety teams and governance architects who do this work — runs directly through three obligations that are no longer optional.
Safety by Design must be structural, not cosmetic. The era of safety teams as reputational insurance is over. Safety must be embedded in product architecture, in the design review process, in the pre-launch risk assessment, in the default settings that every child encounters before they ever choose anything themselves. The UK Age Appropriate Design Code made this a legal standard. The DSA made systemic risk assessment a legal requirement. And the New Mexico jury just made the consequences of cosmetic safety a US$375 million lesson.
Governance must be honest about what it knows. The most damning thing about the New Mexico trial was not what the company did. It was what it knew. Internal documentation of harm, internal warnings from senior employees, internal estimates of scale, all of it existing alongside public statements that said the opposite. That gap is a governance failure of the most fundamental kind. Boards must demand honest risk reporting. Executives must demand honest product safety assessments. And the systems that allow internal knowledge to be quarantined from public accountability must be treated, from this day forward, as the legal liability they have always been.
Children must be treated as rights holders, not engagement metrics.
Every major international framework, from the UN Convention on the Rights of the Child to the EU Charter of Fundamental Rights to the national laws of more than 190 countries, recognises children as rights holders with specific and elevated protections. Design choices that treat children’s developmental vulnerabilities as features to be exploited rather than risks to be mitigated are not just ethically wrong. They are, as the New Mexico jury confirmed, legally actionable.
The New Mexico verdict is a turning point in the global accountability arc for platform harm to children. For platforms, for regulators, for investors, and for the compliance and trust & safety professionals building the systems of the next decade, the moment to act was years ago. The second-best moment is now.