Attorneys representing victims of social media may have finally found Big Tech’s Achilles’ heel: product design.
Since the late 1990s, hundreds of lawsuits against social media companies (including Meta, Google/YouTube, TikTok, and Snap), alleging harms such as cyberbullying, self-harm and suicide, eating disorders, sexual exploitation, and injuries from viral challenges, have been dismissed at an early stage.
In most cases, plaintiffs argued that the platform failed to remove harmful content or allowed harmful communities to exist. Courts summarily dismissed these cases under Section 230 of the Communications Decency Act, which shields platforms from liability for third-party user content.
That pattern has begun to shift. In two recent bellwether cases, juries in New Mexico and Los Angeles, California, decided against Meta (parent company of Facebook and Instagram) and Google (parent company of YouTube) after attorneys for the plaintiffs successfully argued that deliberate platform design choices resulted in harm, a theory that targets the companies’ own conduct rather than third-party content.
In New Mexico, a jury found that Meta violated state consumer protection law by enabling child exploitation on Facebook, Instagram, and WhatsApp and by misleading consumers about the platforms’ safety, imposing $375 million in civil penalties. The following day, jurors in Los Angeles held Meta and YouTube liable for the depression and anxiety of a 20-year-old woman who compulsively used the platforms as a child, awarding her $3 million in damages, 70% of which must be paid by Meta. Additional damages, for malice and fraud, may still be imposed.
Over the last five or so years, whistleblower disclosures during congressional hearings, discovery from lawsuits brought by state attorneys general, and investigations by news outlets like The Wall Street Journal have produced a wellspring of internal studies and memos.
Those materials reveal that these companies were aware their products were causing harm yet continued to prioritize profit over users’ mental health.
An investigation by The Wall Street Journal uncovered internal slide decks detailing Facebook’s research into Instagram’s effects on teen well‑being. Those materials linked Instagram use to outcomes including self‑harm, sleep disruption, anxiety, unhealthy social comparison, and body image issues. Meta’s own findings included admissions such as: “We make body image issues worse for one in three teen girls,” “Teens blame Instagram for increases in the rate of anxiety and depression,” and “Teens who struggle with mental health say Instagram makes it worse.”
Whistleblower Frances Haugen testified in a 2021 Senate hearing that internal product safety teams were often at odds with product design teams focused on growth. As she explained, “The parts of the organization responsible for growing and expanding the organization are separate and not regularly cross‑pollinated with the parts of the company that focus on harms that the company is causing, and as a result, integrity actions—projects that were hard fought by the teams trying to keep us safe—are undone by new growth projects that counteract those same remedies.”
Haugen’s account is consistent with internal memos disclosed by the Tech Oversight Project, which show that Meta was well aware of existing research on addictive and harmful design features. Those concerns were also reflected in Meta’s own internal research, presented in 2021, which identified several features as “primarily negative,” including video and photo filters, location sharing, autoplay, and pop‑up notifications—now standard elements of Facebook and Instagram.
Other internal documents show that Meta explicitly studied how changes in platform design could increase engagement. As early as 2016, the company explored strategies for keeping teens on its platforms and posting content more frequently, including research into notification systems and their ability to induce habitual or addictive behavior.
That orientation toward engagement was captured succinctly by Meta’s Vice President of Product, Max Eulenstein, who wrote in a January 26, 2021, email: “No one wakes up thinking they want to maximize the number of times they open Instagram that day. But that’s exactly what our product teams are trying to do.”
These companies have dominated the culture, shaped the narrative, and changed the way young people view themselves and others, largely through product design choices made with little meaningful external oversight or regulation. As Haugen testified, “Facebook’s closed design means it has no oversight—even from its own Oversight Board, which is as blind as the public.”
If the recent verdicts withstand appeal, they could begin to reshape incentives across the industry and bring about meaningful oversight for the first time. Design choices that once operated largely outside public view may increasingly be evaluated through the lenses of consumer protection, product safety, and duty of care. These lawsuits suggest that social media companies may no longer be insulated from scrutiny over how their products are designed.
For parents, these cases change little in the short term. Until clearer boundaries are imposed on platform design, parents remain the primary line of defense.
In the meantime, families are left to rely on existing, though limited, safeguards such as time limits, notification controls, and age‑appropriate account settings. But parents need to recognize that the risks associated with social media are not accidental or isolated; they are embedded in the architecture of the platforms themselves. Meaningful protection may therefore require more than monitoring content. It may require reducing children’s exposure to systems deliberately designed to encourage compulsive use or delaying access to social media altogether.



