History Doesn’t Repeat, But It Often Rhymes

In 1996, the Internet was still in its infancy. AOL had been introduced to the American public only a few years earlier, in 1992, and it would be another five years before a majority of American households had Internet access.

Yet that same year, Congress passed the Communications Decency Act, aiming to protect innovation and “preserve the vibrant and competitive free market that exists for the Internet and other interactive computer services, unfettered by Federal or State regulation.”

Sound familiar?

It should. Because today we are having almost exactly the same conversations around Artificial Intelligence (AI).

Section 230 of the Communications Decency Act explicitly shields providers of interactive computer services from civil liability for content created by third parties (the law’s “information content providers”). The reasoning, in effect, is that these services are not substantially different from a community bulletin board. The business or institution that makes the bulletin board available to the public should not be held liable for what members of the community post on it, however dangerous or inflammatory it might be.

Now, nearly thirty years later, we are wrestling with the devastating consequences of this “hands off” approach.

In the early days of social media, sites like Myspace and Facebook did act more like a morally neutral bulletin board, providing essentially a chronological feed of what your friends and the pages you follow were posting.

But that is no longer the case.

Today, algorithms dictate your feed, delivering content based on what they predict you’ll engage with, even if that engagement is negative. Click-bait, rage-bait … it’s all the same to them. It is psychological manipulation designed to optimize engagement and thereby increase the platforms’ profits.

And we are now dealing with the mental health consequences of this psychological manipulation. Jonathan Haidt, author of “The Anxious Generation,” notes: “The rise of smartphones and social media use among teens after 2010 coincides with a doubling or tripling of rates of depression, anxiety, self-harm, and suicide—especially among girls.”

Because we decided that tech innovation mattered more than child safety, we are now playing catch-up with a patchwork of legislation aimed at protecting children online: age restrictions for social media, children’s online privacy protections, reform of Section 230 of the Communications Decency Act, and on and on. Even so, only one of these measures, the TAKE IT DOWN Act, has passed both chambers.

We are in grave danger not only of failing to learn from this oversight, but of repeating it, as Congress is poised to make the same mistake with AI.

Earlier this year, Big Tech lobbyists tried to slip a 10-year moratorium on state-level AI regulation into the budget bill. That provision was killed, thanks to the courageous leadership of Sen. Marsha Blackburn (R-Tennessee).

But now it is back, this time tucked into the National Defense Authorization Act (NDAA), a “must-pass” piece of legislation.

While parents, teachers, and health professionals struggle to mitigate the fallout of unregulated social media, this provision would give AI developers free rein to entrench themselves in the lives of vulnerable children, with no oversight, no accountability, and complete immunity from federal or state-level efforts to regulate them.

We already know enough to see where this will end.

As with social media, AI chatbots are designed to maximize engagement, often using manipulative tactics to keep users hooked. As with social media, AI companions are known to foster compulsive use patterns. As with social media, heavy chatbot use is associated with increased feelings of isolation and reduced real-world social interaction. As with social media, chatbots often fail to recognize signs of distress or suicidal ideation and may even encourage and validate harmful thoughts. As with social media, some bots have reportedly encouraged self-harm or violence. As with social media, chatbots can provide dangerous health or life advice. As with social media, AI companions have exhibited harassment, sexual misconduct, and privacy violations, including inappropriate conversations with minors. As with social media, children and teens are at heightened risk.

Have we learned nothing?

Now we face a more powerful, more unpredictable threat in AI—and Congress is on the verge of repeating the same mistake. Enough is enough.

We cannot afford another lost generation. AI’s impacts on children’s mental health, development, and safety must be prioritized, and policymakers must have the freedom to investigate the risks and respond proportionately. Companies that prioritize profits over safety should not be immune from liability. Our children deserve better.
