Is This the End of the Internet As We Know It?


Two pending Supreme Court cases interpreting a 1996 law could drastically alter the way we interact online. That law, Section 230 of the Communications Decency Act, is often disparaged as a handout to Big Tech, but that misses the point. Section 230 promotes free speech by removing strong incentives for platforms to limit what we can say and do online.

Under Section 230, platforms generally may not be held liable for the content posted by users. Without this protection, important speech such as communication about abortion, especially in states where abortion is outlawed, could be silenced. Movements like #MeToo and #BLM may not have been able to catch on if platforms were worried that they’d be sued, even improperly, for defamation or other claims. People could have found their voices censored, especially when talking about ideas that are under political attack today: race and racism, sexuality, and gender justice. The internet as we know it would be a very different place.


Before Section 230, companies cultivating online communities were legally responsible for what their users posted, while those that exercised no editorial control were not. The natural consequence of this was that some platforms would choose to limit conversations to only the most uncontroversial matters, while other platforms had an incentive to host free-for-all spaces, tolerating pornographic, abusive, or other unwanted content to avoid any legal responsibility. Congress wisely recognized that the internet could be so much more than this and passed Section 230.

While Section 230 immunizes online platforms from legal liability for the posts, comments, and other messages contributed by their users, it does not free platforms from liability for content that violates federal criminal law, intellectual property rights, or a few other categories of legal obligations. Section 230 also does not apply to platform conduct that falls outside the publication of others’ content, such as discriminatory targeting of ads for housing or employment on the basis of race or sex.


It also does not provide a safe harbor for platforms that provide advertisers with tools designed to target ads to users based on sex, race, or other statuses protected by civil rights laws. Nor does it provide immunity from claims that a platform’s own ad delivery algorithms are discriminatory. The ACLU recently explained why this conduct falls outside the scope of Section 230. In these scenarios, where the alleged basis for liability is the platform’s own discrimination, the ACLU seeks to stop platforms from misusing or misinterpreting Section 230 immunity.

Today, the internet enables people to communicate with one another at a previously impossible scale. It is one of the “principal sources for knowing current events, checking ads for employment, speaking and listening in the modern public square, and otherwise exploring the vast realms of human thought and knowledge,” as the Supreme Court recently recognized in Packingham v. North Carolina. At the same time, platforms are free to manage user content, taking down problematic posts containing nudity, racist slurs, spam, or fraudulent information.

This term, the Supreme Court will consider the scope of the law’s protections in Twitter v. Taamneh and Gonzalez v. Google. These cases were brought by family members of U.S. citizens who were killed by ISIS in terrorist attacks. The suits allege that platforms, including Twitter and Google’s YouTube, are “aiding and abetting” ISIS attacks by failing to adequately block or remove content promoting terrorism.

But Twitter and YouTube did not, and do not, have any intention of promoting terrorism. The videos the plaintiffs identified were posted by ISIS operatives and, while lawful, violated Twitter’s and YouTube’s terms of service. The companies would have removed them had they been flagged. There is also no allegation that the people behind the terrorist attack were inspired by these videos.


The ACLU’s amicus brief in Twitter v. Taamneh asserts that imposing liability under these circumstances would improperly chill speech. Of course, a platform could promote terrorism through its own policies and actions. But imposing liability merely for hosting content, absent malicious intent or specific knowledge that a particular post furthered a particular criminal act, would squelch online speech and association. Overcautious removal already happens, as when Instagram confused a post about a landmark mosque with one about a terrorist group. These relatively common errors would become the new norm.

The Gonzalez case asks a different question: whether Section 230 immunity applies to amplified content. The plaintiffs argue that when platforms suggest content to users, such as in “Up Next,” “You Might Like,” or “Recommended For You,” those suggestions are not protected by Section 230. So, while a provider would remain immunized for merely hosting content, it would be responsible for highlighting it.

The ACLU filed an amicus brief in the Gonzalez case to explain why online platforms have no choice but to prioritize some content over others, and should be immune from liability for those choices when they include content from a third party. Given the vast amount of material posted every minute, platforms must select and organize content in order to display it in any usable manner. There is no way to visually present information to app or webpage users without making editorial choices that are, at the very least, implicit “recommendations.”
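To make this concrete, consider a minimal sketch of the choice every feed must make. It is illustrative only: the Post fields and scoring weights below are invented for this example, not any platform’s actual code. The point is that even “newest first” is a ranking decision.

```python
from dataclasses import dataclass
from datetime import datetime, timezone

@dataclass
class Post:
    author: str
    text: str
    posted_at: datetime
    likes: int

def rank_feed(posts: list[Post], recency_weight: float = 1.0,
              engagement_weight: float = 0.1) -> list[Post]:
    """Order third-party posts for display.

    Any choice of weights is an editorial judgment: setting
    engagement_weight to 0 yields a plain newest-first feed, which is
    still a decision about which content appears at the top.
    """
    now = datetime.now(timezone.utc)

    def score(post: Post) -> float:
        hours_old = (now - post.posted_at).total_seconds() / 3600
        return engagement_weight * post.likes - recency_weight * hours_old

    return sorted(posts, key=score, reverse=True)
```

Whatever weights a platform picks, some posts end up above others, which is why the brief argues that hosting and “recommending” cannot be cleanly separated.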

Moreover, organizing and recommending content helps us to find what we are looking for, to receive and create information, to reach an audience and to build community. If Section 230 doesn’t apply to this kind of content organization, platforms will be incentivized to present information in a disorganized jumble and will feel pressure to include only the most innocuous content that lawyers can be certain wouldn’t inspire anyone to sue.

Section 230 has allowed public expression on the internet to flourish. It has created space for social movements; enabled platforms to host the speech of activists and organizers; and allowed users and content creators on sites like Instagram, TikTok, and Twitch to reach an audience and make a living. Without it, the internet will be a far less hospitable place for human creativity, education, politics, and collaboration. If we lose Section 230, we stand to lose the internet as we know it.


Section 230, the internet law that’s under threat, explained

The pillar of internet free speech seems to be everyone’s target.

by Sara Morrison


You may have never heard of it, but Section 230 of the Communications Decency Act is the legal backbone of the internet. The law was created almost 30 years ago to protect internet platforms from liability for many of the things third parties say or do on them.

Decades later, it’s never been more controversial. People from both political parties and all three branches of government have threatened to reform or even repeal it. The debate centers around whether we should reconsider a law from the internet’s infancy that was meant to help struggling websites and internet-based companies grow. After all, these internet-based businesses are now some of the biggest and most powerful in the world, and users’ ability to speak freely on them bears much bigger consequences.

While President Biden pushes Congress to pass laws to reform Section 230, its fate may lie in the hands of the judicial branch, as the Supreme Court is considering two cases — one involving YouTube and Google, another targeting Twitter — that could significantly change the law and, therefore, the internet it helped create.

Section 230 says that internet platforms hosting third-party content are not liable for what those third parties post (with a few exceptions). That third-party content could include things like a news outlet’s reader comments, tweets on Twitter, posts on Facebook, photos on Instagram, or reviews on Yelp. If a Yelp reviewer were to post something defamatory about a business, for example, the business could sue the reviewer for libel, but thanks to Section 230, it couldn’t sue Yelp.

Without Section 230’s protections, the internet as we know it today would not exist. If the law were taken away, many websites driven by user-generated content would likely go dark. A repeal of Section 230 wouldn’t just affect the big platforms that seem to get all the negative attention, either. It could affect websites of all sizes and online discourse.

Section 230’s salacious origins

In the early ’90s, the internet was still in its relatively unregulated infancy. There was a lot of porn floating around, and anyone, including impressionable children, could easily find and see it. This alarmed some lawmakers. In an attempt to regulate this situation, in 1995 lawmakers introduced a bipartisan bill called the Communications Decency Act, which would extend laws governing obscene and indecent use of telephone services to the internet. This would also make websites and platforms responsible for any indecent or obscene things their users posted.

In the midst of this was a lawsuit between two companies you might recognize: Stratton Oakmont and Prodigy. The former is featured in The Wolf of Wall Street, and the latter was a pioneer of the early internet. But in 1994, Stratton Oakmont sued Prodigy for defamation after an anonymous user claimed on a Prodigy bulletin board that the financial company’s president engaged in fraudulent acts. The court ruled in Stratton Oakmont’s favor, saying that because Prodigy moderated posts on its forums, it exercised editorial control that made it just as liable for the speech on its platform as the people who actually made that speech. Meanwhile, Prodigy’s rival online service, CompuServe, was found not liable for a user’s speech in an earlier case because CompuServe didn’t moderate content.

Fearing that the Communications Decency Act would stop the burgeoning internet in its tracks, and mindful of the Prodigy decision, then-Rep. (now Sen.) Ron Wyden and Rep. Chris Cox authored an amendment to the CDA that said “interactive computer services” were not responsible for what their users posted, even if those services engaged in some moderation of that third-party content.

“What I was struck by then is that if somebody owned a website or a blog, they could be held personally liable for something posted on their site,” Wyden told Vox’s Emily Stewart in 2019. “And I said then — and it’s the heart of my concern now — if that’s the case, it will kill the little guy, the startup, the inventor, the person who is essential for a competitive marketplace. It will kill them in the crib.”

As the beginning of Section 230 says: “No provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider.” These are considered by some to be the 26 words that created the internet, but the law says more than that.

Section 230 also allows those services to “restrict access” to any content they deem objectionable. In other words, the platforms themselves get to choose what is and what is not acceptable content, and they can decide to host it or moderate it accordingly. That means the free speech argument frequently employed by people who are suspended or banned from these platforms — that their Constitutional right to free speech has been violated — doesn’t apply. Wyden likens the dual nature of Section 230 to a sword and a shield for platforms: They’re shielded from liability for user content, and they have a sword to moderate it as they see fit.

The Communications Decency Act was signed into law in 1996. The indecency and obscenity provisions about transmitting porn to minors were immediately challenged by civil liberty groups and struck down by the Supreme Court, which said they were too restrictive of free speech. Section 230 stayed, and so a law that was initially meant to restrict free speech on the internet instead became the law that protected it.

This protection has allowed the internet to thrive. Think about it: Websites like Facebook, Reddit, and YouTube have millions and even billions of users. If these platforms had to monitor and approve every single thing every user posted, they simply wouldn’t be able to exist. No website or platform can moderate at such an incredible scale, and no one wants to open themselves up to the legal liability of doing so. On the other hand, a website that didn’t moderate anything at all would quickly become a spam-filled cesspool that few people would want to swim in.
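A back-of-envelope calculation shows why. Every figure below is an assumption chosen for illustration, not reported platform data, but any plausible numbers lead to the same conclusion.

```python
# Rough sketch of what pre-approving every post would cost a large platform.
# All figures are assumed, illustrative values, not platform data.
posts_per_day = 500_000_000      # assumed daily post volume
seconds_per_review = 60          # assumed time for one careful human review
shift_seconds = 8 * 3600         # one moderator's eight-hour workday

reviews_per_moderator_per_day = shift_seconds / seconds_per_review  # 480
moderators_needed = posts_per_day / reviews_per_moderator_per_day

print(f"Full-time moderators required: {moderators_needed:,.0f}")
# About 1,041,667 people reviewing posts all day, every day:
# far larger than any company's entire workforce.
```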

That doesn’t mean Section 230 is perfect. Some argue that it gives platforms too little accountability, allowing some of the worst parts of the internet to flourish. Others say it allows platforms that have become hugely influential and important to suppress and censor speech based on their own whims or supposed political biases. Depending on who you talk to, internet platforms are either using the sword too much or not enough. Either way, they’re hiding behind the shield to protect themselves from lawsuits while they do it. Though it has been a law for nearly three decades, Section 230’s existence may have never been as precarious as it is now.

The Supreme Court might determine Section 230’s fate

Justice Clarence Thomas has made no secret of his desire for the court to consider Section 230, saying in multiple opinions that he believes lower courts have interpreted it to give too-broad protections to what have become very powerful companies. He got his wish in February 2023, when the court heard two similar cases involving it. In both, plaintiffs argued that their family members were killed by terrorists who posted content on those platforms. In the first, Gonzalez v. Google, the family of a woman killed in a 2015 terrorist attack in France said YouTube promoted ISIS videos and sold advertising on them, thereby materially supporting ISIS. In Twitter v. Taamneh, the family of a man killed in a 2017 ISIS attack in Turkey said the platform didn’t go far enough to identify and remove ISIS content, in violation of the Justice Against Sponsors of Terrorism Act — which could mean that Section 230 doesn’t apply to such content.

These cases give the Supreme Court the chance to reshape, redefine, or even repeal the foundational law of the internet, which could fundamentally change it. And while the Supreme Court chose to take these cases on, it’s not certain that it will rule in favor of the plaintiffs. In oral arguments in late February, several justices in Gonzalez v. Google seemed unconvinced that they could or should narrow Section 230, especially considering the monumental possible consequences and impact of such a decision. In Twitter v. Taamneh, the justices focused more on whether and how the Sponsors of Terrorism law applied to tweets than they did on Section 230. The rulings are expected in June.

In the meantime, don’t expect the original authors of Section 230 to go away quietly. Wyden and Cox submitted an amicus brief to the Supreme Court for the Gonzalez case, where they said: “The real-time transmission of user-generated content that Section 230 fosters has become a backbone of online activity, relied upon by innumerable Internet users and platforms alike. Given the enormous volume of content created by Internet users today, Section 230’s protection is even more important now than when the statute was enacted.”

Congress and presidents are getting sick of Section 230, too

In 2018, two bills — the Allow States and Victims to Fight Online Sex Trafficking Act (FOSTA) and the Stop Enabling Sex Traffickers Act (SESTA) — were signed into law, changing parts of Section 230. The updates mean that platforms can now be deemed responsible for prostitution ads posted by third parties. These changes were ostensibly meant to make it easier for authorities to go after websites that were used for sex trafficking, but they did so by carving out an exception to Section 230. That could open the door to even more exceptions in the future.

Amid all of this was a growing public sentiment that social media platforms like Twitter and Facebook were becoming too powerful. In the minds of many, Facebook even influenced the outcome of the 2016 presidential election by offering up its user data to shady outfits like Cambridge Analytica. There were also allegations of anti-conservative bias. Right-wing figures who once rode the internet’s relative lack of moderation to fame and fortune were being held accountable for violations of hateful-content rules and kicked off the very platforms that helped create them. Alex Jones and his expulsion from Facebook and other social media platforms — even Twitter under Elon Musk won’t let him back — is perhaps the best example of this.

In a 2018 op-ed, Sen. Ted Cruz (R-TX) claimed that Section 230 required the internet platforms it was designed to protect to be “neutral public forums.” The law doesn’t actually say that, but many Republican lawmakers have introduced legislation that would fulfill that promise. On the other side, Democrats have introduced bills that would hold social media platforms accountable if they didn’t do more to prevent harmful content or if their algorithms promoted it.

There are some bipartisan efforts to change Section 230, too. The EARN IT Act from Sens. Lindsey Graham (R-SC) and Richard Blumenthal (D-CT), for example, would remove Section 230 immunity from platforms that didn’t follow a set of best practices to detect and remove child sexual abuse material. The partisan bills haven’t really gotten anywhere in Congress. But EARN IT, which was introduced in the last two sessions, was passed out of committee in the Senate and was ready for a Senate floor vote. That vote never came, but Blumenthal and Graham have already signaled that they plan to reintroduce EARN IT this session for a third try.

In the executive branch, former President Trump became a very vocal critic of Section 230 in 2020 after Twitter and Facebook started deleting and tagging his posts that contained inaccuracies about Covid-19 and mail-in voting. He issued an executive order that said Section 230 protections should only apply to platforms that have “good faith” moderation, and then called on the FCC to make rules about what constituted good faith. This didn’t happen, and President Biden revoked the executive order months after taking office.

But Biden isn’t a fan of Section 230, either. During his presidential campaign, he said he wanted it repealed. As president, Biden has said he wants it to be reformed by Congress. Until Congress can agree on what’s wrong with Section 230, however, it isn’t likely to pass a law that significantly changes it.

However, some Republican states have been making their own anti-Section 230 moves. In 2021, Florida passed the Stop Social Media Censorship Act, which prohibits certain social media platforms from banning politicians or media outlets. That same year, Texas passed HB 20 , which forbids large platforms from removing or moderating content based on a user’s viewpoint.

Neither law is currently in effect. A federal judge blocked the Florida law in 2022 on the grounds that it likely violated both free speech protections and Section 230. The state has appealed to the Supreme Court. The Texas law has made a little more progress. A district court blocked the law last year, and then the Fifth Circuit controversially reversed that decision before deciding to stay the law in order to give the Supreme Court the chance to take the case. We’re still waiting to see if it does.

If Section 230 were to be repealed — or even significantly reformed — it really could change the internet as we know it. It remains to be seen if that’s for better or for worse.

Update, February 23, 2023, 3 pm ET: This story, originally published on May 28, 2020, has been updated several times, most recently with the latest news from the Supreme Court cases related to Section 230.


What you need to know about Section 230, the ‘most important law protecting internet speech’

Section 230 grants broad legal protections to websites that host user-generated content, like Facebook and Google.


A law credited with birthing the internet — and with spurring misinformation — has drawn bipartisan ire from lawmakers who are vowing to change it.

Section 230 of the Communications Decency Act shields internet platforms from liability for much of what its users post.

Both Democrats and Republicans point to Section 230 as a law that gives too much protection to companies like Facebook, YouTube, Twitter, Amazon and Google — albeit for different reasons.

Former President Donald Trump wanted changes to Section 230 and vetoed a military spending bill in December because it didn’t include them. President Joe Biden has said that he’d be in favor of revoking the provision altogether. Biden’s pick for commerce secretary said she will pursue changes to Section 230 if confirmed.

There are several bills in Congress that would repeal Section 230 or amend its scope in order to limit the power of the platforms. In response, even tech companies have called for revising a law they say is outdated.

“In the offline world, it’s not just the person who pulls the trigger, or makes the threat or causes the damage — we hold a lot of people accountable,” said Mary Anne Franks, a law professor at the University of Miami. “Section 230 and the way it’s been interpreted essentially says none of those rules apply here.”

How did Section 230 come to be, and how could potential reforms affect the internet? We consulted the law and its experts to find out. (Have a question we didn’t answer here? Send it to [email protected].)

What is Section 230?


Donna Rice Hughes, of the anti-pornography organization Enough is Enough, meets reporters outside the Supreme Court in Washington Wednesday, March 19, 1997, after the court heard arguments challenging the 1996 Communications Decency Act. The court, in its first look at free speech on the Internet, was asked to uphold a law that made it a crime to put indecent words or pictures online where children can find them. They struck it down. (AP Photo/Susan Walsh)

Congress passed the Communications Decency Act as Title V of the Telecommunications Act of 1996, when an increasing number of Americans started to use the internet. Its original purpose was to prohibit making “indecent” or “patently offensive” material available to children.

In 1997, the Supreme Court struck down the Communications Decency Act as an unconstitutional violation of free speech. But one of its provisions survived and, ironically, laid the groundwork for protecting online speech.

Section 230 says: “No provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider.”

That provision, grounded in the language of First Amendment law, grants broad legal protections to websites that host user-generated content. It essentially means they can’t be sued for libel or defamation for user posts. Section 230 is especially important to social media platforms, but it also protects news sites that allow reader comments or auction sites that let users sell products or services.


“Section 230 is understood primarily as a reaction to state court cases threatening to hold online service providers liable for (possible) libels committed by their users,” said Tejas Narechania, an assistant law professor at the University of California-Berkeley.

Section 230 changed that. For example, if a Facebook user publishes something defamatory, Facebook itself can’t be sued for defamation, but the post’s original author can be. That’s different from publishers like the New York Times, which can be held liable for content they publish — even if they didn’t originate the offending claim.

There are some exceptions in Section 230, including for copyright infringement and violations of federal criminal law. But in general, the provision grants social media platforms far more leeway than other industries in the U.S.

Why does it matter?


Sen. Ron Wyden (D-Ore.), one of the authors of Section 230, in 2021. (Demetrius Freeman/The Washington Post via AP, Pool)

Section 230 is the reason that you can post photos on Instagram, find search results on Google and list items on eBay. The Electronic Frontier Foundation, a nonprofit digital rights group, calls it “the most important law protecting internet speech.”

Section 230 is generally considered to be speech-protective, meaning that it allows for more content rather than less on internet platforms. That objective was baked into the law.

In crafting Section 230, Sen. Ron Wyden, D-Ore., and Rep. Chris Cox, R-Calif., “both recognized that the internet had the potential to create a new industry,” wrote Jeff Kosseff in “The Twenty-Six Words That Created the Internet.”

“Section 230, they hoped, would allow technology companies to freely innovate and create open platforms for user content,” Kosseff wrote. “Shielding internet companies from regulation and lawsuits would encourage investment and growth, they thought.”

Wyden and Cox were right — today, American tech platforms like Facebook and Google have billions of users and are among the wealthiest companies in the world. But they’ve also become vehicles for disinformation and hate speech, in part because Section 230 left it up to the platforms themselves to decide how to moderate content.

Until relatively recently, most companies took a light touch to moderation of content that’s not illegal, but still problematic. (PolitiFact, for example, participates in programs run by Facebook and TikTok to fight misinformation.)

“You don’t have to devote any resources to make your products and services safe or less harmful — you can solely go towards profit-making,” said Franks, the law professor. “Section 230 has gone way past the idea of gentle nudges toward moderation, towards essentially it doesn’t matter if you moderate or not.”

Without Section 230, tech companies would be forced to think about their legal liability in an entirely different way.

“Without Section 230, companies could be sued for their users’ blog posts, social media ramblings, or homemade online videos,” Kosseff wrote. “The mere prospect of such lawsuits would force websites and online service providers to reduce or entirely prohibit user-generated content.”

Has the law changed?

The law has changed a little bit since 1996.

Section 230’s first major challenge came in 1997, when America Online was sued for failing to remove libelous ads that erroneously connected a man’s phone number to the Oklahoma City bombing. The U.S. Court of Appeals for the Fourth Circuit ruled in favor of AOL, citing Section 230.

“That’s the case that basically set out very expansive protection,” said Olivier Sylvain, a law professor at Fordham University. “It held that even when an intermediary, AOL in this case, knows about unlawful content … it still is not obliged under law to take that stuff down.”

That’s different from how the First Amendment treats other distributors, such as booksellers. But the legal protections aren’t limitless.

In 2008, the Ninth Circuit appeals court ruled that Roommates.com could not claim immunity from anti-discrimination laws for requiring users to choose the preferred traits of potential roommates. Section 230 was further weakened in 2018 when Trump signed a package of bills aimed at limiting online human trafficking.

The package created an exception that held websites liable for ads for prostitution. As a result, Craigslist shut down its section for personal ads and certain Reddit groups were banned.

What reforms are being considered?


Sen. Joshua Hawley (R-Mo.) is one of several senators who have introduced bills to modify or repeal Section 230. (Graeme Jennings/Pool via AP)

In 2020, following a Trump executive order on “preventing online censorship,” the Justice Department published a review of Section 230. In it, the department recommended that Congress revise the law to include carve-outs for “egregious content” related to child abuse, terrorism and cyber-stalking. The review also proposed revoking Section 230 immunity in cases where a platform had “actual knowledge or notice” that a piece of content was unlawful.

The Justice Department review came out the same day that Sen. Josh Hawley, R-Mo., introduced a bill that would require companies to revise their terms of service to include a “duty of good faith” and more transparency about their moderation policies. A flurry of other Republican-led efforts came in January after Twitter banned Trump from its platform. Some proposals would make Section 230 protections conditional, while others would repeal the provision altogether.

Democrats have instead focused on reforming Section 230 to hold platforms accountable for harmful content like hate speech, targeted harassment and drug dealing. One proposal would require platforms to explain their moderation practices and to produce quarterly reports on content takedowns. The Senate Democrats’ SAFE Tech Act would revoke legal protections for platforms where payments are involved.

That last proposal is aimed at reining in online advertising abuses, but critics say even small changes to Section 230 could have unintended consequences for free speech on the internet. Still, experts say it’s time for change.

“Section 230 is a statute — it is not a constitutional norm, it’s not free speech — and it was written at a time when people were worried about electronic bulletin boards and newsgroups. They were not thinking about amplification, recommendations and targeted advertising,” Sylvain said. “Most people agree that the world in 1996 is not the world in 2021.”

This article was originally published by PolitiFact, which is part of the Poynter Institute. It is republished here with permission. See the sources for these fact-checks here and more of their fact-checks here.


The dying art of conversation – has technology killed our ability to talk face-to-face?


Melanie Chan, Senior Lecturer, Media, Communication and Culture, Leeds Beckett University

Disclosure statement

Melanie Chan does not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.


With FaceTime, Skype, WhatsApp and Snapchat, many people use face-to-face conversation less and less often.

These apps allow us to converse with each other quickly and easily – overcoming distances, time zones and countries. We can even talk to virtual assistants such as Alexa, Cortana or Siri – commanding them to play our favourite songs and films, or to tell us the weather forecast.

Often these ways of communicating reduce the need to speak to another human being. This has led to some of the conversational snippets of our daily lives taking place mainly via technological devices. No longer do we need to talk with shop assistants, receptionists, bus drivers or even coworkers; we simply engage with a screen to communicate whatever it is we want to say.

In fact, in these scenarios, we tend to only speak to other people when the digital technology does not operate successfully. For instance, human contact occurs when we call for an assistant to help us when an item is not recognised at the self-service checkout.

And when we can connect so quickly and easily with others using technological devices and software applications, it is easy to overlook the value of face-to-face conversation. It seems easier to text someone than to meet with them.

Bodily cues

My research into digital technologies indicates that phrases such as “word of mouth” or “keeping in touch” point to the importance of face-to-face conversation. Indeed, face-to-face conversation can strengthen social ties: with our neighbours, friends, work colleagues and other people we encounter during our day.

It acknowledges their existence, their humanness, in ways that instant messaging and texting do not. Face-to-face conversation is a rich experience that involves drawing on memories, making connections, making mental images, associations and choosing a response. Face-to-face conversation is also multisensory: it’s not just about sending or receiving pre-programmed trinkets such as likes, cartoon love hearts and grinning yellow emojis.


When having a conversation over video, you see the other person’s face mainly as a flat image on a screen. But when we have a face-to-face conversation in real life, we can look into someone’s eyes, reach out and touch them. We can also observe the other person’s body posture and the gestures they use when speaking – and interpret these accordingly. All these factors contribute to the sensory intensity and depth of the face-to-face conversations we have in daily life.

Speaking to machines

Sherry Turkle, professor of social studies of science and technology, warns that when we first “speak through machines, [we] forget how essential face-to-face conversation is to our relationships, our creativity, and our capacity for empathy”. But then “we take a further step and speak not just through machines but to machines”.

In many ways, our everyday lives now involve a blend of face-to-face and technologically mediated forms of communication. But in my teaching and research I explain how digital forms of communication can supplement, rather than replace face-to-face conversation.

At the same time though, it is also important to acknowledge that some people value online communication because they can express themselves in ways they might find difficult through face-to-face conversation.

Look up from your phone

Gary Turk is a spoken word poet whose poem Look Up illustrates what is at stake when we become entranced by technological ways of communicating at the expense of connecting with others face-to-face.

Turk’s poem draws attention to the rich, sensory aspects of face-to-face communication, valuing bodily presence in relation to friendship, companionship and intimacy. The central idea running through Turk’s evocative poem is that screen-based devices consume our attention while distancing us from the bodily sense of being with others.

Ultimately the sound, touch, smell and observation of bodily cues we experience when having a face-to-face conversation cannot be fully replaced by our technological devices. Communicating and connecting with others through face-to-face discussion is valuable because it is not something that can be edited, paused or replayed.

So next time you’re deciding between human or machine at the supermarket checkout or whether to get up from your desk and walk to another office to talk to a colleague – rather than sending them an email – it might be worth following Turk’s advice and engaging with the human rather than the screen.


Supreme Court Poised to Reconsider Key Tenets of Online Speech

The cases could significantly affect the power and responsibilities of social media platforms.


By David McCabe

David McCabe, who is based in Washington, has reported for five years on the policy debate over online speech.

For years, giant social networks like Facebook, Twitter and Instagram have operated under two crucial tenets.

The first is that the platforms have the power to decide what content to keep online and what to take down, free from government oversight. The second is that the websites cannot be held legally responsible for most of what their users post online, shielding the companies from lawsuits over libelous speech, extremist content and real-world harm linked to their platforms.

Now the Supreme Court is poised to reconsider those rules, potentially leading to the most significant reset of the doctrines governing online speech since U.S. officials and courts decided to apply few regulations to the web in the 1990s.

On Friday, the Supreme Court is expected to discuss whether to hear two cases that challenge laws in Texas and Florida barring online platforms from taking down certain political content. Next month, the court is scheduled to hear a case that questions Section 230, a 1996 statute that protects the platforms from liability for the content posted by their users.

The cases could eventually alter the hands-off legal position that the United States has largely taken toward online speech, potentially upending the businesses of TikTok, Twitter, Snap and Meta, which owns Facebook and Instagram.

“It’s a moment when everything might change,” said Daphne Keller, a former lawyer for Google who directs a program at Stanford University’s Cyber Policy Center.


U.S. Government Accountability Office

Online Extremism is a Growing Problem, But What’s Being Done About It?

A hate crime occurs nearly every hour in the U.S. It’s a growing problem that’s been fueled by hate-filled internet posts on social media and other internet platforms. Many of us have seen news headlines about extremist attacks that were fueled by online hate speech—such as the mass shootings at Emanuel African Methodist Episcopal Church in Charleston, South Carolina in 2015; a Walmart in El Paso, Texas in 2019; and a nightclub in Colorado Springs, Colorado in 2022.

In a new report, we looked at the connection between hate crimes and online hate speech, and how internet-based companies and law enforcement are combatting these problems. Today’s WatchBlog post looks at our work.


What do we know about the connection between online hate and extremist acts?

Online hate speech is widespread. It includes prejudiced comments about race, national origin, ethnicity, gender, gender identification, religion, disability, or sexual orientation. Research indicates that up to a third of internet users have experienced hate speech online. That number is even higher in the online gaming community, where about 50% have experienced hate speech.

Those who post hateful or extremist speech online may do so in an effort to spread their ideologies.

Extremist attacks—such as those in Charleston, El Paso, and Colorado Springs—illustrate how exposure to hate speech online may have contributed to the attackers’ biases against people based on race, national origin, and sexual orientation. Additionally, these attacks showed how the internet has offered the perpetrators of such attacks a vehicle for disseminating hateful materials—such as manifestos containing disparaging and racist rhetoric prior to the attacks. The perpetrators of these three attacks were convicted of, or pled guilty to, federal or state hate crimes.

In response to the rise of hate crimes, the FBI has elevated such acts to its highest-level national threat priority. FBI’s designation placed hate crimes at the same priority level as preventing domestic violent extremism. But the government and others are also taking steps to respond to online hate crimes specifically.

What’s being done to combat online hate?

In our new report, we looked at how internet companies and the federal government are trying to combat online hate crimes.

We looked at six companies that run online forums and platforms—including social media, livestreaming, and crowdfunding platforms—that are tackling this issue in different ways. Each company has its own definition of content that violates the platform’s terms of use. But every definition prohibited hateful content related to disability, ethnicity, race, and religion.

We also found that each company had a different way of flagging hateful posts. All of the companies used algorithms, to varying degrees, to flag content and remove it. Some also relied on users to identify harmful content.
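As a deliberately simplified sketch of those two signals, automated matching and user reports, consider the toy routine below. The term list, threshold, and function name are invented for illustration; the companies GAO reviewed rely on trained classifiers and human review rather than a hard-coded word list.

```python
# Illustrative only: a toy version of the two flagging signals described above.
# The blocklist terms and report threshold are invented placeholders.
BLOCKLIST = {"placeholder_slur_a", "placeholder_slur_b"}

def flag_for_review(text: str, user_reports: int,
                    report_threshold: int = 3) -> bool:
    """Flag a post if it matches blocked terms (automated signal)
    or has accumulated enough user reports (community signal)."""
    words = {word.strip(".,!?\"'").lower() for word in text.split()}
    if words & BLOCKLIST:
        return True
    return user_reports >= report_threshold
```

Real systems differ mainly in degree: statistical classifiers replace the word list, and flagged posts typically go to human reviewers rather than being removed automatically.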

We also reviewed what the federal government is doing to address hate crimes that may be linked to online hate speech. For example, federal law enforcement agencies have used online hate posts as evidence during prosecutions of those who commit domestic violent extremism incidents and other hate crimes.

The Department of Justice is also collecting data from law enforcement agencies and the public about hate crimes to better understand their prevalence. One way it does this is through an annual national survey of about 150,000 households, which asks questions about potentially underreported crimes like hate crimes. While the survey can help estimate the prevalence of hate crimes, it doesn’t ask specifically about online hate crimes. Having this information could greatly inform federal law enforcement’s efforts, including putting resources where they are needed most.

Because of this, we recommended that the Department of Justice consider methods to collect information in the annual survey about hate crimes that occur on the internet.

Learn more about our work on the connection between online hate speech and violent extremism by reading our new report.



By Ronald Kahn


Yi Li uses a computer terminal at the New York Public Library to access the Internet, Wednesday, June 12, 1996. The graduate student from Taiwan supports the federal court decision issued in Philadelphia on Wednesday that bans government censorship of the Internet. Yi Li fears that government control of the Internet could be used by authoritarians to control or confine people. "We can use the (free flow of) information to unite the world," she said. (AP Photo/Mark Lennihan, reprinted with permission from The Associated Press)

The Supreme Court faces special challenges in dealing with the regulation of speech on the internet. The internet’s unique qualities — such as its capacity to spread potentially dangerous information quickly and widely, to enable harassment of others, and to give minors easy access to pornographic content — have prompted lawmakers to call for tighter restrictions on internet speech.

Others argue that Congress and the courts should refrain from limiting the possibilities of the internet unnecessarily and prematurely because it is a technologically evolving medium. For its part, the Supreme Court continues to balance First Amendment precedents with the technological features of the medium.

Congress has tried to protect minors from internet pornography

One major area of internet regulation is protecting minors from pornography and other indecent or obscene speech. For example, Congress passed the Communications Decency Act (CDA) in 1996, prohibiting “the knowing transmission of obscene or indecent messages” over the internet to minors. However, in 1997 the Supreme Court in Reno v. American Civil Liberties Union struck down this law as being too vague. The court held that the regulation created a chilling effect on speech and prohibited more speech than necessary to achieve the objective of protecting children.

The court also rejected the government’s arguments that speech on the internet should receive a reduced level of First Amendment protection, akin to that of the broadcast media which is regulated. Instead, the court ruled that speech on the internet should receive the highest level of First Amendment protection — like that extended to the print media.

In response to the court’s ruling, Congress in 1998 passed the Child Online Protection Act (COPA), which dealt only with minors’ access to commercial pornography and provided clear methods to be used by site owners to prevent access by minors. However, in 2004 the court struck down COPA in Ashcroft v. American Civil Liberties Union, stating that less restrictive methods such as filtering and blocking should be used instead. The court suggested that these alternative methods were at least in theory more effective than those specified in COPA because of the large volume of foreign pornography that Congress cannot regulate.

The Supreme Court has allowed the federal government to require libraries to install filters on their public computers to protect children from obscene material as a condition for receiving federal aid to purchase computers. But the three dissenting justices in United States v. American Library Association (2003) viewed the requirement of filtering devices on library computers, which both adults and children must request to be unlocked, as an overly broad restriction on adult access to protected speech.

Congress has attempted to criminalize virtual child pornography

Congress also ventured into the area of child pornography, passing the Child Pornography Prevention Act (CPPA) in 1996. The CPPA criminalized virtual child pornography—that is, pornography that sexually depicts, or conveys the impression of depicting, minors. Although the act targeted computer-generated or altered works advertised as child pornography, in Free Speech Coalition v. Reno (9th Cir. 1999) the federal appeals court found some language in the statute to be so overly broad and vague that much protected speech would be covered under the CPPA. The court noted that the state’s interest in protecting children from the physical and psychological abuse arising from their participation in the making of pornography—the basis for its ban in New York v. Ferber (1982)—was not present in virtual child pornography.


People use computers to access the Internet at the Boston Public Library, in Boston, 2003. The Supreme Court said public libraries must make it harder for internet surfers to look at pornography or they will lose government funding. Justices ruled in United States v. American Library Association (2003) that the federal government can withhold money from libraries that won’t install blocking devices. (AP Photo/Steven Senne)

‘Hit list’ of abortion doctors on website found to be ‘true threat’

U.S. courts have also dealt with other areas of internet speech that traditionally have been less protected or unprotected under the First Amendment. Planned Parenthood of the Columbia/Willamette, Inc. v. American Coalition of Life Activists (9th Cir. 2002), decided by an en banc panel of the 9th U.S. Circuit Court of Appeals, centered on what constitutes dangerous speech and a true threat in the context of the internet.

The American Coalition of Life Activists (ACLA) posted the personal contact information of doctors who performed abortions, including details such as the names of their children. The names of doctors who had been murdered were crossed off and the names of those who had been wounded by anti-abortion activists were grayed. Although the site did not contain explicit threats, opponents argued that it was akin to a hit list, and the doctors on the list believed it to be a serious threat to their safety.

The appeals court held that the ACLA could be held liable for civil damages, and that the website did not contain political speech protected under the First Amendment. The court wrote: “It is the use of the ‘wanted’-type format in the context of the poster pattern—poster followed by murder—that constitutes the threat.” The posters were not “political hyperbole” because “[p]hysicians could well believe that ACLA would make good on the threat.” Thus it was a true threat, not protected as political speech.

In an earlier case in 1997, the 6th U.S. Circuit Court of Appeals reached a different decision on violent material posted on the web and sent in emails by a university student who appeared to be planning an attack on a woman at his college. Abraham Alkhabaz claimed the emails were mere fantasy. In United States v. Alkhabaz, the circuit court agreed, concluding that the messages did not constitute a true threat because they were not “conveyed to effect some change or achieve some goal through intimidation.”

Rights of anonymous speech not absolute on internet

The extent of the right to anonymous speech on the internet has also become an issue in some court cases. The Supreme Court has recognized  anonymity rights in speech , albeit not an absolute right, and lower courts have generally taken the same view when it comes to anonymous speech on the internet.

In “The United States of Anonymous: How the First Amendment Shaped Online Speech,” author Jeff Kosseff explores two cases, Dendrite International, Inc. v. Doe No. 3, 775 A.2d 756 (N.J. App. Div. 2001), and Cahill v. Doe, 879 A.2d 943 (Del. Super. Ct., June 14, 2005), in which courts recognized relatively strong First Amendment presumptions on behalf of anonymous speakers, especially those making statements of opinion rather than obvious falsehoods. At the same time, those courts recognized that government sometimes has the right to identify such speakers when they have used their platforms to harass, engage in slander or sexual predation, make true threats, or allow foreign governments to influence U.S. elections.

This article was originally published in 2009 and was updated in March 2022.

Protecting Freedom of Expression Online

Questions around freedom of expression are once again in the air. While concern around the Internet’s role in the spread of disinformation and intolerance rises, so too do worries about how to maintain digital spaces for the free and open exchange of ideas. Within this context, countries have begun to re-think how they regulate online speech, including through mechanisms such as the principle of online intermediary immunity, arguably one of the main principles that has allowed the Internet to flourish as vibrantly as it has.

What is online intermediary immunity?

Laws that enact online intermediary immunity provide Internet platforms (e.g., Facebook, Twitter, YouTube) with legal protections against liability for content generated by third-party users.

Simply put, if a user posts illegal content, the host (i.e., intermediary) may not be held liable. An intermediary is understood as any actor other than the content creator. This includes large platforms such as Twitter where, for example, if a user posts an incendiary call to violence, Twitter may not be held liable for that post. It also holds for smaller platforms, such as a personal blog, where the blogger is protected from being held liable for comments left by readers. The same is true for the computer servers hosting the content.

These laws have multiple policy goals, ranging from promoting free expression and information access, to encouraging economic growth and technical innovation. But balancing these objectives against the risk of harm has proven complicated, as seen in debates about how to prevent online election disinformation campaigns, hate speech, and threats of violence.

There is also a growing public perception that large-scale Internet platforms need to be held accountable for the harms they enable. With the European Union reforming its major legislation on Internet regulation, the ongoing debate in the United States regarding similar reforms, and the recent January 6 attack on the U.S. Capitol, it is a propitious time to examine how different jurisdictions implement online intermediary liability laws and what that means for ensuring that the Web continues to allow deliberative democracy and civic participation.

The United States

Traditionally, the United States has provided some of the most rigorous protections for online intermediaries under section 230 of the Communications Decency Act (CDA), which bars platforms from being treated as the “publisher or speaker” of third-party content and establishes that platforms moderating content in good faith maintain their immunity from liability. However, there are increasing calls on both the left and right for this to change.

Republican Senator Josh Hawley of Missouri introduced two pieces of legislation in 2020 and 2019 respectively ― the Limiting Section 230 Immunity to Good Samaritans Act and the Ending Support for Internet Censorship Act ― to undercut the liability protections provided for in section 230 CDA. If passed, the Limiting Section 230 Immunity to Good Samaritans Act would limit liability protections to platforms that use value-neutral content moderation practices, meaning that content would have to be moderated with absolute neutrality, free from any set of values, to be protected. However, this is an unrealistic standard, given that all editorial decisions involve choices based on value, be it merely a question of how to sort that content (e.g., chronologically, alphabetically, etc.) or the editor’s own personal interests and taste. The Ending Support for Internet Censorship Act also seeks to remove liability protections for platforms that curate political information, the vagueness of which risks aggressively demotivating platforms from hosting politically sensitive conversations and chilling free speech online.

The bipartisan Platform Accountability and Consumer Transparency (PACT) Act, introduced by Democratic Senator Brian Schatz of Hawaii and Republican John Thune of South Dakota in 2020, would require platforms to disclose their content moderation practices, implement a user complaint system with an appeals process, and remove court-ordered illegal content within 24 hours. While a step in the right direction towards greater platform transparency, PACT could still endanger free speech on the Internet; it might motivate platforms to remove any content that might be found illegal rather than risk the costs of litigation, thereby taking down legitimate speech out of an abundance of caution. PACT would also entrench the already overwhelming power and influence of the largest platforms, such as Facebook and Google, by imposing onerous obligations that small-to-medium size platforms might find difficult to respect.

During his presidential campaign, Joe Biden even called for the outright repeal of section 230 CDA, with the goal of holding large platforms more accountable for the spread of disinformation and extremism. This remains a worrisome position and something that President Biden should reconsider, given the importance of section 230 CDA for preventing online censorship and allowing the Internet to flourish as an arena for public debate.

Canada

Questions around how to ensure the Internet remains a viable space for freedom of expression are particularly important in Canada, which does not currently have domestic statutory measures limiting the civil liability of online intermediaries. Although proposed with the laudable goals of combating disinformation, harassment, and the spread of hate, legislation that increases restrictions on freedom of speech, such as the reforms described above, should not be adopted in Canada. These types of measures risk incentivizing platforms to actively engage in censorship due to the prohibitive costs associated with the nearly impossible feat of preventing all objectionable content, especially for smaller providers. Instead, what is needed is national and international legislation that balances protecting users against harm while also safeguarding their right to freedom of expression.

One possible model for Canada can be found in the newly signed free trade agreement between Canada, the United States, and Mexico, known as the United States–Mexico–Canada Agreement (USMCA). Article 19.17 USMCA mirrors section 230 CDA by shielding online platforms from liability relating to content produced by third-party users, but a difference in wording [1] suggests that under USMCA, individuals who have been harmed by online speech may be able to obtain non-monetary equitable remedies, such as restraining orders and injunctions.

It remains to be seen how courts will interpret the provision, but the text leaves room to allow platforms to continue to enjoy immunity from liability, while being required to take action against harmful content pursuant to a court order, such as taking down the objectionable material. Under this interpretation, platforms would be free to take down or leave up content based on their own terms of service, until ordered otherwise by a court. This would leave ultimate decision-making with courts and avoid incentivizing platforms to overzealously take down content out of fear of monetary penalties.

USMCA thus appears to balance providing redress for harms with protecting online platforms from liability related to user-generated content, and provides a valuable starting point for legislators considering how to reform Canada’s domestic online intermediary liability laws.

Going forward

The Internet has proven itself to be a phenomenally transformative tool for human expression, community building, and knowledge dissemination. That power, however, can also be used for the creation, spread, and amplification of hateful, anti-democratic groups and ideas.

Countries are now wrestling with how to balance the importance of freedom of expression with the importance of protecting vulnerable groups and democracy itself. Decisions taken today on how to regulate online intermediary liability will play a crucial role in determining whether the Web remains a place for the free and open exchange of ideas, or a chill and stagnant desert.

Although I remain sympathetic to the legitimate concerns that Internet platforms do too little to prevent their own misuse, I fear that removing online intermediary liability protections will result in the same platforms having too much power and incentive to monitor and censor speech, something that risks being equally harmful.

There are other possible ways forward. We could take the roadmap offered by article 19.17 USMCA. We could prioritize prosecuting individuals for unlawful behaviour on the web, such as peddling slander, threatening bodily violence or fomenting sedition. Ultimately, we need nuanced solutions that balance empowering freedom of expression with protecting individuals against harm. Only then can the Internet remain a place that fosters deliberative democracy and civic participation.

[1] CDA 230(c) provides that “No provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider.” USMCA 19.17.2 instead provides that “No Party shall adopt or maintain measures that treat a supplier or user of an interactive computer service as an information content provider in determining liability [emphasis added] for harms related to information stored, processed, transmitted, distributed, or made available by the service, except to the extent the supplier or user has, in whole or in part, created or developed the information.”

About the writer

Rachel Zuroff, BCL/LLB’16, resides in Montreal, where she continues to pursue her interests in human rights and legal pluralism.

How Smartphones Are Killing Conversation

What happens when we become too dependent on our mobile phones? According to MIT sociologist Sherry Turkle, author of the new book Reclaiming Conversation , we lose our ability to have deeper, more spontaneous conversations with others, changing the nature of our social interactions in alarming ways.

Turkle has spent the last 20 years studying the impacts of technology on how we behave alone and in groups. Though initially excited by technology’s potential to transform society for the better, she has become increasingly worried about how new technologies, cell phones in particular, are eroding the social fabric of our communities.

In her previous book, the bestselling Alone Together , she articulated her fears that technology was making us feel more and more isolated, even as it promised to make us more connected. Since that book came out in 2012, technology has become even more ubiquitous and entwined with our modern existence. Reclaiming Conversation is Turkle’s call to take a closer look at the social effects of cell phones and to re-sanctify the role of conversation in our everyday lives in order to preserve our capacity for empathy , introspection, creativity, and intimacy.

I interviewed Turkle by phone to talk about her book and some of the questions it raises. Here is an edited version of our conversation.

Jill Suttie: Your new book warns that cell phones and other portable communication technology are killing the art of conversation. Why did you want to focus on conversation, specifically?

Sherry Turkle: Because conversation is the most human and humanizing thing that we do. It’s where empathy is born, where intimacy is born—because of eye contact, because we can hear the tones of another person’s voice, sense their body movements, sense their presence. It’s where we learn about other people. But, without meaning to, without having made a plan, we’ve actually moved away from conversation in a way that my research was showing is hurting us.

JS: How are cell phones and other technologies hurting us?

ST: Eighty-nine percent of Americans say that during their last social interaction, they took out a phone, and 82 percent said that it deteriorated the conversation they were in. Basically, we’re doing something that we know is hurting our interactions.

I’ll point to a study. If you put a cell phone into a social interaction, it does two things: First, it decreases the quality of what you talk about, because you talk about things where you wouldn’t mind being interrupted, which makes sense, and, secondly, it decreases the empathic connection that people feel toward each other.

So, even something as simple as going to lunch and putting a cell phone on the table decreases the emotional importance of what people are willing to talk about, and it decreases the connection that the two people feel toward one another. If you multiply that by all of the times you have a cell phone on the table when you have coffee with someone or are at breakfast with your child or are talking with your partner about how you’re feeling, we’re doing this to each other 10, 20, 30 times a day.

JS: So, why are humans so vulnerable to the allure of the cell phone, if it’s actually hurting our interactions?

ST: Cell phones make us promises that are like gifts from a benevolent genie—that we will never have to be alone, that we will never be bored, that we can put our attention wherever we want it to be, and that we can multitask, which is perhaps the most seductive of all. That ability to put your attention wherever you want it to be has become the thing people want most in their social interactions—that feeling that you don’t have to commit yourself 100 percent and you can avoid the terror that there will be a moment in an interaction when you’ll be bored.

Actually allowing yourself a moment of boredom is crucial to human interaction and it’s crucial to your brain as well. When you’re bored, your brain isn’t bored at all—it’s replenishing itself, and it needs that down time.

We’re very susceptible to cell phones, and we even get a neurochemical high from the constant stimulation that our phones give us.

I’ve spent the last 20 years studying how compelling technology is, but you know what? We can still change. We can use our phones in ways that are better for our kids, our families, our work, and ourselves. It’s the wrong analogy to say we’re addicted to our technology. It’s not heroin.

JS: One thing that struck me in your book was that many people who you interviewed talked about the benefits of handling conflict or difficult emotional issues online. They said they could be more careful with their responses and help decrease interpersonal tensions. That seems like a good thing. What’s the problem with that idea?

ST: It was a big surprise when I did the research for my book to learn how many people want to dial down fighting or dealing with difficult emotional issues with a partner or with their children by doing it online.

But let’s take the child example. If you do that with your child, if you only deal with them in this controlled way, you are basically playing into your child’s worst fear—that their truth, their rage, their unedited feelings, are something that you can’t handle. And that’s exactly what a parent shouldn’t be saying to a child. Your child doesn’t need to hear that you can’t take and accept and honor the intensity of their feelings.

People need to share their emotions—I feel very strongly about this. I understand why people avoid conflict, but people who use this method end up with children who think that the things they feel aren’t OK. There’s a variant of this, which is interesting, where parents give their children robots to talk to or want their children to talk to Siri, because somehow that will be a safer place to get out their feelings. Again, that’s exactly what your child doesn’t need.

JS: Some studies seem to show that increased social media use actually increases social interaction offline. I wonder how this squares with your thesis?

ST: How I interpret that data is that if you’re a social person, a socially active person, your use of social media becomes part of your social profile. And I think that’s great. My book is not anti-technology; it’s pro-conversation. So, if you find that your use of social media increases your number of face-to-face conversations, then I’m 100 percent for it.

Another person who might be helped by social media is someone who uses it for taking baby steps toward meeting people for face-to-face conversations. If you’re that kind of person, I’m totally supportive. 

I’m more concerned about people for whom social media becomes a kind of substitute, who literally post something on Facebook and just sit there and watch whether they get 100 likes on their picture, whose self-worth and focus becomes dictated by how they are accepted, wanted, and desired by social media.

And I’m concerned about the many other situations in which you and I are talking at a dinner party with six other people, and everyone is texting at the meal and applying the “three-person rule”—that three people have to have their heads up before anyone feels it’s safe to put their head down to text. In this situation, where everyone is both paying attention and not paying attention, you end up with nobody talking about what’s really on their minds in any serious, significant way, and we end up with trivial conversations, not feeling connected to one another.

JS: You also write about how conversation affects the workplace environment. Aren’t conversations just distractions to getting work done? Why support conversation at work?

ST: In the workplace, you need to create sacred spaces for conversation because, number one, conversation actually increases the bottom line. All the studies show that when people are allowed to talk to each other, they do better—they’re more collaborative, they’re more creative, they get more done.

It’s very important for companies to make space for conversation in the workplace. But if a manager doesn’t model to employees that it’s OK to be off of their email in order to have conversation, nothing is going to get accomplished. I went to one workplace that had cappuccino machines every 10 feet and tables the right size for conversation, where everything was built for conversation. But people were feeling that the most important way to show devotion to the company was answering their email immediately. You can’t have conversation if you have to be constantly on your email. Some of the people I interviewed were terrified to be away from their phones. That translates into bringing your cell phone to breakfast and not having breakfast with your kids.

JS: If technology is so ubiquitous yet problematic, what recommendations do you make for keeping it at a manageable level without getting so hooked?

ST: The path ahead is not a path where we do without technology, but of living in greater harmony with it. Among the first steps I see is to create sacred spaces—the kitchen, the dining room, the car—that are device-free and set aside for conversation. When you have lunch with a friend or colleague or family member, don’t put a phone on the table between you. Make meals a time when you are there to listen and be heard.

When we move in and out of conversations with our friends in the room and all the people we can reach on our phones, we miss out on the kinds of conversations where empathy is born and intimacy thrives. I met a wise college junior who spoke about the “seven-minute rule”: It takes seven minutes to know if a conversation is going to be interesting. And she admitted that she rarely was willing to put in her seven minutes. At the first “lull,” she went to her phone. But it’s when we stumble, hesitate, and have those “lulls” that we reveal ourselves most to each other.

So allow for those human moments, accept that life is not a steady “feed,” and learn to savor the pace of conversation—for empathy, for community, for creativity.

About the Author

Jill Suttie

Jill Suttie, Psy.D., is Greater Good’s former book review editor and now serves as a staff writer and contributing editor for the magazine. She received her doctorate of psychology from the University of San Francisco in 1998 and was a psychologist in private practice before coming to Greater Good.

Is the internet killing off language?

Emojis and micro-blog slang are changing the way we communicate

The internet is changing the way we communicate. LOL, awks, amazeballs, BRB, the use of emoji and emoticons – even written-out facial expressions such as 'sad face' – have all become standard in digital communications. So ingrained, in fact, that they're changing the way we write and even talk.

"People are becoming less concerned with grammar, spelling and sentence structure, and more concerned with getting their message across," says Gavin Hammar, CEO and founder of Sendible , a UK-based social media dashboard for business.

There's no doubt that the consumption of abbreviated digital content is having a huge effect on language. "Over the last five years attention spans have shortened considerably, which is reflected in the contracted forms of language we see in social media," says Robin Kermode, founder of communications coaching consultancy Zone2 and author of the book 'Speak So Your Audience Will Listen: A practical guide for anyone who has to speak to another human being'.

However, some think that the internet has made us better communicators since we increasingly use much more streamlined language. "To get a message across using Twitter for example, it must be concise and must conform to the tone used there, which includes abbreviations, acronyms and emoticons," says Hammar.

What about emoticons and emojis?

The fastest-growing 'new language' in the world is that of emoticons (faces) and emojis (images of objects, which hail from Japan) – one of the biggest changes caused by digital communications. "Facial expressions, visual presence and body language have always been vital to being a confident speaker, but now emojis are blurring the lines between verbal and written communication," thinks Kermode, who adds that cavemen had early versions of emojis on the sides of their caves. "Pictures, cartoons or emojis are 'shortcuts' so we can be clear about what our message really means."

If you mainly use emojis, why not get a keyboard based around smiley faces and cartoon icons? That's exactly what Swyft Media recently created, and while it's more of a PR stunt the keyboards of the future will probably contain at least some emojis.

How emojis add meaning

Emoticons and emoji are arguably more meaningful than slang and shorthand, which can be too easily misunderstood. "I once witnessed a girl being dumped in a text, which consisted of a message with just five letters, 'U R MY X' – linguistically economic, but emotionally harsh," says Kermode. Trouble is, the sender had actually meant 'YOU ARE MINE. X'. "If he'd added three emojis – like a smiley face, a heart and a wedding ring, he might now be happily married!"

The same goes for a statement such as "I NEED TO SPEAK TO YOU RIGHT NOW", which needs a qualifying emoticon or emoji to give it meaning. "It could signal an angry meeting or a passionate meeting but add a coffee cup, a big smiley face or an angry face and it becomes clear what's really going on," says Kermode.

They may be derided by traditionalists, but emoticons and emojis used to describe mood are the body language add-on that the written word has always lacked. In most instances, these icons represent language evolution and progress, not regression.

The web's positive effects on writing

Some think that the internet is actually sharpening up writing skills, particularly of professional writers, creating new niches and specialisms. "[The internet] lays bare the disparity between good and bad copy, which has resulted in writers and editors becoming better educated and more aware of global grammatical standards, raising the bar overall," says Paul Parreira, founder of digital content creation agency Company Cue , which has a network of 800 highly skilled writers and programming experts working in 32 languages.

He thinks that the internet is also driving language to become more globalised, with Americanisms such as 'road trip', 'what's up?' and 'like' (used as a conversational link) now ingrained into what's fast being called 'International English' or ELF (English as a Lingua Franca). It has nothing to do with where the language originated, and often those who use a basic form of ELF online can understand each other far more easily than native English speakers can.

However, online English has also spawned new specialisms and skills among professional, often native English-speaking writers. "Writing has become more idiosyncratic and unique," says Parreira, "creating new breeds of writers – those that specialise in short form and those that focus on long form … it's rare to find writers who can excel in both."

Jamie is a freelance tech, travel and space journalist based in the UK. He’s been writing regularly for Techradar since it was launched in 2008 and also writes regularly for Forbes, The Telegraph, the South China Morning Post, Sky & Telescope and the Sky At Night magazine as well as other Future titles T3, Digital Camera World, All About Space and Space.com. He also edits two of his own websites, TravGear.com and WhenIsTheNextEclipse.com, that reflect his obsession with travel gear and solar eclipse travel. He is the author of A Stargazing Program For Beginners (Springer, 2015).

Freedom of Expression on the Internet

By William Fisher

Last Updated June 14, 2001

Introduction

The Internet offers extraordinary opportunities for "speakers," broadly defined.  Political candidates, cultural critics, corporate gadflies -- anyone who wants to express an opinion about anything -- can make their thoughts available to a world-wide audience far more easily than has ever been possible before.  A large and growing group of Internet participants have seized that opportunity.

Some observers find the resultant outpouring of speech exhilarating.  They see in it nothing less than the revival of democracy and the restoration of community.  Other observers find the amount -- and, above all, the kind of speech -- that the Internet has stimulated offensive or frightening.  Pornography, hate speech, lurid threats -- these flourish alongside debates over the future of the Democratic Party and exchanges of views concerning flyfishing in Patagonia.  This phenomenon has provoked various efforts to limit the kind of speech in which one may engage on the Internet -- or to develop systems to "filter out" the more offensive material.  This module examines some of the legal issues implicated by the increasingly bitter struggle between the advocates of "free speech" and the advocates of filtration and control.

Background

Before plunging into the details of the proliferating controversies over freedom of expression on the Internet, you need some background information on two topics.

The first and more obvious is the Free-Speech Clause of the First Amendment to the United States Constitution.  The relevance and authority of the First Amendment should not be exaggerated; as several observers have remarked, "on the Internet, the First Amendment is just a local ordinance."  However, free-expression controversies that arise in the United States inevitably implicate the Constitution, and the arguments deployed in the course of American First-Amendment fights often inform or infect the handling of free-expression controversies in other countries.  The upshot: First-Amendment jurisprudence is worth studying.

Unfortunately, that jurisprudence is large and arcane.  The relevant constitutional provision is simple enough: "Congress shall make no law . . . abridging the freedom of speech, or of the press . . . ."  But the case law that, over the course of the twentieth century, has been built upon this foundation is complex.  An extremely abbreviated outline of the principal doctrines would go as follows:

  • If a law gives no clear notice of the kind of speech it prohibits, it’s "void for vagueness."
  • If a law burdens substantially more speech than is necessary to advance a compelling government interest, it’s unconstitutionally "overbroad."
  • A government may not force a person to endorse any symbol, slogan, or pledge.
  • Governmental restrictions on the "time, place, and manner" in which speech is permitted are constitutional if and only if they are "content neutral," both on their face and as applied; they leave substantial other opportunities for speech to take place; and they "narrowly serve a significant state interest."
  • On state-owned property that does not constitute a "public forum," government may restrict speech in any way that is reasonable in light of the nature and purpose of the property in question.
  • Content-based governmental restrictions on speech are unconstitutional unless they advance a "compelling state interest."  To this principle, there are six exceptions:

1. Speech that is likely to lead to imminent lawless action may be prohibited.

2. "Fighting words" -- i.e., words so insulting that people are likely to fight back -- may be prohibited.

3. Obscenity -- i.e., erotic expression, grossly or patently offensive to an average person, that lacks serious artistic or social value -- may be prohibited.

4. Child pornography may be banned whether or not it is legally obscene and whether or not it has serious artistic or social value, because it induces people to engage in lewd displays, and the creation of it threatens the welfare of children.

5. Defamatory statements may be prohibited.  (In other words, the making of such statements may constitutionally give rise to civil liability.)  However, if the target of the defamation is a "public figure," she must prove that the defendant acted with "malice."  If the target is not a "public figure" but the statement involved a matter of "public concern," the plaintiff must prove that the defendant acted with negligence concerning its falsity.

6. Commercial speech may be banned only if it is misleading, pertains to illegal products, or directly advances a substantial state interest with a degree of suppression no greater than is reasonably necessary.

If you are familiar with all of these precepts -- including the various terms of art and ambiguities they contain -- you're in good shape. If not, you should read some more about the First Amendment.  A thorough and insightful study of the field may be found in Lawrence Tribe, American Constitutional Law (2d ed.), chapter 12.  Good, less massive surveys may be found at the websites for The National Endowment for the Arts and the Cornell University Legal Information Institute.

The second of the two kinds of background you might find helpful is a brief introduction to the current debate among academics over the character and desirability of what has come to be called "cyberdemocracy."  Until a few years ago, many observers thought that the Internet offered a potential cure to the related diseases that have afflicted most representative democracies in the late twentieth century:  voter apathy; the narrowing of the range of political debate caused in part by the inertia of a system of political parties; the growing power of the media, which in turn seems to reduce discussion of complex issues to a battle of "sound bites"; and the increasing influence of private corporations and other sources of wealth.  All of these conditions might be ameliorated, it was suggested, by the ease with which ordinary citizens could obtain information and then cheaply make their views known to one another through the Internet.

A good example of this perspective is a recent article by Bernard Bell , where he suggests that “[t]he Internet has, in many ways, moved society closer to the ideal Justice Brennan set forth so eloquently in New York Times v. Sullivan .  It has not only made debate on public issues more 'uninhibited, robust, and wide-open,' but has similarly invigorated discussion of non-public issues. By the same token, the Internet has empowered smaller entities and even individuals, enabling them to widely disseminate their messages and, indeed, reach audiences as broad as those of established media organizations.”

Recently, however, this rosy view has come under attack.  The Internet, skeptics claim, is not a giant "town hall."  The kinds of information flows and discussions it seems to foster are, in some ways, disturbing.  One source of trouble is that the Internet encourages like-minded persons (often geographically dispersed) to cluster together in bulletin boards and other virtual clubs.  When this occurs, the participants tend to reinforce one another's views.  The resultant "group polarization" can be ugly.  More broadly, the Internet seems at least potentially corrosive of something we have long taken for granted in the United States: a shared political culture.  When most people read the same newspaper or watch the same network television news broadcast each day, they are forced at least to glance at stories they might find troubling and become aware of persons and groups who hold views sharply different from their own.  The Internet makes it easy for people to avoid such engagement -- by enabling people to select their sources of information and their conversational partners.  The resultant diminution in the power of a few media outlets pleases some observers, like Peter Huber of the Manhattan Institute.  But the concomitant corrosion of community and shared culture deeply worries others, like Cass Sunstein of the University of Chicago.

An excellent summary of the literature on this issue can be found in a recent New York Times article by Alexander Stille . If you are interested in digging further into these issues, we recommend the following books:

  • Cass Sunstein, Republic.com (Princeton Univ. Press 2001)
  • Peter Huber, Law and Disorder in Cyberspace: Abolish the F.C.C. and Let Common Law Rule the Telecosm (Oxford Univ. Press 1997)

To test some of these competing accounts of the character and potential of discourse on the Internet, we suggest you visit - or, better yet, participate in - some of the sites at which Internet discourse occurs. Here's a sampler:

  • MSNBC Political News Discussion Board

Current Controversies

1.  Restrictions on Pornography

Three times in the past five years, critics of pornography on the Internet have sought, through federal legislation, to prevent children from gaining access to it.  The first of these efforts was the Communications Decency Act of 1996 (commonly known as the "CDA"), which (a) criminalized the "knowing" transmission over the Internet of "obscene or indecent" messages to any recipient under 18 years of age and (b) prohibited the "knowin[g]" sending or displaying to a person under 18 of any message "that, in context, depicts or describes, in terms patently offensive as measured by contemporary community standards, sexual or excretory activities or organs."  Persons and organizations who take "good faith, . . . effective . . . actions" to restrict access by minors to the prohibited communications, or who restricted such access by requiring certain designated forms of age proof, such as a verified credit card or an adult identification number, were exempted from these prohibitions.

The CDA was widely criticized by civil libertarians and soon succumbed to a constitutional challenge.  In 1997, the United States Supreme Court struck down the statute, holding that it violated the First Amendment in several ways:

  • because it restricted speech on the basis of its content, it could not be justified as a "time, place, and manner" regulation;
  • its references to "indecent" and "patently offensive" messages were unconstitutionally vague;
  • its supposed objectives could all be achieved through regulations less restrictive of speech;
  • it failed to exempt from its prohibitions sexually explicit material with scientific, educational, or other redeeming social value.

Two aspects of the Court's ruling are likely to have considerable impact on future constitutional decisions in this area.  First, the Court rejected the Government's effort to analogize the Internet to traditional broadcast media (especially television), which the Court had previously held could be regulated more strictly than other media.  Unlike TV, the Court reasoned, the Internet has not historically been subject to extensive regulation, is not characterized by a limited spectrum of available frequencies, and is not "invasive."  Consequently, the Internet enjoys full First-Amendment protection.  Second, the Court encouraged the development of technologies that would enable parents to block their children's access to Internet sites offering kinds of material the parents deemed offensive.

A year later, pressured by vocal opponents of Internet pornography -- such as "Enough is Enough" and the National Law Center for Children and Families -- Congress tried again.  The 1998 Child Online Protection Act (COPA) obliged commercial Web operators to restrict access to material considered "harmful to minors" -- which was, in turn, defined as any communication, picture, image, graphic image file, article, recording, writing or other matter of any kind that is obscene or that meets three requirements:

(1) "The average person, applying contemporary community standards, would find, taking the material as a whole and with respect to minors, is designed to appeal to, or is designed to pander to, the prurient interest." (2) The material "depicts, describes, or represents, in a manner patently offensive with respect to minors, an actual or simulated sexual act or sexual conduct, an actual or simulated normal or perverted sexual act or a lewd exhibition of the genitals or post-pubescent female breast." (3) The material, "taken as a whole, lacks serious literary, artistic, political, or scientific value for minors."  

Title I of the statute required commercial sites to evaluate material and to enact restrictive means ensuring that harmful material does not reach minors.  Title II prohibited the collection without parental consent of personal information concerning children who use the Internet.  Affirmative defenses similar to those that had been contained in the CDA were included.

Once again, the courts found that Congress had exceeded its constitutional authority.  In the judgment of the Third Circuit Court of Appeals , the critical defect of COPA was its reliance upon the criterion of "contemporary community standards" to determine what kinds of speech are permitted on the Internet:

Because material posted on the Web is accessible by all Internet users worldwide, and because current technology does not permit a Web publisher to restrict access to its site based on the geographic locale of each particular Internet user, COPA essentially requires that every Web publisher subject to the statute abide by the most restrictive and conservative state's community standard in order to avoid criminal liability.

The net result was to impose burdens on permissible expression more severe than can be tolerated by the Constitution.  The court acknowledged that its ruling did not leave much room for constitutionally valid restrictions on Internet pornography:

We are forced to recognize that, at present, due to technological limitations, there may be no other means by which harmful material on the Web may be constitutionally restricted, although, in light of rapidly developing technological advances, what may now be impossible to regulate constitutionally may, in the not-too-distant future, become feasible.  

In late 2000, the anti-pornography forces tried once more.  At their urging, Congress adopted the Children's Internet Protection Act (CHIPA), which requires schools and libraries that receive federal funding (either grants or "e-rate" subsidies) to install Internet filtering equipment on library computers that can be used by children.  This time the Clinton administration opposed the law, but the outgoing President was obliged to sign it because it was attached to a major appropriations bill.

Opposition to CHIPA is intensifying.  Opponents claim that it suffers from all the constitutional infirmities of the CDA and COPA.  In addition, it will reinforce one form of the "digital divide" -- by subjecting poor children, who lack home computers and must rely upon public libraries for access to the Internet, to restrictions that more wealthy children can avoid.  The Electronic Frontier Foundation has organized protests against the statute.   In April of this year, several civil-liberties groups and public library associations filed suit in the Eastern District of Pennsylvania seeking a declaration that the statute is unconstitutional.  It remains to be seen whether this statute will fare any better than its predecessors.

The CDA, COPA, and CHIPA have one thing in common: they all involve overt governmental action -- and thus are subject to challenge under the First Amendment.  Some observers of the Internet argue that more dangerous than these obvious legislative initiatives are the efforts by private Internet Service Providers to install filters on their systems that screen out kinds of content that the ISPs believe their subscribers would find offensive.  Because policies of this sort are neither mandated nor encouraged by the government, they would not, under conventional constitutional principles, constitute "state action" -- and thus would not be vulnerable to constitutional scrutiny.  Such a result, argues Larry Lessig, would be pernicious; to avoid it, we need to revise our understanding of the "state action" doctrine.  Charles Fried disagrees:

Note first of all that the state action doctrine does not only limit the power of courts to protect persons from private power that interferes with public freedoms. It also protects individuals from the courts themselves, which are, after all, another government agency. By limiting the First Amendment to protecting citizens from government (and not from each other), the state action doctrine enlarges the sphere of unregulated discretion that individuals may exercise in what they think and say. In the name of First Amendment "values," courts could perhaps inquire whether I must grant access to my newspaper to opinions I abhor, must allow persons whose moral standards I deplore to join my expressive association, or must remain silent so that someone else gets a chance to reach my audience with a less appealing but unfamiliar message. Such inquiries, however, would place courts in the business of deciding which opinions I would have to publish in my newspaper and which would so distort my message that putting those words in my mouth would violate my freedom of speech; what an organization's associational message really is and whether forcing the organization to accept a dissenting member would distort that message; and which opinions, though unable to attract an audience on their own, are so worthy that they must not be drowned out by more popular messages. I am not convinced that whatever changes the Internet has wrought in our environment require the courts to mount this particular tiger.

"Perfect Freedom or Perfect Control," 114 Harvard Law Review 606, 635 (2000).

The United States may have led the way in seeking (unsuccessfully, thus far) to restrict the flow of pornography on the Internet, but the governments of other countries are now joining the fray.  For the status of the struggle in a few jurisdictions, you might read:

  • Joseph C. Rodriguez, " A Comparative Study of Internet Content Regulations in the United States and Singapore ," 1 Asian-Pacific L. & Pol'y J. 9 (February 2000).   (Singapore)

In a provocative recent article, Amy Adler  argues that the effort to curb child pornography online -- the kind of pornography that disgusts the most people -- is fundamentally misguided.  Far from reducing the incidence of the sexual abuse of children, governmental efforts to curtail child pornography only increase it.  A summary of her argument is available here .  The full article is available here .

2.  Threats

When does speech become a threat?  Put more precisely, when does a communication over the Internet inflict -- or threaten to inflict -- sufficient damage on its recipient that it ceases to be protected by the First Amendment and properly gives rise to criminal sanctions?  Two recent cases addressed that issue from different angles.

The first was popularly known as the "Jake Baker" case.  In 1994 and 1995, Abraham Jacob Alkhabaz, also known as Jake Baker, was an undergraduate student at the University of Michigan.  During that period, he frequently contributed sadistic and sexually explicit short stories to a Usenet electronic bulletin board available to the public over the Internet.  In one such story, he described in detail how he and a companion tortured, sexually abused, and killed a young woman, who was given the name of one of Baker's classmates.  (Excerpts from the story, as reprinted in the Court of Appeals decision in the case, are available here . WARNING: This material is very graphic in nature and may be troubling to some readers.  It is presented in order to provide a complete view of the facts of the case.)  Baker's stories came to the attention of another Internet user, who assumed the name of Arthur Gonda.  Baker and Gonda then exchanged many email messages, sharing their sadistic fantasies and discussing the methods by which they might kidnap and torture a woman in Baker's dormitory.  When these stories and email exchanges came to light, Baker was indicted for violation of 18 U.S.C. 875(c), which provides:  

Whoever transmits in interstate or foreign commerce any communication containing any threat to kidnap any person or any threat to injure the person of another, shall be fined under this title or imprisoned not more than five years, or both.  

Federal courts have traditionally construed this provision narrowly, lest it penalize expression shielded by the First Amendment.  Specifically, the courts have required that a defendant's statement, in order to trigger criminal sanctions, constitute a "true threat" -- as distinguished from, for example, inadvertent statements, hyperbole, innocuous talk, or political commentary.  Baker moved to quash the indictment on the ground that his statements on the Internet did not constitute "true threats." The District Court agreed , ruling that the class of women supposedly threatened was not identified in Baker's exchanges with Gonda with the degree of specificity required by the First Amendment and that, although Baker had expressed offensive desires, "it was not constitutionally permissible to infer an intention to act on a desire from a simple expression of desire."  The District Judge's concluding remarks concerning the character of threatening speech on the Internet bear emphasis:  

Baker's words were transmitted by means of the Internet, a relatively new communications medium that is itself currently the subject of much media attention.  The Internet makes it possible with unprecedented ease to achieve world-wide distribution of material, like Baker's story, posted to its public areas.  When used in such a fashion, the Internet may be likened to a newspaper with unlimited distribution and no locatable printing press - and with no supervising editorial control. But Baker's e-mail messages, on which the superseding indictment is based, were not publicly published but privately sent to Gonda.  While new technology such as the Internet may complicate analysis and may sometimes require new or modified laws, it does not in this instance qualitatively change the analysis under the statute or under the First Amendment.  Whatever Baker's faults, and he is to be faulted, he did not violate 18 U.S.C. § 875(c).  

Two of the three judges on the panel that heard the appeal agreed .  In their view, a violation of 875(c) requires a demonstration, first, that a reasonable person would interpret the communication in question as serious expression of an intention to inflict bodily harm and, second, that a reasonable person would perceive the communications as being conveyed "to effect some change or achieve some goal through intimidation."  Baker's speech failed, in their judgment, to rise to this level.

Judge Krupansky, the third member of the panel, dissented .  In a sharply worded opinion, he denounced the majority for compelling the prosecution to meet a standard higher than Congress intended or than the First Amendment required.  In his view, "the pertinent inquiry is whether a jury could find that a reasonable recipient of the communication would objectively tend to believe that the speaker was serious about his stated intention."  A reasonable jury, he argued, could conclude that Baker's speech met this standard -- especially in light of the fact that the woman named in the short story had, upon learning of it, experienced a "shattering traumatic reaction that resulted in recommended psychological counselling."

For additional information on the case, see Adam S. Miller, The Jake Baker Scandal: A Perversion of Logic .

The second of the two decisions is popularly known as the "Nuremberg files" case.  In 1995, the American Coalition of Life Activists (ACLA), an anti-abortion group that advocates the use of force in their efforts to curtail abortions, created a poster featuring what the ACLA described as the "Dirty Dozen," a group of doctors who performed abortions.  The posters offered "a $ 5,000 [r]eward for information leading to arrest, conviction and revocation of license to practice medicine" of the doctors in question, and listed their home addresses and, in some instances, their phone numbers.  Versions of the poster were distributed at anti-abortion rallies and later on television.  In 1996, an expanded list of abortion providers, now dubbed the "Nuremberg files," was posted on the Internet with the assistance of an anti-abortion activist named Neil Horsley.  The Internet version of the list designated doctors and clinic workers who had been attacked by anti-abortion terrorists in two ways:  the names of people who had been murdered were crossed out; the names of people who had been wounded were printed in grey.  (For a version of the Nuremberg Files web site, click here. WARNING: This material is very graphic in nature and may be disturbing to many readers.  It is presented in order to provide a complete view of the facts of the case).

The doctors named and described on the list feared for their lives.  In particular, some testified that they feared that, by publicizing their addresses and descriptions, the ACLA had increased the ease with which terrorists could locate and attack them -- and that, by publicizing the names of doctors who had already been killed, the ACLA was encouraging those attacks.

Some of the doctors sought recourse in the courts.  They sued the ACLA, twelve individual anti-abortion activists and an affiliated organization, contending that their actions violated the federal Freedom of Access to Clinic Entrances Act of 1994 (FACE), 18 U.S.C. §248, and the Racketeer Influenced and Corrupt Organizations Act (RICO), 18 U.S.C. §1962.  In an effort to avoid a First-Amendment challenge to the suit, the trial judge instructed the jury that defendants could be liable only if their statements were "true threats."  The jury, concluding that the ACLA had indeed made such true threats, awarded the plaintiffs $107 million in actual and punitive damages.  The trial court then enjoined the defendants from making or distributing the posters, the webpage or anything similar.

This past March, a panel of the Court of Appeals for the Ninth Circuit overturned the verdict , ruling that it violated the First Amendment.  Judge Kozinski began his opinion by likening the anti-abortion movement to other "political movements in American history," such as the Patriots in the American Revolution, abolitionism, the labor movement, the anti-war movement in the 1960s, the animal-rights movement, and the environmental movement.  All, he argued, have had their "violent fringes," which have lent to the language of their non-violent members "a tinge of menace."  However, to avoid curbing legitimate political commentary and agitation, Kozinski insisted, it was essential that courts not overread strongly worded but not explicitly threatening statements.  Specifically, he held that:  

Defendants can only be held liable if they "authorized, ratified, or directly threatened" violence. If defendants threatened to commit violent acts, by working alone or with others, then their statements could properly support the verdict. But if their statements merely encouraged unrelated terrorists, then their words are protected by the First Amendment.  

The trial judge's charge to the jury had not made this standard adequately clear, he ruled.  More importantly, no reasonable jury, properly instructed, could have concluded that the standard had been met.  Accordingly, the trial judge was instructed to dissolve the injunction and enter judgment for the defendants on all counts.

In the course of his opinion, Kozinski offered the following reflections on the fact that the defendants' speech had occurred in public discourse -- including the Internet:  

In considering whether context could import a violent meaning to ACLA's non-violent statements, we deem it highly significant that all the statements were made in the context of public discourse, not in direct personal communications. Although the First Amendment does not protect all forms of public speech, such as statements inciting violence or an imminent panic, the public nature of the speech bears heavily upon whether it could be interpreted as a threat.  As we held in McCalden v. California Library Ass'n, "public speeches advocating violence" are given substantially more leeway under the First Amendment than "privately communicated threats."  There are two reasons for this distinction: First, what may be hyperbole in a public speech may be understood (and intended) as a threat if communicated directly to the person threatened, whether face-to-face, by telephone or by letter. In targeting the recipient personally, the speaker leaves no doubt that he is sending the recipient a message of some sort. In contrast, typical political statements at rallies or through the media are far more diffuse in their focus because they are generally intended, at least in part, to shore up political support for the speaker's position.  Second, and more importantly, speech made through the normal channels of group communication, and concerning matters of public policy, is given the maximum level of protection by the Free Speech Clause because it lies at the core of the First Amendment.

2.  Intellectual Property

The First Amendment forbids Congress to make any law “abridging the freedom of speech.”  The copyright statute plainly interferes with certain kinds of speech: it prevents people from “publicly performing” or “reproducing” copyrighted material without permission.  In other words, several ways in which people might be inclined to “speak” have been declared illegal by Congress.  Does this imply that the copyright statute as a whole – or, less radically, some specific applications of it – should be deemed unconstitutional?

Courts confronted with this question have almost invariably answered:  no.  Two justifications are commonly offered in support of the compatibility of copyright and “freedom of speech.”  First, Article I, Section 8, Clause 8 of the Constitution explicitly authorizes Congress “To promote the Progress of Science and useful Arts, by securing for limited Times to Authors and Inventors the exclusive Right to their respective Writings and Discoveries,” and there is no indication that the drafters or ratifiers of the First Amendment intended to nullify this express grant of lawmaking power.  Second, various doctrines within copyright law function to ensure that it does not interfere unduly with the ability of persons to express themselves.  Specifically, the principle that only the particular way in which an idea is “expressed” is copyrightable, not the idea itself, ensures that the citizenry will be able to discuss concepts, arguments, facts, etc. without restraint.  Even more importantly, the fair use doctrine (discussed in the first module) provides a generous safe harbor to people making reasonable uses of copyrighted material for educational, critical, or scientific purposes.  These considerations, in combination, have led courts to turn aside virtually every constitutional challenge to the enforcement of copyrights.

Very recently, some of the ways in which copyright law has been modified and then applied to activity on the Internet have prompted a growing number of scholars and litigants to suggest that the conventional methods for reconciling copyright law and the First Amendment need to be reexamined.  Two developments present the issue especially sharply:

(1) For reasons we explored in the second module, last summer a federal court in New York ruled that posting on a website a link to another website from which a web surfer can download a software program designed to break an encryption system constitutes “trafficking” in anti-circumvention technology in violation of the Digital Millennium Copyright Act.  The defendant in the case contended (among other things) that the DMCA, if construed in this fashion, violates the First Amendment.  Judge Kaplan rejected this contention, reasoning that a combination of the Copyright Clause and a generous understanding of the Necessary and Proper Clause of the Constitution provided constitutional support for the DMCA:

In enacting the DMCA, Congress found that the restriction of technologies for the circumvention of technological means of protecting copyrighted works "facilitate[s] the robust development and world-wide expansion of electronic commerce, communications, research, development, and education" by "mak[ing] digital networks safe places to disseminate and exploit copyrighted materials." That view cannot be dismissed as unreasonable. Section 1201(a)(2) of the DMCA therefore is a proper exercise of Congress' power under the Necessary and Proper Clause.

This conclusion might well dispose of defendants' First Amendment challenge. Given Congress' justifiable view that the DMCA is instrumental in carrying out the objective of the Copyright Clause, there arguably is no First Amendment objection to prohibiting the dissemination of means for circumventing technological methods for controlling access to copyrighted works. But the Court need not rest on this alone.

In determining the constitutionality of governmental restriction on speech, courts traditionally have balanced the public interest in the restriction against the public interest in the kind of speech at issue. This approach seeks to determine, in light of the goals of the First Amendment, how much protection the speech at issue merits. It then examines the underlying rationale for the challenged regulation and assesses how best to accommodate the relative weights of the free speech interest and the regulation. As Justice Brandeis wrote, freedom of speech is important both as a means to achieve a democratic society and as an end in itself. Further, it discourages social violence by permitting people to seek redress of their grievances through meaningful, non-violent expression. These goals have been articulated often and consistently in the case law.

The computer code at issue in this case does little to serve these goals. Although this Court has assumed that DeCSS has at least some expressive content, the expressive aspect appears to be minimal when compared to its functional component. Computer code primarily is a set of instructions which, when read by the computer, cause it to function in a particular way, in this case, to render intelligible a data file on a DVD. It arguably "is best treated as a virtual machine . . . ."

On the other side of this balance lie the interests served by the DMCA. Copyright protection exists to "encourage individual effort by personal gain" and thereby "advance public welfare" through the "promot[ion of] the Progress of Science and useful Arts." The DMCA plainly was designed with these goals in mind. It is a tool to protect copyright in the digital age. It responds to the risks of technological circumvention of access controlling mechanisms designed to protect copyrighted works distributed in digital form. It is designed to further precisely the goals articulated above, goals of unquestionably high social value. This is quite clear in the specific context of this case. Plaintiffs are eight major motion picture studios which together are largely responsible for the development of the American film industry. Their products reach hundreds of millions of viewers internationally and doubtless are responsible for a substantial portion of the revenue in the international film industry each year. To doubt the contribution of plaintiffs to the progress of the arts would be absurd. DVDs are the newest way to distribute motion pictures to the home market, and their popularity is growing rapidly.

The security of DVD technology is central to the continued distribution of motion pictures in this format. The dissemination and use of circumvention technologies such as DeCSS would permit anyone to make flawless copies of DVDs at little expense. Without effective limits on these technologies, copyright protection in the contents of DVDs would become meaningless and the continued marketing of DVDs impractical. This obviously would discourage artistic progress and undermine the goals of copyright.

The balance between these two interests is clear. Executable computer code of the type at issue in this case does little to further traditional First Amendment interests. The DMCA, in contrast, fits squarely within the goals of copyright, both generally and as applied to DeCSS. In consequence, the balance of interests in this case falls decidedly on the side of plaintiffs and the DMCA.

One of the axes of debate in the ongoing appeal of the lower-court ruling concerns this issue.  For a challenge to Judge Kaplan's discussion of the First Amendment, see the amicus brief submitted to the Second Circuit by a group of law professors.

(2) Some scholars believe that the ambit of the fair use doctrine should and will shrink on the Internet.  Why?  Because, in their view, the principal purpose of the doctrine is to enable people to use copyrighted materials in ways that are socially valuable but that are likely, in the absence of a special legal privilege, to be blocked by transaction costs.  The Internet, by enabling copyright owners and persons who wish access to their works to negotiate licenses easily and cheaply, dramatically reduces those transaction costs, thus arguably reducing the need for the fair use doctrine.  Recall that one of the justifications conventionally offered to explain the compatibility of copyright law and the First Amendment is the safety valve afforded critical commentary and educational activity by the fair use doctrine.  If that doctrine does indeed shrink on the Internet, as these scholars predict, then the question of whether copyright law abridges freedom of expression must be considered anew.


Discussion Topics

1.  Are you persuaded by the judicial opinions declaring the CDA and COPA unconstitutional?  Should CHIPA suffer the same fate?  Are there any ways in which government might regulate the Internet so as to shield children from pornography?

2.  Some authors have suggested that the best way to respond to pornography on the Internet is through "zoning."  For example, Christopher Furlow suggests the use of “restricted top-level domains” or “rTLDs,” which would function similarly to area codes to identify particular areas of the Internet and make it easier for parents to control what type of material their children are exposed to online.  See Erogenous Zoning on the Cyber-Frontier, 5 Va. J.L. & Tech. 7, 4 (Spring 2000).  Do you find this proposal attractive?  practicable?  effective?

3.   Elizabeth Marsh raises the following question:  Suppose that the Ku Klux Klan sent unsolicited email messages to large numbers of African-Americans and Jews.  Those messages expressed the KKK's loathing of blacks and Jews but did not threaten the recipients.  Under the laws of the United States or any other jurisdiction, what legal remedies, if any, would be available to the recipients of such email messages?  Should the First Amendment be construed to shield "hate spam" of this sort?  More broadly, should "hate spam" be tolerated or suppressed?  For Marsh's views on the matter, see "Purveyors of Hate on the Internet: Are We Ready for Hate Spam?", 17 Ga. St. U. L. Rev. 379 (Winter 2000).

4.  Were the Jake Baker and Nuremberg Files cases decided correctly?  How would you draw the line between "threats" subject to criminal punishment and "speech" protected by the First Amendment?

5.  Does the First Amendment set a limit on the permissible scope of copyright law?  If so, how would you define that limit?

6.  Lyrissa Lidsky points out that the ways in which the Supreme Court has deployed the First Amendment to limit the application of the tort of defamation are founded on the assumption that most defamation suits will be brought against relatively powerful institutions (e.g., newspapers, television stations).  The Internet, by enabling relatively poor and powerless persons to broadcast to the world their opinions of powerful institutions (e.g., their employers, companies by which they feel wronged), increases the likelihood that, in the future, defamation suits will most often be brought by formidable plaintiffs against weak individual defendants.  If we believe that "[t]he Internet is . . . a powerful tool for equalizing imbalances of power by giving voice to the disenfranchised and by allowing more democratic participation in public discourse," we should be worried by this development.  Lidsky suggests that it may be necessary, in this altered climate, to reconsider the shape of the constitutional limitations on defamation.  Do you agree?  If so, how would you reformulate the relevant limitations?

7.  Like Lessig, Paul Berman suggests that the Internet should prompt us to reconsider the traditional "state action" doctrine that limits the kinds of interference with speech to which the First Amendment applies.  Berman supports this suggestion with the following example:  “…an online service provider recently attempted to take action against an entity that had sent junk e-mail on its service, a district court rejected the e-mailer's argument that such censorship of e-mail violated the First Amendment.  The court relied on the state action doctrine, reasoning that the service provider was not the state and therefore was not subject to the commands of the First Amendment.”  Such an outcome, he suggests, is unfortunate.  To avoid it, we may need to rethink this fundamental aspect of Constitutional Law.  Do you agree?  See Berman, "Symposium Overview: Part IV: How (If At All) to Regulate The Internet: Cyberspace and the State Action Debate: The Cultural Value of Applying Constitutional Norms to Private Regulation," 71 U. Colo. L. Rev. 1263 (Fall 2000).

Additional Resources

Memorandum Opinion, Mainstream Loudoun v. Loudoun County Library, U.S. District Court, Eastern District of Virginia, Case No. 97-2049-A (November 23, 1998)

Mainstream Loudoun v. Loudoun County Library (Tech Law Journal summary)

Lawrence Lessig, "Tyranny of the Infrastructure," Wired 5.07 (July 1997)

Board of Education v. Pico

ACLU Report, "Fahrenheit 451.2: Is Cyberspace Burning?"

Reno v. ACLU

The ACLU offers various materials relating to the Reno v. ACLU case.

Electronic Frontier Foundation (browse the Free Expression page, the Censorship & Free Expression archive, and the Content Filtering archive)

The Electronic Privacy Information Center (EPIC) offers links to various aspects of CDA litigation and discussion.

Platform for Internet Content Selection (PICS) (skim the "PICS and Intellectual Freedom FAQ" and browse "What Governments, Media and Individuals are Saying about PICS (pro and con)")

Jason Schlosberg, "Judgment on 'Nuremberg': An Analysis of Free Speech and Anti-Abortion Threats Made on the Internet," 7 B.U. J. Sci. & Tech. L. (Winter 2001)

CyberAngels.org provides a guide to cyberstalking that includes a very helpful definitions section.

Cyberstalking: A New Challenge for Law Enforcement and Industry – A Report from the Attorney General to the Vice President (August 1999) provides very helpful definitions and explanations related to cyberstalking, including First Amendment implications, and links to additional resources.

National Center for Victims of Crime

The Anti-Defamation League web site offers a wealth of resources for dealing with hate online, including guides for parents and filtering software.  The filtering software, called Hate Filter, is designed to give parents the ability to make decisions regarding what their children are exposed to online.  The ADL believes that "Censorship is not the answer to hate on the Internet. ADL supports the free speech guarantees embodied in the First Amendment of the United States Constitution, believing that the best way to combat hateful speech is with more speech."

Laura Lorek, "Sue the Bastards!" ZDNet, March 12, 2001.

"At Risk Online: Your Good Name," ZDNet, April 2001.

Jennifer K. Swartz, "Beyond the Schoolhouse Gates: Do Students Shed Their Constitutional Rights When Communicating to a Cyber-Audience?," 48 Drake L. Rev. 587 (2000).

Hate Speech on Social Media: Global Comparisons

A memorial outside Al Noor mosque in Christchurch, New Zealand.

  • Hate speech online has been linked to a global increase in violence toward minorities, including mass shootings, lynchings, and ethnic cleansing.
  • Policies used to curb hate speech risk limiting free speech and are inconsistently enforced.
  • Countries such as the United States grant social media companies broad powers in managing their content and enforcing hate speech rules. Others, including Germany, can force companies to remove posts within certain time periods.

Introduction

A mounting number of attacks on immigrants and other minorities has raised new concerns about the connection between inflammatory speech online and violent acts, as well as the role of corporations and the state in policing speech. Analysts say trends in hate crimes around the world echo changes in the political climate, and that social media can magnify discord. At their most extreme, rumors and invective disseminated online have contributed to violence ranging from lynchings to ethnic cleansing.

The response has been uneven, and the task of deciding what to censor, and how, has largely fallen to the handful of corporations that control the platforms on which much of the world now communicates. But these companies are constrained by domestic laws. In liberal democracies, these laws can serve to defuse discrimination and head off violence against minorities. But such laws can also be used to suppress minorities and dissidents.

How widespread is the problem?


Incidents have been reported on nearly every continent. Much of the world now communicates on social media, with nearly a third of the world’s population active on Facebook alone. As more and more people have moved online, experts say, individuals inclined toward racism, misogyny, or homophobia have found niches that can reinforce their views and goad them to violence. Social media platforms also offer violent actors the opportunity to publicize their acts.

A bar chart of the percentage agreeing that "people should be able to make statements that are offensive to minority groups publicly," with 67 percent of Americans in agreement.

Social scientists and others have observed how social media posts, and other online speech, can inspire acts of violence:

  • In Germany, a correlation was found between anti-refugee Facebook posts by the far-right Alternative for Germany party and attacks on refugees. Scholars Karsten Müller and Carlo Schwarz observed that upticks in attacks, such as arson and assault, followed spikes in hate-mongering posts.
  • In the United States, perpetrators of recent white supremacist attacks have circulated among racist communities online, and also embraced social media to publicize their acts. Prosecutors said the Charleston church shooter, who killed nine black clergy and worshippers in June 2015, engaged in a “self-learning process” online that led him to believe that the goal of white supremacy required violent action.
  • The 2018 Pittsburgh synagogue shooter was a participant in the social media network Gab, whose lax rules have attracted extremists banned by larger platforms. There, he espoused the conspiracy theory that Jews sought to bring immigrants into the United States and render whites a minority, before killing eleven worshippers at a refugee-themed Shabbat service. This “great replacement” trope, which was heard at the white supremacist rally in Charlottesville, Virginia, a year prior and originates with the French far right, expresses demographic anxieties about nonwhite immigration and birth rates.
  • The great replacement trope was in turn espoused by the perpetrator of the 2019 New Zealand mosque shootings, who killed forty-nine Muslims at prayer and sought to broadcast the attack on YouTube.
  • In Myanmar, military leaders and Buddhist nationalists used social media to slur and demonize the Rohingya Muslim minority ahead of and during a campaign of ethnic cleansing. Though Rohingya comprised perhaps 2 percent of the population, ethnonationalists claimed that Rohingya would soon supplant the Buddhist majority. The UN fact-finding mission said, “Facebook has been a useful instrument for those seeking to spread hate, in a context where, for most users, Facebook is the Internet [PDF].”
  • In India, lynch mobs and other types of communal violence, in many cases originating with rumors on WhatsApp groups, have been on the rise since the Hindu-nationalist Bharatiya Janata Party (BJP) came to power in 2014.
  • Sri Lanka has similarly seen vigilantism inspired by rumors spread online, targeting the Tamil Muslim minority. During a spate of violence in March 2018, the government blocked access to Facebook and WhatsApp, as well as the messaging app Viber, for a week, saying that Facebook had not been sufficiently responsive during the emergency.

Does social media catalyze hate crimes?

The same technology that allows social media to galvanize democracy activists can be used by hate groups seeking to organize and recruit. It also allows fringe sites, including peddlers of conspiracies, to reach audiences far broader than their core readership. Online platforms’ business models depend on maximizing reading or viewing times. Since Facebook and similar platforms make their money by enabling advertisers to target audiences with extreme precision, it is in their interests to let people find the communities where they will spend the most time.

Users’ experiences online are mediated by algorithms designed to maximize their engagement, which often inadvertently promote extreme content. Some web watchdog groups say YouTube’s autoplay function, in which the player, at the end of one video, tees up a related one, can be especially pernicious. The algorithm drives people to videos that promote conspiracy theories or are otherwise “divisive, misleading or false,” according to a Wall Street Journal investigative report. “YouTube may be one of the most powerful radicalizing instruments of the 21st century,” writes sociologist Zeynep Tufekci.
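To make the dynamic concrete, here is a minimal, hypothetical sketch -- not any platform's actual system -- of how ranking purely by predicted engagement surfaces whatever holds attention longest, and how an explicit policy demotion of the kind YouTube describes changes the ordering. All titles, scores, and the penalty factor are invented for illustration.

```python
# Toy illustration only: invented titles, scores, and penalty factor.
from dataclasses import dataclass

@dataclass
class Video:
    title: str
    predicted_watch_minutes: float  # stand-in for an engagement prediction
    borderline: bool                # flagged as borderline/misleading

def rank_by_engagement(candidates):
    # Pure engagement ranking: whatever keeps users watching wins.
    return sorted(candidates, key=lambda v: v.predicted_watch_minutes, reverse=True)

def rank_with_policy_penalty(candidates, penalty=0.5):
    # Same ranking, but borderline content has its score cut in half first.
    def score(v):
        return v.predicted_watch_minutes * (penalty if v.borderline else 1.0)
    return sorted(candidates, key=score, reverse=True)

feed = [
    Video("Local news recap", 3.0, borderline=False),
    Video("Conspiracy deep-dive", 9.0, borderline=True),
    Video("Cooking tutorial", 5.0, borderline=False),
]

print([v.title for v in rank_by_engagement(feed)])
# ['Conspiracy deep-dive', 'Cooking tutorial', 'Local news recap']
print([v.title for v in rank_with_policy_penalty(feed)])
# ['Cooking tutorial', 'Conspiracy deep-dive', 'Local news recap']
```

The point of the sketch is only that demotion has to be an explicit design choice; an objective of watch time alone never supplies it.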

YouTube said in June 2019 that changes to its recommendation algorithm made in January had halved views of videos deemed “borderline content” for spreading misinformation. At that time, the company also announced that it would remove neo-Nazi and white supremacist videos from its site. Yet the platform faced criticism that its efforts to curb hate speech do not go far enough. For instance, critics note that rather than removing videos that provoked homophobic harassment of a journalist, YouTube instead cut off the offending user from sharing in advertising revenue.  

How do platforms enforce their rules?

Social media platforms rely on a combination of artificial intelligence, user reporting, and staff known as content moderators to enforce their rules regarding appropriate content. Moderators, however, are burdened by the sheer volume of content and the trauma that comes from sifting through disturbing posts, and social media companies don’t evenly devote resources across the many markets they serve.
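That division of labor can be sketched, very roughly, as a triage loop: an automated classifier disposes of the clear-cut extremes, and everything in between is queued for human review, with user reports raising priority. The sketch below is hypothetical; the thresholds, weights, and post IDs are invented, and real systems are far more elaborate.

```python
# Toy triage sketch: all thresholds, weights, and post IDs are invented.
import heapq

AUTO_REMOVE = 0.95  # assumed: classifier confidence above this -> remove outright
AUTO_ALLOW = 0.10   # assumed: below this, with no user reports -> leave up

def triage(posts, review_queue):
    """posts: iterable of (post_id, classifier_score, report_count) tuples."""
    for post_id, score, reports in posts:
        if score >= AUTO_REMOVE:
            print(f"{post_id}: removed automatically")
        elif score <= AUTO_ALLOW and reports == 0:
            print(f"{post_id}: left up")
        else:
            # Higher classifier score and more user reports -> reviewed sooner.
            priority = -(score + 0.05 * reports)  # negated for the min-heap
            heapq.heappush(review_queue, (priority, post_id))

queue = []
triage([("p1", 0.98, 0), ("p2", 0.40, 12), ("p3", 0.05, 0), ("p4", 0.60, 1)], queue)
while queue:
    _, post_id = heapq.heappop(queue)
    print(f"{post_id}: sent to human moderator")
# p1 is removed, p3 is left up, then p2 (heavily reported) is reviewed before p4.
```

Even in this toy form, the resource problem described above is visible: everything the thresholds cannot settle lands on a human, in whatever languages the queue happens to contain.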

A ProPublica investigation found that Facebook’s rules are opaque to users and inconsistently applied by its thousands of contractors charged with content moderation. (Facebook says there are fifteen thousand.) In many countries and disputed territories, such as the Palestinian territories, Kashmir, and Crimea, activists and journalists have found themselves censored, as Facebook has sought to maintain access to national markets or to insulate itself from legal liability. “The company’s hate-speech rules tend to favor elites and governments over grassroots activists and racial minorities,” ProPublica found.

Addressing the challenges of navigating varying legal systems and standards around the world—and facing investigations by several governments—Facebook CEO Mark Zuckerberg called for global regulations to establish baseline content, electoral integrity, privacy, and data standards.

Problems also arise when platforms’ artificial intelligence is poorly adapted to local languages and companies have invested little in staff fluent in them. This was particularly acute in Myanmar, where, Reuters reported, Facebook employed just two Burmese speakers as of early 2015. After a wave of anti-Muslim violence began in 2012, experts warned of the fertile environment that ultranationalist Buddhist monks found on Facebook for disseminating hate speech to an audience newly connected to the internet after decades under a closed autocratic system.

Facebook admitted it had done too little after seven hundred thousand Rohingya were driven to Bangladesh and a UN human rights panel singled out the company in a report saying Myanmar’s security forces should be investigated for genocidal intent. In August 2018, it banned military officials from the platform and pledged to increase the number of moderators fluent in the local language.

How do countries regulate hate speech online?

In many ways, the debates confronting courts, legislatures, and publics about how to reconcile the competing values of free expression and nondiscrimination have been around for a century or longer. Democracies have varied in their philosophical approaches to these questions, as rapidly changing communications technologies have raised technical challenges of monitoring and responding to incitement and dangerous disinformation.

United States. Social media platforms have broad latitude [PDF], each establishing its own standards for content and methods of enforcement. Their broad discretion stems from the Communications Decency Act. The 1996 law exempts tech platforms from liability for actionable speech by their users. Magazines and television networks, for example, can be sued for publishing defamatory information they know to be false; social media platforms cannot be found similarly liable for content they host.

A list of data points on Americans' level of concern over online hate speech, including that 59% believe online hate and harassment make hate crimes more common.

Recent congressional hearings have highlighted the chasm between Democrats and Republicans on the issue. House Judiciary Committee Chairman Jerry Nadler convened a hearing in the aftermath of the New Zealand attack, saying the internet has aided white nationalism’s international proliferation. “The President’s rhetoric fans the flames with language that—whether intentional or not—may motivate and embolden white supremacist movements,” he said, a charge Republicans on the panel disputed. The Senate Judiciary Committee, led by Ted Cruz, held a nearly simultaneous hearing in which he alleged that major social media companies’ rules disproportionately censor conservative speech, threatening the platforms with federal regulation. Democrats on that panel said Republicans seek to weaken policies dealing with hate speech and disinformation that instead ought to be strengthened.

European Union. The bloc’s twenty-eight members all legislate the issue of hate speech on social media differently, but they adhere to some common principles. Unlike in the United States, it is not only speech that directly incites violence that comes under scrutiny; so too does speech that incites hatred or denies or minimizes genocide and crimes against humanity. Backlash against the millions of predominantly Muslim migrants and refugees who have arrived in Europe in recent years has made this a particularly salient issue, as has an uptick in anti-Semitic incidents in countries including France, Germany, and the United Kingdom.

In a bid to preempt bloc-wide legislation, major tech companies agreed to a code of conduct with the European Union in which they pledged to review posts flagged by users and take down those that violate EU standards within twenty-four hours. In a February 2019 review, the European Commission found that social media platforms were meeting this requirement in three-quarters of cases.

The Nazi legacy has made Germany especially sensitive to hate speech. A 2018 law requires large social media platforms to take down posts that are “manifestly illegal” under criteria set out in German law within twenty-four hours. Human Rights Watch raised concerns that the threat of hefty fines would encourage the social media platforms to be “overzealous censors.”
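Operationally, the German rule reduces to a deadline computation against the moment of notification. A small hypothetical sketch follows; the timestamps are invented, and the statute's actual accounting of the window is a matter of German law.

```python
# Hypothetical deadline check for a 24-hour takedown window; timestamps invented.
from datetime import datetime, timedelta, timezone

TAKEDOWN_WINDOW = timedelta(hours=24)  # for posts deemed "manifestly illegal"

def met_deadline(flagged_at, removed_at):
    # Compliant if removal happens within the window after notification.
    return removed_at <= flagged_at + TAKEDOWN_WINDOW

flagged = datetime(2018, 6, 1, 9, 0, tzinfo=timezone.utc)
removed = datetime(2018, 6, 2, 8, 30, tzinfo=timezone.utc)
print(met_deadline(flagged, removed))  # True: removed 23.5 hours after the flag
```

The tightness of that window, applied at platform scale, is precisely what feeds the over-removal worry Human Rights Watch raises.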

New regulations under consideration by the bloc’s executive arm would extend a model similar to Germany’s across the EU, with the intent of “preventing the dissemination of terrorist content online.” Civil libertarians have warned against the measure for its “vague and broad” definitions of prohibited content, as well as for making private corporations, rather than public authorities, the arbiters of censorship.

India. Under new social media rules, the government can order platforms to take down posts within twenty-four hours based on a wide range of offenses, as well as to obtain the identity of the user. As social media platforms have made efforts to stanch the sort of speech that has led to vigilante violence, lawmakers from the ruling BJP have accused them of censoring content in a politically discriminatory manner, disproportionately suspending right-wing accounts, and thus undermining Indian democracy. Critics of the BJP accuse it of deflecting blame from party elites to the platforms hosting them. As of April 2018, the New Delhi–based Association for Democratic Reforms had identified fifty-eight lawmakers facing hate speech cases, including twenty-seven from the ruling BJP. The opposition has expressed unease with potential government intrusions into privacy.

Japan. Hate speech has become a subject of legislation and jurisprudence in Japan in the past decade [PDF], as anti-racism activists have challenged ultranationalist agitation against ethnic Koreans. This attention to the issue attracted a rebuke from the UN Committee on the Elimination of Racial Discrimination in 2014 and inspired a national ban on hate speech in 2016, with the government adopting a model similar to Europe’s. Rather than specify criminal penalties, however, it delegates to municipal governments the responsibility “to eliminate unjust discriminatory words and deeds against People from Outside Japan.” A handful of recent cases concerning ethnic Koreans could pose a test: in one, the Osaka government ordered a website containing videos deemed hateful taken down, and in Kanagawa and Okinawa Prefectures courts have fined individuals convicted of defaming ethnic Koreans in anonymous online posts.

What are the prospects for international prosecution?

Cases of genocide and crimes against humanity could be the next frontier of social media jurisprudence, drawing on precedents set in Nuremberg and Rwanda. The Nuremberg trials in post-Nazi Germany convicted the publisher of the newspaper Der Stürmer; the 1948 Genocide Convention subsequently included “direct and public incitement to commit genocide” as a crime. During the UN International Criminal Tribunal for Rwanda, two media executives were convicted on those grounds. As prosecutors look ahead to potential genocide and war crimes tribunals for cases such as Myanmar, social media users with mass followings could be found similarly criminally liable.

Recommended Resources

Andrew Sellars sorts through attempts to define hate speech.

Columbia University compiles relevant case law from around the world.

The U.S. Holocaust Memorial Museum lays out the legal history of incitement to genocide.

Kate Klonick describes how private platforms have come to govern public speech.

Timothy McLaughlin chronicles Facebook’s role in atrocities against Rohingya in Myanmar.

Adrian Chen reports on the psychological toll of content moderation on contract workers.

Tarleton Gillespie discusses the politics of content moderation.


Has Technology Killed Face-to-Face Communication?


Most of us use our cell phones and computers to inform, make requests of, and collaborate with co-workers, clients, and customers. The digital age has connected people across the world, making e-commerce and global networking a reality. But does this reliance on technology also mean we are losing the ability to communicate effectively with each other in person?

Ulrich Kellerer thinks so. He is a leadership expert, international speaker, and author. According to Kellerer, “When it comes to effective business communication, over-reliance on technology at work can be a hindrance, especially when it ends up replacing face-to-face, human interaction.”

Carol Kinsey Goman: You were the founder and CEO of Faro Fashion in Munich, Germany. What did you discover about business communication in this role?

Ulrich Kellerer: The digital age has fundamentally changed the nature and function of business communication. It has blurred international boundaries, allowing people to connect with each other across the world. Communication is mobile and instantaneous, and it is easier than ever to access and share information on a global scale.

However, I’ve also seen the negative impact of digital communication on business, both internally and externally. While digital methods themselves are not detrimental -- in fact, many devices help us boost productivity and inspire creativity -- it is our intensifying relationship with the digital environment that leads to unhealthy habits that not only distract us from the “present,” but also undermine the effectiveness of our communication.

Goman: In the midst of a digital age, I believe that face-to-face is still the most productive and powerful communication medium. An in-person meeting offers the best opportunity to engage others with empathy and impact. It builds and supports positive professional connections that we can’t replicate in a virtual environment. Would you agree?

Kellerer: Connection is critical to building business relationships. Anyone working in sales knows that personal interactions yield better results. According to Harvard research, face-to-face requests were 34 times more likely to garner positive responses than emails. Communication in sales is complicated. It requires courtesies and listening skills that are simply not possible on digital platforms.

Interpersonal communication is also vital for a business to function internally.  While sending emails is efficient and fast, face-to-face communication drives productivity. In a recent survey, 67% of senior executives and managers said their organization’s productivity would increase if superiors communicated face-to-face more often.

Goman: In my research on the impact of body language on leadership effectiveness I’ve seen the same dynamic. In face-to-face meetings our brains process the continual cascade of nonverbal cues that we use as the basis for building trust and professional intimacy. As a communication medium, face-to-face interaction is information-rich. People are interpreting the meaning of what you say only partially from the words you use. They get most of your message (and all of the emotional nuance behind the words) from vocal tone, pacing, facial expressions and body language. And, consciously or unconsciously, you are processing the instantaneous nonverbal responses of others to help gauge how well your ideas are being accepted.

Kellerer: While digital communication is often the most convenient method, face-to-face interaction is still by far the most powerful way to achieve business goals. Having a personal connection builds trust and minimizes misinterpretation and misunderstanding. With no physical cues, facial expressions/gestures, or the ability to retract immediately, the risk of disconnection, miscommunication, and conflict is heightened.

Goman: Human beings are born with the innate capability to send and interpret nonverbal signals. In fact, our brains need and expect these more primitive and significant channels of information. When we are denied these interpersonal cues, the brain struggles and communication suffers. In addition, people remember much more of what they see than what they hear -- which is one reason why you tend to be more persuasive when you are both seen and heard.

In addition to eye contact, gestures, facial expressions, and body postures, another powerful nonverbal component (and one that comes solely in face-to-face encounters) is touch. We are programmed to feel closer to someone who’s touched us. For example, a study on handshakes by the Income Center for Trade Shows showed that people are twice as likely to remember you if you shake hands with them.

Kellerer: Business leaders must create environments in which digital communication is used strategically and personal communication is practiced and prioritized. Technology is a necessary part of business today but incorporating the human touch is what will give businesses the competitive edge in the digital marketplace.

Goman: Agreed!

Carol Kinsey Goman, Ph.D.



  13. Is the internet killing off language?

    The fastest growing 'new language' in the world is emoticons (faces) and emojis (images of objects, which hail from Japan), which are one of the biggest changes caused by digital communications ...

  14. Freedom of Expression on the Internet

    The Internet version of the list designated doctors and clinic workers who had been attacked by anti-abortion terrorists in two ways: the names of people who had been murdered were crossed out; the names of people who had been wounded were printed in grey. (For a version of the Nuremberg Files web site, click here.

  15. Hate Speech on Social Media: Global Comparisons

    Summary. Hate speech online has been linked to a global increase in violence toward minorities, including mass shootings, lynchings, and ethnic cleansing. Policies used to curb hate speech risk ...

  16. Internet kills communication

    Here's Namrata Motwani speaking on the topic "Internet kills communication".Speak UP 4.0 is a Speech Competition event organized by the Tachyons. It is an Op...

  17. Has Technology Killed Face-To-Face Communication?

    While sending emails is efficient and fast, face-to-face communication drives productivity. In a recent survey, 67% of senior executives and managers said their organization's productivity would ...

  18. Does social media kill communication skills?

    Yes they are fast, available at all hours and easy, but should never take the place of verbal discussions. Listening to some people as they try to put together a complete sentence is uncomfortable ...

  19. Remarks on Internet Freedom

    Communication networks have played a critical role in our response. They were, of course, decimated and in many places totally destroyed. And in the hours after the quake, we worked with partners in the private sector; first, to set up the text "HAITI" campaign so that mobile phone users in the United States could donate to relief efforts ...

  20. PDF Is the Internet Killing Communication

    Well, frankly speaking, the internet offers an expedient method to communicate but. it still kills the transmission of messages, for the reason that the internet segregates one and another. For example, our feelings may not be fully transmitted through words and recipients may interpret them wrongly. Thus, the conveying of message is unsuccessful.

  21. Internet Essay Examples

    Pages: 4. Words: 1177. Rating: 4,8. Internet has been the fastest growing medium, as more and more people are becoming a part of the internet fraternity; it becomes more difficult to…. Internet Cyber Bullying Cyber Crime Cyber Security Virtual Reality ⏳ Social Issues. View full sample.

  22. Internet kills communication by Shannon Grega on Prezi

    Benefits of Internet in Communication. 1. The act or process of communicating; fact of being communicated. 2. The imparting or interchange of thoughts, opinions, or information by speech, writing, or signs. 3. Something imparted, interchanged, or transmitted. 4.

  23. Speech on Internet for Students and Children

    Speech for Students. Very good morning to all. Today, I am here to present a speech on internet. Someone has rightly said that the world is a small place. With the advent of the internet, this saying seems realistic. The internet has really bought the world together and the distance between two persons is really not a distance today.

  24. Ukraine war latest: Zelenskyy reveals plan after Kursk ...

    Two missiles have killed more than 50 people and injured hundreds in a city in central Ukraine, one of the deadliest attacks by Russia since the invasion of Ukraine. Meanwhile, Volodymyr Zelenskyy ...