
How Section 230 helped shape speech on the Internet

FILE - The Supreme Court building is seen on Capitol Hill in Washington, Jan. 10, 2023. The Supreme Court is taking up its first case about a federal law that is credited with helping create the modern internet by shielding Google, Twitter, Facebook and other companies from lawsuits over content posted on their sites by others. The justices are hearing arguments Tuesday, Feb. 21, about whether the family of a terrorism victim can sue Google for helping extremists spread their message and attract new recruits. (AP Photo/Patrick Semansky, File)



Twenty-six words tucked into a 1996 law overhauling telecommunications have allowed companies like Facebook, Twitter and Google to grow into the giants they are today.

A case the U.S. Supreme Court heard Tuesday, Gonzalez v. Google, challenges this law — namely whether tech companies are liable for the material posted on their platforms.

Justices will decide whether the family of an American college student killed in a terror attack in Paris can sue Google, which owns YouTube, over claims that the video platform’s recommendation algorithm helped extremists spread their message.

They seemed unlikely to side with the family, but they also appeared wary of Google’s claims that the law gives it and other companies immunity from lawsuits.

A second case being heard Wednesday, Twitter v. Taamneh, also focuses on liability, though on different grounds. That case involves the family members of a man killed in an Istanbul nightclub attack for which the Islamic State group claimed responsibility.

The family accuses Twitter, Facebook and YouTube parent Google of assisting in the growth of IS by recommending extremist content through their algorithms. The platforms argue that they can’t be sued because they did not knowingly or substantially assist in the attack.


The outcomes of these cases could reshape the internet as we know it. Section 230 won’t be easily dismantled. But if it is, online speech could be drastically transformed.


If a news site falsely calls you a swindler, you can sue the publisher for libel. But if someone posts that on Facebook, you can’t sue the company — just the person who posted it.

That’s thanks to Section 230 of the 1996 Communications Decency Act, which states that “no provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider.”

That legal phrase shields companies that can host trillions of messages from being sued into oblivion by anyone who feels wronged by something someone else has posted — whether their complaint is legitimate or not.

Politicians on both sides of the aisle have argued, for different reasons, that Twitter, Facebook and other social media platforms have abused that protection and should lose their immunity — or at least have to earn it by satisfying requirements set by the government.

Section 230 also allows social platforms to moderate their services by removing posts that, for instance, are obscene or violate the services’ own standards, so long as they are acting in “good faith.”


The measure’s history dates back to the 1950s, when bookstore owners were being held liable for selling books containing “obscenity,” which is not protected by the First Amendment. One case eventually made it to the Supreme Court, which held that it created a “chilling effect” to hold someone liable for someone else’s content.

That meant plaintiffs had to prove that bookstore owners knew they were selling obscene books, said Jeff Kosseff, the author of “The Twenty-Six Words That Created the Internet,” a book about Section 230.

Fast-forward a few decades to when the commercial internet was taking off with services like CompuServe and Prodigy. Both offered online forums, but CompuServe chose not to moderate its forums, while Prodigy, seeking a family-friendly image, did.

CompuServe was sued over that, and the case was dismissed. Prodigy, however, got in trouble. The judge in its case ruled that “they exercised editorial control — so you’re more like a newspaper than a newsstand,” Kosseff said.

That didn’t sit well with politicians, who worried that outcome would discourage newly forming internet companies from moderating at all. And Section 230 was born.

“Today it protects both from liability for user posts as well as liability for any claims for moderating content,” Kosseff said.


“The primary thing we do on the internet is we talk to each other. It might be email, it might be social media, might be message boards, but we talk to each other. And a lot of those conversations are enabled by Section 230, which says that whoever’s allowing us to talk to each other isn’t liable for our conversations,” said Eric Goldman, a professor at Santa Clara University specializing in internet law. “The Supreme Court could easily disturb or eliminate that basic proposition and say that the people allowing us to talk to each other are liable for those conversations. At which point they won’t allow us to talk to each other anymore.”

There are two possible outcomes. Platforms might get more cautious, as Craigslist did following the 2018 passage of a sex-trafficking law that carved out an exception to Section 230 for material that “promotes or facilitates prostitution.” Craigslist quickly removed its “personals” section, which wasn’t intended to facilitate sex work, altogether. But the company didn’t want to take any chances.

“If platforms were not immune under the law, then they would not risk the legal liability that could come with hosting Donald Trump’s lies, defamation, and threats,” said Kate Ruane, former senior legislative counsel for the American Civil Liberties Union who now works for PEN America.

Another possibility: Facebook, Twitter, YouTube and other platforms could abandon moderation altogether and let the lowest common denominator prevail.

Such unmonitored services could easily end up dominated by trolls, like 8chan, a site that was infamous for graphic and extremist content.

Any change to Section 230 is likely to have ripple effects on online speech around the globe.

“The rest of the world is cracking down on the internet even faster than the U.S.,” Goldman said. “So we’re a step behind the rest of the world in terms of censoring the internet. And the question is whether we can even hold out on our own.”


The dying art of conversation – has technology killed our ability to talk face-to-face?


Senior Lecturer, Media, Communication and Culture, Leeds Beckett University

Disclosure statement

Melanie Chan does not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.

Leeds Beckett University provides funding as a member of The Conversation UK.


What with FaceTime, Skype, WhatsApp and Snapchat, many people use face-to-face conversation less and less often.

These apps allow us to converse with each other quickly and easily – overcoming distances, time zones and countries. We can even talk to virtual assistants such as Alexa, Cortana or Siri – commanding them to play our favourite songs, films, or tell us the weather forecast.

Often these ways of communicating reduce the need to speak to another human being, so many of the conversational snippets of our daily lives now take place via technological devices. No longer do we need to talk with shop assistants, receptionists, bus drivers or even coworkers; we simply engage with a screen to communicate whatever it is we want to say.

In fact, in these scenarios, we tend to only speak to other people when the digital technology does not operate successfully. For instance, human contact occurs when we call for an assistant to help us when an item is not recognised at the self-service checkout.

And when we can connect so quickly and easily with others using technological devices and software applications, it is easy to overlook the value of face-to-face conversation. It seems easier to text someone than to meet with them.

Bodily cues

My research into digital technologies indicates that phrases such as “word of mouth” or “keeping in touch” point to the importance of face-to-face conversation. Indeed, face-to-face conversation can strengthen social ties: with our neighbours, friends, work colleagues and other people we encounter during our day.

It acknowledges their existence, their humanness, in ways that instant messaging and texting do not. Face-to-face conversation is a rich experience that involves drawing on memories, making connections, making mental images, associations and choosing a response. Face-to-face conversation is also multisensory: it’s not just about sending or receiving pre-programmed trinkets such as likes, cartoon love hearts and grinning yellow emojis.


When having a conversation over video, you see another person’s face only as a flat image on a screen. But when we have a face-to-face conversation in real life, we can look into someone’s eyes, reach out and touch them. We can also observe the other person’s body posture and the gestures they use when speaking – and interpret these accordingly. All these factors contribute to the sensory intensity and depth of the face-to-face conversations we have in daily life.

Speaking to machines

Sherry Turkle, professor of social studies of science and technology, warns that when we first “speak through machines, [we] forget how essential face-to-face conversation is to our relationships, our creativity, and our capacity for empathy”. But then “we take a further step and speak not just through machines but to machines”.

In many ways, our everyday lives now involve a blend of face-to-face and technologically mediated forms of communication. But in my teaching and research I explain how digital forms of communication can supplement, rather than replace face-to-face conversation.

At the same time though, it is also important to acknowledge that some people value online communication because they can express themselves in ways they might find difficult through face-to-face conversation.

Look up from your phone

Gary Turk is a spoken-word poet whose poem Look Up illustrates what is at stake when we become entranced by technological ways of communicating at the expense of connecting with others face-to-face.

Turk’s poem draws attention to the rich, sensory aspects of face-to-face communication, valuing bodily presence in relation to friendship, companionship and intimacy. The central idea running through Turk’s evocative poem is that screen-based devices consume our attention while distancing us from the bodily sense of being with others.

Ultimately the sound, touch, smell and observation of bodily cues we experience when having a face-to-face conversation cannot be fully replaced by our technological devices. Communicating and connecting with others through face-to-face discussion is valuable because it is not something that can be edited, paused or replayed.

So next time you’re deciding between human or machine at the supermarket checkout or whether to get up from your desk and walk to another office to talk to a colleague – rather than sending them an email – it might be worth following Turk’s advice and engaging with the human rather than the screen.


Section 230, the internet law that’s under threat, explained

The pillar of internet free speech seems to be everyone’s target.


You may have never heard of it, but Section 230 of the Communications Decency Act is the legal backbone of the internet. The law was created almost 30 years ago to protect internet platforms from liability for many of the things third parties say or do on them.

Decades later, it’s never been more controversial. People from both political parties and all three branches of government have threatened to reform or even repeal it. The debate centers around whether we should reconsider a law from the internet’s infancy that was meant to help struggling websites and internet-based companies grow. After all, these internet-based businesses are now some of the biggest and most powerful in the world, and users’ ability to speak freely on them bears much bigger consequences.

While President Biden pushes Congress to pass laws to reform Section 230, its fate may lie in the hands of the judicial branch, as the Supreme Court is considering two cases — one involving YouTube and Google, another targeting Twitter — that could significantly change the law and, therefore, the internet it helped create.

Section 230 says that internet platforms hosting third-party content are not liable for what those third parties post (with a few exceptions). That third-party content could include things like a news outlet’s reader comments, tweets on Twitter, posts on Facebook, photos on Instagram, or reviews on Yelp. If a Yelp reviewer were to post something defamatory about a business, for example, the business could sue the reviewer for libel, but thanks to Section 230, it couldn’t sue Yelp.

Without Section 230’s protections, the internet as we know it today would not exist. If the law were taken away, many websites driven by user-generated content would likely go dark. A repeal of Section 230 wouldn’t just affect the big platforms that seem to get all the negative attention, either. It could affect websites of all sizes and online discourse.

Section 230’s salacious origins

In the early ’90s, the internet was still in its relatively unregulated infancy. There was a lot of porn floating around, and anyone, including impressionable children, could easily find and see it. This alarmed some lawmakers. In an attempt to regulate this situation, in 1995 lawmakers introduced a bipartisan bill called the Communications Decency Act, which would extend laws governing obscene and indecent use of telephone services to the internet. This would also make websites and platforms responsible for any indecent or obscene things their users posted.

In the midst of this was a lawsuit between two companies you might recognize: Stratton Oakmont and Prodigy. The former is featured in The Wolf of Wall Street, and the latter was a pioneer of the early internet. But in 1994, Stratton Oakmont sued Prodigy for defamation after an anonymous user claimed on a Prodigy bulletin board that the financial company’s president engaged in fraudulent acts. The court ruled in Stratton Oakmont’s favor, saying that because Prodigy moderated posts on its forums, it exercised editorial control that made it just as liable for the speech on its platform as the people who actually made that speech. Meanwhile, Prodigy’s rival online service, CompuServe, was found not liable for a user’s speech in an earlier case because CompuServe didn’t moderate content.

Fearing that the Communications Decency Act would stop the burgeoning internet in its tracks, and mindful of the Prodigy decision, then-Rep. (now Sen.) Ron Wyden and Rep. Chris Cox authored an amendment to CDA that said “interactive computer services” were not responsible for what their users posted, even if those services engaged in some moderation of that third-party content.

“What I was struck by then is that if somebody owned a website or a blog, they could be held personally liable for something posted on their site,” Wyden told Vox’s Emily Stewart in 2019. “And I said then — and it’s the heart of my concern now — if that’s the case, it will kill the little guy, the startup, the inventor, the person who is essential for a competitive marketplace. It will kill them in the crib.”

As the beginning of Section 230 says: “No provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider.” These are considered by some to be the 26 words that created the internet, but the law says more than that.

Section 230 also allows those services to “restrict access” to any content they deem objectionable. In other words, the platforms themselves get to choose what is and what is not acceptable content, and they can decide to host it or moderate it accordingly. That means the free speech argument frequently employed by people who are suspended or banned from these platforms — that their Constitutional right to free speech has been violated — doesn’t apply. Wyden likens the dual nature of Section 230 to a sword and a shield for platforms: They’re shielded from liability for user content, and they have a sword to moderate it as they see fit.

The Communications Decency Act was signed into law in 1996. The indecency and obscenity provisions about transmitting porn to minors were immediately challenged by civil liberty groups and struck down by the Supreme Court, which said they were too restrictive of free speech. Section 230 stayed, and so a law that was initially meant to restrict free speech on the internet instead became the law that protected it.

This protection has allowed the internet to thrive. Think about it: Websites like Facebook, Reddit, and YouTube have millions and even billions of users. If these platforms had to monitor and approve every single thing every user posted, they simply wouldn’t be able to exist. No website or platform can moderate at such an incredible scale, and no one wants to open themselves up to the legal liability of doing so. On the other hand, a website that didn’t moderate anything at all would quickly become a spam-filled cesspool that few people would want to swim in.

That doesn’t mean Section 230 is perfect. Some argue that it gives platforms too little accountability, allowing some of the worst parts of the internet to flourish. Others say it allows platforms that have become hugely influential and important to suppress and censor speech based on their own whims or supposed political biases. Depending on who you talk to, internet platforms are either using the sword too much or not enough. Either way, they’re hiding behind the shield to protect themselves from lawsuits while they do it. Though it has been a law for nearly three decades, Section 230’s existence may have never been as precarious as it is now.

The Supreme Court might determine Section 230’s fate

Justice Clarence Thomas has made no secret of his desire for the court to consider Section 230, saying in multiple opinions that he believes lower courts have interpreted it to give too-broad protections to what have become very powerful companies. He got his wish in February 2023, when the court heard two similar cases that invoke it. In both, plaintiffs argued that their family members were killed by terrorists who posted content on those platforms. In the first, Gonzalez v. Google, the family of a woman killed in a 2015 terrorist attack in France said YouTube promoted ISIS videos and sold advertising on them, thereby materially supporting ISIS. In Twitter v. Taamneh, the family of a man killed in a 2017 ISIS attack in Turkey said the platform didn’t go far enough to identify and remove ISIS content, which is in violation of the Justice Against Sponsors of Terrorism Act — and could then mean that Section 230 doesn’t apply to such content.

These cases give the Supreme Court the chance to reshape, redefine, or even repeal the foundational law of the internet, which could fundamentally change it. And while the Supreme Court chose to take these cases on, it’s not certain that they’ll rule in favor of the plaintiffs. In oral arguments in late February, several justices didn’t seem too convinced during the Gonzalez v. Google arguments that they could or should, especially considering the monumental possible consequences and impact of such a decision. In Twitter v. Taamneh , the justices focused more on if and how the Sponsors of Terrorism law applied to tweets than they did on Section 230. The rulings are expected in June.

In the meantime, don’t expect the original authors of Section 230 to go away quietly. Wyden and Cox submitted an amicus brief to the Supreme Court for the Gonzalez case, where they said: “The real-time transmission of user-generated content that Section 230 fosters has become a backbone of online activity, relied upon by innumerable Internet users and platforms alike. Given the enormous volume of content created by Internet users today, Section 230’s protection is even more important now than when the statute was enacted.”

Congress and presidents are getting sick of Section 230, too

In 2018, two bills — the Allow States and Victims to Fight Online Sex Trafficking Act (FOSTA) and the Stop Enabling Sex Traffickers Act (SESTA) — were signed into law, which changed parts of Section 230. The updates mean that platforms can now be deemed responsible for prostitution ads posted by third parties. These changes were ostensibly meant to make it easier for authorities to go after websites that were used for sex trafficking, but they did so by carving out an exception to Section 230. That could open the door to even more exceptions in the future.

Amid all of this was a growing public sentiment that social media platforms like Twitter and Facebook were becoming too powerful. In the minds of many, Facebook even influenced the outcome of the 2016 presidential election by offering up its user data to shady outfits like Cambridge Analytica. There were also allegations of anti-conservative bias. Right-wing figures who once rode the internet’s relative lack of moderation to fame and fortune were being held accountable for various infringements of hateful content rules and kicked off the very platforms that helped create them. Alex Jones and his expulsion from Facebook and other social media platforms — even Twitter under Elon Musk won’t let him back — is perhaps the best example of this.

In a 2018 op-ed, Sen. Ted Cruz (R-TX) claimed that Section 230 required the internet platforms it was designed to protect to be “neutral public forums.” The law doesn’t actually say that, but many Republican lawmakers have introduced legislation that would fulfill that promise. On the other side, Democrats have introduced bills that would hold social media platforms accountable if they didn’t do more to prevent harmful content or if their algorithms promoted it.

There are some bipartisan efforts to change Section 230, too. The EARN IT Act from Sens. Lindsey Graham (R-SC) and Richard Blumenthal (D-CT), for example, would remove Section 230 immunity from platforms that didn’t follow a set of best practices to detect and remove child sexual abuse material. The partisan bills haven’t really gotten anywhere in Congress. But EARN IT, which was introduced in the last two sessions, was passed out of committee in the Senate and was ready for a Senate floor vote. That vote never came, but Blumenthal and Graham have already signaled that they plan to reintroduce EARN IT this session for a third try.

In the executive branch, former President Trump became a very vocal critic of Section 230 in 2020 after Twitter and Facebook started deleting and tagging his posts that contained inaccuracies about Covid-19 and mail-in voting. He issued an executive order that said Section 230 protections should only apply to platforms that have “good faith” moderation, and then called on the FCC to make rules about what constituted good faith. This didn’t happen, and President Biden revoked the executive order months after taking office.

But Biden isn’t a fan of Section 230, either. During his presidential campaign, he said he wanted it repealed. As president, Biden has said he wants it to be reformed by Congress. Until Congress can agree on what’s wrong with Section 230, however, it doesn’t look likely that they’ll pass a law that significantly changes it.

However, some Republican states have been making their own anti-Section 230 moves. In 2021, Florida passed the Stop Social Media Censorship Act, which prohibits certain social media platforms from banning politicians or media outlets. That same year, Texas passed HB 20, which forbids large platforms from removing or moderating content based on a user’s viewpoint.

Neither law is currently in effect. A federal judge blocked the Florida law in 2022 due to the possibility of it violating free speech laws as well as Section 230. The state has appealed to the Supreme Court. The Texas law has made a little more progress. A district court blocked the law last year, and then the Fifth Circuit controversially reversed that decision before deciding to stay the law in order to give the Supreme Court the chance to take the case. We’re still waiting to see if it does.

If Section 230 were to be repealed — or even significantly reformed — it really could change the internet as we know it. It remains to be seen if that’s for better or for worse.

Update, February 23, 2023, 3 pm ET: This story, originally published on May 28, 2020, has been updated several times, most recently with the latest news from the Supreme Court cases related to Section 230.


Combating Hate Speech Through Counterspeech

Daniel Jones

Susan Benesch

From misogyny and homophobia to xenophobia and racism, online hate speech has become a topic of greater concern as the Internet matures, particularly as its offline impacts become more widely known. And with hate-fueled tragedies across the US and New Zealand, 2019 has seen a continued rise in awareness of how social media and fringe websites are being used to spread hateful ideologies and instigate violence.

Through the Dangerous Speech Project, Berkman Klein Faculty Associate Susan Benesch studies the kinds of public speech that can catalyze intergroup violence, and explores the efforts to diminish such speech and its impacts while protecting the rights of freedom of expression. Like the Center’s own work examining the legal, platform-based, and international contours of harmful speech, the Dangerous Speech Project brings new research and framing to efforts to reduce online hate and its impacts.

This work often involves observing and cataloging extremely toxic speech on social media platforms, including explicit calls for violence against vulnerable populations around the world. But dangerous speech researchers also get to interact with practitioners of “counterspeech”: people who use social media to battle hateful and bigoted messaging and ideology.

The Dangerous Speech Project’s Senior Researcher Cathy Buerger convened a group of counterspeech practitioners at RightsCon 2019 to talk about the most effective counterspeech efforts. Here she reflects on these efforts, and how activists can better combat hate in online spaces and prevent its offline impacts.  

How has social media facilitated the proliferation of hatred/harmful speech? Do you think there is more hate today as a result of Internet-enabled communication, or is it just more visible and noticeable?

It’s hard to say if there is more hate in the world today or not. My instinct is no. At the Dangerous Speech Project, we’ve examined the speech used before incidents of mass violence in various historical periods, and the rhetorical patterns are remarkably similar. The hate that we see today is certainly nothing new.

But there are some new factors that impact the spread of this hate. First, social media makes it relatively simple to see speech produced in communities outside of one’s own. I’m an anthropologist, so I’m always thinking about how communities set and enforce norms. Different communities have divergent opinions about what kind of speech is considered “acceptable.” With social media, speech that might be seen as acceptable by its intended audience can easily be discovered and broadcast to a larger audience that doesn’t share the same speech norms. That audience may attempt to respond through counterspeech, which can be a positive outcome. But even if that doesn’t happen, at the very least, this speech becomes more visible than it otherwise would have been.

A second factor that is frequently discussed is how quickly harmful messages on social media can reach a large audience. This can potentially have horrifying consequences. Between January 2017 and June of 2018, for example, 33 people were killed by vigilante mobs in India following rumors that circulated on WhatsApp suggesting that men were coming to villages in order to kidnap children. The rumors were, of course, false. In an effort to battle these kinds of rumors, WhatsApp has since placed a limit on how many times a piece of content can be forwarded.
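A forwarding cap like the one WhatsApp introduced can be illustrated with a toy sketch. Everything here is an assumption for illustration (the `FORWARD_LIMIT` value, the class and function names); it is not WhatsApp's actual implementation:

```python
# Toy sketch of a per-message forwarding cap. FORWARD_LIMIT is an
# assumed value for illustration only.
FORWARD_LIMIT = 5

class Message:
    def __init__(self, text):
        self.text = text
        self.forward_count = 0

def forward(message, recipients):
    """Deliver a forwarded message unless it has hit the forward cap."""
    if message.forward_count >= FORWARD_LIMIT:
        raise PermissionError("forward limit reached for this message")
    message.forward_count += 1
    return [(r, message.text) for r in recipients]

msg = Message("unverified rumor")
for _ in range(FORWARD_LIMIT):
    forward(msg, ["contact"])
try:
    forward(msg, ["contact"])
except PermissionError as e:
    print(e)  # forward limit reached for this message
```

The point of such a cap is not to block content outright but to slow its spread, so that viral rumors lose momentum before reaching a mass audience.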

These are just two of the ways that technology is affecting the spread and visibility of hateful messages. We need to understand this relationship, and the relationship between online speech and offline action, if we are going to develop effective policies and programs to counter harmful speech and prevent intergroup violence. 

You've spoken with a number of folks who work online to counter hateful speech. What are some of your favorite examples?

There are so many fascinating examples of people and organizations working to counter online hateful speech. One of my favorites is #Jagärhär, a Swedish group that collectively responds to hateful posts in the comment sections of news articles posted on Facebook. They have a very specific method of action. On the #Jagärhär Facebook page, group administrators post links to articles with hateful comments, directing their members to counterspeak there. Members tag their posts with #Jagärhär (which means, “I am here”), so that other members can find their posts and like them. Most of the news outlets have their comments ranked by what Facebook calls “relevance.” Relevance is, in part, determined by how much interaction (likes and replies) a comment receives. Liking the counterspeech posts, therefore, drives them up in relevance ranking, moving them to the top and ideally drowning out the hateful comments. 

The group is huge – around 74,000 members, and the model has spread to 13 other countries as well. The name of each group is “#iamhere” in the local language (for example, #jesusilà in France and #somtu in Slovakia). I like this example because it demonstrates how powerful counterspeech can be when people work together. In the bigger groups (the groups range in size from 64 in #iamhereIndia to 74,274 in #Jagärhär), their posts regularly have the most interaction, and therefore become the most visible comments.

One of the questions that I am interested in right now is how counterspeaking as a group may serve as a sort of protective factor for group members. I’ve interviewed lots of counterspeakers, and most of them talk about how lonely and emotionally difficult the work is – not to mention the fact that they often become the targets of online attacks themselves. In the digital ethnography that I am working on right now, members of #iamhere groups frequently mention how working as a group makes them feel braver and more able to sustain their counterspeech work over time. 

I’m also very interested in efforts that try to counter hateful messages by sharing those messages more widely. The Instagram account Bye Felipe, for example, is dedicated to “calling out dudes who turn hostile when rejected or ignored.” The account allows users to submit screenshots of conversations they have had with men – often on dating sites – where the man has lashed out after being ignored or rejected. I interviewed Alexandra Tweten, who founded and runs the account, and she told me that although she started it mostly to make fun of the men in the interactions, she quickly realized that it could be a tool to spark a larger conversation about online harassment against women. A similar effort is the Twitter account @YesYoureRacist. Logan Smith, who runs the anti-racism account, retweets racist posts that he finds to his nearly 400,000 followers in an effort to make people aware that racism exists.

Broadcasting hateful comments to a larger audience may seem somewhat counterintuitive because we are frequently so focused on deleting content. But by drawing the attention of a larger audience to a particular piece of speech, these efforts can serve as an educational tool, for example by showing men the type of harassment that women face online. By connecting a piece of speech with a larger audience, it is also very likely that at least some members of that new audience will not share the same speech norms as the original author. Sometimes, this is primarily a source of amusement for the new audience. At other times, though, it can be a quick way to inspire counterspeech responses from members of that new audience.

Why do you think these efforts are effective? What can folks who work in counterspeech efforts learn from one another? 

Effectiveness is an interesting issue. The first thing we have to ask is “effective at doing what?” One of the findings from my research on those who are working on countering hatred online is that they don’t all have the same goal. We often think that counterspeakers are primarily trying to impact the behavior or the views of the hateful speakers to whom they are responding. But of the 40 or so people that I have interviewed who are involved in these efforts, most state that they are actually trying to do something different. They are trying to reach the larger reading audience or have a positive impact on the discourse within particular online spaces. The strategies that you use to accomplish goals like that are going to be very different from those you might use if you are trying to change the mind or behavior of someone posting hateful speech. The projects that are most effective are those that clearly know their audience and goals and choose their strategies accordingly.

Last November, we hosted a private meeting in Berlin of people who use various methods to respond to hateful or harmful speech online. This group of 15 counterspeakers from around the world discussed counterspeech best practices and the challenges that they face in their work. After the workshop, we heard from many of them about how useful the experience had been because they no longer felt as isolated. The work of responding to hatred online can be lonely work. Although some people do this work in groups – like those involved in the #iamhere groups – most people do it by themselves. So, of course, counterspeakers can learn a lot from each other in terms of what kinds of strategies might work in different contexts, but there is also tremendous potential benefit in getting to know one another simply because it reminds them that they are not alone in their efforts. 

What did your group at RightsCon learn from one another? Did any surprising or exciting ideas emerge?

One of the best parts about RightsCon is that it brings people together from different sectors, from all over the world, who are working on issues related to securing human rights in the digital age. During our session, which focused on online anti-hatred efforts, one of the topics that was raised by both the session participants and several audience members was just how hard this work can be – the toll it can take on a person’s personal and emotional life. At one point, an audience member asked Logan Smith (of @yesyoureracist) whether he had ever received a death threat. He answered “oh yeah.” People laughed, but it also really brought home the point. This is really tough work. It’s emotionally demanding. It can make you the target of online attacks. One seldom gets that perfect moment where someone who had posted something hateful says “oh, you’re right. Thank you so much for helping me see the light.” An online anti-hatred effort is successful if it can reach its goal, whether that goal is to reach the larger reading audience or to change the mind or behavior of the person posting hateful comments. But to do any of those things, it has to be sustainable. So I think that learning more about what helps counterspeakers avoid burnout and stay active is an important piece of better understanding what makes efforts effective in the long run.


Is Internet Language a Destroyer to Communication?

Conference paper by Chan Eang Teng and Tang Mui Joo. First published online: 25 July 2023. Part of the book series Lecture Notes in Networks and Systems (LNNS, volume 693). Included in the conference series: International Congress on Information and Communication Technology.

Internet language, also known as Internet slang, is a new form of language widely used on social media. Because it has spread so widely, it influences users’ behaviour, and there is concern that it may undermine the authenticity of the original language. Language is an essential communication tool for everyone, and human communication has been studied extensively; as the Internet has grown rapidly, language has been reshaped by the emergence of Internet slang, which is now common in people’s daily communication. Internet language also creates several problems: people who seldom use the Internet may not understand it, which can cause communication breakdowns, a loss of language authenticity, and a generation gap. Because research on Internet language is still limited, several questions remain open, such as the communication habits of Internet language users, their level of understanding of the original language, and whether Internet language leaves older people out of touch with contemporary society. This study uses a quantitative method, an online survey, to compare generation Z (born 1997–2012) with baby boomers (born 1955–1964); these samples were selected in order to investigate the generation gap that Internet language creates between the two groups. The research investigates how Internet language affects human communication habits, and finds that it has indeed done so, because Internet language has become part and parcel of users’ communication style.




Acknowledgements

The authors acknowledge the raw materials provided by Alice Tan, Hor Yan, Wern Jing, and Sherwyn Yap.

Author information

Authors and affiliations

Tunku Abdul Rahman University of Management and Technology, 53300, Kuala Lumpur, Malaysia

Chan Eang Teng & Tang Mui Joo

Corresponding author

Correspondence to Chan Eang Teng.


Copyright information

© 2023 The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd.

About this paper

Cite this paper.

Teng, C.E., Joo, T.M. (2023). Is Internet Language a Destroyer to Communication?. In: Yang, XS., Sherratt, R.S., Dey, N., Joshi, A. (eds) Proceedings of Eighth International Congress on Information and Communication Technology. ICICT 2023. Lecture Notes in Networks and Systems, vol 693. Springer, Singapore. https://doi.org/10.1007/978-981-99-3243-6_42

DOI: https://doi.org/10.1007/978-981-99-3243-6_42. Published 25 July 2023 by Springer, Singapore. Print ISBN 978-981-99-3242-9; Online ISBN 978-981-99-3243-6.


Protecting Freedom of Expression Online


Questions around freedom of expression are once again in the air. While concern around the Internet’s role in the spread of disinformation and intolerance rises, so too do worries about how to maintain digital spaces for the free and open exchange of ideas. Within this context, countries have begun to re-think how they regulate online speech, including through mechanisms such as the principle of online intermediary immunity, arguably one of the main principles that has allowed the Internet to flourish as vibrantly as it has.

What is online intermediary immunity?

Laws that enact online intermediary immunity provide Internet platforms (e.g., Facebook, Twitter, YouTube) with legal protections against liability for content generated by third-party users.

Simply put, if a user posts illegal content, the host (i.e., intermediary) may not be held liable. An intermediary is understood as any actor other than the content creator. This includes large platforms such as Twitter where, for example, if a user posts an incendiary call to violence, Twitter may not be held liable for that post. It also holds for smaller platforms, such as a personal blog, where the blogger is protected from being held liable for comments left by readers. The same is true for the computer servers hosting the content.

These laws have multiple policy goals, ranging from promoting free expression and information access, to encouraging economic growth and technical innovation. But balancing these objectives against the risk of harm has proven complicated, as seen in debates about how to prevent online election disinformation campaigns, hate speech, and threats of violence.

There is also a growing public perception that large-scale Internet platforms need to be held accountable for the harms they enable. With the European Union reforming its major legislation on Internet regulation, the ongoing debate in the United States regarding similar reforms, and the recent January 6 attack on Capitol Hill, it is a propitious time to examine how different jurisdictions implement online intermediary liability laws and what that means for ensuring that the Web continues to allow deliberative democracy and civic participation.

The United States

Traditionally, the United States has provided some of the most rigorous protections for online intermediaries under section 230 of the Communications Decency Act (CDA), which bars platforms from being treated as the “publisher or speaker” of third-party content and establishes that platforms moderating content in good faith maintain their immunity from liability. However, there are increasing calls on both the left and right for this to change.

Republican Senator Josh Hawley of Missouri introduced two pieces of legislation in 2020 and 2019 respectively ― the Limiting Section 230 Immunity to Good Samaritans Act and the Ending Support for Internet Censorship Act ― to undercut the liability protections provided for in section 230 CDA. If passed, the Limiting Section 230 Immunity to Good Samaritans Act would limit liability protections to platforms that use value-neutral content moderation practices, meaning that content would have to be moderated with absolute neutrality, free from any set of values, to be protected. However, this is an unrealistic standard, given that all editorial decisions involve choices based on value, be it merely a question of how to sort that content (e.g., chronologically, alphabetically, etc.) or the editor’s own personal interests and taste. The Ending Support for Internet Censorship Act also seeks to remove liability protections for platforms that curate political information, the vagueness of which risks aggressively demotivating platforms from hosting politically sensitive conversations and chilling free speech online.

The bipartisan Platform Accountability and Consumer Transparency (PACT) Act, introduced by Democrat Senator Brian Schatz of Hawaii and Republican Senator John Thune of South Dakota in 2020, would require platforms to disclose their content moderation practices, implement a user complaint system with an appeals process, and remove court-ordered illegal content within 24 hours. While a step in the right direction towards greater platform transparency, PACT could still endanger free speech on the Internet; it might motivate platforms to remove any content that might be found illegal rather than risk the costs of litigation, thereby taking down legitimate speech out of an abundance of caution. PACT would also entrench the already overwhelming power and influence of the largest platforms, such as Facebook and Google, by imposing onerous obligations that small-to-medium size platforms might find difficult to respect.

During his presidential campaign, Joe Biden even called for the outright repeal of section 230 CDA, with the goal of holding large platforms more accountable for the spread of disinformation and extremism. This remains a worrisome position and something that President Biden should reconsider, given the importance of section 230 CDA for preventing online censorship and allowing the Internet to flourish as an arena for public debate.

Canada

Questions around how to ensure the Internet remains a viable space for freedom of expression are particularly important in Canada, which does not currently have domestic statutory measures limiting the civil liability of online intermediaries. Although proposed with the laudable goals of combating disinformation, harassment, and the spread of hate, legislation that increases restrictions on freedom of speech, such as the reforms described above, should not be taken in Canada. These types of measures risk incentivizing platforms to actively engage in censorship due to the prohibitive costs associated with the nearly impossible feat of preventing all objectionable content, especially for smaller providers. Instead, what is needed is national and international legislation that balances protecting users against harm while also safeguarding their right to freedom of expression.

One possible model forward for Canada can be found in the newly signed free trade agreement between Canada, the United States, and Mexico, known as the United States–Mexico–Canada Agreement (USMCA). Article 19.17 USMCA mirrors section 230 CDA by shielding online platforms from liability relating to content produced by third party users, but a difference in wording [1] suggests that under USMCA, individuals who have been harmed by online speech may be able to obtain non-monetary equitable remedies, such as restraining orders and injunctions.

It remains to be seen how courts will interpret the provision, but the text leaves room to allow platforms to continue to enjoy immunity from liability, while being required to take action against harmful content pursuant to a court order, such as taking down the objectionable material. Under this interpretation, platforms would be free to take down or leave up content based on their own terms of service, until ordered otherwise by a court. This would leave ultimate decision-making with courts and avoid incentivizing platforms to overzealously take down content out of fear of monetary penalties.

USMCA thus appears to balance providing redress for harms with protecting online platforms from liability related to user-generated content, and provides a valuable starting point for legislators considering how to reform Canada’s domestic online intermediary liability laws.

Going forward

The Internet has proven itself to be a phenomenally transformative tool for human expression, community building, and knowledge dissemination. That power, however, can also be used for the creation, spread, and amplification of hateful, anti-democratic groups and ideas.

Countries are now wrestling with how to balance the importance of freedom of expression with the importance of protecting vulnerable groups and democracy itself. Decisions taken today on how to regulate online intermediary liability will play a crucial role in determining whether the Web remains a place for the free and open exchange of ideas, or becomes a chilled and stagnant desert.

Although I remain sympathetic to the legitimate concerns that Internet platforms do too little to prevent their own misuse, I fear that removing online intermediary liability protections will result in the same platforms having too much power and incentive to monitor and censor speech, something that risks being equally harmful.

There are other possible ways forward. We could take the roadmap offered by article 19.17 USMCA. We could prioritize prosecuting individuals for unlawful behaviour on the web, such as peddling slander, threatening bodily violence or fomenting sedition. Ultimately, we need nuanced solutions that balance empowering freedom of expression with protecting individuals against harm. Only then can the Internet remain a place that fosters deliberative democracy and civic participation.

[1] CDA 230(c) provides that “No provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider.” USMCA 19.17.2 instead provides that “No Party shall adopt or maintain measures that treat a supplier or user of an interactive computer service as an information content provider in determining liability [emphasis added] for harms related to information stored, processed, transmitted, distributed, or made available by the service, except to the extent the supplier or user has, in whole or in part, created or developed the information.”

About the writer

Rachel Zuroff, BCL/LLB’16

She resides in Montreal, where she continues to pursue her interests in human rights and legal pluralism.


Why AI Struggles to Recognize Toxic Speech on Social Media

Automated speech police can score highly on technical tests but miss the mark with people, new research shows. 

Facebook says its artificial intelligence models identified and  pulled down 27 million pieces of hate speech in the final three months of 2020 . In 97 percent of the cases, the systems took action before humans had even flagged the posts.

That’s a huge advance, and all the other major social media platforms are using AI-powered systems in similar ways. Given that people post hundreds of millions of items every day, from comments and memes to articles, there’s no real alternative. No army of human moderators could keep up on its own.

But a team of human-computer interaction and AI researchers at Stanford sheds new light on why automated speech police can score highly on technical tests yet provoke a lot of dissatisfaction from humans with their decisions. The main problem: There is a huge difference between evaluating more traditional AI tasks, like recognizing spoken language, and the much messier task of identifying hate speech, harassment, or misinformation — especially in today’s polarized environment.

Read the study:  The Disagreement Deconvolution: Bringing Machine Learning Performance Metrics In Line With Reality

“It appears as if the models are getting almost perfect scores, so some people think they can use them as a sort of black box to test for toxicity,’’ says Mitchell Gordon, a PhD candidate in computer science who worked on the project. “But that’s not the case. They’re evaluating these models with approaches that work well when the answers are fairly clear, like recognizing whether ‘java’ means coffee or the computer language, but these are tasks where the answers are not clear.”

The team hopes their study will illuminate the gulf between what developers think they’re achieving and the reality — and perhaps help them develop systems that grapple more thoughtfully with the inherent disagreements around toxic speech.

Too Much Disagreement

There are no simple solutions, because there will never be unanimous agreement on highly contested issues. Making matters more complicated, people are often ambivalent and inconsistent about how they react to a particular piece of content.

In one study, for example,  human annotators rarely reached agreement  when they were asked to label tweets that contained words from a lexicon of hate speech. Only 5 percent of the tweets were acknowledged by a majority as hate speech, while only 1.3 percent received unanimous verdicts.  In a study  on recognizing misinformation, in which people were given statements about purportedly true events, only 70 percent agreed on whether most of the events had or had not occurred.

Despite this challenge for human moderators, conventional AI models achieve high scores on recognizing toxic speech — 0.95 ROC AUC, a popular metric for evaluating AI models in which 0.5 means pure guessing and 1.0 means perfect performance. But the Stanford team found that the real score is much lower — at most 0.73 — if you factor in the disagreement among human annotators.
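For readers unfamiliar with the metric, ROC AUC can be computed from scratch as the probability that a randomly chosen positive example is scored above a randomly chosen negative one. A minimal sketch (the example labels and scores are made up for illustration):

```python
# ROC AUC via pairwise comparison: count how often a positive item
# outscores a negative item (ties count half). 0.5 = guessing,
# 1.0 = perfect ranking.
def roc_auc(labels, scores):
    pos = [s for l, s in zip(labels, scores) if l == 1]
    neg = [s for l, s in zip(labels, scores) if l == 0]
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0
               for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

labels = [1, 1, 0, 0]
scores = [0.9, 0.6, 0.7, 0.2]  # one positive is scored below a negative
print(roc_auc(labels, scores))  # 0.75
```

Because the metric only measures how well the model ranks items against a single "ground truth" label per item, it says nothing about how contested that label was among human annotators, which is exactly the gap the Stanford study targets.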

Reassessing the Models

In a new study,  the Stanford team re-assesses the performance of today’s AI models by getting a more accurate measure of what people truly believe and how much they disagree among themselves.

The study was overseen by  Michael Bernstein  and  Tatsunori Hashimoto , associate and assistant professors of computer science and faculty members of the  Stanford Institute for Human-Centered Artificial Intelligence  (HAI). In addition to Gordon, Bernstein, and Hashimoto, the paper’s co-authors include Kaitlyn Zhou, a PhD candidate in computer science, and Kayur Patel, a researcher at Apple Inc.

To get a better measure of real-world views, the researchers developed an algorithm to filter out the “noise” — ambivalence, inconsistency, and misunderstanding — from how people label things like toxicity, leaving an estimate of the amount of true disagreement. They focused on how repeatedly each annotator labeled the same kind of language in the same way. The most consistent or dominant responses became what the researchers call "primary labels," which the researchers then used as a more precise dataset that captures more of the true range of opinions about potential toxic content.
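The idea of deriving "primary labels" from repeated annotations, and of a disagreement-imposed ceiling on performance, can be illustrated with a toy sketch. The data and logic here are assumptions for illustration, not the paper's exact deconvolution algorithm:

```python
from collections import Counter

# Toy illustration: each item has several annotator votes (1 = toxic,
# 0 = not toxic). The "primary label" is the dominant response, and
# even a perfect classifier can only satisfy the majority fraction
# of annotators, so residual disagreement caps the achievable score.
annotations = {
    "post_1": [1, 1, 1, 0],   # mostly labeled toxic
    "post_2": [0, 1, 0, 0],   # mostly labeled non-toxic
    "post_3": [1, 0, 1, 0],   # genuine 50/50 disagreement
}

def primary_label(votes):
    return Counter(votes).most_common(1)[0][0]

def best_possible_accuracy(all_votes):
    # Predicting every item's primary label agrees only with the
    # annotators who cast that majority vote.
    agree = sum(votes.count(primary_label(votes)) for votes in all_votes.values())
    total = sum(len(v) for v in all_votes.values())
    return agree / total

print(round(best_possible_accuracy(annotations), 2))  # 0.67: a ceiling below 1.0
```

Evaluating a model against such disagreement-adjusted labels, rather than a single gold label per item, is what drives the reported scores down from near-perfect to the 0.6-0.8 range.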

The team then used that approach to refine datasets that are widely used to train AI models in spotting toxicity, misinformation, and pornography. By applying existing AI metrics to these new “disagreement-adjusted” datasets, the researchers revealed dramatically less confidence about decisions in each category. Instead of getting nearly perfect scores on all fronts, the AI models achieved only 0.73 ROC AUC in classifying toxicity and 62 percent accuracy in labeling misinformation. Even for pornography — as in, “I know it when I see it” — the accuracy was only 0.79.

Someone Will Always Be Unhappy. The Question Is Who?

Gordon says AI models, which must ultimately make a single decision, will never assess hate speech or cyberbullying to everybody’s satisfaction. There will always be vehement disagreement. Giving human annotators more precise definitions of hate speech may not solve the problem either, because people end up suppressing their real views in order to provide the “right” answer.

But if social media platforms have a more accurate picture of what people really believe, as well as which groups hold particular views, they can design systems that make more informed and intentional decisions.

In the end, Gordon suggests, annotators as well as social media executives will have to make value judgments with the knowledge that many decisions will always be controversial.

“Is this going to resolve disagreements in society? No,” says Gordon. “The question is what can you do to make people less unhappy. Given that you will have to make some people unhappy, is there a better way to think about whom you are making unhappy?”


Why AI Struggles To Recognize Toxic Speech on Social Media - by Edmund L. Andrews - Human-Centered Artificial Intelligence - July 13, 2021

Via: hai.stanford.edu


Texting Miscommunication: Causes, Effects, and Solutions

Key takeaways.

  • Texting miscommunication is a prevalent issue in the digital age, leading to misunderstandings and conflicts.
  • Lack of nonverbal cues, ambiguity in text, and misinterpretation of tone are primary causes.
  • Effects include strained relationships, emotional distress, and decreased productivity.
  • Solutions involve clear and concise communication, use of contextual clues, and alternative communication methods for complex or sensitive topics.


Have you ever found yourself in a situation where a simple text message led to misunderstandings and conflicts? Texting miscommunication is a common problem in today’s digital age. In this blog post, we will explore the causes, effects, and solutions to texting miscommunication. Understanding and addressing this issue is crucial for maintaining healthy relationships and effective communication.

I. Introduction

Texting miscommunication refers to the misunderstandings and misinterpretations that can arise from text messages. With the increasing reliance on texting as a primary mode of communication, it is essential to recognize the impact it can have on our interactions. This blog post aims to shed light on the causes of texting miscommunication, its effects on relationships and well-being, and provide practical solutions to overcome this challenge.

A. Brief explanation of the topic: texting miscommunication

Texting miscommunication occurs when the intended meaning of a message is not accurately conveyed or understood through text messages. This can happen due to various factors such as the lack of nonverbal cues, ambiguity in text messages, and misinterpretation of tone.

B. Importance of understanding and addressing texting miscommunication

In today’s digital world, texting has become a prevalent form of communication. It is crucial to recognize the potential for miscommunication that exists within this medium. Addressing texting miscommunication is essential for maintaining healthy relationships, avoiding conflicts, and promoting effective communication.

C. Overview of the blog post structure

In this blog post, we will first explore the causes of texting miscommunication, including the lack of nonverbal cues, ambiguity in text messages, and misinterpretation of tone. We will then discuss the effects of texting miscommunication, such as strained relationships, emotional distress, and decreased productivity. Finally, we will provide practical solutions to overcome texting miscommunication, including clear and concise communication, utilization of contextual clues, and alternative communication methods.

II. Causes of Texting Miscommunication

Texting miscommunication can occur due to several factors. Understanding these causes is crucial for effectively addressing and preventing miscommunication.

A. Lack of nonverbal cues

Nonverbal cues play a significant role in communication. They include facial expressions, body language, and tone of voice. However, texting lacks these nonverbal cues, making it challenging to accurately interpret the intended meaning of a message.

1. Importance of nonverbal cues in communication

Nonverbal cues provide additional information that complements and enhances the meaning of verbal communication. They help convey emotions, attitudes, and intentions, which are often lost in text messages.

2. How texting lacks nonverbal cues

Texting relies solely on written words, eliminating the visual and auditory cues that are present in face-to-face or phone conversations. As a result, the recipient of a text message may misinterpret the tone or intent behind the words.

B. Ambiguity in text messages

Text messages often involve the use of abbreviations, acronyms, and emojis, which can lead to ambiguity and misinterpretation.

1. Use of abbreviations, acronyms, and emojis

In an attempt to be concise and efficient, people often use abbreviations, acronyms, and emojis in text messages. While these can enhance communication in some cases, they can also introduce ambiguity and confusion.

2. Interpretation challenges due to brevity

Text messages are typically short and lack the context that is present in face-to-face conversations. This brevity can make it challenging to accurately interpret the intended meaning of a message, leading to misunderstandings.

C. Misinterpretation of tone

Tone plays a crucial role in communication, conveying emotions and attitudes. However, conveying tone accurately through text messages can be difficult.

1. Difficulty in conveying tone through text

Text messages often lack the vocal inflections and nuances that help convey tone in spoken conversations. As a result, the recipient may misinterpret the intended tone of a message, leading to misunderstandings.

2. Impact of misinterpreted tone on communication

Misinterpreting the tone of a message can lead to misunderstandings, conflicts, and strained relationships. It is essential to address this challenge to ensure effective communication.

III. Effects of Texting Miscommunication

Texting miscommunication can have significant effects on relationships, emotional well-being, and productivity. Understanding these effects is crucial for recognizing the importance of addressing and overcoming texting miscommunication.

A. Strained relationships

Misunderstandings and conflicts arising from texting miscommunication can strain relationships and erode trust.

1. Misunderstandings leading to conflicts

When messages are misinterpreted or misunderstood, it can lead to conflicts and arguments. These conflicts can strain relationships and create a negative atmosphere.

2. Trust issues arising from miscommunication

Repeated instances of miscommunication can erode trust between individuals. When messages are consistently misinterpreted, it can create doubts and uncertainties, leading to strained relationships.

B. Emotional distress

Texting miscommunication can cause emotional distress, leading to frustration, confusion, and negative impacts on mental well-being.

1. Frustration and confusion caused by misinterpreted messages

When messages are misinterpreted, it can lead to frustration and confusion. Trying to decipher the intended meaning of a message can be mentally taxing and emotionally draining.

2. Negative impact on mental well-being

Consistent miscommunication can have a negative impact on mental well-being, leading to stress, anxiety, and feelings of isolation. It is crucial to address texting miscommunication to promote positive mental health.

C. Decreased productivity

Texting miscommunication can result in decreased productivity, as time is wasted in clarifying misunderstandings and goals are not efficiently achieved.

1. Time wasted in clarifying misunderstandings

When miscommunication occurs, individuals often spend additional time clarifying misunderstandings through follow-up messages or phone calls. This wasted time can hinder productivity and efficiency.

2. Inefficiency in achieving goals due to miscommunication

When messages are misinterpreted, it can lead to inefficiency in achieving goals. Miscommunication can result in incomplete or inaccurate information, leading to delays and errors.

IV. Solutions to Texting Miscommunication

Addressing texting miscommunication requires proactive measures to promote effective communication and minimize misunderstandings. The following solutions can help overcome texting miscommunication.

A. Clear and concise communication

Clear and concise communication is essential to minimize misinterpretation and misunderstandings in text messages.

1. Importance of using complete sentences and proper grammar

Using complete sentences and proper grammar can enhance clarity and reduce ambiguity in text messages. It is important to avoid using fragmented phrases or relying solely on abbreviations.

2. Avoidance of ambiguous language

Avoiding ambiguous language and being explicit in conveying the intended meaning can help minimize misinterpretation. It is important to provide sufficient context and clarity in text messages.

B. Contextual clues

Providing relevant information and utilizing contextual clues can enhance understanding and reduce miscommunication.

1. Providing relevant information to avoid misinterpretation

When sending a text message, it is important to provide relevant information to avoid misinterpretation. This can include background information, previous conversations, or any other context that may be necessary for understanding the message.

2. Using contextual cues to enhance understanding

Utilizing contextual cues, such as referencing previous messages or events, can help enhance understanding and reduce miscommunication. It is important to provide sufficient context to ensure the recipient accurately interprets the message.

C. Utilizing alternative communication methods

For complex discussions or sensitive topics, utilizing alternative communication methods, such as voice or video calls, can help overcome texting miscommunication.

1. Voice or video calls for complex discussions

When discussing complex topics or conveying detailed information, it is often more effective to utilize voice or video calls. These methods allow for real-time interaction, tone of voice, and nonverbal cues, reducing the likelihood of miscommunication.

2. Face-to-face conversations for sensitive topics

For sensitive topics that require empathy and understanding, face-to-face conversations are often the most appropriate communication method. In-person interactions provide the opportunity for immediate feedback, clarification, and emotional connection.

V. Conclusion

Texting miscommunication is a common challenge in today’s digital age. Understanding its causes, effects, and solutions is crucial for maintaining healthy relationships, promoting positive mental well-being, and achieving effective communication. By implementing the suggested solutions, such as clear and concise communication, use of contextual clues, and alternative communication methods for complex or sensitive topics, we can overcome texting miscommunication and foster meaningful connections.


  • Open access
  • Published: 10 February 2024

Online hate speech victimization: consequences for victims’ feelings of insecurity

  • Arne Dreißigacker, ORCID: orcid.org/0000-0003-4393-0171,
  • Philipp Müller, ORCID: orcid.org/0009-0003-8500-9388,
  • Anna Isenhardt, ORCID: orcid.org/0000-0001-6766-909X &
  • Jonas Schemmel, ORCID: orcid.org/0000-0003-1656-1825

Crime Science, volume 13, Article number: 4 (2024)


This paper addresses the question of whether, and to what extent, the experience of online hate speech affects victims’ sense of security. Studies on hate crime in general show that such crimes are associated with significantly higher feelings of insecurity, but there is little evidence concerning insecurity caused by online hate speech. Based on a secondary analysis of a representative population survey on cybercrime conducted in Lower Saxony, Germany, in 2020 (N = 4,102), we tested three hypotheses regarding the effect of offline and online hate speech on feelings of insecurity. Compared to non-victims, victims of online hate speech exhibit a more pronounced feeling of insecurity outside the Internet, while victims of other forms of cybercrime do not differ from non-victims in this regard. We found no effect for offline hate speech once relevant control variables were included in the statistical model. Possible reasons for this finding may lie in the characteristics of online hate speech itself, for example, that hateful content spreads uncontrollably on the Internet and reaches its victims even in protected private spheres.


While the Internet has become a seemingly indispensable part of our lives, its digital landscape has also given rise to new challenges. With the growing importance of digital communication, online hate speech has increased sharply in recent years (Costello et al., 2017; Ștefăniță & Buf, 2021). Hate speech is defined as a verbal attack against a certain group of people with a common characteristic, such as race, gender, ethnic group, religion, or political preference (Castaño-Pulgarín et al., 2021). Due to the perpetrators’ (perceived) prejudicial motives, hate speech is a form of “Group-Related Misanthropy” (Zick et al., 2008). Depending on the respective legal provisions, acts of hate speech can be hate crimes (Sheppard et al., 2021), and online hate speech can be a form of cyber-enabled crime. Footnote 1 Regardless of its legal assessment, hate speech can have serious consequences for those affected. Therefore, we use criminological terms such as “victim” or “victimization” in reference to hate speech, although not all acts of hate speech are necessarily illegal.

With the increasing rise in online hate speech, it has become a focus of scientific interest in various disciplines (Benier, 2017; Paz et al., 2020). A large body of studies discusses the consequences of being exposed to online hate, concluding that the experience has a negative effect on the mental health and well-being of both victims and observers (Näsi et al., 2015; Stahel & Baier, 2023; Tynes, 2006; Tynes et al., 2016; Walther, 2022): exposure to online hate is associated with higher levels of depression, anxiety, and self-doubt and with lower confidence. Nevertheless, recent articles have highlighted that some consequences of online hate speech remain insufficiently researched from a victimological perspective (e.g., Wachs et al., 2022). Here, we focus on feelings of insecurity, as studies have shown that prejudice-motivated crimes outside the Internet are associated with increased feelings of insecurity among victims (Benier, 2017; Dreißigacker et al., 2020; Gelber & McNamara, 2016; McDevitt et al., 2001). This raises the question of whether hate speech, as a form of online prejudice-motivated incident, has similar consequences. Since the Internet, and more specifically the growing role of social media as an everyday means of communication, makes one vulnerable to hate speech almost constantly, an influence on feelings of insecurity outside the Internet would suggest a far-reaching significance of hate speech in the daily lives of those affected. Understanding the consequences of online hate speech is therefore crucial not only for the mental and emotional well-being of individuals but also at the societal level. A more nuanced understanding of the impact on different demographic groups can help to identify minorities or marginalized groups that are disproportionately affected by online hate speech, and to develop targeted interventions and policies that protect the rights and well-being of all victims. The question of whether online hate speech also influences feelings of insecurity outside the Internet is thus highly relevant.

State of research

Hate crime and feelings of insecurity

As explained above, some acts of hate speech qualify as criminal acts in the legal systems of some countries, and therefore as a form of hate crime. There are several studies on the impact of hate crime on victims. However, these studies mostly refer to incidents outside the Internet or do not differentiate between online and offline acts. A common finding is that victims of hate crimes experience more severe psychological consequences, such as anger, stress, and fear, compared to those affected by crimes not motivated by prejudice (Barnes & Ephross, 1994; Ehrlich et al., 2003; Herek et al., 1999; Iganski, 2019; Iganski & Lagou, 2016; McDevitt et al., 2001).

In addition, victims of hate crime have a greater sense of insecurity than non-hate crime victims (Benier, 2017; Gelber & McNamara, 2016). For example, in the survey by McDevitt et al. (2001), over two-fifths of hate crime victims reported feeling unsafe when alone in their neighborhood at night, compared to just under one-third of non-hate crime victims. Similar differences have been reported in numerous studies in Germany (Church & Coester, 2021; Dreißigacker, 2018; Dreißigacker et al., 2020; Groß et al., 2018). An increased feeling of insecurity is related to lower trust in state institutions such as the police (Blanco & Ruiz, 2013) and lower generalized trust in other people. It affects the assessment of the personal risk of becoming a victim of similarly motivated acts outside the Internet (Dreißigacker et al., 2020; Groß et al., 2018), the avoidance of certain places, and other behavioral changes (Iganski, 2019; Mellgren et al., 2017).

Online hate speech and feelings of insecurity

Regarding online hate speech as a potentially criminal form of prejudice-motivated online harassment, there have been various studies on detection (Qian et al., 2021; Schmidt & Wiegand, 2017; Warner & Hirschberg, 2012), prevalence (Dreißigacker et al., 2020; Geschke et al., 2019; Kansok-Dusche et al., 2022; Saha et al., 2019), the consequences for society (Bilewicz & Soral, 2020), regulation (Bleich, 2011; Judge & Nel, 2018; Reed, 2009; Sheppard et al., 2021), and possible risk and protective factors for potential victims (Costello et al., 2017; Garland et al., 2022; Hinduja & Patchin, 2022; Wright et al., 2021). However, the impact of online hate speech on the lives of victims, specifically on feelings of insecurity outside the Internet, has hardly been studied so far (Berg & Johansson, 2016; Salmi et al., 2007). A positive correlation was found between victimization of adolescents and depressive symptoms (Wachs et al., 2022), and population surveys indicate that experiencing online hate speech is positively associated with loneliness (Stahel & Baier, 2023) and negatively associated with psychological well-being (Geschke et al., 2019; Waldron, 2012) and life satisfaction (Stahel & Baier, 2023). Nevertheless, there is no empirical evidence on the relationship between experiencing hate speech online and feelings of insecurity offline.

Moreover, even though the specifics of online hate speech compared to offline hate speech are increasingly being discussed (Brown, 2018; Citron, 2014; Cohen-Almagor, 2011), the existing empirical studies on the consequences of online hate speech hardly make systematic comparisons to those affected by (cyber)crime without a hate motive. In addition, they mostly refer only to specific victim groups, such as the LGBTQ+ community (Herek et al., 1999, 2002; Ștefăniță & Buf, 2021), religious groups (Awan & Zempi, 2016), or youth, adolescents, and young adults (Hawdon et al., 2014; Keipi et al., 2017; Saha et al., 2019; Wachs et al., 2022). This study contributes to the existing literature by comparing the impact of hate speech on feelings of insecurity with that of other cybercrimes in a representative sample of the general population.

Theoretical considerations

Janoff-Bulman and Hanson Frieze (1983) noted that criminal victimization can shatter victims’ basic assumptions about themselves and the world. Consequently, they may no longer be able to see the world as a safe place and feel unsafe and vulnerable. Criminal victimization can therefore have serious consequences and can affect feelings of safety, particularly if victims are unable to cope with their victimization and cannot integrate the experience into their own worldview. This leads to the question of how victims of hate speech deal with victimization. Following Sykes and Matza’s neutralization thesis (Sykes & Matza, 1957), crime victims in general use various neutralization techniques and social support (Green & Pomeroy, 2007) to reduce negative reactions or emotions such as fear, insecurity, guilt, and shame (Agnew, 1985; Ferraro & Johnson, 1983; Maruna & Copes, 2005; Weiss, 2011). These techniques include denial of victimization, denial of vulnerability, denial of one’s innocence, and denial of (serious) harm. According to Agnew (1985), such rationalizations may explain the low overall correlation between general victimization and fear of crime and feelings of insecurity that has frequently been found in victimization surveys (DuBow et al., 1979).

However, the effectiveness of such neutralization techniques may vary as a function of the characteristics of the victimized person (such as age, gender, or education) (Agnew, 1985), the level of social support (for example, from family and friends) (Green & Pomeroy, 2007; Wright et al., 2021), and, most importantly here, the type of victimization (such as offense type and severity).

Based on the neutralization thesis on the processing and effects of crime, we not only assume that those affected by hate speech have difficulties denying their own vulnerability; on the contrary, the perceived prejudice motive of the perpetrators is likely to increase perceived vulnerability and thus the feeling of insecurity. Hate crime in general has been said to convey a “message character” (Bannenberg et al., 2006): it degrades all members of a certain social group and thus suggests further victimization in the future. It is also associated with an “incitement character” (Bannenberg et al., 2006), meaning the assault can be perceived as an appeal to be imitated by like-minded people with a similar ideology. We assume that hate speech, too, is strongly associated with a message and an incitement character. For those affected by hate speech, it thus signals that they should expect further, similarly motivated acts of an unspecified kind. Hate speech experiences occur based on personal characteristics that cannot simply be changed or hidden. In this respect, those affected by hate crime or hate speech may find it difficult to avoid it through their own behavior, which is likely to increase their perceived vulnerability (McDevitt et al., 2001).

Moreover, we assume that hate speech experienced online, rather than outside the Internet, should have an even greater impact on subjective vulnerability and perceived feelings of insecurity. Brown (2018) points out that online hate speech is more spontaneous and immediate, more widely spread via social media, and permanently present. While a verbal attack on the street may fade away, an attack on social media remains present for both the victim and the perpetrator’s peers. It can be called up again at any time and spread uncontrollably. In addition, those potentially affected by hate speech can be reached in their own homes if they do not avoid digital communication. Both aspects were evident in a qualitative interview study in which those affected by cyberbullying reported an increased burden due to the feeling of being permanently exposed to cyberbullying, even in their own homes (Müller et al., 2022).

In summary, we assume that neutralization techniques are less effective in the context of hate speech victimization. Given its message and incitement character, instances of hate speech should therefore increase feelings of insecurity among those affected. This should particularly be the case for online experiences, as these are more permanent, less controllable, and harder to avoid.

Based on our theoretical considerations, the state of research on the connection between hate speech and feelings of insecurity, and the considerations regarding the more severe consequences of online compared to offline hate speech, the following hypotheses will be tested:

H1: Having experienced offline hate speech increases feelings of insecurity outside the Internet compared to not having experienced crime.

H2: Having experienced online hate speech increases feelings of insecurity outside the Internet compared to not having experienced crime, and the effect is likely to be even stronger than for offline hate speech.

H3: Having experienced both offline and online hate speech cumulatively increases feelings of insecurity outside the Internet compared to not having experienced crime, i.e., more than either online or offline hate speech alone.

Based on previous findings, and in order to control for confounding variables and increase statistical power, gender, age, migration background, urban or rural living environment, and social support are included as control variables. On average, women feel more insecure than men (Smith & Torstensson, 1997). Increasing age may also be associated with higher insecurity due to decreasing mental and physical capacity and the associated higher vulnerability in case of victimization (Parker & Ray, 1990). People with a migration background may feel more insecure due to their status in the majority society (Ortega & Myles, 1987), and residents of urban areas show higher feelings of insecurity than residents of less anonymous rural areas (Belyea & Zingraff, 1988; Scarborough et al., 2010; Snedker, 2015). Finally, various studies show that social support can be a protective factor against the consequences of different types of crime for victims (Hardyns et al., 2018; Kimpe et al., 2020; Leets, 2002; Wachs et al., 2022).

Data collection

The following analysis is based on data from a representative population survey (ages 16 and older) in Lower Saxony (N = 10,000), a German federal state, regarding the experiences and consequences of cybercrime and other potentially harmful online experiences that are not (yet) criminal offenses in Germany. Footnote 2 The paper-and-pencil survey was conducted between August and October 2020 using a two-stage sampling procedure. First, a sample of 73 municipalities was selected by GESIS—Leibniz Institute for the Social Sciences. In a second step, the target persons were randomly selected from the registers of the respective residents’ registration offices. The selected persons were then contacted and sent a 16-page questionnaire, with the option to either complete it on paper and return it in a pre-stamped envelope or answer it online. In addition, a five-euro note attached to the questionnaire served as a monetary incentive. After two weeks, all survey participants were sent a reminder/thank-you letter. Overall, 9,636 questionnaires could be delivered. Of these recipients, 4,102 participated, 511 of them online. This resulted in a response rate of 42.6%.
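As a quick sanity check on the reported figure (a trivial sketch using only the numbers quoted above):

```python
# 9,636 questionnaires delivered, 4,102 of them answered.
delivered, participated = 9636, 4102
response_rate = participated / delivered
print(f"{response_rate:.1%}")  # 42.6%
```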

Cases with missing values and respondents who stated that they did not use the Internet for private purposes were excluded. After data cleaning (e.g., excluding speeders who completed the online questionnaire in under 5 minutes and respondents with implausible or contradictory answers), this resulted in a final sample of N = 3,293.

A total of 52.0% of the respondents were female (Table 1). The cases of non-binary respondents were not included in this analysis, since their number was in the low single digits and could not be meaningfully evaluated separately. The average age of the respondents was 49.2 years, with a standard deviation of 17.7 years. A total of 14.5% had a migration background, meaning they or at least one parent was not born in Germany. In terms of location, 27.1% of the respondents lived in a municipality/town with more than 50,000 inhabitants (50,000 to 1,000,000 inhabitants); the rest lived in a municipality/town with fewer than 50,000 inhabitants. Footnote 3


Dependent variable

Following Groß et al. ( 2018 ), McDevitt et al. ( 2001 ), and Tseloni and Zarafonitou ( 2008 ), the following items were used to measure feelings of insecurity outside the Internet: "In general, how safe do you feel in your neighborhood?", "… in your apartment/house?", "… alone in your neighborhood at night?", "… alone in your neighborhood at night when you meet a stranger?". Response options ranged from 1: "Very safe", 2: "Safe", 3: "Somewhat safe", 4: "Somewhat unsafe", 5: "Unsafe", to 6: "Very unsafe". The individual items were combined into a mean index (Table  1 ). The internal consistency of the items was good (Cronbach’s α = 0.83).
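The index construction above can be sketched in a few lines: a per-respondent mean across the items, plus Cronbach's α as a consistency check. The data below are toy values on the 1–6 scale, not the survey's (the paper reports α = 0.83 for the real items):

```python
# Mean index and Cronbach's alpha for a battery of Likert-type items.
from statistics import pvariance

def cronbach_alpha(items):
    """items: list of per-item response lists of equal length.
    alpha = k/(k-1) * (1 - sum of item variances / variance of the total)."""
    k = len(items)
    totals = [sum(vals) for vals in zip(*items)]
    item_var = sum(pvariance(v) for v in items)
    return k / (k - 1) * (1 - item_var / pvariance(totals))

def mean_index(items):
    """Per-respondent mean across items (the paper's index construction)."""
    return [sum(vals) / len(items) for vals in zip(*items)]

# Four insecurity items, five hypothetical respondents (scale 1-6)
items = [
    [1, 2, 3, 5, 2],
    [1, 2, 4, 5, 1],
    [2, 3, 3, 6, 2],
    [1, 2, 4, 4, 2],
]
alpha = cronbach_alpha(items)
index = mean_index(items)   # index[0] = (1 + 1 + 2 + 1) / 4 = 1.25
```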

Independent variables

Assignments to different (non-)victim groups served as independent variables. Following Wachs and Wright ( 2018 ), online hate speech victims (online HSVs) included those who at some point had experienced at least one of the following items (lifetime prevalence): "Someone has insulted me or sent me other unpleasant messages online…", "Someone has spread lies or rumors about me online", "Someone has excluded me from online groups, chats, or online games", "Someone has threatened or bullied me online…", and "Someone has made fun of me online because of my gender, national origin, race, religious affiliation, or sexual orientation". Similar items were used for experiences of offline hate speech. Thus, in addition to online hate speech victims (n = 51), two further groups were distinguished: offline hate speech victims (n = 202) and both online and offline hate speech victims (n = 71). Note that not all items cover criminal acts, but they may nevertheless seriously impair those affected.

Cybercrime victims (CVs) who were not hate speech victims (n = 1282) included those who had not experienced hate speech online or offline but had experienced at least one of the other surveyed cybercrime types at some point in their life, for example online fraud or a ransomware attack. The non-victims (NVs) included those who had never experienced any of the surveyed offense types (n = 1687).
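The mutually exclusive group assignment described above can be sketched as follows (the flag names are illustrative lifetime-prevalence indicators derived from the respective item batteries, not variables from the dataset):

```python
# Sketch of the (non-)victim group assignment used in the analysis.

def assign_group(online_hs, offline_hs, other_cybercrime):
    """Map three lifetime victimization flags to one of the five groups."""
    if online_hs and offline_hs:
        return "online+offline HSV"   # n = 71
    if online_hs:
        return "online HSV"           # n = 51
    if offline_hs:
        return "offline HSV"          # n = 202
    if other_cybercrime:
        return "CV"                   # n = 1282, no hate speech experience
    return "NV"                       # n = 1687, no victimization at all
```

Note the precedence: any hate speech experience takes priority, so the CV group by construction contains no hate speech victims, mirroring the definition in the text.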

Each of the three constructs (CV, online HSV, offline HSV) was surveyed separately in the questionnaire. For CV, the participants were informed that the following questions related to specific experiences with cybercrime. Before being asked about online hate speech victimization, participants were given a definition of online hate speech: insults or hurtful posts, comments, videos, or images on the Internet targeting a person because of their gender, national origin, race, religious affiliation, or sexual orientation. For offline HSV, the participants were asked at the end of the questionnaire whether they had experienced the respective incidents outside the Internet.

The detailed operationalization of all victimization forms (Online Hate Speech, Offline Hate Speech, Cybercrime) is shown in Table  3 in the Appendix.

Control variables

In addition to the control variables age, gender, migration background, and number of inhabitants in the home municipality/city, the degree of social support was assessed with a short scale based on Fydrich et al. ( 2009 ) and Kliem et al. ( 2015 ). The following items were used and combined into a mean index: "I receive a lot of understanding and security from others", "There is someone very close to me whose help I can always count on", "I have friends/relatives who will definitely take time to listen if I need someone to talk to", "If I’m very depressed, I know whom I can turn to". The response options ranged from 1: "Does not apply at all", 2: "Does not apply", 3: "Does not really apply", 4: "Rather applies", 5: "Applies", to 6: "Applies completely". The internal consistency of these items was also good (Cronbach’s α = 0.85).

Descriptive statistics

In the descriptive evaluation in Fig.  1 , online hate speech victims stand out. The association of social support with feelings of insecurity deviates most clearly from the non-victims in this group (red reference line: male non-victims), especially among male online hate speech victims with little social support. As expected, feelings of insecurity among women exceed those of men in all groups but are most pronounced among online hate speech victims. In contrast, the level of insecurity among female offline hate speech victims hardly differs from that of female non-victims or cybercrime victims. For women victimized by both online and offline hate speech, social support seems to have at best a minor influence on feelings of insecurity.

Fig. 1: Scatterplot matrix (red reference line: male NVs)

Hypothesis testing

The associations of victimization types with feelings of insecurity were estimated using two multiple linear regression models (Table  2 ), with the independent variables introduced simultaneously. Footnote 4 In the first model (Model 1), only the (non-)victim groups were included as independent variables. Compared to non-victims, hate speech victims (whether online, offline, or both) have significantly higher feelings of insecurity outside the Internet, while cybercrime victimization has no statistically significant coefficient in this regard. The two largest coefficients are those of online and offline hate speech victims (b = 0.30, β = 0.04) and online hate speech victims (b = 0.29, β = 0.04), followed by offline hate speech victims (b = 0.14, β = 0.03). The victim groups alone account for about 1% of the variance in feelings of insecurity (R² = 0.01).
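The dummy-coded specification of Model 1 can be illustrated with a minimal ordinary-least-squares fit. This is a pure-Python normal-equations sketch on invented toy data, not the paper's estimates; with NV as reference category, each b-coefficient is the group's mean difference from non-victims:

```python
# OLS via the normal equations (X'X) b = X'y, solved by Gaussian elimination.

def ols(X, y):
    """X: rows of regressors including an intercept column; y: outcomes."""
    k = len(X[0])
    A = [[sum(r[i] * r[j] for r in X) for j in range(k)] for i in range(k)]
    b = [float(sum(r[i] * yi for r, yi in zip(X, y))) for i in range(k)]
    for i in range(k):                          # forward elimination, partial pivoting
        p = max(range(i, k), key=lambda r: abs(A[r][i]))
        A[i], A[p] = A[p], A[i]
        b[i], b[p] = b[p], b[i]
        for r in range(i + 1, k):
            f = A[r][i] / A[i][i]
            A[r] = [a - f * c for a, c in zip(A[r], A[i])]
            b[r] -= f * b[i]
    coef = [0.0] * k
    for i in reversed(range(k)):                # back substitution
        coef[i] = (b[i] - sum(A[i][j] * coef[j] for j in range(i + 1, k))) / A[i][i]
    return coef

# Columns: [intercept, online_HSV, offline_HSV, both_HSV]; NV is the reference.
X = [[1, 0, 0, 0], [1, 0, 0, 0], [1, 1, 0, 0], [1, 1, 0, 0],
     [1, 0, 1, 0], [1, 0, 1, 0], [1, 0, 0, 1], [1, 0, 0, 1]]
y = [2.0, 2.2, 2.4, 2.6, 2.1, 2.3, 2.5, 2.7]    # toy insecurity index values
coef = ols(X, y)  # coef[0]: NV mean; coef[1..3]: group differences from NV
```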

In the second model (Model 2), the control variables described above were included to determine whether the associations with the types of victimization remain stable. The coefficient of online hate speech victimization hardly changes in Model 2 (b = 0.27), whereas the coefficient for respondents who were victims of both online and offline hate speech becomes slightly smaller (b = 0.22). The coefficient of offline hate speech victimization, at b = 0.08, is no longer significantly different from zero. The latter is related to controlling for respondent gender: once gender is included in the model, the significant coefficient of offline hate speech victimization disappears.

As expected, increased social support is associated with significantly decreased feelings of insecurity, women have stronger feelings of insecurity than men, and those in larger municipalities/towns (50,000 inhabitants or more) feel more insecure than those in smaller ones. In contrast, age and migration background show no independent associations with feelings of insecurity. Comparing the standardized coefficients within Model 2, social support (β = − 0.19) and gender (β = 0.17) have the greatest explanatory power. The variables included in Model 2 explain about 10% of the variance in feelings of insecurity (R² = 0.10). Footnote 5

Ultimately, our data could only confirm H2: online hate speech increases feelings of insecurity outside the Internet compared both to non-victims and to victims of offline hate speech. In contrast, H1 and H3 could not be confirmed. Victims of offline hate speech did report stronger feelings of insecurity than non-victims, but this relationship disappeared when controlling for sociodemographic characteristics, especially gender (H1). Moreover, there was no cumulative association of combined online and offline hate speech with feelings of insecurity as hypothesized in H3: victims who had experienced both online and offline hate speech at least once in their lives had significantly stronger feelings of insecurity than non-victims, but not than victims of online-only hate speech.

Using a representative dataset of the resident population in Lower Saxony, Germany, we explored whether online hate speech victimization affects feelings of insecurity outside the Internet. For this purpose, we tested three hypotheses using multiple linear regression. We confirmed that online hate speech increases feelings of insecurity outside the Internet compared to non-victims and victims of offline hate speech (H2), which is consistent with our assumptions, based on the neutralization thesis (Sykes & Matza, 1957 ), about differences in coping with different types of victimization.

One possible explanation is that online hate speech involves messages to victims and incitements to like-minded potential perpetrators, which spread uncontrollably via the Internet and reach victims even in protected private spheres (Brown, 2018 ). In addition, the harmful content may remain visible to the victim and a large audience, because the incident does not necessarily violate laws or the terms of use of the communication platforms. Subjectively, it may also be harder to escape the digital space than to end a stressful hate speech situation outside the Internet, as smartphones keep people online almost all the time. Another important aspect may be that online hate speech often happens in the digital public sphere and cannot easily be removed from platforms. Moreover, hurtful messages are at least temporarily stored in inboxes and on mobile phones, which may be perceived as an intrusion into personal space, especially since those affected may be “victimized” everywhere they use their phone—even at home (Müller et al., 2022 ). All these factors combined might increase the vulnerability of online hate speech victims, especially since the recognizable personal characteristics that motivated the perpetrators cannot easily be discarded, and a similarly motivated attack remains possible in other contexts and outside the Internet.
Although the effect size of the association between online hate speech experiences and feelings of insecurity was rather small, it must be interpreted as an average effect across the whole sample and can therefore be higher, but of course also lower, in individual cases. It may well be that some aspects of the hate speech experience, for example its motivation or severity, are associated with more serious effects. Other individual factors (level of education, social networks, etc.) could also play a role, as could situational and contextual characteristics (counter-speech by third parties, social structure of the neighborhood, access to support facilities, etc.). Future research on the connection between online hate speech and feelings of insecurity should therefore consider additional factors. For instance, Hawdon et al. ( 2017 ) suggest that exposure to online hate is linked to varying degrees of risky online behavior. Moreover, exposure to online hate material does not always have negative consequences, possibly due to different coping strategies among victims (Obermaier et al., 2018 ; Obermaier & Schmuck, 2022 ).

Contrary to our expectations, when key variables are controlled for, offline hate speech victimization does not significantly affect feelings of insecurity compared to non-victims, nor does it have a cumulative reinforcing effect when combined with online hate speech victimization. Thus, H1 and H3 could not be confirmed. However, the analysis could not control for whether and how the reported offline hate speech experiences differed from the reported online hate speech experiences, for example in terms of motivation and severity (Iganski & Lagou, 2015 ; Mellgren et al., 2017 ). One indication of possible differences is that offline hate speech was reported more frequently by women and respondents with a migration background, whereas no corresponding correlations were found for online hate speech victimizations (see Appendix Fig.  2 ).

Some limitations should be mentioned when interpreting the results. First, this is a secondary analysis of a cross-sectional survey: it was not conducted to answer the current research question and does not allow causal conclusions. Victimizations were surveyed retrospectively, with the drawback that distant memories may be distorted. As stated in the introduction, hate speech is not necessarily a crime, as the legal assessment depends on the country. Also, the respondents' assessments of the illegality of the acts and the motivation of the perpetrators were subjective. However, the question of whether online hate speech has potentially damaging consequences is independent of the legal assessment and depends on the perspective of those affected. To include a sufficiently large number of cases for statistical evaluation, lifetime prevalence had to be used instead of annual prevalence; the victimized thus include all persons who had ever experienced a corresponding act, and the experienced victimizations can therefore also lie further in the past. Since the number of such cases is nevertheless relatively small, further differentiation between types of severe (online) hate speech victimization and the modeling of interaction effects were not possible. Except for offline hate speech victimization, no other prejudice-motivated types of victimization, group memberships (such as LGBTQ+, religion, etc.), or personal characteristics on which the victimization may have been based were asked about. Corresponding comparisons, for example between xenophobic, homophobic, sexist, or racist acts, could therefore not be made and should be considered in future studies.


The main aim of this study was to examine whether and to what extent the experience of online hate speech affects victims’ sense of security. Overall, we found that online hate speech affects feelings of insecurity even outside the Internet: compared to non-victims and victims of offline hate speech, victims of online hate speech exhibit more pronounced feelings of insecurity outside the Internet. The reasons for this finding may lie in the characteristics of online hate speech. Since online hate speech exposes and attacks victims based on their personal characteristics and group affiliation, the victims, and others sharing these characteristics, must fear (renewed) victimization by people like-minded with the perpetrator at any time, even outside the Internet. This uncertainty therefore transfers to the victims’ sense of insecurity outside the Internet.

Because of its unique characteristics, online hate speech can have a profound impact on the psychological well-being of its victims, leading not only to feelings of fear or anxiety but also to insecurity. Our study’s emphasis on the transfer of insecurity from online to offline spaces underscores the interconnectedness of these domains, and this interconnectedness underlines the importance of understanding and addressing feelings of insecurity induced by online hate speech, as it challenges traditional boundaries between virtual and real-world experiences. Our results emphasize the urgent need for ongoing efforts to combat online hate speech and its offline ramifications, point to its lasting impact on victims’ lives, and underline the importance of specific interventions and support mechanisms. Anti-hate speech initiatives should focus not only on mitigating the spread of hateful online content but also on addressing the psychological consequences and the emotional well-being of the victims. One possible measure is to increase awareness of the issue and of its impact on victims’ well-being. Our findings also underline the importance of further judicial analyses as well as collaborative efforts between online platforms and law enforcement agencies to strengthen laws and regulations aimed at combating online hate speech. As the digital landscape continues to evolve, addressing the psychological and societal impact of online hate speech remains a pressing concern. Given its relevance to fear of crime in general and the increasing prevalence of online hate speech, we hope our results encourage further empirical research on the consequences of online hate speech.

Availability of data and materials

The dataset on which this work relies was not shared publicly. However, the authors are willing to share the data upon request under a data use agreement.

Whether (online) hate speech constitutes a crime depends on the country and its specific legal provisions. In Germany, hate speech is punishable if it exceeds the limits of freedom of expression and violates the rights of others. Possible offenses related to hate speech include insult, incitement to hatred, incitement to commit crimes, and approval of crimes.

To minimize the risk of emotional strain, participants were clearly informed about the topic of the survey in the cover letter as well as on the first page of the questionnaire. It was also explicitly stated that participation was voluntary and could be cancelled at any time without further consequences. In addition, an information sheet provided participants with the contact details of a victim counseling service in case they needed help.

A more detailed description of the sample can be found in Müller et al. ( 2022 ).

To test the predictors of the regression models for collinearity, we calculated the variance inflation factors (VIF) with the R package "car" under R version 4.2.1. The highest VIF in Model 2 is around 1.2, so there is no indication of multicollinearity (James et al., 2013 ).
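As a hedged illustration of this check (the authors used `car::vif` in R): in general, VIF_j = 1/(1 − R²_j), where R²_j comes from regressing predictor j on all other predictors. In the two-predictor special case this reduces to 1/(1 − r²) with r the Pearson correlation between the two predictors:

```python
# Two-predictor VIF illustration: VIF = 1 / (1 - r^2).
from statistics import mean

def pearson_r(x, y):
    """Pearson correlation of two equally long sequences."""
    mx, my = mean(x), mean(y)
    num = sum((a - mx) * (b - my) for a, b in zip(x, y))
    den = (sum((a - mx) ** 2 for a in x) * sum((b - my) ** 2 for b in y)) ** 0.5
    return num / den

def vif_two_predictors(x, y):
    """VIF of either predictor in a model with exactly these two predictors."""
    return 1.0 / (1.0 - pearson_r(x, y) ** 2)

# Uncorrelated predictors: r = 0, so VIF = 1.0 (no variance inflation)
print(vif_two_predictors([1, 2, 1, 2], [1, 1, 2, 2]))  # -> 1.0
```

Values near 1, as reported in the note, indicate that the predictors share little variance.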

For model validation, see Fig.  3 in the Appendix. To additionally test the robustness of the findings, a bootstrap procedure was applied: Model 2 was estimated repeatedly for 5,000 random samples drawn from the dataset used. The R package "car" under R version 4.2.1 was used for this purpose (Fox & Weisberg 2018 ). The bootstrapping results raise some concerns about robustness, as the significant regression weights of online HSVs and of online and offline HSVs were present in only 94% of the bootstrap samples (see Table  4 in the Appendix). Additional research is therefore required to confirm these findings.
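The case-resampling idea behind this robustness check can be sketched as follows. The statistic here is a simple group mean difference rather than the full Model 2 refit, and the data are invented; the question asked is the same, namely how often the effect survives across resamples:

```python
# Nonparametric bootstrap: resample cases with replacement, recompute the
# statistic, and report the share of resamples in which the effect persists.
import random

def bootstrap_share_positive(group_a, group_b, B=5000, seed=1):
    """Share of B bootstrap resamples in which mean(a) - mean(b) stays > 0,
    analogous to checking how often a regression weight keeps significance."""
    random.seed(seed)
    hits = 0
    for _ in range(B):
        a = random.choices(group_a, k=len(group_a))  # resample with replacement
        b = random.choices(group_b, k=len(group_b))
        if sum(a) / len(a) - sum(b) / len(b) > 0:
            hits += 1
    return hits / B

# Clearly separated toy groups: the difference is positive in every resample
share = bootstrap_share_positive([3, 3, 4, 4, 5], [1, 1, 2, 2, 2])
```

A share below some conventional cutoff (the note uses 94% against an implicit expectation of 95%) would flag the estimate as fragile.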

Agnew, R. S. (1985). Neutralizing the impact of crime. Criminal Justice and Behavior, 12 (2), 221–239. https://doi.org/10.1177/0093854885012002005


Awan, I., & Zempi, I. (2016). The affinity between online and offline anti-Muslim hate crime: Dynamics and impacts. Aggression and Violent Behavior, 27 , 1–8. https://doi.org/10.1016/j.avb.2016.02.001

Bannenberg, B., Rössner, D., & Coester, M. (2006). Hasskriminalität, extremistische Kriminalität, politisch motivierte Kriminalität und ihre Prävention. In R. Egg (Ed.), Extremistische Kriminalität: Kriminologie und Prävention (pp. 17–59). KrimZ.


Barnes, A., & Ephross, P. H. (1994). The impact of hate violence on victims: Emotional and behavioral responses to attacks. Social Work, 39 (3), 247–251.


Belyea, M. J., & Zingraff, M. T. (1988). Fear of crime and residential location. Rural Sociology, 53 (4), 473–486.

Benier, K. (2017). The harms of hate: Comparing the neighbouring practices and interactions of hate crime victims, non-hate crime victims and non-victims. International Review of Victimology, 23 (2), 1–23. https://doi.org/10.1177/0269758017693087


Berg, M., & Johansson, T. (2016). Trust and safety in the Segregated City: Contextualizing the relationship between institutional trust, crime-related insecurity and generalized trust. Scandinavian Political Studies, 39 (4), 458–481. https://doi.org/10.1111/1467-9477.12069

Bilewicz, M., & Soral, W. (2020). Hate speech epidemic. The dynamic effects of derogatory language on intergroup relations and political radicalization. Political Psychology, 41 (S1), 3–33. https://doi.org/10.1111/pops.12670

Blanco, L., & Ruiz, I. (2013). The impact of crime and insecurity on trust in democracy and Institutions. American Economic Review, 103 (3), 284–288. https://doi.org/10.1257/aer.103.3.284

Bleich, E. (2011). The rise of hate speech and hate crime laws in liberal democracies. Journal of Ethnic and Migration Studies, 37 (6), 917–934. https://doi.org/10.1080/1369183X.2011.576195

Brown, A. (2018). What is so special about online (as compared to offline) hate speech? Ethnicities, 18 (3), 297–326. https://doi.org/10.1177/1468796817709846

Castaño-Pulgarín, S. A., Suárez-Betancur, N., Vega, L. M. T., & López, H. M. H. (2021). Internet, social media and online hate speech. Systematic Review. Aggression and Violent Behavior, 58 , 101608. https://doi.org/10.1016/j.avb.2021.101608

Church D, Coester M. (2021). Opfer von Vorurteilskriminalität: Thematische Auswertung des Deutschen Viktimisierungssurvey 2017 (Forschungsbericht 2021/4). Wiesbaden. https://www.bka.de/SharedDocs/Downloads/DE/Publikationen/Publikationsreihen/Forschungsergebnisse/2021KKFAktuell_OpferVorurteilskriminalitaet.pdf . Accessed 6 Feb 2024

Citron, D. K. (2014). Hate crimes in cyberspace . Harvard University Press.


Cohen-Almagor, R. (2011). Fighting Hate and bigotry on the internet. Policy & Internet, 3 (3), 89–114. https://doi.org/10.2202/1944-2866.1059

Costello, M., Hawdon, J., & Ratliff, T. N. (2017). Confronting online extremism: The effect of self-help, collective efficacy, and guardianship on being a target for hate speech. Social Science Computer Review, 35 (5), 587–605. https://doi.org/10.1177/0894439316666272

de Kimpe, L., Ponnet, K., Walrave, M., Snaphaan, T., Pauwels, L., & Hardyns, W. (2020). Help, I need somebody: Examining the antecedents of social support seeking among cybercrime victims. Computers in Human Behavior, 108 , 106310. https://doi.org/10.1016/j.chb.2020.106310

Dreißigacker A. (2018). Erfahrungen und Folgen von Vorurteilskriminalität: Schwerpunktergebnisse der Dunkelfeldstudie des Landeskriminalamtes Schleswig-Holstein 2017 (KFN-Forschungsbericht No. 145). Hannover. https://kfn.de/wp-content/uploads/2019/03/FB_145.pdf . Accessed 6 Feb 2024

Dreißigacker, A., Riesner, L., & Groß, E. (2020). Vorurteilskriminalität: Ergebnisse der Dunkelfeldstudien der Landeskriminalämter Niedersachsen und Schleswig-Holstein 2017. In C. Grafl, M. Stempkowski, K. Beclin, & I. Haider (Eds.), Neue Kriminologische Schriftenreihe: Sag, wie hast du‘s mit der Kriminologie?“: Die Kriminologie im Gespräch mit ihren Nachbardisziplinen (Vol. 118, pp. 125–150). Forum Verlag Godesberg. https://doi.org/10.25365/phaidra.213


DuBow, F., McCabe, E., & Kaplan, G. (1979). Reactions to crime: A Critical review of the literature . National Institute of Law Enforcement and Criminal Justice.

Ehrlich, H. J., Larcom, B. E. K., & Purvis, R. D. (2003). The traumatic effects of ethnoviolence. In B. Perry (Ed.), Hate and bias crime: A reader (pp. 153–170). Routledge.

Ferraro, K. J., & Johnson, J. M. (1983). How women experience battering: The process of victimization. Social Problems, 30 (3), 325–339. https://doi.org/10.2307/800357

Fox J, Weisberg S. (2018). Bootstrapping regression models in R: An appendix to an r companion to applied regression. https://socialsciences.mcmaster.ca/jfox/Books/Companion/appendices/Appendix-Bootstrapping.pdf . Accessed 6 Feb 2024

Fydrich, T., Sommer, G., Tydecks, S., & Brähler, E. (2009). Fragebogen zur sozialen Unterstützung (F-SozU): Normierung der Kurzform (K-14). Zeitschrift Für Medizinische Psychologie, 18 , 43–48.

Garland, J., Ghazi-Zahedi, K., Young, J.-G., Hébert-Dufresne, L., & Galesic, M. (2022). Impact and dynamics of hate and counter speech online. EPJ Data Science . https://doi.org/10.1140/epjds/s13688-021-00314-6

Gelber, K., & McNamara, L. (2016). Evidencing the harms of hate speech. Social Identities, 22 (3), 324–341. https://doi.org/10.1080/13504630.2015.1128810

Geschke, D., Klaßen, A., Quent, M., Richter, C. (2019). #Hass im Netz: Der schleichende Angriff auf unsere Demokratie: Eine bundesweite repräsentative Untersuchung (Forschungsbericht). Jena.

Green, D. L., & Pomeroy, E. C. (2007). Crime victims: What is the role of social support? Journal of Aggression, Maltreatment & Trauma, 15 (2), 97–113. https://doi.org/10.1300/J146v15n02_06

Groß E, Pfeiffer H, Andree C. (2018). Vorurteilskriminalität (Hate Crime): Erfahrungen und Folgen. Hannover. https://www.lka.polizei-nds.de/download/73836/Sondermodul_Hasskriminalitaet_2017.pdf . Accessed 6 Feb 2024

Hardyns, W., Pauwels, L. J. R., & Heylen, B. (2018). Within-individual change in social support, perceived collective efficacy, perceived disorder and fear of crime: results from a two-wave panel study. The British Journal of Criminology, 58 (5), 1254–1270. https://doi.org/10.1093/bjc/azy002

Hawdon, J., Oksanen, A., & Räsänen, P., et al. (2014). Victims of Online Groups: American youth’s exposure to online hate speech. In J. Hawdon, J. Ryan, & M. Lucht (Eds.), The Causes and consequences of group violence: from bullies to terrorists (pp. 165–182). Lexington Books.

Hawdon, J., Oksanen, A., & Räsänen, P. (2017). Exposure to online hate in four nations: A cross-national consideration. Deviant Behavior, 38 (3), 254–266. https://doi.org/10.1080/01639625.2016.1196985

Herek, G. M., Cogan, J. C., & Gillis, J. R. (2002). Victim experiences in hate crimes based on sexual orientation. Journal of Social Issues, 58 (2), 319–339. https://doi.org/10.1111/1540-4560.00263

Herek, G. M., Gillis, J. R., & Cogan, J. C. (1999). Psychological sequelae of hate-crime victimization among lesbian, gay, and bisexual adults. Journal of Consulting and Clinical Psychology, 67 (6), 945–951. https://doi.org/10.1037//0022-006x.67.6.945


Hinduja, S., & Patchin, J. W. (2022). Bias-based cyberbullying among early adolescents: Associations with cognitive and affective empathy. The Journal of Early Adolescence . https://doi.org/10.1177/02724316221088757

Iganski, P. (2019). Hate crime victimization survey: Report. https://www.osce.org/files/f/documents/8/c/424193.pdf . Accessed 6 Feb 2024

Iganski, P., & Lagou, S. (2015). Hate crimes hurt some more than others: Implications for the just sentencing of offenders. Journal of Interpersonal Violence, 30 (10), 1696–1718. https://doi.org/10.1177/0886260514548584


Iganski, P., & Lagou, S. (2016). The psychological impact of hate crimes on victims: An exploratory analysis of data from the US National crime victimization survey. In E. Dunbar, A. Blanco, & D. CrËvecoeur-MacPhail (Eds.), The psychology of hate crimes as domestic terrorism: U.S. And global issues (pp. 279–292). ABC-CLIO.

James, G., Witten, D., Hastie, T., & Tibshirani, R. (2013). An introduction to statistical learning with applications in R (Vol. 103). Springer, New York. https://doi.org/10.1007/978-1-4614-7138-7

Janoff-Bulman, R., & Hanson Frieze, I. (1983). A theoretical perspective for understanding reactions to victimization. Journal of Social Issues, 39 (2), 1–17.

Judge, M., & Nel, J. A. (2018). Psychology and hate speech: A critical and restorative encounter. South African Journal of Psychology, 48 (1), 15–20. https://doi.org/10.1177/0081246317728165

Kansok-Dusche, J., Ballaschk, C., Krause, N., Zeißig, A., Seemann-Herz, L., Wachs, S., & Bilz, L. (2022). A systematic review on hate speech among children and adolescents: Definitions, prevalence, and overlap with related phenomena. Trauma, Violence & Abuse . https://doi.org/10.1177/15248380221108070

Keipi, T., Näsi, M., Oksanen, A., & Räsänen, P. (2017). Online hate and harmful content. Cross-national perspectives . Routledge Taylor & Francis Group.

Kliem, S., Mößle, T., Rehbein, F., Hellmann, D. F., Zenger, M., & Brähler, E. (2015). A brief form of the perceived social support questionnaire (F-SozU) was developed, validated, and standardized. Journal of Clinical Epidemiology, 68 (5), 551–562. https://doi.org/10.1016/j.jclinepi.2014.11.003

Leets, L. (2002). Experiencing hate speech: Perceptions and responses to Anti-Semitism and antigay speech. Journal of Social Issues, 58 (2), 341–361. https://doi.org/10.1111/1540-4560.00264

Maruna, S., & Copes, H. (2005). What have we learned from five decades of neutralization research? Crime and Justice, 32 , 221–320. https://doi.org/10.1086/655355

McDevitt, J., Balboni, J., Garcia, L., & Gu, J. (2001). Consequences for Victims: A comparison of bias- and non-bias-motivated assaults. American Behavioral Scientist, 45 (4), 697–713. https://doi.org/10.1177/0002764201045004010

Mellgren, C., Andersson, M., & Ivert, A.-K. (2017). For Whom does hate crime hurt more? A comparison of consequences of victimization across motives and crime types. Journal of Interpersonal Violence, 36 (3–4), 1–25. https://doi.org/10.1177/0886260517746131

Müller P., Dreißigacker, A., Isenhardt, A. (2022). Cybercrime gegen Privatpersonen: Ergebnisse einer repräsentativen Bevölkerungsbefragung in Niedersachsen (KFN-Forschungsbericht No. 168). Hannover. https://kfn.de/wp-content/uploads/Forschungsberichte/FB_168.pdf . Accessed 6 Feb 2024

Näsi, M., Räsänen, P., Hawdon, J., Holkeri, E., & Oksanen, A. (2015). Exposure to online hate material and social trust among Finnish youth. Information Technology & People, 28 (3), 607–622. https://doi.org/10.1108/ITP-09-2014-0198

Obermaier, M., Hofbauer, M., & Reinemann, C. (2018). Journalists as targets of hate speech. How German journalists perceive the consequences for themselves and how they cope with it. Studies in Communication and Media, 7 (4), 499–524. https://doi.org/10.5771/2192-4007-2018-4-499

Obermaier, M., & Schmuck, D. (2022). Youths as targets: Factors of online hate speech victimization among adolescents and young adults. Journal of Computer-Mediated Communication, 27 (4), zmac012. https://doi.org/10.1093/jcmc/zmac012

Ortega, S. T., & Myles, J. L. (1987). Race and gender effects on fear of crime: An interactive model with age. Criminology, 25 (1), 133–152. https://doi.org/10.1111/j.1745-9125.1987.tb00792.x

Parker, K. D., & Ray, M. C. (1990). Fear of crime: An assessment of related factors. Sociological Spectrum, 10 (1), 29–40. https://doi.org/10.1080/02732173.1990.9981910

Paz, M. A., Montero-Díaz, J., & Moreno-Delgado, A. (2020). Hate speech: A systematized review. SAGE Open, 10 (4), 1–12. https://doi.org/10.1177/2158244020973022

Qian, J., Wang, H., ElSherief, M., Yan, X. (2021). Lifelong learning of hate speech classification on social media. Advance online publication. https://doi.org/10.48550/arXiv.2106.02821

Reed, C. (2009). The challenge of hate speech online. Information & Communications Technology Law, 18 (2), 79–82. https://doi.org/10.1080/13600830902812202

Saha, K., Chandrasekharan, E., & de Choudhury, M. (2019). Prevalence and psychological effects of hateful speech in online college communities. Proc ACM Web Sci Conf, 2019 , 255–264. https://doi.org/10.1145/3292522.3326032


Salmi, V., Smolej, M., & Kivivuori, J. (2007). Crime victimization, exposure to crime news and social trust among adolescents. Young, 15 (3), 255–272. https://doi.org/10.1177/110330880701500303

Scarborough, B. K., Like-Haislip, T. Z., Novak, K. J., Lucas, W. L., & Alarid, L. F. (2010). Assessing the relationship between individual characteristics, neighborhood context, and fear of crime. Journal of Criminal Justice, 38 (4), 819–826. https://doi.org/10.1016/j.jcrimjus.2010.05.010

Schmidt, A., Wiegand, M. (2017). A survey on hate speech detection using natural language processing. In: Proceedings of the Fifth International Workshop on Natural Language Processing for Social Media, pp. 1–10. https://doi.org/10.18653/v1/W17-1101

Sheppard, K. G., Lawshe, N. L., & McDevitt, J. (2021). Hate crimes in a cross-cultural context. In H. N. Pontell (Ed.), Oxford research encyclopedia of criminology and criminal justice. Oxford University Press. https://doi.org/10.1093/acrefore/9780190264079.013.564

Smith, W. R., & Torstensson, M. (1997). Gender differences in risk perception and neutralizing fear of crime: Toward resolving the paradoxes. British Journal of Criminology, 37 (4), 608–634. https://doi.org/10.1093/oxfordjournals.bjc.a014201

Snedker, K. A. (2015). Neighborhood conditions and fear of crime: A reconsideration of sex differences. Crime & Delinquency, 61 (1), 45–70. https://doi.org/10.1177/0011128710389587

Stahel, L., & Baier, D. (2023). Digital hate speech experiences across age groups and their impact on well-being: A nationally representative survey in Switzerland. Cyberpsychology, Behavior and Social Networking, 26 (7), 519–526. https://doi.org/10.1089/cyber.2022.0185

Ștefăniță, O., & Buf, D.-M. (2021). Hate speech in social media and its effects on the LGBT community: A review of the current research. Romanian Journal of Communication and Public Relations, 23 (1), 47. https://doi.org/10.21018/rjcpr.2021.1.322

Sykes, G. M., & Matza, D. (1957). Techniques of neutralization: A theory of delinquency. American Sociological Review, 22 (6), 664–670. https://doi.org/10.2307/2089195

Tseloni, A., & Zarafonitou, C. (2008). Fear of crime and victimization. European Journal of Criminology, 5 (4), 387–409. https://doi.org/10.1177/1477370808095123

Tynes, B. M. (2006). Children, Adolescents, and the Culture of Online Hate. In N. E. Dowd, D. G. Singer, & R. F. Wilson (Eds.), Handbook of children, culture, and violence (pp. 267–288). Sage Publications.

Tynes, B. M., Rose, C. A., Hiss, S., Umaña-Taylor, A. J., Mitchell, K., & Williams, D. (2016). Virtual environments, online racial discrimination, and adjustment among a diverse, school-based sample of adolescents. International Journal of Gaming and Computer-Mediated Simulations, 6 (3), 1–16. https://doi.org/10.4018/ijgcms.2014070101

Wachs, S., Gámez-Guadix, M., & Wright, M. F. (2022). Online hate speech victimization and depressive symptoms among adolescents: The protective role of resilience. Cyberpsychology, Behavior and Social Networking, 25 (7), 416–423. https://doi.org/10.1089/cyber.2022.0009

Wachs, S., & Wright, M. F. (2018). Associations between Bystanders and perpetrators of online hate: The moderating role of toxic online disinhibition. International Journal of Environmental Research and Public Health . https://doi.org/10.3390/ijerph15092030

Waldron, J. (2012). The harm in hate speech . Harvard University Press.

Walther, J. B. (2022). Social media and online hate. Current Opinion in Psychology, 45 , 101298. https://doi.org/10.1016/j.copsyc.2021.12.010

Warner, W., Hirschberg, J. (2012). Detecting Hate Speech on the World Wide Web. In: Proceedings of the 2012 Workshop on Language in Social Media, pp. 19–26.

Weiss, K. G. (2011). Neutralizing sexual victimization: A typology of victims’ non-reporting accounts. Theoretical Criminology, 15 (4), 445–467. https://doi.org/10.1177/1362480610391527

Wright, M. F., Wachs, S., & Gámez-Guadix, M. (2021). Youths’ coping with cyberhate: Roles of parental mediation and family support. Comunicar, 29 (67), 21–33. https://doi.org/10.3916/C67-2021-02

Zick, A., Wolf, C., Küpper, B., Davidov, E., Schmidt, P., & Heitmeyer, W. (2008). The syndrome of group-focused enmity: The interrelation of prejudices tested with multiple cross-sectional and panel data. Journal of Social Issues, 64 (2), 363–383. https://doi.org/10.1111/j.1540-4560.2008.00566.x

Download references


The authors thank the anonymous reviewers for their helpful comments.

This work is based on the data of the population survey in Lower Saxony 2020 as part of the project Cybercrime against private users funded by the Pro*Niedersachsen funding program of the Lower Saxony Ministry of Science and Culture.

Author information

Authors and Affiliations

Criminological Research Institute of Lower Saxony (KFN e.V.), Lützerodestr. 9, 30161, Hannover, Germany

Arne Dreißigacker & Philipp Müller

Educational Institute of Lower Saxony’s Prison System, Criminological Services, Fuhsestr. 30, 29221, Celle, Germany

Anna Isenhardt

University of Kassel, Institute for Psychology, Holländische Str. 36-38, 34127, Kassel, Germany

Jonas Schemmel


Author 1: Conceptualization (equal); Formal analysis; Methodology (lead); Visualization; Writing—original draft (equal); Writing—review & editing (equal). Author 2: Investigation; Conceptualization (equal); Writing—original draft (equal); Writing—review & editing (equal). Author 3: Investigation (lead); Project administration; Supervision (project); Writing—review & editing (equal). Author 4: Conceptualization (equal); Methodology (supporting); Supervision; Writing—review & editing (equal).

Corresponding author

Correspondence to Arne Dreißigacker .

Ethics declarations

Competing interests

The authors declare that they have no competing interests.

Additional information

Publisher's note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

See Figs. 2 and 3.

Figure 2: Correlation matrix

Figure 3: Residual plots (Model 2)

See Tables 3 and 4.

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/ . The Creative Commons Public Domain Dedication waiver ( http://creativecommons.org/publicdomain/zero/1.0/ ) applies to the data made available in this article, unless otherwise stated in a credit line to the data.

About this article

Cite this article

Dreißigacker, A., Müller, P., Isenhardt, A. et al. Online hate speech victimization: consequences for victims’ feelings of insecurity. Crime Sci 13, 4 (2024). https://doi.org/10.1186/s40163-024-00204-y

Received: 08 November 2023

Accepted: 03 February 2024

Published: 10 February 2024

DOI: https://doi.org/10.1186/s40163-024-00204-y


  • Online hate speech
  • Victimization
  • Feelings of insecurity

Crime Science

ISSN: 2193-7680

Kicking toxic people off social media reduces hate speech on the internet

A Facebook study shows that deleting 100 ‘insult’ accounts had a positive impact on viewership.

Controlling hate speech on the internet poses one of the greatest challenges of our information age. Everyone agrees it’s important, but how effective are the available measures? Some platforms have chosen to remove individual accounts that disseminate toxic content. An internal study by Facebook, which analyzed interactions among 26,000 users, reveals that excluding extremist community leaders is an effective means of reducing hate speech on social media, particularly over the long term. Removing just 100 accounts produced a noticeable impact, since it denied proponents of hate speech a microphone and ultimately improved the broader social media environment.

Earlier studies had suggested that deleting harmful accounts on platforms like Twitter, Reddit and Telegram helped reduce unwanted activity, including broader levels of hate speech. But a cause-and-effect relationship was only recently demonstrated by researchers at Meta (Facebook’s parent company) in a study published in PNAS, a peer-reviewed journal of the National Academy of Sciences (NAS).

Daniel Robert Thomas and Laila A. Wahedi examined how the removal of the most active representatives from six Facebook communities affected their audiences. The Meta researchers aimed to measure how much the audience continued to watch, post, and share harmful content after the instigators were removed. The study found that, on average, “the network disruptions reduced the consumption and production of hateful content, along with engagement within the network among audience members.”

After the accounts were deleted, users saw 10% less hateful content on average. Given that they consumed around five toxic posts daily, the result translates to one fewer every two days. Furthermore, those who ceased interacting with toxic community members were then presented with different content, groups, or communities that were not explicitly linked to violent behavior. However, Facebook’s privacy protection guidelines prevented the researchers from tracking specific user accounts throughout the study.

Organizations that propagate hate may retain a loyal audience for a while, but the expulsion of their leaders may drive some viewers away. Meanwhile, those who are less attached to these leaders are less likely to engage with this content in the first place. This is a positive finding since this is the group most susceptible to the influence of malicious communities. “The results suggest that strategies of targeted removals, such as leadership removal and network degradation efforts, can reduce the ability of hate organizations to successfully operate online,” concludes the study.

But there is no silver bullet that can kill this particular werewolf. People who are kicked off a platform can easily create new accounts and build new networks. They can also migrate to other platforms. Additionally, the authors suggest that other toxic organizations could take over and attract sympathizers of the deleted accounts. To increase the effectiveness of the deletion strategy, the authors propose simultaneous removal of multiple accounts, as this hinders an organization’s ability to find its members and regroup.

Hate speech or toxic speech?

But if account deletion decisions are left to the platforms, will they really want to do it? Sílvia Majó-Vázquez, a research associate at the Reuters Institute for the Study of Journalism at Oxford University (U.K.) and a professor at Vrije Universiteit Amsterdam, said that content moderation on social networks must “be done by seeking a balance between freedom of expression and the preservation of other rights,” so it’s essential to differentiate between hate speech, toxic speech and incivility.

Majó-Vázquez says that incivility, such as disrespectful and sarcastic comments, is the mildest form of negative language. But when it becomes more extreme and “people are chased away from participating in a conversation,” toxic speech is born, which can become violent. “From a democratic perspective, this is very harmful because it discourages the democratic ideal of public debate,” she said.

To ensure the preservation of freedom of expression on social media platforms, careful consideration should be given to suspending or deleting accounts. According to Majó-Vázquez, the suspension process must incorporate conceptual dimensions and utilize manual mechanisms that sufficiently balance the right to freedom of expression with the preservation of other fundamental rights. She advises that a similar exercise should be applied to political figures as well. Automated mechanisms for deleting messages and suspending accounts must be continuously scrutinized, with a priority on expert evaluation of messages, similar to the external advisory boards some platforms have already implemented.

According to a recent study conducted in seven countries by the Reuters Institute, the correlation between toxicity and engagement is not always direct, and varies based on the content topic and severity. The study analyzed Twitter data during the pandemic and found that the most toxic tweets were often unpopular with audiences. “In fact, we see that the most toxic tweets lose popularity and messages with low levels of toxicity increase in popularity,” said Majó-Vázquez. The study did not offer conclusive insights on whether this was due to audiences disliking toxic content or the moderation techniques employed by the platform. “We can’t answer this question with the data from our study, but this result challenges the premise that toxicity is always the most popular online currency,” she said.





Speech on Internet

Very good morning to all. Today, I am here to present a speech on the internet. Someone has rightly said that the world is a small place, and with the advent of the internet, this saying has become a reality. The internet has truly brought the world together, and the distance between two people hardly matters today. We all know about the technological advancements happening in the world, and the internet is one of the major ones. Today the internet is easily available to many individuals, and it is rapidly changing the way we work, travel, learn and entertain ourselves.


Evolution of Internet

Many of you are aware of what the internet is. Still, I would like to highlight some of its aspects. The internet is a facility wherein two devices are connected through signals, so that information can be exchanged between them.

The history of the internet dates back more than 40 years, to its first use in the United States of America; its inventors were Robert E. Kahn and Vint Cerf. Earlier, the internet was only used to send emails between two computers. Today it has reached all distant parts of the globe, with billions of users who rely on it for exchanging information, entertainment, money transfers, and more.

Pros of the Internet

The internet facility has many advantages, and it has proved to be a milestone in the technological advancement of humankind. It allows users to exchange information and communicate. Two users sitting in distant corners of the world can easily communicate through emails, chats, and video conferencing using the internet.

It provides information of all kinds to its users. It also provides entertainment through movies, music, and games. Various day-to-day activities such as booking travel tickets, banking, and shopping can be easily done through the internet.

Nowadays the internet also offers various dating websites and matrimonial websites by which one can find their prospective soul mate.

The internet also offers its users ways to earn online, by means of blogs and video blogs. These are some of the major benefits of the internet; however, the internet has a dark side as well.

Cons of the Internet

Many people misuse information for fraud and illegal work. With the internet in the wrong hands, a number of cybercrimes are taking place, which is eroding people’s trust in the internet.

Abuse over social media is also prevalent, wherein people of negative mentality abuse others on the basis of caste, race, color, appearance, etc. Addiction to online games is one of the major problems parents face today, as children get addicted to online games and neglect their studies and outdoor activities.

The internet has nowadays become such an important part of people’s lives that it is hardly possible to spend even a day without using it. Thus, even after seeing its negatives, it is not practically possible to avoid the internet completely. However, we can place time limits or restrictions on its usage, especially for children.

Parents and teachers can monitor the online activities of their children and guide them on the proper use of the internet. We should also educate people and make them aware of online cybercrime and fraud. Thus, with proper precautions and safety measures, the internet can prove to be a boon for the development of human society.


Speech On Communication [1,2,3 Minutes]

Communication is an important aspect of human life. It helps us convey our thoughts and feelings to others. But while good communication can solve big problems, wrong communication can lead to many controversies.

In this article, we are sharing some examples of a “speech on communication” in different word lengths and delivery durations. These are written in simple, easy-to-understand English.

Speech On Communication for 1 Minute

Good morning and welcome all of you gathered here. I am here to present a speech on communication.

Communication has the purpose of transferring thoughts, ideas, and information to others. But it is very important to convey the information in the correct form, otherwise people may interpret it the wrong way.

Hence, communication is not just firing loads of words at others; we need to have quality conversations with the help of enhanced communication skills. First of all, good communication skills involve the choice of words, gestures, silence, expressions, etc.

Apart from that, we need to understand the other person’s perspective by listening to him carefully. This will give us an idea of how to communicate with a specific person.

Furthermore, you can choose your words wisely to create a positive influence on people. For example, if you make a person wait for you, you can say to him “thank you for sparing me your valuable time” instead of “sorry, I got late for this reason”.

At last, I want to say; From the first ray of the sun to the last minute of the day, we communicate with a number of people. Good communication skills can help us grow in each aspect of life. So, we should keep improving our communication skills. Thank you!

2-Minute Speech On Communication

Welcome, honourable principal, respected teachers, beloved parents and dear friends. Today, we are gathered here for this special occasion of… I am here to say a few words about communication skills.

We use a number of tools to make our life easy. One of these tools is communication. Fundamentally, the purpose of communication is to convey our message to other people. But if the other person interprets your words in the wrong way, the purpose of communication is not served.

It means we need to use this tool very carefully, otherwise it can create problems for us rather than solving them. So, there is a need to improve our communication skills in order to convey the correct message. First of all, we should understand that communication is an art more than a science.

Once we master the art of communication, we can win the hearts of people and convince them. Now, communication can help you progress in every sphere of life be it your personal life or your professional life. This is the reason that most companies employ people with good communication skills.

Most importantly, good communication skills do not focus only on the choice of words; there are many other elements that make a conversation healthy and effective. These elements include gestures, signs, symbols, pauses, silence, body language and expressions.

One can easily improve communication skills through various means such as by enrolling in a course, following good communicators, and reading books on good communication skills. But this is not enough, you need to practice once you understand the basics of good communication.

To sum it up, improving our communication skills is a need for each one of us so that we can build good relationships with others. Thank you!

3 Minute Speech On The Importance Of Communication

First of all, good morning to the honourable principal, respected teachers and loving friends and all of you present here today. In your special presence, I would like to say a few words about communication and its importance.

We live in two different worlds. One is the internal world of desires, thoughts, feelings, fear and emotions etc. The second is the external world we are surrounded with. In order to bridge the gap between the internal and external worlds, we need a device. This device is called “communication”.

Human life has always been and is full of communication. In earlier times, when no language had developed, humans conversed with each other using hand gestures, signs and expressions. Today, we have various means of communication such as social media, instant messaging, video calls, phone calls, emails, etc.

Whether you are a student or a working professional, you need to communicate with people for a number of reasons. Communication helps us convey our thoughts and feelings to others. But while good communication can solve big problems, wrong communication can lead to many controversies.

Hence, it is essential for everyone to communicate well because people understand each other with the help of communication. On the one hand, healthy communication can help you build good relationships. On the other hand, poor communication can destroy healthy relationships.

First of all, one needs to understand the basics of communication in order to develop good communication skills. Communication involves many elements one needs to pay attention to. These elements involve gestures, signs, symbols, pauses, silence, body language and expressions.

Apart from this, you can choose your words wisely to create a positive influence on people. For example, if you make a person wait for you, you can say to him “thank you for sparing me your valuable time” instead of “sorry, I got late for this reason”.

A person with good communication skills is respected and loved by all. This is because he knows how to win people’s hearts and convince them. This quality can lead you to the path of progress in all walks of life be it personal or professional.

Most notably, good communication skills open many doors for employment, as companies prefer hiring people with good communication skills. So, everyone should start improving their communication skills. This will not only enhance your personality but also earn you recognition in society.

To sum it up, communication skills play a crucial role in our daily lives. We must strive to improve them continuously. This is all I wanted to share with you. Thank you!


Twitter Hate Speech Accounts Exploited Israel’s War in Gaza to Grow Four Times Faster

  • By Miles Klee

Last month, a judge tossed a lawsuit from Elon Musk’s X (formerly Twitter) against the Center for Countering Digital Hate, an anti-extremism watchdog, that alleged the group had financially harmed the company by reporting on violent and hateful speech proliferating across its platform. The ruling stated in no uncertain terms that Musk and X were trying to punish the CCDH for exercising First Amendment rights.

“None of the accounts deserved the enormous visibility they received by cynically goading, upsetting, and terrorizing others into emotional responses,” writes Imran Ahmed, CEO of the CCDH, in an introduction to the new report, titled “Hate Pays: How X accounts are exploiting the Israel-Gaza conflict to grow and profit.” Ahmed adds that the influencers’ fiercest critics also gave them greater reach and exposure because “even ‘negative’ engagement counts as engagement on social media platforms, increasing their attractiveness to the algorithms that control what content gets promoted in timelines and what does not.”

Rolling Stone previously reported on a number of the influencers tracked in the CCDH report who have used nominal support for Palestine as cover to disseminate antisemitic content, including Ryan Dawson, once banned from the platform but reinstated under Musk, and former UFC fighter Jake Shields, who together openly dabble in Holocaust denial on X. Others quick to exploit the slaughter in Gaza to demonize Jewish people were self-described “raging antisemite” Keith Woods, known to have welcomed white nationalist Nick Fuentes onto his YouTube show; anonymous account @CensoredMen, which originally existed to support misogynist manosphere celebrity Andrew Tate as he faces rape and human trafficking charges in Romania; and Jackson Hinkle, a prolific purveyor of misinformation whose account picked up a staggering 2 million followers in the four months after the Oct. 7 attacks.


The last two hate-fueled feeds the CCDH studied were “Way of the World,” an equal-opportunity racism account just as likely to post offensive memes about Black people or Jews, often while railing against immigration, and the account of failed U.S. Senate candidate Sam Parker, who has shared conspiracist content about a cabal of Jews supposedly running the world, and invoked the canard that the Rothschild banking family sits at the top of this hidden organization.

“It is clear from the results of a number of studies by CCDH and others that the new leadership of X has not discouraged antisemites, and instead has effectively welcomed them, and accounts which seek to spread hate,” the report concludes. “The increase in followers and reach for the accounts identified in this report is, we believe, a direct result of the policy changes made under owner, Elon Musk, and CEO, Linda Yaccarino.”

Musk hasn’t merely enabled antisemitism, though: he sometimes wades into such rhetoric himself, once encouraged followers to follow an antisemitic account for updates on the Gaza war, and has effectively endorsed ideas akin to the “Great Replacement” conspiracy theory, many versions of which posit that Jews are deliberately orchestrating mass immigration of non-white populations into the West.

“It’s tilted in favor of hate and lies,” Imran Ahmed writes of social media in his summary of this research. “Those preaching tolerance and goodwill have to ice-skate uphill to keep up.” And in his time as the owner of X, Musk has done nothing to reverse that dynamic — quite the opposite. In the absence of any corporate responsibility, the CCDH can only recommend that regulators and lawmakers demand accountability and transparency, while advertisers and users can reevaluate whether they want any part of this destructive online economy.

Musk, meanwhile, can complain about this damning appraisal all he wants, as he typically does when the sheer volume of malignant discourse on X is laid out in the starkest terms. The only difference is that this time, he probably won’t bother to launch a frivolous lawsuit about it.


Rolling Stone is a part of Penske Media Corporation. © 2024 Rolling Stone, LLC. All rights reserved.


Middle East latest: Israel says it will open crossing for aid trucks to enter Gaza

COGAT, the Israeli body which coordinates humanitarian aid to Gaza, has told Sky News' Mark Stone that a crossing next to Erez is to open. Scroll through live updates while you listen to our latest podcast on how tensions are escalating in the region.

Friday 12 April 2024 04:38, UK

  • Israel-Hamas war
  • Biden says US support for Israel 'ironclad' on Iran
  • Iranian threats against Israel 'unacceptable', PM says
  • UK foreign secretary 'deeply concerned' about possible Iranian 'miscalculation'
  • Israel says crossing will open for aid trucks tonight  
  • Three sons of Hamas leader killed in strike | IDF gives details of attack
  • Alistair Bunkall: Attack from Iran on Israel reported to be imminent
  • Explained: Who is Ismail Haniyeh?
  • Watch: Moment he is told his family has been killed
  • Alex Crawford report: Yemeni fishermen face threat of Houthi attack - but on Gaza they are firmly behind the militants

We'll be back tomorrow morning with more updates on the Israel-Hamas war and wider tensions in the Middle East. 

The US has told its staff in Israel not to travel outside three cities amid the threat of a retaliatory strike from Iran. 

Employees and their family members have been restricted from personal travel outside the greater Tel Aviv, Jerusalem and Be'er Sheva areas, the US embassy said. 

"Out of an abundance of caution, US government employees and their family members are restricted from personal travel outside the greater Tel Aviv (including Herzliya, Netanya, and Even Yehuda), Jerusalem, and Be'er Sheva areas until further notice," it said in a security alert on its website. 

Iran has vowed revenge for a deadly airstrike on its embassy compound in Damascus last week.

US President Joe Biden has said Iran was threatening to launch a "significant attack in Israel," and that his country remained committed to its ally's security.

By Alex Rossi , international correspondent

In Ashkelon, an Israeli city on the border of Gaza, they are used to living under rockets from Hamas but they are worried about where this conflict is going and an aerial attack from an even bigger foe.

Along the seafront, on the promenade, a few walkers and runners are out trying to enjoy the warming weather. Others are attempting to celebrate.

At a bar mitzvah on the terrace of a local restaurant, there are drinks and jokes but the talk is of war and security.

Korin Peretz tells me her fears about the future, saying: "I hope there is nothing happening from Iran. It's very terrifying.

"Today we celebrate the bar mitzvah of my son. I couldn't sleep at night, always worrying about this situation and I hope it's all over. It's not comfortable. It doesn't feel very good.

"Our life here in Israel is not safe right now but there is no other place."

For years, Israel and Iran have been enemies, but the shadow war that's been fought between them is now threatening to burst into the open.

There's no doubt Israel is in a dangerous region and since its creation in 1948, it's had to deal with a number of existential threats.

But the trauma of the 7 October Hamas attack has left this nation feeling especially vulnerable.

Iran is vowing retaliation after two generals were killed in an airstrike on the consulate in Damascus, Syria.

Read more of Rossi's eyewitness report here ...

Israel has told Sky News it will open a crossing to allow aid trucks into Gaza tonight. 

A spokesman for COGAT, the Israeli body which coordinates humanitarian aid to Gaza, has told our US correspondent Mark Stone the crossing is next to Erez. 

The Port of Ashdod will open to humanitarian aid "in the coming days", he added. 

The intention to open the new crossing and port were announced by the Israeli government last week following a phone call between US President Joe Biden and Israeli Prime Minister Benjamin Netanyahu.

The original crossing between Israel and Gaza at Erez was heavily damaged during fighting on 7 October, so the aid is expected to cross the border via a newly constructed unofficial crossing point.

Israel would respond directly to any attack by Iran, Israeli Defence Minister Yoav Gallant has said. 

"A direct Iranian attack will require an appropriate Israeli response against Iran," Mr Gallant told the US defence secretary. 

Mr Gallant's comments come as tensions continue to rise in the Middle East, with Iran vowing to launch a retaliatory strike to Israel's apparent attack on its embassy in Syria. 

The US has told Iran that it was not involved in an airstrike on its embassy in Syria, the White House has said. 

Suspected Israeli warplanes bombed the Iranian building in Damascus, killing a top military commander and marking a major escalation in Middle East tensions. 

Israel has not commented on the attack, but the US military believes the country carried out the airstrike. 

"We communicated to Iran that the US had no involvement in the strike that happened in Damascus and we have warned Iran not to use this attack as a pretext to escalate further in the region or to attack US facilities or personnel," White House press secretary Karine Jean-Pierre said. 

The US has been on high alert about possible retaliatory strikes from Iran, and US envoys have been working urgently to try to lower tensions. 

"Obviously, we don't want this conflict to spread," Ms Jean-Pierre said. 

The Israeli military has said it "will know how to act where needed" as Iran vowed to retaliate to last week's deadly strike on the Iranian embassy in Syria. 

"An attack from Iranian territory would be clear proof of Iranian intentions to escalate the Middle East and stop hiding behind the proxies," said Israel Defence Forces spokesman Daniel Hagari. 

"In the last few months, we have improved and advanced our offensive capabilities and we will know how to act where needed."

Mr Hagari also said that US Central Command General Michael Kurilla arrived in Israel today and "held a strategic assessment of the security challenges in the region" with Israel's chief of staff.

Despite the new threats, Israel's Home Front Command has not ordered any changes in the public's routine.

US President Joe Biden has emphasised his country's "ironclad" support for Israel after Iran's threat of retaliation.

Israel has been widely blamed for the strike on Iran's embassy in Damascus, but it has not commented on the attack. 

The UK's foreign secretary has said he is "deeply concerned" about a potential "miscalculation" by Iran. 

David Cameron said he had "made clear" to the country's foreign minister Hossein Amir-Abdollahian that Iran "must not draw the Middle East into a wider conflict".

"I am deeply concerned about the potential for miscalculation leading to further violence," he wrote on X. 

"Iran should instead work to de-escalate and prevent further attacks." 

Tehran has vowed to retaliate after two of its top generals were killed in an airstrike on its consulate in Syria earlier this month that the US military believes was carried out by Israel.

Although Israel has not commented on the attack, Iran's leader the Ayatollah Ali Khamenei said the country "must be punished". 

Earlier today, the UK's Prime Minister Rishi Sunak said the Iranian threats were "unacceptable". 

The world would be in "uncharted territory" if Iran follows through on threats to attack Israel, a former head of the Middle East department at the Foreign Office has told Sky's Politics Hub. 

Iran has vowed retaliation for a deadly strike on the Iranian consulate in Syria earlier this month, which killed two of its top generals.

Israel has not commented on the attack, but Tehran's leader, Ayatollah Ali Khamenei, said the country "must be punished, and it shall be".

"If Iran was to miscalculate and attack Israel directly, then I think … Israel would respond in kind, and we could find ourselves in uncharted territory," Sir William Patey said. 

He added that by publicly vowing retaliation, it will be hard for the Ayatollah to back down, although they will be facing a "dilemma" on potential targets.

"The Ayatollah Khamenei has said twice now that they're going to attack, so he's very publicly out there saying that there will be reprisals - quite hard for him to back down," he said. 

"It is possible that they might use proxies, which is their standard methodology. But they have said they feel the need to attack Israel directly." 

He added that they may not attack Israel directly, but that their options are limited when it comes to other targets.

The contents of 600 aid trucks are stuck at the Kerem Shalom crossing, an Israeli authority has said. 

The Coordination of Government Activities in the Territories called out the United Nations for the backlog. 

It said the supplies were "waiting to be collected" by the United Nations on the Gaza side of the crossing. 

"We extended crossing hours and scaled up our capacities. UN do your job," it added. 

"The bottlenecks are not on the Israeli side." 

Israel has been facing mounting international pressure to boost aid deliveries to the besieged region. 

Earlier today, the Israeli military said it was constructing a new northern crossing for aid to reach Gaza. 

Aid trucks currently come from Egypt to the Gaza border and are inspected by Egyptian and Israeli authorities before being able to proceed. 

Once checked, they are allowed to enter the region and the Palestinian Red Crescent or the UNRWA deliver the goods to civilians. 


How Tesla Planted the Seeds for Its Own Potential Downfall

Elon Musk’s factory in China saved his company and made him ultrarich. Now, it may backfire.

This transcript was created using speech recognition software. While it has been reviewed by human transcribers, it may contain errors. Please review the episode audio before quoting from this transcript and email [email protected] with any questions.

From “The New York Times,” I’m Katrin Bennhold. This is “The Daily.”


Today, the story of how China gave Tesla a lifeline that saved the company — and how that lifeline has now given China the tools to beat Tesla at its own game. My colleague, Mara Hvistendahl, explains.

It’s Tuesday, April 9.

So, Mara, you’ve spent the past four months investigating Elon Musk and his ties to China through his company, Tesla. Tell us why.

Well, a lot of American companies are heavily invested in China, but Tesla’s kind of special. As my colleagues and I started talking to sources, we realized that many people felt that China played a crucial role in rescuing the company at a critical moment when it was on the brink of failure and that China helps account for Tesla’s success, for making it the most valuable car company in the world today, and for making Elon Musk ultra rich.

That’s super intriguing. So maybe take us back to the beginning. When does the story start?

So the story starts in the mid 2010s. Tesla had been this company that had all this hype around it. But —

A lot of people were shocked by Tesla’s earnings report. Not only did they make a lot less money than expected, they’re also making a lot less cars.

Tesla was struggling.

The delivery of the Model 3 has been delayed yet again.

Tesla engineers are saying 40 percent of the parts made at the Fremont factory need reworking.

At the time, they made their cars in Fremont, California, and they were facing production delays.

Tesla is confirming that Cal/OSHA is investigating the company over concerns over workplace safety.

Elon Musk has instituted a kind of famously grueling work culture at the factory, and that did not go over well with California labor law.

The federal government now has four active investigations involving Tesla.

They were clashing with regulators.

The National Transportation Safety Board will investigate a second crash involving Tesla’s autopilot system.

Billionaire entrepreneur Elon Musk — friends are really concerned about him. That’s what Musk told “The New York Times.”

And by 2018, he was having all of these crises.

According to “The Times,” Musk choked up multiple times and struggled to maintain his composure during an hour-long interview about turmoil at his electric car company, Tesla.

So all of this kind of converged to put immense pressure on him to do something.

And where does China come in?

Well, setting up a factory in China, in a way, would solve some of these problems for Musk. Labor costs were lower. Workers couldn’t unionize there. China provided access to this steady supply of cheaper parts. So Elon Musk was set on going to China. But first, Tesla and Musk wanted to change a key policy in China.

Hmm, what kind of policy?

So they wanted China to adopt a policy that was aimed at lowering car emissions. And the idea was that it would be modeled after a similar policy in California that had benefited Tesla there.

OK, so explain what that policy actually did. And how did it benefit Tesla?

So California had this system called the Zero-Emission Vehicle program. And that was designed to encourage companies to make cleaner cars, including electric vehicles. And they did that by setting pollution targets. So companies that made a lot of clean cars got credits. And then companies that failed to meet those targets, that produced too many gas-guzzling cars, would have to buy credits from the cleaner companies.

So California is trying to incentivize companies to make cleaner cars by forcing the traditional carmakers to pay cleaner car makers, which basically means dirtier car makers are effectively subsidizing cleaner cars.

Yes, that’s right. And Tesla, as a company that came along just making EVs, profited immensely from this system. And in its early years, when Tesla was really struggling to stay afloat, the money that it earned from selling credits in California to polluting car companies was absolutely crucial, so much so that the company structured a lot of its lobbying efforts around this system, around preserving these credits. And we talked to a former regulator who said as much.

How much money are we talking about here?

So from 2008, when Tesla unveiled its first car, up until the end of last year, Tesla made almost $4 billion by selling credits in California.

Wow. So Musk basically wants China to recreate this California-style program, which was incredibly lucrative for Tesla. And Tesla is basically holding that up as a condition of building a factory in China.

Right. And at this point in the story, an interesting alliance emerges. Because it wasn’t just Tesla that wanted this emissions program in China. It was also environmentalists from California who had seen the success of the program up close in their own state.

If you go back to that period, to the early 2010s, I was living in China at the time in Beijing and Shanghai. And it was incredibly polluted. We called it airpocalypse at times. I had my first child in China at that point. And as soon as it was safe to put a baby mask on her, we put a little baby mask on her. There were days where people just would try to avoid going outside because it was so polluted. And some of the pollution was actually wafting across the Pacific Ocean to California.

Wow, so California is experiencing that Chinese air pollution firsthand and, in a way, has a direct stake in lowering it.

That’s right. So Governor Jerry Brown, for example — this became kind of his signature issue, was working with China to clean up the environment, in part by exporting this emission scheme. It was also an era of a lot more US-China cooperation. China was seen as absolutely crucial to combating climate change.

So you had all these groups working to get this California emissions scheme exported to China — and the governor’s office and environmental groups and Tesla. And it worked. In 2017, China did adopt a system that was modeled after California’s.

It’s pretty incredible. So California basically exports its emissions-trading system to China, which I imagine at the time was a big win for Californian environmentalists. But it was also a big win for Tesla.

It was definitely a big win for Tesla. And we know that in just a few years, Tesla made almost $1 billion from the emissions-trading program it helped lobby for in China.

So Elon Musk goes on, builds a factory in China. And he does so in Shanghai, where he builds a close relationship with the top official in the city, who actually is now the number-two official in all of China, Li Qiang.

So according to Chinese state media, Elon Musk actually proposed building the factory in two years, which would be fast. And Li came back and proposed that they do it in one year, which — things go up really quickly in China. But even for China, this is incredibly fast. And they broke ground on the factory in January 2019. And by the end of the year, cars were rolling off the line. So then in January 2020, Musk was able to get up on stage in Shanghai and unveil the first Chinese-made Teslas.

Really want to thank the Tesla team and the government officials that have been really helpful in making this happen.

Next to him on stage is Tesla’s top lobbyist who helped push through some of these changes.

Thank you. Yeah, everybody can tell Elon’s super, super happy today.


And she says —

Music, please.

Cue the music. [UPBEAT MUSIC]

And he actually broke into dance. He was so happy, a kind of awkward dance.


And what is the factory like?

The Shanghai factory is huge. 20,000 people work there. Tesla’s factories around the world tend to be pretty large, but the Shanghai workers work more shifts. And when Tesla set up in China, Chinese banks ended up offering Tesla $1.5 billion in low-interest loans. They got a preferential tax rate in Shanghai.

This deal was so generous that one auto industry official we talked to said that a government minister had actually lamented that they were giving Tesla too much. And it is an incredibly productive factory. It’s now the flagship export factory for Tesla.

So it opens in late 2019. And that’s, of course, the time when the pandemic hits.

Yes. I mean, you might think that this is really poor timing for Elon Musk. But it didn’t quite turn out that way. In fact, Tesla’s factory in Shanghai was closed for only around two weeks, whereas the factory in Fremont was closed for around two months.

That’s a big difference.

Yes, and it really, really mattered to Elon Musk. If you can think back to 2020, you might recall that he was railing against California politicians for closing his factory. In China, the factory stayed open. Workers were working around the clock. And Elon Musk said on a podcast —

China rocks, in my opinion.

— China rocks.

There’s a lot of smart, hardworking people. And they’re not entitled. They’re not complacent, whereas I see —

We’ve seen a lot of momentum and enthusiasm for electric vehicles, stocks, and Tesla certainly leading the charge.

Tesla’s stock price kept going up.

Tesla has become just the fifth company to reach a trillion-dollar valuation. The massive valuation happened after Tesla’s stock price hit an all-time high of more than $1,000.

So this company that had just a few years earlier been on the brink of failure, looking to China for a lifeline, was suddenly riding high. And —

Tesla is now the most valuable car company in the world. It’s worth more than General Motors, Ford, Fiat, Chrysler.

By the summer, it had become the most valuable car company in the world.

Guess what? Elon Musk is now the world’s richest man.

“Forbes” says he’s worth more than $255 billion.

And Elon Musk’s wealth is tied up in Tesla stock. And in the following year, he became the wealthiest man in the world.

So you have this emission trading system, which we discussed and which, in part, thanks to Tesla, is now established in China. It’s bringing in money to Tesla. And now this Shanghai factory is continuing to produce cars for Tesla in the middle of the pandemic. So China really paid off for Tesla. But what was in it for China?

Well, China wasn’t doing this for charity.

What Chinese leaders really wanted was to turn their fledgling electric vehicle industry into a global powerhouse. And they figured that Tesla was the ticket to get there. And that’s precisely what happened.

We’ll be right back.

So, Mara, you’ve just told us the story of how Elon Musk used China to turn Tesla into the biggest car maker in the world and himself — at one point — into the richest man in the world. Now I want to understand the other side of this story. How did China use Tesla?

Well, Tesla basically became a catfish for China’s EV industry.

A catfish, what do you mean by that?

It’s a term from the business world. And, essentially, it means a super aggressive fish that makes the other fish in the pond swim faster. And by bringing in this super competitive, aggressive foreign company into China, which at that point had these fledgling EV companies, Chinese leaders hoped to spur the upstart Chinese EV makers to up their game.

So you’re saying that at this point, China actually already had a number of smaller EV companies, which many people in the West may not even be aware of, these smaller fish in the pond that you were referring to.

Yes, there were a lot of them. They were often locally based. Like, one would be strong in one city, and one would be strong in another city. And Chinese leaders saw that they needed to become more competitive in order to thrive.

And China had tried for decades to build up this traditional car industry by bringing in foreign companies to set up joint ventures. They had really had their sights set on building a strong car industry, and it didn’t really work. I mean, how many traditional Chinese car company brands can you name?

Exactly none.

Yeah, right. So going back to the aughts and the 2010s, they had this advantage that many Chinese hadn’t yet been hooked on gas-guzzling cars. There were still many people who were buying their first car ever. So officials had all these levers they could pull to try to encourage or try to push people’s behavior in a certain direction.

And their idea was to try to ensure that when people went to buy their first car, it would be an EV — and not just an EV but, hopefully, a Chinese EV. So they did things like — at the time, just a license plate for your car could cost an exorbitant amount of money and be difficult to get. And so they made license plates for electric vehicles free. So there were all these preferential policies that were unveiled to nudge people toward buying EVs.

So that’s fascinating. So China is incentivizing consumers to buy EV cars and incentivizing also the whole industry to get its act together by chucking this big American company in the mix and hoping that it will increase competitiveness. What I’m particularly struck by, Mara, in what you said is the concept of leapfrogging over the conventional combustion engine phase, which took us decades to live through. We’re still living in it, in many ways, in the West.

But listening to you, it sounds a little bit like China wasn’t really thinking about this transition to EVs as an environmental policy. It sounds like they were doing this more from an industrial-policy perspective.

Right. The environment and the horrible air at the time was a factor, but it was a pretty minor factor, according to people who were privy to the policy discussions. The more significant factor was industrial policy and an interest in building up a competitive sphere.

So China now wants to become a leader in the global EV sector, and it wants to use Tesla to get there. What does that actually look like?

Well, you need sophisticated suppliers to make the component parts of electric vehicles. And just by being in China, Tesla helped spur the development of several suppliers. Like, for example, the battery is a crucial piece of any EV.

And Tesla, with a fair amount of encouragement — and also various levers from the Chinese government — became a customer of a battery maker called CATL, a homegrown Chinese battery maker. And they have become very close to Tesla and have even set up a factory near Tesla’s in Shanghai. And today, with Tesla’s business — and, of course, with the business of some other companies — CATL is the biggest battery maker in the world.

But beyond just stimulating the growth of suppliers, Tesla also made these other fish in the pond swim faster. And the biggest Chinese EV company to come out of that period is one called BYD. It’s short for Build Your Dreams.

We are BYD. You’ve probably never heard of us.

From battery maker to the biggest electric vehicle or EV manufacturer in China.

They’ve got a lot of models. They’ve got a lot of discounts. They’ve got a lot of market growth.

China’s biggest EV maker just overtook Tesla in terms of worldwide sales.

BYD 10, Chinese automobile redefined.

I’ve actually started seeing that brand on the streets here in Europe recently, especially in Germany, where my brother actually used to lease a Tesla and now leases a BYD.

Does he like it?

He does. Although he did, to be fair, say that he misses the luxury of the Tesla, but it just became too expensive, really.

The price point is a huge reason that BYD is increasingly giving Tesla a run for its money. Years ago, back in 2011 —

Although there’s competitors now ramping up. And, as you’re familiar with, BYD, which is also —

— Elon Musk actually mocked their cars.

— electric vehicles, here he is trying to compete. Why do you laugh?

He asked an interviewer —

Have you seen their car?

I have seen their car, yes.

— have you seen their cars? Sort of suggesting, like, they’re no competition for us.

You don’t see them at all as a competitor?

Why is that? I mean, they offer a lower price point.

I don’t think they have a great product. I think their focus is — and rightly should be — on making sure they don’t die in China.

But they have been steadily improving. They’ve been in the EV space for a while, but they really started improving a few years ago, once Tesla came on the scene. That was due to a number of factors, not entirely because of Tesla. But Tesla played a role in helping train up talent in China. One former Tesla employee who worked at the company as they were getting set up in China told me that most of the employees who were at the company at the time now work for Chinese competitors.

So they have really played this important role in the EV ecosystem.

And you mentioned the price advantage. So just for comparison, what does an average BYD sell for compared to a more affordable Tesla car?

So BYD has an ultra-cheap model called the Seagull that sells for around $10,000 now in China, whereas Tesla Model 3s and Model Ys in China sell for more than twice that.

Wow. How’s BYD able to sell EVs at these much lower prices?

Well, the Seagull is really just a simpler car. It has less range than a Tesla. It lacks some safety measures. But BYD has this other crucial advantage, which is that they’re vertically integrated. Like, they control many aspects of the supply chain, up and down the supply chain. When you look at the battery level, they make batteries. But they even own the mines where lithium is mined for the batteries.

And they recently launched a fleet of ships. So they actually operate the boats that are sending their cars to Europe or other parts of the world.

So BYD is basically cutting out the middleman on all these aspects of the supply chain, and that’s how they can undercut other car makers on price.

Yeah. They’ve cut out the middleman, and they’ve cut out the shipping company and almost everything else.

So how is BYD doing now as a company compared to Tesla?

In terms of market cap, they’re still much smaller than Tesla. But, crucially, they overtook Tesla in sales in the last quarter of last year.

Yeah, that was a huge milestone. Tesla still dominates in the European market, which is a very important market for EVs. But BYD is starting to export there. And Europe traditionally is kind of an automotive powerhouse, and the companies and government officials there are very, very concerned. I interviewed the French finance minister, and he told me that China has a five- to seven-year head start on Europe when it comes to EVs.

Wow. And what has Elon Musk said about this incredible rise of BYD in recent years? Do you think he anticipated that Tesla’s entry into the Chinese market could end up building up its own competition?

Well, I can’t get inside his head, and he did not respond to our questions. But —

The Chinese car companies are the most competitive car companies in the world.

— he has certainly changed his tune. So, remember, he was joking about BYD some years ago.

Yeah, he’s not joking anymore.

I think they will have significant success.

He had dismissed Chinese EV makers. He now appears increasingly concerned about these new competitors —

Frankly, I think if there are not trade barriers established, they will pretty much demolish most other car companies in the world.

— to the point that on an earnings call in January, he all but endorsed the use of trade barriers against them.

They’re extremely good.

I think it’s so interesting, in a way — of course, with perfect hindsight — the kind of maybe complacency or naivete with which he may not have anticipated this turn of events. And in some ways, he’s not alone, right? It speaks to something larger. Like, China, for a long time, was seen as kind of the sweatshop or the manufacturer of the world — or perhaps as an export market for a lot of these Western companies. It certainly wasn’t putting out its own big brand names. It was making stuff for the brand names.

But recently, they have quite a lot of their own brand names. Everybody talks about TikTok. There’s Huawei. There’s WeChat, Lenovo. And now there is BYD. So China is becoming a leader in technology in certain areas. And I think that shift in some ways has happened. And a lot of Western companies — perhaps like Tesla — were kind of late to waking up to that.

Right. Tesla is looking fragile now. Their stock price dropped 30 percent in the first quarter of this year. And to a large degree, that is because of the threat of companies like BYD from China and the perception that Tesla’s position as number one in the market is no longer guaranteed.

So, Mara, all this raises a much bigger question for me, which is, who is going to own the future of EVs? And based on everything you’ve said so far, it seems like China owns the future of EVs. Is that right?

Well, possibly, but the jury is still out. Tesla is still far bigger for now. But there is this increasing fear that China owns the future of EVs. If you look at the US, there are already 25 percent tariffs on EVs from China. There’s talk of increasing them. The Commerce Department recently launched an investigation into data collection by electric vehicles from China.

So all of these factors are creating uncertainty around what could happen. And the European Union may also add new tariffs against Chinese-made cars. And China is an economic rival and a security rival and, in many ways, our main adversary. So this whole issue is intertwined with national security. And Tesla is really in the middle of it.

Right. So the sort of new Cold War that people are talking about between the US and China is, in a sense, the backdrop to this story. But on one level, what we’ve been talking about, it’s really a corporate story, an economic story that has this geopolitical backdrop. But it’s also very much an environmental story. So, regardless of how Elon Musk and Tesla fare in the end, is BYD’s rise and its ability to create high-quality and — perhaps more importantly — affordable EVs ultimately a good thing for the world?

If I think back on those years I spent living in Shanghai and Beijing when it was extremely polluted and there were days when you couldn’t go outside — I don’t think anyone wants to go back to that.

So it’s clear that EVs are the future and that they’re crucial to the green energy transition that we have to make. How exactly we get there is still unclear. But what is true is that China did just make that transition easier.

Mara, thank you so much.

Thank you, Katrin.

Here’s what else you need to know today.


Millions of people across North America were waiting for their turn to experience a rare event on Monday. From Mexico —

Cuatro, tres, dos, uno. [Four, three, two, one.]

— to Texas.

Awesome, just awesome.

We can see the corona really well. Oh, you can see —


Oh, and we are falling into darkness right now. What an incredible sensation. And you are hearing and seeing the crowd of 15,000 gathered here in southern Illinois.

Including “Daily” producers in New York.

It’s like the sky is almost —

— like a deep blue under the clouds.

Wait, look. It’s just —

Oh my god. The sun is disappearing. And it’s gone. Oh. Whoa.

All the way up to Canada.

Yeah, that’s what I’m talking about. That’s what I’m talking about.

The moon glided in front of the sun and obscured it entirely in a total solar eclipse, momentarily plunging the day into darkness.

It’s super exciting. It’s so amazing to see science in action like this.

Today’s episode was produced by Rikki Novetsky and Mooj Zadie with help from Rachelle Bonja. It was edited by Lisa Chow with help from Alexandra Leigh Young, fact checked by Susan Lee, contains original music by Marion Lozano, Diane Wong, Elisheba Ittoop, and Sophia Lanman and was engineered by Chris Wood.

Our theme music is by Jim Brunberg and Ben Landsverk of Wonderly.

That’s it for “The Daily.” I’m Katrin Bennhold. See you tomorrow.


Hosted by Katrin Bennhold

Featuring Mara Hvistendahl

Produced by Rikki Novetsky and Mooj Zadie

With Rachelle Bonja

Edited by Lisa Chow and Alexandra Leigh Young

Original music by Marion Lozano, Diane Wong, Elisheba Ittoop and Sophia Lanman

Engineered by Chris Wood


When Elon Musk set up Tesla’s factory in China, he made a bet that brought him cheap parts and capable workers — a bet that made him ultrarich and saved his company.

Mara Hvistendahl, an investigative reporter for The Times, explains why, now, that lifeline may have given China the tools to beat Tesla at its own game.

On today’s episode


Mara Hvistendahl, an investigative reporter for The New York Times.


Background reading

A pivot to China saved Elon Musk. It also bound him to Beijing.

Mr. Musk helped create the Chinese electric vehicle industry. But he is now facing challenges there as well as scrutiny in the West over his reliance on China.


Fact-checking by Susan Lee.

The Daily is made by Rachel Quester, Lynsea Garrison, Clare Toeniskoetter, Paige Cowett, Michael Simon Johnson, Brad Fisher, Chris Wood, Jessica Cheung, Stella Tan, Alexandra Leigh Young, Lisa Chow, Eric Krupke, Marc Georges, Luke Vander Ploeg, M.J. Davis Lin, Dan Powell, Sydney Harper, Mike Benoist, Liz O. Baylen, Asthaa Chaturvedi, Rachelle Bonja, Diana Nguyen, Marion Lozano, Corey Schreppel, Rob Szypko, Elisheba Ittoop, Mooj Zadie, Patricia Willens, Rowan Niemisto, Jody Becker, Rikki Novetsky, John Ketchum, Nina Feldman, Will Reid, Carlos Prieto, Ben Calhoun, Susan Lee, Lexie Diao, Mary Wilson, Alex Stern, Dan Farrell, Sophia Lanman, Shannon Lin, Diane Wong, Devon Taylor, Alyssa Moxley, Summer Thomad, Olivia Natt, Daniel Ramirez and Brendan Klinkenberg.

Our theme music is by Jim Brunberg and Ben Landsverk of Wonderly. Special thanks to Sam Dolnick, Paula Szuchman, Lisa Tobin, Larissa Anderson, Julia Simon, Sofia Milan, Mahima Chablani, Elizabeth Davis-Moorer, Jeffrey Miranda, Renan Borelli, Maddy Masiello, Isabella Anderson and Nina Lassam.

Katrin Bennhold is the Berlin bureau chief. A former Nieman fellow at Harvard University, she previously reported from London and Paris, covering a range of topics from the rise of populism to gender. More about Katrin Bennhold

Mara Hvistendahl is an investigative reporter for The Times focused on Asia. More about Mara Hvistendahl
