Conclusion: Social Media the Only Constant Is Change

  • First Online: 30 August 2024


  • Karen E. Sutherland

The previous 16 chapters in this text have provided an overview of the key functions of strategic social media management. In Part I we explored how to develop a social media strategy, including audience research, managing issues and risks, helping more than selling, selecting relevant tactics, and the importance of storytelling, listening, monitoring, measurement and scheduling. Part II focused on strategic content curation, paying particular attention to ethical approaches, processes and techniques. This chapter concludes the text by setting you up for a successful career as a Social Media Manager.



Further Reading

Eyal, N. (2019). Indistractable: How to control your attention and choose your life . Bloomsbury Publishing.

Feedspot.com. (2023). Top 10 social media magazines & publications: https://magazines.feedspot.com/social_media_magazines/

Hanlon, A., & Tuten, T. L. (Eds.). (2022). The Sage handbook of social media marketing. Sage.

Jenkins, A. (2022). Social media marketing for business: Scaling an integrated social media strategy across your organization . Kogan Page.

Quesenberry, K. (2018). Over 300 social media tools and resources for 2018. Post Control Marketing, viewed 20.09.2023: https://www.postcontrolmarketing.com/300-social-media-tools-resources-2018/

Social Media + Society: https://journals.sagepub.com/home/sms

Spohrer, J., & Demirkan, H. (2018). T-shaped professionals: Adaptive innovators. Business Experts Press.

Thaichon, P., & Quach, S. (Eds.). (2022). Artificial intelligence for marketing management. Taylor & Francis.

Van Looy, A. (2022). Social media management: Using social media as a business instrument. Springer Nature.

Zahay, D., Roberts, M. L., Parker, J., Barker, D. I., & Barker, M. (2022). Social media marketing: A strategic approach. Cengage Learning.

Helpful Links

Social Media Commentators

Dennis Yu: https://www.dennis-yu.com/

Gary Vaynerchuk: https://www.garyvaynerchuk.com/blog/

Jeff J Hunter: https://jeffjhunter.com/blogs/

Neil Patel: https://neilpatel.com/blog/

Matt Navarra: https://thenextweb.com/author/matthewnavarra/

Madalyn Sklar: https://madalynsklar.com/blog/

Mari Smith: https://www.marismith.com/mari-smith-blog/

Molly Pittman: https://mollypittman.com/blog/

Seth’s Blog (Seth Godin): https://seths.blog/

Social Media Platform Newsrooms

Facebook: https://about.meta.com/

Instagram: https://about.instagram.com/en_US/blog

LinkedIn: https://news.linkedin.com

Pinterest: https://newsroom.pinterest.com/en

Snapchat: https://newsroom.snap.com/en-GB

TikTok: https://newsroom.tiktok.com/en-us

WeChat: https://blog.wechat.com/category/news/

WhatsApp: https://blog.whatsapp.com/

X (formerly Twitter): https://blog.twitter.com/

YouTube: https://blog.youtube/press/

Blogs & News Sources

Digital Marketer: https://www.digitalmarketer.com/blog/

Google Alerts: https://www.google.com.au/alerts

Hootsuite: https://blog.hootsuite.com/

HubSpot: https://blog.hubspot.com/

Later: https://later.com/blog/

Social Media Examiner: https://www.socialmediaexaminer.com/

Social Media Today: https://www.socialmediatoday.com/

Sprout Social: https://sproutsocial.com/insights/

Courses and Certifications

BlitzMetrics Courses: https://academy.yourcontentfactory.com/

Dash Academy: https://www.dashacademy.com.au/socials-boss-course

Generative AI Fundamentals (Google): https://www.cloudskillsboost.google/course_templates/556

Google: https://grow.google/intl/ALL_au/learn-skills/

Facebook Ads Targeting: https://learn.fiverr.com/courses/facebook-ads-targeting

HubSpot Free Social Media Courses: https://www.hubspot.com/resources/courses/social-media

Meta Blueprint: https://www.facebook.com/business/learn

So You Want To Be A Social Media Manager: https://bit.ly/SocialMediaManagerWorkshop

AI Marketing Education: https://www.marketingaiinstitute.com/education

Online Groups

Become a Social Media Manager—With Rachel Pedersen: https://www.facebook.com/groups/becomeasocialmediamanager

Social Media Managers: https://www.facebook.com/groups/socialmediamanagers/learning_content/

Social Media Managers: https://www.facebook.com/groups/managers.social.media

Social Media Masterminds Group: https://www.facebook.com/groups/1509722192640979/

Social Media Manager & Freelancers Group: https://www.facebook.com/groups/seankylesmm

Social Media and Community Management Jobs: https://www.facebook.com/groups/1734717390148085

The Social Media Geek Out: https://www.facebook.com/groups/socialgeekout/

Tik Tok Marketing Secrets | Growth Hacks for Marketers and Influencers: https://www.facebook.com/groups/tiktoksecrets/

Conferences

The 20 Best Social Media & Digital Marketing Conferences You Should Attend (2023/2024): https://nealschaffer.com/best-social-media-conferences/

Social Media Marketing World: https://www.socialmediaexaminer.com/smmworld/

VidCon: https://vidcon.com/

Industry Bodies

The International Association of Social Media Professionals (IASMP): https://socialmediaprofessionals.org/

AI Marketers Guild: https://aimarketersguild.com/

Marketing AI Institute: https://www.marketingaiinstitute.com/

Social Media Club: https://socialmediaclub.org/

Podcasts

Social Media Marketing

Social Media Marketing Podcasts to Check Out in 2023: https://blog.hootsuite.com/social-media-marketing-podcasts/

Social Media Advertising

Digital Advertising Podcasts: https://player.fm/podcasts/Digital-Advertising

Public Relations

Public Relations Podcasts: https://player.fm/featured/public-relations

AI Marketing

The AI Marketing Show: https://www.marketingaiinstitute.com/podcast-showcase


Author information

Authors and Affiliations

University of the Sunshine Coast, Sippy Downs, QLD, Australia

Karen E. Sutherland


Corresponding author

Correspondence to Karen E. Sutherland .

Electronic Supplementary Material

Below is the link to the electronic supplementary material.

Supplementary file1 (DOCX 3223 kb)


Copyright information

© 2024 The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd.

About this chapter

Sutherland, K.E. (2024). Conclusion: Social Media the Only Constant Is Change. In: Strategic Social Media Management. Palgrave Macmillan, Singapore. https://doi.org/10.1007/978-981-99-9496-0_17


Print ISBN : 978-981-99-9495-3

Online ISBN : 978-981-99-9496-0




Democracy, Social Media, and Freedom of Expression: Hate, Lies, and the Search for the Possible Truth


This Essay is a critical reflection on the impact of the digital revolution and the internet on three topics that shape the contemporary world: democracy, social media, and freedom of expression. Part I establishes historical and conceptual assumptions about constitutional democracy and discusses the role of digital platforms in the current moment of democratic recession. Part II discusses how, while social media platforms have revolutionized interpersonal and social communication and democratized access to knowledge and information, they also have led to an exponential spread of mis- and disinformation, hate speech, and conspiracy theories. Part III proposes a framework that balances regulation of digital platforms with the countervailing fundamental right to freedom of expression, a right that is essential for human dignity, the search for the possible truth, and democracy. Part IV highlights the role of society and the importance of media education in the creation of a free, but positive and constructive, environment on the internet.

I. Introduction

Before the internet, few actors could afford to participate in public debate due to the barriers that limited access to its enabling infrastructure, such as television channels and radio frequencies. 1 Digital platforms tore down this gate by creating open online communities for user-generated content, published without editorial control and at no cost. This exponentially increased participation in public discourse and the amount of information available. 2 At the same time, it led to an increase in disinformation campaigns, hate speech, slander, lies, and conspiracy theories used to advance antidemocratic goals. Platforms’ attempts to moderate speech at scale while maximizing engagement and profits have led to an increasingly prominent role for content moderation algorithms that shape who can participate and be heard in online public discourse. These systems play an essential role in the exercise of freedom of expression and in democratic competence and participation in the 21st century.

In this context, this Essay is a critical reflection on the impacts of the digital revolution and of the internet on democracy and freedom of expression. Part I establishes historical and conceptual assumptions about constitutional democracy; it also discusses the role of digital platforms in the current moment of democratic recession. Part II discusses how social media platforms are revolutionizing interpersonal and social communication, and democratizing access to knowledge and information, but also lead to an exponential spread of mis- and disinformation, hate speech and conspiracy theories. Part III proposes a framework for the regulation of digital platforms that seeks to find the right balance with the countervailing fundamental right to freedom of expression. Part IV highlights the role of society and the importance of media education in the creation of a free, but positive and constructive, environment on the internet.

II. Democracy and Authoritarian Populism

Constitutional democracy emerged as the predominant ideology of the 20th century, rising above the alternative projects of communism, fascism, Nazism, military regimes, and religious fundamentalism. 3 Democratic constitutionalism centers around two major ideas that merged at the end of the 20th century: constitutionalism, heir of the liberal revolutions in England, America, and France, expressing the ideas of limited power, rule of law, and respect for fundamental rights; 4 and democracy, a regime of popular sovereignty, free and fair elections, and majority rule. 5 In most countries, democracy only truly consolidated throughout the 20th century through universal suffrage guaranteed with the end of restrictions on political participation based on wealth, education, sex, or race. 6

Contemporary democracies are made up of votes, rights, and reasons. They are not limited to fair procedural rules in the electoral process, but demand respect for substantive fundamental rights of all citizens and a permanent public debate that informs and legitimizes political decisions. 7 To ensure protection of these three aspects, most democratic regimes include in their constitutional framework a supreme court or constitutional court with jurisdiction to arbitrate the inevitable tensions that arise between democracy’s popular sovereignty and constitutionalism’s fundamental rights. 8 These courts are, ultimately, the institutions responsible for protecting fundamental rights and the rules of the democratic game against any abuse of power attempted by the majority. Recent experiences in Hungary, Poland, Turkey, Venezuela, and Nicaragua show that when courts fail to fulfill this role, democracy collapses or suffers major setbacks. 9

In recent years, several events have challenged the prevalence of democratic constitutionalism in many parts of the world, in a phenomenon characterized by many as democratic recession. 10 Even consolidated democracies have endured moments of turmoil and institutional discredit, 11 as the world witnessed the rise of an authoritarian, anti-pluralist, and anti-institutional populist wave posing serious threats to democracy.

Populism can be right-wing or left-wing, 12 but the recent wave has been characterized by the prevalence of right-wing extremism, often racist, xenophobic, misogynistic, and homophobic. 13 While in the past the far left was united through Communist International, today it is the far right that has a major global network. 14 The hallmark of right-wing populism is the division of society into “us” (the pure, decent, conservatives) and “them” (the corrupt, liberal, cosmopolitan elites). 15 Authoritarian populism flows from the unfulfilled promises of democracy for opportunities and prosperity for all. 16 Three aspects undergird this democratic frustration: political (people do not feel represented by the existing electoral systems, political leaders, and democratic institutions); social (stagnation, unemployment, and the rise of inequality); and cultural identity (a conservative reaction to the progressive identity agenda of human rights that prevailed in recent decades with the protection of the fundamental rights of women, African descendants, religious minorities, LGBTQ+ communities, indigenous populations, and the environment). 17

Extremist authoritarian populist regimes often adopt similar strategies to capitalize on the political, social, and cultural identity-based frustrations fueling democratic recessions. These tactics include bypassing or co-opting the intermediary institutions that mediate the interface between the people and the government, such as the legislature, the press, and civil society. They also involve attacks on supreme courts and constitutional courts and attempts to capture them by appointing submissive judges. 18 The rise of social media amplifies these strategies by creating a free and instantaneous channel of direct communication between populists and their supporters. 19 This unmediated interaction facilitates the use of disinformation campaigns, hate speech, slander, lies, and conspiracy theories as political tools to advance antidemocratic goals. The instantaneous nature of these channels invites impulsive reactions, which facilitate verbal attacks by supporters and deepen polarization, feeding back into the populist discourse. These tactics threaten democracy and free and fair elections because they deceive voters and silence the opposition, distorting public debate. Ultimately, this form of communication undermines the values that justify the special protection of freedom of expression to begin with. The “truth decay” and “fact polarization” that result from these efforts discredit institutions and consequently foster distrust in democracy. 20

III. Internet, Social Media, and Freedom of Expression 21

The third industrial revolution, also known as the technological or digital revolution, has shaped our world today. 22 Some of its main features are the mass adoption of personal computers, the universalization of smartphones and, most importantly, the internet. One of the main byproducts of the digital revolution and the internet was the emergence of social media platforms such as Facebook, Instagram, YouTube, and TikTok, and of messaging applications like WhatsApp and Telegram. We live in a world of apps, algorithms, artificial intelligence, and innovation occurring at breakneck speed, where nothing seems truly new for very long. This is the background for the narrative that follows.

A. The Impact of the Internet

The internet revolutionized the world of interpersonal and social communication, exponentially expanded access to information and knowledge, and created a public sphere where anyone can express ideas, opinions, and disseminate facts. 23 Before the internet, one’s participation in public debate was dependent upon the professional press, 24 which investigated facts, abided by standards of journalistic ethics, 25 and was liable for damages if it knowingly or recklessly published untruthful information. 26 There was a baseline of editorial control and civil liability over the quality and veracity of what was published in this medium. This does not mean that it was a perfect world. The number of media outlets was, and continues to be, limited in quantity and perspectives; journalistic companies have their own interests, and not all of them distinguish fact from opinion with the necessary care. Still, there was some degree of control over what became public, and there were costs to the publication of overtly hateful or false speech.

The internet, with the emergence of websites, personal blogs, and social media, revolutionized this status quo. It created open, online communities for user-generated texts, images, videos, and links, published without editorial control and at no cost. This advanced participation in public discourse, diversified sources, and exponentially increased available information. 27 It gave a voice to minorities, civil society, politicians, public agents, and digital influencers, and it allowed demands for equality and democracy to acquire global dimensions. This represented a powerful contribution to political dynamism, resistance to authoritarianism, and stimulation of creativity, scientific knowledge, and commercial exchanges. 28 Increasingly, the most relevant political, social, and cultural communications take place on the internet’s unofficial channels.

However, the rise of social media also led to an increase in the dissemination of abusive and criminal speech. 29 While these platforms did not create mis- or disinformation, hate speech, or speech that attacks democracy, the ability to publish freely, with no editorial control and little to no accountability, increased the prevalence of these types of speech and facilitated their use as political tools by populist leaders. 30 Additionally, and more fundamentally, platform business models compounded the problem through algorithms that moderate and distribute online content. 31

B. The Role of Algorithms

The ability to participate and be heard in online public discourse is currently defined by the content moderation algorithms of a handful of major technology companies. Although digital platforms initially presented themselves as neutral media where users could publish freely, they in fact exercise legislative, executive, and judicial functions: they unilaterally define speech rules in their terms and conditions, and their algorithms decide how content is distributed and how these rules are applied. 32

Specifically, digital platforms rely on algorithms for two different functions: recommending content and moderating content. 33 First, a fundamental aspect of the service they offer involves curating the content available to provide each user with a personalized experience and increase time spent online. They resort to deep learning algorithms that monitor every action on the platform, draw from user data, and predict what content will keep a specific user engaged and active based on their prior activity or that of similar users. 34 The transition from a world of information scarcity to a world of information abundance generated fierce competition for user attention—the most valuable resource in the Digital Age. 35 The power to modify a person’s information environment has a direct impact on their behavior and beliefs. Because AI systems can track an individual’s online history, they can tailor specific messages to maximize impact. More importantly, they monitor whether and how the user interacts with the tailored message, using this feedback to influence future content targeting and progressively becoming more effective in shaping behavior. 36 Given that humans engage more with content that is polarizing and provocative, these algorithms favor content that elicits powerful emotions, including anger. 37 The power to organize online content therefore directly impacts freedom of expression, pluralism, and democracy. 38
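The engagement-driven feedback loop described above can be sketched in miniature. Everything in this sketch is an invented simplification: the `Post` fields, the affinity weights, and the provocativeness boost merely stand in for what real platforms learn from vast behavioral data with deep learning models.

```python
from dataclasses import dataclass, field

@dataclass
class Post:
    post_id: str
    topic: str
    provocativeness: float  # 0..1, a toy proxy for emotionally charged content

@dataclass
class UserProfile:
    # Learned affinity per topic, updated from the user's own feedback.
    topic_affinity: dict = field(default_factory=dict)

def predicted_engagement(user: UserProfile, post: Post) -> float:
    """Toy engagement score: prior interest in the topic plus a boost
    for provocative content, mirroring the tendency described in the text."""
    affinity = user.topic_affinity.get(post.topic, 0.1)
    return affinity + 0.5 * post.provocativeness

def rank_feed(user: UserProfile, posts: list) -> list:
    # Curate the feed by predicted engagement, highest first.
    return sorted(posts, key=lambda p: predicted_engagement(user, p), reverse=True)

def record_interaction(user: UserProfile, post: Post, engaged: bool) -> None:
    """Feedback loop: engagement raises the topic's affinity, so similar
    content ranks higher next time; ignoring content lowers it slightly."""
    current = user.topic_affinity.get(post.topic, 0.1)
    user.topic_affinity[post.topic] = current + (0.2 if engaged else -0.05)
```

Note how engagement feeds back into the profile: each interaction shifts future rankings toward similar content, the self-reinforcing dynamic the text describes.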

In addition to recommendation systems, platforms rely on algorithms for content moderation, the process of classifying content to determine whether it violates community standards. 39 As mentioned, the growth of social media and its use by people around the world allowed for the spread of lies and criminal acts with little cost and almost no accountability, threatening the stability of even long-standing democracies. Inevitably, digital platforms had to enforce terms and conditions defining the norms of their digital community and moderate speech accordingly. 40 But the potentially infinite amount of content published online means that this control cannot be exercised exclusively by humans.

Content moderation algorithms optimize the scanning of published content to identify violations of community standards or terms of service at scale and apply measures ranging from removal to reducing reach or including clarifications or references to alternative information. Platforms often rely on two algorithmic models for content moderation. The first is the reproduction detection model, which uses unique identifiers to catch reproductions of content previously labeled as undesired. 41 The second, the predictive model, uses machine learning techniques to identify potential illegalities in new and unclassified content. 42 Machine learning is a subtype of artificial intelligence that extracts patterns from training datasets and is capable of learning from data without explicit programming to do so. 43 Although helpful, both models have shortcomings.
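A minimal sketch of the reproduction detection idea, assuming exact text fingerprinting; real systems use perceptual hashes (for images and video) and proprietary matching databases, none of which is modeled here.

```python
import hashlib

def fingerprint(text: str) -> str:
    # Normalize lightly (case, whitespace) before hashing so trivial
    # re-postings still match; anything beyond this evades the filter.
    normalized = " ".join(text.lower().split())
    return hashlib.sha256(normalized.encode("utf-8")).hexdigest()

class ReproductionDetector:
    """Catches exact reproductions of content already labeled as violating."""

    def __init__(self):
        self.known_violations = set()

    def label_as_violation(self, text: str) -> None:
        # Store only the fingerprint of the undesired content.
        self.known_violations.add(fingerprint(text))

    def is_reproduction(self, text: str) -> bool:
        return fingerprint(text) in self.known_violations
```

Because the fingerprint is exact, a deliberately altered copy (say, "b4nned" for "banned") produces a different hash and slips through, which is precisely the evasion weakness of this model.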

The reproduction detection model is inefficient for content such as hate speech and disinformation, where the potential for new and different publications is virtually unlimited and users can deliberately make changes to avoid detection. 44 The predictive model is still limited in its ability to address situations to which it has not been exposed in training, primarily because it lacks the human ability to understand nuance and to factor in contextual considerations that influence the meaning of speech. 45 Additionally, machine learning algorithms rely on data collected from the real world and may embed prejudices or preconceptions, leading to asymmetrical applications of the filter. 46 And because the training data sets are so large, it can be hard to audit them for these biases. 47
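To make the predictive model's context blindness concrete, here is a deliberately crude stand-in: a keyword scorer instead of a trained classifier. The word list and threshold are invented for illustration; production systems learn such weights from labeled data rather than hard-coding them, but they inherit the same blindness to context.

```python
# Hypothetical weights for dehumanizing terms; a real system would learn
# these from labeled training data, not hard-code them.
FLAGGED_TERMS = {"vermin": 1.0, "subhuman": 1.0, "exterminate": 0.8}

def violation_score(text: str) -> float:
    # Sum the weight of each flagged term, ignoring surrounding punctuation.
    words = text.lower().split()
    return sum(FLAGGED_TERMS.get(w.strip(".,!?\"'"), 0.0) for w in words)

def moderate(text: str, threshold: float = 0.7) -> str:
    # Remove anything whose score crosses the (invented) threshold.
    return "remove" if violation_score(text) >= threshold else "allow"
```

The model is blind to context: a news report condemning hate speech by quoting it scores exactly the same as the hate speech itself, illustrating the asymmetrical, nuance-free application described above.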

Despite these limitations, algorithms will continue to be a crucial resource in content moderation given the scale of online activities. 48 In the last two months of 2020 alone, Facebook applied a content moderation measure to 105 million publications, and Instagram to 35 million. 49 YouTube, which has 500 hours of video uploaded every minute, removed more than 9.3 million videos. 50 In the first half of 2020, Twitter analyzed complaints related to 12.4 million accounts for potential violations of its rules and took action against 1.9 million. 51 This data supports the claim that human moderation is impossible and that algorithms are a necessary tool to reduce the spread of illicit and harmful content. On the one hand, holding platforms accountable for occasional errors in these systems would create the wrong incentives to abandon algorithms in content moderation, with the negative consequence of significantly increasing the spread of undesired speech. 52 On the other hand, broad demands for platforms to implement algorithms to optimize content moderation, or laws that impose very short deadlines to respond to removal requests submitted by users, can create excessive pressure to use these imprecise systems on a larger scale. Acknowledging the limitations of this technology is fundamental for precise regulation.

C. Some Undesirable Consequences

One of the most striking impacts of this new informational environment is the exponential increase in the scale of social communications and the circulation of news. Around the world, few newspapers, print publications, or radio stations cross the threshold of even one million subscribers or listeners; the majority reach much smaller audiences, possibly in the thousands or tens of thousands of people. 53 Television reaches millions of viewers, although diluted among dozens or hundreds of channels. 54 Facebook, by contrast, has about 3 billion active users. 55 YouTube has 2.5 billion accounts. 56 WhatsApp, more than 2 billion. 57 The numbers are bewildering. However, and as anticipated, just as the digital revolution democratized access to knowledge, information, and the public space, it also introduced negative consequences for democracy that must be addressed. Three of them are:

a) the increased circulation of disinformation, deliberate lying, hate speech, conspiracy theories, attacks on democracy, and inauthentic behavior, made possible by recommendation algorithms that optimize for user engagement and content moderation algorithms that are still incapable of adequately identifying undesirable content;
b) the tribalization of life, with the formation of echo chambers where groups speak only to themselves, reinforcing confirmation bias, 58 making speech progressively more radical, and contributing to polarization and intolerance; and
c) a global crisis in the business model of the professional press. Although social media platforms have become one of the main sources of information, they do not produce their own content. They hire engineers, not reporters, and their interest is engagement, not news. 59 Because advertisers’ spending has migrated from traditional news publications to technological platforms with broader reach, the press has suffered a loss of revenue that has forced hundreds of major publications, national and local, to close their doors or reduce their journalistic workforce. 60 But a free and strong press is more than just a private business; it is a pillar of an open and free society. It serves a public interest in the dissemination of facts, news, opinions, and ideas, indispensable preconditions for the informed exercise of citizenship. Knowledge and truth—never absolute, but sincerely sought—are essential elements for the functioning of a constitutional democracy. Citizens need to share a minimum set of common objective facts from which to form their own judgments. If they cannot accept the same facts, public debate becomes impossible. Intolerance and violence are byproducts of the inability to communicate—hence the importance of “knowledge institutions,” such as universities, research entities, and the institutional press. The value of a free press for democracy is illustrated by the fact that, in different parts of the world, the press is one of the few private businesses specifically mentioned in constitutions. Despite its importance for society and democracy, surveys reveal a concerning decline in its prestige. 61

In the beginning of the digital revolution, there was a belief that the internet should be a free, open, and unregulated space in the interest of protecting access and promoting freedom of expression. Over time, concerns emerged, and a consensus gradually formed around the need for internet regulation. Multiple approaches for regulating the internet have been proposed, including: (a) economic, through antitrust legislation, consumer protection, fair taxation, and copyright rules; (b) privacy, through laws restricting the collection of user data without consent, especially for content targeting; and (c) speech-related, through rules targeting inauthentic behavior, content control, and platform liability. 62

Devising the proper balance between the indispensable preservation of freedom of expression on the one hand, and the repression of illegal content on social media on the other, is one of the most complex issues of our generation. Freedom of expression is a fundamental right incorporated into virtually all contemporary constitutions and, in many countries, is considered a preferential freedom. Several reasons have been advanced for granting freedom of expression special protection, including its roles: (a) in the search for the possible truth 63 in an open and plural society, 64 as explored above in discussing the importance of the institutional press; (b) as an essential element for democracy 65 because it allows the free circulation of ideas, information, and opinions that inform public opinion and voting; and (c) as an essential element of human dignity, 66 allowing the expression of an individual’s personality.

The regulation of digital platforms cannot undermine these values but must instead aim at their protection and strengthening. However, in the digital age, these same values that historically justified the reinforced protection of freedom of expression can now justify its regulation. As U.N. Secretary-General António Guterres thoughtfully stated, “the ability to cause large-scale disinformation and undermine scientifically established facts is an existential risk to humanity.” 67

Two aspects of the internet business model are particularly problematic for the protection of democracy and free expression. The first is that, although access to most technological platforms and applications is free, users pay for access with their privacy. 68 As Lawrence Lessig observed, we watch television, but the internet watches us. 69 Everything each individual does online is monitored and monetized. Data is the modern gold. 70 Thus, those who pay for the data can more efficiently disseminate their message through targeted ads. As previously mentioned, the power to modify a person’s information environment has a direct impact on behavior and beliefs, especially when messages are tailored to maximize impact on a specific individual. 71

The second aspect is that algorithms are programmed to maximize time spent online. This often leads to the amplification of provocative, radical, and aggressive content. This in turn compromises freedom of expression because, by targeting engagement, algorithms sacrifice the search for truth (with the wide circulation of fake news), democracy (with attacks on institutions and defense of coups and authoritarianism), and human dignity (with offenses, threats, racism, and others). The pursuit of attention and engagement for revenue is not always compatible with the values that underlie the protection of freedom of expression.

IV. A Framework for the Regulation of Social Media

Platform regulation models can be broadly classified into three categories: (a) state or government regulation, through legislation and rules establishing a compulsory, comprehensive framework; (b) self-regulation, through rules drafted by platforms themselves and materialized in their terms of use; and (c) regulated self-regulation or coregulation, through standards fixed by the state that grant platforms flexibility in materializing and implementing them. This Essay argues for the third model, with a combination of governmental and private responsibilities. Compliance should be overseen by an independent committee, with a minority of its representatives coming from the government and the majority coming from the business sector, academia, technology entities, users, and civil society.

The regulatory framework should aim to reduce the asymmetry of information between platforms and users, safeguard the fundamental right to freedom of expression from undue private or state interventions, and protect and strengthen democracy. The current technical limitations of content moderation algorithms explored above and normal substantive disagreement about what content should be considered illegal or harmful suggest that an ideal regulatory model should optimize the balance between the fundamental rights of users and platforms, recognizing that there will always be cases where consensus is unachievable. The focus of regulation should be the development of adequate procedures for content moderation, capable of minimizing errors and legitimizing decisions even when one disagrees with the substantive result. 72 With these premises as background, the proposal for regulation formulated here is divided into three levels: (a) the appropriate intermediary liability model for user-generated content; (b) procedural duties for content moderation; and (c) minimum duties to moderate content that represents concrete threats to democracy and/or freedom of expression itself.

A. Intermediary Liability for User-Generated Content

There are three main regimes for platform liability for third-party content. In strict liability models, platforms are held responsible for all user-generated posts. 73 Since platforms have limited editorial control over what is posted and limited human oversight over the millions of posts made daily, this would be a potentially destructive regime. In knowledge-based liability models, platform liability arises if they do not act to remove content after an extrajudicial request from users—this is also known as a “notice-and-takedown” system. 74 Finally, a third model would make platforms liable for user-generated content only in cases of noncompliance with a court order mandating content removal. This latter model was adopted in Brazil with the Civil Framework for the Internet (Marco Civil da Internet). 75 The only exception in Brazilian legislation to this general rule is revenge porn: if there is a violation of intimacy resulting from the nonconsensual disclosure of images, videos, or other materials containing private nudity or private sexual acts, extrajudicial notification is sufficient to create an obligation for content removal under penalty of liability. 76

In our view, the Brazilian model is the one that most adequately balances the fundamental rights involved. As mentioned, in the most complex cases concerning freedom of expression, people will disagree on the legality of speech. Rules holding platforms accountable for not removing content after mere user notification create incentives for over-removal of any potentially controversial content, excessively restricting users’ freedom of expression. If the state threatens to hold digital platforms accountable whenever it disagrees with their assessment, companies will have an incentive to remove all content that could potentially be considered illicit by courts in order to avoid liability. 77

Nonetheless, this liability regime should coexist with a broader regulatory structure imposing principles, limits, and duties on content moderation by digital platforms, both to increase the legitimacy of platforms’ application of their own terms and conditions and to minimize the potentially devastating impacts of illicit or harmful speech.

B. Standards for Proactive Content Moderation

Platforms have free enterprise and freedom of expression rights to set their own rules and decide the kind of environment they want to create, as well as to moderate harmful content that could drive users away. However, because these content moderation algorithms are the new governors of the public sphere, 78 and because they define the ability to participate and be heard in online public discourse, platforms should abide by minimum procedural duties of transparency and auditing, due process, and fairness.

1. Transparency and Auditing

Transparency and auditing measures serve mainly to ensure that platforms are accountable for content moderation decisions and for the impacts of their algorithms. They provide users with greater understanding and knowledge about the extent to which platforms regulate speech, and they provide oversight bodies and researchers with information to understand the threats of digital services and the role of platforms in amplifying or minimizing them.

Driven by demands from civil society, several digital platforms already publish transparency reports. 79 However, the lack of binding standards means that these reports have significant gaps, no independent verification of the information provided, 80 and no standardization across platforms, preventing comparative analysis. 81 In this context, regulatory initiatives that impose minimum requirements and standards are crucial to make oversight more effective. On the other hand, overly broad transparency mandates may force platforms to adopt simpler content moderation rules to reduce costs, which could negatively impact the accuracy of content moderation or the quality of the user experience. 82 A tiered approach to transparency, where certain information is public and certain information is limited to oversight bodies or previously qualified researchers, ensures adequate protection of countervailing interests, such as user privacy and business confidentiality. 83 The Digital Services Act, 84 recently passed in the European Union, contains robust transparency provisions that generally align with these considerations. 85

The information that should be publicly provided includes clear and unambiguous terms of use, the options available to address violations (such as removal, amplification reduction, clarifications, and account suspension), and the division of labor between algorithms and humans. More importantly, public transparency reports should include information on the accuracy of automated moderation measures and the number of content moderation actions broken down by type (such as removal, blocking, and account deletion). 86 There must also be transparency obligations to researchers, giving them access to crucial information and statistics, including the content analyzed in content moderation decisions. 87

Although valuable, transparency requirements are insufficient to promote accountability because they rely on users and researchers to actively monitor platform conduct and presuppose that they have the power to draw attention to flaws and promote changes. 88 Legally mandated third-party algorithmic auditing is therefore an important complement to ensure that these models satisfy legal, ethical, and safety standards and to elucidate the embedded value tradeoffs, such as between user safety and freedom of expression. 89 As a starting point, algorithm audits should consider matters such as how accurately the algorithms perform, any potential bias or discrimination incorporated in the data, and to what extent their internal mechanics are explainable to humans. 90 The Digital Services Act contains a similar proposal. 91
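One concrete check an algorithmic audit might run, of the kind contemplated above, is comparing a moderation filter's false-positive rate across user groups to surface the asymmetrical application of the filter discussed earlier. The sketch below is purely illustrative; the group labels and audit records are invented, not drawn from any real platform.

```python
# Hypothetical audit check: false-positive rate (benign posts that
# were nonetheless flagged) broken down by user group. All data here
# is invented for illustration.
from collections import defaultdict

def false_positive_rates(records):
    """records: iterable of (group, was_flagged, actually_violating)."""
    flagged_benign = defaultdict(int)  # benign posts that were flagged
    total_benign = defaultdict(int)    # all benign posts
    for group, flagged, violating in records:
        if not violating:
            total_benign[group] += 1
            if flagged:
                flagged_benign[group] += 1
    return {g: flagged_benign[g] / total_benign[g] for g in total_benign}

audit_sample = [
    ("group_a", True, False), ("group_a", False, False),
    ("group_a", False, False), ("group_a", False, False),
    ("group_b", True, False), ("group_b", True, False),
    ("group_b", False, False), ("group_b", False, False),
]
rates = false_positive_rates(audit_sample)
print(rates)  # {'group_a': 0.25, 'group_b': 0.5}
```

A disparity like the one above (group_b's benign posts flagged at twice the rate of group_a's) is the kind of asymmetry an auditor would escalate, subject to the regulator-defined standards and disclosure obligations discussed next.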

The market for algorithmic auditing is still emergent and replete with uncertainty. In attempting to navigate this scenario, regulators should: (a) define how often the audits should happen; (b) develop standards and best practices for auditing procedures; (c) mandate specific disclosure obligations so auditors have access to the required data; and (d) define how identified harms should be addressed. 92

2. Due Process and Fairness

To ensure due process, platforms must inform users affected by content moderation decisions of the allegedly violated provision of the terms of use, as well as offer an internal system of appeals against these decisions. Platforms must also create systems that allow for the substantiated denunciation of content or accounts by other users, and notify reporting users of the decision taken.

As for fairness, platforms should ensure that the rules are applied equally to all users. Although it is reasonable to suppose that platforms may adopt different criteria for public persons or information of public interest, these exceptions must be clear in the terms of use. This issue has recently been the subject of controversy between the Facebook Oversight Board and the company. 93

Due to the enormous amount of content published on the platforms and the inevitability of using automated mechanisms for content moderation, platforms should not be held accountable for a violation of these duties in specific cases, but only when the analysis reveals a systemic failure to comply. 94

C. Minimum Duties to Moderate Illicit Content

The regulatory framework should also contain specific obligations to address certain types of especially harmful speech. The following categories are considered by the authors to fall within this group: disinformation, hate speech, anti-democratic attacks, cyberbullying, terrorism, and child pornography. Admittedly, defining and consensually identifying the speech included in these categories—except in the case of child pornography 95 —is a complex and largely subjective task. Precisely for this reason, platforms should be free to define how these concepts will be operationalized, as long as their definitions are guided by international human rights parameters and adopted in a transparent manner. This does not mean that all platforms will reach the same definitions or the same substantive results in concrete cases, but this should not be considered a flaw in the system, since a plurality of rules promotes freedom of expression. The obligation to observe international human rights parameters reduces the discretion of companies while allowing for a diversity of policies among them. After defining these categories, platforms must establish mechanisms that allow users to report violations.

In addition, platforms should develop mechanisms to address coordinated inauthentic behaviors, which involve the use of automated systems or deceitful means to artificially amplify false or dangerous messages by using bots, fake profiles, trolls, and provocateurs. 96 For example, if a person publishes a post for his twenty followers saying that kerosene oil is good for curing COVID-19, the negative impact of this misinformation is limited. However, if that message is amplified to thousands of users, a greater public health issue arises. Or, in another example, if the false message that an election was rigged reaches millions of people, there is a democratic risk due to the loss of institutional credibility.

The role of oversight bodies should be to verify that platforms have adopted terms of use that prohibit the sharing of these categories of speech and ensure that, systemically, the recommendation and content moderation systems are trained to moderate this content.

V. Conclusion

The World Wide Web has provided billions of people with access to knowledge, information, and the public space, changing the course of history. However, the misuse of the internet and social media poses serious threats to democracy and fundamental rights. Some degree of regulation has become necessary to confront inauthentic behavior and illegitimate content. It is essential, however, to act with transparency, proportionality, and adequate procedures, so that pluralism, diversity, and freedom of expression are preserved.

In addition to the importance of regulatory action, the responsibility for the preservation of the internet as a healthy public sphere also lies with citizens. Media education and user awareness are fundamental steps for the creation of a free but positive and constructive environment on the internet. Citizens should be conscious that social media can be unfair, perverse, and can violate fundamental rights and basic rules of democracy. They must be attentive not to uncritically pass on all information received. Alongside states, regulators, and tech companies, citizens are also an important force to address these threats. In Jonathan Haidt’s words, “[w]hen our public square is governed by mob dynamics unrestrained by due process, we don’t get justice and inclusion; we get a society that ignores context, proportionality, mercy, and truth.” 97

  • 1 Tim Wu, Is the First Amendment Obsolete? , in The Perilous Public Square 15 (David E. Pozen ed., 2020).
  • 2 Jack M. Balkin, Free Speech is a Triangle , 118 Colum. L. Rev. 2011, 2019 (2018).
  • 3 Luís Roberto Barroso, O Constitucionalismo Democrático ou Neoconstitucionalismo como ideologia vitoriosa do século XX , 4 Revista Publicum 14, 14 (2018).
  • 4 Id. at 16.
  • 7 Ronald Dworkin, Is Democracy Possible Here?: Principles for a New Political Debate xii (2006); Ronald Dworkin, Taking Rights Seriously 181 (1977).
  • 8 Barroso, supra note 3, at 16.
  • 9 Samuel Issacharoff, Fragile Democracies: Contested Power in the Era of Constitutional Courts i (2015).
  • 10 Larry Diamond, Facing up to the Democratic Recession , 26 J. Democracy 141 (2015). Other scholars have referred to the same phenomenon using other terms, such as democratic retrogression, abusive constitutionalism, competitive authoritarianism, illiberal democracy, and autocratic legalism. See, e.g. , Aziz Huq & Tom Ginsburg, How to Lose a Constitutional Democracy , 65 UCLA L. Rev. 91 (2018); David Landau, Abusive Constitutionalism , 47 U.C. Davis L. Rev. 189 (2013); Kim Lane Scheppele, Autocratic Legalism , 85 U. Chi. L. Rev. 545 (2018).
  • 11 Dan Balz, A Year After Jan. 6, Are the Guardrails that Protect Democracy Real or Illusory? , Wash. Post (Jan. 6, 2022), https://perma.cc/633Z-A9AJ; Brexit: Reaction from Around the UK , BBC News (June 24, 2016), https://perma.cc/JHM3-WD7A.
  • 12 Cas Mudde, The Populist Zeitgeist , 39 Gov’t & Opposition 541, 549 (2004).
  • 13 See generally Mohammed Sinan Siyech, An Introduction to Right-Wing Extremism in India , 33 New Eng. J. Pub. Pol’y 1 (2021) (discussing right-wing extremism in India). See also Eviane Leidig, Hindutva as a Variant of Right-Wing Extremism , 54 Patterns of Prejudice 215 (2020) (tracing the history of “Hindutva”—defined as “an ideology that encompasses a wide range of forms, from violent, paramilitary fringe groups, to organizations that advocate the restoration of Hindu ‘culture’, to mainstream political parties”—and finding that it has become mainstream since 2014 under Modi); Ariel Goldstein, Brazil Leads the Third Wave of the Latin American Far Right , Ctr. for Rsch. on Extremism (Mar. 1, 2021), https://perma.cc/4PCT-NLQJ (discussing right-wing extremism in Brazil under Bolsonaro); Seth G. Jones, The Rise of Far-Right Extremism in the United States , Ctr. for Strategic & Int’l Stud. (Nov. 2018), https://perma.cc/983S-JUA7 (discussing right-wing extremism in the U.S. under Trump).
  • 14 Sergio Fausto, O Desafio Democrático [The Democratic Challenge], Piauí (Aug. 2022), https://perma.cc/474A-3849.
  • 15 Jan-Werner Muller, Populism and Constitutionalism , in The Oxford Handbook of Populism 590 (Cristóbal Rovira Kaltwasser et al. eds., 2017).
  • 16 Ming-Sung Kuo, Against Instantaneous Democracy , 17 Int’l J. Const. L. 554, 558–59 (2019); see also Digital Populism , Eur. Ctr. for Populism Stud., https://perma.cc/D7EV-48MV.
  • 17 Luís Roberto Barroso, Technological Revolution, Democratic Recession and Climate Change: The Limits of Law in a Changing World , 18 Int’l J. Const. L. 334, 349 (2020).
  • 18 For the use of social media, see Sven Engesser et al., Populism and Social Media: How Politicians Spread a Fragmented Ideology , 20 Info. Commc’n & Soc’y 1109 (2017). For attacks on the press, see WPFD 2021: Attacks on Press Freedom Growing Bolder Amid Rising Authoritarianism , Int’l Press Inst. (Apr. 30, 2021), https://perma.cc/SGN9-55A8. For attacks on the judiciary, see Michael Dichio & Igor Logvinenko, Authoritarian Populism, Courts and Democratic Erosion , Just Sec. (Feb. 11, 2021), https://perma.cc/WZ6J-YG49.
  • 19 Kuo, supra note 16, at 558–59; see also Digital Populism , supra note 16.
  • 20 Vicki C. Jackson, Knowledge Institutions in Constitutional Democracy: Reflections on “the Press” , 15 J. Media L. 275 (2022).
  • 21 Many of the ideas and information on this topic were collected in Luna van Brussel Barroso, Liberdade de Expressão e Democracia na Era Digital: O impacto das mídias sociais no mundo contemporâneo [Freedom of Expression and Democracy in the Digital Era: The Impact of Social Media in the Contemporary World] (2022), which was recently published in Brazil.
  • 22 The first industrial revolution is marked by the use of steam as a source of energy in the middle of the 18th century. The second started with the use of electricity and the invention of the internal combustion engine at the turn of the 19th to the 20th century. There are already talks of the fourth industrial revolution as a product of the fusion of technologies that blurs the boundaries among the physical, digital, and biological spheres. See generally Klaus Schwab, The Fourth Industrial Revolution (2017).
  • 23 Gregory P. Magarian, The Internet and Social Media , in The Oxford Handbook of Freedom of Speech 350, 351–52 (Adrienne Stone & Frederick Schauer eds., 2021).
  • 24 Wu, supra note 1, at 15.
  • 25 Journalistic ethics include distinguishing fact from opinion, verifying the veracity of what is published, having no self-interest in the matter being reported, listening to the other side, and rectifying mistakes. For an example of an international journalistic ethics charter, see Global Charter of Ethics for Journalists , Int’l Fed’n of Journalists (June 12, 2019), https://perma.cc/7A2C-JD2S.
  • 26 See, e.g. , New York Times Co. v. Sullivan, 376 U.S. 254 (1964).
  • 27 Balkin, supra note 2, at 2018.
  • 28 Magarian, supra note 23, at 351–52.
  • 29 Wu, supra note 1, at 15.
  • 30 Magarian, supra note 23, at 357–60.
  • 31 Niva Elkin-Koren & Maayan Perel, Speech Contestation by Design: Democratizing Speech Governance by AI , 50 Fla. State U. L. Rev. (forthcoming 2023).
  • 32 Thomas E. Kadri & Kate Klonick, Facebook v. Sullivan: Public Figures and Newsworthiness in Online Speech , 93 S. Cal. L. Rev. 37, 94 (2019).
  • 33 Elkin-Koren & Perel, supra note 31.
  • 34 Chris Meserole, How Do Recommender Systems Work on Digital Platforms? , Brookings Inst. (Sept. 21, 2022), https://perma.cc/H53K-SENM.
  • 35 Kris Shaffer, Data versus Democracy: How Big Data Algorithms Shape Opinions and Alter the Course of History xi–xv (2019).
  • 36 See generally Stuart Russell, Human Compatible: Artificial Intelligence and the Problem of Control (2019).
  • 37 Shaffer, supra note 35, at xi–xv.
  • 38 More recently, with the advance of neuroscience, platforms have sharpened their ability to manipulate and change our emotions, feelings and, consequently, our behavior in accordance not with our own interests, but with theirs (or of those who they sell this service to). Kaveh Waddell, Advertisers Want to Mine Your Brain , Axios (June 4, 2019), https://perma.cc/EU85-85WX. In this context, there is already talk of a new fundamental right to cognitive liberty, mental self-determination, or the right to free will. Id .
  • 39 Content moderation refers to “systems that classify user generated content based on either matching or prediction, leading to a decision and governance outcome (e.g. removal, geoblocking, account takedown).” Robert Gorwa, Reuben Binns & Christian Katzenbach, Algorithmic Content Moderation: Technical and Political Challenges in the Automation of Platform Governance , 7 Big Data & Soc’y 1, 3 (2020).
  • 40 Jack M. Balkin, Free Speech in the Algorithmic Society: Big Data, Private Governance, and New School Speech Regulation , 51 U.C. Davis L. Rev. 1149, 1183 (2018).
  • 41 See Carey Shenkman, Dhanaraj Thakur & Emma Llansó, Do You See What I See? Capabilities and Limits of Automated Multimedia Content Analysis 13–16 (May 2021), https://perma.cc/J9MP-7PQ8.
  • 42 See id. at 17–21.
  • 43 See Michael Wooldridge, A Brief History of Artificial Intelligence: What It Is, Where We Are, and Where We Are Going 63 (2021).
  • 44 Perceptual hashing has been the primary technology used to mitigate the spread of CSAM, since the same materials are often repeatedly shared, and databases of offending content are maintained by institutions such as the National Center for Missing and Exploited Children (NCMEC) and its international analogue, the International Centre for Missing & Exploited Children (ICMEC).
  • 45 Natural language understanding is undermined by language ambiguity, contextual dependence of words of non-immediate proximity, references, metaphors, and general semantics rules. See Erik J. Larson, The Myth of Artificial Intelligence: Why Computers Can’t Think the Way We Do 52–55 (2021). Language comprehension in fact requires unlimited common-sense knowledge about the actual world, which humans possess and is impossible to code. Id . A case decided by Facebook’s Oversight Board illustrates the point: the company’s predictive filter for combatting pornography removed images from a breast cancer awareness campaign, a clearly legitimate content not meant to be targeted by the algorithm. See Breast Cancer Symptoms and Nudity , Oversight Bd. (2020), https://perma.cc/U9A5-TTTJ. However, based on prior training, the algorithm removed the publication because it detected pornography and was unable to factor the contextual consideration that this was a legitimate health campaign. Id .
  • 46 See generally Adriano Koshiyama, Emre Kazim & Philip Treleaven, Algorithm Auditing: Managing the Legal, Ethical, and Technological Risks of Artificial Intelligence, Machine Learning, and Associated Algorithms , 55 Computer 40 (2022).
  • 47 Elkin-Koren & Perel, supra note 31.
  • 48 Evelyn Douek, Governing Online Speech: From “Posts-as-Trumps” to Proportionality and Probability , 121 Colum. L. Rev. 759, 791 (2021).
  • 53 See Martha Minow, Saving the Press: Why the Constitution Calls for Government Action to Preserve Freedom of Speech 20 (2021). For example, the best-selling newspaper in the world, The New York Times , ended the year 2022 with around 10 million subscribers across digital and print. Katie Robertson, The New York Times Company Adds 180,000 Digital Subscribers , N.Y. Times (Nov. 2, 2022), https://perma.cc/93PF-TKC5. The Economist magazine had approximately 1.2 million subscribers in 2022. The Economist Group, Annual Report 2022 24 (2022), https://perma.cc/9HQQ-F7W2. Around the world, publications that reach one million subscribers are rare. These Are the Most Popular Paid Subscription News Websites , World Econ. F. (Apr. 29, 2021), https://perma.cc/L2MK-VPNX.
  • 54 Lawrence Lessig, They Don’t Represent Us: Reclaiming Our Democracy 105 (2019).
  • 55 Essential Facebook Statistics and Trends for 2023 , Datareportal (Feb. 19, 2023), https://perma.cc/UH33-JHUQ.
  • 56 YouTube User Statistics 2023 , Glob. Media Insight (Feb. 27, 2023), https://perma.cc/3H4Y-H83V.
  • 57 Brian Dean, WhatsApp 2022 User Statistics: How Many People Use WhatsApp , Backlinko (Jan. 5, 2022), https://perma.cc/S8JX-S7HN.
  • 58 Confirmation bias, the tendency to seek out and favor information that reinforces one’s existing beliefs, presents an obstacle to critical thinking. Sachin Modgil et al., A Confirmation Bias View on Social Media Induced Polarisation During COVID-19 , Info. Sys. Frontiers (Nov. 20, 2021).
  • 59 Minow, supra note 53, at 2.
  • 60 Id. at 3, 11.
  • 61 On the importance of the role of the press as an institution of public interest and its “crucial relationship” with democracy, see id. at 35. On the press as a “knowledge institution,” the idea of “institutional press,” and data on the loss of prestige by newspapers and television stations, see Jackson, supra note 20, at 4–5.
  • 62 See , e.g. , Jack M. Balkin, How to Regulate (and Not Regulate) Social Media , 1 J. Free Speech L. 71, 89–96 (2021).
  • 63 By possible truth we mean that not all claims, opinions and beliefs can be ascertained as true or false. Objective truths are factual and can thus be proven even when controversial—for example, climate change and the effectiveness of vaccines. Subjective truths, on the other hand, derive from individual normative, religious, philosophical, and political views. In a pluralistic world, any conception of freedom of expression must protect individual subjective beliefs.
  • 64 Eugene Volokh, In Defense of the Marketplace of Ideas/Search for Truth as a Theory of Free Speech Protection, 97 Va. L. Rev. 595, 595 (2011).
  • 66 Steven J. Heyman, Free Speech and Human Dignity 2 (2008).
  • 67 A Global Dialogue to Guide Regulation Worldwide, UNESCO (Feb. 23, 2023), https://perma.cc/ALK8-HTG3.
  • 68 Can We Fix What’s Wrong with Social Media?, Yale L. Sch. News (Aug. 3, 2022), https://perma.cc/MN58-2EVK.
  • 69 Lessig, supra note 54, at 105.
  • 71 See supra Part III.B.
  • 72 Douek, supra note 48, at 804–13; see also John Bowers & Jonathan Zittrain, Answering Impossible Questions: Content Governance in an Age of Disinformation, Harv. Kennedy Sch. Misinformation Rev. (Jan. 14, 2020), https://perma.cc/R7WW-8MQX.
  • 73 Daphne Keller, Systemic Duties of Care and Intermediary Liability, Ctr. for Internet & Soc’y Blog (May 28, 2020), https://perma.cc/25GU-URGT.
  • 75 Lei No. 12.965, de 23 de abril de 2014, Diário Oficial da União [D.O.U.] de 24.4.2014 (Braz.) art. 19. In order to ensure freedom of expression and prevent censorship, providers of internet applications can be held civilly liable for damages resulting from content generated by third parties only if, after a specific court order, they fail to take steps, within the scope and technical limits of their service and within the indicated time, to make unavailable the content identified as infringing, except as otherwise provided by law. Id.
  • 76 Id. art. 21. An internet application provider that makes available content generated by third parties will be held liable for violations of intimacy resulting from the disclosure, without the authorization of its participants, of images, videos, or other materials containing nude scenes or private sexual acts when, upon receipt of notification by the participant or his or her legal representative, it fails to diligently promote, within the scope and technical limits of its service, the unavailability of this content. Id.
  • 77 Balkin, supra note 2, at 2017.
  • 78 Kate Klonick, The New Governors: The People, Rules, and Processes Governing Online Speech, 131 Harv. L. Rev. 1598, 1603 (2018).
  • 79 Transparency Reporting Index, Access Now (July 2021), https://perma.cc/2TSL-2KLD (cataloguing transparency reporting from companies around the world).
  • 80 Hum. Rts. Comm., Rep. of the Special Rapporteur on the promotion and protection of the right to freedom of opinion and expression, ¶¶ 63–66, U.N. Doc A/HRC/32/35 (2016).
  • 81 Paddy Leerssen, The Soap Box as a Black Box: Regulating Transparency in Social Media Recommender Systems, 11 Eur. J. L. & Tech. (2020).
  • 82 Daphne Keller, Some Humility About Transparency, Ctr. for Internet & Soc’y Blog (Mar. 19, 2021), https://perma.cc/4Y85-BATA.
  • 83 Mark MacCarthy, Transparency Requirements for Digital Social Media Platforms: Recommendations for Policy Makers and Industry, Transatlantic Working Grp. (Feb. 12, 2020).
  • 84 2022 O.J. (L 277) 1 [hereinafter DSA].
  • 85 The DSA was approved by the European Parliament on July 5, 2022, and on October 4, 2022, the European Council gave its final approval to the regulation. Digital Services: Landmark Rules Adopted for a Safer, Open Online Environment, Eur. Parliament (July 5, 2022), https://perma.cc/BZP5-V2B2. The DSA increases the transparency and accountability of platforms by providing, for example, for the obligation of “clear information on content moderation or the use of algorithms for recommending content (so-called recommender systems); users will be able to challenge content moderation decisions.” Id.
  • 86 MacCarthy, supra note 83, at 19–24.
  • 87 To this end, American legislators recently introduced a U.S. Congressional bill that proposes a model for conducting research on the impacts of digital communications in a way that protects user privacy. See Platform Accountability and Transparency Act, S. 5339, 117th Cong. (2022). The bill mandates that digital platforms share data with researchers previously authorized by the Federal Trade Commission and publicly disclose certain data about content, algorithms, and advertising. Id.
  • 88 Yifat Nahmias & Maayan Perel, The Oversight of Content Moderation by AI: Impact Assessments and Their Limitations, 58 Harv. J. on Legis. 145, 154–57 (2021).
  • 89 Auditing Algorithms: The Existing Landscape, Role of Regulator and Future Outlook, Digit. Regul. Coop. F. (Sept. 23, 2022), https://perma.cc/7N6W-JNCW.
  • 90 See generally Koshiyama et al., supra note 46.
  • 91 In Article 37, the DSA provides that digital platforms of a certain size should be accountable, through annual independent auditing, for compliance with the obligations set forth in the Regulation and with any commitment undertaken pursuant to codes of conduct and crisis protocols.
  • 92 Digit. Regul. Coop. F., supra note 89.
  • 93 In a transparency report published at the end of its first year of operation, the Oversight Board highlighted the inadequacy of the explanations presented by Meta on the operation of a system known as cross-check, which apparently gave some users greater freedom on the platform. In January 2022, Meta explained that the cross-check system grants an additional degree of review to certain content that internal systems mark as violating the platform’s terms of use. Meta submitted a query to the Board on how to improve the functioning of this system, and the Board made relevant recommendations. See Oversight Board Publishes Policy Advisory Opinion on Meta’s Cross-Check Program, Oversight Bd. (Dec. 2022), https://perma.cc/87Z5-L759.
  • 94 Evelyn Douek, Content Moderation as Systems Thinking, 136 Harv. L. Rev. 526, 602–03 (2022).
  • 95 The illicit nature of child pornography is objectively apprehended and does not implicate the same subjective considerations that the other referenced categories entail. Not surprisingly, several databases have been created to facilitate the moderation of this content. See Ofcom, Overview of Perceptual Hashing Technology 14 (Nov. 22, 2022), https://perma.cc/EJ45-B76X (“Several hash databases to support the detection of known CSAM exist, e.g. the National Center for Missing and Exploited Children (NCMEC) hash database, the Internet Watch Foundation (IWF) hash list and the International Child Sexual Exploitation (ICSE) hash database.”).
  • 97 Jonathan Haidt, Why the Past 10 Years of American Life Have Been Uniquely Stupid, Atlantic (Apr. 11, 2022), https://perma.cc/2NXD-32VM.


How Harmful Is Social Media?

A social-media battlefield

In April, the social psychologist Jonathan Haidt published an essay in The Atlantic in which he sought to explain, as the piece’s title had it, “Why the Past 10 Years of American Life Have Been Uniquely Stupid.” Anyone familiar with Haidt’s work in the past half decade could have anticipated his answer: social media. Although Haidt concedes that political polarization and factional enmity long predate the rise of the platforms, and that there are plenty of other factors involved, he believes that the tools of virality—Facebook’s Like and Share buttons, Twitter’s Retweet function—have algorithmically and irrevocably corroded public life. He has determined that a great historical discontinuity can be dated with some precision to the period between 2010 and 2014, when these features became widely available on phones.

“What changed in the 2010s?” Haidt asks, reminding his audience that a former Twitter developer had once compared the Retweet button to the provision of a four-year-old with a loaded weapon. “A mean tweet doesn’t kill anyone; it is an attempt to shame or punish someone publicly while broadcasting one’s own virtue, brilliance, or tribal loyalties. It’s more a dart than a bullet, causing pain but no fatalities. Even so, from 2009 to 2012, Facebook and Twitter passed out roughly a billion dart guns globally. We’ve been shooting one another ever since.” While the right has thrived on conspiracy-mongering and misinformation, the left has turned punitive: “When everyone was issued a dart gun in the early 2010s, many left-leaning institutions began shooting themselves in the brain. And, unfortunately, those were the brains that inform, instruct, and entertain most of the country.” Haidt’s prevailing metaphor of thoroughgoing fragmentation is the story of the Tower of Babel: the rise of social media has “unwittingly dissolved the mortar of trust, belief in institutions, and shared stories that had held a large and diverse secular democracy together.”

These are, needless to say, common concerns. Chief among Haidt’s worries is that use of social media has left us particularly vulnerable to confirmation bias, or the propensity to fix upon evidence that shores up our prior beliefs. Haidt acknowledges that the extant literature on social media’s effects is large and complex, and that there is something in it for everyone. On January 6, 2021, he was on the phone with Chris Bail, a sociologist at Duke and the author of the recent book “Breaking the Social Media Prism,” when Bail urged him to turn on the television. Two weeks later, Haidt wrote to Bail, expressing his frustration at the way Facebook officials consistently cited the same handful of studies in their defense. He suggested that the two of them collaborate on a comprehensive literature review that they could share, as a Google Doc, with other researchers. (Haidt had experimented with such a model before.) Bail was cautious. He told me, “What I said to him was, ‘Well, you know, I’m not sure the research is going to bear out your version of the story,’ and he said, ‘Why don’t we see?’ ”

Bail emphasized that he is not a “platform-basher.” He added, “In my book, my main take is, Yes, the platforms play a role, but we are greatly exaggerating what it’s possible for them to do—how much they could change things no matter who’s at the helm at these companies—and we’re profoundly underestimating the human element, the motivation of users.” He found Haidt’s idea of a Google Doc appealing, in the way that it would produce a kind of living document that existed “somewhere between scholarship and public writing.” Haidt was eager for a forum to test his ideas. “I decided that if I was going to be writing about this—what changed in the universe, around 2014, when things got weird on campus and elsewhere—once again, I’d better be confident I’m right,” he said. “I can’t just go off my feelings and my readings of the biased literature. We all suffer from confirmation bias, and the only cure is other people who don’t share your own.”

Haidt and Bail, along with a research assistant, populated the document over the course of several weeks last year, and in November they invited about two dozen scholars to contribute. Haidt told me, of the difficulties of social-scientific methodology, “When you first approach a question, you don’t even know what it is. ‘Is social media destroying democracy, yes or no?’ That’s not a good question. You can’t answer that question. So what can you ask and answer?” As the document took on a life of its own, tractable rubrics emerged—Does social media make people angrier or more affectively polarized? Does it create political echo chambers? Does it increase the probability of violence? Does it enable foreign governments to increase political dysfunction in the United States and other democracies? Haidt continued, “It’s only after you break it up into lots of answerable questions that you see where the complexity lies.”

Haidt came away with the sense, on balance, that social media was in fact pretty bad. He was disappointed, but not surprised, that Facebook’s response to his article relied on the same three studies they’ve been reciting for years. “This is something you see with breakfast cereals,” he said, noting that a cereal company “might say, ‘Did you know we have twenty-five per cent more riboflavin than the leading brand?’ They’ll point to features where the evidence is in their favor, which distracts you from the over-all fact that your cereal tastes worse and is less healthy.”

After Haidt’s piece was published, the Google Doc—“Social Media and Political Dysfunction: A Collaborative Review”—was made available to the public. Comments piled up, and a new section was added, at the end, to include a miscellany of Twitter threads and Substack essays that appeared in response to Haidt’s interpretation of the evidence. Some colleagues and kibitzers agreed with Haidt. But others, though they might have shared his basic intuition that something in our experience of social media was amiss, drew upon the same data set to reach less definitive conclusions, or even mildly contradictory ones. Even after the initial flurry of responses to Haidt’s article disappeared into social-media memory, the document, insofar as it captured the state of the social-media debate, remained a lively artifact.

Near the end of the collaborative project’s introduction, the authors warn, “We caution readers not to simply add up the number of studies on each side and declare one side the winner.” The document runs to more than a hundred and fifty pages, and for each question there are affirmative and dissenting studies, as well as some that indicate mixed results. According to one paper, “Political expressions on social media and the online forum were found to (a) reinforce the expressers’ partisan thought process and (b) harden their pre-existing political preferences,” but, according to another, which used data collected during the 2016 election, “Over the course of the campaign, we found media use and attitudes remained relatively stable. Our results also showed that Facebook news use was related to modest over-time spiral of depolarization. Furthermore, we found that people who use Facebook for news were more likely to view both pro- and counter-attitudinal news in each wave. Our results indicated that counter-attitudinal exposure increased over time, which resulted in depolarization.” If results like these seem incompatible, a perplexed reader is given recourse to a study that says, “Our findings indicate that political polarization on social media cannot be conceptualized as a unified phenomenon, as there are significant cross-platform differences.”

Interested in echo chambers? “Our results show that the aggregation of users in homophilic clusters dominate online interactions on Facebook and Twitter,” which seems convincing—except that, as another team has it, “We do not find evidence supporting a strong characterization of ‘echo chambers’ in which the majority of people’s sources of news are mutually exclusive and from opposite poles.” By the end of the file, the vaguely patronizing top-line recommendation against simple summation begins to make more sense. A document that originated as a bulwark against confirmation bias could, as it turned out, just as easily function as a kind of generative device to support anybody’s pet conviction. The only sane response, it seemed, was simply to throw one’s hands in the air.

When I spoke to some of the researchers whose work had been included, I found a combination of broad, visceral unease with the current situation—with the banefulness of harassment and trolling; with the opacity of the platforms; with, well, the widespread presentiment that of course social media is in many ways bad—and a contrastive sense that it might not be catastrophically bad in some of the specific ways that many of us have come to take for granted as true. This was not mere contrarianism, and there was no trace of gleeful mythbusting; the issue was important enough to get right. When I told Bail that the upshot seemed to me to be that exactly nothing was unambiguously clear, he suggested that there was at least some firm ground. He sounded a bit less apocalyptic than Haidt.

“A lot of the stories out there are just wrong,” he told me. “The political echo chamber has been massively overstated. Maybe it’s three to five per cent of people who are properly in an echo chamber.” Echo chambers, as hotboxes of confirmation bias, are counterproductive for democracy. But research indicates that most of us are actually exposed to a wider range of views on social media than we are in real life, where our social networks—in the original use of the term—are rarely heterogeneous. (Haidt told me that this was an issue on which the Google Doc changed his mind; he became convinced that echo chambers probably aren’t as widespread a problem as he’d once imagined.) And too much of a focus on our intuitions about social media’s echo-chamber effect could obscure the relevant counterfactual: a conservative might abandon Twitter only to watch more Fox News. “Stepping outside your echo chamber is supposed to make you moderate, but maybe it makes you more extreme,” Bail said. The research is inchoate and ongoing, and it’s difficult to say anything on the topic with absolute certainty. But this was, in part, Bail’s point: we ought to be less sure about the particular impacts of social media.

Bail went on, “The second story is foreign misinformation.” It’s not that misinformation doesn’t exist, or that it hasn’t had indirect effects, especially when it creates perverse incentives for the mainstream media to cover stories circulating online. Haidt also draws convincingly upon the work of Renée DiResta, the research manager at the Stanford Internet Observatory, to sketch out a potential future in which the work of shitposting has been outsourced to artificial intelligence, further polluting the informational environment. But, at least so far, very few Americans seem to suffer from consistent exposure to fake news—“probably less than two per cent of Twitter users, maybe fewer now, and for those who were it didn’t change their opinions,” Bail said. This was probably because the people likeliest to consume such spectacles were the sort of people primed to believe them in the first place. “In fact,” he said, “echo chambers might have done something to quarantine that misinformation.”

The final story that Bail wanted to discuss was the “proverbial rabbit hole, the path to algorithmic radicalization,” by which YouTube might serve a viewer increasingly extreme videos. There is some anecdotal evidence to suggest that this does happen, at least on occasion, and such anecdotes are alarming to hear. But a new working paper led by Brendan Nyhan, a political scientist at Dartmouth, found that almost all extremist content is either consumed by subscribers to the relevant channels—a sign of actual demand rather than manipulation or preference falsification—or encountered via links from external sites. It’s easy to see why we might prefer if this were not the case: algorithmic radicalization is presumably a simpler problem to solve than the fact that there are people who deliberately seek out vile content. “These are the three stories—echo chambers, foreign influence campaigns, and radicalizing recommendation algorithms—but, when you look at the literature, they’ve all been overstated.” He thought that these findings were crucial for us to assimilate, if only to help us understand that our problems may lie beyond technocratic tinkering. He explained, “Part of my interest in getting this research out there is to demonstrate that everybody is waiting for an Elon Musk to ride in and save us with an algorithm”—or, presumably, the reverse—“and it’s just not going to happen.”

When I spoke with Nyhan, he told me much the same thing: “The most credible research is way out of line with the takes.” He noted, of extremist content and misinformation, that reliable research that “measures exposure to these things finds that the people consuming this content are small minorities who have extreme views already.” The problem with the bulk of the earlier research, Nyhan told me, is that it’s almost all correlational. “Many of these studies will find polarization on social media,” he said. “But that might just be the society we live in reflected on social media!” He hastened to add, “Not that this is untroubling, and none of this is to let these companies, which are exercising a lot of power with very little scrutiny, off the hook. But a lot of the criticisms of them are very poorly founded. . . . The expansion of Internet access coincides with fifteen other trends over time, and separating them is very difficult. The lack of good data is a huge problem insofar as it lets people project their own fears into this area.” He told me, “It’s hard to weigh in on the side of ‘We don’t know, the evidence is weak,’ because those points are always going to be drowned out in our discourse. But these arguments are systematically underprovided in the public domain.”

In his Atlantic article, Haidt leans on a working paper by two social scientists, Philipp Lorenz-Spreen and Lisa Oswald, who took on a comprehensive meta-analysis of about five hundred papers and concluded that “the large majority of reported associations between digital media use and trust appear to be detrimental for democracy.” Haidt writes, “The literature is complex—some studies show benefits, particularly in less developed democracies—but the review found that, on balance, social media amplifies political polarization; foments populism, especially right-wing populism; and is associated with the spread of misinformation.” Nyhan was less convinced that the meta-analysis supported such categorical verdicts, especially once you bracketed the kinds of correlational findings that might simply mirror social and political dynamics. He told me, “If you look at their summary of studies that allow for causal inferences—it’s very mixed.”

As for the studies Nyhan considered most methodologically sound, he pointed to a 2020 article called “The Welfare Effects of Social Media,” by Hunt Allcott, Luca Braghieri, Sarah Eichmeyer, and Matthew Gentzkow. For four weeks prior to the 2018 midterm elections, the authors randomly divided a group of volunteers into two cohorts—one that continued to use Facebook as usual, and another that was paid to deactivate their accounts for that period. They found that deactivation “(i) reduced online activity, while increasing offline activities such as watching TV alone and socializing with family and friends; (ii) reduced both factual news knowledge and political polarization; (iii) increased subjective well-being; and (iv) caused a large persistent reduction in post-experiment Facebook use.” But Gentzkow reminded me that his conclusions, including that Facebook may slightly increase polarization, had to be heavily qualified: “From other kinds of evidence, I think there’s reason to think social media is not the main driver of increasing polarization over the long haul in the United States.”

In the book “Why We’re Polarized,” for example, Ezra Klein invokes the work of such scholars as Lilliana Mason to argue that the roots of polarization might be found in, among other factors, the political realignment and nationalization that began in the sixties, and were then sacralized, on the right, by the rise of talk radio and cable news. These dynamics have served to flatten our political identities, weakening our ability or inclination to find compromise. Insofar as some forms of social media encourage the hardening of connections between our identities and a narrow set of opinions, we might increasingly self-select into mutually incomprehensible and hostile groups; Haidt plausibly suggests that these processes are accelerated by the coalescence of social-media tribes around figures of fearful online charisma. “Social media might be more of an amplifier of other things going on rather than a major driver independently,” Gentzkow argued. “I think it takes some gymnastics to tell a story where it’s all primarily driven by social media, especially when you’re looking at different countries, and across different groups.”

Another study, led by Nejla Asimovic and Joshua Tucker, replicated Gentzkow’s approach in Bosnia and Herzegovina, and they found almost precisely the opposite results: the people who stayed on Facebook were, by the end of the study, more positively disposed to their historic out-groups. The authors’ interpretation was that ethnic groups have so little contact in Bosnia that, for some people, social media is essentially the only place where they can form positive images of one another. “To have a replication and have the signs flip like that, it’s pretty stunning,” Bail told me. “It’s a different conversation in every part of the world.”

Nyhan argued that, at least in wealthy Western countries, we might be too heavily discounting the degree to which platforms have responded to criticism: “Everyone is still operating under the view that algorithms simply maximize engagement in a short-term way” with minimal attention to potential externalities. “That might’ve been true when Zuckerberg had seven people working for him, but there are a lot of considerations that go into these rankings now.” He added, “There’s some evidence that, with reverse-chronological feeds”—streams of unwashed content, which some critics argue are less manipulative than algorithmic curation—“people get exposed to more low-quality content, so it’s another case where a very simple notion of ‘algorithms are bad’ doesn’t stand up to scrutiny. It doesn’t mean they’re good, it’s just that we don’t know.”

Bail told me that, over all, he was less confident than Haidt that the available evidence lines up clearly against the platforms. “Maybe there’s a slight majority of studies that say that social media is a net negative, at least in the West, and maybe it’s doing some good in the rest of the world.” But, he noted, “Jon will say that science has this expectation of rigor that can’t keep up with the need in the real world—that even if we don’t have the definitive study that creates the historical counterfactual that Facebook is largely responsible for polarization in the U.S., there’s still a lot pointing in that direction, and I think that’s a fair point.” He paused. “It can’t all be randomized control trials.”

Haidt comes across in conversation as searching and sincere, and, during our exchange, he paused several times to suggest that I include a quote from John Stuart Mill on the importance of good-faith debate to moral progress. In that spirit, I asked him what he thought of the argument, elaborated by some of Haidt’s critics, that the problems he described are fundamentally political, social, and economic, and that to blame social media is to search for lost keys under the streetlamp, where the light is better. He agreed that this was the steelman opponent: there were predecessors for cancel culture in de Tocqueville, and anxiety about new media that went back to the time of the printing press. “This is a perfectly reasonable hypothesis, and it’s absolutely up to the prosecution—people like me—to argue that, no, this time it’s different. But it’s a civil case! The evidential standard is not ‘beyond a reasonable doubt,’ as in a criminal case. It’s just a preponderance of the evidence.”

The way scholars weigh the testimony is subject to their disciplinary orientations. Economists and political scientists tend to believe that you can’t even begin to talk about causal dynamics without a randomized controlled trial, whereas sociologists and psychologists are more comfortable drawing inferences on a correlational basis. Haidt believes that conditions are too dire to take the hardheaded, no-reasonable-doubt view. “The preponderance of the evidence is what we use in public health. If there’s an epidemic—when COVID started, suppose all the scientists had said, ‘No, we gotta be so certain before you do anything’? We have to think about what’s actually happening, what’s likeliest to pay off.” He continued, “We have the largest epidemic ever of teen mental health, and there is no other explanation. It is a raging public-health epidemic, and the kids themselves say Instagram did it, and we have some evidence, so is it appropriate to say, ‘Nah, you haven’t proven it’?”

This was his attitude across the board. He argued that social media seemed to aggrandize inflammatory posts and to be correlated with a rise in violence; even if only small groups were exposed to fake news, such beliefs might still proliferate in ways that were hard to measure. “In the post-Babel era, what matters is not the average but the dynamics, the contagion, the exponential amplification,” he said. “Small things can grow very quickly, so arguments that Russian disinformation didn’t matter are like COVID arguments that people coming in from China didn’t have contact with a lot of people.” Given the transformative effects of social media, Haidt insisted, it was important to act now, even in the absence of dispositive evidence. “Academic debates play out over decades and are often never resolved, whereas the social-media environment changes year by year,” he said. “We don’t have the luxury of waiting around five or ten years for literature reviews.”

Haidt could be accused of question-begging—of assuming the existence of a crisis that the research might or might not ultimately underwrite. Still, the gap between the two sides in this case might not be quite as wide as Haidt thinks. Skeptics of his strongest claims are not saying that there’s no there there. Just because the average YouTube user is unlikely to be led to Stormfront videos, Nyhan told me, doesn’t mean we shouldn’t worry that some people are watching Stormfront videos; just because echo chambers and foreign misinformation seem to have had effects only at the margins, Gentzkow said, doesn’t mean they’re entirely irrelevant. “There are many questions here where the thing we as researchers are interested in is how social media affects the average person,” Gentzkow told me. “There’s a different set of questions where all you need is a small number of people to change—questions about ethnic violence in Bangladesh or Sri Lanka, people on YouTube mobilized to do mass shootings. Much of the evidence broadly makes me skeptical that the average effects are as big as the public discussion thinks they are, but I also think there are cases where a small number of people with very extreme views are able to find each other and connect and act.” He added, “That’s where many of the things I’d be most concerned about lie.”

The same might be said about any phenomenon where the base rate is very low but the stakes are very high, such as teen suicide. “It’s another case where those rare edge cases in terms of total social harm may be enormous. You don’t need many teen-age kids to decide to kill themselves or have serious mental-health outcomes in order for the social harm to be really big.” He added, “Almost none of this work is able to get at those edge-case effects, and we have to be careful that if we do establish that the average effect of something is zero, or small, that it doesn’t mean we shouldn’t be worried about it—because we might be missing those extremes.” Jaime Settle, a scholar of political behavior at the College of William & Mary and the author of the book “Frenemies: How Social Media Polarizes America,” noted that Haidt is “farther along the spectrum of what most academics who study this stuff are going to say we have strong evidence for.” But she understood his impulse: “We do have serious problems, and I’m glad Jon wrote the piece, and down the road I wouldn’t be surprised if we got a fuller handle on the role of social media in all of this—there are definitely ways in which social media has changed our politics for the worse.”

It’s tempting to sidestep the question of diagnosis entirely, and to evaluate Haidt’s essay not on the basis of predictive accuracy—whether social media will lead to the destruction of American democracy—but as a set of proposals for what we might do better. If he is wrong, how much damage are his prescriptions likely to do? Haidt, to his great credit, does not indulge in any wishful thinking, and if his diagnosis is largely technological his prescriptions are sociopolitical. Two of his three major suggestions seem useful and have nothing to do with social media: he thinks that we should end closed primaries and that children should be given wide latitude for unsupervised play. His recommendations for social-media reform are, for the most part, uncontroversial: he believes that preteens shouldn’t be on Instagram and that platforms should share their data with outside researchers—proposals that are both likely to be beneficial and not very costly.

It remains possible, however, that the true costs of social-media anxieties are harder to tabulate. Gentzkow told me that, for the period between 2016 and 2020, the direct effects of misinformation were difficult to discern. “But it might have had a much larger effect because we got so worried about it—a broader impact on trust,” he said. “Even if not that many people were exposed, the narrative that the world is full of fake news, and you can’t trust anything, and other people are being misled about it—well, that might have had a bigger impact than the content itself.” Nyhan had a similar reaction. “There are genuine questions that are really important, but there’s a kind of opportunity cost that is missed here. There’s so much focus on sweeping claims that aren’t actionable, or unfounded claims we can contradict with data, that are crowding out the harms we can demonstrate, and the things we can test, that could make social media better.” He added, “We’re years into this, and we’re still having an uninformed conversation about social media. It’s totally wild.”

Social Media Pros and Cons

Words: 889 | Pages: 2 | 5 min read | Updated: 7 November 2023

Hook Examples for Argumentative Essay on Social Media

  • A Startling Statistic: Did you know that over 3.6 billion people worldwide use social media? Join me as we explore the impact of this global phenomenon on our lives and society as a whole.
  • An Intriguing Quote: As Oscar Wilde once remarked, “Everything in moderation, including moderation.” These words prompt us to examine the balance between the benefits and drawbacks of social media in our lives.
  • A Personal Revelation: My own journey with social media led me to question its role in my life. Join me as I share my experiences and insights into the pros and cons of this omnipresent digital landscape.
  • A Societal Mirror: Social media reflects the best and worst of our society, from fostering connections to perpetuating misinformation. Explore with me how it both mirrors and shapes our cultural landscape.
  • An Evolving Debate: As technology advances and society changes, so does our understanding of social media’s impact. Join me in examining the ever-evolving debate surrounding the pros and cons of this powerful communication tool.

June 21, 2018

Biases Make People Vulnerable to Misinformation Spread by Social Media

Researchers have developed tools to study the cognitive, societal and algorithmic biases that help fake news spread

By Giovanni Luca Ciampaglia , Filippo Menczer & The Conversation US

Credit: Roy Scott / Getty Images

The following essay is reprinted with permission from The Conversation , an online publication covering the latest research.

Social media are among the  primary sources of news in the U.S.  and across the world. Yet users are exposed to content of questionable accuracy, including  conspiracy theories ,  clickbait ,  hyperpartisan content ,  pseudo science  and even  fabricated “fake news” reports .

It’s not surprising that there’s so much disinformation published: Spam and online fraud  are lucrative for criminals , and government and political propaganda yield  both partisan and financial benefits . But the fact that  low-credibility content spreads so quickly and easily  suggests that people and the algorithms behind social media platforms are vulnerable to manipulation.

[Video: Explaining the tools developed at the Observatory on Social Media.]

Our research has identified three types of bias that make the social media ecosystem vulnerable to both intentional and accidental misinformation. That is why our  Observatory on Social Media  at Indiana University is building  tools  to help people become aware of these biases and protect themselves from outside influences designed to exploit them.

Bias in the brain

Cognitive biases originate in the way the brain processes the information that every person encounters every day. The brain can deal with only a finite amount of information, and too many incoming stimuli can cause  information overload . That in itself has serious implications for the quality of information on social media. We have found that steep competition for users’ limited attention means that  some ideas go viral despite their low quality —even when people prefer to share high-quality content.*

To avoid getting overwhelmed, the brain uses a  number of tricks . These methods are usually effective, but may also  become biases  when applied in the wrong contexts.

One cognitive shortcut happens when a person is deciding whether to share a story that appears on their social media feed. People are  very affected by the emotional connotations of a headline , even though that’s not a good indicator of an article’s accuracy. Much more important is  who wrote the piece .

To counter this bias, and help people pay more attention to the source of a claim before sharing it, we developed  Fakey , a mobile news literacy game (free on  Android  and  iOS ) simulating a typical social media news feed, with a mix of news articles from mainstream and low-credibility sources. Players get more points for sharing news from reliable sources and flagging suspicious content for fact-checking. In the process, they learn to recognize signals of source credibility, such as hyperpartisan claims and emotionally charged headlines.

Bias in society

Another source of bias comes from society. When people connect directly with their peers, the social biases that guide their selection of friends come to influence the information they see.

In fact, in our research we have found that it is possible to  determine the political leanings of a Twitter user  by simply looking at the partisan preferences of their friends. Our analysis of the structure of these  partisan communication networks  found social networks are particularly efficient at disseminating information – accurate or not – when  they are closely tied together and disconnected from other parts of society .
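The friend-based inference described above can be illustrated with a toy calculation (not the researchers' actual classifier): assign each labeled account a hypothetical partisanship score between -1.0 and +1.0, and estimate an unlabeled user's leaning as the average score of their labeled friends. All account names and scores below are invented for the example.

```python
# Toy sketch: estimate a user's political leaning from the known
# partisanship scores of their friends. Scores run from -1.0 (one pole)
# to +1.0 (the other); all accounts and values are hypothetical.

def infer_leaning(friends, known_scores):
    """Average the partisanship scores of friends with known labels."""
    scores = [known_scores[f] for f in friends if f in known_scores]
    if not scores:
        return None  # no labeled friends: leaning is undetermined
    return sum(scores) / len(scores)

known_scores = {"alice": -0.8, "bob": -0.6, "carol": 0.9, "dave": 0.7}

print(infer_leaning(["alice", "bob", "unknown_user"], known_scores))  # ~ -0.7
print(infer_leaning(["carol", "dave"], known_scores))                 # ~ 0.8
```

Real systems use far richer network features, but even this crude averaging captures why homophily makes leanings so easy to guess.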

The tendency to evaluate information more favorably if it comes from within their own social circles creates “ echo chambers ” that are ripe for manipulation, either consciously or unintentionally. This helps explain why so many online conversations devolve into  “us versus them” confrontations .

To study how the structure of online social networks makes users vulnerable to disinformation, we built  Hoaxy , a system that tracks and visualizes the spread of content from low-credibility sources, and how it competes with fact-checking content. Our analysis of the data collected by Hoaxy during the 2016 U.S. presidential elections shows that Twitter accounts that shared misinformation were  almost completely cut off from the corrections made by the fact-checkers.

When we drilled down on the misinformation-spreading accounts, we found a very dense core group of accounts retweeting each other almost exclusively – including several bots. The only times that fact-checking organizations were ever quoted or mentioned by the users in the misinformed group were when questioning their legitimacy or claiming the opposite of what they wrote.

Bias in the machine

The third group of biases arises directly from the algorithms used to determine what people see online. Both social media platforms and search engines employ them. These personalization technologies are designed to select only the most engaging and relevant content for each individual user. But in doing so, they may end up reinforcing the cognitive and social biases of users, making them even more vulnerable to manipulation.

For instance, the detailed  advertising tools built into many social media platforms  let disinformation campaigners exploit  confirmation bias  by  tailoring messages  to people who are already inclined to believe them.

Also, if a user often clicks on Facebook links from a particular news source, Facebook will  tend to show that person more of that site’s content . This so-called “ filter bubble ” effect may isolate people from diverse perspectives, strengthening confirmation bias.

Our own research shows that social media platforms expose users to a less diverse set of sources than do non-social media sites like Wikipedia. Because this is at the level of a whole platform, not of a single user, we call this the  homogeneity bias .

Another important ingredient of social media is information that is trending on the platform, according to what is getting the most clicks. We call this  popularity bias , because we have found that an algorithm designed to promote popular content may negatively affect the overall quality of information on the platform. This also feeds into existing cognitive bias, reinforcing what appears to be popular irrespective of its quality.
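A small simulation makes this dynamic concrete. It is an illustrative toy model, not the study's actual experiment: fifty items get a fixed random quality, the feed ranks purely by share count, and each simulated user shares one item from the top of the feed regardless of its quality.

```python
import random

random.seed(42)

# Fifty items, each with an intrinsic quality and a running share count.
items = [{"quality": random.random(), "shares": 0} for _ in range(50)]

# A feed ranked purely by popularity: each simulated user looks only at
# the current top five items and shares one of them at random, so early
# leaders accumulate shares regardless of quality.
for _ in range(1000):
    items.sort(key=lambda it: it["shares"], reverse=True)
    visible = items[:5]                    # top of the popularity-ranked feed
    random.choice(visible)["shares"] += 1  # engagement begets engagement

most_shared = max(items, key=lambda it: it["shares"])
best = max(items, key=lambda it: it["quality"])
print(most_shared["quality"], best["quality"])  # compare the winner to the best item
```

Because Python's sort is stable, the items that happen to sit at the top of the first feed lock in all subsequent shares: a rich-get-richer loop in which popularity, not quality, decides what wins.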

All these algorithmic biases can be manipulated by  social bots , computer programs that interact with humans through social media accounts. Most social bots, like Twitter’s  Big Ben , are harmless. However, some conceal their real nature and are used for malicious intents, such as  boosting disinformation  or falsely  creating the appearance of a grassroots movement , also called “astroturfing.” We found  evidence of this type of manipulation  in the run-up to the 2010 U.S. midterm election.

To study these manipulation strategies, we developed a tool to detect social bots called  Botometer . Botometer uses machine learning to detect bot accounts by inspecting thousands of different features of a Twitter account, such as when it posts, how often it tweets, and which accounts it follows and retweets. It is not perfect, but it has revealed that as many as  15 percent of Twitter accounts show signs of being bots .
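To give a flavor of feature-based scoring, here is a deliberately simplified heuristic. The features, thresholds, and equal weighting are invented for this sketch; Botometer's real model is a machine-learning classifier trained on thousands of features rather than hand-coded rules.

```python
# Simplified bot-likeness score: the fraction of crude red-flag signals
# an account trips. All thresholds are illustrative, not Botometer's.

def bot_score(account):
    """Return a score in [0, 1]; higher means more bot-like."""
    signals = [
        account["tweets_per_day"] > 100,                   # inhuman posting rate
        account["followers"] < account["following"] / 10,  # follow-spam pattern
        account["retweet_fraction"] > 0.9,                 # almost never original
        account["account_age_days"] < 30,                  # freshly created
    ]
    return sum(signals) / len(signals)

suspicious = {"tweets_per_day": 400, "followers": 12, "following": 5000,
              "retweet_fraction": 0.97, "account_age_days": 9}
print(bot_score(suspicious))  # 1.0 -- every signal fires
```

A real detector would learn its weights from labeled accounts rather than hard-coding thresholds, which is why machine learning is used in practice.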

Using Botometer in conjunction with Hoaxy, we analyzed the core of the misinformation network during the 2016 U.S. presidential campaign. We found many bots exploiting their victims' cognitive biases, including confirmation and popularity bias, as well as Twitter's algorithmic biases.

These bots are able to construct filter bubbles around vulnerable users, feeding them false claims and misinformation. First, they can attract the attention of human users who support a particular candidate by tweeting that candidate’s hashtags or by mentioning and retweeting the person. Then the bots can amplify false claims smearing opponents by retweeting articles from low-credibility sources that match certain keywords. This activity also makes the algorithm highlight for other users false stories that are being shared widely.

Understanding complex vulnerabilities

Even as our research, and others’, shows how individuals, institutions and even entire societies can be manipulated on social media, there are  many questions  left to answer. It’s especially important to discover how these different biases interact with each other, potentially creating more complex vulnerabilities.

Tools like ours offer internet users more information about disinformation, and therefore some degree of protection from its harms. The solutions will  not likely be only technological , though there will probably be some technical aspects to them. But they must take into account  the cognitive and social aspects  of the problem.

*Editor’s note: This article was updated on Jan. 10, 2019, to remove a link to a study that has been retracted. The text of the article is still accurate, and remains unchanged.

This article was originally published on The Conversation . Read the original article .

482 Social Media Essay Topic Ideas & Examples


Before starting your social media essay, you need to organize your themes and ideas in a way that will make writing much easier. To structure an essay on social media, go through the following steps:

  • Step 1. Check the instructions to understand what is expected of you. Essays come in many types, such as persuasive, argumentative, or informative, and the type required by your tutor determines the content, so make sure you know what you need to write. Some tutors provide examples of other students’ work; it is helpful to read those, too.
  • Step 2. Choose a topic that sounds compelling. This might be the most challenging process if your tutor didn’t provide a list of possible topics to explore. To assist you in narrowing down social media essay topics, browse sample papers online, and see if they give you any inspiration.
  • Step 3. Research the subject well. While personal opinions and experiences may be relevant, you should always support them with high-quality research data. Social media is a prominent scholarly area, so there are a lot of studies that you may find useful. For example, if you need to write an essay on social media and its impact, read research on the positive and negative influences of social media.
  • Step 4. Write out the key points. These items may include your opinions, research results, and other relevant information. Take note of any new thoughts that come to mind in relation to your chosen theme.
  • Step 5. Create a powerful thesis statement. Based on all the things you’ve read, what will be the most significant idea of your essay? Do you want to write an essay on social media disadvantages? Or have you found that social networking is far more useful than people think and can be used for good? Writing a thesis that narrows down the focus of your essay creates a foundation for its structure.
  • Step 6. Double-check your points to see if any of them don’t fit the thesis. Removing these points helps ensure that your essay is well organized and focused. For example, if you want to explore bullying on social media, don’t write a separate section about the educational uses of social media. Irrelevant statements could confuse the reader and may cost you a few marks.
  • Step 7. Write section titles that correspond to your main points. Writing the title of each section will help you to put your points in a logical order. This will ensure that your essay has an excellent flow and is enjoyable for the reader. If certain points don’t seem to fit together in one section – move some of them around or replace them with related statements.
  • Step 8. Avoid adding any new information into the last section. A social media essay conclusion serves to summarize your points and show how they support your thesis. This should be the final section you structure. Do not introduce any new information here because it will confuse your readers and make the essay seem unfinished.

The steps described above will help you to structure your essay and receive high marks for it. If you need a great social media essay example to get started, check our website for free samples and tips!

  • Positive & Negative Effects of Social Media on Teens Therefore, the topic raises a serious problem: the socialization of a teenager under the influence of the Internet environment. This paper reveals the positive and negative aspects of the influence of social networks on the […]
  • The Role of Social Media in Modern Society Essay Nowadays, Facebook has become one of the largest networks in the world by means of which people can share and exchange views, images, and photos.
  • Social Media and Privacy: The Dangers and Privacy Issues The pros and cons of social media shall be discussed and solutions to curb the security and privacy issues proposed. Despite the fact that social media has helped people stay in touch regardless of various […]
  • Social Media and Interpersonal Relationships This has made some of these relationships blossom. It can be concluded that social media has both positive and negative effects on relationships.
  • Social Media: Beneficial or Harmful? Social media is a core element of the internet, and it reshaped how a modern human perceives information, communicates, socializes, and learns about the outside world.
  • Rumors in Social Media and Their Impact on People The reason for this is that rumors are often based on the truth. In addition to this, the rumors are often interesting or of a sensational nature.
  • Facebook Essay Talking to friends and relatives or family members is now possible with a single Facebook account which is a perfect platform to chat and communicate.
  • Social Media: Negative and Positive Impacts It is evident that social media has negative and positive impacts on the lives of many people. Social media has enabled many people to get connected in many parts of the world.
  • How Social Media Harms Children It is important to note that with the rise of the internet and globalization, social media platforms have exploded in their popularity and use across all nations, ages, and other demographics.
  • Social Media Ethics Essay: Examples & Definition In the initial stages of social media, it was easy for companies to brush aside the idea of social media and have nothing to do with it, and thus risk becoming victims of the two risks.
  • Social Media and Its Impacts on Society The rise of social media has been facilitated by the emergence of the Internet, which came into existence with the development of the first electronic computer in the 1950s.
  • Freedom of Speech in Social Media Essay Gelber tries to say that the history of the freedom of speech in Australia consists of the periods of the increasing public debates on the issue of human rights and their protection.
  • Social Media Impact on Drug Abuse Thus, social media platforms definitely contribute to the misuse of various drugs by romanticizing their consumption and making “social drug use” acceptable among users.
  • Social Media Marketing and Promotion In conclusion, like the other firms operating in the clothing sector, Rock and Diesel are struggling to retain their market share.
  • Social Media and Globalization: Positive and Negative Effects Essay It will look at the advantages and disadvantages of globalization and the response of social media to the global phenomena. This paper sets out to expound on the many positive and negative impacts of the […]
  • Social Media Effects on Consumer Behaviour The paper features sections about the aspects of consumer behavior, the relationships between customers’ trust and the growth of social media, the effects of electronic word-of-mouth, and the significance of brand awareness.
  • People’s Responsibility in the Social Media World and Its Effects on the Reputation Knowing how to protect the privacy on social media is vital and can be achieved by practicing the following good practices. These theories tend to assess the effects of social media on groups, individual and […]
  • Fake Trends on Social Media Platforms: Twitter Data collection tools can be used to trace the roots of disinformation and ultimately to stunt such activity and to meliorate the toxicity of the media environment.
  • Social Media and Democracy For example, in 2009, during the Iran elections, citizens were able to comment on Facebooks and Youtube, and the whole world was able to follow the election proceedings.
  • The Effects of Facebook and Other Social Media on Group Mind and Social Pressure Members of a particular social network have to conform to certain principles that define the social group despite the difference of opinion.
  • Social Media Case Study: Nike’s #YouCantStopUs Campaign For example, in Nike’s tweet on Twitter entitled “Nothing can stop what we can do together,” the company used an image of hugging Black athletes and encouraged the audience to install the Nike app.
  • Teacher-Student Communications via Social Media One of the most distinctive features of the document is the emphasis on the professional use and the absence of any non-professional links.
  • Social Media Impacts in the “Cyberbully” Film The first problem associated with the use of social media that is exemplified in the film is the lack of privacy.
  • Social Media Replacing Traditional Journalism This research shows that due to the development in the digital world, the majority of people tend to refer to their devices when searching for news instead of buying a printed source and that social […]
  • Impact of Social Media on Society The continued use of social media will have a great impact to the society. Social media will have a great impact for the training of medical professionals and other operation efficiencies.
  • The Impact of Social Media on the Rise in Crime For example, Jones cites revenge porn, or the practice of publishing a partner’s intimate contact on social media, as one of the results of social media use.
  • Official English Grammar in Social Media Although social media is effective in communication, it has led to the alteration of the grammatical structures of official languages in many nations.
  • Mac Cosmetics Company’s Social Media Use for Customer Engagement The aim of this research is to determine the role of the social media in creating customer engagement to MAC Cosmetics, focusing on the United Arab Emirates’ population.
  • Social Media Addiction in Society The person takes the substance, or in case of social media, keeps checking and updating online status or website on and on.
  • Traditional Sources of Information vs. Social Media This discussion, therefore, examines the main sources of media information during the 1960s and 1970s when the targeted interviewee was a youngster. During my uncle’s youth, members of the society had to wait for information […]
  • The Effect of Social Media on Today’s Youth This theory is useful in the explanation of the impact of media during crisis, and will also be useful in the analysis of the impact of social media on the youth of the UAE.
  • Social Media Satirical Cartoon by M. Wuerker The trend displayed by the author is relevant to our current society with the ever-rising popularity of the internet and social media. I would also like to emphasize the use of pathos in the delivery […]
  • The Impact of Social Media on a Brand, Its Image, and Reputation Consequently, the research questions are: ‘Do some companies underestimate the importance of social media?’ and ‘Does social media have a vehement impact on the business’ revenue, success and brand image?’ The significant features […]
  • Social Media Marketing (SMM) as Networking Technique Modern technologies have been integral in supporting the growth and development of rapturous networking activities and Social Media has presently become one of the main issues in the business paradigm.
  • Social Media Monitoring Pros and Cons Social media monitoring provides organizations with an opportunity to address negative comments regarding the organization which might be posted on the social media.
  • How to Avoid Getting Into Legal Trouble in the UAE While Posting on Social Media? The avoidance of legal liability while using social media in the UAE involves abiding by the stipulations of the aforementioned laws and regulations.
  • The Role of Social Media The approach will also encourage more companies to embrace the use of social media. The above discussion explains why companies should use social media to improve their HR practices and business performances.
  • The Impact of Social Media within the Workplace To some extent, the kind of communication between the employees and the allies should be of little concern to the employers.
  • Social Media Use During the Covid-19 Pandemic Despite the intention to benefit from technological progress and the Internet, millions of people cannot control the amount and quality of information online.
  • Importance of Social Media Analytics Social media analytics is crucial to gathering an understanding of the market and improving a marketing campaign as it progresses, with the best tactical use that will generate sales.
  • Social Media and the Health Sector This work is going to conclusively address the role of the social media in healthcare, its effects on the implementation of the mandates of the sector.
  • The Role of Social Media in Aviation Crisis Management Therefore, this paper considers the general role that social media might play in a crisis or emergency in the airline industry and describes methods that could be used to deal with the potential adverse outcomes […]
  • Social Media and the Hospitality Industry As the world of online marketing continues to expand, and innovative ways of communicating with the target customers come to the fore, social media has withstood the onslaught of critics and emerged out strongly.
  • Social Media’s Impact on Children This issue is critical to be developed because children are the future of society, and the subsequent development of humanity primarily depends on them.
  • The Effects of Social Media on Society The social networks broke into the everyday life of the majority of common people in the middle of the 2000s, initially met with a neglectful and suspicious attitude as a tracking instrument of the government.
  • Social Media Impact on Business and Education Therefore, the essay will categorically expound on the numerous benefits that these social media sites have brought to the world. The large market that social media sites create also results to increase in the number […]
  • The Impact of Social Media on Sport The following article will examine the impact of social media on the sports industry and sports fans. Thus, social media has a great impact on the sports industry as it provides a direct link between […]
  • Social Media Use in the Nursing Profession It could also mean that the opinion that was posted on the social media represent the position of my employer and the profession at large, thus causing more harm not only to the individuals involved […]
  • Bernie Hogan: The Presentation of Self in the Age of Social Media This review will seek to summarize and investigate the contents of the article and critique its findings in the context of other academic research on the topic of social media and social interaction.
  • Banning of Social Media Such as Facebook from Schools Students who spend the most time using social media, such as Facebook and Twitter, find it hard to concentrate in class because of the addictive nature of social media.
  • Social Media in Enhancing Social Relationships and Happiness Social media and technology help to foster and maintain relationships where people live in different geographical regions. There is a major concern that social media and technology pose a threat to the traditional fabric […]
  • Social Media and Stalking My opinion on viewing other people’s information on a social site is that all information on the social site should be accessible to all people.
  • Is Social Media Causing People to Lose Compassion? Thus, because a person is constantly under the influence of various emotions, very often, such an experience can bring a very negative result to the emotional state of a person.
  • Social Media: Benefits and Harm Therefore, they should make conscious steps to limit their use of social networks and replace it with other activities, engaging more in their relationship, studies, or work.
  • Social Media Impact on Marketing Strategies The framework relies on the following scenario: businesses spend large sums of money to employ teams of the most skilled and experienced marketers and craft the most aggressive and effective marketing […]
  • Youth’s Aggression and Social Media The problem is in the fact that posts and messages in social media that have followed shootings include images, slogans, and texts provoking violence and aggressive behaviors in young people, and more attention should be […]
  • Social Media Communication and Friendship According to Maria Konnikova, social media have altered the authenticity of relationships: the world where virtual interactions are predominant is likely to change the next generation in terms of the ability to develop full social […]
  • Social Media in Future: Twitter, Instagram and Tango The future of social media will be characterized by the fact that “the right information will be served to the right people at the right time”.
  • Social Media Analysis for Qatar Airways The marketing department is in the forefront in utilizing the social media in attracting customers and retaining the ones it has already acquired.
  • Bullying on Social Media Platforms It is consistent and repeated, taking advantage of the Internet’s anonymity with the main goal of angering, scaring, or shaming a victim.
  • The Role of Social Media Platforms in Promoting Products The use of social networking apps for promoting products has several major advantages. The greatest benefit is saving money on advertising.
  • Social Media Marketing: The New Frontier for Corporates Social media marketing (SMM) is the art of using social networking sites to optimize a company’s visibility and website traffic.
  • Social Media and Mental Health The connection between the positivity of a message and its reception in social media is a crucial piece of information that needs to be incorporated into the current approach toward increasing the levels of public […]
  • Self-Verification and Self-Enhancement on Social Media Self-enhancement refers to the desire of an individual to undertake efforts in order to reduce the significance of their negative self-views and increase the positivity of their conceptions of themselves.
  • Social Media as a Component of Mass Communication The reasons for such a claim are justified and refer to different opportunities that social media and the Internet give their users. During that time, social media helped me to stay aware of the current […]
  • Social Media and Its Effects on Adolescents Orben, Tomova, and Blakemore have found that social deprivation might cause severe psychological complications to adolescents, particularly in the period of the pandemic.
  • Social Media Marketing and Its Impact on Business Most SMM marketing plans involve the following steps; engagement, content, and recognition/reward. These are the plans that firms entering SMM employ to be successful in SMM.
  • Bullying Through Social Media: Research Proposal The hypothesis of the study is as follows: the role of adolescents in a cyberbullying situation is interconnected with their psychological characteristics.
  • How Does Social Media Affect Leadership? As a result, the new types of leadership are designed, and the significance of the global network is felt due to the possibility of information exchange.
  • Social Media Marketing Plan: Subway Fast Food Attract The main objective of this digital marketing plan is to attract the younger customer market through Subway’s website and a Twitter fan page in order to increase customer traffic in its stores.
  • Personal Information in Social Media Platforms The storage of this information does not give one an option to reinvent him or herself to a new beginning and overcome the checkered pasts.
  • Social Media – It’s Real Value This is to expose the real value of social media as it is used today in our social lives as well as in the workplace, so as to be able to fully exploit it.
  • Social Media: A Force for Political and Human Rights Changes Worldwide In this essay, I will discuss the effectiveness of traditional media and social media, and how social media has a better participation in changing the world in terms of politics and human rights.
  • Management Problems in Social Media It is therefore the duty of the management of social media sites such as Facebook, Twitter, and YouTube to protect themselves and their users from infringement of privacy.
  • The Development of the Social Media Industry: The Case of Twitter Thus, in the case of Twitter, it is clear that they include the platform’s role in political affairs, the fast development of the company alongside its competitors, and the increase in the numbers of people […]
  • Gender Inequality in Social Media Research shows that teenagers from the age of thirteen use social media to discuss the physical appearances of girls and exchange images with sexual content.
  • Fired Over Facebook: Using Social Media to Complain As a result, the solution to the problem received a mixed response from the general public that was aware of the case.
  • Social Media Users’ Personality and Mental Health The use of social media has impacted people’s mental health by both contributing to their anxiety and creating a stressful and competitive platform on which people have to perform.
  • Airbnb: Social Media Strategy According to Hogan, the main aspects of the firm’s strategy are video marketing, the integration of user-generated content, and the use of social channels for customer communication. Among the three chosen platforms, the number of […]
  • Social Media and the Family In their research, House, McGinty, and Heim investigate the influence of social networking services on the level of satisfaction in long-distance relationships.
  • Dove Company’s Social Media Content One of the major focuses of marketing strategies employed for the promotion of Dove is social justice and the empowerment of women namely.
  • Local Newspaper and Its Social Media Advertising The exploration of the role of advertising on the Internet in the process of raising the profitability of the company contributes to a better understanding of the mechanism used by the World Wide Web for […]
  • What the Social Media Will Do in Future? Microsoft is one of the few companies that have lasted for decades and it still is a force to reckon with in the world of technology.
  • Social Media Benefits: Twitter, Instagram, and Google Plus Ever since the inception of the internet in the early 1980s, the world has seen great advancements in the information and communication sector.
  • Social Media Platforms Effects on Social identity The social media, whose reach and influence is global, is one of the most common avenues that are used to shape and enhance the concept of identity nowadays.
  • Effect of Social Media on Depression The number of friends that the participants of the mock study had in their social sites was also related to the degree of depression that they experienced.
  • Impact of Online Social Media in Conflict Situations A study commissioned by The George Washington University indicates that determining the actual effects of the new media in conflict situations is cumbersome due to methodological challenges and the newness of the subject. The use […]
  • Impacts of Social Media on Lives Some of these ages are regarded as revolutions because of the impact they had on the general life direction of the population.
  • Social Media and Teenagers’ Mental Health This book highlights the impact of social media on adolescent mental health and offers several solutions to this problem.
  • Social Media Practice and Offline Presence We then chatted for a while, and our messages were short and straight to the point. It was a sunny day, and I decided to take a selfie and post it on Instagram.
  • Free Speech Regulated on Social Media According to Alkiviadou, in the modern age of free access to online communication, the forms of interaction in the digital space often cause criminal actions.
  • Social Media Networks: Positive and Negative Sides Additionally, the emergence of social media platforms promotes the spread of false and unreliable information. In social networks and beyond, the problem of propaganda and misinformation is now critically important.
  • The Definition of Body Image and Social Media Therefore, social media is associated with body image due to its power to influence the psychological aspects of a person that translates to feelings of discontentment with physical appearance.
  • Social Media in Education Social media should become a part of the learning process since it is evident that it helps to enhance education by providing the means to share, receive feedback and use academic works in a way […]
  • Internet Issues: Teens, Social Media and Privacy I argue that it is our understanding of privacy that provides the solution and that the Internet is the biggest factor that influences it.
  • The Influence of Social Media The contribution of social networking to the creation of social identity has not been fully explored. Modern mobile technology has contributed to the increase in the usage of social networks.
  • Strategic Plan – Social Media in Women and Child Hospital Our Women and Child Hospital plans to open an account on Instagram in a bid to let people know about the activities of the hospital, its scholarship program, events, as well as social activities.
  • The Educational Promise of Social Media The use of social media can only be useful if the students or children are closely watched and guided on how to make use of it for positive development.
  • Employee Recruitment Through Social Media The only thing that the employers need to do to reach the potential employees is to post the jobs on the social media sites.
  • Effect of Social Media on Junior and High School Despite both the positive and negative effects of TikTok, it can be used to benefit junior and high school students.
  • Social Media Damages Teenagers’ Mental Health Thus, the selected social group that could help improve teenagers’ mental health is sports coaches and organizers of sports activities in schools.
  • Social Media: Risks and Opportunities National Institutes of Health is a medical website for organizing social interactions within the medical community. Meanwhile, the principal goal is to study public health to improve it.
  • Technology and Parenting: Gaming and Social Media The current project is a social media campaign report targeted at addressing the increased use of social media and gaming among the growing generation.
  • Outdoor Recreation and Social Media Observing the scenery and walking in forests have been proved to improve a person’s immune system and short-term memory, reduce stress, and produce a calming effect on their mind.
  • Social Media Impact on Organizational Performance DuBrin argues that the Internet and social media era constitutes one of the most important developments in the evolution of organizational behavior, along with the classical approach to management and the human relations movement.
  • Social Media’s Role in Language Learning For the language observation assignment, one person was interviewed about her attitudes to language learning with the help of SM platforms, the effectiveness of such practice, and the role that SM should play in learning […]
  • Social Media Audit: The Most Effective Social Media Channels To develop a selection of relevant assessment criteria for the choice of social media channels, it is necessary to consider the findings of recent academic studies. The focus of this section of the report is […]
  • Social Media Impact on the Students Academic Performance The growing popularity of social networking and online communication has raised an issue of the influence of these activities on the daily performance of the individuals.
  • Students’ Virtual Lives in Social Media With the rise of social media and the appearance of modern gadgets such as laptops, computers, and smartphones, many young men and women are far more attached to the World Wide Web than they would […]
  • The Social Media Advantages for Small Business The book lacks detailed information on the desires of the readers of the specified information and it is hard to recommend the tools that the authors use even though they give alternatives.
  • A Day in My Office Without Internet and New Social Media Technologies in My Workplace How do people cope in the workplace without the presence of the Internet considering the fact that the Internet is the main supportive technology on which the functioning of new media tools is based?
  • “How Large U.S. Companies Can Use Twitter and Other Social Media to Gain Business Value” by Culnan, McHugh, Zubillaga To begin with, the authors offer the best background information to analyse the relevance of social media sites. It is also necessary to identify the major strategies and benefits of these social sites.
  • Intimacy and Sexuality Behaviors in Social Media She posed naked and posted the photo on a social networking site in order to attract attention to her fight against the sexual harassment of Muslim women.
  • Social Media Metrics: Facebook, YouTube, and Twitter For an individual to share a video through YouTube, the individual will need to sign up for a YouTube account.
  • Social Media and Socio-Political Change Social media has had a significant impact on the political happenings that have been witnessed in recent months.
  • The “How to Fix Social Media” Article by Nicholas Carr However, the distinction between public and private speech is not always clear-cut, and finding a way to regulate social media in a balanced and nuanced way will require continued dialogue and careful consideration of the […]
  • Social Media: Past, Present, and Future The emergence of this phenomenon can be attributed to the unique technological and social factors at the beginning of the twenty-first century.
  • The Social Media Effects on Football Clubs Throughout the season, the English Premier League uses its social media channels to connect with fans, share updates and highlights, and promote the league and its teams.
  • False Light and Appropriation in Social Media Ads For example, a post with a photo of a happily married couple with the caption “Divorce and adultery is a problem of our time”. Another example is the use of a standard photo of a […]
  • Amazon Inc. in Current News and Social Media Over the last semester, the stocks of the company, Amazon Care, the Lay Off of workers, and its hiring process topics were covered in the recent media about Amazon.
  • Social Media Marketing and User Satisfaction The research primarily focuses on marketing and demonstrates the significance of social media users’ thoughts, preferences, and activities. The privacy of users’ information is a significant issue regarding the security and use of social media […]
  • Social Media Strategy for Catalina Spa and Beauty Shop The customers cover the smallest distance to and from the institutions and surroundings. It is the most befitting platform to market the products and showcase the services.
  • Travel Agencies’ Use of Social Media First of all, it is vital to create a profile with information about the travel agency and the services it provides.
  • Professional Use of Social Media Social networking, which enables businesspeople and experts worldwide to engage with one another professionally regardless of their field, is referred to as professional usage of social media. The Bank of America is one of the […]
  • Major Issues Caused by Social Media (TikTok) Users want more information about the rights and contracts they can agree to with a single click and to know who has access to their data and where it is kept.
  • Identification of Fake News on Social Media The Naive Bayes algorithm serves as the primary mechanism for detecting whether or not news is fake, based on data collected from different resources. It calculates the use of the exact […]
  • The Prevalence of Social Media Networks Among the most successful ways to find individuals that can be included in the research sample is through social media platforms or with the help of acquaintances.
  • Social Media Platforms and Sports The ideal theme is efficient communication with today’s athletes, due to the increasing prominence of reality programs and the prevalence of difficulties relating to achievement and failure on reality shows.
  • Social Media and Its Effects: Mending One Rift While Creating Another The effects of social media on people’s ability and willingness to engage in online interactions have been viewed mainly as negative despite the presence of apparent benefits, such as the removal of barriers to communication. […]
  • Libel Law: Relation to Social Media In case of libel, a plaintiff should identify the guilty and prove that their statements are false, harmful, and posted intentionally.
  • Social Media and Mobile Devices in Healthcare It is crucial to ensure that all employees understand the importance of ethical use of mobile health tools and social media.
  • Social Media: The Use in Nursing Although the medical professionals who are guilty of doing so may not have malicious intentions, it is still a violation of a patient’s privacy and confidentiality.
  • The Use of Social Media in Healthcare At the same time, other opportunities to use social media and healthcare websites are when planning to promote citizen engagement, answer common treatment queries, and expand the reach of recruitment efforts.
  • The #LOTRLive Firm’s Marketing and Social Media Project The Warner Bros corporation, the producer of most of the movie and video-game adaptations, holds a strong position in the market. However, the spread of piracy and the social and political landscapes provide space for improvement. #LOTRLive is […]
  • New Horizons Agency’s Transformation into a Social Media Platform The action plan application shows a policy that allows the employees to blog under the agency-owned and managed social media handle as the best solution for the firm.
  • Social Media Marketing to Show Green Values First of all, for the successful conduct of the research, it is necessary to determine the research question and the methodology to be used.
  • Impact of Social Media on Instructional Practices for Kindergarten Teachers General Context of the Problem Despite the increase in the use of social media in teaching, there is still a significant lack of research done on the impact of social media integration in teaching techniques […]
  • Data Visualization of Teen Social Media Abuse The area chart used has enabled the researcher to compare the four variables of the study, allowing the audience to understand how the data relate.
  • Social Media and Marketing: Discussion According to the experience of the companies using Facebook for marketing, there are various ways to use the platform to build a strong community.
  • Attracting the Target Audience through Social Media One of the ways I could incorporate social media requires me to define my target audience and, in accordance with that, search for the websites and networks that they use more frequently.
  • World News Flow Under Impact of Social Media It caught on fast since most people have access to the internet and video streaming is the norm among young people.
  • Employees’ Right of Free Speech on Social Media Afterwards, it was argued that the employer’s action fell under the public policy exception to employment at will, considering that it restricted the employee’s participation in the political process.
  • Traditional vs. Social Media Celebrity Endorsements In traditional media, there is a fine print or disclaimer that makes it clear to the viewers that the celebrity was paid for the advertisement.
  • Moral Distress and Social Media Use in Nursing Gared asks the nurse about her husband’s condition and Jane, seeing her tear-stained face and red eyes, says that he will recover.
  • Social Media Campaign: Obesity Prevention It is hard to disagree that one of the most severe modern issues in both children and adults is obesity. While it is likely that a vast number of social media interventions would appear to […]
  • ASOS: Social Media Marketing Discussion The primary buyers’ persona is a spectator, although the filter has enabled many posters creators to join the campaign, who, in turn, have drawn conversationalists into discussing the brand.
  • Social Wellness in Social Media Thus, as a proposition, the psychiatric and mental part of wellness can be highlighted in order to practice the promotion of health and wellbeing.
  • How Heavy Use of Social Media Is Linked to Mental Illness The purpose of the study is to find out how social media overuse is linked to mental illness and the most appropriate ways of managing the exposure.
  • Cyberbullying in Social Media and Online Gaming It is necessary to take screenshots of all actions and statements that constitute psychological violence. Adults in this case have a duty to teach children the rules of etiquette and […]
  • Do Social Media Algorithms Lead to Harmful Social and Political Polarization? I believe that before the rapid expansion of Facebook, the Internet used to represent a different kind of information transmission tool.
  • Apple Inc.’s Marketing Communication and Social Media Strategy Secondly, Apple compensates for the lack of a clear marketing communication strategy on social media with the active involvement of key figures within the company.
  • Do Social Media Algorithms Lead to Harmful Social Polarization? Thus, despite all the sponsors and funds that are allocated by political parties to traditional information distribution channels, social media have started to dominate the formation of public opinion.
  • Human Consciousness: Impact of Social Media Now it is an integral part of the life of the average person. The concepts of consciousness and ethics are inextricably linked with each other.
  • Analysis of Social Media Tools in Business The last item, the detailed analytics of the content and activity, allows for the development of the more efficient business strategy based on the subscribers’ preferences.
  • CDA Protects Social Media Companies Social media companies’ insufficient protection of users’ personal information and adherence to terms or services negatively influences the population’s safety by facilitating terrorist organizations’ activities.
  • Social Media Use and the Risk of Depression Thapa and Subedi explain that the reason for the development of depressive symptoms is the lack of face to face conversation and the development of perceived isolation. Is there a relationship between social media use […]
  • The Role of Gender in Interaction via Social Media: Extended Outline Premise#2: It is possible to examine the differences in men’s and women’s use of social networks by exploring shared content’s impact on them.
  • How Do Social Media Influencers Convey the Message of Body Positivity? The first platform that comes to mind and has a direct impact on self-image is Instagram which is now the main spot to convey the message of body positivity.
  • Social Media and Female Artist Representation Such a project has been facilitated by the emergence of new media, characterized by the emergence of both the internet and social media.
  • Social Media and Health Information Web-based media can assist doctors in achieving objectives by supporting data sharing and improving accessibility for patients, building trust.
  • Social Media Usage and Productivity in the Public Sector
  • Social Media Agency “Aware” Analysis
  • Social Media Campaign: Awareness of Healthy Diets
  • Applying Goffman’s Theories to Social Media Interaction
  • Presence Across Social Media Networks: Barilla Case Study
  • Collective Behavior Types and Changes in the Time of Social Media
  • Social Media and Women’s Mental Health
  • Developing Intercultural Competence via Social Media Engagement
  • “Positive Impacts of Social Media at Work” by Hanna et al.
  • Brand Advertising on Social Media
  • Is TikTok the Superior Social Media?
  • The Role of Social Media Sites in Moderating Speech
  • Conspiracy Theories and Prejudices in Social Media
  • Social Media Promotes the Pursuit of the Thin Ideal Amongst Teens
  • Social Media Coverage in Russia vs. Global Trends
  • Social Media Risks to Patient Information
  • Social Media Profile Matters for Moderate Voters
  • Social Media Designers Awareness Training Program
  • Indigenous People Shown in Social Media
  • The Ethical Implication of Social Media in Healthcare
  • Media Bias Monitor: Quantifying Biases of Social Media
  • Social Media Impact on Adolescents
  • Websites and Social Media Risks
  • Social Media and Change of Society
  • Lee Enterprises Inc.’s Social Media Policy Case
  • Social Media Campaign Encouraging Vaccination
  • Social Media Impact on Political Segregation
  • The Role of Social Media in Businesses Credibility
  • Social Media: Impact on the Retail Business
  • Understanding Trust Influencing Factors in Social Media Communication
  • Using Social Media for Business
  • Guidelines for Employees’ Use of Social Media
  • Public Health and Social Media in the United States
  • Social Media Usage in Education
  • Censorship by Big Tech (Social Media) Companies
  • Social Media Strategies for Building Communities
  • Evaluating Social Media Data Quality for Employee Hiring Process
  • Consumer Health and Social Media Network in Saudi Arabia
  • LEVE Jeans & Co: Role of Social Media in Marketing
  • Application of Social Media Marketing Platforms
  • Social Media in Workplace Communication
  • Social Media and Human Rights Memorandum
  • Social Media Helped Obama Win
  • Great Role Social Media Plays in the Modern Environment
  • Social Media Creating an Unrealistic Perception of Wealth
  • Social Media and Social Work Practice
  • Social Media Experiment: The Marketer Tweeter
  • The Influence of Social Media on Its Users’ Everyday Lives
  • Social Media and Credible Sources of Information
  • Bullying Through Social Media: Methods
  • Risks and Opportunities of Social Media for Adolescents
  • Cyberbullying Through Social Media
  • Bullying Through Social Media
  • Evidence-Based Practice Knowledge in Social Media
  • Social Media Presence Analysis
  • Communicative Competence and Social Media
  • Social Media Impact on Wealth Accumulation
  • Social Media Use in Universities: Benefits
  • The HopeLine: Website and Social Media Analysis
  • Marketing in Social Media: Concepts and Strategies
  • Public Opinion Formed by Celebrities in Social Media
  • The Privacy Paradox in Contemporary Social Media
  • How Social Media Could Threaten Democracy
  • Social Media in a Crisis
  • Effect of Social Media Interaction on Client Stickiness
  • Framing: Social Media and Public Relations
  • Obesity and Social Media Relations
  • Social Media Applications in the Workplace
  • Social Media Within Academic Library Environment
  • Communication Final Project: Youth Activism, Social Media, and Political Change Through Children’s Books
  • Social Media Outlets Usage for the Business Enhancement
  • “Teachers, Social Media, and Free Speech” by Vasek
  • Social Media Effect on Sports Teams’ Exposure
  • Social Media Efficiency in Decreasing Youth Alcohol Consumption
  • Psychology: Social Media and Bullying
  • Social Media, Smartphones and Confidentiality in the Healthcare System
  • Self-Branding on Social Media
  • The Social Media and Medicine
  • Conformity in Social Media: Facebook Consensus
  • Examining H&M’s Marcom Tools: Social Media and Direct Email Marketing
  • “Negotiating Privacy Concerns in a Social Media Environment” by Elliso
  • Self-Disclosure and Social Media
  • Women in the West Who Are Put Under Stress Due to Social Media
  • The Interview About Social Media Marketing (SMM)
  • Tommy Hilfiger Strategy: Social Media Post Hosting
  • CD Player Selling: The Benefits of Using Social Media
  • Social Media in Reaching Out to Customers in Business
  • Social Media and Fashion Trends
  • Social Media and Your Targeted Audience
  • Explaining Social Media and Its Influences
  • Social Media Ethics and Patient Privacy Breaches
  • Social Media Marketing and Brand Communication
  • Saudi Students’ Attitudes Toward Using Social Media to Support Learning
  • Social Media Activity and Nursing
  • Social Media Impact on Tourism Industry in China
  • Social Media Users’ Consumer Behaviour
  • Social Media Integration in the UAE’s Public Sector: Literature and Research
  • Colin Kaepernick Social Media Strategy
  • Social Media Screening: Personality, Reputation, Presentation
  • Social Media & Customer Decision-Making in Fujairah
  • The Effects of Social Media on Marriage in the UAE
  • Likecoholic: Social Media Addiction
  • Social Media Use During Natural Disasters
  • Social Media and Marie Kondo’s Career, Culture, and Business
  • ACC Wholesale: Social Media Marketing Plan
  • Social Media for Strategic Business Communication
  • Social Media Conversations About Race
  • Advertising, Branding, and Social Media
  • Fake News in the Age of Social Media
  • Role of Social Media in Activism and Revolution
  • 10 Steps of Getting Started With Social Media Marketing
  • Social Media Impact on Depression and Eating Disorder
  • The Effects of Social Media on Egypt Revolution
  • The Art of Persuasion: Social Media Relations
  • Colgate Company’s Social Media and Mobile Marketing
  • Social Media Communication: Alienating or Connecting?
  • Social Media Benefits for Non-Profit Organizations
  • Social Media and Shopping Behavior of Emirati College Students
  • The Social Media Use Patterns: the Gulf Region
  • Social Media Marketing of Luxury Fashion Brands
  • Privacy in the Age of Social Media
  • Social Media Appropriation for Activism
  • Eating Disorders in Traditional and Social Media
  • Home Depot Foundation’s Project in Social Media
  • Celebrity Advertisement in Social Media Marketing
  • Catherine Bond’s Social Media Campaign
  • Social Media and Social Work Ethics
  • Social Media Usage in Business Environment
  • The Impact of Social Media on Co‐Creation of Innovation
  • Financial Information in Social Media Networks
  • Film Theory and Social Media Intersection
  • Social Media and Job Performance
  • Fear of Missing Out (FoMO) and Social Media Usage
  • Neural Networks Used in Social Media Industry
  • Social Media Impact on Voter Turnout
  • Social Media Impact on Digital Diplomacy
  • Technology and Social Media Shaping Our Reading
  • Social Media Effects on Adolescents’ Body Image
  • Tweeter-Drama: Social Media as a Theatre Platform
  • Social Media Use in Advanced Practice Nurses’ Work
  • Social Media in Second Language Learning
  • Spotify Company’s Social Media Marketing
  • Maersk Line Company’s Social Media Success Factors
  • Ethical Concerns: Social Media and Rehabilitation Counselors
  • Social Media: Blogging for a Benefit
  • Social Media Impact on Students Relationships
  • Social Media for Business Communication: Mayo Clinic Case
  • Tapioca Express Company: Social Media Content
  • Social Media and Public Opinion
  • Ethical Issues Associated with Social Media
  • Important Event in the World of Online and Social Media
  • The Effect of Social Media on Individuals
  • Builder Electro Company’s Employee Social Media Postings
  • The Dumbledore Army: Social Media Power
  • Social Media’s Role in the Contemporary World
  • Social Media Campaign: WATERisLIFE
  • Social Media Crisis: Kitchen Aid Company Case
  • UC Riverside Men’s Basketball Team’s Social Media Marketing
  • Social Media Hazards for Youth
  • How to Use Social Media to Create an Identity?
  • The Calgary Fire Service Department Social Media Channels
  • Romania’s Social Media and Technologies
  • Evaluation of Social Media at the Deakin Website
  • Social Media: Facebook Problems, Decisions and Actions
  • Networked Dissent: Threats of Social Media’s Manipulation
  • Corporate Leaders and Social Media Tools
  • The Chinese Government Blocked Social Media
  • International Social Media Blocked In China
  • Social Media at Cape Breton University
  • Social Media Websites Effectiveness for EFL Students
  • Social Media Marketing Attitude Survey
  • Las Vegas Hotel Industry: Social Media and Marketing
  • Nike Company Social Media Marketing
  • Wet Seal Company’s Social Media Marketing Strategy
  • Measuring the Impact of Social Media: Challenges, Practices and Methods
  • Social Media Marketing: Facebook
  • Significance of Social Media in Business Operations
  • Digital Marketing and Social Media Strategy
  • Social Media Marketing and Consumer Transactions
  • Men and Women in Internet and Social Media: Real-Life Stereotypes in the Virtual Communication
  • How Corporate & Private Business Can Use Social Media?
  • Social Media Marketing Plan
  • New Social Media Platforms
  • Social Media Web Resource Management
  • Social Media and Marketing
  • Retailers Find Social Media Magnify Brand Presence
  • Do Social Media Affect People in Saudi Arabia?
  • Sidra Digital and Social Media Strategy
  • Social Media and NPO’s
  • Social Media and Older Australians
  • Social Media and Social Relations
  • Great Influence of Social Media
  • Mobile Social Media Marketing
  • Social Media Strategy – eTourism
  • The Impact of Social Media on Political Leaders
  • The Future of Social Media
  • Changes in Social Media
  • Social Media: Reforming People’s Interaction
  • Social Media in the Workplace
  • Social Media Networks’ Impacts on Political Communication
  • Social Media Data Analysis
  • Social Media as Part of PR & Marketing Strategy
  • Twitter and Social Media Competition
  • New Social Media Marketing Tools
  • Social Media as a Way to Capture the Present-Day Reality
  • The Effect of Social Media on Saudi University Students
  • Innovative & Emerging Technologies: Leveraging Social Media
  • Social Media Use in Internal Communication in Dubai Public Sector Companies
  • Use of Social Media in The Police Force in Queensland
  • Social Media Crises
  • How Will Social Media Change the Future of International Politics?
  • How Internet Communication, and Social Media Influences Politics and Social Awareness in the World
  • DOD Policy on Social Media Concerning Military Members and Government Public Administration
  • Developing a Social Media Strategy
  • How Models of Audience Research Inform Debate on the Use of Social Media
  • Ethnicity and Self-Representation in Social Media: When Cultures Merge
  • The Impact of Social Media on Consumer Behavior in Electron Sector
  • Internet Marketing: Use of Social Media by Artists to Market Their Music
  • How Social Media Network Can Change the Attitude of Australian Youth
  • “How the Fashion Industry is Embracing Social Media” by Hitha Prabhakar
  • Social Media and Arab Uprising
  • Brandy Melville Social Media Marketing
  • Is it Ethical for Employers to Search Social Media for Information about Applicants for a Job?
  • Is Social Media a Useful Tool for Brand Promotion?
  • Social Media Amongst the Student Population
  • The Biggest Collaborative Projects that Exist Because of Social Media
  • Social Media Issues Relating To Race and Religion
  • Memo for Presentation: Social Media
  • Woman Intimacy and Friendship with the Appearance of Social Media
  • Impact of Social Media on Public Relations Practice
  • The Huffington Post’s Marketing Strategy
  • Social media and its drawbacks
  • The Impact of Social Media on Food Culture (preferences) in America
  • The Effects of Technology on Humans: Social Media
  • Concept of Online Social Media Marketing
  • The Role of Social Media in Recruitment
  • Social Media and Public Relations
  • Effect of Social Media Sites on Our Lives
  • Social Media Marketing Merits and Demerits
  • Domino’s Pizza: Social Media Case
  • The Application of Social Media Promotion of Organizations’ Businesses
  • The Use of Social Media in Marketing
  • Real Estate and Social Media
  • The Influence of Social Media on Communication
  • Social Media Networks: Staying Neutral?
  • Does Social Media Influence Activism and Revolution on the World Stage?
  • How Oxfam Uses Social Media for Communication
  • Social Media as an Effective Marketing Tool
  • The Internet as Social Media: Connectivity and Immediacy
  • Effects of Lack of Social Media Marketing on Papa Pita Bakery
  • How Has Social Media Influenced Hip-hop Culture?
  • Why Social Media Has a Huge Influence on Society?
  • How Does Social Media Marketing Affect Brand Loyalty?
  • Can Social Media Help Save the Environment?
  • What Is the Relationship Between Social Media Usage and Brand Engagement?
  • How Has Social Media Affected Psychological Resilience?
  • What Effects Does Social Media Have On an Individual?
  • What Is the Positive Impact of Social Media on Society and People?
  • Why Should People Join Social Media?
  • Why Is Social Media Marketing Important for Any Business?
  • What Are Social Media Marketing Strategies for Educational Programs?
  • How Has Social Media Blurred the Distinction Between Public and Private?
  • Will Social Media Strengthen or Threaten Romantic Love?
  • Why Should Social Media Have Better Age Restrictions?
  • Why Has Social Media Been So Successful?
  • How Do Students and Teachers Use Social Media?
  • What Is the Connection Between Social Media and Business?
  • What Social Media Has To Do With Racism?
  • Will Social Media Kill Branding?
  • What Negative Effects Does Social Media Have On Teenagers?
  • How Have Social Media Made People’s Lives Easier?
  • What Are the Advantages and Disadvantages of Social Media?
  • What Social Media Info Helps or Hurts Your Job Prospects?
  • What Are the Steps Required To Develop a Social Media?
  • What does Social Media say About Climate Changing?
  • Why Is Social Media Negatively Affecting Our Society?
  • What Is the Role of Social Media in the Mental Health Crisis of Adolescents?
  • What Harm Can Social Media Bring?
  • What Are the Challenges Teenagers Face When Using Social Media?
  • Why Has Social Media Become Very Popular Over The Past Few Years?

IvyPanda. (2024, February 29). 482 Social Media Essay Topic Ideas & Examples. https://ivypanda.com/essays/topic/social-media-essay-examples/


Yes, Social Media Really Is Undermining Democracy

Despite what Meta has to say.


Within the past 15 years, social media has insinuated itself into American life more deeply than food-delivery apps into our diets and microplastics into our bloodstreams. Look at stories about conflict, and it’s often lurking in the background. Recent articles on the rising dysfunction within progressive organizations point to the role of Twitter, Slack, and other platforms in prompting “endless and sprawling internal microbattles,” as The Intercept’s Ryan Grim put it, referring to the ACLU. At a far higher level of conflict, the congressional hearings about the January 6 insurrection show us how Donald Trump’s tweets summoned the mob to Washington and aimed it at the vice president. Far-right groups then used a variety of platforms to coordinate and carry out the attack.

Social media has changed life in America in a thousand ways, and nearly two out of three Americans now believe that these changes are for the worse. But academic researchers have not yet reached a consensus that social media is harmful. That’s been a boon to social-media companies such as Meta, which argues, as did tobacco companies, that the science is not “settled.”

The lack of consensus leaves open the possibility that social media may not be very harmful. Perhaps we’ve fallen prey to yet another moral panic about a new technology and, as with television, we’ll worry about it less after a few decades of conflicting studies. A different possibility is that social media is quite harmful but is changing too quickly for social scientists to capture its effects. The research community is built on a quasi-moral norm of skepticism: We begin by assuming the null hypothesis (in this case, that social media is not harmful), and we require researchers to show strong, statistically significant evidence in order to publish their findings. This takes time—a couple of years, typically, to conduct and publish a study; five or more years before review papers and meta-analyses come out; sometimes decades before scholars reach agreement. Social-media platforms, meanwhile, can change dramatically in just a few years.

So even if social media really did begin to undermine democracy (and institutional trust and teen mental health) in the early 2010s, we should not expect social science to “settle” the matter until the 2030s. By then, the effects of social media will be radically different, and the harms done in earlier decades may be irreversible.

Let me back up. This spring, The Atlantic published my essay “Why the Past 10 Years of American Life Have Been Uniquely Stupid,” in which I argued that the best way to understand the chaos and fragmentation of American society is to see ourselves as citizens of Babel in the days after God rendered them unable to understand one another.

I showed how a few small changes to the architecture of social-media platforms, implemented from 2009 to 2012, increased the virality of posts on those platforms, which then changed the nature of social relationships. People could spread rumors and half-truths more quickly, and they could more readily sort themselves into homogenous tribes. Even more important, in my view, was that social-media platforms such as Twitter and Facebook could now be used more easily by anyone to attack anyone. It was as if the platforms had passed out a billion little dart guns, and although most users didn’t want to shoot anyone, three kinds of people began darting others with abandon: the far right, the far left, and trolls.

All of these groups were suddenly given the power to dominate conversations and intimidate dissenters into silence. A fourth group—Russian agents—also got a boost, though they didn’t need to attack people directly. Their long-running project, which ramped up online in 2013, was to fabricate, exaggerate, or simply promote stories that would increase Americans’ hatred of one another and distrust of their institutions.

The essay proved to be surprisingly uncontroversial—or, at least, hardly anyone attacked me on social media. But a few responses were published, including one from Meta (formerly Facebook), which pointed to studies it said contradicted my argument. There was also an essay in The New Yorker by Gideon Lewis-Kraus, who interviewed me and other scholars who study politics and social media. He argued that social media might well be harmful to democracies, but the research literature is too muddy and contradictory to support firm conclusions.

So was my diagnosis correct, or are concerns about social media overblown? It’s a crucial question for the future of our society. As I argued in my essay, critics make us smarter. I’m grateful, therefore, to Meta and the researchers interviewed by Lewis-Kraus for helping me sharpen and extend my argument in three ways.

Are Democracies Becoming More Polarized and Less Healthy?

My essay laid out a wide array of harms that social media has inflicted on society. Political polarization is just one of them, but it is central to the story of rising democratic dysfunction.

Meta questioned whether social media should be blamed for increased polarization. In response to my essay, Meta’s head of research, Pratiti Raychoudhury, pointed to a study by Levi Boxell, Matthew Gentzkow, and Jesse Shapiro that looked at trends in 12 countries and found, she said, “that in some countries polarization was on the rise before Facebook even existed, and in others it has been decreasing while internet and Facebook use increased.” In a recent interview with the podcaster Lex Fridman, Mark Zuckerberg cited this same study in support of a more audacious claim: “Most of the academic studies that I’ve seen actually show that social-media use is correlated with lower polarization.”

Does that study really let social media off the hook? It plotted political polarization based on survey responses in 12 countries, most with data stretching back to the 1970s, and then drew straight lines that best fit the data points over several decades. It’s true that, while some lines sloped upward (meaning that polarization increased across the period as a whole), others sloped downward. But my argument wasn’t about the past 50 years. It was about a phase change that happened in the early 2010s, after Facebook and Twitter changed their architecture to enable hyper-virality.

I emailed Gentzkow to ask whether he could put a “hinge” in the graphs in the early 2010s, to see if the trends in polarization changed direction or accelerated in the past decade. He replied that there was not enough data after 2010 to make such an analysis reliable. He also noted that Meta’s response essay had failed to cite a 2020 article in which he and three colleagues found that randomly assigning participants to deactivate Facebook for the four weeks before the 2018 U.S. midterm elections reduced polarization.

Meta’s response motivated me to look for additional publications to evaluate what had happened to democracies in the 2010s. I discovered four. One of them found no overall trend in polarization, but like the study by Boxell, Gentzkow, and Shapiro, it had few data points after 2015. The other three had data through 2020, and all three reported substantial increases in polarization and/or declines in the number or quality of democracies around the world.

One of them, a 2022 report from the Varieties of Democracy (V-Dem) Institute, found that “liberal democracies peaked in 2012 with 42 countries and are now down to the lowest levels in over 25 years.” It summarized the transformations of global democracy over the past 10 years in stark terms:

Just ten years ago the world looked very different from today. In 2011, there were more countries improving than declining on every aspect of democracy. By 2021 the world has been turned on its head: there are more countries declining than advancing on nearly all democratic aspects captured by V-Dem measures.

The report also notes that “toxic polarization”—signaled by declining “respect for counter-arguments and associated aspects of the deliberative component of democracy”—grew more severe in at least 32 countries.

A paper published one week after my Atlantic essay, by Yunus E. Orhan, found a global spike in democratic “backsliding” since 2008, and linked it to affective polarization, or animosity toward the other side. When affective polarization is high, partisans tolerate antidemocratic behavior by politicians on their own side—such as the January 6 attack on the U.S. Capitol.

And finally, the Economist Intelligence Unit reported a global decline in various democratic measures starting after 2015, according to its Democracy Index.

These three studies cannot prove that social media caused the global decline, but—contra Meta and Zuckerberg—they show a global trend toward polarization in the previous decade, the one in which the world embraced social media.

Has Social Media Created Harmful Echo Chambers?

So why did democracies weaken in the 2010s? How might social media have made them more fragmented and less stable? One popular argument contends that social media sorts users into echo chambers––closed communities of like-minded people. Lack of contact with people who hold different viewpoints allows a sort of tribal groupthink to take hold, reducing the quality of everyone’s thinking and the prospects for compromise that are essential in a democratic system.

According to Meta, however, “More and more research discredits the idea that social media algorithms create an echo chamber.” It points to two sources to back up that claim, but many studies show evidence that social media does in fact create echo chambers. Because conflicting studies are common in social-science research, I created a “collaborative review” document last year with Chris Bail, a sociologist at Duke University who studies social media. It’s a public Google doc in which we organize the abstracts of all the studies we can find about social media’s impact on democracy, and then we invite other experts to add studies, comments, and criticisms. We cover research on seven different questions, including whether social media promotes echo chambers. After spending time in the document, Lewis-Kraus wrote in The New Yorker: “The upshot seemed to me to be that exactly nothing was unambiguously clear.”

He is certainly right that nothing is unambiguous. But as I have learned from curating three such documents, researchers often reach opposing conclusions because they have “operationalized” the question differently. That is, they have chosen different ways to turn an abstract question (about the prevalence of echo chambers, say) into something concrete and measurable. For example, researchers who choose to measure echo chambers by looking at the diversity of people’s news consumption typically find little evidence that they exist at all. Even partisans end up being exposed to news stories and videos from the other side. Both of the sources that Raychoudhury cited in her defense of Meta mention this idea.

But researchers who measure echo chambers by looking at social relationships and networks usually find evidence of “homophily”—that is, people tend to engage with others who are similar to themselves. One study of politically engaged Twitter users, for example, found that they “are disproportionately exposed to like-minded information and that information reaches like-minded users more quickly.” So should we throw up our hands and say that the findings are irreconcilable? No, we should integrate them, as the sociologist Zeynep Tufekci did in a 2018 essay. Coming across contrary viewpoints on social media, she wrote, is “not like reading them in a newspaper while sitting alone.” Rather, she said, “it’s like hearing them from the opposing team while sitting with our fellow fans in a football stadium … We bond with our team by yelling at the fans of the other one.” Mere exposure to different sources of news doesn’t automatically break open echo chambers; in fact, it can reinforce them.

These closely bonded groupings can have profound political ramifications, as a couple of my critics in the New Yorker article acknowledged. A major feature of the post-Babel world is that the extremes are now far louder and more influential than before. They may also become more violent. Recent research by Morteza Dehghani and his colleagues at the University of Southern California shows that people are more willing to commit violence when they are immersed in a community they perceive to be morally homogeneous.

This finding seems to be borne out by a statement from the 18-year-old man who recently killed 10 Black Americans at a supermarket in Buffalo. In the Q&A portion of the manifesto attributed to him, he wrote:

Where did you get your current beliefs? Mostly from the internet. There was little to no influence on my personal beliefs by people I met in person.

The killer goes on to claim that he had read information “from all ideologies,” but I find it unlikely that he consumed a balanced informational diet, or, more important, that he hung out online with ideologically diverse users. The fact that he livestreamed his shooting tells us he assumed that his community shared his warped worldview. He could not have found such an extreme yet homogeneous group in his small town 200 miles from Buffalo. But thanks to social media, he found an international fellowship of extreme racists who jointly worshipped past mass murderers and from whom he copied sections of his manifesto.

Is Social Media the Primary Villain in This Story?

In her response to my essay, Raychoudhury did not deny that Meta bore any blame. Rather, her defense was two-pronged, arguing that the research is not yet definitive, and that, in any case, we should be focusing on mainstream media as the primary cause of harm.

Raychoudhury pointed to a study on the role of cable TV and mainstream media as major drivers of partisanship. She is correct to do so: The American culture war has roots going back to the turmoil of the 1960s, which activated evangelicals and other conservatives in the ’70s. Social media (which arrived around 2004 and became truly pernicious, I argue, only after 2009) is indeed a more recent player in this phenomenon.

In my essay, I included a paragraph on this backstory, noting the role of Fox News and the radicalizing Republican Party of the ’90s, but I should have said more. The story of polarization is complex, and political scientists cite a variety of contributing factors, including the growing politicization of the urban-rural divide; rising immigration; the increasing power of big and very partisan donors; the loss of a common enemy when the Soviet Union collapsed; and the loss of the “Greatest Generation,” which had an ethos of service forged in the crisis of the Second World War. And although polarization rose rapidly in the 2010s, the rise began in the ’90s, so I cannot pin the majority of the rise on social media.

But my essay wasn’t primarily about ordinary polarization. I was trying to explain a new dynamic that emerged in the 2010s: the fear of one another, even—and perhaps especially—within groups that share political or cultural affinities. This fear has created a whole new set of social and political problems.

The loss of a common enemy and those other trends with roots in the 20th century can help explain America’s ever nastier cross-party relationships, but they can’t explain why so many college students and professors suddenly began to express more fear, and engage in more self-censorship, around 2015. These mostly left-leaning people weren’t worried about the “other side”; they were afraid of a small number of students who were further to the left, and who enthusiastically hunted for verbal transgressions and used social media to publicly shame offenders.

A few years later, that same fearful dynamic spread to newsrooms, companies, nonprofit organizations, and many other parts of society. The culture war had been running for two or three decades by then, but it changed in the mid-2010s when ordinary people with little to no public profile suddenly became the targets of social-media mobs. Consider the famous 2013 case of Justine Sacco, who tweeted an insensitive joke about her trip to South Africa just before boarding her flight in London and became an international villain by the time she landed in Cape Town. She was fired the next day. Or consider the far right’s penchant for using social media to publicize the names and photographs of largely unknown local election officials, health officials, and school-board members who refuse to bow to political pressure, and who are then subjected to waves of vitriol, including threats of violence to themselves and their children, simply for doing their jobs. These phenomena, now common to the culture, could not have happened before the advent of hyper-viral social media in 2009.

This fear of getting shamed, reported, doxxed, fired, or physically attacked is responsible for the self-censorship and silencing of dissent that were the main focus of my essay. When dissent within any group or institution is stifled, the group will become less perceptive, nimble, and effective over time.

Social media may not be the primary cause of polarization, but it is an important cause, and one we can do something about. I believe it is also the primary cause of the epidemic of structural stupidity, as I called it, that has recently afflicted many of America’s key institutions.

What Can We Do to Make Things Better?

My essay presented a series of structural solutions that would allow us to repair some of the damage that social media has caused to our key democratic and epistemic institutions. I proposed three imperatives: (1) harden democratic institutions so that they can withstand chronic anger and mistrust, (2) reform social media so that it becomes less socially corrosive, and (3) better prepare the next generation for democratic citizenship in this new age.

I believe that we should begin implementing these reforms now, even if the science is not yet “settled.” Beyond a reasonable doubt is the appropriate standard of evidence for reviewers guarding admission to a scientific journal, or for jurors establishing guilt in a criminal trial. It is too high a bar for questions about public health or threats to the body politic. A more appropriate standard is the one used in civil trials: the preponderance of evidence. Is social media probably damaging American democracy via at least one of the seven pathways analyzed in our collaborative-review document, or probably not? I urge readers to examine the document themselves. I also urge the social-science community to find quicker ways to study potential threats such as social media, where platforms and their effects change rapidly. Our motto should be “Move fast and test things.” Collaborative-review documents are one way to speed up the process by which scholars find and respond to one another’s work.

Beyond these structural solutions, I considered adding a short section to the article on what each of us can do as individuals, but it sounded a bit too preachy, so I cut it. I now regret that decision. I should have noted that all of us, as individuals, can be part of the solution by choosing to act with courage, moderation, and compassion. It takes a great deal of resolve to speak publicly or stand your ground when a barrage of snide, disparaging, and otherwise hostile comments is coming at you and nobody rises to your defense (out of fear of getting attacked themselves).

Fortunately, social media does not usually reflect real life, something that more people are beginning to understand. A few years ago, I heard an insight from an older business executive. He noted that before social media, if he received a dozen angry letters or emails from customers, they spurred him to action because he assumed that there must be a thousand other disgruntled customers who didn’t bother to write. But now, if a thousand people like an angry tweet or Facebook post about his company, he assumes that there must be a dozen people who are really upset.

Seeing that social-media outrage is transient and performative should make it easier to withstand, whether you are the president of a university or a parent speaking at a school-board meeting. We can all do more to offer honest dissent and support the dissenters within institutions that have become structurally stupid. We can all get better at listening with an open mind and speaking in order to engage another human being rather than impress an audience. Teaching these skills to our children and our students is crucial, because they are the generation who will have to reinvent deliberative democracy and Tocqueville’s “art of association” for the digital age.

We must act with compassion too. The fear and cruelty of the post-Babel era are a result of its tendency to reward public displays of aggression. Social media has put us all in the middle of a Roman coliseum, and many in the audience want to see conflict and blood. But once we realize that we are the gladiators—tricked into combat so that we might generate “content,” “engagement,” and revenue—we can refuse to fight. We can be more understanding toward our fellow citizens, seeing that we are all being driven mad by companies that use largely the same set of psychological tricks. We can forswear public conflict and use social media to serve our own purposes, which for most people will mean more private communication and fewer public performances.

The post-Babel world will not be rebuilt by today’s technology companies. That work will be left to citizens who understand the forces that brought us to the verge of self-destruction, and who develop the new habits, virtues, technologies, and shared narratives that will allow us to reap the benefits of living and working together in peace.

Fake news, disinformation and misinformation in social media: a review

Esma Aïmeur

Department of Computer Science and Operations Research (DIRO), University of Montreal, Montreal, Canada

Sabrine Amri

Gilles Brassard

Associated data

All the data and material are available in the papers cited in the references.

Abstract

Online social networks (OSNs) are rapidly growing and have become a huge source of all kinds of global and local news for millions of users. However, OSNs are a double-edged sword. Despite the great advantages they offer, such as easy, unlimited communication and instant access to news and information, they also present many disadvantages and issues. One of their major challenges is the spread of fake news. Fake news identification is still a complex, unresolved issue. Furthermore, fake news detection on OSNs presents unique characteristics and challenges that make finding a solution anything but trivial. Moreover, artificial intelligence (AI) approaches are still incapable of overcoming this challenging problem. To make matters worse, AI techniques such as machine learning and deep learning are leveraged to deceive people by creating and disseminating fake content. Consequently, automatic fake news detection remains a huge challenge, primarily because the content is designed to closely resemble the truth, and it is often hard to determine its veracity by AI alone without additional information from third parties. This work aims to provide a comprehensive and systematic review of fake news research as well as a fundamental review of existing approaches used to detect and prevent fake news from spreading via OSNs. We present the research problem and the existing challenges, discuss the state of the art in existing approaches for fake news detection, and point out future research directions for tackling the challenges.

Introduction

Context and motivation.

Fake news, disinformation and misinformation have become such a scourge that Marcia McNutt, president of the National Academy of Sciences of the United States, is quoted as saying (in an implicit reference to the COVID-19 pandemic) “Misinformation is worse than an epidemic: It spreads at the speed of light throughout the globe and can prove deadly when it reinforces misplaced personal bias against all trustworthy evidence” in a joint statement of the National Academies 1 posted on July 15, 2021. Indeed, although online social networks (OSNs), also called social media, have improved the ease with which real-time information is broadcast, their popularity and massive use have expanded the spread of fake news by increasing the speed and scope at which it circulates. Fake news may refer to the manipulation of information, carried out through the production of false information or the distortion of true information. However, this does not mean that the problem was created by social media: long before it, there were rumors in the traditional media that Elvis was not dead, 2 that the Earth was flat, 3 that aliens had invaded us, 4 etc.

Social media has thus become a powerful source of fake news dissemination (Sharma et al. 2019; Shu et al. 2017). According to Pew Research Center’s analysis of news use across social media platforms, in 2020 about half of American adults got news on social media at least sometimes, 5 whereas in 2018 only one-fifth of them said they often got news via social media. 6

Hence, fake news can have a significant impact on society, as manipulated and false content is easier to generate and harder to detect (Kumar and Shah 2018) and as disinformation actors change their tactics (Kumar and Shah 2018; Micallef et al. 2020). In 2017, Snow predicted in the MIT Technology Review (Snow 2017) that most individuals in mature economies would consume more false than valid information by 2022.

Recent news on the COVID-19 pandemic, which flooded the web and created panic in many countries, has been reported as fake. 7 For example, holding your breath for ten seconds to one minute is not a self-test for COVID-19 8 (see Fig. 1). Similarly, online posts claiming to reveal various “cures” for COVID-19, such as eating boiled garlic or drinking chlorine dioxide (an industrial bleach), were verified 9 as fake and in some cases as dangerous; none of them will cure the infection.

Fig. 1: Fake news example about a self-test for COVID-19. Source: https://cdn.factcheck.org/UploadedFiles/Screenshot031120_false.jpg (last accessed 26-12-2022)

Social media has outperformed television as the major news source for young people in the UK and the USA. 10 Moreover, as it is easier to generate and disseminate news online than via traditional media or face to face, large volumes of fake news are produced online for many reasons (Shu et al. 2017). Furthermore, a previous study on the spread of online news on Twitter (Vosoughi et al. 2018) reported that false news spreads online six times faster than truthful content and that 70% of users could not distinguish real from fake news (Vosoughi et al. 2018), owing to the attraction of the novelty of the latter (Bovet and Makse 2019). It was determined that falsehood spreads significantly farther, faster, deeper and more broadly than the truth in all categories of information, and that the effects are more pronounced for false political news than for false news about terrorism, natural disasters, science, urban legends, or financial information (Vosoughi et al. 2018).

Over 1 million tweets were estimated to be related to fake news by the end of the 2016 US presidential election. 11 In 2017, in Germany, a government spokesman affirmed: “We are dealing with a phenomenon of a dimension that we have not seen before,” referring to an unprecedented spread of fake news on social networks. 12 Given the strength of this new phenomenon, fake news was chosen as the word of the year by the Macquarie Dictionary both in 2016 13 and in 2018 14 as well as by the Collins Dictionary in 2017. 15, 16 In 2020, the new term “infodemic” was coined, reflecting widespread concern among researchers (Gupta et al. 2022; Apuke and Omar 2021; Sharma et al. 2020; Hartley and Vu 2020; Micallef et al. 2020) about the proliferation of misinformation linked to the COVID-19 pandemic.

The Gartner Group’s top strategic predictions for 2018 and beyond included the need for IT leaders to quickly develop Artificial Intelligence (AI) algorithms to address counterfeit reality and fake news. 17 However, fake news identification is a complex issue. Snow (2017) questioned the ability of AI to win the war against fake news. Similarly, other researchers concurred that even the best AI for spotting fake news is still ineffective. 18 Besides, recent studies have shown that the power of AI algorithms for identifying fake news is lower than its ability to create it (Paschen 2019). Consequently, automatic fake news detection remains a huge challenge, primarily because the content is designed to closely resemble the truth in order to deceive users; as a result, it is often hard to determine its veracity by AI alone. Therefore, it is crucial to consider more effective approaches to solving the problem of fake news in social media.

Contribution

The fake news problem has been addressed by researchers from various perspectives related to different topics. These topics include, but are not restricted to, social science studies, which investigate why and who falls for fake news (Altay et al. 2022; Batailler et al. 2022; Sterret et al. 2018; Badawy et al. 2019; Pennycook and Rand 2020; Weiss et al. 2020; Guadagno and Guttieri 2021), whom to trust and how perceptions of misinformation and disinformation relate to media trust and media consumption patterns (Hameleers et al. 2022), how fake news differs from personal lies (Chiu and Oh 2021; Escolà-Gascón 2021), how the law can regulate digital disinformation and how governments can regulate the values of social media companies that themselves regulate disinformation spread on their platforms (Marsden et al. 2020; Schuyler 2019; Vasu et al. 2018; Burshtein 2017; Waldman 2017; Alemanno 2018; Verstraete et al. 2017), and the challenges fake news poses to democracy (Jungherr and Schroeder 2021); behavioral intervention studies, which examine what literacy ideas mean in the age of dis-, mis- and malinformation (Carmi et al. 2020), investigate whether media literacy helps the identification of fake news (Jones-Jang et al. 2021), and attempt to improve people’s news literacy (Apuke et al. 2022; Dame Adjin-Tettey 2022; Hameleers 2022; Nagel 2022; Jones-Jang et al. 2021; Mihailidis and Viotty 2017; García et al. 2020) by encouraging people to pause and assess the credibility of headlines (Fazio 2020), promoting civic online reasoning (McGrew 2020; McGrew et al. 2018) and critical thinking (Lutzke et al. 2019), together with evaluations of credibility indicators (Bhuiyan et al. 2020; Nygren et al. 2019; Shao et al. 2018a; Pennycook et al. 2020a, b; Clayton et al. 2020; Ozturk et al. 2015; Metzger et al. 2020; Sherman et al. 2020; Nekmat 2020; Brashier et al. 2021; Chung and Kim 2021; Lanius et al. 2021); as well as social media-driven studies, which investigate the effect of signals (e.g., sources) on the detection and recognition of fake news (Vraga and Bode 2017; Jakesch et al. 2019; Shen et al. 2019; Avram et al. 2020; Hameleers et al. 2020; Dias et al. 2020; Nyhan et al. 2020; Bode and Vraga 2015; Tsang 2020; Vishwakarma et al. 2019; Yavary et al. 2020) and investigate fake and reliable news sources using complex network analysis based on search engine optimization metrics (Mazzeo and Rapisarda 2022).

The impacts of fake news have reached various areas and disciplines beyond online social networks and society (García et al. 2020 ) such as economics (Clarke et al. 2020 ; Kogan et al. 2019 ; Goldstein and Yang 2019 ), psychology (Roozenbeek et al. 2020a ; Van der Linden and Roozenbeek 2020 ; Roozenbeek and van der Linden 2019 ), political science (Valenzuela et al. 2022 ; Bringula et al. 2022 ; Ricard and Medeiros 2020 ; Van der Linden et al. 2020 ; Allcott and Gentzkow 2017 ; Grinberg et al. 2019 ; Guess et al. 2019 ; Baptista and Gradim 2020 ), health science (Alonso-Galbán and Alemañy-Castilla 2022 ; Desai et al. 2022 ; Apuke and Omar 2021 ; Escolà-Gascón 2021 ; Wang et al. 2019c ; Hartley and Vu 2020 ; Micallef et al. 2020 ; Pennycook et al. 2020b ; Sharma et al. 2020 ; Roozenbeek et al. 2020b ), environmental science (e.g., climate change) (Treen et al. 2020 ; Lutzke et al. 2019 ; Lewandowsky 2020 ; Maertens et al. 2020 ), etc.

Interesting research has been carried out to review and study the fake news issue in online social networks. Some works focus not only on fake news but also distinguish between fake news and rumor (Bondielli and Marcelloni 2019; Meel and Vishwakarma 2020), while others tackle the whole problem, from characterization to processing techniques (Shu et al. 2017; Guo et al. 2020; Zhou and Zafarani 2020). However, they mostly study approaches from a machine learning perspective (Bondielli and Marcelloni 2019), a data mining perspective (Shu et al. 2017), a crowd intelligence perspective (Guo et al. 2020), or a knowledge-based perspective (Zhou and Zafarani 2020). Furthermore, most of these studies ignore at least one of the mentioned perspectives, and in many cases they do not cover other existing detection approaches using methods such as blockchain and fact-checking, or analyses of the metrics used for Search Engine Optimization (Mazzeo and Rapisarda 2022). In our work, by contrast, and to the best of our knowledge, we cover all the approaches used for fake news detection. Indeed, we investigate the proposed solutions from broader perspectives (i.e., the detection techniques that are used, as well as the different aspects and types of the information used).

Therefore, in this paper, we are motivated by the following facts. First, fake news detection on social media is still at an early stage of development, and many challenging issues remain that require deeper investigation. Hence, it is necessary to discuss potential research directions that can improve fake news detection and mitigation tasks. Second, the dynamic nature of fake news propagation through social networks further complicates matters (Sharma et al. 2019): false information can easily reach and impact a large number of users in a short time (Friggeri et al. 2014; Qian et al. 2018). Moreover, fact-checking organizations cannot keep up with the dynamics of propagation because they require human verification, which can hold back a timely and cost-effective response (Kim et al. 2018; Ruchansky et al. 2017; Shu et al. 2018a).

Our work focuses primarily on understanding the “fake news” problem, its related challenges and root causes, and reviewing automatic fake news detection and mitigation methods in online social networks as addressed by researchers. The main contributions that differentiate us from other works are summarized below:

  • We present the general context from which the fake news problem emerged (i.e., online deception)
  • We review existing definitions of fake news, identify the terms and features most commonly used to define fake news, and categorize related works accordingly.
  • We propose a fake news typology classification based on the various categorizations of fake news reported in the literature.
  • We point out the most challenging factors preventing researchers from proposing highly effective solutions for automatic fake news detection in social media.
  • We highlight and classify representative studies in the domain of automatic fake news detection and mitigation on online social networks including the key methods and techniques used to generate detection models.
  • We discuss the key shortcomings that may inhibit the effectiveness of the proposed fake news detection methods in online social networks.
  • We provide recommendations that can help address these shortcomings and improve the quality of research in this domain.

The rest of this article is organized as follows. We explain the methodology with which the studied references are collected and selected in Sect.  2 . We introduce the online deception problem in Sect.  3 . We highlight the modern-day problem of fake news in Sect.  4 , followed by challenges facing fake news detection and mitigation tasks in Sect.  5 . We provide a comprehensive literature review of the most relevant scholarly works on fake news detection in Sect.  6 . We provide a critical discussion and recommendations that may fill some of the gaps we have identified, as well as a classification of the reviewed automatic fake news detection approaches, in Sect.  7 . Finally, we provide a conclusion and propose some future directions in Sect.  8 .

Review methodology

This section introduces the systematic review methodology on which we relied to perform our study. We start with the formulation of the research questions, which allowed us to select the relevant research literature. Then, we provide the different sources of information together with the search and inclusion/exclusion criteria we used to select the final set of papers.

Research questions formulation

The research scope, research questions, and inclusion/exclusion criteria were established following an initial evaluation of the literature. The following research questions were then formulated and addressed.

  • RQ1: What is fake news in social media, how is it defined in the literature, what are its related concepts, and what are its different types?
  • RQ2: What are the existing challenges and issues related to fake news?
  • RQ3: What are the available techniques used to perform fake news detection in social media?

Sources of information

We searched broadly for journal and conference research articles, books, and magazines as sources of data from which to extract relevant articles. We used the main scientific databases and digital libraries in our search, such as Google Scholar, 19 IEEE Xplore, 20 Springer Link, 21 ScienceDirect, 22 Scopus, 23 and the ACM Digital Library. 24 We also screened most of the related high-profile conferences, such as WWW, SIGKDD, VLDB and ICDE, to identify recent work.

Search criteria

We focused our research on a period of ten years, while ensuring that about two-thirds of the research papers we considered were published in or after 2019. Additionally, since we concentrated on reviewing the current state of the art in addition to the challenges and future directions, we defined a set of keywords for searching the above-mentioned scientific databases. The set of keywords includes the following terms: fake news, disinformation, misinformation, information disorder, social media, detection techniques, detection methods, survey, literature review.

Study selection, exclusion and inclusion criteria

To retrieve relevant research articles, based on our sources of information and search criteria, a systematic keyword-based search was carried out by posing different search queries, as shown in Table  1 .

Table 1: List of keywords for searching relevant articles

Fake news + social media
Fake news + disinformation
Fake news + misinformation
Fake news + information disorder
Fake news + survey
Fake news + detection methods
Fake news + literature review
Fake news + detection techniques
Fake news + detection + social media
Disinformation + misinformation + social media
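The keyword combinations in Table 1 can be generated programmatically when querying the databases; below is a minimal sketch in Python (the helper name `build_queries` is our own illustration, not code from the paper):

```python
# Hypothetical helper: build the "A + B" keyword-based search queries of
# Table 1 by pairing a primary term with each related concept.
def build_queries(primary, related):
    """Return 'A + B' style query strings for database search."""
    return [f"{primary} + {term}" for term in related]

related_terms = [
    "social media", "disinformation", "misinformation",
    "information disorder", "survey", "detection methods",
    "literature review", "detection techniques",
]
queries = build_queries("Fake news", related_terms)

# Multi-term combinations are added explicitly:
queries += [
    "Fake news + detection + social media",
    "Disinformation + misinformation + social media",
]
```

Each resulting string (e.g., "Fake news + social media") corresponds to one search query posed to the scientific databases listed above.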

This search produced an initial list of articles. To this initial list of studies, we applied the set of inclusion/exclusion criteria presented in Table 2 to select the appropriate research papers. The inclusion and exclusion principles determine whether a study should be included or not.

Table 2: Inclusion and exclusion criteria

Inclusion criteria:

  • Peer-reviewed and written in the English language
  • Clearly describes the fake news, misinformation and disinformation problems in social networks
  • Written by academic or industrial researchers
  • Has a high number of citations
  • Recent articles only (last ten years)
  • In the case of equivalent studies, the one published in the highest-rated journal or conference is selected, to sustain a high-quality set of articles on which the review is conducted
  • Proposes methodologies, methods, or approaches for fake news detection in online social networks

Exclusion criteria:

  • Articles in a language other than English
  • Does not focus on the fake news, misinformation, or disinformation problem in social networks
  • Short papers, posters or similar
  • Articles not meeting the inclusion criteria above

After reading the abstracts, we excluded the articles that did not meet our criteria and kept the most important research to help us understand the field. We reviewed the selected articles completely and found only 61 research papers that discuss the definition of the term fake news and its related concepts (see Table 4). We used the remaining papers to understand the field, reveal the challenges, review the detection techniques, and discuss future directions.

Table 4: Classification of fake news definitions based on the used term and features (citation years were lost in extraction; the empty cells denote that the classification does not apply)

Intent and authenticity:
  • Fake news: Shu et al., Sharma et al., Mustafaraj and Metaxas, Klein and Wueller, Potthast et al., Allcott and Gentzkow, Zhou and Zafarani, Zhang and Ghorbani, Conroy et al., Celliers and Hattingh, Nakov, Shu et al., Tandoc Jr et al., Abu Arqoub et al., Molina et al., de Cock Buning, Meel and Vishwakarma
  • Misinformation: Wu et al., Shu et al., Islam et al., Hameleers et al.
  • Disinformation: Kapantai et al., Shu et al., Shu et al., Kumar et al., Jungherr and Schroeder, Starbird et al., de Cock Buning, Bastick, Bringula et al., Tsang, Hameleers et al., Wu et al.
  • Malinformation: Shu et al., Di Domenico et al., Dame Adjin-Tettey
  • Information disorder: Wardle and Derakhshan, Wardle, Derakhshan and Wardle, Shu et al.

Intent or authenticity:
  • Fake news: Jin et al., Rubin et al., Balmas, Brewer et al., Egelhofer and Lecheler, Lazer et al., Allen et al., Guadagno and Guttieri, Van der Linden et al., ERGA
  • Misinformation: Pennycook and Rand, Shao et al., Shao et al., Micallef et al., Ha et al., Singh et al., Wu et al.
  • Disinformation: Marsden et al., Ireton and Posetti, ERGA, Baptista and Gradim
  • False information: Habib et al.
  • Malinformation: Carmi et al.

Intent and knowledge:
  • Fake news: Weiss et al.
  • Disinformation: Bhattacharjee et al., Khan et al.
  • False information: Kumar and Shah, Guo et al.

A brief introduction of online deception

The Cambridge Online Dictionary defines deception as “the act of hiding the truth, especially to get an advantage.” Deception relies on people’s trust, doubt and strong emotions, which may prevent them from thinking and acting clearly (Aïmeur et al. 2018). In previous work (Aïmeur et al. 2018), we also defined it as the process that undermines the ability to make decisions consciously and to take convenient actions, following personal values and boundaries. In other words, deception gets people to do things they would not otherwise do. In the context of online deception, several factors need to be considered: the deceiver, the purpose or aim of the deception, the social media service, the deception technique and the potential target (Aïmeur et al. 2018; Hage et al. 2021).

Researchers are working on developing new ways to protect users and prevent online deception (Aïmeur et al. 2018). Due to the sophistication of attacks, this is a complex task: malicious attackers are using ever more complex tools and strategies to deceive users. Furthermore, the way information is organized and exchanged in social media may expose OSN users to many risks (Aïmeur et al. 2013).

In fact, this is one of the recent research areas that require the collaborative efforts of multidisciplinary practices such as psychology, sociology, journalism and computer science, as well as cyber-security and digital marketing (which are not yet well explored in the field of dis-/mis-/malinformation but are relevant for future research). Moreover, Ismailov et al. (2020) analyzed the main causes that could be responsible for the efficiency gap between laboratory results and real-world implementations.

Reviewing the state of the art of online deception is beyond the scope of this paper. However, we think it is crucial to note that fake news, misinformation and disinformation are indeed part of the larger landscape of online deception (Hage et al. 2021).

Fake news, the modern-day problem

Fake news has existed for a very long time, since well before its wide circulation was facilitated by the invention of the printing press. 25 For instance, Socrates was condemned to death more than twenty-five hundred years ago under the fake news that he was guilty of impiety against the pantheon of Athens and of corrupting the youth. 26 A Google Trends analysis of the term “fake news” reveals an explosion in popularity around the time of the 2016 US presidential election. 27 Fake news detection is a problem that has recently been addressed by numerous organizations, including the European Union 28 and NATO. 29

In this section, we first overview the fake news definitions as they were provided in the literature. We identify the terms and features used in the definitions, and we classify the latter based on them. Then, we provide a fake news typology based on distinct categorizations that we propose, and we define and compare the most cited forms of one specific fake news category (i.e., the intent-based fake news category).

Definitions of fake news

“Fake news” is defined in the Collins English Dictionary as false and often sensational information disseminated under the guise of news reporting, 30 yet the term has evolved over time and has become synonymous with the spread of false information (Cooke 2017 ).

The first definition of the term fake news was provided by Allcott and Gentzkow (2017): news articles that are intentionally and verifiably false and could mislead readers. Other definitions have since been provided in the literature, and they all agree that fake news is not authentic (i.e., it is non-factual). However, they disagree on whether related concepts such as satire, rumors, conspiracy theories, misinformation and hoaxes should be included in or excluded from the definition. More recently, Nakov (2020) reported that the term fake news has started to mean different things to different people; for some politicians, it even means “news that I do not like.”

Hence, there is still no agreed definition of the term “fake news.” Moreover, we can find many terms and concepts in the literature that refer to fake news (Van der Linden et al. 2020 ; Molina et al. 2021 ) (Abu Arqoub et al. 2022 ; Allen et al. 2020 ; Allcott and Gentzkow 2017 ; Shu et al. 2017 ; Sharma et al. 2019 ; Zhou and Zafarani 2020 ; Zhang and Ghorbani 2020 ; Conroy et al. 2015 ; Celliers and Hattingh 2020 ; Nakov 2020 ; Shu et al. 2020c ; Jin et al. 2016 ; Rubin et al. 2016 ; Balmas 2014 ; Brewer et al. 2013 ; Egelhofer and Lecheler 2019 ; Mustafaraj and Metaxas 2017 ; Klein and Wueller 2017 ; Potthast et al. 2017 ; Lazer et al. 2018 ; Weiss et al. 2020 ; Tandoc Jr et al. 2021 ; Guadagno and Guttieri 2021 ), disinformation (Kapantai et al. 2021 ; Shu et al. 2020a , c ; Kumar et al. 2016 ; Bhattacharjee et al. 2020 ; Marsden et al. 2020 ; Jungherr and Schroeder 2021 ; Starbird et al. 2019 ; Ireton and Posetti 2018 ), misinformation (Wu et al. 2019 ; Shu et al. 2020c ; Shao et al. 2016 , 2018b ; Pennycook and Rand 2019 ; Micallef et al. 2020 ), malinformation (Dame Adjin-Tettey 2022 ) (Carmi et al. 2020 ; Shu et al. 2020c ), false information (Kumar and Shah 2018 ; Guo et al. 2020 ; Habib et al. 2019 ), information disorder (Shu et al. 2020c ; Wardle and Derakhshan 2017 ; Wardle 2018 ; Derakhshan and Wardle 2017 ), information warfare (Guadagno and Guttieri 2021 ) and information pollution (Meel and Vishwakarma 2020 ).

There is also a remarkable amount of disagreement over the classification of the term fake news in the research literature, as well as in policy (de Cock Buning 2018 ; ERGA 2018 , 2021 ). Some consider fake news as a type of misinformation (Allen et al. 2020 ; Singh et al. 2021 ; Ha et al. 2021 ; Pennycook and Rand 2019 ; Shao et al. 2018b ; Di Domenico et al. 2021 ; Sharma et al. 2019 ; Celliers and Hattingh 2020 ; Klein and Wueller 2017 ; Potthast et al. 2017 ; Islam et al. 2020 ), others consider it as a type of disinformation (de Cock Buning 2018 ) (Bringula et al. 2022 ; Baptista and Gradim 2022 ; Tsang 2020 ; Tandoc Jr et al. 2021 ; Bastick 2021 ; Khan et al. 2019 ; Shu et al. 2017 ; Nakov 2020 ; Shu et al. 2020c ; Egelhofer and Lecheler 2019 ), while others associate the term with both disinformation and misinformation (Wu et al. 2022 ; Dame Adjin-Tettey 2022 ; Hameleers et al. 2022 ; Carmi et al. 2020 ; Allcott and Gentzkow 2017 ; Zhang and Ghorbani 2020 ; Potthast et al. 2017 ; Weiss et al. 2020 ; Tandoc Jr et al. 2021 ; Guadagno and Guttieri 2021 ). On the other hand, some prefer to differentiate fake news from both terms (ERGA 2018 ; Molina et al. 2021 ; ERGA 2021 ) (Zhou and Zafarani 2020 ; Jin et al. 2016 ; Rubin et al. 2016 ; Balmas 2014 ; Brewer et al. 2013 ).

The existing terms can be separated into two groups. The first group represents the general terms, which are information disorder , false information and fake news , each of which includes a subset of terms from the second group. The second group represents the elementary terms, which are misinformation , disinformation and malinformation . The literature agrees on the definitions of the latter group, but there is still no agreed-upon definition of the first group. In Fig.  2 , we model the relationship between the most used terms in the literature.

Fig. 2: Modeling of the relationship between terms related to fake news

The terms most used in the literature to refer to, categorize and classify fake news can be summarized and defined as shown in Table 3, in which we capture the similarities and differences between the terms based on two common key features: the intent and the authenticity of the news content. The intent feature refers to the intention behind the term (i.e., whether or not the purpose is to mislead or cause harm), whereas the authenticity feature refers to its factual aspect (i.e., whether the content is verifiably false or not; in the latter case we label it genuine). Some of these terms are explicitly used to refer to fake news (i.e., disinformation, misinformation and false information), while others are not (i.e., malinformation). In the comparison table, an empty dash (–) cell denotes that the classification does not apply.

Table 3: A comparison between used terms based on intent and authenticity

Term              | Definition                                                                          | Intent         | Authenticity
False information | Verifiably false information                                                        | –              | False
Misinformation    | False information that is shared without the intention to mislead or to cause harm | Not to mislead | False
Disinformation    | False information that is shared to intentionally mislead                           | To mislead     | False
Malinformation    | Genuine information that is shared with an intent to cause harm                     | To cause harm  | Genuine
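The intent/authenticity taxonomy of Table 3 can be expressed as a small decision rule; the following Python sketch is our own illustration (the class and field names are assumptions for exposition, not taken from the reviewed works):

```python
from dataclasses import dataclass

# Encode the two key features of Table 3: authenticity and intent.
@dataclass
class InfoItem:
    authentic: bool          # True if the content is genuine, False if verifiably false
    intent_to_mislead: bool  # shared to intentionally mislead?
    intent_to_harm: bool     # shared with an intent to cause harm?

def classify(item: InfoItem) -> str:
    """Map an item to the elementary terms of the taxonomy."""
    if item.authentic:
        # Genuine content shared with an intent to cause harm is malinformation.
        return "malinformation" if item.intent_to_harm else "genuine information"
    # Any non-genuine content is false information; intent splits it further.
    return "disinformation" if item.intent_to_mislead else "misinformation"
```

For example, verifiably false content shared to intentionally mislead (`InfoItem(False, True, False)`) is classified as disinformation, while the same false content shared innocently is classified as misinformation.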

In Fig.  3 , we identify the different features used in the literature to define fake news (i.e., intent, authenticity and knowledge). Hence, some definitions are based on two key features, which are authenticity and intent (i.e., news articles that are intentionally and verifiably false and could mislead readers). However, other definitions are based on either authenticity or intent. Other researchers categorize false information on the web and social media based on its intent and knowledge (i.e., when there is a single ground truth). In Table  4 , we classify the existing fake news definitions based on the used term and the used features . In the classification, the references in the cells refer to the research study in which a fake news definition was provided, while the empty dash (–) cells denote that the classification does not apply.

Fig. 3: The features used for fake news definition

Fake news typology

Various categorizations of fake news have been provided in the literature. We can distinguish two major categories of fake news based on the studied perspective (i.e., intention or content), as shown in Fig. 4. Note that our proposed fake news typology does not concern detection methods, and its categories are not mutually exclusive: a given instance of fake news can be described from both perspectives (i.e., intention and content) at the same time. For instance, satire (intent-based fake news) can contain text and/or multimedia content types of data, e.g., headline, body, image, video (content-based fake news), and so on.

Fig. 4: Fake news categorization based on the studied perspective (intention or content)

Most researchers classify fake news based on the intent (Collins et al. 2020 ; Bondielli and Marcelloni 2019 ; Zannettou et al. 2019 ; Kumar et al. 2016 ; Wardle 2017 ; Shu et al. 2017 ; Kumar and Shah 2018 ) (see Sect.  4.2.2 ). However, other researchers (Parikh and Atrey 2018 ; Fraga-Lamas and Fernández-Caramés 2020 ; Hasan and Salah 2019 ; Masciari et al. 2020 ; Bakdash et al. 2018 ; Elhadad et al. 2019 ; Yang et al. 2019b ) focus on the content to categorize types of fake news through distinguishing the different formats and content types of data in the news (e.g., text and/or multimedia).

Recently, another classification was proposed by Zhang and Ghorbani ( 2020 ). It is based on the combination of content and intent to categorize fake news. They distinguish physical news content and non-physical news content from fake news. Physical content consists of the carriers and format of the news, and non-physical content consists of the opinions, emotions, attitudes and sentiments that the news creators want to express.

Content-based fake news category

According to researchers of this category (Parikh and Atrey 2018; Fraga-Lamas and Fernández-Caramés 2020; Hasan and Salah 2019; Masciari et al. 2020; Bakdash et al. 2018; Elhadad et al. 2019; Yang et al. 2019b), forms of fake news may include false text, such as hyperlinks or embedded content, and multimedia, such as false videos (Demuyakor and Opata 2022), images (Masciari et al. 2020; Shen et al. 2019) and audio (Demuyakor and Opata 2022). There is also multimodal content (Shu et al. 2020a), i.e., fake news articles and posts composed of multiple types of data combined, for example a fabricated image along with a related text (Shu et al. 2020a). Notable examples in this category are deepfake videos (Yang et al. 2019b) and GAN-generated fake images (Zhang et al. 2019b), machine-generated fake content that is hard for unsophisticated social network users to identify.

The effects of these forms of fake news content on credibility assessment and sharing intentions vary, which in turn influences the spread of fake news on OSNs. For instance, people with little knowledge about an issue are easier to convince that misleading or fake news is real than those strongly concerned about it, especially when the news is shared via video rather than text or audio (Demuyakor and Opata 2022).

Intent-based Fake News Category

The most often mentioned and discussed forms of fake news in this category include, but are not restricted to, clickbait, hoaxes, rumors, satire, propaganda, framing and conspiracy theories. In the following subsections, we explain these types of fake news as they are defined in the literature and briefly compare them, as depicted in Table 5. The comparison is based on what we consider the most common criteria mentioned by researchers.

A comparison between the different types of intent-based fake news

Type | Intent to deceive | Propagation | Negative impact | Goal
Clickbait | High | Slow | Low | Popularity, profit
Hoax | High | Fast | Low | Other
Rumor | High | Fast | High | Other
Satire | Low | Slow | Low | Popularity, other
Propaganda | High | Fast | High | Popularity
Framing | High | Fast | Low | Other
Conspiracy theory | High | Fast | High | Other

Clickbait refers to misleading headlines and thumbnails of content on the web (Zannettou et al. 2019) that tend to be fake stories with catchy headlines aimed at enticing the reader to click on a link (Collins et al. 2020). This type is considered the least severe form of false information, because if a user reads or views the whole content, it is possible to tell whether the headline and/or thumbnail was misleading (Zannettou et al. 2019). The goal behind using clickbait is typically to increase traffic to a website (Zannettou et al. 2019).
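To make the notion of a misleading, catchy headline concrete, the sketch below scores a headline against a few lexical cue patterns commonly associated with clickbait. The cue list, weights and scoring scheme are illustrative assumptions for this example, not features taken from any of the surveyed detection systems.

```python
import re

# Hypothetical lexical cues often associated with clickbait headlines;
# the pattern list is illustrative, not an established lexicon.
CLICKBAIT_CUES = [
    r"\byou won'?t believe\b",
    r"\bwhat happens next\b",
    r"\bthis one trick\b",
    r"\bshocking\b",
    r"\btop \d+\b",
    r"\bnumber \d+ will\b",
]

def clickbait_score(headline: str) -> float:
    """Return a naive score in [0, 1]: the fraction of cue patterns matched."""
    text = headline.lower()
    hits = sum(1 for pat in CLICKBAIT_CUES if re.search(pat, text))
    return hits / len(CLICKBAIT_CUES)

print(clickbait_score("Top 10 secrets: you won't believe what happens next"))  # 0.5
print(clickbait_score("Senate passes budget bill"))  # 0.0
```

A real detector would learn such cues from labeled headlines rather than hard-coding them, but the principle (misleading surface signals in the headline, independent of the article body) is the same.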

A hoax is a false (Zubiaga et al. 2018) or inaccurate (Zannettou et al. 2019), intentionally fabricated (Collins et al. 2020) news story used to masquerade the truth (Zubiaga et al. 2018) and presented as factual (Zannettou et al. 2019) to deceive the public or audiences (Collins et al. 2020). This category is also known as half-truth or factoid stories (Zannettou et al. 2019). Popular examples of hoaxes are stories that report the false death of celebrities (Zannettou et al. 2019) and public figures (Collins et al. 2020). Recently, hoaxes about COVID-19 have been circulating on social media.

The term rumor refers to ambiguous or never-confirmed claims (Zannettou et al. 2019) that are disseminated with a lack of evidence to support them (Sharma et al. 2019). This kind of information is widely propagated on OSNs (Zannettou et al. 2019). However, rumors are not necessarily false: they originate from unverified sources and may later turn out to be true, be proven false, or remain unresolved (Zubiaga et al. 2018).

Satire refers to stories that contain a great deal of irony and humor (Zannettou et al. 2019). It presents stories as news that might be factually incorrect, but the intent is not to deceive; rather, it is to call out, ridicule or expose behavior that is shameful, corrupt or otherwise “bad” (Golbeck et al. 2018). This is done with a fabricated story or by exaggerating the truth reported in mainstream media in the form of comedy (Collins et al. 2020). The intent behind satire is thus arguably legitimate, yet many authors (such as Wardle (2017)) still include satire as a type of fake news: although there is no intention to cause harm, it has the potential to mislead or fool people.

Golbeck et al. (2018) also note that there is a spectrum from fake to satirical news, which they found to be exploited by many fake news sites. These sites used disclaimers at the bottom of their webpages to suggest they were “satirical”, even when there was nothing satirical about their articles, to protect themselves from accusations of being fake. What distinguishes the satirical form of fake news is that the authors or hosts present themselves as comedians or entertainers rather than as journalists informing the public (Collins et al. 2020). Nevertheless, many audiences believe the information conveyed in this satirical form, because the comedian usually takes news from mainstream media and frames it to suit their program (Collins et al. 2020).

Propaganda refers to news stories created by political entities to mislead people. It is a special instance of fabricated stories that aim to harm the interests of a particular party and, typically, has a political context (Zannettou et al. 2019 ). Propaganda was widely used during both World Wars (Collins et al. 2020 ) and during the Cold War (Zannettou et al. 2019 ). It is a consequential type of false information as it can change the course of human history (e.g., by changing the outcome of an election) (Zannettou et al. 2019 ). States are the main actors of propaganda. Recently, propaganda has been used by politicians and media organizations to support a certain position or view (Collins et al. 2020 ). Online astroturfing can be an example of the tools used for the dissemination of propaganda. It is a covert manipulation of public opinion (Peng et al. 2017 ) that aims to make it seem that many people share the same opinion about something. Astroturfing can affect different domains of interest, based on which online astroturfing can be mainly divided into political astroturfing, corporate astroturfing and astroturfing in e-commerce or online services (Mahbub et al. 2019 ). Propaganda types of fake news can be debunked with manual fact-based detection models such as the use of expert-based fact-checkers (Collins et al. 2020 ).

Framing refers to employing some aspect of reality to make content more visible while the truth is concealed (Collins et al. 2020), in order to deceive and misguide readers. People understand certain concepts based on the way they are framed. An example of framing was provided by Collins et al. (2020): suppose a leader X says “I will neutralize my opponent”, simply meaning he will beat his opponent in a given election. Such a statement could be framed as “leader X threatens to kill Y”, which is a total misrepresentation of the original meaning.

Conspiracy Theories

Conspiracy theories refer to the belief that an event is the result of secret plots generated by powerful conspirators. Conspiracy belief refers to people’s adoption and belief of conspiracy theories, and it is associated with psychological, political and social factors (Douglas et al. 2019 ). Conspiracy theories are widespread in contemporary democracies (Sutton and Douglas 2020 ), and they have major consequences. For instance, lately and during the COVID-19 pandemic, conspiracy theories have been discussed from a public health perspective (Meese et al. 2020 ; Allington et al. 2020 ; Freeman et al. 2020 ).

Comparison Between Most Popular Intent-based Types of Fake News

Following a review of the most popular intent-based types of fake news, we compare them as shown in Table  5 based on the most common criteria mentioned by researchers in their definitions as listed below.

  • the intent behind the news, i.e., whether a given news type was mainly created to intentionally deceive people or not (e.g., for humor, irony or entertainment);
  • the way the news propagates through OSNs, i.e., whether each type of fake news propagates fast or slowly;
  • the severity of the negative impact of the news on OSN users, i.e., the extent to which the public is harmed by the given type of fake news;
  • and the goal behind disseminating the news, which can be to gain popularity for a particular entity (e.g., a political party), to make a profit (e.g., a lucrative business), or other reasons: humor and irony in the case of satire; spreading panic or anger and manipulating the public in the case of hoaxes; made-up stories about a particular person or entity in the case of rumors; and misguiding readers in the case of framing.

Note that the comparison provided in Table 5 is deduced from the studied research papers; it reflects our own reading of the literature and is not based on empirical data.

We suspect that the most dangerous types of fake news are the ones with high intention to deceive the public, fast propagation through social media, high negative impact on OSN users, and complicated hidden goals and agendas. However, while the other types of fake news are less dangerous, they should not be ignored.
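The criteria of Table 5 can be transcribed into a small lookup structure, making the dangerous combination just described (high intent to deceive, fast propagation, high negative impact) directly queryable. The dictionary below simply encodes the table; the representation itself is our own illustrative choice.

```python
# Table 5 encoded as a lookup structure (values transcribed from the table).
FAKE_NEWS_TYPES = {
    "clickbait":         {"intent": "high", "propagation": "slow", "impact": "low"},
    "hoax":              {"intent": "high", "propagation": "fast", "impact": "low"},
    "rumor":             {"intent": "high", "propagation": "fast", "impact": "high"},
    "satire":            {"intent": "low",  "propagation": "slow", "impact": "low"},
    "propaganda":        {"intent": "high", "propagation": "fast", "impact": "high"},
    "framing":           {"intent": "high", "propagation": "fast", "impact": "low"},
    "conspiracy theory": {"intent": "high", "propagation": "fast", "impact": "high"},
}

def most_dangerous(types=FAKE_NEWS_TYPES):
    """Types combining high deception intent, fast propagation and high impact."""
    return sorted(
        name for name, t in types.items()
        if t["intent"] == "high" and t["propagation"] == "fast" and t["impact"] == "high"
    )

print(most_dangerous())  # ['conspiracy theory', 'propaganda', 'rumor']
```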

Moreover, it is important to highlight that these types of fake news can overlap, so a given piece of false information may fall within multiple categories (Zannettou et al. 2019). Two examples from Zannettou et al. (2019) illustrate such overlaps: (1) a rumor may also use clickbait techniques to increase the audience that will read the story; and (2) propaganda stories can be a special instance of framing stories.

Challenges related to fake news detection and mitigation

To alleviate fake news and its threats, it is crucial to first identify and understand the factors involved that continue to challenge researchers. Thus, the main question is to explore and investigate the factors that make it easier to fall for manipulated information. Despite the tremendous progress made in alleviating some of the challenges in fake news detection (Sharma et al. 2019 ; Zhou and Zafarani 2020 ; Zhang and Ghorbani 2020 ; Shu et al. 2020a ), much more work needs to be accomplished to address the problem effectively.

In this section, we discuss several open issues that make fake news detection in social media a challenging problem. These issues can be summarized as follows: content-based issues (i.e., deceptive content that resembles the truth very closely); contextual issues (i.e., lack of user awareness, social bots as spreaders of fake content, and the dynamic nature of OSNs, which leads to fast propagation); and the issue of existing datasets (i.e., there is still no one-size-fits-all benchmark dataset for fake news detection). These aspects have been shown (Shu et al. 2017) to have a great impact on the accuracy of fake news detection approaches.

Content-based issue, deceptive content

Automatic fake news detection remains a huge challenge, primarily because the content is designed to closely resemble the truth. Moreover, most deceivers choose their words carefully and use language strategically to avoid being caught. Therefore, it is often hard for AI to determine the veracity of content without relying on additional information from third parties such as fact-checkers.

Abdullah-All-Tanvir et al. (2020) report that fake news tends to have more complicated stories, hardly ever makes any references, and is more likely to contain a greater number of words that express negative emotions. This makes it extremely difficult for a human to manually assess the credibility of such content, so detecting fake news on social media is quite challenging. Moreover, fake news appears in multiple types and forms, which makes it hard to define a single global solution able to capture and deal with all disseminated content. Consequently, detecting false information is not a straightforward task due to its various types and forms (Zannettou et al. 2019).
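As one concrete illustration of the linguistic cue just mentioned (a higher share of negative-emotion words in fake stories), the sketch below computes such a rate over a tiny stand-in lexicon. Real systems would use a full affective resource (e.g., a sentiment lexicon) rather than this hand-picked word set.

```python
# Illustrative stand-in lexicon; real pipelines use full emotion lexicons.
NEGATIVE_WORDS = {"fear", "panic", "disaster", "outrage", "threat", "corrupt", "scandal"}

def negative_emotion_rate(text: str) -> float:
    """Fraction of tokens in `text` that belong to the negative-emotion lexicon."""
    tokens = [t.strip(".,!?;:").lower() for t in text.split()]
    if not tokens:
        return 0.0
    return sum(t in NEGATIVE_WORDS for t in tokens) / len(tokens)

print(negative_emotion_rate("Panic and outrage as scandal erupts"))  # 0.5
```

Such a rate would be one feature among many in a content-based classifier, not a decision rule on its own.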

Contextual issues

Contextual issues are challenges that, we suspect, are not related to the content of the news itself but rather are inferred from the context of the online news post: humans are the weakest factor due to a lack of user awareness, social bots act as spreaders, and online social platforms are dynamic in nature, leading to the fast propagation of fake news.

Humans are the weakest factor due to the lack of awareness

Recent statistics 31 show that the percentage of unintentional fake news spreaders (people who share fake news without the intention to mislead) on social media is five times higher than that of intentional spreaders. Moreover, another recent statistic 32 shows that the percentage of people who were confident about their ability to discern fact from fiction is ten times higher than that of those who were not confident about the truthfulness of what they were sharing. From this, we can deduce a lack of human awareness about the rise of fake news.

Public susceptibility and lack of user awareness (Sharma et al. 2019 ) have always been the most challenging problem when dealing with fake news and misinformation. This is a complex issue because many people believe almost everything on the Internet and the ones who are new to digital technology or have less expertise may be easily fooled (Edgerly et al. 2020 ).

Moreover, it has been widely shown (Metzger et al. 2020; Edgerly et al. 2020) that people are often motivated to support and accept information that matches their preexisting viewpoints and beliefs, and to reject information that does not. Accordingly, Shu et al. (2017) describe an interesting correlation between fake news spread and psychological and cognitive theories. They further suggest that humans are more likely to believe information that confirms their existing views and ideological beliefs. Consequently, they deduce that humans are naturally not very good at differentiating real information from fake information.

Recent research by Giachanou et al. ( 2020 ) studies the role of personality and linguistic patterns in discriminating between fake news spreaders and fact-checkers. They classify a user as a potential fact-checker or a potential fake news spreader based on features that represent users’ personality traits and linguistic patterns used in their tweets. They show that leveraging personality traits and linguistic patterns can improve the performance in differentiating between checkers and spreaders.

Furthermore, several researchers studied the prevalence of fake news on social networks during (Allcott and Gentzkow 2017 ; Grinberg et al. 2019 ; Guess et al. 2019 ; Baptista and Gradim 2020 ) and after (Garrett and Bond 2021 ) the 2016 US presidential election and found that individuals most likely to engage with fake news sources were generally conservative-leaning, older, and highly engaged with political news.

Metzger et al. ( 2020 ) examine how individuals evaluate the credibility of biased news sources and stories. They investigate the role of both cognitive dissonance and credibility perceptions in selective exposure to attitude-consistent news information. They found that online news consumers tend to perceive attitude-consistent news stories as more accurate and more credible than attitude-inconsistent stories.

Similarly, Edgerly et al. ( 2020 ) explore the impact of news headlines on the audience’s intent to verify whether given news is true or false. They concluded that participants exhibit higher intent to verify the news only when they believe the headline to be true, which is predicted by perceived congruence with preexisting ideological tendencies.

Luo et al. ( 2022 ) evaluate the effects of endorsement cues in social media on message credibility and detection accuracy. Results showed that headlines associated with a high number of likes increased credibility, thereby enhancing detection accuracy for real news but undermining accuracy for fake news. Consequently, they highlight the urgency of empowering individuals to assess both news veracity and endorsement cues appropriately on social media.

Moreover, misinformed people are a greater problem than uninformed people (Kuklinski et al. 2000), because the former hold inaccurate opinions (which may concern politics, climate change or medicine) that are harder to correct. Indeed, people find it difficult to update their misinformation-based beliefs even after these have been proved false (Flynn et al. 2017). And even if a person has accepted the corrected information, their prior belief may still affect their opinion (Nyhan and Reifler 2015).

Falling for disinformation may also be explained by a lack of critical thinking and of the need for evidence that supports information (Vilmer et al. 2018; Badawy et al. 2019). However, it is also possible that people choose misinformation because they engage in directionally motivated reasoning (Badawy et al. 2019; Flynn et al. 2017). Online users are generally vulnerable and tend to perceive social media as reliable, as reported by Abdullah-All-Tanvir et al. (2019), who propose to automate fake news detection.

It is worth noting that, in addition to bots being behind a large share of misrepresentations, specific individuals also contribute substantially to this issue (Abdullah-All-Tanvir et al. 2019). Furthermore, Vosoughi et al. (2018) found that, contrary to conventional wisdom, robots accelerated the spread of real and fake news at the same rate, implying that fake news spreads more than the truth because humans, not robots, are more likely to spread it.

In this case, verified users and those with numerous followers were not necessarily responsible for spreading the corrupted posts (Abdullah-All-Tanvir et al. 2019).

Viral fake news can cause great harm to society. Therefore, to mitigate its negative impact, it is important to analyze the factors that lead people to fall for misinformation and to further understand why people spread fake news (Cheng et al. 2020). Measuring the accuracy, credibility, veracity and validity of news content can also be a key countermeasure.

Social bots spreaders

Several authors (Shu et al. 2018b, 2017; Shi et al. 2019; Bessi and Ferrara 2016; Shao et al. 2018a) have also shown that fake news is likely to be created and spread by non-human accounts with similar attributes and structure in the network, such as social bots (Ferrara et al. 2016). Bots (short for software robots) have existed since the early days of computers. A social bot is a computer algorithm that automatically produces content and interacts with humans on social media, trying to emulate and possibly alter their behavior (Ferrara et al. 2016). Although they are designed to provide useful services, they can be harmful, for example when they contribute to the spread of unverified information or rumors (Ferrara et al. 2016). However, it is important to note that bots are simply tools created and maintained by humans for specific, often hidden, agendas.

Social bots tend to connect with legitimate users instead of other bots. They try to act like a human with fewer words and fewer followers on social media. This contributes to the forwarding of fake news (Jiang et al. 2019 ). Moreover, there is a difference between bot-generated and human-written clickbait (Le et al. 2019 ).

Many researchers have addressed ways of identifying and analyzing possible sources of fake news spread in social media. Recent research by Shu et al. (2020a) describes social bots' use of two strategies to spread low-credibility content. First, they amplify interactions with content as soon as it is created to make it look legitimate and to facilitate its spread across social networks. Second, they try to increase public exposure to the created content, and thus boost its perceived credibility, by targeting influential users who are more likely to believe disinformation, in the hope of getting them to “repost” the fabricated content. They further discuss the social bot detection taxonomy proposed by Ferrara et al. (2016), which divides bot detection methods into three classes: (1) graph-based, (2) crowdsourcing and (3) feature-based social bot detection methods.

Similarly, Shao et al. ( 2018a ) examine social bots and how they promote the spread of misinformation through millions of Twitter posts during and following the 2016 US presidential campaign. They found that social bots played a disproportionate role in spreading articles from low-credibility sources by amplifying such content in the early spreading moments and targeting users with many followers through replies and mentions to expose them to this content and induce them to share it.

Ismailov et al. ( 2020 ) assert that the techniques used to detect bots depend on the social platform and the objective. They note that a malicious bot designed to make friends with as many accounts as possible will require a different detection approach than a bot designed to repeatedly post links to malicious websites. Therefore, they identify two models for detecting malicious accounts, each using a different set of features. Social context models achieve detection by examining features related to an account’s social presence including features such as relationships to other accounts, similarities to other users’ behaviors, and a variety of graph-based features. User behavior models primarily focus on features related to an individual user’s behavior, such as frequency of activities (e.g., number of tweets or posts per time interval), patterns of activity and clickstream sequences.
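A minimal sketch of the user behavior model just described (frequency of activities and repetitiveness of content), under the assumption that an account's history is available as timestamps and post texts. The feature names, thresholds and the example account are hypothetical, chosen only to illustrate the kind of features such models consume.

```python
from datetime import datetime

def behavior_features(timestamps, posts):
    """Sketch of per-account behavioral features used in bot detection."""
    # Distinct hours in which the account was active.
    hours = {ts.replace(minute=0, second=0, microsecond=0) for ts in timestamps}
    posts_per_active_hour = len(posts) / max(len(hours), 1)
    # Bots often repost identical content; measure the exact-duplicate ratio.
    dup_ratio = 1 - len(set(posts)) / max(len(posts), 1)
    return {"posts_per_active_hour": posts_per_active_hour,
            "duplicate_ratio": dup_ratio}

# Hypothetical account: five posts within a single hour, four of them identical.
ts = [datetime(2023, 1, 1, 12, m) for m in range(0, 50, 10)]
feats = behavior_features(ts, ["buy now http://x"] * 4 + ["hello"])
print(feats)
```

A full user behavior model would add many more signals (clickstream sequences, activity patterns over days), but they are consumed the same way: as a feature vector fed to a classifier.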

Therefore, it is crucial to consider bot detection techniques to distinguish bots from normal users to better leverage user profile features to detect fake news.

However, there is also another “bot-like” strategy that aims to massively promote disinformation and fake content on social platforms: bot farms, also called troll farms. These are not social bots but organized groups of individuals engaging in trolling or bot-like promotion of narratives in a coordinated fashion (Wardle 2018), hired to massively spread fake news or other harmful content. A prominent troll farm example is the Russia-based Internet Research Agency (IRA), which disseminated inflammatory content online to influence the outcome of the 2016 US presidential election. 33 As a result, Twitter suspended accounts connected to the IRA and deleted 200,000 tweets from Russian trolls (Jamieson 2020). Another example in this category is review bombing (Moro and Birt 2022), in which coordinated groups of people massively perform the same negative action online (e.g., dislikes, negative reviews or comments) on an online video, game, post or product in order to reduce its aggregate review score. Review bombers can be both humans and bots coordinated to cause harm and mislead people by falsifying facts.

Dynamic nature of online social platforms and fast propagation of fake news

Sharma et al. ( 2019 ) affirm that the fast proliferation of fake news through social networks makes it hard and challenging to assess the information’s credibility on social media. Similarly, Qian et al. ( 2018 ) assert that fake news and fabricated content propagate exponentially at the early stage of its creation and can cause a significant loss in a short amount of time (Friggeri et al. 2014 ) including manipulating the outcome of political events (Liu and Wu 2018 ; Bessi and Ferrara 2016 ).

Moreover, while analyzing the way sources and promoters of fake news operate over the web through multiple online platforms, Zannettou et al. (2019) discovered that false information is more likely to spread across platforms (18% appearing on multiple platforms) than real information (11%).

Furthermore, Shu et al. (2020c) recently attempted to understand the propagation of disinformation and fake news in social media and found that such content is produced and disseminated faster and more easily through social media because of the low barriers that prevent doing so. Similarly, Shu et al. (2020b) studied hierarchical propagation networks for fake news detection, performing a comparative analysis between fake and real news from structural, temporal and linguistic perspectives, and demonstrated the potential and effectiveness of these features for fake news detection.

Lastly, Abdullah-All-Tanvir et al. ( 2020 ) note that it is almost impossible to manually detect the sources and authenticity of fake news effectively and efficiently, due to its fast circulation in such a small amount of time. Therefore, it is crucial to note that the dynamic nature of the various online social platforms, which results in the continued rapid and exponential propagation of such fake content, remains a major challenge that requires further investigation while defining innovative solutions for fake news detection.
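The structural perspective on propagation networks mentioned above can be illustrated with two simple cascade features, depth and maximum breadth, computed here over a toy parent-to-children mapping. The representation and the feature choice are a simplified sketch of what propagation-based analyses compute, not the exact method of the cited works.

```python
def cascade_stats(children, root):
    """Depth and maximum breadth of a resharing cascade (BFS by level)."""
    depth, max_breadth = 0, 1
    level = [root]
    while level:
        max_breadth = max(max_breadth, len(level))
        nxt = [c for node in level for c in children.get(node, [])]
        if nxt:
            depth += 1
        level = nxt
    return {"depth": depth, "max_breadth": max_breadth}

# Toy cascade: source post "s" reshared by u1-u3, and u1's copy reshared by u4.
cascade = {"s": ["u1", "u2", "u3"], "u1": ["u4"]}
print(cascade_stats(cascade, "s"))  # {'depth': 2, 'max_breadth': 3}
```

Comparative studies report that fake and real news cascades differ systematically on such structural features, which is what makes them usable as detection signals.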

Datasets issue

Existing approaches lack an inclusive dataset with derived multidimensional information covering fake news characteristics, which limits the accuracy achievable by machine learning classification models (Nyow and Chua 2019). These datasets are primarily dedicated to validating machine learning models and are the ultimate frame of reference for training a model and analyzing its performance. Therefore, if researchers evaluate their model on an unrepresentative dataset, the validity and efficiency of the model become questionable when the fake news detection approach is applied in a real-world scenario.

Moreover, several researchers (Shu et al. 2020d ; Wang et al. 2020 ; Pathak and Srihari 2019 ; Przybyla 2020 ) believe that fake news is diverse and dynamic in terms of content, topics, publishing methods and media platforms, and sophisticated linguistic styles geared to emulate true news. Consequently, training machine learning models on such sophisticated content requires large-scale annotated fake news data that are difficult to obtain (Shu et al. 2020d ).

Therefore, dataset construction is also an important topic to work on in order to enhance data quality and obtain better results when defining solutions. Adversarial learning techniques (e.g., GANs, SeqGAN) can be used to provide machine-generated data with which to train deeper models and build systems robust enough to distinguish fake examples from real ones. This approach can help counter the lack of datasets and the scarcity of data available for training models.

Fake news detection literature review

Fake news detection in social networks is still in the early stage of development and there are still challenging issues that need further investigation. This has become an emerging research area that is attracting huge attention.

There are various research studies on fake news detection in online social networks, a few of which have focused on the automatic detection of fake news using artificial intelligence techniques. In this section, we review the existing approaches used in automatic fake news detection, as well as the techniques that have been adopted. A critical discussion built on a primary classification scheme based on a specific set of criteria is then provided.

Categories of fake news detection

In this section, we give an overview of most of the existing automatic fake news detection solutions adopted in the literature. A recent classification by Sharma et al. ( 2019 ) uses three categories of fake news identification methods. Each category is further divided based on the type of existing methods (i.e., content-based, feedback-based and intervention-based methods). However, a review of the literature for fake news detection in online social networks shows that the existing studies can be classified into broader categories based on two major aspects that most authors inspect and make use of to define an adequate solution. These aspects can be considered as major sources of extracted information used for fake news detection and can be summarized as follows: the content-based (i.e., related to the content of the news post) and the contextual aspect (i.e., related to the context of the news post).

Consequently, the studies we reviewed can be classified into three different categories based on the two aspects mentioned above (the third category is hybrid). As depicted in Fig.  5 , fake news detection solutions can be categorized as news content-based approaches, the social context-based approaches that can be divided into network and user-based approaches, and hybrid approaches. The latter combines both content-based and contextual approaches to define the solution.

Fig. 5  Classification of fake news detection approaches

News Content-based Category

News content-based approaches are fake news detection approaches that use content information (i.e., information extracted from the content of the news post) and that focus on studying and exploiting the news content in their proposed solutions. Content refers to the body of the news, including source, headline, text and image-video, which can reflect subtle differences.

Researchers of this category rely on content-based detection cues (i.e., text and multimedia-based cues), which are features extracted from the content of the news post. Text-based cues are features extracted from the text of the news, whereas multimedia-based cues are features extracted from the images and videos attached to the news. Figure  6 summarizes the most widely used news content representation (i.e., text and multimedia/images) and detection techniques (i.e., machine learning (ML), deep Learning (DL), natural language processing (NLP), fact-checking, crowdsourcing (CDS) and blockchain (BKC)) in news content-based category of fake news detection approaches. Most of the reviewed research works based on news content for fake news detection rely on the text-based cues (Kapusta et al. 2019 ; Kaur et al. 2020 ; Vereshchaka et al. 2020 ; Ozbay and Alatas 2020 ; Wang 2017 ; Nyow and Chua 2019 ; Hosseinimotlagh and Papalexakis 2018 ; Abdullah-All-Tanvir et al. 2019 , 2020 ; Mahabub 2020 ; Bahad et al. 2019 ; Hiriyannaiah et al. 2020 ) extracted from the text of the news content including the body of the news and its headline. However, a few researchers such as Vishwakarma et al. ( 2019 ) and Amri et al. ( 2022 ) try to recognize text from the associated image.

An external file that holds a picture, illustration, etc.
Object name is 13278_2023_1028_Fig6_HTML.jpg

News content-based category: news content representation and detection techniques

Most researchers of this category rely on artificial intelligence (AI) techniques (such as ML, DL and NLP models) to improve performance in terms of prediction accuracy. Others use different techniques such as fact-checking, crowdsourcing and blockchain. Specifically, the AI- and ML-based approaches in this category are trying to extract features from the news content, which they use later for content analysis and training tasks. In this particular case, the extracted features are the different types of information considered to be relevant for the analysis. Feature extraction is considered as one of the best techniques to reduce data size in automatic fake news detection. This technique aims to choose a subset of features from the original set to improve classification performance (Yazdi et al. 2020 ).

Table  6 lists the distinct features and metadata, as well as the used datasets in the news content-based category of fake news detection approaches.

The features and datasets used in the news content-based approaches

Feature and metadataDatasetsReference
The average number of words in sentences, number of stop words, the sentiment rate of the news measured through the difference between the number of positive and negative words in the articleGetting real about fake news , Gathering mediabiasfactcheck , KaiDMML FakeNewsNet , Real news for Oct-Dec 2016 Kapusta et al. ( )
The length distribution of the title, body and label of the articleNews trends, Kaggle, ReutersKaur et al. ( )
Sociolinguistic, historical, cultural, ideological and syntactical features attached to particular words, phrases and syntactical constructionsFakeNewsNetVereshchaka et al. ( )
Term frequencyBuzzFeed political news, Random political news, ISOT fake newsOzbay and Alatas ( )
The statement, speaker, context, label, justificationPOLITIFACT, LIAR Wang ( )
Spatial vicinity of each word, spatial/contextual relations between terms, and latent relations between terms and articlesKaggle fake news dataset Hosseinimotlagh and Papalexakis ( )
Word length, the count of words in a tweeted statementTwitter dataset, Chile earthquake 2010 datasetsAbdullah-All-Tanvir et al. ( )
The number of words that express negative emotionsTwitter datasetAbdullah-All-Tanvir et al. ( )
Labeled dataBuzzFeed , PolitiFact Mahabub ( )
The relationship between the news article headline and article body. The biases of a written news articleKaggle: real_or_fake , Fake news detection Bahad et al. ( )
Historical data. The topic and sentiment associated with content textual. The subject and context of the text, semantic knowledge of the contentFacebook datasetDel Vicario et al. ( )
The veracity of image text. The credibility of the top 15 Google search results related to the image textGoogle images, the Onion, KaggleVishwakarma et al. ( )
Topic modeling of text and the associated image of the online newsTwitter dataset , Weibo Amri et al. ( )

a https://www.kaggle.com/anthonyc1/gathering-real-news-for-oct-dec-2016 , last access date: 26-12-2022

b https://mediabiasfactcheck.com/ , last access date: 26-12-2022

c https://github.com/KaiDMML/FakeNewsNet , last access date: 26-12-2022

d https://www.kaggle.com/anthonyc1/gathering-real-news-for-oct-dec-2016 , last access date: 26-12-2022

e https://www.cs.ucsb.edu/~william/data/liar_dataset.zip , last access date: 26-12-2022

f https://www.kaggle.com/mrisdal/fake-news , last access date: 26-12-2022

g https://github.com/BuzzFeedNews/2016-10-facebook-fact-check , last access date: 26-12-2022

h https://www.politifact.com/subjects/fake-news/ , last access date: 26-12-2022

i https://www.kaggle.com/rchitic17/real-or-fake , last access date: 26-12-2022

j https://www.kaggle.com/jruvika/fake-news-detection , last access date: 26-12-2022

k https://github.com/MKLab-ITI/image-verification-corpus , last access date: 26-12-2022

l https://drive.google.com/file/d/14VQ7EWPiFeGzxp3XC2DeEHi-BEisDINn/view , last access date: 26-12-2022

Social Context-based Category

Unlike news content-based solutions, the social context-based approaches capture the skeptical social context of the online news (Zhang and Ghorbani 2020 ) rather than focusing on the news content. The social context-based category contains fake news detection approaches that use the contextual aspects (i.e., information related to the context of the news post). These aspects are based on social context and they offer additional information to help detect fake news. They are the surrounding data outside of the fake news article itself, where they can be an essential part of automatic fake news detection. Some useful examples of contextual information may include checking if the news itself and the source that published it are credible, checking the date of the news or the supporting resources, and checking if any other online news platforms are reporting the same or similar stories (Zhang and Ghorbani 2020 ).

Social context-based aspects can be classified into two subcategories, user-based and network-based, and they can be used for context analysis and training tasks in the case of AI- and ML-based approaches. User-based aspects refer to information captured from OSN users such as user profile information (Shu et al. 2019b ; Wang et al. 2019c ; Hamdi et al. 2020 ; Nyow and Chua 2019 ; Jiang et al. 2019 ) and user behavior (Cardaioli et al. 2020 ) such as user engagement (Uppada et al. 2022 ; Jiang et al. 2019 ; Shu et al. 2018b ; Nyow and Chua 2019 ) and response (Zhang et al. 2019a ; Qian et al. 2018 ). Meanwhile, network-based aspects refer to information captured from the properties of the social network where the fake content is shared and disseminated such as news propagation path (Liu and Wu 2018 ; Wu and Liu 2018 ) (e.g., propagation times and temporal characteristics of propagation), diffusion patterns (Shu et al. 2019a ) (e.g., number of retweets, shares), as well as user relationships (Mishra 2020 ; Hamdi et al. 2020 ; Jiang et al. 2019 ) (e.g., friendship status among users).

Figure  7 summarizes some of the most widely adopted social context representations, as well as the most used detection techniques (i.e., AI, ML, DL, fact-checking and blockchain), in the social context-based category of approaches.

An external file that holds a picture, illustration, etc.
Object name is 13278_2023_1028_Fig7_HTML.jpg

Social context-based category: social context representation and detection techniques

Table  7 lists the distinct features and metadata, the adopted detection cues, as well as the used datasets, in the context-based category of fake news detection approaches.

The features, detection cues and datasets used int the social context-based approaches

Feature and metadataDetection cuesDatasetsReference
Users’ sharing behaviors, explicit and implicit profile featuresUser-based: user profile informationFakeNewsNetShu et al. ( )
Users’ trust level, explicit and implicit profile features of “experienced” users who can recognize fake news items as false and “naive” users who are more likely to believe fake newsUser-based: user engagementFakeNewsNet, BuzzFeed, PolitiFactShu et al. ( )
Users’ replies on fake content, the reply stancesUser-based: user responseRumourEval, PHEMEZhang et al. ( )
Historical user responses to previous articlesUser-based: user responseWeibo, Twitter datasetQian et al. ( )
Speaker name, job title, political party affiliation, etc.User-based: user profile informationLIARWang et al. ( )
Latent relationships among users, the influence of the users with high prestige on the other usersNetworks-based: user relationshipsTwitter15 and Twitter16 Mishra ( )
The inherent tri-relationships among publishers, news items and users (i.e., publisher-news relations and user-news interactions)Networks-based: diffusion patternsFakeNewsNetShu et al. ( )
Propagation paths of news stories constructed from the retweets of source tweetsNetworks-based: news propagation pathWeibo, Twitter15, Twitter16Liu and Wu ( )
The propagation of messages in a social networkNetworks-based: news propagation pathTwitter datasetWu and Liu ( )
Spatiotemporal information (i.e., location, timestamps of user engagements), user’s Twitter profile, the user engagement to both fake and real newsUser-based: user engagementFakeNewsNet, PolitiFact, GossipCop, TwitterNyow and Chua ( )
The credibility of information sources, characteristics of the user, and their social graphUser and network-based: user profile information and user relationshipsEgo-Twitter Hamdi et al. ( )
Number of follows and followers on social media (user followee/follower, The friendship network), users’ similaritiesUser and network-based: user profile information, user engagement and user relationshipsFakeNewsNetJiang et al. ( )

a https://www.dropbox.com/s/7ewzdrbelpmrnxu/rumdetect2017.zip , last access date: 26-12-2022 b https://snap.stanford.edu/data/ego-Twitter.html , last access date: 26-12-2022

Hybrid approaches

Most researchers are focusing on employing a specific method rather than a combination of both content- and context-based methods. This is because some of them (Wu and Rao 2020 ) believe that there still some challenging limitations in the traditional fusion strategies due to existing feature correlations and semantic conflicts. For this reason, some researchers focus on extracting content-based information, while others are capturing some social context-based information for their proposed approaches.

However, it has proven challenging to successfully automate fake news detection based on just a single type of feature (Ruchansky et al. 2017 ). Therefore, recent directions tend to do a mixture by using both news content-based and social context-based approaches for fake news detection.

Table  8 lists the distinct features and metadata, as well as the used datasets, in the hybrid category of fake news detection approaches.

The features and datasets used in the hybrid approaches

Feature and metadataDatasetsReference
Features and textual metadata of the news content: title, content, date, source, locationSOT fake news dataset, LIAR dataset and FA-KES datasetElhadad et al. ( )
Spatiotemporal information (i.e., location, timestamps of user engagements), user’s Twitter profile, the user engagement to both fake and real newsFakeNewsNet, PolitiFact, GossipCop, TwitterNyow and Chua ( )
The domains and reputations of the news publishers. The important terms of each news and their word embeddings and topics. Shares, reactions and commentsBuzzFeedXu et al. ( )
Shares and propagation path of the tweeted content. A set of metrics comprising of created discussions such as the increase in authors, attention level, burstiness level, contribution sparseness, author interaction, author count and the average length of discussionsTwitter datasetAswani et al. ( )
Features extracted from the evolution of news and features from the users involved in the news spreading: The news veracity, the credibility of news spreaders, and the frequency of exposure to the same piece of newsTwitter datasetPreviti et al. ( )
Similar semantics and conflicting semantics between posts and commentsRumourEval, PHEMEWu and Rao ( )
Information from the publisher, including semantic and emotional information in news content. Semantic and emotional information from users. The resultant latent representations from news content and user commentsWeiboGuo et al. ( )
Relationships between news articles, creators and subjectsPolitiFactZhang et al. ( )
Source domains of the news article, author namesGeorge McIntire fake news datasetDeepak and Chitturi ( )
The news content, social context and spatiotemporal information. Synthetic user engagements generated from historical temporal user engagement patternsFakeNewsNetShu et al. ( )
The news content, social reactions, statements, the content and language of posts, the sharing and dissemination among users, content similarity, stance, sentiment score, headline, named entity, news sharing, credibility history, tweet commentsSHPT, PolitiFactWang et al. ( )
The source of the news, its headline, its author, its publication time, the adherence of a news source to a particular party, likes, shares, replies, followers-followees and their activitiesNELA-GT-2019, FakedditRaza and Ding ( )

Fake news detection techniques

Another vision for classifying automatic fake news detection is to look at techniques used in the literature. Hence, we classify the detection methods based on the techniques into three groups:

  • Human-based techniques: This category mainly includes the use of crowdsourcing and fact-checking techniques, which rely on human knowledge to check and validate the veracity of news content.
  • Artificial Intelligence-based techniques: This category includes the most used AI approaches for fake news detection in the literature. Specifically, these are the approaches in which researchers use classical ML, deep learning techniques such as convolutional neural network (CNN), recurrent neural network (RNN), as well as natural language processing (NLP).
  • Blockchain-based techniques: This category includes solutions using blockchain technology to detect and mitigate fake news in social media by checking source reliability and establishing the traceability of the news content.

Human-based Techniques

One specific research direction for fake news detection consists of using human-based techniques such as crowdsourcing (Pennycook and Rand 2019 ; Micallef et al. 2020 ) and fact-checking (Vlachos and Riedel 2014 ; Chung and Kim 2021 ; Nyhan et al. 2020 ) techniques.

These approaches can be considered as low computational requirement techniques since both rely on human knowledge and expertise for fake news detection. However, fake news identification cannot be addressed solely through human force since it demands a lot of effort in terms of time and cost, and it is ineffective in terms of preventing the fast spread of fake content.

Crowdsourcing. Crowdsourcing approaches (Kim et al. 2018 ) are based on the “wisdom of the crowds” (Collins et al. 2020 ) for fake content detection. These approaches rely on the collective contributions and crowd signals (Tschiatschek et al. 2018 ) of a group of people for the aggregation of crowd intelligence to detect fake news (Tchakounté et al. 2020 ) and to reduce the spread of misinformation on social media (Pennycook and Rand 2019 ; Micallef et al. 2020 ).

Micallef et al. ( 2020 ) highlight the role of the crowd in countering misinformation. They suspect that concerned citizens (i.e., the crowd), who use platforms where disinformation appears, can play a crucial role in spreading fact-checking information and in combating the spread of misinformation.

Recently Tchakounté et al. ( 2020 ) proposed a voting system as a new method of binary aggregation of opinions of the crowd and the knowledge of a third-party expert. The aggregator is based on majority voting on the crowd side and weighted averaging on the third-party site.

Similarly, Huffaker et al. ( 2020 ) propose a crowdsourced detection of emotionally manipulative language. They introduce an approach that transforms classification problems into a comparison task to mitigate conflation content by allowing the crowd to detect text that uses manipulative emotional language to sway users toward positions or actions. The proposed system leverages anchor comparison to distinguish between intrinsically emotional content and emotionally manipulative language.

La Barbera et al. ( 2020 ) try to understand how people perceive the truthfulness of information presented to them. They collect data from US-based crowd workers, build a dataset of crowdsourced truthfulness judgments for political statements, and compare it with expert annotation data generated by fact-checkers such as PolitiFact.

Coscia and Rossi ( 2020 ) introduce a crowdsourced flagging system that consists of online news flagging. The bipolar model of news flagging attempts to capture the main ingredients that they observe in empirical research on fake news and disinformation.

Unlike the previously mentioned researchers who focus on news content in their approaches, Pennycook and Rand ( 2019 ) focus on using crowdsourced judgments of the quality of news sources to combat social media disinformation.

Fact-Checking. The fact-checking task is commonly manually performed by journalists to verify the truthfulness of a given claim. Indeed, fact-checking features are being adopted by multiple online social network platforms. For instance, Facebook 34 started addressing false information through independent fact-checkers in 2017, followed by Google 35 the same year. Two years later, Instagram 36 followed suit. However, the usefulness of fact-checking initiatives is questioned by journalists 37 , as well as by researchers such as Andersen and Søe ( 2020 ). On the other hand, work is being conducted to boost the effectiveness of these initiatives to reduce misinformation (Chung and Kim 2021 ; Clayton et al. 2020 ; Nyhan et al. 2020 ).

Most researchers use fact-checking websites (e.g., politifact.com, 38 snopes.com, 39 Reuters, 40 , etc.) as data sources to build their datasets and train their models. Therefore, in the following, we specifically review examples of solutions that use fact-checking (Vlachos and Riedel 2014 ) to help build datasets that can be further used in the automatic detection of fake content.

Yang et al. ( 2019a ) use PolitiFact fact-checking website as a data source to train, tune, and evaluate their model named XFake, on political data. The XFake system is an explainable fake news detector that assists end users to identify news credibility. The fakeness of news items is detected and interpreted considering both content and contextual (e.g., statements) information (e.g., speaker).

Based on the idea that fact-checkers cannot clean all data, and it must be a selection of what “matters the most” to clean while checking a claim, Sintos et al. ( 2019 ) propose a solution to help fact-checkers combat problems related to data quality (where inaccurate data lead to incorrect conclusions) and data phishing. The proposed solution is a combination of data cleaning and perturbation analysis to avoid uncertainties and errors in data and the possibility that data can be phished.

Tchechmedjiev et al. ( 2019 ) propose a system named “ClaimsKG” as a knowledge graph of fact-checked claims aiming to facilitate structured queries about their truth values, authors, dates, journalistic reviews and other kinds of metadata. “ClaimsKG” designs the relationship between vocabularies. To gather vocabularies, a semi-automated pipeline periodically gathers data from popular fact-checking websites regularly.

AI-based Techniques

Previous work by Yaqub et al. ( 2020 ) has shown that people lack trust in automated solutions for fake news detection However, work is already being undertaken to increase this trust, for instance by von der Weth et al. ( 2020 ).

Most researchers consider fake news detection as a classification problem and use artificial intelligence techniques, as shown in Fig.  8 . The adopted AI techniques may include machine learning ML (e.g., Naïve Bayes, logistic regression, support vector machine SVM), deep learning DL (e.g., convolutional neural networks CNN, recurrent neural networks RNN, long short-term memory LSTM) and natural language processing NLP (e.g., Count vectorizer, TF-IDF Vectorizer). Most of them combine many AI techniques in their solutions rather than relying on one specific approach.

An external file that holds a picture, illustration, etc.
Object name is 13278_2023_1028_Fig8_HTML.jpg

Examples of the most widely used AI techniques for fake news detection

Many researchers are developing machine learning models in their solutions for fake news detection. Recently, deep neural network techniques are also being employed as they are generating promising results (Islam et al. 2020 ). A neural network is a massively parallel distributed processor with simple units that can store important information and make it available for use (Hiriyannaiah et al. 2020 ). Moreover, it has been proven (Cardoso Durier da Silva et al. 2019 ) that the most widely used method for automatic detection of fake news is not simply a classical machine learning technique, but rather a fusion of classical techniques coordinated by a neural network.

Some researchers define purely machine learning models (Del Vicario et al. 2019 ; Elhadad et al. 2019 ; Aswani et al. 2017 ; Hakak et al. 2021 ; Singh et al. 2021 ) in their fake news detection approaches. The more commonly used machine learning algorithms (Abdullah-All-Tanvir et al. 2019 ) for classification problems are Naïve Bayes, logistic regression and SVM.

Other researchers (Wang et al. 2019c ; Wang 2017 ; Liu and Wu 2018 ; Mishra 2020 ; Qian et al. 2018 ; Zhang et al. 2020 ; Goldani et al. 2021 ) prefer to do a mixture of different deep learning models, without combining them with classical machine learning techniques. Some even prove that deep learning techniques outperform traditional machine learning techniques (Mishra et al. 2022 ). Deep learning is one of the most widely popular research topics in machine learning. Unlike traditional machine learning approaches, which are based on manually crafted features, deep learning approaches can learn hidden representations from simpler inputs both in context and content variations (Bondielli and Marcelloni 2019 ). Moreover, traditional machine learning algorithms almost always require structured data and are designed to “learn” to act by understanding labeled data and then use it to produce new results with more datasets, which requires human intervention to “teach them” when the result is incorrect (Parrish 2018 ), while deep learning networks rely on layers of artificial neural networks (ANN) and do not require human intervention, as multilevel layers in neural networks place data in a hierarchy of different concepts, which ultimately learn from their own mistakes (Parrish 2018 ). The two most widely implemented paradigms in deep neural networks are recurrent neural networks (RNN) and convolutional neural networks (CNN).

Still other researchers (Abdullah-All-Tanvir et al. 2019 ; Kaliyar et al. 2020 ; Zhang et al. 2019a ; Deepak and Chitturi 2020 ; Shu et al. 2018a ; Wang et al. 2019c ) prefer to combine traditional machine learning and deep learning classification, models. Others combine machine learning and natural language processing techniques. A few combine deep learning models with natural language processing (Vereshchaka et al. 2020 ). Some other researchers (Kapusta et al. 2019 ; Ozbay and Alatas 2020 ; Ahmed et al. 2020 ) combine natural language processing with machine learning models. Furthermore, others (Abdullah-All-Tanvir et al. 2019 ; Kaur et al. 2020 ; Kaliyar 2018 ; Abdullah-All-Tanvir et al. 2020 ; Bahad et al. 2019 ) prefer to combine all the previously mentioned techniques (i.e., ML, DL and NLP) in their approaches.

Table  11 , which is relegated to the Appendix (after the bibliography) because of its size, shows a comparison of the fake news detection solutions that we have reviewed based on their main approaches, the methodology that was used and the models.

Comparison of AI-based fake news detection techniques

ReferenceApproachMethodModel
Del Vicario et al. ( )An approach to analyze the sentiment associated with data textual content and add semantic knowledge to itMLLinear Regression (LIN), Logistic Regression (LOG), Support Vector Machine (SVM) with Linear Kernel, K-Nearest Neighbors (KNN), Neural Network Models (NN), Decision Trees (DT)
Elhadad et al. ( )An approach to select hybrid features from the textual content of the news, which they consider as blocks, without segmenting text into parts (title, content, date, source, etc.)MLDecision Tree, KNN, Logistic Regression, SVM, Naïve Bayes with n-gram, LSVM, Perceptron
Aswani et al. ( )A hybrid artificial bee colony approach to identify and segregate buzz in Twitter and analyze user-generated content (UGC) to mine useful information (content buzz/popularity)MLKNN with artificial bee colony optimization
Hakak et al. ( )An ensemble of machine learning approaches for effective feature extraction to classify fake newsMLDecision Tree, Random Forest and Extra Tree Classifier
Singh et al. ( )A multimodal approach, combining text and visual analysis of online news stories to automatically detect fake news through predictive analysis to detect features most strongly associated with fake newsMLLogistic Regression, Linear Discrimination Analysis, Quadratic Discriminant Analysis, K-Nearest Neighbors, Naïve Bayes, Support Vector Machine, Classification and Regression Tree, and Random Forest Analysis
Amri et al. ( )An explainable multimodal content-based fake news detection systemMLVision-and-Language BERT (VilBERT), Local Interpretable Model-Agnostic Explanations (LIME), Latent Dirichlet Allocation (LDA) topic modeling
Wang et al. ( )A hybrid deep neural network model to learn the useful features from contextual information and to capture the dependencies between sequences of contextual informationDLRecurrent and Convolutional Neural Networks (RNN and CNN)
Wang ( )A hybrid convolutional neural network approach for automatic fake news detectionDLRecurrent and Convolutional Neural Networks (RNN and CNN)
Liu and Wu ( )An early detection approach of fake news to classify the propagation path to mine the global and local changes of user characteristics in the diffusion pathDLRecurrent and Convolutional Neural Networks (RNN and CNN)
Mishra ( )Unsupervised network representation learning methods to learn user (node) embeddings from both the follower network and the retweet network and to encode the propagation path sequenceDLRNN: (long short-term memory unit (LSTM))
Qian et al. ( )A Two-Level Convolutional Neural Network with User Response Generator (TCNN-URG) where TCNN captures semantic information from the article text by representing it at the sentence and word level. The URG learns a generative model of user responses to article text from historical user responses that it can use to generate responses to new articles to assist fake news detectionDLConvolutional Neural Network (CNN)
Zhang et al. ( )Based on a set of explicit features extracted from the textual information, a deep diffusive network model is built to infer the credibility of news articles, creators and subjects simultaneouslyDLDeep Diffusive Network Model Learning
Goldani et al. ( )A capsule networks (CapsNet) approach for fake news detection using two architectures for different lengths of news statements and claims that capsule neural networks have been successful in computer vision and are receiving attention for use in Natural Language Processing (NLP)DLCapsule Networks (CapsNet)
Wang et al. ( )An automated approach to distinguish different cases of fake news (i.e., hoaxes, irony and propaganda) while assessing and classifying news articles and claims including linguistic cues as well as user credibility and news dissemination in social mediaDL, MLConvolutional Neural Network (CNN), long Short-Term Memory (LSTM), logistic regression
Abdullah-All-Tanvir et al. ( )A model to recognize forged news messages from twitter posts, by figuring out how to anticipate precision appraisals, in view of computerizing forged news identification in Twitter dataset. A combination of traditional machine learning, as well as deep learning classification models, is tested to enhance the accuracy of predictionDL, MLNaïve Bayes, Logistic Regression, Support Vector Machine, Recurrent Neural Network (RNN), Long Short-Term Memory (LSTM)
Kaliyar et al. ( )An approach named (FNDNet) based on the combination between unsupervised learning algorithm GloVe and deep convolutional neural network for fake news detectionDL, MLDeep Convolutional Neural Network (CNN), Global Vectors (GloVe)
Zhang et al. ( )A hybrid approach to encode auxiliary information coming from people’s replies alone in temporal order. Such auxiliary information is then used to update a priori belief generating a posteriori beliefDL, MLDeep Learning Model, Long Short-Term Memory Neural Network (LSTM)
Deepak and Chitturi ( )A system that consists of live data mining in addition to the deep learning modelDL, MLFeedforward Neural Network (FNN) and LSTM Word Vector Model
Shu et al. ( )A multidimensional fake news data repository “FakeNewsNet” and conduct an exploratory analysis of the datasets to evaluate themDL, MLConvolutional Neural Network (CNN), Support Vector Machines (SVMs), Logistic Regression (LR), Naïve Bayes (NB)
Vereshchaka et al. ( )A sociocultural textual analysis, computational linguistics analysis, and textual classification using NLP, as well as deep learning models to distinguish fake from real news to mitigate the problem of disinformationDL, NLPShort-Term Memory (LSTM), Recurrent Neural Network (RNN) and Gated Recurrent Unit (GRU)
Kapusta et al. ( )A sentiment and frequency analysis using both machine learning and NLP in what is called text mining to processing news content sentiment analysis and frequency analysis to compare basic text characteristics of fake and real news articlesML, NLPThe Natural Language Toolkit (NLTK), TF-IDF
Ozbay and Alatas ( )A hybrid approach based on text analysis and supervised artificial intelligence for fake news detectionML, NLPSupervised algorithms: BayesNet, JRip, OneR, Decision Stump, ZeroR, Stochastic Gradient Descent (SGD), CV Parameter Selection (CVPS), Randomizable Filtered Classifier (RFC), Logistic Model Tree (LMT). NLP: TF weighting
  • Ahmed et al. ( ): A machine learning and NLP text-based processing approach to identify fake news; features are extracted from the text and then fed into classification. Methodology: ML, NLP. Models: machine learning classifiers (Passive-Aggressive, Naïve Bayes and Support Vector Machine).
  • Abdullah-All-Tanvir et al. ( ): A hybrid neural network approach to identify authentic news in popular Twitter threads, designed to outperform traditional neural network architectures. Three traditional supervised algorithms and two deep neural networks are combined to train the model, and NLP concepts are used to implement the traditional supervised algorithms over the dataset. Methodology: ML, DL, NLP. Models: traditional supervised algorithms (Logistic Regression, Bayesian Classifier and Support Vector Machine); deep neural networks (Recurrent Neural Network, Long Short-Term Memory (LSTM)); NLP concepts such as Count Vectorizer and TF-IDF Vectorizer.
  • Kaur et al. ( ): A hybrid method to classify news articles as fake or real by finding which classification model identifies false features most accurately. Methodology: ML, DL, NLP. Models: Neural Networks (NN) and ensemble models; supervised machine learning classifiers such as Naïve Bayes (NB), Decision Tree (DT), Support Vector Machine (SVM) and linear models; Term Frequency-Inverse Document Frequency (TF-IDF), Count Vectorizer (CV), Hashing Vectorizer (HV).
  • Kaliyar ( ): A fake news detection approach to classify news articles or other documents as fake or not. NLP, machine learning and deep learning techniques are used to implement the models and to compare the accuracy of the different models and classifiers. Methodology: ML, DL, NLP. Models: machine learning models (Naïve Bayes, K-Nearest Neighbors, Decision Tree, Random Forest); deep learning networks (shallow Convolutional Neural Network (CNN), Very Deep Convolutional Neural Network (VDCNN), Long Short-Term Memory network (LSTM), Gated Recurrent Unit network (GRU)); combinations of CNN with LSTM (CNN-LSTM) and CNN with GRU (CNN-GRU).
  • Mahabub ( ): An intelligent detection system to classify news as either real or fake. Methodology: ML, DL, NLP. Models: Naïve Bayes, KNN, SVM, Random Forest, Artificial Neural Network, Logistic Regression, Gradient Boosting, AdaBoost.
  • Bahad et al. ( ): A method based on a bi-directional LSTM recurrent neural network to analyze the relationship between a news article's headline and its body. Methodology: ML, DL, NLP. Models: unsupervised learning algorithm Global Vectors (GloVe); bi-directional LSTM recurrent neural network.
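
To make the recurring ML + NLP pipeline in these rows concrete, here is a minimal, self-contained sketch: toy TF-IDF weighting of tokens followed by a nearest-centroid classifier. This is an illustration of the general pattern only, not any surveyed author's actual system; the headlines and labels are invented toy data.

```python
import math
import re
from collections import Counter, defaultdict

def tokenize(text):
    # Lowercase and keep alphabetic tokens (a stand-in for real NLP preprocessing).
    return re.findall(r"[a-z']+", text.lower())

def vectorize(tokens, df, n_docs):
    # Smoothed TF-IDF weights for one document.
    tf = Counter(tokens)
    return {t: (tf[t] / len(tokens)) * math.log((1 + n_docs) / (1 + df[t]))
            for t in tf}

def fit(texts, labels):
    # Build document frequencies and one averaged TF-IDF centroid per class.
    docs = [tokenize(t) for t in texts]
    df = Counter()
    for d in docs:
        df.update(set(d))
    n = len(docs)
    counts = Counter(labels)
    centroids = defaultdict(Counter)
    for d, y in zip(docs, labels):
        for t, w in vectorize(d, df, n).items():
            centroids[y][t] += w / counts[y]
    return df, n, dict(centroids)

def cosine(a, b):
    dot = sum(w * b.get(t, 0.0) for t, w in a.items())
    na = math.sqrt(sum(w * w for w in a.values()))
    nb = math.sqrt(sum(w * w for w in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def predict(model, text):
    # Assign the class whose centroid is most similar to the new document.
    df, n, centroids = model
    vec = vectorize(tokenize(text), df, n)
    return max(centroids, key=lambda y: cosine(vec, centroids[y]))

model = fit(
    ["shocking miracle cure doctors hate",
     "miracle cure hoax spreads online",
     "parliament passes budget vote",
     "budget vote passes after debate"],
    ["fake", "fake", "real", "real"],
)
label = predict(model, "new miracle cure claim")  # -> "fake"
```

A production system would swap the centroid step for one of the classifiers named in the table (Passive-Aggressive, Naïve Bayes, SVM), but the extract-features-then-classify shape is the same.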

Blockchain-based Techniques for Source Reliability and Traceability

Another research direction for detecting and mitigating fake news in social media focuses on blockchain solutions. Blockchain technology has recently attracted researchers' attention because of the features it offers: immutability, decentralization, tamper-resistance, consensus, record keeping and non-repudiation of transactions are some of the key properties that make blockchain technology exploitable not just for cryptocurrencies, but also for proving the authenticity and integrity of digital assets.
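
A minimal sketch of the hash-chain idea underlying these properties, using nothing beyond Python's standard library: each archived news record commits to the hash of its predecessor, so tampering with any stored record invalidates every later block. This illustrates the mechanism only; real blockchain systems add distributed consensus on top. The record fields are invented for illustration.

```python
import hashlib
import json

GENESIS = "0" * 64

def _digest(record, prev_hash):
    # Canonical serialization so the hash is deterministic.
    payload = json.dumps({"record": record, "prev": prev_hash}, sort_keys=True)
    return hashlib.sha256(payload.encode()).hexdigest()

def append(chain, record):
    # Link the new block to the hash of the previous one.
    prev = chain[-1]["hash"] if chain else GENESIS
    chain.append({"record": record, "prev": prev, "hash": _digest(record, prev)})

def verify(chain):
    # Re-derive every hash; any edit to an earlier record breaks the chain.
    prev = GENESIS
    for block in chain:
        if block["prev"] != prev or block["hash"] != _digest(block["record"], block["prev"]):
            return False
        prev = block["hash"]
    return True

chain = []
append(chain, {"source": "agency-a", "headline": "Original headline"})
append(chain, {"source": "agency-b", "headline": "Republished headline"})
ok_before = verify(chain)                               # True
chain[1]["record"]["headline"] = "Tampered headline"
ok_after = verify(chain)                                # False
```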

However, the proposed blockchain approaches are few in number and remain foundational and largely theoretical. Specifically, the solutions currently available are still in the research, prototype, and beta-testing stages (DiCicco and Agarwal 2020 ; Tchechmedjiev et al. 2019 ). Furthermore, most researchers (Ochoa et al. 2019 ; Song et al. 2019 ; Shang et al. 2018 ; Qayyum et al. 2019 ; Jing and Murugesan 2018 ; Buccafurri et al. 2017 ; Chen et al. 2018 ) do not specify which type of fake news their studies mitigate; they mention news content in general, which is too vague to guide targeted solutions. Serious implementations are therefore needed to prove the usefulness and feasibility of this emerging research direction.

Table  9 shows a classification of the reviewed blockchain-based approaches. In the classification, we listed the following:

  • The type of fake news that authors are trying to mitigate, which can be multimedia-based or text-based fake news.
  • The techniques used for fake news mitigation, which can be either blockchain only, or blockchain combined with other techniques such as AI, Data mining, Truth-discovery, Preservation metadata, Semantic similarity, Crowdsourcing, Graph theory and SIR model (Susceptible, Infected, Recovered).
  • The feature that is offered as an advantage of the given solution (e.g., Reliability, Authenticity and Traceability). Reliability is the credibility and truthfulness of the news content, which consists of proving the trustworthiness of the content. Traceability aims to trace and archive the contents. Authenticity consists of checking whether the content is real and authentic.
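
One of the techniques listed above, the SIR model (Susceptible, Infected, Recovered), can be made concrete with a short simulation in which the three compartments are read as fractions of users who have not yet seen a fake story, are actively spreading it, and have stopped spreading it. The parameters below are hypothetical, chosen only to illustrate the dynamics.

```python
def sir_step(s, i, r, beta, gamma, dt=1.0):
    """One Euler step of the SIR dynamics, with s, i, r as population fractions."""
    new_infections = beta * s * i * dt   # susceptible users exposed to the story
    new_recoveries = gamma * i * dt      # spreaders who stop sharing it
    return s - new_infections, i + new_infections - new_recoveries, r + new_recoveries

def simulate(s0=0.99, i0=0.01, beta=0.5, gamma=0.2, steps=100):
    s, i, r = s0, i0, 0.0
    history = [(s, i, r)]
    for _ in range(steps):
        s, i, r = sir_step(s, i, r, beta, gamma)
        history.append((s, i, r))
    return history

history = simulate()                      # hypothetical beta=0.5, gamma=0.2
peak_spread = max(i for _, i, _ in history)
```

With these illustrative parameters the spreading fraction first grows (since beta * s0 exceeds gamma) and then dies out as the susceptible pool is exhausted, which is the qualitative behavior such diffusion models exploit.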

A checkmark ( ✓ ) in Table  9 denotes that the criterion is explicitly mentioned in the proposed solution, while a dash (–) denotes that it depends on the case: either the criterion was not explicitly mentioned in the work (e.g., the fake news type), or the classification does not apply (e.g., no technique beyond blockchain itself).

Table 9. A classification of popular blockchain-based approaches for fake news detection in social media

Reference | Techniques | Feature
Shae and Tsai ( ) | AI | Reliability
Ochoa et al. ( ) | Data Mining, Truth-Discovery | Reliability
Huckle and White ( ) | Preservation Metadata | Reliability
Song et al. ( ) | – | Traceability
Shang et al. ( ) | – | Traceability
Qayyum et al. ( ) | Semantic Similarity | Reliability
Jing and Murugesan ( ) | AI | Reliability
Buccafurri et al. ( ) | Crowd-Sourcing | Reliability
Chen et al. ( ) | SIR Model | Reliability
Hasan and Salah ( ) | – | Authenticity
Tchechmedjiev et al. ( ) | Graph theory | Reliability

After reviewing the most relevant state of the art for automatic fake news detection, we classify it as shown in Table  10 based on the detection aspects (i.e., content-based, contextual, or hybrid aspects) and the techniques used (i.e., AI, crowdsourcing, fact-checking, blockchain or hybrid techniques). Hybrid techniques refer to solutions that combine techniques from different categories (i.e., inter-hybrid methods), as well as techniques within the same class of methods (i.e., intra-hybrid methods), in order to define innovative solutions for fake news detection; a hybrid method should bring the best of both worlds. We then provide a discussion along several axes.

Table 10. Classification of fake news detection approaches

Content-based:
  • ML: Del Vicario et al. ( ), Hosseinimotlagh and Papalexakis ( ), Hakak et al. ( ), Singh et al. ( ), Amri et al. ( )
  • DL: Wang ( ), Hiriyannaiah et al. ( )
  • NLP: Zellers et al. ( )
  • Crowdsourcing (CDS): Kim et al. ( ), Tschiatschek et al. ( ), Tchakounté et al. ( ), Huffaker et al. ( ), La Barbera et al. ( ), Coscia and Rossi ( ), Micallef et al. ( )
  • Blockchain (BKC): Song et al. ( )
  • Fact-checking: Sintos et al. ( )
  • Hybrid techniques: ML, DL, NLP: Abdullah-All-Tanvir et al. ( ), Kaur et al. ( ), Mahabub ( ), Bahad et al. ( ), Kaliyar ( ); ML, DL: Abdullah-All-Tanvir et al. ( ), Kaliyar et al. ( ), Deepak and Chitturi ( ); DL, NLP: Vereshchaka et al. ( ); ML, NLP: Kapusta et al. ( ), Ozbay and Alatas ( ), Ahmed et al. ( ); BKC, CDS: Buccafurri et al. ( )

Context-based:
  • AI: Qian et al. ( ), Liu and Wu ( ), Hamdi et al. ( ), Wang et al. ( ), Mishra ( )
  • Crowdsourcing (CDS): Pennycook and Rand ( )
  • Blockchain (BKC): Huckle and White ( ), Shang et al. ( )
  • Fact-checking: Tchechmedjiev et al. ( )
  • Hybrid techniques: ML, DL: Zhang et al. ( ), Shu et al. ( ), Shu et al. ( ), Wu and Liu ( ); BKC, AI: Ochoa et al. ( ); BKC, SIR: Chen et al. ( )

Hybrid (content and context):
  • ML: Aswani et al. ( ), Previti et al. ( ), Elhadad et al. ( ), Nyow and Chua ( )
  • DL: Ruchansky et al. ( ), Wu and Rao ( ), Guo et al. ( ), Zhang et al. ( )
  • NLP: Xu et al. ( )
  • Blockchain (BKC): Qayyum et al. ( ), Hasan and Salah ( ), Tchechmedjiev et al. ( )
  • Fact-checking: Yang et al. ( )
  • Hybrid techniques: ML, DL: Shu et al. ( ), Wang et al. ( ); BKC, AI: Shae and Tsai ( ), Jing and Murugesan ( )

News content-based methods

Most news content-based approaches treat fake news detection as a classification problem and use AI techniques such as classical machine learning (e.g., regression, Bayesian methods) as well as deep learning (i.e., neural methods such as CNNs and RNNs). More specifically, classification of social media content is a fundamental task in social media mining, so most existing methods regard it as a text categorization problem and focus mainly on content features, such as words and hashtags (Wu and Liu 2018 ). The main challenges facing these approaches are how to extract features so as to reduce the amount of data needed to train the models, and which features are most suitable for accurate results.
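
As a toy illustration of such content features, the sketch below counts words, hashtags and related surface signals that a downstream classifier could consume. The example post and feature set are invented, not taken from any surveyed system.

```python
import re

def content_features(post):
    """Extract simple word- and hashtag-level features from one post."""
    hashtags = re.findall(r"#\w+", post)
    mentions = re.findall(r"@\w+", post)
    urls = re.findall(r"https?://\S+", post)
    text = re.sub(r"https?://\S+", " ", post)   # drop URLs before word counting
    words = re.findall(r"[A-Za-z']+", text)
    return {
        "n_words": len(words),
        "n_hashtags": len(hashtags),
        "n_mentions": len(mentions),
        "n_urls": len(urls),
        "n_exclamations": post.count("!"),
        # Shouting (all-caps words) is a common clickbait-style surface signal.
        "all_caps_ratio": sum(1 for w in words if w.isupper() and len(w) > 1)
                          / max(len(words), 1),
    }

post = ("SHOCKING!!! Cure found #health #miracle "
        "http://spam.example @user you won't BELIEVE this")
features = content_features(post)
```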

Researchers using such approaches are motivated by the fact that the news content is the main entity in the deception process, and it is a straightforward factor to analyze when looking for predictive clues of deception. However, detecting fake news from content alone is not enough, because fake news is created in a strategic, intentional way to mimic the truth (i.e., the content can be deliberately manipulated by the spreader to make it look like real news). Therefore, it is considered challenging, if not impossible, to identify useful features (Wu and Liu 2018 ) and consequently determine the nature of such news from the content alone.

Moreover, works that utilize only the news content for fake news detection ignore the rich information and latent user intelligence (Qian et al. 2018 ) stored in user responses toward previously disseminated articles. Such auxiliary information is therefore deemed crucial for an effective fake news detection approach.

Social context-based methods

The context-based approaches explore the data surrounding the news content, which can be an effective direction with advantages in areas where content-based text classification runs into issues. However, most existing studies implementing contextual methods focus mainly on additional information coming from users and network diffusion patterns. Moreover, from a technical perspective, they are limited to the use of sophisticated machine learning techniques for feature extraction and ignore the usefulness of results coming from techniques such as web search and crowdsourcing, which could save much time and help in the early detection and identification of fake content.
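
A hedged sketch of what such contextual features might look like, assuming a hypothetical reshare-cascade format; the field names ('time', 'account_age_days', 'verified') and values are invented for illustration, not drawn from any surveyed dataset.

```python
from datetime import datetime, timedelta
from statistics import mean

def context_features(shares):
    """Compute simple diffusion and user features from one reshare cascade."""
    times = sorted(s["time"] for s in shares)
    span_hours = max((times[-1] - times[0]).total_seconds() / 3600, 1e-9)
    return {
        "n_shares": len(shares),
        "shares_per_hour": len(shares) / span_hours,   # burstiness of spread
        "mean_account_age_days": mean(s["account_age_days"] for s in shares),
        "verified_ratio": sum(s["verified"] for s in shares) / len(shares),
    }

t0 = datetime(2022, 1, 1, 12, 0)
cascade = [
    {"time": t0,                         "account_age_days": 10,   "verified": False},
    {"time": t0 + timedelta(hours=1),    "account_age_days": 2000, "verified": True},
    {"time": t0 + timedelta(hours=2),    "account_age_days": 30,   "verified": False},
]
ctx = context_features(cascade)
```

Bursty spread by young, unverified accounts is the kind of pattern these contextual methods feed to a classifier alongside content features.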

Hybrid methods

Hybrid approaches can simultaneously model different aspects of fake news such as the content-based aspects, as well as the contextual aspect based on both the OSN user and the OSN network patterns. However, these approaches are deemed more complex in terms of models (Bondielli and Marcelloni 2019 ), data availability, and the number of features. Furthermore, it remains difficult to decide which information among each category (i.e., content-based and context-based information) is most suitable and appropriate to be used to achieve accurate and precise results. Therefore, there are still very few studies belonging to this category of hybrid approaches.

Early detection

Because fake news usually evolves and spreads very fast on social media, it is critical and urgent to detect it early. Yet early detection is a challenging task, especially on highly dynamic platforms such as social networks, and both news content-based and social context-based approaches suffer from this limitation.

Approaches that detect fake news through content analysis are less affected by this issue, but they are still limited by the lack of information available for verification while the news is in its early stage of spread. Approaches based on contextual analysis are the most affected, since most of them rely on information that becomes available only after the fake content has spread, such as social engagement, user responses, and propagation patterns. It is therefore crucial to combine trusted human verification with historical data in an attempt to detect fake content during its early stage of propagation.
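
As a purely illustrative sketch of that last point, a hypothetical early-stage score could blend a content-based signal with a prior derived from the source's history of debunked posts. The function, the weighting, and the inputs below are invented for illustration, not a surveyed method.

```python
def early_risk_score(content_score, source_history, weight=0.6):
    """Blend a content-based fake-likelihood (in [0, 1]) with the source's
    historical share of previously debunked posts (1 = debunked, 0 = not).

    Both inputs are hypothetical stand-ins for real upstream signals."""
    if not source_history:           # unknown source: rely on content alone
        return content_score
    prior = sum(source_history) / len(source_history)
    return weight * content_score + (1 - weight) * prior

# A post scoring 0.8 on content, from a source with 3 of 4 past posts debunked:
blended = early_risk_score(0.8, [1, 1, 0, 1])   # 0.6*0.8 + 0.4*0.75 = 0.78
```

The point of the sketch is only that historical source data is available at posting time, before any engagement or propagation signals exist.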

Conclusion and future directions

In this paper, we introduced the general context of the fake news problem as one of the major issues of the online deception problem in online social networks. Based on a review of the most relevant state of the art, we summarized and classified existing definitions of fake news and its related terms. We also listed various typologies and existing categorizations of fake news, such as intent-based fake news (including clickbait, hoaxes, rumors, satire, propaganda, conspiracy theories and framing) and content-based fake news (including text-based and multimedia-based fake news, the latter covering deepfake videos and GAN-generated fake images). We discussed the major challenges related to fake news detection and mitigation in social media, including the deceptive nature of fabricated content, the lack of human awareness in the field of fake news, the issue of non-human spreaders (e.g., social bots), the dynamicity of such online platforms, which results in fast propagation of fake content, and the quality of existing datasets, which still limits the efficiency of the proposed solutions. We reviewed existing researchers' visions regarding the automatic detection of fake news based on the adopted approaches (i.e., news content-based, social context-based, or hybrid approaches) and the techniques used (i.e., artificial intelligence-based methods; crowdsourcing, fact-checking, and blockchain-based methods; and hybrid methods), and then presented a comparative study of the reviewed works. We also provided a critical discussion of the reviewed approaches along different axes, such as the adopted aspect for fake news detection (i.e., content-based, contextual, and hybrid aspects) and the early detection perspective.

To conclude, we present the main issues in combating the fake news problem that need to be further investigated when proposing new detection approaches. We believe that an efficient fake news detection approach needs to consider the following:

  • Our choice of sources of information and search criteria may have introduced biases in our research. If so, it would be desirable to identify those biases and mitigate them.
  • News content is the fundamental source to find clues to distinguish fake from real content. However, contextual information derived from social media users and from the network can provide useful auxiliary information to increase detection accuracy. Specifically, capturing users’ characteristics and users’ behavior toward shared content can be a key task for fake news detection.
  • Moreover, capturing users’ historical behavior, including their emotions and/or opinions toward news content, can help in the early detection and mitigation of fake news.
  • Furthermore, adversarial learning techniques (e.g., GAN, SeqGAN) can be considered as a promising direction for mitigating the lack and scarcity of available datasets by providing machine-generated data that can be used to train and build robust systems to detect the fake examples from the real ones.
  • Lastly, analyzing how sources and promoters of fake news operate over the web through multiple online platforms is crucial; Zannettou et al. (2019) discovered that false information is more likely to spread across platforms (18% appearing on multiple platforms) than valid information (11%).

Appendix: A Comparison of AI-based fake news detection techniques

This Appendix consists only of the rather long Table  11 . It compares the fake news detection solutions based on artificial intelligence that we have reviewed according to their main approaches, the methodology that was used, and the models, as explained in Sect.  6.2.2 .

Author Contributions

The order of authors is alphabetic as is customary in the third author’s field. The lead author was Sabrine Amri, who collected and analyzed the data and wrote a first draft of the paper, all along under the supervision and tight guidance of Esma Aïmeur. Gilles Brassard reviewed, criticized and polished the work into its final form.

This work is supported in part by Canada’s Natural Sciences and Engineering Research Council.

Availability of data and material

Declarations

On behalf of all authors, the corresponding author states that there is no conflict of interest.

1 https://www.nationalacademies.org/news/2021/07/as-surgeon-general-urges-whole-of-society-effort-to-fight-health-misinformation-the-work-of-the-national-academies-helps-foster-an-evidence-based-information-environment , last access date: 26-12-2022.

2 https://time.com/4897819/elvis-presley-alive-conspiracy-theories/ , last access date: 26-12-2022.

3 https://www.therichest.com/shocking/the-evidence-15-reasons-people-think-the-earth-is-flat/ , last access date: 26-12-2022.

4 https://www.grunge.com/657584/the-truth-about-1952s-alien-invasion-of-washington-dc/ , last access date: 26-12-2022.

5 https://www.journalism.org/2021/01/12/news-use-across-social-media-platforms-in-2020/ , last access date: 26-12-2022.

6 https://www.pewresearch.org/fact-tank/2018/12/10/social-media-outpaces-print-newspapers-in-the-u-s-as-a-news-source/ , last access date: 26-12-2022.

7 https://www.buzzfeednews.com/article/janelytvynenko/coronavirus-fake-news-disinformation-rumors-hoaxes , last access date: 26-12-2022.

8 https://www.factcheck.org/2020/03/viral-social-media-posts-offer-false-coronavirus-tips/ , last access date: 26-12-2022.

9 https://www.factcheck.org/2020/02/fake-coronavirus-cures-part-2-garlic-isnt-a-cure/ , last access date: 26-12-2022.

10 https://www.bbc.com/news/uk-36528256 , last access date: 26-12-2022.

11 https://en.wikipedia.org/wiki/Pizzagate_conspiracy_theory , last access date: 26-12-2022.

12 https://www.theguardian.com/world/2017/jan/09/germany-investigating-spread-fake-news-online-russia-election , last access date: 26-12-2022.

13 https://www.macquariedictionary.com.au/resources/view/word/of/the/year/2016 , last access date: 26-12-2022.

14 https://www.macquariedictionary.com.au/resources/view/word/of/the/year/2018 , last access date: 26-12-2022.

15 https://apnews.com/article/47466c5e260149b1a23641b9e319fda6 , last access date: 26-12-2022.

16 https://blog.collinsdictionary.com/language-lovers/collins-2017-word-of-the-year-shortlist/ , last access date: 26-12-2022.

17 https://www.gartner.com/smarterwithgartner/gartner-top-strategic-predictions-for-2018-and-beyond/ , last access date: 26-12-2022.

18 https://www.technologyreview.com/s/612236/even-the-best-ai-for-spotting-fake-news-is-still-terrible/ , last access date: 26-12-2022.

19 https://scholar.google.ca/ , last access date: 26-12-2022.

20 https://ieeexplore.ieee.org/ , last access date: 26-12-2022.

21 https://link.springer.com/ , last access date: 26-12-2022.

22 https://www.sciencedirect.com/ , last access date: 26-12-2022.

23 https://www.scopus.com/ , last access date: 26-12-2022.

24 https://www.acm.org/digital-library , last access date: 26-12-2022.

25 https://www.politico.com/magazine/story/2016/12/fake-news-history-long-violent-214535 , last access date: 26-12-2022.

26 https://en.wikipedia.org/wiki/Trial_of_Socrates , last access date: 26-12-2022.

27 https://trends.google.com/trends/explore?hl=en-US &tz=-180 &date=2013-12-06+2018-01-06 &geo=US &q=fake+news &sni=3 , last access date: 26-12-2022.

28 https://ec.europa.eu/digital-single-market/en/tackling-online-disinformation , last access date: 26-12-2022.

29 https://www.nato.int/cps/en/natohq/177273.htm , last access date: 26-12-2022.

30 https://www.collinsdictionary.com/dictionary/english/fake-news , last access date: 26-12-2022.

31 https://www.statista.com/statistics/657111/fake-news-sharing-online/ , last access date: 26-12-2022.

32 https://www.statista.com/statistics/657090/fake-news-recogition-confidence/ , last access date: 26-12-2022.

33 https://www.nbcnews.com/tech/social-media/now-available-more-200-000-deleted-russian-troll-tweets-n844731 , last access date: 26-12-2022.

34 https://www.theguardian.com/technology/2017/mar/22/facebook-fact-checking-tool-fake-news , last access date: 26-12-2022.

35 https://www.theguardian.com/technology/2017/apr/07/google-to-display-fact-checking-labels-to-show-if-news-is-true-or-false , last access date: 26-12-2022.

36 https://about.instagram.com/blog/announcements/combatting-misinformation-on-instagram , last access date: 26-12-2022.

37 https://www.wired.com/story/instagram-fact-checks-who-will-do-checking/ , last access date: 26-12-2022.

38 https://www.politifact.com/ , last access date: 26-12-2022.

39 https://www.snopes.com/ , last access date: 26-12-2022.

40 https://www.reutersagency.com/en/ , last access date: 26-12-2022.

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Contributor Information

Esma Aïmeur, Email: aimeur@iro.umontreal.ca .

Sabrine Amri, Email: [email protected] .

Gilles Brassard, Email: brassard@iro.umontreal.ca .

  • Abdullah-All-Tanvir, Mahir EM, Akhter S, Huq MR (2019) Detecting fake news using machine learning and deep learning algorithms. In: 7th international conference on smart computing and communications (ICSCC), IEEE, pp 1–5. 10.1109/ICSCC.2019.8843612
  • Abdullah-All-Tanvir, Mahir EM, Huda SMA, Barua S (2020) A hybrid approach for identifying authentic news using deep learning methods on popular Twitter threads. In: International conference on artificial intelligence and signal processing (AISP), IEEE, pp 1–6. 10.1109/AISP48273.2020.9073583
  • Abu Arqoub O, Abdulateef Elega A, Efe Özad B, Dwikat H, Adedamola Oloyede F. Mapping the scholarship of fake news research: a systematic review. J Pract. 2022;16(1):56–86. doi: 10.1080/17512786.2020.1805791.
  • Ahmed S, Hinkelmann K, Corradini F. Development of fake news model using machine learning through natural language processing. Int J Comput Inf Eng. 2020;14(12):454–460.
  • Aïmeur E, Brassard G, Rioux J. Data privacy: an end-user perspective. Int J Comput Netw Commun Secur. 2013;1(6):237–250.
  • Aïmeur E, Hage H, Amri S (2018) The scourge of online deception in social networks. In: 2018 international conference on computational science and computational intelligence (CSCI), IEEE, pp 1266–1271. 10.1109/CSCI46756.2018.00244
  • Alemanno A. How to counter fake news? A taxonomy of anti-fake news approaches. Eur J Risk Regul. 2018;9(1):1–5. doi: 10.1017/err.2018.12.
  • Allcott H, Gentzkow M. Social media and fake news in the 2016 election. J Econ Perspect. 2017;31(2):211–36. doi: 10.1257/jep.31.2.211.
  • Allen J, Howland B, Mobius M, Rothschild D, Watts DJ. Evaluating the fake news problem at the scale of the information ecosystem. Sci Adv. 2020. doi: 10.1126/sciadv.aay3539.
  • Allington D, Duffy B, Wessely S, Dhavan N, Rubin J. Health-protective behaviour, social media usage and conspiracy belief during the Covid-19 public health emergency. Psychol Med. 2020. doi: 10.1017/S003329172000224X.
  • Alonso-Galbán P, Alemañy-Castilla C (2022) Curbing misinformation and disinformation in the Covid-19 era: a view from Cuba. MEDICC Rev 22:45–46. 10.37757/MR2020.V22.N2.12
  • Altay S, Hacquin AS, Mercier H. Why do so few people share fake news? It hurts their reputation. New Media Soc. 2022;24(6):1303–1324. doi: 10.1177/1461444820969893.
  • Amri S, Sallami D, Aïmeur E (2022) Exmulf: an explainable multimodal content-based fake news detection system. In: International symposium on foundations and practice of security. Springer, Berlin, pp 177–187. 10.1109/IJCNN48605.2020.9206973
  • Andersen J, Søe SO. Communicative actions we live by: the problem with fact-checking, tagging or flagging fake news-the case of Facebook. Eur J Commun. 2020;35(2):126–139. doi: 10.1177/0267323119894489.
  • Apuke OD, Omar B. Fake news and Covid-19: modelling the predictors of fake news sharing among social media users. Telematics Inform. 2021;56:101475. doi: 10.1016/j.tele.2020.101475.
  • Apuke OD, Omar B, Tunca EA, Gever CV. The effect of visual multimedia instructions against fake news spread: a quasi-experimental study with Nigerian students. J Librariansh Inf Sci. 2022. doi: 10.1177/09610006221096477.
  • Aswani R, Ghrera S, Kar AK, Chandra S. Identifying buzz in social media: a hybrid approach using artificial bee colony and k-nearest neighbors for outlier detection. Soc Netw Anal Min. 2017;7(1):1–10. doi: 10.1007/s13278-017-0461-2.
  • Avram M, Micallef N, Patil S, Menczer F (2020) Exposure to social engagement metrics increases vulnerability to misinformation. arXiv preprint arXiv:2005.04682. 10.37016/mr-2020-033
  • Badawy A, Lerman K, Ferrara E (2019) Who falls for online political manipulation? In: Companion proceedings of the 2019 world wide web conference, pp 162–168. 10.1145/3308560.3316494
  • Bahad P, Saxena P, Kamal R. Fake news detection using bi-directional LSTM-recurrent neural network. Procedia Comput Sci. 2019;165:74–82. doi: 10.1016/j.procs.2020.01.072.
  • Bakdash J, Sample C, Rankin M, Kantarcioglu M, Holmes J, Kase S, Zaroukian E, Szymanski B (2018) The future of deception: machine-generated and manipulated images, video, and audio? In: 2018 international workshop on social sensing (SocialSens), IEEE, pp 2–2. 10.1109/SocialSens.2018.00009
  • Balmas M. When fake news becomes real: combined exposure to multiple news sources and political attitudes of inefficacy, alienation, and cynicism. Commun Res. 2014;41(3):430–454. doi: 10.1177/0093650212453600.
  • Baptista JP, Gradim A. Understanding fake news consumption: a review. Soc Sci. 2020. doi: 10.3390/socsci9100185.
  • Baptista JP, Gradim A. A working definition of fake news. Encyclopedia. 2022;2(1):632–645. doi: 10.3390/encyclopedia2010043.
  • Bastick Z. Would you notice if fake news changed your behavior? An experiment on the unconscious effects of disinformation. Comput Hum Behav. 2021;116:106633. doi: 10.1016/j.chb.2020.106633.
  • Batailler C, Brannon SM, Teas PE, Gawronski B. A signal detection approach to understanding the identification of fake news. Perspect Psychol Sci. 2022;17(1):78–98. doi: 10.1177/1745691620986135.
  • Bessi A, Ferrara E (2016) Social bots distort the 2016 US presidential election online discussion. First Monday 21(11-7). 10.5210/fm.v21i11.7090
  • Bhattacharjee A, Shu K, Gao M, Liu H (2020) Disinformation in the online information ecosystem: detection, mitigation and challenges. arXiv preprint arXiv:2010.09113
  • Bhuiyan MM, Zhang AX, Sehat CM, Mitra T. Investigating differences in crowdsourced news credibility assessment: raters, tasks, and expert criteria. Proc ACM Hum Comput Interact. 2020;4(CSCW2):1–26. doi: 10.1145/3415164.
  • Bode L, Vraga EK. In related news, that was wrong: the correction of misinformation through related stories functionality in social media. J Commun. 2015;65(4):619–638. doi: 10.1111/jcom.12166.
  • Bondielli A, Marcelloni F. A survey on fake news and rumour detection techniques. Inf Sci. 2019;497:38–55. doi: 10.1016/j.ins.2019.05.035.
  • Bovet A, Makse HA. Influence of fake news in Twitter during the 2016 US presidential election. Nat Commun. 2019;10(1):1–14. doi: 10.1038/s41467-018-07761-2.
  • Brashier NM, Pennycook G, Berinsky AJ, Rand DG. Timing matters when correcting fake news. Proc Natl Acad Sci. 2021. doi: 10.1073/pnas.2020043118.
  • Brewer PR, Young DG, Morreale M. The impact of real news about “fake news”: intertextual processes and political satire. Int J Public Opin Res. 2013;25(3):323–343. doi: 10.1093/ijpor/edt015.
  • Bringula RP, Catacutan-Bangit AE, Garcia MB, Gonzales JPS, Valderama AMC. “Who is gullible to political disinformation?” Predicting susceptibility of university students to fake news. J Inf Technol Polit. 2022;19(2):165–179. doi: 10.1080/19331681.2021.1945988.
  • Buccafurri F, Lax G, Nicolazzo S, Nocera A (2017) Tweetchain: an alternative to blockchain for crowd-based applications. In: International conference on web engineering, Springer, Berlin, pp 386–393. 10.1007/978-3-319-60131-1_24
  • Burshtein S. The true story on fake news. Intell Prop J. 2017;29(3):397–446.
  • Cardaioli M, Cecconello S, Conti M, Pajola L, Turrin F (2020) Fake news spreaders profiling through behavioural analysis. In: CLEF (working notes)
  • Cardoso Durier da Silva F, Vieira R, Garcia AC (2019) Can machines learn to detect fake news? A survey focused on social media. In: Proceedings of the 52nd Hawaii international conference on system sciences. 10.24251/HICSS.2019.332
  • Carmi E, Yates SJ, Lockley E, Pawluczuk A (2020) Data citizenship: rethinking data literacy in the age of disinformation, misinformation, and malinformation. Intern Policy Rev 9(2):1–22. 10.14763/2020.2.1481
  • Celliers M, Hattingh M (2020) A systematic review on fake news themes reported in literature. In: Conference on e-Business, e-Services and e-Society. Springer, Berlin, pp 223–234. 10.1007/978-3-030-45002-1_19
  • Chen Y, Li Q, Wang H (2018) Towards trusted social networks with blockchain technology. arXiv preprint arXiv:1801.02796
  • Cheng L, Guo R, Shu K, Liu H (2020) Towards causal understanding of fake news dissemination. arXiv preprint arXiv:2010.10580
  • Chiu MM, Oh YW. How fake news differs from personal lies. Am Behav Sci. 2021;65(2):243–258. doi: 10.1177/0002764220910243.
  • Chung M, Kim N. When I learn the news is false: how fact-checking information stems the spread of fake news via third-person perception. Hum Commun Res. 2021;47(1):1–24. doi: 10.1093/hcr/hqaa010.
  • Clarke J, Chen H, Du D, Hu YJ. Fake news, investor attention, and market reaction. Inf Syst Res. 2020. doi: 10.1287/isre.2019.0910.
  • Clayton K, Blair S, Busam JA, Forstner S, Glance J, Green G, Kawata A, Kovvuri A, Martin J, Morgan E, et al. Real solutions for fake news? Measuring the effectiveness of general warnings and fact-check tags in reducing belief in false stories on social media. Polit Behav. 2020;42(4):1073–1095. doi: 10.1007/s11109-019-09533-0.
  • Collins B, Hoang DT, Nguyen NT, Hwang D (2020) Fake news types and detection models on social media: a state-of-the-art survey. In: Asian conference on intelligent information and database systems. Springer, Berlin, pp 562–573. 10.1007/978-981-15-3380-8_49
  • Conroy NK, Rubin VL, Chen Y. Automatic deception detection: methods for finding fake news. Proc Assoc Inf Sci Technol. 2015;52(1):1–4. doi: 10.1002/pra2.2015.145052010082.
  • Cooke NA. Posttruth, truthiness, and alternative facts: Information behavior and critical information consumption for a new age. Libr Q. 2017;87(3):211–221. doi: 10.1086/692298.
  • Coscia M, Rossi L. Distortions of political bias in crowdsourced misinformation flagging. J R Soc Interface. 2020;17(167):20200020. doi: 10.1098/rsif.2020.0020.
  • Dame Adjin-Tettey T. Combating fake news, disinformation, and misinformation: experimental evidence for media literacy education. Cogent Arts Human. 2022;9(1):2037229. doi: 10.1080/23311983.2022.2037229.
  • Deepak S, Chitturi B. Deep neural approach to fake-news identification. Procedia Comput Sci. 2020;167:2236–2243. doi: 10.1016/j.procs.2020.03.276.
  • de Cock Buning M (2018) A multi-dimensional approach to disinformation: report of the independent high level group on fake news and online disinformation. Publications Office of the European Union
  • Del Vicario M, Quattrociocchi W, Scala A, Zollo F. Polarization and fake news: early warning of potential misinformation targets. ACM Trans Web (TWEB). 2019;13(2):1–22. doi: 10.1145/3316809.
  • Demuyakor J, Opata EM. Fake news on social media: predicting which media format influences fake news most on Facebook. J Intell Commun. 2022. doi: 10.54963/jic.v2i1.56.
  • Derakhshan H, Wardle C (2017) Information disorder: definitions. In: Understanding and addressing the disinformation ecosystem, pp 5–12
  • Desai AN, Ruidera D, Steinbrink JM, Granwehr B, Lee DH. Misinformation and disinformation: the potential disadvantages of social media in infectious disease and how to combat them. Clin Infect Dis. 2022;74(Supplement 3):e34–e39. doi: 10.1093/cid/ciac109.
  • Di Domenico G, Sit J, Ishizaka A, Nunan D. Fake news, social media and marketing: a systematic review. J Bus Res. 2021;124:329–341. doi: 10.1016/j.jbusres.2020.11.037.
  • Dias N, Pennycook G, Rand DG. Emphasizing publishers does not effectively reduce susceptibility to misinformation on social media. Harv Kennedy School Misinform Rev. 2020. doi: 10.37016/mr-2020-001.
  • DiCicco KW, Agarwal N (2020) Blockchain technology-based solutions to fight misinformation: a survey. In: Disinformation, misinformation, and fake news in social media. Springer, Berlin, pp 267–281. 10.1007/978-3-030-42699-6_14
  • Douglas KM, Uscinski JE, Sutton RM, Cichocka A, Nefes T, Ang CS, Deravi F. Understanding conspiracy theories. Polit Psychol. 2019;40:3–35. doi: 10.1111/pops.12568.
  • Edgerly S, Mourão RR, Thorson E, Tham SM. When do audiences verify? How perceptions about message and source influence audience verification of news headlines. J Mass Commun Q. 2020;97(1):52–71. doi: 10.1177/1077699019864680.
  • Egelhofer JL, Lecheler S. Fake news as a two-dimensional phenomenon: a framework and research agenda. Ann Int Commun Assoc. 2019;43(2):97–116. doi: 10.1080/23808985.2019.1602782.
  • Elhadad MK, Li KF, Gebali F (2019) A novel approach for selecting hybrid features from online news textual metadata for fake news detection. In: International conference on p2p, parallel, grid, cloud and internet computing. Springer, Berlin, pp 914–925. 10.1007/978-3-030-33509-0_86
  • ERGA (2018) Fake news, and the information disorder. European Broadcasting Union (EBU)
  • ERGA (2021) Notions of disinformation and related concepts. European Regulators Group for Audiovisual Media Services (ERGA)
  • Escolà-Gascón Á. New techniques to measure lie detection using Covid-19 fake news and the Multivariable Multiaxial Suggestibility Inventory-2 (MMSI-2). Comput Hum Behav Rep. 2021;3:100049. doi: 10.1016/j.chbr.2020.100049.
  • Fazio L. Pausing to consider why a headline is true or false can help reduce the sharing of false news. Harv Kennedy School Misinformation Rev. 2020. doi: 10.37016/mr-2020-009.
  • Ferrara E, Varol O, Davis C, Menczer F, Flammini A. The rise of social bots. Commun ACM. 2016;59(7):96–104. doi: 10.1145/2818717.
  • Flynn D, Nyhan B, Reifler J. The nature and origins of misperceptions: understanding false and unsupported beliefs about politics. Polit Psychol. 2017; 38 :127–150. doi: 10.1111/pops.12394. [ CrossRef ] [ Google Scholar ]
  • Fraga-Lamas P, Fernández-Caramés TM. Fake news, disinformation, and deepfakes: leveraging distributed ledger technologies and blockchain to combat digital deception and counterfeit reality. IT Prof. 2020; 22 (2):53–59. doi: 10.1109/MITP.2020.2977589. [ CrossRef ] [ Google Scholar ]
  • Freeman D, Waite F, Rosebrock L, Petit A, Causier C, East A, Jenner L, Teale AL, Carr L, Mulhall S, et al. Coronavirus conspiracy beliefs, mistrust, and compliance with government guidelines in England. Psychol Med. 2020 doi: 10.1017/S0033291720001890. [ PMC free article ] [ PubMed ] [ CrossRef ] [ Google Scholar ]
  • Friggeri A, Adamic L, Eckles D, Cheng J (2014) Rumor cascades. In: Proceedings of the international AAAI conference on web and social media
  • García SA, García GG, Prieto MS, Moreno Guerrero AJ, Rodríguez Jiménez C. The impact of term fake news on the scientific community. Scientific performance and mapping in web of science. Soc Sci. 2020 doi: 10.3390/socsci9050073. [ CrossRef ] [ Google Scholar ]
  • Garrett RK, Bond RM. Conservatives’ susceptibility to political misperceptions. Sci Adv. 2021 doi: 10.1126/sciadv.abf1234. [ PMC free article ] [ PubMed ] [ CrossRef ] [ Google Scholar ]
  • Giachanou A, Ríssola EA, Ghanem B, Crestani F, Rosso P (2020) The role of personality and linguistic patterns in discriminating between fake news spreaders and fact checkers. In: International conference on applications of natural language to information systems. Springer, Berlin, pp 181–192 10.1007/978-3-030-51310-8_17
  • Golbeck J, Mauriello M, Auxier B, Bhanushali KH, Bonk C, Bouzaghrane MA, Buntain C, Chanduka R, Cheakalos P, Everett JB et al (2018) Fake news vs satire: a dataset and analysis. In: Proceedings of the 10th ACM conference on web science, pp 17–21, 10.1145/3201064.3201100
  • Goldani MH, Momtazi S, Safabakhsh R. Detecting fake news with capsule neural networks. Appl Soft Comput. 2021; 101 :106991. doi: 10.1016/j.asoc.2020.106991. [ CrossRef ] [ Google Scholar ]
  • Goldstein I, Yang L. Good disclosure, bad disclosure. J Financ Econ. 2019; 131 (1):118–138. doi: 10.1016/j.jfineco.2018.08.004. [ CrossRef ] [ Google Scholar ]
  • Grinberg N, Joseph K, Friedland L, Swire-Thompson B, Lazer D. Fake news on Twitter during the 2016 US presidential election. Science. 2019; 363 (6425):374–378. doi: 10.1126/science.aau2706. [ PubMed ] [ CrossRef ] [ Google Scholar ]
  • Guadagno RE, Guttieri K (2021) Fake news and information warfare: an examination of the political and psychological processes from the digital sphere to the real world. In: Research anthology on fake news, political warfare, and combatting the spread of misinformation. IGI Global, pp 218–242 10.4018/978-1-7998-7291-7.ch013
  • Guess A, Nagler J, Tucker J. Less than you think: prevalence and predictors of fake news dissemination on Facebook. Sci Adv. 2019 doi: 10.1126/sciadv.aau4586. [ PMC free article ] [ PubMed ] [ CrossRef ] [ Google Scholar ]
  • Guo C, Cao J, Zhang X, Shu K, Yu M (2019) Exploiting emotions for fake news detection on social media. arXiv preprint arXiv:1903.01728
  • Guo B, Ding Y, Yao L, Liang Y, Yu Z. The future of false information detection on social media: new perspectives and trends. ACM Comput Surv (CSUR) 2020; 53 (4):1–36. doi: 10.1145/3393880. [ CrossRef ] [ Google Scholar ]
  • Gupta A, Li H, Farnoush A, Jiang W. Understanding patterns of covid infodemic: a systematic and pragmatic approach to curb fake news. J Bus Res. 2022; 140 :670–683. doi: 10.1016/j.jbusres.2021.11.032. [ PMC free article ] [ PubMed ] [ CrossRef ] [ Google Scholar ]
  • Ha L, Andreu Perez L, Ray R. Mapping recent development in scholarship on fake news and misinformation, 2008 to 2017: disciplinary contribution, topics, and impact. Am Behav Sci. 2021; 65 (2):290–315. doi: 10.1177/0002764219869402. [ CrossRef ] [ Google Scholar ]
  • Habib A, Asghar MZ, Khan A, Habib A, Khan A. False information detection in online content and its role in decision making: a systematic literature review. Soc Netw Anal Min. 2019; 9 (1):1–20. doi: 10.1007/s13278-019-0595-5. [ CrossRef ] [ Google Scholar ]
  • Hage H, Aïmeur E, Guedidi A (2021) Understanding the landscape of online deception. In: Research anthology on fake news, political warfare, and combatting the spread of misinformation. IGI Global, pp 39–66. 10.4018/978-1-7998-2543-2.ch014
  • Hakak S, Alazab M, Khan S, Gadekallu TR, Maddikunta PKR, Khan WZ. An ensemble machine learning approach through effective feature extraction to classify fake news. Futur Gener Comput Syst. 2021;117:47–58. doi: 10.1016/j.future.2020.11.022
  • Hamdi T, Slimi H, Bounhas I, Slimani Y (2020) A hybrid approach for fake news detection in Twitter based on user features and graph embedding. In: International conference on distributed computing and internet technology. Springer, Berlin, pp 266–280. 10.1007/978-3-030-36987-3_17
  • Hameleers M. Separating truth from lies: comparing the effects of news media literacy interventions and fact-checkers in response to political misinformation in the US and the Netherlands. Inf Commun Soc. 2022;25(1):110–126. doi: 10.1080/1369118X.2020.1764603
  • Hameleers M, Powell TE, Van Der Meer TG, Bos L. A picture paints a thousand lies? The effects and mechanisms of multimodal disinformation and rebuttals disseminated via social media. Polit Commun. 2020;37(2):281–301. doi: 10.1080/10584609.2019.1674979
  • Hameleers M, Brosius A, de Vreese CH. Whom to trust? Media exposure patterns of citizens with perceptions of misinformation and disinformation related to the news media. Eur J Commun. 2022. doi: 10.1177/02673231211072667
  • Hartley K, Vu MK. Fighting fake news in the Covid-19 era: policy insights from an equilibrium model. Policy Sci. 2020;53(4):735–758. doi: 10.1007/s11077-020-09405-z
  • Hasan HR, Salah K. Combating deepfake videos using blockchain and smart contracts. IEEE Access. 2019;7:41596–41606. doi: 10.1109/ACCESS.2019.2905689
  • Hiriyannaiah S, Srinivas A, Shetty GK, Siddesh G, Srinivasa K (2020) A computationally intelligent agent for detecting fake news using generative adversarial networks. In: Hybrid computational intelligence: challenges and applications, pp 69–96. 10.1016/B978-0-12-818699-2.00004-4
  • Hosseinimotlagh S, Papalexakis EE (2018) Unsupervised content-based identification of fake news articles with tensor decomposition ensembles. In: Proceedings of the workshop on misinformation and misbehavior mining on the web (MIS2)
  • Huckle S, White M. Fake news: a technological approach to proving the origins of content, using blockchains. Big Data. 2017;5(4):356–371. doi: 10.1089/big.2017.0071
  • Huffaker JS, Kummerfeld JK, Lasecki WS, Ackerman MS (2020) Crowdsourced detection of emotionally manipulative language. In: Proceedings of the 2020 CHI conference on human factors in computing systems, pp 1–14. 10.1145/3313831.3376375
  • Ireton C, Posetti J. Journalism, fake news & disinformation: handbook for journalism education and training. Paris: UNESCO Publishing; 2018
  • Islam MR, Liu S, Wang X, Xu G. Deep learning for misinformation detection on online social networks: a survey and new perspectives. Soc Netw Anal Min. 2020;10(1):1–20. doi: 10.1007/s13278-020-00696-x
  • Ismailov M, Tsikerdekis M, Zeadally S. Vulnerabilities to online social network identity deception detection research and recommendations for mitigation. Fut Internet. 2020;12(9):148. doi: 10.3390/fi12090148
  • Jakesch M, Koren M, Evtushenko A, Naaman M (2019) The role of source and expressive responding in political news evaluation. In: Computation and journalism symposium
  • Jamieson KH. Cyberwar: how Russian hackers and trolls helped elect a president: what we don’t, can’t, and do know. Oxford: Oxford University Press; 2020
  • Jiang S, Chen X, Zhang L, Chen S, Liu H (2019) User-characteristic enhanced model for fake news detection in social media. In: CCF international conference on natural language processing and Chinese computing. Springer, Berlin, pp 634–646. 10.1007/978-3-030-32233-5_49
  • Jin Z, Cao J, Zhang Y, Luo J (2016) News verification by exploiting conflicting social viewpoints in microblogs. In: Proceedings of the AAAI conference on artificial intelligence
  • Jing TW, Murugesan RK (2018) A theoretical framework to build trust and prevent fake news in social media using blockchain. In: International conference of reliable information and communication technology. Springer, Berlin, pp 955–962. 10.1007/978-3-319-99007-1_88
  • Jones-Jang SM, Mortensen T, Liu J. Does media literacy help identification of fake news? Information literacy helps, but other literacies don’t. Am Behav Sci. 2021;65(2):371–388. doi: 10.1177/0002764219869406
  • Jungherr A, Schroeder R. Disinformation and the structural transformations of the public arena: addressing the actual challenges to democracy. Soc Media Soc. 2021. doi: 10.1177/2056305121988928
  • Kaliyar RK (2018) Fake news detection using a deep neural network. In: 2018 4th international conference on computing communication and automation (ICCCA), IEEE, pp 1–7. 10.1109/CCAA.2018.8777343
  • Kaliyar RK, Goswami A, Narang P, Sinha S. FNDNet—a deep convolutional neural network for fake news detection. Cogn Syst Res. 2020;61:32–44. doi: 10.1016/j.cogsys.2019.12.005
  • Kapantai E, Christopoulou A, Berberidis C, Peristeras V. A systematic literature review on disinformation: toward a unified taxonomical framework. New Media Soc. 2021;23(5):1301–1326. doi: 10.1177/1461444820959296
  • Kapusta J, Benko L, Munk M (2019) Fake news identification based on sentiment and frequency analysis. In: International conference Europe Middle East and North Africa information systems and technologies to support learning. Springer, Berlin, pp 400–409. 10.1007/978-3-030-36778-7_44
  • Kaur S, Kumar P, Kumaraguru P. Automating fake news detection system using multi-level voting model. Soft Comput. 2020;24(12):9049–9069. doi: 10.1007/s00500-019-04436-y
  • Khan SA, Alkawaz MH, Zangana HM (2019) The use and abuse of social media for spreading fake news. In: 2019 IEEE international conference on automatic control and intelligent systems (I2CACIS), IEEE, pp 145–148. 10.1109/I2CACIS.2019.8825029
  • Kim J, Tabibian B, Oh A, Schölkopf B, Gomez-Rodriguez M (2018) Leveraging the crowd to detect and reduce the spread of fake news and misinformation. In: Proceedings of the eleventh ACM international conference on web search and data mining, pp 324–332. 10.1145/3159652.3159734
  • Klein D, Wueller J. Fake news: a legal perspective. J Internet Law. 2017;20(10):5–13
  • Kogan S, Moskowitz TJ, Niessner M (2019) Fake news: evidence from financial markets. Available at SSRN 3237763
  • Kuklinski JH, Quirk PJ, Jerit J, Schwieder D, Rich RF. Misinformation and the currency of democratic citizenship. J Polit. 2000;62(3):790–816. doi: 10.1111/0022-3816.00033
  • Kumar S, Shah N (2018) False information on web and social media: a survey. arXiv preprint arXiv:1804.08559
  • Kumar S, West R, Leskovec J (2016) Disinformation on the web: impact, characteristics, and detection of Wikipedia hoaxes. In: Proceedings of the 25th international conference on world wide web, pp 591–602. 10.1145/2872427.2883085
  • La Barbera D, Roitero K, Demartini G, Mizzaro S, Spina D (2020) Crowdsourcing truthfulness: the impact of judgment scale and assessor bias. In: European conference on information retrieval. Springer, Berlin, pp 207–214. 10.1007/978-3-030-45442-5_26
  • Lanius C, Weber R, MacKenzie WI. Use of bot and content flags to limit the spread of misinformation among social networks: a behavior and attitude survey. Soc Netw Anal Min. 2021;11(1):1–15. doi: 10.1007/s13278-021-00739-x
  • Lazer DM, Baum MA, Benkler Y, Berinsky AJ, Greenhill KM, Menczer F, Metzger MJ, Nyhan B, Pennycook G, Rothschild D, et al. The science of fake news. Science. 2018;359(6380):1094–1096. doi: 10.1126/science.aao2998
  • Le T, Shu K, Molina MD, Lee D, Sundar SS, Liu H (2019) 5 sources of clickbaits you should know! Using synthetic clickbaits to improve prediction and distinguish between bot-generated and human-written headlines. In: 2019 IEEE/ACM international conference on advances in social networks analysis and mining (ASONAM), IEEE, pp 33–40. 10.1145/3341161.3342875
  • Lewandowsky S (2020) Climate change, disinformation, and how to combat it. In: Annual review of public health, vol 42. 10.1146/annurev-publhealth-090419-102409
  • Liu Y, Wu YF (2018) Early detection of fake news on social media through propagation path classification with recurrent and convolutional networks. In: Proceedings of the AAAI conference on artificial intelligence, pp 354–361
  • Luo M, Hancock JT, Markowitz DM. Credibility perceptions and detection accuracy of fake news headlines on social media: effects of truth-bias and endorsement cues. Commun Res. 2022;49(2):171–195. doi: 10.1177/0093650220921321
  • Lutzke L, Drummond C, Slovic P, Árvai J. Priming critical thinking: simple interventions limit the influence of fake news about climate change on Facebook. Glob Environ Chang. 2019;58:101964. doi: 10.1016/j.gloenvcha.2019.101964
  • Maertens R, Anseel F, van der Linden S. Combatting climate change misinformation: evidence for longevity of inoculation and consensus messaging effects. J Environ Psychol. 2020;70:101455. doi: 10.1016/j.jenvp.2020.101455
  • Mahabub A. A robust technique of fake news detection using ensemble voting classifier and comparison with other classifiers. SN Appl Sci. 2020;2(4):1–9. doi: 10.1007/s42452-020-2326-y
  • Mahbub S, Pardede E, Kayes A, Rahayu W. Controlling astroturfing on the internet: a survey on detection techniques and research challenges. Int J Web Grid Serv. 2019;15(2):139–158. doi: 10.1504/IJWGS.2019.099561
  • Marsden C, Meyer T, Brown I. Platform values and democratic elections: how can the law regulate digital disinformation? Comput Law Secur Rev. 2020;36:105373. doi: 10.1016/j.clsr.2019.105373
  • Masciari E, Moscato V, Picariello A, Sperlí G (2020) Detecting fake news by image analysis. In: Proceedings of the 24th symposium on international database engineering and applications, pp 1–5. 10.1145/3410566.3410599
  • Mazzeo V, Rapisarda A. Investigating fake and reliable news sources using complex networks analysis. Front Phys. 2022;10:886544. doi: 10.3389/fphy.2022.886544
  • McGrew S. Learning to evaluate: an intervention in civic online reasoning. Comput Educ. 2020;145:103711. doi: 10.1016/j.compedu.2019.103711
  • McGrew S, Breakstone J, Ortega T, Smith M, Wineburg S. Can students evaluate online sources? Learning from assessments of civic online reasoning. Theory Res Soc Educ. 2018;46(2):165–193. doi: 10.1080/00933104.2017.1416320
  • Meel P, Vishwakarma DK. Fake news, rumor, information pollution in social media and web: a contemporary survey of state-of-the-arts, challenges and opportunities. Expert Syst Appl. 2020;153:112986. doi: 10.1016/j.eswa.2019.112986
  • Meese J, Frith J, Wilken R. Covid-19, 5G conspiracies and infrastructural futures. Media Int Aust. 2020;177(1):30–46. doi: 10.1177/1329878X20952165
  • Metzger MJ, Hartsell EH, Flanagin AJ. Cognitive dissonance or credibility? A comparison of two theoretical explanations for selective exposure to partisan news. Commun Res. 2020;47(1):3–28. doi: 10.1177/0093650215613136
  • Micallef N, He B, Kumar S, Ahamad M, Memon N (2020) The role of the crowd in countering misinformation: a case study of the Covid-19 infodemic. arXiv preprint arXiv:2011.05773
  • Mihailidis P, Viotty S. Spreadable spectacle in digital culture: civic expression, fake news, and the role of media literacies in “post-fact” society. Am Behav Sci. 2017;61(4):441–454. doi: 10.1177/0002764217701217
  • Mishra R (2020) Fake news detection using higher-order user to user mutual-attention progression in propagation paths. In: Proceedings of the IEEE/CVF conference on computer vision and pattern recognition workshops, pp 652–653
  • Mishra S, Shukla P, Agarwal R. Analyzing machine learning enabled fake news detection techniques for diversified datasets. Wirel Commun Mob Comput. 2022. doi: 10.1155/2022/1575365
  • Molina MD, Sundar SS, Le T, Lee D. “Fake news” is not simply false information: a concept explication and taxonomy of online content. Am Behav Sci. 2021;65(2):180–212. doi: 10.1177/0002764219878224
  • Moro C, Birt JR (2022) Review bombing is a dirty practice, but research shows games do benefit from online feedback. The Conversation. https://research.bond.edu.au/en/publications/review-bombing-is-a-dirty-practice-but-research-shows-games-do-be
  • Mustafaraj E, Metaxas PT (2017) The fake news spreading plague: was it preventable? In: Proceedings of the 2017 ACM on web science conference, pp 235–239. 10.1145/3091478.3091523
  • Nagel TW. Measuring fake news acumen using a news media literacy instrument. J Media Liter Educ. 2022;14(1):29–42. doi: 10.23860/JMLE-2022-14-1-3
  • Nakov P (2020) Can we spot the “fake news” before it was even written? arXiv preprint arXiv:2008.04374
  • Nekmat E. Nudge effect of fact-check alerts: source influence and media skepticism on sharing of news misinformation in social media. Soc Media Soc. 2020. doi: 10.1177/2056305119897322
  • Nygren T, Brounéus F, Svensson G. Diversity and credibility in young people’s news feeds: a foundation for teaching and learning citizenship in a digital era. J Soc Sci Educ. 2019;18(2):87–109. doi: 10.4119/jsse-917
  • Nyhan B, Reifler J. Displacing misinformation about events: an experimental test of causal corrections. J Exp Polit Sci. 2015;2(1):81–93. doi: 10.1017/XPS.2014.22
  • Nyhan B, Porter E, Reifler J, Wood TJ. Taking fact-checks literally but not seriously? The effects of journalistic fact-checking on factual beliefs and candidate favorability. Polit Behav. 2020;42(3):939–960. doi: 10.1007/s11109-019-09528-x
  • Nyow NX, Chua HN (2019) Detecting fake news with tweets’ properties. In: 2019 IEEE conference on application, information and network security (AINS), IEEE, pp 24–29. 10.1109/AINS47559.2019.8968706
  • Ochoa IS, de Mello G, Silva LA, Gomes AJ, Fernandes AM, Leithardt VRQ (2019) FakeChain: a blockchain architecture to ensure trust in social media networks. In: International conference on the quality of information and communications technology. Springer, Berlin, pp 105–118. 10.1007/978-3-030-29238-6_8
  • Ozbay FA, Alatas B. Fake news detection within online social media using supervised artificial intelligence algorithms. Physica A. 2020;540:123174. doi: 10.1016/j.physa.2019.123174
  • Ozturk P, Li H, Sakamoto Y (2015) Combating rumor spread on social media: the effectiveness of refutation and warning. In: 2015 48th Hawaii international conference on system sciences, IEEE, pp 2406–2414. 10.1109/HICSS.2015.288
  • Parikh SB, Atrey PK (2018) Media-rich fake news detection: a survey. In: 2018 IEEE conference on multimedia information processing and retrieval (MIPR), IEEE, pp 436–441. 10.1109/MIPR.2018.00093
  • Parrish K (2018) Deep learning & machine learning: what’s the difference? Online: https://parsers.me/deep-learning-machine-learning-whats-the-difference/ . Accessed 20 May 2020
  • Paschen J. Investigating the emotional appeal of fake news using artificial intelligence and human contributions. J Prod Brand Manag. 2019;29(2):223–233. doi: 10.1108/JPBM-12-2018-2179
  • Pathak A, Srihari RK (2019) Breaking! Presenting fake news corpus for automated fact checking. In: Proceedings of the 57th annual meeting of the association for computational linguistics: student research workshop, pp 357–362
  • Peng J, Detchon S, Choo KKR, Ashman H. Astroturfing detection in social media: a binary n-gram-based approach. Concurr Comput Pract Exp. 2017;29(17):e4013. doi: 10.1002/cpe.4013
  • Pennycook G, Rand DG. Fighting misinformation on social media using crowdsourced judgments of news source quality. Proc Natl Acad Sci. 2019;116(7):2521–2526. doi: 10.1073/pnas.1806781116
  • Pennycook G, Rand DG. Who falls for fake news? The roles of bullshit receptivity, overclaiming, familiarity, and analytic thinking. J Pers. 2020;88(2):185–200. doi: 10.1111/jopy.12476
  • Pennycook G, Bear A, Collins ET, Rand DG. The implied truth effect: attaching warnings to a subset of fake news headlines increases perceived accuracy of headlines without warnings. Manag Sci. 2020;66(11):4944–4957. doi: 10.1287/mnsc.2019.3478
  • Pennycook G, McPhetres J, Zhang Y, Lu JG, Rand DG. Fighting Covid-19 misinformation on social media: experimental evidence for a scalable accuracy-nudge intervention. Psychol Sci. 2020;31(7):770–780. doi: 10.1177/0956797620939054
  • Potthast M, Kiesel J, Reinartz K, Bevendorff J, Stein B (2017) A stylometric inquiry into hyperpartisan and fake news. arXiv preprint arXiv:1702.05638
  • Previti M, Rodriguez-Fernandez V, Camacho D, Carchiolo V, Malgeri M (2020) Fake news detection using time series and user features classification. In: International conference on the applications of evolutionary computation (part of EvoStar). Springer, Berlin, pp 339–353. 10.1007/978-3-030-43722-0_22
  • Przybyla P (2020) Capturing the style of fake news. In: Proceedings of the AAAI conference on artificial intelligence, pp 490–497. 10.1609/aaai.v34i01.5386
  • Qayyum A, Qadir J, Janjua MU, Sher F. Using blockchain to rein in the new post-truth world and check the spread of fake news. IT Prof. 2019;21(4):16–24. doi: 10.1109/MITP.2019.2910503
  • Qian F, Gong C, Sharma K, Liu Y (2018) Neural user response generator: fake news detection with collective user intelligence. In: IJCAI, vol 18, pp 3834–3840. 10.24963/ijcai.2018/533
  • Raza S, Ding C. Fake news detection based on news content and social contexts: a transformer-based approach. Int J Data Sci Anal. 2022;13(4):335–362. doi: 10.1007/s41060-021-00302-z
  • Ricard J, Medeiros J (2020) Using misinformation as a political weapon: Covid-19 and Bolsonaro in Brazil. Harv Kennedy School Misinformation Rev 1(3). https://misinforeview.hks.harvard.edu/article/using-misinformation-as-a-political-weapon-covid-19-and-bolsonaro-in-brazil/
  • Roozenbeek J, van der Linden S. Fake news game confers psychological resistance against online misinformation. Palgrave Commun. 2019;5(1):1–10. doi: 10.1057/s41599-019-0279-9
  • Roozenbeek J, van der Linden S, Nygren T. Prebunking interventions based on the psychological theory of “inoculation” can reduce susceptibility to misinformation across cultures. Harv Kennedy School Misinformation Rev. 2020. doi: 10.37016//mr-2020-008
  • Roozenbeek J, Schneider CR, Dryhurst S, Kerr J, Freeman AL, Recchia G, Van Der Bles AM, Van Der Linden S. Susceptibility to misinformation about Covid-19 around the world. R Soc Open Sci. 2020;7(10):201199. doi: 10.1098/rsos.201199
  • Rubin VL, Conroy N, Chen Y, Cornwell S (2016) Fake news or truth? Using satirical cues to detect potentially misleading news. In: Proceedings of the second workshop on computational approaches to deception detection, pp 7–17
  • Ruchansky N, Seo S, Liu Y (2017) CSI: a hybrid deep model for fake news detection. In: Proceedings of the 2017 ACM on conference on information and knowledge management, pp 797–806. 10.1145/3132847.3132877
  • Schuyler AJ (2019) Regulating facts: a procedural framework for identifying, excluding, and deterring the intentional or knowing proliferation of fake news online. Univ Ill JL Technol Pol’y, vol 2019, pp 211–240
  • Shae Z, Tsai J (2019) AI blockchain platform for trusting news. In: 2019 IEEE 39th international conference on distributed computing systems (ICDCS), IEEE, pp 1610–1619. 10.1109/ICDCS.2019.00160
  • Shang W, Liu M, Lin W, Jia M (2018) Tracing the source of news based on blockchain. In: 2018 IEEE/ACIS 17th international conference on computer and information science (ICIS), IEEE, pp 377–381. 10.1109/ICIS.2018.8466516
  • Shao C, Ciampaglia GL, Flammini A, Menczer F (2016) Hoaxy: a platform for tracking online misinformation. In: Proceedings of the 25th international conference companion on world wide web, pp 745–750. 10.1145/2872518.2890098
  • Shao C, Ciampaglia GL, Varol O, Yang KC, Flammini A, Menczer F. The spread of low-credibility content by social bots. Nat Commun. 2018;9(1):1–9. doi: 10.1038/s41467-018-06930-7
  • Shao C, Hui PM, Wang L, Jiang X, Flammini A, Menczer F, Ciampaglia GL. Anatomy of an online misinformation network. PLoS ONE. 2018;13(4):e0196087. doi: 10.1371/journal.pone.0196087
  • Sharma K, Qian F, Jiang H, Ruchansky N, Zhang M, Liu Y. Combating fake news: a survey on identification and mitigation techniques. ACM Trans Intell Syst Technol (TIST). 2019;10(3):1–42. doi: 10.1145/3305260
  • Sharma K, Seo S, Meng C, Rambhatla S, Liu Y (2020) Covid-19 on social media: analyzing misinformation in Twitter conversations. arXiv preprint arXiv:2003.12309
  • Shen C, Kasra M, Pan W, Bassett GA, Malloch Y, O’Brien JF. Fake images: the effects of source, intermediary, and digital media literacy on contextual assessment of image credibility online. New Media Soc. 2019;21(2):438–463. doi: 10.1177/1461444818799526
  • Sherman IN, Redmiles EM, Stokes JW (2020) Designing indicators to combat fake media. arXiv preprint arXiv:2010.00544
  • Shi P, Zhang Z, Choo KKR. Detecting malicious social bots based on clickstream sequences. IEEE Access. 2019;7:28855–28862. doi: 10.1109/ACCESS.2019.2901864
  • Shu K, Sliva A, Wang S, Tang J, Liu H. Fake news detection on social media: a data mining perspective. ACM SIGKDD Explor Newsl. 2017;19(1):22–36. doi: 10.1145/3137597.3137600
  • Shu K, Mahudeswaran D, Wang S, Lee D, Liu H (2018a) FakeNewsNet: a data repository with news content, social context and spatiotemporal information for studying fake news on social media. arXiv preprint arXiv:1809.01286. 10.1089/big.2020.0062
  • Shu K, Wang S, Liu H (2018b) Understanding user profiles on social media for fake news detection. In: 2018 IEEE conference on multimedia information processing and retrieval (MIPR), IEEE, pp 430–435. 10.1109/MIPR.2018.00092
  • Shu K, Wang S, Liu H (2019a) Beyond news contents: the role of social context for fake news detection. In: Proceedings of the twelfth ACM international conference on web search and data mining, pp 312–320. 10.1145/3289600.3290994
  • Shu K, Zhou X, Wang S, Zafarani R, Liu H (2019b) The role of user profiles for fake news detection. In: Proceedings of the 2019 IEEE/ACM international conference on advances in social networks analysis and mining, pp 436–439. 10.1145/3341161.3342927
  • Shu K, Bhattacharjee A, Alatawi F, Nazer TH, Ding K, Karami M, Liu H. Combating disinformation in a social media age. Wiley Interdiscip Rev Data Min Knowl Discov. 2020;10(6):e1385. doi: 10.1002/widm.1385
  • Shu K, Mahudeswaran D, Wang S, Liu H. Hierarchical propagation networks for fake news detection: investigation and exploitation. Proc Int AAAI Conf Web Soc Media. 2020;14:626–637
  • Shu K, Wang S, Lee D, Liu H (2020c) Mining disinformation and fake news: concepts, methods, and recent advancements. In: Disinformation, misinformation, and fake news in social media. Springer, Berlin, pp 1–19. 10.1007/978-3-030-42699-6_1
  • Shu K, Zheng G, Li Y, Mukherjee S, Awadallah AH, Ruston S, Liu H (2020d) Early detection of fake news with multi-source weak social supervision. In: ECML/PKDD (3), pp 650–666
  • Singh VK, Ghosh I, Sonagara D. Detecting fake news stories via multimodal analysis. J Am Soc Inf Sci. 2021;72(1):3–17. doi: 10.1002/asi.24359
  • Sintos S, Agarwal PK, Yang J (2019) Selecting data to clean for fact checking: minimizing uncertainty vs. maximizing surprise. Proc VLDB Endow 12(13):2408–2421. 10.14778/3358701.3358708
  • Snow J (2017) Can AI win the war against fake news? MIT Technology Review. Online: https://www.technologyreview.com/s/609717/can-ai-win-the-war-against-fake-news/ . Accessed 3 Oct 2020
  • Song G, Kim S, Hwang H, Lee K (2019) Blockchain-based notarization for social media. In: 2019 IEEE international conference on consumer electronics (ICCE), IEEE, pp 1–2. 10.1109/ICCE.2019.8661978
  • Starbird K, Arif A, Wilson T (2019) Disinformation as collaborative work: surfacing the participatory nature of strategic information operations. In: Proceedings of the ACM on human–computer interaction, vol 3 (CSCW), pp 1–26. 10.1145/3359229
  • Sterrett D, Malato D, Benz J, Kantor L, Tompson T, Rosenstiel T, Sonderman J, Loker K, Swanson E (2018) Who shared it? How Americans decide what news to trust on social media. Technical report, NORC working paper series, WP-2018-001, pp 1–24
  • Sutton RM, Douglas KM. Conspiracy theories and the conspiracy mindset: implications for political ideology. Curr Opin Behav Sci. 2020;34:118–122. doi: 10.1016/j.cobeha.2020.02.015
  • Tandoc EC Jr, Thomas RJ, Bishop L. What is (fake) news? Analyzing news values (and more) in fake stories. Media Commun. 2021;9(1):110–119. doi: 10.17645/mac.v9i1.3331
  • Tchakounté F, Faissal A, Atemkeng M, Ntyam A. A reliable weighting scheme for the aggregation of crowd intelligence to detect fake news. Information. 2020;11(6):319. doi: 10.3390/info11060319
  • Tchechmedjiev A, Fafalios P, Boland K, Gasquet M, Zloch M, Zapilko B, Dietze S, Todorov K (2019) ClaimsKG: a knowledge graph of fact-checked claims. In: International semantic web conference. Springer, Berlin, pp 309–324. 10.1007/978-3-030-30796-7_20
  • Treen KMd, Williams HT, O’Neill SJ. Online misinformation about climate change. Wiley Interdiscip Rev Clim Change. 2020;11(5):e665. doi: 10.1002/wcc.665
  • Tsang SJ. Motivated fake news perception: the impact of news sources and policy support on audiences’ assessment of news fakeness. J Mass Commun Q. 2020 doi: 10.1177/1077699020952129. [ CrossRef ] [ Google Scholar ]
  • Tschiatschek S, Singla A, Gomez Rodriguez M, Merchant A, Krause A (2018) Fake news detection in social networks via crowd signals. In: Companion proceedings of the the web conference 2018, pp 517–524. 10.1145/3184558.3188722
  • Uppada SK, Manasa K, Vidhathri B, Harini R, Sivaselvan B. Novel approaches to fake news and fake account detection in OSNS: user social engagement and visual content centric model. Soc Netw Anal Min. 2022; 12 (1):1–19. doi: 10.1007/s13278-022-00878-9. [ PMC free article ] [ PubMed ] [ CrossRef ] [ Google Scholar ]
  • Van der Linden S, Roozenbeek J (2020) Psychological inoculation against fake news. In: Accepting, sharing, and correcting misinformation, the psychology of fake news. 10.4324/9780429295379-11
  • Van der Linden S, Panagopoulos C, Roozenbeek J. You are fake news: political bias in perceptions of fake news. Media Cult Soc. 2020; 42 (3):460–470. doi: 10.1177/0163443720906992. [ CrossRef ] [ Google Scholar ]
  • Valenzuela S, Muñiz C, Santos M. Social media and belief in misinformation in mexico: a case of maximal panic, minimal effects? Int J Press Polit. 2022 doi: 10.1177/19401612221088988. [ CrossRef ] [ Google Scholar ]
  • Vasu N, Ang B, Teo TA, Jayakumar S, Raizal M, Ahuja J (2018) Fake news: national security in the post-truth era. RSIS
  • Vereshchaka A, Cosimini S, Dong W (2020) Analyzing and distinguishing fake and real news to mitigate the problem of disinformation. In: Computational and mathematical organization theory, pp 1–15. 10.1007/s10588-020-09307-8
  • Verstraete M, Bambauer DE, Bambauer JR (2017) Identifying and countering fake news. Arizona legal studies discussion paper 73(17-15). 10.2139/ssrn.3007971
  • Vilmer J, Escorcia A, Guillaume M, Herrera J (2018) Information manipulation: a challenge for our democracies. In: Report by the Policy Planning Staff (CAPS) of the ministry for europe and foreign affairs, and the institute for strategic research (RSEM) of the Ministry for the Armed Forces
  • Vishwakarma DK, Varshney D, Yadav A. Detection and veracity analysis of fake news via scrapping and authenticating the web search. Cogn Syst Res. 2019; 58 :217–229. doi: 10.1016/j.cogsys.2019.07.004. [ CrossRef ] [ Google Scholar ]
  • Vlachos A, Riedel S (2014) Fact checking: task definition and dataset construction. In: Proceedings of the ACL 2014 workshop on language technologies and computational social science, pp 18–22. 10.3115/v1/W14-2508
  • von der Weth C, Abdul A, Fan S, Kankanhalli M (2020) Helping users tackle algorithmic threats on social media: a multimedia research agenda. In: Proceedings of the 28th ACM international conference on multimedia, pp 4425–4434. 10.1145/3394171.3414692
  • Vosoughi S, Roy D, Aral S. The spread of true and false news online. Science. 2018; 359 (6380):1146–1151. doi: 10.1126/science.aap9559. [ PubMed ] [ CrossRef ] [ Google Scholar ]
  • Vraga EK, Bode L. Using expert sources to correct health misinformation in social media. Sci Commun. 2017; 39 (5):621–645. doi: 10.1177/1075547017731776. [ CrossRef ] [ Google Scholar ]
  • Waldman AE. The marketplace of fake news. Univ Pa J Const Law. 2017; 20 :845. [ Google Scholar ]
  • Wang WY (2017) “Liar, liar pants on fire”: a new benchmark dataset for fake news detection. arXiv preprint arXiv:1705.00648
  • Wang L, Wang Y, de Melo G, Weikum G. Understanding archetypes of fake news via fine-grained classification. Soc Netw Anal Min. 2019; 9 (1):1–17. doi: 10.1007/s13278-019-0580-z. [ CrossRef ] [ Google Scholar ]
  • Wang Y, Han H, Ding Y, Wang X, Liao Q (2019b) Learning contextual features with multi-head self-attention for fake news detection. In: International conference on cognitive computing. Springer, Berlin, pp 132–142. 10.1007/978-3-030-23407-2_11
  • Wang Y, McKee M, Torbica A, Stuckler D. Systematic literature review on the spread of health-related misinformation on social media. Soc Sci Med. 2019; 240 :112552. doi: 10.1016/j.socscimed.2019.112552. [ PMC free article ] [ PubMed ] [ CrossRef ] [ Google Scholar ]
  • Wang Y, Yang W, Ma F, Xu J, Zhong B, Deng Q, Gao J (2020) Weak supervision for fake news detection via reinforcement learning. In: Proceedings of the AAAI conference on artificial intelligence, pp 516–523. 10.1609/aaai.v34i01.5389
  • Wardle C (2017) Fake news. It’s complicated. Online: https://medium.com/1st-draft/fake-news-its-complicated-d0f773766c79 . Accessed 3 Oct 2020
  • Wardle C. The need for smarter definitions and practical, timely empirical research on information disorder. Digit J. 2018; 6 (8):951–963. doi: 10.1080/21670811.2018.1502047. [ CrossRef ] [ Google Scholar ]
  • Wardle C, Derakhshan H. Information disorder: toward an interdisciplinary framework for research and policy making. Council Eur Rep. 2017; 27 :1–107. [ Google Scholar ]
  • Weiss AP, Alwan A, Garcia EP, Garcia J. Surveying fake news: assessing university faculty’s fragmented definition of fake news and its impact on teaching critical thinking. Int J Educ Integr. 2020; 16 (1):1–30. doi: 10.1007/s40979-019-0049-x. [ CrossRef ] [ Google Scholar ]
  • Wu L, Liu H (2018) Tracing fake-news footprints: characterizing social media messages by how they propagate. In: Proceedings of the eleventh ACM international conference on web search and data mining, pp 637–645. 10.1145/3159652.3159677
  • Wu L, Rao Y (2020) Adaptive interaction fusion networks for fake news detection. arXiv preprint arXiv:2004.10009
  • Wu L, Morstatter F, Carley KM, Liu H. Misinformation in social media: definition, manipulation, and detection. ACM SIGKDD Explor Newsl. 2019; 21 (2):80–90. doi: 10.1145/3373464.3373475. [ CrossRef ] [ Google Scholar ]
  • Wu Y, Ngai EW, Wu P, Wu C. Fake news on the internet: a literature review, synthesis and directions for future research. Intern Res. 2022 doi: 10.1108/INTR-05-2021-0294. [ CrossRef ] [ Google Scholar ]
  • Xu K, Wang F, Wang H, Yang B. Detecting fake news over online social media via domain reputations and content understanding. Tsinghua Sci Technol. 2019; 25 (1):20–27. doi: 10.26599/TST.2018.9010139. [ CrossRef ] [ Google Scholar ]
  • Yang F, Pentyala SK, Mohseni S, Du M, Yuan H, Linder R, Ragan ED, Ji S, Hu X (2019a) Xfake: explainable fake news detector with visualizations. In: The world wide web conference, pp 3600–3604. 10.1145/3308558.3314119
  • Yang X, Li Y, Lyu S (2019b) Exposing deep fakes using inconsistent head poses. In: ICASSP 2019-2019 IEEE international conference on acoustics, speech and signal processing (ICASSP), IEEE, pp 8261–8265. 10.1109/ICASSP.2019.8683164
  • Yaqub W, Kakhidze O, Brockman ML, Memon N, Patil S (2020) Effects of credibility indicators on social media news sharing intent. In: Proceedings of the 2020 CHI conference on human factors in computing systems, pp 1–14. 10.1145/3313831.3376213
  • Yavary A, Sajedi H, Abadeh MS. Information verification in social networks based on user feedback and news agencies. Soc Netw Anal Min. 2020; 10 (1):1–8. doi: 10.1007/s13278-019-0616-4. [ CrossRef ] [ Google Scholar ]
  • Yazdi KM, Yazdi AM, Khodayi S, Hou J, Zhou W, Saedy S. Improving fake news detection using k-means and support vector machine approaches. Int J Electron Commun Eng. 2020; 14 (2):38–42. doi: 10.5281/zenodo.3669287. [ CrossRef ] [ Google Scholar ]
  • Zannettou S, Sirivianos M, Blackburn J, Kourtellis N. The web of false information: rumors, fake news, hoaxes, clickbait, and various other shenanigans. J Data Inf Qual (JDIQ) 2019; 11 (3):1–37. doi: 10.1145/3309699. [ CrossRef ] [ Google Scholar ]
  • Zellers R, Holtzman A, Rashkin H, Bisk Y, Farhadi A, Roesner F, Choi Y (2019) Defending against neural fake news. arXiv preprint arXiv:1905.12616
  • Zhang X, Ghorbani AA. An overview of online fake news: characterization, detection, and discussion. Inf Process Manag. 2020; 57 (2):102025. doi: 10.1016/j.ipm.2019.03.004. [ CrossRef ] [ Google Scholar ]
  • Zhang J, Dong B, Philip SY (2020) Fakedetector: effective fake news detection with deep diffusive neural network. In: 2020 IEEE 36th international conference on data engineering (ICDE), IEEE, pp 1826–1829. 10.1109/ICDE48307.2020.00180
  • Zhang Q, Lipani A, Liang S, Yilmaz E (2019a) Reply-aided detection of misinformation via Bayesian deep learning. In: The world wide web conference, pp 2333–2343. 10.1145/3308558.3313718
  • Zhang X, Karaman S, Chang SF (2019b) Detecting and simulating artifacts in GAN fake images. In: 2019 IEEE international workshop on information forensics and security (WIFS), IEEE, pp 1–6 10.1109/WIFS47025.2019.9035107
  • Zhou X, Zafarani R. A survey of fake news: fundamental theories, detection methods, and opportunities. ACM Comput Surv (CSUR) 2020; 53 (5):1–40. doi: 10.1145/3395046. [ CrossRef ] [ Google Scholar ]
  • Zubiaga A, Aker A, Bontcheva K, Liakata M, Procter R. Detection and resolution of rumours in social media: a survey. ACM Comput Surv (CSUR) 2018; 51 (2):1–36. doi: 10.1145/3161603. [ CrossRef ] [ Google Scholar ]

A business journal from the Wharton School of the University of Pennsylvania

The Impact of Social Media: Is it Irreplaceable?

July 26, 2019

Social media as we know it has barely reached its 20th birthday, but it’s changed the fabric of everyday life. What does the future hold for the sector and the players currently at the top?

In little more than a decade, the impact of social media has gone from being an entertaining extra to a fully integrated part of nearly every aspect of daily life for many.

Recently in the realm of commerce, Facebook faced skepticism in its testimony to the Senate Banking Committee on Libra, its proposed cryptocurrency and alternative financial system . In politics, heartthrob Justin Bieber tweeted the President of the United States, imploring him to “let those kids out of cages.” In law enforcement, the Philadelphia police department moved to terminate more than a dozen police officers after their racist comments on social media were revealed.

And in the ultimate meshing of the digital and physical worlds, Elon Musk raised the specter of essentially removing the space between social and media through the invention — at some future time — of a brain implant that connects human tissue to computer chips.

All this, in the span of about a week.

As quickly as social media has insinuated itself into politics, the workplace, home life, and elsewhere, it continues to evolve at lightning speed, making it tricky to predict which way it will morph next. It’s hard to recall now, but SixDegrees.com, Friendster, and Makeoutclub.com were each once the next big thing, while one survivor has continued to grow in astonishing ways. In 2006, Facebook had 7.3 million registered users and reportedly turned down a $750 million buyout offer. In the first quarter of 2019, the company could claim 2.38 billion active users, with a market capitalization hovering around half a trillion dollars.

“In 2007 I argued that Facebook might not be around in 15 years. I’m clearly wrong, but it is interesting to see how things have changed,” says Jonah Berger, Wharton marketing professor and author of Contagious: Why Things Catch On . The challenge going forward is not just having the best features, but staying relevant, he says. “Social media isn’t a utility. It’s not like power or water where all people care about is whether it works. Young people care about what using one platform or another says about them. It’s not cool to use the same site as your parents and grandparents, so they’re always looking for the hot new thing.”

Just a dozen years ago, everyone was talking about a different set of social networking services, “and I don’t think anyone quite expected Facebook to become so huge and so dominant,” says Kevin Werbach, Wharton professor of legal studies and business ethics. “At that point, this was an interesting discussion about tech start-ups.

“Today, Facebook is one of the most valuable companies on earth and front and center in a whole range of public policy debates, so the scope of issues we’re thinking about with social media are broader than then,” Werbach adds.

Cambridge Analytica , the impact of social media on the last presidential election and other issues may have eroded public trust, Werbach said, but “social media has become really fundamental to the way that billions of people get information about the world and connect with each other, which raises the stakes enormously.”

Just Say No

“Facebook is dangerous,” said Sen. Sherrod Brown (D-Ohio) at July’s hearing of the Senate Banking Committee. “Facebook has said, ‘just trust us.’ And every time Americans trust you, they seem to get burned.”

Social media has plenty of detractors, but by and large, do Americans agree with Brown’s sentiment? In 2018, 42% of respondents to a Pew Research Center survey said they had taken a break from checking Facebook for a period of several weeks or more, while 26% said they had deleted the Facebook app from their cellphone.

A year later, though, despite the reputational beating social media had taken, the 2019 iteration of the same Pew survey found social media use unchanged from 2018.

Facebook has its critics, says Wharton marketing professor Pinar Yildirim, and they are mainly concerned about two things: mishandling consumer data and poorly managing access to it by third-party providers; and the level of disinformation spreading on Facebook.

“Social media isn’t a utility. It’s not like power or water where all people care about is whether it works. Young people care about what using one platform or another says about them.” –Jonah Berger

“The question is, are we at a point where the social media organizations and their activities should be regulated for the benefit of the consumer? I do not think more regulation will necessarily help, but certainly this is what is on the table,” says Yildirim. “In the period leading to the [2020 U.S. presidential] elections, we will hear a range of discussions about regulation on the tech industry.”

Some proposals relate to stricter regulation on collection and use of consumer data, Yildirim adds, noting that the European Union already moved to stricter regulations last year by adopting the General Data Protection Regulation (GDPR) . “A number of companies in the U.S. and around the world adopted the GDPR protocol for all of their customers, not just for the residents of EU,” she says. “We will likely hear more discussions on regulation of such data, and we will likely see stricter regulation of this data.”

The other discussion bound to intensify is around the separation of Big Tech into smaller, easier to regulate units. “Most of us academics do not think that dividing organizations into smaller units is sufficient to improve their compliance with regulation. It also does not necessarily mean they will be less competitive,” says Yildirim. “For instance, in the discussion of Facebook, it is not even clear yet how breaking up the company would work, given that it does not have very clear boundaries between different business units.”

Even if such regulations never come to pass, the discussions “may nevertheless hurt Big Tech financially, given that most companies are publicly traded and it adds to the uncertainty,” Yildirim notes.

One prominent commentator about the negative impact of social media is Jaron Lanier, whose fervent opposition makes itself apparent in the plainspoken title of his 2018 book Ten Arguments for Deleting Your Social Media Accounts Right Now . He cites loss of free will, social media’s erosion of the truth and destruction of empathy, its tendency to make people unhappy, and the way in which it is “making politics impossible.” The title of the last chapter: “Social Media Hates Your Soul.”

Lanier is no tech troglodyte. A polymath who bridges the digital and analog realms, he is a musician and writer, has worked as a scientist for Microsoft, and was co-founder of pioneering virtual reality company VPL Research. The nastiness that online existence brings out in users “turned out to be like crude oil for the social media companies and other behavior manipulation empires that quickly came to dominate the internet, because it fuelled negative behavioral feedback,” he writes.

“Social media has become really fundamental to the way that billions of people get information about the world and connect with each other, which raises the stakes enormously.” –Kevin Werbach

Worse, there is an addictive quality to social media, and that is a big issue, says Berger. “Social media is like a drug, but what makes it particularly addictive is that it is adaptive. It adjusts based on your preferences and behaviors,” he says, “which makes it both more useful and engaging and interesting, and more addictive.”

The effect of that drug on mental health is only beginning to be examined, but a recent University of Pennsylvania study makes the case that limiting use of social media can be a good thing. Researchers looked at a group of 143 Penn undergraduates, using baseline monitoring and randomly assigning each to either a group limiting Facebook, Instagram, and Snapchat use to 10 minutes per platform per day, or to one told to use social media as usual for three weeks. The results, published in the Journal of Social and Clinical Psychology , showed significant reductions in loneliness and depression over three weeks in the group limiting use compared to the control group.

However, “both groups showed significant decreases in anxiety and fear of missing out over baseline, suggesting a benefit of increased self-monitoring,” wrote the authors of “ No More FOMO: Limiting Social Media Decreases Loneliness and Depression .”

Monetizing a League (and a Reality) All Their Own

No one, though, is predicting that social media is a fad that will pass like its analog antecedent of the 1970s, citizens band radio. It will, however, evolve. The idea of social media as just a way to reconnect with high school friends seems quaint now. The impact of social media today is a big tent, including not only networks like Facebook, but also forums like Reddit and video-sharing platforms.

“The question is, are we at a point where the social media organizations and their activities should be regulated for the benefit of the consumer?” –Pinar Yildirim

Virtual worlds and gaming have become a major part of the sector, too. Wharton marketing professor Peter Fader says gamers are creating their own user-generated content through virtual worlds — and the revenue to go with it. He points to one group of gamers that use Grand Theft Auto as a kind of stage or departure point “to have their own virtual show.” In NoPixel, the Grand Theft Auto roleplaying server, “not much really happens and millions are tuning in to watch them. Just watching, not even participating, and it’s either live-streamed or recorded. And people are making donations to support this thing. The gamers are making hundreds of thousands of dollars.

“Now imagine having a 30-person reality show all filmed live and you can take the perspective of one person and then watch it again from another person’s perspective,” he continues. “Along the way, they can have a tip jar or talk about things they endorse. That kind of immersive media starts to build the bridge to what we like to get out of TV, but even better. Those things are on the periphery right now, but I think they are going to take over.”

Big players have noticed the potential of virtual sports and are getting into the act. In a striking example of the physical world imitating the digital one, media companies are putting up real-life stadiums where teams compete in video games. Comcast Spectacor in March announced that it is building a new $50 million stadium in South Philadelphia that will be the home of the Philadelphia Fusion, the city’s e-sports team in the Overwatch League.

E-sports is serious business, with revenues globally — including advertising, sponsorships, and media rights — expected to reach $1.1 billion in 2019, according to gaming industry analytics company Newzoo.

“E-sports is absolutely here to stay,” says Fader, “and I think it’s a safe bet to say that e-sports will dominate most traditional sports, managing far more revenue and having more impact on our consciousness than baseball.”

It’s no surprise, then, that Facebook has begun making deals to carry e-sports content. In fact, it is diversification like this that may keep Facebook from ending up like its failed upstart peers. One thing Facebook has managed to do that MySpace, Friendster, and others didn’t is “a very good job of creating functional integration with the value they are delivering; as opposed to being a place to just share photos or send messages, it serves a lot of diversified functions,” says Keith E. Niedermeier, director of Wharton’s undergraduate marketing program and an adjunct professor of marketing. “They are creating groups and group connections, but you see them moving into lots of other services like streaming entertainment, mobile payments, and customer-to-customer buying and selling.”

“[WeChat] has really instantiated itself as a day-to-day tool in China, and it’s clear to me that Facebook would like to emulate that sort of thing.” –Keith Niedermeier

In China, WeChat has become the biggest mobile payment platform in the world and it is the platform for many third-party apps for things like bike sharing and ordering airplane tickets. “It has really instantiated itself as a day-to-day tool in China, and it’s clear to me that Facebook would like to emulate that sort of thing,” says Niedermeier.

Among nascent social media platforms that are particularly promising right now, Yildirim says that “social media platforms which are directed at achieving some objectives with smaller scale and more homogenous people stand a higher chance of entering the market and being able to compete with large, general-purpose platforms such as Facebook and Twitter.”

Irreplaceable – and Damaging?

Of course, many have begun to believe that the biggest challenge around the impact of social media may be the way it is changing society. The “attention-grabbing algorithms underlying social media … propel authoritarian practices that aim to sow confusion, ignorance, prejudice, and chaos, thereby facilitating manipulation and undermining accountability,” writes University of Toronto political science professor Ronald Deibert in a January essay in the Journal of Democracy .

Berger notes that any piece of information can now get attention, whether it is true or false. This means more potential for movements both welcome as well as malevolent. “Before, only media companies had reach, so it was harder for false information to spread. It could happen, but it was slow. Now anyone can share anything, and because people tend to believe what they see, false information can spread just as, if not more easily, than the truth.

“It’s certainly allowed more things to bubble up rather than flow from the top down,” says Berger. Absent gatekeepers, “everyone is their own media company, broadcasting to the particular set of people that follow them. It used to be that a major label signing you was the path to stardom. Now artists can build their own following online and break through that way. Social media has certainly made fame and attention more democratic, though not always in a good way.”

Deibert writes that “in a short period of time, digital technologies have become pervasive and deeply embedded in all that we do. Unwinding them completely is neither possible nor desirable.”

His cri de coeur argues: that citizens have the right to know what companies and governments are doing with their personal data, and that this right be extended internationally to hold autocratic regimes to account; that companies be barred from selling products and services that enable infringements on human rights and harms to civil society; for the creation of independent agencies with real power to hold social-media platforms to account; and the creation and enforcement of strong antitrust laws to end dominance of a very few social-media companies.

“Social media has certainly made fame and attention more democratic, though not always in a good way.” –Jonah Berger

The rising tide of concern is now extending across sectors. The U.S. Justice Department has recently begun an anti-trust investigation into how tech companies operate in social media, search, and retail services. In July, the John S. and James L. Knight Foundation announced the award of nearly $50 million in new funding to 11 U.S. universities to research how technology is transforming democracy. The foundation is also soliciting additional grant proposals to fund policy and legal research into the “rules, norms, and governance” that should be applied to social media and technology companies.

Given all of the reasons not to engage with social media — the privacy issues, the slippery-slope addiction aspect of it, its role in spreading incivility — do we want to try to put the genie back in the bottle? Can we? Does social media definitely have a future?

“Yes, surely it does,” says Yildirim. “Social connections are fabrics of society. Just as the telegraph or telephone as an innovation of communication did not reduce social connectivity, online social networks did not either. If anything, it likely increased connectivity, or reduced the cost of communicating with others.”

It is thanks to online social networks that individuals likely have larger social networks, she says, and while many criticize the fact that we are in touch with large numbers of individuals in a superficial way, these light connections may nevertheless be contributing to our lives when it comes to economic and social outcomes — ranging from finding jobs to meeting new people.

“We are used to being in contact with more individuals, and it is easier to remain in contact with people we only met once. Giving up on this does not seem likely for humans,” she says. “The technology with which we keep in touch may change, may evolve, but we will have social connections and platforms which enable them. Facebook may be gone in 10 years, but there will be something else.”


How should social media platforms combat misinformation and hate speech?

Niam Yaraghi, Nonresident Senior Fellow, Governance Studies, Center for Technology Innovation

April 9, 2019

Social media companies are under increased scrutiny for their mishandling of hateful speech and fake news on their platforms. There are two ways to consider a social media platform: On one hand, we can view platforms as technologies that merely enable individuals to publish and share content, a figurative blank sheet of paper on which anyone can write anything. On the other hand, one can argue that social media platforms have now evolved into curators of content. I argue that these companies should take some responsibility for the content that is published on their platforms, and I suggest a set of strategies to help them deal with fake news and hate speech.

Artificial and Human Intelligence together

At the outset, social media companies positioned themselves as holding no accountability for the content published on their platforms. In the intervening years, they have set up a mix of automated and human-driven editorial processes to promote or filter certain types of content. In addition, their users increasingly rely on these platforms as their primary source of news. Twitter moments , in which you can see a brief snapshot of the daily news, is a prime example of how Twitter is getting closer to becoming a news medium. As social media practically becomes news media, its level of responsibility for the content it distributes should increase accordingly.

While I believe it is naïve to consider social media as merely neutral content sharing technologies with no responsibility, I do not believe that we should either have the same level of editorial expectation from social media that we have from traditional news media.

The sheer volume of content shared on social media makes it impossible to establish a comprehensive editorial system. Take Twitter as an example: an estimated 500 million tweets are sent per day. Assuming that each tweet contains 20 words on average, the volume of content published on Twitter in a single day is equivalent to that of the New York Times over 182 years. Moreover, the terminology and focus of hate speech change over time, and most fake news articles contain some level of truthfulness. Therefore, social media companies cannot rely solely on artificial intelligence or humans to monitor and edit their content. They should instead develop approaches that combine artificial and human intelligence.
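The scale comparison can be sanity-checked with quick arithmetic. The Times word count used below is a rough, hypothetical figure chosen to make the numbers comparable, not a statistic from the article:

```python
# Back-of-the-envelope check of the Twitter-vs-NYT volume comparison.
TWEETS_PER_DAY = 500_000_000
WORDS_PER_TWEET = 20            # average assumed in the text
NYT_WORDS_PER_DAY = 150_000     # hypothetical estimate for one day's paper

twitter_words_per_day = TWEETS_PER_DAY * WORDS_PER_TWEET  # 10 billion words
years_of_nyt = twitter_words_per_day / (NYT_WORDS_PER_DAY * 365)
print(round(years_of_nyt))      # roughly 183, close to the article's 182
```

Under these assumptions, one day of Twitter output does indeed correspond to nearly two centuries of a daily newspaper.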

Finding the needle in a haystack

To overcome the editorial challenges of so much content, I suggest that the companies focus on a limited number of topics deemed important and carrying significant consequences. The anti-vaccination movement and those who believe in flat-earth theory both spread anti-scientific and fake content. However, the consequences of believing that vaccines cause harm are eminently more dangerous than believing that the earth is flat: the former creates serious public health problems; the latter makes for a good laugh at a bar. Social media companies should convene groups of experts in various domains to constantly monitor the major topics in which fake news or hate speech may cause serious harm.

It is also important to consider how recommendation algorithms on social media platforms may inadvertently promote fake and hateful speech. At their core, these recommendation systems group users by shared interests and then promote the same type of content to everyone within each group. If most users in a group are interested in, say, both flat-earth theory and anti-vaccination hoaxes, the algorithm will promote anti-vaccination content to users in the same group who may only be interested in flat-earth theory. Over time, exposure to such promoted content could turn users who initially trusted vaccines into skeptics. Once the major areas of focus for combating fake and hateful speech are determined, social media companies can tweak their recommendation systems fairly easily so that they no longer nudge users toward harmful content.

Once that limited set of topics is identified, social media companies should decide how to fight the spread of such content. In rare instances, the most appropriate response is to censor and ban the content without hesitation. Examples include posts that incite violence or invite others to commit crimes. The recent New Zealand attack, in which the shooter live-streamed his heinous crimes on Facebook, is a prime example of content that should never have been allowed to be posted and shared on the platform.

Facebook currently relies on its community of users to flag such content, then uses an army of human reviewers to assess flagged content within 24 hours and determine whether it actually violates the platform's terms of use. Live content is reviewed by humans once it reaches a certain level of popularity. While it is easier to use artificial intelligence to monitor textual content in real time, our technologies for analyzing images and videos are advancing quickly. For example, Yahoo! recently made public its algorithms for detecting offensive and adult images. Facebook's AI algorithms are becoming smart enough to detect and flag non-consensual intimate images.
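The hybrid process described above — user flags feeding a human review queue, with live content escalated once it becomes popular — can be sketched as a priority queue. The thresholds, the keyword-based risk scorer, and the priority scheme below are illustrative assumptions, not Facebook's actual pipeline.

```python
# Sketch of a hybrid human/AI review queue: user flags and AI scores
# enqueue posts for human review; popular live streams jump the queue.
import heapq
import itertools

LIVE_VIEWER_THRESHOLD = 1000   # assumed popularity cutoff for live content

_tiebreak = itertools.count()
review_queue = []  # min-heap: lower tuple = reviewed sooner

def ai_risk_score(text):
    # Placeholder for a real classifier; flags a couple of keywords.
    return 0.9 if any(w in text.lower() for w in ("violence", "attack")) else 0.1

def flag_post(post_id, text, user_flags=0, live_viewers=0):
    score = ai_risk_score(text)
    escalate = live_viewers >= LIVE_VIEWER_THRESHOLD
    if user_flags > 0 or score > 0.5 or escalate:
        # Escalated live content first, then by descending AI risk.
        priority = (0 if escalate else 1, -score)
        heapq.heappush(review_queue, (priority, next(_tiebreak), post_id))

flag_post("p1", "cute cat video", user_flags=1)
flag_post("p2", "stream inciting violence", live_viewers=5000)
print([item[2] for item in sorted(review_queue)])  # → ['p2', 'p1']
```

The popular live stream is reviewed before the merely user-flagged post, mirroring the escalation rule described in the text.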

Fight misinformation with information

Currently, social media companies have adopted two approaches to fighting misinformation. The first is to block such content outright: for example, Pinterest bans anti-vaccination content and Facebook bans white supremacist content. The second is to present alternative information alongside the false content so that users are exposed to accurate information. This approach, implemented by YouTube, encourages users to click on links to verified, vetted information that debunks the misguided claims made in fake or hateful content. If you search “Vaccines cause autism” on YouTube, you can still view videos posted by anti-vaxxers, but you will also be presented with a link to the Wikipedia page on the MMR vaccine, which debunks such beliefs.
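The second approach can be sketched as an annotation layer over search results: nothing is removed, but queries matching known misinformation claims get an information panel, much like YouTube's Wikipedia links. The claim list and exact-match lookup below are simplified assumptions.

```python
# Sketch of "fight misinformation with information": results are left
# intact, and a vetted link is attached when the query matches a known
# misinformation claim. Claim list and matching are illustrative.
DEBUNK_LINKS = {
    "vaccines cause autism": "https://en.wikipedia.org/wiki/MMR_vaccine",
}

def annotate_results(query, results):
    """Return results unchanged, plus an info panel for known claims."""
    panel = DEBUNK_LINKS.get(query.lower())
    return {"results": results, "info_panel": panel}

out = annotate_results("Vaccines cause autism", ["video1", "video2"])
print(out["results"])     # → ['video1', 'video2']  (nothing is censored)
print(out["info_panel"])  # → the MMR vaccine Wikipedia link
```

A production system would need fuzzy claim matching rather than an exact-string lookup, but the design point stands: the counter-information travels with the content instead of replacing it.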

While we have yet to empirically examine and compare the effectiveness of these approaches, I prefer presenting users with accurate information, allowing them to become informed and willingly abandon their misguided beliefs through exposure to reliable sources. Even if individual corrections have only a short-lived impact, a diversity of ideas will ultimately move us forward by enriching our discussions. Social media companies may be able to censor content online, but they cannot control how ideas spread offline. Unless individuals are presented with counterarguments, falsehoods and hateful ideas will spread easily, as they did before social media existed.

64% of Americans say social media have a mostly negative effect on the way things are going in the U.S. today

About two-thirds of Americans (64%) say social media have a mostly negative effect on the way things are going in the country today, according to a Pew Research Center survey of U.S. adults conducted July 13-19, 2020. Just one-in-ten Americans say social media sites have a mostly positive effect on the way things are going, and one-quarter say these platforms have a neither positive nor negative effect.

Majority of Americans say social media negatively affect the way things are going in the country today

Those who have a negative view of the impact of social media mention, in particular, misinformation and the hate and harassment they see on social media. They also have concerns about users believing everything they see or read – or not being sure about what to believe. Additionally, they bemoan social media’s role in fomenting partisanship and polarization, the creation of echo chambers, and the perception that these platforms oppose President Donald Trump and conservatives.

This is part of a series of posts on Americans’ experiences with and attitudes about the role of social media in politics today. Pew Research Center conducted this study to understand how Americans think about the impact of social media on the way things are currently going in the country. To explore this, we surveyed 10,211 U.S. adults from July 13 to 19, 2020. Everyone who took part is a member of the Center’s American Trends Panel (ATP), an online survey panel that is recruited through national, random sampling of residential addresses. This way nearly all U.S. adults have a chance of selection. The survey is weighted to be representative of the U.S. adult population by gender, race, ethnicity, partisan affiliation, education and other categories. Read more about the ATP’s methodology .

Here are the questions used for this report, along with responses, and its methodology.

The public’s views on the positive and negative effect of social media vary widely by political affiliation and ideology. Across parties, larger shares describe social media’s impact as mostly negative rather than mostly positive, but this belief is particularly widespread among Republicans.

Roughly half of Democrats and independents who lean toward the Democratic Party (53%) say social media have a largely negative effect on the way things are going in the country today, compared with 78% of Republicans and leaners who say the same. Democrats are about three times as likely as Republicans to say these sites have a mostly positive impact (14% vs. 5%) and twice as likely to say social media have neither a positive nor negative effect (32% vs. 16%).

Among Democrats, there are no differences in these views along ideological lines. Republicans, however, are slightly more divided by ideology. Conservative Republicans are more likely than moderate to liberal Republicans to say social media have a mostly negative effect (83% vs. 70%). Conversely, moderate to liberal Republicans are more likely than their conservative counterparts to say social media have a mostly positive (8% vs. 4%) or neutral impact (21% vs. 13%).

Younger adults are more likely to say social media have a positive impact on the way things are going in the country and are less likely to believe social media sites have a negative impact compared with older Americans. For instance, 15% of those ages 18 to 29 say social media have a mostly positive effect on the way things are going in the country today, while just 8% of those over age 30 say the same. Americans 18 to 29 are also less likely than those 30 and older to say social media have a mostly negative impact (54% vs. 67%).

Republicans, Democrats divided on social media’s impact on country, especially among younger adults

However, views among younger adults vary widely by partisanship. For example, 43% of Democrats ages 18 to 29 say social media have a mostly negative effect on the way things are going, compared with about three-quarters (76%) of Republicans in the same age group. In addition, these youngest Democrats are more likely than their Republican counterparts to say social media platforms have a mostly positive (20% vs. 6%) or neither a positive nor negative effect (35% vs. 18%) on the way things are going in the country today. This partisan division persists among those 30 and older, but most of the gaps are smaller than those seen within the younger cohort.

Views on the negative impact of social media vary only slightly between social media users (63%) and non-users (69%), with non-users being slightly more likely to say these sites have a negative impact. However, among social media users, those who say some or a lot of what they see on social media is related to politics are more likely than those who say a little or none of what they see on these sites is related to politics to think social media platforms have a mostly negative effect on the way things are going in the country today (65% vs. 50%).

Past Pew Research Center studies have drawn attention to the complicated relationships Americans have with social media. In 2019, a Center survey found that 72% of U.S. adults reported using at least one social media site. And while these platforms have been used for political and social activism and engagement , they also raise concerns among portions of the population. Some think political ads on these sites are unacceptable, and many object to the way social media platforms have been weaponized to spread made-up news and engender online harassment . At the same time, a share of users credit something they saw on social media with changing their views about a political or social issue. And growing shares of Americans who use these sites also report feeling worn out by political posts and discussions on social media.

Those who say social media have negative impact cite concerns about misinformation, hate, censorship; those who see positive impact cite being informed

Roughly three-in-ten who say social media have a negative effect on the country cite misinformation as reason

When asked to elaborate on the main reason why they think social media have a mostly negative effect on the way things are going in this country today, roughly three-in-ten (28%) respondents who hold that view mention the spreading of misinformation and made-up news. Smaller shares reference examples of hate, harassment, conflict and extremism (16%) as a main reason, and 11% mention a perceived lack of critical thinking skills among many users – voicing concern about people who use these sites believing everything they see or read or being unsure about what to believe.

In written responses that mention misinformation or made-up news, a portion of adults often include references to the spread, speed and amount of false information available on these platforms. (Responses are lightly edited for spelling, style and readability.) For example:

“They allow for the rampant spread of misinformation.” –Man, 36

“False information is spread at lightning speed – and false information never seems to go away.” –Woman, 71

“Social media is rampant with misinformation both about the coronavirus and political and social issues, and the social media organizations do not do enough to combat this.” –Woman, 26

“Too much misinformation and lies are promoted from unsubstantiated sources that lead people to disregard vetted and expert information.” –Woman, 64

People’s responses that centered around hate, harassment, conflict or extremism in some way often mention concerns that social media contributes to incivility online tied to anonymity, the spreading of hate-filled ideas or conspiracies, or the incitement of violence.

“People say incendiary, stupid and thoughtless things online with the perception of anonymity that they would never say to someone else in person.” –Man, 53

“Promotes hate and extreme views and in some cases violence.” –Man, 69

“People don’t respect others’ opinions. They take it personally and try to fight with the other group. You can’t share your own thoughts on controversial topics without fearing someone will try to hurt you or your family.” –Woman, 65

“Social media is where people go to say some of the most hateful things they can imagine.” –Man, 46

About one-in-ten responses talk about how people on social media can be easily confused and believe everything they see or read or are not sure about what to believe.

“People believe everything they see and don’t verify its accuracy.” –Man, 75

“Many people can’t distinguish between real and fake news and information and share it without doing proper research …” –Man, 32

“You don’t know what’s fake or real.” –Man, 49

“It is hard to discern truth.” –Woman, 80

“People cannot distinguish fact from opinion, nor can they critically evaluate sources. They tend to believe everything they read, and when they see contradictory information (particularly propaganda), they shut down and don’t appear to trust any information.” –Man, 42

Smaller shares complain that the platforms censor content or allow material that is biased (9%), too negative (7%) or too steeped in partisanship and division (6%).

“Social media is censoring views that are different than theirs. There is no longer freedom of speech.” –Woman, 42

“It creates more divide between people with different viewpoints.” –Man, 37

“Focus is on negativity and encouraging angry behavior rather than doing something to help people and make the world better.” –Woman, 66

25% of Americans who say social media have a positive impact on the country cite staying informed, aware

Far fewer Americans – 10% – say they believe social media have a mostly positive effect on the way things are going in the country today. When those who hold these positive views were asked about the main reason why they thought this, one-quarter say these sites help people stay informed and aware (25%) and about one-in-ten say they allow for communication, connection and community-building (12%).

“We are now aware of what’s happening around the world due to the social media outlet.” –Woman, 28

“It brings awareness to important issues that affect all Americans.” –Man, 60

“It brings people together; folks can see that there are others who share the same/similar experience, which is really important, especially when so many of us are isolated.” –Woman, 36

“Helps people stay connected and share experiences. I also get advice and recommendations via social media.” –Man, 32

“It keeps people connected who might feel lonely and alone if they did not have social media …” – Man, 65

Smaller shares tout social media as a place where marginalized people and groups have a voice (8%) and as a venue for activism and social movements (7%).

“Spreading activism and info and inspiring participation in Black Lives Matter.” –Woman, 31

“It gives average people an opportunity to voice and share their opinions.” –Man, 67

“Visibility – it has democratized access and provided platforms for voices who have been and continue to be oppressed.” –Woman, 27

Note: This is part of a series of blog posts leading up to the 2020 presidential election that explores the role of social media in politics today. Here are the questions used for this report, along with responses, and its methodology.

Other posts in this series:

  • 23% of users in U.S. say social media led them to change views on an issue; some cite Black Lives Matter
  • 54% of Americans say social media companies shouldn’t allow any political ads
  • 55% of U.S. social media users say they are ‘worn out’ by political posts and discussions
  • Americans think social media can help build movements, but can also be a distraction

Brooke Auxier is a former research associate focusing on internet and technology at Pew Research Center .


MIT News | Massachusetts Institute of Technology

Why social media has changed the world — and how to fix it


Sinan Aral and his new book The Hype Machine


Are you on social media a lot? When is the last time you checked Twitter, Facebook, or Instagram? Last night? Before breakfast? Five minutes ago?

If so, you are not alone — which is the point, of course. Humans are highly social creatures. Our brains have become wired to process social information, and we usually feel better when we are connected. Social media taps into this tendency.

“Human brains have essentially evolved because of sociality more than any other thing,” says Sinan Aral, an MIT professor and expert in information technology and marketing. “When you develop a population-scale technology that delivers social signals to the tune of trillions per day in real-time, the rise of social media isn’t unexpected. It’s like tossing a lit match into a pool of gasoline.”

The numbers make this clear. In 2005, about 7 percent of American adults used social media. But by 2017, 80 percent of American adults used Facebook alone. About 3.5 billion people on the planet, out of 7.7 billion, are active social media participants. Globally, during a typical day, people post 500 million tweets, share over 10 billion pieces of Facebook content, and watch over a billion hours of YouTube video.

As social media platforms have grown, though, the once-prevalent, gauzy utopian vision of online community has disappeared. Along with the benefits of easy connectivity and increased information, social media has also become a vehicle for disinformation and political attacks from beyond sovereign borders.

“Social media disrupts our elections, our economy, and our health,” says Aral, who is the David Austin Professor of Management at the MIT Sloan School of Management.

Now Aral has written a book about it. In “The Hype Machine,” published this month by Currency, a Random House imprint, Aral details why social media platforms have become so successful yet so problematic, and suggests ways to improve them.

As Aral notes, the book covers some of the same territory as “The Social Dilemma,” a documentary that is one of the most popular films on Netflix at the moment. But Aral’s book, as he puts it, “starts where ‘The Social Dilemma’ leaves off and goes one step further to ask: What can we do about it?”

“This machine exists in every facet of our lives,” Aral says. “And the question in the book is, what do we do? How do we achieve the promise of this machine and avoid the peril? We’re at a crossroads. What we do next is essential, so I want to equip people, policymakers, and platforms to help us achieve the good outcomes and avoid the bad outcomes.”

When “engagement” equals anger

“The Hype Machine” draws on Aral’s own research about social networks, as well as other findings, from the cognitive sciences, computer science, business, politics, and more. Researchers at the University of California at Los Angeles, for instance, have found that people obtain bigger hits of dopamine — the chemical in our brains highly bound up with motivation and reward — when their social media posts receive more likes.

At the same time, consider a 2018 MIT study by Soroush Vosoughi, an MIT PhD student and now an assistant professor of computer science at Dartmouth College; Deb Roy, MIT professor of media arts and sciences and executive director of the MIT Media Lab; and Aral, who has been studying social networking for 20 years. The three researchers found that on Twitter, from 2006 to 2017, false news stories were 70 percent more likely to be retweeted than true ones. Why? Most likely because false news has greater novelty value compared to the truth, and provokes stronger reactions — especially disgust and surprise.

In this light, the essential tension surrounding social media companies is that their platforms gain audiences and revenue when posts provoke strong emotional responses, often based on dubious content.

“This is a well-designed, well-thought-out machine that has objectives it maximizes,” Aral says. “The business models that run the social-media industrial complex have a lot to do with the outcomes we’re seeing — it’s an attention economy, and businesses want you engaged. How do they get engagement? Well, they give you little dopamine hits, and … get you riled up. That’s why I call it the hype machine. We know strong emotions get us engaged, so [that favors] anger and salacious content.”

From Russia to marketing

“The Hype Machine” explores both the political implications and business dimensions of social media in depth. Certainly social media is fertile terrain for misinformation campaigns. During the 2016 U.S. presidential election, Russia spread false information to at least 126 million people on Facebook and another 20 million people on Instagram (which Facebook owns), and was responsible for 10 million tweets. About 44 percent of adult Americans visited a false news source in the final weeks of the campaign.

“I think we need to be a lot more vigilant than we are,” says Aral.

We do not know if Russia’s efforts altered the outcome of the 2016 election, Aral says, though they may have been fairly effective. Curiously, it is not clear if the same is true of most U.S. corporate engagement efforts.

As Aral examines, digital advertising on most big U.S. online platforms is often wildly ineffective, with academic studies showing that the “lift” generated by ad campaigns — the extent to which they affect consumer action — has been overstated by a factor of hundreds, in some cases. Simply counting clicks on ads is not enough. Instead, online engagement tends to be more effective among new consumers, and when it is targeted well; in that sense, there is a parallel between good marketing and guerilla social media campaigns.

“The two questions I get asked the most these days,” Aral says, “are, one, did Russia succeed in intervening in our democracy? And two, how do I measure the ROI [return on investment] from marketing investments? As I was writing this book, I realized the answer to those two questions is the same.”

Ideas for improvement

“The Hype Machine” has received praise from many commentators. Foster Provost, a professor at New York University’s Stern School of Business, says it is a “masterful integration of science, business, law, and policy.” Duncan Watts, a university professor at the University of Pennsylvania, says the book is “essential reading for anyone who wants to understand how we got here and how we can get somewhere better.”

In that vein, “The Hype Machine” has several detailed suggestions for improving social media. Aral favors automated and user-generated labeling of false news, and limiting revenue-collection that is based on false content. He also calls for firms to help scholars better research the issue of election interference.

Aral believes federal privacy measures could be useful, if we learn from the benefits and missteps of the General Data Protection Regulation (GDPR) in Europe and a new California law that lets consumers stop some data-sharing and allows people to find out what information companies have stored about them. He does not endorse breaking up Facebook, and suggests instead that the social media economy needs structural reform. He calls for data portability and interoperability, so “consumers would own their identities and could freely switch from one network to another.” Aral believes that without such fundamental changes, new platforms will simply replace the old ones, propelled by the network effects that drive the social-media economy.

“I do not advocate any one silver bullet,” says Aral, who emphasizes that changes in four areas together — money, code, norms, and laws — can alter the trajectory of the social media industry.

But if things continue without change, Aral adds, Facebook and the other social media giants risk substantial civic backlash and user burnout.

“If you get me angry and riled up, I might click more in the short term, but I might also grow really tired and annoyed by how this is making my life miserable, and I might turn you off entirely,” Aral observes. “I mean, that’s why we have a Delete Facebook movement, that’s why we have a Stop Hate for Profit movement. People are pushing back against the short-term vision, and I think we need to embrace this longer-term vision of a healthier communications ecosystem.”

Changing the social media giants can seem like a tall order. Still, Aral says, these firms are not necessarily destined for domination.

“I don’t think this technology or any other technology has some deterministic endpoint,” Aral says. “I want to bring us back to a more practical reality, which is that technology is what we make it, and we are abdicating our responsibility to steer technology toward good and away from bad. That is the path I try to illuminate in this book.”


Press mentions

Prof. Sinan Aral’s new book, “The Hype Machine,” has been selected as one of the best books of the year about AI by Wired . Gilad Edelman notes that Aral’s book is “an engagingly written shortcut to expertise on what the likes of Facebook and Twitter are doing to our brains and our society.”

Prof. Sinan Aral speaks with Danny Crichton of TechCrunch about his new book, “The Hype Machine,” which explores the future of social media. Aral notes that he believes a starting point “for solving the social media crisis is creating competition in the social media economy.” 

New York Times

Prof. Sinan Aral speaks with New York Times editorial board member Greg Bensinger about how social media platforms can reduce the spread of misinformation. “Human-in-the-loop moderation is the right solution,” says Aral. “It’s not a simple silver bullet, but it would give accountability where these companies have in the past blamed software.”

Prof. Sinan Aral speaks with Kara Miller of GBH’s Innovation Hub about his research examining the impact of social media on everything from business re-openings during the Covid-19 pandemic to politics.

Prof. Sinan Aral speaks with NPR’s Michael Martin about his new book, “The Hype Machine,” which explores the benefits and downfalls posed by social media. “I've been researching social media for 20 years. I've seen its evolution and also the techno utopianism and dystopianism,” says Aral. “I thought it was appropriate to have a book that asks, 'what can we do to really fix the social media morass we find ourselves in?'”


How Social Media Amplifies Misinformation More Than Information

A new analysis found that algorithms and some features of social media sites help false posts go viral.


By Steven Lee Myers

  • Oct. 13, 2022

It is well known that social media amplifies misinformation and other harmful content. The Integrity Institute, an advocacy group, is now trying to measure exactly how much — and on Thursday it began publishing results that it plans to update each week through the midterm elections on Nov. 8.

The institute’s initial report, posted online, found that a “well-crafted lie” gets more engagement than typical, truthful content and that certain features of social media sites and their algorithms contribute to the spread of misinformation.

Twitter, the analysis showed, has what the institute called the greatest misinformation amplification factor, in large part because of its feature allowing people to share, or “retweet,” posts easily. It was followed by TikTok, the Chinese-owned video site, which uses machine-learning models to predict engagement and make recommendations to users.

“We see a difference for each platform because each platform has different mechanisms for virality on it,” said Jeff Allen, a former integrity officer at Facebook and a founder and the chief research officer at the Integrity Institute. “The more mechanisms there are for virality on the platform, the more we see misinformation getting additional distribution.”

The institute calculated its findings by comparing posts that members of the International Fact-Checking Network had identified as false with the engagement of earlier, unflagged posts from the same accounts. It analyzed nearly 600 fact-checked posts in September on a variety of subjects, including the Covid-19 pandemic, the war in Ukraine and the upcoming elections.
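The institute has not published its exact formula, but the methodology described above suggests a simple ratio: engagement on fact-checked-false posts relative to the typical engagement of earlier, unflagged posts from the same accounts. The sketch below is an illustrative reading of that idea, not the Integrity Institute's actual method; the function name and formula are assumptions.

```python
# Hypothetical sketch of the "amplification factor" idea described
# above. The name and formula are assumptions for illustration only,
# not the Integrity Institute's published methodology.

def amplification_factor(flagged_engagements, baseline_engagements):
    """Ratio of mean engagement on posts flagged as false to the mean
    engagement of earlier, unflagged posts from the same accounts.

    A value above 1.0 would mean the false posts outperformed the
    accounts' ordinary content.
    """
    if not flagged_engagements or not baseline_engagements:
        raise ValueError("need at least one post in each group")
    flagged_mean = sum(flagged_engagements) / len(flagged_engagements)
    baseline_mean = sum(baseline_engagements) / len(baseline_engagements)
    return flagged_mean / baseline_mean

# Example: a false post drawing 1,500 engagements from an account whose
# ordinary posts average 500 yields a factor of 3.0.
print(amplification_factor([1500], [400, 500, 600]))  # -> 3.0
```

Under this reading, a platform-level score would aggregate such ratios across many flagged posts, which is consistent with the report's observation that platforms with more "mechanisms for virality" show higher factors.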

Facebook, according to the sample that the institute has studied so far, had the most instances of misinformation but amplified such claims to a lesser degree, in part because sharing posts requires more steps. But some of its newer features are more prone to amplify misinformation, the institute found.

Facebook’s amplification factor for video content alone is closer to TikTok’s, the institute found. That’s because Reels and Facebook Watch, the platform’s video features, “both rely heavily on algorithmic content recommendations” based on engagement, according to the institute’s calculations.

Instagram, which like Facebook is owned by Meta, had the lowest amplification rate. There was not yet sufficient data to make a statistically significant estimate for YouTube, according to the institute.

The institute plans to update its findings to track how the amplification fluctuates, especially as the midterm elections near. Misinformation, the institute’s report said, is much more likely to be shared than merely factual content.

“Amplification of misinformation can rise around critical events if misinformation narratives take hold,” the report said. “It can also fall, if platforms implement design changes around the event that reduce the spread of misinformation.”

Steven Lee Myers covers misinformation for The Times. He has worked in Washington, Moscow, Baghdad and Beijing, where he contributed to the articles that won the Pulitzer Prize for public service in 2021. He is also the author of “The New Tsar: The Rise and Reign of Vladimir Putin.” More about Steven Lee Myers

