
35 Media Bias Examples for Students


Media bias examples include ideological bias, gotcha journalism, negativity bias, and sensationalism. Real-life examples include ski resorts spinning snow reports to make conditions sound better, and cable news channels like Fox News and MSNBC overtly favoring one political party over the other (Republican and Democratic, respectively).

No one is free of all bias. No one is perfectly objective. So, every book, research paper, and article (including this one) is bound to have some form of bias.

The media is capable of employing an array of techniques to modify news stories in favor of particular interests or groups.

While bias is usually seen as a bad thing, and good media outlets try to minimize it as much as possible, at times, it can also be seen as a good thing. For example, a reporter’s bias toward scholarly consensus or a local paper’s bias toward reporting on events relevant to local people makes sense.

Media Bias Definition

Media bias refers to the inherently subjective processes involved in the selection and curation of information presented within media. It can lead to incorrect, inaccurate, incomplete, misleading, misrepresented, or otherwise skewed reporting.

Media bias cannot be fully eliminated. This is because media neutrality has practical limitations, such as the near impossibility of reporting every single available story and fact, the requirement that selected facts must form a coherent narrative, and so on (Newton, 1996).

Types of Media Bias

In a broad sense, there are two main types of media bias:

  • Ideological bias reflects a news outlet’s desire to move the opinions of readers in a particular direction.
  • Spin bias reflects a news outlet’s attempt to create a memorable story (Mullainathan & Shleifer, 2002).

These two main types can be divided into many subcategories. The following list offers a more specific classification of different types of media bias:

  • Advertising bias occurs when stories are selected or slanted to please advertisers (Eberl et al., 2018).
  • Concision bias occurs when conciseness determines which stories are reported and which are ignored. News outlets often report views that can be summarized succinctly, thereby overshadowing views that are more unconventional, difficult to explain, and complex.
  • Confirmation bias occurs when media consumers tend to believe stories, views, and research that confirm their current views and ignore everything else (Groseclose & Milyo, 2005).
  • Content bias occurs when two political parties are treated differently and news is biased towards one side (Entman, 2007).
  • Coverage bias occurs when the media chooses to report only negative news about one party or ideology (Eberl et al., 2017; D’Alessio & Allen, 2000).
  • Decision-making bias occurs when the motivations, beliefs, and intentions of the journalists have an impact on what they write and how (Entman, 2007).
  • Demographic bias occurs when demographic factors, such as race, gender, social status, income, and so on are allowed to influence reporting (Ribeiro et al., 2018).
  • Gatekeeping bias occurs when stories are selected or dismissed on ideological grounds (D’Alessio & Allen, 2000). This is sometimes also referred to as agenda bias, selectivity bias (Hofstetter & Buss, 1978), or selection bias (Groeling, 2013). Such bias is often focused on political actors (Brandenburg, 2006).
  • Layout bias occurs when an article is placed in a less-read section so that it receives less attention, or placed first so that more people read it. This is sometimes called burying the lead.
  • Mainstream bias occurs when a news outlet only reports things that are safe to report and everyone else is reporting. By extension, the news outlet ignores stories and views that might offend the majority.
  • Partisan bias occurs when a news outlet tends to report in a way that serves a specific political party (Haselmayer et al., 2017).
  • Sensationalism bias occurs when the exceptional, the exciting, and the sensational are given more attention because they are rarer.
  • Statement bias occurs when media coverage is slanted in favor of or against specific actors or issues (D’Alessio & Allen, 2000). It is also known as tonality bias (Eberl et al., 2017) or presentation bias (Groeling, 2013).
  • Structural bias occurs when an actor or issue receives more or less favorable coverage as a result of newsworthiness rather than ideological decisions (Haselmayer et al., 2019; van Dalen, 2012).
  • Distance bias occurs when a news agency gives more coverage to events physically closer to the news agency than elsewhere. For example, national media organizations like NBC may be unconsciously biased toward New York City news because that is where they’re located.
  • Negativity bias occurs because negative information tends to attract more attention and is remembered for a longer time, even if it’s disliked in the moment.
  • False balance bias occurs when a news agency attempts to appear balanced by presenting a news story as if the data is 50/50 on the topic, while the data may in fact show one perspective should objectively hold more weight. Climate change is the classic example.

Media Bias Examples

  • Ski resorts reporting on snowfall: Ski resorts are biased in how they spin snowfall reporting. They consistently report higher snowfall than official forecasts because they have a supply-driven interest in doing so (Raymond & Taylor, 2021).
  • Moral panic in the UK: Cohen (1972) famously explored the UK media’s sensationalist reporting on youth subcultures as “delinquents”, causing a panic among the general population that was not representative of those subcultures’ true actions or impact on society.
  • Murdoch media in Australia: Former Prime Minister Kevin Rudd has repeatedly highlighted bias in the Murdoch media, noting, for example, that Murdoch’s papers have endorsed the conservative side of politics (ironically named the Liberals) in 24 out of 24 elections.
  • Fox and MSNBC: In the United States, Fox and MSNBC have niched down to report from a right- and left-wing bias, respectively.
  • Fog of war: During wartime, national news outlets tend to engage in overt bias against the enemy by reporting extensively on their war crimes while failing to report on their own war crimes.
  • Missing white woman syndrome: Sensationalism bias is evident in cases such as that of missing woman Gabby Petito. The argument is that the media tend to report on missing women only when they are white, making far less fuss about missing Indigenous women.
  • First-World Bias in Reporting on Natural Disasters: Scholars have found that news outlets tend to have bias toward reporting on first-world nations that have suffered natural disasters while under-reporting on natural disasters in developing nations, where they’re seen as not newsworthy (Aritenang, 2022; Berlemann & Thomas, 2018).
  • Overseas Reporting on US Politics: Sensationalism bias has an effect when non-US nations report on US politics. Unlike other nations’ politics, US politics is heavily reported worldwide. One major reason is that US politics tends to be bitterly fought and lends itself to sensational headlines.
  • Click baiting: Media outlets that have moved to a predominantly online focus, such as Forbes and Vice, are biased toward news reports that can be summed up by a sensational headline to ensure they get clicked – this is called “click baiting”.
  • Google rankings and mainstream research bias: Google has explicitly put in its site quality rater guidelines a preference for sites that report in ways that reflect “expert consensus”. While this may be seen as a positive way to use bias, it can also push potentially valid alternative perspectives and whistleblowers off the front page of search results.
  • False Balance on climate change: Researchers at Northwestern University have highlighted the prevalence of false balance reporting on climate change. They argue that 99% of scientists agree that it is man-made, yet often, news segments have one scientist arguing one side and another arguing another, giving the reporting a perception that it’s a 50-50 split in the scientific debate. In their estimation, an unbiased report would demonstrate the overwhelming amount of scientific evidence supporting one side over the other.
  • Negative Unemployment Reports: Garz (2014) found that the media tend to over-report negative unemployment statistics while under-reporting positive ones.
  • Gotcha Journalism: Gotcha journalism involves having journalists go out and actively seek out “gotcha questions” that will lead to sensational headlines. It is a form of bias because it often leads to less reporting on substantive messaging and an over-emphasis on gaffes and disingenuous characterizations of politicians.
  • Citizenship bias: When a disaster happens overseas, reporting often presents the total number of deceased, followed by the number from the news outlet’s own country. For example, a report might say: “51 dead, including 4 Americans.” This is done to make the news appear more relevant to the audience, but it nonetheless shows a bias toward the audience’s in-group.
  • Online indie media bias: Online indie media groups that have shot up on YouTube and social media often have overt biases. Left-wing versions include The Young Turks and The David Pakman Show , while right-wing versions include The Daily Wire and Charlie Kirk .
  • Western alienation: In Canada, this phenomenon refers to ostensibly national media outlets like The Globe and Mail having a bias toward news occurring in Toronto and ignoring western provinces, leading to “western alienation”.

The Government’s Role in Media Bias

Governments also play an important role in media bias due to their ability to distribute power.

The most obvious examples of pro-government media bias can be seen in totalitarian regimes, such as modern-day North Korea (Merloe, 2015). The government and the media can influence each other: the media can influence politicians and vice versa (Entman, 2007).

Nevertheless, even liberal democratic governments can affect media bias by, for example, leaking stories to their favored outlets and selectively calling upon their preferred outlets during news conferences.

In addition to the government, the market can also influence media coverage. Bias can be a function of who owns the media outlet, who its staff are, who the intended audience is, what gets the most clicks or sells the most newspapers, and so on.

Media bias refers to the bias of journalists and news outlets in reporting events, views, stories, and everything else they might cover.

The term usually denotes a widespread bias rather than something specific to one journalist or article.

There are many types of media bias. It is useful to understand the different types of biases, but also recognize that while good reporting can and does exist, it’s almost impossible to fully eliminate biases in reporting.

Aritenang, A. (2022). Understanding international agenda using media analytics: The case of disaster news coverage in Indonesia.  Cogent Arts & Humanities ,  9 (1), 2108200.

Brandenburg, H. (2006). Party Strategy and Media Bias: A Quantitative Analysis of the 2005 UK Election Campaign. Journal of Elections, Public Opinion and Parties , 16 (2), 157–178. https://doi.org/10.1080/13689880600716027

D’Alessio, D., & Allen, M. (2000). Media Bias in Presidential Elections: A Meta-Analysis. Journal of Communication , 50 (4), 133–156. https://doi.org/10.1111/j.1460-2466.2000.tb02866.x

Eberl, J.-M., Boomgaarden, H. G., & Wagner, M. (2017). One Bias Fits All? Three Types of Media Bias and Their Effects on Party Preferences. Communication Research , 44 (8), 1125–1148. https://doi.org/10.1177/0093650215614364

Eberl, J.-M., Wagner, M., & Boomgaarden, H. G. (2018). Party Advertising in Newspapers. Journalism Studies , 19 (6), 782–802. https://doi.org/10.1080/1461670X.2016.1234356

Entman, R. M. (2007). Framing Bias: Media in the Distribution of Power. Journal of Communication , 57 (1), 163–173. https://doi.org/10.1111/j.1460-2466.2006.00336.x

Garz, M. (2014). Good news and bad news: evidence of media bias in unemployment reports.  Public Choice ,  161 (3), 499-515.

Groeling, T. (2013). Media Bias by the Numbers: Challenges and Opportunities in the Empirical Study of Partisan News. Annual Review of Political Science , 16 (1), 129–151. https://doi.org/10.1146/annurev-polisci-040811-115123

Groseclose, T., & Milyo, J. (2005). A Measure of Media Bias. The Quarterly Journal of Economics , 120 (4), 1191–1237. https://doi.org/10.1162/003355305775097542

Haselmayer, M., Meyer, T. M., & Wagner, M. (2019). Fighting for attention: Media coverage of negative campaign messages. Party Politics , 25 (3), 412–423. https://doi.org/10.1177/1354068817724174

Haselmayer, M., Wagner, M., & Meyer, T. M. (2017). Partisan Bias in Message Selection: Media Gatekeeping of Party Press Releases. Political Communication , 34 (3), 367–384. https://doi.org/10.1080/10584609.2016.1265619

Hofstetter, C. R., & Buss, T. F. (1978). Bias in television news coverage of political events: A methodological analysis. Journal of Broadcasting , 22 (4), 517–530. https://doi.org/10.1080/08838157809363907

Mackey, T. P., & Jacobson, T. E. (2019). Metaliterate Learning for the Post-Truth World . American Library Association.

Merloe, P. (2015). Authoritarianism Goes Global: Election Monitoring Vs. Disinformation. Journal of Democracy , 26 (3), 79–93. https://doi.org/10.1353/jod.2015.0053

Mullainathan, S., & Shleifer, A. (2002). Media Bias (No. w9295; p. w9295). National Bureau of Economic Research. https://doi.org/10.3386/w9295

Newton, K. (1996). The mass media and modern government . Wissenschaftszentrum Berlin für Sozialforschung.

Raymond, C., & Taylor, S. (2021). “Tell all the truth, but tell it slant”: Documenting media bias. Journal of Economic Behavior & Organization , 184 , 670–691. https://doi.org/10.1016/j.jebo.2020.09.021

Ribeiro, F. N., Henrique, L., Benevenuto, F., Chakraborty, A., Kulshrestha, J., Babaei, M., & Gummadi, K. P. (2018, June). Media bias monitor: Quantifying biases of social media news outlets at large-scale. In Twelfth international AAAI conference on web and social media .

Sloan, W. D., & Mackay, J. B. (2007). Media Bias: Finding It, Fixing It . McFarland.

van Dalen, A. (2012). Structural Bias in Cross-National Perspective: How Political Systems and Journalism Cultures Influence Government Dominance in the News. The International Journal of Press/Politics , 17 (1), 32–55. https://doi.org/10.1177/1940161211411087


Chris Drew (PhD)

Dr. Chris Drew is the founder of the Helpful Professor. He holds a PhD in education and has published over 20 articles in scholarly journals. He is the former editor of the Journal of Learning Development in Higher Education.


80 Media Bias Essay Topic Ideas & Examples

  • 🏆 Best Media Bias Topic Ideas & Essay Examples
  • ⭐ Interesting Topics to Write about Media Bias
  • ✅ Simple & Easy Media Bias Essay Titles
  • ❓ Questions about Media Bias

  • The Impact of Media Bias Media bias is a contravention of professional standards by members of the fourth estate presenting in the form of favoritism of one section of society when it comes to the selection and reporting of events […]
  • Media Bias in Reporting: The World’s Progress vs. Negative News Given the rise of populist politicians and autocrats throughout the globe, it is tempting to overlook the progress in creating civil liberties and political freedoms, which are both a way to and a culmination to […]
  • Media Bias Monitor: Quantifying Biases of Social Media On the other hand, the media uses selective exposure and airing of stories about leaders, leading to more bias in their stories.
  • Media Bias Fact Check: Website Analysis For instance, Fact Check relies on the evidence provided by the person or organization making a claim to substantiate the accuracy of the source.
  • Bias of the Lebanese Media Therefore, the main aim of the paper is to identify the elements of bias in the media coverage through an analysis of the media coverage of Al Manar and Future TV in 2008.
  • Media Bias in the Middle East Crisis in America A good example of this in the United States Media coverage of the Middle East crisis comes in terms of criminalizing the Israeli forces.
  • Media Bias in America and the Middle East Of course, Benjamin Franklin neglected to mention that the printing company he owned was in the running to get the job of printing the money if the plan was approved.
  • Why Study the Media, Bias, Limitations, Issues of Media The media have recently taken on an identity almost indistinguishable from entertainment or pop culture and marketing, where news serves as “spices” that add flavor to the whole serving, such as the Guardian Unlimited […]
  • Media Bias: The Organization of a Newsroom The media is, however, desperate for attention, and it’s not political ideology that dictates what we are offered in the guise of news on any particular day, but what will sell advertising.
  • Mass Media Bias Definition The mass media is the principal source of political information that has an impact on the citizens. The concept of media bias refers to the disagreement about its impact on the citizens and objectivity of […]
  • Modern Biased Media: Transparency, Independence, and Objectivity Lack The mass media is considered to be the Fourth Estate by the majority of people. The main goal of this paper is to prove that the modern media is biased because it lacks transparency, independence, […]
  • How Is the Media Biased and in What Direction? The bias in this article is aimed at discrediting mainstream media’s coverage of Clinton’s campaign while praising the conservative actions of the GOP presidential candidate.
  • Al Jazeera TV: A Propaganda Platform Al Jazeera is the largest media outlet in the Middle East reporting events mostly to the Arab world. The media outlet has equated revolutions in Egypt and Libya with the ejection of totalitarianism in the […]
  • Media Bias in the U.S. Politics The main reason for the censure of this information by the media is because it had a connection with the working masses, and Unionists. In this case, the perceived media bias comes from the state […]
  • Media Bias: Media Research Center Versus Fairness and Accuracy in Reporting
  • Advertising Spending and Media Bias: Evidence From News Coverage of Car Safety Recalls
  • Towards a More Direct Measure of Political Media Bias
  • Media Bias Towards Science
  • French Media Bias and the Vote on the European Constitution
  • Political Accountability, Electoral Control, and Media Bias
  • Media Mergers and Media Bias With Rational Consumers
  • Same-Sex Marriage and Media Bias
  • Media Bias and Stereotypes: A Long Way of Justify the Truth
  • Political Polarization and the Electoral Effects of Media Bias
  • Media Bias and Its Influence on Public Opinion on Current Events
  • The Arguments Surrounding Media Bias
  • Political Science: Media Bias and Presidential Candidates
  • Competition and Commercial Media Bias
  • Media Bias and Its Influence on News: Reporting the News Article Analysis
  • Power of Media Framing – Framing Impact on Media Bias
  • Media Bias and Conflicting Ideas
  • Detecting Media Bias and Propaganda
  • Media Bias and the Effect of Patriotism on Baseball Viewership
  • Good News and Bad News: Evidence of Media Bias in Unemployment Reports
  • Media Industries and Media Bias: How They Work Together
  • More Ads, More Revs: A Note on Media Bias in Review Likelihood
  • News Consumption and Media Bias
  • Media Bias and the Persistence of the Expectation Gap: An Analysis of Press Articles on Corporate Fraud
  • Public Opinion, Polling, Media Bias, and the Electoral College
  • Media Bias and Electoral Competition
  • Information Gatekeeping, Indirect Lobbying, and Media Bias
  • Conservative and Liberal Media Bias
  • Media Bias: Politics, Reputation, and Public Influence
  • Law and Legal Definition of Media Bias
  • Primetime Spin: Media Bias and Belief Confirming Information
  • Media Bias and the Current Situation of Reporting News and Facts in America
  • Framing the Right Suspects: Measuring Media Bias
  • Media Bias and Its Economic Impact
  • When Advertisers Have Bargaining Power – Media Bias
  • Media Bias and the Lack of Reporting on Minority Missing Persons
  • Critical Thinking vs. Media Bias
  • Social Connectivity, Media Bias, and Correlation Neglect
  • The Difference Between Media Bias and Media Corruption
  • Media Bias and How It Affects Society
  • Does Foreign Media Entry Discipline or Provoke Local Media Bias?
  • What Are the Main Issues of Media Bias?
  • How Does Media Bias Affect Campaigns?
  • Does Foreign Media Entry Tempers Government Media Bias?
  • What Is Media Bias in News Reporting?
  • How Does Media Bias Affect the World?
  • What Is the Difference Between Media Bias and Media Propaganda?
  • Is Media Bias Bad for Democracy?
  • How Do Issue Coverage and Media Bias Affect Voter Perceptions of Elections?
  • What Are Some of the Most Prominent Examples of Media Bias in Politics?
  • Does Media Bias Affect Public Opinion?
  • What Are the Reasons for Which Bias in Media Is Necessary?
  • Is There a Difference Between Media Bias and Fake News?
  • What Are the Different Types of Media Bias?
  • How Does Media Bias Affect Our Society?
  • Why Is Media Bias Unavoidable in Modern Society?
  • How Does Liberal Media Bias Distort the American Mind?
  • What Is the Effect of the Economic Development and Market Competition on Media Bias in China?
  • Is There a Relationship Between Media Bias and Reporting Inaccuracies?
  • What Are the Effects of Media Bias?
  • Are There Any Benefits of Media Bias?
  • What Is the Best Way to Deal With Media Bias?
  • How to Detect Media Bias and Propaganda?
  • Does Media Bias Matter in Elections?
  • How Do Media Trust and Media Bias Perception Influence Public Evaluation of the COVID-19 Pandemic in International Metropolises?

IvyPanda. (2024, March 2). 80 Media Bias Essay Topic Ideas & Examples. https://ivypanda.com/essays/topic/media-bias-essay-topics/



Open access | Published: 22 May 2024

Uncovering the essence of diverse media biases from the semantic embedding space

Hong Huang, Hua Zhu, Wenshi Liu, Hua Gao, Hai Jin & Bang Liu

Humanities and Social Sciences Communications, volume 11, Article number: 656 (2024)


Subjects: Cultural and media studies

Media bias widely exists in the articles published by news media, influencing their readers’ perceptions, and bringing prejudice or injustice to society. However, current analysis methods usually rely on human efforts or only focus on a specific type of bias, which cannot capture the varying magnitudes, connections, and dynamics of multiple biases, thus remaining insufficient to provide a deep insight into media bias. Inspired by the Cognitive Miser and Semantic Differential theories in psychology, and leveraging embedding techniques in the field of natural language processing, this study proposes a general media bias analysis framework that can uncover biased information in the semantic embedding space on a large scale and objectively quantify it on diverse topics. More than 8 million event records and 1.2 million news articles are collected to conduct this study. The findings indicate that media bias is highly regional and sensitive to popular events at the time, such as the Russia-Ukraine conflict. Furthermore, the results reveal some notable phenomena of media bias among multiple U.S. news outlets. While they exhibit diverse biases on different topics, some stereotypes are common, such as gender bias. This framework will be instrumental in helping people have a clearer insight into media bias and then fight against it to create a more fair and objective news environment.

Introduction

In the era of information explosion, news media play a crucial role in delivering information to people and shaping their minds. Unfortunately, media bias, also called slanted news coverage, can heavily influence readers’ perceptions of news and result in a skewing of public opinion (Gentzkow et al. 2015 ; Puglisi and Snyder Jr, 2015b ; Sunstein, 2002 ). This influence can potentially lead to severe societal problems. For example, a report from FAIR has shown that Verizon management is more than twice as vocal as worker representatives in news reports about the Verizon workers’ strike in 2016 Footnote 1 , putting workers at a disadvantage in the news and contradicting the principles of fair and objective journalism. Unfortunately, this is just the tip of the media bias iceberg.

Media bias can be defined as the bias of journalists and news producers within the mass media in selecting and covering numerous events and stories (Gentzkow et al. 2015 ). This bias can manifest in various forms, such as event selection, tone, framing, and word choice (Hamborg et al. 2019 ; Puglisi and Snyder Jr, 2015b ). Given the vast number of events happening in the world at any given moment, even the most powerful media must be selective in what they choose to report instead of covering all available facts in detail (Downs, 1957 ). This selectivity can result in the perception of bias in the news coverage, whether intentional or unintentional. Academics in journalism studies attempt to explain the news selection process by developing taxonomies of news values (Galtung and Ruge, 1965 ; Harcup and O’neill, 2001 , 2017 ), which refer to certain criteria and principles that news editors and journalists consider when selecting, editing, and reporting the news. These values help determine which stories should be considered news and the significance of these stories in news reporting. However, different news organizations and journalists may emphasize different news values based on their specific objectives and audience. Consequently, a media outlet may be very keen on reporting events about specific topics while turning a blind eye to others. For example, news coverage often ignores women-related events and issues with the implicit assumption that they are less critical than men-related contents (Haraldsson and Wängnerud, 2019 ; Lühiste and Banducci, 2016 ; Ross and Carter, 2011 ). Once events are selected, the media must consider how to organize and write their news articles. At that time, the choice of tone, framing, and word is highly subjective and can introduce bias. Specifically, the words used by the authors to refer to different entities may not be neutral but instead imply various associations and value judgments (Puglisi and Snyder Jr, 2015b ). 
As shown in Fig. 1, the same topic can be expressed in entirely different ways, depending on a media outlet’s standpoint Footnote 2. For example, certain “left-wing” media outlets tend to support legal abortion, while some “right-wing” ones oppose it.

Figure 1: The blue and red fonts represent the views of some “left-wing” and “right-wing” media outlets, respectively.

In fact, media bias is influenced by many factors: explicit factors such as geographic location, media position, editorial guideline, topic setting, and so on; obscure factors such as political ideology (Groseclose and Milyo, 2005 ; MacGregor, 1997 ; Merloe, 2015 ), business reason (Groseclose and Milyo, 2005 ; Paul and Elder, 2004 ), and personal career (Baron, 2006 ), etc. Besides, some studies also summarize these factors related to bias as supply-side and demand-side ones (Gentzkow et al. 2015 ; Puglisi and Snyder Jr, 2015b ). The influence of these complex factors makes the emergence of media bias inevitable. However, media bias may hinder readers from forming objective judgments about the real world, lead to skewed public opinion, and even exacerbate social prejudices and unfairness. For example, the New York Times supports Iranian women’s saying no to hijabs in defense of women’s rights Footnote 3 while criticizing the Chinese government’s initiative to encourage Uyghur women to remove hijabs and veils Footnote 4 . Besides, the influence of news coverage on voter behavior is a subject of ongoing debate. While some studies indicate that slanted news coverage can influence voters and election outcomes (Bovet and Makse, 2019 ; DellaVigna and Kaplan, 2008 ; Grossmann and Hopkins, 2016 ), others suggest that this influence is limited in certain circumstances (Stroud, 2010 ). Fortunately, research on media bias has drawn attention from multiple disciplines.

In social science, the study of media bias has a long tradition dating back to the 1950s (White, 1950 ). So far, most of the analyses in social science have been qualitative, aiming to analyze media opinions expressed in the editorial section (e.g., endorsements (Ansolabehere et al. 2006 ), editorials (Ho et al. 2008 ), ballot propositions (Puglisi and Snyder Jr, 2015a )) or find out biased instances in news articles by human annotations (Niven, 2002 ; Papacharissi and de Fatima Oliveira, 2008 ; Vaismoradi et al. 2013 ). Some researchers also conduct quantitative analysis, which primarily involves counting the frequency of specific keywords or articles related to certain issues (D’Alessio and Allen, 2000 ; Harwood and Garry, 2003 ; Larcinese et al. 2011 ). In particular, there are some attempts to estimate media bias using automatic tools (Groseclose and Milyo, 2005 ), and they commonly rely on text similarity and sentiment computation (Gentzkow and Shapiro, 2010 ; Gentzkow et al. 2006 ; Lott Jr and Hassett, 2014 ). In summary, social science research on media bias has yielded extensive and effective methodologies. These methodologies interpret media bias from diverse perspectives, marking significant progress in the realm of media studies. However, these methods usually rely on manual annotation and analysis of the texts, which requires significant manual effort and expertise (Park et al. 2009 ), thus might be inefficient and subjective. For example, in a quantitative analysis, researchers might devise a codebook with detailed definitions and rules for annotating texts, and then ask coders to read and annotate the corresponding texts (Hamborg et al. 2019 ). Developing a codebook demands substantial expertise. Moreover, the standardization process for text annotation is subjective, as different coders may interpret the same text differently, thus leading to varied annotations.
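Quantitative analyses of the keyword-counting kind described above are straightforward to automate. The following is a minimal sketch of the idea only; the mini-corpus and keyword list are hypothetical and not drawn from any study cited here:

```python
from collections import Counter

def keyword_frequencies(articles, keywords):
    """Count how often each keyword appears across a list of article texts."""
    counts = Counter()
    for text in articles:
        tokens = text.lower().split()
        for kw in keywords:
            counts[kw] += tokens.count(kw)
    return counts

# Hypothetical mini-corpus, for illustration only.
articles = [
    "unemployment rises as economy slows",
    "economy rebounds but unemployment stays high",
    "election coverage dominates the news cycle",
]
print(keyword_frequencies(articles, ["unemployment", "economy", "election"]))
# → Counter({'unemployment': 2, 'economy': 2, 'election': 1})
```

In real studies, of course, the tokenization and keyword sets are far more careful (codebooks, phrase matching, stemming), which is precisely where the expertise and subjectivity discussed above come in.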

In computer science, research on social media is extensive (Lazaridou et al. 2020; Liu et al. 2021b; Tahmasbi et al. 2021), but few methods are specifically designed to study media bias (Hamborg et al. 2019). Some techniques that do specialize in media bias focus exclusively on one type of bias (Huang et al. 2021; Liu et al. 2021b; Zhang et al. 2017) and are thus not general enough. In natural language processing (NLP), research on the bias of pre-trained models or language models has attracted much attention (Qiang et al. 2023), aiming to identify and reduce the potential impact of bias in pre-trained models on downstream tasks (Huang et al. 2020; Liu et al. 2021a; Wang et al. 2020). In particular, some studies on pre-trained word embedding models show that they capture rich human knowledge as well as human biases (Caliskan et al. 2017; Grand et al. 2022; Zeng et al. 2023). However, such works mainly focus on pre-trained models rather than on media bias directly, which limits their applicability to media bias analysis.

A major challenge in studying media bias is that its evaluation is highly subjective, because individuals have varying criteria for what counts as bias. Take political bias as an example: a story that one person views as neutral may appear left-leaning or right-leaning to someone else. To address this challenge, we develop an objective and comprehensive media bias analysis framework. We study media bias from two distinct but highly related perspectives: the macro level and the micro level. From the macro perspective, we focus on the event selection bias of each media outlet, i.e., the types of events it tends to report on. From the micro perspective, we focus on the bias the media introduce through word choice and sentence construction when composing news articles about the selected events.

In news articles, media outlets convey their attitudes towards a subject through the contexts surrounding it. The language used by the media to describe and refer to entities may not consist of purely neutral descriptors but may instead imply various associations and value judgments. According to the cognitive miser theory in psychology, the human mind tends to think and solve problems in simpler, less effortful ways to avoid cognitive strain (Fiske and Taylor, 1991; Stanovich, 2009). Faced with endless news information, ordinary readers therefore tend to summarize and remember news content in simplified form, i.e., by attaching labels to the things involved in news reports. When certain words are frequently associated with a particular entity or subject in news reports, a media outlet’s loyal readers may adopt these words as cognitive labels for the corresponding item due to the cognitive miser effect. Unfortunately, such a cognitive approach is crude and susceptible to various biases. For instance, if a media outlet predominantly covers male scientists while neglecting their female counterparts, some naive readers may come to perceive scientists as mostly male, producing a recognition bias in their perception of scientists and even, over time, unconsciously forming stereotypes. According to the “distributional hypothesis” in modern linguistics (Firth, 1957; Harris, 1954; Sahlgren, 2008), a word’s meaning is characterized by the words occurring in the same contexts as it. Here, we simplify the complex associations between words (or entities/subjects) and their respective context words into co-occurrence relationships. An effective technique for capturing word semantics based on co-occurrence information is the neural network-based word embedding model (Kenton and Toutanova, 2019; Le and Mikolov, 2014; Mikolov et al. 2013).

Word embedding models represent each word in the vocabulary as a vector (i.e., a word embedding) within the word embedding space. In this space, words that frequently co-occur in similar contexts are positioned close to each other. For instance, if a media outlet predominantly features male scientists, the word “scientist” and related male-centric terms, such as “man” and “he”, will frequently co-occur. Consequently, these words will cluster near “scientist” in the embedding space, while female-related words occupy more distant positions. This enables us to evaluate the media outlet’s gender bias concerning the term “scientist” by comparing the embedding distances between “scientist” and words associated with males and with females. This approach aligns closely with the Semantic Differential theory in psychology (Osgood et al. 1957), which gauges an individual’s attitudes toward various concepts, objects, and events using bipolar scales constructed from adjectives with opposing semantics. In this study, to identify media bias in news articles, we first define, for each topic, two sets of words with opposite semantics to serve as media bias evaluation scales. We then quantify media bias on each topic by calculating the difference in embedding distances between a target word (e.g., scientist) and these two word sets (e.g., female-related words and male-related words) in the word embedding space.

Compared with bias in news articles, event selection bias is more obscure, as only events of interest to the media are reported in the final articles, while events deliberately ignored by the media remain invisible to the public. Analogous to the co-occurrence relationship between words mentioned earlier, two media outlets that frequently select and report on the same events should exhibit similar event selection biases, just as two words that frequently occur in the same contexts have similar semantics. We therefore draw on Latent Semantic Analysis (LSA (Deerwester et al. 1990)) and generate a vector representation (i.e., a media embedding) for each media outlet via truncated singular value decomposition (Truncated SVD (Halko et al. 2011)). Essentially, a media embedding encodes the distribution of the events that a media outlet tends to report on. In the media embedding space, media outlets that often select and report on the same events will therefore be close to each other, owing to the similar distributions of their selected events. If a media outlet’s distribution differs significantly from those of other outlets, we can conclude that it is biased in event selection. Motivated by this, we cluster the media embeddings to study how different media outlets differ in the distribution of selected events, i.e., the so-called event selection bias.

These two methodologies, designed for micro-level and macro-level analysis, share a fundamental similarity: both leverage data-driven embedding models to represent each word or media outlet as a distinctive vector in an embedding space and conduct further analysis based on these vectors. In this study, we therefore integrate both methodologies into a unified framework for media bias analysis, aiming to uncover media bias on a large scale and quantify it objectively across diverse topics. Our experimental results show that: (1) different media outlets have different preferences for various news events, and outlets from the same country or organization tend to share more similar tastes; moreover, the occurrence of international hot events leads the event selection of different media outlets to converge; (2) despite differences in media bias, some stereotypes, such as gender bias, are common across media outlets. These findings align well with our empirical understanding, validating the effectiveness of the proposed framework.

Data and methods

The first dataset is the GDELT Mention Table, a product of the Google Jigsaw-backed GDELT project. The project monitors news reports from all over the world, including print, broadcast, and online sources, in over 100 languages. Each time an event is mentioned in a news report, a new row is added to the Mention Table (see Supplementary Information Tab. S1 for details). Because different media outlets may report on the same event at different times, the same event can appear in multiple rows of the table. While the fields GlobalEventID and EventTimeDate are globally unique attributes of each event, MentionSourceName and MentionTimeDate may differ. Based on the GlobalEventID and MentionSourceName fields in the Mention Table, we can count the number of times each media outlet has reported on each event, ultimately constructing a “media-event” matrix. In this matrix, the element at ( i ,  j ) denotes the number of times that media outlet j has reported on event i in past reports.
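The counting step above can be sketched in a few lines of Python. This is a minimal illustration rather than the actual GDELT pipeline: the tuples stand in for the GlobalEventID and MentionSourceName columns of the Mention Table, and the outlet domains are hypothetical examples.

```python
from collections import Counter

def build_media_event_matrix(mentions):
    """Count how often each outlet mentions each event.

    `mentions` is an iterable of (global_event_id, mention_source_name)
    pairs, one per row of a simplified Mention Table. Returns sorted
    event and outlet index lists plus a dense count matrix whose entry
    [i][j] is the number of times outlet j mentioned event i.
    """
    counts = Counter(mentions)
    events = sorted({e for e, _ in counts})
    outlets = sorted({o for _, o in counts})
    matrix = [[counts.get((e, o), 0) for o in outlets] for e in events]
    return events, outlets, matrix

# Toy Mention Table: the same event can appear in many rows.
rows = [
    (101, "nytimes.com"), (101, "foxnews.com"), (101, "nytimes.com"),
    (102, "nytimes.com"), (103, "foxnews.com"),
]
events, outlets, A = build_media_event_matrix(rows)
# A[0] = [1, 2] -> event 101: one Fox mention, two NYT mentions
```

Duplicate rows for the same (event, outlet) pair simply increment the corresponding count, mirroring how repeated coverage accumulates in the “media-event” matrix.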

As a global event database, GDELT collects a vast number of global events and topics, encompassing news coverage worldwide. Despite its widespread use in many studies, however, some issues are worth noting, and we highlight them here to remind readers to use the data cautiously. Above all, while GDELT provides a vast amount of data from various sources, it cannot capture every event accurately: it relies on automated data collection methods, which may cause certain events to be missed. Furthermore, its algorithms for event extraction and categorization cannot always perfectly capture the nuanced context and meaning of each event, which might lead to misinterpretations.

The second dataset is built on MediaCloud, an open-source platform for research on media ecosystems. MediaCloud’s API enables the querying of news article URLs for a given media outlet, which can then be retrieved using a web crawler. In this study, we collected more than 1.2 million news articles from 12 mainstream US media outlets spanning 2016-2021 via MediaCloud’s API (see Supplementary Information Tab. S2 for details).

Media bias estimation by media embedding

Latent Semantic Analysis (LSA (Deerwester et al. 1990)) is a well-established technique for uncovering topic-based semantic relationships between text documents and words. By performing truncated singular value decomposition (Truncated SVD (Halko et al. 2011)) on a “document-word” matrix, LSA can effectively capture the topics discussed in a corpus of text documents. This is accomplished by representing documents and words as vectors in a high-dimensional embedding space, where the similarity between vectors reflects the similarity of the topics they represent. In this study, we apply this idea to media bias analysis by likening events and media outlets to documents and words, respectively. By constructing a “media-event” matrix and performing Truncated SVD, we can uncover the underlying topics driving each outlet’s coverage of specific events. Our hypothesis is that media outlets mentioning certain events more frequently are more likely to exhibit a biased focus on the topics related to those events. Media outlets sharing similar topic tastes during event selection will therefore be close to each other in the embedding space, which offers a good opportunity to shed light on the media’s selection bias.

The generation procedure for media embeddings is shown in Supplementary Information Fig. S1. First, a “media-event” matrix denoted \({A}_{m\times n}\) is constructed based on the GDELT Mention Table, where m and n represent the total number of events and media outlets, respectively. Each entry \({A}_{i,j}\) represents the number of times that media outlet j has reported on event i. Subsequently, Truncated SVD is performed on the matrix \({A}_{m\times n}\), which results in three matrices: \({U}_{m\times k}\), \({{{\Sigma }}}_{k\times k}\) and \({V}_{n\times k}^{T}\). The product of \({{{\Sigma }}}_{k\times k}\) and \({V}_{n\times k}^{T}\) is denoted \({E}_{k\times n}\). Each column of \({E}_{k\times n}\) is a k-dimensional vector representation of a specific media outlet, i.e., a media embedding. Specifically, the decomposition of matrix \({A}_{m\times n}\) can be formulated as follows:
$$\begin{array}{ll}{A}_{m\times n}={U}_{m\times m}^{0}\,{{{\Sigma }}}_{m\times n}^{0}\,{({V}_{n\times n}^{0})}^{T}&(1)\\ {A}_{m\times n}\approx {U}_{m\times k}\,{{{\Sigma }}}_{k\times k}\,{V}_{n\times k}^{T}&(2)\end{array}$$
Equation (1) defines the complete singular value decomposition of \({A}_{m\times n}\). Both \({U}_{m\times m}^{0}\) and \({({V}_{n\times n}^{0})}^{T}\) are orthogonal matrices, and \({{{\Sigma }}}_{m\times n}^{0}\) is an m × n diagonal matrix whose diagonal elements are the non-negative singular values of \({A}_{m\times n}\) in descending order. Equation (2) defines the truncated singular value decomposition (i.e., Truncated SVD) of \({A}_{m\times n}\): given the complete decomposition, the part corresponding to the largest k singular values yields the Truncated SVD. Specifically, \({U}_{m\times k}\) comprises the first k columns of \({U}_{m\times m}^{0}\), while \({V}_{n\times k}^{T}\) comprises the first k rows of \({({V}_{n\times n}^{0})}^{T}\). The diagonal matrix \({{{\Sigma }}}_{k\times k}\) is composed of the first k diagonal elements of \({{{\Sigma }}}_{m\times n}^{0}\), i.e., the largest k singular values of \({A}_{m\times n}\). In particular, the media embedding model is defined as the product of \({{{\Sigma }}}_{k\times k}\) and \({V}_{n\times k}^{T}\), which yields n k-dimensional media embeddings as follows:
$${E}_{k\times n}={{{\Sigma }}}_{k\times k}\,{V}_{n\times k}^{T}=[{e}_{1},{e}_{2},\ldots ,{e}_{n}]\qquad (3)$$
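The decomposition can be sketched with NumPy’s SVD routine. This is an illustrative reduction on a tiny hand-made count matrix, not the full GDELT-derived one; the function names are our own.

```python
import numpy as np

def media_embeddings(A, k):
    """Truncated SVD of an event-by-outlet count matrix A (m x n).

    Returns E = Sigma_k @ V_k^T, a k x n matrix whose j-th column is
    the k-dimensional embedding of media outlet j.
    """
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    return np.diag(s[:k]) @ Vt[:k, :]

def cos(u, v):
    """Cosine similarity between two vectors."""
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

# Toy matrix: outlets 0 and 1 cover identical events; outlet 2 differs.
A = np.array([[3.0, 3.0, 0.0],
              [2.0, 2.0, 0.0],
              [0.0, 0.0, 5.0]])
E = media_embeddings(A, k=2)  # shape (2, 3), one column per outlet
# Columns 0 and 1 align perfectly; column 2 is orthogonal to them.
```

Outlets with identical event selection end up with collinear embeddings, while an outlet covering disjoint events lands in an orthogonal direction, which is exactly what the clustering step later exploits.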
To measure the similarity between two media embedding sets, we draw on the Word Mover’s Distance (WMD (Kusner et al. 2015)), which was designed to measure the dissimilarity between two text documents based on word embeddings. Here, we subtract the optimal value of the original WMD objective function from 1 to convert the dissimilarity into a normalized similarity score ranging from 0 to 1. Specifically, the similarity between two media embedding sets is formulated as follows:
$$Sim(s,{s}^{{\prime} })=1-\mathop{\min }\limits_{T\ge 0}\mathop{\sum }\limits_{i,j=1}^{n}{T}_{i,j}\,c(i,j),\quad {\rm{s.t.}}\ \mathop{\sum }\limits_{j=1}^{n}{T}_{i,j}={s}_{i},\ \mathop{\sum }\limits_{i=1}^{n}{T}_{i,j}={s}_{j}^{{\prime} }\qquad (4)$$
Let n denote the total number of media outlets, and let s be an n-dimensional vector corresponding to the first media embedding set. For each i, the weight of media i in the embedding set is given by \({s}_{i}={t}_{i}/{\sum }_{k = 1}^{n}{t}_{k}\), where t i  = 1 if media i is in the embedding set, and t i  = 0 otherwise. Similarly, \({s}^{{\prime} }\) is another n-dimensional vector corresponding to the second media embedding set. The distance between media i and j is calculated as c ( i ,  j ) =  ∥ e i  −  e j ∥ 2 , where e i and e j are the embedding representations of media i and j, respectively. The flow matrix T   ∈   R n × n determines how much of media i in s travels to media j in \({s}^{{\prime} }\); specifically, T i , j  ≥ 0 denotes the amount of flow from media i to media j.
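To make the flow formulation concrete, here is a small sketch. For two equal-sized sets with uniform weights, the optimal flow concentrates on a one-to-one matching (a consequence of the Birkhoff-von Neumann theorem), so tiny instances can be solved by brute force over permutations; a real implementation would use a linear programming or optimal transport solver instead.

```python
from itertools import permutations
from math import dist

def wmd_similarity(set_a, set_b):
    """1 - WMD for two equal-sized embedding sets with uniform weights.

    With uniform weights 1/n, the optimal flow T sends each point to
    exactly one partner, so the WMD reduces to a minimum-cost perfect
    matching, found here by brute force (fine only for tiny sets).
    """
    n = len(set_a)
    assert len(set_b) == n
    best = min(
        sum(dist(set_a[i], set_b[p[i]]) for i in range(n)) / n
        for p in permutations(range(n))
    )
    return 1.0 - best

a = [(0.0, 0.0), (1.0, 0.0)]
b = [(1.0, 0.0), (0.0, 0.0)]   # same points, listed in another order
c = [(0.0, 0.6), (1.0, 0.6)]   # each point shifted by 0.6
# Identical sets score 1.0; shifting every point by 0.6 costs 0.6.
```

The similarity reaches 1 only when the two sets can be matched at zero cost, matching the normalization described above (assuming embeddings are scaled so distances stay below 1).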

Media bias estimation by word embedding

Word embedding models (Kenton and Toutanova, 2019; Le and Mikolov, 2014; Mikolov et al. 2013) are widely used in text-related tasks due to their ability to capture the rich semantics of natural language. In this study, we regard media bias in news articles as a special type of semantics and capture it using Word2Vec (Le and Mikolov, 2014; Mikolov et al. 2013).

Supplementary Information Fig. S2 presents the process of building media corpora and training word embedding models to capture media bias. First, we reorganize the corpus of each media outlet by up-sampling so that every media corpus contains the same number of news articles. The advantage of up-sampling is that it makes full use of the existing corpus data, as opposed to discarding part of the data as down-sampling does. Second, we combine all 12 media corpora into a large base corpus and pre-train a Word2Vec model, denoted W b a s e , on it. Third, we fine-tune the same pre-trained model W b a s e on the specific corpus of each media outlet separately, obtaining 12 fine-tuned models denoted \({W}^{{m}_{i}}\) ( i  = 1, 2, . . . , 12).

In particular, the main objective of reorganizing the original corpora is to ensure that each corpus contributes equally to the pre-training process, lest a large corpus from a particular media outlet dominate the pre-trained model. As shown in Supplementary Information Tab. S2, the largest corpus in 2016-2021 is from USA Today, which contains 295,518 news articles. We therefore reorganize the other 11 media corpora by up-sampling so that each of the 12 corpora has 295,518 articles. For example, NPR’s initial corpus has 14,654 news articles; we first duplicate it 295,518 // 14,654 = 20 times to obtain 293,080 articles and then randomly sample the remaining 295,518 mod 14,654 = 2,438 articles from the initial 14,654 as a supplement. This yields a reorganized NPR corpus with 295,518 articles.
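The up-sampling rule above can be written as a short helper. This is a sketch over a toy corpus of placeholder article strings; the seed parameter mirrors the paper’s use of different random seeds across repetitions.

```python
import random

def upsample_corpus(articles, target_size, seed=0):
    """Up-sample a corpus to exactly `target_size` articles.

    Duplicates the full corpus `target_size // len(articles)` times,
    then tops it up with `target_size % len(articles)` randomly
    sampled articles, mirroring the NPR example
    (295,518 // 14,654 = 20 copies plus 2,438 sampled articles).
    """
    n = len(articles)
    repeats, remainder = divmod(target_size, n)
    rng = random.Random(seed)
    return articles * repeats + rng.sample(articles, remainder)

corpus = [f"article-{i}" for i in range(14654)]     # toy NPR-sized corpus
resampled = upsample_corpus(corpus, 295518)
# len(resampled) == 295518; every original article appears 20 or 21 times
```

Because whole-corpus duplication precedes the random top-up, every article survives into the reorganized corpus, which is the stated advantage over down-sampling.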

Semantic Differential is a psychological technique proposed by Osgood et al. (1957) to measure people’s psychological attitudes toward a given conceptual object. In Semantic Differential theory, a given object’s semantic attributes can be evaluated along multiple dimensions. Each dimension consists of two poles corresponding to a pair of adjectives with opposite semantics (i.e., an antonym pair). The interval between the poles of each dimension is divided into seven equal parts, and respondents are asked to choose one of the seven parts in each dimension for the given object. The closer the chosen position is to a pole, the more closely the respondent believes the object is semantically related to the corresponding adjective. Supplementary Information Fig. S3 provides an example of Semantic Differential.

Constructing evaluation dimensions from antonym pairs, as Semantic Differential does, aligns with how people generally evaluate things. For example, when imagining the gender-related characteristics of an occupation (e.g., nurse), individuals usually weigh between “man” and “woman”, antonyms with respect to gender. Likewise, when forming an impression of the income level of Asians, people tend to weigh between “rich” (high income) and “poor” (low income), antonyms with respect to income. Given this consistency, we can naturally apply Semantic Differential to measure a media outlet’s attitudes towards different entities and concepts, i.e., media bias.

Specifically, given a media outlet m , a topic T (e.g., gender) and two semantically opposite topic word sets \(P={\{{p}_{i}\}}_{i = 1}^{{K}_{1}}\) and \(\neg P={\{\neg {p}_{i}\}}_{i = 1}^{{K}_{2}}\) about topic T , media m ’s bias towards a target word x can be defined as:
$$Bia{s}^{m}(x,T)=\frac{1}{{K}_{1}}\mathop{\sum }\limits_{i=1}^{{K}_{1}}Sim\left(\overrightarrow{{W}_{x}^{m}},\overrightarrow{{W}_{{p}_{i}}^{m}}\right)-\frac{1}{{K}_{2}}\mathop{\sum }\limits_{i=1}^{{K}_{2}}Sim\left(\overrightarrow{{W}_{x}^{m}},\overrightarrow{{W}_{\neg {p}_{i}}^{m}}\right)\qquad (5)$$
Here, K 1 and K 2 denote the number of words in the topic word sets P and ¬  P , respectively. W m represents the word embedding model obtained by fine-tuning W b a s e on the specific corpus of media m , and \(\overrightarrow{{W}_{x}^{m}}\) is the embedding of word x in W m . S i m is a similarity function measuring the similarity between two vectors (i.e., word embeddings); in practice, we employ the cosine similarity function commonly used in natural language processing. Equation (5) calculates the difference of average similarities between the target word x and the two semantically opposite topic word sets P and ¬  P . Similar to the antonym pairs in Semantic Differential, these two topic word sets construct the evaluation scale of media bias. In practice, to ensure the stability of the results, we repeated this experiment five times, each time with a different random seed for up-sampling; the final results shown in Fig. 4 are therefore the average bias values for each topic.
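The bias score is simply the average cosine similarity to one topic word set minus that to the opposite set. A minimal sketch, with a hypothetical two-dimensional toy embedding standing in for a fine-tuned Word2Vec model:

```python
from math import sqrt

def cosine(u, v):
    """Cosine similarity between two vectors."""
    num = sum(a * b for a, b in zip(u, v))
    return num / (sqrt(sum(a * a for a in u)) * sqrt(sum(b * b for b in v)))

def bias_score(embed, target, pos_words, neg_words):
    """Difference of average similarities between a target word and two
    semantically opposite topic word sets; positive scores lean towards
    `pos_words`. `embed` maps words to vectors."""
    pos = sum(cosine(embed[target], embed[w]) for w in pos_words) / len(pos_words)
    neg = sum(cosine(embed[target], embed[w]) for w in neg_words) / len(neg_words)
    return pos - neg

# Toy embeddings in which "nurse" sits nearer the female-related words.
embed = {
    "nurse": (0.9, 0.1),
    "woman": (1.0, 0.0), "she": (0.95, 0.05),
    "man":   (0.0, 1.0), "he":  (0.05, 0.95),
}
score = bias_score(embed, "nurse", ["woman", "she"], ["man", "he"])
# score > 0: this toy "outlet" associates nurse with female-related words
```

Swapping the two word sets negates the score, so the sign convention only fixes which pole counts as positive, exactly as with the bipolar scales of Semantic Differential.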

The idea of recovering media bias by embedding methods

We first analyzed media bias from the aspect of event selection to study which topics a media outlet tends to focus on or ignore. Based on the GDELT database, we constructed a large “media-event” matrix that records how many times each media outlet mentioned each event in news reports from February to April 2022. To extract media bias information, we followed the idea of Latent Semantic Analysis (Deerwester et al. 1990) and performed Truncated SVD (Halko et al. 2011) on this matrix to generate a vector representation (i.e., a media embedding) for each media outlet (see Methods for details). Outlets with similar event selection bias (i.e., outlets that often report on events of similar topics) will have similar media embeddings. The bias encoded in each outlet’s vector representation is exactly the first type of media bias we aim to study.

Then, we analyzed media bias in news articles to investigate the value judgments and attitudes media convey through their articles. We collected more than 1.2 million news articles from 12 mainstream US news outlets, spanning January 2016 to December 2021, via MediaCloud’s API. To identify media bias in each outlet’s corpus, we performed three sequential steps: (1) pre-train a Word2Vec word embedding model on all outlets’ corpora; (2) fine-tune the pre-trained model on the specific corpus of each outlet separately, obtaining 12 fine-tuned models corresponding to the 12 outlets; (3) quantify each outlet’s bias based on the corresponding fine-tuned model, combined with the idea of Semantic Differential, i.e., by measuring the embedding similarities between the target words and two sets of topic words with opposite semantics (see Methods for details). An example of using Semantic Differential (Osgood et al. 1957) to quantify media bias is shown in Supplementary Information Fig. S4.

Media show significant clustering due to their regions and organizations

In this experiment, we aimed to capture and analyze the event selection bias of different media outlets based on the proposed media embedding methodology. To achieve a comprehensive analysis, we selected 247 media outlets from 8 countries (Supplementary Information Tab. S6): six English-speaking nations (the United States, the United Kingdom, Canada, Australia, Ireland, and New Zealand) together with India and China, two populous countries. For each country, we chose the media outlets that were most active during February-April 2022, measuring media activity by the quantity of news reports. We then generated embedding representations for each media outlet via Truncated SVD and performed K-means clustering (Lloyd, 1982; MacQueen, 1967) on the obtained media embeddings (with K  = 10) for further analysis. Details of the experiment are presented in the first section of the Supplementary Information. Figure 2 visualizes the clustering results.
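The clustering step can be sketched with a minimal Lloyd’s-algorithm K-means. The experiment uses K = 10 over 247 outlets; here we use a toy K = 2 over five hypothetical two-dimensional media embeddings, with farthest-first initialisation to keep the sketch deterministic (the actual experiment may initialise differently).

```python
from math import dist

def kmeans(points, k, iters=50):
    """Minimal Lloyd's algorithm over embeddings given as tuples."""
    # Farthest-first initialisation: deterministic for this sketch.
    centers = [points[0]]
    while len(centers) < k:
        centers.append(max(points, key=lambda p: min(dist(p, c) for c in centers)))
    for _ in range(iters):
        # Assignment step: each point joins its nearest center.
        labels = [min(range(k), key=lambda c: dist(p, centers[c])) for p in points]
        # Update step: move each center to the mean of its members
        # (keep the old center if a cluster is empty).
        for c in range(k):
            members = [p for p, l in zip(points, labels) if l == c]
            if members:
                centers[c] = tuple(sum(xs) / len(members) for xs in zip(*members))
    return labels

# Two well-separated groups of toy "media embeddings".
pts = [(0.0, 0.1), (0.1, 0.0), (0.05, 0.05), (5.0, 5.1), (5.1, 5.0)]
labels = kmeans(pts, k=2)
# The first three outlets share one label, the last two the other.
```

Outlets whose embeddings encode similar event selection distributions land in the same cluster, which is what the country- and organization-level groupings in Fig. 2 reflect.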

Figure 2

There are 247 media outlets from 8 countries: Canada (CA), Ireland (IE), United Kingdom (UK), China (CN), United States (US), India (IN), Australia (AU), and New Zealand (NZ). Each circle in the visualization represents a media outlet, with its color indicating the cluster it belongs to, and its diameter proportional to the number of events reported by the outlet between February and April 2022. The text in each circle represents the name or abbreviation of a media outlet (See Supplementary Information Tab. S6 for details). The results indicate that media outlets from the same country tend to be grouped together in clusters. Moreover, the majority of media outlets in the Fox series form a distinct cluster, indicating a high degree of similarity in their event selection bias.

First, we find that media outlets from different countries tend to form distinct clusters, signifying the regional nature of media bias. Fig. 2 can be interpreted from two perspectives, both leading to this conclusion. On the one hand, most media outlets from the same country appear in a limited number of clusters, suggesting that they share similar event selection bias. On the other hand, media outlets in the same cluster mostly come from the same country, indicating that media exhibiting similar event selection bias tend to be from the same country. In our view, differences in geographical location give media outlets from different regions access to different initial event information, thus shaping the content they choose to report.

Besides, we observe an intriguing pattern: the Associated Press (AP) and Reuters, despite their geographical separation, are clustered together and thus share similar event selection biases. This seemingly anomalous result could be attributed to their status as international media outlets, which enables them to cover global events and thus leads to extensively overlapping news coverage. In addition, 16 of the 21 Fox series media outlets form a distinct cluster of their own, suggesting that a media outlet’s bias is strongly associated with the organization it belongs to. After all, media outlets within the same organization often prioritize or overlook the same events due to shared positions, interests, and other influencing factors.

International hot topics drive media bias to converge

Previous results revealed a significant correlation between media bias and the location of a media outlet. We therefore conducted an experiment to further investigate the event selection biases of media outlets from 25 different countries. To this end, we gathered GDELT data spanning February to April 2022 and created three “media-event” matrices, one per month. We then applied the same processing steps to each month’s matrix: (1) generating an embedding representation for each media outlet through matrix decomposition, (2) grouping the embeddings of the media outlets belonging to each country into a media embedding set, and (3) calculating the similarity between every pair of countries (i.e., every pair of media embedding sets) using the Word Mover’s Distance (WMD (Kusner et al. 2015)) as the similarity metric (see Methods for details). Figure 3 presents the changes in event selection bias similarity among media outlets from different countries between February and April 2022.

Figure 3

The horizontal axis represents time, measured in months, and the vertical axis indicates the event selection similarity between Ukrainian media and media from other countries. Each circle represents a country, with the text inside it giving the country’s abbreviation (see details in Supplementary Information Tab. S3). The size of a circle corresponds to the average event selection similarity between the media of that country and the media of all other countries, and the circle’s color corresponds to the vertical axis scale. The ordinate of the blue dotted line represents the median similarity to Ukrainian media.

We find that the similarities between Ukraine and other countries peaked significantly in March 2022. This result aligns with the timeline of the Russia-Ukraine conflict: the conflict broke out in late February, attracting media attention worldwide. In March, the conflict escalated and the regional situation became increasingly tense, leading to even more media coverage worldwide. By April, the prolonged conflict had made the international media accustomed to it, resulting in a decline in media interest. Furthermore, we observe that the event selection biases of media outlets in both EG (Egypt) and CN (China) differed significantly from those of other countries. Given that neither country is predominantly English-speaking, their English-language media outlets may have specific objectives, such as promoting their national image and culture, which could influence and constrain the topics they tend to cover.

Additionally, we observe that in March 2022, the country with the highest similarity to Ukraine was Russia, whereas in April it was Poland. This change can be attributed to the evolving regional situation. In March, when the conflict broke out, media reports primarily focused on the warring parties, Russia and Ukraine. As the war continued, its impact on Ukraine gradually became the focus of media coverage. For instance, the war drove large numbers of Ukrainian citizens to migrate to nearby countries, and Poland received the most Ukrainian refugees at that time.

Media show diverse biases on different topics

In this experiment, we took 12 mainstream US news outlets as examples and conducted a quantitative media bias analysis on three typical topics (Fan and Gardent, 2022; Puglisi and Snyder Jr, 2015b; Sun and Peng, 2021): gender bias (regarding occupations), political bias (regarding American states), and income bias (regarding race and ethnicity). The topic words for each topic are listed in Supplementary Information Tab. S4. These topic words are sourced from related literature (Caliskan et al. 2017) and search engines, along with the authors’ intuitive assessments.

Gender bias in terms of Occupation

In news coverage, media outlets may intentionally or unintentionally associate an occupation with a particular gender (e.g., stereotypes like police-man, nurse-woman). Such gender bias can subtly affect people’s attitudes towards different occupations and even impact employment fairness. To analyze gender biases in news coverage towards 8 common occupations (more occupations can be studied using the same methodology), we examined 12 mainstream US media outlets. As shown in Fig. 4a, all these outlets tend to associate “teacher” and “nurse” with women. In contrast, when reporting on “police,” “driver,” “lawyer,” and “scientist,” most outlets show bias towards men. As for “director” and “photographer,” only slightly more than half of the outlets show bias towards men. Supplementary Information Tab. S5 shows the proportion of women in the eight occupations in America according to the U.S. Bureau of Labor Statistics (BLS). Women dominate “teacher” and “nurse,” while the proportions of men in “police,” “driver,” and “lawyer” are significantly higher. Among “directors,” “scientists,” and “photographers,” the proportions of women and men are about the same. Comparing the experimental results with the BLS statistics, we find that these media outlets’ gender bias towards an occupation is highly consistent with the actual proportion of women (or men) in that occupation. This phenomenon highlights the potential for media outlets to perpetuate and reinforce existing gender bias in society, emphasizing the need for increased awareness of media bias. Note that we reorganized the corpus of each media outlet by up-sampling during data preprocessing, which introduced some randomness into the results (see Methods for details). We therefore set five different random seeds for up-sampling and repeated the experiment five times. A two-tailed t-test on the difference between the results shown in Fig. 4a and the results of the repeated experiments showed no significant difference (Supplementary Information Fig. S6).

Figure 4

Each column corresponds to a media outlet, and each row corresponds to a target word, usually an entity or concept in the news text. The color bar on the right describes the range of the bias value, with each interval of the bias value corresponding to a different color: as the bias value changes from negative to positive, the corresponding color changes from purple to yellow. Because the range of bias values differs across topics, the color bars of different topics can also vary. The color of each heatmap square corresponds to an interval in the color bar; specifically, the square in row i and column j represents the bias of media j when reporting on target i. a Gender bias about eight common occupations. b Income bias about four races or ethnicities. c Political bias about the top-10 “red states” (Wyoming, West Virginia, North Dakota, Oklahoma, Idaho, Arkansas, Kentucky, South Dakota, Alabama, Texas) and the top-10 “blue states” (Hawaii, Vermont, California, Maryland, Massachusetts, New York, Rhode Island, Washington, Connecticut, Illinois) according to the CPVI ranking (Ardehaly and Culotta, 2017). Limited by the page layout, only the top-8 results are shown here; see Supplementary Information Fig. S5 for the complete results.

Income bias in terms of Race and Ethnicity

Media coverage often discusses the income of different groups of people, including many races and ethnicities. Here, we investigate whether media outlets are biased in their income coverage, for example by associating a specific race or ethnicity with being rich or poor. To this end, we selected four US racial and ethnic groups as research subjects: Asian, African, Hispanic, and Latino. In line with previous studies (Grieco and Cassidy, 2015; Nerenz et al. 2009; Perez and Hirschman, 2009), we considered Asian and African as racial categories and Hispanic and Latino as ethnic categories. Following the income statistics from USCB Footnote 8, we do not strictly distinguish these concepts and compare them together.

As shown in Fig. 4b, for the majority of media outlets, Asian is most frequently associated with the rich, with ESPN being the only exception. This anomalous finding may be attributed to ESPN’s position as a sports media outlet, with a primary emphasis on sports that are particularly popular with Hispanic, Latino, and African-American audiences, such as soccer, basketball, and golf. Additionally, there is a significant disparity in the media’s income bias toward Africans, Hispanics, and Latinos. Specifically, the biases towards Hispanic and Latino populations are generally comparable, with both groups portrayed as richer than African Americans in most media coverage. According to the aforementioned income statistics of the U.S. population, the income rankings of different races and ethnicities have remained stable from 1950 to 2020: Asians have consistently had the highest income, followed by Hispanics, with African Americans having the lowest (the income of Black Americans is used as an approximation for African Americans). It is worth noting that USCB considers Hispanic and Latino to be the same ethnicity, although there are some controversies surrounding this practice (Mora, 2014; Rodriguez, 2000). These controversies are beyond the scope of this work, so we use Hispanic income as an approximation of Latino income, following USCB.

Comparing our experimental results with USCB’s income statistics, we find that the media outlets’ income bias towards different races and ethnicities is roughly consistent with their actual income levels. A two-tailed t-test on the difference between the results shown in Fig. 4b and the results of repeated experiments showed no significant difference (Supplementary Information Fig. S7).

Political bias in terms of Region

Numerous studies have shown that media outlets tend to publish politically biased news articles that support the political parties they favor while criticizing those they oppose (Lazaridou et al. 2020; Puglisi, 2011). For example, a report from Red State described liberals as regressive leftists with mental health issues. Conversely, a story from Right Wing News reported that Obama’s administration was terrible (Lazaridou et al. 2020). Such political inclinations hinder readers’ objective judgment of political events and affect their attitudes toward different political parties. We therefore analyzed the political biases of 12 mainstream US media outlets when discussing different US states, aiming to increase public awareness of such biases in news coverage.

As shown in Fig. 4c, in the reports of these media outlets, most red states lean Republican, while most blue states lean Democrat. Notably, some blue states, such as Hawaii and Maryland, also lean Republican. This anomaly can be attributed to the source of the corpus data used in this study. The corpus, which was used to train the word embedding models, spans January 2016 to December 2021. During this period, the Republican Party was in power, with Trump serving as president from January 2017 to January 2021, so the majority of the data was collected during a Republican administration. We suggest that Trump’s presidency resulted in increased media coverage of the Republican Party, causing some blue states to be associated more frequently with Republicans in news reports. A two-tailed t-test on the difference between the results shown in Fig. 4c and the results of repeated experiments showed no significant difference (Supplementary Information Figs. S8 and S9).

Media logic and news evaluation are two important concepts in social science. The former refers to the rules, conventions, and strategies that the media follow in the production, dissemination, and reception of information, reflecting the media’s organizational structure, commercial interests, and socio-cultural background (Altheide, 2015). The latter refers to the systematic analysis of the quality, effectiveness, and impact of news reports, involving multiple criteria and dimensions such as truthfulness, accuracy, fairness, balance, objectivity, and diversity. When studying media bias, media logic provides a framework for understanding the rules and patterns of media operations, while news evaluation helps identify and analyze potential biases in media reports. For example, to study the media’s political bias, prior studies (D’heer, 2018; Esser and Strömbäck, 2014) compared the frameworks, language, and perspectives used by traditional news media and social media in reporting political elections, so as to understand the impact of these differences on voters’ attitudes and behaviors. However, despite this progress, such methods often rely on manual observation and interpretation, making them inefficient and susceptible to human bias and error.

In this work, we propose an automated media bias analysis framework that enables us to uncover media bias on a large scale. To carry out this study, we amassed an extensive dataset comprising over 8 million event records and 1.2 million news articles from a diverse range of media outlets (see details of the data collection process in Methods). Our research delves into media bias from two distinct yet highly pertinent perspectives. From the macro perspective, we aim to uncover the event selection bias of each media outlet, i.e., which types of events a media outlet tends to report on. From the micro perspective, our goal is to quantify the bias of each media outlet in wording and sentence construction when composing news articles about the selected events. The experimental results align well with existing knowledge and relevant statistical data, indicating the effectiveness of embedding methods in capturing the characteristics of media bias. Our methodology is unified and intuitive. First, we train embedding models on real-world data to capture and encode media bias; at this step, we choose appropriate embedding methods based on the characteristics of each type of media bias (Deerwester et al. 1990; Le and Mikolov, 2014; Mikolov et al. 2013). Then, we extract media bias information from the resulting embedding models using various methods, including cluster analysis (Lloyd, 1982; MacQueen, 1967), similarity calculation (Kusner et al. 2015), and semantic differential (Osgood et al. 1957).
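As a toy illustration of the extraction step, the sketch below applies two of the methods mentioned, cluster analysis and similarity calculation, to made-up three-dimensional media embeddings (the paper's actual embeddings are trained on real data):

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.metrics.pairwise import cosine_similarity

# Hypothetical media embeddings (one row per outlet) standing in for
# the embeddings produced by the macro-level step of the framework.
media_emb = np.array([
    [0.9, 0.1, 0.0],   # outlet A: mostly latent topic 1
    [0.8, 0.2, 0.1],   # outlet B: similar to A
    [0.1, 0.9, 0.2],   # outlet C: mostly latent topic 2
    [0.0, 0.8, 0.3],   # outlet D: similar to C
])

# Cluster analysis groups outlets with similar event selection biases.
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(media_emb)
print(labels)  # A/B land in one cluster, C/D in the other

# Similarity calculation quantifies pairwise agreement between outlets.
sim = cosine_similarity(media_emb)
print(round(float(sim[0, 1]), 3))  # high similarity between A and B
```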

To capture the event selection biases of different media outlets, we apply Truncated SVD (Halko et al. 2011) to the “media-event” matrix to generate media embeddings. Truncated SVD is a widely used technique in NLP. In particular, LSA (Deerwester et al. 1990) applies Truncated SVD to the “document-word” matrix to capture the underlying topic-based semantic relationships between text documents and words. LSA assumes that a document tends to use relevant words when it talks about a particular topic, and it obtains a vector representation for each document in a latent topic space, where documents about similar topics are located near each other. By analogizing media outlets and events with documents and words, we can naturally apply Truncated SVD to explore media bias in the event selection process. Specifically, we assume that there are underlying topics governing a media outlet’s event selection. If a media outlet focuses on a topic, it will tend to report events related to that topic and ignore unrelated ones. Therefore, media outlets sharing similar event selection biases (i.e., those tending to report events about similar topics) will be close to each other in the latent topic space, which provides a good opportunity to study media bias (see Methods and Results for details).
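A minimal sketch of this step, using a made-up "media-event" count matrix rather than the paper's 8-million-record dataset, might look like this:

```python
import numpy as np
from sklearn.decomposition import TruncatedSVD
from sklearn.metrics.pairwise import cosine_similarity

# Toy "media-event" count matrix: rows are outlets, columns are events.
# Outlets 0 and 1 report similar events; outlet 2 covers different ones.
media_event = np.array([
    [5, 4, 0, 0, 1],
    [4, 5, 1, 0, 0],
    [0, 1, 6, 5, 4],
], dtype=float)

# Truncated SVD projects each outlet into a low-dimensional latent
# topic space, by analogy with LSA on a document-word matrix.
svd = TruncatedSVD(n_components=2, random_state=0)
media_emb = svd.fit_transform(media_event)

# Outlets with similar event selection biases lie close together:
sim = cosine_similarity(media_emb)
print(round(float(sim[0, 1]), 3), round(float(sim[0, 2]), 3))
```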

When describing something, relevant contexts must be considered. For instance, positive and negative impressions are conveyed through context words such as “diligent” and “lazy”, respectively. Similarly, a media outlet’s attitude towards something is reflected in the news context in which it is presented. Here, we study the association between each target and its news contexts based on the co-occurrence relationships between words. Our underlying assumption is that frequently co-occurring words are strongly associated, which aligns with the idea of word embedding models (Kenton and Toutanova, 2019; Le and Mikolov, 2014; Mikolov et al. 2013), where the embeddings of frequently co-occurring words are relatively similar. For example, suppose that in the corpus of media outlet M, the word “scientist” often co-occurs with female-related words (e.g., “woman”, “she”, etc.) but rarely with male-related words. Then, the semantic similarities of “scientist” with female-related words will be much higher than those with male-related words in the word embedding model, and we can conclude that media outlet M’s reporting on scientists is biased towards women.
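The co-occurrence intuition can be illustrated with a tiny hypothetical corpus for an outlet M. Here, raw sentence-level co-occurrence counts stand in for a trained word embedding model (which the paper actually uses):

```python
import numpy as np
from itertools import combinations

# Tiny hypothetical corpus for outlet M: "scientist" co-occurs mostly
# with female-related words.
sentences = [
    "she is a scientist and a woman".split(),
    "the scientist said she was pleased".split(),
    "her work as a scientist continues".split(),
    "he is a teacher".split(),
]

# Build a symmetric sentence-level co-occurrence count matrix.
vocab = sorted({w for s in sentences for w in s})
idx = {w: i for i, w in enumerate(vocab)}
cooc = np.zeros((len(vocab), len(vocab)))
for s in sentences:
    for a, b in combinations(set(s), 2):
        cooc[idx[a], idx[b]] += 1
        cooc[idx[b], idx[a]] += 1

def cos(u, v):
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v) + 1e-9))

# Mean similarity of "scientist" to each gender's context words.
female, male = ["she", "woman", "her"], ["he"]
sci = cooc[idx["scientist"]]
f_sim = np.mean([cos(sci, cooc[idx[w]]) for w in female])
m_sim = np.mean([cos(sci, cooc[idx[w]]) for w in male])
print(f_sim > m_sim)  # True: outlet M's "scientist" leans female
```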

According to the theory of Semantic Differential (Osgood et al. 1957 ), the difference in semantic similarities between “scientist” and female-related words versus male-related words can serve as an estimation of media M’s gender bias. Since we have kept all settings (e.g., corpus size, starting point for model fine-tuning, etc.) the same when training word embedding models for different media outlets, the estimated bias values can be interpreted as absolute ones within the same reference system. In other words, the estimated bias values for different media outlets are directly comparable in this study, with a value of 0 denoting unbiased and a value closer to 1 or -1 indicating a more pronounced bias.
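The Semantic Differential estimate can be sketched as follows, assuming made-up two-dimensional embeddings (the word lists and similarity measure are illustrative, not the paper's exact configuration):

```python
import numpy as np

def semantic_differential_bias(target_vec, pos_vecs, neg_vecs):
    """Bias of a target word along an antonym axis (e.g. female vs male):
    mean cosine similarity to one pole minus mean similarity to the
    other. 0 means unbiased; values nearer +1 or -1 indicate a more
    pronounced bias toward either pole."""
    def cos(u, v):
        return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))
    pos = np.mean([cos(target_vec, v) for v in pos_vecs])
    neg = np.mean([cos(target_vec, v) for v in neg_vecs])
    return pos - neg

# Hypothetical embeddings: "scientist" lies closer to the female pole.
scientist = np.array([0.9, 0.1])
female_words = [np.array([1.0, 0.0]), np.array([0.95, 0.05])]
male_words = [np.array([0.0, 1.0]), np.array([0.1, 0.9])]

bias = semantic_differential_bias(scientist, female_words, male_words)
print(round(bias, 3))  # positive: biased toward the female pole
```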

We note that prior literature has investigated the choice of events/topics and words/frames to measure media bias, such as partisan and ideological biases (Gentzkow et al. 2015; Puglisi and Snyder Jr, 2015b). However, our approach not only considers bias in the selective reporting of events (using event embeddings) but also studies biased wording in news texts (using word embeddings). The former focuses on the macro level, the latter on the micro level; the two perspectives are distinct yet highly relevant, but previous studies often consider only one of them. For the choice of events/topics, our approach also allows us to explore how selection changes over time; for example, we can analyze the time-varying similarities between media outlets from different countries, as shown in Fig. 3. For the choice of words/frames, prior work has either analyzed specific biases based on the frequency of particular words (Gentzkow and Shapiro, 2010; Gentzkow et al. 2006), which fails to capture deeper semantics in media language, or merely aggregated per-article analysis results over the corpus (e.g., calculating the sentiment of each article (Gentzkow et al. 2006; Lott Jr and Hassett, 2014; Soroka, 2012) or its similarity with certain authorship (Gentzkow and Shapiro, 2010; Groseclose and Milyo, 2005), then summing these up as the final bias value) without considering the relationships between different articles, thus lacking a holistic view. In contrast, our method, based on word embeddings (Le and Mikolov, 2014; Mikolov et al. 2013), directly models the semantic associations between all words and entities in the corpus with a neural network, offering advantages in capturing both semantic meaning and holistic structure.
Notably, we not only utilize word embedding techniques but also integrate them with appropriate psychological and sociological theories, such as Semantic Differential theory and Cognitive Miser theory. These theories endow our approach with better interpretability. In addition, our proposed method is a generalizable framework for studying media bias using embedding techniques. While this study has focused on validating its effectiveness on specific types of media bias, it can be applied to a broader range of media bias research. We will expand the application of this framework in future work.

As mentioned above, our proposed framework examines media bias from two distinct but highly relevant perspectives. Taking the Russia-Ukraine conflict as an example, we demonstrate how these two perspectives together provide researchers and the public with a more comprehensive and objective assessment of media bias. First, we can gather relevant news articles and event reporting records about the ongoing conflict from various media outlets worldwide and generate media and word embedding models. Then, according to the embedding similarities of different media outlets, we can judge which types of events each outlet tends to report and select several outlets that tend to report on different events. By synthesizing the news reports of the selected outlets, we can gain a more comprehensive understanding of the conflict instead of being limited to the information selectively provided by a few media. In addition, based on the word embedding model and the Semantic Differential-based bias estimation method, we can objectively judge each outlet’s attitude towards Russia and Ukraine (e.g., whether it tends to use positive or negative words to describe either party). Once a news outlet is found to be clearly biased, we should read its articles more carefully to avoid being misled.

Finally, despite its advantages, our framework has some shortcomings that warrant improvement. First, while the media embeddings generated by matrix decomposition successfully capture media bias in the event selection process, interpreting these continuous numerical vectors directly can be challenging. We hope future work will enable media embeddings to directly explain what each latent topic means and which topics a media outlet is most interested in, helping us understand media bias better. Second, since there is no absolute, independent ground truth on which events occurred and should have been covered, the aforementioned media selection bias should, strictly speaking, be understood as relative topic coverage, a narrower notion. Third, for topics involving more complex semantic relationships, estimating media bias using scales based on antonym pairs and Semantic Differential theory may not be feasible; this needs further investigation in the future.

Data availability

The data that support the findings of this study are available at https://github.com/CGCL-codes/media-bias .

Code availability

The code that supports the findings of this study is also available at https://github.com/CGCL-codes/media-bias .

https://fair.org/home/when-both-sides-are-covered-in-verizon-strike-bosses-side-is-heard-more/ .

These views were extracted from reports by some mainstream US media outlets in 2022 when the Democratic Party (left-wing) was in power.

https://www.nytimes.com/2022/09/26/world/middleeast/women-iran-protests-hijab.html .

https://www.nytimes.com/2014/08/08/world/asia/uighurs-veils-a-protest-against-chinas-curbs.html .

https://www.gdeltproject.org/ .

https://mediacloud.org/ .

https://www.bls.gov/cps/cpsaat11.htm .

https://www.census.gov/content/dam/Census/library/publications/2021/demo/p60-273.pdf .

Altheide, DL (2015) Media logic. The international encyclopedia of political communication, pages 1–6

Ansolabehere S, Lessem R, Snyder Jr JM (2006) The orientation of newspaper endorsements in us elections, 1940–2002. Quarterly Journal of political science 1(4):393

Ardehaly, EM, Culotta, A (2017) Mining the demographics of political sentiment from twitter using learning from label proportions. In 2017 IEEE international conference on data mining (ICDM), pages 733–738. IEEE

Baron DP (2006) Persistent media bias. Journal of Public Economics 90(1-2):1–36

Bovet A, Makse HA (2019) Influence of fake news in twitter during the 2016 us presidential election. Nature communications 10(1):1–14

Caliskan A, Bryson JJ, Narayanan A (2017) Semantics derived automatically from language corpora contain human-like biases. Science 356(6334):183–186

D’Alessio D, Allen M (2000) Media bias in presidential elections: A meta-analysis. Journal of communication 50(4):133–156

Deerwester S, Dumais ST, Furnas GW, Landauer TK, Harshman R (1990) Indexing by latent semantic analysis. Journal of the American society for information science 41(6):391–407

DellaVigna S, Kaplan E (2008) The political impact of media bias. Information and Public Choice, page 79

Downs A (1957) An economic theory of political action in a democracy. Journal of political economy 65(2):135–150

D’heer E (2018) Media logic revisited. the concept of social media logic as alternative framework to study politicians’ usage of social media during election times. Media logic (s) revisited: Modelling the interplay between media institutions, media technology and societal change, pages 173–194

Esser F, Strömbäck J (2014) Mediatization of politics: Understanding the transformation of Western democracies. Springer

Fan A, Gardent, C (2022) Generating biographies on Wikipedia: The impact of gender bias on the retrieval-based generation of women biographies. In Proceedings of the Conference of the 60th Annual Meeting of the Association for Computational Linguistics (ACL)

Firth, JR (1957) A synopsis of linguistic theory, 1930–1955. Studies in linguistic analysis

Fiske ST, Taylor SE (1991) Social cognition. Mcgraw-Hill Book Company

Galtung J, Ruge MariHolmboe (1965) The structure of foreign news: The presentation of the congo, cuba and cyprus crises in four norwegian newspapers. Journal of peace research 2(1):64–90

Gentzkow M, Shapiro JM (2010) What drives media slant? evidence from us daily newspapers. Econometrica 78(1):35–71

Gentzkow M, Glaeser EL, Goldin C (2006) The rise of the fourth estate. how newspapers became informative and why it mattered. In Corruption and reform: Lessons from America’s economic history, pages 187–230. University of Chicago Press

Gentzkow M, Shapiro JM, Stone DF (2015) Media bias in the marketplace: Theory. In Handbook of Media Economics, volume 1, pages 623–645. Elsevier

Grand G, Blank IdanAsher, Pereira F, Fedorenko E (2022) Semantic projection recovers rich human knowledge of multiple object features from word embeddings. Nature Human Behaviour 6(7):975–987

Grieco EM, Cassidy RC (2015) Overview of race and hispanic origin: Census 2000 brief. In ’Mixed Race’Studies, pages 225–243. Routledge

Groseclose T, Milyo J (2005) A measure of media bias. The Quarterly Journal of Economics 120(4):1191–1237

Grossmann, Matt and Hopkins, David A (2016) Asymmetric politics: Ideological Republicans and group interest Democrats . Oxford University Press

Halko N, Martinsson Per-Gunnar, Tropp JA (2011) Finding structure with randomness: Probabilistic algorithms for constructing approximate matrix decompositions. SIAM review 53(2):217–288

Hamborg F, Donnay K, Gipp B (2019) Automated identification of media bias in news articles: an interdisciplinary literature review. International Journal on Digital Libraries 20(4):391–415

Haraldsson A, Wängnerud L (2019) The effect of media sexism on women’s political ambition: evidence from a worldwide study. Feminist media studies 19(4):525–541

Harcup T, O’neill D (2001) What is news? galtung and ruge revisited. Journalism studies 2(2):261–280

Harcup T, O’neill D (2017) What is news? news values revisited (again). Journalism studies 18(12):1470–1488

Harris ZS (1954) Distributional structure. Word 10(2-3):146–162

Harwood TG, Garry T (2003) An overview of content analysis. The marketing review 3(4):479–498

Ho DE, Quinn KM et al. (2008) Measuring explicit political positions of media. Quarterly Journal of Political Science 3(4):353–377

Huang H, Chen Z, Shi X, Wang C, He Z, Jin H, Zhang M, Li Z (2021) China in the eyes of news media: a case study under covid-19 epidemic. Frontiers of Information Technology & Electronic Engineering 22(11):1443–1457

Huang P-S, Zhang H, Jiang R, Stanforth R, Welbl J, Rae J, Maini V, Yogatama D, Kohli P (2020) Reducing sentiment bias in language models via counterfactual evaluation. In Findings of the Association for Computational Linguistics: EMNLP 2020, pages 65–83

Devlin J, Chang MW, Lee K, Toutanova K. Bert: Pre-training of deep bidirectional transformers for language understanding. In Proceedings of NAACL-HLT, pages 4171–4186, (2019)

Kusner M, Sun Y, Kolkin N, Weinberger K. From word embeddings to document distances. In International conference on machine learning, pages 957–966. PMLR, (2015)

Larcinese V, Puglisi R, Snyder Jr JM (2011) Partisan bias in economic news: Evidence on the agenda-setting behavior of us newspapers. Journal of public Economics 95(9–10):1178–1189

Lazaridou K, Löser A, Mestre M, Naumann F (2020) Discovering biased news articles leveraging multiple human annotations. In Proceedings of the 12th Language Resources and Evaluation Conference, pages 1268–1277

Le, Q, Mikolov, T (2014) Distributed representations of sentences and documents. In International conference on machine learning, pages 1188–1196. PMLR

Liu, R, Jia, C, Wei, J, Xu, G, Wang, L, Vosoughi, S (2021) Mitigating political bias in language models through reinforced calibration. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 35, pages 14857–14866

Liu R, Wang L, Jia, C, Vosoughi, S (2021) Political depolarization of news articles using attribute-aware word embeddings. In Proceedings of the 15th International AAAI Conference on Web and Social Media (ICWSM 2021)

Lloyd S (1982) Least squares quantization in pcm. IEEE transactions on information theory 28(2):129–137

Lott Jr JR, Hassett KA (2014) Is newspaper coverage of economic events politically biased? Public Choice 160(1–2):65–108

Lühiste M, Banducci S (2016) Invisible women? comparing candidates’ news coverage in Europe. Politics & Gender 12(2):223–253

MacGregor, B (1997) Live, direct and biased?: Making television news in the satellite age

MacQueen, J (1967) Classification and analysis of multivariate observations. In 5th Berkeley Symp. Math. Statist. Probability, pages 281–297

Merloe P (2015) Authoritarianism goes global: Election monitoring vs. disinformation. Journal of Democracy 26(3):79–93

Mikolov T, Chen K, Corrado GS, Dean J (2013) Efficient estimation of word representations in vector space. In International Conference on Learning Representations

Mora, GC (2014) Making Hispanics: How activists, bureaucrats, and media constructed a new American. University of Chicago Press

Nerenz DR, McFadden B, Ulmer C et al. (2009) Race, ethnicity, and language data: standardization for health care quality improvement

Niven, David (2002). Tilt?: The search for media bias. Greenwood Publishing Group

Osgood, Charles Egerton, Suci, George J and Tannenbaum, Percy H (1957) The measurement of meaning. Number 47. University of Illinois Press

Papacharissi Z, de Fatima Oliveira M (2008) News frames terrorism: A comparative analysis of frames employed in terrorism coverage in US and UK newspapers. The international journal of press/politics 13(1):52–74

Park S, Kang S, Chung, S, Song, J (2009) Newscube: delivering multiple aspects of news to mitigate media bias. In Proceedings of the SIGCHI conference on human factors in computing systems, pages 443–452

Paul R, Elder L (2004) The thinkers guide for conscientious citizens on how to detect media bias & propaganda in national and world news: Based on critical thinking concepts & tools

Perez AnthonyDaniel, Hirschman C (2009) The changing racial and ethnic composition of the US population: Emerging American identities. Population and development review 35(1):1–51

Puglisi, R (2011) Being the New York times: the political behaviour of a newspaper. The BE journal of economic analysis & policy 11(1)

Puglisi R, Snyder Jr JM (2015a) The balanced US press. Journal of the European Economic Association 13(2):240–264

Puglisi, Riccardo and Snyder Jr, James M (2015b) Empirical studies of media bias. In Handbook of media economics, volume 1, pages 647–667. Elsevier

Qiang J, Zhang F, Li Y, Yuan Y, Zhu Y, Wu X (2023) Unsupervised statistical text simplification using pre-trained language modeling for initialization. Frontiers of Computer Science 17(1):171303

Rodriguez, CE (2000) Changing race: Latinos, the census, and the history of ethnicity in the United States, volume 41. NYU Press

Ross K, Carter C (2011) Women and news: A long and winding road. Media, Culture & Society 33(8):1148–1165

Sahlgren M (2008) The distributional hypothesis. Italian Journal of Linguistics 20:33–53

Soroka SN (2012) The gatekeeping function: distributions of information in media and the real world. The Journal of Politics 74(2):514–528

Stanovich KE (2009) What intelligence tests miss: The psychology of rational thought. Yale University Press

Stroud NatalieJomini (2010) Polarization and partisan selective exposure. Journal of Communication 60(3):556–576

Sun J, Peng N (2021) Men are elected, women are married: Events gender bias on wikipedia. In Proceedings of the Conference of the 59th Annual Meeting of the Association for Computational Linguistics (ACL)

Sunstein C (2002) The law of group polarization. Journal of Political Philosophy 10:175–195

Tahmasbi F, Schild L, Ling C, Blackburn J, Stringhini G, Zhang Y, Zannettou S (2021) “go eat a bat, chang!”: On the emergence of sinophobic behavior on web communities in the face of covid-19. In Proceedings of the Web Conference, pages 1122–1133

Vaismoradi M, Turunen H, Bondas T (2013) Content analysis and thematic analysis: Implications for conducting a qualitative descriptive study. Nursing & health sciences 15(3):398–405

Wang T, Lin XV, Rajani NF, McCann B, Ordonez V, Xiong, C (2020). Double-hard debias: Tailoring word embeddings for gender bias mitigation. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 5443–5453

White DavidManning (1950) The “gate keeper”: a case study in the selection of news. Journalism Quarterly 27(4):383–390

Zeng Y, Li Z, Chen Z, Ma H (2023) Aspect-level sentiment analysis based on semantic heterogeneous graph convolutional network. Frontiers of Computer Science 17(6):176340

Zhang Y, Wang H, Yin G, Wang T, Yu Y (2017) Social media in github: the role of@-mention in assisting software development. Science China Information Sciences 60(3):1–18

Acknowledgements

The work is supported by the National Natural Science Foundation of China (No. 62127808).

Author information

Authors and Affiliations

National Engineering Research Center for Big Data Technology and System, Wuhan, China

Hong Huang, Hua Zhu & Hai Jin

Services Computing Technology and System Lab, Wuhan, China

Cluster and Grid Computing Lab, Wuhan, China

School of Computer Science and Technology, Wuhan, China

Hong Huang, Hua Zhu, Wenshi Liu & Hai Jin

Huazhong University of Science and Technology, Wuhan, China

Hong Huang, Hua Zhu, Wenshi Liu, Hua Gao & Hai Jin

DIRO, Université de Montréal & Mila & Canada CIFAR AI Chair, Montreal, Canada

Contributions

HH: conceptualization, writing-review & editing, supervision; HZ: software, writing-original draft, data curation; WSL: software; HG and HJ: resources; BL: methodology, writing-review & editing, supervision.

Corresponding author

Correspondence to Hong Huang .

Ethics declarations

Competing interests.

The authors declare no competing interests.

Ethical approval

Ethical approval is not required as the study does not involve human participants.

Informed consent

This article does not contain any studies with human participants performed by any of the authors.

Additional information

Publisher’s note Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Supplementary information

Supplementary material

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article’s Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article’s Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/ .

Reprints and permissions

About this article

Cite this article.

Huang, H., Zhu, H., Liu, W. et al. Uncovering the essence of diverse media biases from the semantic embedding space. Humanit Soc Sci Commun 11, 656 (2024). https://doi.org/10.1057/s41599-024-03143-w

Received : 26 February 2023

Accepted : 07 May 2024

Published : 22 May 2024

DOI : https://doi.org/10.1057/s41599-024-03143-w


How to Spot 16 Types of Media Bias

Journalism is tied to a set of ethical standards and values, including truth and accuracy, fairness and impartiality, and accountability. However, journalism today often strays from objective fact, resulting in biased news and endless examples of media bias.

Media bias isn't necessarily a bad thing. But hidden bias misleads, manipulates and divides us. This is why AllSides provides hundreds of media bias ratings, a balanced newsfeed, the AllSides Media Bias Chart™, and the AllSides Fact Check Bias Chart™.

72 percent of Americans believe traditional news sources report fake news, falsehoods, or content that is purposely misleading. With trust in media declining, media consumers must learn how to spot different types of media bias.

This page outlines 16 types of media bias, along with examples of the different types of bias being used in popular media outlets.

Related: 14 Types of Ideological Bias

16 Types of Media Bias and How to Spot Them

  • Unsubstantiated Claims
  • Opinion Statements Presented as Facts
  • Sensationalism/Emotionalism
  • Mudslinging/Ad Hominem
  • Mind Reading
  • Flawed Logic
  • Bias by Omission
  • Omission of Source Attribution
  • Bias by Story Choice and Placement
  • Subjective Qualifying Adjectives
  • Word Choice
  • Negativity Bias
  • Elite v. Populist Bias

1. Spin

Spin is a type of media bias that relies on vague, dramatic, or sensational language. When journalists put a “spin” on a story, they stray from objective, measurable facts. Spin clouds a reader’s view, preventing them from getting a precise take on what happened.

In the early 20th century, public relations and advertising executives were referred to as “spin doctors.” They would use vague language and make unsupportable claims to promote a product, service, or idea, downplaying any alternative views in order to make a sale. Increasingly, these tactics are appearing in journalism.

Examples of Spin Words and Phrases:

  • High-stakes
  • Latest in a string of...
  • Turn up the heat
  • Stern talks
  • Facing calls to...
  • Even though
  • Significant

Sometimes the media uses spin words and phrases to imply bad behavior . These words are often used without providing hard facts, direct quotes, or witnessed behavior:

  • Acknowledged
  • Refusing to say
  • Came to light

To stir emotions, reporters often substitute colored, dramatic, or sensational words for the neutral word “said.” For example:

  • Frustration

Examples of Spin Media Bias:


“Gloat” means “contemplate or dwell on one's own success or another's misfortune with smugness or malignant pleasure.” Is there evidence in Trump’s tweet to show he is being smug or taking pleasure in the layoffs, or is this a subjective interpretation?

Source article

Business Insider Bias Rating


In this example of spin media bias, the Washington Post uses a variety of dramatic, sensationalist words to spin the story to make Trump appear emotional and unhinged. They also refer to the president's "vanity" without providing supporting evidence.

Washington Post Bias Rating


2. Unsubstantiated Claims

Journalists sometimes make claims in their reporting without including evidence to back them up. This can occur in the headline of an article, or in the body.

Statements that appear to be fact, but do not include specific evidence, are a key indication of this type of media bias.

Sometimes, websites or media outlets publish stories that are totally made up. This is often referred to as a type of fake news.

Examples of Unsubstantiated Claims Media Bias


In this media bias instance, The Daily Wire references a "longstanding pattern," but does not back this up with evidence.

The Daily Wire Bias Rating


In late January 2019, actor Jussie Smollett claimed he was attacked by two men who hurled racial and homophobic slurs. The Hill refers to “the violent attack” without using the word “alleged” or “allegations." The incident was revealed to be a hoax created by Smollett himself.

The Hill Bias Rating


This Washington Post columnist makes a claim about wealth distribution without noting where it came from. Who determined this number and how?

3. Opinion Statements Presented as Fact

Journalists sometimes use subjective language or statements under the guise of objective reporting: even when a media outlet presents an article as a factual, objective news piece, it may employ subjective statements or language.

A subjective statement is one that is based on personal opinions, assumptions, beliefs, tastes, preferences, or interpretations. It reflects how the writer views reality, what they presuppose to be the truth. It is a statement colored by their specific perspective or lens and cannot be verified using concrete facts and figures within the article.

There are objective modifiers — “blue,” “old,” “single-handedly,” “statistically,” “domestic” — for which the meaning can be verified. On the other hand, there are subjective modifiers — “suspicious,” “dangerous,” “extreme,” “dismissively,” “apparently” — which are a matter of interpretation.

Interpretation can present the same events as two very different incidents. For instance, a political protest in which people sat down in the middle of a street blocking traffic to draw attention to their cause can be described as “peaceful” and “productive,” or, others may describe it as “aggressive” and “disruptive.”

Examples of Words Signaling Subjective Statements:

  • Good/Better/Best
  • Is considered to be
  • May mean that
  • Bad/Worse/Worst
  • It's likely that

Source: Butte College Critical Thinking Tipsheet

An objective statement, on the other hand, is a statement about observable facts. It is based not on emotions or personal opinion but on empirical evidence — what is quantifiable and measurable.

It’s important to note that an objective statement may not actually be true. The following are all objective statements that can be verified as true or false:

  • Taipei 101 is the world's tallest building.
  • Five plus four equals ten.
  • There are nine planets in our solar system.

Each of these can be checked against evidence: it is possible to verify the heights of buildings and determine whether Taipei 101 tops them all, to demonstrate that five plus four does not equal ten, and to use established criteria to determine whether Pluto is a planet. (The second and third statements are false; the first was true when the source tipsheet was written, though Taipei 101 has since been surpassed.)

Editorial reviews by AllSides found that some media outlets blur the line between subjective and objective statements, leading to potential confusion for readers, in two key ways that fall under this type of media bias:

  • Including subjective statements in their writing without attributing them to a source (see Omission of Source Attribution).
  • Placing opinion or editorial content on the homepage next to hard news, or otherwise not clearly marking opinion content as “opinion.”

Explore logical fallacies that are often used by opinion writers.

Examples of Opinion Statements Presented as Fact


The sub-headline Vox uses is an opinion statement — some people likely believe the lifting of the gas limit will strengthen the coal industry — but Vox included this statement in a piece not labeled “Opinion.”

Vox Bias Rating


In this article about Twitter CEO Elon Musk banning reporters, the word "seemingly" signals that the journalist is offering a personal opinion that Musk's decisions are "arbitrary." Whether or not Musk's decisions are arbitrary is a matter of personal opinion and should be reserved for the opinion pages.

SFGate Rating


In this article about Hillary Clinton’s appearance on "The Late Show With Stephen Colbert," the author makes an assumption about Clinton’s motives and jumps to a subjective conclusion.

Fox News Bias Rating

4. Sensationalism/Emotionalism

Sensationalism is a type of media bias in which information is presented in a way that gives a shock or makes a deep impression. Often it gives readers a false sense of culmination, as if all previous reporting has led to this one ultimate story.

Sensationalist language is often dramatic, yet vague. It often involves hyperbole — at the expense of accuracy — or warping reality to mislead or provoke a strong reaction in the reader.

In recent years, some media outlets have been criticized for overusing the term “breaking” or “breaking news,” which historically was reserved for stories of deep impact or wide-scale importance.

With this type of media bias, reporters often increase the readability of their pieces using vivid verbs. But many verbs are heavy with implications that can’t be objectively corroborated: “blast,” “slam,” “bury,” “abuse,” “destroy,” “worry.”

Examples of Words and Phrases Used by the Media that Signal Sensationalism and Emotionalism:

  • Embroiled in...
  • Torrent of tweets

Examples of Sensationalism/Emotionalism Media Bias


“Gawk” means to stare or gape stupidly. Does AP’s language treat this event as serious and diplomatic, or as entertainment?

AP Bias Rating


Here, BBC uses sensationalism in the form of hyperbole, as the election is unlikely to involve bloodshed in the literal sense.

BBC Bias Rating


In this piece from the New York Post, the author uses multiple sensationalist phrases and emotional language to dramatize the “Twitter battle."

New York Post Bias Rating

5. Mudslinging/Ad Hominem

Mudslinging is a type of media bias when unfair or insulting things are said about someone in order to damage their reputation. Similarly, ad hominem (Latin for “to the person”) attacks are attacks on a person’s motive or character traits instead of the content of their argument or idea. Ad hominem attacks can be used overtly, or as a way to subtly discredit someone without having to engage with their argument.

Examples of Mudslinging


A Reason editor calls a New York Times columnist a "snowflake" after the columnist emailed a professor and his provost to complain about a tweet calling him a bedbug.

Reason Bias Rating


In March 2019, The Economist ran a piece describing political commentator and author Ben Shapiro as “alt-right.” Readers pointed out that Shapiro is Jewish (the alt-right is largely anti-Semitic) and has condemned the alt-right. The Economist issued a retraction and instead referred to Shapiro as a “radical conservative.”

Source: The Economist Twitter

6. Mind Reading

Mind reading is a type of media bias that occurs in journalism when a writer assumes they know what another person thinks, or thinks that the way they see the world reflects the way the world really is.

Examples of Mind Reading


We can’t objectively measure that Trump hates looking foolish, because we can’t read his mind or know what he is feeling. There is also no evidence provided to demonstrate that Democrats believe they have a winning hand.

CNN Bias Rating


How do we know that Obama doesn’t have passion or a sense of purpose? Here, the National Review writer assumes they know what is going on in Obama’s head.

National Review Bias Rating


Vox is upfront about the fact that they are interpreting what Neeson said. Yet this interpretation ran in a piece labeled objective news — not a piece in the Opinion section. Despite being overt about interpreting, by drifting away from what Neeson actually said, Vox is mind reading.

7. Slant

Slant is a type of media bias that describes when journalists tell only part of a story, or when they highlight, focus on, or play up one particular angle or piece of information. It can include cherry-picking information or data to support one side, or ignoring another perspective. Slant prevents readers from getting the full story, and narrows the scope of our understanding.

Examples of Slant


In the above example, Fox News notes that Rep. Alexandria Ocasio-Cortez’s policy proposals have received “intense criticism.” While this is true, it is only one side of the picture, as the Green New Deal was received well by other groups.


Here, Snopes does not indicate or investigate why police made sweeps (did they have evidence criminal activity was occurring in the complex?), nor did Snopes ask police for their justification, giving a one-sided view. In addition, the studies cited show only that Black Americans are more likely to be arrested for drug possession, not for all crimes.

Snopes Bias Rating

8. Flawed Logic

Flawed logic or faulty reasoning is a way to misrepresent people’s opinions or to arrive at conclusions that are not justified by the given evidence. Flawed logic can involve jumping to conclusions or arriving at a conclusion that doesn’t follow from the premise.

Examples of Flawed Logic


Here, the Daily Wire interprets a video to draw conclusions that aren’t clearly supported by the available evidence. The video shows Melania did not extend her hand to shake, but it could be because Clinton was too far away to reach, or perhaps there was no particular reason at all. By jumping to conclusions that this amounted to a “snub” or was the result of “bitterness” instead of limitations of physical reality or some other reason, The Daily Wire is engaging in flawed logic.

9. Bias by Omission

Bias by omission is a type of media bias in which media outlets choose not to cover certain stories, omit information that would support an alternative viewpoint, or omit voices and perspectives on the other side.

Media outlets sometimes omit stories in order to serve a political agenda. Sometimes, a story will only be covered by media outlets on a certain side of the political spectrum. Bias by omission also occurs when a reporter does not interview both sides of a story — for instance, interviewing only supporters of a bill, and not including perspectives against it.

Examples of Media Bias by Omission


In a piece titled, "Hate crimes are rising, regardless of Jussie Smollett's case. Here's why," CNN claims that hate crime incidents rose for three years, but omits information that may lead the reader to different conclusions. According to the FBI’s website, reports of hate crime incidents rose from previous years, but so did the number of agencies reporting, “with approximately 1,000 additional agencies contributing information.” This makes it unclear whether hate crimes are actually on the rise, as the headline claims, or simply appear to be because more agencies are reporting.

10. Omission of Source Attribution

Omission of source attribution is when a journalist does not back up their claims by linking to the source of that information. An informative, balanced article should provide the background or context of a story, including naming sources (publishing “on-the-record” information).

For example, journalists will often mention "baseless claims," "debunked theories," or note someone "incorrectly stated" something without including background information or linking to another article that would reveal how they concluded the statement is false or debunked. Or, reporters will write that “immigration opponents say," "critics say," or “supporters of the bill noted” without identifying who these sources are.

It is sometimes useful or necessary to use anonymous sources, because insider information is only available if the reporter agrees to keep their identity secret. But responsible journalists should be aware and make it clear that they are offering second-hand information on sensitive matters. This fact doesn’t necessarily make the statements false, but it does make them less than reliable.

Examples of Media Bias by Omission of Source Attribution


In this paragraph, The New York Times says Trump "falsely claimed" millions had voted illegally; they link to Trump's tweet, but not to a source of information that would allow the reader to determine Trump's claim is false.

The New York Times Bias Rating


In this paragraph, the Epoch Times repeatedly states "critics say" without attributing the views to anyone specific.

The Epoch Times Bias Rating


In a piece about the Mueller investigation, The New York Times never names the investigators, officials or associates mentioned.

11. Bias by Story Choice and Placement

Story choice, as well as story and viewpoint placement, can reveal media bias by showing which stories or viewpoints the editor finds most important.

Bias by story choice is when a media outlet's bias is revealed by which stories the outlet chooses to cover or to omit. For example, an outlet that chooses to cover the topic of climate change frequently can reveal a different political leaning than an outlet that chooses to cover stories about gun laws. The implication is that the outlet's editors and writers find certain topics more notable, meaningful, or important than others, which can tune us into the outlet's political bias or partisan agenda. Bias by story choice is closely linked to media bias by omission and slant .

Bias by story placement is one type of bias by placement. The stories that a media outlet features "above the fold" or prominently on its homepage and in print show which stories they really want you to read, even if you read nothing else on the site or in the publication. Many people will quickly scan a homepage or read only a headline, so the stories that are featured first can reveal what the editor hopes you take away or keep top of mind from that day.

Bias by viewpoint placement is a related type of bias by placement, often seen in political stories. A balanced piece of journalism will include perspectives from both the left and the right in equal measure. If a story features viewpoints from only left-leaning sources and commentators, or places them near the top of the story while burying right-leaning viewpoints at the end or omitting them entirely, this is an example of bias by viewpoint placement.

Examples of Media Bias by Placement


In this screenshot of ThinkProgress' homepage taken at 1 p.m. ET on Sept. 6, 2019, the media outlet chooses to prominently display coverage of LGBT issues and cuts to welfare and schools programs. In the next screenshot of The Epoch Times homepage taken at the same time on the same day, the outlet privileges very different stories.


Taken at the same time on the same day as the screenshot above, The Epoch Times chooses to prominently feature stories about a hurricane, the arrest of illegal immigrants, Hong Kong activists, and the building of the border wall. Notice that ThinkProgress' headline on the border wall focuses on diverting funds from schools and day cares, while the Epoch Times headline focuses on the wall's completion.

12. Subjective Qualifying Adjectives

Journalists can reveal bias when they include subjective, qualifying adjectives in front of specific words or phrases. Qualifying adjectives are words that characterize or attribute specific properties to a noun. When a journalist uses qualifying adjectives, they are suggesting a way for you to think about or interpret the issue, instead of just giving you the facts and letting you make judgements for yourself. This can manipulate your view. Subjective qualifiers are closely related to spin words and phrases , because they obscure the objective truth and insert subjectivity.

For example, a journalist who writes that a politician made a "serious allegation" is interpreting the weight of that allegation for you. An unbiased piece of writing would simply tell you what the allegation is, and allow you to make your own judgement call as to whether it is serious or not.

In opinion pieces, subjective adjectives are okay; they become a problem when they are inserted outside of the opinion pages and into hard news pieces.

Sometimes the use of an adjective may be warranted, but journalists have to exercise that judgement carefully. For instance, it may be fair to call a Supreme Court ruling that overturned a major law a "landmark case." Often, though, adjectives are included in ways that not everyone would agree with; people who favor limiting abortion, for instance, would likely not accept a journalist characterizing new restrictions as a "disturbing trend." It is therefore important to notice, question, and challenge the adjectives journalists use, and to decide for yourself whether they are warranted.

Examples of Subjective Qualifying Adjectives

  • disturbing rise
  • serious accusations
  • troubling trend
  • sinister warning
  • awkward flaw
  • extreme law
  • baseless claim
  • debunked theory (this phrase could coincide with bias by omission, if the journalist doesn't include information for you to determine why the theory is false)
  • critical bill
  • offensive statement
  • harsh rebuke
  • extremist group
  • far-right/far-left organization


HuffPost's headline includes the phrases "sinister warning" and "extremist Republican." It goes on to note the politician's "wild rant" in a "frothy interview" and calls a competing network "far-right." These qualifying adjectives encourage the reader to think a certain way. A more neutral piece would have told the reader what Cawthorn said without telling the reader how to interpret it.

HuffPost bias rating

13. Word Choice

Words and phrases are loaded with political implications. The words or phrases a media outlet uses can reveal their perspective or ideology.

Liberals and conservatives often strongly disagree about the best way to describe hot-button issues. For example, a liberal journalist who favors abortion access may call it "reproductive healthcare," or refer to supporters as "pro-choice." Meanwhile, a conservative journalist would likely not use these terms — to them, this language softens an immoral or unjustifiable act. Instead, they may call people who favor abortion access "pro-abortion" rather than "pro-choice."

Word choice can also reveal how journalists see the very same event very differently. For instance, one journalist may call an incident of civil unrest a "racial justice protest" to focus readers' attention on the protesters' policy angles and advocacy; meanwhile, another journalist calls it a "riot" to focus readers' attention on looting and property destruction that occurred.

Words and their meanings are often shifting in the political landscape. The very same words and phrases can mean different things to different people. AllSides offers a Red Blue Translator to help readers understand how people on the left and right think and feel differently about the same words and phrases.

Examples of Polarizing Word Choices

  • pro-choice | anti-choice
  • pro-abortion | anti-abortion
  • gun rights | gun control
  • riot | protest
  • illegal immigrants | migrants
  • illegal alien | asylum-seeking migrants
  • woman | birthing person
  • voting rights | voting security
  • sex reassignment surgery | gender-affirming care
  • critical race theory | anti-racist education

Examples of Word Choice Bias


An outlet on the left calls Florida's controversial Parental Rights in Education law the "Don't Say Gay" bill, using language favored by opponents, while an outlet on the right calls the same bill the "FL education bill," signaling a supportive view.

USA Today source article

USA TODAY media bias rating

Fox News source article

Fox News media bias rating

14. Photo Bias

Photos can be used to shape the perception, emotions or takeaway a reader will have regarding a person or event. Sometimes a photo can give a hostile or favorable impression of the subject.

For example, a media outlet may use a photo of an event or rally that was taken at the very beginning of the event to give the impression that attendance was low. Or, they may only publish photos of conflict or a police presence at an event to make it seem violent and chaotic. Reporters may choose an image of a favored politician looking strong, determined or stately during a speech; if they disfavor him, they may choose a photo of him appearing to yell or look troubled during the same speech.

Examples of Photo Bias


Obama appears stern or angry — with his hand raised, brows furrowed, and mouth wide, it looks like maybe he’s yelling. The implication is that the news about the Obamacare ruling is something that would enrage Obama.

The Blaze bias rating


With a tense mouth, shifty eyes, and head cocked to one side, Nunes looks guilty. The sensationalism in the headline (“neck-deep” in “scandal”) aids in giving this impression.

Mother Jones bias rating


With his lips pursed and eyes darting to the side, Schiff looks guilty in this photo. The headline stating that he “got caught celebrating” also implies that he was doing something he shouldn’t be doing. Whether or not he was actually celebrating impeachment at this dinner is up for debate, but if you judged Townhall’s article by the photo, you may conclude he was.

Townhall bias rating


With his arms outstretched and supporters cheering, Texas Gov. Greg Abbott appears triumphant in this photo. The article explains that a pediatric hospital in Texas announced it will stop performing “gender-confirming therapies” for children, following a directive from Abbott for the state to investigate whether such procedures on kids constituted child abuse. The implication of the headline and photo is that this is a victory.

The Daily Wire bias rating

15. Negativity Bias

Negativity bias refers to a type of bias in which reporters emphasize bad or negative news, or frame events in a negative light.

"If it bleeds, it leads" is a common media adage referring to negativity bias. Stories about death, violence, turmoil, struggle, and hardship tend to get spotlighted in the press, because these types of stories tend to get more attention and elicit more shock, outrage, fear, and cause us to become glued to the news, wanting to hear more.

Examples of Negativity Bias


This story frames labor force participation as a negative thing. However, if labor force participation remained low for a long time, that would also be written up as bad news.

New York Times bias rating

16. Elite v. Populist Bias

Elite bias is when journalists defer to the beliefs, viewpoints, and perspectives of people who are part of society's most prestigious, credentialed institutions — such as academic institutions, government agencies, business executives, or nonprofit organizations. Populist bias, on the other hand, is a bias in which the journalist defers to the perspectives, beliefs, or viewpoints of those who are outside of or dissent from prestigious institutions — such as "man on the street" stories, small business owners, less prestigious institutions, and people who live outside of major urban centers.

Elite/populist bias has a geographic component in the U.S. Because major institutions of power are concentrated in American coastal cities (which tend to vote blue), there can exist conflicting values, perspectives, and ideologies among “coastal elites” and “rural/middle America" (which tends to vote red). The extent to which journalists emphasize the perspectives of urbanites versus people living in small town/rural areas can show elite or populist bias, and thus, political bias.

Examples of Elite v. Populist Bias


Elite Bias: This article emphasizes the guidance and perspectives of major government agencies and professors at elite universities.

NBC News bias rating


Populist Bias: In this opinion piece, journalist Naomi Wolf pushes back against elite government agencies, saying they can't be trusted.

The Epoch Times bias rating

Some Final Notes on Bias

Everyone is biased. It is part of human nature to have perspectives, preferences, and prejudices. But sometimes, bias — especially media bias — can become invisible to us. This is why AllSides provides hundreds of media bias ratings and a media bias chart.

We are all biased toward things that show us in the right. We are biased toward information that confirms our existing beliefs. We are biased toward the people or information that supports us, makes us look good, and affirms our judgements and virtues. And we are biased toward the more moral choice of action — at least, that which seems moral to us.

Journalism as a profession is biased toward vibrant communication, timeliness, and providing audiences with a sense of the current moment — whether or not that sense is politically slanted. Editors are biased toward strong narrative, stunning photographs, pithy quotes, and powerful prose. Every aspiring journalist has encountered media bias — sometimes the hard way. If they stay in the profession, often it will be because they have incorporated the biases of their editor.

But sometimes, bias can manipulate and blind us. It can put important information and perspectives in the shadows and prevent us from getting the whole view. For this reason, every type of media bias can, and occasionally should, be isolated and examined. This is just as true for journalists as it is for their audiences.

Good reporting can shed valuable light on our biases — good and bad. By learning how to spot media bias, how it works, and how it might blind us, we can avoid being fooled by media bias and fake news . We can learn to identify and appreciate different perspectives — and ultimately, come to a more wholesome view.

Julie Mastrine | Director of Marketing and Media Bias Ratings, AllSides

Early Contributors and Editors (2018)

Jeff Nilsson | Saturday Evening Post

Sara Alhariri | Stossel TV

Kristine Sowers | Abridge News


Should you trust media bias charts?

These controversial charts claim to show the political lean and credibility of news organizations. Here's what you need to know about them.


Impartial journalism is an impossible ideal. That is, at least, according to Julie Mastrine.

“Unbiased news doesn’t exist. Everyone has a bias: everyday people and journalists. And that’s OK,” Mastrine said. But it’s not OK for news organizations to hide those biases, she said.

“We can be manipulated into (a biased outlet’s) point of view and not able to evaluate it critically and objectively and understand where it’s coming from,” said Mastrine, marketing director for AllSides, a media literacy company focused on “freeing people from filter bubbles.”

That’s why she created a media bias chart.

As readers hurl claims of hidden bias towards outlets on all parts of the political spectrum, bias charts have emerged as a tool to reveal pernicious partiality.

Charts that use transparent methodologies to score political bias — particularly the AllSides chart and another from news literacy company Ad Fontes Media — are increasing in popularity and spreading across the internet. According to CrowdTangle, a social media monitoring platform, the homepages for these two sites and the pages for their charts have been shared tens of thousands of times.

But just because something is widely shared doesn’t mean it’s accurate. Are media bias charts reliable?

Why do media bias charts exist?

Traditional journalism values a focus on news reporting that is fair and impartial, guided by principles like truth, verification and accuracy. But those standards are not observed across the board in the “news” content that people consume.

Tim Groeling, a communications professor at the University of California Los Angeles, said some consumers take too much of the “news” they encounter as impartial.

When people are influenced by undisclosed political bias in the news they consume, “that’s pretty bad for democratic politics, pretty bad for our country to have people be consistently misinformed and think they’re informed,” Groeling said.

If undisclosed bias threatens to mislead some news consumers, it also pushes others away, he said.

“When you have bias that’s not acknowledged, but is present, that’s really damaging to trust,” he said.

Kelly McBride, an expert on journalism ethics and standards, NPR’s public editor and the chair of the Craig Newmark Center for Ethics and Leadership at Poynter, agrees.

“If a news consumer doesn’t see their particular bias in a story accounted for — not necessarily validated, but at least accounted for in a story — they are going to assume that the reporter or the publication is biased,” McBride said.

The growing public confusion about whether news outlets harbor political bias, disclosed or not, is fueling demand for resources to sort fact from spin — resources like these media bias charts.

Bias and social media

Mastrine said the threat of undisclosed biases grows as social media algorithms create filter bubbles to feed users ideologically consistent content.

Could rating bias help? Mastrine and Vanessa Otero, founder of the Ad Fontes media bias chart, think so.

“It’ll actually make it easier for people to identify different perspectives and make sure they’re reading across the spectrum so that they get a balanced understanding of current events,” Mastrine said.

Otero said bias ratings could also be helpful to advertisers.

“There’s this whole ecosystem of online junk news, of polarizing misinformation, these clickbaity sites that are sucking up a lot of ad revenue. And that’s not to the benefit of anybody,” Otero said. “It’s not to the benefit of the advertisers. It’s not to the benefit of society. It’s just to the benefit of some folks who want to take advantage of people’s worst inclinations online.”

Reliable media bias ratings could allow advertisers to disinvest in fringe sites.

Groeling, the UCLA professor, said he could see major social media and search platforms using bias ratings to alter the algorithms that determine what content users see. Changes could elevate neutral content or foster broader news consumption.

But he fears the platforms’ sweeping power, especially after Facebook and Twitter censored a New York Post article purporting to show data from a laptop belonging to Hunter Biden, the son of President-elect Joe Biden. Groeling said social media platforms failed to clearly communicate how and why they stopped and slowed the spread of the article.

“(Social media platforms are) searching for some sort of arbiter of truth and news … but it’s actually really difficult to do that and not be a frightening totalitarian,” he said.

Is less more?

The Ad Fontes chart and the AllSides chart are each easy to understand: progressive publishers on one side, conservative ones on the other.

“It’s just more visible, more shareable. We think more people can see the ratings this way and kind of begin to understand them and really start to think, ‘Oh, you know, journalism is supposed to be objective and balanced,’” Mastrine said. AllSides has rated media bias since 2012. Mastrine first put them into chart form in early 2019.

Otero recognizes that accessibility comes at a price.

“Some nuance has to go away when it’s a graphic,” she said. “If you always keep it to, ‘people can only understand if they have a very deep conversation,’ then some people are just never going to get there. So it is a tool to help people have a shortcut.”

But perceiving the chart as distilled truth could give consumers an undue trust in outlets, McBride said.

“Overreliance on a chart like this is going to probably give some consumers a false level of faith,” she said. “I can think of a massive journalistic failure for just about every organization on this chart. And they didn’t all come clean about it.”

The necessity of getting people to look at the chart poses another challenge. Groeling thinks disinterest among consumers could hurt the charts’ usefulness.

“Asking people to go to this chart, asking them to take effort to understand and do that comparison, I worry would not actually be something people would do. Because most people don’t care enough about news,” he said. He would rather see a plugin that detects bias in users’ overall news consumption and offers them differing viewpoints.

McBride questioned whether bias should be the focus of the charts at all. Other factors — accountability, reliability and resources — would offer better insight into what sources of news are best, she said.

“Bias is only one thing that you need to pay attention to when you consume news. What you also want to pay attention to is the quality of the actual reporting and writing and the editing,” she said. It wouldn’t make sense to rate local news sources for bias, she added, because they are responsive to individual communities with different political ideologies.

The charts are only as good as their methodologies. Both McBride and Groeling praised the stated methods for rating bias of AllSides and Ad Fontes, which can be found on their websites. Neither Ad Fontes nor AllSides explicitly rates editorial standards.

The AllSides Chart


(Courtesy: AllSides)

The AllSides chart focuses solely on political bias. It places sources in one of five boxes — “Left,” “Lean Left,” “Center,” “Lean Right” and “Right.” Mastrine said that while the boxes allow the chart to be easily understood, they also don’t allow sources to be rated on a gradient.

“Our five-point scale is inherently limited in the sense that we have to put somebody in a category when, in reality, it’s kind of a spectrum. They might fall in between two of the ratings,” Mastrine said.

That also makes the chart particularly easy to understand, she said.

AllSides has rated more than 800 sources in eight years, focusing on online content only. Ratings are derived from a mix of review methods.

In the blind bias survey, which Mastrine called “one of (AllSides’) most robust bias rating methodologies,” readers from the public rate articles for political bias. Two AllSides staffers with different political biases pull articles from the news sites being reviewed. AllSides recruits these unpaid readers through its newsletter, website, social media accounts and other marketing tools. The readers, who self-report their political bias using a bias rating test provided by the company, see only the article’s text and are not told which outlet published the piece. The data is then normalized to more closely reflect the political composition of the American public.
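The normalization step described here — adjusting raw reader ratings so that each political group counts in proportion to its share of the population rather than its share of the rater pool — can be sketched as a simple reweighting. The group names, shares and scores below are invented for illustration; AllSides’ actual procedure is described in its white paper.

```python
# Hypothetical sketch of normalizing survey ratings so each political
# group counts in proportion to its share of the population rather
# than its share of the rater pool. All numbers are invented.

def normalized_mean(ratings_by_group, population_share):
    """ratings_by_group: group -> list of bias ratings from raters in
    that group (negative = left, positive = right).
    population_share: group -> fraction of the population.
    Weights each group's mean rating by its population share."""
    total = 0.0
    for group, ratings in ratings_by_group.items():
        group_mean = sum(ratings) / len(ratings)
        total += population_share[group] * group_mean
    return total

ratings = {"left": [-2, -1], "center": [0, -1], "right": [1]}
shares = {"left": 0.3, "center": 0.4, "right": 0.3}
print(normalized_mean(ratings, shares))  # about -0.35
```

The point of the reweighting is that an outlet does not look more “left” simply because more left-leaning readers happened to volunteer for the survey.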

AllSides also uses “editorial reviews,” where staff members look directly at a source to contribute to ratings.

“That allows us to actually look at the homepage with the branding, with the photos and all that and kind of get a feel for what the bias is, taking all that into account,” Mastrine said.

She added that an equal number of staffers who lean left, right and center conduct each review together. The personal biases of AllSides’ staffers appear on their bio pages. Mastrine leans right.

She clarified that among the 20-person staff, many are part time, 14% are people of color, 38% are lean left or left, 29% are center, and 18% are lean right or right. Half of the staffers are male, half are female.

When a news outlet receives a blind bias survey and an editorial review, both are taken into account. Mastrine said the two methods aren’t weighted together “in any mathematical way,” but said they typically hold roughly equal weight. Sometimes, she added, the editorial review carries more weight.

AllSides also uses “independent research,” which Mastrine described as the “lowest level of bias verification.” She said it consists of staffers reviewing and reporting on a source to make a preliminary bias assessment. Sometimes third-party analyses — including academic research and surveys — are incorporated into ratings, too.

AllSides highlights the specific methodologies used to judge each source on its website and states its confidence in the ratings based on the methods used. In a separate white paper, the company details the process used for its August 2020 blind bias survey.

AllSides sometimes gives separate ratings to different sections of the same source. For example, it rates The New York Times’ opinion section “Left” and its news section “Lean Left.” AllSides also incorporates reader feedback into its system. People can mark that they agree or disagree with AllSides’ rating of a source. When a significant number of people disagree, AllSides often revisits a source to vet it once again, Mastrine said.

The AllSides chart generally gets good reviews, she said, and most people mark that they agree with the ratings. Still, she sees one misconception among people who encounter it: They think center means better. Mastrine disagrees.

“The center outlets might be omitting certain stories that are important to people. They might not even be accurate,” she said. “We tell people to read across the spectrum.”

To make that easier, AllSides offers a curated “balanced news feed,” featuring articles from across the political spectrum, on its website.

AllSides makes money through paid memberships, one-time donations, media literacy training and online advertisements. It plans to become a public benefit corporation by the end of the year, she added, meaning it will operate both for profit and for a stated public mission.

The Ad Fontes chart


(Courtesy: Ad Fontes)

The Ad Fontes chart rates both reliability and political bias. It scores news sources — around 270 now, and an expected 300 in December — using bias and reliability as coordinates on its chart.

The outlets appear on a spectrum, with seven markers showing a range from “Most Extreme Left” to “Most Extreme Right” along the bias axis, and eight markers showing a range from “Original Fact Reporting” to “Contains Inaccurate/Fabricated Info” along the reliability axis.

The chart is a departure from its first version, which founder Vanessa Otero, then a patent attorney, said she put together by herself as a hobby after seeing Facebook friends fight over the legitimacy of sources during the 2016 election. When she saw how popular the chart was, Otero decided to make bias ratings her full-time job and founded Ad Fontes — Latin for “to the source” — in 2018.

“There were so many thousands of people reaching out to me on the internet about this,” she said. “Teachers were using it in their classrooms as a tool for teaching media literacy. Publishers wanted to publish it in textbooks.”

About 30 paid analysts rate articles for Ad Fontes. Listed on the company’s website, they represent a range of experience — current and former journalists, educators, librarians and similar professionals. The company recruits analysts through its email list and references, and vets them through a traditional application process. Hired analysts are then trained by Otero and other Ad Fontes staff.

To start review sessions, a group of coordinators composed of senior analysts and the company’s nine staffers pulls articles from the sites being reviewed. They look for articles listed as most popular or displayed most prominently.


Part of the Ad Fontes analyst political bias test. The test asks analysts to rank their political bias on 18 different policy issues.

Ad Fontes administers an internal political bias test to analysts, asking them to rank their left-to-right position on roughly 20 policy issues. That information allows the company to attempt to create ideological balance by including one centrist, one left-leaning and one right-leaning analyst on each review panel. The panels review at least three articles for each source, but they may review as many as 30 for particularly prominent outlets, like The Washington Post, Otero said. More on the methodology, including how articles are chosen for review, can be found on the Ad Fontes website.

When they review the articles, the analysts see them as they appear online, “because that’s how people encounter all content. No one encounters content blind,” Otero said. The review process recently changed so that paired analysts discuss their ratings over video chat, where they are pushed to be more specific as they form ratings, Otero said.

Individual scores for an article’s accuracy, the use of fact or opinion, and the appropriateness of its headline and image combine to create a reliability score. The bias score is determined by the article’s degree of advocacy for a left-to-right political position, topic selection and omission, and use of language.

To create an overall bias and reliability score for an outlet, the individual scores for each reviewed article are averaged, with added importance given to more popular articles. That average determines where sources show up on the chart.
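The averaging described above — per-article scores combined into an outlet-level position, with more popular articles counting more — amounts to a popularity-weighted mean. Here is a minimal sketch; the field names, score ranges and weights are illustrative assumptions, not Ad Fontes’ actual formula.

```python
# Illustrative sketch of a popularity-weighted average, as the article
# describes: per-article bias and reliability scores are averaged,
# with more popular articles given more weight. The field names and
# numbers are hypothetical, not Ad Fontes' real methodology.

def outlet_position(articles):
    """Each article is a dict with 'bias', 'reliability' and a
    'popularity' weight. Returns the outlet's (bias, reliability)
    chart coordinates as weighted means."""
    total_weight = sum(a["popularity"] for a in articles)
    bias = sum(a["bias"] * a["popularity"] for a in articles) / total_weight
    reliability = sum(a["reliability"] * a["popularity"] for a in articles) / total_weight
    return bias, reliability

articles = [
    {"bias": -6.0, "reliability": 44.0, "popularity": 3.0},  # widely read piece
    {"bias": -2.0, "reliability": 50.0, "popularity": 1.0},  # less popular piece
]
print(outlet_position(articles))  # (-5.0, 45.5)
```

A consequence of this weighting is that one heavily shared, strongly slanted article can move an outlet’s position more than several little-read straight news pieces.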

Ad Fontes details its ratings process in a white paper from August 2019.

While the company mostly reviews prominent legacy news sources and other popular news sites, Otero hopes to add more podcasts and video content to the chart in coming iterations. The chart already rates video news channel “The Young Turks” (which claims to be the most popular online news show, with 250 million views per month and 5 million subscribers on YouTube), and Otero said she next wants to examine videos from Prager University (which claims 4 billion lifetime views for its content, 2.84 million subscribers on YouTube and 1.4 million followers on Instagram). Ad Fontes is working with ad agency Oxford Road and dental care company Quip to create ratings for the top 50 news and politics podcasts on Apple Podcasts, Otero said.

“It’s not strictly traditional news sources, because so much of the information that people use to make decisions in their lives is not exactly news,” Otero said.

She was shocked when academic textbook publishers first wanted to use her chart. Now she wants it to become a household tool.

“As we add more news sources on to it, as we add more data, I envision this becoming a standard framework for evaluating news on at least these two dimensions of reliability and bias,” she said.

She sees complaints about it from both ends of the political spectrum as proof that it works.

“A lot of people love it and a lot of people hate it,” Otero said. “A lot of people on the left will call us neoliberal shills, and then a bunch of people that are on the right are like, ‘Oh, you guys are a bunch of leftists yourselves.’”

The project has grown to include tools for teaching media literacy to school kids and an interactive version of the chart that displays each rated article. Otero’s company operates as a public benefit corporation with a stated public benefit mission: “to make news consumers smarter and news media better.” She didn’t want Ad Fontes to rely on donations.

“If we want to grow with a problem, we have to be a sustainable business. Otherwise, we’re just going to make a small difference in a corner of the problem,” she said.

Ad Fontes makes money by responding to specific research requests from advertisers, academics and other parties that want certain outlets to be reviewed. The company also receives non-deductible donations and operates on WeFunder, a grassroots crowdfunding investment site, to bring in investors. So far, Ad Fontes has raised $163,940 from 276 investors through the site.

Should you use the charts?

Media bias charts with transparent, rigorous methodologies can offer insight into sources’ biases. That insight can help you understand what perspectives sources bring as they share the news. That insight also might help you understand what perspectives you might be missing as a news consumer.

But use them with caution. Political bias isn’t the only thing news consumers should look out for. Reliability is critical, too, and the accuracy and editorial standards of organizations play an important role in sharing informative, useful news.

Media bias charts are a media literacy tool. They offer well-researched appraisals on the bias of certain sources. But to best inform yourself, you need a full toolbox. Check out Poynter’s MediaWise project for more media literacy tools.

This article was originally published on Dec. 14, 2020. 

More about media bias charts

  • A media bias chart update puts The New York Times in a peculiar position
  • Letter to the editor: What Poynter’s critique misses about the Media Bias Chart


Comments

We are too obsessed with alleged bias and objectivity, which so often is in the biased eye of the beholder. The main standard of good journalism should be verifiable factual accuracy.

Hoping to see a follow-up article about whether we can trust fact checker report card charts created by collecting a fact checker’s subjective ratings.

As a writer for Wonkette, I won’t claim to be objective, but we do like to point out that our rating at Ad Fontes – both farthest to the left and the least reliable – is absurd. Apparently we can’t be trusted at all because we do satirical commentary instead of straight news.

When we’ve attempted to point out to Ms. Otero that we adhere to high standards of factuality but also make jokes, she has replied that satire is inherently untrustworthy and biased, particularly since we sometimes use dirty words.

That seems to us a remarkably biased definition of bias.


Special Issue: Propaganda

This essay was published as part of the Special Issue “Propaganda Analysis Revisited”, guest-edited by Dr. A. J. Bauer (Assistant Professor, Department of Journalism and Creative Media, University of Alabama) and Dr. Anthony Nadler (Associate Professor, Department of Communication and Media Studies, Ursinus College).

Propaganda, misinformation, and histories of media techniques


This essay argues that the recent scholarship on misinformation and fake news suffers from a lack of historical contextualization. The fact that misinformation scholarship has, by and large, failed to engage with the history of propaganda and with how propaganda has been studied by media and communication researchers is an empirical detriment to it, and serves to make the solutions and remedies to misinformation harder to articulate because the actual problem they are trying to solve is unclear.

School of Media and Communication, University of Leeds, UK


Introduction

Propaganda has a history and so does research on it. In other words, the mechanisms and methods through which media scholars have sought to understand propaganda—or misinformation, or disinformation, or fake news, or whatever you would like to call it—are themselves historically embedded and carry with them underlying notions of power and causality. To summarize the already quite truncated argument below, the larger conceptual frameworks for understanding information that is understood as “pernicious” in some way can be grouped into four large categories: studies of propaganda, the analysis of ideology and its relationship to culture, notions of conspiracy theory, and finally, concepts of misinformation and its impact. The fact that misinformation scholarship generally proceeds without acknowledging these theoretical frameworks is an empirical detriment to it and serves to make the solutions and remedies to misinformation harder to articulate because the actual problem to be solved is unclear. 

The following pages discuss each of these frameworks—propaganda, ideology, conspiracy, and misinformation—before returning to the stakes and implications of these arguments for future research on pernicious media content.

Propaganda and applied research

The most salient aspect of propaganda research is the fact that it is powerful in terms of resources while at the same time it is often intellectually derided, or at least regularly dismissed. Although there has been a left-wing tradition of propaganda research housed uneasily within the academy (Herman & Chomsky, 1988; Seldes & Seldes, 1943), this is not the primary way in which journalism or media messaging has been understood in many journalism schools or mainstream communications departments. This relates, of course, to the institutionalization of journalism and communication studies within the academic enterprise. Within this paradox, we see the greater paradox of communication research as both an applied and a disciplinary field. Propaganda is taken quite seriously by governments, the military, and the foreign service apparatus (Simpson, 1994); at the same time, it has occupied a tenuous conceptual place in most media studies and communications departments, with the dominant intellectual traditions embracing either a “limited effects” notion of what communication “does” or else more concerned with the more slippery concept of ideology (and on that, see more below). There is little doubt that the practical study of the power of messages and the field of communication research grew up together. Summarizing an initially revisionist line of research that has now become accepted within the historiography of the field, Nietzel notes that “from the very beginning, communication research was at least in part designed as an applied science, intended to deliver systematic knowledge that could be used for the business of government to the political authorities.” He adds, however, that

“this context also had its limits, for by the end of the decade, communication research had become established at American universities and lost much of its dependence on state funds. Furthermore, it had become increasingly clear that communication scientists could not necessarily deliver knowledge to the political authorities that could serve as a pattern for political acting (Simpson, 1994 pp. 88–89). From then on, politics and communication science parted ways. Many of the approaches and techniques which seemed innovative and even revolutionary in the 1940s and early 1950s, promising a magic key to managing propaganda activities and controlling public opinion, became routine fields of work, and institutions like the USIA carried out much of this kind of research themselves.” (Nietzel, 2016, p. 66)

It is important to note that this parting of ways did not mean that no one in the United States and the Soviet Union was studying propaganda. American government records document that, in inflation-adjusted terms, total funding for the United States Information Agency (USIA) rose from $1.2 billion in 1955 to $1.7 billion in 1999, shortly before its functions were absorbed into the United States Department of State. And this was dwarfed by the Soviet Union, which spent more money jamming Western radio transmissions alone than the United States spent on its entire propaganda budget. Media effects research in the form of propaganda studies was a big and well-funded business. It was simply not treated as such within the traditional academy (Zollman, 2019). It is also important to note that this does not mean that no one in academia studies propaganda or the effect of government messages on willing or unwilling recipients, particularly in fields like health communication (also quite well-funded). These more academic studies, however, were tempered by the generally accepted fact that there existed no decontextualized, universal laws of communication that could render media messages easily useable by interested actors.

Ideology, economics, and false consciousness

If academics have been less interested than governments and health scientists in analyzing the role played by propaganda in the formation of public opinion, what has the academy worried about instead when it comes to the study of pernicious messages and their role in public life? One dominant, deeply contested line of study has revolved around the concept of ideology. As defined by Raymond Williams in his wonderful Keywords, ideology refers to an interlocking set of ideas, beliefs, concepts, or philosophical principles that are naturalized, taken for granted, or regarded as self-evident by various segments of society. Three controversial and interrelated principles then follow. First, ideology—particularly in its Marxist version—carries with it the implication that these ideas are somehow deceptive or disassociated from what actually exists. “Ideology is then abstract and false thought, in a sense directly related to the original conservative use but with the alternative—knowledge of real material conditions and relationships—differently stated” (Williams, 1976). Second, in all versions of Marxism, ideology is related to economic conditions in some fashion, with material reality, the economics of a situation, usually dominant and helping give birth to ideological precepts. In common Marxist terminology, this is usually described as the relationship between the base (economics and material conditions) and the superstructure (the realm of concepts, culture, and ideas). Third and finally, it is possible that different segments of society will have different ideologies, differences that are based in part on their position within the class structure of that society.

Western Marxism in general (Anderson, 1976) and Antonio Gramsci in particular helped take these concepts and put them on the agenda of media and communications scholars by attaching more importance to “the superstructure” (and within it, media messages and cultural industries) than was the case in earlier Marxist thought. Journalism and “the media” thus play a major role in creating and maintaining ideology and thus perpetuating the deception that underlies ideological operations. In the study of the relationship between the media and ideology, “pernicious messages” obviously mean something different than they do in research on propaganda—a more structural, subtle, reinforcing, invisible, and materially dependent set of messages than is usually the case in propaganda analysis. Perhaps most importantly, little research on media and communication understands ideology in terms of “discrete falsehoods and erroneous belief,” preferring to focus on processes of deep structural misrecognition that serve dominant economic interests (Corner, 2001, p. 526). This obviously marks a difference in emphasis as compared to most propaganda research.

Much like in the study of propaganda, real-world developments have also had an impact on the academic analysis of media ideology. The collapse of communism in the 1980s and 1990s and the rise of neoliberal governance obviously has played a major role in these changes. Although only one amongst a great many debates about the status of ideology in a post-Marxist communications context, the exchange between Corner (2001, 2016) and Downey (2008; Downey et al., 2014) is useful for understanding how scholars have dealt with the relationship between large macro-economic and geopolitical changes in the world and fashions of research within the academy. Regardless of whether concepts of ideology are likely to return to fashion, any analysis of misinformation that is consonant with this tradition must keep in mind the relationship between class and culture, the outstanding and open question of “false consciousness,” and the key scholarly insight that ideological analysis is less concerned with false messages than it is with questions of structural misrecognition and the implications this might have for the maintenance of hegemony.

Postmodern conspiracy

Theorizing pernicious media content as a “conspiracy” theory is less common than either of the two perspectives discussed above. Certainly, conspiratorial media as an explanatory factor for political pathology has something of a post-Marxist (and indeed, postmodern) aura. Nevertheless, there was a period in the 1990s and early 2000s when some of the most interesting notions of conspiracy theories were analyzed in academic work, and it seems hard to deny that much of this literature would be relevant to the current emergence of the “QAnon” cult, the misinformation that is said to drive it, and other even more exotic notions of elites conspiring against the public. 

Fredric Jameson has penned remarks on conspiracy theory that represent the starting point for much current writing on the conspiratorial mindset, although an earlier and interrelated vein of scholarship can be found in the work of American writers such as Hofstadter (1964) and Rogin (1986). “Conspiracy is the poor person’s cognitive mapping in the postmodern age,” Jameson writes, “it is a degraded figure of the total logic of late capital, a desperate attempt to represent the latter’s system” (Jameson, 1991). If “postmodernism,” in Jameson’s terms, is marked by a skepticism toward metanarratives, then conspiracy theory is the only narrative system available to explain the various deformations of the capitalist system. As Horn and Rabinach put it:

“The broad interest taken by cultural studies in popular conspiracy theories mostly adopted Jameson’s view and regards them as the wrong answers to the right questions. Showing the symptoms of disorientation and loss of social transparency, conspiracy theorists are seen as the disenfranchised “poor in spirit,” who, for lack of a real understanding of the world they live in, come up with paranoid systems of world explanation.” (Horn & Rabinach, 2008)

Other thinkers, many of them operating from a perch within media studies and communications departments, have tried to take conspiracy theories more seriously (Bratich, 2008; Fenster, 2008; Pratt, 2003; Melley, 2008). The key question for all of these thinkers lies within the debate discussed in the previous section, the degree to which “real material interests” lie behind systems of ideological mystification and whether audiences themselves bear any responsibility for their own predicament. In general, writers sympathetic to Jameson have tended to maintain a Marxist perspective in which conspiracy represents a pastiche of hegemonic overthrow, thus rendering it just another form of ideological false consciousness. Theorists less taken with Marxist categories see conspiracy as an entirely rational (though incorrect) response to conditions of late modernity or even as potentially liberatory. Writers emphasizing that pernicious media content tends to fuel a conspiratorial mindset often emphasize the mediated aspects of information rather than the economics that lie behind these mediations. Both ideological analysis and academic writings on conspiracy theory argue that there is a gap between “what seems to be going on” and “what is actually going on,” and that this gap is maintained and widened by pernicious media messages. Research on ideology tends to see the purpose of pernicious media content as having an ultimately material source that is rooted in “real interests,” while research on conspiracies plays down these class aspects and questions whether any real interests exist that go beyond the exercise of political power.

The needs of informationally ill communities

The current thinking in misinformation studies owes something to all these approaches. But it owes an even more profound debt to two perspectives on information and journalism that emerged in the early 2000s, both of which are indebted to an “ecosystemic” perspective on information flows. One perspective sees information organizations and their audiences as approximating a natural ecosystem, in which different media providers contribute equally to the health of an information environment, which then leads to healthy citizens. The second perspective analyzes the flows of messages as they travel across an information environment, with messages becoming reshaped and distorted as they travel across an information network. 

Both of these perspectives owe a debt to the notion of the “informational citizen” that was popular around the turn of the century and that is best represented by the 2009 Knight Foundation report  The Information Needs of Communities  (Knight Foundation, 2009). This report pioneered the idea that communities were informational communities whose political health depended in large part on the quality of information these communities ingested. Additional reports by The Knight Foundation, the Pew Foundation, and this author (Anderson, 2010) looked at how messages circulated across these communities, and how their transformation impacted community health. 

It is a short step from these ecosystemic notions to a view of misinformation that sees it as a pollutant or even a virus (Anderson, 2020), one whose presence in a community turns it toward sickness or even political derangement. My argument here is that the current misinformation perspective owes less to its predecessors (with one key exception that I will discuss below) and more to concepts of information that were common at the turn of the century. The major difference between the concept of misinformation and earlier notions of informationally healthy citizens lies in the fact that the normative standard by which health is understood within information studies is crypto-normative. Where writings about journalism and ecosystemic health were openly liberal in nature and embraced notions of a rational, autonomous citizenry who just needed the right inputs in order to produce the right outputs, misinformation studies has a tendency to embrace liberal behavioralism without embracing a liberal political theory. What the political theory of misinformation studies actually is remains, in the end, deeply unclear.

I wrote earlier that misinformation studies owed more to notions of journalism from the turn of the century than it did to earlier traditions of theorizing. There is one exception to this, however. Misinformation studies, like propaganda analysis, rests on a radically de-structured notion of what information does. Buried within the analysis of pernicious information there is

“A powerful cultural contradiction—the need to understand and explain social influence versus a rigid intolerance of the sociological and Marxist perspectives that could provide the theoretical basis for such an understanding. Brainwashing, after all, is ultimately a theory of ideology in the crude Marxian sense of “false consciousness.” Yet the concept of brainwashing was the brainchild of thinkers profoundly hostile to Marxism not only to its economic assumptions but also to its emphasis on structural, rather than individual, causality.” (Melley, 2008, p. 149)

For misinformation studies to grow in such a way that allows it to take its place among important academic theories of media and communication, several things must be done. The field needs to be more conscious of its own history, particularly its historical conceptual predecessors. It needs to more deeply interrogate its  informational-agentic  concept of what pernicious media content does, and perhaps find room in its arsenal for Marxist notions of hegemony or poststructuralist concepts of conspiracy. Finally, it needs to more openly advance its normative agenda, and indeed, take a normative position on what a good information environment would look like from the point of view of political theory. If this environment is a liberal one, so be it. But this position needs to be stated clearly.

Of course, misinformation studies need not worry about its academic bona fides at all. As the opening pages of this Commentary have shown, propaganda research was only briefly taken seriously as an important academic field. This did not stop it from being funded by the U.S. government to the tune of 1.5 billion dollars a year. While it is unlikely that media research will ever see that kind of investment again, at least by an American government, let’s not forget that geopolitical Great Power conflict has not disappeared in the four years that Donald Trump was the American president. Powerful state forces in Western society will have their own needs, and their own demands, for misinformation research. It is up to the scholarly community to decide how they will react to these temptations. 


Cite this Essay

Anderson, C. W. (2021). Propaganda, misinformation, and histories of media techniques. Harvard Kennedy School (HKS) Misinformation Review . https://doi.org/10.37016/mr-2020-64

Bibliography

Anderson, C. W. (2010). Journalistic networks and the diffusion of local news: The brief, happy news life of the Francisville Four. Political Communication , 27 (3), 289–309. https://doi.org/10.1080/10584609.2010.496710

Anderson, C. W. (2020, August 10). Fake news is not a virus: On platforms and their effects. Communication Theory , 31 (1), 42–61. https://doi.org/10.1093/ct/qtaa008

Anderson, P. (1976). Considerations on Western Marxism . Verso.

Bratich, J. Z. (2008). Conspiracy panics: Political rationality and popular culture. State University of New York Press.

Corner, J. (2001). ‘Ideology’: A note on conceptual salvage. Media, Culture & Society , 23 (4), 525–533. https://doi.org/10.1177/016344301023004006

Corner, J. (2016). ‘Ideology’ and media research. Media, Culture & Society , 38 (2), 265–273. https://doi.org/10.1177/0163443715610923

Downey, J. (2008). Recognition and renewal of ideology critique. In D. Hesmondhaigh & J. Toynbee (Eds.), The media and social theory (pp. 59–74). Routledge.

Downey, J., Titley, G., & Toynbee, J. (2014). Ideology critique: The challenge for media studies. Media, Culture & Society , 36 (6), 878–887. https://doi.org/10.1177/0163443714536113

Fenster, M. (2008). Conspiracy theories: Secrecy and power in American culture (Rev. ed.). University of Minnesota Press.

Herman, E., & Chomsky, N. (1988). Manufacturing consent: The political economy of the mass media. Pantheon Books. 

Hofstadter, R. (1964, November). The paranoid style in American politics. Harper’s Magazine.

Horn, E., & Rabinach, A. (2008). Introduction. In E. Horn (Ed.), Dark powers: Conspiracies and conspiracy theory in history and literature (pp. 1–8), New German Critique , 35 (1). https://doi.org/10.1215/0094033x-2007-015

Jameson, F. (1991). Postmodernism, or, the cultural logic of late capitalism . Duke University Press.

The Knight Foundation. (2009). Informing communities: Sustaining democracy in the digital age. https://knightfoundation.org/wp-content/uploads/2019/06/Knight_Commission_Report_-_Informing_Communities.pdf

Melley, T. (2008). Brainwashed! Conspiracy theory and ideology in postwar United States. New German Critique , 35 (1), 145–164. https://doi.org/10.1215/0094033X-2007-023

Nietzel, B. (2016). Propaganda, psychological warfare and communication research in the USA and the Soviet Union during the Cold War. History of the Human Sciences , 29 (4–5), 59–76. https://doi.org/10.1177/0952695116667881

Pratt, R. (2003). Theorizing conspiracy. Theory and Society , 32 , 255–271. https://doi.org/10.1023/A:1023996501425

Rogin, M. P. (1986). The countersubversive tradition in American politics. Berkeley Journal of Sociology, 31 , 1–33. https://www.jstor.org/stable/41035372

Seldes, G., & Seldes, H. (1943). Facts and fascism. In Fact.

Simpson, C. (1994). Science of coercion: Communication research and psychological warfare, 1945–1960. Oxford University Press.

Williams, R. (1976).  Keywords: A vocabulary of culture and society . Oxford University Press.

Zollmann, F. (2019). Bringing propaganda back into news media studies. Critical Sociology , 45 (3), 329–345. https://doi.org/10.1177/0896920517731134

This is an open access article distributed under the terms of the Creative Commons Attribution License , which permits unrestricted use, distribution, and reproduction in any medium, provided that the original author and source are properly credited.

June 21, 2018

Biases Make People Vulnerable to Misinformation Spread by Social Media

Researchers have developed tools to study the cognitive, societal and algorithmic biases that help fake news spread

By Giovanni Luca Ciampaglia , Filippo Menczer & The Conversation US


The following essay is reprinted with permission from The Conversation , an online publication covering the latest research.

Social media are among the  primary sources of news in the U.S.  and across the world. Yet users are exposed to content of questionable accuracy, including  conspiracy theories ,  clickbait ,  hyperpartisan content ,  pseudo science  and even  fabricated “fake news” reports .

It’s not surprising that there’s so much disinformation published: Spam and online fraud  are lucrative for criminals , and government and political propaganda yield  both partisan and financial benefits . But the fact that  low-credibility content spreads so quickly and easily  suggests that people and the algorithms behind social media platforms are vulnerable to manipulation.



Our research has identified three types of bias that make the social media ecosystem vulnerable to both intentional and accidental misinformation. That is why our  Observatory on Social Media  at Indiana University is building  tools  to help people become aware of these biases and protect themselves from outside influences designed to exploit them.

Bias in the brain

Cognitive biases originate in the way the brain processes the information that every person encounters every day. The brain can deal with only a finite amount of information, and too many incoming stimuli can cause  information overload . That in itself has serious implications for the quality of information on social media. We have found that steep competition for users’ limited attention means that  some ideas go viral despite their low quality —even when people prefer to share high-quality content.*

To avoid getting overwhelmed, the brain uses a  number of tricks . These methods are usually effective, but may also  become biases  when applied in the wrong contexts.

One cognitive shortcut happens when a person is deciding whether to share a story that appears on their social media feed. People are  very affected by the emotional connotations of a headline , even though that’s not a good indicator of an article’s accuracy. Much more important is  who wrote the piece .

To counter this bias, and help people pay more attention to the source of a claim before sharing it, we developed  Fakey , a mobile news literacy game (free on  Android  and  iOS ) simulating a typical social media news feed, with a mix of news articles from mainstream and low-credibility sources. Players get more points for sharing news from reliable sources and flagging suspicious content for fact-checking. In the process, they learn to recognize signals of source credibility, such as hyperpartisan claims and emotionally charged headlines.

Bias in society

Another source of bias comes from society. When people connect directly with their peers, the social biases that guide their selection of friends come to influence the information they see.

In fact, in our research we have found that it is possible to  determine the political leanings of a Twitter user  by simply looking at the partisan preferences of their friends. Our analysis of the structure of these  partisan communication networks  found social networks are particularly efficient at disseminating information – accurate or not – when  they are closely tied together and disconnected from other parts of society .
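The friend-based inference described above can be caricatured as a majority vote over the known leanings of an account's connections. This is a toy sketch for illustration, not the researchers' actual method, and the labels and accounts below are invented:

```python
from collections import Counter

def infer_leaning(friends_leanings):
    """Guess a user's political leaning as the most common leaning
    among their friends.

    friends_leanings: list of labels (e.g. "left" / "right") for each
    friend whose partisanship is already known. Returns the majority
    label, or None if no friends are labeled.
    """
    if not friends_leanings:
        return None
    counts = Counter(friends_leanings)
    return counts.most_common(1)[0][0]

# Hypothetical user whose labeled friends lean mostly "right"
print(infer_leaning(["right", "right", "left", "right"]))  # -> right
```

The point of the sketch is that no content from the user themselves is needed: in a tightly clustered partisan network, the neighborhood alone is highly predictive.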

People's tendency to evaluate information more favorably if it comes from within their own social circles creates “echo chambers” that are ripe for manipulation, either conscious or unintentional. This helps explain why so many online conversations devolve into  “us versus them” confrontations .

To study how the structure of online social networks makes users vulnerable to disinformation, we built  Hoaxy , a system that tracks and visualizes the spread of content from low-credibility sources, and how it competes with fact-checking content. Our analysis of the data collected by Hoaxy during the 2016 U.S. presidential elections shows that Twitter accounts that shared misinformation were  almost completely cut off from the corrections made by the fact-checkers.

When we drilled down on the misinformation-spreading accounts, we found a very dense core group of accounts retweeting each other almost exclusively – including several bots. The only times that fact-checking organizations were ever quoted or mentioned by the users in the misinformed group were when questioning their legitimacy or claiming the opposite of what they wrote.

Bias in the machine

The third group of biases arises directly from the algorithms used to determine what people see online. Both social media platforms and search engines employ them. These personalization technologies are designed to select only the most engaging and relevant content for each individual user. But in doing so, they may end up reinforcing the cognitive and social biases of users, thus making them even more vulnerable to manipulation.

For instance, the detailed  advertising tools built into many social media platforms  let disinformation campaigners exploit  confirmation bias  by  tailoring messages  to people who are already inclined to believe them.

Also, if a user often clicks on Facebook links from a particular news source, Facebook will  tend to show that person more of that site’s content . This so-called “ filter bubble ” effect may isolate people from diverse perspectives, strengthening confirmation bias.

Our own research shows that social media platforms expose users to a less diverse set of sources than do non-social media sites like Wikipedia. Because this is at the level of a whole platform, not of a single user, we call this the  homogeneity bias .

Another important ingredient of social media is information that is trending on the platform, according to what is getting the most clicks. We call this  popularity bias , because we have found that an algorithm designed to promote popular content may negatively affect the overall quality of information on the platform. This also feeds into existing cognitive bias, reinforcing what appears to be popular irrespective of its quality.
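The popularity-bias finding can be illustrated with a toy feed ranker: sorting purely by click counts pushes whatever is popular to the top, regardless of quality. The items and the "quality" scores below are invented; this is not any real platform's algorithm:

```python
# Toy feed items. The "quality" field is an invented editorial score;
# real engagement-driven ranking never consults it, which is exactly
# the point of popularity bias.
items = [
    {"title": "Well-sourced report", "clicks": 120, "quality": 0.9},
    {"title": "Misleading clickbait", "clicks": 950, "quality": 0.2},
    {"title": "Careful explainer", "clicks": 80, "quality": 0.8},
]

# Popularity-biased ranking: sort by clicks alone, ignoring quality.
feed = sorted(items, key=lambda item: item["clicks"], reverse=True)

for item in feed:
    print(item["title"], item["clicks"])
# The lowest-quality item tops the feed because it has the most clicks.
```

Because each placement at the top of the feed generates further clicks, this kind of ranking also feeds back into itself, amplifying whatever happened to be popular first.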

All these algorithmic biases can be manipulated by  social bots , computer programs that interact with humans through social media accounts. Most social bots, like Twitter’s  Big Ben , are harmless. However, some conceal their real nature and are used for malicious intents, such as  boosting disinformation  or falsely  creating the appearance of a grassroots movement , also called “astroturfing.” We found  evidence of this type of manipulation  in the run-up to the 2010 U.S. midterm election.

To study these manipulation strategies, we developed a tool to detect social bots called  Botometer . Botometer uses machine learning to detect bot accounts, by inspecting thousands of different features of Twitter accounts, like the times of its posts, how often it tweets, and the accounts it follows and retweets. It is not perfect, but it has revealed that as many as  15 percent of Twitter accounts show signs of being bots .
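A feature-based detector of this kind can be sketched in a few lines: compute simple account features and combine them into a score. Botometer itself applies supervised machine learning to thousands of features; the two heuristics and thresholds below are invented purely to illustrate the approach:

```python
def bot_score(tweets_per_day, followers, following):
    """Return a crude 0-1 bot-likeness score from two invented heuristics.

    Real systems such as Botometer train classifiers on labeled accounts;
    this hand-tuned rule only illustrates the feature-based idea.
    """
    score = 0.0
    if tweets_per_day > 50:  # implausibly high posting rate for a human
        score += 0.5
    if following > 0 and followers / following < 0.1:
        # follows far more accounts than follow it back
        score += 0.5
    return score

# Hypothetical accounts
print(bot_score(tweets_per_day=200, followers=30, following=2000))  # -> 1.0
print(bot_score(tweets_per_day=3, followers=500, following=400))    # -> 0.0
```

A real classifier would replace the hand-set thresholds with weights learned from accounts already known to be bots or humans, which is why Botometer's output is a probability-like score rather than a hard yes/no.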

Using Botometer in conjunction with Hoaxy, we analyzed the core of the misinformation network during the 2016 U.S. presidential campaign. We found many bots exploiting the cognitive, confirmation and popularity biases of their victims, as well as Twitter’s algorithmic biases.

These bots are able to construct filter bubbles around vulnerable users, feeding them false claims and misinformation. First, they can attract the attention of human users who support a particular candidate by tweeting that candidate’s hashtags or by mentioning and retweeting the person. Then the bots can amplify false claims smearing opponents by retweeting articles from low-credibility sources that match certain keywords. This activity also makes the algorithm highlight for other users false stories that are being shared widely.

Understanding complex vulnerabilities

Even as our research, and others’, shows how individuals, institutions and even entire societies can be manipulated on social media, there are  many questions  left to answer. It’s especially important to discover how these different biases interact with each other, potentially creating more complex vulnerabilities.

Tools like ours offer internet users more information about disinformation, and therefore some degree of protection from its harms. The solutions will  not likely be only technological , though there will probably be some technical aspects to them. But they must take into account  the cognitive and social aspects  of the problem.

*Editor’s note: This article was updated on Jan. 10, 2019, to remove a link to a study that has been retracted. The text of the article is still accurate, and remains unchanged.

This article was originally published on The Conversation . Read the original article .


Elizabeth Morrissette, Grace McKeon, Alison Louie, Amy Luther, and Alexis Fagen

Media bias could be defined as unjust favoritism in the reporting of certain ideas or standpoints. In the news, on social media, and in entertainment such as movies or television, we see media bias in the information these forms of media choose to pay attention to or report (“How to Detect Bias in News Media”, 2012). Consider the difference between FOX News and CNN: because these two news broadcasters have very different audiences, they tend to be biased in what they report and how they report it, reflecting Democratic or Republican viewpoints.

Bias, in general, is a prejudice or preconceived notion against a person, group, or thing. Bias leads to stereotyping, which we can see in the way certain things are reported in the news. For example, during Hurricane Katrina, two sets of photos were taken of two people wading through water with bags of food. The two people, one white and one black, were described differently: the black man was reported to be “looting” a grocery store, while the white person was reported to be “finding food for survival.” The reports showed media bias because they made the black man seem like he was doing something wrong, while the white person was just trying to survive (Guarino, 2015).

Commercial media is affected by bias because a corporation can influence what kind of entertainment is being produced. When there is an investment involved or money at stake, companies tend to protect that investment by not touching topics that could start a controversy (Pavlik, 2018). To understand what biased news is, we must be media literate. To be media literate, we need to accept that news outlets are not completely transparent about the stories they choose to report. Knowing that we cannot believe everything we read or see in the news allows us as a society to become a more educated audience (Campbell, 2005).

Bias in the News

The news, whether we like it or not, is biased. Some news outlets are biased toward Republicans, while others are biased toward Democrats. It is important to understand this when watching or reading the news in order to be media literate. This can be tricky because journalists may believe that their reporting is written with “fairness and balance,” but most times there is an underlying bias shaped by the news provider the story is being written for (Pavlik and McIntosh, 61). With events happening so rapidly, journalists write quickly and sometimes point fingers without trying to. This relates to agenda-setting, which Shirley Biagi defines as the idea that reporters do not tell people what to think, but do tell them what and whom to talk about (Biagi, 268).

The pressure to put out articles quickly can also affect the story. Portraying an event without all the facts and viewpoints can lay out the scene in a way that frames it differently than it actually happened (Biagi, 269). Yet by watching or reading only one portrayal of an event, people will often blindly believe it is true, without seeing or reading other stories that may shine a different light on the subject (Vivian, 4). Media/Impact defines this as the magic bullet theory, the assertion that media messages directly and measurably affect people’s behavior (Biagi, 269). Tight deadlines also affect the number of variations of a story: journalists’ push to get stories out leaves little room for deeper consideration. This produces consensus journalism, the tendency among journalists covering the same topic to report similar articles instead of differing interpretations of the event (Biagi, 268).

To see past media bias in the news, it is important to be media literate: looking past any possible framing or biased viewpoints and getting all the facts to create your own interpretation of a news story. It does not hurt to read both sides of a story before blindly following what someone is saying, taking into consideration whom they might be biased toward.

Stereotypes in the Media

Bias appears not only in the news but also in other entertainment media outlets such as TV and movies. Beginning in childhood, our perception of the world starts to form, and we develop our own opinions and views as we learn to think for ourselves. This process of learning to think for ourselves is called socialization, and one key agent of socialization is the mass media. The ideas and images mass media portrays are very influential at such a young age, but that influence is not always positive. Entertainment media in particular plays a big role in spreading stereotypes, so much so that they come to seem normal to us (Pavlik and McIntosh, 55).

The stereotypes in entertainment media may be either gender stereotypes or cultural stereotypes. Gender stereotypes reinforce ideas about how each gender is supposed to behave. For example, a female stereotype could be a teenage girl who likes to go shopping, or a stay-at-home mom who cleans the house and goes grocery shopping. Men and women are shown in different ways in commercials, TV, and movies: women are shown as domestic housewives, while men are shown holding high-status jobs and participating in more outdoor activities (Davis, 411). A very common gender stereotype is that women like to shop and are not smart enough to hold a high-status profession such as lawyer or doctor. This stereotype appears in the musical/movie Legally Blonde, whose main character is a woman doubted by her male counterparts; she must prove herself intelligent enough to become a lawyer. Another gender stereotype is that men like to use tools and drive cars: in most tool and car commercials and advertisements, a man is shown using the product. Women, on the other hand, are almost always seen in commercials for cleaning supplies or products like soaps, which feeds the common stereotype that women are stay-at-home moms who take on duties such as cleaning the house, doing the dishes, and doing the laundry.

Racial stereotyping is also quite common in entertainment media. The mass media helps to reproduce racial stereotypes and spread those ideologies (Abraham, 184). In movies and TV, minority characters are often shown as their respective stereotypes. In one specific example, the media “manifests bias and prejudice in representations of African Americans” (Abraham, 184): African Americans in the media are portrayed in negative ways, and in the news they are often linked to negative issues such as crime, drug use, and poverty (Abraham, 184). Another example of racial stereotyping is Kevin Gnapoor in the popular movie Mean Girls: his character is Indian, and he happens to be a math enthusiast and member of the Mathletes. This example illustrates how entertainment media uses stereotypes.

Types of Media Bias

Throughout the media, we see many different types of bias being used: bias by omission, bias by selection of sources, bias by story selection, bias by placement, and bias by labeling. All of these types, in different ways, prevent the consumer from getting all of the information.

  • Bias by omission:  Bias by omission is when the reporter leaves out one side of the argument, restricting the information that the consumer receives. This is most prevalent in political stories (Dugger) and happens when claims from either the liberal or conservative side are left out. It can appear in a single story or across a continuation of stories over time (Media Bias). To avoid this type of bias, read or view different sources to ensure that you are getting all of the information.
  • Bias by selection of sources:  Bias by selection of sources occurs when the author includes multiple sources that all support one side (Baker), or intentionally leaves out sources that are pertinent to the other side of the story (Dugger). This type of bias also uses language such as “experts believe” and “observers say” to make people believe that what they are reading is credible. Expert opinions may be cited, but only from one side, creating a barrier between the consumers and the other side of the story (Baker).
  • Bias by story selection: Bias by story selection is seen across an entire corporation rather than in a few stories. It occurs when news broadcasters choose to include only stories that support the overall beliefs of the corporation, ignoring all stories that would sway people to the other side (Baker). Normally the selected stories fully support either the left-wing or right-wing way of thinking.
  • Bias by placement: Bias by placement is a big problem in today’s society, and we are seeing it more and more because it is easy with all of the different ways media is presented now, whether on social media or simply online. This type of bias shows how important a particular story is to the reporter: editors will deliberately give poor placement to stories they do not think are important, or that they do not want to be easily accessible, to downplay their importance in the eyes of consumers (Baker).
  • Bias by labeling: Bias by labeling is a more complicated type of bias mostly used to falsely describe politicians. Many reporters will tag politicians with extreme labels on one side of an argument while saying nothing about the other side (Media Bias). These labels that are given can either be a good thing or a bad thing, depending on the side they are biased towards. Some reporters will falsely label people as “experts”, giving them authority that they have not earned and in turn do not deserve (Media Bias). This type of bias can also come when a reporter fails to properly label a politician, such as not labeling a conservative as a conservative (Dugger). This can be difficult to pick out because not all labeling is biased, but when stronger labels are used it is important to check different sources to see if the information is correct.

Bias in Entertainment

Bias is an opinion in favor of or against a person, group, or thing compared to another, presented in ways that favor results in line with one’s prejudgments and political or practical commitments (Hammersley & Gomm, 1). Media bias in entertainment is the bias of journalists and news outlets within the mass media in the stories and events they report and in their coverage of them.

There are biases in most entertainment today, including the news, movies, and television. The three most common biases in entertainment are political, racial, and gender biases. Political bias is when a piece of entertainment inserts a political comment into a movie or TV show in hopes of changing the viewer’s political views (Murillo, 462). Racial bias is, for example, when African Americans are portrayed in a negative way and shown in situations that have to do with crime, drug use, and poverty (Mitchell, 621). Gender biases typically concern females and have to do with the roles some people play and how others view them (Martin, 665). For example, young girls are supposed to be into the color pink and like princesses and dolls; women are usually the ones seen in cleaning commercials and are portrayed as “dainty” and “fragile.” Men, meanwhile, are usually seen in the more “masculine” types of media, such as those having to do with cars and tools.

Bias is always present, and it can be found in all outlets of media. So many different types of bias exist, whether in the news, the entertainment industry, or the portrayal of stereotypes, that bias is all around us. To be media literate, it is important to always be aware of this and to read more than one article, allowing yourself to come to your own conclusions by thinking for yourself.

Works Cited 

Abraham, Linus, and Osei Appiah. “Framing News Stories: The Role of Visual Imagery in Priming Racial Stereotypes.” Howard Journal of Communications, vol. 17, no. 3, 2006, pp. 183–203.

Baker, Brent H. “Media Bias.” Student News Daily, 2017.

Biagi, Shirley. “Changing Messages.” Media/Impact: An Introduction to Mass Media, 10th ed., Cengage Learning, 2013, pp. 268–270.

Campbell, Richard, et al. Media & Culture: An Introduction to Mass Communication. Bedford/St. Martin's, 2005.

Davis, Shannon N. “Sex Stereotypes in Commercials Targeted Toward Children: A Content Analysis.” Sociological Spectrum, vol. 23, no. 4, 2003, pp. 407–424.

Dugger, Ashley. “Media Bias and Criticism.” Study.com, http://study.com/academy/lesson/media-bias-criticism-definition-types-examples.html.

Guarino, Mark. “Misleading Reports of Lawlessness after Katrina Worsened Crisis, Officials Say.” The Guardian, 16 Aug. 2015, http://www.theguardian.com/us-news/2015/aug/16/hurricane-katrina-new-orleans-looting-violence-misleading-reports.

Hammersley, Martyn, and Roger Gomm. “Bias in Social Research.” Sociological Research Online, vol. 2, no. 1, 1997.

“How to Detect Bias in News Media.” FAIR, 19 Nov. 2012, http://fair.org/take-action-now/media-activism-kit/how-to-detect-bias-in-news-media/.

Levasseur, David G. “Media Bias.” Encyclopedia of Political Communication, edited by Lynda Lee Kaid, 1st ed., Sage Publications, 2008. Credo Reference, https://search.credoreference.com/content/entry/sagepolcom/media_bias/0.

Martin, Patricia Yancey, et al. “Gender Bias and Feminist Consciousness among Judges and Attorneys: A Standpoint Theory Analysis.” Signs: Journal of Women in Culture and Society, vol. 27, no. 3, 2002, pp. 665–701.

Mitchell, T. L., et al. “Racial Bias in Mock Juror Decision-Making: A Meta-Analytic Review of Defendant Treatment.” Law and Human Behavior, vol. 29, no. 6, 2005, pp. 621–637.

Murillo, M. “Political Bias in Policy Convergence: Privatization Choices in Latin America.” World Politics, vol. 54, no. 4, 2002, pp. 462–493.

Pavlik, John V., and Shawn McIntosh. “Media Literacy in the Digital Age.” Converging Media: A New Introduction to Mass Communication, Oxford University Press, 2017.

Vivian, John. “Media Literacy.” The Media of Mass Communication, 8th ed., Pearson, 2017, pp. 4–5.

Introduction to Media Studies Copyright © by Elizabeth Morrissette, Grace McKeon, Alison Louie, Amy Luther, and Alexis Fagen is licensed under a Creative Commons Attribution 4.0 International License , except where otherwise noted.



More Americans now see the media’s influence growing compared with a year ago

Americans are now more likely to say the media are growing than declining in influence

Americans’ views about the influence of the media in the country have shifted dramatically over the course of a year in which there was much discussion about the news media’s role during the election and post-election coverage , the COVID-19 pandemic and protests about racial justice . More Americans now say that news organizations are gaining influence than say their influence is waning, a stark contrast to just one year ago when the reverse was true.

When Americans were asked to evaluate the media’s standing in the nation, about four-in-ten (41%) say news organizations are growing in their influence, somewhat higher than the one-third (33%) who say their influence is declining, according to a Pew Research Center survey conducted March 8-14, 2021. The remaining one-quarter of U.S. adults say they are neither growing nor declining in influence.

To examine Americans’ views about the influence of the news media, Pew Research Center surveyed 12,045 U.S. adults from March 8 to 14, 2021. Everyone who completed the survey is a member of the Center’s American Trends Panel (ATP), an online survey panel that is recruited through national, random sampling of residential addresses. This way nearly all U.S. adults have a chance of selection. The survey is weighted to be representative of the U.S. adult population by gender, race, ethnicity, partisan affiliation, education and other categories. Read more about the ATP’s methodology . See here to read more about the questions used for this analysis and the methodology .

This is the latest report in Pew Research Center’s ongoing investigation of the state of news, information and journalism in the digital age, a research program funded by The Pew Charitable Trusts, with generous support from the John S. and James L. Knight Foundation.

By comparison, Americans in early 2020 were far more likely to say the news media were declining in influence . Nearly half (48%) at that time said this, compared with far fewer (32%) who said news organizations were growing in influence.

The 2021 figures more closely resemble responses from 2011 – the next most recent time this was asked – and before, in that more Americans then said the news media were growing in influence than declining. Views could have shifted in the gap between 2011 and 2020, but if so, they have now shifted back. (It should be noted that prior to 2020, this question was asked on the phone instead of on the web.)

What’s more, this shift in views of the media’s influence in the country occurred among members of both political parties – and in the same direction.

Both Democrats and Republicans are more likely than last year to think the media are growing in influence

Republicans and Republican-leaning independents are about evenly split in whether they think news organizations are growing (40%) or declining in influence (41%). This is very different from a year ago, when Republicans were twice as likely to say their influence was declining than growing (56% vs. 28%).

And Democrats and Democratic leaners are now much more likely to say news organizations are growing (43%) than declining in influence (28%), while a year ago they were slightly more likely to say influence was declining (42% vs. 36% growing).

Overall, then, Republicans are still more likely than Democrats to say the news media are losing standing in the country, though the two groups are more on par in thinking that the media are increasing in their influence. (Democrats are somewhat more likely than Republicans to say news organizations are neither growing nor declining in influence – 29% vs. 19%.)  

Americans who trust national news organizations are more likely to think news media influence is growing

Trust in media is closely tied to whether its influence is seen as growing or declining. Those who have greater trust in national news organizations tend to be more likely to see the news media gaining influence, while those with low levels of trust are generally more likely to see it waning.

Americans who say they have a great deal of trust in the accuracy of political news from national news organizations are twice as likely to say the news media are growing than declining in influence (48% vs. 24%, respectively). Conversely, those who have no trust at all are much more likely to think that news organizations are declining (47% vs. 33% who say they are growing).

Most demographic groups are more likely to say the news media are growing than declining in influence

Black Americans are far more likely to think that the news media are growing in influence rather than declining (48% vs. 19%, respectively), as are Hispanic Americans though to a somewhat lesser degree. White Americans, on the other hand, are about evenly split in thinking the news media are growing or declining in influence (39% vs. 37%, respectively). And while men are about evenly split (39% growing vs. 38% declining), women are more likely to say news organizations are growing (43%) than declining (29%) in influence.

Note: Here are the questions used for this analysis, along with responses, and its methodology .


Jeffrey Gottfried is an associate director focusing on internet and technology research at Pew Research Center .


Naomi Forman-Katz is a research analyst focusing on news and information research at Pew Research Center .


Copyright 2024 Pew Research Center


Media bias is the bias of journalists and news producers within the mass media in the selection of which events and stories are reported and how they are covered. The term "media bias" implies a pervasive or widespread bias contravening the standards of journalism, rather than the perspective of an individual journalist or article.

Common forms include:

  • Coverage bias: when media choose to report only negative news about one party or ideology.
  • Gatekeeping bias: when stories are selected or deselected, sometimes on ideological grounds.
  • Statement bias: when media coverage is slanted towards or against particular actors or issues.

Other recognized forms include advertising bias, concision bias, content bias, corporate bias, decision-making bias, distortion bias, mainstream bias, partisan bias, sensationalism, structural bias, false balance, undue weight, speculative content, false timeliness, and ventriloquism.

1. Groseclose, T., & Milyo, J. (2005). A measure of media bias. The Quarterly Journal of Economics, 120(4), 1191–1237. (https://academic.oup.com/qje/article-abstract/120/4/1191/1926642)
2. Mullainathan, S., & Shleifer, A. (2002). Media bias. (https://www.nber.org/papers/w9295)
3. Gentzkow, M., & Shapiro, J. M. (2006). Media bias and reputation. Journal of Political Economy, 114(2), 280–316. (https://www.journals.uchicago.edu/doi/abs/10.1086/499414)
4. Baron, D. P. (2006). Persistent media bias. Journal of Public Economics, 90(1–2), 1–36. (https://www.sciencedirect.com/science/article/abs/pii/S0047272705000216)
5. D'Alessio, D., & Allen, M. (2000). Media bias in presidential elections: A meta-analysis. Journal of Communication, 50(4), 133–156. (https://onlinelibrary.wiley.com/doi/abs/10.1111/j.1460-2466.2000.tb02866.x)
6. Groeling, T. (2013). Media bias by the numbers: Challenges and opportunities in the empirical study of partisan news. Annual Review of Political Science, 16, 129–151. (https://www.annualreviews.org/doi/abs/10.1146/annurev-polisci-040811-115123)
7. Hamborg, F., Donnay, K., & Gipp, B. (2019). Automated identification of media bias in news articles: An interdisciplinary literature review. International Journal on Digital Libraries, 20(4), 391–415. (https://link.springer.com/article/10.1007/s00799-018-0261-y)
8. Qin, B., Strömberg, D., & Wu, Y. (2018). Media bias in China. American Economic Review, 108(9), 2442–76. (https://www.aeaweb.org/articles?id=10.1257/aer.20170947)
9. Lee, T. T. (2005). The liberal media myth revisited: An examination of factors influencing perceptions of media bias. Journal of Broadcasting & Electronic Media, 49(1), 43–64. (https://www.tandfonline.com/doi/abs/10.1207/s15506878jobem4901_4)
10. Park, S., Kang, S., Chung, S., & Song, J. (2009). NewsCube: Delivering multiple aspects of news to mitigate media bias. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems (pp. 443–452). (https://dl.acm.org/doi/abs/10.1145/1518701.1518772)


Misinformation and biases infect social media, both intentionally and accidentally

Giovanni Luca Ciampaglia, Assistant Professor, Department of Computer Science and Engineering, University of South Florida

Filippo Menczer, Professor of Computer Science and Informatics; Director of the Center for Complex Networks and Systems Research, Indiana University

Disclosure statement

Giovanni Luca Ciampaglia has received funding from the Office of the Vice Provost for Research at Indiana University, the Democracy Fund, and the Swiss National Science Foundation. Currently, he is supported by the Indiana University Network Science Institute.

Filippo Menczer has received funding from the National Science Foundation, DARPA, US Navy, Yahoo Research, the J.S. McDonnell Foundation, and Democracy Fund.

Indiana University provides funding as a member of The Conversation US.


Social media are among the primary sources of news in the U.S. and across the world. Yet users are exposed to content of questionable accuracy, including conspiracy theories , clickbait , hyperpartisan content , pseudo science and even fabricated “fake news” reports .

It’s not surprising that there’s so much disinformation published: Spam and online fraud are lucrative for criminals , and government and political propaganda yield both partisan and financial benefits . But the fact that low-credibility content spreads so quickly and easily suggests that people and the algorithms behind social media platforms are vulnerable to manipulation.

Our research has identified three types of bias that make the social media ecosystem vulnerable to both intentional and accidental misinformation. That is why our Observatory on Social Media at Indiana University is building tools to help people become aware of these biases and protect themselves from outside influences designed to exploit them.

Bias in the brain

Cognitive biases originate in the way the brain processes the information that every person encounters every day. The brain can deal with only a finite amount of information, and too many incoming stimuli can cause information overload . That in itself has serious implications for the quality of information on social media. We have found that steep competition for users’ limited attention means that some ideas go viral despite their low quality – even when people prefer to share high-quality content .

To avoid getting overwhelmed, the brain uses a number of tricks . These methods are usually effective, but may also become biases when applied in the wrong contexts.

One cognitive shortcut happens when a person is deciding whether to share a story that appears on their social media feed. People are very affected by the emotional connotations of a headline , even though that’s not a good indicator of an article’s accuracy. Much more important is who wrote the piece .

To counter this bias, and help people pay more attention to the source of a claim before sharing it, we developed Fakey , a mobile news literacy game (free on Android and iOS ) simulating a typical social media news feed, with a mix of news articles from mainstream and low-credibility sources. Players get more points for sharing news from reliable sources and flagging suspicious content for fact-checking. In the process, they learn to recognize signals of source credibility, such as hyperpartisan claims and emotionally charged headlines.
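The incentive structure described for Fakey can be illustrated with a toy sketch. The specific point values below are made up for illustration; the article does not specify the game's actual scoring:

```python
def fakey_score(action, source_reliable):
    """Hypothetical point values illustrating Fakey's incentive structure:
    reward sharing reliable news and flagging dubious content."""
    if action == "share" and source_reliable:
        return 10
    if action == "flag" and not source_reliable:
        return 10
    return -5  # sharing junk or flagging good reporting costs points

print(fakey_score("share", source_reliable=True))   # 10
print(fakey_score("flag", source_reliable=True))    # -5
```

The key design idea, per the article, is that players learn to check the source before acting, because the payoff depends on source credibility rather than on how emotionally engaging the headline is.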


Bias in society

Another source of bias comes from society. When people connect directly with their peers, the social biases that guide their selection of friends come to influence the information they see.

In fact, in our research we have found that it is possible to determine the political leanings of a Twitter user by simply looking at the partisan preferences of their friends. Our analysis of the structure of these partisan communication networks found social networks are particularly efficient at disseminating information – accurate or not – when they are closely tied together and disconnected from other parts of society .
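The friend-based inference described above can be sketched as a simple majority vote over the labels of a user's friends. This is a toy stand-in, not the authors' actual method, and it assumes partisan labels for the friends are already known:

```python
from collections import Counter

def infer_leaning(friend_labels):
    """Toy illustration: guess a user's political leaning from the
    partisan labels of their friends, by simple majority vote."""
    if not friend_labels:
        return "unknown"
    # most_common(1) returns [(label, count)] for the top label
    return Counter(friend_labels).most_common(1)[0][0]

# A user whose friends lean mostly one way gets that label:
print(infer_leaning(["left", "left", "left", "right"]))  # left
```

Even this crude rule hints at why tightly knit partisan clusters are so predictable: when almost all of a user's connections share one leaning, the vote is nearly unanimous.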

The tendency to evaluate information more favorably if it comes from within their own social circles creates “ echo chambers ” that are ripe for manipulation, either consciously or unintentionally. This helps explain why so many online conversations devolve into “us versus them” confrontations .

To study how the structure of online social networks makes users vulnerable to disinformation, we built Hoaxy , a system that tracks and visualizes the spread of content from low-credibility sources, and how it competes with fact-checking content. Our analysis of the data collected by Hoaxy during the 2016 U.S. presidential elections shows that Twitter accounts that shared misinformation were almost completely cut off from the corrections made by the fact-checkers.

When we drilled down on the misinformation-spreading accounts, we found a very dense core group of accounts retweeting each other almost exclusively – including several bots. The only times that fact-checking organizations were ever quoted or mentioned by the users in the misinformed group were when questioning their legitimacy or claiming the opposite of what they wrote.


Bias in the machine

The third group of biases arises directly from the algorithms used to determine what people see online. Both social media platforms and search engines employ them. These personalization technologies are designed to select only the most engaging and relevant content for each individual user. But in doing so, they may end up reinforcing the cognitive and social biases of users, thus making them even more vulnerable to manipulation.

For instance, the detailed advertising tools built into many social media platforms let disinformation campaigners exploit confirmation bias by tailoring messages to people who are already inclined to believe them.

Also, if a user often clicks on Facebook links from a particular news source, Facebook will tend to show that person more of that site’s content . This so-called “ filter bubble ” effect may isolate people from diverse perspectives, strengthening confirmation bias.

Our own research shows that social media platforms expose users to a less diverse set of sources than do non-social media sites like Wikipedia. Because this is at the level of a whole platform, not of a single user, we call this the homogeneity bias .

Another important ingredient of social media is information that is trending on the platform, according to what is getting the most clicks. We call this popularity bias , because we have found that an algorithm designed to promote popular content may negatively affect the overall quality of information on the platform. This also feeds into existing cognitive bias, reinforcing what appears to be popular irrespective of its quality.
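As a minimal sketch of the popularity bias just described, consider a feed ranker that orders stories purely by engagement; quality never enters the sort key. The field names here are hypothetical, not from any real platform's API:

```python
def rank_by_popularity(stories):
    """Order a feed purely by click count; note 'quality' is never consulted."""
    return sorted(stories, key=lambda s: s["clicks"], reverse=True)

feed = [
    {"headline": "Careful investigative report", "clicks": 120, "quality": 0.9},
    {"headline": "Outrage-bait rumor", "clicks": 5000, "quality": 0.2},
]
print(rank_by_popularity(feed)[0]["headline"])  # Outrage-bait rumor
```

Because clicks feed back into ranking and ranking drives more clicks, low-quality but engaging content can dominate a feed, which is the feedback loop the paragraph describes.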

All these algorithmic biases can be manipulated by social bots , computer programs that interact with humans through social media accounts. Most social bots, like Twitter’s Big Ben , are harmless. However, some conceal their real nature and are used for malicious intents, such as boosting disinformation or falsely creating the appearance of a grassroots movement , also called “astroturfing.” We found evidence of this type of manipulation in the run-up to the 2010 U.S. midterm election.


To study these manipulation strategies, we developed a tool to detect social bots called Botometer . Botometer uses machine learning to detect bot accounts, by inspecting thousands of different features of Twitter accounts, like the times of its posts, how often it tweets, and the accounts it follows and retweets. It is not perfect, but it has revealed that as many as 15 percent of Twitter accounts show signs of being bots .
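Botometer's actual model is a machine-learning classifier trained on thousands of account features. Purely as an illustration of the kind of signals involved, here is a crude rule-based stand-in using two of the feature families the article mentions (posting volume and timing); the thresholds and weights are invented:

```python
def bot_likelihood(tweets_per_day, distinct_active_hours):
    """Crude heuristic stand-in for a bot classifier (NOT Botometer's model):
    sustained high volume and round-the-clock activity both raise the score."""
    score = 0.0
    if tweets_per_day > 100:         # humans rarely sustain this volume
        score += 0.5
    if distinct_active_hours >= 20:  # posting in nearly every hour of the day
        score += 0.5
    return score

print(bot_likelihood(tweets_per_day=500, distinct_active_hours=24))  # 1.0
```

A real classifier learns such thresholds from labeled accounts instead of hard-coding them, and combines far more features, which is why Botometer is described as imperfect but useful at scale.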

Using Botometer in conjunction with Hoaxy, we analyzed the core of the misinformation network during the 2016 U.S. presidential campaign. We found many bots exploiting both the cognitive, confirmation and popularity biases of their victims and Twitter’s algorithmic biases.

These bots are able to construct filter bubbles around vulnerable users, feeding them false claims and misinformation. First, they can attract the attention of human users who support a particular candidate by tweeting that candidate’s hashtags or by mentioning and retweeting the person. Then the bots can amplify false claims smearing opponents by retweeting articles from low-credibility sources that match certain keywords. This activity also makes the algorithm highlight for other users false stories that are being shared widely.

Understanding complex vulnerabilities

Even as our research, and others’, shows how individuals, institutions and even entire societies can be manipulated on social media, there are many questions left to answer. It’s especially important to discover how these different biases interact with each other, potentially creating more complex vulnerabilities.

Tools like ours offer internet users more information about disinformation, and therefore some degree of protection from its harms. The solutions will not likely be only technological , though there will probably be some technical aspects to them. But they must take into account the cognitive and social aspects of the problem.

Editor’s note: This article was updated on Jan. 10, 2019, to replace a link to a study that had been retracted. The text of the article is still accurate, and remains unchanged.



Media Bias and Democracy in India

By Janani Mohan

June 28, 2021


This article was originally published in South Asian Voices.

As the COVID-19 pandemic rages out of control in India, many are rightly focusing on the content of stories on the death toll and months of lockdown. The lack of journalistic integrity behind some of the stories deepens this grim situation. In April,  reports emerged  that, at the request of the Indian government, Twitter censored 52 tweets criticizing the government’s handling of the pandemic. Meanwhile, pro-government TV channels  blamed  the farmers’ protests for limited oxygen supplies for COVID-19 patients, though supplies were  actually scarce  due to poor public health infrastructure. This reporting is not only misleading and traumatic to those affected by the pandemic, but also poses a major threat to India’s vibrant democracy.

Even before the pandemic, media bias in India existed across the largest newspapers throughout the country, and political forces shape this bias. For example, funds from the government are critical to many newspapers’ operations and budgets, and the current Bhartiya Janata Party (BJP) government has previously  refused to advertise  with newspapers that do not support its initiatives. This pressure leads media to endorse government policies, creating unbalanced reporting where media bias can affect political behavior in favor of the incumbent. Many media outlets enjoy a symbiotic relationship with the government, in turn receiving attention, funding, and prominence. These trends damage India’s democracy and also put journalists critical of the government in danger, threatening their right to physical safety.

Funds from the government are critical to many newspapers’ operations and budgets, and the current Bhartiya Janata Party (BJP) government has previously refused to advertise with newspapers that do not support its initiatives.

Media Bias in India

While the COVID-19 pandemic has exacerbated media bias in India, it is hardly a new phenomenon. A  study  of 30 Indian newspapers and 41 Indian TV channels with the largest viewership rates in the country confirms the existence of rampant media bias during a two-year period from 2017 to 2018.

The study relies on rating editorial articles that focus on religious, gender, and caste issues as either liberal, neutral, or conservative; and then compiling these scores by each newspaper to find the overall bias in each outlet. The results unsurprisingly and unfortunately show the consistent existence of media bias—for example, except for eight newspapers, the papers all express biases far from neutral. And this bias consistently correlates with viewers in India expressing similarly biased social, economic, and security attitudes.

What this suggests is either that biases in the media shape viewer attitudes or Indians are viewing outlets that align with their pre-existing views. Meanwhile, political parties capitalize on this bias to influence public attitudes and further their own power. The BJP  spends  almost USD $140 million on publicity per year, with 43 percent of this expenditure focusing specifically on print ads in newspapers. Government advertisements serve as a financial lever for influencing media content and public opinion. For example, during the year leading to the 2019 elections, newspapers that received more advertisement revenue from the BJP were likelier to espouse more conservative ideology and to have more conservative readers.

Bias versus Democracy

This ability of media bias to influence political support in India can contribute significantly to democratic backsliding by harming journalists, preventing freedom of expression and government accountability, and influencing voters. Media bias in itself drives democratic backsliding: when the media neither holds the government accountable nor informs the public about policies that strengthen the incumbent's power, authoritarian practices can increase.

In addition, government efforts to constrain the media harm journalists, undemocratically violating citizens' rights and physical safety. Freedom House  rates  India as only two on a four-point scale for whether there is a “free and independent media,” because of “attacks on press freedom…under the Modi government.” In fact, the government  imprisoned several journalists  in 2020 who reported critically on Prime Minister (PM) Narendra Modi’s response to the pandemic. The crackdown on journalists engendered an unsafe environment for free reporting, a feature of many authoritarian states.

A biased media also prevents citizens from receiving information that might be essential to public wellbeing by filtering information through a lens that supports government interests first. When the BJP cracked down on coverage of COVID-19 last year, journalists were  unable to disseminate  critical information to Indians. This included where migrants suffering from the sudden lockdown could receive necessities—information that could save lives. Notably, these crackdowns also meant an absence of reporting criticizing the government’s response to the pandemic. In a democratic society, a critical press is essential for holding the government accountable for its actions and motivating it to change its practices.  

Media bias plays an influencing role at the voting booth as propaganda can skew voter decisions and perceptions of what is true.

Finally, media bias plays an influencing role at the voting booth as propaganda can skew voter decisions and perceptions of what is true. During India’s 2014 general elections, the BJP advertised more than the Congress Party and voters exposed to more media were  likelier  to vote for the BJP. To influence voters, media bias often utilizes inflammatory messaging to convince more people to vote, selective information to bias what voters believe about the efficacy of the candidates, and appeasement to convince voters that they will personally benefit from voting a certain way. For example, a TimesNow interview of PM Modi before the 2019 elections  made it seem  that Modi’s economic policies—widely criticized as ineffectual—were successful.

From Media Bias to Media Neutrality

Although government measures are exacerbating media bias, the media retains some agency and could work to limit the influence of politics on reporting. Currently, 36 percent of daily newspapers  earn over half  of their total income from the government of India and most major TV stations have owners who served as politicians themselves or who had family members in politics. Although it would be difficult to convince larger outlets to participate since they benefit from their government backing, smaller independent outlets can start this movement towards neutrality. Many small outlets already eschew government funding and report with less biased views. These publications in India therefore deserve more attention and more support to reduce media bias.

While India has some of the  highest circulation  of newspapers in the world, it also unfortunately has high media bias rates and one of the  lowest press freedom rankings  for democracies. This media bias can contribute to democratic backsliding and must be addressed by media outlets. Only then can media in India properly do its job—serving to inform, not influence the public.

The author would like to acknowledge Dr. Pradeep Chhibber, Pranav Gupta, and UC Berkeley for supporting her research measuring media bias in India. All perspectives in this article are her own.



Copyright The Henry L. Stimson Center


NPR in Turmoil After It Is Accused of Liberal Bias

An essay from an editor at the broadcaster has generated a firestorm of criticism about the network on social media, especially among conservatives.


By Benjamin Mullin and Katie Robertson

NPR is facing both internal tumult and a fusillade of attacks by prominent conservatives this week after a senior editor publicly claimed the broadcaster had allowed liberal bias to affect its coverage, risking its trust with audiences.

Uri Berliner, a senior business editor who has worked at NPR for 25 years, wrote in an essay published Tuesday by The Free Press, a popular Substack publication, that “people at every level of NPR have comfortably coalesced around the progressive worldview.”

Mr. Berliner, a Peabody Award-winning journalist, castigated NPR for what he said was a litany of journalistic missteps around coverage of several major news events, including the origins of Covid-19 and the war in Gaza. He also said the internal culture at NPR had placed race and identity as “paramount in nearly every aspect of the workplace.”

Mr. Berliner’s essay has ignited a firestorm of criticism of NPR on social media, especially among conservatives who have long accused the network of political bias in its reporting. Former President Donald J. Trump took to his social media platform, Truth Social, to argue that NPR’s government funding should be rescinded, an argument he has made in the past.

NPR has forcefully pushed back on Mr. Berliner’s accusations and the criticism.

“We’re proud to stand behind the exceptional work that our desks and shows do to cover a wide range of challenging stories,” Edith Chapin, the organization’s editor in chief, said in an email to staff on Tuesday. “We believe that inclusion — among our staff, with our sourcing, and in our overall coverage — is critical to telling the nuanced stories of this country and our world.” Some other NPR journalists also criticized the essay publicly, including Eric Deggans, its TV critic, who faulted Mr. Berliner for not giving NPR an opportunity to comment on the piece.

In an interview on Thursday, Mr. Berliner expressed no regrets about publishing the essay, saying he loved NPR and hoped to make it better by airing criticisms that have gone unheeded by leaders for years. He called NPR a “national trust” that people rely on for fair reporting and superb storytelling.

“I decided to go out and publish it in hopes that something would change, and that we get a broader conversation going about how the news is covered,” Mr. Berliner said.

He said he had not been disciplined by managers, though he said he had received a note from his supervisor reminding him that NPR requires employees to clear speaking appearances and media requests with standards and media relations. He said he didn’t run his remarks to The New York Times by network spokespeople.

When the hosts of NPR’s biggest shows, including “Morning Edition” and “All Things Considered,” convened on Wednesday afternoon for a long-scheduled meet-and-greet with the network’s new chief executive, Katherine Maher, conversation soon turned to Mr. Berliner’s essay, according to two people with knowledge of the meeting. During the lunch, Ms. Chapin told the hosts that she didn’t want Mr. Berliner to become a “martyr,” the people said.

Mr. Berliner’s essay also sent critical Slack messages whizzing through some of the same employee affinity groups focused on racial and sexual identity that he cited in his essay. In one group, several staff members disputed Mr. Berliner’s points about a lack of ideological diversity and said efforts to recruit more people of color would make NPR’s journalism better.

On Wednesday, staff members from “Morning Edition” convened to discuss the fallout from Mr. Berliner’s essay. During the meeting, an NPR producer took issue with Mr. Berliner’s argument for why NPR’s listenership has fallen off, describing a variety of factors that have contributed to the change.

Mr. Berliner’s remarks prompted vehement pushback from several news executives. Tony Cavin, NPR’s managing editor of standards and practices, said in an interview that he rejected all of Mr. Berliner’s claims of unfairness, adding that his remarks would probably make it harder for NPR journalists to do their jobs.

“The next time one of our people calls up a Republican congressman or something and tries to get an answer from them, they may well say, ‘Oh, I read these stories, you guys aren’t fair, so I’m not going to talk to you,’” Mr. Cavin said.

Some journalists have defended Mr. Berliner’s essay. Jeffrey A. Dvorkin, NPR’s former ombudsman, said on social media that Mr. Berliner was “not wrong.” Chuck Holmes, a former managing editor at NPR, called Mr. Berliner’s essay “brave” on Facebook.

Mr. Berliner’s criticism was the latest salvo within NPR, which is no stranger to internal division. In October, Mr. Berliner took part in a lengthy debate over whether NPR should defer to language proposed by the Arab and Middle Eastern Journalists Association while covering the conflict in Gaza.

“We don’t need to rely on an advocacy group’s guidance,” Mr. Berliner wrote, according to a copy of the email exchange viewed by The Times. “Our job is to seek out the facts and report them.” The debate didn’t change NPR’s language guidance, which is made by editors who weren’t part of the discussion. And in a statement on Thursday, the Arab and Middle Eastern Journalists Association said it is a professional association for journalists, not a political advocacy group.

Mr. Berliner’s public criticism has highlighted broader concerns within NPR about the public broadcaster’s mission amid continued financial struggles. Last year, NPR cut 10 percent of its staff and canceled four podcasts, including the popular “Invisibilia,” as it tried to make up for a $30 million budget shortfall. Listeners have drifted away from traditional radio to podcasts, and the advertising market has been unsteady.

In his essay, Mr. Berliner laid some of the blame at the feet of NPR’s former chief executive, John Lansing, who said he was retiring at the end of last year after four years in the role. He was replaced by Ms. Maher, who started on March 25.

During a meeting with employees in her first week, Ms. Maher was asked what she thought about decisions to give a platform to political figures like Ronna McDaniel, the former Republican Party chair whose position as a political analyst at NBC News became untenable after an on-air revolt from hosts who criticized her efforts to undermine the 2020 election.

“I think that this conversation has been one that does not have an easy answer,” Ms. Maher responded.

Benjamin Mullin reports on the major companies behind news and entertainment.

Katie Robertson covers the media industry for The Times.


Fake news, disinformation and misinformation in social media: a review

Esma Aïmeur

Department of Computer Science and Operations Research (DIRO), University of Montreal, Montreal, Canada

Sabrine Amri

Gilles Brassard

Associated Data

All the data and material are available in the papers cited in the references.

Abstract

Online social networks (OSNs) are rapidly growing and have become a huge source of all kinds of global and local news for millions of users. However, OSNs are a double-edged sword. Despite the great advantages they offer, such as unlimited easy communication and instant access to news and information, they can also have many disadvantages and issues. One of their major challenges is the spread of fake news. Fake news identification is still a complex, unresolved issue. Furthermore, fake news detection on OSNs presents unique characteristics and challenges that make finding a solution anything but trivial. Meanwhile, artificial intelligence (AI) approaches are still incapable of overcoming this challenging problem. To make matters worse, AI techniques such as machine learning and deep learning are leveraged to deceive people by creating and disseminating fake content. Consequently, automatic fake news detection remains a huge challenge, primarily because the content is designed to closely resemble the truth, and it is often hard to determine its veracity by AI alone without additional information from third parties. This work aims to provide a comprehensive and systematic review of fake news research as well as a fundamental review of existing approaches used to detect and prevent fake news from spreading via OSNs. We present the research problem and the existing challenges, discuss the state of the art in existing approaches for fake news detection, and point out future research directions in tackling the challenges.

Introduction

Context and motivation.

Fake news, disinformation and misinformation have become such a scourge that Marcia McNutt, president of the National Academy of Sciences of the United States, is quoted as saying (in implicit reference to the COVID-19 pandemic) “Misinformation is worse than an epidemic: It spreads at the speed of light throughout the globe and can prove deadly when it reinforces misplaced personal bias against all trustworthy evidence” in a joint statement of the National Academies 1 posted on July 15, 2021. Indeed, although online social networks (OSNs), also called social media, have improved the ease with which real-time information is broadcast, their popularity and massive use have expanded the spread of fake news by increasing the speed and scope at which it can spread. Fake news may refer to the manipulation of information, carried out either through the production of false information or through the distortion of true information. However, this problem was not created by social media. Long before it, rumors circulated in the traditional media that Elvis was not dead, 2 that the Earth was flat, 3 that aliens had invaded us, 4 and so on.

Social media has therefore become a powerful source of fake news dissemination (Sharma et al. 2019; Shu et al. 2017). According to Pew Research Center’s analysis of news use across social media platforms, in 2020 about half of American adults got news on social media at least sometimes, 5 while in 2018 only one-fifth of them said they often got news via social media. 6

Hence, fake news can have a significant impact on society as manipulated and false content is easier to generate and harder to detect (Kumar and Shah 2018 ) and as disinformation actors change their tactics (Kumar and Shah 2018 ; Micallef et al. 2020 ). In 2017, Snow predicted in the MIT Technology Review (Snow 2017 ) that most individuals in mature economies will consume more false than valid information by 2022.

Recent news on the COVID-19 pandemic, which flooded the web and created panic in many countries, has been reported as fake. 7 For example, holding your breath for ten seconds to one minute is not a self-test for COVID-19 8 (see Fig. 1). Similarly, online posts claiming to reveal various “cures” for COVID-19, such as eating boiled garlic or drinking chlorine dioxide (an industrial bleach), were verified 9 as fake and, in some cases, as dangerous; such measures will never cure the infection.


Fig. 1: Fake news example about a self-test for COVID-19 (source: https://cdn.factcheck.org/UploadedFiles/Screenshot031120_false.jpg, last accessed 26-12-2022)

Social media outperformed television as the major news source for young people of the UK and the USA. 10 Moreover, as it is easier to generate and disseminate news online than with traditional media or face to face, large volumes of fake news are produced online for many reasons (Shu et al. 2017 ). Furthermore, it has been reported in a previous study about the spread of online news on Twitter (Vosoughi et al. 2018 ) that the spread of false news online is six times faster than truthful content and that 70% of the users could not distinguish real from fake news (Vosoughi et al. 2018 ) due to the attraction of the novelty of the latter (Bovet and Makse 2019 ). It was determined that falsehood spreads significantly farther, faster, deeper and more broadly than the truth in all categories of information, and the effects are more pronounced for false political news than for false news about terrorism, natural disasters, science, urban legends, or financial information (Vosoughi et al. 2018 ).

Over 1 million tweets were estimated to be related to fake news by the end of the 2016 US presidential election. 11 In 2017, in Germany, a government spokesman affirmed: “We are dealing with a phenomenon of a dimension that we have not seen before,” referring to an unprecedented spread of fake news on social networks. 12 Given the strength of this new phenomenon, fake news has been chosen as the word of the year by the Macquarie dictionary both in 2016 13 and in 2018 14 as well as by the Collins dictionary in 2017. 15 , 16 Since 2020, the new term “infodemic” was coined, reflecting widespread researchers’ concern (Gupta et al. 2022 ; Apuke and Omar 2021 ; Sharma et al. 2020 ; Hartley and Vu 2020 ; Micallef et al. 2020 ) about the proliferation of misinformation linked to the COVID-19 pandemic.

The Gartner Group’s top strategic predictions for 2018 and beyond included the need for IT leaders to quickly develop Artificial Intelligence (AI) algorithms to address counterfeit reality and fake news. 17 However, fake news identification is a complex issue. Snow (2017) questioned the ability of AI to win the war against fake news. Similarly, other researchers concurred that even the best AI for spotting fake news is still ineffective. 18 Besides, recent studies have shown that the power of AI algorithms for identifying fake news is lower than their power to create it (Paschen 2019). Consequently, automatic fake news detection remains a huge challenge, primarily because the content is designed to closely resemble the truth in order to deceive users, and as a result, it is often hard to determine its veracity by AI alone. Therefore, it is crucial to consider more effective approaches to solve the problem of fake news in social media.

Contribution

The fake news problem has been addressed by researchers from various perspectives related to different topics. These topics include, but are not restricted to:

  • Social science studies, which investigate why and who falls for fake news (Altay et al. 2022; Batailler et al. 2022; Sterret et al. 2018; Badawy et al. 2019; Pennycook and Rand 2020; Weiss et al. 2020; Guadagno and Guttieri 2021), whom to trust and how perceptions of misinformation and disinformation relate to media trust and media consumption patterns (Hameleers et al. 2022), how fake news differs from personal lies (Chiu and Oh 2021; Escolà-Gascón 2021), how the law can regulate digital disinformation and how governments can regulate the values of social media companies that themselves regulate disinformation spread on their platforms (Marsden et al. 2020; Schuyler 2019; Vasu et al. 2018; Burshtein 2017; Waldman 2017; Alemanno 2018; Verstraete et al. 2017), and the challenges fake news poses to democracy (Jungherr and Schroeder 2021).
  • Behavioral intervention studies, which examine what literacy ideas mean in the age of dis-, mis- and malinformation (Carmi et al. 2020), investigate whether media literacy helps the identification of fake news (Jones-Jang et al. 2021), and attempt to improve people’s news literacy (Apuke et al. 2022; Dame Adjin-Tettey 2022; Hameleers 2022; Nagel 2022; Jones-Jang et al. 2021; Mihailidis and Viotty 2017; García et al. 2020) by encouraging people to pause to assess the credibility of headlines (Fazio 2020), promoting civic online reasoning (McGrew 2020; McGrew et al. 2018) and critical thinking (Lutzke et al. 2019), together with evaluations of credibility indicators (Bhuiyan et al. 2020; Nygren et al. 2019; Shao et al. 2018a; Pennycook et al. 2020a, b; Clayton et al. 2020; Ozturk et al. 2015; Metzger et al. 2020; Sherman et al. 2020; Nekmat 2020; Brashier et al. 2021; Chung and Kim 2021; Lanius et al. 2021).
  • Social media-driven studies, which investigate the effect of signals (e.g., sources) on detecting and recognizing fake news (Vraga and Bode 2017; Jakesch et al. 2019; Shen et al. 2019; Avram et al. 2020; Hameleers et al. 2020; Dias et al. 2020; Nyhan et al. 2020; Bode and Vraga 2015; Tsang 2020; Vishwakarma et al. 2019; Yavary et al. 2020), and which investigate fake and reliable news sources using complex network analysis based on search engine optimization metrics (Mazzeo and Rapisarda 2022).

The impacts of fake news have reached various areas and disciplines beyond online social networks and society (García et al. 2020 ) such as economics (Clarke et al. 2020 ; Kogan et al. 2019 ; Goldstein and Yang 2019 ), psychology (Roozenbeek et al. 2020a ; Van der Linden and Roozenbeek 2020 ; Roozenbeek and van der Linden 2019 ), political science (Valenzuela et al. 2022 ; Bringula et al. 2022 ; Ricard and Medeiros 2020 ; Van der Linden et al. 2020 ; Allcott and Gentzkow 2017 ; Grinberg et al. 2019 ; Guess et al. 2019 ; Baptista and Gradim 2020 ), health science (Alonso-Galbán and Alemañy-Castilla 2022 ; Desai et al. 2022 ; Apuke and Omar 2021 ; Escolà-Gascón 2021 ; Wang et al. 2019c ; Hartley and Vu 2020 ; Micallef et al. 2020 ; Pennycook et al. 2020b ; Sharma et al. 2020 ; Roozenbeek et al. 2020b ), environmental science (e.g., climate change) (Treen et al. 2020 ; Lutzke et al. 2019 ; Lewandowsky 2020 ; Maertens et al. 2020 ), etc.

Interesting research has been carried out to review and study the fake news issue in online social networks. Some works focus not only on fake news but also distinguish between fake news and rumor (Bondielli and Marcelloni 2019; Meel and Vishwakarma 2020), while others tackle the whole problem, from characterization to processing techniques (Shu et al. 2017; Guo et al. 2020; Zhou and Zafarani 2020). However, they mostly study approaches from a machine learning perspective (Bondielli and Marcelloni 2019), a data mining perspective (Shu et al. 2017), a crowd intelligence perspective (Guo et al. 2020), or a knowledge-based perspective (Zhou and Zafarani 2020). Furthermore, most of these studies ignore at least one of the mentioned perspectives, and in many cases they do not cover other existing detection approaches using methods such as blockchain and fact-checking, or analysis of the metrics used for search engine optimization (Mazzeo and Rapisarda 2022). In our work, by contrast, and to the best of our knowledge, we cover all the approaches used for fake news detection. Indeed, we investigate the proposed solutions from broader perspectives (i.e., the detection techniques that are used, as well as the different aspects and types of the information used).

Therefore, in this paper, we are highly motivated by the following facts. First, fake news detection on social media is still in its early stages of development, and many challenging issues remain that require deeper investigation. Hence, it is necessary to discuss potential research directions that can improve fake news detection and mitigation tasks. Second, the dynamic nature of fake news propagation through social networks further complicates matters (Sharma et al. 2019). False information can easily reach and impact a large number of users in a short time (Friggeri et al. 2014; Qian et al. 2018). Moreover, fact-checking organizations cannot keep up with the dynamics of propagation, as they require human verification, which can hold back a timely and cost-effective response (Kim et al. 2018; Ruchansky et al. 2017; Shu et al. 2018a).

Our work focuses primarily on understanding the “fake news” problem, its related challenges and root causes, and reviewing automatic fake news detection and mitigation methods in online social networks as addressed by researchers. The main contributions that differentiate us from other works are summarized below:

  • We present the general context from which the fake news problem emerged (i.e., online deception).
  • We review existing definitions of fake news, identify the terms and features most commonly used to define fake news, and categorize related works accordingly.
  • We propose a fake news typology classification based on the various categorizations of fake news reported in the literature.
  • We point out the most challenging factors preventing researchers from proposing highly effective solutions for automatic fake news detection in social media.
  • We highlight and classify representative studies in the domain of automatic fake news detection and mitigation on online social networks including the key methods and techniques used to generate detection models.
  • We discuss the key shortcomings that may inhibit the effectiveness of the proposed fake news detection methods in online social networks.
  • We provide recommendations that can help address these shortcomings and improve the quality of research in this domain.

The rest of this article is organized as follows. We explain the methodology with which the studied references are collected and selected in Sect.  2 . We introduce the online deception problem in Sect.  3 . We highlight the modern-day problem of fake news in Sect.  4 , followed by challenges facing fake news detection and mitigation tasks in Sect.  5 . We provide a comprehensive literature review of the most relevant scholarly works on fake news detection in Sect.  6 . We provide a critical discussion and recommendations that may fill some of the gaps we have identified, as well as a classification of the reviewed automatic fake news detection approaches, in Sect.  7 . Finally, we provide a conclusion and propose some future directions in Sect.  8 .

Review methodology

This section introduces the systematic review methodology on which we relied to perform our study. We start with the formulation of the research questions, which allowed us to select the relevant research literature. Then, we provide the different sources of information together with the search and inclusion/exclusion criteria we used to select the final set of papers.

Research questions formulation

The research scope, research questions, and inclusion/exclusion criteria were established following an initial evaluation of the literature, and the following research questions were formulated and addressed.

  • RQ1: What is fake news in social media, how is it defined in the literature, what are its related concepts, and what are its different types?
  • RQ2: What are the existing challenges and issues related to fake news?
  • RQ3: What are the available techniques used to perform fake news detection in social media?

Sources of information

We broadly searched for journal and conference research articles, books, and magazines as sources of data from which to extract relevant articles. We used the main scientific databases and digital libraries in our search, such as Google Scholar, 19 IEEE Xplore, 20 Springer Link, 21 ScienceDirect, 22 Scopus, 23 and the ACM Digital Library. 24 We also screened related high-profile conferences such as WWW, SIGKDD, VLDB and ICDE to find recent work.

Search criteria

We focused our research on a period of ten years, but made sure that about two-thirds of the research papers we considered were published in or after 2019. Additionally, we defined a set of keywords to search the above-mentioned scientific databases, since we concentrated on reviewing the current state of the art in addition to the challenges and future directions. The set of keywords includes the following terms: fake news, disinformation, misinformation, information disorder, social media, detection techniques, detection methods, survey, literature review.

Study selection, exclusion and inclusion criteria

To retrieve relevant research articles, based on our sources of information and search criteria, a systematic keyword-based search was carried out by posing different search queries, as shown in Table  1 .

List of keywords for searching relevant articles
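As an illustration of how such keyword-based queries can be assembled (this is a hypothetical sketch, not the authors' actual tooling, and the specific topic/context pairings below are assumptions rather than the contents of Table 1):

```python
from itertools import product

# Topic terms and context terms drawn from the keyword set listed in the text.
# Which terms are paired with which is an illustrative choice, not Table 1.
topics = ["fake news", "disinformation", "misinformation", "information disorder"]
contexts = ["social media", "detection techniques", "literature review"]

def build_queries(topics, contexts):
    """Pair every topic term with every context term into quoted AND-queries."""
    return [f'"{t}" AND "{c}"' for t, c in product(topics, contexts)]

queries = build_queries(topics, contexts)
print(len(queries))   # 4 topics x 3 contexts = 12 queries
print(queries[0])     # "fake news" AND "social media"
```

Such generated strings can then be submitted manually or via a database's search API, which keeps the search reproducible across the different digital libraries.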

We discovered a primary list of articles. On the obtained initial list of studies, we applied a set of inclusion/exclusion criteria presented in Table  2 to select the appropriate research papers. The inclusion and exclusion principles are applied to determine whether a study should be included or not.

Inclusion and exclusion criteria

After reading the abstracts, we excluded articles that did not meet our criteria. We chose the most important research to help us understand the field. We reviewed the remaining articles in full and found only 61 research papers that discuss the definition of the term fake news and its related concepts (see Table 4). We used the remaining papers to understand the field, reveal the challenges, review the detection techniques, and discuss future directions.
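A mechanical check like the one described above (a ten-year window, with roughly two-thirds of the retained papers from 2019 onward) can be sketched as follows; the candidate records and window bounds below are invented for illustration, not the authors' actual selection pipeline:

```python
# Hypothetical candidate records as (title, publication year); titles are invented.
candidates = [
    ("Survey A", 2015), ("Survey B", 2019), ("Study C", 2020),
    ("Study D", 2021), ("Study E", 2022), ("Old study", 2011),
]

WINDOW_START = 2013  # start of an illustrative ten-year window ending in 2022

def apply_inclusion(records, start=WINDOW_START):
    """Keep only records inside the review's publication window."""
    return [r for r in records if r[1] >= start]

included = apply_inclusion(candidates)
recent_share = sum(1 for _, year in included if year >= 2019) / len(included)
print(len(included), round(recent_share, 2))   # 5 0.8
```

In a real pipeline the same filter would be combined with the other inclusion/exclusion criteria (language, topic relevance, venue type) before full-text review.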

Classification of fake news definitions based on the used term and features

A brief introduction of online deception

The Cambridge Online Dictionary defines deception as “the act of hiding the truth, especially to get an advantage.” Deception relies on people’s trust, doubt and strong emotions that may prevent them from thinking and acting clearly (Aïmeur et al. 2018). We have also defined it in previous work (Aïmeur et al. 2018) as the process that undermines the ability to consciously make decisions and take convenient actions, following personal values and boundaries. In other words, deception gets people to do things they would not otherwise do. In the context of online deception, several factors need to be considered: the deceiver, the purpose or aim of the deception, the social media service, the deception technique and the potential target (Aïmeur et al. 2018; Hage et al. 2021).

Researchers are working on developing new ways to protect users and prevent online deception (Aïmeur et al. 2018). Due to the sophistication of attacks, this is a complex task, as malicious attackers are using ever more complex tools and strategies to deceive users. Furthermore, the way information is organized and exchanged in social media may expose OSN users to many risks (Aïmeur et al. 2013).

In fact, this field is one of the recent research areas that require the collaborative effort of multidisciplinary practices such as psychology, sociology, journalism and computer science, as well as cyber-security and digital marketing (which are not yet well explored in the field of dis/mis/malinformation but are relevant for future research). Moreover, Ismailov et al. (2020) analyzed the main causes that could be responsible for the efficiency gap between laboratory results and real-world implementations.

Reviewing the state of the art of online deception is beyond the scope of this paper. However, we think it is crucial to note that fake news, misinformation and disinformation are indeed parts of the larger landscape of online deception (Hage et al. 2021).

Fake news, the modern-day problem

Fake news has existed for a very long time, since long before its wide circulation was facilitated by the invention of the printing press. 25 For instance, Socrates was condemned to death more than twenty-five hundred years ago on the fake news that he was guilty of impiety against the pantheon of Athens and of corrupting the youth. 26 A Google Trends analysis of the term “fake news” reveals an explosion in popularity around the time of the 2016 US presidential election. 27 Fake news detection is a problem that has recently been addressed by numerous organizations, including the European Union 28 and NATO. 29

In this section, we first overview the fake news definitions as they were provided in the literature. We identify the terms and features used in the definitions, and we classify the latter based on them. Then, we provide a fake news typology based on distinct categorizations that we propose, and we define and compare the most cited forms of one specific fake news category (i.e., the intent-based fake news category).

Definitions of fake news

“Fake news” is defined in the Collins English Dictionary as false and often sensational information disseminated under the guise of news reporting, 30 yet the term has evolved over time and has become synonymous with the spread of false information (Cooke 2017 ).

The first definition of the term fake news was provided by Allcott and Gentzkow (2017) as news articles that are intentionally and verifiably false and could mislead readers. Other definitions followed in the literature, and they all agree that fake news is not authentic (i.e., is non-factual). However, they disagree on whether related concepts such as satire, rumors, conspiracy theories, misinformation and hoaxes should be included in or excluded from the definition. More recently, Nakov (2020) reported that the term fake news has started to mean different things to different people, and for some politicians it even means “news that I do not like.”

Hence, there is still no agreed definition of the term “fake news.” Moreover, we can find many terms and concepts in the literature that refer to fake news (Van der Linden et al. 2020 ; Molina et al. 2021 ) (Abu Arqoub et al. 2022 ; Allen et al. 2020 ; Allcott and Gentzkow 2017 ; Shu et al. 2017 ; Sharma et al. 2019 ; Zhou and Zafarani 2020 ; Zhang and Ghorbani 2020 ; Conroy et al. 2015 ; Celliers and Hattingh 2020 ; Nakov 2020 ; Shu et al. 2020c ; Jin et al. 2016 ; Rubin et al. 2016 ; Balmas 2014 ; Brewer et al. 2013 ; Egelhofer and Lecheler 2019 ; Mustafaraj and Metaxas 2017 ; Klein and Wueller 2017 ; Potthast et al. 2017 ; Lazer et al. 2018 ; Weiss et al. 2020 ; Tandoc Jr et al. 2021 ; Guadagno and Guttieri 2021 ), disinformation (Kapantai et al. 2021 ; Shu et al. 2020a , c ; Kumar et al. 2016 ; Bhattacharjee et al. 2020 ; Marsden et al. 2020 ; Jungherr and Schroeder 2021 ; Starbird et al. 2019 ; Ireton and Posetti 2018 ), misinformation (Wu et al. 2019 ; Shu et al. 2020c ; Shao et al. 2016 , 2018b ; Pennycook and Rand 2019 ; Micallef et al. 2020 ), malinformation (Dame Adjin-Tettey 2022 ) (Carmi et al. 2020 ; Shu et al. 2020c ), false information (Kumar and Shah 2018 ; Guo et al. 2020 ; Habib et al. 2019 ), information disorder (Shu et al. 2020c ; Wardle and Derakhshan 2017 ; Wardle 2018 ; Derakhshan and Wardle 2017 ), information warfare (Guadagno and Guttieri 2021 ) and information pollution (Meel and Vishwakarma 2020 ).

There is also a remarkable amount of disagreement over the classification of the term fake news in the research literature, as well as in policy (de Cock Buning 2018 ; ERGA 2018 , 2021 ). Some consider fake news as a type of misinformation (Allen et al. 2020 ; Singh et al. 2021 ; Ha et al. 2021 ; Pennycook and Rand 2019 ; Shao et al. 2018b ; Di Domenico et al. 2021 ; Sharma et al. 2019 ; Celliers and Hattingh 2020 ; Klein and Wueller 2017 ; Potthast et al. 2017 ; Islam et al. 2020 ), others consider it as a type of disinformation (de Cock Buning 2018 ) (Bringula et al. 2022 ; Baptista and Gradim 2022 ; Tsang 2020 ; Tandoc Jr et al. 2021 ; Bastick 2021 ; Khan et al. 2019 ; Shu et al. 2017 ; Nakov 2020 ; Shu et al. 2020c ; Egelhofer and Lecheler 2019 ), while others associate the term with both disinformation and misinformation (Wu et al. 2022 ; Dame Adjin-Tettey 2022 ; Hameleers et al. 2022 ; Carmi et al. 2020 ; Allcott and Gentzkow 2017 ; Zhang and Ghorbani 2020 ; Potthast et al. 2017 ; Weiss et al. 2020 ; Tandoc Jr et al. 2021 ; Guadagno and Guttieri 2021 ). On the other hand, some prefer to differentiate fake news from both terms (ERGA 2018 ; Molina et al. 2021 ; ERGA 2021 ) (Zhou and Zafarani 2020 ; Jin et al. 2016 ; Rubin et al. 2016 ; Balmas 2014 ; Brewer et al. 2013 ).

The existing terms can be separated into two groups. The first group represents the general terms, which are information disorder , false information and fake news , each of which includes a subset of terms from the second group. The second group represents the elementary terms, which are misinformation , disinformation and malinformation . The literature agrees on the definitions of the latter group, but there is still no agreed-upon definition of the first group. In Fig.  2 , we model the relationship between the most used terms in the literature.

Fig. 2: Modeling of the relationship between terms related to fake news

The terms most used in the literature to refer to, categorize and classify fake news can be summarized and defined as shown in Table 3, in which we capture the similarities and show the differences between the terms based on two common key features: the intent and the authenticity of the news content. The intent feature refers to the intention behind the term that is used (i.e., whether or not the purpose is to mislead or cause harm), whereas the authenticity feature refers to its factual aspect (i.e., whether the content is verifiably false or not; content that is not false we label as genuine). Some of these terms are explicitly used to refer to fake news (i.e., disinformation, misinformation and false information), while others are not (i.e., malinformation). In the comparison table, an empty dash (–) cell denotes that the classification does not apply.

Table 3: A comparison between the terms used, based on intent and authenticity
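The two key features described above (intent and authenticity) can be sketched as a small lookup structure. The encoding below is an illustrative reading of the consensus definitions of the elementary terms (misinformation, disinformation, malinformation); the function and field names are our own, not from the surveyed literature.

```python
# Illustrative encoding of the elementary terms by the two key features
# discussed above: intent (is the purpose to mislead or cause harm?) and
# authenticity (is the content verifiably false, or genuine?).
ELEMENTARY_TERMS = {
    "misinformation": {"intent_to_harm": False, "verifiably_false": True},
    "disinformation": {"intent_to_harm": True,  "verifiably_false": True},
    "malinformation": {"intent_to_harm": True,  "verifiably_false": False},  # genuine content
}

def classify(intent_to_harm: bool, verifiably_false: bool) -> str:
    """Return the elementary term matching the two features, or 'unclassified'."""
    for term, feats in ELEMENTARY_TERMS.items():
        if (feats["intent_to_harm"], feats["verifiably_false"]) == (intent_to_harm, verifiably_false):
            return term
    return "unclassified"  # e.g., genuine content shared with no harmful intent
```

For example, verifiably false content shared with intent to harm maps to disinformation, while the same content shared without harmful intent maps to misinformation.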

In Fig. 3, we identify the different features used in the literature to define fake news (i.e., intent, authenticity and knowledge). Some definitions are based on two key features, authenticity and intent (i.e., news articles that are intentionally and verifiably false and could mislead readers), while other definitions are based on either authenticity or intent alone. Other researchers categorize false information on the web and social media based on its intent and knowledge (i.e., when there is a single ground truth). In Table 4, we classify the existing fake news definitions based on the term used and the features used. In the classification, a reference in a cell points to the research study in which that fake news definition was provided, while an empty dash (–) cell denotes that the classification does not apply.

Fig. 3: The features used for fake news definition

Fake news typology

Various categorizations of fake news have been provided in the literature. As shown in Fig. 4, we can distinguish two major categories of fake news based on the studied perspective (i.e., intention or content). Note that our proposed typology is not about detection methods, and its categories are not mutually exclusive: a given piece of fake news can be described from both perspectives at the same time. For instance, satire (an intent-based type) can contain text and/or multimedia content types of data such as a headline, body, image or video (content-based types).

Fig. 4: Fake news typology based on the studied perspective (intent or content)

Most researchers classify fake news based on intent (Collins et al. 2020; Bondielli and Marcelloni 2019; Zannettou et al. 2019; Kumar et al. 2016; Wardle 2017; Shu et al. 2017; Kumar and Shah 2018) (see Sect. 4.2.2). Other researchers (Parikh and Atrey 2018; Fraga-Lamas and Fernández-Caramés 2020; Hasan and Salah 2019; Masciari et al. 2020; Bakdash et al. 2018; Elhadad et al. 2019; Yang et al. 2019b) instead categorize fake news by its content, distinguishing the different formats and types of data in the news (e.g., text and/or multimedia).

Recently, another classification was proposed by Zhang and Ghorbani (2020), based on the combination of content and intent. They distinguish between the physical and non-physical content of fake news: physical content consists of the carriers and format of the news, while non-physical content consists of the opinions, emotions, attitudes and sentiments that the news creators want to express.

Content-based fake news category

According to researchers in this category (Parikh and Atrey 2018; Fraga-Lamas and Fernández-Caramés 2020; Hasan and Salah 2019; Masciari et al. 2020; Bakdash et al. 2018; Elhadad et al. 2019; Yang et al. 2019b), forms of fake news may include false text (e.g., hyperlinks or embedded content) and multimedia such as false videos (Demuyakor and Opata 2022), images (Masciari et al. 2020; Shen et al. 2019) and audio (Demuyakor and Opata 2022). There is also multimodal content (Shu et al. 2020a): fake news articles and posts composed of multiple types of data combined together, for example, a fabricated image along with a related text (Shu et al. 2020a). Examples of such fake news forms include deepfake videos (Yang et al. 2019b) and GAN-generated fake images (Zhang et al. 2019b), i.e., artificial intelligence-based, machine-generated fake content that is hard for unsophisticated social network users to identify.

The effects of these content forms on credibility assessment and sharing intentions vary, which in turn influences the spread of fake news on OSNs. For instance, people with little knowledge about an issue are easier to convince that misleading or fake news is real than those who are strongly concerned about it, especially when the news is shared via video rather than text or audio (Demuyakor and Opata 2022).

Intent-based fake news category

According to researchers in this category, the most often mentioned and discussed forms of fake news include, but are not restricted to, clickbait, hoaxes, rumors, satire, propaganda, framing and conspiracy theories. In the following subsections, we explain these types of fake news as they are defined in the literature and compare them, as depicted in Table 5, based on what we consider the most common criteria mentioned by researchers.

Table 5: A comparison between the different types of intent-based fake news

Clickbait refers to misleading headlines and thumbnails of content on the web (Zannettou et al. 2019) that tend to accompany fake stories with catchy headlines aimed at enticing the reader to click on a link (Collins et al. 2020). The goal behind using clickbait is typically to increase traffic to a website (Zannettou et al. 2019). It is considered the least severe type of false information, because a user who reads or views the whole content can usually tell whether the headline and/or thumbnail was misleading (Zannettou et al. 2019).
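Since clickbait lives mostly in the headline, a crude detector can be sketched with a few surface cues. This is a toy heuristic of our own, not a method from the surveyed literature; the cue phrases are illustrative, and real detectors learn such signals from labeled data rather than hard-coding them.

```python
import re

# Toy clickbait heuristic: flag headlines containing common
# curiosity-gap cue phrases. The cue list is illustrative only.
CLICKBAIT_CUES = [
    r"\byou won'?t believe\b",
    r"\bwhat happened next\b",
    r"\bthis one (weird )?trick\b",
    r"\bshocking\b",
]

def looks_like_clickbait(headline: str) -> bool:
    """Return True if the headline matches any cue phrase."""
    h = headline.lower()
    return any(re.search(pattern, h) for pattern in CLICKBAIT_CUES)
```

A learned model would replace the fixed cue list with features weighted on training data, but the input (the headline alone, not the article body) is the same.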

A hoax is a false (Zubiaga et al. 2018) or inaccurate (Zannettou et al. 2019), intentionally fabricated (Collins et al. 2020) news story that masquerades as the truth (Zubiaga et al. 2018) and is presented as factual (Zannettou et al. 2019) to deceive the public or audiences (Collins et al. 2020). This category is also known as half-truth or factoid stories (Zannettou et al. 2019). Popular examples of hoaxes are stories that falsely report the death of celebrities (Zannettou et al. 2019) and public figures (Collins et al. 2020). Recently, hoaxes about COVID-19 have been circulating through social media.

The term rumor refers to ambiguous or never-confirmed claims (Zannettou et al. 2019) that are disseminated with a lack of evidence to support them (Sharma et al. 2019). This kind of information propagates widely on OSNs (Zannettou et al. 2019). However, rumors originate from unverified sources and are not necessarily false: they may turn out to be true, false, or remain unresolved (Zubiaga et al. 2018).

Satire refers to stories that contain a lot of irony and humor (Zannettou et al. 2019). It presents stories as news that might be factually incorrect, but the intent is not to deceive; rather, it is to call out, ridicule, or expose behavior that is shameful, corrupt, or otherwise “bad” (Golbeck et al. 2018). This is done with a fabricated story or by exaggerating the truth reported in mainstream media in the form of comedy (Collins et al. 2020). Because the intent behind satire is somewhat legitimate, many authors (such as Wardle (2017)) still include satire as a type of fake news: there is no intention to cause harm, but it retains the potential to mislead or fool people.

Golbeck et al. (2018) also mention a spectrum from fake to satirical news that they found to be exploited by many fake news sites. These sites use disclaimers at the bottom of their webpages to suggest they are “satirical” even when there is nothing satirical about their articles, to protect themselves from accusations of being fake. What distinguishes the satirical form of fake news is that the authors or hosts present themselves as comedians or entertainers rather than journalists informing the public (Collins et al. 2020). However, many audiences believe the information passed on in this satirical form, because the comedian usually takes news from mainstream media and frames it to suit their program (Collins et al. 2020).

Propaganda refers to news stories created by political entities to mislead people. It is a special instance of fabricated stories that aims to harm the interests of a particular party and typically has a political context (Zannettou et al. 2019). Propaganda was widely used during both World Wars (Collins et al. 2020) and during the Cold War (Zannettou et al. 2019). It is a consequential type of false information, as it can change the course of human history (e.g., by changing the outcome of an election) (Zannettou et al. 2019). States are the main actors of propaganda; recently, however, it has also been used by politicians and media organizations to support a certain position or view (Collins et al. 2020). One tool for disseminating propaganda is online astroturfing: the covert manipulation of public opinion (Peng et al. 2017) that aims to make it seem that many people share the same opinion about something. Based on the domains it affects, online astroturfing can be divided into political astroturfing, corporate astroturfing and astroturfing in e-commerce or online services (Mahbub et al. 2019). Propaganda-type fake news can be debunked with manual fact-based detection models, such as expert-based fact-checkers (Collins et al. 2020).

Framing refers to employing some aspect of reality to make content more visible while the truth is concealed (Collins et al. 2020), in order to deceive and misguide readers. People understand certain concepts based on the way they are coined and framed. An example of framing was provided by Collins et al. (2020): suppose a leader X says “I will neutralize my opponent,” simply meaning he will beat his opponent in a given election. Such a statement could be framed as “leader X threatens to kill Y,” a total misrepresentation of the original meaning.

Conspiracy theories

Conspiracy theories refer to the belief that an event is the result of secret plots generated by powerful conspirators. Conspiracy belief refers to people's adoption of and belief in conspiracy theories, and it is associated with psychological, political and social factors (Douglas et al. 2019). Conspiracy theories are widespread in contemporary democracies (Sutton and Douglas 2020) and have major consequences. For instance, during the COVID-19 pandemic, conspiracy theories have been discussed from a public health perspective (Meese et al. 2020; Allington et al. 2020; Freeman et al. 2020).

Comparison between most popular intent-based types of fake news

Following a review of the most popular intent-based types of fake news, we compare them in Table 5 based on the most common criteria mentioned by researchers in their definitions, as listed below.

  • the intent behind the news, which refers to whether a given news type was created mainly to deceive people intentionally or not (e.g., for humor, irony or entertainment);
  • the way the news propagates through OSNs, which determines the nature of the propagation of each type of fake news: either fast or slow;
  • the severity of the impact of the news on OSN users, which refers to how strongly the public is negatively impacted by the given type of fake news;
  • and the goal behind disseminating the news, which can be to gain popularity for a particular entity (e.g., a political party), to make a profit (e.g., a lucrative business), or other reasons: humor and irony in the case of satire; spreading panic or anger and manipulating the public in the case of hoaxes; made-up stories about a particular person or entity in the case of rumors; and misguiding readers in the case of framing.

However, the comparison provided in Table 5 is deduced from the studied research papers; it reflects our point of view and is not based on empirical data.

We suspect that the most dangerous types of fake news are the ones with high intention to deceive the public, fast propagation through social media, high negative impact on OSN users, and complicated hidden goals and agendas. However, while the other types of fake news are less dangerous, they should not be ignored.

Moreover, it is important to highlight that the types of fake news mentioned above can overlap, so a given piece of false information may fall within multiple categories (Zannettou et al. 2019). Here are two examples from Zannettou et al. (2019): (1) a rumor may also use clickbait techniques to increase the audience that will read the story; and (2) propaganda stories can be seen as a special instance of framing stories.

Challenges related to fake news detection and mitigation

To alleviate fake news and its threats, it is crucial to first identify and understand the factors involved that continue to challenge researchers. Thus, the main question is to explore and investigate the factors that make it easier to fall for manipulated information. Despite the tremendous progress made in alleviating some of the challenges in fake news detection (Sharma et al. 2019 ; Zhou and Zafarani 2020 ; Zhang and Ghorbani 2020 ; Shu et al. 2020a ), much more work needs to be accomplished to address the problem effectively.

In this section, we discuss several open issues that make fake news detection in social media a challenging problem. These issues can be summarized as follows: content-based issues (i.e., deceptive content that resembles the truth very closely), contextual issues (i.e., lack of user awareness, social bots as spreaders of fake content, and the dynamic nature of OSNs, which leads to fast propagation), as well as the issue of existing datasets (i.e., there is still no one-size-fits-all benchmark dataset for fake news detection). These aspects have been shown (Shu et al. 2017) to have a great impact on the accuracy of fake news detection approaches.

Content-based issue, deceptive content

Automatic fake news detection remains a huge challenge, primarily because the content is designed to closely resemble the truth. Besides, most deceivers choose their words carefully and use language strategically to avoid being caught. Therefore, it is often hard for AI to determine veracity without relying on additional information from third parties such as fact-checkers.

Abdullah-All-Tanvir et al. (2020) report that fake news tends to have more complicated stories, hardly ever makes references, and is more likely to contain a greater number of words that express negative emotions. This makes such content so complicated that manually assessing its credibility becomes infeasible for a human, and detecting fake news on social media is therefore quite challenging. Moreover, fake news appears in multiple types and forms, which makes it hard to define a single global solution able to capture and deal with all disseminated content. Consequently, detecting false information is not a straightforward task, due to its various types and forms (Zannettou et al. 2019).

Contextual issues

Contextual issues are challenges that are not related to the content of the news itself but are instead inferred from the context of the online news post: humans as the weakest factor due to a lack of user awareness, social bots as spreaders, and the dynamic nature of online social platforms with the resulting fast propagation of fake news.

Humans are the weakest factor due to the lack of awareness

Recent statistics 31 show that the percentage of unintentional fake news spreaders (people who share fake news without intending to mislead) on social media is five times higher than that of intentional spreaders. Moreover, another recent statistic 32 shows that the percentage of people who were confident in their ability to discern fact from fiction is ten times higher than the percentage of those who were not confident in the truthfulness of what they were sharing. From this, we can deduce a lack of human awareness about the rise of fake news.

Public susceptibility and lack of user awareness (Sharma et al. 2019 ) have always been the most challenging problem when dealing with fake news and misinformation. This is a complex issue because many people believe almost everything on the Internet and the ones who are new to digital technology or have less expertise may be easily fooled (Edgerly et al. 2020 ).

Moreover, it has been widely shown (Metzger et al. 2020; Edgerly et al. 2020) that people are often motivated to accept information that aligns with their preexisting viewpoints and beliefs, and to reject information that does not fit. Accordingly, Shu et al. (2017) illustrate an interesting correlation between fake news spread and psychological and cognitive theories. They further suggest that humans are more likely to believe information that confirms their existing views and ideological beliefs, and deduce that humans are naturally not very good at differentiating real information from fake information.

Recent research by Giachanou et al. ( 2020 ) studies the role of personality and linguistic patterns in discriminating between fake news spreaders and fact-checkers. They classify a user as a potential fact-checker or a potential fake news spreader based on features that represent users’ personality traits and linguistic patterns used in their tweets. They show that leveraging personality traits and linguistic patterns can improve the performance in differentiating between checkers and spreaders.

Furthermore, several researchers studied the prevalence of fake news on social networks during (Allcott and Gentzkow 2017 ; Grinberg et al. 2019 ; Guess et al. 2019 ; Baptista and Gradim 2020 ) and after (Garrett and Bond 2021 ) the 2016 US presidential election and found that individuals most likely to engage with fake news sources were generally conservative-leaning, older, and highly engaged with political news.

Metzger et al. ( 2020 ) examine how individuals evaluate the credibility of biased news sources and stories. They investigate the role of both cognitive dissonance and credibility perceptions in selective exposure to attitude-consistent news information. They found that online news consumers tend to perceive attitude-consistent news stories as more accurate and more credible than attitude-inconsistent stories.

Similarly, Edgerly et al. ( 2020 ) explore the impact of news headlines on the audience’s intent to verify whether given news is true or false. They concluded that participants exhibit higher intent to verify the news only when they believe the headline to be true, which is predicted by perceived congruence with preexisting ideological tendencies.

Luo et al. ( 2022 ) evaluate the effects of endorsement cues in social media on message credibility and detection accuracy. Results showed that headlines associated with a high number of likes increased credibility, thereby enhancing detection accuracy for real news but undermining accuracy for fake news. Consequently, they highlight the urgency of empowering individuals to assess both news veracity and endorsement cues appropriately on social media.

Moreover, misinformed people are a greater problem than uninformed people (Kuklinski et al. 2000), because the former hold inaccurate opinions (concerning, e.g., politics, climate change, or medicine) that are harder to correct. Indeed, people find it difficult to update their misinformation-based beliefs even after these have been proved false (Flynn et al. 2017). And even when a person has accepted the corrected information, the original belief may still affect their opinion (Nyhan and Reifler 2015).

Falling for disinformation may also be explained by a lack of critical thinking and of the need for evidence supporting information (Vilmer et al. 2018; Badawy et al. 2019). However, it is also possible that people choose misinformation because they engage in directionally motivated reasoning (Badawy et al. 2019; Flynn et al. 2017). Online users are typically vulnerable and tend to perceive social networking media as reliable, as reported by Abdullah-All-Tanvir et al. (2019), who propose to automate fake news detection.

It is worth noting that, in addition to bots being responsible for the majority of misrepresentations, specific individuals also contribute a large share of this problem (Abdullah-All-Tanvir et al. 2019). Furthermore, Vosoughi et al. (2018) found that, contrary to conventional wisdom, robots accelerated the spread of real and fake news at the same rate, implying that fake news spreads more than the truth because humans, not robots, are more likely to spread it.

In this case, verified users and those with numerous followers were not necessarily the ones responsible for spreading the misinformation in corrupted posts (Abdullah-All-Tanvir et al. 2019).

Viral fake news can cause much havoc to our society. Therefore, to mitigate the negative impact of fake news, it is important to analyze the factors that lead people to fall for misinformation and to further understand why people spread fake news (Cheng et al. 2020 ). Measuring the accuracy, credibility, veracity and validity of news contents can also be a key countermeasure to consider.

Social bots spreaders

Several authors (Shu et al. 2018b, 2017; Shi et al. 2019; Bessi and Ferrara 2016; Shao et al. 2018a) have also shown that fake news is likely to be created and spread by non-human accounts with similar attributes and structure in the network, such as social bots (Ferrara et al. 2016). Bots (short for software robots) have existed since the early days of computers. A social bot is a computer algorithm that automatically produces content and interacts with humans on social media, trying to emulate and possibly alter their behavior (Ferrara et al. 2016). Although they are designed to provide a useful service, bots can be harmful, for example when they contribute to the spread of unverified information or rumors (Ferrara et al. 2016). It is important to note, however, that bots are simply tools created and maintained by humans for specific hidden agendas.

Social bots tend to connect with legitimate users instead of other bots. They try to act like a human with fewer words and fewer followers on social media. This contributes to the forwarding of fake news (Jiang et al. 2019 ). Moreover, there is a difference between bot-generated and human-written clickbait (Le et al. 2019 ).

Many researchers have addressed ways of identifying and analyzing possible sources of fake news spread in social media. Recent research by Shu et al. (2020a) describes two strategies social bots use to spread low-credibility content. First, they amplify interactions with content as soon as it is created, to make it look legitimate and to facilitate its spread across social networks. Second, they try to increase public exposure to the created content, and thus boost its perceived credibility, by targeting influential users who are more likely to believe disinformation, in the hope of getting them to “repost” the fabricated content. They further discuss the social bot detection taxonomy proposed by Ferrara et al. (2016), which divides bot detection methods into three classes: (1) graph-based, (2) crowdsourcing and (3) feature-based social bot detection methods.

Similarly, Shao et al. ( 2018a ) examine social bots and how they promote the spread of misinformation through millions of Twitter posts during and following the 2016 US presidential campaign. They found that social bots played a disproportionate role in spreading articles from low-credibility sources by amplifying such content in the early spreading moments and targeting users with many followers through replies and mentions to expose them to this content and induce them to share it.

Ismailov et al. ( 2020 ) assert that the techniques used to detect bots depend on the social platform and the objective. They note that a malicious bot designed to make friends with as many accounts as possible will require a different detection approach than a bot designed to repeatedly post links to malicious websites. Therefore, they identify two models for detecting malicious accounts, each using a different set of features. Social context models achieve detection by examining features related to an account’s social presence including features such as relationships to other accounts, similarities to other users’ behaviors, and a variety of graph-based features. User behavior models primarily focus on features related to an individual user’s behavior, such as frequency of activities (e.g., number of tweets or posts per time interval), patterns of activity and clickstream sequences.
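The frequency-of-activity signal that user behavior models rely on can be sketched with a minimal example: bucket an account's posts by hour and flag suspicious bursts. The function names and the burst threshold below are illustrative assumptions of ours, not taken from any cited detection system.

```python
from collections import Counter
from datetime import datetime

def posts_per_hour(timestamps: list[str]) -> dict[str, int]:
    """Count posts per hour bucket from ISO-8601 timestamps.

    A pile-up of posts in a single bucket is the kind of
    frequency-of-activity feature a user behavior model might use.
    """
    buckets = Counter(
        datetime.fromisoformat(ts).strftime("%Y-%m-%d %H") for ts in timestamps
    )
    return dict(buckets)

def is_bursty(timestamps: list[str], threshold: int = 50) -> bool:
    """Flag accounts that exceed `threshold` posts within any single hour."""
    counts = posts_per_hour(timestamps)
    return bool(counts) and max(counts.values()) >= threshold
```

A real system would combine many such behavioral features (patterns of activity, clickstream sequences) with the social-context features described above, rather than thresholding a single one.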

Therefore, it is crucial to consider bot detection techniques that distinguish bots from normal users, so that user profile features can be better leveraged to detect fake news.

However, there is also another “bot-like” strategy that aims to massively promote disinformation and fake content on social platforms: bot farms, also called troll farms. These are not social bots but groups of organized individuals hired to engage in trolling or bot-like promotion of narratives in a coordinated fashion (Wardle 2018) in order to massively spread fake news or other harmful content. A prominent troll farm example is the Russia-based Internet Research Agency (IRA), which disseminated inflammatory content online to influence the outcome of the 2016 US presidential election. 33 As a result, Twitter suspended accounts connected to the IRA and deleted 200,000 tweets from Russian trolls (Jamieson 2020). Another example in this category is review bombing (Moro and Birt 2022), in which coordinated groups of people massively perform the same negative action online (e.g., dislikes, negative reviews/comments) on a video, game, post, or product in order to reduce its aggregate review score. Review bombers can be both humans and bots, coordinated to cause harm and mislead people by falsifying facts.

Dynamic nature of online social platforms and fast propagation of fake news

Sharma et al. (2019) affirm that the fast proliferation of fake news through social networks makes it hard to assess information credibility on social media. Similarly, Qian et al. (2018) assert that fake news and fabricated content propagate exponentially in the early stage of their creation and can cause a significant loss in a short amount of time (Friggeri et al. 2014), including manipulating the outcome of political events (Liu and Wu 2018; Bessi and Ferrara 2016).
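The exponential early-stage propagation described above can be illustrated with a toy branching-process sketch. The branching factor and step count are arbitrary assumptions; real cascades are stochastic and this only shows the exponential shape.

```python
def cascade_sizes(branching_factor: float, steps: int) -> list[int]:
    """Deterministic sketch of early-stage spread: each 'generation' of
    sharers reaches branching_factor times as many new users as the last.
    """
    sizes = [1]  # generation 0: the original post
    for _ in range(steps):
        sizes.append(round(sizes[-1] * branching_factor))
    return sizes
```

With a branching factor of 2, the per-generation reach doubles at every step, which is why even a short head start before debunking can translate into a large audience.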

Moreover, while analyzing how the sources and promoters of fake news operate on the web across multiple online platforms, Zannettou et al. (2019) discovered that false information is more likely to spread across platforms (18% appearing on multiple platforms) than real information (11%).

Furthermore, Shu et al. (2020c) recently attempted to understand the propagation of disinformation and fake news in social media and found that such content is produced and disseminated faster and more easily through social media because of the low barriers to doing so. Similarly, Shu et al. (2020b) studied hierarchical propagation networks for fake news detection, performing a comparative analysis between fake and real news from structural, temporal and linguistic perspectives. They demonstrated the potential of using these features for fake news detection and showed their effectiveness.

Lastly, Abdullah-All-Tanvir et al. (2020) note that it is almost impossible to manually detect the sources and authenticity of fake news effectively and efficiently, due to how fast it circulates. Therefore, the dynamic nature of the various online social platforms, which results in the continued rapid and exponential propagation of fake content, remains a major challenge that requires further investigation when defining innovative solutions for fake news detection.

Datasets issue

Existing approaches lack an inclusive dataset with derived multidimensional information covering the characteristics of fake news, which limits the achievable accuracy of machine learning classification models (Nyow and Chua 2019). Such datasets are primarily dedicated to validating the machine learning model and are the ultimate frame of reference for training the model and analyzing its performance. Therefore, if researchers evaluate their model on an unrepresentative dataset, the validity and efficiency of the model become questionable when the fake news detection approach is applied in a real-world scenario.

Moreover, several researchers (Shu et al. 2020d ; Wang et al. 2020 ; Pathak and Srihari 2019 ; Przybyla 2020 ) believe that fake news is diverse and dynamic in terms of content, topics, publishing methods and media platforms, and sophisticated linguistic styles geared to emulate true news. Consequently, training machine learning models on such sophisticated content requires large-scale annotated fake news data that are difficult to obtain (Shu et al. 2020d ).

Therefore, improving datasets is also an important direction of work: better data quality leads to better results when defining detection solutions. Adversarial learning techniques (e.g., GAN, SeqGAN) can be used to provide machine-generated data for training deeper models and building systems that are robust at separating fake examples from real ones. This approach can help counter the lack of datasets and the scarcity of data available to train models.
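To make the augmentation idea concrete without the machinery of a GAN or SeqGAN, the sketch below uses a much simpler stand-in: producing noisy copies of labeled examples by randomly dropping words. This is not the adversarial technique named above, only an illustration of growing a scarce training set; all names and parameters are our own.

```python
import random

def augment(texts: list[str], copies: int = 2, drop_prob: float = 0.15,
            seed: int = 0) -> list[str]:
    """Return the original labeled texts plus `copies` noisy variants of
    each, made by randomly dropping words. A GAN-style generator would
    synthesize new examples instead of perturbing existing ones.
    """
    rng = random.Random(seed)  # fixed seed for reproducible augmentation
    out = list(texts)
    for text in texts:
        words = text.split()
        for _ in range(copies):
            kept = [w for w in words if rng.random() > drop_prob] or words[:1]
            out.append(" ".join(kept))
    return out
```

Each augmented copy keeps the label of its source example, so a dataset of N labeled items becomes N × (1 + copies) items at training time.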

Fake news detection literature review

Fake news detection in social networks is still in the early stages of development, and challenging issues remain that require further investigation. It has become an emerging research area attracting huge attention.

There are various research studies on fake news detection in online social networks, but only a few focus on the automatic detection of fake news using artificial intelligence techniques. In this section, we review the existing approaches used in automatic fake news detection, as well as the techniques that have been adopted. We then provide a critical discussion built on a primary classification scheme based on a specific set of criteria.

Categories of fake news detection

In this section, we give an overview of most of the existing automatic fake news detection solutions adopted in the literature. A recent classification by Sharma et al. ( 2019 ) distinguishes three categories of fake news identification methods (i.e., content-based, feedback-based and intervention-based methods), each further divided by the types of existing methods. However, a review of the literature on fake news detection in online social networks shows that the existing studies can be classified into broader categories based on two major aspects that most authors inspect and make use of to define an adequate solution. These aspects can be considered major sources of extracted information for fake news detection and can be summarized as follows: the content-based aspect (i.e., related to the content of the news post) and the contextual aspect (i.e., related to the context of the news post).

Consequently, the studies we reviewed can be classified into three categories based on the two aspects mentioned above (the third category being hybrid). As depicted in Fig.  5 , fake news detection solutions can be categorized as news content-based approaches; social context-based approaches, which can be divided into network-based and user-based approaches; and hybrid approaches, which combine both content-based and contextual information to define the solution.

Fig. 5: Classification of fake news detection approaches

News Content-based Category

News content-based approaches are fake news detection approaches that use content information (i.e., information extracted from the content of the news post) and that focus on studying and exploiting the news content in their proposed solutions. Content refers to the body of the news, including source, headline, text and image-video, which can reflect subtle differences.

Researchers of this category rely on content-based detection cues (i.e., text and multimedia-based cues), which are features extracted from the content of the news post. Text-based cues are features extracted from the text of the news, whereas multimedia-based cues are features extracted from the images and videos attached to the news. Figure  6 summarizes the most widely used news content representation (i.e., text and multimedia/images) and detection techniques (i.e., machine learning (ML), deep Learning (DL), natural language processing (NLP), fact-checking, crowdsourcing (CDS) and blockchain (BKC)) in news content-based category of fake news detection approaches. Most of the reviewed research works based on news content for fake news detection rely on the text-based cues (Kapusta et al. 2019 ; Kaur et al. 2020 ; Vereshchaka et al. 2020 ; Ozbay and Alatas 2020 ; Wang 2017 ; Nyow and Chua 2019 ; Hosseinimotlagh and Papalexakis 2018 ; Abdullah-All-Tanvir et al. 2019 , 2020 ; Mahabub 2020 ; Bahad et al. 2019 ; Hiriyannaiah et al. 2020 ) extracted from the text of the news content including the body of the news and its headline. However, a few researchers such as Vishwakarma et al. ( 2019 ) and Amri et al. ( 2022 ) try to recognize text from the associated image.

Fig. 6: News content-based category: news content representation and detection techniques

Most researchers of this category rely on artificial intelligence (AI) techniques (such as ML, DL and NLP models) to improve performance in terms of prediction accuracy. Others use different techniques such as fact-checking, crowdsourcing and blockchain. Specifically, the AI- and ML-based approaches in this category extract features from the news content, which they later use for content analysis and training tasks. In this particular case, the extracted features are the different types of information considered relevant for the analysis. Feature extraction is considered one of the best techniques for reducing data size in automatic fake news detection; it aims to choose a subset of features from the original set to improve classification performance (Yazdi et al. 2020 ).
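A minimal illustration of text-based feature extraction with TF-IDF weighting, written against the standard library (a real pipeline would typically use a library such as scikit-learn; the toy corpus is invented):

```python
import math
from collections import Counter

def tfidf_vectors(docs):
    """Compute TF-IDF weights for a list of tokenized documents."""
    n = len(docs)
    # Document frequency: in how many documents each term occurs.
    df = Counter()
    for doc in docs:
        df.update(set(doc))
    vectors = []
    for doc in docs:
        tf = Counter(doc)
        # TF-IDF: term frequency scaled down for terms common across documents.
        vec = {t: (tf[t] / len(doc)) * math.log(n / df[t]) for t in tf}
        vectors.append(vec)
    return vectors

docs = [
    "shocking cure doctors hate".split(),
    "government confirms new policy".split(),
    "shocking secret doctors reveal".split(),
]
vecs = tfidf_vectors(docs)
```

Terms appearing in many documents are down-weighted, so distinctive words (here, "cure") score higher than shared ones (here, "shocking"), which is the usual rationale for TF-IDF as a text-based cue.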

Table  6 lists the distinct features and metadata, as well as the used datasets in the news content-based category of fake news detection approaches.

The features and datasets used in the news content-based approaches

a https://www.kaggle.com/anthonyc1/gathering-real-news-for-oct-dec-2016 , last access date: 26-12-2022

b https://mediabiasfactcheck.com/ , last access date: 26-12-2022

c https://github.com/KaiDMML/FakeNewsNet , last access date: 26-12-2022

d https://www.kaggle.com/anthonyc1/gathering-real-news-for-oct-dec-2016 , last access date: 26-12-2022

e https://www.cs.ucsb.edu/~william/data/liar_dataset.zip , last access date: 26-12-2022

f https://www.kaggle.com/mrisdal/fake-news , last access date: 26-12-2022

g https://github.com/BuzzFeedNews/2016-10-facebook-fact-check , last access date: 26-12-2022

h https://www.politifact.com/subjects/fake-news/ , last access date: 26-12-2022

i https://www.kaggle.com/rchitic17/real-or-fake , last access date: 26-12-2022

j https://www.kaggle.com/jruvika/fake-news-detection , last access date: 26-12-2022

k https://github.com/MKLab-ITI/image-verification-corpus , last access date: 26-12-2022

l https://drive.google.com/file/d/14VQ7EWPiFeGzxp3XC2DeEHi-BEisDINn/view , last access date: 26-12-2022

Social Context-based Category

Unlike news content-based solutions, social context-based approaches capture the skeptical social context of online news (Zhang and Ghorbani 2020 ) rather than focusing on the news content. This category contains fake news detection approaches that use contextual aspects (i.e., information related to the context of the news post). These aspects, derived from the social context, offer additional information to help detect fake news: they are the surrounding data outside of the fake news article itself, and they can be an essential part of automatic fake news detection. Some useful examples of contextual information include checking whether the news itself and the source that published it are credible, checking the date of the news and the supporting resources, and checking whether other online news platforms are reporting the same or similar stories (Zhang and Ghorbani 2020 ).

Social context-based aspects can be classified into two subcategories, user-based and network-based, and they can be used for context analysis and training tasks in the case of AI- and ML-based approaches. User-based aspects refer to information captured from OSN users such as user profile information (Shu et al. 2019b ; Wang et al. 2019c ; Hamdi et al. 2020 ; Nyow and Chua 2019 ; Jiang et al. 2019 ) and user behavior (Cardaioli et al. 2020 ) such as user engagement (Uppada et al. 2022 ; Jiang et al. 2019 ; Shu et al. 2018b ; Nyow and Chua 2019 ) and response (Zhang et al. 2019a ; Qian et al. 2018 ). Meanwhile, network-based aspects refer to information captured from the properties of the social network where the fake content is shared and disseminated such as news propagation path (Liu and Wu 2018 ; Wu and Liu 2018 ) (e.g., propagation times and temporal characteristics of propagation), diffusion patterns (Shu et al. 2019a ) (e.g., number of retweets, shares), as well as user relationships (Mishra 2020 ; Hamdi et al. 2020 ; Jiang et al. 2019 ) (e.g., friendship status among users).
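For illustration, the user- and network-based aspects above can be flattened into a single numeric feature vector. The field names below are hypothetical stand-ins for whatever a given OSN API actually exposes:

```python
def context_features(post):
    """Flatten hypothetical user- and network-based signals into one vector."""
    user = post["user"]
    net = post["network"]
    return [
        # User-based cues: profile information and behavior.
        1.0 if user["verified"] else 0.0,
        user["account_age_days"],
        user["followers"] / max(user["following"], 1),  # follower ratio
        # Network-based cues: diffusion pattern of the post.
        net["retweets"],
        net["shares"],
        net["propagation_depth"],
    ]

post = {
    "user": {"verified": False, "account_age_days": 12,
             "followers": 30, "following": 3000},
    "network": {"retweets": 950, "shares": 400, "propagation_depth": 7},
}
features = context_features(post)
```

Such a vector can then be fed to a classifier alongside content-based features.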

Figure  7 summarizes some of the most widely adopted social context representations, as well as the most used detection techniques (i.e., AI, ML, DL, fact-checking and blockchain), in the social context-based category of approaches.

Fig. 7: Social context-based category: social context representation and detection techniques

Table  7 lists the distinct features and metadata, the adopted detection cues, as well as the used datasets, in the context-based category of fake news detection approaches.

The features, detection cues and datasets used in the social context-based approaches

a https://www.dropbox.com/s/7ewzdrbelpmrnxu/rumdetect2017.zip , last access date: 26-12-2022

b https://snap.stanford.edu/data/ego-Twitter.html , last access date: 26-12-2022

Hybrid approaches

Most researchers focus on employing a specific method rather than a combination of both content- and context-based methods. This is because some of them (Wu and Rao 2020 ) believe that there are still challenging limitations in traditional fusion strategies due to existing feature correlations and semantic conflicts. For this reason, some researchers focus on extracting content-based information, while others capture social context-based information for their proposed approaches.

However, it has proven challenging to successfully automate fake news detection based on just a single type of feature (Ruchansky et al. 2017 ). Therefore, recent directions tend to do a mixture by using both news content-based and social context-based approaches for fake news detection.

Table  8 lists the distinct features and metadata, as well as the used datasets, in the hybrid category of fake news detection approaches.

The features and datasets used in the hybrid approaches

Fake news detection techniques

Another vision for classifying automatic fake news detection is to look at techniques used in the literature. Hence, we classify the detection methods based on the techniques into three groups:

  • Human-based techniques: This category mainly includes the use of crowdsourcing and fact-checking techniques, which rely on human knowledge to check and validate the veracity of news content.
  • Artificial Intelligence-based techniques: This category includes the most used AI approaches for fake news detection in the literature. Specifically, these are the approaches in which researchers use classical ML, deep learning techniques such as convolutional neural network (CNN), recurrent neural network (RNN), as well as natural language processing (NLP).
  • Blockchain-based techniques: This category includes solutions using blockchain technology to detect and mitigate fake news in social media by checking source reliability and establishing the traceability of the news content.

Human-based Techniques

One specific research direction for fake news detection consists of using human-based techniques such as crowdsourcing (Pennycook and Rand 2019 ; Micallef et al. 2020 ) and fact-checking (Vlachos and Riedel 2014 ; Chung and Kim 2021 ; Nyhan et al. 2020 ) techniques.

These approaches can be considered as low computational requirement techniques since both rely on human knowledge and expertise for fake news detection. However, fake news identification cannot be addressed solely through human force since it demands a lot of effort in terms of time and cost, and it is ineffective in terms of preventing the fast spread of fake content.

Crowdsourcing. Crowdsourcing approaches (Kim et al. 2018 ) are based on the “wisdom of the crowds” (Collins et al. 2020 ) for fake content detection. These approaches rely on the collective contributions and crowd signals (Tschiatschek et al. 2018 ) of a group of people for the aggregation of crowd intelligence to detect fake news (Tchakounté et al. 2020 ) and to reduce the spread of misinformation on social media (Pennycook and Rand 2019 ; Micallef et al. 2020 ).

Micallef et al. ( 2020 ) highlight the role of the crowd in countering misinformation. They suspect that concerned citizens (i.e., the crowd), who use platforms where disinformation appears, can play a crucial role in spreading fact-checking information and in combating the spread of misinformation.

Recently, Tchakounté et al. ( 2020 ) proposed a voting system as a new method for the binary aggregation of crowd opinions with the knowledge of a third-party expert. The aggregator is based on majority voting on the crowd side and weighted averaging on the third-party side.
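The idea of combining majority voting on the crowd side with a third-party expert can be sketched as follows; the 50/50 weighting and the 0.5 decision threshold are illustrative assumptions, not the actual parameters of Tchakounté et al. (2020):

```python
def aggregate_verdict(crowd_votes, expert_score, expert_weight=0.5):
    """Combine binary crowd votes (True = 'fake') with a third-party expert
    confidence in [0, 1]. Returns (verdict, combined score).

    The equal weighting of crowd and expert is an illustrative assumption.
    """
    # Majority-vote fraction on the crowd side.
    crowd_score = sum(crowd_votes) / len(crowd_votes)
    # Weighted average of crowd consensus and expert confidence.
    combined = (1 - expert_weight) * crowd_score + expert_weight * expert_score
    return combined >= 0.5, combined

votes = [True, True, False, True, False]  # 3 of 5 workers flag the post as fake
is_fake, score = aggregate_verdict(votes, expert_score=0.9)
```

With three of five crowd flags (0.6) and an expert confidence of 0.9, the combined score is 0.75, so the post is labeled fake; a skeptical expert can likewise overrule a weak crowd signal.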

Similarly, Huffaker et al. ( 2020 ) propose crowdsourced detection of emotionally manipulative language. They introduce an approach that transforms the classification problem into a comparison task, allowing the crowd to detect text that uses manipulative emotional language to sway users toward positions or actions while mitigating conflation with intrinsically emotional content. The proposed system leverages anchor comparison to distinguish between intrinsically emotional content and emotionally manipulative language.

La Barbera et al. ( 2020 ) try to understand how people perceive the truthfulness of information presented to them. They collect data from US-based crowd workers, build a dataset of crowdsourced truthfulness judgments for political statements, and compare it with expert annotation data generated by fact-checkers such as PolitiFact.

Coscia and Rossi ( 2020 ) introduce a crowdsourced flagging system that consists of online news flagging. The bipolar model of news flagging attempts to capture the main ingredients that they observe in empirical research on fake news and disinformation.

Unlike the previously mentioned researchers who focus on news content in their approaches, Pennycook and Rand ( 2019 ) focus on using crowdsourced judgments of the quality of news sources to combat social media disinformation.

Fact-Checking. The fact-checking task is commonly manually performed by journalists to verify the truthfulness of a given claim. Indeed, fact-checking features are being adopted by multiple online social network platforms. For instance, Facebook 34 started addressing false information through independent fact-checkers in 2017, followed by Google 35 the same year. Two years later, Instagram 36 followed suit. However, the usefulness of fact-checking initiatives is questioned by journalists 37 , as well as by researchers such as Andersen and Søe ( 2020 ). On the other hand, work is being conducted to boost the effectiveness of these initiatives to reduce misinformation (Chung and Kim 2021 ; Clayton et al. 2020 ; Nyhan et al. 2020 ).

Most researchers use fact-checking websites (e.g., politifact.com, 38 snopes.com, 39 Reuters 40 , etc.) as data sources to build their datasets and train their models. Therefore, in the following, we specifically review examples of solutions that use fact-checking (Vlachos and Riedel 2014 ) to help build datasets that can be further used in the automatic detection of fake content.
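When such fact-checking sites are used as data sources, their fine-grained verdicts must be normalized into training labels. The sketch below uses PolitiFact's public six-point rating scale; the binary cutoff at "half-true" is an illustrative choice, not a standard:

```python
# PolitiFact's six-point scale, ordered from least to most truthful.
SCALE = ["pants-fire", "false", "mostly-false",
         "half-true", "mostly-true", "true"]

def to_binary_label(rating, cutoff="half-true"):
    """Map a fact-checker verdict to 0 (fake) or 1 (real).

    Ratings below the cutoff are treated as fake; the cutoff itself
    is an illustrative assumption, not a standard threshold.
    """
    rating = rating.strip().lower().replace(" ", "-")
    if rating not in SCALE:
        raise ValueError(f"unknown rating: {rating}")
    return 1 if SCALE.index(rating) >= SCALE.index(cutoff) else 0

labels = [to_binary_label(r) for r in ["Pants-Fire", "mostly true", "half-true"]]
```

Where the cutoff is drawn materially changes the class balance of the resulting dataset, which is one reason models trained on differently normalized corpora are hard to compare.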

Yang et al. ( 2019a ) use the PolitiFact fact-checking website as a data source to train, tune, and evaluate their model, named XFake, on political data. The XFake system is an explainable fake news detector that assists end users in identifying news credibility. The fakeness of news items is detected and interpreted considering both content information (e.g., statements) and contextual information (e.g., speaker).

Based on the idea that fact-checkers cannot clean all data, and it must be a selection of what “matters the most” to clean while checking a claim, Sintos et al. ( 2019 ) propose a solution to help fact-checkers combat problems related to data quality (where inaccurate data lead to incorrect conclusions) and data phishing. The proposed solution is a combination of data cleaning and perturbation analysis to avoid uncertainties and errors in data and the possibility that data can be phished.

Tchechmedjiev et al. ( 2019 ) propose a system named "ClaimsKG", a knowledge graph of fact-checked claims that facilitates structured queries about their truth values, authors, dates, journalistic reviews and other kinds of metadata. "ClaimsKG" models the relationships between vocabularies; to gather them, a semi-automated pipeline periodically harvests data from popular fact-checking websites.

AI-based Techniques

Previous work by Yaqub et al. ( 2020 ) has shown that people lack trust in automated solutions for fake news detection. However, work is already being undertaken to increase this trust, for instance by von der Weth et al. ( 2020 ).

Most researchers consider fake news detection as a classification problem and use artificial intelligence techniques, as shown in Fig.  8 . The adopted AI techniques include machine learning (ML) (e.g., naïve Bayes, logistic regression, support vector machines (SVM)), deep learning (DL) (e.g., convolutional neural networks (CNN), recurrent neural networks (RNN), long short-term memory (LSTM)) and natural language processing (NLP) (e.g., count vectorizer, TF-IDF vectorizer). Most combine several AI techniques in their solutions rather than relying on one specific approach.

Fig. 8: Examples of the most widely used AI techniques for fake news detection

Many researchers develop machine learning models in their solutions for fake news detection. Recently, deep neural network techniques have also been employed, as they generate promising results (Islam et al. 2020 ). A neural network is a massively parallel distributed processor with simple units that can store important information and make it available for use (Hiriyannaiah et al. 2020 ). Moreover, it has been shown (Cardoso Durier da Silva et al. 2019 ) that the most widely used method for automatic detection of fake news is not simply a classical machine learning technique, but rather a fusion of classical techniques coordinated by a neural network.

Some researchers define purely machine learning models (Del Vicario et al. 2019 ; Elhadad et al. 2019 ; Aswani et al. 2017 ; Hakak et al. 2021 ; Singh et al. 2021 ) in their fake news detection approaches. The more commonly used machine learning algorithms (Abdullah-All-Tanvir et al. 2019 ) for classification problems are Naïve Bayes, logistic regression and SVM.
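As a concrete example of the first of these algorithms, here is a minimal multinomial naïve Bayes classifier with add-one (Laplace) smoothing, written against the standard library; the toy training sentences are invented:

```python
import math
from collections import Counter

class NaiveBayes:
    """Multinomial naive Bayes with add-one (Laplace) smoothing."""

    def fit(self, texts, labels):
        self.classes = set(labels)
        self.priors = Counter(labels)  # class frequencies
        self.word_counts = {c: Counter() for c in self.classes}
        for text, label in zip(texts, labels):
            self.word_counts[label].update(text.split())
        self.vocab = {w for c in self.classes for w in self.word_counts[c]}
        return self

    def predict(self, text):
        best, best_lp = None, -math.inf
        total_docs = sum(self.priors.values())
        for c in self.classes:
            lp = math.log(self.priors[c] / total_docs)  # log prior
            total = sum(self.word_counts[c].values())
            for w in text.split():
                # Add-one smoothing avoids zero probability for unseen words.
                lp += math.log((self.word_counts[c][w] + 1)
                               / (total + len(self.vocab)))
            if lp > best_lp:
                best, best_lp = c, lp
        return best

clf = NaiveBayes().fit(
    ["miracle cure shocking secret", "doctors hate this shocking trick",
     "parliament passes budget bill", "court rules on appeal"],
    ["fake", "fake", "real", "real"],
)
```

On this toy corpus the classifier separates tabloid-style wording from neutral reporting; real systems train the same model on the large labeled datasets listed earlier.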

Other researchers (Wang et al. 2019c ; Wang 2017 ; Liu and Wu 2018 ; Mishra 2020 ; Qian et al. 2018 ; Zhang et al. 2020 ; Goldani et al. 2021 ) prefer to mix different deep learning models without combining them with classical machine learning techniques. Some even show that deep learning techniques outperform traditional machine learning techniques (Mishra et al. 2022 ). Deep learning is one of the most popular research topics in machine learning. Unlike traditional machine learning approaches, which are based on manually crafted features, deep learning approaches can learn hidden representations from simpler inputs across both context and content variations (Bondielli and Marcelloni 2019 ). Moreover, traditional machine learning algorithms almost always require structured data and are designed to "learn" from labeled data, and they require human intervention to "teach them" when a result is incorrect (Parrish 2018 ). Deep learning networks, by contrast, rely on layers of artificial neural networks (ANN) and do not require such intervention: the multilevel layers place data in a hierarchy of different concepts, and the networks ultimately learn from their own mistakes (Parrish 2018 ). The two most widely implemented paradigms in deep neural networks are recurrent neural networks (RNN) and convolutional neural networks (CNN).

Still other researchers (Abdullah-All-Tanvir et al. 2019 ; Kaliyar et al. 2020 ; Zhang et al. 2019a ; Deepak and Chitturi 2020 ; Shu et al. 2018a ; Wang et al. 2019c ) prefer to combine traditional machine learning and deep learning classification models. Others combine machine learning and natural language processing techniques, and a few combine deep learning models with natural language processing (Vereshchaka et al. 2020 ). Some other researchers (Kapusta et al. 2019 ; Ozbay and Alatas 2020 ; Ahmed et al. 2020 ) combine natural language processing with machine learning models. Furthermore, others (Abdullah-All-Tanvir et al. 2019 ; Kaur et al. 2020 ; Kaliyar 2018 ; Abdullah-All-Tanvir et al. 2020 ; Bahad et al. 2019 ) prefer to combine all the previously mentioned techniques (i.e., ML, DL and NLP) in their approaches.

Table  11 , which is relegated to the Appendix (after the bibliography) because of its size, shows a comparison of the fake news detection solutions that we have reviewed based on their main approaches, the methodology that was used and the models.

Comparison of AI-based fake news detection techniques

Blockchain-based Techniques for Source Reliability and Traceability

Another research direction for detecting and mitigating fake news in social media focuses on using blockchain solutions. Blockchain technology is recently attracting researchers’ attention due to the interesting features it offers. Immutability, decentralization, tamperproof, consensus, record keeping and non-repudiation of transactions are some of the key features that make blockchain technology exploitable, not just for cryptocurrencies, but also to prove the authenticity and integrity of digital assets.
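The immutability and traceability features can be illustrated with a minimal hash chain. This is only a toy sketch of the underlying data structure: it has no consensus, decentralization, or network layer, which are what make an actual blockchain tamperproof in practice:

```python
import hashlib
import json

def make_block(record, prev_hash):
    """Create a block whose hash commits to the record and the previous block."""
    payload = json.dumps({"record": record, "prev": prev_hash}, sort_keys=True)
    return {"record": record, "prev": prev_hash,
            "hash": hashlib.sha256(payload.encode()).hexdigest()}

def verify_chain(chain):
    """Recompute every hash; any edit to an earlier record breaks the links."""
    prev = "0" * 64  # genesis marker
    for block in chain:
        payload = json.dumps({"record": block["record"], "prev": block["prev"]},
                             sort_keys=True)
        if block["prev"] != prev:
            return False
        if hashlib.sha256(payload.encode()).hexdigest() != block["hash"]:
            return False
        prev = block["hash"]
    return True

# Hypothetical provenance trail for one news item.
chain = []
prev = "0" * 64
for record in ["article v1 published", "correction issued", "source verified"]:
    block = make_block(record, prev)
    chain.append(block)
    prev = block["hash"]
```

Changing any earlier record invalidates every later hash, which is what makes archived news content traceable and tamper-evident in blockchain-based proposals.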

However, the proposed blockchain approaches are few in number and remain fundamental and theoretical. Specifically, the solutions currently available are still at the research, prototype, and beta testing stages (DiCicco and Agarwal 2020 ; Tchechmedjiev et al. 2019 ). Furthermore, most researchers (Ochoa et al. 2019 ; Song et al. 2019 ; Shang et al. 2018 ; Qayyum et al. 2019 ; Jing and Murugesan 2018 ; Buccafurri et al. 2017 ; Chen et al. 2018 ) do not specify which type of fake news they are mitigating in their studies; they mention news content in general, which is not precise enough to guide innovative solutions. Serious implementations are therefore needed to prove the usefulness and feasibility of this newly developing research vision.

Table  9 shows a classification of the reviewed blockchain-based approaches. In the classification, we listed the following:

  • The type of fake news that authors are trying to mitigate, which can be multimedia-based or text-based fake news.
  • The techniques used for fake news mitigation, which can be either blockchain only, or blockchain combined with other techniques such as AI, Data mining, Truth-discovery, Preservation metadata, Semantic similarity, Crowdsourcing, Graph theory and SIR model (Susceptible, Infected, Recovered).
  • The feature that is offered as an advantage of the given solution (e.g., Reliability, Authenticity and Traceability). Reliability is the credibility and truthfulness of the news content, which consists of proving the trustworthiness of the content. Traceability aims to trace and archive the contents. Authenticity consists of checking whether the content is real and authentic.

A checkmark ( ✓ ) in Table  9 denotes that the criterion is explicitly addressed in the proposed solution. An empty dash (–) cell denotes either that the criterion was not explicitly mentioned in the work (e.g., fake news type) or that the classification does not apply (e.g., techniques/other).

A classification of popular blockchain-based approaches for fake news detection in social media

After reviewing the most relevant state of the art for automatic fake news detection, we classify it as shown in Table  10 based on the detection aspects (i.e., content-based, contextual, or hybrid aspects) and the techniques used (i.e., AI, crowdsourcing, fact-checking, blockchain or hybrid techniques). Hybrid techniques refer to solutions that simultaneously combine different techniques from the previously mentioned categories (i.e., inter-hybrid methods), as well as techniques within the same class of methods (i.e., intra-hybrid methods), in order to define innovative solutions for fake news detection. A hybrid method should bring the best of both worlds. We then provide a discussion organized along different axes.

Fake news detection approaches classification

News content-based methods

Most news content-based approaches treat fake news detection as a classification problem and use AI techniques such as classical machine learning (e.g., regression, Bayesian methods) as well as deep learning (i.e., neural methods such as CNN and RNN). More specifically, the classification of social media content is a fundamental task in social media mining, and most existing methods regard it as a text categorization problem, focusing mainly on content features such as words and hashtags (Wu and Liu 2018 ). The main challenge facing these approaches is how to extract features in a way that reduces the data needed to train the models, and which features are most suitable for accurate results.

Researchers using such approaches are motivated by the fact that the news content is the main entity in the deception process, and it is a straightforward factor to analyze and use while looking for predictive clues of deception. However, detecting fake news only from the content of the news is not enough because the news is created in a strategic intentional way to mimic the truth (i.e., the content can be intentionally manipulated by the spreader to make it look like real news). Therefore, it is considered to be challenging, if not impossible, to identify useful features (Wu and Liu 2018 ) and consequently tell the nature of such news solely from the content.

Moreover, works that utilize only the news content for fake news detection ignore the rich information and latent user intelligence (Qian et al. 2018 ) stored in user responses toward previously disseminated articles. Therefore, the auxiliary information is deemed crucial for an effective fake news detection approach.

Social context-based methods

The context-based approaches explore the surrounding data outside of the news content, which can be an effective direction with advantages in areas where content-based approaches relying on text classification run into issues. However, most existing studies implementing contextual methods focus mainly on additional information coming from users and network diffusion patterns. Moreover, from a technical perspective, they are limited to the use of sophisticated machine learning techniques for feature extraction, and they ignore the usefulness of results coming from techniques such as web search and crowdsourcing, which may save much time and help in the early detection and identification of fake content.

Hybrid methods

Hybrid approaches can simultaneously model different aspects of fake news, such as the content-based aspects, as well as the contextual aspects based on both the OSN user and the OSN network patterns. However, these approaches are deemed more complex in terms of models (Bondielli and Marcelloni 2019 ), data availability, and the number of features. Furthermore, it remains difficult to decide which information among each category (i.e., content-based and context-based information) is most suitable and appropriate for achieving accurate and precise results. Therefore, there are still very few studies belonging to this category of hybrid approaches.

Early detection

As fake news usually evolves and spreads very fast on social media, it is critical and urgent to consider early detection directions. Yet, this is a challenging task to do especially in highly dynamic platforms such as social networks. Both news content- and social context-based approaches suffer from this challenging early detection of fake news.

Although approaches that detect fake news based on content analysis face this issue less, they are still limited by the lack of information required for verification when the news is in its early stage of spread. However, approaches that detect fake news based on contextual analysis are most likely to suffer from the lack of early detection since most of them rely on information that is mostly available after the spread of fake content such as social engagement, user response, and propagation patterns. Therefore, it is crucial to consider both trusted human verification and historical data as an attempt to detect fake content during its early stage of propagation.

Conclusion and future directions

In this paper, we introduced the general context of the fake news problem as one of the major issues of the online deception problem in online social networks. Based on a review of the most relevant state of the art, we summarized and classified existing definitions of fake news, as well as its related terms. We also listed various typologies and existing categorizations of fake news, such as intent-based fake news (including clickbait, hoaxes, rumors, satire, propaganda, conspiracy theories and framing) and content-based fake news (including text-based and multimedia-based fake news, the latter covering deepfake videos and GAN-generated fake images). We discussed the major challenges related to fake news detection and mitigation in social media: the deceptive nature of fabricated content, the lack of human awareness in the field of fake news, the issue of non-human spreaders (e.g., social bots), the dynamicity of online platforms, which results in fast propagation of fake content, and the quality of existing datasets, which still limits the efficiency of the proposed solutions. We reviewed existing researchers' visions regarding the automatic detection of fake news based on the adopted approaches (i.e., news content-based approaches, social context-based approaches, or hybrid approaches) and the techniques used (i.e., artificial intelligence-based methods; crowdsourcing, fact-checking, and blockchain-based methods; and hybrid methods), and we presented a comparative study of the reviewed works. We also provided a critical discussion of the reviewed approaches along different axes, such as the aspect adopted for fake news detection (i.e., content-based, contextual, or hybrid) and the early detection perspective.

To conclude, we present the main issues for combating the fake news problem that need to be further investigated when proposing new detection approaches. We believe that to define an efficient fake news detection approach, we need to consider the following:

  • Our choice of sources of information and search criteria may have introduced biases in our research. If so, it would be desirable to identify those biases and mitigate them.
  • News content is the fundamental source to find clues to distinguish fake from real content. However, contextual information derived from social media users and from the network can provide useful auxiliary information to increase detection accuracy. Specifically, capturing users’ characteristics and users’ behavior toward shared content can be a key task for fake news detection.
  • Moreover, capturing users’ historical behavior, including their emotions and/or opinions toward news content, can help in the early detection and mitigation of fake news.
  • Furthermore, adversarial learning techniques (e.g., GAN, SeqGAN) can be considered as a promising direction for mitigating the lack and scarcity of available datasets by providing machine-generated data that can be used to train and build robust systems to detect the fake examples from the real ones.
  • Lastly, analyzing how sources and promoters of fake news operate over the web through multiple online platforms is crucial; Zannettou et al. ( 2019 ) discovered that false information is more likely to spread across platforms (18% appearing on multiple platforms) compared to valid information (11%).

Appendix: A Comparison of AI-based fake news detection techniques

This appendix consists only of the rather long Table 11. It compares the AI-based fake news detection solutions reviewed in this survey according to their main approach, methodology, and models, as explained in Sect. 6.2.2.

Author Contributions

The order of authors is alphabetical, as is customary in the third author’s field. The lead author was Sabrine Amri, who collected and analyzed the data and wrote a first draft of the paper, all along under the supervision and tight guidance of Esma Aïmeur. Gilles Brassard reviewed, criticized, and polished the work into its final form.

Funding

This work is supported in part by Canada’s Natural Sciences and Engineering Research Council.

Availability of data and material

Declarations

On behalf of all authors, the corresponding author states that there is no conflict of interest.

1 https://www.nationalacademies.org/news/2021/07/as-surgeon-general-urges-whole-of-society-effort-to-fight-health-misinformation-the-work-of-the-national-academies-helps-foster-an-evidence-based-information-environment , last access date: 26-12-2022.

2 https://time.com/4897819/elvis-presley-alive-conspiracy-theories/ , last access date: 26-12-2022.

3 https://www.therichest.com/shocking/the-evidence-15-reasons-people-think-the-earth-is-flat/ , last access date: 26-12-2022.

4 https://www.grunge.com/657584/the-truth-about-1952s-alien-invasion-of-washington-dc/ , last access date: 26-12-2022.

5 https://www.journalism.org/2021/01/12/news-use-across-social-media-platforms-in-2020/ , last access date: 26-12-2022.

6 https://www.pewresearch.org/fact-tank/2018/12/10/social-media-outpaces-print-newspapers-in-the-u-s-as-a-news-source/ , last access date: 26-12-2022.

7 https://www.buzzfeednews.com/article/janelytvynenko/coronavirus-fake-news-disinformation-rumors-hoaxes , last access date: 26-12-2022.

8 https://www.factcheck.org/2020/03/viral-social-media-posts-offer-false-coronavirus-tips/ , last access date: 26-12-2022.

9 https://www.factcheck.org/2020/02/fake-coronavirus-cures-part-2-garlic-isnt-a-cure/ , last access date: 26-12-2022.

10 https://www.bbc.com/news/uk-36528256 , last access date: 26-12-2022.

11 https://en.wikipedia.org/wiki/Pizzagate_conspiracy_theory , last access date: 26-12-2022.

12 https://www.theguardian.com/world/2017/jan/09/germany-investigating-spread-fake-news-online-russia-election , last access date: 26-12-2022.

13 https://www.macquariedictionary.com.au/resources/view/word/of/the/year/2016 , last access date: 26-12-2022.

14 https://www.macquariedictionary.com.au/resources/view/word/of/the/year/2018 , last access date: 26-12-2022.

15 https://apnews.com/article/47466c5e260149b1a23641b9e319fda6 , last access date: 26-12-2022.

16 https://blog.collinsdictionary.com/language-lovers/collins-2017-word-of-the-year-shortlist/ , last access date: 26-12-2022.

17 https://www.gartner.com/smarterwithgartner/gartner-top-strategic-predictions-for-2018-and-beyond/ , last access date: 26-12-2022.

18 https://www.technologyreview.com/s/612236/even-the-best-ai-for-spotting-fake-news-is-still-terrible/ , last access date: 26-12-2022.

19 https://scholar.google.ca/ , last access date: 26-12-2022.

20 https://ieeexplore.ieee.org/ , last access date: 26-12-2022.

21 https://link.springer.com/ , last access date: 26-12-2022.

22 https://www.sciencedirect.com/ , last access date: 26-12-2022.

23 https://www.scopus.com/ , last access date: 26-12-2022.

24 https://www.acm.org/digital-library , last access date: 26-12-2022.

25 https://www.politico.com/magazine/story/2016/12/fake-news-history-long-violent-214535 , last access date: 26-12-2022.

26 https://en.wikipedia.org/wiki/Trial_of_Socrates , last access date: 26-12-2022.

27 https://trends.google.com/trends/explore?hl=en-US&tz=-180&date=2013-12-06+2018-01-06&geo=US&q=fake+news&sni=3 , last access date: 26-12-2022.

28 https://ec.europa.eu/digital-single-market/en/tackling-online-disinformation , last access date: 26-12-2022.

29 https://www.nato.int/cps/en/natohq/177273.htm , last access date: 26-12-2022.

30 https://www.collinsdictionary.com/dictionary/english/fake-news , last access date: 26-12-2022.

31 https://www.statista.com/statistics/657111/fake-news-sharing-online/ , last access date: 26-12-2022.

32 https://www.statista.com/statistics/657090/fake-news-recogition-confidence/ , last access date: 26-12-2022.

33 https://www.nbcnews.com/tech/social-media/now-available-more-200-000-deleted-russian-troll-tweets-n844731 , last access date: 26-12-2022.

34 https://www.theguardian.com/technology/2017/mar/22/facebook-fact-checking-tool-fake-news , last access date: 26-12-2022.

35 https://www.theguardian.com/technology/2017/apr/07/google-to-display-fact-checking-labels-to-show-if-news-is-true-or-false , last access date: 26-12-2022.

36 https://about.instagram.com/blog/announcements/combatting-misinformation-on-instagram , last access date: 26-12-2022.

37 https://www.wired.com/story/instagram-fact-checks-who-will-do-checking/ , last access date: 26-12-2022.

38 https://www.politifact.com/ , last access date: 26-12-2022.

39 https://www.snopes.com/ , last access date: 26-12-2022.

40 https://www.reutersagency.com/en/ , last access date: 26-12-2022.

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Contributor Information

Esma Aïmeur, Email: aimeur@iro.umontreal.ca .

Sabrine Amri, Email: [email protected] .

Gilles Brassard, Email: brassard@iro.umontreal.ca .

  • Abdullah-All-Tanvir, Mahir EM, Akhter S, Huq MR (2019) Detecting fake news using machine learning and deep learning algorithms. In: 7th international conference on smart computing and communications (ICSCC), IEEE, pp 1–5 10.1109/ICSCC.2019.8843612
  • Abdullah-All-Tanvir, Mahir EM, Huda SMA, Barua S (2020) A hybrid approach for identifying authentic news using deep learning methods on popular Twitter threads. In: International conference on artificial intelligence and signal processing (AISP), IEEE, pp 1–6 10.1109/AISP48273.2020.9073583
  • Abu Arqoub O, Abdulateef Elega A, Efe Özad B, Dwikat H, Adedamola Oloyede F. Mapping the scholarship of fake news research: a systematic review. J Pract. 2022; 16(1):56–86. doi: 10.1080/17512786.2020.1805791.
  • Ahmed S, Hinkelmann K, Corradini F. Development of fake news model using machine learning through natural language processing. Int J Comput Inf Eng. 2020; 14(12):454–460.
  • Aïmeur E, Brassard G, Rioux J. Data privacy: an end-user perspective. Int J Comput Netw Commun Secur. 2013; 1(6):237–250.
  • Aïmeur E, Hage H, Amri S (2018) The scourge of online deception in social networks. In: 2018 international conference on computational science and computational intelligence (CSCI), IEEE, pp 1266–1271. 10.1109/CSCI46756.2018.00244
  • Alemanno A. How to counter fake news? A taxonomy of anti-fake news approaches. Eur J Risk Regul. 2018; 9(1):1–5. doi: 10.1017/err.2018.12.
  • Allcott H, Gentzkow M. Social media and fake news in the 2016 election. J Econ Perspect. 2017; 31(2):211–36. doi: 10.1257/jep.31.2.211.
  • Allen J, Howland B, Mobius M, Rothschild D, Watts DJ. Evaluating the fake news problem at the scale of the information ecosystem. Sci Adv. 2020. doi: 10.1126/sciadv.aay3539.
  • Allington D, Duffy B, Wessely S, Dhavan N, Rubin J. Health-protective behaviour, social media usage and conspiracy belief during the Covid-19 public health emergency. Psychol Med. 2020. doi: 10.1017/S003329172000224X.
  • Alonso-Galbán P, Alemañy-Castilla C (2022) Curbing misinformation and disinformation in the Covid-19 era: a view from cuba. MEDICC Rev 22:45–46. 10.37757/MR2020.V22.N2.12
  • Altay S, Hacquin AS, Mercier H. Why do so few people share fake news? It hurts their reputation. New Media Soc. 2022; 24(6):1303–1324. doi: 10.1177/1461444820969893.
  • Amri S, Sallami D, Aïmeur E (2022) Exmulf: an explainable multimodal content-based fake news detection system. In: International symposium on foundations and practice of security. Springer, Berlin, pp 177–187. 10.1109/IJCNN48605.2020.9206973
  • Andersen J, Søe SO. Communicative actions we live by: the problem with fact-checking, tagging or flagging fake news-the case of Facebook. Eur J Commun. 2020; 35(2):126–139. doi: 10.1177/0267323119894489.
  • Apuke OD, Omar B. Fake news and Covid-19: modelling the predictors of fake news sharing among social media users. Telematics Inform. 2021; 56:101475. doi: 10.1016/j.tele.2020.101475.
  • Apuke OD, Omar B, Tunca EA, Gever CV. The effect of visual multimedia instructions against fake news spread: a quasi-experimental study with Nigerian students. J Librariansh Inf Sci. 2022. doi: 10.1177/09610006221096477.
  • Aswani R, Ghrera S, Kar AK, Chandra S. Identifying buzz in social media: a hybrid approach using artificial bee colony and k-nearest neighbors for outlier detection. Soc Netw Anal Min. 2017; 7(1):1–10. doi: 10.1007/s13278-017-0461-2.
  • Avram M, Micallef N, Patil S, Menczer F (2020) Exposure to social engagement metrics increases vulnerability to misinformation. arXiv preprint arxiv:2005.04682 , 10.37016/mr-2020-033
  • Badawy A, Lerman K, Ferrara E (2019) Who falls for online political manipulation? In: Companion proceedings of the 2019 world wide web conference, pp 162–168. 10.1145/3308560.3316494
  • Bahad P, Saxena P, Kamal R. Fake news detection using bi-directional LSTM-recurrent neural network. Procedia Comput Sci. 2019; 165:74–82. doi: 10.1016/j.procs.2020.01.072.
  • Bakdash J, Sample C, Rankin M, Kantarcioglu M, Holmes J, Kase S, Zaroukian E, Szymanski B (2018) The future of deception: machine-generated and manipulated images, video, and audio? In: 2018 international workshop on social sensing (SocialSens), IEEE, pp 2–2. 10.1109/SocialSens.2018.00009
  • Balmas M. When fake news becomes real: combined exposure to multiple news sources and political attitudes of inefficacy, alienation, and cynicism. Commun Res. 2014; 41(3):430–454. doi: 10.1177/0093650212453600.
  • Baptista JP, Gradim A. Understanding fake news consumption: a review. Soc Sci. 2020. doi: 10.3390/socsci9100185.
  • Baptista JP, Gradim A. A working definition of fake news. Encyclopedia. 2022; 2(1):632–645. doi: 10.3390/encyclopedia2010043.
  • Bastick Z. Would you notice if fake news changed your behavior? An experiment on the unconscious effects of disinformation. Comput Hum Behav. 2021; 116:106633. doi: 10.1016/j.chb.2020.106633.
  • Batailler C, Brannon SM, Teas PE, Gawronski B. A signal detection approach to understanding the identification of fake news. Perspect Psychol Sci. 2022; 17(1):78–98. doi: 10.1177/1745691620986135.
  • Bessi A, Ferrara E (2016) Social bots distort the 2016 US presidential election online discussion. First Monday 21(11-7). 10.5210/fm.v21i11.7090
  • Bhattacharjee A, Shu K, Gao M, Liu H (2020) Disinformation in the online information ecosystem: detection, mitigation and challenges. arXiv preprint arXiv:2010.09113
  • Bhuiyan MM, Zhang AX, Sehat CM, Mitra T. Investigating differences in crowdsourced news credibility assessment: raters, tasks, and expert criteria. Proc ACM Hum Comput Interact. 2020; 4(CSCW2):1–26. doi: 10.1145/3415164.
  • Bode L, Vraga EK. In related news, that was wrong: the correction of misinformation through related stories functionality in social media. J Commun. 2015; 65(4):619–638. doi: 10.1111/jcom.12166.
  • Bondielli A, Marcelloni F. A survey on fake news and rumour detection techniques. Inf Sci. 2019; 497:38–55. doi: 10.1016/j.ins.2019.05.035.
  • Bovet A, Makse HA. Influence of fake news in Twitter during the 2016 US presidential election. Nat Commun. 2019; 10(1):1–14. doi: 10.1038/s41467-018-07761-2.
  • Brashier NM, Pennycook G, Berinsky AJ, Rand DG. Timing matters when correcting fake news. Proc Natl Acad Sci. 2021. doi: 10.1073/pnas.2020043118.
  • Brewer PR, Young DG, Morreale M. The impact of real news about “fake news”: intertextual processes and political satire. Int J Public Opin Res. 2013; 25(3):323–343. doi: 10.1093/ijpor/edt015.
  • Bringula RP, Catacutan-Bangit AE, Garcia MB, Gonzales JPS, Valderama AMC. “Who is gullible to political disinformation?” Predicting susceptibility of university students to fake news. J Inf Technol Polit. 2022; 19(2):165–179. doi: 10.1080/19331681.2021.1945988.
  • Buccafurri F, Lax G, Nicolazzo S, Nocera A (2017) Tweetchain: an alternative to blockchain for crowd-based applications. In: International conference on web engineering, Springer, Berlin, pp 386–393. 10.1007/978-3-319-60131-1_24
  • Burshtein S. The true story on fake news. Intell Prop J. 2017; 29(3):397–446.
  • Cardaioli M, Cecconello S, Conti M, Pajola L, Turrin F (2020) Fake news spreaders profiling through behavioural analysis. In: CLEF (working notes)
  • Cardoso Durier da Silva F, Vieira R, Garcia AC (2019) Can machines learn to detect fake news? A survey focused on social media. In: Proceedings of the 52nd Hawaii international conference on system sciences. 10.24251/HICSS.2019.332
  • Carmi E, Yates SJ, Lockley E, Pawluczuk A (2020) Data citizenship: rethinking data literacy in the age of disinformation, misinformation, and malinformation. Intern Policy Rev 9(2):1–22. 10.14763/2020.2.1481
  • Celliers M, Hattingh M (2020) A systematic review on fake news themes reported in literature. In: Conference on e-Business, e-Services and e-Society. Springer, Berlin, pp 223–234. 10.1007/978-3-030-45002-1_19
  • Chen Y, Li Q, Wang H (2018) Towards trusted social networks with blockchain technology. arXiv preprint arXiv:1801.02796
  • Cheng L, Guo R, Shu K, Liu H (2020) Towards causal understanding of fake news dissemination. arXiv preprint arXiv:2010.10580
  • Chiu MM, Oh YW. How fake news differs from personal lies. Am Behav Sci. 2021; 65(2):243–258. doi: 10.1177/0002764220910243.
  • Chung M, Kim N. When I learn the news is false: how fact-checking information stems the spread of fake news via third-person perception. Hum Commun Res. 2021; 47(1):1–24. doi: 10.1093/hcr/hqaa010.
  • Clarke J, Chen H, Du D, Hu YJ. Fake news, investor attention, and market reaction. Inf Syst Res. 2020. doi: 10.1287/isre.2019.0910.
  • Clayton K, Blair S, Busam JA, Forstner S, Glance J, Green G, Kawata A, Kovvuri A, Martin J, Morgan E, et al. Real solutions for fake news? Measuring the effectiveness of general warnings and fact-check tags in reducing belief in false stories on social media. Polit Behav. 2020; 42(4):1073–1095. doi: 10.1007/s11109-019-09533-0.
  • Collins B, Hoang DT, Nguyen NT, Hwang D (2020) Fake news types and detection models on social media a state-of-the-art survey. In: Asian conference on intelligent information and database systems. Springer, Berlin, pp 562–573. 10.1007/978-981-15-3380-8_49
  • Conroy NK, Rubin VL, Chen Y. Automatic deception detection: methods for finding fake news. Proc Assoc Inf Sci Technol. 2015; 52(1):1–4. doi: 10.1002/pra2.2015.145052010082.
  • Cooke NA. Posttruth, truthiness, and alternative facts: Information behavior and critical information consumption for a new age. Libr Q. 2017; 87(3):211–221. doi: 10.1086/692298.
  • Coscia M, Rossi L. Distortions of political bias in crowdsourced misinformation flagging. J R Soc Interface. 2020; 17(167):20200020. doi: 10.1098/rsif.2020.0020.
  • Dame Adjin-Tettey T. Combating fake news, disinformation, and misinformation: experimental evidence for media literacy education. Cogent Arts Human. 2022; 9(1):2037229. doi: 10.1080/23311983.2022.2037229.
  • Deepak S, Chitturi B. Deep neural approach to fake-news identification. Procedia Comput Sci. 2020; 167:2236–2243. doi: 10.1016/j.procs.2020.03.276.
  • de Cock Buning M (2018) A multi-dimensional approach to disinformation: report of the independent high level group on fake news and online disinformation. Publications Office of the European Union
  • Del Vicario M, Quattrociocchi W, Scala A, Zollo F. Polarization and fake news: early warning of potential misinformation targets. ACM Trans Web (TWEB) 2019; 13(2):1–22. doi: 10.1145/3316809.
  • Demuyakor J, Opata EM. Fake news on social media: predicting which media format influences fake news most on facebook. J Intell Commun. 2022. doi: 10.54963/jic.v2i1.56.
  • Derakhshan H, Wardle C (2017) Information disorder: definitions. In: Understanding and addressing the disinformation ecosystem, pp 5–12
  • Desai AN, Ruidera D, Steinbrink JM, Granwehr B, Lee DH. Misinformation and disinformation: the potential disadvantages of social media in infectious disease and how to combat them. Clin Infect Dis. 2022; 74(Supplement–3):e34–e39. doi: 10.1093/cid/ciac109.
  • Di Domenico G, Sit J, Ishizaka A, Nunan D. Fake news, social media and marketing: a systematic review. J Bus Res. 2021; 124:329–341. doi: 10.1016/j.jbusres.2020.11.037.
  • Dias N, Pennycook G, Rand DG. Emphasizing publishers does not effectively reduce susceptibility to misinformation on social media. Harv Kennedy School Misinform Rev. 2020. doi: 10.37016/mr-2020-001.
  • DiCicco KW, Agarwal N (2020) Blockchain technology-based solutions to fight misinformation: a survey. In: Disinformation, misinformation, and fake news in social media. Springer, Berlin, pp 267–281. 10.1007/978-3-030-42699-6_14
  • Douglas KM, Uscinski JE, Sutton RM, Cichocka A, Nefes T, Ang CS, Deravi F. Understanding conspiracy theories. Polit Psychol. 2019; 40:3–35. doi: 10.1111/pops.12568.
  • Edgerly S, Mourão RR, Thorson E, Tham SM. When do audiences verify? How perceptions about message and source influence audience verification of news headlines. J Mass Commun Q. 2020; 97(1):52–71. doi: 10.1177/1077699019864680.
  • Egelhofer JL, Lecheler S. Fake news as a two-dimensional phenomenon: a framework and research agenda. Ann Int Commun Assoc. 2019; 43(2):97–116. doi: 10.1080/23808985.2019.1602782.
  • Elhadad MK, Li KF, Gebali F (2019) A novel approach for selecting hybrid features from online news textual metadata for fake news detection. In: International conference on p2p, parallel, grid, cloud and internet computing. Springer, Berlin, pp 914–925. 10.1007/978-3-030-33509-0_86
  • ERGA (2018) Fake news, and the information disorder. European Broadcasting Union (EBU)
  • ERGA (2021) Notions of disinformation and related concepts. European Regulators Group for Audiovisual Media Services (ERGA)
  • Escolà-Gascón Á. New techniques to measure lie detection using Covid-19 fake news and the Multivariable Multiaxial Suggestibility Inventory-2 (MMSI-2) Comput Hum Behav Rep. 2021; 3:100049. doi: 10.1016/j.chbr.2020.100049.
  • Fazio L. Pausing to consider why a headline is true or false can help reduce the sharing of false news. Harv Kennedy School Misinformation Rev. 2020. doi: 10.37016/mr-2020-009.
  • Ferrara E, Varol O, Davis C, Menczer F, Flammini A. The rise of social bots. Commun ACM. 2016; 59(7):96–104. doi: 10.1145/2818717.
  • Flynn D, Nyhan B, Reifler J. The nature and origins of misperceptions: understanding false and unsupported beliefs about politics. Polit Psychol. 2017; 38:127–150. doi: 10.1111/pops.12394.
  • Fraga-Lamas P, Fernández-Caramés TM. Fake news, disinformation, and deepfakes: leveraging distributed ledger technologies and blockchain to combat digital deception and counterfeit reality. IT Prof. 2020; 22(2):53–59. doi: 10.1109/MITP.2020.2977589.
  • Freeman D, Waite F, Rosebrock L, Petit A, Causier C, East A, Jenner L, Teale AL, Carr L, Mulhall S, et al. Coronavirus conspiracy beliefs, mistrust, and compliance with government guidelines in England. Psychol Med. 2020. doi: 10.1017/S0033291720001890.
  • Friggeri A, Adamic L, Eckles D, Cheng J (2014) Rumor cascades. In: Proceedings of the international AAAI conference on web and social media
  • García SA, García GG, Prieto MS, Moreno Guerrero AJ, Rodríguez Jiménez C. The impact of term fake news on the scientific community. Scientific performance and mapping in web of science. Soc Sci. 2020. doi: 10.3390/socsci9050073.
  • Garrett RK, Bond RM. Conservatives’ susceptibility to political misperceptions. Sci Adv. 2021. doi: 10.1126/sciadv.abf1234.
  • Giachanou A, Ríssola EA, Ghanem B, Crestani F, Rosso P (2020) The role of personality and linguistic patterns in discriminating between fake news spreaders and fact checkers. In: International conference on applications of natural language to information systems. Springer, Berlin, pp 181–192. 10.1007/978-3-030-51310-8_17
  • Golbeck J, Mauriello M, Auxier B, Bhanushali KH, Bonk C, Bouzaghrane MA, Buntain C, Chanduka R, Cheakalos P, Everett JB et al (2018) Fake news vs satire: a dataset and analysis. In: Proceedings of the 10th ACM conference on web science, pp 17–21. 10.1145/3201064.3201100
  • Goldani MH, Momtazi S, Safabakhsh R. Detecting fake news with capsule neural networks. Appl Soft Comput. 2021; 101:106991. doi: 10.1016/j.asoc.2020.106991.
  • Goldstein I, Yang L. Good disclosure, bad disclosure. J Financ Econ. 2019; 131(1):118–138. doi: 10.1016/j.jfineco.2018.08.004.
  • Grinberg N, Joseph K, Friedland L, Swire-Thompson B, Lazer D. Fake news on Twitter during the 2016 US presidential election. Science. 2019; 363(6425):374–378. doi: 10.1126/science.aau2706.
  • Guadagno RE, Guttieri K (2021) Fake news and information warfare: an examination of the political and psychological processes from the digital sphere to the real world. In: Research anthology on fake news, political warfare, and combatting the spread of misinformation. IGI Global, pp 218–242. 10.4018/978-1-7998-7291-7.ch013
  • Guess A, Nagler J, Tucker J. Less than you think: prevalence and predictors of fake news dissemination on Facebook. Sci Adv. 2019. doi: 10.1126/sciadv.aau4586.
  • Guo C, Cao J, Zhang X, Shu K, Yu M (2019) Exploiting emotions for fake news detection on social media. arXiv preprint arXiv:1903.01728
  • Guo B, Ding Y, Yao L, Liang Y, Yu Z. The future of false information detection on social media: new perspectives and trends. ACM Comput Surv (CSUR) 2020; 53(4):1–36. doi: 10.1145/3393880.
  • Gupta A, Li H, Farnoush A, Jiang W. Understanding patterns of covid infodemic: a systematic and pragmatic approach to curb fake news. J Bus Res. 2022; 140:670–683. doi: 10.1016/j.jbusres.2021.11.032.
  • Ha L, Andreu Perez L, Ray R. Mapping recent development in scholarship on fake news and misinformation, 2008 to 2017: disciplinary contribution, topics, and impact. Am Behav Sci. 2021; 65(2):290–315. doi: 10.1177/0002764219869402.
  • Habib A, Asghar MZ, Khan A, Habib A, Khan A. False information detection in online content and its role in decision making: a systematic literature review. Soc Netw Anal Min. 2019; 9(1):1–20. doi: 10.1007/s13278-019-0595-5.
  • Hage H, Aïmeur E, Guedidi A (2021) Understanding the landscape of online deception. In: Research anthology on fake news, political warfare, and combatting the spread of misinformation. IGI Global, pp 39–66. 10.4018/978-1-7998-2543-2.ch014
  • Hakak S, Alazab M, Khan S, Gadekallu TR, Maddikunta PKR, Khan WZ. An ensemble machine learning approach through effective feature extraction to classify fake news. Futur Gener Comput Syst. 2021; 117:47–58. doi: 10.1016/j.future.2020.11.022.
  • Hamdi T, Slimi H, Bounhas I, Slimani Y (2020) A hybrid approach for fake news detection in Twitter based on user features and graph embedding. In: International conference on distributed computing and internet technology. Springer, Berlin, pp 266–280. 10.1007/978-3-030-36987-3_17
  • Hameleers M. Separating truth from lies: comparing the effects of news media literacy interventions and fact-checkers in response to political misinformation in the us and netherlands. Inf Commun Soc. 2022; 25(1):110–126. doi: 10.1080/1369118X.2020.1764603.
  • Hameleers M, Powell TE, Van Der Meer TG, Bos L. A picture paints a thousand lies? The effects and mechanisms of multimodal disinformation and rebuttals disseminated via social media. Polit Commun. 2020; 37(2):281–301. doi: 10.1080/10584609.2019.1674979.
  • Hameleers M, Brosius A, de Vreese CH. Whom to trust? media exposure patterns of citizens with perceptions of misinformation and disinformation related to the news media. Eur J Commun. 2022. doi: 10.1177/02673231211072667.
  • Hartley K, Vu MK. Fighting fake news in the Covid-19 era: policy insights from an equilibrium model. Policy Sci. 2020; 53(4):735–758. doi: 10.1007/s11077-020-09405-z.
  • Hasan HR, Salah K. Combating deepfake videos using blockchain and smart contracts. IEEE Access. 2019; 7:41596–41606. doi: 10.1109/ACCESS.2019.2905689.
  • Hiriyannaiah S, Srinivas A, Shetty GK, Siddesh G, Srinivasa K (2020) A computationally intelligent agent for detecting fake news using generative adversarial networks. Hybrid computational intelligence: challenges and applications. pp 69–96. 10.1016/B978-0-12-818699-2.00004-4
  • Hosseinimotlagh S, Papalexakis EE (2018) Unsupervised content-based identification of fake news articles with tensor decomposition ensembles. In: Proceedings of the workshop on misinformation and misbehavior mining on the web (MIS2)
  • Huckle S, White M. Fake news: a technological approach to proving the origins of content, using blockchains. Big Data. 2017; 5(4):356–371. doi: 10.1089/big.2017.0071.
  • Huffaker JS, Kummerfeld JK, Lasecki WS, Ackerman MS (2020) Crowdsourced detection of emotionally manipulative language. In: Proceedings of the 2020 CHI conference on human factors in computing systems. pp 1–14. 10.1145/3313831.3376375
  • Ireton C, Posetti J. Journalism, fake news & disinformation: handbook for journalism education and training. Paris: UNESCO Publishing; 2018.
  • Islam MR, Liu S, Wang X, Xu G. Deep learning for misinformation detection on online social networks: a survey and new perspectives. Soc Netw Anal Min. 2020; 10(1):1–20. doi: 10.1007/s13278-020-00696-x.
  • Ismailov M, Tsikerdekis M, Zeadally S. Vulnerabilities to online social network identity deception detection research and recommendations for mitigation. Fut Internet. 2020; 12(9):148. doi: 10.3390/fi12090148.
  • Jakesch M, Koren M, Evtushenko A, Naaman M (2019) The role of source and expressive responding in political news evaluation. In: Computation and journalism symposium
  • Jamieson KH. Cyberwar: how Russian hackers and trolls helped elect a president: what we don’t, can’t, and do know. Oxford: Oxford University Press; 2020.
  • Jiang S, Chen X, Zhang L, Chen S, Liu H (2019) User-characteristic enhanced model for fake news detection in social media. In: CCF International conference on natural language processing and Chinese computing, Springer, Berlin, pp 634–646. 10.1007/978-3-030-32233-5_49
  • Jin Z, Cao J, Zhang Y, Luo J (2016) News verification by exploiting conflicting social viewpoints in microblogs. In: Proceedings of the AAAI conference on artificial intelligence
  • Jing TW, Murugesan RK (2018) A theoretical framework to build trust and prevent fake news in social media using blockchain. In: International conference of reliable information and communication technology. Springer, Berlin, pp 955–962. 10.1007/978-3-319-99007-1_88
  • Jones-Jang SM, Mortensen T, Liu J. Does media literacy help identification of fake news? Information literacy helps, but other literacies don’t. Am Behav Sci. 2021; 65(2):371–388. doi: 10.1177/0002764219869406.
  • Jungherr A, Schroeder R. Disinformation and the structural transformations of the public arena: addressing the actual challenges to democracy. Soc Media Soc. 2021. doi: 10.1177/2056305121988928.
  • Kaliyar RK (2018) Fake news detection using a deep neural network. In: 2018 4th international conference on computing communication and automation (ICCCA), IEEE, pp 1–7. 10.1109/CCAA.2018.8777343
  • Kaliyar RK, Goswami A, Narang P, Sinha S. Fndnet—a deep convolutional neural network for fake news detection. Cogn Syst Res. 2020; 61:32–44. doi: 10.1016/j.cogsys.2019.12.005.
  • Kapantai E, Christopoulou A, Berberidis C, Peristeras V. A systematic literature review on disinformation: toward a unified taxonomical framework. New Media Soc. 2021; 23(5):1301–1326. doi: 10.1177/1461444820959296.
  • Kapusta J, Benko L, Munk M (2019) Fake news identification based on sentiment and frequency analysis. In: International conference Europe middle east and North Africa information systems and technologies to support learning. Springer, Berlin, pp 400–409. 10.1007/978-3-030-36778-7_44
  • Kaur S, Kumar P, Kumaraguru P. Automating fake news detection system using multi-level voting model. Soft Comput. 2020; 24(12):9049–9069. doi: 10.1007/s00500-019-04436-y.
  • Khan SA, Alkawaz MH, Zangana HM (2019) The use and abuse of social media for spreading fake news. In: 2019 IEEE international conference on automatic control and intelligent systems (I2CACIS), IEEE, pp 145–148. 10.1109/I2CACIS.2019.8825029
  • Kim J, Tabibian B, Oh A, Schölkopf B, Gomez-Rodriguez M (2018) Leveraging the crowd to detect and reduce the spread of fake news and misinformation. In: Proceedings of the eleventh ACM international conference on web search and data mining, pp 324–332. 10.1145/3159652.3159734
  • Klein D, Wueller J. Fake news: a legal perspective. J Internet Law. 2017; 20(10):5–13.
  • Kogan S, Moskowitz TJ, Niessner M (2019) Fake news: evidence from financial markets. Available at SSRN 3237763
  • Kuklinski JH, Quirk PJ, Jerit J, Schwieder D, Rich RF. Misinformation and the currency of democratic citizenship. J Polit. 2000; 62(3):790–816. doi: 10.1111/0022-3816.00033.
  • Kumar S, Shah N (2018) False information on web and social media: a survey. arXiv preprint arXiv:1804.08559
  • Kumar S, West R, Leskovec J (2016) Disinformation on the web: impact, characteristics, and detection of Wikipedia hoaxes. In: Proceedings of the 25th international conference on world wide web, pp 591–602. 10.1145/2872427.2883085
  • La Barbera D, Roitero K, Demartini G, Mizzaro S, Spina D (2020) Crowdsourcing truthfulness: the impact of judgment scale and assessor bias. In: European conference on information retrieval. Springer, Berlin, pp 207–214. 10.1007/978-3-030-45442-5_26
  • Lanius C, Weber R, MacKenzie WI. Use of bot and content flags to limit the spread of misinformation among social networks: a behavior and attitude survey. Soc Netw Anal Min. 2021; 11(1):1–15. doi: 10.1007/s13278-021-00739-x.
  • Lazer DM, Baum MA, Benkler Y, Berinsky AJ, Greenhill KM, Menczer F, Metzger MJ, Nyhan B, Pennycook G, Rothschild D, et al. The science of fake news. Science. 2018; 359(6380):1094–1096. doi: 10.1126/science.aao2998.
  • Le T, Shu K, Molina MD, Lee D, Sundar SS, Liu H (2019) 5 sources of clickbaits you should know! Using synthetic clickbaits to improve prediction and distinguish between bot-generated and human-written headlines. In: 2019 IEEE/ACM international conference on advances in social networks analysis and mining (ASONAM). IEEE, pp 33–40. 10.1145/3341161.3342875
  • Lewandowsky S (2020) Climate change, disinformation, and how to combat it. In: Annual Review of Public Health 42. 10.1146/annurev-publhealth-090419-102409
  • Liu Y, Wu YF (2018) Early detection of fake news on social media through propagation path classification with recurrent and convolutional networks. In: Proceedings of the AAAI conference on artificial intelligence, pp 354–361
  • Luo M, Hancock JT, Markowitz DM. Credibility perceptions and detection accuracy of fake news headlines on social media: effects of truth-bias and endorsement cues. Commun Res. 2022; 49 (2):171–195. doi: 10.1177/0093650220921321. [ CrossRef ] [ Google Scholar ]
  • Lutzke L, Drummond C, Slovic P, Árvai J. Priming critical thinking: simple interventions limit the influence of fake news about climate change on Facebook. Glob Environ Chang. 2019; 58 :101964. doi: 10.1016/j.gloenvcha.2019.101964. [ CrossRef ] [ Google Scholar ]
  • Maertens R, Anseel F, van der Linden S. Combatting climate change misinformation: evidence for longevity of inoculation and consensus messaging effects. J Environ Psychol. 2020; 70 :101455. doi: 10.1016/j.jenvp.2020.101455. [ CrossRef ] [ Google Scholar ]
  • Mahabub A. A robust technique of fake news detection using ensemble voting classifier and comparison with other classifiers. SN Applied Sciences. 2020; 2 (4):1–9. doi: 10.1007/s42452-020-2326-y. [ CrossRef ] [ Google Scholar ]
  • Mahbub S, Pardede E, Kayes A, Rahayu W. Controlling astroturfing on the internet: a survey on detection techniques and research challenges. Int J Web Grid Serv. 2019; 15 (2):139–158. doi: 10.1504/IJWGS.2019.099561. [ CrossRef ] [ Google Scholar ]
  • Marsden C, Meyer T, Brown I. Platform values and democratic elections: how can the law regulate digital disinformation? Comput Law Secur Rev. 2020; 36 :105373. doi: 10.1016/j.clsr.2019.105373. [ CrossRef ] [ Google Scholar ]
  • Masciari E, Moscato V, Picariello A, Sperlí G (2020) Detecting fake news by image analysis. In: Proceedings of the 24th symposium on international database engineering and applications, pp 1–5. 10.1145/3410566.3410599
  • Mazzeo V, Rapisarda A. Investigating fake and reliable news sources using complex networks analysis. Front Phys. 2022; 10 :886544. doi: 10.3389/fphy.2022.886544. [ CrossRef ] [ Google Scholar ]
  • McGrew S. Learning to evaluate: an intervention in civic online reasoning. Comput Educ. 2020; 145 :103711. doi: 10.1016/j.compedu.2019.103711. [ CrossRef ] [ Google Scholar ]
  • McGrew S, Breakstone J, Ortega T, Smith M, Wineburg S. Can students evaluate online sources? Learning from assessments of civic online reasoning. Theory Res Soc Educ. 2018; 46 (2):165–193. doi: 10.1080/00933104.2017.1416320. [ CrossRef ] [ Google Scholar ]
  • Meel P, Vishwakarma DK. Fake news, rumor, information pollution in social media and web: a contemporary survey of state-of-the-arts, challenges and opportunities. Expert Syst Appl. 2020; 153 :112986. doi: 10.1016/j.eswa.2019.112986. [ CrossRef ] [ Google Scholar ]
  • Meese J, Frith J, Wilken R. Covid-19, 5G conspiracies and infrastructural futures. Media Int Aust. 2020; 177 (1):30–46. doi: 10.1177/1329878X20952165. [ CrossRef ] [ Google Scholar ]
  • Metzger MJ, Hartsell EH, Flanagin AJ. Cognitive dissonance or credibility? A comparison of two theoretical explanations for selective exposure to partisan news. Commun Res. 2020; 47 (1):3–28. doi: 10.1177/0093650215613136. [ CrossRef ] [ Google Scholar ]
  • Micallef N, He B, Kumar S, Ahamad M, Memon N (2020) The role of the crowd in countering misinformation: a case study of the Covid-19 infodemic. arXiv preprint arXiv:2011.05773
  • Mihailidis P, Viotty S. Spreadable spectacle in digital culture: civic expression, fake news, and the role of media literacies in “post-fact society. Am Behav Sci. 2017; 61 (4):441–454. doi: 10.1177/0002764217701217. [ CrossRef ] [ Google Scholar ]
  • Mishra R (2020) Fake news detection using higher-order user to user mutual-attention progression in propagation paths. In: Proceedings of the IEEE/CVF conference on computer vision and pattern recognition workshops, pp 652–653
  • Mishra S, Shukla P, Agarwal R. Analyzing machine learning enabled fake news detection techniques for diversified datasets. Wirel Commun Mobile Comput. 2022 doi: 10.1155/2022/1575365. [ CrossRef ] [ Google Scholar ]
  • Molina MD, Sundar SS, Le T, Lee D. “Fake news” is not simply false information: a concept explication and taxonomy of online content. Am Behav Sci. 2021; 65 (2):180–212. doi: 10.1177/0002764219878224. [ CrossRef ] [ Google Scholar ]
  • Moro C, Birt JR (2022) Review bombing is a dirty practice, but research shows games do benefit from online feedback. Conversation. https://research.bond.edu.au/en/publications/review-bombing-is-a-dirty-practice-but-research-shows-games-do-be
  • Mustafaraj E, Metaxas PT (2017) The fake news spreading plague: was it preventable? In: Proceedings of the 2017 ACM on web science conference, pp 235–239. 10.1145/3091478.3091523
  • Nagel TW. Measuring fake news acumen using a news media literacy instrument. J Media Liter Educ. 2022; 14 (1):29–42. doi: 10.23860/JMLE-2022-14-1-3. [ CrossRef ] [ Google Scholar ]
  • Nakov P (2020) Can we spot the “fake news” before it was even written? arXiv preprint arXiv:2008.04374
  • Nekmat E. Nudge effect of fact-check alerts: source influence and media skepticism on sharing of news misinformation in social media. Soc Media Soc. 2020 doi: 10.1177/2056305119897322. [ CrossRef ] [ Google Scholar ]
  • Nygren T, Brounéus F, Svensson G. Diversity and credibility in young people’s news feeds: a foundation for teaching and learning citizenship in a digital era. J Soc Sci Educ. 2019; 18 (2):87–109. doi: 10.4119/jsse-917. [ CrossRef ] [ Google Scholar ]
  • Nyhan B, Reifler J. Displacing misinformation about events: an experimental test of causal corrections. J Exp Polit Sci. 2015; 2 (1):81–93. doi: 10.1017/XPS.2014.22. [ CrossRef ] [ Google Scholar ]
  • Nyhan B, Porter E, Reifler J, Wood TJ. Taking fact-checks literally but not seriously? The effects of journalistic fact-checking on factual beliefs and candidate favorability. Polit Behav. 2020; 42 (3):939–960. doi: 10.1007/s11109-019-09528-x. [ CrossRef ] [ Google Scholar ]
  • Nyow NX, Chua HN (2019) Detecting fake news with tweets’ properties. In: 2019 IEEE conference on application, information and network security (AINS), IEEE, pp 24–29. 10.1109/AINS47559.2019.8968706
  • Ochoa IS, de Mello G, Silva LA, Gomes AJ, Fernandes AM, Leithardt VRQ (2019) Fakechain: a blockchain architecture to ensure trust in social media networks. In: International conference on the quality of information and communications technology. Springer, Berlin, pp 105–118. 10.1007/978-3-030-29238-6_8
  • Ozbay FA, Alatas B. Fake news detection within online social media using supervised artificial intelligence algorithms. Physica A. 2020; 540 :123174. doi: 10.1016/j.physa.2019.123174. [ CrossRef ] [ Google Scholar ]
  • Ozturk P, Li H, Sakamoto Y (2015) Combating rumor spread on social media: the effectiveness of refutation and warning. In: 2015 48th Hawaii international conference on system sciences, IEEE, pp 2406–2414. 10.1109/HICSS.2015.288
  • Parikh SB, Atrey PK (2018) Media-rich fake news detection: a survey. In: 2018 IEEE conference on multimedia information processing and retrieval (MIPR), IEEE, pp 436–441.10.1109/MIPR.2018.00093
  • Parrish K (2018) Deep learning & machine learning: what’s the difference? Online: https://parsers.me/deep-learning-machine-learning-whats-the-difference/ . Accessed 20 May 2020
  • Paschen J. Investigating the emotional appeal of fake news using artificial intelligence and human contributions. J Prod Brand Manag. 2019; 29 (2):223–233. doi: 10.1108/JPBM-12-2018-2179. [ CrossRef ] [ Google Scholar ]
  • Pathak A, Srihari RK (2019) Breaking! Presenting fake news corpus for automated fact checking. In: Proceedings of the 57th annual meeting of the association for computational linguistics: student research workshop, pp 357–362
  • Peng J, Detchon S, Choo KKR, Ashman H. Astroturfing detection in social media: a binary n-gram-based approach. Concurr Comput: Pract Exp. 2017; 29 (17):e4013. doi: 10.1002/cpe.4013. [ CrossRef ] [ Google Scholar ]
  • Pennycook G, Rand DG. Fighting misinformation on social media using crowdsourced judgments of news source quality. Proc Natl Acad Sci. 2019; 116 (7):2521–2526. doi: 10.1073/pnas.1806781116. [ PMC free article ] [ PubMed ] [ CrossRef ] [ Google Scholar ]
  • Pennycook G, Rand DG. Who falls for fake news? The roles of bullshit receptivity, overclaiming, familiarity, and analytic thinking. J Pers. 2020; 88 (2):185–200. doi: 10.1111/jopy.12476. [ PubMed ] [ CrossRef ] [ Google Scholar ]
  • Pennycook G, Bear A, Collins ET, Rand DG. The implied truth effect: attaching warnings to a subset of fake news headlines increases perceived accuracy of headlines without warnings. Manag Sci. 2020; 66 (11):4944–4957. doi: 10.1287/mnsc.2019.3478. [ CrossRef ] [ Google Scholar ]
  • Pennycook G, McPhetres J, Zhang Y, Lu JG, Rand DG. Fighting Covid-19 misinformation on social media: experimental evidence for a scalable accuracy-nudge intervention. Psychol Sci. 2020; 31 (7):770–780. doi: 10.1177/0956797620939054. [ PMC free article ] [ PubMed ] [ CrossRef ] [ Google Scholar ]
  • Potthast M, Kiesel J, Reinartz K, Bevendorff J, Stein B (2017) A stylometric inquiry into hyperpartisan and fake news. arXiv preprint arXiv:1702.05638
  • Previti M, Rodriguez-Fernandez V, Camacho D, Carchiolo V, Malgeri M (2020) Fake news detection using time series and user features classification. In: International conference on the applications of evolutionary computation (Part of EvoStar), Springer, Berlin, pp 339–353. 10.1007/978-3-030-43722-0_22
  • Przybyla P (2020) Capturing the style of fake news. In: Proceedings of the AAAI conference on artificial intelligence, pp 490–497. 10.1609/aaai.v34i01.5386
  • Qayyum A, Qadir J, Janjua MU, Sher F. Using blockchain to rein in the new post-truth world and check the spread of fake news. IT Prof. 2019; 21 (4):16–24. doi: 10.1109/MITP.2019.2910503. [ CrossRef ] [ Google Scholar ]
  • Qian F, Gong C, Sharma K, Liu Y (2018) Neural user response generator: fake news detection with collective user intelligence. In: IJCAI, vol 18, pp 3834–3840. 10.24963/ijcai.2018/533
  • Raza S, Ding C. Fake news detection based on news content and social contexts: a transformer-based approach. Int J Data Sci Anal. 2022; 13 (4):335–362. doi: 10.1007/s41060-021-00302-z. [ PMC free article ] [ PubMed ] [ CrossRef ] [ Google Scholar ]
  • Ricard J, Medeiros J (2020) Using misinformation as a political weapon: Covid-19 and Bolsonaro in Brazil. Harv Kennedy School misinformation Rev 1(3). https://misinforeview.hks.harvard.edu/article/using-misinformation-as-a-political-weapon-covid-19-and-bolsonaro-in-brazil/
  • Roozenbeek J, van der Linden S. Fake news game confers psychological resistance against online misinformation. Palgrave Commun. 2019; 5 (1):1–10. doi: 10.1057/s41599-019-0279-9. [ CrossRef ] [ Google Scholar ]
  • Roozenbeek J, van der Linden S, Nygren T. Prebunking interventions based on the psychological theory of “inoculation” can reduce susceptibility to misinformation across cultures. Harv Kennedy School Misinformation Rev. 2020 doi: 10.37016//mr-2020-008. [ CrossRef ] [ Google Scholar ]
  • Roozenbeek J, Schneider CR, Dryhurst S, Kerr J, Freeman AL, Recchia G, Van Der Bles AM, Van Der Linden S. Susceptibility to misinformation about Covid-19 around the world. R Soc Open Sci. 2020; 7 (10):201199. doi: 10.1098/rsos.201199. [ PMC free article ] [ PubMed ] [ CrossRef ] [ Google Scholar ]
  • Rubin VL, Conroy N, Chen Y, Cornwell S (2016) Fake news or truth? Using satirical cues to detect potentially misleading news. In: Proceedings of the second workshop on computational approaches to deception detection, pp 7–17
  • Ruchansky N, Seo S, Liu Y (2017) Csi: a hybrid deep model for fake news detection. In: Proceedings of the 2017 ACM on conference on information and knowledge management, pp 797–806. 10.1145/3132847.3132877
  • Schuyler AJ (2019) Regulating facts: a procedural framework for identifying, excluding, and deterring the intentional or knowing proliferation of fake news online. Univ Ill JL Technol Pol’y, vol 2019, pp 211–240
  • Shae Z, Tsai J (2019) AI blockchain platform for trusting news. In: 2019 IEEE 39th international conference on distributed computing systems (ICDCS), IEEE, pp 1610–1619. 10.1109/ICDCS.2019.00160
  • Shang W, Liu M, Lin W, Jia M (2018) Tracing the source of news based on blockchain. In: 2018 IEEE/ACIS 17th international conference on computer and information science (ICIS), IEEE, pp 377–381. 10.1109/ICIS.2018.8466516
  • Shao C, Ciampaglia GL, Flammini A, Menczer F (2016) Hoaxy: A platform for tracking online misinformation. In: Proceedings of the 25th international conference companion on world wide web, pp 745–750. 10.1145/2872518.2890098
  • Shao C, Ciampaglia GL, Varol O, Yang KC, Flammini A, Menczer F. The spread of low-credibility content by social bots. Nat Commun. 2018; 9 (1):1–9. doi: 10.1038/s41467-018-06930-7. [ PMC free article ] [ PubMed ] [ CrossRef ] [ Google Scholar ]
  • Shao C, Hui PM, Wang L, Jiang X, Flammini A, Menczer F, Ciampaglia GL. Anatomy of an online misinformation network. PLoS ONE. 2018; 13 (4):e0196087. doi: 10.1371/journal.pone.0196087. [ PMC free article ] [ PubMed ] [ CrossRef ] [ Google Scholar ]
  • Sharma K, Qian F, Jiang H, Ruchansky N, Zhang M, Liu Y. Combating fake news: a survey on identification and mitigation techniques. ACM Trans Intell Syst Technol (TIST) 2019; 10 (3):1–42. doi: 10.1145/3305260. [ CrossRef ] [ Google Scholar ]
  • Sharma K, Seo S, Meng C, Rambhatla S, Liu Y (2020) Covid-19 on social media: analyzing misinformation in Twitter conversations. arXiv preprint arXiv:2003.12309
  • Shen C, Kasra M, Pan W, Bassett GA, Malloch Y, O’Brien JF. Fake images: the effects of source, intermediary, and digital media literacy on contextual assessment of image credibility online. New Media Soc. 2019; 21 (2):438–463. doi: 10.1177/1461444818799526. [ CrossRef ] [ Google Scholar ]
  • Sherman IN, Redmiles EM, Stokes JW (2020) Designing indicators to combat fake media. arXiv preprint arXiv:2010.00544
  • Shi P, Zhang Z, Choo KKR. Detecting malicious social bots based on clickstream sequences. IEEE Access. 2019; 7 :28855–28862. doi: 10.1109/ACCESS.2019.2901864. [ CrossRef ] [ Google Scholar ]
  • Shu K, Sliva A, Wang S, Tang J, Liu H. Fake news detection on social media: a data mining perspective. ACM SIGKDD Explor Newsl. 2017; 19 (1):22–36. doi: 10.1145/3137597.3137600. [ CrossRef ] [ Google Scholar ]
  • Shu K, Mahudeswaran D, Wang S, Lee D, Liu H (2018a) Fakenewsnet: a data repository with news content, social context and spatialtemporal information for studying fake news on social media. arXiv preprint arXiv:1809.01286 , 10.1089/big.2020.0062 [ PubMed ]
  • Shu K, Wang S, Liu H (2018b) Understanding user profiles on social media for fake news detection. In: 2018 IEEE conference on multimedia information processing and retrieval (MIPR), IEEE, pp 430–435. 10.1109/MIPR.2018.00092
  • Shu K, Wang S, Liu H (2019a) Beyond news contents: the role of social context for fake news detection. In: Proceedings of the twelfth ACM international conference on web search and data mining, pp 312–320. 10.1145/3289600.3290994
  • Shu K, Zhou X, Wang S, Zafarani R, Liu H (2019b) The role of user profiles for fake news detection. In: Proceedings of the 2019 IEEE/ACM international conference on advances in social networks analysis and mining, pp 436–439. 10.1145/3341161.3342927
  • Shu K, Bhattacharjee A, Alatawi F, Nazer TH, Ding K, Karami M, Liu H. Combating disinformation in a social media age. Wiley Interdiscip Rev: Data Min Knowl Discov. 2020; 10 (6):e1385. doi: 10.1002/widm.1385. [ CrossRef ] [ Google Scholar ]
  • Shu K, Mahudeswaran D, Wang S, Liu H. Hierarchical propagation networks for fake news detection: investigation and exploitation. Proc Int AAAI Conf Web Soc Media AAAI Press. 2020; 14 :626–637. [ Google Scholar ]
  • Shu K, Wang S, Lee D, Liu H (2020c) Mining disinformation and fake news: concepts, methods, and recent advancements. In: Disinformation, misinformation, and fake news in social media. Springer, Berlin, pp 1–19 10.1007/978-3-030-42699-6_1
  • Shu K, Zheng G, Li Y, Mukherjee S, Awadallah AH, Ruston S, Liu H (2020d) Early detection of fake news with multi-source weak social supervision. In: ECML/PKDD (3), pp 650–666
  • Singh VK, Ghosh I, Sonagara D. Detecting fake news stories via multimodal analysis. J Am Soc Inf Sci. 2021; 72 (1):3–17. doi: 10.1002/asi.24359. [ CrossRef ] [ Google Scholar ]
  • Sintos S, Agarwal PK, Yang J (2019) Selecting data to clean for fact checking: minimizing uncertainty vs. maximizing surprise. Proc VLDB Endowm 12(13), 2408–2421. 10.14778/3358701.3358708 [ CrossRef ]
  • Snow J (2017) Can AI win the war against fake news? MIT Technology Review Online: https://www.technologyreview.com/s/609717/can-ai-win-the-war-against-fake-news/ . Accessed 3 Oct. 2020
  • Song G, Kim S, Hwang H, Lee K (2019) Blockchain-based notarization for social media. In: 2019 IEEE international conference on consumer clectronics (ICCE), IEEE, pp 1–2 10.1109/ICCE.2019.8661978
  • Starbird K, Arif A, Wilson T (2019) Disinformation as collaborative work: Surfacing the participatory nature of strategic information operations. In: Proceedings of the ACM on human–computer interaction, vol 3(CSCW), pp 1–26 10.1145/3359229
  • Sterret D, Malato D, Benz J, Kantor L, Tompson T, Rosenstiel T, Sonderman J, Loker K, Swanson E (2018) Who shared it? How Americans decide what news to trust on social media. Technical report, Norc Working Paper Series, WP-2018-001, pp 1–24
  • Sutton RM, Douglas KM. Conspiracy theories and the conspiracy mindset: implications for political ideology. Curr Opin Behav Sci. 2020; 34 :118–122. doi: 10.1016/j.cobeha.2020.02.015. [ CrossRef ] [ Google Scholar ]
  • Tandoc EC, Jr, Thomas RJ, Bishop L. What is (fake) news? Analyzing news values (and more) in fake stories. Media Commun. 2021; 9 (1):110–119. doi: 10.17645/mac.v9i1.3331. [ CrossRef ] [ Google Scholar ]
  • Tchakounté F, Faissal A, Atemkeng M, Ntyam A. A reliable weighting scheme for the aggregation of crowd intelligence to detect fake news. Information. 2020; 11 (6):319. doi: 10.3390/info11060319. [ CrossRef ] [ Google Scholar ]
  • Tchechmedjiev A, Fafalios P, Boland K, Gasquet M, Zloch M, Zapilko B, Dietze S, Todorov K (2019) Claimskg: a knowledge graph of fact-checked claims. In: International semantic web conference. Springer, Berlin, pp 309–324 10.1007/978-3-030-30796-7_20
  • Treen KMd, Williams HT, O’Neill SJ. Online misinformation about climate change. Wiley Interdiscip Rev Clim Change. 2020; 11 (5):e665. doi: 10.1002/wcc.665. [ CrossRef ] [ Google Scholar ]
  • Tsang SJ. Motivated fake news perception: the impact of news sources and policy support on audiences’ assessment of news fakeness. J Mass Commun Q. 2020 doi: 10.1177/1077699020952129. [ CrossRef ] [ Google Scholar ]
  • Tschiatschek S, Singla A, Gomez Rodriguez M, Merchant A, Krause A (2018) Fake news detection in social networks via crowd signals. In: Companion proceedings of the the web conference 2018, pp 517–524. 10.1145/3184558.3188722
  • Uppada SK, Manasa K, Vidhathri B, Harini R, Sivaselvan B. Novel approaches to fake news and fake account detection in OSNS: user social engagement and visual content centric model. Soc Netw Anal Min. 2022; 12 (1):1–19. doi: 10.1007/s13278-022-00878-9. [ PMC free article ] [ PubMed ] [ CrossRef ] [ Google Scholar ]
  • Van der Linden S, Roozenbeek J (2020) Psychological inoculation against fake news. In: Accepting, sharing, and correcting misinformation, the psychology of fake news. 10.4324/9780429295379-11
  • Van der Linden S, Panagopoulos C, Roozenbeek J. You are fake news: political bias in perceptions of fake news. Media Cult Soc. 2020; 42 (3):460–470. doi: 10.1177/0163443720906992. [ CrossRef ] [ Google Scholar ]
  • Valenzuela S, Muñiz C, Santos M. Social media and belief in misinformation in mexico: a case of maximal panic, minimal effects? Int J Press Polit. 2022 doi: 10.1177/19401612221088988. [ CrossRef ] [ Google Scholar ]
  • Vasu N, Ang B, Teo TA, Jayakumar S, Raizal M, Ahuja J (2018) Fake news: national security in the post-truth era. RSIS
  • Vereshchaka A, Cosimini S, Dong W (2020) Analyzing and distinguishing fake and real news to mitigate the problem of disinformation. In: Computational and mathematical organization theory, pp 1–15. 10.1007/s10588-020-09307-8
  • Verstraete M, Bambauer DE, Bambauer JR (2017) Identifying and countering fake news. Arizona legal studies discussion paper 73(17-15). 10.2139/ssrn.3007971
  • Vilmer J, Escorcia A, Guillaume M, Herrera J (2018) Information manipulation: a challenge for our democracies. In: Report by the Policy Planning Staff (CAPS) of the ministry for europe and foreign affairs, and the institute for strategic research (RSEM) of the Ministry for the Armed Forces
  • Vishwakarma DK, Varshney D, Yadav A. Detection and veracity analysis of fake news via scrapping and authenticating the web search. Cogn Syst Res. 2019; 58 :217–229. doi: 10.1016/j.cogsys.2019.07.004. [ CrossRef ] [ Google Scholar ]
  • Vlachos A, Riedel S (2014) Fact checking: task definition and dataset construction. In: Proceedings of the ACL 2014 workshop on language technologies and computational social science, pp 18–22. 10.3115/v1/W14-2508
  • von der Weth C, Abdul A, Fan S, Kankanhalli M (2020) Helping users tackle algorithmic threats on social media: a multimedia research agenda. In: Proceedings of the 28th ACM international conference on multimedia, pp 4425–4434. 10.1145/3394171.3414692
  • Vosoughi S, Roy D, Aral S. The spread of true and false news online. Science. 2018; 359 (6380):1146–1151. doi: 10.1126/science.aap9559. [ PubMed ] [ CrossRef ] [ Google Scholar ]
  • Vraga EK, Bode L. Using expert sources to correct health misinformation in social media. Sci Commun. 2017; 39 (5):621–645. doi: 10.1177/1075547017731776. [ CrossRef ] [ Google Scholar ]
  • Waldman AE. The marketplace of fake news. Univ Pa J Const Law. 2017; 20 :845. [ Google Scholar ]
  • Wang WY (2017) “Liar, liar pants on fire”: a new benchmark dataset for fake news detection. arXiv preprint arXiv:1705.00648
  • Wang L, Wang Y, de Melo G, Weikum G. Understanding archetypes of fake news via fine-grained classification. Soc Netw Anal Min. 2019; 9 (1):1–17. doi: 10.1007/s13278-019-0580-z. [ CrossRef ] [ Google Scholar ]
  • Wang Y, Han H, Ding Y, Wang X, Liao Q (2019b) Learning contextual features with multi-head self-attention for fake news detection. In: International conference on cognitive computing. Springer, Berlin, pp 132–142. 10.1007/978-3-030-23407-2_11
  • Wang Y, McKee M, Torbica A, Stuckler D. Systematic literature review on the spread of health-related misinformation on social media. Soc Sci Med. 2019; 240 :112552. doi: 10.1016/j.socscimed.2019.112552. [ PMC free article ] [ PubMed ] [ CrossRef ] [ Google Scholar ]
  • Wang Y, Yang W, Ma F, Xu J, Zhong B, Deng Q, Gao J (2020) Weak supervision for fake news detection via reinforcement learning. In: Proceedings of the AAAI conference on artificial intelligence, pp 516–523. 10.1609/aaai.v34i01.5389
  • Wardle C (2017) Fake news. It’s complicated. Online: https://medium.com/1st-draft/fake-news-its-complicated-d0f773766c79 . Accessed 3 Oct 2020
  • Wardle C. The need for smarter definitions and practical, timely empirical research on information disorder. Digit J. 2018; 6 (8):951–963. doi: 10.1080/21670811.2018.1502047. [ CrossRef ] [ Google Scholar ]
  • Wardle C, Derakhshan H. Information disorder: toward an interdisciplinary framework for research and policy making. Council Eur Rep. 2017; 27 :1–107. [ Google Scholar ]
  • Weiss AP, Alwan A, Garcia EP, Garcia J. Surveying fake news: assessing university faculty’s fragmented definition of fake news and its impact on teaching critical thinking. Int J Educ Integr. 2020; 16 (1):1–30. doi: 10.1007/s40979-019-0049-x. [ CrossRef ] [ Google Scholar ]
  • Wu L, Liu H (2018) Tracing fake-news footprints: characterizing social media messages by how they propagate. In: Proceedings of the eleventh ACM international conference on web search and data mining, pp 637–645. 10.1145/3159652.3159677
  • Wu L, Rao Y (2020) Adaptive interaction fusion networks for fake news detection. arXiv preprint arXiv:2004.10009
  • Wu L, Morstatter F, Carley KM, Liu H. Misinformation in social media: definition, manipulation, and detection. ACM SIGKDD Explor Newsl. 2019; 21 (2):80–90. doi: 10.1145/3373464.3373475. [ CrossRef ] [ Google Scholar ]
  • Wu Y, Ngai EW, Wu P, Wu C. Fake news on the internet: a literature review, synthesis and directions for future research. Intern Res. 2022 doi: 10.1108/INTR-05-2021-0294. [ CrossRef ] [ Google Scholar ]
  • Xu K, Wang F, Wang H, Yang B. Detecting fake news over online social media via domain reputations and content understanding. Tsinghua Sci Technol. 2019; 25 (1):20–27. doi: 10.26599/TST.2018.9010139. [ CrossRef ] [ Google Scholar ]
  • Yang F, Pentyala SK, Mohseni S, Du M, Yuan H, Linder R, Ragan ED, Ji S, Hu X (2019a) Xfake: explainable fake news detector with visualizations. In: The world wide web conference, pp 3600–3604. 10.1145/3308558.3314119
  • Yang X, Li Y, Lyu S (2019b) Exposing deep fakes using inconsistent head poses. In: ICASSP 2019-2019 IEEE international conference on acoustics, speech and signal processing (ICASSP), IEEE, pp 8261–8265. 10.1109/ICASSP.2019.8683164
  • Yaqub W, Kakhidze O, Brockman ML, Memon N, Patil S (2020) Effects of credibility indicators on social media news sharing intent. In: Proceedings of the 2020 CHI conference on human factors in computing systems, pp 1–14. 10.1145/3313831.3376213
  • Yavary A, Sajedi H, Abadeh MS. Information verification in social networks based on user feedback and news agencies. Soc Netw Anal Min. 2020; 10 (1):1–8. doi: 10.1007/s13278-019-0616-4. [ CrossRef ] [ Google Scholar ]
  • Yazdi KM, Yazdi AM, Khodayi S, Hou J, Zhou W, Saedy S. Improving fake news detection using k-means and support vector machine approaches. Int J Electron Commun Eng. 2020; 14 (2):38–42. doi: 10.5281/zenodo.3669287. [ CrossRef ] [ Google Scholar ]
  • Zannettou S, Sirivianos M, Blackburn J, Kourtellis N. The web of false information: rumors, fake news, hoaxes, clickbait, and various other shenanigans. J Data Inf Qual (JDIQ) 2019; 11 (3):1–37. doi: 10.1145/3309699. [ CrossRef ] [ Google Scholar ]
  • Zellers R, Holtzman A, Rashkin H, Bisk Y, Farhadi A, Roesner F, Choi Y (2019) Defending against neural fake news. arXiv preprint arXiv:1905.12616
  • Zhang X, Ghorbani AA. An overview of online fake news: characterization, detection, and discussion. Inf Process Manag. 2020; 57 (2):102025. doi: 10.1016/j.ipm.2019.03.004. [ CrossRef ] [ Google Scholar ]
  • Zhang J, Dong B, Philip SY (2020) Fakedetector: effective fake news detection with deep diffusive neural network. In: 2020 IEEE 36th international conference on data engineering (ICDE), IEEE, pp 1826–1829. 10.1109/ICDE48307.2020.00180
  • Zhang Q, Lipani A, Liang S, Yilmaz E (2019a) Reply-aided detection of misinformation via Bayesian deep learning. In: The world wide web conference, pp 2333–2343. 10.1145/3308558.3313718
  • Zhang X, Karaman S, Chang SF (2019b) Detecting and simulating artifacts in GAN fake images. In: 2019 IEEE international workshop on information forensics and security (WIFS), IEEE, pp 1–6 10.1109/WIFS47025.2019.9035107
  • Zhou X, Zafarani R. A survey of fake news: fundamental theories, detection methods, and opportunities. ACM Comput Surv (CSUR) 2020; 53 (5):1–40. doi: 10.1145/3395046. [ CrossRef ] [ Google Scholar ]
  • Zubiaga A, Aker A, Bontcheva K, Liakata M, Procter R. Detection and resolution of rumours in social media: a survey. ACM Comput Surv (CSUR) 2018; 51 (2):1–36. doi: 10.1145/3161603. [ CrossRef ] [ Google Scholar ]

upsc-online-classes

Biased media is a real threat to Indian democracy.

Media are the communication outlets or tools used to store and deliver information or data. The term refers to components of the mass media communications industry, such as print media, publishing, the news media, photography, cinema, broadcasting (radio and television), and advertising.

Biased journalists or biased news channels portray every policy and step of the government or a political party as always right, and do not criticize the government for its wrongdoing. This harms democracy and the country, because criticism is the backbone of democracy: it keeps the government on the right track. The media is the fourth pillar of democracy, and it is the media that keeps democracy alive.

Security implications of social media:

Technology is a double-edged sword. The large numbers, speed, anonymity, and secrecy attached to these conversations have far-reaching security implications. Subversive actors have proved in recent years that they are particularly adept at exploiting the Internet and social media to facilitate their activities.

The security implications include:

  • Radicalization: Terrorist groups like Islamic State (ISIS) and Al Qaeda, and countries like Pakistan, have been extremely effective in using social media to radicalize people and position them to commit violent acts.
  • Terrorism: Many terror modules have been busted by police in India whose members were groomed, trained, funded, and armed by their handlers on social networking sites. Worldwide, there are cases of terrorist operations, especially lone-wolf attacks, being coordinated through social media.
  • Incitement of riots: Hateful posts and communal videos can trigger riots. For example, hate videos were circulated before the Muzaffarnagar riots of 2013, and Pakistan's ISI is known to circulate fake videos on social media to incite violence.
  • Cyber-crime: These include cyber bullying or stalking, financial frauds, identity theft etc.
  • Divulgence of sensitive information: Forces posted in sensitive locations are prone to giving away their locations and assets on social media.
  • Influencing democratic processes: The latest emerging threat to national interests is the use of these sites to influence and subvert democratic processes by actors both from within and from enemy countries. Examples recently were seen in US Presidential elections and Brexit referendum.
  • Cyber espionage: Sensitive information from the mobile phones used by security personnel can be stolen using malware and social media.

The following measures should be taken to deal with these threats:

  • Legal provisions: Sections 69 and 69A of the IT Act 2000 give the government the power to intercept and block information, and to punish perpetrators, in the interest of security and public order. The Unlawful Activities (Prevention) Act (UAPA) and the IPC also contain provisions against spreading hatred between groups, inciting violence, and the intent or act of terrorism.
  • Security agencies: Government agencies, including the National Cyber Coordination Centre (NCCC) and the intelligence agencies, actively track terrorist activity on social media. State police forces also have their own social media cells, such as Mumbai's highly effective Social Media Lab.
  • Centralized Monitoring System (CMS): Automates the lawful interception and monitoring of the internet in the country. It has come into operation in Mumbai and will soon be extended to other areas.
  • De-radicalisation: The Union Home Ministry has initiated a counter-radicalisation and de-radicalisation strategy, in sync with cultural, educational, and employment activities, to counter the threat.
  • Guidelines for armed forces: In 2016, the Government of India issued updated guidelines regulating the sharing of secret operational and service data on social media platforms.
  • Monitoring social networking companies: The government also monitors the activities and influence of social networking sites, so that they prevent misuse of their platforms for subversive activities and other cyber threats.
  • International cooperation is being promoted to deal with the often transnational nature of these threats.

In view of the broad threat posed by social media, the Union government needs to formulate a National Social Media Policy. All possible legal, administrative, and security-related measures must be taken to check the use of social media for subversive purposes. However, the competing needs for privacy and security have to be balanced carefully.

COMMENTS

  1. Media Bias In News Report: [Essay Example], 667 words

    Conclusion. Media bias in news reporting is a multifaceted issue that warrants careful examination. While biases are an inherent aspect of human perception, they can be mitigated through conscious efforts by journalists and media organizations. By diversifying newsrooms, fostering transparency, and engaging in robust fact-checking, the media ...

  2. 35 Media Bias Examples for Students (2024)

    By Chris Drew (PhD) / October 1, 2023. Media bias examples include ideological bias, gotcha journalism, negativity bias, and sensationalism. Real-life situations when they occur include when ski resorts spin snow reports to make them sound better, and when cable news shows like Fox and MSNBC overtly prefer one political party over another ...

  3. 80 Media Bias Essay Topic Ideas & Examples

    The mass media is the principal source of political information that has an impact on the citizens. The concept of media bias refers to the disagreement about its impact on the citizens and objectivity of […] Modern Biased Media: Transparency, Independence, and Objectivity Lack. The mass media is considered to be the Fourth Estate by the ...

  4. Media Bias Chart

    The AllSides Media Bias Chart™ is based on our full and growing list of over 1,400 media bias ratings. These ratings inform our balanced newsfeed. The AllSides Media Bias Chart™ is more comprehensive in its methodology than any other media bias chart on the Web. While other media bias charts show you the subjective opinion of just one or a ...

  5. Biased Media is a Real Threat to Indian Democracy

    Biased media poses a grave threat to Indian democracy by undermining the principles of transparency, accountability, and pluralism. Its sensationalism, misinformation, and propaganda have the potential to subvert democratic processes and foster social division. Therefore, it is imperative to address the root causes of biased media and implement ...

  6. Uncovering the essence of diverse media biases from the ...

    Media bias widely exists in the articles published by news media, influencing their readers' perceptions, and bringing prejudice or injustice to society. However, current analysis methods ...

  7. Examples of Media Bias and How to Spot Them

    1. Spin. Spin is a type of media bias that means vague, dramatic or sensational language. When journalists put a "spin" on a story, they stray from objective, measurable facts. Spin is a form of media bias that clouds a reader's view, preventing them from getting a precise take on what happened.

  8. Should you trust media bias charts?

    The AllSides Chart. The AllSides chart focuses solely on political bias. It places sources in one of five boxes — "Left," "Lean Left," "Center," "Lean Right" and "Right ...

  9. Opinion

    From sexism in political reporting ("likability") to racism in crime coverage ( the "crack baby" stereotype ), the media often suffers from the same biases that other Americans do. But we ...

  10. Propaganda, misinformation, and histories of media techniques

    This essay argues that the recent scholarship on misinformation and fake news suffers from a lack of historical contextualization. The fact that misinformation scholarship has, by and large, failed to engage with the history of propaganda and with how propaganda has been studied by media and communication researchers is an empirical detriment to it, and

  11. Biases Make People Vulnerable to Misinformation Spread by Social Media

    Bias in the brain. Cognitive biases originate in the way the brain processes the information that every person encounters every day. The brain can deal with only a finite amount of information ...

  12. A systematic review on media bias detection: What is media bias, how it

    Media bias is defined by researchers as slanted news coverage or internal bias, reflected in news articles. By definition, remarkable media bias is deliberate, intentional, and has a particular purpose and tendency towards a particular perspective, ideology, or result. On the other hand, bias can also be unintentional and even unconscious. (1), (3)

  13. Media Bias Essay

    Media Bias And The Media. or the method for reporting them is termed as Media Bias. It is some of the time said that media tailor the news and as opposed to introducing the truths it shows different purposes of perspectives and sentiments. Media inclination is pervasive or broad and it defies the guidelines of news-casting.

  14. Media Bias

    3. Media bias could be defined as the unjust favoritism and reporting of a certain ideas or standpoint. In the news, social media, and entertainment, such as movies or television, we see media bias through the information these forms of media choose to pay attention to or report ("How to Detect Bias in News Media", 2012).

  15. Understanding media bias: How credible are your sources?

    In most countries, media bias is thought to either lean to the left or right, meaning it either favours liberal or conservative politics. In some countries, media bias can go so far as to completely reflect the ideals of the governing body, for example, in North Korea. In cases such as this, media bias essentially becomes propaganda.

  16. More Americans now see news media gaining influence than in 2020

    Americans' views about the influence of the media in the country have shifted dramatically over the course of a year in which there was much discussion about the news media's role during the election and post-election coverage, the COVID-19 pandemic and protests about racial justice.More Americans now say that news organizations are gaining influence than say their influence is waning, a ...

  17. Media Bias and Democracy

    The media is the fourth pillar in the conception of the State, and thus an integral component of democracy.A functional and healthy democracy must encourage the development of journalism as an institution that can ask difficult questions to the establishment — or as it is commonly known, "speak truth to power".. Article 19 of the Constitution of India guarantees the right to freedom of ...

  18. Media Bias Essays

    Since American's don't have room schedule-wise to investigate each side to every one of... Media Bias Media Influence. Topics: American media, Fox News, Journalism, Main stream media, Mass media, Media reports, News report, People's assessment, Social media, Useless information. 8.

  19. Media bias in the United States

    Claims of media bias generally focus on the idea of media outlets reporting news in a way that seems partisan. Other claims argue that outlets sometimes sacrifice objectivity in pursuit of growth or profits.. Some academics in fields like media studies, journalism, communication, political science and economics have looked at bias of the news media in the United States as a component of their ...

  20. Misinformation and biases infect social media, both intentionally and

    Information on social media can be misleading because of biases in three places - the brain, society and algorithms. Scholars are developing ways to identify and display the effects of these biases.

  21. Media Bias and Democracy in India • Stimson Center

    The results unsurprisingly and unfortunately show the consistent existence of media bias—for example, except for eight newspapers, the papers all express biases far from neutral. And this bias consistently correlates with viewers in India expressing similarly biased social, economic, and security attitudes.

  22. NPR in Turmoil After It Is Accused of Liberal Bias

    Mr. Berliner's essay has ignited a firestorm of criticism of NPR on social media, especially among conservatives who have long accused the network of political bias in its reporting.

  23. Fake news, disinformation and misinformation in social media: a review

    Social media outperformed television as the major news source for young people of the UK and the USA. 10 Moreover, as it is easier to generate and disseminate news online than with traditional media or face to face, large volumes of fake news are produced online for many reasons (Shu et al. 2017).Furthermore, it has been reported in a previous study about the spread of online news on Twitter ...

  24. Biased Media is a real threat to Indian Democracy.

    Biased Media is a real threat to Indian Democracy.. Media are the communication outlets or tools used to store and deliver information or data. The term refers to components of the mass media communications industry, such as print media, publishing, the news media, photography, cinema, broad casting (radio and television) and advertising.

  25. UK media's pro-Israel bias revealed

    May 19, 2024 at 10:00 am. A new report reveals a strong UK media bias on Israel's war on Gaza. Watch on. The portrayal of Israel's war on Gaza in the UK media has sparked concerns about bias ...