Grad Coach

Research Bias 101: What You Need To Know

By: Derek Jansen (MBA) | Expert Reviewed By: Dr Eunice Rautenbach | September 2022

If you’re new to academic research, research bias (also sometimes called researcher bias) is one of the many things you need to understand to avoid compromising your study. If you’re not careful, research bias can ruin the credibility of your study. 

In this post, we’ll unpack the thorny topic of research bias. We’ll explain what it is, look at some common types of research bias and share some tips to help you minimise the potential sources of bias in your research.

Overview: Research Bias 101

  • What is research bias (or researcher bias)?
  • Bias #1 – Selection bias
  • Bias #2 – Analysis bias
  • Bias #3 – Procedural (admin) bias

So, what is research bias?

Well, simply put, research bias is when the researcher – that’s you – intentionally or unintentionally skews the process of a systematic inquiry, which then, of course, skews the outcomes of the study. In other words, research bias is what happens when you affect the results of your research by influencing how you arrive at them.

For example, if you planned to research the effects of remote working arrangements across all levels of an organisation, but your sample consisted mostly of management-level respondents, you’d run into a form of research bias. In this case, excluding input from lower-level staff (in other words, not getting input from all levels of staff) means that the results of the study would be ‘biased’ in favour of a certain perspective – that of management.

Of course, if your research aims and research questions were only interested in the perspectives of managers, this sampling approach wouldn’t be a problem – but that’s not the case here, as there’s a misalignment between the research aims and the sample.

Now, it’s important to remember that research bias isn’t always deliberate or intended. Quite often, it’s just the result of a poorly designed study, or practical challenges in terms of getting a well-rounded, suitable sample. While perfect objectivity is the ideal, some level of bias is generally unavoidable when you’re undertaking a study. That said, as a savvy researcher, it’s your job to reduce potential sources of research bias as much as possible.

To minimise potential bias, you first need to know what to look for. So, next up, we’ll unpack three common types of research bias we see at Grad Coach when reviewing students’ projects: selection bias, analysis bias, and procedural bias. Keep in mind that there are many different forms of bias that can creep into your research, so don’t take this as a comprehensive list – it’s just a useful starting point.


Bias #1 – Selection Bias

First up, we have selection bias. The example we looked at earlier (surveying only management, as opposed to all levels of employees) is a prime example of this type of research bias. In other words, selection bias occurs when your study’s design systematically excludes a relevant group from the research process and, therefore, negatively impacts the quality of the results.

With selection bias, the results of your study will be biased towards the group that it includes or favours, meaning that you’re likely to arrive at prejudiced results. For example, research into government policies that only includes participants who voted for a specific party is going to produce skewed results, as the views of those who voted for other parties will be excluded.

Selection bias commonly occurs in quantitative research, as the sampling strategy adopted can have a major impact on the statistical results. That said, selection bias does, of course, also come up in qualitative research, as there’s still plenty of room for skewed samples. So, it’s important to pay close attention to the makeup of your sample and make sure that you adopt a sampling strategy that aligns with your research aims. Of course, you’ll seldom achieve a perfect sample, and that’s okay. But you need to be aware of how your sample may be skewed and factor this into your thinking when you analyse the resultant data.
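To make this concrete, here’s a minimal Python sketch of how a management-heavy sample skews an estimate. Everything in it – the group sizes, the satisfaction ratings, the 1–10 scale – is invented purely to illustrate the mechanism:

```python
import random

random.seed(42)

# Hypothetical workforce: managers tend to rate remote working higher
# (1-10 scale). All numbers are made up for illustration only.
managers = [random.gauss(8, 1) for _ in range(200)]
staff = [random.gauss(5, 1) for _ in range(800)]
everyone = managers + staff

def mean(xs):
    return sum(xs) / len(xs)

population_mean = mean(everyone)              # what an unbiased sample should estimate
biased_sample = random.sample(managers, 100)  # the survey only reached managers
biased_mean = mean(biased_sample)

print(f"True workforce-wide mean: {population_mean:.2f}")
print(f"Manager-only sample mean: {biased_mean:.2f}")  # systematically too high
```

With these made-up distributions, the manager-only sample overestimates the workforce-wide average by a couple of points – exactly the kind of skew described above.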


Bias #2 – Analysis Bias

Next up, we have analysis bias. Analysis bias occurs when the analysis itself emphasises or discounts certain data points so as to favour a particular result (often the researcher’s own expected result or hypothesis). In other words, analysis bias happens when you prioritise the presentation of data that supports a certain idea or hypothesis, rather than presenting all the data indiscriminately.

For example, if your study was looking into consumer perceptions of a specific product, you might present more analysis of data that reflects positive sentiment toward the product, and give less real estate to the analysis that reflects negative sentiment. In other words, you’d cherry-pick the data that suits your desired outcomes and as a result, you’d create a bias in terms of the information conveyed by the study.

Although this kind of bias is common in quantitative research, it can just as easily occur in qualitative studies, given the amount of interpretive power the researcher has. It may not be intentional or even noticed by the researcher, given the inherent subjectivity in qualitative research. As humans, we naturally search for and interpret information in a way that confirms or supports our prior beliefs or values (in psychology, this is called “confirmation bias”). So, don’t make the mistake of thinking that analysis bias is always intentional, or that you don’t need to worry about it because you’re an honest researcher – it can creep up on anyone.

To reduce the risk of analysis bias, a good starting point is to determine your data analysis strategy in as much detail as possible before you collect your data. In other words, decide, in advance, how you’ll prepare the data and which analysis method you’ll use, and be aware of how different analysis methods can favour different types of data. Also, take the time to reflect on your own preconceived notions and expectations regarding the analysis outcomes (in other words, what you expect to find in the data), so that you’re fully aware of the potential influence you may have on the analysis – and, therefore, can hopefully minimise it.
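The cherry-picking mechanism described above can be simulated in a few lines of Python. The sentiment scores below are entirely invented; the point is simply that analysing only the favourable subset of the data shifts the reported result:

```python
import random

random.seed(0)

# Hypothetical consumer-sentiment scores in [-5, 5]; true average is roughly zero.
scores = [random.uniform(-5, 5) for _ in range(1000)]

honest_mean = sum(scores) / len(scores)

# Cherry-picking: analysing only the data that "supports" the product.
positive_only = [s for s in scores if s > 0]
cherry_picked_mean = sum(positive_only) / len(positive_only)

print(f"All data:             {honest_mean:+.2f}")       # close to zero
print(f"Positive-only subset: {cherry_picked_mean:+.2f}")  # misleadingly favourable
```

The honest analysis lands near zero, while the cherry-picked one reports strongly positive sentiment – same dataset, biased conclusion.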


Bias #3 – Procedural Bias

Last but definitely not least, we have procedural bias, which is also sometimes referred to as administration bias. Procedural bias is easy to overlook, so it’s important to understand what it is and how to avoid it. This type of bias occurs when the administration of the study, especially the data collection aspect, has an impact on either who responds or how they respond.

A practical example of procedural bias would be when participants in a study are required to provide information under some form of constraint. For example, participants might be given insufficient time to complete a survey, resulting in incomplete or hastily filled-out responses that don’t necessarily reflect how they really feel. This can happen surprisingly easily if, for example, you innocently ask your participants to fill out a survey during their lunch break.

Another form of procedural bias can happen when you improperly incentivise participation in a study. For example, offering a reward for completing a survey or interview might incline participants to provide false or inaccurate information just to get through the process as fast as possible and collect their reward. It could also potentially attract a particular type of respondent (a freebie seeker), resulting in a skewed sample that doesn’t really reflect your demographic of interest.

The format of your data collection method can also potentially contribute to procedural bias. If, for example, you decide to host your survey or interviews online, this could unintentionally exclude people who are not particularly tech-savvy, don’t have a suitable device or just don’t have a reliable internet connection. On the flip side, some people might find in-person interviews a bit intimidating (compared to online ones, at least), or they might find the physical environment in which they’re interviewed to be uncomfortable or awkward (maybe the boss is peering into the meeting room, for example). Either way, these factors all result in less useful data.

Although procedural bias is more common in qualitative research, it can come up in any form of fieldwork where you’re actively collecting data from study participants. So, it’s important to consider how your data is being collected and how this might impact respondents. Simply put, you need to take the respondent’s viewpoint and think about the challenges they might face, no matter how small or trivial these might seem. So, it’s always a good idea to have an informal discussion with a handful of potential respondents before you start collecting data and ask for their input regarding your proposed plan upfront.


Let’s Recap

Ok, so let’s do a quick recap. Research bias refers to any instance where the researcher, or the research design, negatively influences the quality of a study’s results, whether intentionally or not.

The three common types of research bias we looked at are:

  • Selection bias – where a skewed sample leads to skewed results
  • Analysis bias – where the analysis method and/or approach leads to biased results, and
  • Procedural bias – where the administration of the study, especially the data collection aspect, has an impact on who responds and how they respond.

As I mentioned, there are many other forms of research bias, but we can only cover a handful here. So, be sure to familiarise yourself with as many potential sources of bias as possible to minimise the risk of research bias in your study.




  • Open access
  • Published: 11 December 2020

Quantifying and addressing the prevalence and bias of study designs in the environmental and social sciences

  • Alec P. Christie   ORCID: orcid.org/0000-0002-8465-8410 1 ,
  • David Abecasis   ORCID: orcid.org/0000-0002-9802-8153 2 ,
  • Mehdi Adjeroud 3 ,
  • Juan C. Alonso   ORCID: orcid.org/0000-0003-0450-7434 4 ,
  • Tatsuya Amano   ORCID: orcid.org/0000-0001-6576-3410 5 ,
  • Alvaro Anton   ORCID: orcid.org/0000-0003-4108-6122 6 ,
  • Barry P. Baldigo   ORCID: orcid.org/0000-0002-9862-9119 7 ,
  • Rafael Barrientos   ORCID: orcid.org/0000-0002-1677-3214 8 ,
  • Jake E. Bicknell   ORCID: orcid.org/0000-0001-6831-627X 9 ,
  • Deborah A. Buhl 10 ,
  • Just Cebrian   ORCID: orcid.org/0000-0002-9916-8430 11 ,
  • Ricardo S. Ceia   ORCID: orcid.org/0000-0001-7078-0178 12 , 13 ,
  • Luciana Cibils-Martina   ORCID: orcid.org/0000-0002-2101-4095 14 , 15 ,
  • Sarah Clarke 16 ,
  • Joachim Claudet   ORCID: orcid.org/0000-0001-6295-1061 17 ,
  • Michael D. Craig 18 , 19 ,
  • Dominique Davoult 20 ,
  • Annelies De Backer   ORCID: orcid.org/0000-0001-9129-9009 21 ,
  • Mary K. Donovan   ORCID: orcid.org/0000-0001-6855-0197 22 , 23 ,
  • Tyler D. Eddy 24 , 25 , 26 ,
  • Filipe M. França   ORCID: orcid.org/0000-0003-3827-1917 27 ,
  • Jonathan P. A. Gardner   ORCID: orcid.org/0000-0002-6943-2413 26 ,
  • Bradley P. Harris 28 ,
  • Ari Huusko 29 ,
  • Ian L. Jones 30 ,
  • Brendan P. Kelaher 31 ,
  • Janne S. Kotiaho   ORCID: orcid.org/0000-0002-4732-784X 32 , 33 ,
  • Adrià López-Baucells   ORCID: orcid.org/0000-0001-8446-0108 34 , 35 , 36 ,
  • Heather L. Major   ORCID: orcid.org/0000-0002-7265-1289 37 ,
  • Aki Mäki-Petäys 38 , 39 ,
  • Beatriz Martín 40 , 41 ,
  • Carlos A. Martín 8 ,
  • Philip A. Martin 1 , 42 ,
  • Daniel Mateos-Molina   ORCID: orcid.org/0000-0002-9383-0593 43 ,
  • Robert A. McConnaughey   ORCID: orcid.org/0000-0002-8537-3695 44 ,
  • Michele Meroni 45 ,
  • Christoph F. J. Meyer   ORCID: orcid.org/0000-0001-9958-8913 34 , 35 , 46 ,
  • Kade Mills 47 ,
  • Monica Montefalcone 48 ,
  • Norbertas Noreika   ORCID: orcid.org/0000-0002-3853-7677 49 , 50 ,
  • Carlos Palacín 4 ,
  • Anjali Pande 26 , 51 , 52 ,
  • C. Roland Pitcher   ORCID: orcid.org/0000-0003-2075-4347 53 ,
  • Carlos Ponce 54 ,
  • Matt Rinella 55 ,
  • Ricardo Rocha   ORCID: orcid.org/0000-0003-2757-7347 34 , 35 , 56 ,
  • María C. Ruiz-Delgado 57 ,
  • Juan J. Schmitter-Soto   ORCID: orcid.org/0000-0003-4736-8382 58 ,
  • Jill A. Shaffer   ORCID: orcid.org/0000-0003-3172-0708 10 ,
  • Shailesh Sharma   ORCID: orcid.org/0000-0002-7918-4070 59 ,
  • Anna A. Sher   ORCID: orcid.org/0000-0002-6433-9746 60 ,
  • Doriane Stagnol 20 ,
  • Thomas R. Stanley 61 ,
  • Kevin D. E. Stokesbury 62 ,
  • Aurora Torres 63 , 64 ,
  • Oliver Tully 16 ,
  • Teppo Vehanen   ORCID: orcid.org/0000-0003-3441-6787 65 ,
  • Corinne Watts 66 ,
  • Qingyuan Zhao 67 &
  • William J. Sutherland 1 , 42  

Nature Communications, volume 11, Article number: 6377 (2020)


  • Environmental impact
  • Scientific community
  • Social sciences

Building trust in science and evidence-based decision-making depends heavily on the credibility of studies and their findings. Researchers employ many different study designs that vary in their risk of bias to evaluate the true effect of interventions or impacts. Here, we empirically quantify, on a large scale, the prevalence of different study designs and the magnitude of bias in their estimates. Randomised designs and controlled observational designs with pre-intervention sampling were used by just 23% of intervention studies in biodiversity conservation, and 36% of intervention studies in social science. We demonstrate, through pairwise within-study comparisons across 49 environmental datasets, that these types of designs usually give less biased estimates than simpler observational designs. We propose a model-based approach to combine study estimates that may suffer from different levels of study design bias, discuss the implications for evidence synthesis, and how to facilitate the use of more credible study designs.


Introduction

The ability of science to reliably guide evidence-based decision-making hinges on the accuracy and credibility of studies and their results 1 , 2 . Well-designed, randomised experiments are widely accepted to yield more credible results than non-randomised, ‘observational studies’ that attempt to approximate and mimic randomised experiments 3 . Randomisation is a key element of study design that is widely used across many disciplines because of its ability to remove confounding biases (through random assignment of the treatment or impact of interest 4 , 5 ). However, ethical, logistical, and economic constraints often prevent the implementation of randomised experiments, whereas non-randomised observational studies have become popular as they take advantage of historical data for new research questions, larger sample sizes, less costly implementation, and more relevant and representative study systems or populations 6 , 7 , 8 , 9 . Observational studies nevertheless face the challenge of accounting for confounding biases without randomisation, which has led to innovations in study design.

We define ‘study design’ as an organised way of collecting data. Importantly, we distinguish between data collection and statistical analysis (as opposed to other authors 10 ) because of the belief that bias introduced by a flawed design is often much more important than bias introduced by statistical analyses. This was emphasised by Light, Singer & Willet 11 (p. 5): “You can’t fix by analysis what you bungled by design…”; and Rubin 3 : “Design trumps analysis.” Nevertheless, the importance of study design has often been overlooked in debates over the inability of researchers to reproduce the original results of published studies (so-called ‘reproducibility crises’ 12 , 13 ) in favour of other issues (e.g., p-hacking 14 and Hypothesizing After Results are Known or ‘HARKing’ 15 ).

To demonstrate the importance of study designs, we can use the following decomposition of estimation error 16:

Estimation error (effect estimate − true effect) = design bias + modelling bias + statistical noise    (1)
This demonstrates that even if we improve the quality of modelling and analysis (to reduce modelling bias through a better bias-variance trade-off 17 ) or increase sample size (to reduce statistical noise), we cannot remove the intrinsic bias introduced by the choice of study design (design bias) unless we collect the data in a different way. The importance of study design in determining the levels of bias in study results therefore cannot be overstated.
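The role of each term can be illustrated with a toy simulation (not from the paper – the effect size, bias, and noise values below are invented). Increasing the sample size shrinks the statistical noise, but the average estimation error converges on the design bias rather than on zero:

```python
import random
import statistics

random.seed(1)

TRUE_EFFECT = 2.0
DESIGN_BIAS = 1.5  # e.g., an uncontrolled design confounded by a background trend

def study_estimate(n):
    """One simulated study: estimate = true effect + design bias + noise."""
    noise = statistics.mean(random.gauss(0, 3) for _ in range(n))
    return TRUE_EFFECT + DESIGN_BIAS + noise

small = [study_estimate(10) for _ in range(2000)]    # small-sample studies
large = [study_estimate(1000) for _ in range(2000)]  # large-sample studies

# More data shrinks the statistical noise (spread of estimates)...
print(statistics.stdev(small), statistics.stdev(large))
# ...but the average estimation error converges on the design bias, not zero.
print(statistics.mean(small) - TRUE_EFFECT, statistics.mean(large) - TRUE_EFFECT)
```

No amount of extra sampling rescues the large-sample studies here: they are precisely wrong, landing tightly around `TRUE_EFFECT + DESIGN_BIAS` instead of the true effect.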

For the purposes of this study we consider six commonly used study designs; differences and connections can be visualised in Fig.  1 . There are three major components that allow us to define these designs: randomisation, sampling before and after the impact of interest occurs, and the use of a control group.

Figure 1

A hypothetical study set-up is shown where the abundance of birds in three impact and control replicates (e.g., fields represented by blocks in a row) are monitored before and after an impact (e.g., ploughing) that occurs in year zero. Different colours represent each study design and illustrate how replicates are sampled. Approaches for calculating an estimate of the true effect of the impact for each design are also shown, along with synonyms from different disciplines.

Of the non-randomised observational designs, the Before-After Control-Impact (BACI) design uses a control group and samples before and after the impact occurs (i.e., in the ‘before-period’ and the ‘after-period’). Its rationale is to explicitly account for pre-existing differences between the impact group (exposed to the impact) and control group in the before-period, which might otherwise bias the estimate of the impact’s true effect 6 , 18 , 19 .

The BACI design improves upon several other commonly used observational study designs, of which there are two uncontrolled designs: After, and Before-After (BA). An After design monitors an impact group in the after-period, while a BA design compares the state of the impact group between the before- and after-periods. Both designs can be expected to yield poor estimates of the impact’s true effect (large design bias; Equation (1)) because changes in the response variable could have occurred without the impact (e.g., due to natural seasonal changes; Fig.  1 ).
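A small simulation (invented numbers, assuming a simple additive background trend) shows how a BA design conflates natural change with the impact’s effect:

```python
import random
import statistics

random.seed(7)

TRUE_EFFECT = -1.0    # the impact reduces abundance by 1 unit
NATURAL_TREND = -2.0  # abundance declines everywhere anyway (e.g., seasonal change)

# Impact group sampled before and after the impact occurs.
impact_before = [random.gauss(10, 1) for _ in range(100)]
impact_after = [random.gauss(10 + NATURAL_TREND + TRUE_EFFECT, 1) for _ in range(100)]

m = statistics.mean

# BA estimate: attributes ALL before-to-after change to the impact.
ba_estimate = m(impact_after) - m(impact_before)

print(f"True effect:  {TRUE_EFFECT}")
print(f"BA estimate:  {ba_estimate:.2f}")  # ~ -3.0: effect and trend conflated
```

The BA estimate absorbs the natural trend wholesale, roughly tripling the apparent effect; a control group sampled over the same period is what would let us subtract that trend out.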

The other observational design is Control-Impact (CI), which compares the impact group and control group in the after-period (Fig.  1 ). This design may suffer from design bias introduced by pre-existing differences between the impact group and control group in the before-period; bias that the BACI design was developed to account for 20 , 21 . These differences have many possible sources, including experimenter bias, logistical and environmental constraints, and various confounding factors (variables that change the propensity of receiving the impact), but can be adjusted for through certain data pre-processing techniques such as matching and stratification 22 .
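As a rough sketch of why matching can help, the following simulation (hypothetical sites, invented effect sizes) compares a naive CI estimate against one where each impact site is paired with the control site most similar on the confounder:

```python
import random

random.seed(3)

# Invented scenario: higher-quality sites are both more likely to receive the
# impact (e.g., protection) and support higher bird abundance -- a confounder.
TRUE_EFFECT = 2.0
sites = []
for _ in range(500):
    quality = random.uniform(0, 10)
    impacted = random.random() < quality / 10  # non-random, confounded assignment
    abundance = 3 * quality + (TRUE_EFFECT if impacted else 0) + random.gauss(0, 1)
    sites.append((quality, impacted, abundance))

impact = [(q, a) for q, i, a in sites if i]
control = [(q, a) for q, i, a in sites if not i]

def mean(xs):
    return sum(xs) / len(xs)

# Naive CI estimate: inflated by pre-existing quality differences between groups.
naive = mean([a for _, a in impact]) - mean([a for _, a in control])

# Matching: compare each impact site with the control site closest in quality.
matched = mean([a - min(control, key=lambda c: abs(c[0] - q))[1]
                for q, a in impact])

print(f"Naive CI: {naive:.2f}  Matched CI: {matched:.2f}  True: {TRUE_EFFECT}")
```

The naive comparison badly overstates the effect because impacted sites started out better; matching on the confounder pulls the estimate much closer to the truth, which is the intuition behind the pre-processing techniques cited above.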

Among the randomised designs, the most commonly used are counterparts to the observational CI and BACI designs: Randomised Control-Impact (R-CI) and Randomised Before-After Control-Impact (R-BACI) designs. The R-CI design, often termed ‘Randomised Controlled Trials’ (RCTs) in medicine and hailed as the ‘gold standard’ 23 , 24 , removes any pre-impact differences in a stochastic sense, resulting in zero design bias (Equation ( 1 )). Similarly, the R-BACI design should also have zero design bias, and the impact group measurements in the before-period could be used to improve the efficiency of the statistical estimator. No randomised equivalents exist of After or BA designs as they are uncontrolled.

It is important to briefly note that there is debate over two major statistical methods that can be used to analyse data collected using BACI and R-BACI designs, and which is superior at reducing modelling bias 25 (Equation (1)). These statistical methods are: (i) Differences in Differences (DiD) estimator; and (ii) covariance adjustment using the before-period response, which is an extension of Analysis of Covariance (ANCOVA) for generalised linear models — herein termed ‘covariance adjustment’ (Fig.  1 ). These estimators rely on different assumptions to obtain unbiased estimates of the impact’s true effect. The DiD estimator assumes that the control group response accurately represents the impact group response had it not been exposed to the impact (‘parallel trends’ 18 , 26 ) whereas covariance adjustment assumes there are no unmeasured confounders and linear model assumptions hold 6 , 27 .
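The difference in assumptions can be seen in a toy simulation (invented data, not the paper’s datasets). In the set-up below, the data-generating process satisfies the covariance-adjustment assumptions but violates parallel trends (after-values regress towards the mean), so covariance adjustment recovers the true effect while DiD is off:

```python
import random
import statistics

random.seed(11)

TRUE_EFFECT = 1.0
n = 200

# Hypothetical BACI data: the impact group starts from a higher baseline, and
# after-values depend linearly on before-values plus the impact's effect.
ctrl_before = [random.gauss(5, 1) for _ in range(n)]
imp_before = [random.gauss(7, 1) for _ in range(n)]
ctrl_after = [0.8 * b + random.gauss(0, 1) for b in ctrl_before]
imp_after = [0.8 * b + TRUE_EFFECT + random.gauss(0, 1) for b in imp_before]

m = statistics.mean

# (i) Differences in Differences: difference of the before->after changes.
did = (m(imp_after) - m(imp_before)) - (m(ctrl_after) - m(ctrl_before))

# (ii) Covariance adjustment (ANCOVA-style): adjust the after-period difference
# by the pooled within-group slope of 'after' on 'before'.
def slope(x, y):
    mx, my = m(x), m(y)
    return (sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
            / sum((xi - mx) ** 2 for xi in x))

b = (slope(ctrl_before, ctrl_after) + slope(imp_before, imp_after)) / 2
ca = (m(imp_after) - m(ctrl_after)) - b * (m(imp_before) - m(ctrl_before))

print(f"DiD: {did:.2f}  Covariance adjustment: {ca:.2f}  True: {TRUE_EFFECT}")
```

Swapping the data-generating process (a common additive trend instead of regression to the mean) would reverse the ranking, which is why the text stresses checking each estimator’s assumptions rather than treating one as universally superior.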

From both theory and Equation (1), with similar sample sizes, randomised designs (R-BACI and R-CI) are expected to be less biased than controlled observational designs with sampling in the before-period (BACI), which in turn should be superior to observational designs without sampling in the before-period (CI) or without a control group (BA and After designs 7 , 28 ). Between randomised designs, we might expect an R-BACI design to perform better than an R-CI design because utilising extra data from before the impact may improve the efficiency of the statistical estimator by explicitly characterising pre-existing differences between the impact group and control group.

Given the likely differences in bias associated with different study designs, concerns have been raised over the use of poorly designed studies in several scientific disciplines 7 , 29 , 30 , 31 , 32 , 33 , 34 , 35 . Some disciplines, such as the social and medical sciences, commonly undertake direct comparisons of results obtained by randomised and non-randomised designs within a single study 36 , 37 , 38 or between multiple studies (between-study comparisons 39 , 40 , 41 ) to specifically understand the influence of study designs on research findings. However, within-study comparisons are limited in their scope (e.g., a single study 42 , 43 ) and between-study comparisons can be confounded by variability in context or study populations 44 . Overall, we lack quantitative estimates of the prevalence of different study designs and the levels of bias associated with their results.

In this work, we aim to first quantify the prevalence of different study designs in the social and environmental sciences. To fill this knowledge gap, we take advantage of summaries for several thousand biodiversity conservation intervention studies in the Conservation Evidence database 45 ( www.conservationevidence.com ) and social intervention studies in systematic reviews by the Campbell Collaboration ( www.campbellcollaboration.org ). We then quantify the levels of bias in estimates obtained by different study designs (R-BACI, R-CI, BACI, BA, and CI) by applying a hierarchical model to approximately 1000 within-study comparisons across 49 raw environmental datasets from a range of fields. We show that R-BACI, R-CI and BACI designs are poorly represented in studies testing biodiversity conservation and social interventions, and that these types of designs tend to give less biased estimates than simpler observational designs. We propose a model-based approach to combine study estimates that may suffer from different levels of study design bias, discuss the implications for evidence synthesis, and how to facilitate the use of more credible study designs.

Prevalence of study designs

We found that the biodiversity-conservation (Conservation Evidence) and social-science (Campbell Collaboration) literature had similarly high proportions of intervention studies that used CI designs and After designs, but low proportions that used R-BACI, BACI, or BA designs (Fig. 2). There were slightly higher proportions of R-CI designs used by intervention studies in social-science systematic reviews than in the biodiversity-conservation literature (Fig. 2). The R-BACI, R-CI, and BACI designs made up 23% of intervention studies for biodiversity conservation, and 36% of intervention studies for social science.

Figure 2

Intervention studies from the biodiversity-conservation literature were screened from the Conservation Evidence database (n = 4,260 studies) and studies from the social-science literature were screened from 32 Campbell Collaboration systematic reviews (n = 1,009 studies; note that studies excluded by these reviews based on their study design were still counted). Percentages for the social-science literature were calculated for each systematic review (blue data points) and then averaged across all 32 systematic reviews (blue bars and black vertical lines represent means and 95% confidence intervals, respectively). Percentages for the biodiversity-conservation literature are absolute values (shown as green bars) calculated from the entire Conservation Evidence database (after excluding any reviews). Source data are provided as a Source Data file. BA = before-after; CI = control-impact; BACI = before-after-control-impact; R-BACI = randomised BACI; R-CI = randomised CI.

Influence of different study designs on study results

In non-randomised datasets, we found that estimates of BACI (with covariance adjustment) and CI designs were very similar, while the point estimates for most other designs often differed substantially in their magnitude and sign. We found similar results in randomised datasets for R-BACI (with covariance adjustment) and R-CI designs. For ~30% of responses, in both non-randomised and randomised datasets, study design estimates differed in their statistical significance (i.e., p < 0.05 versus p ≥ 0.05), except for estimates of (R-)BACI (with covariance adjustment) and (R-)CI designs (Table 1; Fig. 3). It was rare for the 95% confidence intervals of different designs’ estimates not to overlap – except when comparing estimates of BA designs to (R-)BACI (with covariance adjustment) and (R-)CI designs (Table 1). It was even rarer for estimates of different designs to have significantly different signs (i.e., one estimate with entirely negative confidence intervals versus one with entirely positive confidence intervals; Table 1, Fig. 3). Overall, point estimates often differed greatly in their magnitude and, to a lesser extent, in their sign between study designs, but did not differ as greatly when accounting for the uncertainty around point estimates – except in terms of their statistical significance.

Figure 3

t-statistics were obtained from two-sided t-tests of estimates obtained by each design for different responses in each dataset using Generalised Linear Models (see Methods). For randomised datasets, BACI and CI axis labels refer to R-BACI and R-CI designs (denoted by ‘R-’). DiD = Difference in Differences; CA = covariance adjustment. Lines at t-statistic values of 1.96 denote boundaries between cells, and colours of points indicate differences in direction and statistical significance (p < 0.05; grey = same sign and significance, orange = same sign but difference in significance, red = different sign and significance). Numbers refer to the number of responses in each cell. Source data are provided as a Source Data file. BA = Before-After; CI = Control-Impact; BACI = Before-After-Control-Impact.

Levels of bias in estimates of different study designs

We modelled study design bias using a random effect across datasets in a hierarchical Bayesian model; σ is the standard deviation of the bias term, and assuming bias is randomly distributed across datasets and is on average zero, larger values of σ will indicate a greater magnitude of bias (see Methods). We found that, for randomised datasets, estimates of both R-BACI (using covariance adjustment; CA) and R-CI designs were affected by negligible amounts of bias (very small values of σ; Table  2 ). When the R-BACI design used the DiD estimator, it suffered from slightly more bias (slightly larger values of σ), whereas the BA design had very high bias when applied to randomised datasets (very large values of σ; Table  2 ). There was a highly positive correlation between the estimates of R-BACI (using covariance adjustment) and R-CI designs (Ω[R-BACI CA, R-CI] was close to 1; Table  2 ). Estimates of R-BACI using the DiD estimator were also positively correlated with estimates of R-BACI using covariance adjustment and R-CI designs (moderate positive mean values of Ω[R-BACI CA, R-BACI DiD] and Ω[R-BACI DiD, R-CI]; Table  2 ).

For non-randomised datasets, controlled designs (BACI and CI) were substantially less biased (far smaller values of σ) than the uncontrolled BA design (Table  2 ). A BACI design using the DiD estimator was slightly less biased than the BACI design using covariance adjustment, which was, in turn, slightly less biased than the CI design (Table  2 ).

Standard errors estimated by the hierarchical Bayesian model were reasonably accurate for the randomised datasets (see λ in Methods and Table  2 ), whereas there was some underestimation of standard errors and lack-of-fit for non-randomised datasets.

Our approach provides a principled way to quantify the levels of bias associated with different study designs. We found that randomised study designs (R-BACI and R-CI) and observational BACI designs are poorly represented in the environmental and social sciences; collectively, descriptive case studies (the After design), the uncontrolled, observational BA design, and the controlled, observational CI design made up a substantially greater proportion of intervention studies (Fig.  2 ). And yet R-BACI, R-CI and BACI designs were found to be quantifiably less biased than other observational designs.

As expected, the R-CI and R-BACI designs (using a covariance adjustment estimator) performed well; the R-BACI design using a DiD estimator performed slightly less well, probably because the differencing of pre-impact data by this estimator may introduce additional statistical noise compared to covariance adjustment, which controls for these data using a lagged regression variable. Of the observational designs, the BA design performed very poorly (both when analysing randomised and non-randomised data) as expected, being uncontrolled and therefore prone to severe design bias 7 , 28 . The CI design also tended to be more biased than the BACI design (using a DiD estimator) due to pre-existing differences between the impact and control groups. For BACI designs, we recommend that the underlying assumptions of DiD and CA estimators are carefully considered before choosing to apply them to data collected for a specific research question 6 , 27 . Their levels of bias were negligibly different, and their known bracketing relationship suggests they will typically give estimates with the same sign, although their tendency to over- or underestimate the true effect will depend on how well the underlying assumptions of each are met (most notably, parallel trends for DiD and no unmeasured confounders for CA; see Introduction) 6 , 27 . Overall, these findings demonstrate the power of large within-study comparisons to directly quantify differences in the levels of bias associated with different designs.

We must acknowledge that the assumptions of our hierarchical model (that the bias for each design \(j\) is on average zero and normally distributed) cannot be verified without gold standard randomised experiments and that, for observational designs, the model was overdispersed (potentially due to underestimation of statistical error by GLM(M)s or positively correlated design biases). The exact values of our hierarchical model should therefore be treated with appropriate caution, and future research is needed to refine and improve our approach to quantify these biases more precisely. Responses within datasets may also not be independent, as multiple species could interact; the estimates analysed by our hierarchical model are therefore statistically dependent on each other, and although we tried to account for this using a correlation matrix (see Methods, Eq. (3)), this is a limitation of our model. We must also recognise that we collated datasets using non-systematic searches 46, 47 and that our analysis therefore potentially exaggerates the intrinsic biases of observational designs (i.e., our data may disproportionately reflect situations where the BACI design was chosen to account for confounding factors). We nevertheless show that researchers were wise to use the BACI design because it was less biased than CI and BA designs across a wide range of datasets from various environmental systems and locations. Without undertaking costly and time-consuming pre-impact sampling and pilot studies, researchers are also unlikely to know the levels of bias that could affect their results. Finally, we did not consider sample size, although researchers might well use larger sample sizes for CI and BA designs than for BACI designs. This is, however, unlikely to affect our main conclusions, because larger sample sizes could increase type I errors (the false-positive rate) by yielding more precise, but still biased, estimates of the true effect 28.

Our analyses provide several empirically supported recommendations for researchers designing future studies to assess an impact of interest. First, using a controlled and/or randomised design (if possible) was shown to strongly reduce the level of bias in study estimates. Second, when observational designs must be used (because randomisation is not feasible or is too costly), we urge researchers to choose the BACI design over other observational designs—and when that is not possible, to choose the CI design over the uncontrolled BA design. We acknowledge that limited resources, short funding timescales, and ethical or logistical constraints 48 may force researchers to use the CI design (if randomisation and pre-impact sampling are impossible) or the BA design (if appropriate controls cannot be found 28). To facilitate the use of less biased designs, longer-term investments in research effort and funding are required 43. Far greater emphasis on study designs in statistical education 49, and better training and collaboration between researchers, practitioners and methodologists, are needed to improve the design of future studies; for example, the CI design could potentially be improved by pairing or matching the impact and control groups 22, and the BA design by regression discontinuity methods 48, 50. Where the choice of study design is limited, researchers must transparently communicate the limitations and uncertainty associated with their results.

Our findings also have wider implications for evidence synthesis, specifically the exclusion of certain observational study designs from syntheses (the ‘rubbish in, rubbish out’ concept 51, 52). We believe that observational designs should be included in systematic reviews and meta-analyses, but that careful adjustments are needed to account for their potential biases. Exclusion of observational studies often results from subjective, checklist-based ‘Risk of Bias’ or quality assessments of studies (e.g., AMSTAR 2 53, ROBINS-I 54, or GRADE 55) that are not data-driven and often neglect to identify the actual direction, or quantify the magnitude, of possible bias introduced by observational studies when rating the quality of a review’s recommendations. We also found that only a small proportion of studies used randomised designs (R-CI or R-BACI) or observational BACI designs (Fig. 2), suggesting that systematic reviews and meta-analyses risk excluding a substantial proportion of the literature and limiting the scope of their recommendations if such exclusion criteria are used 32, 56, 57. This problem is compounded by the fact that, at least in conservation science, studies using randomised or BACI designs are strongly concentrated in Europe, Australasia, and North America 31. Systematic reviews that rely on these few types of study designs are therefore likely to fail to provide decision makers outside of these regions with the locally relevant recommendations that they prefer 58. The COVID-19 pandemic has highlighted the difficulties in making locally relevant evidence-based decisions using studies conducted in different countries with different demographics and cultures, and on patients of different ages, ethnicities, genetics, and underlying health issues 59.
This problem is also acute for decision-makers working on biodiversity conservation in tropical regions, where the need for conservation is arguably greatest (i.e., where most of Earth’s biodiversity exists 60) but where decision-makers must rely either on a few well-designed studies that are not locally relevant (i.e., have low generalisability) or on more studies that are locally relevant but less well designed 31, 32. Either option could lead decision-makers to make ineffective or inefficient decisions. In the long term, improving the quality and coverage of scientific evidence and evidence syntheses across the world will help solve these issues, but shorter-term solutions for synthesising patchy evidence bases are required.

Our work furthers sorely needed research on how to combine evidence from studies that vary greatly in their design. Our approach is an alternative to conventional meta-analyses, which tend to weight studies only by their sample size or the inverse of their variance 61; when studies vary greatly in their design, weighting by inverse variance or sample size alone is unlikely to account for the different levels of bias introduced by different study designs (see Equation (1)). For example, a BA study could receive a larger weight than a BACI study if it had lower variance, despite our results suggesting that a BA study usually suffers from greater design bias. Our model provides a principled way to weight studies by both their variance and the likely amount of bias introduced by their study design; it is therefore a form of ‘bias-adjusted meta-analysis’ 62, 63, 64, 65, 66. However, instead of relying on elicitation of subjective expert opinions on the bias of each study, we provide a data-driven, empirical quantification of study biases – an important step that has been called for to improve such meta-analytic approaches 65, 66.
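The weighting problem above can be made concrete with a small sketch. This is an illustrative Python example with assumed numbers, not the paper's implementation: a low-variance BA study dominates under plain inverse-variance weighting, but is down-weighted once a design-bias variance term is added to the denominator.

```python
# Hypothetical studies: effect estimates, standard errors, and an assumed
# design-bias SD per design (BA assumed more biased than BACI).
studies = [
    {"design": "BA",   "est": 0.9, "se": 0.05, "bias_sd": 0.50},
    {"design": "BACI", "est": 0.4, "se": 0.20, "bias_sd": 0.10},
]

def iv_weight(s):
    # Conventional inverse-variance weight: ignores design bias entirely.
    return 1 / s["se"] ** 2

def bias_adjusted_weight(s):
    # Adds the design-bias variance, down-weighting more biased designs.
    return 1 / (s["se"] ** 2 + s["bias_sd"] ** 2)

iv = {s["design"]: iv_weight(s) for s in studies}
ba = {s["design"]: bias_adjusted_weight(s) for s in studies}
# iv["BA"] > iv["BACI"], but ba["BA"] < ba["BACI"]: the adjustment
# reverses which study dominates the pooled estimate.
```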

Future research is needed to refine our methodology, but our empirically grounded form of bias-adjusted meta-analysis could be implemented as follows: (1) collate studies for the same true effect, their effect size estimates, standard errors, and the type of study design; (2) enter these data into our hierarchical model, where effect size estimates share the same intercept (the true causal effect), a random effect term due to design bias (whose variance is estimated by the method we used), and a random effect term for statistical noise (whose variance is estimated by the reported standard error of studies); (3) fit this model and estimate the shared intercept (the true effect). Heuristically, this can be thought of as weighting studies by both their design bias and their sampling variance, and could be implemented on a dynamic meta-analysis platform (such as metadataset.com 67). This approach has substantial potential to develop evidence synthesis in fields (such as biodiversity conservation 31, 32) with patchy evidence bases, where reliably synthesising findings from studies that vary greatly in their design is a fundamental and unavoidable challenge.
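The heuristic reading of step (3) can be sketched as a precision-weighted pooled estimate, where each study's weight combines its sampling variance with the design-bias variance estimated for its design. This is an illustrative Python sketch with assumed numbers, not the hierarchical model itself.

```python
# Three hypothetical studies estimating the same true effect.
estimates = [0.6, 0.5, 1.2]     # log response ratios
ses       = [0.10, 0.15, 0.10]  # reported standard errors
bias_sd   = [0.05, 0.05, 0.40]  # assumed design-bias SD for each study's design

# Weight = 1 / (sampling variance + design-bias variance).
weights = [1 / (se**2 + b**2) for se, b in zip(ses, bias_sd)]
pooled = sum(w * e for w, e in zip(weights, estimates)) / sum(weights)
# The third study is precise (small SE) but from a heavily biased design,
# so it is strongly down-weighted and the pooled estimate stays near 0.6.
```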

Our study has highlighted an often overlooked aspect of debates over scientific reproducibility: that the credibility of studies is fundamentally determined by study design. Testing the effectiveness of conservation and social interventions is undoubtedly of great importance given the current challenges facing biodiversity and society in general and the serious need for more evidence-based decision-making 1 , 68 . And yet our findings suggest that quantifiably less biased study designs are poorly represented in the environmental and social sciences. Greater methodological training of researchers and funding for intervention studies, as well as stronger collaborations between methodologists and practitioners is needed to facilitate the use of less biased study designs. Better communication and reporting of the uncertainty associated with different study designs is also needed, as well as more meta-research (the study of research itself) to improve standards of study design 69 . Our hierarchical model provides a principled way to combine studies using a variety of study designs that vary greatly in their risk of bias, enabling us to make more efficient use of patchy evidence bases. Ultimately, we hope that researchers and practitioners testing interventions will think carefully about the types of study designs they use, and we encourage the evidence synthesis community to embrace alternative methods for combining evidence from heterogeneous sets of studies to improve our ability to inform evidence-based decision-making in all disciplines.

Quantifying the use of different designs

We compared the use of different study designs in the literature that quantitatively tested interventions between the fields of biodiversity conservation (4,260 studies collated by Conservation Evidence 45 ) and social science (1,009 studies found by 32 systematic reviews produced by the Campbell Collaboration: www.campbellcollaboration.org ).

Conservation Evidence is a database of intervention studies, each of which has quantitatively tested a conservation intervention (e.g., sowing strips of wildflower seeds on farmland to benefit birds), that is continuously being updated through comprehensive, manual searches of conservation journals for a wide range of fields in biodiversity conservation (e.g., amphibian, bird, peatland, and farmland conservation 45). To obtain the proportion of studies that used each design, we extracted the type of study design from each study in the database in 2019; the study design was determined using a standardised set of criteria (Table 3), and reviews were not included. We checked whether the designs reported in the database accurately reflected the designs in the original publications: for a random subset of 356 studies, 95.1% were accurately described.

Each systematic review produced by the Campbell Collaboration collates and analyses studies that test a specific social intervention; we collated systematic reviews that tested a variety of social interventions across several fields in the social sciences, including education, crime and justice, international development and social welfare (Supplementary Data  1 ). We retrieved systematic reviews produced by the Campbell Collaboration by searching their website ( www.campbellcollaboration.org ) for reviews published between 2013‒2019 (as of 8th September 2019) — we limited the date range as we could not go through every review. As we were interested in the use of study designs in the wider social-science literature, we only considered reviews (32 in total) that contained sufficient information on the number of included and excluded studies that used different study designs. Studies may be excluded from systematic reviews for several reasons, such as their relevance to the scope of the review (e.g., testing a relevant intervention) and their study design. We only considered studies if the sole reason for their exclusion from the systematic review was their study design – i.e., reviews clearly reported that the study was excluded because it used a particular study design, and not because of any other reason, such as its relevance to the review’s research questions. We calculated the proportion of studies that used each design in each systematic review (using the same criteria as for the biodiversity-conservation literature – see Table  3 ) and then averaged these proportions across all systematic reviews.

Within-study comparisons of different study designs

We wanted to make direct within-study comparisons between the estimates obtained by different study designs (e.g., see 38 , 70 , 71 for single within-study comparisons) for many different studies. If a dataset contains data collected using a BACI design, subsets of these data can be used to mimic the use of other study designs (a BA design using only data for the impact group, and a CI design using only data collected after the impact occurred). Similarly, if data were collected using a R-BACI design, subsets of these data can be used to mimic the use of a BA design and a R-CI design. Collecting BACI and R-BACI datasets would therefore allow us to make direct within-study comparisons of the estimates obtained by these designs.
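The subsetting logic described above can be sketched as simple filters over a BACI dataset. This is an illustrative Python sketch with hypothetical records and field names of our choosing; the paper's analyses used R.

```python
# A toy BACI dataset: every combination of period and treatment group.
rows = [
    {"period": "before", "group": "control", "y": 11},
    {"period": "before", "group": "impact",  "y": 10},
    {"period": "after",  "group": "control", "y": 12},
    {"period": "after",  "group": "impact",  "y": 18},
]

# BA design mimicked by using only the impact group (before vs after).
ba_subset = [r for r in rows if r["group"] == "impact"]

# CI design mimicked by using only after-period data (control vs impact).
ci_subset = [r for r in rows if r["period"] == "after"]
```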

We collated BACI and R-BACI datasets by searching the Web of Science Core Collection 72, which included the following citation indexes: Science Citation Index Expanded (SCI-EXPANDED) 1900-present; Social Sciences Citation Index (SSCI) 1900-present; Arts & Humanities Citation Index (A&HCI) 1975-present; Conference Proceedings Citation Index - Science (CPCI-S) 1990-present; Conference Proceedings Citation Index - Social Science & Humanities (CPCI-SSH) 1990-present; Book Citation Index - Science (BKCI-S) 2008-present; Book Citation Index - Social Sciences & Humanities (BKCI-SSH) 2008-present; Emerging Sources Citation Index (ESCI) 2015-present; Current Chemical Reactions (CCR-EXPANDED) 1985-present (includes Institut National de la Propriete Industrielle structure data back to 1840); Index Chemicus (IC) 1993-present. The following search terms were used: [‘BACI’] OR [‘Before-After Control-Impact’], and the search was conducted on the 18th December 2017. Our search returned 674 results, which we then refined by selecting only ‘Article’ as the document type and using only the following Web of Science Categories: ‘Ecology’, ‘Marine Freshwater Biology’, ‘Biodiversity Conservation’, ‘Fisheries’, ‘Oceanography’, ‘Forestry’, ‘Zoology’, ‘Ornithology’, ‘Biology’, ‘Plant Sciences’, ‘Entomology’, ‘Remote Sensing’, ‘Toxicology’ and ‘Soil Science’. This left 579 results, which we then restricted to articles published since 2002 (15 years prior to the search) to give us a realistic opportunity to obtain the raw datasets, reducing this number to 542. We were able to access the abstracts of 521 studies and excluded any that did not test the effect of an environmental intervention or threat using an R-BACI or BACI design with response measures related to the abundance (e.g., density, counts, biomass, cover), reproduction (reproductive success) or size (body length, body mass) of animals or plants.
Many studies did not test a relevant metric (e.g., they measured species richness), did not use a BACI or R-BACI design, or did not test the effect of an intervention or threat — this left 96 studies for which we contacted all corresponding authors to ask for the raw dataset. We were able to fully access 54 raw datasets, but upon closer inspection we found that three of these datasets either: did not use a BACI design; did not use the metrics we specified; or did not provide sufficient data for our analyses. This left 51 datasets in total that we used in our preliminary analyses (Supplementary Data  2 ).

All the datasets were originally collected to evaluate the effect of an environmental intervention or impact. Most of them contained multiple response variables (e.g., different measures for different species, such as abundance or density for species A, B, and C). Within a dataset, we use the term “response” to refer to the estimation of the true effect of an impact on one response variable. There were 1,968 responses in total across 51 datasets. We then excluded 932 responses (resulting in the exclusion of one dataset) where one or more of the four time-period and treatment subsets (Before Control, Before Impact, After Control, and After Impact data) consisted of entirely zero measurements, or two or more of these subsets had more than 90% zero measurements. We also excluded one further dataset as it was the only one to not contain repeated measurements at sites in both the before- and after-periods. This was necessary to generate reliable standard errors when modelling these data. We modelled the remaining 1,036 responses from across 49 datasets (Supplementary Table  1 ).
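The zero-measurement exclusion rule above can be expressed as a short predicate over the four period and treatment subsets. This is an illustrative Python sketch; the function name and data layout are ours, not the paper's.

```python
def exclude_response(subsets):
    """Return True if a response should be excluded.

    subsets: dict mapping the four subset labels (e.g. 'BC', 'BI', 'AC',
    'AI' for Before/After x Control/Impact) to lists of measurements.
    Excluded if any subset is entirely zeros, or if two or more subsets
    are more than 90% zeros.
    """
    frac_zero = {k: sum(v == 0 for v in vals) / len(vals)
                 for k, vals in subsets.items()}
    if any(f == 1.0 for f in frac_zero.values()):
        return True
    return sum(f > 0.9 for f in frac_zero.values()) >= 2
```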

We applied each study design to the appropriate components of each dataset using Generalised Linear Models (GLMs 73, 74) because of their generality and ability to implement the statistical estimators of many different study designs. The model structure of GLMs was adjusted for each response in each dataset based on the study design specified, response measure and dataset structure (Supplementary Table 2). We quantified the effect of the time period for the BA design (After vs Before the impact) and the effect of the treatment type for the CI and R-CI designs (Impact vs Control) on the response variable (Supplementary Table 2). For BACI and R-BACI designs, we implemented two statistical estimators: (1) a DiD estimator that estimated the true effect using an interaction term between time and treatment type; and (2) a covariance adjustment estimator that estimated the true effect using a term for the treatment type with a lagged variable (Supplementary Table 2).

As there were large numbers of responses, we used general a priori rules to specify models for each response; this may have led to some model misspecification, but was unlikely to have substantially affected our pairwise comparison of estimates obtained by different designs. The error family of each GLM was specified based on the nature of the measure used and preliminary data exploration: count measures (e.g., abundance) = poisson; density measures (e.g., biomass or abundance per unit area) = quasipoisson, as data for these measures tended to be overdispersed; percentage measures (e.g., percentage cover) = quasibinomial; and size measures (e.g., body length) = gaussian.
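The a priori family rules above amount to a simple lookup from measure type to GLM error family. This is an illustrative Python sketch (the function name is ours; family names follow R's conventions, as used in the paper's analyses).

```python
def error_family(measure_type):
    """Map a response measure type to its a priori GLM error family."""
    return {
        "count":      "poisson",        # e.g. abundance counts
        "density":    "quasipoisson",   # biomass/abundance per unit area (overdispersed)
        "percentage": "quasibinomial",  # e.g. percentage cover
        "size":       "gaussian",       # e.g. body length or mass
    }[measure_type]
```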

We treated each year or season in which data were collected as independent observations because the implementation of a seasonal term in models is likely to vary on a case-by-case basis; this will depend on the research questions posed by each study and was not feasible for us to consider given the large number of responses we were modelling. The log link function was used for all models to generate a standardised log response ratio as an estimate of the true effect for each response; a fixed effect coefficient (a variable named treatment status; Supplementary Table  2 ) was used to estimate the log response ratio 61 . If the response had at least ten ‘sites’ (independent sampling units) and two measurements per site on average, we used the random effects of subsample (replicates within a site) nested within site to capture the dependence within a site and subsample (i.e., a Generalised Linear Mixed Model or GLMM 73 , 74 was implemented instead of a GLM); otherwise we fitted a GLM with only the fixed effects (Supplementary Table  2 ).
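Two of the rules above can be sketched briefly: the GLMM-versus-GLM decision, and the interpretation of the log-link treatment coefficient as a log response ratio. This is an illustrative Python sketch; the helper name and the fitted coefficient value are ours.

```python
import math

def use_glmm(n_sites, n_total_measurements):
    """Use a GLMM with site/subsample random effects only if there are at
    least 10 sites and at least two measurements per site on average."""
    return n_sites >= 10 and n_total_measurements / n_sites >= 2

# With a log link, the treatment-status coefficient is a log response ratio:
# exponentiating it gives the multiplicative effect of the impact.
coef = 0.693            # hypothetical fitted coefficient
ratio = math.exp(coef)  # ~2: the outcome roughly doubles under the impact
```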

We fitted all models using R version 3.5.1 75 , and packages lme4 76 and MASS 77 . Code to replicate all analyses is available (see Data and Code Availability). We compared the estimates obtained using each study design (both in terms of point estimates and estimates with associated standard error) by their magnitude and sign.

A model-based quantification of the bias in study design estimates

We used a hierarchical Bayesian model motivated by the decomposition in Equation (1) to quantify the bias in different study design estimates. This model takes the estimated effects of impacts and their standard errors as inputs. Let \(\hat \beta _{ij}\) be the true effect estimator in study \(i\) using design \(j\) and \(\hat \sigma _{ij}\) be its estimated standard error from the corresponding GLM or GLMM. Our hierarchical model assumes:

\[ \hat \beta _{ij} = \beta _i + \gamma _{ij} + \varepsilon _{ij}, \qquad \gamma _{ij} \sim \mathcal{N}(0,\,\sigma _j^2), \]
where \(\beta _i\) is the true effect for response \(i\), \(\gamma _{ij}\) is the bias of design \(j\) in response \(i\), and \(\varepsilon _{ij}\) is the sampling noise of the statistical estimator. Although \(\gamma _{ij}\) technically incorporates both the design bias and any misspecification (modelling) bias due to using GLMs or GLMMs (Equation (1)), we expect the modelling bias to be much smaller than the design bias 3, 11. We assume the statistical errors \(\varepsilon _i\) within a response are related to the estimated standard errors through the following joint distribution:

\[ \varepsilon _i \sim \mathcal{N}\left( 0,\; \lambda \,\mathrm{diag}(\hat \sigma _i)\, {\Omega}\, \mathrm{diag}(\hat \sigma _i) \right), \]

where \({\Omega}\) is the correlation matrix for the different estimators in the same response and \(\lambda\) is a scaling factor to account for possible over/under-estimation of the standard errors.

This model effectively quantifies the bias of design \(j\) through the value of \(\sigma _j\) (larger values = more bias), while accounting for within-response correlations using the correlation matrix \({\Omega}\) and for possible under-estimation of the standard errors using \(\lambda\). We ensured that the prior distributions we used had very large variances so they would have only a very small effect on the posterior distribution; the exact specifications of these disperse priors on the variance parameters are provided in the analysis code (see Code availability).

We fitted the hierarchical Bayesian model in R version 3.5.1 using the Bayesian inference package rstan 78 .
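The generative structure of the hierarchical model can be illustrated with a small simulation: designs with larger bias SDs \(\sigma _j\) produce estimates that stray further from the true effect. This is an illustrative Python sketch with assumed parameter values, not the rstan model itself.

```python
import random
import statistics

random.seed(1)

# Assumed design-bias SDs (sigma_j) and a common sampling-noise SD.
sigma_design = {"BACI": 0.1, "CI": 0.3, "BA": 0.6}
se = 0.05

# Simulate estimate = true effect + design bias + sampling noise.
true_effects = [random.gauss(0, 1) for _ in range(2000)]
errors = {j: [] for j in sigma_design}
for beta in true_effects:
    for j, s in sigma_design.items():
        est = beta + random.gauss(0, s) + random.gauss(0, se)
        errors[j].append(est - beta)

# Empirical spread of (estimate - true effect) tracks sigma_j: the more
# biased the design, the further its estimates stray from the truth.
spread = {j: statistics.stdev(e) for j, e in errors.items()}
```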

Data availability

All data analysed in the current study are available from Zenodo: https://doi.org/10.5281/zenodo.3560856. Source data are provided with this paper.

Code availability

All code used in the current study is available from Zenodo: https://doi.org/10.5281/zenodo.3560856.

References

1. Donnelly, C. A. et al. Four principles to make evidence synthesis more useful for policy. Nature 558, 361–364 (2018).
2. McKinnon, M. C., Cheng, S. H., Garside, R., Masuda, Y. J. & Miller, D. C. Sustainability: map the evidence. Nature 528, 185–187 (2015).
3. Rubin, D. B. For objective causal inference, design trumps analysis. Ann. Appl. Stat. 2, 808–840 (2008).
4. Peirce, C. S. & Jastrow, J. On small differences in sensation. Mem. Natl Acad. Sci. 3, 73–83 (1884).
5. Fisher, R. A. Statistical Methods for Research Workers (Oliver and Boyd, 1925).
6. Angrist, J. D. & Pischke, J.-S. Mostly Harmless Econometrics: An Empiricist's Companion (Princeton University Press, 2008).
7. de Palma, A. et al. Challenges with inferring how land-use affects terrestrial biodiversity: study design, time, space and synthesis. In Next Generation Biomonitoring: Part 1, 163–199 (Elsevier Ltd., 2018).
8. Sagarin, R. & Pauchard, A. Observational approaches in ecology open new ground in a changing world. Front. Ecol. Environ. 8, 379–386 (2010).
9. Shadish, W. R., Cook, T. D. & Campbell, D. T. Experimental and Quasi-Experimental Designs for Generalized Causal Inference (Houghton Mifflin, 2002).
10. Rosenbaum, P. R. Design of Observational Studies. Vol. 10 (Springer, 2010).
11. Light, R. J., Singer, J. D. & Willett, J. B. By Design: Planning Research on Higher Education (Harvard University Press, 1990).
12. Ioannidis, J. P. A. Why most published research findings are false. PLOS Med. 2, e124 (2005).
13. Open Science Collaboration. Estimating the reproducibility of psychological science. Science 349, aac4716 (2015).
14. John, L. K., Loewenstein, G. & Prelec, D. Measuring the prevalence of questionable research practices with incentives for truth telling. Psychol. Sci. 23, 524–532 (2012).
15. Kerr, N. L. HARKing: hypothesizing after the results are known. Personal. Soc. Psychol. Rev. 2, 196–217 (1998).
16. Zhao, Q., Keele, L. J. & Small, D. S. Comment: will competition-winning methods for causal inference also succeed in practice? Stat. Sci. 34, 72–76 (2019).
17. Friedman, J., Hastie, T. & Tibshirani, R. The Elements of Statistical Learning. Vol. 1 (Springer Series in Statistics, 2001).
18. Underwood, A. J. Beyond BACI: experimental designs for detecting human environmental impacts on temporal variations in natural populations. Mar. Freshw. Res. 42, 569–587 (1991).
19. Stewart-Oaten, A. & Bence, J. R. Temporal and spatial variation in environmental impact assessment. Ecol. Monogr. 71, 305–339 (2001).
20. Eddy, T. D., Pande, A. & Gardner, J. P. A. Massive differential site-specific and species-specific responses of temperate reef fishes to marine reserve protection. Glob. Ecol. Conserv. 1, 13–26 (2014).
21. Sher, A. A. et al. Native species recovery after reduction of an invasive tree by biological control with and without active removal. Ecol. Eng. 111, 167–175 (2018).
22. Imbens, G. W. & Rubin, D. B. Causal Inference in Statistics, Social, and Biomedical Sciences (Cambridge University Press, 2015).
23. Greenhalgh, T. How to Read a Paper: The Basics of Evidence Based Medicine (John Wiley & Sons, Ltd, 2019).
24. Salmond, S. S. Randomized controlled trials: methodological concepts and critique. Orthopaedic Nursing 27, (2008).
25. Geijzendorffer, I. R. et al. How can global conventions for biodiversity and ecosystem services guide local conservation actions? Curr. Opin. Environ. Sustainability 29, 145–150 (2017).
26. Dimick, J. B. & Ryan, A. M. Methods for evaluating changes in health care policy. JAMA 312, 2401 (2014).
27. Ding, P. & Li, F. A bracketing relationship between difference-in-differences and lagged-dependent-variable adjustment. Political Anal. 27, 605–615 (2019).
28. Christie, A. P. et al. Simple study designs in ecology produce inaccurate estimates of biodiversity responses. J. Appl. Ecol. 56, 2742–2754 (2019).
29. Watson, M. et al. An analysis of the quality of experimental design and reliability of results in tribology research. Wear 426–427, 1712–1718 (2019).
30. Kilkenny, C. et al. Survey of the quality of experimental design, statistical analysis and reporting of research using animals. PLoS ONE 4, e7824 (2009).
31. Christie, A. P. et al. The challenge of biased evidence in conservation. Conserv. Biol. https://doi.org/10.1111/cobi.13577 (2020).
32. Christie, A. P. et al. Poor availability of context-specific evidence hampers decision-making in conservation. Biol. Conserv. 248, 108666 (2020).
33. Moscoe, E., Bor, J. & Bärnighausen, T. Regression discontinuity designs are underutilized in medicine, epidemiology, and public health: a review of current and best practice. J. Clin. Epidemiol. 68, 132–143 (2015).
34. Goldenhar, L. M. & Schulte, P. A. Intervention research in occupational health and safety. J. Occup. Med. 36, 763–778 (1994).
35. Junker, J. et al. A severe lack of evidence limits effective conservation of the world's primates. BioScience https://doi.org/10.1093/biosci/biaa082 (2020).
36. Altindag, O., Joyce, T. J. & Reeder, J. A. Can nonexperimental methods provide unbiased estimates of a breastfeeding intervention? A within-study comparison of peer counseling in Oregon. Evaluation Rev. 43, 152–188 (2019).
37. Chaplin, D. D. et al. The internal and external validity of the regression discontinuity design: a meta-analysis of 15 within-study comparisons. J. Policy Anal. Manag. 37, 403–429 (2018).
38. Cook, T. D., Shadish, W. R. & Wong, V. C. Three conditions under which experiments and observational studies produce comparable causal estimates: new findings from within-study comparisons. J. Policy Anal. Manag. 27, 724–750 (2008).
39. Ioannidis, J. P. A. et al. Comparison of evidence of treatment effects in randomized and nonrandomized studies. J. Am. Med. Assoc. 286, 821–830 (2001).
40. dos Santos Ribas, L. G., Pressey, R. L., Loyola, R. & Bini, L. M. A global comparative analysis of impact evaluation methods in estimating the effectiveness of protected areas. Biol. Conserv. 246, 108595 (2020).
41. Benson, K. & Hartz, A. J. A comparison of observational studies and randomized, controlled trials. N. Engl. J. Med. 342, 1878–1886 (2000).
42. Smokorowski, K. E. et al. Cautions on using the Before-After-Control-Impact design in environmental effects monitoring programs. Facets 2, 212–232 (2017).
43. França, F. et al. Do space-for-time assessments underestimate the impacts of logging on tropical biodiversity? An Amazonian case study using dung beetles. J. Appl. Ecol. 53, 1098–1105 (2016).
44. Duvendack, M., Hombrados, J. G., Palmer-Jones, R. & Waddington, H. Assessing 'what works' in international development: meta-analysis for sophisticated dummies. J. Dev. Effectiveness 4, 456–471 (2012).
45. Sutherland, W. J. et al. Building a tool to overcome barriers in research-implementation spaces: the Conservation Evidence database. Biol. Conserv. 238, 108199 (2019).
46. Gusenbauer, M. & Haddaway, N. R. Which academic search systems are suitable for systematic reviews or meta-analyses? Evaluating retrieval qualities of Google Scholar, PubMed, and 26 other resources. Res. Synth. Methods 11, 181–217 (2020).
47. Konno, K. & Pullin, A. S. Assessing the risk of bias in choice of search sources for environmental meta-analyses. Res. Synth. Methods 11, 698–713 (2020).
48. Butsic, V., Lewis, D. J., Radeloff, V. C., Baumann, M. & Kuemmerle, T. Quasi-experimental methods enable stronger inferences from observational data in ecology. Basic Appl. Ecol. 19, 1–10 (2017).
49. Brownstein, N. C., Louis, T. A., O'Hagan, A. & Pendergast, J. The role of expert judgment in statistical inference and evidence-based decision-making. Am. Statistician 73, 56–68 (2019).
50. Hahn, J., Todd, P. & Klaauw, W. Identification and estimation of treatment effects with a regression-discontinuity design. Econometrica 69, 201–209 (2001).
51. Slavin, R. E. Best evidence synthesis: an intelligent alternative to meta-analysis. J. Clin. Epidemiol. 48, 9–18 (1995).
52. Slavin, R. E. Best-evidence synthesis: an alternative to meta-analytic and traditional reviews. Educ. Researcher 15, 5–11 (1986).
53. Shea, B. J. et al. AMSTAR 2: a critical appraisal tool for systematic reviews that include randomised or non-randomised studies of healthcare interventions, or both. BMJ 358, 1–8 (2017).
54. Sterne, J. A. C. et al. ROBINS-I: a tool for assessing risk of bias in non-randomised studies of interventions. BMJ 355, i4919 (2016).
55. Guyatt, G. et al. GRADE guidelines: 11. Making an overall rating of confidence in effect estimates for a single outcome and for all outcomes. J. Clin. Epidemiol. 66, 151–157 (2013).
56. Davies, G. M. & Gray, A. Don't let spurious accusations of pseudoreplication limit our ability to learn from natural experiments (and other messy kinds of ecological monitoring). Ecol. Evolution 5, 5295–5304 (2015).
57. Lortie, C. J., Stewart, G., Rothstein, H. & Lau, J. How to critically read ecological meta-analyses. Res. Synth. Methods 6, 124–133 (2015).
58. Gutzat, F. & Dormann, C. F. Exploration of concerns about the evidence-based guideline approach in conservation management: hints from medical practice. Environ. Manag. 66, 435–449 (2020).
59. Greenhalgh, T. Will COVID-19 be evidence-based medicine's nemesis? PLOS Med. 17, e1003266 (2020).
60. Barlow, J. et al. The future of hyperdiverse tropical ecosystems. Nature 559, 517–526 (2018).
61. Gurevitch, J. & Hedges, L. V. Statistical issues in ecological meta-analyses. Ecology 80, 1142–1149 (1999).
62. Stone, J. C., Glass, K., Munn, Z., Tugwell, P. & Doi, S. A. R. Comparison of bias adjustment methods in meta-analysis suggests that quality effects modeling may have less limitations than other approaches. J. Clin. Epidemiol. 117, 36–45 (2020).
63. Rhodes, K. M. et al. Adjusting trial results for biases in meta-analysis: combining data-based evidence on bias with detailed trial assessment. J. R. Stat. Soc. Ser. A (Stat. Soc.) 183, 193–209 (2020).
64. Efthimiou, O. et al. Combining randomized and non-randomized evidence in network meta-analysis. Stat. Med. 36, 1210–1226 (2017).
65. Welton, N. J., Ades, A. E., Carlin, J. B., Altman, D. G. & Sterne, J. A. C. Models for potentially biased evidence in meta-analysis using empirically based priors. J. R. Stat. Soc. Ser. A (Stat. Soc.) 172, 119–136 (2009).
66. Turner, R. M., Spiegelhalter, D. J., Smith, G. C. S. & Thompson, S. G. Bias modelling in evidence synthesis. J. R. Stat. Soc. Ser. A (Stat. Soc.) 172, 21–47 (2009).
67. Shackelford, G. E. et al. Dynamic meta-analysis: a method of using global evidence for local decision making. bioRxiv https://doi.org/10.1101/2020.05.18.078840 (2020).
68. Sutherland, W. J., Pullin, A. S., Dolman, P. M. & Knight, T. M. The need for evidence-based conservation. Trends Ecol. Evolution 19, 305–308 (2004).
69. Ioannidis, J. P. A. Meta-research: why research on research matters. PLOS Biol. 16, e2005468 (2018).
70. LaLonde, R. J. Evaluating the econometric evaluations of training programs with experimental data. Am. Econ. Rev. 76, 604–620 (1986).

Long, Q., Little, R. J. & Lin, X. Causal inference in hybrid intervention trials involving treatment choice. J. Am. Stat. Assoc. 103 , 474–484 (2008).

Article   MathSciNet   CAS   MATH   Google Scholar  

Thomson Reuters. ISI Web of Knowledge. http://www.isiwebofknowledge.com (2019).

Stroup, W. W. Generalized linear mixed models: modern concepts, methods and applications . (CRC press, 2012).

Bolker, B. M. et al. Generalized linear mixed models: a practical guide for ecology and evolution. Trends Ecol. Evolution 24 , 127–135 (2009).

R Core Team. R: A language and environment for statistical computing. R Foundation for Statistical Computing (2019).

Bates, D., Mächler, M., Bolker, B. & Walker, S. Fitting linear mixed-effects models using lme4. J. Stat. Softw. 67 , 1–48 (2015).

Venables, W. N. & Ripley, B. D. Modern Applied Statistics with S . (Springer, 2002).

Stan Development Team. RStan: the R interface to Stan. R package version 2.19.3 (2020).

Download references

Acknowledgements

We are grateful to the following people and organisations for contributing datasets to this analysis: P. Edwards, G.R. Hodgson, H. Welsh, J.V. Vieira, authors of van Deurs et al. 2012, T. M. Grome, M. Kaspersen, H. Jensen, C. Stenberg, T. K. Sørensen, J. Støttrup, T. Warnar, H. Mosegaard, Axel Schwerk, Alberto Velando, Dolores River Restoration Partnership, J.S. Pinilla, A. Page, M. Dasey, D. Maguire, J. Barlow, J. Louzada, Jari Florestal, R.T. Buxton, C.R. Schacter, J. Seoane, M.G. Conners, K. Nickel, G. Marakovich, A. Wright, G. Soprone, CSIRO, A. Elosegi, L. García-Arberas, J. Díez, A. Rallo, Parks and Wildlife Finland, Parc Marin de la Côte Bleue. Author funding sources: T.A. was supported by the Grantham Foundation for the Protection of the Environment, Kenneth Miller Trust and Australian Research Council Future Fellowship (FT180100354); W.J.S. and P.A.M. were supported by Arcadia, MAVA, and The David and Claudia Harding Foundation; A.P.C. was supported by the Natural Environment Research Council via Cambridge Earth System Science NERC DTP (NE/L002507/1); D.A. was funded by Portugal national funds through the FCT – Foundation for Science and Technology, under the Transitional Standard – DL57 / 2016 and through the strategic project UIDB/04326/2020; M.A. acknowledges Koniambo Nickel SAS, and particularly Gregory Marakovich and Andy Wright; J.C.A. was funded through by Dirección General de Investigación Científica, projects PB97-1252, BOS2002-01543, CGL2005-04893/BOS, CGL2008-02567 and Comunidad de Madrid, as well as by contract HENARSA-CSIC 2003469-CSIC19637; A.A. was funded by Spanish Government: MEC (CGL2007-65176); B.P.B. was funded through the U.S. Geological Survey and the New York City Department of Environmental Protection; R.B. was funded by Comunidad de Madrid (2018-T1/AMB-10374); J.A.S. and D.A.B. were funded through the U.S. Geological Survey and NextEra Energy; R.S.C. 
was funded by the Portuguese Foundation for Science and Technology (FCT) grant SFRH/BD/78813/2011 and strategic project UID/MAR/04292/2013; A.D.B. was funded through the Belgian offshore wind monitoring program (WINMON-BE), financed by the Belgian offshore wind energy sector via RBINS—OD Nature; M.K.D. was funded by the Harold L. Castle Foundation; P.M.E. was funded by the Clackamas County Water Environment Services River Health Stewardship Program and the Portland State University Student Watershed Research Project; T.D.E., J.P.A.G. and A.P. were supported by funding from the New Zealand Department of Conservation (Te Papa Atawhai) and from the Centre for Marine Environmental & Economic Research, Victoria University of Wellington, New Zealand; F.M.F. was funded by CNPq-CAPES grants (PELD site 23 403811/2012-0, PELD-RAS 441659/2016-0, BEX5528/13-5 and 383744/2015-6) and BNP Paribas Foundation (Climate & Biodiversity Initiative, BIOCLIMATE project); B.P.H. was funded by NOAA-NMFS sea scallop research set-aside program awards NA16FM1031, NA06FM1001, NA16FM2416, and NA04NMF4720332; A.L.B. was funded by the Portuguese Foundation for Science and Technology (FCT) grant FCT PD/BD/52597/2014, Bat Conservation International student research fellowship and CNPq grant 160049/2013-0; L.C.M. acknowledges Secretaría de Ciencia y Técnica (UNRC); R.A.M. acknowledges Alaska Fisheries Science Center, NOAA Fisheries, and U.S. Department of Commerce for salary support; C.F.J.M. was funded by the Portuguese Foundation for Science and Technology (FCT) grant SFRH/BD/80488/2011; R.R. was funded by the Portuguese Foundation for Science and Technology (FCT) grant PTDC/BIA-BIC/111184/2009, by Madeira’s Regional Agency for the Development of Research, Technology and Innovation (ARDITI) grant M1420-09-5369-FSE-000002 and by a Bat Conservation International student research fellowship; J.C. and S.S. were funded by the Alabama Department of Conservation and Natural Resources; A.T. 
was funded by the Spanish Ministry of Education with a Formacion de Profesorado Universitario (FPU) grant AP2008-00577 and Dirección General de Investigación Científica, project CGL2008-02567; C.W. was funded by Strategic Science Investment Funding of the Ministry of Business, Innovation and Employment, New Zealand; J.S.K. acknowledges Boreal Peatland LIFE (LIFE08 NAT/FIN/000596), Parks and Wildlife Finland and Kone Foundation; J.J.S.S. was funded by the Mexican National Council on Science and Technology (CONACYT 242558); N.N. was funded by The Carl Tryggers Foundation; I.L.J. was funded by a Discovery Grant from the Natural Sciences and Engineering Research Council of Canada; D.D. and D.S. were funded by the French National Research Agency via the “Investment for the Future” program IDEALG (ANR-10-BTBR-04) and by the ALGMARBIO project; R.C.P. was funded by CSIRO and whose research was also supported by funds from the Great Barrier Reef Marine Park Authority, the Fisheries Research and Development Corporation, the Australian Fisheries Management Authority, and Queensland Department of Primary Industries (QDPI). Any use of trade, firm, or product names is for descriptive purposes only and does not imply endorsement by the U.S. Government. The scientific results and conclusions, as well as any views or opinions expressed herein, are those of the author(s) and do not necessarily reflect those of NOAA or the Department of Commerce.

Author information

Authors and Affiliations

Conservation Science Group, Department of Zoology, University of Cambridge, The David Attenborough Building, Downing Street, Cambridge, CB3 3QZ, UK

Alec P. Christie, Philip A. Martin & William J. Sutherland

Centre of Marine Sciences (CCMar), Universidade do Algarve, Campus de Gambelas, 8005-139, Faro, Portugal

David Abecasis

Institut de Recherche pour le Développement (IRD), UMR 9220 ENTROPIE & Laboratoire d’Excellence CORAIL, Université de Perpignan Via Domitia, 52 avenue Paul Alduy, 66860, Perpignan, France

Mehdi Adjeroud

Museo Nacional de Ciencias Naturales, CSIC, Madrid, Spain

Juan C. Alonso & Carlos Palacín

School of Biological Sciences, University of Queensland, Brisbane, 4072, QLD, Australia

Tatsuya Amano

Education Faculty of Bilbao, University of the Basque Country (UPV/EHU). Sarriena z/g E-48940 Leioa, Basque Country, Spain

Alvaro Anton

U.S. Geological Survey, New York Water Science Center, 425 Jordan Rd., Troy, NY, 12180, USA

Barry P. Baldigo

Universidad Complutense de Madrid, Departamento de Biodiversidad, Ecología y Evolución, Facultad de Ciencias Biológicas, c/ José Antonio Novais, 12, E-28040, Madrid, Spain

Rafael Barrientos & Carlos A. Martín

Durrell Institute of Conservation and Ecology (DICE), School of Anthropology and Conservation, University of Kent, Canterbury, CT2 7NR, UK

Jake E. Bicknell

U.S. Geological Survey, Northern Prairie Wildlife Research Center, Jamestown, ND, 58401, USA

Deborah A. Buhl & Jill A. Shaffer

Northern Gulf Institute, Mississippi State University, 1021 Balch Blvd, John C. Stennis Space Center, Mississippi, 39529, USA

Just Cebrian

MARE – Marine and Environmental Sciences Centre, Dept. Life Sciences, University of Coimbra, Coimbra, Portugal

Ricardo S. Ceia

CFE – Centre for Functional Ecology, Dept. Life Sciences, University of Coimbra, Coimbra, Portugal

Departamento de Ciencias Naturales, Universidad Nacional de Río Cuarto (UNRC), Córdoba, Argentina

Luciana Cibils-Martina

CONICET, Buenos Aires, Argentina

Marine Institute, Rinville, Oranmore, Galway, Ireland

Sarah Clarke & Oliver Tully

National Center for Scientific Research, PSL Université Paris, CRIOBE, USR 3278 CNRS-EPHE-UPVD, Maison des Océans, 195 rue Saint-Jacques, 75005, Paris, France

Joachim Claudet

School of Biological Sciences, University of Western Australia, Nedlands, WA, 6009, Australia

Michael D. Craig

School of Environmental and Conservation Sciences, Murdoch University, Murdoch, WA, 6150, Australia

Sorbonne Université, CNRS, UMR 7144, Station Biologique, F.29680, Roscoff, France

Dominique Davoult & Doriane Stagnol

Flanders Research Institute for Agriculture, Fisheries and Food (ILVO), Ankerstraat 1, 8400, Ostend, Belgium

Annelies De Backer

Marine Science Institute, University of California Santa Barbara, Santa Barbara, CA, 93106, USA

Mary K. Donovan

Hawaii Institute of Marine Biology, University of Hawaii at Manoa, Honolulu, HI, 96822, USA

Baruch Institute for Marine & Coastal Sciences, University of South Carolina, Columbia, SC, USA

Tyler D. Eddy

Centre for Fisheries Ecosystems Research, Fisheries & Marine Institute, Memorial University of Newfoundland, St. John’s, Canada

School of Biological Sciences, Victoria University of Wellington, P O Box 600, Wellington, 6140, New Zealand

Tyler D. Eddy, Jonathan P. A. Gardner & Anjali Pande

Lancaster Environment Centre, Lancaster University, LA1 4YQ, Lancaster, UK

Filipe M. França

Fisheries, Aquatic Science and Technology Laboratory, Alaska Pacific University, 4101 University Dr., Anchorage, AK, 99508, USA

Bradley P. Harris

Natural Resources Institute Finland, Manamansalontie 90, 88300, Paltamo, Finland

Department of Biology, Memorial University, St. John’s, NL, A1B 2R3, Canada

Ian L. Jones

National Marine Science Centre and Marine Ecology Research Centre, Southern Cross University, 2 Bay Drive, Coffs Harbour, 2450, Australia

Brendan P. Kelaher

Department of Biological and Environmental Science, University of Jyväskylä, Jyväskylä, Finland

Janne S. Kotiaho

School of Resource Wisdom, University of Jyväskylä, Jyväskylä, Finland

Centre for Ecology, Evolution and Environmental Changes – cE3c, Faculty of Sciences, University of Lisbon, 1749-016, Lisbon, Portugal

Adrià López-Baucells, Christoph F. J. Meyer & Ricardo Rocha

Biological Dynamics of Forest Fragments Project, National Institute for Amazonian Research and Smithsonian Tropical Research Institute, 69011-970, Manaus, Brazil

Granollers Museum of Natural History, Granollers, Spain

Adrià López-Baucells

Department of Biological Sciences, University of New Brunswick, PO Box 5050, Saint John, NB, E2L 4L5, Canada

Heather L. Major

Voimalohi Oy, Voimatie 23, Voimatie, 91100, Ii, Finland

Aki Mäki-Petäys

Natural Resources Institute Finland, Paavo Havaksen tie 3, 90014 University of Oulu, Oulu, Finland

Fundación Migres CIMA Ctra, Cádiz, Spain

Beatriz Martín

Intergovernmental Oceanographic Commission of UNESCO, Marine Policy and Regional Coordination Section Paris 07, Paris, France

BioRISC, St. Catharine’s College, Cambridge, CB2 1RL, UK

Philip A. Martin & William J. Sutherland

Departamento de Ecología e Hidrología, Universidad de Murcia, Campus de Espinardo, 30100, Murcia, Spain

Daniel Mateos-Molina

RACE Division, Alaska Fisheries Science Center, National Marine Fisheries Service, NOAA, 7600 Sand Point Way NE, Seattle, WA, 98115, USA

Robert A. McConnaughey

European Commission, Joint Research Centre (JRC), Ispra, VA, Italy

Michele Meroni

School of Science, Engineering and Environment, University of Salford, Salford, M5 4WT, UK

Christoph F. J. Meyer

Victorian National Park Association, Carlton, VIC, Australia

Department of Earth, Environment and Life Sciences (DiSTAV), University of Genoa, Corso Europa 26, 16132, Genoa, Italy

Monica Montefalcone

Department of Ecology, Swedish University of Agricultural Sciences, Uppsala, Sweden

Norbertas Noreika

Chair of Plant Health, Institute of Agricultural and Environmental Sciences, Estonian University of Life Sciences, Tartu, Estonia

Biosecurity New Zealand – Tiakitanga Pūtaiao Aotearoa, Ministry for Primary Industries – Manatū Ahu Matua, 66 Ward St, PO Box 40742, Wallaceville, New Zealand

Anjali Pande

National Institute of Water & Atmospheric Research Ltd (NIWA), 301 Evans Bay Parade, Greta Point Wellington, New Zealand

CSIRO Oceans & Atmosphere, Queensland Biosciences Precinct, 306 Carmody Road, ST. LUCIA QLD, 4067, Australia

C. Roland Pitcher

Museo Nacional de Ciencias Naturales, CSIC, José Gutiérrez Abascal 2, E-28006, Madrid, Spain

Carlos Ponce

Fort Keogh Livestock and Range Research Laboratory, 243 Fort Keogh Rd, Miles City, Montana, 59301, USA

Matt Rinella

CIBIO-InBIO, Research Centre in Biodiversity and Genetic Resources, University of Porto, Vairão, Portugal

Ricardo Rocha

Departamento de Sistemas Físicos, Químicos y Naturales, Universidad Pablo de Olavide, ES-41013, Sevilla, Spain

María C. Ruiz-Delgado

El Colegio de la Frontera Sur, A.P. 424, 77000, Chetumal, QR, Mexico

Juan J. Schmitter-Soto

Division of Fish and Wildlife, New York State Department of Environmental Conservation, 625 Broadway, Albany, NY, 12233-4756, USA

Shailesh Sharma

University of Denver Department of Biological Sciences, Denver, CO, USA

Anna A. Sher

U.S. Geological Survey, Fort Collins Science Center, Fort Collins, CO, 80526, USA

Thomas R. Stanley

School for Marine Science and Technology, University of Massachusetts Dartmouth, New Bedford, MA, USA

Kevin D. E. Stokesbury

Georges Lemaître Earth and Climate Research Centre, Earth and Life Institute, Université Catholique de Louvain, 1348, Louvain-la-Neuve, Belgium

Aurora Torres

Center for Systems Integration and Sustainability, Department of Fisheries and Wildlife, 13 Michigan State University, East Lansing, MI, 48823, USA

Natural Resources Institute Finland, Latokartanonkaari 9, 00790, Helsinki, Finland

Teppo Vehanen

Manaaki Whenua – Landcare Research, Private Bag 3127, Hamilton, 3216, New Zealand

Corinne Watts

Statistical Laboratory, Department of Pure Mathematics and Mathematical Statistics, University of Cambridge, Wilberforce Road, Cambridge, CB3 0WB, UK

Qingyuan Zhao


Contributions

A.P.C., T.A., P.A.M., Q.Z., and W.J.S. designed the research; A.P.C. wrote the paper; D.A., M.A., J.C.A., A.A., B.P.B, R.B., J.B., D.A.B., J.C., R.S.C., L.C.M., S.C., J.C., M.D.C, D.D., A.D.B., M.K.D., T.D.E., P.M.E., F.M.F., J.P.A.G., B.P.H., A.H., I.L.J., B.P.K., J.S.K., A.L.B., H.L.M., A.M., B.M., C.A.M., D.M., R.A.M, M.M., C.F.J.M.,K.M., M.M., N.N., C.P., A.P., C.R.P., C.P., M.R., R.R., M.C.R., J.J.S.S., J.A.S., S.S., A.A.S., D.S., K.D.E.S., T.R.S., A.T., O.T., T.V., C.W. contributed datasets for analyses. All authors reviewed, edited, and approved the manuscript.

Corresponding author

Correspondence to Alec P. Christie .

Ethics declarations

Competing interests

The authors declare no competing interests.

Additional information

Peer review information Nature Communications thanks Casper Albers, Samuel Scheiner, and the other, anonymous, reviewer(s) for their contribution to the peer review of this work. Peer reviewer reports are available.

Publisher’s note Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Supplementary information

  • Supplementary Information
  • Peer Review File
  • Description of Additional Supplementary Information
  • Supplementary Data 1
  • Supplementary Data 2
  • Source Data

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made. The images or other third party material in this article are included in the article’s Creative Commons license, unless indicated otherwise in a credit line to the material. If material is not included in the article’s Creative Commons license and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this license, visit http://creativecommons.org/licenses/by/4.0/ .

Reprints and permissions

About this article

Cite this article

Christie, A.P., Abecasis, D., Adjeroud, M. et al. Quantifying and addressing the prevalence and bias of study designs in the environmental and social sciences. Nat Commun 11, 6377 (2020). https://doi.org/10.1038/s41467-020-20142-y


Received: 29 January 2020

Accepted: 13 November 2020

Published: 11 December 2020

DOI: https://doi.org/10.1038/s41467-020-20142-y


This article is cited by

Robust language-based mental health assessments in time and space through social media.

  • Siddharth Mangalik
  • Johannes C. Eichstaedt
  • H. Andrew Schwartz

npj Digital Medicine (2024)

Is there a “difference-in-difference”? The impact of scientometric evaluation on the evolution of international publications in Egyptian universities and research centres

  • Mona Farouk Ali

Scientometrics (2024)

Quantifying research waste in ecology

  • Marija Purgar
  • Tin Klanjscek
  • Antica Culina

Nature Ecology & Evolution (2022)

Assessing assemblage-wide mammal responses to different types of habitat modification in Amazonian forests

  • Paula C. R. Almeida-Maués
  • Anderson S. Bueno
  • Ana Cristina Mendes-Oliveira

Scientific Reports (2022)

Mitigating impacts of invasive alien predators on an endangered sea duck amidst high native predation pressure

  • Kim Jaatinen
  • Ida Hermansson

Oecologia (2022)




The Ultimate Guide to Qualitative Research - Part 1: The Basics


  • Introduction and overview
  • What is qualitative research?
  • What is qualitative data?
  • Examples of qualitative data
  • Qualitative vs. quantitative research
  • Mixed methods
  • Qualitative research preparation
  • Theoretical perspective
  • Theoretical framework
  • Literature reviews
  • Research question
  • Conceptual framework
  • Conceptual vs. theoretical framework
  • Data collection
  • Qualitative research methods
  • Focus groups
  • Observational research
  • Case studies
  • Ethnographical research
  • Ethical considerations
  • Confidentiality and privacy

  • What is research bias?
  • Understanding unconscious bias
  • How to avoid bias in research
  • Bias and subjectivity in research
  • Power dynamics
  • Reflexivity

Bias in research

In a purely objective world, research bias would not exist because knowledge would be a fixed, immovable resource: either one knows about a particular concept or phenomenon, or one doesn't. However, qualitative research and the social sciences acknowledge that subjectivity and bias exist in every aspect of the social world, which naturally includes the research process. This bias manifests in the many different ways that knowledge is understood, constructed, and negotiated, both in and out of research.


Understanding research bias has profound implications for data collection methods and data analysis, requiring researchers to take particular care in how they account for the insights generated from their data.

Research bias, often unavoidable, is a systematic error that can creep into any stage of the research process , skewing our understanding and interpretation of findings. From data collection to analysis, interpretation , and even publication , bias can distort the truth we seek to capture and communicate in our research.
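The word "systematic" is doing real work here: unlike random noise, which tends to average out as the sample grows, a systematic error shifts every observation in the same direction, and no amount of extra data corrects it. A minimal simulation makes the distinction concrete (all numbers below are invented for illustration):

```python
import random
import statistics

random.seed(0)

# Hypothetical "true" values for 1,000 study units (illustrative only).
true_values = [random.gauss(50.0, 5.0) for _ in range(1000)]

# Random error: readings are noisy but centred on the truth,
# so the error washes out in a large sample.
noisy_readings = [v + random.gauss(0.0, 2.0) for v in true_values]

# Systematic error (bias): an instrument that always reads 3 units high.
# A larger sample does nothing to remove this shift.
biased_readings = [v + 3.0 for v in true_values]

print(f"true mean   : {statistics.mean(true_values):.2f}")
print(f"noisy mean  : {statistics.mean(noisy_readings):.2f}")
print(f"biased mean : {statistics.mean(biased_readings):.2f}")
```

The noisy mean lands very close to the true mean, while the biased mean sits a fixed distance away regardless of sample size, which is exactly why systematic error is the more dangerous of the two.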

It’s also important to distinguish between bias and subjectivity, especially when engaging in qualitative research . Most qualitative methodologies are based on epistemological and ontological assumptions that there is no such thing as a fixed or objective world that exists “out there” that can be empirically measured and understood through research. Rather, many qualitative researchers embrace the socially constructed nature of our reality and thus recognize that all data is produced within a particular context by participants with their own perspectives and interpretations. Moreover, the researcher’s own subjective experiences inevitably shape how they make sense of the data. These subjectivities are considered to be strengths, not limitations, of qualitative research approaches, because they open new avenues for knowledge generation. This is also why reflexivity is so important in qualitative research. When we refer to bias in this guide, on the other hand, we are referring to systematic errors that can negatively affect the research process but that can be mitigated through researchers’ careful efforts.

To fully grasp what research bias is, it's essential to understand the dual nature of bias. Bias is not inherently evil. It's simply a tendency, inclination, or prejudice for or against something. In our daily lives, we're subject to countless biases, many of which are unconscious. They help us navigate our world, make quick decisions, and understand complex situations. But when conducting research, these same biases can cause significant issues.


Research bias can affect the validity and credibility of research findings, leading to erroneous conclusions. It can emerge from the researcher's subconscious preferences or the methodological design of the study itself. For instance, if a researcher unconsciously favors a particular outcome of the study, this preference could affect how they interpret the results, leading to a type of bias known as confirmation bias.

Research bias can also arise due to the characteristics of study participants. If the researcher selectively recruits participants who are more likely to produce desired outcomes, this can result in selection bias.
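A quick simulation shows how far a selectively recruited sample can drift from the population it claims to represent. The population, group sizes, and scores below are all invented for illustration:

```python
import random
import statistics

random.seed(42)

# Hypothetical population of 1,000 employees rating a workplace policy:
# 200 managers (who rate it higher) and 800 other staff (illustrative numbers).
population = (
    [random.gauss(8.0, 1.0) for _ in range(200)]    # managers
    + [random.gauss(5.0, 1.5) for _ in range(800)]  # other staff
)
true_mean = statistics.mean(population)

# Sound design: a simple random sample across all levels.
random_sample = random.sample(population, 100)

# Selection-biased design: recruit mostly from the manager group.
biased_sample = (
    random.sample(population[:200], 80) + random.sample(population[200:], 20)
)

print(f"population mean    : {true_mean:.2f}")
print(f"random sample mean : {statistics.mean(random_sample):.2f}")
print(f"biased sample mean : {statistics.mean(biased_sample):.2f}")
```

The biased sample overstates the population's average rating by a wide margin, even though every individual response in it is perfectly genuine; the error lies entirely in who was asked.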

Another form of bias can stem from data collection methods. If a survey question is phrased in a way that encourages a particular response, this can introduce response bias. Poorly constructed survey questions can also harm future research: if the public comes to see such studies as slanted toward the researcher's preferred outcomes, trust in the findings, and willingness to take part in later studies, erodes.

Bias can also occur during data analysis . In qualitative research for instance, the researcher's preconceived notions and expectations can influence how they interpret and code qualitative data, a type of bias known as interpretation bias. It's also important to note that quantitative research is not free of bias either, as sampling bias and measurement bias can threaten the validity of any research findings.
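One common guard against interpretation bias in qualitative coding is to have two researchers code the same material independently and then quantify their agreement beyond chance, for example with Cohen's kappa. A minimal sketch of that check (the codes and excerpts below are invented):

```python
from collections import Counter

def cohens_kappa(coder_a, coder_b):
    """Agreement between two coders, corrected for chance agreement."""
    assert len(coder_a) == len(coder_b) and coder_a
    n = len(coder_a)
    # Observed agreement: fraction of items coded identically.
    observed = sum(a == b for a, b in zip(coder_a, coder_b)) / n
    # Expected agreement by chance, from each coder's label frequencies.
    freq_a, freq_b = Counter(coder_a), Counter(coder_b)
    expected = sum(freq_a[c] * freq_b[c] for c in freq_a) / n**2
    return (observed - expected) / (1 - expected)

# Hypothetical labels two coders assigned to the same 10 interview excerpts.
coder_1 = ["pos", "pos", "neg", "neutral", "pos", "neg", "neg", "pos", "neutral", "pos"]
coder_2 = ["pos", "neg", "neg", "neutral", "pos", "neg", "pos", "pos", "neutral", "pos"]

print(round(cohens_kappa(coder_1, coder_2), 3))
```

A kappa well below 1 flags that the coders' interpretations diverge, prompting them to discuss the coding scheme before one researcher's preconceptions quietly dominate the analysis.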

Given these examples, it's clear that research bias is a complex issue that can take many forms and emerge at any stage in the research process. This section will delve deeper into specific types of research bias, provide examples, discuss why it's an issue, and provide strategies for identifying and mitigating bias in research.

What is an example of bias in research?

Bias can appear in numerous ways. One example is confirmation bias, where the researcher has a preconceived explanation for what is going on in their data, and any disconfirming evidence is (unconsciously) ignored. For instance, a researcher conducting a study on daily exercise habits might be inclined to conclude that meditation practices lead to greater engagement in exercise because that researcher has personally experienced these benefits. However, conducting rigorous research entails assessing all the data systematically and verifying one’s conclusions by checking for both supporting and refuting evidence.


What is a common bias in research?

Confirmation bias is one of the most common forms of bias in research. It happens when researchers unconsciously focus on data that supports their ideas while ignoring or undervaluing data that contradicts their ideas. This bias can lead researchers to mistakenly confirm their theories, despite having insufficient or conflicting evidence.

What are the different types of bias?

There are several types of research bias, each presenting unique challenges. Some common types include:

Confirmation bias: As already mentioned, this happens when a researcher focuses on evidence supporting their theory while overlooking contradictory evidence.

Selection bias: This occurs when the researcher's method of choosing participants skews the sample in a particular direction.

Response bias: This happens when participants in a study respond inaccurately or falsely, often due to misleading or poorly worded questions.

Observer bias (or researcher bias): This occurs when the researcher unintentionally influences the results because of their expectations or preferences.

Publication bias: This type of bias arises when studies with positive results are more likely to get published, while studies with negative or null results are often ignored.

Analysis bias: This type of bias occurs when the data is manipulated or analyzed in a way that leads to a particular result, whether intentionally or unintentionally.
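Publication bias, in particular, is easy to demonstrate numerically: if only "clearly positive" results get published, the published literature overstates the true effect even when every individual study is honest. A rough simulation under invented parameters:

```python
import random
import statistics

random.seed(1)

TRUE_EFFECT = 0.1          # small true effect (invented)
N_PER_STUDY, SD = 20, 1.0  # small, noisy studies (invented)
se = SD / N_PER_STUDY**0.5

all_estimates, published = [], []
for _ in range(2000):
    # Each simulated study estimates the effect from a small sample.
    study = [random.gauss(TRUE_EFFECT, SD) for _ in range(N_PER_STUDY)]
    estimate = statistics.mean(study)
    all_estimates.append(estimate)
    # Crude significance filter: only clearly positive estimates "get published".
    if estimate > 1.96 * se:
        published.append(estimate)

print(f"true effect            : {TRUE_EFFECT}")
print(f"mean, all studies      : {statistics.mean(all_estimates):.3f}")
print(f"mean, published studies: {statistics.mean(published):.3f}")
```

Averaged over all simulated studies, the estimate is unbiased; averaged over only the "published" ones, it is inflated several-fold, which is why systematic reviews go to such lengths to hunt down unpublished and null results.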


What is an example of researcher bias?

Researcher bias, also known as observer bias, can occur when a researcher's expectations or personal beliefs influence the results of a study. For instance, if a researcher believes that a particular therapy is effective, they might unconsciously interpret ambiguous results in a way that supports the efficacy of the therapy, even if the evidence is not strong enough.

Even quantitative research methodologies are not immune from bias from researchers. Market research surveys or clinical trial research, for example, may encounter bias when the researcher chooses a particular population or methodology to achieve a specific research outcome. Questions in customer feedback surveys whose data is employed in quantitative analysis can be structured in such a way as to bias survey respondents toward certain desired answers.


Identifying and avoiding bias in research

As we will remind you throughout this chapter, bias is not a phenomenon that can, or even should, be eliminated altogether. In a subjective world involving humans as researchers and research participants, bias is unavoidable and almost necessary for understanding social behavior. The section on reflexivity later in this guide will highlight how different perspectives among researchers and human subjects are addressed in qualitative research. That said, excessive bias can place the credibility of a study's findings in serious question. Scholars who read your research need to know what new knowledge you are generating, how it was generated, and why the knowledge you present should be considered persuasive. With that in mind, let's look at how bias can be identified and, where it interferes with research, minimized.

How do you identify bias in research?

Identifying bias involves a critical examination of your entire research study involving the formulation of the research question and hypothesis , the selection of study participants, the methods for data collection, and the analysis and interpretation of data. Researchers need to assess whether each stage has been influenced by bias that may have skewed the results. Tools such as bias checklists or guidelines, peer review , and reflexivity (reflecting on one's own biases) can be instrumental in identifying bias.

How do you identify research bias?

Identifying research bias often involves careful scrutiny of the research methodology and the researcher's interpretations. Was the sample of participants relevant to the research question ? Were the interview or survey questions leading? Were there any conflicts of interest that could have influenced the results? It also requires an understanding of the different types of bias and how they might manifest in a research context. Does the bias occur in the data collection process or when the researcher is analyzing data?

Research transparency requires a careful accounting of how the study was designed, conducted, and analyzed. In qualitative research involving human subjects, the researcher is responsible for documenting the characteristics of the research population and research context. With respect to research methods, the procedures and instruments used to collect and analyze data are described in as much detail as possible.

While describing study methodologies and research participants in painstaking detail may sound cumbersome, a clear and detailed description of the research design is necessary for good research. Without this level of detail, it is difficult for your research audience to identify whether bias exists, where bias occurs, and to what extent it may threaten the credibility of your findings.

How do you recognize bias in a study?

Recognizing bias in a study requires a critical approach. The researcher should question every step of the research process: Was the sample of participants selected with care? Did the data collection methods encourage open and sincere responses? Did personal beliefs or expectations influence the interpretation of the results? External peer reviews can also be helpful in recognizing bias, as others might spot potential issues that the original researcher missed.

The subsequent sections of this chapter will delve into the impacts of research bias and strategies to avoid it. Through these discussions, researchers will be better equipped to handle bias in their work and contribute to building more credible knowledge.

Unconscious biases, also known as implicit biases, are attitudes or stereotypes that influence our understanding, actions, and decisions in an unconscious manner. These biases can inadvertently infiltrate the research process, skewing the results and conclusions. This section aims to delve deeper into understanding unconscious bias, its impact on research, and strategies to mitigate it.

What is unconscious bias?

Unconscious bias refers to prejudices or social stereotypes about certain groups that individuals form outside their conscious awareness. Everyone holds unconscious beliefs about various social and identity groups, and these biases stem from a tendency to organize social worlds into categories.


How does unconscious bias infiltrate research?

Unconscious bias can infiltrate research in several ways. It can affect how researchers formulate their research questions or hypotheses, how they interact with participants, their data collection methods, and how they interpret their data. For instance, a researcher might unknowingly favor participants who share similar characteristics with them, which could lead to biased results.

Implications of unconscious bias

The implications of unconscious research bias are far-reaching. It can compromise the validity of research findings, influence the choice of research topics, and affect peer review processes. Unconscious bias can also lead to a lack of diversity in research, which can severely limit the value and impact of the findings.

Strategies to mitigate unconscious research bias

While it's challenging to completely eliminate unconscious bias, several strategies can help mitigate its impact. These include being aware of potential unconscious biases, practicing reflexivity, seeking diverse perspectives for your study, and engaging in regular bias-checking activities, such as bias training and peer debriefing.

By understanding and acknowledging unconscious bias, researchers can take steps to limit its impact on their work, leading to more robust findings.

Why is researcher bias an issue?

Research bias is a pervasive issue that researchers must diligently consider and address. It can significantly impact the credibility of findings. Here, we break down the ramifications of bias into two key areas.

How bias affects validity

Research validity refers to the accuracy of the study findings, or the coherence between the researcher’s findings and the participants’ actual experiences. When bias sneaks into a study, it can distort findings and move them further away from the realities that were shared by the research participants. For example, if a researcher's personal beliefs influence their interpretation of data, the resulting conclusions may not reflect what the data show or what participants experienced.

The transferability problem

Transferability is the extent to which your study's findings can be applied beyond the specific context or sample studied. Applying knowledge from one context to a different context is how we can progress and make informed decisions. In quantitative research, the generalizability of a study is a key component that shapes the potential impact of the findings. In qualitative research, all data and knowledge that is produced is understood to be embedded within a particular context, so the notion of generalizability takes on a slightly different meaning. Rather than assuming that the study participants are statistically representative of the entire population, qualitative researchers can reflect on which aspects of their research context bear the most weight on their findings and how these findings may be transferable to other contexts that share key similarities.

How does bias affect research?

Research bias, if not identified and mitigated, can significantly impact research outcomes. The ripple effects of research bias extend beyond individual studies, impacting the body of knowledge in a field and influencing policy and practice. Here, we delve into three specific ways bias can affect research.

Distortion of research results

Bias can lead to a distortion of your study's findings. For instance, confirmation bias can cause a researcher to focus on data that supports their interpretation while disregarding data that contradicts it. This can skew the results and create a misleading picture of the phenomenon under study.

Undermining scientific progress

When research is influenced by bias, it not only misrepresents participants’ realities but can also impede scientific progress. Biased studies can lead researchers down the wrong path, resulting in wasted resources and efforts. Moreover, it could contribute to a body of literature that is skewed or inaccurate, misleading future research and theories.

Influencing policy and practice based on flawed findings

Research often informs policy and practice. If the research is biased, it can lead to the creation of policies or practices that are ineffective or even harmful. For example, a study with selection bias might conclude that a certain intervention is effective, leading to its broad implementation. However, if the transferability of the study's findings was not carefully considered, it may be risky to assume that the intervention will work as well in different populations, which could lead to ineffective or inequitable outcomes.


While it's almost impossible to eliminate bias in research entirely, it's crucial to mitigate its impact as much as possible. By employing thoughtful strategies at every stage of research, we can strive towards rigor and transparency, enhancing the quality of our findings. This section will delve into specific strategies for avoiding bias.

How do you know if your research is biased?

Determining whether your research is biased involves a careful review of your research design, data collection, analysis, and interpretation. It might require you to reflect critically on your own biases and expectations and how these might have influenced your research. External peer reviews can also be instrumental in spotting potential bias.

Strategies to mitigate bias

Minimizing bias involves careful planning and execution at all stages of a research study. These strategies could include formulating clear, unbiased research questions, ensuring that your sample meaningfully represents the research problem you are studying, crafting unbiased data collection instruments, and employing systematic data analysis techniques. Transparency and reflexivity throughout the process can also help minimize bias.

Mitigating bias in data collection

To mitigate bias in data collection, ensure your questions are clear, neutral, and not leading. Triangulation, or using multiple methods or data sources, can also help to reduce bias and increase the credibility of your findings.

Mitigating bias in data analysis

During data analysis, maintaining a high level of rigor is crucial. This might involve using systematic coding schemes in qualitative research or appropriate statistical tests in quantitative research. Regularly questioning your interpretations and considering alternative explanations can help reduce bias. Peer debriefing, where you discuss your analysis and interpretations with colleagues, can also be a valuable strategy.
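To make "systematic" concrete, here is a minimal sketch of one common peer-debriefing check: comparing two coders' labels for the same interview excerpts with Cohen's kappa, which corrects raw agreement for chance. The labels below are invented for illustration only.

```python
from collections import Counter

def cohens_kappa(coder_a, coder_b):
    """Inter-coder agreement corrected for chance agreement."""
    assert len(coder_a) == len(coder_b)
    n = len(coder_a)
    # Proportion of excerpts the two coders labeled identically
    observed = sum(a == b for a, b in zip(coder_a, coder_b)) / n
    # Agreement expected by chance, from each coder's label frequencies
    counts_a, counts_b = Counter(coder_a), Counter(coder_b)
    expected = sum(counts_a[c] * counts_b[c] for c in counts_a) / (n * n)
    return (observed - expected) / (1 - expected)

# Hypothetical codes assigned by two independent coders
a = ["pos", "pos", "neg", "neu", "pos", "neg"]
b = ["pos", "neg", "neg", "neu", "pos", "neg"]
print(round(cohens_kappa(a, b), 2))  # 0.74
```

A low kappa is a prompt for discussion, not a verdict: it signals that the coding scheme is ambiguous or that one coder's interpretation is drifting, which is exactly the kind of bias peer debriefing is meant to surface.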

By using these strategies, researchers can significantly reduce the impact of bias on their research, enhancing the quality and credibility of their findings and contributing to a more robust and meaningful body of knowledge.

Impact of cultural bias in research

Cultural bias is the tendency to interpret and judge phenomena by standards inherent to one's own culture. Given the increasingly multicultural and global nature of research, understanding and addressing cultural bias is paramount. This section will explore the concept of cultural bias, its impacts on research, and strategies to mitigate it.

What is cultural bias in research?

Cultural bias refers to the potential for a researcher's cultural background, experiences, and values to influence the research process and findings. This can occur consciously or unconsciously and can lead to misinterpretation of data, unfair representation of cultures, and biased conclusions.

How does cultural bias infiltrate research?

Cultural bias can infiltrate research at various stages. It can affect the framing of research questions, the design of the study, the methods of data collection, and the interpretation of results. For instance, a researcher might unintentionally design a study that does not consider the cultural context of the participants, leading to a biased understanding of the phenomenon being studied.

Implications of cultural bias

The implications of cultural bias are profound. Cultural bias can skew your findings, limit the transferability of results, and contribute to cultural misunderstandings and stereotypes. This can ultimately lead to inaccurate or ethnocentric conclusions, further perpetuating cultural bias and inequities.

As a result, many social science fields like sociology and anthropology have been critiqued for cultural biases in research. Some of the earliest research inquiries in anthropology, for example, have had the potential to reduce entire cultures to simplistic stereotypes when compared to mainstream norms. A contemporary researcher respecting ethical and cultural boundaries, on the other hand, should seek to properly place their understanding of social and cultural practices in sufficient context without inappropriately characterizing them.

Strategies to mitigate cultural bias

Mitigating cultural bias requires a concerted effort throughout the research study. These efforts could include educating oneself about other cultures, being aware of one's own cultural biases, incorporating culturally diverse perspectives into the research process, and being sensitive and respectful of cultural differences. It might also involve including team members with diverse cultural backgrounds or seeking external cultural consultants to challenge assumptions and provide alternative perspectives.

By acknowledging and addressing cultural bias, researchers can contribute to more culturally competent, equitable, and valid research. This not only enriches the scientific body of knowledge but also promotes cultural understanding and respect.


Keep in mind that bias is a force to be mitigated, not a phenomenon that can be eliminated altogether, and the subjectivities of each person are what make our world so complex and interesting. As things are continuously changing and adapting, research knowledge is also continuously being updated as we further develop our understanding of the world around us.


Research Bias: Definition, Types + Examples

busayo.longe

Sometimes, in the course of carrying out a systematic investigation, the researcher may influence the process intentionally or unknowingly. When this happens, it is termed research bias, and like every other type of bias, it can alter your findings.

Research bias is one of the dominant reasons for the poor validity of research outcomes. There are no hard and fast rules when it comes to research bias, which simply means that it can happen at any time if you do not pay adequate attention.

The spontaneity of research bias means you must take care to understand what it is, be able to identify its features, and ultimately avoid or reduce its occurrence to the barest minimum. In this article, we will show you how to handle bias in research and how to create unbiased research surveys with Formplus.

What is Research Bias? 

Research bias happens when the researcher skews the entire process towards a specific research outcome by introducing a systematic error into the sample data. In other words, it is a process where the researcher influences the systematic investigation to arrive at certain outcomes. 

When any form of bias is introduced in research, it takes the investigation off-course and deviates it from its true outcomes. Research bias can also happen when the personal choices and preferences of the researcher have undue influence on the study. 

For instance, let’s say a religious conservative researcher is conducting a study on the effects of alcohol. If the researcher’s conservative beliefs prompt him or her to create a biased survey or have sampling bias, then this is a case of research bias.

Types of Research Bias 

  • Design Bias

Design bias has to do with the structure and methods of your research. It happens when the research design, survey questions, and research methods are largely influenced by the preferences of the researcher rather than what works best for the research context.

In many instances, poor research design or a lack of synergy between the different contributing variables in your systematic investigation can infuse bias into your research process. Research bias also happens when the personal experiences of the researcher influence the choice of the research question and methodology.

Example of Design Bias  

A researcher who is involved in the manufacturing process of a new drug may design a survey with questions that only emphasize the strengths and value of the drug in question. 

  • Selection or Participant Bias

Selection bias happens when the research criteria and study inclusion method automatically exclude some part of your population from the research process. When you choose research participants that exhibit similar characteristics, you’re more likely to arrive at study outcomes that are uni-dimensional. 

Selection bias manifests itself in different ways in the context of research. Inclusion bias is particularly common in quantitative research, and it happens when you select participants to represent your research population while ignoring groups that have alternative experiences.

Examples of Selection Bias  

  • Administering your survey online; thereby limiting it to internet savvy individuals and excluding members of your population without internet access. 
  • Collecting data about parenting from a mother’s group. The findings in this type of research will be biased towards mothers while excluding the experiences of the fathers. 
  • Publication Bias

Peer-reviewed journals and other published academic papers, in many cases, have some degree of bias. This bias is often imposed on them by the publication criteria for research papers in a particular field. Researchers work their papers to meet these criteria and may ignore information or methods that are not in line with them. 

For example, research papers in quantitative research are more likely to be published if they contain statistical information. Qualitative studies, on the other hand, are more likely to go unpublished when the study methodology is not described in sufficient depth or the findings are not clearly presented.

  • Analysis Bias

This is a type of research bias that creeps in during data processing. Many times, when sorting and analyzing data, the researcher may focus on data samples that confirm his or her thoughts, expectations, or personal experiences; that is, data that favors the research hypothesis. 

This means that the researcher, whether deliberately or unintentionally, ignores data samples that are inconsistent and suggest research outcomes that differ from the hypothesis. Analysis bias can be far-reaching because it alters the research outcomes significantly and provides a false presentation of what is obtainable in the research environment.

Example of Analysis Bias  

While researching cannabis, a researcher pays attention to data samples that reinforce the negative effects of cannabis while ignoring data that suggests positives.
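A toy sketch (with invented ratings) of how this kind of cherry-picking distorts a result: if the analyst keeps only the observations that "fit" the expected negative effect, the summary statistic flips sign entirely.

```python
import statistics

# Invented effect ratings from -5 (strongly negative) to +5 (strongly positive)
scores = [-4, -3, -1, 0, 1, 2, 2, 3, 4, 4]

honest_mean = statistics.mean(scores)   # uses all of the data
kept = [s for s in scores if s < 0]     # analyst keeps only "confirming" data
biased_mean = statistics.mean(kept)

print(honest_mean)   # 0.8  (slightly positive overall)
print(biased_mean)   # about -2.67 (looks clearly negative)
```

The raw data suggest a mildly positive effect; the filtered data "prove" a strongly negative one. The arithmetic is trivial, which is the point: no sophisticated statistics are needed for analysis bias to manufacture a conclusion.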

  • Data Collection Bias

Data collection bias is also known as measurement bias, and it happens when the researcher’s personal preferences or beliefs affect how data samples are gathered in the systematic investigation. Data collection bias happens in both qualitative and quantitative research methods.

In quantitative research, data collection bias can occur when you use a data-gathering tool or method that is not suitable for your research population. For example, asking individuals who do not have access to the internet to complete a survey via email or your website.

In qualitative research, data collection bias happens when you ask bad survey questions during a semi-structured or unstructured interview . Bad survey questions are questions that nudge the interviewee towards implied assumptions. Leading and loaded questions are common examples of bad survey questions. 
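As a rough illustration only (the phrase list below is invented and far from exhaustive), a reviewer could mechanically screen draft survey questions for common leading constructions before human review:

```python
# Hypothetical, non-exhaustive phrasings that presuppose an answer
LEADING_PHRASES = (
    "don't you agree",
    "wouldn't you say",
    "isn't it true",
    "how much do you love",
)

def looks_leading(question: str) -> bool:
    """Flag a draft question containing a known leading construction."""
    q = question.lower()
    return any(phrase in q for phrase in LEADING_PHRASES)

print(looks_leading("Don't you agree that the new policy is unfair?"))  # True
print(looks_leading("How do you feel about the new policy?"))           # False
```

A flagged question is only a prompt for rewording; neutral alternatives ("How do you feel about…?") still require human judgment, since leading questions can be subtle and context-dependent.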

  • Procedural Bias

Procedural bias is a type of research bias that happens when the participants in a study are not given enough time to complete surveys. The result is that respondents end up providing half-thoughts and incomplete information that does not provide a true representation of their thoughts.

There are different ways to subject respondents to procedural bias. For instance, asking respondents to complete a survey quickly to access an incentive may force them to fill in false information simply to get things over with.

Example of Procedural Bias

  • Asking employees to complete an employee feedback survey during break time. This timeframe puts respondents under undue pressure and can affect the validity of their responses.  

Bias in Quantitative Research

In quantitative research, the researcher often tries to deny the existence of any bias by eliminating every type of bias in the systematic investigation. Sampling bias is one of the most common types of quantitative research bias, and it is concerned with the samples you omit and/or include in your study.

Types of Quantitative Research Bias

  • Design Bias

Design bias occurs in quantitative research when the research methods or processes alter the outcomes or findings of a systematic investigation. It can occur when the experiment is being conducted or during the analysis of the data to arrive at a valid conclusion.

Many times, design biases result from the failure of the researchers to take into account the likely impact of bias in the research they conduct. This makes the researcher ignore the needs of the research context and instead prioritize his or her preferences.

  • Sampling Bias

Sampling bias in quantitative research occurs when some members of the research population are systematically excluded from the data sample during research. It also means that some groups in the research population are more likely to be selected in a sample than the others. 

Sampling bias in quantitative research mainly occurs in systematic and random sampling. For example, a study about breast cancer that includes only male participants can be said to have sampling bias, since it excludes the female group in the research population.
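The effect is easy to simulate (invented population, plain Python): a convenience sample drawn from one homogeneous corner of the population badly misestimates a proportion that a random sample recovers.

```python
import random

random.seed(7)  # fixed seed so the sketch is reproducible

# Hypothetical population of 1,000 people; 1 = holds opinion A (true rate: 70%)
population = [1] * 700 + [0] * 300

# Random sampling: every member has an equal chance of selection
random_sample = random.sample(population, 100)

# Convenience sampling: the first 100 people all happen to hold opinion A
biased_sample = population[:100]

print(sum(random_sample) / 100)  # close to the true 0.7
print(sum(biased_sample) / 100)  # 1.0 -- wildly overstates opinion A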

Bias in Qualitative Research

In qualitative research, the researcher accepts and acknowledges the bias without trying to deny its existence. This makes it easier for the researcher to clearly define the inherent biases and outline its possible implications while trying to minimize its effects. 

Qualitative research defines bias in terms of how valid and reliable the research results are. Bias in qualitative research distorts the research findings and also provides skewed data that defeats the validity and reliability of the systematic investigation. 

Types of Bias in Qualitative Research

  • Bias from Moderator

The interviewer or moderator in qualitative data collection can impose several biases on the process. The moderator can introduce bias in the research based on his or her disposition, expression, tone, appearance, idiolect, or relation with the research participants. 

  • Biased Questions

The framing and presentation of the questions during the research process can also lead to bias. Biased questions like leading questions , double- barrelled questions, negative questions, and loaded questions , can influence the way respondents provide answers and the authenticity of the responses they present. 

The researcher must identify and eliminate biased questions in qualitative research or rephrase them if they cannot be taken out altogether. Remember that questions form the main basis through which information is collected in research and so, biased questions can lead to invalid research findings. 

  • Biased Reporting

Biased reporting is yet another challenge in qualitative research. It happens when the research results are altered due to personal beliefs, customs, attitudes, culture, and errors among many other factors. It also means that the researcher must have analyzed the research data based on his/her beliefs rather than the views perceived by the respondents. 

Bias in Psychology

Cognitive biases can affect research and outcomes in psychology. For example, during a stop-and-search exercise, law enforcement agents may profile certain appearances and physical dispositions as law-abiding. Due to this cognitive bias, individuals who do not exhibit these outlined behaviors can be wrongly profiled as criminals. 

Another example of cognitive bias in psychology can be observed in the classroom. During a class assessment, an invigilator who is looking for physical signs of malpractice might mistakenly classify other behaviors as evidence of malpractice; even though this may not be the case. 

Bias in Market Research

There are 5 common biases in market research – social desirability bias, habituation bias, sponsor bias, confirmation bias, and cultural bias. Let’s find out more about them.

  • Social desirability bias happens when respondents fill in incorrect information in market research surveys because they want to be accepted or liked. It happens when respondents are seeking social approval and so, fail to communicate how they truly feel about the statement or question being considered. 

A good example will be market research to find out preferred sexual enhancement methods for adults. Some persons may not want to admit that they use sexual enhancement drugs to avoid criticism or disapproval.

  • Habituation bias happens when respondents give similar answers to questions that are structured in the same way. Lack of variety in survey questions can make respondents lose interest, become non-responsive, and simply regurgitate answers.  

For example, multiple-choice questions with the same set of answer options can cause habituation bias in your survey. What you get is that respondents just choose answer options without reflecting on how well their choices represent their thoughts, feelings, and ideas. 

  • Sponsor bias takes place when respondents have an idea of the brand or organization that is conducting the research. In this case, their perceptions, opinions, experiences, and feelings about the sponsor may influence how they answer the questions about that particular brand. 

For example, let’s say Formplus is carrying out a study to find out what the market’s preferred form builder is. Respondents may mention the sponsor for the survey (Formplus) as their preferred form builder out of obligation; especially when the survey has some incentives.

  • Confirmation bias happens when the overall research process is aimed at confirming the researcher’s perception or hypothesis about the research subjects. In other words, the research process is merely a formality to reinforce the researcher’s existing beliefs. 

Electoral polls often fall into the confirmation bias trap. For example, civil society organizations that are in support of one candidate can create a survey that paints the opposing candidate in a bad light to reinforce beliefs about their preferred candidate. 

  • Cultural bias arises from the assumptions we have about other cultures based on the values and standards we have for our own culture . For example, when asked to complete a survey about our culture, we may tilt towards positive answers. In the same vein, we are more likely to provide negative responses in a survey for a culture we do not like. 

How to Identify Bias in a Research

  • Pay attention to research design and methods. 
  • Observe the data collection process. Does it lean overwhelmingly towards a particular group in the survey population? 
  • Look out for bad survey questions like loaded questions and negative questions. 
  • Observe the data sample you have to confirm if it is a fair representation of your research population.

How to Avoid Research Bias 

  • Gather data from multiple sources: Be sure to collect data samples from the different groups in your research population. 
  • Verify your data: Before going ahead with the data analysis, try to check in with other data sources, and confirm if you are on the right track. 
  • If possible, ask research participants to help you review your findings: Ask the people who provided the data whether your interpretations seem to be representative of their beliefs. 
  • Check for alternative explanations: Try to identify and account for alternative reasons why you may have collected data samples the way you did. 
  • Ask other members of your team to review your results: Ask others to review your conclusions. This will help you see things that you missed or identify gaps in your argument that need to be addressed.

How to Create Unbiased Research Surveys with Formplus 

Formplus has different features that would help you create unbiased research surveys. Follow these easy steps to start creating your Formplus research survey today: 

  • Go to your Formplus dashboard and click on the “create new form” button. You can access the Formplus dashboard by signing into your Formplus account here. 

research bias type

  • After you click on the “create new form” button, you’d be taken to the form builder. This is where you can add different fields into your form and edit them accordingly. Formplus has over 30 form fields that you can simply drag and drop into your survey including rating fields and scales. 

logo-testing-survey-builder

  • After adding form fields and editing them, save your form to access the builder’s customization features. You can tweak the appearance of your form here by changing the form theme and adding preferred background images to it. 

research bias type

  • Copy the form link and share it with respondents. 

research bias type

Conclusion 

The first step to dealing with research bias is having a clear idea of what it is and also, being able to identify it in any form. In this article, we’ve shared important information about research bias that would help you identify it easily and work on minimizing its effects to the barest minimum. 

Formplus has many features and options that can help you deal with research bias as you create forms and questionnaires for quantitative and qualitative data collection. To take advantage of these, you can sign up for a Formplus account here. 

Logo

Connect to Formplus, Get Started Now - It's Free!

  • examples of research bias
  • types of research bias
  • what is research bias
  • busayo.longe

Formplus

You may also like:

Quota Sampling: Definition, Types, Pros, Cons & Examples

In this article, we’ll explore the concept of quota sampling, its types, and some real-life examples of it can be applied in rsearch

research bias type

How to do a Meta Analysis: Methodology, Pros & Cons

In this article, we’ll go through the concept of meta-analysis, what it can be used for, and how you can use it to improve how you...

Systematic Errors in Research: Definition, Examples

In this article, we are going to explore the types of systematic error, the causes of this error, how to identify, and how to avoid it.

Selection Bias in Research: Types, Examples & Impact

In this article, we’ll discuss the effects of selection bias, how it works, its common effects and the best ways to minimize it.

Formplus - For Seamless Data Collection

Collect data the right way with a versatile data collection tool. try formplus and transform your work productivity today..

9 types of research bias and how to avoid them

Nine  Types Of Bias And How To Avoid Them

To reduce the risk of bias in qual, researchers must focus on the human elements of the research process in order to identify and avoid the nine core types of bias.

Editor’s note: Rebecca Sarniak is a moderating services specialist iModerate , a Denver research firm.

Seasoned research experts know that bias can find its way into any research program – it’s naïve to think that any research could be 100 percent free from it. But when does bias become a problem? And how do we identify and control the sources of bias to deliver the highest-quality research possible?

The goal of reducing bias isn’t to make everyone the same but to make sure that questions are thoughtfully posed and delivered in a way that allows respondents to reveal their true feelings without distortions. The risk of bias exists in all components of qualitative research and can come from the questions, the respondents and the moderator. To reduce bias – and deliver better research – let’s explore its primary sources.  

When we focus on the human elements of the research process and look at the nine core types of bias – driven from the respondent, the researcher or both – we are able to minimize the potential impact that bias has on qualitative research.

Respondent bias

1. Acquiescence bias: Also known as “yea-saying” or the friendliness bias, acquiescence bias occurs when a respondent demonstrates a tendency to agree with and be positive about whatever the moderator presents. In other words, they think every idea is a good one and can see themselves liking, buying and acting upon every situation that is proposed. Some people have acquiescent personalities, while others acquiesce because they perceive the interviewer to be an expert. Acquiescence is the easy way out, as it takes less effort than carefully weighing each option. This path escalates if fatigue sets in – some people will agree just to complete the interview. To avoid it, researchers must replace questions that imply there is a right answer with those that focus on the respondent’s true point of view.

2. Social desirability bias 1 : This bias involves respondents answering questions in a way that they think will lead to being accepted and liked. Regardless of the research format, some people will report inaccurately on sensitive or personal topics to present themselves in the best possible light. Researchers can minimize this bias by focusing on unconditional positive regard. This includes phrasing questions to show it’s okay to answer in a way that is not socially desirable. Indirect questioning – asking about what a third-party thinks, feels and how they will behave – can also be used for socially sensitive questions. This allows respondents to project their own feelings onto others and still provide honest, representative answers.

3. Habituation 2 : In cases of habituation bias, respondents provide the same answers to questions that are worded in similar ways. This is a biological response: being responsive and paying attention takes a lot of energy. To conserve energy, our brains habituate or go on autopilot. Respondents often show signs of fatigue, such as mentioning that the questions seem repetitive, or start giving similar responses across multiple questions. Moderators must keep the engagement conversational and continue to vary question wording to minimize habituation.

4. Sponsor bias 3 : When respondents know – or suspect – the sponsor of the research, their feelings and opinions about that sponsor may bias their answers. Respondents’ views on the sponsoring organization’s mission or core beliefs, for example, can influence how they answer all questions related to that brand. This is an especially important type of bias for moderators to navigate: they should maintain a neutral stance, avoid reinforcing positive respondent feedback in a way that could be construed as affiliation with the brand and reiterate, when possible, their independent status.

Researcher bias

5. Confirmation bias 4 : One of the longest-recognized and most pervasive forms of bias in research, confirmation bias occurs when a researcher forms a hypothesis or belief and uses respondents’ information to confirm that belief. This takes place in the moment as researchers judge and weigh responses that confirm their hypotheses as relevant and reliable, while dismissing evidence that doesn’t support a hypothesis. Confirmation bias then extends into analysis, with researchers tending to remember points that support their hypothesis and points that disprove other hypotheses. Confirmation bias is deeply seated in the natural tendencies people use to understand and filter information, which often lead to focusing on one hypothesis at a time. To minimize confirmation bias, researchers must continually reevaluate impressions of respondents and challenge preexisting assumptions and hypotheses.

6. Culture bias 5 : Assumptions about motivations and influences that are based on our cultural lens (on the spectrum from ethnocentricity to cultural relativity) create culture bias. Ethnocentrism is judging another culture solely by the values and standards of one's own culture. Cultural relativism is the principle that an individual's beliefs and activities should be understood by others in terms of that individual's own culture. To minimize culture bias, researchers must move toward cultural relativism by showing unconditional positive regard and being cognizant of their own cultural assumptions, while recognizing that complete cultural relativism is never fully achievable.

7. Question-order bias: One question can influence answers to subsequent questions, creating question-order bias. Respondents are primed by the words and ideas presented in earlier questions, which shape their thoughts, feelings and attitudes on the questions that follow. For example, if a respondent rates one product a 10 and is then asked to rate a competitive product, they will make a rating that is relative to the 10 they just provided. While question-order bias is sometimes unavoidable, asking general questions before specific ones, unaided before aided and positive before negative will minimize it.

8. Leading questions and wording bias 6 : Elaborating on a respondent’s answer puts words in their mouth and, while leading questions and loaded wording aren’t types of bias themselves, they lead to bias or result from it. Researchers fall into this trap when they are trying to confirm a hypothesis or build rapport, or when they overestimate their understanding of the respondent. To minimize this bias, ask questions that use the respondents’ language and inquire about the implications of a respondent’s thoughts and reactions. Avoid summarizing what respondents said in your own words and do not take what they said further. Try not to assume relationships between a feeling and a behavior.

9. The halo effect 7 : Moderators and respondents have a tendency to see something or someone in a certain light because of a single, positive attribute. There are several cognitive reasons for the halo effect, so researchers must work to address it on many fronts. For example, a moderator can make assumptions about a respondent because of one positive answer they’ve provided. Moderators should reflect on their assumptions about each respondent: Why are you asking each question? What is the assumption behind it? Additionally, respondents may rate or respond to a stimulus positively overall due to one factor. Researchers should address all questions about one brand before asking for feedback on a second brand, as when respondents are required to switch back and forth rating two brands, they are likely to project their opinion on one attribute to their opinion of the brand as a whole.

Bias in qualitative research can be minimized if you know what to look for and how to manage it. By asking quality questions at the right time and remaining aware and focused on sources of bias, researchers can enable the truest respondent perspectives and ensure that the resulting research lives up to the highest qualitative standards.  

1 Dodou, D., & de Winter, J. C. F. (2014). Social desirability is the same in offline, online and paper surveys: A meta-analysis. Computers in Human Behavior, 36, 487–495. doi:10.1016/j.chb.2014.04.005. https://www.iser.essex.ac.uk/research/publications/working-papers/iser/2013-04.pdf

2 Vaney, N., Dixit, A., Ghosh, T., Gupta, R., & Bhatia, M. S. Habituation of event related potentials: A tool for assessment of cognition in headache patients. Departments of Physiology and Psychiatry, University College of Medical Sciences & G.T.B. Hospital, Dilshad Garden, Delhi. http://www.ncbi.nlm.nih.gov/pmc/articles/PMC2605404/

3 Malhotra, N., Hall, J., Shaw, M., & Oppenheim, P. Essentials of Marketing Research: An Applied Orientation, p. 227. http://www.readexresearch.com/understanding-survey-data/

4 http://psy2.ucsd.edu/~mckenzie/nickersonConfirmationBias.pdf; http://www.anderson.ucla.edu/faculty/keith.chen/negot.%20papers/RabinSchrag_ConfirmBias99.pdf

5 Pirkey, W. (2015, May 6). Personal interview.

6 Malhotra, N., Hall, J., Shaw, M., & Oppenheim, P. Essentials of Marketing Research: An Applied Orientation, p. 227.

7 Luttin, L. V. Halo effects in consumer theories. Master's thesis, Erasmus University Rotterdam. thesis.eur.nl/pub/11759/Luttin,%20L.V.%20(352879ll).pdf

Research bias: What it is, Types & Examples

Research bias occurs when the researchers conducting a study skew its process or findings, intentionally or unintentionally, toward a particular outcome.

A researcher sometimes unintentionally or deliberately influences the process while executing a systematic inquiry. This is research bias, and it can distort your results just like any other sort of bias.

There are no hard and fast rules about when bias appears, which simply means that it can occur at any stage of a study. Experimental mistakes and a failure to consider all relevant factors can both introduce research bias.

Research bias is one of the most common causes of study results with low credibility. Because of its informal and often subtle nature, you must be cautious when characterizing bias in research, and to reduce or prevent its occurrence, you need to be able to recognize its characteristics.

This article will cover what it is, its type, and how to avoid it.

Content Index

  • What is research bias?
  • How does research bias affect the research process?
  • Types of research bias with examples
  • How QuestionPro helps in reducing bias in a research process

What is research bias?

Research bias, often known as experimenter bias, occurs when the researchers conducting a study skew its findings toward a specific outcome.

Bias is a characteristic of research that leads it to rely on experience and judgment rather than data alone. The most important thing to know about bias is that it is unavoidable in many fields. Understanding research bias and reducing the effects of biased views is an essential part of any research planning process.

For example, in social research it is much easier to be drawn toward a certain point of view about your subjects, compromising fairness.

How does research bias affect the research process?

Research bias can seriously undermine the research process, weakening its integrity and leading to misleading or erroneous results. Here are some of the ways this bias might affect the research process:

Distorted research design

When bias is present, study results can be skewed or wrong. It can make the study less trustworthy and valid. If bias affects how a study is set up, how data is collected, or how it is analyzed, it can cause systematic mistakes that move the results away from the true or unbiased values.

Invalid conclusions

It can make it hard to believe that the findings of a study are correct. Biased research can lead to unjustified or wrong claims because the results may not reflect reality or give a complete picture of the research question.

Misleading interpretations

Bias can lead to inaccurate interpretations of research findings. It can alter the overall comprehension of the research issue. Researchers may be tempted to interpret the findings in a way that confirms their previous assumptions or expectations, ignoring alternate explanations or contradictory evidence.

Ethical concerns

This bias poses ethical considerations. It can have negative effects on individuals, groups, or society as a whole. Biased research can misinform decision-making processes, leading to ineffective interventions, policies, or therapies.

Damaged credibility

Research bias undermines scientific credibility. Biased research can damage public trust in science. It may reduce reliance on scientific evidence for decision-making.

Types of research bias with examples

Bias can be seen in practically every aspect of quantitative research and qualitative research, and it can come from both the survey developer and the participants. Of all the types of bias in research, those that come directly from the survey maker are the easiest to deal with. Let’s look at some of the most typical research biases.


Design bias

Design bias happens when the organization of a study and its research methods introduce bias that the researcher fails to account for. The researcher must demonstrate that they recognize this and have tried to mitigate its influence.

Another form of design bias develops after the research is completed and the results are analyzed: it occurs when the way findings are presented no longer reflects the researchers’ original concerns, which is all too common these days.

For example, a researcher running a survey containing questions about health benefits may overlook the limitations of the sample group – perhaps the group tested was all male, or all over a particular age.

Selection bias or sampling bias

Selection bias occurs when the volunteers chosen to represent your research population exclude people with different experiences.

In research, selection bias manifests itself in a variety of ways. When the sampling method favors certain members of the population over others, this is known as sampling bias. For this reason, selection bias is also referred to as sampling bias.

For example, research on a disease that depended heavily on white male volunteers cannot be generalized to the full community, including women and people of other races or communities.

Procedural bias

Procedural bias is a sort of research bias that occurs when survey respondents are given insufficient time to complete surveys. As a result, participants are forced to submit half-thoughts with misinformation, which does not accurately reflect their thinking.

Another form of procedural bias is relying on participants who feel compelled to take part, as they are likely to rush through the survey in order to get back to other tasks.

For example, if you ask your employees to complete a survey during their break, they may feel pressured to finish quickly, which may compromise the validity of their responses.

Publication or reporting bias

Publication bias, also known as reporting bias, refers to a situation in which favorable outcomes are more likely to be reported than negative or null ones. Analysis bias can also make reporting bias more likely.

The publication standards for research articles in a given field frequently reflect this bias. Researchers sometimes choose not to disclose their outcomes if they believe the data do not support their theory.

For example, of seven studies conducted on the antidepressant drug reboxetine, only one was published; the others remained unpublished.

Measurement or data collection bias

A defect in the data collection process and measuring technique causes measurement bias. Data collecting bias is also known as measurement bias. It occurs in both qualitative and quantitative research methodologies. 

Measurement bias can arise in quantitative research when you use a data collection approach that is not appropriate for your research population. Instrument bias is one of the most common forms of measurement bias in quantitative investigations: a defective scale, for example, would generate instrument bias and invalidate the experimental process.

For example, surveying people by email or on your website when part of your research population does not have internet access would bias the results.

Data collection bias occurs in qualitative research when inappropriate survey questions are asked during an unstructured interview. Bad survey questions are those that lead the interviewee to make presumptions. Subjects are frequently hesitant to provide socially incorrect responses for fear of criticism.

For example, a subject may avoid giving answers in an interview that could make them appear homophobic or racist.

Some more types of bias in research include the ones listed here. Researchers must understand these biases and reduce them through rigorous study design, transparent reporting, and critical evidence review: 

  • Confirmation bias: Researchers often search for, evaluate, and prioritize material that supports their existing hypotheses or expectations, ignoring contradictory data. This can lead to a skewed perception of results and perhaps biased conclusions.
  • Cultural bias: Cultural bias arises when cultural norms, attitudes, or preconceptions influence the research process and the interpretation of results.
  • Funding bias: Funding bias takes place when research is backed by sponsors with vested interests. It can skew research design, data collection, analysis, and interpretation toward the funding source.
  • Observer bias: Observer bias arises when the researcher or observer affects participants’ replies or behavior. Collecting data might be biased by accidental clues, expectations, or subjective interpretations.

LEARN ABOUT: Theoretical Research

How QuestionPro helps in reducing bias in a research process

QuestionPro offers several features and functionalities that can contribute to reducing bias in the research process. Here’s how QuestionPro can help:

Randomization

QuestionPro allows researchers to randomize the order of survey questions or response alternatives. Randomization helps to remove order effects and limit bias from the order in which participants encounter the items.
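QuestionPro handles randomization internally; as a rough sketch of the underlying idea (the function and question names below are illustrative, not QuestionPro’s API), per-respondent shuffling can look like this in Python:

```python
import random

def randomized_question_order(questions, respondent_id):
    """Return a per-respondent shuffled copy of the question list.

    Seeding with the respondent ID keeps each respondent's order stable
    across sessions while varying the order between respondents.
    """
    rng = random.Random(respondent_id)  # deterministic per respondent
    shuffled = list(questions)          # don't mutate the caller's list
    rng.shuffle(shuffled)
    return shuffled

# Hypothetical survey items
questions = ["Q1: Price", "Q2: Quality", "Q3: Support", "Q4: Overall"]
order = randomized_question_order(questions, respondent_id=42)
```

Because the order varies from respondent to respondent, any priming effect of one question on the next averages out across the sample instead of systematically shifting the results.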

Branching and skip logic

Branching and skip logic capabilities in QuestionPro allow researchers to design customized survey pathways based on participants’ responses. It enables tailored questioning, ensuring that only pertinent questions are asked of participants. Bias generated by such inquiries is reduced by avoiding irrelevant or needless questions.

Diverse question types

QuestionPro supports a wide range of question types, including multiple-choice, Likert scale, matrix, and open-ended questions. Researchers can choose the most relevant question types to gather unbiased data while avoiding leading or suggestive questions that may affect participants’ responses.

Anonymous responses

QuestionPro enables researchers to collect anonymous responses, protecting the confidentiality of participants. It can encourage participants to provide more unbiased and equitable feedback, especially when dealing with sensitive or contentious issues.

Data analysis and reporting

QuestionPro has powerful data analysis and reporting options, such as charts, graphs, and statistical analysis tools. These properties allow researchers to examine and interpret obtained data objectively, decreasing the role of bias in interpreting results.

Collaboration and peer review

QuestionPro supports peer review and researcher collaboration. It helps uncover and overcome biases in research planning, questionnaire formulation, and data analysis by involving several researchers and soliciting external opinions.

You must understand biases in research and how to deal with them. Knowing the different sorts of bias allows you to identify them readily, and having a clear picture of each is necessary to recognize it in any form.

QuestionPro provides many research tools and settings that can assist you in dealing with research bias. Try QuestionPro today to undertake your original bias-free quantitative or qualitative research.


Frequently Asked Questions

How does research bias affect findings?

Research bias affects the validity and dependability of your research’s findings, resulting in inaccurate interpretations of the data and incorrect conclusions.

Why should bias be avoided in research?

Bias should be avoided in research to ensure that findings are accurate, valid, and objective.

How can research bias be avoided?

To avoid research bias, researchers should take proactive steps throughout the research process, such as developing a clear research question and objectives, designing a rigorous study, following standardized protocols, and so on.


8 Types of Research Bias and How to Avoid Them

Appinio Research · 18.10.2023 · 39min read


Curious about how to ensure the integrity of your research? Ever wondered how research bias can impact your findings? How might it affect your data-driven decisions?

Join us on a journey through the intricate landscape of unbiased research as we delve deep into strategies and real-world examples to guide you toward more reliable insights.

What is Bias in Research?

Research bias, often simply referred to as bias, is a systematic error or deviation from the true results or inferences in research. It occurs when the design, conduct, or interpretation of a study systematically skews the findings in a particular direction, leading to inaccurate or misleading results. Bias can manifest in various forms and at different stages of the research process, and it can compromise the validity and reliability of research outcomes.

Key Aspects of Research Bias

  • Systematic Error: Bias is not a random occurrence but a systematic error that consistently influences research outcomes.
  • Influence on Results: Bias can lead to overestimating or underestimating effects, associations, or relationships studied.
  • Unintentional or Intentional: Bias can be unintentional, stemming from flaws in study design, data collection, or analysis. In some cases, it can also be introduced intentionally, leading to deliberate distortion of results.
  • Impact on Decision-Making: Research bias can have significant consequences, affecting decisions in fields ranging from healthcare and policy to marketing and academia.

Understanding and recognizing the various types and sources of bias is crucial for researchers to minimize its impact and produce credible, objective, and actionable research findings.

Importance of Avoiding Research Bias

Avoiding research bias is paramount for several compelling reasons, as it directly affects the quality and integrity of research outcomes. Here's why researchers and decision-makers should prioritize bias mitigation:

  • Credibility and Trustworthiness: Research bias undermines the credibility and trustworthiness of research findings. Biased results can erode public trust, damage an organization's reputation, and hinder the acceptance of research in the scientific community.
  • Informed Decision-Making: Research serves as the foundation for informed decision-making in various fields. Bias can lead to erroneous conclusions, potentially leading to misguided policies, ineffective treatments, or poor business strategies.
  • Resource Allocation: Bias can result in the misallocation of valuable resources. When resources are allocated based on biased research, they may not effectively address the intended issues or challenges.
  • Ethical Considerations: Introducing bias, whether intentionally or unintentionally, raises ethical concerns in research. Ethical research practices demand objectivity, transparency, and fairness in the pursuit of knowledge.
  • Advancement of Knowledge: Research contributes to the advancement of knowledge and innovation. Bias hinders scientific progress by introducing errors and distorting the true nature of phenomena, hindering the development of accurate theories and solutions.
  • Public Health and Safety: In fields like healthcare, bias can have life-and-death implications. Biased medical research can lead to the adoption of less effective or potentially harmful treatments, putting patient health and safety at risk.
  • Economic Impact: In business and economics, biased research can result in poor investment decisions, market strategies, and financial losses. Avoiding bias is essential for achieving sound economic outcomes.

The importance of avoiding research bias cannot be overstated. Recognizing bias, implementing strategies to mitigate it, and promoting transparent and unbiased research practices are essential steps to ensure that research contributes meaningfully to advancing knowledge, informed decision-making, and the well-being of individuals and society as a whole.

Common Types of Research Bias

Research bias can manifest in various forms, each with unique characteristics and implications. Understanding these common types of research bias is essential for recognizing and mitigating their effects on your research.

Selection Bias

Selection bias occurs when the sample used in a study does not represent the target population, leading to distorted results. It can happen when certain groups are systematically more or less likely to be included in the study, introducing bias.

Causes of Selection Bias:

  • Volunteer Bias: Participants self-select to participate in a study, and their motivations or characteristics differ from those who do not volunteer.
  • Convenience Sampling: Researchers choose participants who are readily available but may not be representative of the broader population.
  • Non-Response Bias: Occurs when a significant portion of selected participants does not respond or drops out during the study, potentially due to differing characteristics.

Mitigation Strategies:

  • Random Sampling: Select participants randomly from the target population to ensure equal representation.
  • Stratified Sampling: Divide the population into subgroups and sample proportionally from each subgroup.
  • Use of Control Groups: Compare the study group to a control group to help account for potential selection bias.
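To make the random and stratified sampling strategies above concrete, here is a minimal Python sketch of proportional stratified sampling; the population and helper names are hypothetical:

```python
import random

def stratified_sample(population, stratum_of, fraction, seed=0):
    """Proportional stratified sampling: draw the same fraction from every
    stratum, so no subgroup is over- or under-represented in the sample."""
    rng = random.Random(seed)
    strata = {}
    for unit in population:
        strata.setdefault(stratum_of(unit), []).append(unit)
    sample = []
    for members in strata.values():
        k = max(1, round(len(members) * fraction))  # at least one per stratum
        sample.extend(rng.sample(members, k))
    return sample

# Hypothetical workforce: 80 staff-level and 20 management-level employees
population = [("staff", i) for i in range(80)] + [("manager", i) for i in range(20)]
sample = stratified_sample(population, stratum_of=lambda u: u[0], fraction=0.1)
# A 10% proportional sample keeps the 80/20 ratio: 8 staff and 2 managers
```

A simple random sample of 10 from this population could, by chance, contain no managers at all; stratifying guarantees each subgroup appears in proportion to its size.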

Sampling Bias

Sampling bias arises when the individuals or items in your sample are not chosen randomly or are not representative of the broader population. It can lead to inaccurate generalizations and skewed conclusions.

Causes of Sampling Bias:

  • Sampling Frame Issues: When the list or database used to select the sample is incomplete or outdated.
  • Self-Selection: Participants choose to be part of the sample, introducing bias if their motivations differ from non-participants.
  • Undercoverage: When certain groups are underrepresented in the sample due to difficulties in reaching or including them.

Mitigation Strategies:

  • Random Sampling: Employ random selection methods to ensure every individual or item has an equal chance of being included.
  • Stratified Sampling: Divide the population into homogeneous subgroups and sample proportionally from each subgroup.
  • Quota Sampling: Set quotas for specific demographics to ensure representation.

Measurement Bias

Measurement bias occurs when the methods used to collect data are inaccurate or systematically flawed, leading to incorrect conclusions. This bias can affect both quantitative and qualitative data .

Causes of Measurement Bias:

  • Instrument Flaws: When the measurement tools used are inherently unreliable or imprecise.
  • Data Collection Errors: Mistakes made during data collection, such as misinterpretation of responses or inconsistent recording.
  • Response Bias: Participants may provide socially desirable responses, leading to measurement errors. Various related biases also arise from the structure of the questionnaire itself, which can psychologically influence participants' answers.

Mitigation Strategies:
  • Use Reliable Instruments: Select measurement tools that have been validated and are known for their accuracy.
  • Pilot Testing: Test data collection procedures to identify and address potential sources of measurement bias.
  • Blinding: Keep researchers unaware of specific measurements to minimize subjectivity.

Reporting Bias

Reporting bias involves selectively reporting results that support a particular hypothesis while ignoring or downplaying contrary findings. It can lead to a skewed representation of the evidence.

Causes of Reporting Bias:

  • Publication Pressure: Researchers may prioritize publishing positive or significant results, leaving negative or inconclusive findings unreported.
  • Editorial Bias: Journals may preferentially accept studies with significant results, discouraging the publication of less exciting findings.
  • Confirmation Bias: Researchers may unintentionally focus on, emphasize, or interpret data that aligns with their hypotheses.

Mitigation Strategies:

  • Transparent Reporting: Share all research findings, whether they support your hypotheses or not.
  • Pre-Registration: Register your research design and hypotheses before data collection, reducing the temptation to selectively report.
  • Peer Review: Engage in peer review to ensure a balanced and comprehensive presentation of your research.

Confirmation Bias

Confirmation bias is the tendency to seek out or interpret information in a way that confirms pre-existing beliefs or expectations. It can cloud objectivity and lead to the misinterpretation of data.

Causes of Confirmation Bias:

  • Cognitive Biases: Researchers may unconsciously filter or interpret data in a way that aligns with their preconceptions.
  • Selective Information Search: Researchers might seek out information that supports their hypotheses while ignoring contradictory evidence.
  • Interpretation Bias: Even when presented with neutral data, researchers may interpret it to fit their expectations.

Mitigation Strategies:

  • Blinding: Keep researchers unaware of the study's hypotheses to prevent bias in data interpretation.
  • Objectivity Training: Train researchers to approach research with open minds and to recognize and challenge their biases.
  • Diverse Perspectives: Collaborate with colleagues with different viewpoints to reduce the impact of confirmation bias.

Publication Bias

Publication bias occurs when studies with positive or significant results are more likely to be published, skewing the overall literature. Unpublished studies with negative or null findings remain hidden.

Causes of Publication Bias:

  • Journal Preferences: Journals may favor publishing studies with significant results, leading to the underrepresentation of negative or null findings.
  • Researcher Publication Bias: Researchers may prioritize submitting and resubmitting studies with positive results for publication.

Mitigation Strategies:

  • Publication of Negative Results: Encourage publishing studies with negative or null findings.
  • Meta-analysis: Combine results from multiple studies to assess the overall effect, considering both published and unpublished studies.
  • Journal Policies: Support journals that promote balanced publication practices.
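The meta-analysis idea above can be sketched in a few lines. This is a minimal inverse-variance (fixed-effect) pooling illustration only; the effect sizes and standard errors are made-up values, not data from any real study:

```python
# Fixed-effect (inverse-variance) meta-analysis sketch.
# Effect sizes and standard errors below are invented for illustration.

def fixed_effect_meta(effects, std_errors):
    """Pool study effects, weighting each by the inverse of its variance."""
    weights = [1.0 / se ** 2 for se in std_errors]
    pooled = sum(w * e for w, e in zip(weights, effects)) / sum(weights)
    pooled_se = (1.0 / sum(weights)) ** 0.5
    return pooled, pooled_se

# Three published studies (positive effects) plus one unpublished null study:
effects    = [0.40, 0.35, 0.50, 0.02]
std_errors = [0.10, 0.12, 0.15, 0.08]

pooled, se = fixed_effect_meta(effects, std_errors)
print(f"all studies: pooled effect = {pooled:.3f} (SE = {se:.3f})")

# Dropping the unpublished null study inflates the pooled estimate,
# which is exactly how publication bias skews the literature:
pooled_pub, _ = fixed_effect_meta(effects[:3], std_errors[:3])
print(f"published only: pooled effect = {pooled_pub:.3f}")
```

With these illustrative numbers, the published-only estimate comes out markedly larger (roughly 0.40 versus 0.24), showing why meta-analyses should hunt for unpublished trials.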

Recall Bias

Recall bias arises when participants in a study inaccurately remember or report past events or experiences. It can compromise the accuracy of historical data.

Causes of Recall Bias:

  • Memory Decay: Memories naturally fade over time, making it challenging to recall distant events accurately.
  • Social Desirability Bias: Participants may provide responses they believe are socially acceptable or favorable.
  • Leading Questions: The phrasing of questions can influence participants' recollections.

Mitigation Strategies:

  • Use of Objective Data Sources: Whenever possible, rely on documented records, medical charts, or other objective sources of information.
  • Minimize Leading Questions: Craft questions carefully to avoid suggesting specific responses.
  • Consider Timing: Be aware of how the timing of data collection may affect participants' recall.

Observer Bias

Observer bias occurs when researchers' expectations or preconceived notions influence their observations and interpretations of data. It can introduce subjectivity into the research process.

Causes of Observer Bias:

  • Expectation Effects: Researchers may see what they expect or want to see in their observations.
  • Interpretation Biases: Researchers may interpret ambiguous data in a way that confirms their hypotheses.
  • Confirmation Bias: Researchers may selectively focus on evidence that supports their expectations.

Mitigation Strategies:

  • Blinding: Keep researchers unaware of the study's hypotheses to minimize their influence on observations.
  • Inter-rater Reliability: Ensure agreement among multiple observers by using consistent criteria for data collection.
  • Training and Awareness: Train researchers to recognize and mitigate their biases, promoting more objective observations.

Understanding and identifying these common types of research bias is the first step toward conducting rigorous and reliable research. By implementing effective mitigation strategies and fostering a culture of transparency and objectivity, you can enhance the credibility and impact of your research. It's not just about avoiding pitfalls but also about ensuring that your findings stand up to scrutiny and contribute to the broader body of knowledge in your field.

Remember, research is a continuous journey of discovery, and the quest for unbiased, evidence-based insights is at its core. Embracing these principles will not only strengthen your research but also empower you to make more informed decisions, drive positive change, and ultimately, advance both your individual goals and the greater collective knowledge of society.

What Causes Research Bias?

Research bias can stem from various sources, and gaining a deeper understanding of these causes is vital for effectively addressing and preventing bias in your research endeavors. Let's explore these causes in detail:

Inherent Biases

Inherent biases are those that are an intrinsic part of the research process itself and can be challenging to eliminate entirely. They often result from limitations or constraints in a study's design, data collection, or analysis.

Key Characteristics:

  • Inherent to Study Design: These biases are ingrained in the very design or structure of a study.
  • Difficult to Eliminate: Since they are innate, completely eradicating them may not be feasible.
  • Potential to Skew Findings: Inherent biases can lead to skewed or inaccurate results.

Examples of Inherent Biases:

  • Sampling Bias: Due to inherent limitations in data collection methods.
  • Selection Bias: As a result of constraints in participant recruitment.
  • Time-Order Bias: Arising from changes over time, which may be challenging to control.

Systematic Biases

Systematic biases result from consistent errors or flaws in the research process, which can lead to predictable patterns of deviation from the truth. Unlike inherent biases, systematic biases can be addressed with deliberate efforts.

Key Characteristics:

  • Consistent Patterns: These biases produce recurring errors or distortions.
  • Identifiable Sources: The sources of systematic biases can often be pinpointed and addressed.
  • Amenable to Mitigation: Conscious adjustments can reduce or eliminate systematic biases.

Examples of Systematic Biases:

  • Measurement Bias: When measurement tools are systematically flawed, leading to inaccuracies.
  • Reporting Bias: Stemming from the selective reporting of results to favor certain outcomes.
  • Confirmation Bias: Arising from researchers' preconceived notions or hypotheses.

Non-Systematic Biases

Non-systematic biases are random errors that can occur in the research process but are neither consistent nor predictable. They introduce variability and can affect individual data points but may not systematically impact the overall study.

Key Characteristics:

  • Random Occurrence: Non-systematic biases are not tied to specific patterns or sources.
  • Unpredictable: They may affect one data point but not another unexpectedly.
  • Potential for Random Variation: Non-systematic biases introduce noise into data.

Examples of Non-Systematic Biases:

  • Sampling Error: Natural fluctuations in data points due to random chance.
  • Non-Response Bias: When non-responders differ from responders randomly.

Cognitive Biases

Cognitive biases are biases rooted in human psychology and decision-making processes. They can influence how researchers perceive, interpret, and make sense of data, often unconsciously.

Key Characteristics:

  • Psychological Origin: Cognitive biases originate from the way our brains process information.
  • Subjective Interpretation: They affect how researchers subjectively interpret data.
  • Affect Decision-Making: Cognitive biases can influence researchers' decisions throughout the research process.

Examples of Cognitive Biases:

  • Confirmation Bias: Seeking information that confirms pre-existing beliefs.
  • Anchoring Bias: Relying too heavily on the first piece of information encountered.
  • Hindsight Bias: Seeing events as having been predictable after they've occurred.

Understanding these causes of research bias is crucial for researchers at all stages of their work. It enables you to identify potential sources of bias, take proactive measures to minimize bias and foster a research environment that prioritizes objectivity and rigor. By acknowledging the inherent biases in research, recognizing systematic and non-systematic biases, and being aware of the cognitive biases that can affect decision-making, you can conduct more reliable and credible research.

How to Detect Research Bias?

Detecting research bias is a crucial step in maintaining the integrity of your study and ensuring the reliability of your findings. Let's explore some effective methods and techniques for identifying bias in your research.

Data Analysis Techniques

Utilizing appropriate data analysis techniques is crucial in detecting and addressing research bias. Here are some strategies to consider:

  • Statistical Analysis: Employ rigorous statistical methods to examine the data. Look for anomalies, inconsistencies, or patterns that may indicate bias, such as skewed distributions or unexpected correlations.
  • Sensitivity Analysis: Conduct sensitivity analyses by varying key parameters or assumptions in your analysis. This helps assess the robustness of your results and identifies whether bias may be influencing your findings.
  • Subgroup Analysis: If your study involves diverse groups or populations, perform subgroup analyses to explore whether bias may be affecting specific subsets differently.
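As a rough illustration of the sensitivity-analysis idea above, the sketch below recomputes a simple estimate while varying one key assumption (here, an outlier-exclusion threshold). The data are invented purely for illustration:

```python
# Sensitivity-analysis sketch: recompute an estimate under different
# values of an analytic assumption. All numbers are illustrative.
scores = [52, 55, 48, 60, 58, 51, 49, 95]  # one suspiciously extreme value

def mean_excluding_above(data, threshold):
    """Mean of the data after excluding values above the threshold."""
    kept = [x for x in data if x <= threshold]
    return sum(kept) / len(kept)

# Vary the exclusion threshold and watch how the estimate moves:
for threshold in (100, 90, 80):
    est = mean_excluding_above(scores, threshold)
    print(f"threshold={threshold}: mean={est:.1f}")
```

If the estimate swings sharply across reasonable choices of the assumption (as it does here when the extreme value is excluded), the result is fragile and a single data point or analytic choice may be driving your conclusions.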

Peer Review

Peer review is a fundamental process for evaluating research and identifying potential bias. Here's how it can assist in detecting bias:

  • External Evaluation: Involve independent experts in your field who can objectively assess your research methods, data, and interpretations. They may identify overlooked sources of bias or offer suggestions for improvement.
  • Bias Assessment: Ask peer reviewers specifically to scrutinize your study for any signs of bias. Encourage them to assess the transparency of your methods and reporting.
  • Replicability: Peer reviewers can also assess the replicability of your study, ensuring that others can reproduce your findings independently.

Cross-Validation

Cross-validation is a technique that involves comparing the results of your research with external or independent sources to identify potential bias:

  • External Data Sources: Compare your findings with data from external sources, such as government statistics, industry reports, or previous research. Significant disparities may signal bias.
  • Expert Consultation: Seek feedback from experts who are not directly involved in your research. Their impartial perspectives can help identify any biases in your study design, data collection, or interpretation.
  • Historical Comparisons: If applicable, compare your current research with historical data to assess whether trends or patterns have changed over time, which could indicate bias.

By employing these methods and techniques, you can proactively detect and address research bias, ultimately enhancing the credibility and reliability of your research findings.

How to Avoid Research Bias?

Effectively avoiding research bias is a fundamental aspect of conducting high-quality research. Implementing specific strategies can help researchers minimize the impact of bias and enhance the validity and reliability of their findings. Let's delve into these strategies in detail:

1. Randomization

Randomization is a method used to allocate participants or data points to different groups or conditions in an entirely random way. It helps ensure that each participant has an equal chance of being assigned to any group, reducing the potential for bias in group assignments.

Key Aspects:

  • Random Assignment: Randomly assigning participants to experimental or control groups.
  • Equal Opportunity: Ensuring every participant has an equal likelihood of being in any group.
  • Minimizing Bias: Reduces the risk of selection bias by distributing potential biases equally across groups.

Benefits:

  • Balanced Groups: Randomization creates comparable groups in terms of potential confounding variables.
  • Minimizes Selection Bias: Eliminates researcher or participant biases in group allocation.
  • Enhanced Causality: Strengthens the ability to make causal inferences from research findings.

Methods:

  • Simple Randomization: Assign participants or data points to groups using a random number generator or drawing lots.
  • Stratified Randomization: Divide the population into subgroups based on relevant characteristics (e.g., age, gender) and then randomly assign within those subgroups.
  • Blocked Randomization: Create blocks of participants, ensuring each block contains an equal number from each group.

In a clinical trial testing a new drug, researchers use simple randomization to allocate participants into two groups: one receiving the new drug and the other receiving a placebo. This helps ensure that patient characteristics, such as age or gender, do not systematically favor one group over another, minimizing bias in the study's results.
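The allocation methods above can be sketched in a few lines of Python. This is an illustrative sketch, not production trial software; the participant IDs, group names, and strata are made up:

```python
import random

random.seed(42)  # fixed seed only so the example allocation is reproducible

def simple_randomize(participants, groups=("treatment", "placebo")):
    """Shuffle the participants, then deal them round-robin into groups."""
    shuffled = random.sample(participants, k=len(participants))
    return {g: shuffled[i::len(groups)] for i, g in enumerate(groups)}

def stratified_randomize(participants, stratum_of, groups=("treatment", "placebo")):
    """Randomize separately within each stratum (e.g., age band, gender)."""
    strata = {}
    for p in participants:
        strata.setdefault(stratum_of(p), []).append(p)
    allocation = {g: [] for g in groups}
    for members in strata.values():
        for g, assigned in simple_randomize(members, groups).items():
            allocation[g].extend(assigned)
    return allocation

patients = [f"P{i:02d}" for i in range(20)]
alloc = simple_randomize(patients)
print({g: len(members) for g, members in alloc.items()})  # {'treatment': 10, 'placebo': 10}
```

Stratified randomization would be called the same way, passing a function that maps each participant to their stratum, so the treatment/placebo split stays balanced within every subgroup.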

2. Blinding and Double-Blinding

Blinding involves keeping either the participants or the researchers (single-blinding) or both (double-blinding) unaware of certain aspects of the study, such as group assignments or treatment conditions. This prevents the introduction of bias due to expectations or knowledge of the study's hypotheses.

Key Aspects:

  • Single-Blinding: Either participants or researchers are unaware of crucial information.
  • Double-Blinding: Both participants and researchers are unaware of crucial information.
  • Placebo Control: Often used in pharmaceutical research to ensure blinding.

Benefits:

  • Minimizes Observer Bias: Researchers' expectations do not influence data collection or interpretation.
  • Prevents Participant Bias: Participants' awareness of their group assignment does not affect their behavior or responses.
  • Enhances Study Validity: Blinding reduces the risk of bias influencing study outcomes.

Implementation:

  • Use of Placebos: In clinical trials, a placebo group is often included to maintain blinding.
  • Blinding Procedures: Establish protocols to ensure that those who need to be blinded are kept unaware of relevant information.
  • Blinding Verification: Conduct assessments to confirm that blinding has been maintained throughout the study.

In a double-blind drug trial, neither the participants nor the researchers know whether they are receiving or administering the experimental drug or a placebo. This prevents biases in reporting and evaluating the drug's effects, ensuring that results are objective and reliable.
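One minimal way to operationalise double-blinding is to have a study coordinator hold the unblinding key while everyone else sees only opaque kit codes. The sketch below is illustrative; the code format, arm names, and participant IDs are assumptions, not a standard:

```python
import random

random.seed(7)  # fixed seed only so the example is reproducible

def blind_allocation(participants, arms=("drug", "placebo")):
    """Allocate participants to arms behind opaque kit codes.

    The `key` dict (participant -> arm) is held only by the coordinator;
    data collectors and participants see only `kit_codes`.
    """
    shuffled = random.sample(participants, k=len(participants))
    key, kit_codes = {}, {}
    for i, p in enumerate(shuffled):
        key[p] = arms[i % len(arms)]               # balanced arm assignment
        kit_codes[p] = f"KIT-{random.randint(1000, 9999)}"  # reveals nothing
    return kit_codes, key

codes, key = blind_allocation([f"P{i}" for i in range(8)])
# Everyone outside the coordinator works only with labels like 'KIT-4219';
# the arm cannot be inferred from the code.
```

In a real trial the key would be escrowed (e.g., with a pharmacist or an independent statistician) and opened only at pre-specified unblinding points.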

3. Standardization of Procedures

Standardization involves creating and following consistent, well-defined procedures throughout a study. This ensures that data collection, measurements, and interventions are carried out uniformly, minimizing potential sources of bias.

Key Aspects:

  • Detailed Protocols: Developing clear and precise protocols for data collection or interventions.
  • Consistency: Ensuring that all research personnel adhere to the established procedures.
  • Reducing Variability: Minimizing variation in how processes are carried out.

Benefits:

  • Increased Reliability: Standardized procedures lead to more consistent and reliable data.
  • Minimized Measurement Bias: Reduces the likelihood of measurement errors or inconsistencies.
  • Easier Replication: Standardization facilitates replication by providing a clear roadmap for future studies.

Implementation:

  • Protocol Development: Create detailed step-by-step protocols for data collection, interventions, or experiments.
  • Training: Train all research personnel thoroughly on standardized procedures.
  • Quality Control: Implement quality control measures to monitor and ensure adherence to protocols.

In a psychological study, researchers standardize the procedure for administering a cognitive test to all participants. They use the same test materials, instructions, and environmental conditions for every participant to ensure that the data collected are not influenced by variations in how the test is administered.

4. Sample Size Considerations

Sample size considerations involve determining the appropriate number of participants or data points needed for a study. Inadequate sample sizes can lead to underpowered studies that fail to detect meaningful effects, while excessively large samples can be resource-intensive without adding substantial value.

Key Aspects:

  • Power Analysis: Calculating the required sample size based on expected effect sizes and desired statistical power.
  • Effect Size Considerations: Ensuring the sample size is sufficient to detect the effect size of interest.
  • Resource Constraints: Balancing the need for a larger sample with available resources.

Benefits:

  • Improved Statistical Validity: Adequate sample sizes increase the likelihood of detecting actual effects.
  • Generalizability: Larger samples enhance the generalizability of study findings to the target population.
  • Resource Efficiency: Avoiding extensive samples conserves resources.

Implementation:

  • Power Analysis Software: Use statistical software to perform power analyses.
  • Pilot Studies: Conduct pilot studies to estimate effect sizes and inform sample size calculations.
  • Consider Practical Constraints: Factor in time, budget, and other practical limitations when determining sample sizes.

In a medical research study evaluating the efficacy of a new treatment, researchers conduct a power analysis to determine the required sample size. This analysis considers the expected effect size, desired level of statistical power, and available resources to ensure that the study can reliably detect the treatment's effects.
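A power analysis like the one described can be approximated in a few lines. The sketch below uses the standard normal approximation for a two-sided, two-sample comparison with effect size expressed as Cohen's d; dedicated tools (e.g., G*Power) apply small corrections on top of this:

```python
from math import ceil
from statistics import NormalDist

def n_per_group(d, alpha=0.05, power=0.80):
    """Approximate participants needed per group (two-sample, two-sided).

    d     -- standardized effect size (Cohen's d) you want to detect
    alpha -- significance level (two-sided)
    power -- desired probability of detecting a true effect of size d
    """
    z = NormalDist()
    z_alpha = z.inv_cdf(1 - alpha / 2)  # critical value for the test
    z_beta = z.inv_cdf(power)           # quantile for the desired power
    return ceil(2 * ((z_alpha + z_beta) / d) ** 2)

print(n_per_group(0.5))  # medium effect -> 63 per group with these defaults
```

Note how the required sample size explodes as the expected effect shrinks: detecting d = 0.2 needs roughly six times as many participants per group as d = 0.5.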

5. Replication

Replication involves conducting the same study or experiment multiple times to assess the consistency and reliability of the findings. Replication is a critical step in research, as it helps validate the results and ensures that they are not due to chance or bias.

Key Aspects:

  • Exact or Conceptual Replication: Replicating the study with the same methods (exact) or similar methods addressing the same research question (conceptual).
  • Independent Replication: Replication by different research teams or in other settings.
  • Enhanced Confidence: Replication builds confidence in the robustness of research findings.

Benefits:

  • Enhanced Reliability: Replicated findings are more reliable and less likely to be influenced by bias.
  • Verification of Results: Replication verifies the validity of initial study results.
  • Error Detection: Identifies potential sources of bias or errors in the original study.

Implementation:

  • Plan for Replication: Include replication as part of the research design from the outset.
  • Collaboration: Collaborate with other researchers or research teams to conduct independent replications.
  • Transparent Reporting: Clearly document replication methods and results for transparency.

A psychology study that originally reported a significant effect of a particular intervention on memory performance is replicated by another research team using the same methods and procedures. If the replication study also finds a significant impact, it provides additional support for the initial findings and reduces the likelihood of bias influencing the results.

6. Transparent Reporting

Transparent reporting involves thoroughly documenting all aspects of a research study, from its design and methodology to its results and conclusions. Transparent reporting ensures that other researchers can assess the study's validity and detect any potential sources of bias.

Key Aspects:

  • Comprehensive Documentation: Detailed reporting of study design, procedures, data collection, and analysis.
  • Open Access to Data: Sharing data and materials to allow for independent verification and analysis.
  • Disclosure of Conflicts: Transparent reporting includes disclosing any potential conflicts of interest that could introduce bias.

Benefits:

  • Accountability: Transparent reporting holds researchers accountable for their methods and results.
  • Enhanced Credibility: Transparent research is more credible and less likely to be influenced by bias.
  • Reproducibility: Other researchers can replicate and verify study findings.

Implementation:

  • Use of Reporting Guidelines: Follow established reporting guidelines specific to your field (e.g., CONSORT for clinical trials, STROBE for observational studies).
  • Data Sharing: Make research data and materials available to others through data repositories or supplementary materials.
  • Peer Review: Engage in peer review to ensure clear and comprehensive reporting.

A scientific journal article reporting the results of a research study includes detailed descriptions of the study design, methods, statistical analyses, and potential limitations. The authors also provide access to the raw data and materials used in the study, allowing other researchers to assess the study's validity and potential bias. This transparent reporting enhances the credibility of the research.

Real-World Examples of Research Bias

To better understand the pervasive nature of research bias and its implications, let's delve into additional real-world examples that illustrate various types of research bias beyond those previously discussed.

Pharmaceutical Industry Influence on Clinical Trials

Bias Type: Funding Bias, Sponsorship Bias

Example: The pharmaceutical industry often sponsors clinical trials to evaluate the safety and efficacy of new drugs. In some cases, studies sponsored by pharmaceutical companies have been found to report more favorable outcomes for their products compared to independently funded research.

Explanation: Funding bias occurs when the financial interests of the sponsor influence study design, data collection, and reporting. In these instances, there may be pressure to emphasize positive results or downplay adverse effects to promote the marketability of the drug.

Impact: This bias can have severe consequences for patient safety and public health, as it can lead to the approval and widespread use of drugs that may not be as effective or safe as initially reported.

Social Desirability Bias in Self-reported Surveys

Bias Type: Response Bias

Example: Researchers conducting surveys on sensitive topics such as drug use, sexual behavior, or income levels often encounter social desirability bias. Respondents may provide answers they believe are socially acceptable or desirable rather than accurate.

Explanation: Social desirability bias is rooted in the tendency to present oneself in a favorable light. Respondents may underreport stigmatized behaviors or overreport socially acceptable ones, leading to inaccurate data.

Impact: This bias can compromise the validity of survey research, especially in areas where honest reporting is crucial for public health interventions or policy decisions.

Non-Publication of Negative Clinical Trials

Bias Type: Publication Bias

Example: Clinical trials with negative or null results are less likely to be published than those with positive findings. This leads to an overrepresentation of studies showing treatment efficacy and an underrepresentation of trials indicating no effect.

Explanation: Publication bias occurs because journals often preferentially accept studies with significant results, while researchers and sponsors may be less motivated to publish negative findings. This skews the evidence base and can result in the overuse of specific treatments or interventions.

Impact: Patients and healthcare providers may make decisions based on incomplete or biased information, potentially exposing patients to ineffective or even harmful treatments.

Gender Bias in Medical Research

Bias Type: Gender Bias

Example: Historically, medical research has been biased toward male subjects, leading to a limited understanding of how diseases and treatments affect women. Clinical trials and studies often fail to include a representative number of female participants.

Explanation: Gender bias in research arises from the misconception that results from male subjects can be generalized to females. This bias can lead to treatments and medications that are less effective or safe for women.

Impact: Addressing gender bias is crucial for developing healthcare solutions that account for the distinct biological and physiological differences between genders and ensuring equitable access to effective treatments.

Political Bias in Climate Change Research

Bias Type: Confirmation Bias, Political Bias

Example: In climate change research, political bias can influence the framing, interpretation, and reporting of findings. Researchers aligned with certain political ideologies may downplay or exaggerate the significance of climate change based on their preconceptions.

Explanation: Confirmation bias comes into play when researchers seek data or interpretations that align with their political beliefs. This can make the research less objective and more susceptible to accusations of bias.

Impact: Political bias can undermine public trust in scientific research, impede policy-making, and hinder efforts to address critical issues such as climate change.

These diverse examples of research bias highlight the need for robust safeguards, transparency, and peer review in the research process. Recognizing and addressing bias is essential for maintaining the integrity of scientific inquiry and ensuring that research findings can be trusted and applied effectively.

Conclusion for Research Bias

Understanding and addressing research bias is critical in conducting reliable and trustworthy research. By recognizing the various types of bias, whether they are inherent, systematic, non-systematic, or cognitive, you can take proactive measures to minimize their impact. Strategies like randomization, blinding, standardization, and transparent reporting offer powerful tools to enhance the validity of your research.

Moreover, real-world examples highlight the tangible consequences of research bias and emphasize the importance of conducting research with integrity. Whether you're in the world of science, healthcare, marketing, or any other field, the pursuit of unbiased research is essential for making informed decisions that drive success. So, keep these insights in mind as you embark on your next research journey, and remember that a commitment to objectivity will always lead to better, more reliable outcomes.



Understanding the different types of bias in research (2024 guide)

Last updated: 6 October 2023 | Reviewed by: Miroslav Damyanov

Research bias is an invisible force that can cause certain traits of the chosen study topic to be overemphasised or dismissed. When left unchecked, it can significantly impact the validity and reliability of your research.

In a perfect world, every research project would be free of any trace of bias—but for this to happen, you need to be aware of the most common types of research bias that plague studies.

Read this guide to learn more about the most common types of bias in research and what you can do to design and improve your studies to create high-quality research results.

  • What is research bias?

Research bias is the tendency for qualitative and quantitative research studies to contain prejudice or preference for or against a particular group of people, culture, object, idea, belief, or circumstance.

Bias is rarely based on observed facts. In most cases, it results from societal stereotypes, systemic discrimination, or learned prejudice.

Every human develops their own set of biases throughout their lifetime as they interact with their environment. Often, people are unaware of their own biases until they are challenged—and this is why it’s easy for unintentional bias to seep into research projects.

Left unchecked, bias ruins the validity of research. So, to get the most accurate results, researchers need to know about the most common types of research bias and understand how their study design can address and avoid these outcomes.

  • The two primary types of bias

Historically, there are two primary types of bias in research:

Conscious bias

Conscious bias is the practice of intentionally voicing and sharing a negative opinion about a particular group of people, beliefs, or concepts.

Characterized by negative emotions and opinions of the target group, conscious bias is often defined as intentional discrimination.

In most cases, this type of bias does not make its way into research projects, as it is unjust, unfair, and unscientific.

Unconscious bias

An unconscious bias is a negative response to a particular group of people, beliefs, or concepts that is not identified or intentionally acted upon by the bias holder.

Because of this, unconscious bias is incredibly dangerous. These warped beliefs shape and impact how someone conducts themselves and their research. The trouble is that they can’t identify the moral and ethical issues with their behavior.

  • Examples of commonly occurring research bias

Humans use countless biases daily to quickly process information and make sense of the world. But, to create accurate research studies and get the best results, you must remove these biases from your study design.

Here are some of the most common types of research biases you should look out for when planning your next study:

Information bias

During any study, tampering with data collection is widely agreed to be bad science. But what if your study design includes information biases you are unaware of?

Also known as measurement bias, information bias occurs when one or more of the key study variables are not correctly measured, recorded, or interpreted. As a result, the study’s perceived outcome may be inaccurate due to data misclassification, omission, or obfuscation (obscuring). 

Observer bias

Observer bias occurs when researchers don’t have a clear understanding of their own personal assumptions and expectations. During observational studies, it’s possible for a researcher’s personal biases to impact how they interpret the data. This can dramatically affect the study’s outcome.

The study should be double-blind to combat this type of bias. This is where the participants don’t know which group they are in, and the observers don’t know which group they are observing.

Regression to the mean (RTM)

Bias can also impact research statistics.

Regression to the mean (RTM) refers to a statistical phenomenon whereby, if a first clinical reading is extreme in value (i.e., very high or very low compared to the average), the second reading will tend to produce a more statistically normal result.

Here’s an example: you might be nervous when a doctor takes your blood pressure in the doctor’s surgery. The first result might be quite high. This is a phenomenon known as “white coat syndrome.” When your blood pressure is retaken to double-check the value, it is more likely to be closer to typical values.

So, which value is more accurate, and which should you record as the truth?

The answer depends on the specific design of your study. However, using control groups is usually recommended for studies with a high risk of RTM.
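Regression to the mean is easy to demonstrate with simulated data. In the sketch below, the "blood pressure" values, noise levels, and sample size are invented purely for illustration: each subject gets two noisy readings of the same underlying true value, and we look at subjects selected for an extreme first reading.

```python
import random

random.seed(0)  # fixed seed so the simulation is reproducible

true_mean, true_sd, noise_sd = 120.0, 10.0, 8.0  # illustrative "blood pressure"
subjects = [random.gauss(true_mean, true_sd) for _ in range(5000)]
first = [t + random.gauss(0, noise_sd) for t in subjects]   # reading 1
second = [t + random.gauss(0, noise_sd) for t in subjects]  # reading 2

# Select subjects whose FIRST reading was extreme (top decile):
cutoff = sorted(first)[int(0.9 * len(first))]
extreme = [i for i, f in enumerate(first) if f >= cutoff]

mean_first = sum(first[i] for i in extreme) / len(extreme)
mean_second = sum(second[i] for i in extreme) / len(extreme)
print(f"extreme group: first = {mean_first:.1f}, second = {mean_second:.1f}")
```

The selected group's second average falls back toward 120 even though nothing about the subjects changed between readings; part of their extreme first reading was just noise. This is why an untreated control group matters: without one, that natural fall-back can masquerade as a treatment effect.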

Performance bias

A performance bias can develop if participants understand the study’s nature or desired outcomes. This can harm the study’s accuracy, as participants may adjust their behavior outside of their normal to improve their performance. This results in inaccurate data and study results.

This is a common bias type in medical and health studies, particularly those studying the differences between two lifestyle choices.

To reduce performance bias, researchers should strive to keep members of the control and study groups unaware of the other group’s activities. This method is known as “blinding.”

Recall bias

How good is your memory? Chances are, it’s not as good as you think—and the older the memory, the more inaccurate and biased it will become.

A recall bias commonly occurs in self-reporting studies requiring participants to remember past information. While people can remember big-picture events (like the day they got married or landed their first job), routine occurrences like what they do after work every Tuesday are harder to recall.

To offset this type of bias, design a study that engages participants at both short- and long-term intervals, helping to keep the relevant events top of mind.

Researcher bias

Researcher bias (also known as interviewer bias) occurs due to the researcher’s personal beliefs or tendencies that influence the study’s results or outcomes.

These types of biases can be intentional or unintentional, and most are driven by personal feelings, historical stereotypes, and assumptions about the study’s outcome before it has even begun.

Question order bias

Survey design and question order are a huge area of contention for researchers. These elements are essential to quality study design and can either prevent or invite answer bias.

When designing a research study that collects data via survey questions, the order of the questions presented can impact how the participants answer each subsequent question. Leading questions (questions that guide participants toward a particular answer) are perfect examples of this. When included early in the survey, they can sway a participant’s opinions and answers as they complete the questionnaire.

This is known as systematic distortion, meaning each question answered after the guiding questions is impacted or distorted by the wording of the questions before.

Demand characteristics

Body language and social cues play a significant role in human communication—and this also rings true for the validity of research projects . 

A demand characteristic bias can occur due to a verbal or non-verbal cue that encourages research participants to behave in a particular way.

Imagine a researcher is studying a group of new grad business students about their experience applying to new jobs one, three, and six months after graduation. They scowl every time a participant mentions they don’t use a cover letter. This reaction may encourage participants to change their answers, harming the study’s outcome and resulting in less accurate results.

Courtesy bias

Courtesy bias arises from not wanting to share negative or constructive feedback or answers—a common human tendency.

You’ve probably been in this situation before. Think of a time when you had a negative opinion or perspective on a topic, but you felt the need to soften or reduce the harshness of your feedback to prevent someone’s feelings from being hurt.

This type of bias also occurs in research. Without a comfortable and non-judgmental environment that encourages honest responses, courtesy bias can result in inaccurate data intake.

Studies based on small group interviews, focus groups, or any in-person surveys are particularly vulnerable to this type of bias because people are less likely to share negative opinions in front of others or to someone’s face.

Extreme responding

Extreme responding refers to the tendency for people to respond on one side of the scale or the other, even if these extreme answers don’t reflect their true opinion. 

This is a common bias in surveys, particularly online surveys asking about a person’s experience or personal opinions (think questionnaires that ask you to decide if you strongly disagree, disagree, agree, or strongly agree with a statement).

When this occurs, the data will be skewed. It will be overly positive or negative—not accurate. This is a problem because the data can impact future decisions or study outcomes.

Writing different styles of questions and asking for follow-up interviews with a small group of participants are a few options for reducing the impact of this type of bias.

Social desirability bias

Everyone wants to be liked and respected. As a result, this desire for social approval can skew survey answers.

It’s common for people to answer questions in a way that they believe will earn them favor, respect, or agreement with researchers. This is a common bias type for studies on taboo or sensitive topics like alcohol consumption or physical activity levels, where participants feel vulnerable or judged when sharing their honest answers.

Reassuring participants through guaranteed anonymity and safe, respectful research practices is one way you can offset the impact of social desirability bias.

Selection bias

For the most accurate results, researchers need to understand their chosen population before recruiting participants. Failing to do this results in selection bias: a flawed or unrepresentative selection of participants that doesn’t truly reflect the chosen population.

Self-selection bias

To collect data, researchers in many studies require participants to volunteer their time and experiences. This results in a study design that is automatically biased toward people who are more likely to get involved.

People who volunteer to participate in a study are not necessarily reflective of the common experience of a broad, diverse population. Because of this, any information collected from this type of study will contain a self-selection bias.

To reduce this type of bias, researchers can use random assignment: dividing participants into control and treatment groups at random after they volunteer.
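As a sketch of what that looks like in practice (the participant IDs here are hypothetical), random assignment shuffles the volunteer pool and splits it evenly, so whatever traits drove people to volunteer are spread across both groups:

```python
import random

random.seed(0)  # fixed seed so the example is reproducible

# Hypothetical pool of people who volunteered for the study.
volunteers = [f"participant_{i:03d}" for i in range(100)]

# Random assignment: shuffle the volunteer pool, then split it evenly
# into a control group and a treatment group.
shuffled = volunteers[:]
random.shuffle(shuffled)
half = len(shuffled) // 2
control, treatment = shuffled[:half], shuffled[half:]

print(len(control), len(treatment))  # prints: 50 50
```

Randomisation does not remove the self-selection in who volunteered, but it does ensure the volunteers’ shared quirks cannot systematically favour one group over the other.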

Sampling or ascertainment bias

When choosing participants for a study, take care to select people who are representative of the overall population being researched. Failure to do this will result in sampling bias.

For example, if researchers aim to learn more about how university stress impacts sleep quality but only choose engineering students as participants, the study won’t reflect the wider population they want to learn more about.

To avoid sampling bias, researchers must first have a strong understanding of their chosen study population. Then, they should take steps to ensure that any person within that population has an equal chance of being selected for the study.
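That “equal chance of being selected” idea can be sketched in a few lines of Python (the faculties and headcounts are invented for illustration): a simple random sample drawn from the full population keeps each subgroup’s share of the sample close to its share of the population, unlike a convenience sample of engineering students only.

```python
import random
from collections import Counter

random.seed(1)  # fixed seed so the example is reproducible

# Hypothetical sampling frame: every student in the target population,
# not just the engineering faculty.
population = (
    [("engineering", i) for i in range(400)]
    + [("humanities", i) for i in range(350)]
    + [("medicine", i) for i in range(250)]
)

# Simple random sample: each member has the same selection probability.
sample = random.sample(population, k=100)

# The sample's faculty mix should roughly mirror the population's
# 40% / 35% / 25% split, unlike a convenience sample.
print(Counter(faculty for faculty, _ in sample))
```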

Attrition bias

Attrition bias creeps into research when participants drop out of a study before it ends, and those who leave differ in important ways from those who stay.

For example, in a long-term study of an exercise program, participants who see little improvement may be the most likely to quit. The remaining sample then skews toward success stories, and the results will overstate the program’s impact.

Survivorship bias

In medical clinical trials and studies, a survivorship bias may develop if only the results and data from participants who completed the trial are analysed. “Survivorship” here covers not just participants who passed away during the study, but anyone who was unable to complete the entire trial.

In long-term studies that evaluate new medications or therapies for high-mortality diseases like aggressive cancers, choosing to only consider the success rate, side effects, or experiences of those who completed the study eliminates a large portion of important information. This disregarded information may have offered insights into the quality, efficacy, and safety of the treatment being tested.

Nonresponse bias

A nonresponse bias occurs when a portion of chosen participants decide not to complete or participate in the study. This is a common issue in survey-based research (especially online surveys).

In survey-based research, the issue of response versus nonresponse rates can impact the quality of the information collected. Every nonresponse is a missed opportunity to get a better understanding of the chosen population, whether participants choose not to reply based on subject apathy, shame, guilt, or a lack of skills or resources.

To combat this bias, improve response rates using multiple different survey styles. These might include in-person interviews, mailed paper surveys, and virtual options. However, note that these efforts will never completely remove nonresponse bias from your study.

Cognitive bias

Cognitive biases result from repeated errors in thinking or memory caused by misinterpreting information, oversimplifying a situation, or making inaccurate mental shortcuts. They can be tricky to identify and account for, as everyone lives with invisible cognitive biases that govern how they understand and interact with their surrounding environment.

Anchoring bias

When given a list of information, humans have a tendency to overemphasize (or anchor onto) the first thing mentioned.

For example, if you ask people to remember a grocery list that starts with apples, bananas, yogurt, and bread, people are most likely to remember apples over any of the other items. This is because apples were mentioned first, despite not being any more important than the rest of the list.

This habit inflates the importance and significance of that one piece of information, which can affect how you respond to or feel about the other, equally important, concepts being mentioned.

Halo effect

The halo effect explains the tendency for people to form opinions or assumptions about other people based on one specific characteristic. Most commonly seen in studies about physical appearance and attractiveness, the halo effect can cause either a positive or negative response depending on how the defined trait is perceived.

Framing effect

Framing effect bias refers to how you perceive information based on how it’s presented to you. 

To demonstrate this, decide which of the following desserts sounds more delicious.

“Made with 95% natural ingredients!”

“Contains only 5% non-natural ingredients!”

Both of these claims say the same thing, but most people have a framing effect bias toward the first claim because it is framed positively and feels more impactful.

This type of bias can significantly impact how people perceive or react to data and information.

The misinformation effect

The misinformation effect refers to the brain’s tendency to alter or misremember past experiences when it has since been fed inaccurate information. This type of bias can significantly impact how a person feels about, remembers, or trusts the authority of their previous experiences.

Confirmation bias

Confirmation bias occurs when someone unconsciously prefers or favors information that confirms or validates their beliefs and ideas.

In some cases, confirmation bias is so strong that people find themselves disregarding information that counters their worldview, resulting in poorer research accuracy and quality.

We all like being proven right (even if we are testing a research hypothesis ), so this is a commonly occurring cognitive bias that needs to be addressed during any scientific study.

Availability heuristic

All humans contextualize and understand the world around them based on their past experiences and memories. Because of this, people tend to have an availability bias toward explanations they have heard before. 

People are more likely to assume or gravitate toward reasoning and ideas that align with past experience. This is known as the availability heuristic . Information and connections that are more available or accessible in your memory might seem more likely than other alternatives. This can impact the validity of research efforts.

How to avoid bias in your research

Research is a compelling, complex, and essential part of human growth and learning, but collecting the most accurate data possible also poses plenty of challenges.


What is Research Bias - Types & Examples

Research is crucial in generating knowledge and understanding the world around us. However, the validity and reliability of research findings can be compromised by various factors, including bias in research. This comprehensive guide will explain the different examples and types of research bias. But before that, let’s look into the research bias definition.

What is Research Bias?

Research bias refers to the systematic errors or deviations from the truth that can occur during the research process, leading to inaccurate or misleading results. It arises from flaws in the research design , data collection , analysis, and interpretation, which can distort the findings and conclusions. Bias in research can occur at any stage of the research process and may be unintentional or deliberate. Recognising and addressing research bias is crucial for maintaining the integrity and credibility of scientific research.

Example of Bias in Research

Suppose a researcher wants to investigate the relationship between coffee consumption and heart disease risk. They recruit participants for their study and ask them to self-report their coffee intake through a questionnaire. Bias can occur in this scenario due to self-reporting bias, where participants may provide inaccurate or biased information about their coffee consumption.

For example, health-conscious individuals might underreport their coffee intake because they perceive it as unhealthy, while coffee enthusiasts might overreport their consumption due to their positive attitude towards coffee.

Types of Research Bias

There are many different types of research bias. Some of them are discussed below.

Information Bias


Information bias is also known as measurement bias. It refers to a type of research bias that occurs when there are errors or distortions in gathering, interpreting, or reporting information in a research study or any other form of data collection.

Example of Information Bias In Research

Let's say you are studying the effectiveness of a new weight loss program. You recruit participants and ask them to keep a daily food diary to track their caloric intake. However, the participants know that they are being monitored and may alter their eating habits, consciously or unconsciously, to present a more favourable image of themselves.

In this case, the participants' awareness of being observed can lead to information bias in research. They might underreport their consumption of high-calorie foods or overreport their consumption of healthy foods, skewing the data collected. This research bias could make the weight loss program appear more effective than it actually is because the reported dietary intake doesn't accurately reflect the participants' true behaviour.

Types of Information Bias In Research

Information bias can manifest in different ways, such as:

1. Measurement Bias

Measurement bias occurs when the measurement instruments or techniques used to collect data are flawed or inaccurate. For example, if a survey question is poorly worded or ambiguous, it may generate biased responses or misinterpretations of the respondents' answers.

2. Recall Bias

Recall bias arises when participants in a study inaccurately remember or recall past events, experiences, or behaviours. It can happen due to various factors, such as selective memory, social desirability bias, or the passage of time. Recall bias causes distorted or unreliable data.

3. Reporting Bias

Reporting bias occurs when there is selective or incomplete reporting of study findings. It can happen if researchers or organisations only publish or publicise results that support their preconceived notions or desired outcomes while omitting or downplaying contradictory or unfavourable findings. Reporting bias can lead to a skewed perception of the true state of knowledge in a particular field.

4. Publication Bias

Publication bias refers to the tendency of researchers, journals, or other publishing entities to publish studies with statistically significant or positive results preferentially. Studies with null or negative findings are often less likely to be published, leading to an overrepresentation of positive results in the literature and potentially distorting the overall understanding of a research topic.

5. Language Bias

This bias can transpire if research is conducted and reported in a specific language, leading to limited accessibility and potential exclusion of relevant studies or data published in other languages. Language bias can introduce distortions in systematic reviews, meta-analyses, or other forms of evidence synthesis.

Publication Bias

Publication bias occurs due to the systematic tendency of scientific journals and researchers to preferentially publish studies with positive or significant results while overlooking or rejecting studies with negative or non-significant findings. It transpires when the decision to publish a study is influenced by the nature or direction of its results rather than its methodological rigour or scientific merit.

Publication bias in research can arise due to various factors, such as researchers' and journals' preferences for novel or groundbreaking findings, the pressure to present positive results to secure funding or advance academic careers, and the tendency of studies with positive results to generate more attention and citations. This research bias can distort the overall body of scientific literature, leading to an overrepresentation of studies with positive outcomes and an underrepresentation of studies with negative or inconclusive findings.

Example of Publication Bias In Research

Let's say a pharmaceutical company conducts clinical trials to test the effectiveness of a new drug for treating a certain medical condition. The company runs several trials but submits only those with positive outcomes (i.e., showing the drug is effective) to scientific journals for publication, since negative results could jeopardise funding.

Interviewer Bias

Interviewer bias refers to the potential for the interviewer's own bias or prejudice to influence the outcome of an interview. It happens when the interviewer's personal beliefs, preferences, stereotypes, or prejudices affect their evaluation of the interviewee's qualifications, skills, or suitability for a position.

Example of Interviewer Bias In Research

Imagine there is an interviewer named James conducting interviews for a sales position in a company. During one interview, a candidate named Aisha, who is a woman, showcases exceptional knowledge about the products, demonstrates excellent communication skills, and presents a strong sales track record.

However, James thinks women are generally less assertive or aggressive in sales roles than men. Due to this stereotype bias in research, James may subconsciously underestimate Aisha's abilities or question her suitability for the position, despite her impressive qualifications.

Types of Interviewer Bias In Research

The main types of interviewer bias are:

1. Stereotyping

Stereotyping refers to holding preconceived notions or stereotypes about certain groups of people based on their race, gender, age, religion, or other characteristics. These biases can lead to unfair judgments or assumptions about the interviewee's abilities.

2. Confirmation Bias

With confirmation bias, interviewers may subconsciously seek information that confirms their pre-existing beliefs or initial impressions about the interviewee. This results in selectively noticing and emphasising certain responses or behaviours that align with their biases while disregarding contradictory evidence.

3. Similarity Bias

Similarity bias means unconsciously favouring candidates with similar backgrounds, experiences, or characteristics, resulting in a preference for more relatable or familiar candidates. This leads to overlooking qualified candidates from diverse backgrounds.

4. Halo and Horns Effect

The halo effect occurs when an interviewer forms an overall positive impression of a candidate based on one favourable characteristic, leading to a bias in favour of that candidate. Conversely, the horns effect occurs when a negative impression of a candidate's single attribute influences the overall evaluation, resulting in a bias against the candidate.

5. Contrast Effect

The contrast effect leads to evaluating candidates relative to each other rather than based on objective criteria, leading to biased judgments. If the previous candidate was exceptionally strong or weak, the current candidate might be evaluated more harshly or leniently.

6. Implicit Bias

Interviewers may have unconscious biases influencing their perceptions and decision-making. Societal stereotypes often form these biases and can affect evaluations and decisions without the interviewer's conscious awareness.

Response Bias

Response bias arises from a systematic error or distortion in how individuals respond to survey questions or provide information in research studies. It occurs when respondents consistently answer questions inaccurately or in a particular direction, leading to a skewed or biased dataset.

Example of Response Bias In Research

You conduct a survey asking people about their exercise habits and distribute the survey to a group of individuals. You ask them to report the number of times they exercise per week. However, some respondents may feel pressured to provide answers they believe are more socially acceptable. They might overstate their exercise frequency to present themselves as more active and health-conscious. This would result in an overestimation of exercise habits in the data.

Types of Response Bias In Research

We have discussed a few common types of response bias below. Other major types include courtesy bias and extreme responding.

1. Social Desirability Bias

This occurs when respondents provide answers that they perceive to be more socially acceptable or desirable than their true beliefs or behaviours. They may modify their responses to conform to societal norms or present themselves favourably.

2. Acquiescence Bias

Also known as "yea-saying" or "nay-saying," acquiescence bias in research is the tendency of respondents to agree or disagree with statements without carefully considering their content. Some individuals are predisposed to consistently agree (acquiesce) or consistently disagree with items, leading to skewed responses.

3. Non-Response Bias

This bias emerges when individuals who choose not to participate in a study or survey have different characteristics or opinions compared to those who do participate.

Researcher Bias

Researcher bias, also known as experimenter bias or investigator bias, refers to the influence or distortion of research findings or interpretations due to the personal beliefs, preferences, or expectations of the researcher conducting the study. It occurs when the researcher's subjective biases or preconceived notions unconsciously affect the research process, leading to flawed or biased results.

Example of Researcher Bias In Research

Assume that a researcher is conducting a study on the effectiveness of a new teaching method for improving student performance in mathematics. The researcher strongly believes the new teaching method will significantly enhance students' mathematical abilities.

To test the method, the researcher divides students into two groups: the control group, which receives traditional teaching methods, and the experimental group, which receives the new teaching method.

During the study, the researcher spends more time interacting with the experimental group, providing additional support and encouragement. They unintentionally convey their enthusiasm for the new teaching method to the students in the experimental group while giving a different level of attention or encouragement to the control group.

When the post-test results come in, the experimental group shows a statistically significant improvement in mathematical performance compared to the control group. Influenced by their initial beliefs and unintentional differential treatment, the researcher concludes that the new teaching method is highly effective in enhancing students' mathematical abilities.


Selection Bias

Selection bias refers to a systematic error or distortion that occurs in a research study when the participants or subjects included in the study are not representative of the target population. This research bias arises when the process of selecting participants for the study is flawed or biased in some way, leading to a sample that does not accurately reflect the characteristics of the broader population.

Example of Selection Bias In Research

Suppose a research team wants to evaluate a weight loss program's effectiveness and recruits participants by placing an advertisement in a fitness magazine. The advertisement attracts health-conscious individuals who are actively seeking ways to lose weight. As a result, the study sample primarily consists of individuals who are highly motivated to lose weight and may have already tried other weight loss methods.

The sample is biased towards individuals more likely to succeed in weight loss due to their pre-existing motivation and experience.

Types of Selection Bias In Research

Selection bias can occur in various forms and impact both observational and experimental studies. Some common types of selection bias include:

1. Non-Response Bias

This occurs when individuals chosen for the study do not participate or respond, leading to a sample that differs from the target population. Non-response bias can introduce bias in research if those who choose not to participate have different characteristics from those who do participate.

2. Volunteer Bias

Volunteer bias happens when participants self-select or volunteer to participate in a study. This can lead to a sample not representative of the broader population because volunteers may have different characteristics, motivations, or experiences compared to those who do not volunteer.

3. Healthy User Bias

This research bias can occur in studies that examine the effects of a particular intervention or treatment. It arises when participants who follow a certain lifestyle or treatment regimen are healthier or have better health outcomes than the general population, leading to overestimating the treatment's effectiveness.

4. Berkson's Bias

This research bias occurs in hospital-based studies where patients are selected based on hospital admission. Since hospital-based studies typically exclude healthy individuals, the sample may consist of patients with multiple conditions or diseases, leading to an artificial association between certain variables.

5. Survivorship Bias

Survivorship bias happens when the sample includes only individuals or entities that have survived a particular process or undergone a specific experience. This bias can lead to an inaccurate understanding of the entire population since it neglects those who did not survive or dropped out.

Cognitive Bias

A cognitive bias refers to systematic patterns of deviation from rational judgment or decision-making processes, often influenced by subjective factors and unconscious mental processes. These research biases can affect how we interpret information, judge, and form beliefs. Cognitive biases can be thought of as shortcuts or mental filters that our brains use to simplify complex information processing.

Example of Cognitive Bias In Research

Assume a researcher is investigating the effects of a new drug on a particular medical condition. Due to prior experiences or personal beliefs, they have a positive view of the drug's effectiveness. During the research process, the researcher may unconsciously focus on collecting and analysing data that supports their preconceived notion of the drug's efficacy, while paying less attention to data suggesting the drug has limited or no impact.

Types of Cognitive Bias In Research

Some of the most common types of cognitive bias are discussed below.

1. Confirmation Bias

The tendency to seek, interpret, or remember information in a way that confirms one's existing beliefs or hypotheses while disregarding or downplaying contradictory evidence.

2. Availability Heuristic

This research bias occurs when you overestimate the importance or likelihood of events based on how easily they come to mind or how vividly they are remembered.

3. Anchoring Bias

Relying too heavily on the first piece of information encountered (the "anchor") when making decisions or estimations, even if it is irrelevant or misleading.

4. Halo Effect

The halo effect happens when you generalise positive or negative impressions of a person, company, or brand based on a single characteristic or initial experience.

5. Overconfidence Effect

The tendency to overestimate one's abilities, knowledge, or the accuracy of one's beliefs and predictions.

6. Bandwagon Effect

The tendency to adopt certain beliefs or behaviours because others are doing so, often without critical evaluation or independent thinking.

7. Framing Effect

The framing effect refers to how the information presented or "framed" can influence decision-making, emphasising the potential gains or losses, leading to different choices even when the options are objectively the same.

How to Avoid Research Bias?

Avoiding research bias is crucial for maintaining the integrity and validity of your research findings. Here are some strategies on how to minimise research bias:

  • Formulate a clear and specific research question that outlines the objective of your study. This will help you stay focused and reduce the chances of introducing research bias.
  • Perform a thorough literature review on your topic before starting your research. This will help you understand the current state of knowledge and identify potential biases or gaps in the existing research.
  • Use randomisation to ensure that participants or samples are assigned to groups without bias. Blinding techniques, such as single-blind or double-blind procedures, can also be used to prevent bias in data collection and analysis.
  • Ensure that your sample is representative of the target population by using random or stratified sampling methods . Avoid selecting participants based on convenience, as it can introduce selection bias.
  • Consider using random invitations or incentives to encourage a diverse range of participants.
  • Clearly define and document the methods and procedures used for data collection to ensure consistency. This includes using standardised measurement tools, following specific protocols, and training research assistants to minimise variability and observer bias.
  • Researchers can unintentionally introduce bias through preconceived notions, beliefs, or expectations. Be conscious of your biases and regularly reflect on how they influence your research process and interpretation of results.
  • Relying on a single source can introduce bias. Triangulate your findings by using multiple methods (quantitative and qualitative) and collecting data from diverse sources to ensure a more comprehensive and balanced perspective.
  • Use appropriate statistical techniques and avoid cherry-picking results that support your hypothesis. Be transparent about the limitations and uncertainties in your findings.
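As a minimal sketch of the randomisation and blinding ideas above (participant IDs and group names are invented for illustration, standard library only), participants can be assigned to groups with a seeded shuffle, and group names replaced by neutral codes so that data collectors remain blind to the allocation:

```python
import random

def randomise(participants, groups=("treatment", "control"), seed=42):
    """Randomly assign participants to groups via a seeded shuffle."""
    rng = random.Random(seed)
    shuffled = participants[:]
    rng.shuffle(shuffled)
    # Deal shuffled participants round-robin so group sizes stay balanced.
    return {p: groups[i % len(groups)] for i, p in enumerate(shuffled)}

def blind(assignment):
    """Replace group names with neutral codes ('A', 'B', ...) so whoever
    collects or analyses the data cannot tell which group is which."""
    codes = {g: chr(ord("A") + i)
             for i, g in enumerate(sorted(set(assignment.values())))}
    key = {code: group for group, code in codes.items()}  # sealed until unblinding
    return {p: codes[g] for p, g in assignment.items()}, key

participants = [f"P{i:02d}" for i in range(1, 21)]
assignment = randomise(participants)
blinded, key = blind(assignment)
# Data collectors see only codes like 'A'/'B'; the key is held back
# until the analysis is complete.
```

The key is kept separate from the blinded labels precisely so that the person analysing the data cannot (even unintentionally) favour one group.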

Frequently Asked Questions

What is bias in research?

Bias in research refers to systematic errors or preferences that can distort the results or conclusions of a study, leading to inaccuracies or unfairness due to factors such as sampling, measurement, or interpretation.

What causes bias in research?

Bias in research can be caused by various factors, such as the selection of participants, flawed study design, inadequate sampling methods, researcher's own beliefs or preferences, funding sources, publication bias, or the omission or manipulation of data.

How to avoid bias in research?

To avoid research bias, use random and representative sampling and blinding techniques, pre-register hypotheses, conduct rigorous peer review, disclose conflicts of interest, and promote transparency in data collection and analysis.

How to address bias in research?

You can critically examine your biases, use diverse and inclusive samples, employ appropriate statistical methods, conduct robust sensitivity analyses, encourage replication studies, and engage in open dialogue about potential biases in your findings.


Incorporate STEM journalism in your classroom

  • Exercise type: Activity
  • Topic: Science & Society
  • Category: Research & Design
  • Category: Diversity in STEM

How bias affects scientific research

  • Download Student Worksheet

Purpose: Students will work in groups to evaluate bias in scientific research and engineering projects and to develop guidelines for minimizing potential biases.

Procedural overview: After reading the Science News for Students article “ Think you’re not biased? Think again ,” students will discuss types of bias in scientific research and how to identify it. Students will then search the Science News archive for examples of different types of bias in scientific and medical research. Students will read the National Institute of Health’s Policy on Sex as a Biological Variable and analyze how this policy works to reduce bias in scientific research on the basis of sex and gender. Based on their exploration of bias, students will discuss the benefits and limitations of research guidelines for minimizing particular types of bias and develop guidelines of their own.

Approximate class time: 2 class periods

Materials:

  • How Bias Affects Scientific Research student guide
  • Computer with access to the Science News archive
  • Interactive meeting and screen-sharing application for virtual learning (optional)

Directions for teachers:

One of the guiding principles of scientific inquiry is objectivity. Objectivity is the idea that scientific questions, methods and results should not be affected by the personal values, interests or perspectives of researchers. However, science is a human endeavor, and experimental design and analysis of information are products of human thought processes. As a result, biases may be inadvertently introduced into scientific processes or conclusions.

In scientific circles, bias is described as any systematic deviation between the results of a study and the “truth.” Bias is sometimes described as a tendency to prefer one thing over another, or to favor one person, thing or explanation in a way that prevents objectivity or that influences the outcome of a study or the understanding of a phenomenon. Bias can be introduced in multiple points during scientific research — in the framing of the scientific question, in the experimental design, in the development or implementation of processes used to conduct the research, during collection or analysis of data, or during the reporting of conclusions.

Researchers generally recognize several different sources of bias, each of which can strongly affect the results of STEM research. Three types of bias that often occur in scientific and medical studies are researcher bias, selection bias and information bias.

Researcher bias occurs when the researcher conducting the study is in favor of a certain result. Researchers can influence outcomes through their study design choices, including who they choose to include in a study and how data are interpreted. Selection bias can be described as an experimental error that occurs when the subjects of the study do not accurately reflect the population to whom the results of the study will be applied. This commonly happens as unequal inclusion of subjects of different races, sexes or genders, ages or abilities. Information bias occurs as a result of systematic errors during the collection, recording or analysis of data.

When bias occurs, a study’s results may not accurately represent phenomena in the real world, or the results may not apply in all situations or equally for all populations. For example, if a research study does not address the full diversity of people to whom the solution will be applied, then the researchers may have missed vital information about whether and how that solution will work for a large percentage of a target population.

Bias can also affect the development of engineering solutions. For example, a new technology product tested only with teenagers or young adults who are comfortable using new technologies may have user experience issues when placed in the hands of older adults or young children.

Want to make it a virtual lesson? Post the links to the  Science News for Students article “ Think you’re not biased? Think again ,” and the National Institutes of Health information on sickle-cell disease . A link to additional resources can be provided for the students who want to know more. After students have reviewed the information at home, discuss the four questions in the setup and the sickle-cell research scenario as a class. When the students have a general understanding of bias in research, assign students to breakout rooms to look for examples of different types of bias in scientific and medical research, to discuss the Science News article “ Biomedical studies are including more female subjects (finally) ” and the National Institute of Health’s Policy on Sex as a Biological Variable and to develop bias guidelines of their own. Make sure the students have links to all articles they will need to complete their work. Bring the groups back together for an all-class discussion of the bias guidelines they write.

Assign the Science News for Students article “ Think you’re not biased? Think again ” as homework reading to introduce students to the core concepts of scientific objectivity and bias. Request that they answer the first two questions on their guide before the first class discussion on this topic. In this discussion, you will cover the idea of objective truth and introduce students to the terminology used to describe bias. Use the background information to decide what level of detail you want to give to your students.

As students discuss bias, help them understand objective and subjective data and discuss the importance of gathering both kinds of data. Explain to them how these data differ. Some phenomena — for example, body temperature, blood type and heart rate — can be objectively measured. These data tend to be quantitative. Other phenomena cannot be measured objectively and must be considered subjectively. Subjective data are based on perceptions, feelings or observations and tend to be qualitative rather than quantitative. Subjective measurements are common and essential in biomedical research, as they can help researchers understand whether a therapy changes a patient’s experience. For instance, subjective data about the amount of pain a patient feels before and after taking a medication can help scientists understand whether and how the drug works to alleviate pain. Subjective data can still be collected and analyzed in ways that attempt to minimize bias.

Try to guide student discussion to include a larger context for bias by discussing the effects of bias on understanding of an “objective truth.” How can someone’s personal views and values affect how they analyze information or interpret a situation?

To help students understand potential effects of biases, present them with the following scenario based on information from the National Institutes of Health :

Sickle-cell disease is a group of inherited disorders that cause abnormalities in red blood cells. Most of the people who have sickle-cell disease are of African descent; it also appears in populations from the Mediterranean, India and parts of Latin America. Males and females are equally likely to inherit the condition. Imagine that a therapy was developed to treat the condition, and clinical trials enlisted only male subjects of African descent. How accurately would the results of that study reflect the therapy’s effectiveness for all people who suffer from sickle-cell disease?

In the sickle-cell scenario described above, scientists will have a good idea of how the therapy works for males of African descent. But they may not be able to accurately predict how the therapy will affect female patients or patients of different races or ethnicities. Ask the students to consider how they would devise a study that addressed all the populations affected by this disease.

Before students move on, have them answer the following questions. The first two should be answered for homework and discussed in class along with the remaining questions.

1. What is bias?

In common terms, bias is a preference for or against one idea, thing or person. In scientific research, bias is a systematic deviation between observations or interpretations of data and an accurate description of a phenomenon.

2. How can biases affect the accuracy of scientific understanding of a phenomenon? How can biases affect how those results are applied?

Bias can cause the results of a scientific study to be disproportionately weighted in favor of one result or group of subjects. This can cause misunderstandings of natural processes that may make conclusions drawn from the data unreliable. Biased procedures, data collection or data interpretation can affect the conclusions scientists draw from a study and the application of those results. For example, if the subjects that participate in a study testing an engineering design do not reflect the diversity of a population, the end product may not work as well as desired for all users.

3. Describe two potential sources of bias in a scientific, medical or engineering research project. Try to give specific examples.

Researchers can intentionally or unintentionally introduce biases as a result of their attitudes toward the study or its purpose or toward the subjects or a group of subjects. Bias can also be introduced by methods of measuring, collecting or reporting data. Examples of potential sources of bias include testing a small sample of subjects, testing a group of subjects that is not diverse and looking for patterns in data to confirm ideas or opinions already held.

4. How can potential biases be identified and eliminated before, during or after a scientific study?

Students should brainstorm ways to identify sources of bias in the design of research studies. They may suggest conducting implicit bias testing or interviews before a study can be started, developing guidelines for research projects, peer review of procedures and samples/subjects before beginning a study, and peer review of data and conclusions after the study is completed and before it is published. Students may focus on the ideals of transparency and replicability of results to help reduce biases in scientific research.

Obtain and evaluate information about bias

Students will now work in small groups to select and analyze articles for different types of bias in scientific and medical research. Students will start by searching the Science News or Science News for Students archives and selecting articles that describe scientific studies or engineering design projects. If the Science News or Science News for Students articles chosen by students do not specifically cite and describe a study, students should consult the Citations at the end of the article for links to related primary research papers. Students may need to read the methods section and the conclusions of the primary research paper to better understand the project’s design and to identify potential biases. Do not assume that every scientific paper features biased research.

Student groups should evaluate the study or engineering design project outlined in the article to identify any biases in the experimental design, data collection, analysis or results. Students may need additional guidance for identifying biases. Remind them of the prior discussion about sources of bias and task them to review information about indicators of bias. Possible indicators include extreme language such as all , none or nothing ; emotional appeals rather than logical arguments; proportions of study subjects with specific characteristics such as gender, race or age; arguments that support or refute one position over another and oversimplifications or overgeneralizations. Students may also want to look for clues related to the researchers’ personal identity such as race, religion or gender. Information on political or religious points of view, sources of funding or professional affiliations may also suggest biases.

Students should also identify any deliberate attempts to reduce or eliminate bias in the project or its results. Then groups should come back together and share the results of their analysis with the class.

If students need support in searching the archives for appropriate articles, encourage groups to brainstorm search terms that may turn up related articles. Some potential search terms include bias , study , studies , experiment , engineer , new device , design , gender , sex , race , age , aging , young , old , weight , patients , survival or medical .

If you are short on time or students do not have access to the Science News or Science News for Students archive, you may want to provide articles for students to review. Some suggested articles are listed in the additional resources  below.

Once groups have selected their articles, students should answer the following questions in their groups.

1. Record the title and URL of the article and write a brief summary of the study or project.

Answers will vary, but students should accurately cite the article evaluated and summarize the study or project described in the article. Sample answer: We reviewed the Science News article “Even brain images can be biased,” which can be found at www.sciencenews.org/blog/scicurious/even-brain-images-can-be-biased. This article describes how scientific studies of human brains that involve electronic images of brains tend to include study subjects from wealthier and more highly educated households and how researchers set out to collect new data to make the database of brain images more diverse.

2. What sources of potential bias (if any) did you identify in the study or project? Describe any procedures or policies deliberately included in the study or project to eliminate biases.

The article “Even brain images can be biased” describes how scientists identified a sampling bias in studies of brain images that resulted from the way subjects were recruited. Most of these studies were conducted at universities, so many college students volunteer to participate, which resulted in the samples being skewed toward wealthier, educated, white subjects. Scientists identified a database of pediatric brain images and evaluated the diversity of the subjects in that database. They found that although the subjects in that database were more ethnically diverse than the U.S. population, the subjects were generally from wealthier households and the parents of the subjects tended to be more highly educated than average. Scientists applied statistical methods to weight the data so that study samples from the database would more accurately reflect American demographics.

3. How could any potential biases in the study or design project have affected the results or application of the results to the target population?

Scientists studying the rate of brain development in children were able to recognize the sampling bias in the brain image database. When scientists were able to apply statistical methods to ensure a better representation of socioeconomically diverse samples, they saw a different pattern in the rate of brain development in children. Scientists learned that, in general, children’s brains matured more quickly than they had previously thought. They were able to draw new conclusions about how certain factors, such as family wealth and education, affected the rate at which children’s brains developed. But the scientsits also suggested that they needed to perform additional studies with a deliberately selected group of children to ensure true diversity in the samples.
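The statistical weighting described above can be sketched with post-stratification: each stratum in the sample is weighted by the ratio of its population share to its sample share, so that a weighted analysis reflects the target population's composition. The household-income strata and percentages below are invented purely for illustration:

```python
def poststratification_weights(sample_counts, population_shares):
    """Weight each stratum by (population share) / (sample share), so that
    weighted totals match the target population's composition."""
    n = sum(sample_counts.values())
    return {stratum: population_shares[stratum] / (sample_counts[stratum] / n)
            for stratum in sample_counts}

# Hypothetical sample skewed toward high-income households.
sample_counts = {"low": 100, "middle": 300, "high": 600}
# Hypothetical shares of each stratum in the real population.
population_shares = {"low": 0.30, "middle": 0.45, "high": 0.25}

weights = poststratification_weights(sample_counts, population_shares)
# Under-represented strata get weights > 1 (here, "low" gets 3.0);
# over-represented strata get weights < 1.
```

Multiplying each record by its stratum weight makes the weighted sample mirror the population's proportions, which is the essence of the reweighting the researchers applied to the brain-image database.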

In this phase, students will review the Science News article “ Biomedical studies are including more female subjects (finally) ” and the NIH Policy on Sex as a Biological Variable , including the “ guidance document .” Students will identify how sex and gender biases may have affected the results of biomedical research before NIH instituted its policy. The students will then work with their group to recommend other policies to minimize biases in biomedical research.

To guide their development of proposed guidelines, students should answer the following questions in their groups.

1. How have sex and gender biases affected the value and application of biomedical research?

Gender and sex biases in biomedical research have diminished the accuracy and quality of research studies and reduced the applicability of results to the entire population. When girls and women are not included in research studies, the responses and therapeutic outcomes of approximately half of the target population for potential therapies remain unknown.

2. Why do you think the NIH created its policy to reduce sex and gender biases?

In the guidance document, the NIH states that “There is a growing recognition that the quality and generalizability of biomedical research depends on the consideration of key biological variables, such as sex.” The document goes on to state that many diseases and conditions affect people of both sexes, and restricting diversity of biological variables, notably sex and gender, undermines the “rigor, transparency, and generalizability of research findings.”

3. What impact has the NIH Policy on Sex as a Biological Variable had on biomedical research?

The NIH’s policy requiring that sex be factored into research designs, analyses and reporting means that, when developing and funding biomedical research studies, researchers and institutes must address potential biases in the planning stages, which helps to reduce or eliminate those biases in the final study. Including females in biomedical research studies helps to ensure that the results of biomedical research are applicable to a larger proportion of the population, expands the therapies available to girls and women and improves their health care outcomes.

4. What other policies do you think the NIH could institute to reduce biases in biomedical research? If you were to recommend one set of additional guidelines for reducing bias in biomedical research, what guidelines would you propose? Why?

Students could suggest that the NIH should have similar policies related to race, gender identity, wealth/economic status and age. Students should identify a category of bias or an underserved segment of the population that they think needs to be addressed in order to improve biomedical research and health outcomes for all people and should recommend guidelines to reduce bias related to that group. Students recommending guidelines related to race might suggest that some populations, such as African Americans, are historically underserved in terms of access to medical services and health care, and they might suggest guidelines to help reduce the disparity. Students might recommend that a certain percentage of each biomedical research project’s sample include patients of diverse racial and ethnic backgrounds.

5. What biases would your suggested policy help eliminate? How would it accomplish that goal?

Students should describe how their proposed policy would address a discrepancy in the application of biomedical research to the entire human population. Race can be considered a biological variable, like sex, and race has been connected to higher or lower incidence of certain characteristics or medical conditions, such as blood types or diabetes, which sometimes affect how the body responds to infectious agents, drugs, procedures or other therapies. By ensuring that people from diverse racial and ethnic groups are included in biomedical research studies, scientists and medical professionals can provide better medical care to members of those populations.

Class discussion about bias guidelines

Allow each group time to present its proposed bias-reducing guideline to another group and to receive feedback. Then provide groups with time to revise their guidelines, if necessary. Act as a facilitator while students conduct the class discussion. Use this time to assess individual and group progress. Students should demonstrate an understanding of different biases that may affect patient outcomes in biomedical research studies and in practical medical settings. As part of the group discussion, have students answer the following questions.

1. Why is it important to identify and eliminate biases in research and engineering design?

The goal of most scientific research and engineering projects is to improve the quality of life and the depth of understanding of the world we live in. By eliminating biases, we can better serve the entirety of the human population and the planet.

2. Were there any guidelines that were suggested by multiple groups? How do those actions or policies help reduce bias?

Answers will depend on the guidelines developed and recommended by other groups. Groups could suggest policies related to race, gender identity, wealth/economic status and age. Each group should clearly identify how its guidelines are designed to reduce bias and improve the quality of human life.

3. Which guidelines developed by your classmates do you think would most reduce the effects of bias on research results or engineering designs? Support your selection with evidence and scientific reasoning.

Answers will depend on the guidelines developed and recommended by other groups. Students should agree that guidelines that minimize inequities and improve health care outcomes for a larger group are preferred. Guidelines addressing inequities of race and wealth/economic status are likely to expand access to improved medical care for the largest percentage of the population. People who grow up in less economically advantaged settings have specific health issues related to nutrition and their access to clean water, for instance. Ensuring that people from the lowest economic brackets are represented in biomedical research improves their access to medical care and can dramatically change the length and quality of their lives.

Possible extension

Challenge students to honestly evaluate any biases they may have. Encourage them to take an Implicit Association Test (IAT) to identify any implicit biases they may not recognize. Harvard University has an online IAT platform where students can participate in different assessments to identify preferences and biases related to sex and gender, race, religion, age, weight and other factors. You may want to challenge students to take a test before they begin the activity, and then assign students to take a test after completing the activity to see if their preferences have changed. Students can report their results to the class if they want to discuss how awareness affects the expression of bias.

Additional resources

If you want additional resources for the discussion or to provide resources for student groups, check out the links below.

Additional Science News articles:

Even brain images can be biased

Data-driven crime prediction fails to erase human bias

What we can learn from how a doctor’s race can affect Black newborns’ survival

Bias in a common health care algorithm disproportionately hurts black patients

Female rats face sex bias too

There’s no evidence that a single ‘gay gene’ exists

Positive attitudes about aging may pay off in better health

What male bias in the mammoth fossil record says about the animal’s social groups

The man flu struggle might be real, says one researcher

Scientists may work to prevent bias, but they don’t always say so

The Bias Finders

Showdown at Sex Gap

University resources:

Project Implicit (take an Implicit Association Test)

Catalogue of Bias

Understanding Health Research


Bias in research

Volume 17, Issue 4

Joanna Smith 1 and Helen Noble 2

1 School of Human and Health Sciences, University of Huddersfield, Huddersfield, UK
2 School of Nursing and Midwifery, Queen's University Belfast, Belfast, UK

Correspondence to: Dr Joanna Smith, School of Human and Health Sciences, University of Huddersfield, Huddersfield HD1 3DH, UK; j.e.smith{at}hud.ac.uk

https://doi.org/10.1136/eb-2014-101946


The aim of this article is to outline types of ‘bias’ across research designs, and consider strategies to minimise bias. Evidence-based nursing, defined as the “process by which evidence, nursing theory, and clinical expertise are critically evaluated and considered, in conjunction with patient involvement, to provide the delivery of optimum nursing care,” 1 is central to the continued development of the nursing professional. Implementing evidence into practice requires nurses to critically evaluate research, in particular assessing the rigour in which methods were undertaken and factors that may have biased findings.

What is bias in relation to research and why is understanding bias important?

Although different study designs have specific methodological challenges and constraints, bias can occur at each stage of the research process (table 1). In quantitative research, validity and reliability are assessed using statistical tests that estimate the size of error in samples and calculate the significance of findings (typically p values or CIs). The tests and measures used to establish the validity and reliability of quantitative research cannot be applied to qualitative research. However, in the broadest context, these terms are applicable, with validity referring to the integrity and application of the methods and the precision with which the findings accurately reflect the data, and reliability referring to consistency within the analytical processes. 4

Table 1: Types of research bias

How is bias minimised when undertaking research?

Bias exists in all study designs, and although researchers should attempt to minimise bias, outlining potential sources of bias enables greater critical evaluation of the research findings and conclusions. Researchers bring to each study their experiences, ideas, prejudices and personal philosophies, which if accounted for in advance of the study, enhance the transparency of possible research bias. Clearly articulating the rationale for and choosing an appropriate research design to meet the study aims can reduce common pitfalls in relation to bias. Ethics committees have an important role in considering whether the research design and methodological approaches are biased, and suitable to address the problem being explored. Feedback from peers, funding bodies and ethics committees is an essential part of designing research studies, and often provides valuable practical guidance in developing robust research.

In quantitative studies, selection bias is often reduced by the random selection of participants, and in the case of clinical trials randomisation of participants into comparison groups. However, not accounting for participants who withdraw from the study or are lost to follow-up can result in sample bias or change the characteristics of participants in comparison groups. 7 In qualitative research, purposeful sampling has advantages when compared with convenience sampling in that bias is reduced because the sample is constantly refined to meet the study aims. Premature closure of the selection of participants before analysis is complete can threaten the validity of a qualitative study. This can be overcome by continuing to recruit new participants into the study during data analysis until no new information emerges, known as data saturation. 8

In quantitative studies having a well-designed research protocol explicitly outlining data collection and analysis can assist in reducing bias. Feasibility studies are often undertaken to refine protocols and procedures. Bias can be reduced by maximising follow-up and where appropriate in randomised control trials analysis should be based on the intention-to-treat principle, a strategy that assesses clinical effectiveness because not everyone complies with treatment and the treatment people receive may be changed according to how they respond. Qualitative research has been criticised for lacking transparency in relation to the analytical processes employed. 4 Qualitative researchers must demonstrate rigour, associated with openness, relevance to practice and congruence of the methodological approach. Although other researchers may interpret the data differently, appreciating and understanding how the themes were developed is an essential part of demonstrating the robustness of the findings. Reducing bias can include respondent validation, constant comparisons across participant accounts, representing deviant cases and outliers, prolonged involvement or persistent observation of participants, independent analysis of the data by other researchers and triangulation. 4
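The intention-to-treat principle mentioned above can be sketched as follows (the patient records are invented for illustration): every randomised patient is analysed in the group they were assigned to, whether or not they complied with the treatment, whereas a per-protocol analysis drops non-compliers and so can distort the comparison:

```python
# Hypothetical trial records: (assigned_group, complied, outcome_score)
records = [
    ("treatment", True, 8), ("treatment", True, 7), ("treatment", False, 3),
    ("control", True, 4), ("control", True, 5), ("control", False, 4),
]

def mean(xs):
    return sum(xs) / len(xs)

def intention_to_treat(records, group):
    """Analyse everyone by their *assigned* group, compliant or not."""
    return mean([o for g, _, o in records if g == group])

def per_protocol(records, group):
    """Analyse only those who complied -- prone to bias, because dropping
    non-compliers can change the characteristics of the comparison groups."""
    return mean([o for g, c, o in records if g == group and c])

itt_effect = (intention_to_treat(records, "treatment")
              - intention_to_treat(records, "control"))
pp_effect = (per_protocol(records, "treatment")
             - per_protocol(records, "control"))
# In this toy data, excluding the non-complier inflates the apparent
# treatment effect: pp_effect is larger than itt_effect.
```

The intention-to-treat estimate preserves the balance created by randomisation, which is why it is the preferred primary analysis in randomised controlled trials.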

In summary, minimising bias is a key consideration when designing and undertaking research. Researchers have an ethical duty to outline the limitations of studies and account for potential sources of bias. This will enable health professionals and policymakers to evaluate and scrutinise study findings, and consider these when applying findings to practice or policy.

  • The Lancet. Retraction—ileal-lymphoid-nodular hyperplasia, non-specific colitis, and pervasive developmental disorder in children. Lancet 2010;375:445.

Competing interests None.

The complete guide to selection bias

As the old saying goes, knowledge is power. But the quest for knowledge isn’t always as easy as we’d like it to be.

Researchers often find themselves delving into datasets, dissecting information, and uncovering insights that can shape entire fields of study. However, amongst the excitement of new discovery lies a subtle yet complicated obstacle: selection bias.

This phenomenon has the potential to warp conclusions, skew perceptions, and cast doubt on the integrity of your findings. But what is it, and how can you avoid it in your own research? Let’s take a look.

What is selection bias?

Also known as the selection effect, selection bias occurs when a sample used in a study isn’t completely representative of the population of interest or is sub-optimal for answering the specific research question. This could be introduced through different sampling methods or the way the participants were selected. Or it could just come down to the particular area of interest being researched.

This bias then distorts the results of the study, undermining its value and rendering it untrustworthy.

There are several types of selection bias, each with its own implications:

1. Sampling bias

Sampling bias occurs when certain members of the population of interest have a higher or lower chance of being selected than others. When this happens, the sample, and therefore the research, will not reflect a representative point of view.

2. Survivorship bias

Survivorship bias is when only successful subjects are included in the final analysis, leading to a skewed outcome. This is often seen in studies of successful people or companies, where failures are taken out of the equation.

3. Self-selection bias

Self-selection bias is where people nominate themselves to be part of a study, leading to a non-random sample of participants. This is often prevalent in surveys or online polls, where the people who take part may not represent the population as a whole.

4. Information bias

Information bias happens when there are systematic errors in the measurement or collection of data, making the resulting outcomes unreliable.

5. Non-response bias

Non-response bias occurs when people refuse to take part in, or drop out of, a study. There is often some underlying commonality among these non-responders; for example, they might mostly be male, or under the age of 20. Their absence then skews the remaining sample.

Examples of selection bias

To understand the more practical implications that selection bias can have on a study, let’s take a look at some real-life scenarios.

1. Clinical trials

Clinical trials can be significantly affected by selection bias, particularly self-selection bias, which can distort estimates of a drug's effectiveness. For example, if younger people are more likely to take part in clinical trials, the results may only reflect the drug's impact on that age group, leaving older people unrepresented.

2. Job recruitment

Recruitment is a common example of selection bias. If a company relies on employee referrals as one of its main recruitment methods, individuals from other backgrounds or networks may be excluded from the process entirely. This can result in a workforce that lacks diversity and unique thinking.

3. Economic studies

Economic studies can be vulnerable to survivorship bias, especially studies of successful businesses or investment strategies. For example, if a study only examines companies that have achieved significant growth, it may overlook the failures and challenges faced by less successful enterprises. Survivorship bias distorts perceptions of risk and reward, and this can lead to flawed investment decisions.
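The distortion is easy to see numerically. Here is a small illustrative simulation; the firms, return distribution, and failure threshold are all invented for the sketch:

```python
import random

random.seed(42)

# Hypothetical universe of 10,000 firms (distribution invented):
# annual returns centred on 0%, so the average firm gains nothing.
returns = [random.gauss(0.0, 0.20) for _ in range(10_000)]

# Firms that lose more than 25% go bust and vanish from later datasets,
# so a study of today's firms only ever sees the survivors.
survivors = [r for r in returns if r > -0.25]

mean_all = sum(returns) / len(returns)
mean_survivors = sum(survivors) / len(survivors)

print(f"True average return:    {mean_all:+.1%}")
print(f"Survivors-only average: {mean_survivors:+.1%}")
```

Because the left tail of the distribution has been silently removed, the survivors-only average is higher than the true average, even though no firm ever had an edge.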

4. Educational research

In educational studies, selection bias can distort assessments of teaching interventions or educational programs. For example, you might want to look at the effectiveness of a tutoring program. But if you only include students from affluent areas, the findings will likely be irrelevant to students from disadvantaged backgrounds.

The impact of selection bias

Why is selection bias so concerning? Because it can result in misleading conclusions and send researchers down the wrong path. And when this happens, outcomes don’t align with reality.

The ramifications of selection bias extend beyond just the statistics. They can also result in wasted resources. Valuable time, money, and manpower are used up on research that doesn’t reflect the population of interest. This can stop progress and have negative or unfair impacts on certain groups of people.

It can also lead people to lose trust in science. When research comes across as biased or unfair, it makes people doubt if science can really help us. This can have hugely negative impacts on society as a whole.

How to avoid selection bias in your own research

To get accurate results and draw meaningful conclusions, you need to conduct research that's fair and minimizes bias. But how?

Here are some simple yet effective strategies to ensure you conduct research with integrity and impartiality:

1. Define your population

Clearly define the population you want to study and make sure you understand who should be included and excluded from your research. Let's say you want to gauge people's understanding of a new financial services product. You would define your population of interest (people who would use that product), then ensure your sample reflects that population.

2. Random sampling

Use random sampling to select participants from your population. Imagine you're conducting a survey on public opinion about a controversial social issue. Instead of selecting participants based on convenience or availability, use random sampling to ensure that every member of the population of interest has an equal chance of being included in your study. This helps reduce the risk of bias and ensures that your findings are representative of the population as a whole.
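The difference between convenience and random sampling can be sketched in a few lines. This is an illustrative simulation, not real survey data: the sampling frame, the "opinion" values, and the correlation with list order are all invented.

```python
import random

random.seed(1)

# Hypothetical sampling frame of 1,000 people, ordered by age, where
# opinion happens to correlate with age (values are invented).
population = [{"id": i, "opinion": 30 + 40 * i / 999 + random.gauss(0, 5)}
              for i in range(1000)]

def mean_opinion(group):
    return sum(p["opinion"] for p in group) / len(group)

# Convenience sample: the first 50 people on the list (e.g., whoever
# answered the phone first) -- systematically the youngest.
convenience = population[:50]

# Simple random sample: every member has an equal chance of selection.
srs = random.sample(population, k=50)

print(f"Population mean:         {mean_opinion(population):.1f}")
print(f"Convenience sample mean: {mean_opinion(convenience):.1f}")  # biased low
print(f"Random sample mean:      {mean_opinion(srs):.1f}")
```

Whenever the ordering of the frame is related to the variable of interest, taking "the first N available" produces a systematic error, while the random sample's error is only sampling noise.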

3. Stratified sampling

If your population has different subgroups, you can use stratified sampling to ensure representation from each group. By sampling randomly within the strata, you can capture the diversity of your population more accurately. This avoids biases introduced by over- or underrepresentation of certain groups.
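Proportional stratified sampling can be sketched as follows. The group labels and an 80/15/5 population split are invented for illustration:

```python
import random
from collections import defaultdict

random.seed(7)

# Hypothetical population with an 80/15/5 split across three subgroups.
population = ([{"group": "A"}] * 800 + [{"group": "B"}] * 150
              + [{"group": "C"}] * 50)

def stratified_sample(pop, key, n):
    """Sample proportionally from each stratum (illustrative sketch)."""
    strata = defaultdict(list)
    for person in pop:
        strata[person[key]].append(person)
    sample = []
    for members in strata.values():
        k = round(n * len(members) / len(pop))  # proportional allocation
        sample.extend(random.sample(members, k))
    return sample

sample = stratified_sample(population, "group", 100)
counts = {g: sum(1 for p in sample if p["group"] == g) for g in "ABC"}
print(counts)  # → {'A': 80, 'B': 15, 'C': 5}
```

Note that naive `round()` allocation can make the total drift slightly above or below `n` when there are many strata; a largest-remainder allocation fixes that, but is omitted here for brevity.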

4. Minimize exclusions

Avoid excluding groups or individuals from your study unless it is absolutely necessary. Exclusions can introduce bias and limit the generalizability of your findings.

5. Transparent reporting

Be transparent about your selection process in your research reports. Clearly document how participants were selected, as well as any criteria used for exclusion. This information gives people a clear insight into your methodology. And it helps readers to build trust with your findings.

6. Consider alternatives

Explore alternative methods of data collection or sampling if traditional methods introduce bias. For example, if you're carrying out a study on consumer preferences for a new product, consider using a combination of online surveys and focus groups to reach a diverse range of participants. This approach reduces bias by avoiding reliance on a single method of data collection and helps ensure that your findings are robust and reliable.

7. Consult experts

Seek input from colleagues or experts in your field to review your research design and selection process. Fresh perspectives can help identify potential sources of bias that may have been overlooked, enhancing the credibility of your research and ensuring you offer diverse viewpoints and methodologies.

How Prolific can help

At Prolific, flexibility and control are right at the heart of everything we do. With our pool of 120,000+ active participants, all fully vetted and verified, you can rely on us to deliver definitive and varied data sets, no matter what your research topic is.

Sign up to Prolific today to gather balanced, representative samples for your research.

May 4, 2024

Implicit Bias Hurts Everyone. Here’s How to Overcome It

The environment shapes stereotypes and biases, but it is possible to recognize and change them

By Corey S. Powell & OpenMind Magazine

We all have a natural tendency to view the world in black and white—to the extent that it's hard not to hear "black" and immediately think "white." Fortunately, there are ways to activate the more subtle shadings in our minds. Kristin Pauker is a professor of psychology at the University of Hawaiʻi at Mānoa who studies stereotyping and prejudice, with a focus on how our environment shapes our biases. In this podcast and Q&A, she tells OpenMind co-editor Corey S. Powell how researchers measure and study bias, and how we can use their findings to make a more equitable world. (This conversation has been edited for length and clarity.)

When I hear “bias,” the first thing I think of is a conscious prejudice. But you study something a lot more subtle, which researchers call “implicit bias.” What is it, and how does it affect us?

Implicit bias is a form of bias that influences our decision-making, our interactions and our behaviors. It can be based on any social group membership, like race, gender, age, sexual orientation or even the color of your shirt. Often we’re not aware of the ways in which these biases are influencing us. Sometimes implicit bias gets called unconscious bias, which is a little bit of a misnomer. We can be aware of these biases, so it's not necessarily unconscious. But we often are not aware of the way in which they're influencing our behaviors and thoughts.

You make it sound like almost anything can set us off. Why is bias so deeply ingrained in our heads?

Our brain likes to categorize things because it makes our world easier to process. We make categories as soon as we start learning about something. So we categorize fruits, we categorize vegetables, we categorize chairs, we categorize tables for their function—and we also categorize people. We know from research that categorization happens early in life, as early as 5 or 6, in some cases even 3 or 4. Categorization creates shortcuts that help us process information faster, but that also can lead us to make assumptions that may or may not hold in particular situations. What categories we use are directed by the environment that we're in. Our environment already has told us certain categories are really important, such as gender, age, race and ethnicity. We quickly form an association when we’re assigned to a particular group.

In your research, you use a diagnostic tool called an “ implicit association test .” How does it work, and what does it tell you?

Typically someone would show you examples of individuals who belong to categories, and then ask you to categorize those individuals. For example, you would see faces and you would categorize them as black and white. You’re asked to make a fast categorization, as fast as you can. Then you are presented with words that could be categorized as good or bad, like “hero” and “evil,” and again asked to categorize the words quickly. The complicated part happens when, say, good and white are paired together or bad and black are paired together. You're asked to categorize the faces and the words as you were before. Then it's flipped, so that bad and white are paired together, and good and black are paired together. You’re asked to make the categorizations once again with the new pairings.

The point of the test is, how quickly do you associate certain concepts together? Oftentimes if certain concepts are more closely paired in your mind, then it will be easier for you to make that association. Your response will be faster. When the pairing is less familiar to you or less closely associated, it takes you longer to respond. Additional processing needs to occur.
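The scoring logic described here can be sketched numerically. This is an illustrative simplification with invented reaction times, loosely inspired by (but not identical to) the IAT's actual D-score algorithm:

```python
import statistics

# Invented reaction times (milliseconds) from two blocks of a
# hypothetical association test.
congruent_rt = [620, 585, 640, 610, 595, 630]      # familiar pairings
incongruent_rt = [780, 820, 760, 805, 790, 840]    # unfamiliar pairings

# A simple effect measure: how much slower the unfamiliar pairings
# are, scaled by the pooled spread of all responses.
diff = statistics.mean(incongruent_rt) - statistics.mean(congruent_rt)
pooled_sd = statistics.stdev(congruent_rt + incongruent_rt)
score = diff / pooled_sd

print(f"Mean slowdown: {diff:.0f} ms, standardised score: {score:.2f}")
```

A positive score means the "unfamiliar" pairings took longer, which is the behavioural signature the test interprets as a stronger association between the other pair of concepts.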

When you run this implicit association test on your test subjects or your students, are they often surprised by the results?

We’ve done it as a demonstration in the classroom, and I've had students come up and complain saying, “There’s something wrong with this test. I don't believe it.” They’ll try to poke all kinds of holes in the test because it gave them a score that wasn’t what they felt it should be according to what they think about themselves. This is the case, I think, for almost anyone. I've taken an implicit association test and found that I have a stronger association with men in science than women in science . And I'm a woman scientist! We can have and hold these biases because they’re prevalent in society, even if they’re biases that may not be beneficial to the group we belong to.

Studies show that even after you make people aware of their implicit biases, they can’t necessarily get rid of them. So are we stuck with our biases?

Those biases are hard to change and control, but that doesn't mean that they are uncontrollable and unchangeable. It’s just that oftentimes there are many features in our environment that reinforce those biases. I was thinking about an analogy. Right now I’m struggling with weeds growing in my yard, invasive vines. It’s hard because there are so many things supporting the growth of these vines. I live in a place that has lots of sun and rain. Similarly, there’s so much in our environment that is supporting our biases. It’s hard to just cut them off and be like, OK, they're gone. We have to think about ways in which we can change the features of our environment—so that our weeds aren’t so prolific.

Common programs aimed at reducing bias, such as corporate diversity training workshops, often seem to stop at the stage of making people aware that bias exists. Is that why they haven’t worked very well ?

If people are told that they’re biased, the reaction that many of them have is, “Oh, that means I'm a racist? I'm not a racist!” Very defensive, because we associate this idea of being biased with a moral judgment that I'm a bad person. Because of that, awareness-raising can have the opposite of the intended effect. Being told that they're biased can make people worried and defensive, and they push back against that idea. They're not willing to accept it.

A lot of the diversity training models are based on the idea that you can just tell people about their biases and then get them to accept them and work on them. But, A, some people don't want to accept their biases. B, some people don't want to work on them. And C, the messaging around how we talk about these biases creates a misunderstanding that they can’t be changed. We talk about biases that are unconscious, biases that we all hold, that are formed early in life—it creates the idea, “Well, there’s nothing I can do, so why should I even try?”

How can we do better in talking about bias, so that people are more likely to embrace change instead of becoming defensive or defeated?

Some of it is about messaging. Biases are hard to change, but we should be discussing the ways in which these biases can change, even though it might take some time and work. You have to emphasize the idea that these things can change, or else why would we try? There is research showing that if you just give people their bias score, normally that doesn't result in them becoming more aware of their bias. But if you combine that score with a message that this is something controllable, people are less defensive and more willing to accept their biases.

What about concrete actions we can take to reduce the negative impact of implicit bias?

One thing is thinking about when we do interventions. A lot of times we’re trying to make changes in the workplace. We should be thinking more about how we're raising our children. The types of environments we're exposing them to, and the features that are in our schools , are good places to think about creating change. Prejudice is something that’s malleable.

Another thing is not always focusing on the person. So much of what we do in these interventions is try to change individual people's biases. But we can also think about our environment. What are the ways in which our environments are communicating these biases, and how can we make changes there? A clever idea people have been thinking about is trying to change consequences of biases. There's a researcher, Jason A. Okonofua , who talks about this and calls it “sidelining bias.” You're not targeting the person and trying to get rid of their biases. You're targeting the situations that support those biases. If you can change that situation and kind of cut it off, then the consequences of bias might not be as bad. It could lead to a judgment that is not so influenced by those biases.

There’s research showing that people make fairer hiring decisions when they work off tightly structured interviews and qualification checklists, which leave less room for subjective reactions. Is that the kind of “sidelining” strategy you’re talking about?

Yes, that’s been shown to be an effective way to sideline bias. If you set those criteria ahead of time, it's harder for you to shift a preference based on the person that you would like to hire. Another good example is finding ways to slow down the processes we're working on. Biases are more likely to influence our decision-making when we have to make really quick decisions or when we are stressed—which is the case for a lot of important decisions that we make.

Jennifer Eberhardt does research on these kinds of implicit biases. She worked with NextDoor (a neighborhood monitoring app) when they noticed a lot of racial profiling in the things people were reporting in their neighborhood. She worked with them to change the way that people report a suspicious person. Basically they added some extra steps to the checklist when you report something. Rather than just reporting that someone looks suspicious, a user had to indicate what about the behavior itself was suspicious. And then there was an explicit warning that they couldn't just say the reason for the suspicious behavior was someone's race. Including extra check steps slowed down the process and reduced the profiling.

It does feel like we’re making progress in addressing bias but, damn, it’s been a slow process. Where can we go from here?

A big part that’s missing in the research on implicit bias is creating tools that are useful for people. We still don’t know a lot about bias, but we know a lot more than we're willing to put into practice. For instance, creating resources for parents to be able to have conversations about bias , and to be aware that the everyday things we do are really important. This is something that many people want to tackle, but they don’t know how to do it. Just asking questions about what is usual and what is unusual has really interesting effects. We’ve done that with our son. He’d say something and I would ask, “Why is that something that only boys can do? You say girls can't do that, is that really the case? Can you think of examples where the opposite is true?”

This Q&A is part of a series of OpenMind essays, podcasts and videos supported by a generous grant from the Pulitzer Center 's Truth Decay initiative.

This story originally appeared on OpenMind , a digital magazine tackling science controversies and deceptions.


Types of Bias in Research | Definition & Examples

Research bias results from any deviation from the truth, causing distorted results and wrong conclusions. Bias can occur at any phase of your research, including during data collection , data analysis , interpretation, or publication. Research bias can occur in both qualitative and quantitative research .

Understanding research bias is important for several reasons.

  • Bias exists in all research, across research designs , and is difficult to eliminate.
  • Bias can occur at any stage of the research process.
  • Bias impacts the validity and reliability of your findings, leading to misinterpretation of data.

It is almost impossible to conduct a study without some degree of research bias. It’s crucial for you to be aware of the potential types of bias, so you can minimise them.

For example, imagine a study tracking the success rate of a weight-loss program. The measured success rate will likely be inflated if participants start to drop out: those who become disillusioned due to not losing weight may leave the study, while those who succeed in losing weight are more likely to continue. This in turn biases the findings towards more favourable results.
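The dropout effect can be made concrete with a short simulation; the success and dropout rates below are invented for illustration:

```python
import random

random.seed(3)

# Hypothetical weight-loss study (all rates invented): 40% of the 500
# participants genuinely lose weight. Unsuccessful participants drop
# out before the final measurement half of the time.
participants = [{"lost_weight": random.random() < 0.40} for _ in range(500)]
completers = [p for p in participants
              if p["lost_weight"] or random.random() > 0.50]

true_rate = sum(p["lost_weight"] for p in participants) / len(participants)
observed_rate = sum(p["lost_weight"] for p in completers) / len(completers)

print(f"True success rate:          {true_rate:.0%}")
print(f"Success rate in completers: {observed_rate:.0%}")
```

Because failures leave the study disproportionately, the rate computed from completers alone overstates the program's real effect.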

Table of contents

  • Actor–observer bias
  • Confirmation bias
  • Information bias
  • Interviewer bias
  • Publication bias
  • Researcher bias
  • Response bias
  • Selection bias
  • How to avoid bias in research
  • Other types of research bias
  • Frequently asked questions about research bias

Actor–observer bias occurs when you attribute the behaviour of others to internal factors, like skill or personality, but attribute your own behaviour to external or situational factors.

In other words, when you are the actor in a situation, you are more likely to link events to external factors, such as your surroundings or environment. However, when you are observing the behaviour of others, you are more likely to associate behaviour with their personality, nature, or temperament.

For example, one interviewee recalls a morning when it was raining heavily. They were rushing to drop off their kids at school in order to get to work on time. As they were driving down the road, another car cut them off as they were trying to merge. They tell you how frustrated they felt and exclaim that the other driver must have been a very rude person.

At another point, the same interviewee recalls that they did something similar: accidentally cutting off another driver while trying to take the correct exit. However, this time, the interviewee claimed that they always drive very carefully, blaming their mistake on poor visibility due to the rain.

Confirmation bias is the tendency to seek out information in a way that supports our existing beliefs while also rejecting any information that contradicts those beliefs. Confirmation bias is often unintentional but still results in skewed results and poor decision-making.

Let’s say you grew up with a parent in the military. Chances are that you have a lot of complex emotions around overseas deployments. This can lead you to over-emphasise findings that ‘prove’ that your lived experience is the case for most families, neglecting other explanations and experiences.

Information bias , also called measurement bias, arises when key study variables are inaccurately measured or classified. Information bias occurs during the data collection step and is common in research studies that involve self-reporting and retrospective data collection. It can also result from poor interviewing techniques or differing levels of recall from participants.

The main types of information bias are:

  • Recall bias
  • Observer bias
  • Performance bias
  • Regression to the mean (RTM)

For example, in a study on the link between smartphone use and physical symptoms, you might ask students, over a period of four weeks, to keep a journal noting how much time they spent on their smartphones along with any symptoms like muscle twitches, aches, or fatigue.

Recall bias is a type of information bias. It occurs when respondents are asked to recall events in the past and is common in studies that involve self-reporting.

As a rule of thumb, infrequent events (e.g., buying a house or a car) will be memorable for longer periods of time than routine events (e.g., daily use of public transportation). You can reduce recall bias by running a pilot survey and carefully testing recall periods. If possible, test both shorter and longer periods, checking for differences in recall.

For example, in a case–control study on childhood diet and cancer, you might ask parents to recall the diets of two groups of children:

  • A group of children who have been diagnosed, called the case group
  • A group of children who have not been diagnosed, called the control group

Since the parents are being asked to recall what their children generally ate over a period of several years, there is high potential for recall bias in the case group.

The best way to reduce recall bias is by ensuring your control group will have similar levels of recall bias to your case group. Parents of children who have childhood cancer, which is a serious health problem, are likely to be quite concerned about what may have contributed to the cancer.

Thus, if asked by researchers, these parents are likely to think very hard about what their child ate or did not eat in their first years of life. Parents of children with other serious health problems (aside from cancer) are also likely to be quite concerned about any diet-related question that researchers ask about.

Observer bias is the tendency of researchers to see what they expect or want to see, rather than what is actually occurring. Observer bias can affect the results in observational and experimental studies, where subjective judgement (such as assessing a medical image) or measurement (such as rounding blood pressure readings up or down) is part of the data collection process.

Observer bias leads to over- or underestimation of true values, which in turn compromises the validity of your findings. You can reduce observer bias by using double- and single-blinded research methods.

Based on discussions you had with other researchers before starting your observations, you are inclined to think that medical staff tend to simply call each other when they need specific patient details or have questions about treatments.

At the end of the observation period, you compare notes with your colleague. Your conclusion was that medical staff tend to favour phone calls when seeking information, while your colleague noted down that medical staff mostly rely on face-to-face discussions. Seeing that your expectations may have influenced your observations, you and your colleague decide to conduct interviews with medical staff to clarify the observed events.

Note: Observer bias and actor–observer bias are not the same thing.

Performance bias is unequal care between study groups. Performance bias occurs mainly in medical research experiments, if participants have knowledge of the planned intervention, therapy, or drug trial before it begins.

Studies about nutrition, exercise outcomes, or surgical interventions are very susceptible to this type of bias. It can be minimized by using blinding , which prevents participants and/or researchers from knowing who is in the control or treatment groups. If blinding is not possible, then using objective outcomes (such as hospital admission data) is the best approach.

When the subjects of an experimental study change or improve their behaviour because they are aware they are being studied, this is called the Hawthorne (or observer) effect . Similarly, the John Henry effect occurs when members of a control group are aware they are being compared to the experimental group. This causes them to alter their behaviour in an effort to compensate for their perceived disadvantage.

Regression to the mean (RTM) is a statistical phenomenon that refers to the fact that a variable that shows an extreme value on its first measurement will tend to be closer to the centre of its distribution on a second measurement.

Medical research is particularly sensitive to RTM. Here, interventions aimed at a group or a characteristic that is very different from the average (e.g., people with high blood pressure) will appear to be successful because of the regression to the mean. This can lead researchers to misinterpret results, describing a specific intervention as causal when the change in the extreme groups would have happened anyway.

In general, among people with depression, certain physical and mental characteristics have been observed to deviate from the population mean .

This could lead you to think that the intervention was effective when those treated showed improvement on measured post-treatment indicators, such as reduced severity of depressive episodes.

However, given that such characteristics deviate more from the population mean in people with depression than in people without depression, this improvement could be attributed to RTM.
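Regression to the mean is easy to reproduce in a simulation. This sketch uses invented blood-pressure numbers and assumes each measurement is a stable true level plus independent noise:

```python
import random

random.seed(5)

# Invented blood-pressure data: each reading is a stable true level
# plus independent day-to-day noise. No treatment is ever applied.
true_level = [random.gauss(120, 10) for _ in range(5000)]
first = [t + random.gauss(0, 10) for t in true_level]

# "Enrol" only people whose first reading looked extreme (>= 140).
enrolled = [i for i, bp in enumerate(first) if bp >= 140]

# Measure the same people a second time.
second = [true_level[i] + random.gauss(0, 10) for i in enrolled]

mean_first = sum(first[i] for i in enrolled) / len(enrolled)
mean_second = sum(second) / len(second)

print(f"Enrolled group, 1st reading: {mean_first:.1f}")
print(f"Enrolled group, 2nd reading: {mean_second:.1f}")
```

The second reading drifts back towards the population mean with no intervention at all, which is exactly the improvement a naive before-and-after comparison would misattribute to a treatment.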

Interviewer bias stems from the person conducting the research study. It can result from the way they ask questions or react to responses, but also from any aspect of their identity, such as their sex, ethnicity, social class, or perceived attractiveness.

Interviewer bias distorts responses, especially when the characteristics relate in some way to the research topic. Interviewer bias can also affect the interviewer’s ability to establish rapport with the interviewees, causing them to feel less comfortable giving their honest opinions about sensitive or personal topics.

For example, during an interview about how a participant spends their free time:

Participant: ‘I like to solve puzzles, or sometimes do some gardening.’

You: ‘I love gardening, too!’

In this case, seeing your enthusiastic reaction could lead the participant to talk more about gardening.

Establishing trust between you and your interviewees is crucial in order to ensure that they feel comfortable opening up and revealing their true thoughts and feelings. At the same time, being overly empathetic can influence the responses of your interviewees, as seen above.

Publication bias occurs when the decision to publish research findings is based on their nature or the direction of their results. Studies reporting results that are perceived as positive, statistically significant , or favoring the study hypotheses are more likely to be published due to publication bias.

Publication bias is related to data dredging (also called p -hacking ), where statistical tests on a set of data are run until something statistically significant happens. As academic journals tend to prefer publishing statistically significant results, this can pressure researchers to only submit statistically significant results. P -hacking can also involve excluding participants or stopping data collection once a p value of 0.05 is reached. However, this leads to false positive results and an overrepresentation of positive results in published academic literature.
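The false-positive inflation from this practice is easy to demonstrate. The following is an illustrative simulation (the study counts and number of outcomes are invented), assuming a simple large-sample z-test:

```python
import random
import statistics

random.seed(11)

def z_stat(a, b):
    """Two-sample z statistic (adequate for large samples)."""
    se = (statistics.pvariance(a) / len(a)
          + statistics.pvariance(b) / len(b)) ** 0.5
    return (statistics.mean(a) - statistics.mean(b)) / se

# 200 simulated studies where the null hypothesis is TRUE: both groups
# are drawn from the same distribution, so every "significant" result
# is a false positive.
hacked_hits = 0
for _ in range(200):
    # The p-hacking researcher measures 15 outcomes and reports the
    # study as a success if ANY of them crosses |z| > 1.96 (p < .05).
    for _outcome in range(15):
        a = [random.gauss(0, 1) for _ in range(100)]
        b = [random.gauss(0, 1) for _ in range(100)]
        if abs(z_stat(a, b)) > 1.96:
            hacked_hits += 1
            break

print(f"{hacked_hits} of 200 null studies found a 'significant' result")
```

An honest researcher testing one pre-registered outcome would expect roughly 10 false positives out of 200 (the nominal 5% rate); testing 15 outcomes and keeping any hit pushes the expected rate to about 1 − 0.95¹⁵ ≈ 54%.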

Researcher bias occurs when the researcher’s beliefs or expectations influence the research design or data collection process. Researcher bias can be deliberate (such as claiming that an intervention worked even if it didn’t) or unconscious (such as letting personal feelings, stereotypes, or assumptions influence research questions ).

The unconscious form of researcher bias is associated with the Pygmalion (or Rosenthal) effect, where the researcher’s high expectations (e.g., that patients assigned to a treatment group will succeed) lead to better performance and better outcomes.

Researcher bias is also sometimes called experimenter bias, but it applies to all types of investigative projects, rather than only to experimental designs .

For example, neutral phrasing helps keep the researcher’s own assumptions out of the data:

  • Good question: What are your views on alcohol consumption among your peers?
  • Bad question: Do you think it’s okay for young people to drink so much?

Response bias is a general term used to describe a number of different situations where respondents tend to provide inaccurate or false answers to self-report questions, such as those asked on surveys or in structured interviews .

This happens because when people are asked a question (e.g., during an interview ), they integrate multiple sources of information to generate their responses. Because of that, any aspect of a research study may potentially bias a respondent. Examples include the phrasing of questions in surveys, how participants perceive the researcher, or the desire of the participant to please the researcher and to provide socially desirable responses.

Response bias also occurs in experimental medical research. When outcomes are based on patients’ reports, a placebo effect can occur. Here, patients report an improvement despite having received a placebo, not an active medical treatment.

While interviewing a student, you ask them:

‘Do you think it’s okay to cheat on an exam?’

Common types of response bias are:

  • Acquiescence bias
  • Demand characteristics
  • Social desirability bias
  • Courtesy bias
  • Question-order bias
  • Extreme responding

Acquiescence bias is the tendency of respondents to agree with a statement when faced with binary response options like ‘agree/disagree’, ‘yes/no’, or ‘true/false’. Acquiescence is sometimes referred to as ‘yea-saying’.

This type of bias occurs either due to the participant’s personality (i.e., some people are more likely to agree with statements than disagree, regardless of their content) or because participants perceive the researcher as an expert and are more inclined to agree with the statements presented to them.

Q: Are you a social person?

  • Agree
  • Disagree

People who are inclined to agree with statements presented to them are at risk of selecting the first option, even if it isn’t fully supported by their lived experiences.

In order to control for acquiescence, consider tweaking your phrasing to encourage respondents to make a choice truly based on their preferences. Here’s an example:

Q: What would you prefer?

  • A quiet night in
  • A night out with friends

Demand characteristics are cues that could reveal the research agenda to participants, risking a change in their behaviours or views. Ensuring that participants are not aware of the research goals is the best way to avoid this type of bias.

For example, suppose you interview patients at several points after an operation intended to reduce pain. On each occasion, patients reported their pain as being less than prior to the operation. While at face value this seems to suggest that the operation does indeed lead to less pain, there is a demand characteristic at play. During the interviews, the researcher would unconsciously frown whenever patients reported more post-op pain. This increased the risk of patients figuring out that the researcher was hoping that the operation would have an advantageous effect.

Social desirability bias is the tendency of participants to give responses that they believe will be viewed favorably by the researcher or other participants. It often affects studies that focus on sensitive topics, such as alcohol consumption or sexual behaviour.

You are conducting face-to-face semi-structured interviews with a number of employees from different departments. When asked whether they would be interested in a smoking cessation program, the employees express widespread enthusiasm for the idea.

Note that while social desirability and demand characteristics may sound similar, there is a key difference between them. Social desirability is about conforming to social norms, while demand characteristics revolve around the purpose of the research.

Courtesy bias stems from a reluctance to give negative feedback, so as to be polite to the person asking the question. Small-group interviewing where participants relate in some way to each other (e.g., a student, a teacher, and a dean) is especially prone to this type of bias.

Question order bias

Question order bias occurs when the order in which interview questions are asked influences the way the respondent interprets and evaluates them. This occurs especially when previous questions provide context for subsequent questions.

When answering subsequent questions, respondents may orient their answers to previous questions (called a halo effect ), which can lead to systematic distortion of the responses.

Extreme responding is the tendency of a respondent to answer in the extreme, choosing the lowest or highest response available, even if that is not their true opinion. Extreme responding is common in surveys using Likert scales , and it distorts people’s true attitudes and opinions.

Disposition towards the survey can be a source of extreme responding, as well as cultural components. For example, people coming from collectivist cultures tend to exhibit extreme responses in terms of agreement, while respondents indifferent to the questions asked may exhibit extreme responses in terms of disagreement.

Selection bias is a general term describing situations where bias is introduced into the research from factors affecting the study population.

Common types of selection bias are:

  • Sampling (or ascertainment) bias
  • Attrition bias
  • Volunteer (or self-selection) bias
  • Survivorship bias
  • Nonresponse bias
  • Undercoverage bias

Sampling bias occurs when your sample (the individuals, groups, or data you obtain for your research) is selected in a way that is not representative of the population you are analyzing. Sampling bias threatens the external validity of your findings and influences the generalizability of your results.

The easiest way to prevent sampling bias is to use a probability sampling method . This way, each member of the population you are studying has an equal chance of being included in your sample.
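As a minimal sketch of this idea (standard-library Python, with a hypothetical employee list whose names and sizes are made up), simple random sampling draws from a frame covering the whole population, so every member has the same chance of selection:

```python
import random

random.seed(1)

# Hypothetical sampling frame: a list covering *every* member of the
# population of interest, not just the ones who are easy to reach.
population = [f"employee_{i:03d}" for i in range(500)]

# Simple random sampling without replacement: every employee has the
# same 50/500 = 10% chance of being selected.
sample = random.sample(population, k=50)

print(len(sample), "distinct employees sampled")
```

Note that the representativeness this buys you depends entirely on the sampling frame: if the list itself omits part of the population, you get undercoverage bias no matter how randomly you draw.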

Sampling bias is often referred to as ascertainment bias in the medical field.

Attrition bias occurs when participants who drop out of a study systematically differ from those who remain in the study. Attrition bias is especially problematic in randomized controlled trials for medical research because participants who do not like the experience or have unwanted side effects can drop out and affect your results.

You can minimize attrition bias by offering incentives for participants to complete the study (e.g., a gift card if they successfully attend every session). It’s also a good practice to recruit more participants than you need, or minimize the number of follow-up sessions or questions.

You provide a treatment group with weekly one-hour sessions over a two-month period, while a control group attends sessions on an unrelated topic. You complete five waves of data collection to compare outcomes: a pretest survey , three surveys during the program, and a posttest survey.

Volunteer bias (also called self-selection bias ) occurs when individuals who volunteer for a study have particular characteristics that matter for the purposes of the study.

Volunteer bias leads to biased data, as the respondents who choose to participate will not represent your entire target population. You can avoid this type of bias by using random assignment – i.e., placing participants in a control group or a treatment group after they have volunteered to participate in the study.
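As a sketch, random assignment after recruitment can be as simple as shuffling the volunteer list and splitting it (the volunteer names and group sizes here are hypothetical):

```python
import random

random.seed(7)

# Hypothetical pool of people who have already volunteered.
volunteers = [f"volunteer_{i:02d}" for i in range(40)]

# Shuffle, then split down the middle: group membership is now decided
# by chance, not by who signed up first or seemed most eager.
random.shuffle(volunteers)
treatment = volunteers[:20]
control = volunteers[20:]

print(len(treatment), "in treatment,", len(control), "in control")
```

This doesn’t remove volunteer bias itself (the whole pool is still self-selected), but it ensures that the comparison between treatment and control groups isn’t driven by who volunteered for what.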

Closely related to volunteer bias is nonresponse bias , which occurs when a research subject declines to participate in a particular study or drops out before the study’s completion.

Considering that the hospital is located in an affluent part of the city, volunteers are more likely to have a higher socioeconomic standing, higher education, and better nutrition than the general population.

Survivorship bias occurs when you do not evaluate your data set in its entirety: for example, by only analyzing the patients who survived a clinical trial.

This strongly increases the likelihood that you draw (incorrect) conclusions based upon those who have passed some sort of selection process – focusing on ‘survivors’ and forgetting those who went through a similar process and did not survive.

Note that ‘survival’ does not always mean that participants died! Rather, it signifies that only the participants who successfully completed the intervention (the ‘survivors’) remain in the data you analyse.

A classic example is citing famous entrepreneurs who dropped out of college to build successful companies. However, most college dropouts do not become billionaires. In fact, many more aspiring entrepreneurs dropped out of college to start companies and failed than succeeded.

Nonresponse bias occurs when those who do not respond to a survey or research project are different from those who do in ways that are critical to the goals of the research. This is very common in survey research, when participants are unable or unwilling to participate due to factors like lack of the necessary skills, lack of time, or guilt or shame related to the topic.

You can mitigate nonresponse bias by offering the survey in different formats (e.g., an online survey, but also a paper version sent via post), ensuring confidentiality, and sending participants reminders to complete the survey.

You notice that your surveys were conducted during business hours, when the working-age residents were less likely to be home.

Undercoverage bias occurs when you only sample from a subset of the population you are interested in. Online surveys can be particularly susceptible to undercoverage bias. Despite being more cost-effective than other methods, they can introduce undercoverage bias as a result of excluding people who do not use the internet.

While very difficult to eliminate entirely, research bias can be mitigated through proper study design and implementation. Here are some tips to keep in mind as you get started.

  • Clearly explain in your methodology section how your research design will help you meet the research objectives and why this is the most appropriate research design.
  • In quantitative studies , make sure that you use probability sampling to select the participants. If you’re running an experiment, make sure you use random assignment to assign your control and treatment groups.
  • Account for participants who withdraw or are lost to follow-up during the study. If they are withdrawing for a particular reason, it could bias your results. This applies especially to longer-term or longitudinal studies .
  • Use triangulation to enhance the validity and credibility of your findings.
  • Phrase your survey or interview questions in a neutral, non-judgemental tone. Be very careful that your questions do not steer your participants in any particular direction.
  • Consider using a reflexive journal. Here, you can log the details of each interview , paying special attention to any influence you may have had on participants. You can include these in your final analysis.

Bias in research affects the validity and reliability of your findings, leading to false conclusions and a misinterpretation of the truth. This can have serious implications in areas like medical research where, for example, a new form of treatment may be evaluated.

Observer bias occurs when the researcher’s assumptions, views, or preconceptions influence what they see and record in a study, while actor–observer bias refers to situations where respondents attribute internal factors (e.g., bad character) to justify other’s behaviour and external factors (difficult circumstances) to justify the same behaviour in themselves.

Response bias is a general term used to describe a number of different conditions or factors that cue respondents to provide inaccurate or false answers during surveys or interviews. These factors range from the interviewer’s perceived social position or appearance to the phrasing of questions in surveys.

Nonresponse bias occurs when the people who complete a survey are different from those who did not, in ways that are relevant to the research topic. Nonresponse can happen either because people are not willing or not able to participate.
