
Open Access

Peer-reviewed

Research Article

A decade of theory as reflected in Psychological Science (2009–2019)

  • Jonathon McPhetres, 
  • Nihan Albayrak-Aydemir, 
  • Ana Barbosa Mendes, 
  • Elvina C. Chow, 
  • Patricio Gonzalez-Marquez, 
  • Erin Loukras, 
  • Annika Maus, 
  • Aoife O’Mahony, 
  • Christina Pomareda, 
  • Conor J. R. Smithson, 
  • Kirill Volodko

* E-mail: [email protected]

Contributed equally to this work: Nihan Albayrak-Aydemir, Ana Barbosa Mendes, Patricio Gonzalez-Marquez, Annika Maus, Aoife O’Mahony, Conor J. R. Smithson, Kirill Volodko

Affiliations: Durham University, Durham, United Kingdom; London School of Economics and Political Science, London, United Kingdom; ITEC, Faculty of Psychology and Educational Sciences, KU Leuven, Leuven, Belgium; Pepperdine University, Malibu, California, United States of America; Quest University, Squamish, Canada; University of Cambridge, Cambridge, United Kingdom; Cardiff University, Cardiff, United Kingdom; University of Birmingham, Birmingham, United Kingdom; Radboud University, Nijmegen, Netherlands; University of Regina, Regina, Canada; Vanderbilt University, Nashville, Tennessee, United States of America

  • Published: March 5, 2021
  • https://doi.org/10.1371/journal.pone.0247986

The dominant belief is that science progresses by testing theories and moving towards theoretical consensus. While it is implicitly assumed that psychology operates in this manner, critical discussions claim that the field suffers from a lack of cumulative theory. To examine this paradox, we analysed research published in Psychological Science from 2009–2019 ( N = 2,225). We found mention of 359 theories in-text, most of which were referred to only once. Only 53.66% of all manuscripts included the word theory, and only 15.33% explicitly claimed to test predictions derived from theories. We interpret this to suggest that the majority of research published in this flagship journal is not driven by theory, nor can it be contributing to cumulative theory building. These data provide insight into the kinds of research psychologists are conducting and raise questions about the role of theory in the psychological sciences.

Citation: McPhetres J, Albayrak-Aydemir N, Barbosa Mendes A, Chow EC, Gonzalez-Marquez P, Loukras E, et al. (2021) A decade of theory as reflected in Psychological Science (2009–2019). PLoS ONE 16(3): e0247986. https://doi.org/10.1371/journal.pone.0247986

Editor: T. Alexander Dececchi, Mount Marty College, UNITED STATES

Received: September 11, 2020; Accepted: February 18, 2021; Published: March 5, 2021

Copyright: © 2021 McPhetres et al. This is an open access article distributed under the terms of the Creative Commons Attribution License , which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.

Data Availability: All relevant data are available from the Open Science Framework (OSF) database ( osf.io/hgn3a ). The OSF preregistration is also available ( osf.io/d6bcq/ ).

Funding: The author(s) received no specific funding for this work.

Competing interests: The authors have declared that no competing interests exist.

“The problem is almost anything passes for theory.” – Gigerenzer, 1998, p. 196 [1]

Introduction

Many have noted that psychology lacks the cumulative theory that characterizes other scientific fields [ 1 – 4 ]. So pressing has this deficit become in recent years that many scholars have called for a greater focus on theory development in the psychological sciences [ 5 – 11 ].

At the same time, it has been argued that there are perhaps too many theories to choose from [ 3 , 12 – 14 ]. One factor contributing to this dilemma is that theories are often vague and poorly specified [ 2 , 15 ], so a given theory is unable to adequately explain a range of phenomena without relying on rhetoric. Thus, psychology uses experimentation to tell a narrative rather than to test theoretical predictions [ 16 , 17 ]. From this perspective, psychology needs more exploratory and descriptive research before moving on to theory building and testing [ 18 – 20 ].

Despite these competing viewpoints, it is often claimed that psychological science follows a hypothetico-deductive model like most other scientific disciplines [ 21 ]. In this tradition, experiments exist to test predictions derived from theories. Specifically, researchers should be conducting strong tests of theories [ 22 – 24 ] because strong tests of theory are the reason some fields move forward faster than others [ 2 , 4 , 25 ]. That is, the goal scientists should be working towards is theoretical consensus [ 1 , 2 , 26 – 28 ]. At a glance, it would appear that most psychological research proceeds in this fashion, because papers often use theoretical terms in introduction sections, or name theories in the discussion section. However, no research has been undertaken to examine this assumption and what role theory actually plays in psychological research.

So, which is it? If there is a lack of theory , then most articles should be testing a-theoretical predictions or conducting descriptive and exploratory research. If there is too much theory , then almost every published manuscript should exist to test theoretically derived predictions.

To examine the role of theory in psychological research, we analysed articles published from 2009–2019 in the journal Psychological Science. We use these data to answer several specific questions. First, we are interested in distinguishing between specific and casual uses of theory, so we analyse how often theory-related words are used overall and how often a specific theory is named and/or tested. Additionally, given that preregistration can help prevent HARKing [ 29 ], we examine whether articles that name and/or test a theory are more likely to be preregistered. Next, it is possible that some subsets of psychological research are more or less reliant on theory. To examine this, we investigate whether studies that name and/or test a theory are more likely to generate a specific kind of data. Finally, to provide greater context for these analyses, we examined how many theories were mentioned over this time period and how many times each was mentioned.

Disclosures

All analyses conducted are reported and deviations are disclosed at the end of this section. Our sample size was pre-determined and was based on the entire corpus of published articles. Finally, because this research does not involve human subjects, ethics approval was not sought.

Materials and methods

We accessed all the articles published in Psychological Science from 2009–2019. We chose this journal because it is the flagship journal of the Association for Psychological Science and one of the top journals in the field that publishes a broad range of research from all areas of the discipline. Additionally, this journal explicitly states that theoretical significance is a requirement for publication [ 30 , 31 ].

As preregistered (https://osf.io/d6bcq/?view_only=af0461976df7454fbcf7ac7ff1500764), we excluded comments, letters, errata, editorials, and other articles that did not test original data, because they could not be coded or because, in some cases, they were simply replications or re-analyses of previously published articles. This resulted in 2,225 articles being included in the present analysis.

Many useful definitions and operationalisations of a scientific theory have been put forward [ 4 , 32 – 34 ] and we drew on these for the present work. The definition of a scientific theory for the purposes of this research is as follows:

A theory is a framework for understanding some aspect of the natural world. A theory often has a name—usually this includes the word theory , but may sometimes use another label (e.g., model, hypothesis). A theory can be specific or broad, but it should be able to make predictions or generally guide the interpretation of phenomena, and it must be distinguished from a single effect . Finally, a theory is not an untested prediction, a standard hypothesis, or a conjecture.

We used this definition in order to distinguish its use from colloquial and general uses of the word, not to evaluate the strength, viability, or suitability of a theory.

Text mining

Article PDFs were first mined for the frequency of the words theory , theories , and theoretical using the TM [ 35 ] and Quanteda [ 36 ] packages in R. Word frequencies were summed and percentages were calculated for each year and for the entire corpus. We did not search or code for the terms model or hypothesis because these are necessarily more general and have multiple different meanings, none of which overlap with theory (but see the Additional Considerations section for more on this).
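The document-frequency step can be sketched in a few lines. The study itself used the R packages TM and Quanteda; the Python sketch below is only an illustration of the calculation (the function name and the toy documents are our own, not from the study):

```python
import re
from collections import Counter

# Illustrative only: the study's mining was done in R with tm and quanteda.
# Counts how many documents contain each target word at least once, then
# converts those counts to percentages of the corpus.
TARGETS = ("theory", "theories", "theoretical")

def corpus_percentages(documents):
    hits = Counter()
    for text in documents:
        words = set(re.findall(r"[a-z]+", text.lower()))
        for target in TARGETS:
            if target in words:
                hits[target] += 1
    n = len(documents)
    return {t: round(100 * hits[t] / n, 2) for t in TARGETS}

docs = [
    "We test a prediction derived from construal level theory.",
    "Several theories make competing theoretical claims.",
    "This study is purely descriptive.",
]
print(corpus_percentages(docs))
```

Note that each word is counted at the document level (did the article use it at all?), which matches how the percentages in the Results are reported.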

After identifying the articles that used the words theory and theories , 10 trained coders further examined those articles. Instances of the word theoretical were not examined further because it is necessarily used generally (and because it was used less than, but often alongside, theory and theories ).

Each article was initially scored independently by two individual coders who were blind to the purpose of the study; Fleiss’ kappa is reported for this initial coding. Recommendations suggest that a kappa between .21–.40 indicates fair agreement, .41–.60 moderate agreement, .61–.80 substantial agreement, and .81–1.0 almost perfect agreement [ 37 ].
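For readers unfamiliar with the statistic, Fleiss’ kappa compares observed item-level agreement against the agreement expected by chance from the marginal category proportions. The following is a minimal Python sketch of the statistic itself, not the study’s own analysis code:

```python
def fleiss_kappa(table):
    """Fleiss' kappa for a table of shape (items x categories), where
    table[i][j] is the number of raters assigning item i to category j.
    Every row must sum to the same number of raters n."""
    N = len(table)          # number of items (articles)
    n = sum(table[0])       # raters per item
    k = len(table[0])       # number of categories
    # mean observed per-item agreement
    P_bar = sum(
        (sum(c * c for c in row) - n) / (n * (n - 1)) for row in table
    ) / N
    # chance agreement from marginal category proportions
    p = [sum(row[j] for row in table) / (N * n) for j in range(k)]
    P_e = sum(x * x for x in p)
    return (P_bar - P_e) / (1 - P_e)

# Two raters, three articles, two codes ("names a theory" vs. not):
# perfect agreement yields kappa = 1.
print(fleiss_kappa([[2, 0], [0, 2], [2, 0]]))
```

Complete disagreement on every item (e.g., `[[1, 1], [1, 1]]`) drives the statistic to -1, which is why values near zero indicate roughly chance-level coding.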

After the initial round of coding, two additional blind coders and the first author each independently reviewed a unique subset of disagreements to resolve ties. This means that the ratings analysed in the following section are only those on which two independent coders (or two out of three coders) fully agreed.

For each article, the following categories were coded:

Was a specific theory referred to by name?

For each article, the coder conducted a word-search for the string “theor” and examined the context of each instance of the string. We recorded whether each paper, at any point, referred to a specific theory or model by name. Instances of words in the reference section were neither counted nor coded further. General references to theory (e.g., psychological theory) or to classes or groups of theories (e.g., relationship theories) were not counted because these do not allow for specific interpretations or predictions. Similarly, instances where a theory, a class of theories, or an effect common across multiple studies was cited in-text along with multiple references but not named explicitly (for example, “cognitive theory (e.g., Author A, 1979; Author B, 1996; Author C & Author D, 2004) predicts”) were also not counted, because these examples refer to the author’s own interpretation of or assumptions about a theory rather than a specific prediction outlined by a set of theoretical constraints. Initial coder agreement was 78% (and significantly greater than chance, Fleiss’ kappa = .45, p < .001).
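The coders performed this search manually in the article PDFs, but the search-and-context step is easy to picture in code. The sketch below (our own illustration; the function, window size, and toy text are assumptions) pulls a snippet of surrounding text for each occurrence of “theor” while excluding the reference section, mirroring the coding rule above:

```python
import re

def theor_contexts(text, window=40):
    """Return a snippet of surrounding text for each case-insensitive
    occurrence of the string 'theor'. Illustrative sketch only; the
    study's coders worked by hand in the article PDFs."""
    # Drop everything after a "References" heading so that citation
    # entries are not counted, as in the coding rules above.
    body = re.split(r"\bReferences\b", text)[0]
    return [
        body[max(0, m.start() - window): m.end() + window]
        for m in re.finditer(r"theor", body, flags=re.IGNORECASE)
    ]

snips = theor_contexts("We draw on construal level theory. References Theory X (2001).")
print(len(snips))  # one match in the body; the reference entry is excluded
```

A coder would then read each returned snippet to judge whether a specific theory is actually named, which is the part that cannot be automated away.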

Did the article claim to test a prediction derived from a specific theory?

For each article, the coder examined the abstract, the section prior to introducing the first study, the results, and the beginning of the general discussion. We recorded whether the paper, at any point, explicitly claimed to test a prediction derived from a specific theory or model. As above, this needed to be made clear by the authors, to avoid categorising general predictions, auxiliary assumptions, indirect and verbal interpretations of multiple theories, models, or hypotheses derived from personal expectations as being theoretically derived. Initial coder agreement was 74% (and significantly greater than chance, Fleiss’ kappa = .24, p < .001).

What was the primary type of data generated by the study?

For each article, the coder examined the abstract, the section prior to introducing the first study, the results, and the beginning of the general discussion. The primary type of data used in the study was coded as either self-report/survey, physiological/biological, observational/behavioural (including reaction times), or other. In the case of multiple types of data across multi-study papers, we considered the abstract, the research question, the hypothesis, and the results in order to determine the type of data most relevant to the question. Initial coder agreement was 64% (and significantly greater than chance, Fleiss’ kappa = .42, p < .001).

Did the article include a preregistered study?

Preregistration is useful for restricting HARKing [ 29 ]. It is also useful for testing pre-specified and directional predictions, and hypotheses derived from competing theories. As such, we reasoned that preregistered studies may be more likely to test theoretical predictions.

We coded whether the article included a preregistered study. This was identified by looking for a badge as well as conducting a word search for the strings “prereg” and “pre-reg”. Initial coder agreement was 99% (and significantly greater than chance, Fleiss’ kappa = .97, p < .001).

Theory counting

The number of theories named directly in the text was recorded and summed by year to provide an overview of how frequently each theory was invoked. The goal was simply to create a comprehensive list of the names and number of theories that were referred to in the text at any point. To be as inclusive as possible, slightly different classification criteria were used (see S1 File).

Transparency statement

Our original preregistered analysis plan did not include plans for counting the total number of theories mentioned in text, nor for examining the frequency of the words model and hypothesis . Additionally, coding the instances of the word hypothesis was not preregistered, but was added after a round of reviews. Finally, for simplicity, we have focused on percentages out of the total articles coded (rather than presenting separate percentages for frequencies of theory and theories ); complete counts and percentages are presented in the S1 File .

Question 1: How often are theory-related words used?

To begin, the complete corpus of articles was analysed ( N = 2,225). Between the years 2009 and 2019, the word theory was used in 53.66% of articles, the word theories was used in 29.80% of articles, and the word theoretical was used in 32.76% of articles (note that these categories are non-exclusive). Total percentages and raw counts by year are presented in the S1 and S2 Tables in S1 File .

Question 2: How often was a theory named and/or tested?

The 1,605 articles including the word theory or theories were further coded to examine the context of the word. Of these articles, only 33.58% named a specific theory—that is, 66.42% used the word superfluously. Further, only 15.33% of the 1,605 articles explicitly claimed to test a prediction derived from a theory.

To put this differently, only 24.22% of all the articles published over the 11-year period ( N = 2,225) actually named a specific theory in the manuscript; the rest used only general phrases such as “psychological theory” or “many theories…” instead of naming and citing a specific theory. This means that the remaining papers either 1) did not derive predictions, operationalisations, analytic strategies, and interpretations of their data from theory, or 2) did not credit previous theory for this information.
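As a quick sanity check on these two figures (our own arithmetic, not part of the original analysis), the 33.58% of the 1,605 theory-mentioning articles and the 24.22% of the full corpus describe the same set of roughly 539 papers:

```python
# Consistency check on the reported percentages: 33.58% of the 1,605
# articles that used "theory"/"theories" should equal 24.22% of the
# full corpus of 2,225 articles.
total_articles = 2225
theory_word_articles = 1605

named = round(0.3358 * theory_word_articles)          # articles naming a theory
share_of_corpus = round(100 * named / total_articles, 2)
print(named, share_of_corpus)
```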

The words theories and theoretical showed similar patterns, but they were used less often than the word theory ; for simplicity, we present a detailed summary of these counts by year in the S2 Table in S1 File . The pattern of these effects by year is depicted in Fig 1 , below.

Fig 1. The percentage of articles that included the words theory/theories, mentioned a theory by name, and were preregistered was calculated out of the total number of articles published from 2009–2019 in Psychological Science, excluding comments, editorials, and errata ( N = 2,225). Note that, for simplicity, this figure counts all articles that received a preregistered badge (even if they were not coded in the present study).

https://doi.org/10.1371/journal.pone.0247986.g001

Question 3: Are articles that name a specific theory more likely to be preregistered?

Because there were no preregistered articles prior to 2014, we considered only articles published from 2014 onwards (N = 737) for this part of the analysis. Articles that named a specific theory were no more or less likely to be preregistered. Specifically, 11.11% of articles that explicitly named a specific theory were preregistered. In contrast, 11.31% of articles that did not name a theory were preregistered.

In contrast, articles that actually tested a specific theory were only slightly more likely to be preregistered. Of the articles that were preregistered, 15.66% stated that they tested a specific theory; of the articles that were not preregistered, 12.84% did so. See S3 and S4 Tables in S1 File for full counts by year.

Question 4: Are studies that name and/or test theories more likely to generate a specific kind of data?

Of the 1,605 articles coded over the 11-year period, the majority (55.26%) relied on self-report and survey data. Following this, 28.35% used observational data (including reaction times), 11.03% used biological or physiological data, and the remaining 5.30% used other types of data or methodologies (for example, computational modelling or a new method) to answer their primary research question.

However, it does not appear that studies using different types of data are any more or less prone to invoking theory. Of the studies that used self-report and survey data, 26.16% named a specific theory. Of the studies that used biological and physiological data, 19.77% named a specific theory. Of the studies that used observational or behavioural data, 22.20% named a specific theory. Of the studies that used other types of data, 25.88% named a specific theory. See S5 and S6 Tables in S1 File for complete counts.

Further, it does not appear that theoretically derived predictions are more conducive to any specific type of study. Only 17.36% of studies using self-report data, 11.86% of studies using biological/physiological data, 11.87% of studies using observational data, and 20% of studies using other types of data explicitly claimed to be testing theoretically derived predictions.

Question 5: How many theories were mentioned in this 11-year period?

We also counted the number of theories that were mentioned or referred to explicitly in each of the 2,225 manuscripts. As described in the S1 File , slightly different criteria were used for this task so as to be as inclusive as possible. A total of 359 theories were mentioned in text over the 11-year period. Most theories were mentioned in only a single paper ( mode = 1, median = 1, mean = 1.99). The full list of theories is presented in S7 Table in S1 File . For ease of reference, the top 10 most-mentioned theories are displayed below in Table 1 .


https://doi.org/10.1371/journal.pone.0247986.t001

Exploratory analysis: Did authors use the word hypothesis in place of theory ?

One concern may be that authors are misusing the word hypothesis to refer to these formal, higher-level theories . That is, that authors are using the word hypothesis when they should be using the word theory . To examine this possibility, we mined all 2,225 documents for the word hypothesis and examined the immediate context surrounding each instance.

If the authors were referring to a formally named, superordinate hypothesis derived from elsewhere (e.g., if it satisfied the criteria for a theory), it was coded as 1. It was coded as 0 if the authors were using hypothesis correctly. Specifically, it received a code of 0 if the authors were referring to their own hypothesis or expectations (e.g., our hypothesis, this hypothesis, etc.), if they were describing a statistical analysis (e.g., null hypothesis), or if they were describing an effect or pattern of results (e.g., the hypothesis that…). Instances in the references were not counted. Two independent coders rated each instance of the word. Initial coder agreement was 89.5% and significantly greater than chance (Fleiss’ kappa = .61, p < .001). As before, after initial coder agreement was analysed, a third coder resolved any disagreements, and the final ratings (consisting of scores on which at least two coders agreed) were analysed.

Of the 2,225 articles published over the 11 years, 62% used the word hypothesis (n = 1,386). Of those, 14.5% (n = 202) used hypothesis to refer to a larger, formal, or externally derived theory. Put differently, this constitutes 9% of the total corpus ( N = 2,225). Complete counts according to year are displayed in S8 Table in S1 File. Thus, it appears that this misuse of the word is not very common. However, even if we were to add this count to our previous analysis of theory, it would not change our overall interpretation: the majority of papers published in Psychological Science are neither discussing nor relying on theories in their research.

The Psychological Science website states that “The main criteria for publication in Psychological Science are general theoretical and empirical significance and methodological/statistical rigor” [ 30 , 31 ]. Yet, only 53.66% of articles published used the word theory , and even fewer named or claimed to test a specific theory. How can research have general theoretical significance if the word theory is not even present in the article?

A more pressing question, perhaps, is how can a field be contributing towards cumulative theoretical knowledge if the research is so fractionated? We identified 359 psychological theories that were referred to in-text (see S7 Table in S1 File for the complete list) and most of these were referred to only a single time. A recent review referred to this as theorrhea (a mania for new theory), and described it as a symptom stifling “the production of new research” [ 38 ]. Indeed, it’s hard to imagine that a cumulative science is one where each theory is examined so infrequently. One cannot help but wonder how the field can ever move towards theoretical consensus if everyone is studying something different—or, worse, studying the same thing with a different name.

These data provide insight into how psychologists are using psychological theories in their research. Many papers made no reference to a theory at all and most did not explicitly derive their predictions from a theory. It’s impossible to know why a given manuscript was written in a certain way, but we offer some possibilities to help understand why some authors neglected to even include the word theory in their report. One possibility is that the research is truly a-theoretical or descriptive. There is clear value in descriptive research—value that can ultimately amount to theoretical advancement [ 17 , 18 , 20 ] and it would be misguided to avoid interesting questions because they did not originate from theory.

It’s also possible that researchers are testing auxiliary assumptions [ 39 ] or their own interpretations (instead of the literal interpretations or predictions) of theories [ 40 ]. This strategy is quite common: authors describe certain effects or qualities of previous literature (e.g., the literature review) in their introduction to narrate how they developed a certain hypothesis or idea, then they state their own hypothesis. Such a strategy is fine, but certainly does not amount to a quantifiable prediction derived from a pre-specified theory. Further, given that psychological theories are almost always verbal [ 2 , 15 ], there may not even be literal interpretations or predictions to test.

An additional possibility is that researchers may be focusing on “effects” and paradigms rather than theories per se. Psychology is organized topically—development, cognition, social behaviour, personality—and these topics are essentially collections of effects (e.g., motivated reasoning, the Stroop effect, etc). Accordingly, researchers tend to study specific effects and examine whether they hold under different conditions. Additionally, a given study may be conducted because it is the logical follow-up from a previous study they conducted, not because the researchers are interested in examining whether a theory is true or not.

However, it is also important to consider the qualities of the research that did use the word theory, and why. Recall that only 33.58% of articles using the word theory or theories said anything substantial about a theory. For the remaining articles, it is possible that these words and phrases were injected post hoc to make the paper seem theoretically significant, because it is standard practice, or because it is a journal requirement. That is, this may be indicative of a specific type of HARKing: searching the literature for relevant hypotheses or predictions after data analysis, known as RHARKing [ 29 ]. For example, some researchers may have conducted a study for other reasons (e.g., personal interest), but then searched for a relevant theory to connect the results to after the fact. It is important to note that HARKing can be prevented by preregistration, but preregistration was used in only 11.11% of the papers that claimed to test a theory. Of course, it is impossible to know an author’s motivation in the absence of a preregistration, but the possibility remains quite likely given that between 27% and 58% of scientists admit to HARKing [ 29 ].

Finally, these data provide insight into the kind of research psychologists are conducting. The majority (55.26%) is conducted using self-report and survey data. Much less research is conducted using observational (28.35%) and biological or physiological (11.03%) data. While not as bleak as a previous report claiming that behavioural data is completely absent in the psychological sciences [ 41 ], this points to a limitation in the kinds of questions that can be answered. Of course, self-report data may be perfectly reasonable for some questions, but such questions are necessarily restricted to a narrower slice of human behaviour and cognition. Further, a high degree of reliance on a single method certainly contrasts with the large number of theories being referenced. It is worth considering how much explanatory power each of these theories has if most of them are discussed exclusively in the context of self-report and survey data.

Limitations and additional considerations

The present results describe only one journal: Psychological Science. However, we chose this journal because it is one of the top journals in the field, because it publishes research from all areas of psychology, and because it has explicit criteria for theoretical relevance. Thus, we expected that research published in this journal would be representative of some of the theoretically relevant research being conducted. So, we do not claim that the results described here statistically generalize to other journals, only that they describe the pattern of research in one of the top journals in psychology. One specific concern is that Psychological Science limits articles to 2,000 words, and this may have restricted the ability to describe and reference theories. This may be true, though it would seem that the body of knowledge a piece of research contributes towards would be one of the most important pieces of information to include in a report. That is, if the goal of that research were to contribute to cumulative knowledge, it does not require many words to refer to a body of theory by name.

An additional concern may be that, in some areas of psychology, “theories” may be referred to with a different name (e.g., model or hypothesis ). However, the terms model and hypothesis do not carry the formal weight that scientific theory does. In the hierarchy of science, theories are regarded as being the highest status a claim can achieve—that most articles use it casually and conflate it with other meanings is problematic for clear scientific communication. In contrast, model or hypothesis could be used to refer to several different things: if something is called model , then it’s not claiming to be a theory . Our additional analysis only identified a small minority of papers that used hypothesis in this fashion (9% of the total corpus). While this number is relatively small, this does highlight an additional issue: the lack of consistency with which theories are referred to and discussed. It is difficult and confusing to consistently add to a body of knowledge if different names and terms are used.

Another claim might be that theory should simply be implicit in any text; that it should permeate through one’s writing without many direct references to it. If we were to proceed in this fashion, how could one possibly contribute to cumulative theory? If theory need not be named, identified, or referred to specifically, how is a researcher to judge what body of research they are contributing to? How are they to interpret their findings? How is one even able to design an experiment to answer their research question without a theory? The argument has been made that researchers need theory to guide methods [ 5 , 6 , 9 ]—this is not possible without, at least, clearly naming and referencing theories.

A final limitation to note concerns the consistency of the coders. While the fair to moderate kappas obtained here may seem concerning at first, we believe they reflect the looseness and vagueness with which words like theory are used. Authors are often ambiguous and pad their introductions and discussions with references to models and other research; it is often not explicit whether a model is simply being mentioned or whether it is actually guiding the research. Further complicating things, references to theories are often inconsistent. Thus, it can be particularly difficult to determine whether an author actually derived their predictions from a specific theory or whether they are simply discussing it because they later noted the similarities. Such difficulties could have contributed to the lower initial agreement among coders. Therefore, along with noting that the kappas are lower than would be ideal, we also suggest that future researchers be conscious of their writing: it is very easy to be extremely explicit about where one’s predictions were derived from and why a test is being conducted. We believe this to be a necessary component of any research report.

Concluding remarks

Our interpretation of these data is that the published research we reviewed is simultaneously saturated and fractionated, and that theory is not guiding the majority of research published in Psychological Science despite this being the main criterion for acceptance. While many articles included the words theory and theories, these words are most often used casually and non-specifically. In a large subset of the remaining cases, the theoretical backbone is no more than a thin veneer of relevant rhetoric and citations.

These results highlight many questions for the field moving forward. For example, it’s often noted that psychology has made little progress towards developing clearly specified, cumulative theories [ 2 , 15 ] but what should that progress look like? What is the role of theory in psychological science? Additionally, while it is widely assumed that psychological research follows the hypothetico-deductive model, these data suggest this is not necessarily the case. There are many other ways to do research and not all of them involve theory testing. If the majority of research in a top journal is not explicitly testing predictions derived from theory, then perhaps it exists to explore and describe interesting effects. There is certainly nothing wrong with a descriptive approach, and this aim of psychology has been suggested for at least half a century [ 20 , 42 , 43 ].

To be clear, we are not suggesting that every article should include the word theory , nor that it should be a requirement for review. We are not even suggesting that research needs to be based in theory. Instead, we are simply pointing out the pattern of research that exists in one of the leading research journals with the hope that this inspires critical discussion around the process, aims, and motivation of psychological research. There are many ways to do research. If scientists want to work towards developing nomothetic explanations of human nature then, yes, theory can help. If scientists simply want to describe or explore something interesting, that’s fine too.

Supporting information

https://doi.org/10.1371/journal.pone.0247986.s001

  • 1. Kuhn T., The structure of scientific revolutions, 3rd ed. (The University of Chicago Press, 1996).
  • 21. Laudan L., Science and hypothesis: Historical essays on scientific methodology (D. Reidel Publishing Company, 1980).
  • 22. Bacon F., The new organon and related writings (Liberal Arts Press, 1960).
  • 24. Popper K., Objective knowledge: An evolutionary approach (Oxford University Press, 1979).
  • 28. Lykken D. T., "What's wrong with psychology anyway?" in Thinking clearly about psychology, Cicchetti D., Grove W., Eds. (University of Minnesota Press, 1991), pp. 3–39.
  • 30. APS, 2019 Submission guidelines (2018; accessed May 5, 2020).
  • 31. APS, 2020 Submission guidelines (2020; accessed May 5, 2020).
  • 32. Duhem P., The aim and structure of physical theory (Atheneum, 1962).
  • 33. National Academies of Science, Science, evolution, and creationism (National Academies Press, 2008).
  • 34. Planck M., Johnston W. H., The philosophy of physics (W.W. Norton & Co., 1936).
  • 35. Feinerer I., Hornik K., tm: Text mining package (2019).

The Research Hypothesis: Role and Construction

  • First Online: 01 January 2012

  • Phyllis G. Supino, EdD

A hypothesis is a logical construct, interposed between a problem and its solution, which represents a proposed answer to a research question. It gives direction to the investigator’s thinking about the problem and, therefore, facilitates a solution. There are three primary modes of inference by which hypotheses are developed: deduction (reasoning from general propositions to specific instances), induction (reasoning from specific instances to a general proposition), and abduction (formulation/acceptance on probation of a hypothesis to explain a surprising observation).

A research hypothesis should reflect an inference about variables; be stated as a grammatically complete, declarative sentence; be expressed simply and unambiguously; provide an adequate answer to the research problem; and be testable. Hypotheses can be classified as conceptual versus operational, single versus bi- or multivariable, causal or not causal, mechanistic versus nonmechanistic, and null or alternative. Hypotheses most commonly entail statements about “variables” which, in turn, can be classified according to their level of measurement (scaling characteristics) or according to their role in the hypothesis (independent, dependent, moderator, control, or intervening).

A hypothesis is rendered operational when its broadly (conceptually) stated variables are replaced by operational definitions of those variables. Hypotheses stated in this manner are called operational hypotheses, specific hypotheses, or predictions and facilitate testing.

Wrong hypotheses, rightly worked from, have produced more results than unguided observation

—Augustus De Morgan, 1872 [1]


De Morgan A, De Morgan S. A budget of paradoxes. London: Longmans Green; 1872.


Leedy PD. Practical research: planning and design. 2nd ed. New York: Macmillan; 1960.

Bernard C. Introduction to the study of experimental medicine. New York: Dover; 1957.

Erren TC. The quest for questions—on the logical force of science. Med Hypotheses. 2004;62:635–40.


Peirce CS. Collected papers of Charles Sanders Peirce, vol. 7. In: Hartshorne C, Weiss P, editors. Boston: The Belknap Press of Harvard University Press; 1966.

Aristotle. The complete works of Aristotle: the revised Oxford Translation. In: Barnes J, editor. vol. 2. Princeton/New Jersey: Princeton University Press; 1984.

Polit D, Beck CT. Conceptualizing a study to generate evidence for nursing. In: Polit D, Beck CT, editors. Nursing research: generating and assessing evidence for nursing practice. 8th ed. Philadelphia: Wolters Kluwer/Lippincott Williams and Wilkins; 2008. Chapter 4.

Jenicek M, Hitchcock DL. Evidence-based practice. Logic and critical thinking in medicine. Chicago: AMA Press; 2005.

Bacon F. The novum organon or a true guide to the interpretation of nature. A new translation by the Rev G.W. Kitchin. Oxford: The University Press; 1855.

Popper KR. Objective knowledge: an evolutionary approach (revised edition). New York: Oxford University Press; 1979.

Morgan AJ, Parker S. Translational mini-review series on vaccines: the Edward Jenner Museum and the history of vaccination. Clin Exp Immunol. 2007;147:389–94.


Pead PJ. Benjamin Jesty: new light in the dawn of vaccination. Lancet. 2003;362:2104–9.

Lee JA. The scientific endeavor: a primer on scientific principles and practice. San Francisco: Addison-Wesley Longman; 2000.

Allchin D. Lawson’s shoehorn, or should the philosophy of science be rated, ‘X’? Science and Education. 2003;12:315–29.


Lawson AE. What is the role of induction and deduction in reasoning and scientific inquiry? J Res Sci Teach. 2005;42:716–40.

Peirce CS. Collected papers of Charles Sanders Peirce, vol. 2. In: Hartshorne C, Weiss P, editors. Boston: The Belknap Press of Harvard University Press; 1965.

Bonfantini MA, Proni G. To guess or not to guess? In: Eco U, Sebeok T, editors. The sign of three: Dupin, Holmes, Peirce. Bloomington: Indiana University Press; 1983. Chapter 5.

Peirce CS. Collected papers of Charles Sanders Peirce, vol. 5. In: Hartshorne C, Weiss P, editors. Boston: The Belknap Press of Harvard University Press; 1965.

Flach PA, Kakas AC. Abductive and inductive reasoning: background issues. In: Flach PA, Kakas AC, editors. Abduction and induction: essays on their relation and integration. The Netherlands: Kluwer; 2000. Chapter 1.

Murray JF. Voltaire, Walpole and Pasteur: variations on the theme of discovery. Am J Respir Crit Care Med. 2005;172:423–6.

Danemark B, Ekstrom M, Jakobsen L, Karlsson JC. Methodological implications, generalization, scientific inference, models (Part II) In: explaining society. Critical realism in the social sciences. New York: Routledge; 2002.

Pasteur L. Inaugural lecture as professor and dean of the faculty of sciences, University of Lille, Douai, France, 7 Dec 1854. In: Peterson H, editor. A treasury of the world’s greatest speeches.

Swinburne R. Simplicity as evidence for truth. Milwaukee: Marquette University Press; 1997.

Sarkar S, editor. Logical empiricism at its peak: Schlick, Carnap and Neurath. New York: Garland; 1996.

Popper K. The logic of scientific discovery. New York: Basic Books; 1959 (first published 1934).

Caws P. The philosophy of science. Princeton: D. Van Nostrand Company; 1965.

Popper K. Conjectures and refutations. The growth of scientific knowledge. 4th ed. London: Routledge and Keegan Paul; 1972.

Feyerabend PK. Against method, outline of an anarchistic theory of knowledge. London, UK: Verso; 1978.

Smith PG. Popper: conjectures and refutations (Chapter IV). In: Theory and reality: an introduction to the philosophy of science. Chicago: University of Chicago Press; 2003.

Blystone RV, Blodgett K. WWW: the scientific method. CBE Life Sci Educ. 2006;5:7–11.

Kleinbaum DG, Kupper LL, Morgenstern H. Epidemiological research. Principles and quantitative methods. New York: Van Nostrand Reinhold; 1982.

Fortune AE, Reid WJ. Research in social work. 3rd ed. New York: Columbia University Press; 1999.

Kerlinger FN. Foundations of behavioral research. 1st ed. New York: Holt, Rinehart and Winston; 1970.

Hoskins CN, Mariano C. Research in nursing and health. Understanding and using quantitative and qualitative methods. New York: Springer; 2004.

Tuckman BW. Conducting educational research. New York: Harcourt, Brace, Jovanovich; 1972.

Wang C, Chiari PC, Weihrauch D, Krolikowski JG, Warltier DC, Kersten JR, Pratt Jr PF, Pagel PS. Gender-specificity of delayed preconditioning by isoflurane in rabbits: potential role of endothelial nitric oxide synthase. Anesth Analg. 2006;103:274–80.

Beyer ME, Slesak G, Nerz S, Kazmaier S, Hoffmeister HM. Effects of endothelin-1 and IRL 1620 on myocardial contractility and myocardial energy metabolism. J Cardiovasc Pharmacol. 1995;26(Suppl 3):S150–2.


Stone J, Sharpe M. Amnesia for childhood in patients with unexplained neurological symptoms. J Neurol Neurosurg Psychiatry. 2002;72:416–7.

Naughton BJ, Moran M, Ghaly Y, Michalakes C. Computer tomography scanning and delirium in elder patients. Acad Emerg Med. 1997;4:1107–10.

Easterbrook PJ, Berlin JA, Gopalan R, Matthews DR. Publication bias in clinical research. Lancet. 1991;337:867–72.

Stern JM, Simes RJ. Publication bias: evidence of delayed publication in a cohort study of clinical research projects. BMJ. 1997;315:640–5.

Stevens SS. On the theory of scales and measurement. Science. 1946;103:677–80.

Knapp TR. Treating ordinal scales as interval scales: an attempt to resolve the controversy. Nurs Res. 1990;39:121–3.

The Cochrane Collaboration. Open learning material. www.cochrane-net.org/openlearning/html/mod14-3.htm. Accessed 12 Oct 2009.

MacCorquodale K, Meehl PE. On a distinction between hypothetical constructs and intervening variables. Psychol Rev. 1948;55:95–107.

Baron RM, Kenny DA. The moderator-mediator variable distinction in social psychological research: conceptual, strategic and statistical considerations. J Pers Soc Psychol. 1986;51:1173–82.

Williamson GM, Schultz R. Activity restriction mediates the association between pain and depressed affect: a study of younger and older adult cancer patients. Psychol Aging. 1995;10:369–78.

Song M, Lee EO. Development of a functional capacity model for the elderly. Res Nurs Health. 1998;21:189–98.

MacKinnon DP. Introduction to statistical mediation analysis. New York: Routledge; 2008.


Author information

Authors and affiliations

Department of Medicine, College of Medicine, SUNY Downstate Medical Center, 450 Clarkson Avenue, Box 1199, Brooklyn, NY 11203, USA

Phyllis G. Supino, EdD

Corresponding author

Correspondence to Phyllis G. Supino, EdD.

Editor information

Editors and affiliations

Cardiovascular Medicine, SUNY Downstate Medical Center, 450 Clarkson Avenue, Box 1199, Brooklyn, NY 11203, USA

Phyllis G. Supino

Cardiovascular Medicine, SUNY Downstate Medical Center, 450 Clarkson Avenue, Brooklyn, NY 11203, USA

Jeffrey S. Borer


Copyright information

© 2012 Springer Science+Business Media, LLC

About this chapter

Supino, P.G. (2012). The Research Hypothesis: Role and Construction. In: Supino, P., Borer, J. (eds) Principles of Research Methodology. Springer, New York, NY. https://doi.org/10.1007/978-1-4614-3360-6_3


DOI: https://doi.org/10.1007/978-1-4614-3360-6_3

Published: 18 April 2012

Publisher: Springer, New York, NY

Print ISBN: 978-1-4614-3359-0

Online ISBN: 978-1-4614-3360-6


Understanding Science

How science REALLY works...

  • The process of science works at many levels — from that of a single study to that of a broad investigation spanning many decades and encompassing hundreds of individual studies.
  • Hypotheses are proposed explanations for a narrow set of phenomena. They are not guesses.
  • Theories are powerful explanations for a wide range of phenomena. Accepted theories are not tenuous.
  • Some theories are so broad and powerful that they frame whole disciplines of study and encompass many smaller hypotheses and theories.

Misconception: Hypotheses are just guesses.

Correction: Hypotheses are reasoned and informed explanations.

Misconception: Theories are just hunches.

Correction: In science, theories are broad explanations. To be accepted, they must be supported by many lines of evidence.

Misconception: If evidence supports a hypothesis, it is upgraded to a theory. If the theory then garners even more support, it may be upgraded to a law.

Correction: Hypotheses cannot become theories and theories cannot become laws. Hypotheses, theories, and laws are all scientific explanations but they differ in breadth, not in level of support. Theories apply to a broader range of phenomena than do hypotheses. The term law is sometimes used to refer to an idea about how observable phenomena are related.

Science at multiple levels

The process of science works at multiple levels — from the small scale (e.g., a comparison of the genes of three closely related North American butterfly species) to the large scale (e.g., a half-century-long series of investigations of the idea that geographic isolation of a population can trigger speciation). The process of science works in much the same way whether embodied by an individual scientist tackling a specific problem, question, or hypothesis over the course of a few months or years, or by a community of scientists coming to agree on broad ideas over the course of decades and hundreds of individual experiments and studies. Similarly, scientific explanations come at different levels:

Hypotheses are proposed explanations for a fairly narrow set of phenomena. These reasoned explanations are not guesses — of the wild or educated variety. When scientists formulate new hypotheses, they are usually based on prior experience, scientific background knowledge, preliminary observations, and logic. For example, scientists observed that alpine butterflies exhibit characteristics intermediate between two species that live at lower elevations. Based on these observations and their understanding of speciation, the scientists hypothesized that this species of alpine butterfly evolved as the result of hybridization between the two other species living at lower elevations.

Theories, on the other hand, are broad explanations for a wide range of phenomena. They are concise (i.e., generally don’t have a long list of exceptions and special rules), coherent, systematic, predictive, and broadly applicable. In fact, theories often integrate and generalize many hypotheses. For example, the theory of natural selection broadly applies to all populations with some form of inheritance, variation, and differential reproductive success — whether that population is composed of alpine butterflies, fruit flies on a tropical island, a new form of life discovered on Mars, or even bits in a computer’s memory. This theory helps us understand a wide range of observations (including the rise of antibiotic-resistant bacteria and the physical match between pollinators and their preferred flowers), makes predictions in new situations (e.g., that treating AIDS patients with a cocktail of medications should slow the evolution of the virus), and has proven itself time and time again in thousands of experiments and observational studies.

"JUST" A THEORY?

Occasionally, scientific ideas (such as biological evolution) are written off with the putdown “it’s just a theory.” This slur is misleading and conflates two separate meanings of the word theory: In common usage, the word theory means just a hunch, but in science, a theory is a powerful explanation for a broad set of observations. To be accepted by the scientific community, a theory (in the scientific sense of the word) must be strongly supported by many different lines of evidence. So biological evolution is a theory: It is a well-supported, widely accepted, and powerful explanation for the diversity of life on Earth. But it is not “just” a theory.

Words with both technical and everyday meanings often cause confusion. Even scientists sometimes use the word theory when they really mean hypothesis or even just a hunch. Many technical fields have similar vocabulary problems — for example, both the terms work in physics and ego in psychology have specific meanings in their technical fields that differ from their common uses. However, context and a little background knowledge are usually sufficient to figure out which meaning is intended.

Over-arching theories

Some theories, which we’ll call over-arching theories, are particularly important and reflect broad understandings of a particular part of the natural world. Evolutionary theory, atomic theory, gravity, quantum theory, and plate tectonics are examples of this sort of over-arching theory. These theories have been broadly supported by multiple lines of evidence and help frame our understanding of the world around us.

Such over-arching theories encompass many subordinate theories and hypotheses, and consequently, changes to those smaller theories and hypotheses reflect a refinement (not an overthrow) of the over-arching theory. For example, when punctuated equilibrium was proposed as a mode of evolutionary change and evidence was found supporting the idea in some situations, it represented an elaborated reinforcement of evolutionary theory, not a refutation of it. Over-arching theories are so important because they help scientists choose their methods of study and mode of reasoning, connect important phenomena in new ways, and open new areas of study. For example, evolutionary theory highlighted an entirely new set of questions for exploration: How did this characteristic evolve? How are these species related to one another? How has life changed over time?

A MODEL EXPLANATION

Hypotheses and theories can be complex. For example, a particular hypothesis about meteorological interactions or nuclear reactions might be so complex that it is best described in the form of a computer program or a long mathematical equation. In such cases, the hypothesis or theory may be called a model .


  • Teaching resources
  • You can help students understand the differences between observation and inference (e.g., between observations and the hypothesis supported by them) by regularly asking students to analyze lecture material, text, or video. Students should try to figure out which aspects of the content were directly observed and which aspects were generated by scientists trying to figure out what their observations meant.
  • Forming hypotheses — scientific explanations — can be difficult for students. It is often easier for students to generate an expectation (what they think will happen or what they expect to observe) based on prior experience than to formulate a potential explanation for that phenomenon. You can help students go beyond expectations to generate real, explanatory hypotheses by providing sentence stems for them to fill in: “I expect to observe A because B.” Once students have filled in this sentence you can explain that B is a hypothesis and A is the expectation generated by that hypothesis.



Biology library

The scientific method


Introduction

  • Make an observation.
  • Ask a question.
  • Form a hypothesis, or testable explanation.
  • Make a prediction based on the hypothesis.
  • Test the prediction.
  • Iterate: use the results to make new hypotheses or predictions.
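The steps above can be sketched as a simple loop. This illustration is not part of the original lesson; the `investigate` helper and the toy set of observed "facts" are hypothetical, modelled loosely on the toaster example below.

```python
def investigate(hypotheses, run_test):
    """Try candidate hypotheses in order; return the first whose prediction
    is borne out by a test, or None if every candidate fails."""
    for hypothesis, prediction in hypotheses:
        if run_test(prediction):
            return hypothesis  # supported (not proven) -- keep testing it
    return None  # all candidates failed: form new hypotheses and iterate

# Toy version of the toaster example: (hypothesis, prediction it generates)
hypotheses = [
    ("the outlet is broken", "toaster toasts when plugged into another outlet"),
    ("a wire in the toaster is broken", "other appliances work in the same outlet"),
]
# Pretend these are the outcomes of actually running each test.
observed = {
    "toaster toasts when plugged into another outlet": False,
    "other appliances work in the same outlet": True,
}
print(investigate(hypotheses, observed.get))  # → a wire in the toaster is broken
```

Note that a supported hypothesis is only "likely correct": real iteration would keep generating new predictions from it rather than stopping at the first pass.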

Scientific method example: Failure to toast

1. Make an observation.

  • Observation: the toaster won't toast.

2. Ask a question.

  • Question: Why won't my toaster toast?

3. Propose a hypothesis.

  • Hypothesis: Maybe the outlet is broken.

4. Make predictions.

  • Prediction: If I plug the toaster into a different outlet, then it will toast the bread.

5. Test the predictions.

  • Test of prediction: Plug the toaster into a different outlet and try again.
  • If the toaster does toast, then the hypothesis is supported—likely correct.
  • If the toaster doesn't toast, then the hypothesis is not supported—likely wrong.

6. Iterate.

  • Iteration time!
  • If the hypothesis was supported, we might do additional tests to confirm it, or revise it to be more specific. For instance, we might investigate why the outlet is broken.
  • If the hypothesis was not supported, we would come up with a new hypothesis. For instance, the next hypothesis might be that there's a broken wire in the toaster.



Physics LibreTexts

1.2: Theories, Hypotheses and Models


For the purpose of this textbook (and science in general), we introduce a distinction in what we mean by “theory”, “hypothesis”, and by “model”. We will consider a “theory” to be a set of statements (or an equation) that gives us a broad description, applicable to several phenomena and that allows us to make verifiable predictions. For example, Chloë’s Theory ( \(t \propto \sqrt{h}\) ) can be considered a theory. Specifically, we do not use the word theory in the context of “I have a theory about this...”

A “hypothesis” is a consequence of the theory that one can test. From Chloë’s Theory, we have the hypothesis that an object will take \(\sqrt{2}\) times longer to fall from \(2\:\text{m}\) than from \(1\:\text{m}\). We can formulate the hypothesis based on the theory and then test that hypothesis. If the hypothesis is found to be invalidated by experiment, then either the theory is incorrect, or the hypothesis is not consistent with the theory.

A “model” is a situation-specific description of a phenomenon, based on a theory, that allows us to make a specific prediction. Using the example from the previous section, our theory would be that the fall time of an object is proportional to the square root of the drop height, and a model would be applying that theory to describe a tennis ball falling by \(4.2\) m. From the model, we can form a testable hypothesis of how long it will take the tennis ball to fall that distance. It is important to note that a model will almost always be an approximation of the theory applied to describe a particular phenomenon. For example, if Chloë’s Theory is only valid in vacuum, and we use it to model the time that it takes for an object to fall at the surface of the Earth, we may find that our model disagrees with experiment. We would not necessarily conclude that the theory is invalidated if our model did not adequately apply the theory to describe the phenomenon (e.g., by forgetting to include the effect of air drag).

This textbook will introduce the theories from Classical Physics, which were mostly established and tested between the seventeenth and nineteenth centuries. We will take it as given that readers of this textbook are not likely to perform experiments that challenge those well-established theories. The main challenge will be, given a theory, to define a model that describes a particular situation, and then to test that model. This introductory physics course is thus focused on thinking of “doing physics” as the task of correctly modeling a situation.
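The theory/model/hypothesis distinction can be made concrete numerically. The sketch below is illustrative and not part of the textbook: it assumes the standard vacuum free-fall relation \(t = \sqrt{2h/g}\), which instantiates the proportionality \(t \propto \sqrt{h}\), and checks the hypothesis that the fall time from 2 m is \(\sqrt{2}\) times the fall time from 1 m.

```python
import math

G = 9.81  # m/s^2, standard surface gravity; model ignores air drag

def fall_time(h):
    """Model prediction: time (s) to fall height h (m) from rest, t = sqrt(2h/g)."""
    return math.sqrt(2 * h / G)

# Hypothesis derived from the theory t ∝ sqrt(h): doubling the drop height
# should multiply the fall time by sqrt(2).
ratio = fall_time(2.0) / fall_time(1.0)
print(ratio, math.sqrt(2))  # the two values should agree
```

An experiment with a real tennis ball could then test this prediction; a discrepancy would implicate either the theory or the model's neglect of drag, as the passage above explains.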

Emma's Thoughts

What’s the difference between a model and a theory?

“Model” and “Theory” are sometimes used interchangeably among scientists. In physics, it is particularly important to distinguish between these two terms. A model provides an immediate understanding of something based on a theory.

For example, if you would like to model the launch of your toy rocket into space, you might run a computer simulation of the launch based on various theories of propulsion that you have learned. In this case, the model is the computer simulation, which describes what will happen to the rocket. This model depends on various theories that have been extensively tested such as Newton’s Laws of motion, Fluid dynamics, etc.

  • “Model”: Your homemade rocket computer simulation
  • “Theory”: Newton’s Laws of motion, Fluid dynamics

With this analogy, we can quickly see that the “model” and “theory” are not interchangeable. If they were, we would be saying that all of Newton’s Laws of Motion depend on the success of your piddly toy rocket computer simulation!

Exercise \(\PageIndex{2}\)

Models cannot be scientifically tested, only theories can be tested.


A decade of theory as reflected in Psychological Science (2009–2019)

Associated data

All relevant data are available from the Open Science Framework (OSF) database ( osf.io/hgn3a ). The OSF preregistration is also available ( osf.io/d6bcq/ ).

“The problem is almost anything passes for theory.” —Gigerenzer, 1998, pg. 196 (1).

Introduction

Many have noted that psychology lacks the cumulative theory that characterizes other scientific fields [ 1 – 4 ]. So pressing has this deficit become in recent years that many scholars have called for a greater focus on theory development in the psychological sciences [ 5 – 11 ].

At the same time, it has been argued that there are perhaps too many theories to choose from [ 3 , 12 – 14 ]. One factor contributing to this dilemma is that theories are often vague and poorly specified [ 2 , 15 ], so a given theory is unable to adequately explain a range of phenomena without relying on rhetoric. Thus, psychology uses experimentation to tell a narrative rather than to test theoretical predictions [ 16 , 17 ]. From this perspective, psychology needs more exploratory and descriptive research before moving on to theory building and testing [ 18 – 20 ].

Despite these competing viewpoints, it is often claimed that psychological science follows a hypothetico-deductive model like most other scientific disciplines [ 21 ]. In this tradition, experiments exist to test predictions derived from theories. Specifically, researchers should be conducting strong tests of theories [ 22 – 24 ] because strong tests of theory are the reason some fields move forward faster than others [ 2 , 4 , 25 ]. That is, the goal scientists should be working towards is theoretical consensus [ 1 , 2 , 26 – 28 ]. At a glance, it would appear that most psychological research proceeds in this fashion, because papers often use theoretical terms in introduction sections, or name theories in the discussion section. However, no research has been undertaken to examine this assumption and what role theory actually plays in psychological research.

So, which is it? If there is a lack of theory , then most articles should be testing a-theoretical predictions or conducting descriptive and exploratory research. If there is too much theory , then almost every published manuscript should exist to test theoretically derived predictions.

To examine the role of theory in psychological research, we analysed articles published from 2009–2019 in the journal Psychological Science . We use these data to answer some specific questions. First, we are interested in distinguishing between specific and casual uses of theory. So, we analyse how often theory-related words are used overall and how often a specific theory is named and/or tested. Additionally, given that preregistration can help prevent HARKing (hypothesizing after the results are known) [29], we examine whether articles that name and/or test a theory are more likely to be preregistered. Next, it is possible that some subsets of psychological research might be more or less reliant on theory. To examine this, we investigate whether studies that name and/or test a theory are more likely to generate a specific kind of data. Finally, to provide greater context for these analyses, we examined how many theories were mentioned over this time period and how many times each was mentioned.
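To illustrate the kind of word-frequency analysis described here: the authors used the R tm package [35]; the Python sketch below is ours, and its regular expression and three-sentence corpus are purely illustrative, not the study's actual pipeline or data.

```python
import re
from collections import Counter

# Match "theory", "theories", or "theoretical" as whole words, case-insensitively.
THEORY_WORDS = re.compile(r"\btheor(?:y|ies|etical)\b", re.IGNORECASE)

def theory_mentions(articles):
    """Count how many articles mention a theory-related word, and total hits."""
    uses = Counter()
    for text in articles:
        hits = THEORY_WORDS.findall(text)
        if hits:
            uses["articles_with_mention"] += 1
        uses["total_mentions"] += len(hits)
    return uses

# Invented three-"article" corpus for demonstration
corpus = [
    "We test predictions derived from construal level theory.",
    "An exploratory, descriptive study of memory for faces.",
    "Several theories are consistent with these theoretical claims.",
]
print(theory_mentions(corpus))
```

As the article's distinction between specific and casual uses implies, such counts are only a first pass: deciding whether a matched word reflects an actual theory test still requires human coding.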

Disclosures

All analyses conducted are reported and deviations are disclosed at the end of this section. Our sample size was pre-determined and was based on the entire corpus of published articles. Finally, because this research does not involve human subjects, ethics approval was not sought.

Materials and methods

We accessed all the articles published in Psychological Science from 2009–2019. We chose this journal because it is the flagship journal of the Association for Psychological Science and one of the top journals in the field that publishes a broad range of research from all areas of the discipline. Additionally, this journal explicitly states that theoretical significance is a requirement for publication [ 30 , 31 ].

As preregistered (https://osf.io/d6bcq/?view_only=af0461976df7454fbcf7ac7ff1500764), we excluded comments, letters, errata, editorials, and other articles which did not test original data because they could not be coded or because, in some cases, they were simply replications or re-analyses of previously published articles. This resulted in 2,225 articles being included in the present analysis.

Many useful definitions and operationalisations of a scientific theory have been put forward [ 4 , 32 – 34 ] and we drew on these for the present work. The definition of a scientific theory for the purposes of this research is as follows:

A theory is a framework for understanding some aspect of the natural world. A theory often has a name—usually this includes the word theory , but may sometimes use another label (e.g., model, hypothesis). A theory can be specific or broad, but it should be able to make predictions or generally guide the interpretation of phenomena, and it must be distinguished from a single effect . Finally, a theory is not an untested prediction, a standard hypothesis, or a conjecture.

We used this definition in order to distinguish its use from colloquial and general uses of the word, not to evaluate the strength, viability, or suitability of a theory.

Text mining

Article PDFs were first mined for the frequency of the words theory , theories , and theoretical using the tm [ 35 ] and quanteda [ 36 ] packages in R. Word frequencies were summed and percentages were calculated for each year and for the entire corpus. We did not search or code for the terms model or hypothesis because these are necessarily more general and have multiple different meanings, none of which overlap with theory (but see the Additional Considerations section for more on this).
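The analysis itself was run in R with tm and quanteda on the full PDF corpus. Purely as a hedged sketch of the same logic, with invented placeholder texts standing in for the mined articles, the word-frequency step amounts to the following in Python:

```python
import re
from collections import Counter

# Invented placeholder texts; the actual corpus was 2,225 article PDFs.
articles = {
    "a1": "Prospect theory predicts loss aversion; theoretical work agrees.",
    "a2": "We describe a novel effect with no named framework.",
}

TARGETS = ("theory", "theories", "theoretical")

def target_counts(text):
    """Lowercase, tokenise on letter runs, and count the target words."""
    tokens = re.findall(r"[a-z]+", text.lower())
    return Counter(t for t in tokens if t in TARGETS)

# Non-exclusive percentage of articles containing each word
# (an article can count towards several targets at once).
containing = {w: sum(1 for t in articles.values() if target_counts(t)[w])
              for w in TARGETS}
pct = {w: 100 * containing[w] / len(articles) for w in TARGETS}
print(pct)  # -> {'theory': 50.0, 'theories': 0.0, 'theoretical': 50.0}
```

Note that, as in the article, the categories are non-exclusive, so the percentages need not sum to 100.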

After identifying the articles that used the words theory and theories , 10 trained coders further examined those articles. Instances of the word theoretical were not examined further because it is necessarily used generally (and because it was used less than, but often alongside, theory and theories ).

Each article was initially scored independently by two individual coders who were blind to the purpose of the study; Fleiss' kappa is reported for this initial coding. Recommendations suggest that a kappa between .21–.40 indicates fair agreement, .41–.60 indicates moderate agreement, .61–.80 indicates substantial agreement, and .81–1.0 indicates almost perfect agreement [ 37 ].
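For readers unfamiliar with the statistic, here is a minimal sketch of how Fleiss' kappa is computed (this is not the authors' code, and the two-coder tables in the examples are invented):

```python
def fleiss_kappa(counts):
    """Fleiss' kappa from an items-by-categories table, where
    counts[i][j] is the number of raters who assigned item i to
    category j. Assumes every item was rated by the same number
    of raters (here, two coders per article)."""
    n_items = len(counts)
    n_raters = sum(counts[0])
    # Overall proportion of assignments falling in each category.
    totals = [sum(col) for col in zip(*counts)]
    p = [t / (n_items * n_raters) for t in totals]
    p_e = sum(pj ** 2 for pj in p)  # agreement expected by chance
    # Observed agreement, averaged over items.
    p_bar = sum(
        (sum(c * c for c in row) - n_raters) / (n_raters * (n_raters - 1))
        for row in counts
    ) / n_items
    return (p_bar - p_e) / (1 - p_e)

# Two coders, two categories (e.g., "names a theory" vs. "does not"):
# perfect agreement on a balanced set of items yields kappa = 1.
print(fleiss_kappa([[2, 0], [0, 2]]))  # -> 1.0
```

Complete disagreement on every item (each row `[1, 1]`) would instead yield kappa = -1, so the .24–.97 kappas reported below sit on the usual -1 to 1 scale.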

After the initial round of coding, two additional blind coders and the first author each independently reviewed a unique subset of disagreements to resolve ties. As a result, the ratings analysed in the following section are only those on which two independent coders (or two out of three coders) fully agreed.

For each article, the following categories were coded:

Was a specific theory referred to by name?

For each article, the coder conducted a word-search for the string “theor” and examined the context of each instance of the string. We recorded whether each paper, at any point, referred to a specific theory or model by name. Instances of words in the reference section were not counted nor coded further. General references to theory (e.g., psychological theory) or to classes or groups of theories (e.g. relationship theories) were not counted because these do not allow for specific interpretations or predictions. Similarly, instances where a theory, a class of theories, or an effect common across multiple studies was cited in-text along with multiple references but not named explicitly—for example, “cognitive theory (e.g. Author A, 1979; Author B, 1996; Author C & Author D, 2004) predicts”—were also not counted because these examples refer to the author’s own interpretation of or assumptions about a theory rather than a specific prediction outlined by a set of theoretical constraints. Initial coder agreement was 78% (and significantly greater than chance, Fleiss’ kappa = .45, p < .001).

Did the article claim to test a prediction derived from a specific theory?

For each article, the coder examined the abstract, the section prior to introducing the first study, the results, and the beginning of the general discussion. We recorded whether the paper, at any point, explicitly claimed to test a prediction derived from a specific theory or model. As above, this needed to be made clear by the authors to avoid categorising general predictions, auxiliary assumptions, or indirect and verbal interpretations of multiple theories, models, or hypotheses derived from personal expectations as being theoretically derived. Initial coder agreement was 74% (and significantly greater than chance, Fleiss' kappa = .24, p < .001).

What was the primary type of data generated by the study?

For each article, the coder examined the abstract, the section prior to introducing the first study, the results, and the beginning of the general discussion. The primary type of data used in the study was coded as either self-report/survey, physiological/biological, observational/behavioural (including reaction times), or other. In the case of multiple types of data across multi-study papers, we considered the abstract, the research question, the hypothesis, and the results in order to determine the type of data most relevant to the question. Initial coder agreement was 64% (and significantly greater than chance, Fleiss’ kappa = .42, p < .001).

Did the article include a preregistered study?

Preregistration is useful for restricting HARKing [ 29 ]. It is also useful for testing pre-specified and directional predictions, and hypotheses derived from competing theories. As such, we reasoned that preregistered studies may be more likely to test theoretical predictions.

We coded whether the article included a preregistered study. This was identified by looking for a badge as well as conducting a word search for the strings “prereg” and “pre-reg”. Initial coder agreement was 99% (and significantly greater than chance, Fleiss’ kappa = .97, p < .001).

Theory counting

The number of theories named directly in the text was recorded and summed by year to provide an overview of how frequently each theory was invoked. The goal was simply to create a comprehensive list of the names and number of theories that were referred to in the text at any point. To be as inclusive as possible, slightly different classification criteria were used (see S1 File ).
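A hedged sketch of this tallying step (the paper-level mention lists below are invented; the actual classification criteria are given in S1 File):

```python
from collections import Counter
from statistics import mean, median, mode

# Invented examples; each inner list holds the theories named in one paper.
papers = [
    ["prospect theory"],
    ["prospect theory", "construal level theory"],
    ["dual process theory"],
]

# Count each theory at most once per paper in which it appears.
tally = Counter(theory for paper in papers for theory in set(paper))

counts = list(tally.values())
summary = {"mode": mode(counts),
           "median": median(counts),
           "mean": round(mean(counts), 2)}
print(tally.most_common(1), summary)
```

Even in this toy tally, most theories appear in only one paper, which is the pattern reported for the real corpus (mode = 1, median = 1).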

Transparency statement

Our original preregistered analysis plan did not include plans for counting the total number of theories mentioned in text, nor for examining the frequency of the words model and hypothesis . Additionally, coding the instances of the word hypothesis was not preregistered, but was added after a round of reviews. Finally, for simplicity, we have focused on percentages out of the total articles coded (rather than presenting separate percentages for frequencies of theory and theories ); complete counts and percentages are presented in the S1 File .

Question 1: How often are theory-related words used?

To begin, the complete corpus of articles was analysed ( N = 2,225). Between the years 2009 and 2019, the word theory was used in 53.66% of articles, the word theories was used in 29.80% of articles, and the word theoretical was used in 32.76% of articles (note that these categories are non-exclusive). Total percentages and raw counts by year are presented in the S1 and S2 Tables in S1 File .

Question 2: How often was a theory named and/or tested?

The 1,605 articles including the word theory or theories were further coded to examine the context of the word. Of these articles, only 33.58% named a specific theory—that is, 66.42% used the word superfluously. Further, only 15.33% of the 1,605 articles explicitly claimed to test a prediction derived from a theory.

To put this differently, only 24.22% of all the articles published over the 11-year period ( N = 2,225) actually named a specific theory in the manuscript; the rest used the word only generally (e.g., "psychological theory" or "many theories…") without naming and citing a specific theory. This means that the remainder of those papers either 1) did not derive predictions, operationalisations, analytic strategies, and interpretations of their data from theory, or 2) did not credit previous theory for this information.

The words theories and theoretical showed similar patterns, but they were used less often than the word theory ; for simplicity, we present a detailed summary of these counts by year in the S2 Table in S1 File . The pattern of these effects by year is depicted in Fig 1 , below.

Fig 1.

The percentage of articles that included the words theory/theories, mentioned a theory by name, and were preregistered was calculated out of the total number of articles published from 2009–2019 in Psychological Science excluding comments, editorials, and errata ( N = 2,225); note that for simplicity this figure counts all articles that received a preregistered badge (even if they were not coded in the present study).

Question 3: Are articles that name a specific theory more likely to be preregistered?

Because there were no preregistered articles prior to 2014, we considered only articles published from 2014 onwards (N = 737) for this part of the analysis. Articles that named a specific theory were no more or less likely to be preregistered. Specifically, 11.11% of articles that explicitly named a specific theory were preregistered. In contrast, 11.31% of articles that did not name a theory were preregistered.

Similarly, articles that claimed to test a specific theory were only slightly more likely to be preregistered: 15.66% of preregistered articles stated that they tested a specific theory, compared with 12.84% of articles that were not preregistered. See S3 and S4 Tables in S1 File for full counts by year.

Question 4: Are studies that name and/or test theories more likely to generate a specific kind of data?

Of the 1,605 articles coded over the 11-year period, the majority (55.26%) relied on self-report and survey data. Following this, 28.35% used observational data (including reaction times), 11.03% used biological or physiological data, and the remaining 5.30% used other types of data or methodologies (for example, computational modelling or a new method) to answer their primary research question.

However, it does not appear that studies using different types of data are any more or less prone to invoking theory. Of the studies that used self-report and survey data, 26.16% named a specific theory. Of the studies that used biological and physiological data, 19.77% named a specific theory. Of the studies that used observational or behavioural data, 22.20% named a specific theory. Of the studies that used other types of data, 25.88% named a specific theory. See S5 and S6 Tables in S1 File for complete counts.

Further, it does not appear that theoretically derived predictions are more conducive to any specific type of study. Only 17.36% of studies using self-report data, 11.86% of studies using biological/physiological data, 11.87% of studies using observational data, and 20% of studies using other types of data explicitly claimed to be testing theoretically derived predictions.

Question 5: How many theories were mentioned in this 11-year period?

We also counted the number of theories that were mentioned or referred to explicitly in each of the 2,225 manuscripts. As described in the S1 File , slightly different criteria were used for this task so as to be as inclusive as possible. A total of 359 theories were mentioned in text over the 11-year period. Most theories were mentioned in only a single paper ( mode = 1, median = 1, mean = 1.99). The full list of theories is presented in S7 Table in S1 File . For ease of reference, the top 10 most-mentioned theories are displayed below in Table 1 .

Exploratory analysis: Did authors use the word hypothesis in place of theory ?

One concern may be that authors are misusing the word hypothesis to refer to these formal, higher-level theories ; that is, using the word hypothesis when they should be using the word theory . To examine this possibility, we mined all 2,225 documents for the word hypothesis and examined the immediate context surrounding each instance.

If the authors were referring to a formally named, superordinate hypothesis derived from elsewhere (e.g., if it satisfied the criteria for a theory), it was coded as 1. It was coded as 0 if the authors were using hypothesis correctly. Specifically, it received a code of 0 if the authors were referring to their own hypothesis or expectations (e.g., our hypothesis, this hypothesis, etc.), if they were describing a statistical analysis (e.g., null hypothesis), or if they were describing an effect or pattern of results (e.g., the hypothesis that… ). Instances in the references were not counted. Two independent coders rated each instance of the word. Initial coder agreement was 89.5% and significantly greater than chance (Fleiss' kappa = .61, p < .001). As before, after initial coder agreement was analysed, a third coder resolved any disagreements and the final ratings (consisting of scores for which at least two coders agreed) were analysed.

Of the 2,225 articles published over the 11 years, 62% ( n = 1,386) used the word hypothesis . Of those, 14.5% ( n = 202) used hypothesis to refer to a larger, formal, or externally derived theory . Put differently, this constitutes 9% of the total corpus ( N = 2,225). Complete counts according to year are displayed in S8 Table in S1 File . Thus, it appears that this misuse of the word is not very common. However, even if we were to add this total count to our previous analysis of theory , it would not change our overall interpretation: the majority of papers published in Psychological Science are not discussing nor relying on theories in their research.

The Psychological Science website states that “The main criteria for publication in Psychological Science are general theoretical and empirical significance and methodological/statistical rigor” [ 30 , 31 ]. Yet, only 53.66% of articles published used the word theory , and even fewer named or claimed to test a specific theory. How can research have general theoretical significance if the word theory is not even present in the article?

A more pressing question, perhaps, is how can a field be contributing towards cumulative theoretical knowledge if the research is so fractionated? We identified 359 psychological theories that were referred to in-text (see S7 Table in S1 File for the complete list) and most of these were referred to only a single time. A recent review referred to this as theorrhea (a mania for new theory), and described it as a symptom stifling “the production of new research” [ 38 ]. Indeed, it’s hard to imagine that a cumulative science is one where each theory is examined so infrequently. One cannot help but wonder how the field can ever move towards theoretical consensus if everyone is studying something different—or, worse, studying the same thing with a different name.

These data provide insight into how psychologists are using psychological theories in their research. Many papers made no reference to a theory at all and most did not explicitly derive their predictions from a theory. It’s impossible to know why a given manuscript was written in a certain way, but we offer some possibilities to help understand why some authors neglected to even include the word theory in their report. One possibility is that the research is truly a-theoretical or descriptive. There is clear value in descriptive research—value that can ultimately amount to theoretical advancement [ 17 , 18 , 20 ] and it would be misguided to avoid interesting questions because they did not originate from theory.

It’s also possible that researchers are testing auxiliary assumptions [ 39 ] or their own interpretations (instead of the literal interpretations or predictions) of theories [ 40 ]. This strategy is quite common: authors describe certain effects or qualities of previous literature (e.g., the literature review) in their introduction to narrate how they developed a certain hypothesis or idea, then they state their own hypothesis. Such a strategy is fine, but certainly does not amount to a quantifiable prediction derived from a pre-specified theory. Further, given that psychological theories are almost always verbal [ 2 , 15 ], there may not even be literal interpretations or predictions to test.

An additional possibility is that researchers may be focusing on “effects” and paradigms rather than theories per se. Psychology is organized topically—development, cognition, social behaviour, personality—and these topics are essentially collections of effects (e.g., motivated reasoning, the Stroop effect, etc). Accordingly, researchers tend to study specific effects and examine whether they hold under different conditions. Additionally, a given study may be conducted because it is the logical follow-up from a previous study they conducted, not because the researchers are interested in examining whether a theory is true or not.

However, it's also important to consider the qualities of the research that did use the word theory and why. Recall that only 33.58% of articles using the word theory or theories said anything substantial about a theory. For the remaining articles, it's possible that these words and phrases were injected post-hoc to make the paper seem theoretically significant, because it is standard practice, or because it is a journal requirement. That is, this may be indicative of a specific type of HARKing: searching the literature for relevant hypotheses or predictions after data analysis, known as RHARKing [ 29 ]. For example, some researchers may have conducted a study for other reasons (e.g., personal interest), but then searched for a relevant theory to connect the results to after the fact. It's important to note that HARKing can be prevented by preregistration, but preregistration was used in only 11.11% of the papers that claimed to test a theory. Of course, it's impossible to know an author's motivation in the absence of a preregistration, but the possibility remains quite likely given that between 27% and 58% of scientists admit to HARKing [ 29 ].

Finally, these data provide insight into the kind of research psychologists are conducting. The majority (55.26%) is conducted using self-report and survey data. Much less research is conducted using observational (28.35%) and biological or physiological (11.03%) data. While not as bleak as a previous report claiming that behavioural data is completely absent from the psychological sciences [ 41 ], this points to a limitation in the kinds of questions that can be answered. Of course, self-report data may be perfectly reasonable for some questions, but such questions are necessarily restricted to a narrower slice of human behaviour and cognition. Further, a high degree of reliance on a single method certainly contrasts with the large number of theories being referenced. It is worth considering how much explanatory power each of these theories has if most of them are discussed exclusively in the context of self-report and survey data.

Limitations and additional considerations

The present results describe only one journal: Psychological Science . However, we chose this journal because it is one of the top journals in the field, because it publishes research from all areas of psychology, and because it has explicit criteria for theoretical relevance. Thus, we expected that research published in this journal would be representative of some of the theoretically relevant research being conducted. So, we do not claim that the results described here statistically generalize to other journals, only that they describe the pattern of research in one of the top journals in psychology. One specific concern is that Psychological Science limits articles to 2,000 words, and this may have restricted authors' ability to describe and reference theories. This may be true, though it would seem that the body of knowledge a piece of research is contributing towards would be one of the most important pieces of information to include in a report. That is, if the goal of the research were to contribute to cumulative knowledge, it does not require many words to refer to a body of theory by name.

An additional concern may be that, in some areas of psychology, “theories” may be referred to with a different name (e.g., model or hypothesis ). However, the terms model and hypothesis do not carry the formal weight that scientific theory does. In the hierarchy of science, theories are regarded as being the highest status a claim can achieve—that most articles use it casually and conflate it with other meanings is problematic for clear scientific communication. In contrast, model or hypothesis could be used to refer to several different things: if something is called model , then it’s not claiming to be a theory . Our additional analysis only identified a small minority of papers that used hypothesis in this fashion (9% of the total corpus). While this number is relatively small, this does highlight an additional issue: the lack of consistency with which theories are referred to and discussed. It is difficult and confusing to consistently add to a body of knowledge if different names and terms are used.

Another claim might be that theory should simply be implicit in any text; that it should permeate through one’s writing without many direct references to it. If we were to proceed in this fashion, how could one possibly contribute to cumulative theory? If theory need not be named, identified, or referred to specifically, how is a researcher to judge what body of research they are contributing to? How are they to interpret their findings? How is one even able to design an experiment to answer their research question without a theory? The argument has been made that researchers need theory to guide methods [ 5 , 6 , 9 ]—this is not possible without, at least, clearly naming and referencing theories.

A final limitation to note is one regarding the consistency of the coders. While the fair to moderate kappas obtained here may seem concerning at first, we believe this reflects the looseness and vagueness with which words like theory are used. Authors are often ambiguous and pad their introductions and discussions with references to models and other research; it is often not explicit whether a model is simply being mentioned or whether it is actually guiding the research. Further complicating things is that references to theories are often inconsistent. Thus, it can be a particularly difficult task to determine whether an author actually derived their predictions from a specific theory or whether they are simply discussing it because they later noted the similarities. Such difficulties could have contributed to the lower initial agreement among coders. Therefore, along with noting that the kappas are lower than would be ideal, we also suggest that future researchers be conscious of their writing: it is very easy to be extremely explicit about where one's predictions were derived from and why a test is being conducted. We believe this to be a necessary component of any research report.

Concluding remarks

Our interpretation of these data is that the published research we reviewed is simultaneously saturated and fractionated, and that theory is not guiding the majority of research published in Psychological Science despite this being the main criterion for acceptance. While many articles included the words theory and theories , these words are most often used casually and non-specifically. In a large subset of the remaining cases, the theoretical backbone is no more than a thin veneer of relevant rhetoric and citations.

These results highlight many questions for the field moving forward. For example, it’s often noted that psychology has made little progress towards developing clearly specified, cumulative theories [ 2 , 15 ] but what should that progress look like? What is the role of theory in psychological science? Additionally, while it is widely assumed that psychological research follows the hypothetico-deductive model, these data suggest this is not necessarily the case. There are many other ways to do research and not all of them involve theory testing. If the majority of research in a top journal is not explicitly testing predictions derived from theory, then perhaps it exists to explore and describe interesting effects. There is certainly nothing wrong with a descriptive approach, and this aim of psychology has been suggested for at least half a century [ 20 , 42 , 43 ].

To be clear, we are not suggesting that every article should include the word theory , nor that it should be a requirement for review. We are not even suggesting that research needs to be based in theory. Instead, we are simply pointing out the pattern of research that exists in one of the leading research journals with the hope that this inspires critical discussion around the process, aims, and motivation of psychological research. There are many ways to do research. If scientists want to work towards developing nomothetic explanations of human nature then, yes, theory can help. If scientists simply want to describe or explore something interesting, that’s fine too.

Supporting information

Funding statement

The author(s) received no specific funding for this work.

Data Availability

  • PLoS One. 2021; 16(3): e0247986.

Decision Letter 0

21 Dec 2020

PONE-D-20-28543

A decade of theory as reflected in Psychological Science (2009-2019)

Dear Dr. McPhetres,

Thank you for submitting your manuscript to PLOS ONE. After careful consideration, we feel that it has merit but does not fully meet PLOS ONE’s publication criteria as it currently stands. Therefore, we invite you to submit a revised version of the manuscript that addresses the points raised during the review process.

==============================

I would like to thank you for this submission; I speak for myself and hopefully both reviewers when I say it was a pleasure to read and looks to be a strong and impactful addition to the literature. I agree with the reviewers, though, that some minor additions are required before acceptance. There were no major conflicts between what the reviewers said; in fact, I personally think they complement each other quite well, so that addressing their concerns in one area should help the entire work overall. While I think that all the suggestions are valid, and they are not too onerous on you and your team, I believe the most critical issues are addressing Reviewer 1's statements about the usage of the term "hypothesis", as they correctly point out how its absence from your coding scheme may overestimate the atheoretical nature of the field. I would also stress the need to address Reviewer 2's concerns about the agreement scores between coders. Once these issues are addressed, I believe this work will be strong enough to publish. I understand that, with the upcoming holidays for many institutions as well as Covid restrictions, it may be difficult for you and your team to address all these concerns quickly; therefore, while I suggested approximately 48 days as the time for resubmission (slightly more than the 45 that is typical), if you require more time please contact us and we can extend this deadline. We all understand that the current pace of the academic and non-academic world is not typical, and we do not want you or your team to feel constrained by this timeline. I thank you for your submission.

Please submit your revised manuscript by January 29th 2021. If you will need more time than this to complete your revisions, please reply to this message or contact the journal office at plosone@plos.org . When you're ready to submit your revision, log on to https://www.editorialmanager.com/pone/ and select the 'Submissions Needing Revision' folder to locate your manuscript file.

Please include the following items when submitting your revised manuscript:

  • A rebuttal letter that responds to each point raised by the academic editor and reviewer(s). You should upload this letter as a separate file labeled 'Response to Reviewers'.
  • A marked-up copy of your manuscript that highlights changes made to the original version. You should upload this as a separate file labeled 'Revised Manuscript with Track Changes'.
  • An unmarked version of your revised paper without tracked changes. You should upload this as a separate file labeled 'Manuscript'.

If you would like to make changes to your financial disclosure, please include your updated statement in your cover letter. Guidelines for resubmitting your figure files are available below the reviewer comments at the end of this letter.

If applicable, we recommend that you deposit your laboratory protocols in protocols.io to enhance the reproducibility of your results. Protocols.io assigns your protocol its own identifier (DOI) so that it can be cited independently in the future. For instructions see:  http://journals.plos.org/plosone/s/submission-guidelines#loc-laboratory-protocols

We look forward to receiving your revised manuscript.

Kind regards,

T. Alexander Dececchi, Ph.D

Academic Editor

Journal Requirements:

When submitting your revision, we need you to address these additional requirements.

1. Please ensure that your manuscript meets PLOS ONE's style requirements, including those for file naming. The PLOS ONE style templates can be found at

https://journals.plos.org/plosone/s/file?id=wjVg/PLOSOne_formatting_sample_main_body.pdf and

https://journals.plos.org/plosone/s/file?id=ba62/PLOSOne_formatting_sample_title_authors_affiliations.pdf

2. We note the study analyses publications from a single publication (Psychological Science) as part of this study. We note that you have acknowledged this as a limitation in the Discussion, and indicate that "we do not claim that the results described here statistically generalize to other journals".

However, some of the conclusions made do appear to suggest the results are generalizable to wider group, e.g. "We interpret this to suggest that most psychological research is not driven by theory, nor can it be contributing to cumulative theory building."

Please revise accordingly. This is required in order to meet PLOS ONE's 4th publication criterion, which states that 'Conclusions are presented in an appropriate fashion and are supported by the data.'

https://journals.plos.org/plosone/s/criteria-for-publication#loc-4

Additional Editor Comments:

First off, I would like to apologize for the delays, and I thank you for your understanding and patience. Second, I wish to congratulate you on an overall very compelling and informative study. This line of inquiry is needed to help drive psychological research forward. That said, I also agree with the reviewers on their most significant suggestions, especially the omission of "hypothesis" from your analysis, as brought up by Reviewer 1, and the moderate coder agreement scores brought forward by Reviewer 2. I believe addressing these in the next version will greatly improve the manuscript and make it even more accessible to a wider audience. I thank you all for this manuscript and I look forward to your re-submission.


Reviewers' comments:

Reviewer's Responses to Questions

Comments to the Author

1. Is the manuscript technically sound, and do the data support the conclusions?

The manuscript must describe a technically sound piece of scientific research with data that supports the conclusions. Experiments must have been conducted rigorously, with appropriate controls, replication, and sample sizes. The conclusions must be drawn appropriately based on the data presented.

Reviewer #1: Yes

Reviewer #2: Partly

2. Has the statistical analysis been performed appropriately and rigorously?

Reviewer #2: Yes

3. Have the authors made all data underlying the findings in their manuscript fully available?

The PLOS Data policy requires authors to make all data underlying the findings described in their manuscript fully available without restriction, with rare exception (please refer to the Data Availability Statement in the manuscript PDF file). The data should be provided as part of the manuscript or its supporting information, or deposited to a public repository. For example, in addition to summary statistics, the data points behind means, medians and variance measures should be available. If there are restrictions on publicly sharing data—e.g. participant privacy or use of data from a third party—those must be specified.

4. Is the manuscript presented in an intelligible fashion and written in standard English?

PLOS ONE does not copyedit accepted manuscripts, so the language in submitted articles must be clear, correct, and unambiguous. Any typographical or grammatical errors should be corrected at revision, so please note any specific errors here.

5. Review Comments to the Author

Please use the space provided to explain your answers to the questions above. You may also include additional comments for the author, including concerns about dual publication, research ethics, or publication ethics. (Please upload your review as an attachment if it exceeds 20,000 characters)

Reviewer #1: Thank you for the opportunity to review this manuscript. The authors have chosen a fascinating topic and approached it in an elegant and innovative fashion. Their analytic approach is well considered and, given the constraints of their analysis, the conclusions they draw from their results are sound. I certainly believe that this submission contributes in a unique and meaningful way to the literature on theorizing in psychology, and I think it would make a fine addition to your outlet.

My only concern regards the authors’ decision to omit the term “hypothesis” from their analyses. Although the authors go to some lengths to justify this decision, I remain unconvinced by their argument. Anecdotally, I think it is common in the field to refer to theory and hypothesis interchangeably, and there are certain types of hypothesis that satisfy the sort of superordinate status the authors ascribe to “theory”. In the evolutionary psychological literature, for example, theories such as inclusive fitness are referred to as “first order hypotheses”, from which subsidiary, testable hypotheses or predictions can be derived. In a similar vein, it is widely recognised that psychology progresses via cumulative tests of lower-order hypotheses derived from higher-order theories – here, it is quite reasonable to expect that researchers only explicitly refer to the former (i.e., the subject of their analysis), rather than the broader theoretical framework from which their hypotheses are derived; nevertheless, progressive empirical support for lower-order hypotheses constitutes cumulative support for higher-order theories. In short, these two terms cannot be readily individuated. On the other hand, I am sympathetic to the fact that a hypothesis can also refer to its more trivial sense (i.e., specific, testable predictions), which would require a more nuanced, qualitative analysis and coding of target articles to differentiate the more substantive use of the term (i.e., theory) from its more trivial form (i.e., empirical predictions). Nevertheless, I believe that such an analysis is required to demonstrate, convincingly, whether psychological science operates in the atheoretical manner the authors describe.

Otherwise, another, minor suggestion is that the authors might like to consider complementing some of their results with inferential analyses (e.g., chi-square analyses), where appropriate. It would be interesting to see whether the differences they cite reach statistical significance.

In closing, I would like to congratulate the authors on a fascinating submission, and I wish them all the best in their future endeavours.

Reviewer #2: Overview: This manuscript explored mentions of theory in the past 10 years in the journal Psychological Science. This paper attempts to provide an answer about the extent to which modern psychological research is guided by theory. This manuscript is innovative, clever, and overall well-written. The authors present interesting findings about psychological research’s current lack of grounding in theory without necessarily prescribing a need for change. My primary concern is the low agreement between coders on what constitutes a reference to theory, as captured by the Fleiss’ kappas. While these values suggest coders agreed at better than chance rates, their agreement was only fair to moderate at best. This goes back to the authors’ question of how to identify a theory and thus a reference to theory. More detail and explanation for these low agreement scores is needed.


1. It would be helpful to readers to include the theories that were mentioned most often in the text section on how many theories were mentioned in addition to the supplemental information.

2. Psych Science article’s introduction and discussion sections are limited to 2000 words. The authors might consider whether this word limit could have contributed to lower rates of including references to theory.

3. The manuscript currently lacks information to interpret Fleiss’ kappa according to cut points (i.e., no agreement, slight agreement, fair agreement, etc.) to help the reader better understand the level of agreement between coders. Furthermore, according to cut points for Fleiss’ kappa, coders showed only moderate agreement for the initial question of referring to a specific theory and only fair agreement for testing a prediction from a specific theory. These low kappas are concerning. The authors should note this is a limitation and offer potential explanations for why coders showed these levels of disagreement. It would help to contextualize the kappas based on what other studies using this as a measure of agreement have found.

6. PLOS authors have the option to publish the peer review history of their article. If published, this will include your full peer review and any attached files.

If you choose “no”, your identity will remain anonymous but your review may still be made public.

Do you want your identity to be public for this peer review? For information about this choice, including consent withdrawal, please see our Privacy Policy.

Reviewer #1: No

Reviewer #2: No

[NOTE: If reviewer comments were submitted as an attachment file, they will be attached to this email and accessible via the submission site. Please log into your account, locate the manuscript record, and check for the action link "View Attachments". If this link does not appear, there are no attachment files.]

While revising your submission, please upload your figure files to the Preflight Analysis and Conversion Engine (PACE) digital diagnostic tool, https://pacev2.apexcovantage.com/ . PACE helps ensure that figures meet PLOS requirements. To use PACE, you must first register as a user. Registration is free. Then, login and navigate to the UPLOAD tab, where you will find detailed instructions on how to use the tool. If you encounter any issues or have any questions when using PACE, please email PLOS at figures@plos.org. Please note that Supporting Information files do not need this step.

Author response to Decision Letter 0

11 Jan 2021

Response to comments

Editor comments

1. Please ensure that your manuscript meets PLOS ONE's style requirements, including those for file naming.

Response: I have reviewed the guidelines and believe that my files now satisfy these requirements.

Response: I have removed, to the best of my knowledge, statements that imply a generalisation to all of psychology. For example, I have reworded the statement you pointed out to read "that the research published in this flagship journal is not driven by theory"; the beginning of the concluding remarks now reads "the published research we reviewed" and "theory is not guiding the majority of research published in Psychological Science."

Reviewer Comments

Reviewer #1: Thank you for the opportunity to review this manuscript. The authors have chosen a fascinating topic and approached it in an elegant and innovative fashion. Their analytic approach is well considered and, given the constraints of their analysis, the conclusions they draw from their results are sound. I certainly believe that this submission contributes in a unique and meaningful way to the literature on theorizing in psychology, and I think it would make a fine addition to your outlet.

Response: Thank you for the positive evaluation of our work.

1. My only concern regards the authors’ decision to omit the term “hypothesis” from their analyses. Although the authors go to some lengths to justify this decision, I remain unconvinced by their argument. Anecdotally, I think it is common in the field to refer to theory and hypothesis interchangeably, and there are certain types of hypothesis that satisfy the sort of superordinate status the authors ascribe to “theory”. In the evolutionary psychological literature, for example, theories such as inclusive fitness are referred to as “first order hypotheses”, from which subsidiary, testable hypotheses or predictions can be derived. In a similar vein, it is widely recognised that psychology progresses via cumulative tests of lower-order hypotheses derived from higher-order theories – here, it is quite reasonable to expect that researchers only explicitly refer to the former (i.e., the subject of their analysis), rather than the broader theoretical framework from which their hypotheses are derived; nevertheless, progressive empirical support for lower-order hypotheses constitutes cumulative support for higher-order theories. In short, these two terms cannot be readily individuated. On the other hand, I am sympathetic to the fact that a hypothesis can also refer to its more trivial sense (i.e., specific, testable predictions), which would require a more nuanced, qualitative analysis and coding of target articles to differentiate the more substantive use of the term (i.e., theory) from its more trivial form (i.e., empirical predictions). Nevertheless, I believe that such an analysis is required to demonstrate, convincingly, whether psychological science operates in the atheoretical manner the authors describe.

Response: We have included this additional analysis. I now detail the results in the “exploratory analysis” section and have included a table detailing this data by year. The results show that, while some people do use hypothesis in place of theory, this is a minority of papers (only 9% of the total corpus).

2. Otherwise, another, minor suggestion is that the authors might like to consider complementing some of their results with inferential analyses (e.g., chi-square analyses), where appropriate. It would be interesting to see whether the differences they cite reach statistical significance.

Response: We have not included inferential statistics because we analysed the entire corpus of articles. Thus, there is no 'population' to generalise our results to, which is what the interpretation of a p-value requires. That is, because we have all the articles, any difference in absolute value is an actual difference; when you have the whole population, no p-values are needed to determine whether the numbers would differ significantly under a frequentist methodology and interpretation (e.g. what would happen if we repeated the study 100 times).

3. In closing, I would like to congratulate the authors on a fascinating submission, and I wish them all the best in their future endeavours.

Response: Thank you again for your constructive feedback!

Reviewer #2: Overview: This manuscript explored mentions of theory in the past 10 years in the journal Psychological Science. This paper attempts to provide an answer about the extent to which modern psychological research is guided by theory. This manuscript is innovative, clever, and overall well-written. The authors present interesting findings about psychological research’s current lack of grounding in theory without necessarily prescribing a need for change. My primary concern is the low agreement between coders on what constitutes a reference to theory, as captured by the Fleiss’ kappas. While these values suggest coders agreed at better than chance rates, their agreement was only fair to moderate at best. This goes back to the authors’ question of how to identify a theory and thus a reference to theory. More detail and explanation for these low agreement scores is needed.

Response: Thanks for pointing this out- I think this is the result of a miscommunication on my part. I have included additional text in the methods section to clarify how the data were coded by raters and why I do not believe the fair-to-moderate kappas to be a problem. I will also explain a bit more here, though.

First, just to clarify, the coding took place in two stages. Initially, two coders independently reviewed each article and recorded ratings. The kappa reported in the article was computed on this initial coding only. Then, in the second stage, a third coder reviewed the disagreements, and it is the ratings after this final round of coding that we analyse. So, each code we analysed for the main results satisfied one of two conditions: either a) two coders agreed 100%, or b) two out of three coders agreed 100%.

This means that the lower level of agreement was corrected when the third coder independently reviewed the disagreements (i.e., the kappa does not necessarily describe the data we analysed).

Thus, I do not think this is an issue because 1) agreement was not too bad to begin with and remained at moderate levels for the more complicated ratings, 2) more categories necessarily mean lower agreement, and 3) the tie-breaker means the ratings are the result of agreement by at least two coders.
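The two-stage resolution described above can be sketched in a few lines. This is a hypothetical illustration of the procedure, not our actual analysis code; the function name and the numeric codes are invented for the example.

```python
def resolve_code(coder_a, coder_b, tie_breaker=None):
    """Return the final code for one article.

    Keep the code when the two initial coders agree; otherwise fall
    back to the third coder's rating, keeping whichever initial code
    it matches. Returns None if no two coders agree.
    """
    if coder_a == coder_b:
        return coder_a
    if tie_breaker in (coder_a, coder_b):
        return tie_breaker
    return None

# Hypothetical codes: 1 = "refers to a theory", 0 = "does not"
assert resolve_code(1, 1) == 1        # initial agreement stands
assert resolve_code(1, 0, 0) == 0     # third coder breaks the tie
assert resolve_code(1, 0, 2) is None  # no majority among three coders
```

Every non-None output therefore reflects agreement between at least two coders, which is the property relied on in the response above.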

In response, I have made the following changes to the manuscript.

On pages 3-4, where I describe the coding, I have reworded the text and included some brief rules of thumb for interpreting kappa. It now reads as follows:

“Each article was initially scored independently by two individual coders who were blind to the purpose of the study; Fleiss’ Kappa is reported for this initial coding. Recommendations suggest that a kappa between .21-.40 indicates fair agreement, .41-.60 indicates moderate agreement, .61-.80 indicates substantial agreement, and .81-1.0 indicates almost perfect agreement (37).

After the initial round of coding, two additional blind coders and the first author each independently reviewed a unique subset of disagreements to resolve ties. This means that the ratings we analyse in the following section include only codes for which two independent raters (or two out of three raters) agreed 100%.”
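For readers who want to check the arithmetic behind these agreement bands, here is a minimal sketch of Fleiss' kappa. The function and the example ratings are illustrative only (made-up data, not our coding data).

```python
def fleiss_kappa(ratings):
    """Fleiss' kappa for a ratings table.

    ratings[i][j] = number of raters who assigned item i to category j.
    Every item must be rated by the same number of raters.
    """
    n_items = len(ratings)
    n_raters = sum(ratings[0])  # raters per item

    # Per-item agreement: proportion of rater pairs that agree.
    p_items = [
        (sum(c * c for c in row) - n_raters) / (n_raters * (n_raters - 1))
        for row in ratings
    ]
    p_bar = sum(p_items) / n_items

    # Chance agreement from the marginal category proportions.
    n_categories = len(ratings[0])
    p_j = [
        sum(row[j] for row in ratings) / (n_items * n_raters)
        for j in range(n_categories)
    ]
    p_e = sum(p * p for p in p_j)

    return (p_bar - p_e) / (1 - p_e)

# Hypothetical example: 4 articles, 2 coders, 2 categories
# ("refers to a theory" vs. "does not").
table = [
    [2, 0],  # both coders: theory
    [2, 0],  # both coders: theory
    [1, 1],  # coders disagree
    [0, 2],  # both coders: no theory
]
print(round(fleiss_kappa(table), 3))  # 0.467
```

With only four items and two coders, a single disagreement already pulls the kappa down into the "moderate" band, which illustrates how quickly the statistic penalises disagreement in small multi-category coding tasks.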

Response: I have noted this under the heading of “ Question 5: How many theories were mentioned…” (page 8). I have included a table with the top-10 most mentioned theories.

Response: Good point. At the beginning of the limitations sections I have stated the following:

“One specific concern is that Psychological Science limits articles to 2,000 words, and this may have restricted the ability to describe and reference theories. This may be true, though it would seem that the body of knowledge a piece of research is contributing towards would be one of the most important pieces of information to include in a report. That is, if the goal of that research were to contribute to cumulative knowledge, it does not require many words to refer to a body of theory by name.”

Response: I have included the rules of thumb for kappa in the methods section, as described earlier. This is related to my previous response regarding the calculation of the kappas; namely, we only used the codes for which two raters agreed (i.e., after tie-breaking). However, there are a few other practical considerations to be made here.

First, these categories are extremely difficult to code. They may seem straightforward but 1) authors are often extremely vague, 2) we are coding something for which we expect there to be misuses of the word (which adds noise), and 3) coding multiple categories will necessarily reduce agreement.

The coders did almost perfectly when coding whether a study was pre-registered; had this not been the case, I would have been more concerned about the other categories.

Going into this project, I initially thought it would be straightforward to identify what a theory is, but it is not. People use the word so loosely that it makes any coding scheme feel inadequate. Authors contradict themselves and make ambiguous statements. I think this has more to do with the articles than with the coders or the coding scheme. Some of these thoughts were already in the manuscript, and I am hesitant to put all of them into the paper, but I have added some discussion of this to the end of the limitations section.

Submitted filename: Response to comments.docx

Decision Letter 1

18 Feb 2021

A decade of theory as reflected in  Psychological Science (2009-2019)

PONE-D-20-28543R1

Dear Dr. McPhetres

We’re pleased to inform you that your manuscript has been judged scientifically suitable for publication and will be formally accepted for publication once it meets all outstanding technical requirements.

Within one week, you’ll receive an e-mail detailing the required amendments. When these have been addressed, you’ll receive a formal acceptance letter and your manuscript will be scheduled for publication.

An invoice for payment will follow shortly after the formal acceptance. To ensure an efficient process, please log into Editorial Manager at http://www.editorialmanager.com/pone/ , click the 'Update My Information' link at the top of the page, and double check that your user information is up-to-date. If you have any billing related questions, please contact our Author Billing department directly at authorbilling@plos.org.

If your institution or institutions have a press office, please notify them about your upcoming paper to help maximize its impact. If they'll be preparing press materials, please inform our press team as soon as possible -- no later than 48 hours after receiving the formal acceptance. Your manuscript will remain under strict press embargo until 2 pm Eastern Time on the date of publication. For more information, please contact onepress@plos.org.

Additional Editor Comments (optional):

After reading your revisions, the reviewers and I all agree that we should accept your manuscript. Congratulations. I know this was a long time in the works, and I apologize for that. I thank you for your patience.

1. If the authors have adequately addressed your comments raised in a previous round of review and you feel that this manuscript is now acceptable for publication, you may indicate that here to bypass the “Comments to the Author” section, enter your conflict of interest statement in the “Confidential to Editor” section, and submit your "Accept" recommendation.

Reviewer #1: All comments have been addressed

Reviewer #2: All comments have been addressed

2. Is the manuscript technically sound, and do the data support the conclusions?

3. Has the statistical analysis been performed appropriately and rigorously?

4. Have the authors made all data underlying the findings in their manuscript fully available?

5. Is the manuscript presented in an intelligible fashion and written in standard English?

6. Review Comments to the Author

Reviewer #1: The authors have done a fine job responding to the reviewers' concerns. I wish them all the best in their future endeavours.

Reviewer #2: The authors did a great job incorporating reviewer feedback into the revised document. The only other change I would suggest is tempering some of the strong language in the abstract and discussion somewhat to be more suggestive of potential implications of the findings. For example, in the abstract it states: “We interpret this to suggest that the majority of research published in this flagship journal is not driven by theory, nor can it be contributing to cumulative theory building.” Maybe instead say something like, “Given that the majority of research published in this flagship journal does not derive specific hypotheses from theory, we suggest that theory is not a primary driver of much of this research. Further, the research findings themselves may not be contributing to cumulative theory building.” From what I understand of the findings, several studies did reference theory, they just did not use theory to specifically derive their hypotheses.

7. PLOS authors have the option to publish the peer review history of their article. If published, this will include your full peer review and any attached files.

Acceptance letter

25 Feb 2021

A decade of theory as reflected in Psychological Science  (2009-2019)

Dear Dr. McPhetres:

I'm pleased to inform you that your manuscript has been deemed suitable for publication in PLOS ONE. Congratulations! Your manuscript is now with our production department.

If your institution or institutions have a press office, please let them know about your upcoming paper now to help maximize its impact. If they'll be preparing press materials, please inform our press team within the next 48 hours. Your manuscript will remain under strict press embargo until 2 pm Eastern Time on the date of publication. For more information please contact onepress@plos.org.

If we can help with anything else, please email us at plosone@plos.org.

Thank you for submitting your work to PLOS ONE and supporting open access.

PLOS ONE Editorial Office Staff

on behalf of

Dr. T. Alexander Dececchi


Chemistry LibreTexts

1.1: Hypothesis, Theories, and Laws


  Learning Objectives

  • Describe the difference between hypothesis and theory as scientific terms.
  • Describe the difference between a theory and scientific law.

Although many have taken science classes throughout the course of their studies, people often have incorrect or misleading ideas about some of the most important and basic principles in science. Most students have heard of hypotheses, theories, and laws, but what do these terms really mean? Prior to reading this section, consider what you have learned about these terms before. What do these terms mean to you? As you read, note anything that contradicts or supports what you previously thought.

What is a Fact?

A fact is a basic statement established by experiment or observation. All facts are true under the specific conditions of the observation.

What is a Hypothesis?

One of the most common terms used in science classes is a "hypothesis". The word can have many different definitions, depending on the context in which it is being used:

  • An educated guess: a scientific hypothesis provides a suggested solution based on evidence.
  • Prediction: if you have ever carried out a science experiment, you probably made this type of hypothesis when you predicted the outcome of your experiment.
  • Tentative or proposed explanation: hypotheses can be suggestions about why something is observed. In order for it to be scientific, however, a scientist must be able to test the explanation to see if it works and if it is able to correctly predict what will happen in a situation. For example, "if my hypothesis is correct, we should see ___ result when we perform ___ test."
A hypothesis is very tentative; it can be easily changed.

What is a Theory?

The United States National Academy of Sciences describes what a theory is as follows:

"Some scientific explanations are so well established that no new evidence is likely to alter them. The explanation becomes a scientific theory. In everyday language a theory means a hunch or speculation. Not so in science. In science, the word theory refers to a comprehensive explanation of an important feature of nature supported by facts gathered over time. Theories also allow scientists to make predictions about as yet unobserved phenomena."

"A scientific theory is a well-substantiated explanation of some aspect of the natural world, based on a body of facts that have been repeatedly confirmed through observation and experimentation. Such fact-supported theories are not "guesses" but reliable accounts of the real world. The theory of biological evolution is more than "just a theory." It is as factual an explanation of the universe as the atomic theory of matter (stating that everything is made of atoms) or the germ theory of disease (which states that many diseases are caused by germs). Our understanding of gravity is still a work in progress. But the phenomenon of gravity, like evolution, is an accepted fact.

Note some key features of theories that are important to understand from this description:

  • Theories are explanations of natural phenomena. They aren't predictions (although we may use theories to make predictions). They are explanations as to why we observe something.
  • Theories aren't likely to change. They have a large amount of support and are able to satisfactorily explain numerous observations. Theories can, indeed, be facts. Theories can change, but it is a long and difficult process. In order for a theory to change, there must be many observations or pieces of evidence that the theory cannot explain.
  • Theories are not guesses. The phrase "just a theory" has no room in science. To be a scientific theory carries a lot of weight; it is not just one person's idea about something.
Theories aren't likely to change.

What is a Law?

Scientific laws are similar to scientific theories in that they are principles that can be used to predict the behavior of the natural world. Both scientific laws and scientific theories are typically well-supported by observations and/or experimental evidence. Usually scientific laws refer to rules for how nature will behave under certain conditions, frequently written as an equation. Scientific theories are more overarching explanations of how nature works and why it exhibits certain characteristics. As a comparison, theories explain why we observe what we do and laws describe what happens.

For example, around the year 1800, Jacques Charles and other scientists were working with gases to, among other reasons, improve the design of the hot air balloon. These scientists found, after many, many tests, that certain patterns existed in their observations of gas behavior. If the temperature of a gas is increased at constant pressure, the volume of the gas also increases. This is known as a natural law. A law is a relationship that exists between variables in a group of data. Laws describe the patterns we see in large amounts of data, but do not describe why the patterns exist.
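Because laws are frequently written as equations, the pattern Charles observed can be expressed as V1/T1 = V2/T2 at constant pressure, with temperature in kelvin. A small sketch (the function name and values are illustrative):

```python
def charles_volume(v1, t1, t2):
    """Charles's law at constant pressure: V1/T1 = V2/T2.

    v1 is the initial volume; t1 and t2 are absolute temperatures
    in kelvin. Returns the volume at temperature t2.
    """
    if t1 <= 0 or t2 <= 0:
        raise ValueError("temperatures must be positive kelvin values")
    return v1 * t2 / t1

# Doubling the absolute temperature doubles the volume:
print(charles_volume(1.0, 300.0, 600.0))  # 2.0 (litres)
```

Note that the law predicts what the volume will be, but says nothing about why heating a gas expands it; that "why" is the job of a theory (here, the kinetic theory of gases).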

What is a Belief?

A belief is a statement that is not scientifically provable. Beliefs may or may not be correct; they are simply outside the realm of science to explore.

Laws vs. Theories

A common misconception is that scientific theories are rudimentary ideas that will eventually graduate into scientific laws when enough data and evidence has accumulated. A theory does not change into a scientific law with the accumulation of new or better evidence. Remember, theories are explanations and laws are patterns we see in large amounts of data, frequently written as an equation. A theory will always remain a theory; a law will always remain a law.

Video \(\PageIndex{1}\): What’s the difference between a scientific law and theory?

  • A hypothesis is a tentative explanation that can be tested by further investigation.
  • A theory is a well-supported explanation of observations.
  • A scientific law is a statement that summarizes the relationship between variables.
  • An experiment is a controlled method of testing a hypothesis.

Contributions & Attributions

Marisa Alviar-Agnew (Sacramento City College)

Henry Agnew (UC Davis)

This is the Difference Between a Hypothesis and a Theory

What to Know

A hypothesis is an assumption made before any research has been done. It is formed so that it can be tested to see if it might be true. A theory is a principle formed to explain the things already shown in data. Because of the rigors of experiment and control, it is much more likely that a theory will be true than a hypothesis.

As anyone who has worked in a laboratory or out in the field can tell you, science is about process: that of observing, making inferences about those observations, and then performing tests to see if the truth value of those inferences holds up. The scientific method is designed to be a rigorous procedure for acquiring knowledge about the world around us.


In scientific reasoning, a hypothesis is constructed before any applicable research has been done. A theory, on the other hand, is supported by evidence: it's a principle formed as an attempt to explain things that have already been substantiated by data.

Toward that end, science employs a particular vocabulary for describing how ideas are proposed, tested, and supported or disproven. And that's where we see the difference between a hypothesis and a theory .

A hypothesis is an assumption, something proposed for the sake of argument so that it can be tested to see if it might be true.

In the scientific method, the hypothesis is constructed before any applicable research has been done, apart from a basic background review. You ask a question, read up on what has been studied before, and then form a hypothesis.

What is a Hypothesis?

A hypothesis is usually tentative, an assumption or suggestion made strictly for the objective of being tested.

"When a character which has been lost in a breed, reappears after a great number of generations, the most probable hypothesis is, not that the offspring suddenly takes after an ancestor some hundred generations distant, but that in each successive generation there has been a tendency to reproduce the character in question, which at last, under unknown favourable conditions, gains an ascendancy."
Charles Darwin, On the Origin of Species, 1859

"According to one widely reported hypothesis, cell-phone transmissions were disrupting the bees' navigational abilities. (Few experts took the cell-phone conjecture seriously; as one scientist said to me, "If that were the case, Dave Hackenberg's hives would have been dead a long time ago.")"
Elizabeth Kolbert, The New Yorker, 6 Aug. 2007

What is a Theory?

A theory , in contrast, is a principle that has been formed as an attempt to explain things that have already been substantiated by data. It is used in the names of a number of principles accepted in the scientific community, such as the Big Bang Theory . Because of the rigors of experimentation and control, its likelihood as truth is much higher than that of a hypothesis.

It is evident, on our theory, that coasts merely fringed by reefs cannot have subsided to any perceptible amount; and therefore they must, since the growth of their corals, either have remained stationary or have been upheaved. Now, it is remarkable how generally it can be shown, by the presence of upraised organic remains, that the fringed islands have been elevated: and so far, this is indirect evidence in favour of our theory.
Charles Darwin, The Voyage of the Beagle, 1839

An example of a fundamental principle in physics, first proposed by Galileo in 1632 and extended by Einstein in 1905, is the following: All observers traveling at constant velocity relative to one another, should witness identical laws of nature. From this principle, Einstein derived his theory of special relativity.
Alan Lightman, Harper's, December 2011

Non-Scientific Use

In non-scientific use, however, hypothesis and theory are often used interchangeably to mean simply an idea, speculation, or hunch (though theory is more common in this regard):

The theory of the teacher with all these immigrant kids was that if you spoke English loudly enough they would eventually understand.
E. L. Doctorow, Loon Lake, 1979

Chicago is famous for asking questions for which there can be no boilerplate answers. Example: given the probability that the federal tax code, nondairy creamer, Dennis Rodman and the art of mime all came from outer space, name something else that has extraterrestrial origins and defend your hypothesis.
John McCormick, Newsweek, 5 Apr. 1999

In his mind's eye, Miller saw his case suddenly taking form: Richard Bailey had Helen Brach killed because she was threatening to sue him over the horses she had purchased. It was, he realized, only a theory, but it was one he felt certain he could, in time, prove. Full of urgency, a man with a mission now that he had a hypothesis to guide him, he issued new orders to his troops: Find out everything you can about Richard Bailey and his crowd.
Howard Blum, Vanity Fair, January 1995

And sometimes one term is used as a genus, or a means for defining the other:

Laplace's popular version of his astronomy, the Système du monde, was famous for introducing what came to be known as the nebular hypothesis, the theory that the solar system was formed by the condensation, through gradual cooling, of the gaseous atmosphere (the nebulae) surrounding the sun.
Louis Menand, The Metaphysical Club, 2001

Researchers use this information to support the gateway drug theory — the hypothesis that using one intoxicating substance leads to future use of another.
Jordy Byrd, The Pacific Northwest Inlander, 6 May 2015

Fox, the business and economics columnist for Time magazine, tells the story of the professors who enabled those abuses under the banner of the financial theory known as the efficient market hypothesis.
Paul Krugman, The New York Times Book Review, 9 Aug. 2009

Incorrect Interpretations of "Theory"

Since this casual use does away with the distinctions upheld by the scientific community, hypothesis and theory are prone to being wrongly interpreted even when they are encountered in scientific contexts—or at least, contexts that allude to scientific study without making the critical distinction that scientists employ when weighing hypotheses and theories.

The most common occurrence is when theory is interpreted—and sometimes even gleefully seized upon—to mean something having less truth value than other scientific principles. (The word law applies to principles so firmly established that they are almost never questioned, such as the law of gravity.)

This mistake is one of projection: because theory in everyday use means something lightly speculated, it is assumed that scientists must be expressing the same level of uncertainty when they use theory to refer to their well-tested and reasoned principles.

The distinction has come to the forefront particularly on occasions when the content of science curricula in schools has been challenged—notably, when a school board in Georgia put stickers on textbooks stating that evolution was "a theory, not a fact, regarding the origin of living things." As Kenneth R. Miller, a cell biologist at Brown University, has said, a theory "doesn't mean a hunch or a guess. A theory is a system of explanations that ties together a whole bunch of facts. It not only explains those facts, but predicts what you ought to find from other observations and experiments."

While theories are never infallible, they form the basis of scientific reasoning because, as Miller said, "to the best of our ability, we've tested them, and they've held up."


Computer Science > Logic in Computer Science

Title: Implications of Computer Science Theory for the Simulation Hypothesis

Abstract: The simulation hypothesis has recently excited renewed interest, especially in the physics and philosophy communities. However, the hypothesis specifically concerns computers that simulate physical universes, which means that to properly investigate it we need to couple computer science theory with physics. Here I do this by exploiting the physical Church-Turing thesis. This allows me to introduce a preliminary investigation of some of the computer science theoretic aspects of the simulation hypothesis. In particular, building on Kleene's second recursion theorem, I prove that it is mathematically possible for us to be in a simulation that is being run on a computer by us. In such a case, there would be two identical instances of us; the question of which of those is "really us" is meaningless. I also show how Rice's theorem provides some interesting impossibility results concerning simulation and self-simulation; briefly describe the philosophical implications of fully homomorphic encryption for (self-)simulation; briefly investigate the graphical structure of universes simulating universes simulating universes, among other issues. I end by describing some of the possible avenues for future research that this preliminary investigation reveals.


