Physics LibreTexts

1.2: Theories, Hypotheses and Models

For the purposes of this textbook (and science in general), we draw a distinction between what we mean by “theory”, “hypothesis”, and “model”. We will consider a “theory” to be a set of statements (or an equation) that gives us a broad description, applicable to several phenomena, and that allows us to make verifiable predictions. For example, Chloë’s Theory ( \(t \propto \sqrt{h}\) ) can be considered a theory. In particular, we do not use the word theory in the colloquial sense of “I have a theory about this...”

A “hypothesis” is a consequence of the theory that one can test. From Chloë’s Theory, we have the hypothesis that an object will take \(\sqrt{2}\) times longer to fall from \(2\:\text{m}\) than from \(1\:\text{m}\). We can formulate the hypothesis based on the theory and then test that hypothesis. If the hypothesis is invalidated by experiment, then either the theory is incorrect, or the hypothesis is not consistent with the theory.
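The \(\sqrt{2}\) ratio follows from the proportionality alone. A minimal sketch makes this concrete; the free-fall formula \(t = \sqrt{2h/g}\) is one specific model consistent with Chloë’s Theory, and the particular value of \(g\) is an assumption that cancels out of the ratio:

```python
import math

def fall_time(h, g=9.81):
    """One model consistent with t ∝ √h: free fall from rest, t = √(2h/g)."""
    return math.sqrt(2 * h / g)

# Hypothesis: falling from 2 m takes √2 times longer than falling from 1 m.
ratio = fall_time(2.0) / fall_time(1.0)
print(ratio)  # ≈ 1.4142, i.e. √2, whatever value of g is used
```

Because \(t \propto \sqrt{h}\), the ratio is \(\sqrt{2/1} = \sqrt{2}\) regardless of the constant of proportionality, which is exactly what makes this a testable consequence of the theory.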

A “model” is a situation-specific description of a phenomenon based on a theory, one that allows us to make a specific prediction. Using the example from the previous section, our theory would be that the fall time of an object is proportional to the square root of the drop height, and a model would be applying that theory to describe a tennis ball falling \(4.2\:\text{m}\). From the model, we can form a testable hypothesis of how long it will take the tennis ball to fall that distance. It is important to note that a model will almost always be an approximation of the theory applied to describe a particular phenomenon. For example, if Chloë’s Theory is only valid in vacuum, and we use it to model the time that it takes for an object to fall at the surface of the Earth, we may find that our model disagrees with experiment. We would not necessarily conclude that the theory is invalidated if our model did not adequately apply the theory to describe the phenomenon (e.g. by forgetting to include the effect of air drag).
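A sketch of such a model (our illustration: it assumes constant gravity \(g \approx 9.81\:\text{m/s}^2\) and no air drag, neither of which is dictated by the theory itself):

```python
import math

G = 9.81  # m/s^2, assumed surface gravity; air drag is ignored entirely

def fall_time(h):
    """Model prediction for an object dropped from rest: t = sqrt(2h/G)."""
    return math.sqrt(2 * h / G)

t_pred = fall_time(4.2)  # the tennis ball dropped 4.2 m
print(round(t_pred, 2))  # ≈ 0.93 s
```

Measuring a significantly longer fall time would tell us the model disagrees with experiment; as the text notes, that may indict the model (e.g. the neglect of drag) rather than the underlying theory.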

This textbook will introduce the theories from Classical Physics, which were mostly established and tested between the seventeenth and nineteenth centuries. We will take it as given that readers of this textbook are not likely to perform experiments that challenge those well-established theories. The main challenge will be, given a theory, to define a model that describes a particular situation, and then to test that model. This introductory physics course is thus focused on thinking of “doing physics” as the task of correctly modeling a situation.

Emma's Thoughts

What’s the difference between a model and a theory?

“Model” and “Theory” are sometimes used interchangeably among scientists. In physics, it is particularly important to distinguish between these two terms. A model provides an immediate understanding of something based on a theory.

For example, if you would like to model the launch of your toy rocket into space, you might run a computer simulation of the launch based on various theories of propulsion that you have learned. In this case, the model is the computer simulation, which describes what will happen to the rocket. This model depends on various theories that have been extensively tested, such as Newton’s laws of motion and fluid dynamics.

  • “Model”: Your homemade rocket computer simulation
  • “Theory”: Newton’s laws of motion, fluid dynamics

With this analogy, we can quickly see that the “model” and “theory” are not interchangeable. If they were, we would be saying that all of Newton’s Laws of Motion depend on the success of your piddly toy rocket computer simulation!

Exercise \(\PageIndex{2}\)

True or false? Models cannot be scientifically tested; only theories can be tested.

Models in Science

Models are of central importance in many scientific contexts. The centrality of models such as inflationary models in cosmology, general-circulation models of the global climate, the double-helix model of DNA, evolutionary models in biology, agent-based models in the social sciences, and general-equilibrium models of markets in their respective domains is a case in point (the Other Internet Resources section at the end of this entry contains links to online resources that discuss these models). Scientists spend significant amounts of time building, testing, comparing, and revising models, and much journal space is dedicated to interpreting and discussing the implications of models.

As a result, models have attracted philosophers’ attention and there are now sizable bodies of literature about various aspects of scientific modeling. A tangible result of philosophical engagement with models is a proliferation of model types recognized in the philosophical literature. Probing models, phenomenological models, computational models, developmental models, explanatory models, impoverished models, testing models, idealized models, theoretical models, scale models, heuristic models, caricature models, exploratory models, didactic models, fantasy models, minimal models, toy models, imaginary models, mathematical models, mechanistic models, substitute models, iconic models, formal models, analogue models, and instrumental models are but some of the notions that are used to categorize models. While at first glance this abundance is overwhelming, it can be brought under control by recognizing that these notions pertain to different problems that arise in connection with models. Models raise questions in semantics (how, if at all, do models represent?), ontology (what kind of things are models?), epistemology (how do we learn and explain with models?), and, of course, in other domains within philosophy of science.

1. Semantics: Models and Representation

Many scientific models are representational models: they represent a selected part or aspect of the world, which is the model’s target system. Standard examples are the billiard ball model of a gas, the Bohr model of the atom, the Lotka–Volterra model of predator–prey interaction, the Mundell–Fleming model of an open economy, and the scale model of a bridge.

This raises the question of what it means for a model to represent a target system. This problem is rather involved and decomposes into various subproblems. For an in-depth discussion of the issue of representation, see the entry on scientific representation. At this point, rather than addressing the issue of what it means for a model to represent, we focus on a number of different kinds of representation that play important roles in the practice of model-based science, namely scale models, analogical models, idealized models, toy models, minimal models, phenomenological models, exploratory models, and models of data. These categories are not mutually exclusive, and a given model can fall into several categories at once.

Scale models. Some models are down-sized or enlarged copies of their target systems (Black 1962). A typical example is a small wooden car that is put into a wind tunnel to explore the actual car’s aerodynamic properties. The intuition is that a scale model is a naturalistic replica or a truthful mirror image of the target; for this reason, scale models are sometimes also referred to as “true models” (Achinstein 1968: Ch. 7). However, there is no such thing as a perfectly faithful scale model; faithfulness is always restricted to some respects. The wooden scale model of the car provides a faithful portrayal of the car’s shape but not of its material. And even in the respects in which a model is a faithful representation, the relation between model-properties and target-properties is usually not straightforward. When engineers use, say, a 1:100 scale model of a ship to investigate the resistance that an actual ship experiences when moving through the water, they cannot simply measure the resistance the model experiences and then multiply it by the scale factor: the real ship need not have one hundred times the water resistance of its 1:100 model. The two quantities stand in a complicated nonlinear relation with each other, and the exact form of that relation is often highly nontrivial and emerges as the result of a thoroughgoing study of the situation (Sterrett 2006, forthcoming; Pincock forthcoming).
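The nonlinearity can be illustrated with Froude scaling, a similarity rule standardly used in ship testing (our illustration, not drawn from the entry; the speed and length below are invented, and real towing-tank extrapolation additionally corrects for frictional resistance, which obeys a different, Reynolds-number rule):

```python
import math

def froude_number(v, L, g=9.81):
    """Fr = v / sqrt(g*L); equal Fr means dynamically similar wave patterns."""
    return v / math.sqrt(g * L)

scale = 100                        # a 1:100 model
v_ship, L_ship = 10.0, 200.0       # illustrative ship speed (m/s) and length (m)
v_model = v_ship / math.sqrt(scale)  # the model must be towed slower to match Fr
L_model = L_ship / scale

assert abs(froude_number(v_ship, L_ship) - froude_number(v_model, L_model)) < 1e-12

# With equal resistance coefficients, wave resistance scales as v**2 * L**2,
# so ship-to-model resistance goes as scale**3 = 1,000,000 -- not as 100.
resistance_ratio = (v_ship ** 2 * L_ship ** 2) / (v_model ** 2 * L_model ** 2)
print(resistance_ratio)
```

Even in this idealized sketch the translation from model measurement to target quantity requires a similarity argument, not a simple multiplication by the scale.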

Analogical models. Standard examples of analogical models include the billiard ball model of a gas, the hydraulic model of an economic system, and the dumb hole model of a black hole. At the most basic level, two things are analogous if there are certain relevant similarities between them. In a classic text, Hesse (1963) distinguishes different types of analogies according to the kinds of similarity relations into which two objects enter. A simple type of analogy is one that is based on shared properties. There is an analogy between the earth and the moon based on the fact that both are large, solid, opaque, spherical bodies that receive heat and light from the sun, revolve around their axes, and gravitate towards other bodies. But sameness of properties is not a necessary condition. An analogy between two objects can also be based on relevant similarities between their properties. In this more liberal sense, we can say that there is an analogy between sound and light because echoes are similar to reflections, loudness to brightness, pitch to color, detectability by the ear to detectability by the eye, and so on.

Analogies can also be based on the sameness or resemblance of relations between parts of two systems rather than on their monadic properties. It is in this sense that the relation of a father to his children is asserted to be analogous to the relation of the state to its citizens. The analogies mentioned so far have been what Hesse calls “material analogies”. We obtain a more formal notion of analogy when we abstract from the concrete features of the systems and only focus on their formal set-up. What the analogue model then shares with its target is not a set of features, but the same pattern of abstract relationships (i.e., the same structure, where structure is understood in a formal sense). This notion of analogy is closely related to what Hesse calls “formal analogy”. Two items are related by formal analogy if they are both interpretations of the same formal calculus. For instance, there is a formal analogy between a swinging pendulum and an oscillating electric circuit because they are both described by the same mathematical equation.
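The pendulum/circuit case can be made concrete: both systems obey an equation of the form x'' + ω²x = 0, so a single function describes both. The numerical parameter values below are invented for illustration:

```python
import math

def harmonic(t, x0, omega):
    """Solution of x'' + omega**2 * x = 0 with x(0) = x0, x'(0) = 0."""
    return x0 * math.cos(omega * t)

# Small-angle pendulum: theta'' + (g/L) * theta = 0
g, L_pend = 9.81, 1.0
omega_pend = math.sqrt(g / L_pend)

# LC circuit: q'' + (1/(L*C)) * q = 0
L_ind, C = 0.5, 2e-3               # inductance (H) and capacitance (F)
omega_lc = math.sqrt(1.0 / (L_ind * C))

# The same formula covers both systems; only the physical interpretation
# of x and omega changes. This is Hesse's formal analogy.
theta_at_t = harmonic(0.3, 0.1, omega_pend)   # pendulum angle (rad) at t = 0.3 s
charge_at_t = harmonic(0.3, 1e-6, omega_lc)   # capacitor charge (C) at t = 0.3 s
print(theta_at_t, charge_at_t)
```

The two systems share no material properties at all; what they share is the structure of the governing equation, which is what the formal notion of analogy isolates.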

A further important distinction due to Hesse is the one between positive, negative, and neutral analogies. The positive analogy between two items consists in the properties or relations they share (both gas molecules and billiard balls have mass); the negative analogy consists in the properties they do not share (billiard balls are colored, gas molecules are not); the neutral analogy comprises the properties of which it is not known (yet) whether they belong to the positive or the negative analogy (do billiard balls and molecules have the same cross section in scattering processes?). Neutral analogies play an important role in scientific research because they give rise to questions and suggest new hypotheses. For this reason several authors have emphasized the heuristic role that analogies play in theory and model construction, as well as in creative thought (Bailer-Jones and Bailer-Jones 2002; Bailer-Jones 2009: Ch. 3; Hesse 1974; Holyoak and Thagard 1995; Kroes 1989; Psillos 1995; and the essays collected in Helman 1988). See also the entry on analogy and analogical reasoning.

It has also been discussed whether using analogical models can in some cases be confirmatory in a Bayesian sense. Hesse (1974: 208–219) argues that this is possible if the analogy is a material analogy. Bartha (2010, 2013 [2019]) disagrees and argues that analogical models cannot be confirmatory in a Bayesian sense because the information encapsulated in an analogical model is part of the relevant background knowledge, which has the consequence that the posterior probability of a hypothesis about a target system cannot change as a result of observing the analogy. Analogical models can therefore only establish the plausibility of a conclusion in the sense of justifying a non-negligible prior probability assignment (Bartha 2010: §8.5).

More recently, these questions have been discussed in the context of so-called analogue experiments, which promise to provide knowledge about an experimentally inaccessible target system (e.g., a black hole) by manipulating another system, the source system (e.g., a Bose–Einstein condensate). Dardashti, Thébault, and Winsberg (2017) and Dardashti, Hartmann et al. (2019) have argued that, given certain conditions, an analogue simulation of one system by another system can confirm claims about the target system (e.g., that black holes emit Hawking radiation). See Crowther et al. (forthcoming) for a critical discussion, and also the entry on computer simulations in science.

Idealized models. Idealized models are models that involve a deliberate simplification or distortion of something complicated with the objective of making it more tractable or understandable. Frictionless planes, point masses, completely isolated systems, omniscient and fully rational agents, and markets in perfect equilibrium are well-known examples. Idealizations are a crucial means for science to cope with systems that are too difficult to study in their full complexity (Potochnik 2017).

Philosophical debates over idealization have focused on two general kinds of idealizations: so-called Aristotelian and Galilean idealizations. Aristotelian idealization amounts to “stripping away”, in our imagination, all properties from a concrete object that we believe are not relevant to the problem at hand. There is disagreement on how this is done. Jones (2005) and Godfrey-Smith (2009) offer an analysis of abstraction in terms of truth: while an abstraction remains silent about certain features or aspects of the system, it does not say anything false and still offers a true (albeit restricted) description. This allows scientists to focus on a limited set of properties in isolation. An example is a classical-mechanics model of the planetary system, which describes the position of an object as a function of time and disregards all other properties of planets. Cartwright (1989: Ch. 5), Musgrave (1981), who uses the term “negligibility assumptions”, and Mäki (1994), who speaks of the “method of isolation”, allow abstractions to say something false, for instance by neglecting a causally relevant factor.

Galilean idealizations are ones that involve deliberate distortions: physicists build models consisting of point masses moving on frictionless planes; economists assume that agents are omniscient; biologists study isolated populations; and so on. Using simplifications of this sort whenever a situation is too difficult to tackle was characteristic of Galileo’s approach to science. For this reason it is common to refer to ‘distortive’ idealizations of this kind as “Galilean idealizations” (McMullin 1985). An example for such an idealization is a model of motion on an ice rink that assumes the ice to be frictionless, when, in reality, it has low but non-zero friction.

Galilean idealizations are sometimes characterized as controlled idealizations, i.e., as ones that allow for de-idealization by successive removal of the distorting assumptions (McMullin 1985; Weisberg 2007). Thus construed, Galilean idealizations don’t cover all distortive idealizations. Batterman (2002, 2011) and Rice (2015, 2019) discuss distortive idealizations that are ineliminable in that they cannot be removed from the model without dismantling the model altogether.

What does a model involving distortions tell us about reality? Laymon (1991) formulated a theory which understands idealizations as ideal limits: imagine a series of refinements of the actual situation which approach the postulated limit, and then require that the closer the properties of a system come to the ideal limit, the closer its behavior has to come to the behavior of the system at the limit (monotonicity). If this is the case, then scientists can study the system at the limit and carry over conclusions from that system to systems distant from the limit. But these conditions need not always hold. In fact, it can happen that the behavior of systems approaching the limit does not converge to the behavior of the system at the limit. If this happens, we are faced with a singular limit (Berry 2002). In such cases the system at the limit can exhibit behavior that is different from the behavior of systems distant from the limit. Limits of this kind appear in a number of contexts, most notably in the theory of phase transitions in statistical mechanics. There is, however, no agreement over the correct interpretation of such limits. Batterman (2002, 2011) sees them as indicative of emergent phenomena, while Butterfield (2011a,b) sees them as compatible with reduction (see also the entries on intertheory relations in physics and scientific reduction).
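A standard textbook illustration of a singular limit (our example, not one discussed in the entry) is the quadratic εx² + x − 1 = 0. For every ε > 0 it has two roots, but the limiting equation obtained by setting ε = 0 is linear and has only one. As ε shrinks, one root approaches the linear equation’s root while the other diverges:

```python
import math

def roots(eps):
    """Roots of eps*x**2 + x - 1 = 0 for eps > 0, via the quadratic formula."""
    disc = math.sqrt(1 + 4 * eps)
    return (-1 + disc) / (2 * eps), (-1 - disc) / (2 * eps)

for eps in (1e-1, 1e-3, 1e-6):
    regular, runaway = roots(eps)
    print(eps, regular, runaway)
# The first root approaches 1, the root of the limiting equation x - 1 = 0;
# the second diverges to -infinity. The family of quadratics does not
# smoothly "become" the linear equation at eps = 0.
```

The behavior of the family for small ε is qualitatively different from the behavior at the limit, which is the signature of a singular limit.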

Galilean and Aristotelian idealizations are not mutually exclusive, and many models exhibit both in that they take into account a narrow set of properties and distort them. Consider again the classical-mechanics model of the planetary system: the model only takes a narrow set of properties into account and distorts them, for instance by describing planets as ideal spheres with a rotation-symmetric mass distribution.

A concept that is closely related to idealization is approximation. In a broad sense, A can be called an approximation of B if A is somehow close to B. This, however, is too broad because it makes room for any likeness to qualify as an approximation. Rueger and Sharp (1998) limit approximations to quantitative closeness, and Portides (2007) frames it as an essentially mathematical concept. On that notion A is an approximation of B iff A is close to B in a specifiable mathematical sense, where the relevant sense of “close” will be given by the context. An example is the approximation of one curve with another one, which can be achieved by expanding a function into a power series and only keeping the first two or three terms. In different situations we approximate an equation with another one by letting a control parameter tend towards zero (Redhead 1980). This raises the question of how approximations are different from idealizations, which can also involve mathematical closeness. Norton (2012) sees the distinction between the two as referential: an approximation is an inexact description of the target while an idealization introduces a secondary system (real or fictitious) which stands for the target system (while being distinct from it). If we say that the period of the pendulum on the wall is roughly two seconds, then this is an approximation; if we reason about the real pendulum by assuming that the pendulum bob is a point mass and that the string is massless (i.e., if we assume that the pendulum is a so-called ideal pendulum), then we use an idealization. Separating idealizations and approximations in this way does not imply that there cannot be interesting relations between the two. For instance, an approximation can be justified by pointing out that it is the mathematical expression of an acceptable idealization (e.g., when we neglect a dissipative term in an equation of motion because we make the idealizing assumption that the system is frictionless).
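The power-series case can be made concrete (a sketch of our own; truncating the sine series after two terms is the step behind the small-angle treatment of the ideal pendulum):

```python
import math

def sin_series(x, terms=2):
    """Truncated Taylor series of sin(x), keeping the first `terms` nonzero terms."""
    return sum((-1) ** k * x ** (2 * k + 1) / math.factorial(2 * k + 1)
               for k in range(terms))

small, large = 0.2, 1.5  # angles in radians
print(abs(math.sin(small) - sin_series(small)))  # tiny error near 0
print(abs(math.sin(large) - sin_series(large)))  # truncation degrades away from 0
```

The truncated series is an approximation in the quantitative sense: it is mathematically close to the target curve in a regime one can specify, and the error is controlled by the discarded terms.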

Toy models. Toy models are extremely simplified and strongly distorted renderings of their targets, and often only represent a small number of causal or explanatory factors (Hartmann 1995; Reutlinger et al. 2018; Nguyen forthcoming). Typical examples are the Lotka–Volterra model in population ecology (Weisberg 2013) and the Schelling model of segregation in the social sciences (Sugden 2000). Toy models usually do not perform well in terms of prediction and empirical adequacy, and they seem to serve other epistemic goals (more on these in Section 3). This raises the question whether they should be regarded as representational at all (Luczak 2017).

Some toy models are characterized as “caricatures” (Gibbard and Varian 1978; Batterman and Rice 2014). Caricature models isolate a small number of salient characteristics of a system and distort them into an extreme case. A classic example is Akerlof’s (1970) model of the car market (“the market for lemons”), which explains the difference in price between new and used cars solely in terms of asymmetric information, thereby disregarding all other factors that may influence the prices of cars (see also Sugden 2000). However, it is controversial whether such highly idealized models can still be regarded as informative representations of their target systems. For a discussion of caricature models, in particular in economics, see Reiss (2006).

Minimal models. Minimal models are closely related to toy models in that they are also highly simplified. They are so simplified that some argue that they are non-representational: they lack any similarity, isomorphism, or resemblance relation to the world (Batterman and Rice 2014). It has been argued that many economic models are of this kind (Grüne-Yanoff 2009). Minimal economic models are also unconstrained by natural laws, and do not isolate any real factors (ibid.). And yet, minimal models help us to learn something about the world in the sense that they function as surrogates for a real system: scientists can study the model to learn something about the target. It is, however, controversial whether minimal models can assist scientists in learning something about the world if they do not represent anything (Fumagalli 2016). Minimal models that purportedly lack any similarity or representation are also used in different parts of physics to explain the macro-scale behavior of various systems whose micro-scale behavior is extremely diverse (Batterman and Rice 2014; Rice 2018, 2019; Shech 2018). Typical examples are the features of phase transitions and the flow of fluids. Proponents of minimal models argue that what provides an explanation of the macro-scale behavior of a system in these cases is not a feature that system and model have in common, but the fact that the system and the model belong to the same universality class (a class of models that exhibit the same limiting behavior even though they show very different behavior at finite scales). It is, however, controversial whether explanations of this kind are possible without reference to at least some common features (Lange 2015; Reutlinger 2017).

Phenomenological models. Phenomenological models have been defined in different, although related, ways. A common definition takes them to be models that only represent observable properties of their targets and refrain from postulating hidden mechanisms and the like (Bokulich 2011). Another approach, due to McMullin (1968), defines phenomenological models as models that are independent of theories. This, however, seems to be too strong. Many phenomenological models, while failing to be derivable from a theory, incorporate principles and laws associated with theories. The liquid-drop model of the atomic nucleus, for instance, portrays the nucleus as a liquid drop and describes it as having several properties (surface tension and charge, among others) originating in different theories (hydrodynamics and electrodynamics, respectively). Certain aspects of these theories—although usually not the full theories—are then used to determine both the static and dynamical properties of the nucleus. Finally, it is tempting to identify phenomenological models with models of a phenomenon. Here, “phenomenon” is an umbrella term covering all relatively stable and general features of the world that are interesting from a scientific point of view. The weakening of sound as a function of the distance to the source, the decay of alpha particles, the chemical reactions that take place when a piece of limestone dissolves in an acid, the growth of a population of rabbits, and the dependence of house prices on the base rate of the Federal Reserve are phenomena in this sense. For further discussion, see Bailer-Jones (2009: Ch. 7), Bogen and Woodward (1988), and the entry on theory and observation in science.

Exploratory models. Exploratory models are models which are not proposed in the first place to learn something about a specific target system or a particular experimentally established phenomenon. Exploratory models function as the starting point of further explorations in which the model is modified and refined. Gelfert (2016) points out that exploratory models can provide proofs-of-principle and suggest how-possibly explanations (2016: Ch. 4). As an example, Gelfert mentions early models in theoretical ecology, such as the Lotka–Volterra model of predator–prey interaction, which mimic the qualitative behavior of speed-up and slow-down in population growth in an environment with limited resources (2016: 80). Such models do not give an accurate account of the behavior of any actual population, but they provide the starting point for the development of more realistic models. Massimi (2019) notes that exploratory models provide modal knowledge. Fisher (2006) sees these models as tools for the examination of the features of a given theory.

Models of data. A model of data (sometimes also “data model”) is a corrected, rectified, regimented, and in many instances idealized version of the data we gain from immediate observation, the so-called raw data (Suppes 1962). Characteristically, one first eliminates errors (e.g., removes points from the record that are due to faulty observation) and then presents the data in a “neat” way, for instance by drawing a smooth curve through a set of points. These two steps are commonly referred to as “data reduction” and “curve fitting”. When we investigate, for instance, the trajectory of a certain planet, we first eliminate points that are fallacious from the observation records and then fit a smooth curve to the remaining ones. Models of data play a crucial role in confirming theories because it is the model of data, and not the often messy and complex raw data, that theories are tested against.

The construction of a model of data can be extremely complicated. It requires sophisticated statistical techniques and raises serious methodological as well as philosophical questions. How do we decide which points on the record need to be removed? And given a clean set of data, what curve do we fit to it? The first question has been dealt with mainly within the context of the philosophy of experiment (see, for instance, Galison 1997 and Staley 2004). At the heart of the latter question lies the so-called curve-fitting problem, which is that the data themselves dictate neither the form of the fitted curve nor what statistical techniques scientists should use to construct a curve. The choice and rationalization of statistical techniques is the subject matter of the philosophy of statistics, and we refer the reader to the entry Philosophy of Statistics and to Bandyopadhyay and Forster (2011) for a discussion of these issues. Further discussions of models of data can be found in Bailer-Jones (2009: Ch. 7), Brewer and Chinn (1994), Harris (2003), Hartmann (1995), Laymon (1982), Mayo (1996, 2018), and Suppes (2007).
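The two steps, data reduction and curve fitting, can be sketched in a few lines (the data, the outlier threshold, and the linear model are entirely invented for illustration; real data reduction uses principled statistical criteria rather than a hand-picked cutoff):

```python
def fit_line(points):
    """Ordinary least-squares fit of y = a*x + b."""
    n = len(points)
    sx = sum(x for x, _ in points)
    sy = sum(y for _, y in points)
    sxx = sum(x * x for x, _ in points)
    sxy = sum(x * y for x, y in points)
    a = (n * sxy - sx * sy) / (n * sxx - sx * sx)
    return a, (sy - a * sx) / n

# Raw record of a roughly linear phenomenon; (4, 25.0) is a faulty observation.
raw = [(0, 0.1), (1, 1.9), (2, 4.2), (3, 6.1), (4, 25.0), (5, 9.8)]

# Data reduction: discard points that deviate wildly from a first rough fit.
a0, b0 = fit_line(raw)
clean = [(x, y) for x, y in raw if abs(y - (a0 * x + b0)) < 8.0]

# Curve fitting: the model of data is the curve fitted to the cleaned record.
a, b = fit_line(clean)
print(len(clean), round(a, 2))  # 5 points survive; slope near the y ≈ 2x trend
```

It is this fitted curve, not the raw record with its glitch, that a theory’s prediction would be compared against.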

The gathering, processing, dissemination, analysis, interpretation, and storage of data raise many important questions beyond the relatively narrow issues pertaining to models of data. Leonelli (2016, 2019) investigates the status of data in science, argues that data should be defined not by their provenance but by their evidential function, and studies how data travel between different contexts.

2. Ontology: What Are Models?

What are models? That is, what kind of object are scientists dealing with when they work with a model? A number of authors have voiced skepticism that this question has a meaningful answer, because models do not belong to a distinctive ontological category and anything can be a model (Callender and Cohen 2006; Giere 2010; Suárez 2004; Swoyer 1991; Teller 2001). Contessa (2010) replies that this is a non sequitur. Even if, from an ontological point of view, anything can be a model and the class of things that are referred to as models contains a heterogeneous collection of different things, it does not follow that it is either impossible or pointless to develop an ontology of models. This is because even if not all models are of a particular ontological kind, one can nevertheless ask to what ontological kinds the things that are de facto used as models belong. There may be several such kinds and each kind can be analyzed in its own right. What sort of objects scientists use as models has important repercussions for how models perform relevant functions such as representation and explanation, and hence this issue cannot be dismissed as “just sociology”.

The objects that commonly serve as models indeed belong to different ontological kinds: physical objects, fictional objects, abstract objects, set-theoretic structures, descriptions, equations, or combinations of some of these, are frequently referred to as models, and some models may fall into yet other classes of things. Following Contessa’s advice, the aim then is to develop an ontology for each of these. Those with an interest in ontology may see this as a goal in its own right. It is worth noting, however, that the question has reverberations beyond ontology and bears on how one understands the semantics and the epistemology of models.

Some models are physical objects. Such models are commonly referred to as “material models”. Standard examples of models of this kind are scale models of objects like bridges and ships (see Section 1), Watson and Crick’s metal model of DNA (Schaffner 1969), Phillips and Newlyn’s hydraulic model of an economy (Morgan and Boumans 2004), the US Army Corps of Engineers’ model of the San Francisco Bay (Weisberg 2013), Kendrew’s plasticine model of myoglobin (Frigg and Nguyen 2016), and model organisms in the life sciences (Leonelli and Ankeny 2012; Leonelli 2010; Levy and Currie 2015). All these are material objects that serve as models. Material models do not give rise to ontological difficulties over and above the well-known problems in connection with objects that metaphysicians deal with, for instance concerning the nature of properties, the identity of objects, parts and wholes, and so on.

However, many models are not material models. The Bohr model of the atom, a frictionless pendulum, or an isolated population, for instance, are in the scientist’s mind rather than in the laboratory and they do not have to be physically realized and experimented upon to serve as models. These “non-physical” models raise serious ontological questions, and how they are best analyzed remains a matter of controversy. In the remainder of this section we review some of the suggestions that have attracted attention in the recent literature on models.

What has become known as the fiction view of models sees models as akin to the imagined objects of literary fiction—that is, as akin to fictional characters like Sherlock Holmes or fictional places like Middle Earth (Godfrey-Smith 2007). So when Bohr introduced his model of the atom he introduced a fictional object of the same kind as the object Conan Doyle introduced when he invented Sherlock Holmes. This view squares well with scientific practice, where scientists often talk about models as if they were objects and often take themselves to be describing imaginary atoms, populations, or economies. It also squares well with philosophical views that see the construction and manipulation of models as essential aspects of scientific investigation (Morgan 1999), even if models are not material objects, because these practices seem to be directed toward some kind of object.

What philosophical questions does this move solve? Fictional discourse and fictional entities face well-known philosophical questions, and one may well argue that simply likening models to fictions amounts to explaining obscurum per obscurius (for a discussion of these questions, see the entry on fictional entities ). One way to counter this objection and to motivate the fiction view of models is to point to the view’s heuristic power. In this vein Frigg (2010b) identifies five specific issues that an ontology of models has to address and then notes that these issues arise in very similar ways in the discussion about fiction (the issues are the identity conditions, property attribution, the semantics of comparative statements, truth conditions, and the epistemology of imagined objects). Likening models to fiction then has heuristic value because there is a rich literature on fiction that offers a number of solutions to these issues.

Only a small portion of the options available in the extensive literature on fictions has actually been explored in the context of scientific models. Contessa (2010) formulates what he calls the “dualist account”, according to which a model is an abstract object that stands for a possible concrete object. The Rutherford model of the atom, for instance, is an abstract object that acts as a stand-in for one of the possible systems that contain an electron orbiting around a nucleus in a well-defined orbit. Barberousse and Ludwig (2009) and Frigg (2010b) take a different route and develop an account of models as fictions based on Walton’s (1990) pretense theory of fiction. According to this view the sentences of a passage of text introducing a model should be seen as a prop in a game of make-believe, and the model is the product of an act of pretense. This is an antirealist position in that it takes talk of model “objects” to be figures of speech because ultimately there are no model objects—models only live in scientists’ imaginations. Salis (forthcoming) reformulates this view into what she calls “the new fiction view of models”. The core difference lies in the fact that what is considered as the model are the model descriptions and their content rather than the imaginings that they prescribe. This is a realist view of models, because descriptions exist.

The fiction view is not without critics. Giere (2009), Magnani (2012), Pincock (2012), Portides (2014), and Teller (2009) reject the fiction approach and argue, in different ways, that models should not be regarded as fictions. Weisberg (2013) argues for a middle position which sees fictions as playing a heuristic role but denies that they should be regarded as forming part of a scientific model. The common core of these criticisms is that the fiction view misconstrues the epistemic standing of models. To call something a fiction, so the charge goes, is tantamount to saying that it is false, and it is unjustified to call an entire model a fiction—and thereby claim that it fails to capture how the world is—just because the model involves certain false assumptions or fictional elements. In other words, a representation isn’t automatically counted as fiction just because it has some inaccuracies. Proponents of the fiction view agree with this point but deny that the notion of fiction should be analyzed in terms of falsity. What makes a work a fiction is not its falsity (or some ratio of false to true claims): neither is everything that is said in a novel untrue (Tolstoy’s War and Peace contains many true statements about Napoleon’s Franco-Russian War), nor does every text containing false claims qualify as fiction (false news reports are just that, they are not fictions). The defining feature of a fiction is that readers are supposed to imagine the events and characters described, not that they are false (Frigg 2010a; Salis forthcoming).

Giere (1988) advocated the view that “non-physical” models are abstract entities. However, there is little agreement on the nature of abstract objects, and Hale (1988: 86–87) lists no fewer than twelve different possible characterizations (for a review of the available options, see the entry on abstract objects). In recent publications, Thomasson (2020) and Thomson-Jones (2020) develop what they call an “artifactualist view” of models, which is based on Thomasson’s (1999) theory of abstract artifacts. This view agrees with the pretense theory that the content of text that introduces a fictional character or a model should be understood as occurring in pretense, but at the same time insists that in producing such descriptions authors create abstract cultural artifacts that then exist independently of either the author or the readers. Artifactualism agrees with Platonism that abstract objects exist, but insists, contra Platonism, that abstract objects are brought into existence through a creative act and are not eternal. This allows the artifactualist to preserve the advantages of pretense theory while at the same time holding the realist view that fictional characters and models actually exist.

An influential point of view takes models to be set-theoretic structures. This position can be traced back to Suppes (1960) and is now, with slight variants, held by most proponents of the so-called semantic view of theories (for a discussion of this view, see the entry on the structure of scientific theories ). There are differences between the versions of the semantic view, but with the exception of Giere (1988) all versions agree that models are structures of one sort or another (Da Costa and French 2000).

This view of models has been criticized on various grounds. One pervasive criticism is that many types of models that play an important role in science are not structures and cannot be accommodated within the structuralist view of models, which can neither account for how these models are constructed nor for how they work in the context of investigation (Cartwright 1999; Downes 1992; Morrison 1999). Examples of such models are interpretative models and mediating models, discussed later in Section 4.2. Another charge held against the set-theoretic approach is that set-theoretic structures by themselves cannot be representational models—at least if that requires them to share some structure with the target—because the ascription of a structure to a target system which forms part of the physical world relies on a substantive (non-structural) description of the target, which goes beyond what the structuralist approach can afford (Nguyen and Frigg forthcoming).

A time-honored position has it that a model is a stylized description of a target system. It has been argued that this is what scientists display in papers and textbooks when they present a model (Achinstein 1968; Black 1962). This view has not been subject to explicit criticism. However, some of the criticisms that have been marshaled against the so-called syntactic view of theories equally threaten a linguistic understanding of models (for a discussion of this view, see the entry on the structure of scientific theories ). First, a standard criticism of the syntactic view is that by associating a theory with a particular formulation, the view misconstrues theory identity because any change in the formulation results in a new theory (Suppe 2000). A view that associates models with descriptions would seem to be open to the same criticism. Second, models have different properties than descriptions: the Newtonian model of the solar system consists of orbiting spheres, but it makes no sense to say this about its description. Conversely, descriptions have properties that models do not have: a description can be written in English and consist of 517 words, but the same cannot be said of a model. One way around these difficulties is to associate the model with the content of a description rather than with the description itself. For a discussion of a position on models that builds on the content of a description, see Salis (forthcoming).

A contemporary version of descriptivism is Levy’s (2012, 2015) and Toon’s (2012) so-called direct-representation view. This view shares with the fiction view of models (Section 2.2) the reliance on Walton’s pretense theory, but uses it in a different way. The main difference is that the views discussed earlier see modeling as introducing a vehicle of representation, the model, that is distinct from the target, and they see the problem as elucidating what kind of thing the model is. On the direct-representation view there are no models distinct from the target; there are only model-descriptions and targets, with no models in-between them. Modeling, on this view, consists in providing an imaginative description of real things. A model-description prescribes imaginings about the real system; the ideal pendulum, for instance, prescribes that model-users imagine the real string as massless and the bob as a point mass. This approach avoids the above problems because the identity conditions for models are given by the conditions for games of make-believe (and not by the syntax of a description) and property ascriptions take place in pretense. There are, however, questions about how this account deals with models that have no target (like models of the ether or four-sex populations), and about how models thus understood deal with idealizations. For a discussion of these points, see Frigg and Nguyen (2016), Poznic (2016), and Salis (forthcoming).

A closely related approach sees models as equations. This is a version of the view that models are descriptions, because equations are syntactic items that describe a mathematical structure. The issues that this view faces are similar to the ones we have already encountered: First, one can describe the same situation using different kinds of coordinates and as a result obtain different equations but without thereby also obtaining a different model. Second, the model and the equation have different properties. A pendulum contains a massless string, but the equation describing its motion does not; and an equation may be inhomogeneous, but the system it describes is not. It is an open question whether these issues can be avoided by appeal to a pretense account.
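The first difficulty can be made concrete with a standard textbook example (ours, for illustration, not drawn from the literature discussed above): a free particle moving in a plane is described in Cartesian coordinates by

\[
\ddot{x} = 0, \qquad \ddot{y} = 0,
\]

but in polar coordinates by

\[
\ddot{r} - r\dot{\varphi}^{2} = 0, \qquad r\ddot{\varphi} + 2\dot{r}\dot{\varphi} = 0.
\]

The two pairs of equations are different syntactic items, yet they describe the very same motion, and we would not want to say that the change of coordinates has produced a new model.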

3. Epistemology: The Cognitive Functions of Models

One of the main reasons why models play such an important role in science is that they perform a number of cognitive functions. For example, models are vehicles for learning about the world. Significant parts of scientific investigation are carried out on models rather than on reality itself because by studying a model we can discover features of, and ascertain facts about, the system the model stands for: models allow for “surrogative reasoning” (Swoyer 1991). For instance, we study the nature of the hydrogen atom, the dynamics of a population, or the behavior of a polymer by studying their respective models. This cognitive function of models has been widely acknowledged in the literature, and some even suggest that models give rise to a new style of reasoning, “model-based reasoning”, according to which “inferences are made by means of creating models and manipulating, adapting, and evaluating them” (Nersessian 2010: 12; see also Magnani, Nersessian, and Thagard 1999; Magnani and Nersessian 2002; and Magnani and Casadio 2016).

Learning about a model happens in two places: in the construction of the model and in its manipulation (Morgan 1999). There are no fixed rules or recipes for model building and so the very activity of figuring out what fits together, and how, affords an opportunity to learn about the model. Once the model is built, we do not learn about its properties by looking at it; we have to use and manipulate the model in order to elicit its secrets.

Depending on what kind of model we are dealing with, building and manipulating a model amount to different activities demanding different methodologies. Material models seem to be straightforward because they are used in common experimental contexts (e.g., we put the model of a car in the wind tunnel and measure its air resistance). Hence, as far as learning about the model is concerned, material models do not give rise to questions that go beyond questions concerning experimentation more generally.

Not so with fictional and abstract models. What constraints are there to the construction of fictional and abstract models, and how do we manipulate them? A natural response seems to be that we do this by performing a thought experiment. Different authors (e.g., Brown 1991; Gendler 2000; Norton 1991; Reiss 2003; Sorensen 1992) have explored this line of argument, but they have reached very different and often conflicting conclusions about how thought experiments are performed and what the status of their outcomes is (for details, see the entry on thought experiments ).

An important class of models is computational in nature. For some mathematical models it is possible to derive results or solve the equations analytically. But quite often this is not the case. It is at this point that computers have a great impact, because they allow us to solve problems that are otherwise intractable. Hence, computational methods provide us with knowledge about (the consequences of) a model where analytical methods remain silent. Many parts of current research in both the natural and social sciences rely on computer simulations, which help scientists to explore the consequences of models that cannot be investigated otherwise. The formation and development of stars and galaxies, the dynamics of high-energy heavy-ion reactions, the evolution of life, outbreaks of wars, the progression of an economy, moral behavior, and the consequences of decision procedures in an organization are explored with computer simulations, to mention only a few examples.

Computer simulations are also heuristically important. They can suggest new theories, models, and hypotheses, for example, based on a systematic exploration of a model’s parameter space (Hartmann 1996). But computer simulations also bear methodological perils. For example, they may provide misleading results because, due to the discrete nature of the calculations carried out on a digital computer, they only allow for the exploration of a part of the full parameter space, and this subspace need not reflect every important feature of the model. The severity of this problem is somewhat mitigated by the increasing power of modern computers. But the availability of more computational power can also have adverse effects: it may encourage scientists to swiftly come up with increasingly complex but conceptually premature models, involving poorly understood assumptions or mechanisms and too many additional adjustable parameters (for a discussion of a related problem in the social sciences, see Braun and Saam 2015: Ch. 3). This can lead to an increase in empirical adequacy—which may be welcome for certain forecasting tasks—but not necessarily to a better understanding of the underlying mechanisms. As a result, the use of computer simulations can change the weight we assign to the various goals of science. Finally, the availability of computer power may seduce scientists into making calculations that do not have the degree of trustworthiness one would expect them to have. This happens, for instance, when computers are used to propagate probability distributions forward in time, which can turn out to be misleading (see Frigg et al. 2014). So it is important not to be carried away by the means that new powerful computers offer and lose sight of the actual goals of research. For a discussion of further issues in connection with computer simulations, we refer the reader to the entry on computer simulations in science .

Once we have knowledge about the model, this knowledge has to be “translated” into knowledge about the target system. It is at this point that the representational function of models becomes important again: if a model represents, then it can instruct us about reality because (at least some of) the model’s parts or aspects have corresponding parts or aspects in the world. But if learning is connected to representation and if there are different kinds of representations (analogies, idealizations, etc.), then there are also different kinds of learning. If, for instance, we have a model we take to be a realistic depiction, the transfer of knowledge from the model to the target is accomplished in a different manner than when we deal with an analogue, or a model that involves idealizing assumptions. For a discussion of the different ways in which the representational function of models can be exploited to learn about the target, we refer the reader to the entry Scientific Representation .

Some models explain. But how can they fulfill this function given that they typically involve idealizations? Do these models explain despite or because of the idealizations they involve? Does an explanatory use of models presuppose that they represent, or can non-representational models also explain? And what kind of explanation do models provide?

There is a long tradition requiring that the explanans of a scientific explanation be true. We find this requirement in the deductive-nomological model (Hempel 1965) as well as in the more recent literature. For instance, Strevens (2008: 297) claims that “no causal account of explanation … allows nonveridical models to explain”. For further discussions, see also Colombo et al. (2015).

Authors working in this tradition deny that idealizations make a positive contribution to explanation and explore how models can explain despite being idealized. McMullin (1968, 1985) argues that a causal explanation based on an idealized model leaves out only features which are irrelevant for the respective explanatory task (see also Salmon 1984 and Piccinini and Craver 2011 for a discussion of mechanism sketches). Friedman (1974) argues that a more realistic (and hence less idealized) model explains better on the unification account. The idea is that idealizations can (at least in principle) be de-idealized (for a critical discussion of this claim in the context of the debate about scientific explanations, see Batterman 2002; Bokulich 2011; Morrison 2005, 2009; Jebeile and Kennedy 2015; and Rice 2015). Strevens (2008) argues that an explanatory causal model has to provide an accurate representation of the relevant causal relationships or processes which the model shares with the target system. The idealized assumptions of a model do not make a difference for the phenomenon under consideration and are therefore explanatorily irrelevant. In contrast, both Potochnik (2017) and Rice (2015) argue that models that explain can directly distort many difference-making causes.

According to Woodward’s (2003) theory, models are tools to find out about the causal relations that hold between certain facts or processes, and it is these relations that do the explanatory work. More specifically, explanations provide information about patterns of counterfactual dependence between the explanans and the explanandum which

enable us to see what sort of difference it would have made for the explanandum if the factors cited in the explanans had been different in various possible ways. (Woodward 2003: 11)

Accounts of causal explanation have also led to various claims about how idealized models can provide explanations, exploring to what extent idealization allows for the misrepresentation of irrelevant causal factors by the explanatory model (Elgin and Sober 2002; Strevens 2004, 2008; Potochnik 2007; Weisberg 2007, 2013). However, having the causally relevant features in common with real systems continues to play the essential role in showing how idealized models can be explanatory.

But is it really the truth of the explanans that makes the model explanatory? Other authors pursue a more radical line and argue that false models explain not only despite their falsity, but in fact because of their falsity. Cartwright (1983: 44) maintains that “the truth doesn’t explain much”. In her so-called “simulacrum account of explanation”, she suggests that we explain a phenomenon by constructing a model that fits the phenomenon into the basic framework of a grand theory (1983: Ch. 8). On this account, the model itself is the explanation we seek. This squares well with basic scientific intuitions, but it leaves us with the question of what notion of explanation is at work (see also Elgin and Sober 2002) and of what explanatory function idealizations play in model explanations (Rice 2018, 2019). Wimsatt (2007: Ch. 6) stresses the role of false models as means to arrive at true theories. Batterman and Rice (2014) argue that models explain because the details that characterize specific systems do not matter for the explanation. Bokulich (2008, 2009, 2011, 2012) pursues a similar line of reasoning and sees the explanatory power of models as being closely related to their fictional nature. Bokulich (2009) and Kennedy (2012) present non-representational accounts of model explanation (see also Jebeile and Kennedy 2015). Reiss (2012) and Woody (2004) provide general discussions of the relationship between representation and explanation.

Many authors have pointed out that understanding is one of the central goals of science (see, for instance, de Regt 2017; Elgin 2017; Khalifa 2017; Potochnik 2017). In some cases, we want to understand a certain phenomenon (e.g., why the sky is blue); in other cases, we want to understand a specific scientific theory (e.g., quantum mechanics) that accounts for a phenomenon in question. Sometimes we gain understanding of a phenomenon by understanding the corresponding theory or model. For instance, Maxwell’s theory of electromagnetism helps us understand why the sky is blue. It is, however, controversial whether understanding a phenomenon always presupposes an understanding of the corresponding theory (de Regt 2009: 26).

Although there are many different ways of gaining understanding, models and the activity of scientific modeling are of particular importance here (de Regt et al. 2009; Morrison 2009; Potochnik 2017; Rice 2016). This insight can be traced back at least to Lord Kelvin who, in his famous 1884 Baltimore Lectures on Molecular Dynamics and the Wave Theory of Light , maintained that “the test of ‘Do we or do we not understand a particular subject in physics?’ is ‘Can we make a mechanical model of it?’” (Kelvin 1884 [1987: 111]; see also Bailer-Jones 2009: Ch. 2; and de Regt 2017: Ch. 6).

But why do models play such a crucial role in the understanding of a subject matter? Elgin (2017) argues that this is not despite, but because of, models being literally false. She views false models as “felicitous falsehoods” that occupy center stage in the epistemology of science, and mentions the ideal-gas model in statistical mechanics and the Hardy–Weinberg model in genetics as examples of literally false models that are central to their respective disciplines. Understanding is holistic and it concerns a topic, a discipline, or a subject matter, rather than isolated claims or facts. Gaining understanding of a context means to have

an epistemic commitment to a comprehensive, systematically linked body of information that is grounded in fact, is duly responsive to reasons or evidence, and enables nontrivial inference, argument, and perhaps action regarding the topic the information pertains to (Elgin 2017: 44)

and models can play a crucial role in the pursuit of these epistemic commitments. For a discussion of Elgin’s account of models and understanding, see Baumberger and Brun (2017) and Frigg and Nguyen (forthcoming).

Elgin (2017), Lipton (2009), and Rice (2016) all argue that models can be used to understand independently of their ability to provide an explanation. Other authors, among them Strevens (2008, 2013), argue that understanding presupposes a scientific explanation and that

an individual has scientific understanding of a phenomenon just in case they grasp a correct scientific explanation of that phenomenon. (Strevens 2013: 510; see, however, Sullivan and Khalifa 2019)

On this account, understanding consists in a particular form of epistemic access an individual scientist has to an explanation. For Strevens this aspect is “grasping”, while for de Regt (2017) it is “intelligibility”. It is important to note that both Strevens and de Regt hold that such “subjective” aspects are a worthy topic for investigations in the philosophy of science. This contrasts with the traditional view (see, e.g., Hempel 1965) that delegates them to the realm of psychology. See Friedman (1974), Trout (2002), and Reutlinger et al. (2018) for further discussions of understanding.

Besides the functions already mentioned, it has been emphasized variously that models perform a number of other cognitive functions. Knuuttila (2005, 2011) argues that the epistemic value of models is not limited to their representational function, and develops an account that views models as epistemic artifacts which allow us to gather knowledge in diverse ways. Nersessian (1999, 2010) stresses the role of analogue models in concept-formation and other cognitive processes. Hartmann (1995) and Leplin (1980) discuss models as tools for theory construction and emphasize their heuristic and pedagogical value. Epstein (2008) lists a number of specific functions of models in the social sciences. Peschard (2011) investigates the way in which models may be used to construct other models and generate new target systems. And Isaac (2013) discusses non-explanatory uses of models which do not rely on their representational capacities.

4. Models and Theory

An important question concerns the relation between models and theories. There is a full spectrum of positions ranging from models being subordinate to theories to models being independent of theories.

To discuss the relation between models and theories in science it is helpful to briefly recapitulate the notions of a model and of a theory in logic. A theory is taken to be a (usually deductively closed) set of sentences in a formal language. A model is a structure (in the sense introduced in Section 2.3) that makes all sentences of a theory true when its symbols are interpreted as referring to the objects, relations, or functions of the structure. The structure is a model of the theory in the sense that it is correctly described by the theory (see Bell and Machover 1977 or Hodges 1997 for details). Logical models are sometimes also referred to as “models of a theory” to indicate that they are interpretations of an abstract formal system.
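A toy example (ours, for illustration) makes this definition concrete. Let the theory consist of the single axiom that the relation \(R\) is symmetric:

\[
T = \{\forall x\,\forall y\,(Rxy \rightarrow Ryx)\}.
\]

The structure \(\mathcal{M} = \langle \{1,2\},\, R^{\mathcal{M}}\rangle\) with \(R^{\mathcal{M}} = \{(1,2),(2,1)\}\) makes every sentence of \(T\) true, and so is a logical model of \(T\); the structure with \(R^{\mathcal{M}} = \{(1,2)\}\) is not, because \((2,1)\) is missing and symmetry fails.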

Models in science sometimes carry over from logic the idea of being the interpretation of an abstract calculus (Hesse 1967). This is salient in physics, where general laws—such as Newton’s equation of motion—lie at the heart of a theory. These laws are applied to a particular system—e.g., a pendulum—by choosing a special force function, making assumptions about the mass distribution of the pendulum, etc. The resulting model then is an interpretation (or realization) of the general law.
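The pendulum case can be sketched with a standard textbook derivation (not specific to any author discussed here): Newton’s second law \(F = ma\) becomes a model of the pendulum once we choose the tangential component of gravity, \(F = -mg\sin\theta\), as the force function and assume that the bob is a point mass \(m\) on a massless rod of length \(\ell\):

\[
m\ell\ddot{\theta} = -mg\sin\theta, \qquad\text{i.e.}\qquad \ddot{\theta} = -\frac{g}{\ell}\sin\theta .
\]

The general law supplies the form of the equation; the model-specific assumptions supply the force function and the constraints.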

It is important to keep the notions of a logical and a representational model separate (Thomson-Jones 2006): these are distinct concepts. Something can be a logical model without being a representational model, and vice versa. This, however, does not mean that something cannot be a model in both senses at once. In fact, as Hesse (1967) points out, many models in science are both logical and representational models. Newton’s model of planetary motion is a case in point: the model, consisting of two homogeneous perfect spheres located in otherwise empty space that attract each other gravitationally, is simultaneously a logical model (because it makes the axioms of Newtonian mechanics true when they are interpreted as referring to the model) and a representational model (because it represents the real sun and earth).

There are two main conceptions of scientific theories, the so-called syntactic view of theories and the so-called semantic view of theories (see the entry on the structure of scientific theories ). On both conceptions models play a subsidiary role to theories, albeit in very different ways. The syntactic view of theories (see entry section on the syntactic view ) retains the logical notions of a model and a theory. It construes a theory as a set of sentences in an axiomatized logical system, and a model as an alternative interpretation of a certain calculus (Braithwaite 1953; Campbell 1920 [1957]; Nagel 1961; Spector 1965). If, for instance, we take the mathematics used in the kinetic theory of gases and reinterpret the terms of this calculus in a way that makes them refer to billiard balls, the billiard balls are a model of the kinetic theory of gases in the sense that all sentences of the theory come out true. The model is meant to be something that we are familiar with, and it serves the purpose of making an abstract formal calculus more palpable. A given theory can have different models, and which model we choose depends both on our aims and our background knowledge. Proponents of the syntactic view disagree about the importance of models. Carnap and Hempel thought that models only serve a pedagogic or aesthetic purpose and are ultimately dispensable because all relevant information is contained in the theory (Carnap 1938; Hempel 1965; see also Bailer-Jones 1999). Nagel (1961) and Braithwaite (1953), on the other hand, emphasize the heuristic role of models, and Schaffner (1969) submits that theoretical terms get at least part of their meaning from models.

The semantic view of theories (see entry section on the semantic view ) dispenses with sentences in an axiomatized logical system and construes a theory as a family of models. On this view, a theory literally is a class, cluster, or family of models—models are the building blocks of which scientific theories are made up. Different versions of the semantic view work with different notions of a model, but, as noted in Section 2.3 , in the semantic view models are mostly construed as set-theoretic structures. For a discussion of the different options, we refer the reader to the relevant entry in this encyclopedia (linked at the beginning of this paragraph).

In both the syntactic and the semantic view of theories models are seen as subordinate to theory and as playing no role outside the context of a theory. This vision of models has been challenged in a number of ways, with authors pointing out that models enjoy various degrees of freedom from theory and function autonomously in many contexts. Independence can take many forms, and large parts of the literature on models are concerned with investigating various forms of independence.

Models as completely independent of theory. The most radical departure from a theory-centered analysis of models is the realization that there are models that are completely independent from any theory. An example of such a model is the Lotka–Volterra model, which describes the interaction of two populations: a population of predators and one of prey animals (Weisberg 2013). The model was constructed using only relatively commonsensical assumptions about predators and prey and the mathematics of differential equations. There was no appeal to a theory of predator–prey interactions or a theory of population growth, and the model is independent of theories about its subject matter. If a model is constructed in a domain where no theory is available, then the model is sometimes referred to as a “substitute model” (Groenewold 1961), because the model substitutes for a theory.
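For concreteness, the model can be written down in its standard textbook form (the symbols below are the conventional ones, not taken from the text): with \(x\) the prey population, \(y\) the predator population, and \(\alpha, \beta, \gamma, \delta\) positive rate parameters,

```latex
\begin{aligned}
\frac{dx}{dt} &= \alpha x - \beta x y \\[4pt]
\frac{dy}{dt} &= \delta x y - \gamma y
\end{aligned}
```

Each term encodes one of the commonsensical assumptions just mentioned: prey grow in the absence of predators (\(\alpha x\)), predation removes prey and feeds predator growth (the \(xy\) terms), and predators die off without prey (\(-\gamma y\)). Nothing in these equations is derived from a theory of predator–prey interaction.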

Models as a means to explore theory. Models can also be used to explore theories (Morgan and Morrison 1999). An obvious way in which this can happen is when a model is a logical model of a theory (see Section 4.1). A logical model is a set of objects and properties that make a formal sentence true, and so one can see in the model how the axioms of the theory play out in a particular setting and what kinds of behavior they dictate. But not all models that are used to explore theories are logical models, and models can represent features of theories in other ways. As an example, consider chaos theory. The equations of non-linear systems, such as those describing the three-body problem, have solutions that are too complex to study with paper-and-pencil methods, and even computer simulations are limited in various ways. Abstract considerations about the qualitative behavior of solutions show that there is a mechanism that has been dubbed “stretching and folding” (see the entry on chaos). To obtain an idea of the complexity of the dynamics exhibiting stretching and folding, Smale proposed to study a simple model of the flow—now known as the “horseshoe map” (Tabor 1989)—which provides important insights into the nature of stretching and folding. Other examples of models of that kind are the Kac ring model that is used to study equilibrium properties of systems in statistical mechanics (Lavis 2008) and Norton’s dome in Newtonian mechanics (Norton 2003).
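The horseshoe map itself is a geometric construction, but the sensitivity that stretching and folding produces can be illustrated numerically. The sketch below uses the logistic map at \(r = 4\)—a standard toy example of stretching-and-folding dynamics, chosen here only for brevity and not discussed in the text—to show two nearby trajectories diverging:

```python
# Illustrative sketch only: the logistic map x -> r*x*(1-x) at r = 4 is a
# standard toy example of "stretching and folding", not the horseshoe map
# itself (which is a geometric construction on a square).

def logistic(x, r=4.0):
    return r * x * (1.0 - x)

def trajectory(x0, n, r=4.0):
    """Iterate the map n times, returning the full orbit including x0."""
    xs = [x0]
    for _ in range(n):
        xs.append(logistic(xs[-1], r))
    return xs

# Two trajectories starting a tiny distance apart:
a = trajectory(0.2, 60)
b = trajectory(0.2 + 1e-9, 60)
gaps = [abs(x - y) for x, y in zip(a, b)]
```

The initial separation of \(10^{-9}\) grows by many orders of magnitude within a few dozen iterations, while every iterate stays confined to the unit interval; confinement combined with divergence is precisely what stretching and folding delivers.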

Models as complements of theories. A theory may be incompletely specified in the sense that it only imposes certain general constraints but remains silent about the details of concrete situations, which are provided by a model (Redhead 1980). A special case of this situation is when a qualitative theory is known and the model introduces quantitative measures (Apostel 1961). Redhead’s example of a theory that is underdetermined in this way is axiomatic quantum field theory, which only imposes certain general constraints on quantum fields but does not provide an account of particular fields. Harré (2004) notes that models can complement theories by providing mechanisms for processes that are left unspecified in the theory even though they are responsible for bringing about the observed phenomena.

Theories may be too complicated to handle. In such cases a model can complement a theory by providing a simplified version of the theoretical scenario that allows for a solution. Quantum chromodynamics, for instance, cannot easily be used to investigate the physics of an atomic nucleus even though it is the relevant fundamental theory. To get around this difficulty, physicists construct tractable phenomenological models (such as the MIT bag model) which effectively describe the relevant degrees of freedom of the system under consideration (Hartmann 1999, 2001). The advantage of these models is that they yield results where theories remain silent. Their drawback is that it is often not clear how to understand the relationship between the model and the theory, as the two are, strictly speaking, contradictory.

Models as preliminary theories. The notion of a model as a substitute for a theory is closely related to the notion of a developmental model. This term was coined by Leplin (1980), who pointed out how useful models were in the development of early quantum theory, and it is now used as an umbrella notion covering cases in which models are some sort of preliminary exercise to theory.

Also closely related is the notion of a probing model (or “study model”). Models of this kind do not perform a representational function and are not expected to instruct us about anything beyond the model itself. The purpose of these models is to test new theoretical tools that are used later on to build representational models. In field theory, for instance, the so-called φ4-model was studied extensively, not because it was believed to represent anything real, but because it served several heuristic functions: the simplicity of the φ4-model allowed physicists to “get a feeling” for what quantum field theories are like and to extract some general features that this simple model shared with more complicated ones. Physicists could study complicated techniques such as renormalization in a simple setting, and it was possible to get acquainted with important mechanisms—in this case symmetry-breaking—that could later be used in different contexts (Hartmann 1995). This is true not only for physics. As Wimsatt (1987, 2007) points out, a false model in genetics can perform many useful functions, among them the following: the false model can help answer questions about more realistic models, provide an arena for answering questions about properties of more complex models, “factor out” phenomena that would not otherwise be seen, serve as a limiting case of a more general model (or two false models may define the extremes of a continuum of cases on which the real case is supposed to lie), or lead to the identification of relevant variables and the estimation of their values.
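For reference, the φ4-model is standardly specified by the following Lagrangian density for a single real scalar field \(\varphi\) (textbook notation, not given in the text):

```latex
\mathcal{L} \;=\; \tfrac{1}{2}\,\partial_\mu \varphi\, \partial^\mu \varphi
\;-\; \tfrac{1}{2}\, m^2 \varphi^2 \;-\; \frac{\lambda}{4!}\,\varphi^4
```

Its heuristic value is visible already at this level: the \(\varphi^4\) term is the simplest self-interaction on which techniques like renormalization can be exercised, and choosing \(m^2 < 0\) turns the potential into a double well whose two minima break the \(\varphi \to -\varphi\) symmetry—the symmetry-breaking mechanism mentioned above.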

Interpretative models. Cartwright (1983, 1999) argues that models do not only aid the application of theories that are somehow incomplete; she claims that models are also involved whenever a theory with an overarching mathematical structure is applied. The main theories in physics—classical mechanics, electrodynamics, quantum mechanics, and so on—fall into this category. Theories of that kind are formulated in terms of abstract concepts that need to be concretized for the theory to provide a description of the target system, and in concretizing the relevant concepts, idealized objects and processes are introduced. For instance, when applying classical mechanics, the abstract concept of force has to be replaced with a concrete force such as gravity. To obtain tractable equations, this procedure has to be applied to a simplified scenario, for instance that of two perfectly spherical and homogeneous planets in otherwise empty space, rather than to reality in its full complexity. The result is an interpretative model, which grounds the application of mathematical theories to real-world targets. Such models are independent from theory in that the theory does not determine their form, and yet they are necessary for the application of the theory to a concrete problem.

Models as mediators. The relation between models and theories can be complicated and disorderly. The contributors to a programmatic collection of essays edited by Morgan and Morrison (1999) rally around the idea that models are instruments that mediate between theories and the world. Models are “autonomous agents” in that they are independent from both theories and their target systems, and it is this independence that allows them to mediate between the two. Theories do not provide us with algorithms for the construction of a model; they are not “vending machines” into which one can insert a problem and a model pops out (Cartwright 1999). The construction of a model often requires detailed knowledge about materials, approximation schemes, and the setup, and these are not provided by the corresponding theory. Furthermore, the inner workings of a model are often driven by a number of different theories working cooperatively. In contemporary climate modeling, for instance, elements of different theories—among them fluid dynamics, thermodynamics, electromagnetism—are put to work cooperatively. What delivers the results is not the stringent application of one theory, but the voices of different theories when put to use in chorus with each other in one model.

In complex cases like the study of a laser system or the global climate, models and theories can get so entangled that it becomes unclear where a line between the two should be drawn: where does the model end and the theory begin? This is not only a problem for philosophical analysis; it also arises in scientific practice. Bailer-Jones (2002) interviewed a group of physicists about their understanding of models and their relation to theories, and reports widely diverging views: (i) there is no substantive difference between model and theory; (ii) models become theories when their degree of confirmation increases; (iii) models contain simplifications and omissions, while theories are accurate and complete; (iv) theories are more general than models, and modeling is about applying general theories to specific cases. The first suggestion seems to be too radical to do justice to many aspects of practice, where a distinction between models and theories is clearly made. The second view is in line with common parlance, where the terms “model” and “theory” are sometimes used to express someone’s attitude towards a particular hypothesis. The phrase “it’s just a model” indicates that the hypothesis at stake is asserted only tentatively or is even known to be false, while something is awarded the label “theory” if it has acquired some degree of general acceptance. However, this use of “model” is different from the uses we have seen in Sections 1 to 3 and is therefore of no use if we aim to understand the relation between scientific models and theories (and, incidentally, one can equally dismiss speculative claims as being “just a theory”). The third proposal is correct in associating models with idealizations and simplifications, but it overshoots by restricting this to models; in fact, theories, too, can contain idealizations and simplifications. The fourth view seems closely aligned with interpretative models and the idea that models are mediators, but being more general is a gradual notion and hence does not provide a clear-cut criterion to distinguish between theories and models.

5. Models and Other Debates in the Philosophy of Science

The debate over scientific models has important repercussions for other issues in the philosophy of science (for a historical account of the philosophical discussion about models, see Bailer-Jones 1999). Traditionally, the debates over, say, scientific realism, reductionism, and laws of nature were couched in terms of theories, because theories were seen as the main carriers of scientific knowledge. Once models are acknowledged as occupying an important place in the edifice of science, these issues have to be reconsidered with a focus on models. The question is whether, and if so how, discussions of these issues change when we shift focus from theories to models. Up to now, no comprehensive model-based account of any of these issues has emerged, but models have left important traces in the discussions of these topics.

As we have seen in Section 1, models typically provide a distorted representation of their targets. If one sees science as primarily model-based, this could be taken to suggest an antirealist interpretation of science. Realists, however, deny that the presence of idealizations in models renders a realist approach to science impossible and point out that a good model, while not literally true, is usually at least approximately true, and/or that it can be improved by de-idealization (Laymon 1985; McMullin 1985; Nowak 1979; Brzezinski and Nowak 1992).

Apart from the usual worries about the elusiveness of the notion of approximate truth (for a discussion, see the entry on truthlikeness ), antirealists have taken issue with this reply for two (related) reasons. First, as Cartwright (1989) points out, there is no reason to assume that one can always improve a model by adding de-idealizing corrections. Second, it seems that de-idealization is not in accordance with scientific practice because it is unusual that scientists invest work in repeatedly de-idealizing an existing model. Rather, they shift to a different modeling framework once the adjustments to be made get too involved (Hartmann 1998). The various models of the atomic nucleus are a case in point: once it was realized that shell effects are important to understand various subatomic phenomena, the (collective) liquid-drop model was put aside and the (single-particle) shell model was developed to account for the corresponding findings. A further difficulty with de-idealization is that most idealizations are not “controlled”. For example, it is not clear in what way one could de-idealize the MIT bag model to eventually arrive at quantum chromodynamics, the supposedly correct underlying theory.

A further antirealist argument, the “incompatible-models argument”, takes as its starting point the observation that scientists often successfully use several incompatible models of one and the same target system for predictive purposes (Morrison 2000). These models seemingly contradict each other, as they ascribe different properties to the same target system. In nuclear physics, for instance, the liquid-drop model explores the analogy of the atomic nucleus with a (charged) fluid drop, while the shell model describes nuclear properties in terms of the properties of protons and neutrons, the constituents of an atomic nucleus. This practice appears to cause a problem for scientific realism: Realists typically hold that there is a close connection between the predictive success of a theory and its being at least approximately true. But if several models of the same system are predictively successful and if these models are mutually inconsistent, then it is difficult to maintain that they are all approximately true.

Realists can react to this argument in various ways. First, they can challenge the claim that the models in question are indeed predictively successful. If the models are not good predictors, then the argument is blocked. Second, they can defend a version of “perspectival realism” (Giere 2006; Massimi 2017; Rueger 2005). Proponents of this position (which is sometimes also called “perspectivism”) situate it somewhere between “standard” scientific realism and antirealism, and where exactly the right middle position lies is the subject matter of active debate (Massimi 2018a,b; Saatsi 2016; Teller 2018; and the contributions to Massimi and McCoy 2019). Third, realists can deny that there is a problem in the first place, because scientific models, which are always idealized and therefore strictly speaking false, are just the wrong vehicle to make a point about realism (which should be discussed in terms of theories).

A particular focal point of the realism debate is laws of nature, which raise the questions of what laws are and whether they are truthfully reflected in our scientific representations. According to the two currently dominant accounts, the best-systems approach and the necessitarian approach, laws of nature are understood to be universal in scope, meaning that they apply to everything that there is in the world (for discussion of laws, see the entry on laws of nature). This take on laws does not seem to sit well with a view that places models at the center of scientific research. What role do general laws play in science if it is models that represent what is happening in the world? And how are models and laws related?

One possible response to these questions is to argue that laws of nature govern entities and processes in a model rather than in the world. Fundamental laws, on this approach, do not state facts about the world but hold true of entities and processes in the model. This view has been advocated in different variants: Cartwright (1983) argues that all laws are ceteris paribus laws. Cartwright (1999) makes use of “capacities” (which she considers to be prior to laws) and introduces the notion of a “nomological machine”. This is

a fixed (enough) arrangement of components, or factors, with stable (enough) capacities that in the right sort of stable (enough) environment will, with repeated operation, give rise to the kind of regular behavior that we represent in our scientific laws. (1999: 50; see also the entry on ceteris paribus laws )

Giere (1999) argues that the laws of a theory are better thought of, not as encoding general truths about the world, but rather as open-ended statements that can be filled in various ways in the process of building more specific scientific models. Similar positions have also been defended by Teller (2001) and van Fraassen (1989).

The multiple-models problem mentioned in Section 5.1 also raises the question of how different models are related. Evidently, multiple models for the same target system do not generally stand in a deductive relationship, as they often contradict each other. Some (Cartwright 1999; Hacking 1983) have suggested a picture of science according to which there are no systematic relations that hold between different models. Some models are tied together because they represent the same target system, but this does not imply that they enter into any further relationships (deductive or otherwise). We are confronted with a patchwork of models, all of which hold ceteris paribus in their specific domains of applicability.

Some argue that this picture is at least partially incorrect because there are various interesting relations that hold between different models or theories. These relations range from thoroughgoing reductive relations (Scheibe 1997, 1999, 2001: esp. Chs. V.23 and V.24) and controlled approximations over singular limit relations (Batterman 2001 [2016]) to structural relations (Gähde 1997) and rather loose relations called “stories” (Hartmann 1999; see also Bokulich 2003; Teller 2002; and the essays collected in Part III of Hartmann et al. 2008). These suggestions have been made on the basis of case studies, and it remains to be seen whether a more general account of these relations can be given and whether a deeper justification for them can be provided, for instance, within a Bayesian framework (first steps towards a Bayesian understanding of reductive relations can be found in Dizadji-Bahmani et al. 2011; Liefke and Hartmann 2018; and Tešić 2019).

Models also figure in the debate about reduction and emergence in physics. Here, some authors argue that the modern approach to renormalization challenges Nagel’s (1961) model of reduction or the broader doctrine of reductionism (for a critical discussion, see, for instance, Batterman 2002, 2010, 2011; Morrison 2012; and Saatsi and Reutlinger 2018). Dizadji-Bahmani et al. (2010) provide a defense of the Nagel–Schaffner model of reduction, and Butterfield (2011a,b, 2014) argues that renormalization is consistent with Nagelian reduction. Palacios (2019) shows that phase transitions are compatible with reductionism, and Hartmann (2001) argues that the effective-field-theories research program is consistent with reductionism (see also Bain 2013 and Franklin forthcoming). Rosaler (2015) argues for a “local” form of reduction which sees the fundamental relation of reduction holding between models, not theories, which is, however, compatible with the Nagel–Schaffner model of reduction. See also the entries on intertheory relations in physics and scientific reduction.

In the social sciences, agent-based models (ABMs) are increasingly used (Klein et al. 2018). These models show how surprisingly complex behavioral patterns at the macro-scale can emerge from a small number of simple behavioral rules for the individual agents and their interactions. This raises questions similar to the questions mentioned above about reduction and emergence in physics, but so far one only finds scattered remarks about reduction in the literature. See Weisberg and Muldoon (2009) and Zollman (2007) for the application of ABMs to the epistemology and the social structure of science, and Colyvan (2013) for a discussion of methodological questions raised by normative models in general.
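A minimal sketch can make the macro-from-micro point concrete. The toy model below—a one-dimensional Schelling-style segregation model whose rules, parameters, and helper names are all hypothetical illustrations, not taken from the cited works—gives agents of two types a single rule (relocate if neither neighbor shares your type), from which segregated clusters tend to emerge at the macro level:

```python
import random

# Hypothetical toy ABM: a one-dimensional Schelling-style segregation model.
# All rules and parameters are illustrative, not drawn from the cited works.

def like_fraction(agents, i):
    """Fraction of agent i's two ring neighbors that share its type."""
    n = len(agents)
    same = sum(1 for j in (i - 1, i + 1) if agents[j % n] == agents[i])
    return same / 2

def sweep(agents, threshold=0.5, rng=random):
    """One pass: every discontented agent swaps places with a random agent."""
    n = len(agents)
    for i in range(n):
        if like_fraction(agents, i) < threshold:
            j = rng.randrange(n)
            agents[i], agents[j] = agents[j], agents[i]

rng = random.Random(0)
agents = [rng.choice("AB") for _ in range(100)]
for _ in range(50):
    sweep(agents, rng=rng)

# A crude macro-level segregation measure: the population-wide average
# like-neighbor fraction (0.5 is the fully mixed baseline).
segregation = sum(like_fraction(agents, i) for i in range(100)) / 100
```

Nothing in the individual rule mentions clusters, yet repeated sweeps tend to push the measure above the mixed baseline—the kind of emergent macro-scale pattern the ABM literature is concerned with.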

  • Achinstein, Peter, 1968, Concepts of Science: A Philosophical Analysis , Baltimore, MD: Johns Hopkins Press.
  • Akerlof, George A., 1970, “The Market for ‘Lemons’: Quality Uncertainty and the Market Mechanism”, The Quarterly Journal of Economics , 84(3): 488–500. doi:10.2307/1879431
  • Apostel, Leo, 1961, “Towards the Formal Study of Models in the Non-Formal Sciences”, in Freudenthal 1961: 1–37. doi:10.1007/978-94-010-3667-2_1
  • Bailer-Jones, Daniela M., 1999, “Tracing the Development of Models in the Philosophy of Science”, in Magnani, Nersessian, and Thagard 1999: 23–40. doi:10.1007/978-1-4615-4813-3_2
  • –––, 2002, “Scientists’ Thoughts on Scientific Models”, Perspectives on Science , 10(3): 275–301. doi:10.1162/106361402321899069
  • –––, 2009, Scientific Models in Philosophy of Science , Pittsburgh, PA: University of Pittsburgh Press.
  • Bailer-Jones, Daniela M. and Coryn A. L. Bailer-Jones, 2002, “Modeling Data: Analogies in Neural Networks, Simulated Annealing and Genetic Algorithms”, in Magnani and Nersessian 2002: 147–165. doi:10.1007/978-1-4615-0605-8_9
  • Bain, Jonathan, 2013, “Emergence in Effective Field Theories”, European Journal for Philosophy of Science , 3(3): 257–273. doi:10.1007/s13194-013-0067-0
  • Bandyopadhyay, Prasanta S. and Malcolm R. Forster (eds.), 2011, Philosophy of Statistics (Handbook of the Philosophy of Science 7), Amsterdam: Elsevier.
  • Barberousse, Anouk and Pascal Ludwig, 2009, “Fictions and Models”, in Suárez 2009: 56–75.
  • Bartha, Paul, 2010, By Parallel Reasoning: The Construction and Evaluation of Analogical Arguments , New York: Oxford University Press. doi:10.1093/acprof:oso/9780195325539.001.0001
  • –––, 2013 [2019], “Analogy and Analogical Reasoning”, in Edward N. Zalta (ed.), The Stanford Encyclopedia of Philosophy (Spring 2019 Edition). URL = < https://plato.stanford.edu/archives/spr2019/entries/reasoning-analogy/ >
  • Batterman, Robert W., 2002, The Devil in the Details: Asymptotic Reasoning in Explanation, Reduction, and Emergence , Oxford: Oxford University Press. doi:10.1093/0195146476.001.0001
  • –––, 2001 [2016], “Intertheory Relations in Physics”, in Edward N. Zalta (ed.), The Stanford Encyclopedia of Philosophy (Fall 2016 Edition). URL = < https://plato.stanford.edu/archives/fall2016/entries/physics-interrelate >
  • –––, 2010, “Reduction and Renormalization”, in Gerhard Ernst and Andreas Hüttemann (eds.), Time, Chance and Reduction: Philosophical Aspects of Statistical Mechanics , Cambridge: Cambridge University Press, pp. 159–179.
  • –––, 2011, “Emergence, Singularities, and Symmetry Breaking”, Foundations of Physics , 41(6): 1031–1050. doi:10.1007/s10701-010-9493-4
  • Batterman, Robert W. and Collin C. Rice, 2014, “Minimal Model Explanations”, Philosophy of Science , 81(3): 349–376. doi:10.1086/676677
  • Baumberger, Christoph and Georg Brun, 2017, “Dimensions of Objectual Understanding”, in Stephen R. Grimm, Christoph Baumberger, and Sabine Ammon (eds.), Explaining Understanding: New Perspectives from Epistemology and Philosophy of Science , New York: Routledge, pp. 165–189.
  • Bell, John and Moshé Machover, 1977, A Course in Mathematical Logic , Amsterdam: North-Holland.
  • Berry, Michael, 2002, “Singular Limits”, Physics Today , 55(5): 10–11. doi:10.1063/1.1485555
  • Black, Max, 1962, Models and Metaphors: Studies in Language and Philosophy , Ithaca, NY: Cornell University Press.
  • Bogen, James and James Woodward, 1988, “Saving the Phenomena”, The Philosophical Review , 97(3): 303–352. doi:10.2307/2185445
  • Bokulich, Alisa, 2003, “Horizontal Models: From Bakers to Cats”, Philosophy of Science , 70(3): 609–627. doi:10.1086/376927
  • –––, 2008, Reexamining the Quantum–Classical Relation: Beyond Reductionism and Pluralism , Cambridge: Cambridge University Press. doi:10.1017/CBO9780511751813
  • –––, 2009, “Explanatory Fictions”, in Suárez 2009: 91–109.
  • –––, 2011, “How Scientific Models Can Explain”, Synthese , 180(1): 33–45. doi:10.1007/s11229-009-9565-1
  • –––, 2012, “Distinguishing Explanatory from Nonexplanatory Fictions”, Philosophy of Science , 79(5): 725–737. doi:10.1086/667991
  • Braithwaite, Richard, 1953, Scientific Explanation , Cambridge: Cambridge University Press.
  • Braun, Norman and Nicole J. Saam (eds.), 2015, Handbuch Modellbildung und Simulation in den Sozialwissenschaften , Wiesbaden: Springer Fachmedien. doi:10.1007/978-3-658-01164-2
  • Brewer, William F. and Clark A. Chinn, 1994, “Scientists’ Responses to Anomalous Data: Evidence from Psychology, History, and Philosophy of Science”, in PSA 1994: Proceedings of the 1994 Biennial Meeting of the Philosophy of Science Association , Vol. 1, pp. 304–313. doi:10.1086/psaprocbienmeetp.1994.1.193035
  • Brown, James, 1991, The Laboratory of the Mind: Thought Experiments in the Natural Sciences , London: Routledge.
  • Brzezinski, Jerzy and Leszek Nowak (eds.), 1992, Idealization III: Approximation and Truth , Amsterdam: Rodopi.
  • Butterfield, Jeremy, 2011a, “Emergence, Reduction and Supervenience: A Varied Landscape”, Foundations of Physics , 41(6): 920–959. doi:10.1007/s10701-011-9549-0
  • –––, 2011b, “Less Is Different: Emergence and Reduction Reconciled”, Foundations of Physics , 41(6): 1065–1135. doi:10.1007/s10701-010-9516-1
  • –––, 2014, “Reduction, Emergence, and Renormalization”, Journal of Philosophy , 111(1): 5–49. doi:10.5840/jphil201411111
  • Callender, Craig and Jonathan Cohen, 2006, “There Is No Special Problem about Scientific Representation”, Theoria , 55(1): 67–85.
  • Campbell, Norman, 1920 [1957], Physics: The Elements , Cambridge: Cambridge University Press. Reprinted as Foundations of Science , New York: Dover, 1957.
  • Carnap, Rudolf, 1938, “Foundations of Logic and Mathematics”, in Otto Neurath, Charles Morris, and Rudolf Carnap (eds.), International Encyclopaedia of Unified Science , Volume 1, Chicago, IL: University of Chicago Press, pp. 139–213.
  • Cartwright, Nancy, 1983, How the Laws of Physics Lie , Oxford: Oxford University Press. doi:10.1093/0198247044.001.0001
  • –––, 1989, Nature’s Capacities and Their Measurement , Oxford: Oxford University Press. doi:10.1093/0198235070.001.0001
  • –––, 1999, The Dappled World: A Study of the Boundaries of Science , Cambridge: Cambridge University Press. doi:10.1017/CBO9781139167093
  • Colombo, Matteo, Stephan Hartmann, and Robert van Iersel, 2015, “Models, Mechanisms, and Coherence”, The British Journal for the Philosophy of Science , 66(1): 181–212. doi:10.1093/bjps/axt043
  • Colyvan, Mark, 2013, “Idealisations in Normative Models”, Synthese , 190(8): 1337–1350. doi:10.1007/s11229-012-0166-z
  • Contessa, Gabriele, 2010, “Scientific Models and Fictional Objects”, Synthese , 172(2): 215–229. doi:10.1007/s11229-009-9503-2
  • Crowther, Karen, Niels S. Linnemann, and Christian Wüthrich, forthcoming, “What We Cannot Learn from Analogue Experiments”, Synthese , first online: 4 May 2019. doi:10.1007/s11229-019-02190-0
  • Da Costa, Newton and Steven French, 2000, “Models, Theories, and Structures: Thirty Years On”, Philosophy of Science , 67(supplement): S116–S127. doi:10.1086/392813
  • Dardashti, Radin, Stephan Hartmann, Karim Thébault, and Eric Winsberg, 2019, “Hawking Radiation and Analogue Experiments: A Bayesian Analysis”, Studies in History and Philosophy of Science Part B: Studies in History and Philosophy of Modern Physics , 67: 1–11. doi:10.1016/j.shpsb.2019.04.004
  • Dardashti, Radin, Karim P. Y. Thébault, and Eric Winsberg, 2017, “Confirmation via Analogue Simulation: What Dumb Holes Could Tell Us about Gravity”, The British Journal for the Philosophy of Science , 68(1): 55–89. doi:10.1093/bjps/axv010
  • de Regt, Henk, 2009, “Understanding and Scientific Explanation”, in de Regt, Leonelli, and Eigner 2009: 21–42.
  • –––, 2017, Understanding Scientific Understanding , Oxford: Oxford University Press. doi:10.1093/oso/9780190652913.001.0001
  • de Regt, Henk, Sabina Leonelli, and Kai Eigner (eds.), 2009, Scientific Understanding: Philosophical Perspectives , Pittsburgh, PA: University of Pittsburgh Press.
  • Dizadji-Bahmani, Foad, Roman Frigg, and Stephan Hartmann, 2010, “Who’s Afraid of Nagelian Reduction?”, Erkenntnis , 73(3): 393–412. doi:10.1007/s10670-010-9239-x
  • –––, 2011, “Confirmation and Reduction: A Bayesian Account”, Synthese , 179(2): 321–338. doi:10.1007/s11229-010-9775-6
  • Downes, Stephen M., 1992, “The Importance of Models in Theorizing: A Deflationary Semantic View”, PSA: Proceedings of the Biennial Meeting of the Philosophy of Science Association , 1992(1): 142–153. doi:10.1086/psaprocbienmeetp.1992.1.192750
  • Elgin, Catherine Z., 2010, “Telling Instances”, in Roman Frigg and Matthew Hunter (eds.), Beyond Mimesis and Convention (Boston Studies in the Philosophy of Science 262), Dordrecht: Springer Netherlands, pp. 1–17. doi:10.1007/978-90-481-3851-7_1
  • –––, 2017, True Enough . Cambridge, MA, and London: MIT Press.
  • Elgin, Mehmet and Elliott Sober, 2002, “Cartwright on Explanation and Idealization”, Erkenntnis , 57(3): 441–450. doi:10.1023/A:1021502932490
  • Epstein, Joshua M., 2008, “Why Model?”, Journal of Artificial Societies and Social Simulation , 11(4): 12. [ Epstein 2008 available online ]
  • Fisher, Grant, 2006, “The Autonomy of Models and Explanation: Anomalous Molecular Rearrangements in Early Twentieth-Century Physical Organic Chemistry”, Studies in History and Philosophy of Science Part A , 37(4): 562–584. doi:10.1016/j.shpsa.2006.09.009
Biology LibreTexts

1.2: Models, Hypotheses, and Theories

  • Page ID 3901

  • Michael W. Klymkowsky and Melanie M. Cooper
  • University of Colorado Boulder and Michigan State University

Tentative scientific models are commonly known as hypotheses. Such models are valuable in that they serve as a way to articulate one's assumptions and their implications clearly, and they form the logical basis for generating testable predictions about the phenomena they purport to explain. As scientific models become more sophisticated, their predictions can be expected to become more accurate, or to apply to areas that previous forms of the model could not handle.

Let us assume that two models are equally good at explaining a particular observation. How might we judge between them? One way is the rule of thumb known as Occam's Razor, also called the Principle of Parsimony, named after the medieval philosopher William of Occam (1287–1347). This rule states that, all other things being equal, the simplest explanation is to be preferred. This is not to imply that an accurate scientific explanation will be simple, or that the simplest explanation is the correct one, only that to be useful, a scientific model should be no more complex than necessary. Consider two models for a particular phenomenon, one that involves angels and one that does not. We need not seriously consider the model that invokes angels unless we can accurately monitor their presence and, if so, determine whether they are actively involved in the process to be explained. Why? Because angels, if they exist, imply more complex factors than does a simple natural explanation. For example, we would have to explain what angels are made of, how they originated, and how they intervene in, or interact with, the physical world, that is, how they make matter do things. Do they obey the laws of thermodynamics or not? Under what conditions do they intervene? Are their interventions consistent or capricious? Assuming that an alternative, angel-less model is as or more accurate at describing the phenomenon, the scientific choice would be the angel-less model.
Parsimony (in everyday usage, an extreme unwillingness to spend money or use resources) has the practical effect of letting us restrict our thinking to the minimal model needed to explain specific phenomena. The surprising result, well illustrated in a talk by Murray Gell-Mann, is that simple, albeit often counter-intuitive, rules can explain much of the Universe with remarkable precision. 17 A model that fails to accurately describe and predict the observable world must be missing something, and is either partially or completely wrong.
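This preference for the minimal adequate model is more than a philosophical taste: statisticians build a complexity penalty directly into model selection. The sketch below is a hypothetical illustration, not something from this text; it uses the Akaike Information Criterion (AIC), one standard way to compare models fit by least squares. When two models describe the data equally well, the penalty on extra parameters alone decides between them.

```python
import math

def aic(rss: float, n: int, k: int) -> float:
    """Akaike Information Criterion for a least-squares fit with n data
    points, k fitted parameters, and residual sum of squares rss.
    Lower is better; the 2*k term is the parsimony penalty."""
    return n * math.log(rss / n) + 2 * k

n = 20        # hypothetical number of observations
rss = 10.0    # suppose both models fit the data equally well
aic_simple = aic(rss, n, k=2)    # e.g. a straight line: slope + intercept
aic_complex = aic(rss, n, k=6)   # e.g. a quintic: six coefficients

print(aic_simple < aic_complex)  # True: equal fit, so the simpler model wins
print(aic_complex - aic_simple)  # 8.0 = 2 * (6 - 2), the penalty difference
```

Note that if the more complex model reduced the residuals enough, its AIC could still come out lower; parsimony does not forbid complexity, it only penalizes complexity that buys no additional accuracy.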

Scientific models are continually being modified, expanded, or replaced in order to explain more and more phenomena more and more accurately. It is an implicit assumption of science that the Universe can be understood in scientific terms, and this presumption has been repeatedly confirmed but has by no means been proven.

A model that has been repeatedly confirmed and covers many different observations is known as a theory – at least, this is the meaning of the word in a rigorous scientific context. It is worth noting that the word theory is often misused, even by scientists who might be expected to know better. If there are multiple “theories” to explain a particular phenomenon, it is more correct to say that i) these are not actually theories, in the scientific sense, but rather working models or simple speculations, and that ii) one or more, and perhaps all, of these models are incorrect or incomplete. A scientific theory is a very special set of ideas that explains, in a logically consistent, empirically supported, and predictive manner, a broad range of phenomena. Moreover, it has been tested repeatedly by a number of critical and objective people – that is, people who have no vested interest in the outcome – and found to provide accurate descriptions of the phenomena it purports to explain. It is not idle speculation. If you are curious, you might count how many times the word theory is misused, at least in the scientific sense, in your various classes.

"Gravity explains the motions of the planets, but it cannot explain who sets the planets in motion." - Isaac Newton

That said, theories are not static. New or more accurate observations that a theory cannot explain will inevitably drive the theory's revision or replacement. When this occurs, the new theory must explain the new observations as well as everything explained by the older theory. Consider, for example, gravity. Isaac Newton’s law of gravity describes how objects behave, and it is possible to make extremely accurate predictions of how objects behave using its rules. However, Newton did not really have a theory of gravity, that is, a naturalistic explanation for why gravity exists and why it behaves the way it does. He relied, in fact, on a supernatural explanation. 18 When it was shown that Newton’s law of gravity fails in specific situations, such as when an object is in close proximity to a massive object, like the sun, new rules and explanations were needed. Albert Einstein’s Theory of General Relativity not only predicts the behavior of these systems more accurately, but also provides a naturalistic explanation for the origin of the gravitational force. 19 So is general relativity true? Not necessarily, which is why scientists continue to test its predictions in increasingly extreme situations.

17 Beauty, truth and ... physics?

18 Want to read an interesting biography of Newton? Check out “Isaac Newton” by James Gleick

19 A good video on General Relativity

Contributors and Attributions

Michael W. Klymkowsky (University of Colorado Boulder) and Melanie M. Cooper (Michigan State University), with significant contributions by Emina Begovic and editorial assistance from Rebecca Klymkowsky.

Scientific Hypothesis, Model, Theory, and Law

Understanding the Difference Between Basic Scientific Terms


Words have precise meanings in science. For example, "theory," "law," and "hypothesis" don't all mean the same thing. Outside of science, you might say something is "just a theory," meaning it's a supposition that may or may not be true. In science, however, a theory is an explanation that generally is accepted to be true. Here's a closer look at these important, commonly misused terms.

A hypothesis is an educated guess, based on observation. It's a prediction of cause and effect. Usually, a hypothesis can be supported or refuted through experimentation or more observation. A hypothesis can be disproven but not proven to be true.

Example: If you see no difference in the cleaning ability of various laundry detergents, you might hypothesize that cleaning effectiveness is not affected by which detergent you use. This hypothesis can be disproven if you observe a stain is removed by one detergent and not another. On the other hand, you cannot prove the hypothesis. Even if you never see a difference in the cleanliness of your clothes after trying 1,000 detergents, there might be one more you haven't tried that could be different.

Scientists often construct models to help explain complex concepts. These can be physical models, like a model volcano or atom, or conceptual models, like predictive weather algorithms. A model doesn't contain all the details of the real deal, but it should include observations known to be valid.

Example: The Bohr model shows electrons orbiting the atomic nucleus, much the same way planets revolve around the sun. In reality, the movement of electrons is complicated, but the model makes it clear that protons and neutrons form a nucleus and that electrons tend to move around outside the nucleus.

A scientific theory summarizes a hypothesis or group of hypotheses that have been supported with repeated testing. A theory is valid as long as there is no evidence to dispute it. Therefore, theories can be disproven. Basically, if evidence accumulates to support a hypothesis, then the hypothesis can become accepted as a good explanation of a phenomenon. One definition of a theory is to say that it's an accepted hypothesis.

Example: It is known that on June 30, 1908, in Tunguska, Siberia, there was an explosion equivalent to the detonation of about 15 million tons of TNT. Many hypotheses have been proposed for what caused the explosion. It was theorized that the explosion was caused by a natural extraterrestrial phenomenon, and was not caused by man. Is this theory a fact? No. The event is a recorded fact. Is this theory, generally accepted to be true, based on evidence to date? Yes. Can this theory be shown to be false and be discarded? Yes.

A scientific law generalizes a body of observations. At the time it's made, no exceptions have been found to a law. Scientific laws describe things, but they do not explain them. One way to tell a law and a theory apart is to ask if the description gives you the means to explain "why." The word "law" is used less and less in science, as many laws are only true under limited circumstances.

Example: Consider Newton's Law of Gravity. Newton could use this law to predict the behavior of a dropped object, but he couldn't explain why it happened.

As you can see, there is no "proof" or absolute "truth" in science. The closest we get are facts, which are indisputable observations. Note, however, if you define proof as arriving at a logical conclusion, based on the evidence, then there is "proof" in science. Some work under the definition that to prove something implies it can never be wrong, which is different. If you're asked to define the terms hypothesis, theory, and law, keep in mind the definitions of proof and of these words can vary slightly depending on the scientific discipline. What's important is to realize they don't all mean the same thing and cannot be used interchangeably.



Statistics LibreTexts

15.5: Hypothesis Tests for Regression Models


  • Danielle Navarro
  • University of New South Wales

So far we’ve talked about what a regression model is, how the coefficients of a regression model are estimated, and how we quantify the performance of the model (the last of these, incidentally, is basically our measure of effect size). The next thing we need to talk about is hypothesis tests. There are two different (but related) kinds of hypothesis tests that we need to talk about: those in which we test whether the regression model as a whole is performing significantly better than a null model; and those in which we test whether a particular regression coefficient is significantly different from zero.

At this point, you’re probably groaning internally, thinking that I’m going to introduce a whole new collection of tests. You’re probably sick of hypothesis tests by now, and don’t want to learn any new ones. Me too. I’m so sick of hypothesis tests that I’m going to shamelessly reuse the F-test from Chapter 14 and the t-test from Chapter 13. In fact, all I’m going to do in this section is show you how those tests are imported wholesale into the regression framework.

Testing the model as a whole

Okay, suppose you’ve estimated your regression model. The first hypothesis test you might want to try is one in which the null hypothesis that there is no relationship between the predictors and the outcome, and the alternative hypothesis is that the data are distributed in exactly the way that the regression model predicts . Formally, our “null model” corresponds to the fairly trivial “regression” model in which we include 0 predictors, and only include the intercept term b 0

\(H_{0}: Y_{i} = b_{0} + \epsilon_{i}\)

If our regression model has K predictors, the “alternative model” is described using the usual formula for a multiple regression model:

\(H_{1}: Y_{i}=\left(\sum_{k=1}^{K} b_{k} X_{i k}\right)+b_{0}+\epsilon_{i}\)

How can we test these two hypotheses against each other? The trick is to understand that, just like we did with ANOVA, it’s possible to divide up the total variance \(\mathrm{SS}_{tot}\) into the sum of the residual variance \(\mathrm{SS}_{res}\) and the regression model variance \(\mathrm{SS}_{mod}\). I’ll skip over the technicalities, since we covered most of them in the ANOVA chapter, and just note that:

\(\mathrm{SS}_{mod} = \mathrm{SS}_{tot} - \mathrm{SS}_{res}\)

And, just like we did with the ANOVA, we can convert the sums of squares into mean squares by dividing by the degrees of freedom.

\(\mathrm{MS}_{m o d}=\dfrac{\mathrm{SS}_{m o d}}{d f_{m o d}}\) \(\mathrm{MS}_{r e s}=\dfrac{\mathrm{SS}_{r e s}}{d f_{r e s}}\)

So, how many degrees of freedom do we have? As you might expect, the df associated with the model is closely tied to the number of predictors that we’ve included: it turns out that \(df_{mod} = K\). For the residuals, the total degrees of freedom is \(df_{res} = N - K - 1\). We can then form an F-statistic by dividing the model mean square by the residual mean square:

\(F = \dfrac{\mathrm{MS}_{mod}}{\mathrm{MS}_{res}}\)

and the degrees of freedom associated with this are K and N−K−1. This F statistic has exactly the same interpretation as the one we introduced in Chapter 14. Large F values indicate that the null hypothesis is performing poorly in comparison to the alternative hypothesis. And since we already did some tedious “do it the long way” calculations back then, I won’t waste your time repeating them. In a moment I’ll show you how to do the test in R the easy way, but first, let’s have a look at the tests for the individual regression coefficients.
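The chapter does these calculations in R. Purely as an illustrative sketch (the data below are simulated, with an invented seed and coefficients, not the chapter's dataset), the same sum-of-squares partition and F-statistic can be computed in Python with NumPy:

```python
import numpy as np

rng = np.random.default_rng(0)
N, K = 100, 2                        # sample size and number of predictors

# Simulated data: the outcome genuinely depends on the first predictor only.
X = rng.normal(size=(N, K))
y = 3.0 + 2.0 * X[:, 0] + rng.normal(size=N)

# Fit the K-predictor model by least squares (design matrix includes an intercept).
D = np.column_stack([np.ones(N), X])
b, *_ = np.linalg.lstsq(D, y, rcond=None)
resid = y - D @ b

# Partition the total sum of squares, as in the text: SS_mod = SS_tot - SS_res.
ss_tot = np.sum((y - y.mean()) ** 2)
ss_res = np.sum(resid ** 2)
ss_mod = ss_tot - ss_res

# Convert to mean squares and form the F-statistic, with df = (K, N - K - 1).
df_mod, df_res = K, N - K - 1
F = (ss_mod / df_mod) / (ss_res / df_res)
```

Because the outcome really does depend on a predictor here, F comes out large, which is exactly the pattern that leads you to reject the null (intercept-only) model.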

Tests for individual coefficients

The F-test that we’ve just introduced is useful for checking that the model as a whole is performing better than chance. This is important: if your regression model doesn’t produce a significant result for the F-test then you probably don’t have a very good regression model (or, quite possibly, you don’t have very good data). However, while failing this test is a pretty strong indicator that the model has problems, passing the test (i.e., rejecting the null) doesn’t imply that the model is good! Why is that, you might be wondering? The answer to that can be found by looking at the coefficients for the regression.2 model:

I can’t help but notice that the estimated regression coefficient for the baby.sleep variable is tiny (0.01), relative to the value that we get for dan.sleep (-8.95). Given that these two variables are absolutely on the same scale (they’re both measured in “hours slept”), I find this suspicious. In fact, I’m beginning to suspect that it’s really only the amount of sleep that I get that matters in order to predict my grumpiness.

Once again, we can reuse a hypothesis test that we discussed earlier, this time the t-test. The test that we’re interested in has a null hypothesis that the true regression coefficient is zero (b=0), which is to be tested against the alternative hypothesis that it isn’t (b≠0). That is:

\(H_{0}: b = 0\)

\(H_{1}: b \neq 0\)

How can we test this? Well, if the central limit theorem is kind to us, we might be able to guess that the sampling distribution of \(\ \hat{b}\), the estimated regression coefficient, is a normal distribution with mean centred on b. What that would mean is that if the null hypothesis were true, then the sampling distribution of \(\ \hat{b}\) has mean zero and unknown standard deviation. Assuming that we can come up with a good estimate for the standard error of the regression coefficient, SE (\(\ \hat{b}\)), then we’re in luck. That’s exactly the situation for which we introduced the one-sample t way back in Chapter 13. So let’s define a t-statistic like this,

\(\ t = { \hat{b} \over SE(\hat{b})}\)

I’ll skip over the reasons why, but our degrees of freedom in this case are df=N−K−1. Irritatingly, the estimate of the standard error of the regression coefficient, SE(\(\ \hat{b}\)), is not as easy to calculate as the standard error of the mean that we used for the simpler t-tests in Chapter 13. In fact, the formula is somewhat ugly, and not terribly helpful to look at. For our purposes it’s sufficient to point out that the standard error of the estimated regression coefficient depends on both the predictor and outcome variables, and is somewhat sensitive to violations of the homogeneity of variance assumption (discussed shortly).

In any case, this t-statistic can be interpreted in the same way as the t-statistics that we discussed in Chapter 13. Assuming that you have a two-sided alternative (i.e., you don’t really care if b>0 or b<0), then it’s the extreme values of t (i.e., a lot less than zero or a lot greater than zero) that suggest that you should reject the null hypothesis.
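As a companion sketch (same caveat: simulated data with invented coefficients, not the chapter's regression.2 model), the coefficient t-statistics can be computed directly from \(\hat{b}\) and a standard error taken from the diagonal of \(\hat{\sigma}^2 (D^{\top}D)^{-1}\):

```python
import numpy as np

rng = np.random.default_rng(1)
N, K = 100, 2
X = rng.normal(size=(N, K))
y = 3.0 + 2.0 * X[:, 0] + rng.normal(size=N)   # the second predictor is irrelevant

D = np.column_stack([np.ones(N), X])           # design matrix with intercept column
b, *_ = np.linalg.lstsq(D, y, rcond=None)
resid = y - D @ b

# Residual variance estimated with df = N - K - 1, then SE(b_hat) as the
# square roots of the diagonal of sigma^2 * (D'D)^-1.
df_res = N - K - 1
sigma2 = np.sum(resid ** 2) / df_res
se = np.sqrt(np.diag(sigma2 * np.linalg.inv(D.T @ D)))

# One t-statistic per coefficient; each is compared to a t distribution
# with N - K - 1 degrees of freedom.
t = b / se
```

The predictor that actually matters produces a large |t|, while the irrelevant one does not, mirroring the dan.sleep versus baby.sleep pattern discussed above.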

Running the hypothesis tests in R

To compute all of the quantities that we have talked about so far, all you need to do is ask for a summary() of your regression model. Since I’ve been using regression.2 as my example, let’s do that:

The output that this command produces is pretty dense, but we’ve already discussed everything of interest in it, so what I’ll do is go through it line by line. The first line reminds us of what the actual regression model is:

You can see why this is handy, since it was a little while back when we actually created the regression.2 model, and so it’s nice to be reminded of what it was we were doing. The next part provides a quick summary of the residuals (i.e., the \(\epsilon_i\) values),

which can be convenient as a quick and dirty check that the model is okay. Remember, we did assume that these residuals were normally distributed, with mean 0. In particular it’s worth quickly checking to see if the median is close to zero, and to see if the first quartile is about the same size as the third quartile. If they look badly off, there’s a good chance that the assumptions of regression are violated. These ones look pretty nice to me, so let’s move on to the interesting stuff. The next part of the R output looks at the coefficients of the regression model:

Each row in this table refers to one of the coefficients in the regression model. The first row is the intercept term, and the later ones look at each of the predictors. The columns give you all of the relevant information. The first column is the actual estimate of b (e.g., 125.96 for the intercept, and -8.9 for the dan.sleep predictor). The second column is the standard error estimate \(\ \hat{\sigma_b}\). The third column gives you the t-statistic, and it’s worth noticing that in this table t= \(\ \hat{b}\) /SE(\(\ \hat{b}\)) every time. Finally, the fourth column gives you the actual p value for each of these tests. 217 The only thing that the table itself doesn’t list is the degrees of freedom used in the t-test, which is always N−K−1 and is listed immediately below, in this line:

The value of df=97 is equal to \(N-K-1\), so that’s what we use for our t-tests. In the final part of the output we have the F-test and the \(R^2\) values, which assess the performance of the model as a whole

So in this case, the model performs significantly better than you’d expect by chance (F(2,97)=215.2, p<.001), which isn’t all that surprising: the \(R^2 = .812\) value indicates that the regression model accounts for 81.2% of the variability in the outcome measure. However, when we look back up at the t-tests for each of the individual coefficients, we have pretty strong evidence that the baby.sleep variable has no significant effect; all the work is being done by the dan.sleep variable. Taken together, these results suggest that regression.2 is actually the wrong model for the data: you’d probably be better off dropping the baby.sleep predictor entirely. In other words, the regression.1 model that we started with is the better model.
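To see this model-comparison logic in miniature, here is one more minimal sketch. The variable names echo the chapter's dan.sleep and baby.sleep example, but the data are simulated and the coefficients (intercept 126, slope -9, chosen only to loosely resemble the reported estimates) are invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(2)
N = 100
dan_sleep = rng.normal(7, 1, N)      # hypothetical stand-ins for the chapter's data
baby_sleep = rng.normal(8, 1, N)
grumpiness = 126 - 9 * dan_sleep + rng.normal(0, 3, N)   # only dan_sleep matters

def r_squared(D, y):
    """R^2 = 1 - SS_res / SS_tot for a least-squares fit on design matrix D."""
    b, *_ = np.linalg.lstsq(D, y, rcond=None)
    ss_res = np.sum((y - D @ b) ** 2)
    ss_tot = np.sum((y - y.mean()) ** 2)
    return 1 - ss_res / ss_tot

ones = np.ones(N)
r2_full = r_squared(np.column_stack([ones, dan_sleep, baby_sleep]), grumpiness)
r2_reduced = r_squared(np.column_stack([ones, dan_sleep]), grumpiness)

# Dropping the irrelevant predictor costs almost no explained variance.
gap = r2_full - r2_reduced
```

Because baby_sleep carries no real information here, the full model's \(R^2\) barely exceeds the reduced model's, which is the sense in which the simpler model is preferable.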

How to Write a Great Hypothesis

Hypothesis Definition, Format, Examples, and Tips

Kendra Cherry, MS, is a psychosocial rehabilitation specialist, psychology educator, and author of the "Everything Psychology Book."





A hypothesis is a tentative statement about the relationship between two or more variables. It is a specific, testable prediction about what you expect to happen in a study. It is a preliminary answer to your question that helps guide the research process.

Consider a study designed to examine the relationship between sleep deprivation and test performance. The hypothesis might be: "This study is designed to assess the hypothesis that sleep-deprived people will perform worse on a test than individuals who are not sleep-deprived."

At a Glance

A hypothesis is crucial to scientific research because it offers a clear direction for what the researchers are looking to find. This allows them to design experiments to test their predictions and add to our scientific knowledge about the world. This article explores how a hypothesis is used in psychology research, how to write a good hypothesis, and the different types of hypotheses you might use.

The Hypothesis in the Scientific Method

In the scientific method, whether it involves research in psychology, biology, or some other area, a hypothesis represents what the researchers think will happen in an experiment. The scientific method involves the following steps:

  • Forming a question
  • Performing background research
  • Creating a hypothesis
  • Designing an experiment
  • Collecting data
  • Analyzing the results
  • Drawing conclusions
  • Communicating the results

The hypothesis is a prediction, but it involves more than a guess. Most of the time, the hypothesis begins with a question which is then explored through background research. At this point, researchers then begin to develop a testable hypothesis.

Unless you are creating an exploratory study, your hypothesis should always explain what you  expect  to happen.

In a study exploring the effects of a particular drug, the hypothesis might be that researchers expect the drug to have some type of effect on the symptoms of a specific illness. In psychology, the hypothesis might focus on how a certain aspect of the environment might influence a particular behavior.

Remember, a hypothesis does not have to be correct. While the hypothesis predicts what the researchers expect to see, the goal of the research is to determine whether this guess is right or wrong. When conducting an experiment, researchers might explore numerous factors to determine which ones might contribute to the ultimate outcome.

In many cases, researchers may find that the results of an experiment  do not  support the original hypothesis. When writing up these results, the researchers might suggest other options that should be explored in future studies.

In many cases, researchers might draw a hypothesis from a specific theory or build on previous research. For example, prior research has shown that stress can impact the immune system. So a researcher might hypothesize: "People with high-stress levels will be more likely to contract a common cold after being exposed to the virus than people who have low-stress levels."

In other instances, researchers might look at commonly held beliefs or folk wisdom. "Birds of a feather flock together" is one example of folk adage that a psychologist might try to investigate. The researcher might pose a specific hypothesis that "People tend to select romantic partners who are similar to them in interests and educational level."

Elements of a Good Hypothesis

So how do you write a good hypothesis? When trying to come up with a hypothesis for your research or experiments, ask yourself the following questions:

  • Is your hypothesis based on your research on a topic?
  • Can your hypothesis be tested?
  • Does your hypothesis include independent and dependent variables?

Before you come up with a specific hypothesis, spend some time doing background research. Once you have completed a literature review, start thinking about potential questions you still have. Pay attention to the discussion section in the journal articles you read. Many authors will suggest questions that still need to be explored.

How to Formulate a Good Hypothesis

To form a hypothesis, you should take these steps:

  • Collect as many observations about a topic or problem as you can.
  • Evaluate these observations and look for possible causes of the problem.
  • Create a list of possible explanations that you might want to explore.
  • After you have developed some possible hypotheses, think of ways that you could confirm or disprove each hypothesis through experimentation. This is known as falsifiability.

In the scientific method, falsifiability is an important part of any valid hypothesis. In order to test a claim scientifically, it must be possible that the claim could be proven false.

Students sometimes confuse the idea of falsifiability with the idea that it means that something is false, which is not the case. What falsifiability means is that if something were false, then it would be possible to demonstrate that it is false.

One of the hallmarks of pseudoscience is that it makes claims that cannot be refuted or proven false.

The Importance of Operational Definitions

A variable is a factor or element that can be changed and manipulated in ways that are observable and measurable. However, the researcher must also define how the variable will be manipulated and measured in the study.

Operational definitions are specific definitions for all relevant factors in a study. This process helps make vague or ambiguous concepts detailed and measurable.

For example, a researcher might operationally define the variable "test anxiety" as the results of a self-report measure of anxiety experienced during an exam. A "study habits" variable might be defined by the amount of studying that actually occurs, as measured by time.

These precise descriptions are important because many things can be measured in various ways. Clearly defining these variables and how they are measured helps ensure that other researchers can replicate your results.

Replicability

One of the basic principles of any type of scientific research is that the results must be replicable.

Replication means repeating an experiment in the same way to produce the same results. By clearly detailing the specifics of how the variables were measured and manipulated, other researchers can better understand the results and repeat the study if needed.

Some variables are more difficult than others to define. For example, how would you operationally define a variable such as aggression ? For obvious ethical reasons, researchers cannot create a situation in which a person behaves aggressively toward others.

To measure this variable, the researcher must devise a measurement that assesses aggressive behavior without harming others. The researcher might utilize a simulated task to measure aggressiveness in this situation.

Hypothesis Checklist

  • Does your hypothesis focus on something that you can actually test?
  • Does your hypothesis include both an independent and dependent variable?
  • Can you manipulate the variables?
  • Can your hypothesis be tested without violating ethical standards?

The hypothesis you use will depend on what you are investigating and hoping to find. Some of the main types of hypotheses that you might use include:

  • Simple hypothesis : This type of hypothesis suggests there is a relationship between one independent variable and one dependent variable.
  • Complex hypothesis : This type suggests a relationship between three or more variables, such as two independent variables and one dependent variable.
  • Null hypothesis : This hypothesis suggests no relationship exists between two or more variables.
  • Alternative hypothesis : This hypothesis states the opposite of the null hypothesis.
  • Statistical hypothesis : This hypothesis uses statistical analysis to evaluate a representative population sample and then generalizes the findings to the larger group.
  • Logical hypothesis : This hypothesis assumes a relationship between variables without collecting data or evidence.

A hypothesis often follows a basic format of "If {this happens} then {this will happen}." One way to structure your hypothesis is to describe what will happen to the dependent variable if you change the independent variable.

The basic format might be: "If {these changes are made to a certain independent variable}, then we will observe {a change in a specific dependent variable}."

A few examples of simple hypotheses:

  • "Students who eat breakfast will perform better on a math exam than students who do not eat breakfast."
  • "Students who experience test anxiety before an English exam will get lower scores than students who do not experience test anxiety."
  • "Motorists who talk on the phone while driving will be more likely to make errors on a driving course than those who do not talk on the phone."
  • "Children who receive a new reading intervention will have higher reading scores than students who do not receive the intervention."

Examples of a complex hypothesis include:

  • "People with high-sugar diets and sedentary activity levels are more likely to develop depression."
  • "Younger people who are regularly exposed to green, outdoor areas have better subjective well-being than older adults who have limited exposure to green spaces."

Examples of a null hypothesis include:

  • "There is no difference in anxiety levels between people who take St. John's wort supplements and those who do not."
  • "There is no difference in scores on a memory recall task between children and adults."
  • "There is no difference in aggression levels between children who play first-person shooter games and those who do not."

Examples of an alternative hypothesis:

  • "People who take St. John's wort supplements will have less anxiety than those who do not."
  • "Adults will perform better on a memory task than children."
  • "Children who play first-person shooter games will show higher levels of aggression than children who do not." 

Collecting Data on Your Hypothesis

Once a researcher has formed a testable hypothesis, the next step is to select a research design and start collecting data. The research method depends largely on exactly what they are studying. There are two basic types of research methods: descriptive research and experimental research.

Descriptive Research Methods

Descriptive research, such as case studies, naturalistic observations, and surveys, is often used when conducting an experiment is difficult or impossible. These methods are best used to describe different aspects of a behavior or psychological phenomenon.

Once a researcher has collected data using descriptive methods, a correlational study can examine how the variables are related. This research method might be used to investigate a hypothesis that is difficult to test experimentally.

Experimental Research Methods

Experimental methods are used to demonstrate causal relationships between variables. In an experiment, the researcher systematically manipulates a variable of interest (known as the independent variable) and measures the effect on another variable (known as the dependent variable).

Unlike correlational studies, which can only be used to determine if there is a relationship between two variables, experimental methods can be used to determine the actual nature of the relationship—whether changes in one variable actually  cause  another to change.

The hypothesis is a critical part of any scientific exploration. It represents what researchers expect to find in a study or experiment. In situations where the hypothesis is unsupported by the research, the research still has value. Such research helps us better understand how different aspects of the natural world relate to one another. It also helps us develop new hypotheses that can then be tested in the future.


By Kendra Cherry, MSEd. Kendra Cherry is a psychosocial rehabilitation specialist, psychology educator, and author of the "Everything Psychology Book."

The Scientific Method – Hypotheses, Models, Theories, and Laws


The scientific method is the set of steps scientists follow to build a view of the world that is accurate, reliable, and consistent.  It is also a way of minimizing the influence of a scientist’s cultural and personal beliefs on their work: it aims to make our perceptions and interpretations of natural phenomena as neutral as possible, reducing the effect of prejudice and bias on the results of an experiment, hypothesis, or theory.

The scientific method can be broken down into four steps:

  • Observe and describe the phenomenon (or group of various phenomena).
  • Create a hypothesis that explains the phenomena. In physics, this often means creating a mathematical relation or a causal mechanism.
  • Use this hypothesis to attempt to predict other related phenomena or the results of another set of observations.
  • Test the performance of these predictions using independent experiments.

If the results of these experiments support the hypothesis, it may become a theory or even a law of nature.  If they do not, the hypothesis has to be changed or rejected outright.  The main benefit of the scientific method is predictive power: a well-supported theory can be applied to a wide range of phenomena.  Of course, even the most thoroughly tested theory may one day be contradicted by new observations or experiments.  Theories can never be fully proven, only disproven.
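As a toy illustration of steps 3 and 4 above, suppose our hypothesis is that fall time grows as the square root of drop height. The Python sketch below checks one prediction of that hypothesis (quadrupling the height should double the time) against the idealized free-fall formula, which here stands in for the measurement step; in a real test the times would come from experiment:

```python
import math

def fall_time(height_m, g=9.81):
    # Idealized free fall (no air drag): h = g * t^2 / 2  =>  t = sqrt(2h/g)
    return math.sqrt(2 * height_m / g)

# Hypothesis: t is proportional to sqrt(h).
# Prediction: quadrupling the drop height should double the fall time.
t_1m = fall_time(1.0)
t_4m = fall_time(4.0)
ratio = t_4m / t_1m

print(f"t(4 m) / t(1 m) = {ratio:.3f}")  # 2.000 if the prediction holds
```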


Testing Hypotheses

Testing a hypothesis leads to one of two outcomes: the hypothesis is supported, or it is rejected, meaning it has to be changed or replaced with a new one.  Rejection is required when experiments repeatedly and clearly show that the hypothesis is wrong.  It does not matter how elegant or well supported a theory is: if it is convincingly disproven even once, it cannot be considered a law of nature.  Experiment is the final arbiter in the scientific method, and a confirmed result that contradicts the hypothesis outweighs all previous results that supported it.  Some experiments test a theory directly, while others test it indirectly through its logical and mathematical consequences.  The scientific method requires that every theory be testable in some way; those that cannot be tested are not considered scientific theories.

If a theory is disproven, it may still be useful in some domains, but it is no longer considered a universal law of nature.  For example, Newton’s laws break down at speeds approaching the speed of light, yet they still describe mechanics at everyday velocities extremely well.  Other ideas that were widely held for years, even centuries, before being overturned by new observations include the belief that the Earth is the center of the solar system, and that the planets orbit the Sun in perfect circles rather than the ellipses we now observe.

Of course, a hypothesis or well-established theory is rarely discarded on the strength of a single experiment.  Experiments can contain errors, so a hypothesis that appears to fail once is retested several times by independent groups.  Sources of error include faulty instruments, misread measurements or other data, and researcher bias.  Most measurements are therefore reported with a degree of uncertainty, and scientists work to make that uncertainty as small as possible while estimating everything that could introduce error into a test.
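One standard way to handle random error is to repeat the measurement and report the mean together with the standard error of the mean. A minimal Python sketch with invented readings:

```python
from math import sqrt
from statistics import mean, stdev

# Invented repeated measurements of the same quantity (e.g., a fall time in seconds).
readings = [1.02, 0.98, 1.01, 0.99, 1.03, 0.97, 1.00, 1.00]

m = mean(readings)
# The standard error of the mean shrinks as sqrt(n): averaging n independent
# readings reduces the random-error estimate by a factor of sqrt(n).
sem = stdev(readings) / sqrt(len(readings))

print(f"result: {m:.3f} +/- {sem:.3f} s")
```

Averaging only tames random error; a systematic error, such as a miscalibrated instrument, biases every reading the same way and is untouched by repetition.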


Common Mistakes in Applying the Scientific Method

Unfortunately, the scientific method isn’t always applied correctly.  Mistakes do happen, and some of them are actually fairly common.  Because all scientists are human with biases and prejudices, it can be hard to be truly objective in some cases.  It’s important that all results are as untainted by bias as possible, but that doesn’t always happen. Another common mistake is taking something as common sense or deciding that something is so logical that it doesn’t need to be tested.  Scientists have to remember that everything has to be tested before it can be considered a solid hypothesis.

Scientists also have to be willing to look at every piece of data, even those which invalidate the hypothesis.  Some scientists so strongly believe their hypothesis that they try to explain away data that disproves it.  They want to find some reason as to why that data or experiment must be wrong instead of looking at their hypothesis again.  All data has to be considered in the same way, even if it goes against the hypothesis.

Another common issue is failing to estimate all the possible errors that could arise during testing.  Data that contradict the hypothesis are sometimes dismissed as falling within the expected range of error when they in fact reflect a systematic error that the researchers simply did not account for.


Hypotheses, Models, Theories, and Laws

While some people do incorrectly use words like “theory” and “hypothesis” interchangeably, the scientific community has strict definitions for these terms.

Hypothesis: A hypothesis is a proposed explanation, usually framed in terms of a suspected cause and effect.  It is a basic idea that has not yet been tested.  A hypothesis is simply an idea that might explain something, and it must go through a number of experiments designed to support or refute it.

Model: A hypothesis becomes a model after some testing has shown it to be a valid description of observations.  Some models are valid only in specific circumstances, such as when a value falls within a certain range.  A well-confirmed model, particularly one stated as a concise mathematical relationship, is sometimes called a law.

Scientific theory: A model that has been repeatedly tested and confirmed may become a scientific theory.  Such theories have been tested by many independent researchers around the world using a variety of experiments, and all of the results have supported the theory.  Theories can still be overturned, of course, but only after rigorous testing of a new hypothesis that appears to contradict them.


The scientific method has been used for years to create hypotheses, test them, and develop them into full scientific theories.  While it appears to be a very simple method at first glance, it’s actually one of the most complex ways of testing and evaluating an observation or idea.  It’s different from other types of explanation because it attempts to remove all bias and move forward using systematic experimentation only.  However, like any method, there is room for error, such as bias or mechanical error.  Of course, just like the theories it tests, the scientific method may someday be revised.


How to Write a Hypothesis? Types and Examples 


All research studies involve the scientific method, a systematic experimental technique for developing and testing a hypothesis, that is, a prediction about an outcome. Simply put, a hypothesis is a suggested solution to a problem: a statement, expressed in terms of relationships between elements, that explains a condition or assumption that has not yet been verified with facts. 1 The typical steps of the scientific method are to develop such a hypothesis, test it through various methods, and then modify it based on the outcomes of the experiments.

A research hypothesis can be defined as a specific, testable prediction about the anticipated results of a study. 2 Hypotheses help guide the research process and supplement the aim of the study. After several rounds of testing, hypotheses can help develop scientific theories. 3 Hypotheses are often written as if-then statements. 

Here are two hypothesis examples: 

Dandelions growing in nitrogen-rich soils for two weeks develop larger leaves than those in nitrogen-poor soils because nitrogen stimulates vegetative growth. 4  

If a company offers flexible work hours, then its employees will be happier at work. 5  

Table of Contents

  • What is a hypothesis? 
  • Types of hypotheses 
  • Characteristics of a hypothesis 
  • Functions of a hypothesis 
  • How to write a hypothesis 
  • Hypothesis examples 
  • Frequently asked questions 

What is a hypothesis?

Figure 1. Steps in research design

A hypothesis expresses an expected relationship between variables in a study and is developed before conducting any research. Hypotheses are not opinions; they are expected relationships based on facts and observations. They help support scientific research and expand existing knowledge. An incorrectly formulated hypothesis can affect the entire experiment and lead to errors in the results, so it is important to know how to formulate and develop a hypothesis carefully.

A few sources of a hypothesis include observations from prior studies, current research and experiences, competitors, scientific theories, and general conditions that can influence people. Figure 1 depicts the different steps in a research design and shows where exactly in the process a hypothesis is developed. 4  

There are seven different types of hypotheses—simple, complex, directional, nondirectional, associative and causal, null, and alternative. 

Types of hypotheses

The seven types of hypotheses are listed below: 5,6,7  

  • Simple : Predicts the relationship between a single dependent variable and a single independent variable. 

Example: Exercising in the morning every day will increase your productivity.  

  • Complex : Predicts the relationship between two or more variables. 

Example: Spending three hours or more on social media daily will negatively affect children’s mental health and productivity more than it affects adults’.  

  • Directional : Specifies the expected direction to be followed and uses terms like increase, decrease, positive, negative, more, or less. 

Example: The inclusion of intervention X decreases infant mortality compared to the original treatment.  

  • Non-directional : Does not predict the exact direction, nature, or magnitude of the relationship between two variables but rather states the existence of a relationship. This hypothesis may be used when there is no underlying theory or if findings contradict prior research. 

Example: Cats and dogs differ in the amount of affection they express.  

  • Associative and causal : An associative hypothesis suggests an interdependency between variables, that is, how a change in one variable changes the other.  

Example: There is a positive association between physical activity levels and overall health.  

A causal hypothesis, on the other hand, expresses a cause-and-effect association between variables. 

Example: Long-term alcohol use causes liver damage.  

  • Null : States that no relationship exists between the variables; it is the default position against which the alternative hypothesis is tested. 

Example: Sleep duration does not have any effect on productivity.  

  • Alternative : States the opposite of the null hypothesis, that is, a relationship exists between two variables. 

Example: Sleep duration affects productivity.  
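As a concrete, hypothetical illustration of a null/alternative pair, the sketch below tests H0 (“the coin is fair”) against H1 (“the coin is biased”) by computing an exact two-sided binomial p value using only the Python standard library:

```python
from math import comb

# H0: the coin is fair (p = 0.5).  H1: the coin is biased (p != 0.5).
n_flips, heads = 100, 60  # invented experimental outcome

def binom_pmf(k, n, p=0.5):
    # Probability of exactly k heads in n flips under H0.
    return comb(n, k) * p**k * (1 - p)**(n - k)

# Two-sided exact p value: total probability, under H0, of every outcome
# no more likely than the one observed.
p_obs = binom_pmf(heads, n_flips)
p_value = sum(
    binom_pmf(k, n_flips)
    for k in range(n_flips + 1)
    if binom_pmf(k, n_flips) <= p_obs
)

print(f"p = {p_value:.4f}")  # about 0.057: at the 5% level, H0 is (narrowly) retained
```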


Characteristics of a hypothesis

So, what makes a good hypothesis? Here are some important characteristics of a hypothesis. 8,9  

  • Testable : You must be able to test the hypothesis using scientific methods to either accept or reject the prediction. 
  • Falsifiable : It should be possible to collect data that reject rather than support the hypothesis. 
  • Logical : Hypotheses shouldn’t be a random guess but rather should be based on previous theories, observations, prior research, and logical reasoning. 
  • Positive : The hypothesis statement about the existence of an association should be positive, that is, it should not suggest that an association does not exist. Therefore, the language used and knowing how to phrase a hypothesis is very important. 
  • Clear and accurate : The language used should be easily comprehensible and use correct terminology. 
  • Relevant : The hypothesis should be relevant and specific to the research question. 
  • Structure : Should include all the elements that make a good hypothesis: variables, relationship, and outcome. 

Functions of a hypothesis

The following list mentions some important functions of a hypothesis: 1  

  • Maintains the direction and progress of the research. 
  • Expresses the important assumptions underlying the proposition in a single statement. 
  • Establishes a suitable context for researchers to begin their investigation and for readers who are referring to the final report. 
  • Provides an explanation for the occurrence of a specific phenomenon. 
  • Ensures selection of appropriate and accurate facts necessary and relevant to the research subject. 

To summarize, a hypothesis provides the conceptual elements that complete the known data, conceptual relationships that systematize unordered elements, and conceptual meanings and interpretations that explain the unknown phenomena. 1  


How to write a hypothesis

Listed below are the main steps explaining how to write a hypothesis. 2,4,5  

  • Make an observation and identify variables : Observe the subject in question and try to recognize a pattern or a relationship between the variables involved. This step provides essential background information to begin your research.  

For example, if you notice that an office’s vending machine frequently runs out of a specific snack, you may predict that more people in the office choose that snack over another. 

  • Identify the main research question : After identifying a subject and recognizing a pattern, the next step is to ask a question that your hypothesis will answer.  

For example, after observing employees’ break times at work, you could ask “why do more employees take breaks in the morning rather than in the afternoon?” 

  • Conduct some preliminary research to ensure originality and novelty : Your initial answer to the question, which is your hypothesis, is based on pre-existing information about the subject. However, to ensure that your hypothesis has not already been proposed, or proposed and rejected by other researchers, you need to gather additional information.  

For example, based on your observations you might state a hypothesis that employees work more efficiently when the air conditioning in the office is set at a lower temperature. However, during your preliminary research you find that this hypothesis was proven incorrect by a prior study. 

  • Develop a general statement : After your preliminary research has confirmed the originality of your proposed answer, draft a general statement that includes all variables, subjects, and predicted outcome. The statement could be if/then or declarative.  
  • Finalize the hypothesis statement : Use the PICOT model, which clarifies how to word a hypothesis effectively, when finalizing the statement. This model lists the important components required to write a hypothesis. 

  • Population: The specific group or individual who is the main subject of the research 
  • Interest: The main concern of the study/research question 
  • Comparison: The main alternative group 
  • Outcome: The expected results 
  • Time: Duration of the experiment 

Once you’ve finalized your hypothesis statement you would need to conduct experiments to test whether the hypothesis is true or false. 
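As a small programming aside, the PICOT components above map naturally onto a simple data structure. The sketch below is purely illustrative; the class name, field names, and question template are our own, not part of the PICOT literature:

```python
from dataclasses import dataclass

@dataclass
class PicotHypothesis:
    population: str  # P: the group under study
    interest: str    # I: the intervention or exposure of interest
    comparison: str  # C: the alternative being compared against
    outcome: str     # O: the expected result
    time: str        # T: the duration of the experiment

    def statement(self) -> str:
        # Render the components as a single research question.
        return (
            f"In {self.population}, does {self.interest}, compared with "
            f"{self.comparison}, lead to {self.outcome} over {self.time}?"
        )

h = PicotHypothesis(
    population="office employees",
    interest="a flexible work-hours policy",
    comparison="a fixed 9-to-5 schedule",
    outcome="higher self-reported job satisfaction",
    time="six months",
)
print(h.statement())
```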

Hypothesis examples

The following table provides examples of different types of hypotheses. 10,11  


Key takeaways  

Here’s a summary of all the key points discussed in this article about how to write a hypothesis. 

  • A hypothesis is an assumption about an association between variables made based on limited evidence, which should be tested. 
  • A hypothesis has four parts—the research question, independent variable, dependent variable, and the proposed relationship between the variables.   
  • The statement should be clear, concise, testable, logical, and falsifiable. 
  • There are seven types of hypotheses—simple, complex, directional, non-directional, associative and causal, null, and alternative. 
  • A hypothesis provides a focus and direction for the research to progress. 
  • A hypothesis plays an important role in the scientific method by helping to create an appropriate experimental design. 

Frequently asked questions

Hypotheses and research questions have different objectives and structure. The following table lists some major differences between the two. 9  

Here are a few examples to differentiate between a research question and hypothesis. 

Yes, here’s a simple checklist to help you gauge the effectiveness of your hypothesis. 9 When writing a hypothesis statement, check that it: 
  1. Predicts the relationship between the stated variables and the expected outcome. 
  2. Uses simple and concise language and is not wordy. 
  3. Does not assume readers’ knowledge about the subject. 
  4. Has observable, falsifiable, and testable results. 

As mentioned earlier in this article, a hypothesis is an assumption or prediction about an association between variables based on observations and simple evidence. These statements are usually generic. Research objectives, on the other hand, are more specific and dictated by hypotheses. The same hypothesis can be tested using different methods, and the research objectives could be different in each case. For example, Louis Pasteur observed that food lasts longer at higher altitudes, reasoned that this could be because the air at higher altitudes is cleaner (with fewer or no germs), and tested the hypothesis by exposing food to air cleaned in the laboratory. 12 Thus, a hypothesis is predictive (if the reasoning is correct, X will lead to Y), and research objectives are developed to test these predictions. 

Null hypothesis testing is a method of deciding between two predictions about the relationship between variables in a sample: the null and the alternative hypothesis. The null hypothesis, denoted H 0 , claims that no relationship exists between the variables in the population, and that any relationship observed in the sample reflects sampling error or chance. The alternative hypothesis, denoted H 1 , claims that a relationship does exist in the population. In every study, researchers must decide whether the relationship in their sample occurred by chance or reflects a real relationship in the population. Hypothesis testing proceeds as follows: 13 
  1. Assume that the null hypothesis is true. 
  2. Determine how likely the observed sample relationship would be if the null hypothesis were true. This probability is called the p value. 
  3. If the sample relationship would be extremely unlikely under the null hypothesis, reject it in favor of the alternative. Otherwise, retain (fail to reject) the null hypothesis. 
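The steps above can be sketched with a simple resampling test in Python. The data are invented, and the resampling loop is a Monte Carlo approximation of the p value rather than an exact calculation:

```python
import random
from statistics import mean

random.seed(0)  # fixed seed so the sketch is reproducible

# Invented sample: productivity scores for short sleepers vs. long sleepers.
short_sleep = [62, 58, 65, 60, 59, 63]
long_sleep = [70, 68, 73, 66, 71, 69]

# Step 1: assume H0 (sleep duration is unrelated to productivity).
observed = mean(long_sleep) - mean(short_sleep)

# Step 2: estimate how often random relabeling of the same scores produces
# a group difference at least as large as the observed one (the p value).
pooled = short_sleep + long_sleep
n_iter = 10_000
n_extreme = 0
for _ in range(n_iter):
    random.shuffle(pooled)
    diff = mean(pooled[:6]) - mean(pooled[6:])
    if abs(diff) >= abs(observed):
        n_extreme += 1

p_value = n_extreme / n_iter
print(f"observed difference = {observed:.2f}, p = {p_value:.4f}")
# Step 3: a p value well under .05 leads us to reject H0.
```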


To summarize, researchers should know how to write a good hypothesis to ensure that their research progresses in the required direction. A hypothesis is a testable prediction about any behavior or relationship between variables, usually based on facts and observation, and states an expected outcome.  

We hope this article has provided you with essential insight into the different types of hypotheses and their functions so that you can use them appropriately in your next research project. 

References  

  • Dalen, DVV. The function of hypotheses in research. Proquest website. Accessed April 8, 2024. https://www.proquest.com/docview/1437933010?pq-origsite=gscholar&fromopenview=true&sourcetype=Scholarly%20Journals&imgSeq=1  
  • McLeod S. Research hypothesis in psychology: Types & examples. SimplyPsychology website. Updated December 13, 2023. Accessed April 9, 2024. https://www.simplypsychology.org/what-is-a-hypotheses.html  
  • Scientific method. Britannica website. Updated March 14, 2024. Accessed April 9, 2024. https://www.britannica.com/science/scientific-method  
  • The hypothesis in science writing. Accessed April 10, 2024. https://berks.psu.edu/sites/berks/files/campus/HypothesisHandout_Final.pdf  
  • How to develop a hypothesis (with elements, types, and examples). Indeed.com website. Updated February 3, 2023. Accessed April 10, 2024. https://www.indeed.com/career-advice/career-development/how-to-write-a-hypothesis  
  • Types of research hypotheses. Excelsior online writing lab. Accessed April 11, 2024. https://owl.excelsior.edu/research/research-hypotheses/types-of-research-hypotheses/  
  • What is a research hypothesis: how to write it, types, and examples. Researcher.life website. Published February 8, 2023. Accessed April 11, 2024. https://researcher.life/blog/article/how-to-write-a-research-hypothesis-definition-types-examples/  
  • Developing a hypothesis. Pressbooks website. Accessed April 12, 2024. https://opentext.wsu.edu/carriecuttler/chapter/developing-a-hypothesis/  
  • What is and how to write a good hypothesis in research. Elsevier author services website. Accessed April 12, 2024. https://scientific-publishing.webshop.elsevier.com/manuscript-preparation/what-how-write-good-hypothesis-research/  
  • How to write a great hypothesis. Verywellmind website. Updated March 12, 2023. Accessed April 13, 2024. https://www.verywellmind.com/what-is-a-hypothesis-2795239  
  • 15 Hypothesis examples. Helpfulprofessor.com Published September 8, 2023. Accessed March 14, 2024. https://helpfulprofessor.com/hypothesis-examples/ 
  • Editage insights. What is the interconnectivity between research objectives and hypothesis? Published February 24, 2021. Accessed April 13, 2024. https://www.editage.com/insights/what-is-the-interconnectivity-between-research-objectives-and-hypothesis  
  • Understanding null hypothesis testing. BCCampus open publishing. Accessed April 16, 2024. https://opentextbc.ca/researchmethods/chapter/understanding-null-hypothesis-testing/#:~:text=In%20null%20hypothesis%20testing%2C%20this,said%20to%20be%20statistically%20significant  


The use and limitations of null-model-based hypothesis testing

  • Published: 23 April 2020
  • Volume 35, article number 31 (2020)


  • Mingjun Zhang, ORCID: orcid.org/0000-0001-6971-1175


In this article I give a critical evaluation of the use and limitations of null-model-based hypothesis testing as a research strategy in the biological sciences. According to this strategy, the null model based on a randomization procedure provides an appropriate null hypothesis stating that the existence of a pattern is the result of random processes or can be expected by chance alone, and proponents of other hypotheses should first try to reject this null hypothesis in order to demonstrate their own hypotheses. Using as an example the controversy over the use of null hypotheses and null models in species co-occurrence studies, I argue that null-model-based hypothesis testing fails to work as a proper analog to traditional statistical null-hypothesis testing as used in well-controlled experimental research, and that the random process hypothesis should not be privileged as a null hypothesis. Instead, the possible use of the null model resides in its role of providing a way to challenge scientists’ commonsense judgments about how a seemingly unusual pattern could have come to be. Despite this possible use, null-model-based hypothesis testing still carries certain limitations, and it should not be regarded as an obligation for biologists who are interested in explaining patterns in nature to first conduct such a test before pursuing their own hypotheses.



In species co-occurrence studies, when claiming that a species exists, occurs, or is present on an island, ecologists typically mean that the species has established a breeding population on that island instead of just having several vagile individuals.

For a detailed discussion of the differences between neutral models and null models, see Gotelli and McGill (2006).

In species co-occurrence studies, the null models constructed by different ecologists may be more or less different from each other. Even Connor and Simberloff themselves keep modifying their null models in later publications. Nevertheless, the version I will introduce here, which appears in one of their earliest and also most-cited publications on this subject, helps demonstrate the key features of null-model-based hypothesis testing.

For reviews of the technical issues in the construction of null models, see Gotelli and Graves (1996) and Sanderson and Pimm (2015).

Although the term “randomization test” is often used interchangeably with “permutation test,” actually they are different. A randomization test is based on random assignment involved in experimental design; the procedure of random assignment is conducted before empirical data are collected. By contrast, a permutation test is a nonparametric method of statistical hypothesis testing based on data resampling.
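To make the randomization idea concrete, here is a deliberately simplified toy version of a co-occurrence null model in Python. It fixes only each species’ number of occurrences (row sums) and places them on equiprobable islands; this sketches the general strategy, not Connor and Simberloff’s actual algorithm, and the data are invented:

```python
import random
from itertools import combinations

random.seed(42)

# Toy presence-absence matrix: rows = species, columns = islands (1 = present).
# Real null models constrain row and/or column totals in more sophisticated
# ways (see Gotelli and Graves 1996).
matrix = [
    [1, 1, 0, 0, 1],
    [0, 0, 1, 1, 0],
    [1, 0, 1, 0, 1],
    [0, 1, 0, 1, 0],
]

def checkerboard_pairs(m):
    # Count species pairs that never co-occur on any island ("checkerboards").
    return sum(
        1 for a, b in combinations(m, 2)
        if not any(x and y for x, y in zip(a, b))
    )

observed = checkerboard_pairs(matrix)

def randomized(m):
    # Null model: keep each species' number of occurrences (row sum) fixed,
    # but scatter those occurrences across randomly chosen islands.
    n_islands = len(m[0])
    out = []
    for row in m:
        cols = random.sample(range(n_islands), sum(row))
        out.append([1 if j in cols else 0 for j in range(n_islands)])
    return out

# How often does chance alone produce at least as many checkerboards?
null_counts = [checkerboard_pairs(randomized(matrix)) for _ in range(2000)]
p = sum(c >= observed for c in null_counts) / len(null_counts)
print(f"observed checkerboards = {observed}, p = {p:.3f}")
```

A large p here would mean the observed exclusion pattern is unsurprising under random placement, which is exactly the kind of conclusion the null-model strategy licenses (and, as the article argues, also the limit of what it licenses).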

Bausman WC (2018) Modeling: neutral, null, and baseline. Philos Sci 85:594–616


Bausman W, Halina M (2018) Not null enough: pseudo-null hypotheses in community ecology and comparative psychology. Biol Philos 33:1–20

Chase JM, Leibold MA (2003) Ecological niches: linking classical and contemporary approaches. University of Chicago Press, Chicago


Colwell RK, Winkler DW (1984) A null model for null models in biogeography. In: Strong DR Jr, Simberloff D, Abele LG, Thistle AB (eds) Ecological communities: conceptual issues and the evidence. Princeton University Press, Princeton, pp 344–359


Connor EF, Simberloff D (1979) The assembly of species communities: chance or competition? Ecology 60:1132–1140

Connor EF, Simberloff D (1983) Interspecific competition and species co-occurrence patterns on islands: null models and the evaluation of evidence. Oikos 41:455–465

Connor EF, Simberloff D (1984) Neutral models of species’ co-occurrence patterns. In: Strong DR Jr, Simberloff D, Abele LG, Thistle AB (eds) Ecological communities: conceptual issues and the evidence. Princeton University Press, Princeton, pp 316–331

Connor EF, Collins MD, Simberloff D (2013) The checkered history of checkerboard distributions. Ecology 94:2403–2414

Connor EF, Collins MD, Simberloff D (2015) The checkered history of checkerboard distributions: reply. Ecology 96:3388–3389

Diamond JM (1975) Assembly of species communities. In: Cody ML, Diamond JM (eds) Ecology and evolution of communities. Harvard University Press, Cambridge, pp 342–444


Diamond JM, Gilpin ME (1982) Examination of the “null” model of Connor and Simberloff for species co-occurrences on islands. Oecologia 52:64–74

Diamond J, Pimm SL, Sanderson JG (2015) The checkered history of checkerboard distributions: comment. Ecology 96:3386–3388

Fisher RA (1925) Statistical methods for research workers. Oliver and Boyd, Edinburgh

Fisher RA (1926) The arrangement of field experiments. J Minist Agric 33:503–513

Fisher RA (1935) The design of experiments. Oliver and Boyd, Edinburgh

Gilpin ME, Diamond JM (1984) Are species co-occurrences on islands non-random, and are null hypotheses useful in community ecology? In: Strong DR Jr, Simberloff D, Abele LG, Thistle AB (eds) Ecological communities: conceptual issues and the evidence. Princeton University Press, Princeton, pp 297–315

Gotelli NJ, Graves GR (1996) Null models in ecology. Smithsonian Institution Press, Washington

Gotelli NJ, McGill BJ (2006) Null versus neutral models: what’s the difference? Ecography 29:793–800

Harvey PH (1987) On the use of null hypotheses in biogeography. In: Nitechi MH, Hoffman A (eds) Neutral models in biology. Oxford University Press, New York, pp 109–118

Hubbell SP (2001) The unified neutral theory of biodiversity and biogeography. Princeton University Press, Princeton

Hubbell SP (2006) Neutral theory and the evolution of ecological equivalence. Ecology 87:1387–1398

Lewin R (1983) Santa Rosalia was a goat. Science 221:636–639

MacArthur R (1972) Geographical ecology: patterns in the distribution of species. Harper & Row, Publishers, Inc., New York

Rathcke BJ (1984) Patterns of flowering phenologies: testability and causal inference using a random model. In: Strong DR Jr, Simberloff D, Abele LG, Thistle AB (eds) Ecological communities: conceptual issues and the evidence. Princeton University Press, Princeton, pp 383–396

Rosindell J, Hubbell SP, Etienne RS (2011) The unified neutral theory of biodiversity and biogeography at age ten. Trends Ecol Evol 26:340–348

Sanderson JG, Pimm SL (2015) Patterns in nature: the analysis of species co-occurences. The University of Chicago Press, Chicago

Schelling TC (1978) Micromotives and macrobehavior. W. W. Norton & Company, New York

Sloep PB (1986) Null hypotheses in ecology: towards the dissolution of a controversy. Philos Sci 1:307–313

Sober E (1988) Reconstructing the past: parsimony, evolution, and inference. The MIT Press, Cambridge

Sober E (1994) Let’s Razor Ockham’s Razor. In: From a biological point of view. Cambridge University Press, Cambridge, pp 136–157

von Bertalanffy L (1968) General system theory: foundations, development, applications. George Braziller, New York

Download references

Acknowledgements

I wish to acknowledge the great help of Michael Weisberg, Erol Akçay, Jay Odenbaugh, and two anonymous reviewers for suggestions on improving the manuscript. An earlier draft of this article was also presented in the Philosophy of Science Reading Group at the University of Pennsylvania, the Salon of Philosophy of Science and Technology at Tsinghua University in Beijing, and PBDB 13 (Philosophy of Biology at Dolphin Beach) in Moruya, Australia. I want to thank the participants of these meetings, who asked valuable questions that inspired this article.

Author information

Authors and affiliations

Department of Philosophy, University of Pennsylvania, Claudia Cohen Hall, Room 433, 249 S. 36th Street, Philadelphia, PA, 19104-6304, USA

Mingjun Zhang


Corresponding author

Correspondence to Mingjun Zhang.



About this article

Zhang, M. The use and limitations of null-model-based hypothesis testing. Biol Philos 35, 31 (2020). https://doi.org/10.1007/s10539-020-09748-0


Received : 29 June 2019

Accepted : 13 April 2020

Published : 23 April 2020

DOI : https://doi.org/10.1007/s10539-020-09748-0


  • Null hypothesis
  • Checkerboard distribution
  • Interspecific competition
  • Random colonization
  • Control of variables


  • Open access
  • Published: 29 April 2024

Integrative metabolomics-genomics analysis identifies key networks in a stem cell-based model of schizophrenia

Angeliki Spathopoulou, Gabriella A. Sauerwein, Valentin Marteau, Martina Podlesnic, Theresa Lindlbauer, Tobias Kipura, Madlen Hotze, Elisa Gabassi, Katharina Kruszewski, Marja Koskuvi, János M. Réthelyi, Ágota Apáti, Luciano Conti, Manching Ku, Therese Koal, Udo Müller, Radu A. Talmazan, Ilkka Ojansuu, Olli Vaurio, Markku Lähteenvuo, Šárka Lehtonen, Jerome Mertens, Marcel Kwiatkowski, Katharina Günther, Jari Tiihonen, Jari Koistinaho, Zlatko Trajanoski & Frank Edenhofer

Molecular Psychiatry (2024)

  • Schizophrenia

Schizophrenia (SCZ) is a neuropsychiatric disorder caused by a combination of genetic and environmental factors. The etiology of the disorder remains elusive, although it is hypothesized to be associated with aberrant responses to neurotransmitters such as dopamine and glutamate. Investigating the link between dysregulated metabolites and distorted neurodevelopment therefore holds promise to offer valuable insights into the underlying mechanism of this complex disorder. In this study, we aimed to explore a presumed correlation between the transcriptome and the metabolome in a SCZ model based on patient-derived induced pluripotent stem cells (iPSCs). For this purpose, iPSCs were differentiated towards cortical neurons and samples were collected longitudinally at various developmental stages, reflecting neuroepithelial-like cells, radial glia, and young and mature neurons. The samples were analyzed by both RNA sequencing and targeted metabolomics, and the two modalities were used to construct integrative networks in silico. This multi-omics analysis revealed significant perturbations in the polyamine and gamma-aminobutyric acid (GABA) biosynthetic pathways during rosette maturation in SCZ lines. In particular, we observed downregulation of the glutamate decarboxylase-encoding genes GAD1 and GAD2, as well as of their protein product GAD65/67 and their biochemical product GABA, in SCZ samples. Inhibition of ornithine decarboxylase resulted in a further decrease of GABA levels, suggesting a compensatory activation of the ornithine/putrescine pathway as an alternative route for GABA production. These findings indicate an imbalance of cortical excitatory/inhibitory dynamics arising during early neurodevelopmental stages in SCZ. Our study supports the hypothesis that disruption of inhibitory circuits is causative for SCZ, and establishes a novel in silico approach that enables integrative correlation of metabolic and transcriptomic data from psychiatric disease models.


Introduction

Although schizophrenia (SCZ) is a detrimental neuropsychiatric disorder affecting 0.32–1.0% of the global population [ 1 ], very little is known about the pathological mechanisms underlying the disease’s manifestation and progression. The predominant model depicts SCZ as a neurodevelopmental disorder in which fundamental neurobiological alterations occur prior to the manifestation of symptoms, through the interplay of genetic predispositions and environmental factors [ 2 ]. It has been hypothesized that the pathology of the disease is associated with a distorted regulation of, and response to, dopamine and/or glutamate [ 3 , 4 ]. However, aberrant glutamatergic and dopaminergic neurotransmission alone fails to capture the complexity of the disease’s etiology [ 5 ]. Metabolomics has emerged as a promising tool for the identification of SCZ-associated metabolites [ 6 , 7 , 8 , 9 ]. Recent studies revealed altered metabolic profiles in blood serum and plasma from patients with SCZ compared to control individuals, spanning several metabolite classes, including amino acids [ 10 , 11 , 12 ], phospholipids [ 13 , 14 , 15 , 16 ], and neuropeptides [ 17 , 18 ]. Although these observations may lead to improved disease diagnosis and the discovery of novel biomarkers, assessing a putative link between the dysregulated metabolic and transcriptomic profiles and distorted early human neurodevelopment remains challenging, owing both to the limited access to patients’ biomaterial and to the poor understanding of how transcriptomic and metabolomic data correlate. Induced pluripotent stem cells (iPSCs) have developed into a powerful model for investigating early neurodevelopmental aberrations in SCZ patients [ 19 , 20 , 21 , 22 ]. iPSCs can be derived by reprogramming adult somatic cells from healthy individuals and patients with SCZ [ 23 ]. Patient-specific iPSCs carrying the complex genetic makeup of their donors can subsequently be differentiated into appropriate neural models, serving as a source for transcriptome-wide analysis and metabolic studies [ 24 , 25 , 26 ].

Our study aims to exploit iPSC technology to establish an integrative network analysis of a presumed correlation between transcriptomics and metabolomics, focusing on early neurodevelopmental changes in SCZ. To this end, we differentiated iPSC lines derived from SCZ patients and control individuals into cortical neurons and examined six developmental stages along the differentiation trajectory. We performed transcriptome-wide analyses and targeted metabolomics, and developed an in silico approach to integrate gene expression data with metabolic profiles in a network, offering a more holistic view of cellular dysregulations. Our analysis revealed a distortion of the main γ-aminobutyric acid (GABA) biosynthetic pathway, through the downregulation of glutamate decarboxylation during the rosette maturation stage. Moreover, we found the non-canonical GABA biosynthetic route through putrescine to be upregulated in the SCZ lines, presumably reflecting a compensatory mechanism. Our study establishes a novel in silico approach for correlating metabolic and transcriptomic data, unraveling an imbalance in cortical excitatory/inhibitory dynamics manifested during early neurodevelopmental stages in SCZ iPSC lines.

Materials and methods

Cell culture

Eight human iPSC lines were employed in this study (Supplementary Table  S1 ). The cells were cultured on plates coated with Corning® Matrigel® hESC-Qualified Matrix (Corning, Cat. No. 354277) in StemMACS™ iPS Brew XF Medium (Miltenyi Biotec, Cat. No. 130-104-368) or Essential 8™ medium (ThermoFisher Scientific, Cat. No. A157001), under antibiotic-free conditions, and maintained at 37 °C, 5% CO 2 . iPSCs were passaged every 3–5 days using either Accutase™ or 0.5 mM phosphate-buffered saline (PBS)/EDTA. Briefly, when passaging with Accutase™, cells were first washed with DMEM, 1 ml of Accutase™ was added per well of a 6-well plate, and the cells were incubated at 37 °C for 3–4 min to ensure proper detachment. After the incubation, an equal volume of DMEM was added to the well and the cells were collected and centrifuged at 1200 rpm for 3 min at 4 °C. For splitting with PBS/EDTA (ThermoFisher Scientific, Cat. No. 15575020), cells were briefly washed with DMEM, 1 ml PBS/EDTA was added per well, and the cells were incubated until they began to dissociate. The EDTA was aspirated, and the cells (or the cell pellet, when splitting with Accutase™) were resuspended in fresh medium supplemented with 10 μM ROCK inhibitor Y-27632 (Miltenyi Biotec, Cat. No. 130-106-538). The next day, the medium was changed back to iPSC medium without ROCK inhibitor. All cell lines were thoroughly characterized for pluripotency (Supplementary Fig.  1A, B ) and were tested regularly for mycoplasma contamination.

Cortical neuronal differentiation

The generation of cortical progenitors and neurons was performed as described before [ 27 , 28 ] with minor modifications. Briefly, iPSCs from five 6-wells were collected with Accutase™ and seeded onto an ES-Matrigel-coated 12-well. Upon reaching 100% confluency, StemMACS™ iPS Brew XF Medium was replaced by neural induction medium (NIM; DMEM/F12 Glutamax, Neurobasal, 100 mM L-Glutamine, 0.5 × N-2, 0.5 × B-27 + Vitamin A, 50 μM Non-Essential Amino Acids, 50 μM 2-mercaptoethanol, 2.5 μg/ml insulin, 1 μM dorsomorphin, 10 μM SB431542). The medium was changed every day until the appearance of a tightly packed neuroepithelial sheet (NES). The NES was passaged with 0.5 mM EDTA at a ratio of 1:2 or 1:3 onto plates coated with Corning® Matrigel® Growth Factor Reduced (GFR) Basement Membrane Matrix (GFR-Matrigel; Corning, Cat. No. 354230). The next day, the medium was switched to neural maintenance medium (NMM; DMEM/F12 Glutamax, Neurobasal, 100 mM L-Glutamine, 0.5 × N-2, 0.5 × B-27 + Vitamin A, 50 μM Non-Essential Amino Acids, 50 μM 2-mercaptoethanol, 2.5 μg/ml insulin) and was changed every other day. Upon the appearance of rosettes, 20 ng/ml FGF2 (Peprotech, Cat. No. 100-18C) was added to the medium for four days. On the fourth day of FGF2 treatment, the cells were split again with 0.5 mM EDTA at a ratio of 1:2 to 1:3 onto GFR-Matrigel-coated plates. The medium was switched back to NMM and the cortical progenitors were maintained for about 5–10 days until neurons accumulated outside of the rosettes. At this point, cells were passaged with Accutase™, and 50,000 cells/cm² were seeded on poly-L-ornithine/laminin-coated plates for further neuronal differentiation. Alternatively, 2–4 million cells/ml were frozen in neural freezing medium. Neurons were differentiated further with half-medium changes every two to three days. Samples were harvested at day (d) 0, 7, 16, 27, 50, and 100.

For the DFMO treatment, adherent cell cultures were treated daily with 10 µM DFMO (difluoromethylornithine hydrochloride hydrate; Merck, Cat. No. D193) starting from the first day of differentiation until the collection of cellular pellet and supernatant for mass spectrometry analysis or fixation for subsequent immunocytochemistry (ICC).

Immunocytochemistry

Cells were fixed in 4% paraformaldehyde (PFA; Sigma, Cat. No. 158127-500G) in PBS for 20 min at room temperature (RT). Non-specific binding was blocked by incubation with blocking buffer (3% bovine serum albumin (BSA), 0.2% Triton X-100 in PBS) for 1 h at RT. The primary antibody (Ab) was diluted in blocking buffer at the recommended concentration and the Ab solution was applied overnight at 4 °C. The following primary Abs were used at the following dilutions: AFP 1:400 (Dako, Cat. No. A000829-2), GAD65/67 1:100 (Abcam, Cat. No. AB183999), GFAP 1:400 (Sigma, Cat. No. G3893-.2 ML), Ki67-VioR667 1:200 (Miltenyi, Cat. No. 130-120-422), MAP2 1:1,000 (SynapticSystems, Cat. No. 188006), NEUN 1:500 (Sigma, Cat. No. ABN78), OCT3/4 1:200 (Szabo-Scandic, Cat. No. GTX101497-100), PAX6 1:500 (Invitrogen, Cat. No. 42-6600), S100β 1:750 (Abcam, Cat. No. ab52642), SMA 1:500 (Abcam, Cat. No. ab7817), SOX1 1:200 (R&D Systems, Cat. No. AF3369), SOX2 1:500 (R&D Systems, Cat. No. MAB2018), TAU 1:200 (Cell Signaling Technology, Cat. No. 4019), TUBB3 1:1,000 (BioLegend, Cat. No. 801202 and Abcam, Cat. No. ab52623), vGLUT 1:100 (SynapticSystems, Cat. No. 135311). The secondary Ab was diluted 1:500 in 1.5% BSA, 0.2% Triton X-100 in PBS, and the solution was applied for 2 h at RT. The secondary Abs used in this study were: donkey anti-rabbit Alexa Fluor™ 488 (ThermoFisher Scientific, Cat. No. A-21206), donkey anti-rabbit Alexa Fluor™ 546 (ThermoFisher Scientific, Cat. No. A-10040), donkey anti-mouse Alexa Fluor™ 594 (ThermoFisher Scientific, Cat. No. A-21203), donkey anti-mouse Alexa Fluor™ 647 (ThermoFisher Scientific, Cat. No. A-31571), donkey anti-goat Alexa Fluor™ 594 (ThermoFisher Scientific, Cat. No. A-11058), goat anti-chicken Alexa Fluor™ 594 (ThermoFisher Scientific, Cat. No. A32759). Finally, nuclei were counterstained with 4′,6-diamidino-2-phenylindole (DAPI; ThermoFisher Scientific, Cat. No. D21490) in PBS at a 1:5000 dilution for 5 min at RT.
The coverslips were mounted using Aqua-Poly/Mount mounting medium (PolySciences, Cat. No. 18606-20).

Microscopy, image acquisition and image analysis

Fluorescence images were acquired with a Zeiss Axio Observer Z1 inverted fluorescence microscope and a Leica DMi8 inverted microscope. Image acquisition was performed under the same exposure and laser intensity settings for each set of analyses. For each sample, ten random fields of view were acquired, with a minimum of 20 z-stacks collected per field to ensure proper signal coverage. Further image processing was carried out in ImageJ. For quantitative fluorescence intensity analysis, a maximum intensity projection was applied and mean fluorescence intensity values were calculated after background noise subtraction. These values were then normalized to the DAPI+ nuclear area to account for variations in cell density between fields of view.
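The normalization step described above reduces to a small calculation. The following Python sketch is ours (not the authors' ImageJ pipeline); the function name and inputs are hypothetical, and real pixel data would come from the projected image:

```python
# Illustrative sketch of the intensity quantification: mean fluorescence
# after background subtraction, normalized to the DAPI+ nuclear area.

def normalized_intensity(pixel_values, background, dapi_area_px):
    """Mean background-subtracted intensity per unit of DAPI+ area (pixels)."""
    if dapi_area_px <= 0:
        raise ValueError("DAPI+ area must be positive")
    # Subtract background noise, clipping negative values to zero.
    corrected = [max(v - background, 0) for v in pixel_values]
    mean_intensity = sum(corrected) / len(corrected)
    # Normalize to account for cell-density differences between fields of view.
    return mean_intensity / dapi_area_px
```

With pixel values [10, 20, 30], background 10, and a DAPI+ area of 2 px, this returns 5.0.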

Reverse transcription quantitative PCR

Total RNA was extracted from cells using TRI Reagent® (Merck, Cat. No. T9424), according to the manufacturer’s instructions. Genomic DNA was removed through treatment with DNase I (Sigma-Aldrich, Cat. No. AMPD1). Subsequently, 1 µg of purified RNA was reverse transcribed into cDNA using the RevertAid RT Reverse Transcription Kit (ThermoFisher Scientific, Cat. No. K1691), following the manufacturer’s guidelines. The expression levels of specific target genes at the mRNA level were quantified via reverse transcription quantitative PCR (RT-qPCR) using the 5× HOT FIREPol EvaGreen qPCR Mix Plus (no ROX) (Solis BioDyne, Cat. No. 08-25-00001-10). Samples were analyzed in technical triplicates to ensure data reliability. Non-template controls (NTCs) were included for each primer pair in every assay to monitor for reagent contamination and primer-dimer formation. To confirm the absence of genomic DNA contamination, random RNA samples were evaluated through gel electrophoresis. The RT-qPCR assays were conducted on the CFX Connect Real-Time PCR Detection System (Bio-Rad). Gene expression levels were normalized to the housekeeping gene ACTB. Relative expression changes were calculated employing the ΔΔCt method [ 29 ]. The list of the primers used for RT-qPCR assays is shown in Table  1 .
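The ΔΔCt calculation [ 29 ] used for relative expression can be written in a few lines. This is a minimal Python sketch of the standard method, assuming ACTB as the housekeeping gene as stated above; the Ct values in the example are invented:

```python
# Minimal sketch of the 2^-ΔΔCt method for relative gene expression.

def ddct_fold_change(ct_target_sample, ct_actb_sample,
                     ct_target_control, ct_actb_control):
    """Fold change of a target gene vs. control, normalized to ACTB."""
    delta_sample = ct_target_sample - ct_actb_sample    # ΔCt in the sample
    delta_control = ct_target_control - ct_actb_control  # ΔCt in the control
    ddct = delta_sample - delta_control                  # ΔΔCt
    return 2 ** (-ddct)
```

For example, `ddct_fold_change(24, 20, 26, 20)` returns 4.0, i.e., a four-fold upregulation relative to the control (lower Ct means more template).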

Bulk RNA sample collection, quality control, library preparation, and bulk RNA sequencing

Total RNA was isolated from cells at six time points during the cortical differentiation and was prepared for paired-end mRNA sequencing. RNA extraction was performed using the TRI Reagent® (Merck, Cat. No. T9424) according to the manufacturer’s guidelines. Genomic DNA digest was performed with the use of the TURBO DNA-free™ Kit (ThermoFisher Scientific, Cat. No. AM2238). For the library preparation, the Illumina TruSeq RNA Library Prep Kit v2 was used (Illumina, Cat. No. RS-122-2001, RS-122-2002). Quality, as well as concentration of RNA were assessed employing the Agilent RNA 6000 Pico kit (Agilent, cat. no. 5067-1513), Nanodrop, the NEBNext® Library Quant Kit for Illumina® (New England Biolabs, Cat. No. E7630S) and the Qubit RNA Integrity and Quality (IQ) Assay Kit (ThermoFisher Scientific, Cat. No. Q33222). All the kits were used according to the manufacturer’s guidelines. Paired-end sequencing was performed with the NextSeq 500/550 v2 Kit (150 cycles) (Illumina).

Transcriptomic data pre-processing, heatmap generation, and differential gene expression analysis

Low-quality ends and adapter sequences were trimmed using the Trim Galore! wrapper. Reads were mapped to the human reference genome (GRCh38) using the open-source software STAR [ 30 ]. Raw counts were generated with the Hypergeometric Optimization of Motif EnRichment (HOMER) suite [ 31 ]. All subsequent analyses were performed in R [ 32 ]. Differential gene expression analysis was performed using the DESeq2 package [ 33 ]. Raw counts were normalized using the median-of-ratios method and transformed with the variance stabilizing transformation (vst) [ 34 ]. Heatmaps were generated with the ClustVis tool [ 35 ], using the z-score of the vst-transformed data for every gene. Gene ontology (GO) enrichment analysis was performed using the ShinyGO 0.76 online tool [ 36 ].
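The per-gene z-scoring used for the heatmaps is a simple center-and-scale across samples. This Python sketch is illustrative (the actual analysis was done in R/ClustVis), and the expression values are made up:

```python
# Sketch of per-gene z-scoring of vst-normalized counts for heatmap display.
from math import sqrt

def zscore_row(values):
    """Center and scale one gene's expression across samples (population SD)."""
    n = len(values)
    mean = sum(values) / n
    sd = sqrt(sum((v - mean) ** 2 for v in values) / n)
    return [(v - mean) / sd for v in values]
```

After z-scoring, each gene's row has mean 0 and unit variance, so the heatmap colors reflect relative expression across time points rather than absolute counts.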

A likelihood ratio test (LRT) was used to identify the differentially expressed genes (DEGs) between SCZ and control (CTRL) across the multiple time points of neuronal differentiation [ 32 ]. The LRT compared the full model, containing the covariates ‘sex’, ‘batch’, ‘time point’, and ‘disease’, with a reduced model containing only ‘sex’, ‘batch’, and ‘time point’. P values were corrected for FDR using the Benjamini-Hochberg method.
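The core of an LRT is the statistic 2·(log-likelihood of full model − log-likelihood of reduced model), compared against a chi-square distribution with degrees of freedom equal to the number of dropped parameters. The Python sketch below is conceptual only: DESeq2 actually fits negative-binomial GLMs per gene, and the log-likelihood values here are invented. It covers only the one-degree-of-freedom case (dropping the single ‘disease’ term), for which the chi-square tail probability is erfc(√(x/2)):

```python
# Conceptual sketch of a likelihood ratio test (df = 1 case only).
from math import erfc, sqrt

def lrt_pvalue(loglik_full, loglik_reduced):
    """Return (LRT statistic, chi-square p-value with 1 degree of freedom)."""
    stat = 2.0 * (loglik_full - loglik_reduced)
    # For df = 1, the chi-square survival function is erfc(sqrt(x / 2)).
    p_value = erfc(sqrt(stat / 2.0))
    return stat, p_value
```

For instance, log-likelihoods of −100 (full) and −102 (reduced) give a statistic of 4.0 and p ≈ 0.046, marginally significant before FDR correction.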

Weighted gene correlation network analysis (WGCNA) and module-traits relationships

Weighted gene correlation network analysis (WGCNA) groups genes with similar co-expression patterns into modules. The vst counts were used to build a co-expression network using the WGCNA [ 37 ] package in R [ 32 ]. The data were corrected for sex and batch effects using the ComBat function implemented in the sva package [ 38 ]. The topological overlap measure was calculated from the adjacency matrix. The Dynamic Tree Cut algorithm, implemented in the WGCNA package, was used to identify the modules; the gray module contains all genes that were not assigned to any other module. Module eigengenes were calculated, and Pearson’s correlation was used to compare modules with each other and with the traits SCZ and differentiation time point. The top 25% of genes with the highest module membership (MM) were identified as hub genes.
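The module-trait relationship boils down to a Pearson correlation between each module eigengene and a trait vector (e.g., SCZ coded as 1, CTRL as 0). This Python sketch is ours, with invented eigengene values, shown only to make the calculation explicit:

```python
# Sketch of correlating a module eigengene with a binary trait (SCZ = 1, CTRL = 0).
from math import sqrt

def pearson(x, y):
    """Pearson correlation coefficient of two equal-length sequences."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sqrt(sum((a - mx) ** 2 for a in x))
    sy = sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)
```

A correlation near +1 means the module's eigengene tracks the SCZ samples; near −1, the CTRL samples; near 0, neither.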

Gene ontology annotation

Functional enrichment analysis was performed on an input gene ID list using the g:GOSt tool from the g:Profiler [ 39 ] R package. Statistical significance was computed using the g:SCS correction at a threshold of p  < 0.05.

Targeted metabolomics, sample collection, and data processing

The cells were washed with 1 ml sterile 1× PBS for 60 s. After the wash, the cells were scraped in 1 ml PBS and the suspension was collected and centrifuged at 4000 rpm for 5 min at RT. The cell pellets were kept constantly on dry ice and stored at −80 °C until further processing. The cell supernatant was collected after a 24-h incubation, centrifuged at 4000 rpm for 10 min, immediately placed on dry ice, and stored at −80 °C. Samples were analyzed using the biocrates MxP® Quant 500 kit (biocrates life sciences AG, Cat. No. 21094.12). Liquid chromatography-tandem mass spectrometry (LC-MS/MS) was employed to analyze small molecules, including analyte classes such as amino acids, biogenic amines, carboxylic acids, and amino acid-related molecules [ 40 ]. Lipid species were measured using flow injection analysis tandem mass spectrometry (FIA-MS/MS). Small molecules were quantified with external 7-point calibrations and internal standards, and lipids were quantified by internal standards [ 41 ]. The raw data were processed by applying a modified 80% rule to reduce false positive measurements [ 42 ]. Actual missing values, i.e., values that were above the level of detection (LOD) at one time point but not at another, were imputed uniformly at random with a non-zero value between LOD/2 and LOD. Missing values within one class (i.e., time points and metabolites) were imputed using the arithmetic mean of the class. Batch effects were corrected by centering the data within groups (i.e., time points) and batches. The performance of the normalization was assessed by plotting the row standard deviations versus the row means and by principal component analysis (PCA). In addition, a variancePartition analysis was performed to evaluate the contribution of each component of the study design (i.e., time point, batch, and condition) to the measured variation of each metabolite [ 43 ].
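The filtering and imputation steps described above can be sketched as follows. This Python outline is illustrative, not the authors' pipeline: the 80% threshold follows the text, `None` stands in for a below-LOD measurement, and the function names are ours:

```python
# Sketch of the metabolomics pre-processing: modified 80% rule plus
# random imputation between LOD/2 and LOD for below-LOD values.
import random

def passes_80_rule(values, lod, threshold=0.8):
    """Keep a metabolite if >= 80% of its measurements exceed the LOD."""
    above = sum(1 for v in values if v is not None and v > lod)
    return above / len(values) >= threshold

def impute_below_lod(values, lod, rng=None):
    """Replace missing (None) entries with a random value in [LOD/2, LOD]."""
    rng = rng or random.Random(0)  # seeded for reproducibility in this sketch
    return [v if v is not None else rng.uniform(lod / 2, lod) for v in values]
```

Filtering is applied per class (time point and metabolite) before imputation, so that metabolites detected in too few replicates are discarded rather than filled in.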

For metabolite extraction, cell pellets were resuspended in 500 µL ice-cold methanol. Metabolites from supernatants (50 µL) were extracted using 450 µL 8:1 methanol:water. Fully 13C, 15N labeled amino acid standard (Cambridge Isotope Laboratories, Cat. No. MSK-CAA-1) and 6D-gamma hydroxybutyrate (Sigma-Aldrich, Cat. No. 615587) were spiked into samples at the first step of the extraction. After simultaneous proteo-metabolome liquid-liquid extraction [ 44 ], protein content was determined from extracted cellular interphases using a Pierce Micro BCA Protein Assay Kit (Thermo Fisher Scientific, Cat. No. 23235). Dried metabolite samples from cell pellets were dissolved in 20 µL 0.1% formic acid (FA) or 50 µL 0.1% FA for the analysis from the supernatant samples. The sample (1 µL) was injected on an Atlantis Premier BEH C18 AX column (1.7 µm, 2.1 × 150 mm, Waters, 186009361) equilibrated at 40 °C using an Acquity Premier UPLC system (Waters). A gradient was run at a flowrate of 0.4 mL/min with mobile phase A (0.1% FA in water) and mobile phase B (0.1% FA in acetonitrile) as follows: 1 min at 1% B, to 40% B in 1 min, 40% B to 99% B in 0.5 min, hold at 99% B for 1.1 min, 99% B to 1% B in 0.1 min followed by 1.8 min of re-equilibration at 1% B. GABA and Glutamate (Glu) were detected using a Xevo-TQ XS Mass spectrometer (Waters) equipped with an electrospray ionization source running in positive mode. The transitions 104–>69 (endogenous GABA), 110–>73 (labeled GABA), 148–>102 (endogenous Glu) and 154–>107 (labeled Glu) were used for quantification. The raw files were processed using MS Quan in waters connect (Waters, V1.7.0.7). The data was further analyzed in R and normalized to the protein content.

Short time-series expression miner (STEM) analysis of metabolomic and transcriptomic data

To analyze time-related cluster dynamics, the non-parametric clustering algorithm of Short Time-series Expression Miner (STEM) was used [ 45 ]. STEM is an online tool that assigns genes or metabolites to significant temporal expression profiles. The Maximum Number of Model Profiles and the Maximum Unit Change in Model Profiles between time points were set to 50 and 2, respectively. Data were normalized to d0. Integrated into the STEM tool is a GO enrichment analysis. All annotations (Biological Process (BP), Molecular Function (MF), and Cellular Component (CC)) were selected and applied. Statistical significance was computed and FDR-corrected at p  < 0.05.

Network analysis

The network establishment was based on the gene expression and metabolite level changes across the five successive time point comparisons, along the cortical differentiation. The connectivity information for the initial network was acquired from the publicly available recon3D stoichiometric model data set (available at https://www.vmh.life/#downloadview , retrieved in September 2020) [ 46 ]. Ultimately, 51 metabolites and 1135 genes were matched with their corresponding IDs.

Briefly, the network was constructed in the following steps. Initially, all reactions associated with any of the target genes were extracted. The metabolites associated with these reactions were identified, and the educt-product stoichiometry was applied for every metabolite involved in the network. Subsequently, the reaction data were filtered so that only the genes and metabolites measured in our dataset were retained. The network was further enriched with protein-protein interaction information derived from the signor database (available at https://signor.uniroma2.it/downloads.php , retrieved in September 2020) [ 47 ]. Finally, the network vertices were constructed from the unique metabolites and genes present in the edge dataset and were annotated with vertex attributes, such as the vertex type (i.e., gene/metabolite). Log 2 fold changes (log 2 FC) were converted to a color gradient ranging from blue (downregulation compared to the previous time point) to red (upregulation).
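A log2FC-to-color mapping of the kind described can be sketched as a linear blue-white-red ramp. This Python example is ours: the clipping bound of ±2 and the exact color stops are assumptions, not taken from the paper:

```python
# Sketch of mapping a log2 fold change onto a blue -> white -> red gradient
# (blue = downregulated, red = upregulated), clipped at +/- `limit`.

def log2fc_to_rgb(log2fc, limit=2.0):
    """Return an (R, G, B) tuple on a linear blue-white-red ramp."""
    x = max(-limit, min(limit, log2fc)) / limit  # clip and rescale to [-1, 1]
    if x >= 0:
        # Fade from white (0) toward pure red (+1).
        g = b = round(255 * (1 - x))
        return (255, g, b)
    # Fade from white (0) toward pure blue (-1).
    r = g = round(255 * (1 + x))
    return (r, g, 255)
```

A log2FC of 0 maps to white, strong upregulation saturates at red, and strong downregulation saturates at blue, matching the gradient convention used in the network figures.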

Extraction of subnetworks from the parental network was based on assigning membership to pathways, as defined by the KEGG pathway database, and selecting the subnetwork that included the highest number of differentially expressed genes and metabolites with the closest degree distribution of the vertices. Pie charts with five equal fractions were used to visualize the fold changes of a single metabolite or gene across the transitions between succeeding time points. Metabolites were visualized as ellipses and genes as circles.

Metabolites that were needed as essential interconnections between measured metabolites, but were not themselves measured in our dataset, were visualized as small dots. The position of every node was provided as coordinates on a 2D plane. Network visualizations were performed using the R igraph package [ 48 ].

In vitro cortical differentiation of SCZ and control iPSC lines and sampling

The reprogramming of adult somatic cells from affected patients into iPSCs allows various approaches for disease modeling [ 23 , 49 ]. In this study, we subjected iPSCs to a cortical differentiation protocol previously reported by Shi et al. [ 28 ] that yields mature cortical neurons in a stepwise manner and particularly reflects very early stages of neurodevelopment (Fig.  1A ). We extracted samples from six time points along the neuronal differentiation. At the iPSC stage (day 0; d0), cells expressed pluripotency markers, exhibited characteristic iPSC morphology, and were able to differentiate into the three germ layers (Fig.  1A, B , Supplementary Fig.  1 ). Subsequently, the cells were directed to form a tightly packed neuroepithelial sheet (NES)-like structure through exposure to a neural induction medium (NIM) containing small molecules that modulate the WNT and TGFβ pathways. The neural progenitor cells (NPCs) present at this stage expressed SOX1 and SOX2 (d7) and subsequently traversed a rosette formation stage (d12). A short treatment with FGF2 resulted in the expansion of the NPC population, which expressed the proliferation marker Ki67 and the neural stem cell marker PAX6 (Fig.  1B ). Finally, TUBB3+ neurons appeared and started migrating out of the rosettes at around d27, and further differentiated into young and mature neurons (Fig.  1B–D ). During the later differentiation time points (d50–100), GFAP+/S100β+ astrocytes appeared in the adherent cultures (Fig.  1B, D ). Interestingly, the neurons observed between d50 and d100 were positive for both the glutamatergic marker vesicular glutamate transporter 1 (vGLUT1) and glutamate decarboxylase 65/67 (GAD65/67) (Fig.  1E ), consistent with the recently published finding that certain classes of neurons co-express glutamate and GABA machinery [ 50 ].

figure 1

A Schematic presentation of the cortical differentiation protocol (top panel), originally developed by Shi et al. [ 27 ] and representative brightfield images (bottom panel) corresponding to the key developmental time points during the cortical differentiation process. Scalebars, 100 μm. B Representative immunocytochemistry (ICC) stainings of cell stage-specific markers at day (d)0, 7, 16, 27, 50 and 100 (as depicted in A). Cells at d0 express the pluripotency markers SOX2 (red) and OCT4 (green). Cells on d7 are expressing the neural stem cell markers SOX1 (red) and SOX2 (green). On d16, characteristic rosette structures are formed and the neural progenitors express the proliferation marker KI67 (red) and the neural stem cell marker PAX6 (green). At d27 PAX6+ (green) neural stem cells are still present together with TUBB3+ (red) young neurons. At d50 GFAP+ astrocytes (red) appear together with TUBB3+ (green) neurons. Finally, at d100 mature MAP2+ (red)/ vGLUT1+ (green) neurons are present. Scalebars, 100 µm. C ICC analysis showing mature TAU+ (red)/NeuN+ (green) neurons present in culture in the later developmental stages (d100) of neuronal differentiation. Nuclei are counterstained with DAPI. Scalebar, 50 µm. D ICC analysis of differentiated cultures at d75 showing S100β+ (green) astrocytes together with MAP2+ (red) mature neurons. Nuclei are counterstained with DAPI. Scalebar, 100 µm. E Double-positive GAD65/67 (magenta) and vGLUT1 (green) mature MAP2+ (red) neurons are present in d100 neuronal cultures. Scalebar, 20 µm. iPSC m. induced pluripotent stem cell medium, NIM neural induction medium, NMM neural maturation medium, bFGF basic fibroblast growth factor, d day in vitro.

Transcriptomic analysis indicates extracellular matrix component abnormalities in SCZ samples

In order to investigate potential dysregulations in SCZ during cortical differentiation at the transcriptional level, we performed bulk RNA sequencing (RNA-seq) of SCZ and CTRL samples derived from distinct time points along the differentiation process (Supplementary Table  S2 ). PCA revealed distinct clustering of all lines at the iPSC stage, followed by clustering along a developmental trajectory for the subsequent differentiation time points (d0 to d100; Fig.  2A ). The largest component of variability (principal component 1, PC1) represents neurogenesis and effectively distinguished the six developmental time points. PC2 (13.3% of explained variance) further separated the iPSC-stage samples from the NPC and neural rosette samples (d7, d16, and d27) and the neuronal samples (d50 and d100).
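The separation in Fig. 2A rests on standard PCA of the expression matrix. The following minimal sketch uses a toy matrix with invented values (the study's samples and genes are not reproduced here) to show how sample scores and the per-component explained-variance fractions, such as the 13.3% quoted for PC2, are obtained:

```python
import numpy as np

# Toy expression matrix: 4 samples (rows) x 3 genes (columns).
# In the study, rows would be iPSC/NPC/neuron samples across d0-d100;
# these numbers are invented purely for illustration.
X = np.array([
    [2.0, 0.0, 1.0],
    [4.0, 1.0, 1.0],
    [6.0, 2.0, 1.0],
    [8.0, 3.0, 1.0],
])

# Center each gene, then take the SVD; the principal components are the
# projections of the samples onto the right singular vectors.
Xc = X - X.mean(axis=0)
U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
scores = Xc @ Vt.T                  # sample coordinates (PC1, PC2, ...)
explained = S**2 / np.sum(S**2)     # fraction of variance per component

print(np.round(explained, 3))
```

Centering each gene before the SVD is what makes the squared singular values correspond to per-component variance; on real count data one would first apply a log or variance-stabilizing transform.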

figure 2

A Plot of principal components PC1 and PC2, obtained from the PCA of transcriptomic data from all SCZ and CTRL lines of six time points during cortical differentiation. Each dot represents a cell line, colored by the respective time point. B Trajectory analysis of all differentially expressed genes (DEGs) from d0 to 100. Black line, CTRL group; red line, SCZ group. Data are represented as arithmetic mean ± SEM. C Heatmap visualization showing the most significant SCZ DEGs. Rows: DEGs; columns: CTRL (CTRL 1 and CTRL 2) and schizophrenia (SCZ 1 and SCZ 2) samples in d0, 7, 16, 27, 50, and 100. GO analysis highlighting the biological processes ( D ), molecular functions ( E ) and cellular components ( F ) of the DEGs. GO terms were generated with the ShinyGO 0.80 graphical gene-set enrichment tool [ 36 ]. d day in vitro, CTRL control, SCZ schizophrenia.

To analyze the genes that are differentially expressed between the CTRL and SCZ lines, DEG analysis was performed, revealing 28 genes, all of which were significantly upregulated in SCZ over CTRL (Table  2 ). Interestingly, trajectory analysis showed the most pronounced difference at d16, where 61.2% of the DEGs were upregulated in SCZ as compared to the CTRL group (Fig.  2B ). Subsequent analysis of the DEGs revealed that the majority of the upregulated SCZ genes (Fig.  2C ) are related to GO terms associated with nervous system development (Fig.  2D ) and extracellular matrix (ECM) components (Fig.  2E, F ), including genes of the collagen superfamily, as well as fibronectin 1 ( FN1 ), and genes related to DNA-binding transcription activator activity (Fig.  2C, F ). Taken together, transcriptomic analysis indicates upregulated transcription of genes associated with, among others, ECM components during SCZ iPSC neural differentiation at all the developmental time points investigated, with a distinct peak at the rosette stage (d16).
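Differential expression in the study was computed with DESeq2's negative-binomial model in R (Love et al.); as a deliberately simplified, hypothetical stand-in, the sketch below applies a per-gene Welch t-test to invented log2-expression values, followed by Benjamini-Hochberg FDR adjustment:

```python
import numpy as np
from scipy import stats

# Toy log2-expression for 3 genes in 4 CTRL vs 4 SCZ samples (all values
# invented; the study itself used DESeq2, not a t-test).
ctrl = np.array([[5.0, 5.1, 4.9, 5.0],    # gene A: unchanged
                 [2.0, 2.2, 1.9, 2.1],    # gene B: unchanged
                 [3.0, 3.1, 2.9, 3.0]])   # gene C: upregulated in SCZ
scz  = np.array([[5.1, 4.9, 5.0, 5.2],
                 [2.1, 2.0, 2.2, 1.8],
                 [6.0, 6.2, 5.9, 6.1]])

# Per-gene Welch t-test, then Benjamini-Hochberg FDR adjustment.
pvals = np.array([stats.ttest_ind(c, s, equal_var=False).pvalue
                  for c, s in zip(ctrl, scz)])
order = np.argsort(pvals)
ranked = pvals[order] * len(pvals) / (np.arange(len(pvals)) + 1)
qvals = np.empty_like(pvals)
qvals[order] = np.minimum.accumulate(ranked[::-1])[::-1]

# Since the inputs are already log2 values, the mean difference is log2FC.
log2fc = scz.mean(axis=1) - ctrl.mean(axis=1)
print(np.round(log2fc, 2), np.round(qvals, 4))
```

Only gene C survives the FDR threshold in this toy example, mirroring the idea that a small set of genes is called significant after multiple-testing correction.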

WGCNA reveals gene modules correlated to the SCZ trait at rosette stage

Next, we performed a weighted correlation network analysis (WGCNA) on the bulk transcriptomic data of the two conditions to gain more insight into the biological networks underlying the pathological developmental mechanisms. This analysis identified eleven modules in total, including the gray module (Fig.  3A, B ). When the module eigengenes (ME) were compared to the disease trait, the red module was found to be correlated with the SCZ trait ( p -value = 0.04) (Fig.  3A ). Comparison of the six subsequent differentiation time points revealed that the red module is correlated with the rosette stages (d16–27) (Fig.  3B ). Dendrogram plots further supported the correlation of the MEred module with the SCZ trait and the d27 time point (Fig.  3C ). The comparison of the red module membership with the SCZ trait ( p -value = 1.7e–05; Fig.  3D ) and with d27 ( p -value = 4.1e–06; Fig.  3E ) revealed correlations of 0.3 and 0.32, respectively. Further analysis of the MEred module across all time points of SCZ vs CTRL supported a significant correlation of the module with the disease trait ( p -value = 0.0402; Fig.  3F ) and the d27 rosette stage ( p -value = 0.0353; Fig.  3G ). Taken together, the WGCNA revealed that the red module hub genes are significantly correlated with the SCZ trait and with d27, which corresponds to the developmental stage where young neurons are formed and migrate out of the neural rosettes. Intrigued by the correlation of the red module with the SCZ trait, we sought to determine the GO terms of the red module genes. The red module gene subset was correlated with ECM-associated terms (Fig.  3H ), including collagen-associated functions (Fig.  3I ). Thus, complementary bioinformatics approaches consistently reveal a group of potent gene candidates that are upregulated in the SCZ cell lines and strongly related to ECM processes at the late rosette developmental time point (d27).
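In WGCNA, a module eigengene is the first principal component of the module's expression matrix, and module-trait relationships like those in Fig. 3A are Pearson correlations of that eigengene with the trait. A minimal sketch with an invented three-gene module and a binary CTRL/SCZ trait:

```python
import numpy as np

# Toy module of 3 co-expressed genes across 6 samples (columns),
# first 3 CTRL, last 3 SCZ -- numbers invented for illustration only.
module = np.array([
    [1.0, 1.1, 0.9, 2.0, 2.1, 1.9],
    [0.5, 0.6, 0.4, 1.5, 1.6, 1.4],
    [2.0, 2.1, 1.9, 3.0, 3.1, 2.9],
])
trait = np.array([0, 0, 0, 1, 1, 1])   # 0 = CTRL, 1 = SCZ

# Module eigengene = first right singular vector of the centered
# (gene x sample) matrix, i.e. the dominant shared expression pattern.
centered = module - module.mean(axis=1, keepdims=True)
U, S, Vt = np.linalg.svd(centered, full_matrices=False)
eigengene = Vt[0]

# Pearson correlation of the eigengene with the disease trait.
r = np.corrcoef(eigengene, trait)[0, 1]
print(round(abs(r), 3))
```

The sign of an eigengene is arbitrary (it comes from an SVD), which is why the absolute correlation is taken here; WGCNA additionally reports a p-value for each such correlation.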

figure 3

Module eigengene (ME) comparison of each module with the disease trait ( A ) and the six subsequent time points ( B ), based on Pearson’s correlation. Columns correspond to the different traits; rows correspond to the ME of each module. Upper value in each cell corresponds to Pearson’s correlation; bottom value, p -value; right panel, color scale according to correlation. C Hierarchical clustering of ME and the SCZ trait reveals higher correlation of the red cluster with SCZ and d27. Scatter plots depicting the gene significance-module membership (MM) correlation for the SCZ trait ( D ) and for d27 ( E ). Box and whiskers plots showing the relationship between the red module and the disease trait ( p -value = 0.0402; F ) and the timepoint d27 ( p -value = 0.0353; G ). GO analysis highlighting the biological processes ( H ) and cellular components ( I ) of the red module genes. GO terms were generated with the ShinyGO 0.80 graphical gene-set enrichment tool [ 36 ]. d day in vitro, CTRL control, SCZ schizophrenia, ME module eigengene. * p  < 0.05.

Integrative transcriptomic-metabolic in silico analysis allows the generation of system-wide networks

To assess a presumed SCZ-dependent metabolic aberration, we subjected the samples from the cortical differentiation (Fig.  1A ) to a quantitative targeted metabolomics analysis. For that, we harvested both the cell supernatants and the cell pellets and analyzed them employing the MxP® Quant 500 kit (Table  S3 ). This targeted metabolomics approach allows the identification and quantification of 630 metabolites belonging to 26 analyte classes, including lipids and several small molecules (Table  S4 ). To assess the metabolomic dynamics during the cortical differentiation, a short time-series expression miner (STEM) temporal pattern analysis was performed [ 45 ]. STEM analysis revealed three significant profiles, #8, #24, and #44, with p -values = 8.9E–4, 4.1E–4, and 9.9E–5, respectively (Fig.  4 ). These profiles comprised a total of 25 metabolites that were enriched in the SCZ samples. The metabolites assigned to profile #8 were two amino acids (AA), asparagine (Asn) and cysteine (Cys); one AA-related metabolite, 5-aminovaleric acid (5-AVA); three lyso-phosphatidylcholines (Lyso-PC), Lyso-PC a C16:0, Lyso-PC a C16:1, and Lyso-PC a C18:1; and two phosphatidylcholines (PC), PC ae C36:0 and PC ae C38:5. Profile #8 showed a steadily decreasing trajectory during the neuronal differentiation. Metabolites assigned to profiles #24 and #44 were all PCs with zero to three double bonds in their fatty acid (FA) chains, plus one ceramide (Cer), Cer (d18:1/18:0). Metabolites assigned to profile #24 increased from d16 to d27 and decreased again at d50. Metabolite levels in profile #44 increased from d0 to d7 and stayed relatively stable at later time points (d27 to 100). Taken together, the STEM pattern analysis of metabolites during cortical differentiation revealed a significant enrichment of decreasing trajectories (profile #8) in the CTRL and SCZ groups. Moreover, the analysis revealed 1.64-fold more PCs in the SCZ profiles compared to the CTRL ones.
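STEM assigns each metabolite's d0-normalized trajectory to the model profile it matches best. A minimal sketch of that matching step, with one invented trajectory and two invented model profiles (the study used the STEM tool itself [ 45 ], which additionally computes enrichment p-values by permutation):

```python
import numpy as np

# Invented raw abundances of one metabolite at the six time points.
timepoints = ["d0", "d7", "d16", "d27", "d50", "d100"]
series = np.array([4.0, 3.0, 2.5, 2.0, 1.5, 1.0])
norm = series / series[0]            # normalize against d0, as in Fig. 4

# Two invented model profiles; STEM uses a fixed library of such shapes.
profiles = {
    "decreasing": np.array([1.0, 0.8, 0.6, 0.4, 0.3, 0.2]),
    "transient":  np.array([1.0, 1.5, 2.0, 1.5, 1.0, 1.0]),
}

# Assign the trajectory to the profile with the highest Pearson correlation.
best = max(profiles, key=lambda k: np.corrcoef(norm, profiles[k])[0, 1])
print(best)
```

A steadily falling trajectory like this one lands in the "decreasing" profile, analogous to the metabolites grouped into profile #8.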

figure 4

Short time-series expression miner (STEM) plots of significantly enriched temporal profiles obtained from the pre-processed and filtered final metabolomic data set, containing 112 metabolites. All data were normalized against d0. P -value, FDR adjusted at p  < 0.05; y-axis normalized concentration; data are represented as arithmetic mean ± SEM per metabolite from d0–100. d day in vitro, 5-AVA 5-aminovaleric acid, Asn asparagine, Cys cysteine, Lyso-PC lyso-phosphatidylcholine, PC phosphatidylcholine, Cer ceramide.

Next, we aimed to obtain an integrative view of the SCZ pathophysiology by combining transcriptomic and metabolomic analyses, and employed Recon3D [ 46 ], a metabolic network model for reconstructing networks based on the combination of transcriptomic and metabolomic data. This allowed us to investigate the global molecular changes in the different developmental stages of the cortical differentiation, based on the determined alterations in gene expression levels and metabolite abundance. To reconstruct the network, we started with the comparison of DEGs and metabolites between each pair of subsequent time points of neuronal differentiation, yielding five comparisons, as depicted in Table  3 and Fig.  5A . We further compared the identified genes and metabolites to the Human Metabolome Database (HMDB), an electronic database from which we retrieved information about metabolites and genes related to human metabolism. These results are shown in Table  3 in the row marked as “identified in HMDB”. Next, we reconstructed an initial network in which the edges are based on the openly available Recon3D stoichiometric dataset; 51 metabolites and 1135 genes were retrieved with corresponding IDs. Initially, all the reactions associated with the target genes were extracted. The metabolites associated with these reactions were then extracted, and the stoichiometry matrix data were applied to add the educt-product information. The reactions that were not associated with genes or metabolites measured in our dataset were removed from the network reconstruction. Finally, the network was enriched with information about protein-protein interactions obtained from the SIGNOR dataset, resulting in a parental network comprising 5798 nodes and 42,614 edges (Fig.  5A ). Ultimately, we combined metabolomic with gene expression data, generating a parental integrative network that allows the study of global molecular changes occurring during the different in vitro developmental stages of cortical neurogenesis.
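The reaction-filtering step described above (keep a reaction only if its associated gene and at least one of its metabolites were measured, then emit gene-metabolite edges) can be sketched with plain dictionaries; the reaction IDs and measured sets below are placeholders, not actual Recon3D or study entries:

```python
# Hypothetical gene-reaction-metabolite associations (placeholders, not
# real Recon3D records).
reactions = {
    "R1": {"gene": "ODC1",    "metabolites": ["ornithine", "putrescine"]},
    "R2": {"gene": "ALDH1A1", "metabolites": ["4-aminobutanal", "GABA"]},
    "R3": {"gene": "GENE_X",  "metabolites": ["metab_Y"]},  # not measured
}
measured_genes = {"ODC1", "ALDH1A1", "SAT1"}
measured_metabolites = {"ornithine", "putrescine", "GABA"}

# Keep a reaction only if its gene and at least one of its metabolites
# were measured, then emit (gene, metabolite, reaction) edges.
edges = []
for rid, rxn in reactions.items():
    if rxn["gene"] not in measured_genes:
        continue
    hits = [m for m in rxn["metabolites"] if m in measured_metabolites]
    if not hits:
        continue
    edges += [(rxn["gene"], m, rid) for m in hits]

print(edges)
```

In the real workflow the surviving edges were additionally annotated with educt/product direction from the stoichiometry matrix and merged with SIGNOR protein-protein interactions before sub-network extraction.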

figure 5

A Schematic flowchart of the transcriptomic-metabolomic integrative network construction. Transcriptomic and metabolomic data were obtained from the six consecutive time points (see Fig.  1 ). Five comparisons between every two subsequent time points were performed and the statistical p -values and fold changes (FC) were calculated. The pathways associated with the measured genes were extracted. The associated metabolites were identified and the product/educt information for each reaction was added. Only the pathways related to the measured genes and metabolites were kept for the network reconstruction. Finally, the parental network was enriched with protein-protein interaction information extracted from the SIGNOR database, and the subnetworks of interest were extracted for further analysis. B Polyamine metabolism subnetwork. The global changes in metabolite abundance and gene expression levels across five consecutive time point comparisons are shown for the subnetwork of the polyamine metabolism. Network nodes depict differentially expressed genes (circles) and metabolites (ellipses), as well as unmeasured metabolites (small gray dots). Network edges depict individual reactions and the associated genes. Log2FC values are converted to a color gradient scale, ranging from blue (indicating downregulation relative to the previous time point) to red (indicating upregulation). The genes and metabolites with no significant change within a certain comparison are depicted in gray. Starting in the upper pie section, the comparison between iPSCs and d7 is depicted, continuing in a clockwise direction for all subsequent comparisons. The genes and metabolites marked with an asterisk are altered in the SCZ condition. FC fold change, lfc log fold change, LRT likelihood ratio test, d day in vitro.

Integrative transcriptomic-metabolomic network reveals an altered GABA biosynthetic pathway in SCZ

We sought to further examine the combined transcriptional/metabolic pathways by extracting sub-networks based on the most converging candidate metabolites and genes in the same pathway, as defined by the HMDB. Network analysis of the most closely connected metabolites and genes pointed towards the polyamine biosynthetic pathway, including dysregulations of putrescine-associated pathways, with distortions of aldehyde dehydrogenase 1 family member A1 ( ALDH1A1 ), as well as of the metabolite GABA, in SCZ lines at time points d16–27 (Fig.  5B ). GABA is the main inhibitory neurotransmitter in the adult central nervous system (CNS) and its main biosynthetic route occurs through the decarboxylation of glutamate via GAD65/67 [ 51 ]. However, the ornithine/putrescine pathway is known to be a non-canonical route for GABA biosynthesis [ 52 ]. Along this pathway, ornithine is decarboxylated to putrescine by ornithine decarboxylase (ODC1; Fig.  5B ). Subsequently, putrescine is converted into GABA either by oxidation through AOC2/DAO2, with 4-aminobutanal as an intermediate, or via an acetyltransferase SAT1- and ALDH1A1-dependent pathway.

Thus, we explored a putative dysregulation of GABA in SCZ samples more comprehensively by targeted LC-MS analyses (Fig.  6 ). In addition, to investigate the role of putrescine in SCZ GABA biosynthesis, we performed a cellular treatment using difluoromethylornithine (DFMO). DFMO is an ODC1 inhibitor that interferes with putrescine biosynthesis [ 53 ]. We observed only slightly reduced GABA levels in the SCZ lines compared to the controls. However, DFMO treatment resulted in significantly decreased GABA levels in the SCZ samples in both the cell pellet (Fig.  6A ; p -value = 0.0363) and the supernatant (Fig.  6B ; p -value = 0.0494), while the control lines were not affected. This result indicates a stronger reliance of the SCZ lines on the non-canonical putrescine pathway for their GABA biosynthesis. To further investigate this hypothesis, we also analyzed the levels of glutamate, as it is the canonical substrate for GABA production. Indeed, we found glutamate strongly reduced in CTRL lines as compared to SCZ samples, independent of the DFMO treatment (Fig.  6C ). Further analysis of the canonical GABA biosynthetic pathway demonstrated that both GAD1 and GAD2 mRNA levels are significantly decreased in SCZ lines (Fig.  6D–I ), both at the d27 mature-rosette stage and in d100 neurons. Next, we aimed to confirm this observation at the protein level and performed ICC stainings on d27 samples of both CTRL and SCZ cellular cultures. Indeed, we found GAD65/67 significantly decreased in SCZ samples as compared to the controls (Fig.  6J, K ). From these data, we conclude that SCZ cell lines exhibit a distortion of the GAD1/2 -dependent GABA production in early neurodevelopmental stages, i.e., the neural rosette stage, which corresponds to neural tube formation in vivo.
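The qRT-PCR expression levels in Fig. 6E, F, H, I rest on the comparative CT method cited in the methods (Schmittgen and Livak); a sketch with invented Ct values for a GAD1-like target normalized to a housekeeping gene:

```python
# Relative expression via the comparative 2^-ddCt method
# (Schmittgen & Livak). All Ct values below are invented for illustration.
def fold_change(ct_target_sample, ct_ref_sample,
                ct_target_calib, ct_ref_calib):
    """Expression of sample relative to calibrator, normalized to a
    reference gene (e.g. a housekeeping gene)."""
    d_ct_sample = ct_target_sample - ct_ref_sample
    d_ct_calib = ct_target_calib - ct_ref_calib
    return 2.0 ** -(d_ct_sample - d_ct_calib)

# Hypothetical Ct values: SCZ sample vs CTRL calibrator.
fc = fold_change(ct_target_sample=28.0, ct_ref_sample=18.0,   # SCZ
                 ct_target_calib=26.0,  ct_ref_calib=18.0)    # CTRL
print(fc)  # 0.25 -> target ~4-fold lower in the SCZ sample
```

A higher Ct means later amplification and thus less template, so a positive ddCt translates into a fold change below 1, as for the reduced GAD1/2 levels reported here.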

figure 6

Targeted mass spectrometry analysis of GABA levels in cellular pellets ( A ) and supernatants ( B ), as well as glutamate ( C ), from d27 samples, with and without DFMO treatment. Error bars represent mean ± S.E.M. GAD1 RNA levels from the bulk RNA-seq data ( D ) and qRT-PCR analyses at d27 ( E ) and d100 ( F ). GAD2 RNA levels from the bulk RNA-seq data ( G ) and qRT-PCR analyses at d27 ( H ) and d100 ( I ). Error bars represent mean ± S.D. J Representative ICC staining of SCZ and CTRL lines at d27 for the stem cell marker SOX2 (green) and glutamate decarboxylase 65/67 (GAD65/67, magenta). Scalebars, 50 μm. K Intensity quantification of GAD65/67. The intensity measurements were normalized against the DAPI+/nuclear area. conc. concentration, n.s. not significant, d day in vitro, CTRL control, SCZ schizophrenia, S.E.M. standard error of the mean, S.D. standard deviation. * p  < 0.05, ** p  < 0.005, *** p  < 0.0001.

Despite collective efforts in the field, the pathological mechanisms underlying SCZ remain elusive. The prevalent model depicts SCZ as a neurodevelopmental disorder, involving fundamental neurobiological alterations that occur prior to the manifestation of symptoms through the interplay of genetic predispositions and environmental factors [ 2 ]. At a molecular level, SCZ pathology is known to be associated with a distorted response to neurotransmitters, including glutamate and dopamine [ 3 ]. However, aberrant glutamatergic and dopaminergic neurotransmission alone fails to capture the complexity of the disease’s etiology [ 4 ]. Recently, metabolomic studies have proven invaluable in biomarker discovery and in elucidating complex molecular mechanisms. In fact, the metabolome reflects more complex genetic and environmental interactions [ 54 ]. For instance, studies in the cancer biology field have successfully employed metabolomic approaches for studying ECM abnormalities [ 55 , 56 ]. Our integrative transcriptomic-metabolomic study employing SCZ patient-derived iPSCs reveals a GABA distortion in the early rosette maturation stage. GABA is the main inhibitory neurotransmitter, primarily synthesized by decarboxylation of glutamate by GAD65/67 and released by GABAergic interneurons in the adult CNS. Here, we validated our finding by demonstrating a significant reduction of GAD at both RNA and protein levels in SCZ samples. We additionally report a significant reduction of GABA levels in both cellular pellets and supernatants of SCZ samples, indicating deficient GABA biosynthesis in cells derived from SCZ patients. These observations are in line with studies in mice, where GAD1 neuronal knock-down elicited emotional neuropsychiatric-like abnormalities, as well as with post-mortem brain studies from childhood-onset SCZ patients [ 57 , 58 ].
Moreover, it has been reported that GABA is involved in neural stem/progenitor cell proliferation and differentiation and that it might even exhibit an excitatory function during early development [ 59 , 60 , 61 ]. However, GABA dysregulation has not been demonstrated at the early rosette-stage timepoint in human SCZ iPSC-derived cells thus far.

Moreover, our study demonstrates a further reduction of GABA levels in DFMO-treated cultures of SCZ iPSC-derived neural cells. Since DFMO inhibits ornithine decarboxylase and thereby impacts putrescine biosynthesis, we hypothesize that SCZ cultures partly compensate for the loss of glutamate-based GABA biosynthesis through induction of the non-canonical putrescine pathway. This hypothesis is supported by our integrative network analyses, which underscore SCZ-dependent dysregulations of various enzymes and metabolites of the putrescine/GABA sub-network. We conclude from our data that distorted inhibitory/excitatory imbalances during neurodevelopment of SCZ cells result in a partial disruption of inhibitory circuit formation that is insufficiently compensated at later stages of CNS maturation. In fact, post-mortem studies also indicate an imbalance in the excitatory/inhibitory circuits of SCZ patients [ 62 ]. Our findings support the hypothesis that specific defects in the development and function of interneuron progenitors may play a key role in the etiology of psychiatric disorders including SCZ, autism, and intellectual disabilities [ 62 ], and assign GABA a key function in SCZ in this respect.

Finally, we established a new combinatorial transcriptomic-metabolomic network analysis workflow in order to investigate pathophysiological mechanisms in an integrative, more universal manner. Recently, Wang et al. applied a similar metabolomic-transcriptomic integrative network approach, using data obtained from patient-derived blood samples, in order to identify SCZ biomarkers and to develop a more precise disease diagnosis [ 63 ]. In our study, we established a parental integrative network employing in vitro-derived metabolomic and transcriptomic data and subsequently elaborated on biologically relevant sub-networks. These approaches can be further used for modeling a wide variety of diseases, including neuropsychiatric disorders.

In conclusion, we employed an iPSC-based neuronal differentiation model for studying early neurodevelopmental defects in SCZ pathology. Assessment of the metabolome at distinct stages revealed a distortion in the GABA biosynthetic pathways in SCZ lines, a dysregulation observed from the early rosette formation and maturation stages onward. Therefore, our study elucidates the involvement of GABA dysregulations and compensatory mechanisms during early in vitro neurodevelopment, implying an early imbalance in excitatory/inhibitory circuits. Ultimately, our findings, together with the in silico analytical pipeline, will contribute to deepening our understanding of SCZ and other psychiatric disorders and may build a basis for the development of new therapeutic interventions.

Data availability

The datasets generated and analyzed during this study are available from the corresponding author upon reasonable request.

World Health Organization. Schizophrenia. 2022. https://www.who.int/news-room/fact-sheets/detail/schizophrenia . Accessed 29 September 2022.

Orsolini L, Pompili S, Volpe U. Schizophrenia: A Narrative Review of Etiopathogenetic, Diagnostic and Treatment Aspects. J Clin Med. 2022;2022:5040.

Bear MF, Connors BW, Paradiso MA. Neuroscience: Exploring the Brain. 4th ed. Philadelphia: Wolters Kluwer; 2016.

McCutcheon RA, Krystal JH, Howes OD. Dopamine and glutamate in schizophrenia: biology, symptoms and treatment. World Psychiatry. 2020;19:15.

Chang CY, Chen YW, Wang TW, Lai WS. Akting up in the GABA hypothesis of schizophrenia: Akt1 deficiency modulates GABAergic functions and hippocampus-dependent functions. Sci Rep. 2016;6:1–13.

Rujescu D, Giegling I. Metabolomics of Schizophrenia. In: The Neurobiology of Schizophrenia. Elsevier; 2016. p. 167–177.

Davison J, O’Gorman A, Brennan L, Cotter DR. A systematic review of metabolite biomarkers of schizophrenia. Schizophr Res. 2018;195:32–50.

Patti GJ, Yanes O, Siuzdak G. Metabolomics: the apogee of the omics trilogy. Nat Rev Mol Cell Biol. 2012;13:263–9.

Campeau A, Mills RH, Stevens T, Rossitto L-A, Meehan M, Dorrestein P, et al. Multi-omics of human plasma reveals molecular features of dysregulated inflammation and accelerated aging in schizophrenia. Mol Psychiatry. 2022;27:1217–25.

Parksepp M, Leppik L, Koch K, Uppin K, Kangro R, Haring L, et al. Metabolomics approach revealed robust changes in amino acid and biogenic amine signatures in patients with schizophrenia in the early course of the disease. Sci Rep. 2020;10:1–11.

Okamoto N, Ikenouchi A, Watanabe K, Igata R, Fujii R, Yoshimura R. A Metabolomics Study of Serum in Hospitalized Patients With Chronic Schizophrenia. Front Psychiatry. 2021;12:2246.

Yao JK, Dougherty GG, Reddy RD, Keshavan MS, Montrose DM, Matson WR, et al. Altered interactions of tryptophan metabolites in first-episode neuroleptic-naive patients with schizophrenia. Mol Psychiatry. 2009;15:938–53.

Tsang TM, Huang JTJ, Holmes E, Bahn S. Metabolic profiling of plasma from discordant schizophrenia twins: correlation between lipid signals and global functioning in female schizophrenia patients. J Proteome Res. 2006;5:756–60.

Huang JH, Park H, Iaconelli J, Berkovitch SS, Watmuff B, McPhie D, et al. Unbiased Metabolite Profiling of Schizophrenia Fibroblasts under Stressful Perturbations Reveals Dysregulation of Plasmalogens and Phosphatidylcholines. J Proteome Res. 2017;16:481–93.

Yan L, Zhou J, Wang D, Si D, Liu Y, Zhong L, et al. Unbiased lipidomic profiling reveals metabolomic changes during the onset and antipsychotics treatment of schizophrenia disease. Metabolomics. 2018;14:80.

Leppik L, Parksepp M, Janno S, Koido K, Haring L, Vasar E, et al. Profiling of lipidomics before and after antipsychotic treatment in first-episode psychosis. Eur Arch Psychiatry Clin Neurosci. 2020;270:59–70.

Podvin S, Jones J, Kang A, Goodman R, Reed P, Lietz CB, et al. Human iN neuronal model of schizophrenia displays dysregulation of chromogranin B and related neuropeptide transmitter signatures. Mol Psychiatry. 2024;2024:1–10.

Hashimoto K, Engberg G, Shimizu E, Nordin C, Lindström LH, Iyo M. Elevated glutamine/glutamate ratio in cerebrospinal fluid of first episode and drug naive schizophrenic patients. BMC Psychiatry. 2005;5:6.

Soliman MA, Aboharb F, Zeltner N, Studer L. Pluripotent stem cells in neuropsychiatric disorders. Mol Psychiatry. 2017;22:1241–9.

Lee KM, Hawi ZH, Parkington HC, Parish CL, Kumar PV, Polo JM, et al. The application of human pluripotent stem cells to model the neuronal and glial components of neurodevelopmental disorders. Mol Psychiatry. 2020;25:368–78.

Casas BS, Vitória G, Prieto CP, Casas M, Chacón C, Uhrig M, et al. Schizophrenia-derived hiPSC brain microvascular endothelial-like cells show impairments in angiogenesis and blood–brain barrier function. Mol Psychiatry. 2022;2022:1–11.

Ni P, Noh H, Park GH, Shao Z, Guan Y, Park JM, et al. iPSC-derived homogeneous populations of developing schizophrenia cortical interneurons have compromised mitochondrial function. Mol Psychiatry. 2019;25:2873–88.

Takahashi K, Tanabe K, Ohnuki M, Narita M, Ichisaka T, Tomoda K, et al. Induction of Pluripotent Stem Cells from Adult Human Fibroblasts by Defined Factors. Cell. 2007;131:861–71.

Brennand KJ, Simone A, Jou J, Gelboin-Burkhart C, Tran N, Sangar S, et al. Modelling schizophrenia using human induced pluripotent stem cells. Nature. 2011;473:221–5.

Li J, Ryan SK, Deboer E, Cook K, Fitzgerald S, Lachman HM, et al. Mitochondrial deficits in human iPSC-derived neurons from patients with 22q11.2 deletion syndrome and schizophrenia. Transl Psychiatry. 2019;9:1–10.

Chiang CH, Su Y, Wen Z, Yoritomo N, Ross CA, Margolis RL, et al. Integration-free induced pluripotent stem cells derived from schizophrenia patients with a DISC1 mutation. Mol Psychiatry. 2011;16:358–60.

Shi Y, Kirwan P, Livesey FJ. Directed differentiation of human pluripotent stem cells to cerebral cortex neurons and neural networks. Nat Protoc. 2012;7:1836–46.

Shi Y, Kirwan P, Smith J, Robinson HPC, Livesey FJ. Human cerebral cortex development from pluripotent stem cells to functional excitatory synapses. Nat Neurosci. 2012;15:477–86.

Schmittgen TD, Livak KJ. Analyzing real-time PCR data by the comparative C(T) method. Nat Protoc. 2008;3:1101–8.

Dobin A, Davis CA, Schlesinger F, Drenkow J, Zaleski C, Jha S, et al. STAR: ultrafast universal RNA-seq aligner. Bioinformatics. 2013;29:15–21.

Heinz S, Benner C, Spann N, Bertolino E, Lin YC, Laslo P, et al. Simple Combinations of Lineage-Determining Transcription Factors Prime cis-Regulatory Elements Required for Macrophage and B Cell Identities. Mol Cell. 2010;38:576–89.

R Core Team. R: A Language and Environment for Statistical Computing. 2023. R Foundation for Statistical Computing, Vienna, Austria.

Love MI, Huber W, Anders S. Moderated estimation of fold change and dispersion for RNA-seq data with DESeq2. Genome Biol. 2014;15:550.

Anders S, Huber W. Differential expression analysis for sequence count data. Genome Biol. 2010;11:1–12.

Metsalu T, Vilo J. ClustVis: a web tool for visualizing clustering of multivariate data using Principal Component Analysis and heatmap. Nucleic Acids Res. 2015;43:W566–W570.

Ge SX, Jung D, Jung D, Yao R. ShinyGO: a graphical gene-set enrichment tool for animals and plants. Bioinformatics. 2020;36:2628–9.

Langfelder P, Horvath S. WGCNA: An R package for weighted correlation network analysis. BMC Bioinformatics. 2008;9:1–13.

Leek JT, Johnson WE, Parker HS, Jaffe AE, Storey JD. The sva package for removing batch effects and other unwanted variation in high-throughput experiments. Bioinformatics. 2012;28:882–3.

Raudvere U, Kolberg L, Kuzmin I, Arak T, Adler P, Peterson H, et al. g:Profiler: a web server for functional enrichment analysis and conversions of gene lists (2019 update). Nucleic Acids Res. 2019;47:W191–W198.

Plumb R, Castro-Perez J, Granger J, Beattie I, Joncour K, Wright A. Ultra-performance liquid chromatography coupled to quadrupole-orthogonal time-of-flight mass spectrometry. Rapid Commun Mass Spectrom. 2004;18:2331–7.

Ramsay SL, Stoeggl WM, Weinberger KM, Graber A, Guggenbichler W. Apparatus and method for analyzing a metabolite profile. US Patent 8,265,877, 2012.

Mock A, Warta R, Dettling S, Brors B, Jäger D, Herold-Mende C. MetaboDiff: an R package for differential metabolomic analysis. Bioinformatics. 2018;34:3417–8.

Hoffman GE, Schadt EE. variancePartition: Interpreting drivers of variation in complex gene expression studies. BMC Bioinformatics. 2016;17:483.

van Pijkeren A, Egger AS, Hotze M, Zimmermann E, Kipura T, Grander J, et al. Proteome Coverage after Simultaneous Proteo-Metabolome Liquid-Liquid Extraction. J Proteome Res. 2023;22:951–66.

Ernst J, Bar-Joseph Z. STEM: A tool for the analysis of short time series gene expression data. BMC Bioinformatics. 2006;7:1–11.

Brunk E, Sahoo S, Zielinski DC, Altunkaya A, Dräger A, Mih N, et al. Recon3D enables a three-dimensional view of gene variation in human metabolism. Nat Biotechnol. 2018;36:272–81.

Licata L, Lo Surdo P, Iannuccelli M, Palma A, Micarelli E, Perfetto L, et al. SIGNOR 2.0, the SIGnaling Network Open Resource 2.0: 2019 update. Nucleic Acids Res. 2020;48:D504–10.

Csardi G, Nepusz T. The igraph software package for complex network research. InterJournal, Complex Syst. 2006;1695:1–9.

Takahashi K, Yamanaka S. Induction of Pluripotent Stem Cells from Mouse Embryonic and Adult Fibroblast Cultures by Defined Factors. Cell. 2006;126:663–76.

Root DH, Zhang S, Barker DJ, Miranda-Barrientos J, Liu B, Wang HL, et al. Selective Brain Distribution and Distinctive Synaptic Architecture of Dual Glutamatergic-GABAergic Neurons. Cell Rep. 2018;23:3465.

Olsen RW, DeLorey TM. GABA Synthesis, Uptake and Release. In: Basic Neurochemistry: Molecular, Cellular and Medical Aspects. 6th ed. Lippincott-Raven; 1999.

Kim JI, Ganesan S, Luo SX, Wu YW, Park E, Huang EJ, et al. Aldehyde dehydrogenase 1a1 mediates a GABA synthesis pathway in midbrain dopaminergic neurons. Science. 2015;350:102–6.

Koomoa DLT, Yco LP, Borsics T, Wallick CJ, Bachmann AS. Ornithine Decarboxylase Inhibition by DFMO Activates Opposing Signaling Pathways via Phosphorylation of both Akt/PKB and p27Kip1 in Neuroblastoma. Cancer Res. 2008;68:9825.

Petrovchich I, Sosinsky A, Konde A, Archibald A, Henderson D, Maletic-Savatic M, et al. Metabolomics in Schizophrenia and Major Depressive Disorder. Front Biol. 2016;11:222–31.

Nur SM, Shait Mohammed MR, Zamzami MA, Choudhry H, Ahmad A, Ateeq B, et al. Untargeted Metabolomics Showed Accumulation of One-Carbon Metabolites to Facilitate DNA Methylation during Extracellular Matrix Detachment of Cancer Cells. Metabolites. 2022;12:267.

Shait Mohammed MR, Alghamdi RA, Alzahrani AM, Zamzami MA, Choudhry H, Khan MI. Compound C, a Broad Kinase Inhibitor Alters Metabolic Fingerprinting of Extra Cellular Matrix Detached Cancer Cells. Front Oncol. 2021;11:12778.

Addington AM, Gornick M, Duckworth J, Sporn A, Gogtay N, Bobb A, et al. GAD1 (2q31.1), which encodes glutamic acid decarboxylase (GAD67), is associated with childhood-onset schizophrenia and cortical gray matter volume loss. Mol Psychiatry. 2005;10:581–8.

Miyata S, Kakizaki T, Fujihara K, Obinata H, Hirano T, Nakai J, et al. Global knockdown of glutamate decarboxylase 67 elicits emotional abnormality in mice. Mol Brain. 2021;14:1–14.

Loturco JJ, Owens DF, Heath MJ, Davis MB, Kriegsteing AR. GABA and Glutamate Depolarize Cortical Progenitor Cells and Inhibit DNA Synthesis. Neuron. 1995;15:1287–98.

Wu C, Sun D. GABA receptors in brain development, function, and injury. Metab Brain Dis. 2015;30:367–79.

Li K, Xu E. The role and the mechanism of γ-aminobutyric acid during central nervous system development. Neurosci Bull. 2008;24:195–200.

Marín O. Interneuron dysfunction in psychiatric disorders. Nat Rev Neurosci. 2012;13:107–20.

Wang T, Li P, Meng X, Zhang J, Liu Q, Jia C, et al. An integrated pathological research for precise diagnosis of schizophrenia combining LC-MS/1H NMR metabolomics and transcriptomics. Clinica Chimica Acta. 2022;524:84–95.

Article   CAS   Google Scholar  

Jost M, Chen Y, Gilbert LA, Horlbeck MA, Krenning L, Menchon G, et al. Combined CRISPRi/a-Based Chemical Genetic Screens Reveal that Rigosertib Is a Microtubule-Destabilizing Agent. Mol Cell. 2017;68:210.


Acknowledgements

We thank Marta Suarez Cubero for the excellent technical assistance. The study has been supported by the EC H2020 Marie Skłodowska-Curie COFUND doctoral training programme ARDRE, the JPco-fuND2015 project “MADGIC” funded by the European Union and the Austrian Science Fund (FWF) (grant DOI number 10.55776/I3029). For open access purposes, the author has applied a CC BY public copyright license to any author accepted manuscript version arising from this submission.

Open access funding provided by University of Innsbruck and Medical University of Innsbruck.

Author information

These authors contributed equally: Angeliki Spathopoulou, Gabriella A. Sauerwein, Valentin Marteau.

Authors and Affiliations

Institute of Molecular Biology & CMBI, Department of Genomics, Stem Cell & Regenerative Medicine, University of Innsbruck, Innsbruck, Austria

Angeliki Spathopoulou, Gabriella A. Sauerwein, Valentin Marteau, Martina Podlesnic, Theresa Lindlbauer, Elisa Gabassi, Katharina Kruszewski, Jerome Mertens, Katharina Günther & Frank Edenhofer

Institute of Biochemistry and Center for Molecular Biosciences Innsbruck, University of Innsbruck, Innsbruck, Austria

Tobias Kipura, Madlen Hotze & Marcel Kwiatkowski

Neuroscience Center, University of Helsinki, Helsinki, Finland

Marja Koskuvi & Šárka Lehtonen

Department of Psychiatry and Psychotherapy, Semmelweis University, Budapest, Hungary

János M. Réthelyi

HUN-REN RCNS, Institute of Molecular Life Sciences, Budapest, Hungary

Ágota Apáti

Department of Cellular, Computational and Integrative Biology—CIBIO, University of Trento, Trento, Italy

Luciano Conti

Department of Pediatrics and Adolescent Medicine, Division of Pediatric Hematology and Oncology, Faculty of Medicine, Medical Center - University of Freiburg, Freiburg, Germany

Manching Ku

biocrates life sciences AG, Innsbruck, Austria

Therese Koal, Udo Müller & Radu A. Talmazan

Department of Forensic Psychiatry, University of Kuopio, Niuvanniemi Hospital, Kuopio, Finland

Ilkka Ojansuu, Olli Vaurio, Markku Lähteenvuo & Jari Tiihonen

A. I. Virtanen Institute for Molecular Sciences, University of Eastern Finland, Kuopio, Finland

Šárka Lehtonen

Department of Neurosciences, Sanford Consortium for Regenerative Medicine, University of California San Diego, San Diego, USA

Jerome Mertens

Department of Clinical Neuroscience, Karolinska Institutet, and Center for Psychiatry Research, Stockholm City Council, Stockholm, Sweden

Jari Tiihonen

Institute of Life Science, University of Helsinki, FI-00014, Helsinki, Finland

Jari Koistinaho

Drug Research Program, Division of Pharmacology and Pharmacotherapy, University of Helsinki, Helsinki, Finland

Institute of Bioinformatics, Biocenter, Medical University Innsbruck, Innsbruck, Austria

Zlatko Trajanoski


Contributions

FE, JM, and JK designed the study. AS, GAS, TL, TK, EG and MH performed wetlab experiments. GAS, VM and MP performed the in silico analyses with the guidance of ZT and FE. KK, MK, JMR, AA, LC, IO, OV, ML, SL, JM, JT, and JK provided the employed cell lines. TK, UM, and RAT provided expertise on the metabolomic analysis. AS drafted the original manuscript with the input of KG and FE. All authors approved the final manuscript. Funding acquisition: FE.

Corresponding author

Correspondence to Frank Edenhofer .

Ethics declarations

Competing interests

The authors declare no competing interests.

Additional information

Publisher’s note Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Supplementary information

Supplementary tables 1–4, supplementary figure 1, supplementary figure legends.

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article’s Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article’s Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/ .


About this article


Spathopoulou, A., Sauerwein, G.A., Marteau, V. et al. Integrative metabolomics-genomics analysis identifies key networks in a stem cell-based model of schizophrenia. Mol Psychiatry (2024). https://doi.org/10.1038/s41380-024-02568-8


Received : 17 October 2022

Revised : 12 April 2024

Accepted : 17 April 2024

Published : 29 April 2024

DOI : https://doi.org/10.1038/s41380-024-02568-8



  • Open access
  • Published: 02 May 2024

Use of the International IFOMPT Cervical Framework to inform clinical reasoning in postgraduate level physiotherapy students: a qualitative study using think aloud methodology

  • Katie L. Kowalski,
  • Heather Gillis,
  • Katherine Henning,
  • Paul Parikh,
  • Jackie Sadi &
  • Alison Rushton

BMC Medical Education volume 24, Article number: 486 (2024)


Background

Vascular pathologies of the head and neck are rare but can present as musculoskeletal problems. The International Federation of Orthopedic Manipulative Physical Therapists (IFOMPT) Cervical Framework (Framework) aims to assist evidence-based clinical reasoning for safe assessment and management of the cervical spine considering potential for vascular pathology. Clinical reasoning is critical to physiotherapy, and developing high-level clinical reasoning is a priority for postgraduate (post-licensure) educational programs.

Aim

To explore the influence of the Framework on clinical reasoning processes in postgraduate physiotherapy students.

Methods

Qualitative case study design using think aloud methodology and interpretive description, informed by COnsolidated criteria for REporting Qualitative research. Participants were postgraduate musculoskeletal physiotherapy students who learned about the Framework through standardized delivery. Two cervical spine cases explored clinical reasoning processes. Coding and analysis of transcripts were guided by Elstein’s diagnostic reasoning components and the Postgraduate Musculoskeletal Physiotherapy Practice model. Data were analyzed using thematic analysis (inductive and deductive) for individuals and then across participants, enabling analysis of key steps in clinical reasoning processes and use of the Framework. Trustworthiness was enhanced with multiple strategies (e.g., a second researcher challenged codes).

Results

For all participants (n = 8), the Framework supported clinical reasoning using primarily hypothetico-deductive processes. It informed vascular hypothesis generation in the patient history and testing of the vascular hypothesis through patient history questions and selection of physical examination tests, to inform clarity and support for diagnosis and management. Most participants’ clinical reasoning processes were characterized by high-level features (e.g., prioritization); however, there was a continuum of proficiency. Clinical reasoning processes were informed by deep knowledge of the Framework integrated with a breadth of wider knowledge and supported by a range of personal characteristics (e.g., reflection).

Conclusions

Findings support use of the Framework as an educational resource in postgraduate physiotherapy programs to inform clinical reasoning processes for safe and effective assessment and management of cervical spine presentations considering potential for vascular pathology. Individualized approaches may be required to support students, owing to a continuum of clinical reasoning proficiency. Future research is required to explore use of the Framework to inform clinical reasoning processes in learners at different levels.


Introduction

Musculoskeletal neck pain and headache are highly prevalent and among the most disabling conditions globally that require effective rehabilitation [ 1 , 2 , 3 , 4 ]. A range of rehabilitation professionals, including physiotherapists, assess and manage musculoskeletal neck pain and headache. Assessment of the cervical spine can be a complex process. Patients can present to physiotherapy with vascular pathology masquerading as musculoskeletal pain and dysfunction, with neck pain and/or headache as a common first symptom [ 5 ]. While vascular pathologies of the head and neck are rare [ 6 ], they are important considerations within a cervical spine assessment to facilitate the best possible patient outcomes [ 7 ]. The International IFOMPT (International Federation of Orthopedic Manipulative Physical Therapists) Cervical Framework (Framework) provides guidance in the assessment and management of the cervical spine region, considering the potential for vascular pathologies of the neck and head [ 8 ]. Two separate, but related, risks are considered: the risk of misdiagnosis of an existing vascular pathology and the risk of a serious adverse event following musculoskeletal interventions [ 8 ].

The Framework is a consensus document iteratively developed through rigorous methods and the best contemporary evidence [ 8 ], and is also published as a Position Statement [ 7 ]. Central to the Framework are clinical reasoning and evidence-based practice, providing guidance in the assessment of the cervical spine region, considering the potential for vascular pathologies in advance of planned interventions [ 7 , 8 ]. The Framework was developed and published to be a resource for practicing musculoskeletal clinicians and educators. It has been implemented widely within IFOMPT postgraduate (post-licensure) educational programs, influencing curricula by enabling a comprehensive and systemic approach when considering the potential for vascular pathology [ 9 ]. Frequently reported curricula changes include an emphasis on the patient history and incorporating Framework recommended physical examination tests to evaluate a vascular hypothesis [ 9 ]. The Framework aims to assist musculoskeletal clinicians in their clinical reasoning processes, however no study has investigated students’ use of the Framework to inform their clinical reasoning.

Clinical reasoning is a critical component of physiotherapy practice as it is fundamental to assessment and diagnosis, enabling physiotherapists to provide safe and effective patient-centered care [ 10 ]. This is particularly important for postgraduate physiotherapy educational programs, where developing a high level of clinical reasoning is a priority for educational curricula [ 11 ] and critical for achieving advanced practice physiotherapy competency [ 12 , 13 , 14 , 15 ]. At this level of physiotherapy, diagnostic reasoning is emphasized as an important component of a high level of clinical reasoning, informed by advanced use of domain-specific knowledge (e.g., propositional, experiential) and supported by a range of personal characteristics (e.g., adaptability, reflectiveness) [ 12 ]. Facilitating the development of clinical reasoning improves physiotherapists’ performance and patient outcomes [ 16 ], underscoring the importance of clinical reasoning to physiotherapy practice. Understanding students’ use of the Framework to inform their clinical reasoning can support optimal implementation of the Framework within educational programs to facilitate safe and effective assessment and management of the cervical spine for patients.

The aim of this study was to explore the influence of the Framework on clinical reasoning processes in postgraduate level physiotherapy students.

Methods

Using a qualitative case study design, think aloud case analyses enabled exploration of clinical reasoning processes in postgraduate physiotherapy students. Case study design allows evaluation of experiences in practice, providing knowledge and accounts of practical actions in a specific context [ 17 ]. Case studies offer the opportunity to generate situationally dependent understandings of accounts of clinical practice, highlighting the action and interaction that underscore the complexity of clinical decision-making in practice [ 17 ]. This study was informed by an interpretive description methodological approach with thematic analysis [ 18 , 19 ]. Interpretive description is coherent with mixed methods research and pragmatic orientations [ 20 , 21 ], and enables generation of evidence-based disciplinary knowledge and clinical understanding to inform practice [ 18 , 19 , 22 ]. Interpretive description has evolved for use in educational research to generate knowledge of educational experiences and the complexities of health care education to support achievement of educational objectives and professional practice standards [ 23 ]. The COnsolidated criteria for REporting Qualitative research (COREQ) informed the design and reporting of this study [ 24 ].

Research team

All research team members hold physiotherapy qualifications, and most hold advanced qualifications specializing in musculoskeletal physiotherapy. The research team is based in Canada and has varying levels of academic credentials (ranging from Clinical Masters to PhD or equivalent) and occupations (ranging from PhD student to Director of Physical Therapy). The final author (AR) is also an author of the Framework, which represents international and multiprofessional consensus. Authors HG and JS are lecturers on one of the postgraduate programs from which students were recruited. The primary researcher and first author (KK) is a US-trained Physical Therapist and Postdoctoral Research Associate investigating spinal pain and clinical reasoning in the School of Physical Therapy at Western University. Authors KK, KH and PP had no prior relationship with the postgraduate educational programs, students, or the Framework.

Study setting

Western University in London, Ontario, Canada offers a one-year Advanced Health Care Practice (AHCP) postgraduate IFOMPT-approved Comprehensive Musculoskeletal Physiotherapy program (CMP) and a postgraduate Sport and Exercise Medicine (SEM) program. Think aloud case analyses interviews were conducted using Zoom, a viable option for qualitative data collection and audio-video recording of interviews that enables participation for students who live in geographically dispersed areas across Canada [ 25 ]. Interviews with individual participants were conducted by one researcher (KK or KH) in a calm and quiet environment to minimize disruption to the process of thinking aloud [ 26 ].

Participants

AHCP postgraduate musculoskeletal physiotherapy students ≥ 18 years of age in the CMP and SEM programs were recruited via email and an introduction to the research study during class by KK, using purposive sampling to ensure theoretical representation. The purposive sample ensured key characteristics of participants were included, specifically gender, ethnicity, and physiotherapy experience (years, type). AHCP students must have attended standardized teaching about the Framework to be eligible to participate. Exclusion criteria included inability to communicate fluently in English. As think-aloud methodology seeks rich, in-depth data from a small sample [ 27 ], this study sought to recruit 8–10 AHCP students. This range was informed by prior think aloud literature and anticipated to balance diversity of participant characteristics, similarities in musculoskeletal physiotherapy domain knowledge and rich data supporting individual clinical reasoning processes [ 27 , 28 ].

Learning about the IFOMPT Cervical Framework

CMP and SEM programs included standardized teaching of the Framework to inform AHCP students’ clinical reasoning in practice. Delivery included a presentation explaining the Framework, access to the full Framework document [ 8 ], and discussion of its role in informing practice, including a case analysis of a cervical spine clinical presentation, by research team members AR and JS. The full Framework document that is publicly available through IFOMPT [ 8 ] was provided to AHCP students because the Framework Position Statement [ 7 ] was not yet published. Discussion and case analysis were led by AHCP program leads in November 2021 (CMP, including research team member JS) and January 2022 (SEM).

Think aloud case analyses data collection

Using think aloud methodology, the analytical processes of how participants use the Framework to inform clinical reasoning were explored in an interview with one research team member not involved in the AHCP educational programs (KK or KH). The think aloud method enables description and explanation of complex information paralleling the clinical reasoning process and has been used previously in musculoskeletal physiotherapy [ 29 , 30 ]. It facilitates the generation of rich verbal data [ 27 ] as participants verbalize their clinical reasoning protocols [ 27 , 31 ]. Participants were aware of the aim of the research study and the research team’s clinical and research backgrounds, supporting an open environment for depth of data collection [ 32 ]. There was no prior relationship between participants and the research team members conducting interviews.

Participants were instructed to think aloud their analysis of two clinical cases, presented in random order (Supplementary  1 ). Case information was provided in stages to reflect the chronology of assessment of patients in practice (patient history, planning the physical examination, physical examination, treatment). Use of the Framework to inform clinical reasoning was discussed at each stage. The cases enabled participants to identify and discuss features of possible vascular pathology, treatment indications and contraindications/precautions, etc. Two research study team members (HG, PP) developed cases designed to facilitate and elicit clinical reasoning processes in neck and head pain presentations. Cases were tested against the research team to ensure face validity. Cases and think aloud prompts were piloted prior to use with three physiotherapists at varying levels of practice to ensure they were fit for purpose.

Data collection took place from March 30 to August 15, 2022, during the final terms of the AHCP programs and an average of 5 months after standardized teaching about the Framework. During case analysis interviews, participants were instructed to constantly think aloud, and if a pause in verbalizations was sustained, they were reminded to “keep thinking aloud” [ 27 ]. As needed, prompts were given to elicit verbalization of participants’ reasoning processes, including use of the Framework to inform their clinical reasoning at each stage of case analysis (Supplementary 2). Aside from this, all interactions between participants and researchers were minimized so as not to interfere with the participant’s thought processes [ 27 , 31 ]. When analysis of the first case was complete, the researcher provided the second case, with each case analysis lasting 35–45 min. A break between cases was offered. During and after interviews, field notes were recorded about initial impressions of the data collection session and potential patterns appearing to emerge [ 33 ].

Data analysis

Data from think aloud interviews were analyzed using thematic analysis [ 30 , 34 ], facilitating identification and analysis of patterns in data and key steps in the clinical reasoning process, including use of the Framework to enable its characterization (Fig. 1). As established models of clinical reasoning exist, a hybrid approach to thematic analysis was employed, incorporating inductive and deductive processes [ 35 ], which proceeded according to five iterative steps [ 34 ]:

Fig. 1 Data analysis steps

1. Familiarize with data: Audio-visual recordings were transcribed verbatim by a physiotherapist external to the research team. All transcripts were read and re-read several times by one researcher (KK), checking for accuracy by reviewing recordings as required. Field notes supported depth of familiarization with data.

2. Generate initial codes: Line-by-line coding of transcripts by one researcher (KK) supported generation of initial codes that represented components, patterns and meaning in clinical reasoning processes and use of the Framework. Established preliminary coding models were used as a guide. Elstein’s diagnostic reasoning model [ 36 ] guided generation of initial codes of key steps in clinical reasoning processes (Table 1a) [ 29 , 36 ]. Leveraging the richness of data, further codes were generated guided by the Postgraduate Musculoskeletal Physiotherapy Practice model, which describes masters level clinical practice (Table 1b) [ 12 ]. Codes were refined as data analysis proceeded. All codes were collated within participants along with supporting data.

3. Generate initial themes within participants: Coded data were inductively grouped into initial themes within each participant, reflecting individual clinical reasoning processes and use of the Framework. This inductive stage enabled a systematic, flexible approach to describing each participant’s unique thinking path, offering insight into the complexities of their clinical reasoning processes. It also provided a comprehensive understanding of the Framework informing clinical reasoning and a rich characterization of its components, aiding the development of robust, nuanced insights [ 35 , 37 , 38 ]. Initial themes were repeatedly revised to ensure they were grounded in and reflected raw data.

4. Develop, review and refine themes across participants: Initial themes were synthesized across participants to develop themes that represented all participants. Themes were reviewed and refined, returning to initial themes and codes at the individual participant level as needed.

5. Organize themes into established models: Themes were deductively organized into established clinical reasoning models: first into Elstein’s diagnostic reasoning model, and second into the Postgraduate Musculoskeletal Physiotherapy Practice model to characterize themes within each diagnostic reasoning component [ 12 , 36 ].

Trustworthiness of findings

The research study was conducted according to an a priori protocol and additional steps were taken to establish trustworthiness of findings [ 39 ]. Field notes supported deep familiarization with data and served as a means of data source triangulation during analysis [ 40 ]. One researcher coded transcripts and a second researcher challenged codes, with codes and themes rigorously and iteratively reviewed and refined. Frequent debriefing sessions with the research team, reflexive discussions with other researchers and peer scrutiny of initial findings enabled wider perspectives and experiences to shape analysis and interpretation of findings. Several strategies were implemented to minimize the influence of prior relationships between participants and researchers, including author KK recruiting participants, KK and KH collecting/analyzing data, and AR, JS, HG and PP providing input on de-identified data at the stage of synthesis and interpretation.

Results

Nine AHCP postgraduate level students were recruited and participated in data collection. One participant was withdrawn because of unfamiliarity with the standardized teaching session about use of the Framework (no recall of the session), despite confirmation of attendance. Data from eight participants were used for analysis (CMP: n = 6; SEM: n = 2; Table 2), which achieved the sample size requirements of think aloud methodology for rich and in-depth data [ 27 , 28 ].

Diagnostic reasoning components

Informed by the Framework, all components of Elstein’s diagnostic reasoning processes [ 36 ] were used by participants, including use of treatment with physiotherapy interventions to aid diagnostic reasoning. An illustrative example is presented in Supplement 3. Clinical reasoning used primarily hypothetico-deductive processes reflecting a continuum of proficiency, was informed by deep Framework knowledge and a breadth of prior knowledge (e.g., experiential), and was supported by a range of personal characteristics (e.g., justification for decisions).

Cue acquisition

All participants sought to acquire additional cues early in the patient history, and for some this persisted into the medical history and physical examination. Cue acquisition enabled depth and breadth of understanding of patient history information to generate hypotheses and factors contributing to the patient’s pain experience (Table 3). All participants asked further questions to understand details of the patient’s pain and presentation, while some also explored the impact of pain on patient functioning and treatments received to date. There was a high degree of specificity to questions for most participants. Ongoing clinical reasoning through a thorough and complete assessment, even if the patient had previously received treatment for similar symptoms, was important for some participants. Cue acquisition was supported by personal characteristics including a patient-centered approach (e.g., understanding the patient’s beliefs about pain), and one participant reflected on their approach to acquiring patient history cues.

Hypothesis generation

Participants generated an average of 4.5 hypotheses per case (range: 2–8) and most hypotheses (77%) were generated rapidly early in the patient history. Knowledge from the Framework about patient history features of vascular pathology informed vascular hypothesis generation in the patient history for all participants in both cases (Table  4 ). Vascular hypotheses were also generated during the past medical history, where risk factors for vascular pathology were identified and interpreted by some participants who had high levels of suspicion for cervical articular involvement. Non-vascular hypotheses were generated during the physical examination by some participants to explain individual physical examination or patient history cues. Deep knowledge of the patient history section in the Framework supported high level of cue identification and interpretation for generating vascular hypotheses. Initial hypotheses were prioritized by some participants, however the level of specificity of hypotheses varied.

Cue evaluation

All participants evaluated cues throughout the patient history and physical examination in relationship to the hypotheses generated, indicating use of hypothetico-deductive reasoning processes (Table 5). Framework knowledge of patient history features of vascular pathology was used to test vascular hypotheses and aid differential diagnosis. The patient history section supported a high level of cue identification and interpretation of patient history features for all but one participant, and generation of further patient history questions for all participants. The level of specificity of these questions was high for all but one participant. Framework knowledge of recommended physical examination tests, including removal of positional testing, supported planning a focused and prioritized physical examination to further test vascular hypotheses for all participants. No participant indicated intention to use positional testing as part of their physical examination. Treatment with physiotherapy interventions served as a form of cue evaluation, and cues were evaluated to inform prognosis for some participants. At times during the physical examination, some participants demonstrated occasional errors or difficulty with cue evaluation by omitting key physical examination tests (e.g., no cranial nerve assessment despite concerns for trigeminal nerve involvement), selecting physical examination tests in advance of hypothesis generation (e.g., cervical spine instability testing), difficulty interpreting cues, or late selection of a physical examination test. Cue evaluation was supported by a range of personal characteristics. Most participants justified their selection of physical examination tests, and some self-reflected on their ability to collect useful physical examination information to inform the selection of tests. A precaution to the physical examination was identified by all participants but one, which contributed to an adaptable approach prioritizing patient safety and comfort. Critical analysis of physical examination information aided interpretation within the context of the patient for most participants.

Hypothesis evaluation

All participants used the Framework to evaluate their hypotheses throughout the patient history and physical examination, continuously shifting their level of support for hypotheses (Table 6, Supplement 4). This informed clarity in the overall level of suspicion for vascular pathology or musculoskeletal diagnoses, which were specific for most participants. Response to treatment with physiotherapy interventions served as a form of hypothesis evaluation for most participants who had a low level of suspicion for vascular pathology, highlighting ongoing reasoning processes. Hypotheses were prioritized by ranking according to level of suspicion by some participants. Difficulties weighing patient history and physical examination cues to inform judgement on the overall level of suspicion for vascular pathology were demonstrated by some participants, who reported that incomplete physical examination data and not being able to see the patient contributed to these difficulties. Hypothesis evaluation was supported by the personal characteristic of reflection, where some students reflected on the Framework’s emphasis on the patient history to evaluate a vascular hypothesis.

Treatment

The Framework supported all participants in clinical reasoning related to treatment (Table 7). Treatment decisions were always linked to the participant’s overall level of suspicion for vascular pathology or musculoskeletal diagnosis. Framework knowledge supported participants with a high level of suspicion for vascular pathology to refer for further investigations. Participants with a musculoskeletal diagnosis kept the patient for physiotherapy interventions. The Framework patient history section supported patient education about symptoms of vascular pathology and safety netting for some participants. Framework knowledge influenced informed consent processes and risk-benefit analysis to support the selection of musculoskeletal physiotherapy interventions, which were specific and prioritized for some participants. Less Framework knowledge related to treatment was demonstrated by some students, generating unclear recommendations regarding the urgency of referral and use of the Framework to inform musculoskeletal physiotherapy interventions. Treatment was supported by a range of personal characteristics. An adaptable approach that prioritized patient safety and was supported by justification was demonstrated by all participants except one. Shared decision-making enabled the selection of physiotherapy interventions, which were patient-centered (individualized, considered the whole person, identified future risk for vascular pathology). Communication with the patient’s family doctor facilitated collaborative patient-centered care for most participants.

This is the first study to explore the influence of the Framework on clinical reasoning processes in postgraduate physiotherapy students. The Framework supported clinical reasoning that used primarily hypothetico-deductive processes. It informed the generation of a vascular hypothesis during the patient history and the testing of that hypothesis through patient history questions and the selection of physical examination tests, informing clarity and support for diagnosis and management. Most postgraduate students’ clinical reasoning processes were characterized by high-level features (e.g., specificity, prioritization). However, some demonstrated occasional difficulties or errors, reflecting a continuum of clinical reasoning proficiency. Clinical reasoning processes were informed by deep knowledge of the Framework integrated with a breadth of wider knowledge and supported by a range of personal characteristics (e.g., justification for decisions, reflection).

Use of the Framework to inform clinical reasoning processes

The Framework provided a structured and comprehensive approach to support postgraduate students’ clinical reasoning processes in assessment and management of the cervical spine region, considering the potential for vascular pathology. Patient history and physical examination information was evaluated to inform clarity and support the decision to refer for further vascular investigations or proceed with musculoskeletal physiotherapy diagnosis/interventions. The Framework is not intended to lead to a vascular pathology diagnosis [ 7 , 8 ], and following the Framework does not guarantee vascular pathologies will be identified [ 41 ]. Rather, it aims to support a process of clinical reasoning to elicit and interpret appropriate patient history and physical examination information to estimate the probability of vascular pathology and inform judgement about the need to refer for further investigations [ 7 , 8 , 42 ]. Results of this study suggest the Framework has achieved this aim for postgraduate physiotherapy students.

The Framework supported postgraduate students in using primarily hypothetico-deductive diagnostic reasoning processes. This is expected given the diversity of vascular pathology clinical presentations, which precludes a definite clinical pattern, and their inherent complexity as potential masqueraders of musculoskeletal problems [ 7 ]. It is also consistent with prior research investigating clinical reasoning processes in musculoskeletal physiotherapy postgraduate students [ 12 ] and clinical experts [ 29 ], where hypothetico-deductive and pattern recognition diagnostic reasoning are employed according to the demands of the clinical situation [ 10 ]. Diagnostic reasoning of most postgraduate students in this study demonstrated features suggestive of high-level clinical reasoning in musculoskeletal physiotherapy [ 12 ], including ongoing reasoning with high-level cue identification and interpretation, specificity and prioritization during assessment and treatment, use of physiotherapy interventions to aid diagnostic reasoning, and prognosis determination [ 12 , 29 , 43 ]. Expert physiotherapy practice has been further described as using a dialectical model of clinical reasoning with seamless transitions between clinical reasoning strategies [ 44 ]. While diagnostic reasoning was a focus in this study, postgraduate students considered a breadth of information as important to their reasoning (e.g., the patient’s perspective on the reason for their pain). This suggests wider reasoning strategies (e.g., narrative, collaborative) were employed to enable shared decision-making within the context of patient-centered care.

Study findings also highlighted a continuum of proficiency in use of the Framework to inform clinical reasoning processes. Not all students demonstrated all characteristics of high-level clinical reasoning and there are suggestions of incomplete reasoning processes, for example occasional errors in evaluating cues. Some students offered explanations such as incomplete case information as factors contributing to difficulties with clinical reasoning processes. However, the ability to critically evaluate incomplete and potentially conflicting clinical information is consistently identified as an advanced clinical practice competency [ 14 , 43 ]. A continuum of proficiency in clinical reasoning in musculoskeletal physiotherapy is supported by wider healthcare professions describing acquisition and application of clinical knowledge and skills as a developmental continuum of clinical competence progressing from novice to expert [ 45 , 46 ]. The range of years of clinical practice experience in this cohort of students (3–14 years) or prior completed postgraduate education may have contributed to the continuum of proficiency, as high-quality and diverse experiential learning is essential for the development of high-level clinical reasoning [ 14 , 47 ].

Deep knowledge of the Framework informs clinical reasoning processes

Postgraduate students demonstrated deep Framework knowledge to inform clinical reasoning processes. All students demonstrated knowledge of patient history features of vascular pathology, recommended physical examination tests to test a vascular hypothesis, and the need to refer if there is a high level of suspicion for vascular pathology. A key development in the recent Framework update is the removal of the recommendation to perform positional testing [ 8 ]. All students demonstrated knowledge of this development, and none wanted to test a vascular hypothesis with positional testing. Most, though not all, also demonstrated Framework knowledge about considerations for planning treatment with physiotherapy interventions (e.g., risk-benefit analysis, informed consent), which underscores the continuum of proficiency in postgraduate students. Rich organization of multidimensional knowledge is a required component of high-level clinical reasoning and is characteristic of expert physiotherapy practice [ 10 , 48 , 49 ]. Most postgraduate physiotherapy students displayed this expert practice characteristic through integration of deep Framework knowledge with a breadth of prior knowledge (e.g., experiential, propositional) to inform clinical reasoning processes. This highlights the utility of the Framework in postgraduate physiotherapy education to develop advanced-level evidence-based knowledge informing clinical reasoning processes for safe assessment and management of the cervical spine, considering the potential for vascular pathology [ 8 , 9 , 50 , 51 , 52 ].

Framework supports personal characteristics to facilitate integration of knowledge and clinical reasoning

The Framework supported personal characteristics of postgraduate students, which are key drivers for the complex integration of advanced knowledge and high-level clinical reasoning [ 10 , 12 , 48 ]. For all students, the Framework supported justification for decisions and patient-centered care, emphasizing a whole-person approach and shared decision-making. Further demonstrating a continuum of proficiency, the Framework supported a wider breadth of personal characteristics for some students, including critical analysis, reflection, self-analysis, and adaptability. These personal characteristics illustrate the interwoven cognitive and metacognitive skills that influence and support a high level of clinical reasoning [ 10 , 12 ] and the development of clinical expertise [ 48 , 53 ]. For example, reflection is critical to developing high-level clinical reasoning and advanced-level practice [ 12 , 54 , 55 ]. Postgraduate students reflected on prior knowledge, experiences, and action within the context of current Framework knowledge, emphasizing active engagement in the cognitive processes that inform clinical reasoning. Reflection-in-action was highlighted by self-analysis and adaptability: these characteristics require continuous cognitive processing to consider personal strengths and limitations in the context of the patient and evidence-based practice, adapting the clinical encounter as required [ 53 , 55 ]. These findings highlight use of the Framework in postgraduate education to support development of personal characteristics that are indicative of an advanced level of clinical practice [ 12 ].

Synthesis of findings

Derived from a synthesis of the study findings and informed by the Postgraduate Musculoskeletal Physiotherapy Practice model [ 12 ], use of the Framework to inform clinical reasoning processes in postgraduate students is illustrated in Fig. 2. Overlapping clinical reasoning, knowledge, and personal characteristic components emphasize the complex interaction of factors contributing to clinical reasoning processes. Personal characteristics of postgraduate students underpin clinical reasoning and knowledge, highlighting their role in facilitating the integration of these two components. Bolded subcomponents indicate convergence of results across all postgraduate students; the remaining subcomponents underscore the variability among postgraduate students that contributes to a continuum of clinical reasoning proficiency. The relative weighting of the components is approximately equal to balance the breadth and convergence of subcomponents. The synthesis of findings aligns with the Postgraduate Musculoskeletal Physiotherapy Practice model [ 12 ], though some differences exist. Limited personal characteristics were identified in this study, with little convergence across students, which may be due to the objective of this study and the case analysis approach.

Figure 2

Use of the Framework to inform clinical reasoning in postgraduate level musculoskeletal physiotherapy students. Adapted from the Postgraduate Musculoskeletal Physiotherapy Practice model [ 12 ].

Strengths and limitations

Think aloud case analyses enabled a situationally dependent understanding of how the Framework informs clinical reasoning processes in postgraduate level students [ 17 ], considering the rare potential for vascular pathology. A limitation of this approach was the standardized nature of the case information provided to students, which may have influenced clinical reasoning processes; future research may consider patient case simulation to address this limitation [ 30 ]. Interviews were conducted during the second half of the postgraduate educational program, and this timing could have influenced clinical reasoning processes compared with interviews conducted at the end of the program. Future research can explore use of the Framework to inform clinical reasoning processes in established advanced practice physiotherapists. The sample size of this study aligns with recommendations for think aloud methodology [ 27 , 28 ] and achieved rich data, and purposive sampling enabled wide representation of key characteristics (e.g., gender, ethnicity, country of training, physiotherapy experiences), which enhances the transferability of findings. Students were aware of the study objective in advance of interviews, which may have contributed to a heightened awareness of vascular pathology. The prior relationship between students and researchers may also have influenced results; however, several strategies were implemented to minimize this influence.

Implications

The Framework is widely implemented within IFOMPT postgraduate educational programs and has led to important shifts in educational curricula [ 9 ]. Findings of this study support use of the Framework as an educational resource in postgraduate physiotherapy programs to inform clinical reasoning processes for safe and effective assessment and management of cervical spine presentations considering the potential for vascular pathology. Individualized approaches may be required to support each student, owing to a continuum of clinical reasoning proficiency. As the Framework was written for practicing musculoskeletal clinicians, future research is required to explore use of the Framework to inform clinical reasoning in learners at different levels, for example entry-level physiotherapy students.

Conclusion

The Framework supported clinical reasoning that used primarily hypothetico-deductive processes in postgraduate physiotherapy students. It informed the generation of a vascular hypothesis during the patient history and the testing of that hypothesis through patient history questions and the selection of physical examination tests, informing clarity and support for diagnosis and management. Most postgraduate students’ clinical reasoning processes were characterized as high-level, informed by deep Framework knowledge integrated with a breadth of wider knowledge, and supported by a range of personal characteristics that facilitate the integration of advanced knowledge and high-level clinical reasoning. Future research is required to explore use of the Framework to inform clinical reasoning in learners at different levels.

Data availability

The dataset used and analyzed during the current study is available from the corresponding author on reasonable request.

References

Safiri S, Kolahi AA, Hoy D, Buchbinder R, Mansournia MA, Bettampadi D, et al. Global, regional, and national burden of neck pain in the general population, 1990–2017: systematic analysis of the Global Burden of Disease Study 2017. BMJ. 2020;368.

Stovner LJ, Nichols E, Steiner TJ, Abd-Allah F, Abdelalim A, Al-Raddadi RM, et al. Global, regional, and national burden of migraine and tension-type headache, 1990–2016: a systematic analysis for the Global Burden of Disease Study 2016. Lancet Neurol. 2018;17:954–76.

Cieza A, Causey K, Kamenov K, Hanson SW, Chatterji S, Vos T. Global estimates of the need for rehabilitation based on the Global Burden of Disease Study 2019: a systematic analysis for the Global Burden of Disease Study 2019. Lancet. 2020;396:2006–17.

Côté P, Yu H, Shearer HM, Randhawa K, Wong JJ, Mior S et al. Non-pharmacological management of persistent headaches associated with neck pain: A clinical practice guideline from the Ontario protocol for traffic injury management (OPTIMa) collaboration. European Journal of Pain (United Kingdom). 2019;23.

Diamanti S, Longoni M, Agostoni EC. Leading symptoms in cerebrovascular diseases: what about headache? Neurological Sciences. 2019.

Debette S, Compter A, Labeyrie MA, Uyttenboogaart M, Metso TM, Majersik JJ, et al. Epidemiology, pathophysiology, diagnosis, and management of intracranial artery dissection. Lancet Neurol. 2015;14:640–54.

Rushton A, Carlesso LC, Flynn T, Hing WA, Rubinstein SM, Vogel S, et al. International framework for examination of the cervical region for potential of vascular pathologies of the neck prior to musculoskeletal intervention: International IFOMPT Cervical Framework. J Orthop Sports Phys Ther. 2023;53:7–22.

Rushton A, Carlesso LC, Flynn T, Hing WA, Kerry R, Rubinstein SM, et al. International framework for examination of the cervical region for potential of vascular pathologies of the neck prior to orthopaedic manual therapy (OMT) intervention: International IFOMPT Cervical Framework. International IFOMPT Cervical Framework; 2020.

Hutting N, Kranenburg R, Taylor A, Wilbrink W, Kerry R, Mourad F. Implementation of the International IFOMPT Cervical Framework: a survey among educational programmes. Musculoskelet Sci Pract. 2022;62:102619.

Jones MA, Jensen G, Edwards I. Clinical reasoning in physiotherapy. In: Campbell S, Watkins V, editors. Clinical reasoning in the health professions. 3rd ed. Philadelphia: Elsevier; 2008. pp. 245–56.

Fennelly O, Desmeules F, O’Sullivan C, Heneghan NR, Cunningham C. Advanced musculoskeletal physiotherapy practice: informing education curricula. Musculoskelet Sci Pract. 2020;48:102174.

Rushton A, Lindsay G. Defining the construct of masters level clinical practice in manipulative physiotherapy. Man Ther. 2010;15.

Rushton A, Lindsay G. Defining the construct of masters level clinical practice in healthcare based on the UK experience. Med Teach. 2008;30:e100–7.

Noblet T, Heneghan NR, Hindle J, Rushton A. Accreditation of advanced clinical practice of musculoskeletal physiotherapy in England: a qualitative two-phase study to inform implementation. Physiotherapy (United Kingdom). 2021;113.

Tawiah AK, Stokes E, Wieler M, Desmeules F, Finucane L, Lewis J, et al. Developing an international competency and capability framework for advanced practice physiotherapy: a scoping review with narrative synthesis. Physiotherapy. 2023;122:3–16.

Williams A, Rushton A, Lewis JJ, Phillips C. Evaluation of the clinical effectiveness of a work-based mentoring programme to develop clinical reasoning on patient outcome: a stepped wedge cluster randomised controlled trial. PLoS ONE. 2019;14.

Miles R. Complexity, representation and practice: case study as method and methodology. Issues Educational Res. 2015;25.

Thorne S, Kirkham SR, MacDonald-Emes J. Interpretive description: a noncategorical qualitative alternative for developing nursing knowledge. Res Nurs Health. 1997;20.

Thorne S, Kirkham SR, O’Flynn-Magee K. The Analytic challenge in interpretive description. Int J Qual Methods. 2004;3.

Creswell JW. Research design: qualitative, quantitative, and mixed methods approaches. Sage; 2003.

Dolan S, Nowell L, Moules NJ. Interpretive description in applied mixed methods research: exploring issues of fit, purpose, process, context, and design. Nurs Inq. 2023;30.

Thorne S. Interpretive description. In: Routledge International Handbook of Qualitative Nursing Research. 2013. pp. 295–306.

Thompson Burdine J, Thorne S, Sandhu G. Interpretive description: a flexible qualitative methodology for medical education research. Med Educ. 2021;55.

Tong A, Sainsbury P, Craig J. Consolidated criteria for reporting qualitative research (COREQ): a 32-item checklist for interviews and focus group. Int J Qual Health Care. 2007;19:349–57.

Archibald MM, Ambagtsheer RC, Casey MG, Lawless M. Using zoom videoconferencing for qualitative data Collection: perceptions and experiences of researchers and participants. Int J Qual Methods. 2019;18.

Van Someren M, Barnard YF, Sandberg J. The think aloud method: a practical approach to modelling cognitive processes. Volume 11. London: Academic Press; 1994.

Fonteyn ME, Kuipers B, Grobe SJ. A description of think aloud method and protocol analysis. Qual Health Res. 1993;3:430–41.

Lundgrén-Laine H, Salanterä S. Think-aloud technique and protocol analysis in clinical decision-making research. Qual Health Res. 2010;20:565–75.

Doody C, McAteer M. Clinical reasoning of expert and novice physiotherapists in an outpatient orthopaedic setting. Physiotherapy. 2002;88.

Gilliland S. Physical therapist students’ development of diagnostic reasoning: a longitudinal study. J Phys Therapy Educ. 2017;31.

Ericsson KA, Simon HA. How to study thinking in everyday life: contrasting think-aloud protocols with descriptions and explanations of thinking. Mind Cult Act. 1998;5:178–86.

Dwyer SC, Buckle JL. The space between: on being an insider-outsider in qualitative research. Int J Qual Methods. 2009;8.

Shenton AK. Strategies for ensuring trustworthiness in qualitative research projects. Educ Inform. 2004;22:63–75.

Braun V, Clarke V. Using thematic analysis in psychology. Qual Res Psychol. 2006;3:77–101.

Fereday J, Muir-Cochrane E. Demonstrating rigor using thematic analysis: a hybrid approach of inductive and deductive coding and theme development. Int J Qual Methods. 2006;5.

Elstein AS, Shulman LS, Sprafka SA. Medical problem solving: an analysis of clinical reasoning. Cambridge, MA: Harvard University Press; 1978.

Proudfoot K. Inductive/Deductive Hybrid Thematic Analysis in mixed methods research. J Mix Methods Res. 2023;17.

Charters E. The use of think-aloud methods in qualitative research: an introduction to think-aloud methods. Brock Educ J. 2003;12.

Nowell LS, Norris JM, White DE, Moules NJ. Thematic analysis: striving to meet the trustworthiness criteria. Int J Qual Methods. 2017;16:1–13.

Thurmond VA. The point of triangulation. J Nurs Scholarsh. 2001;33.

Hutting N, Wilbrink W, Taylor A, Kerry R. Identifying vascular pathologies or flow limitations: important aspects in the clinical reasoning process. Musculoskelet Sci Pract. 2021;53:102343.

de Best RF, Coppieters MW, van Trijffel E, Compter A, Uyttenboogaart M, Bot JC, et al. Risk assessment of vascular complications following manual therapy and exercise for the cervical region: diagnostic accuracy of the International Federation of Orthopaedic Manipulative physical therapists framework (the Go4Safe project). J Physiother. 2023;69:260–6.

Petty NJ. Becoming an expert: a masterclass in developing clinical expertise. Int J Osteopath Med. 2015;18:207–18.

Edwards I, Jones M, Carr J, Braunack-Mayer A, Jensen GM. Clinical reasoning strategies in physical therapy. Phys Ther. 2004;84.

Carraccio CL, Benson BJ, Nixon LJ, Derstine PL. Clinical teaching from the Educational Bench to the clinical Bedside: Translating the Dreyfus Developmental Model to the Learning of Clinical Skills.

Benner P. Using the Dreyfus model of skill acquisition to describe and interpret skill acquisition and clinical judgment in nursing practice and education. Bull Sci Technol Soc. 2004;24:188–99.

Benner P. From novice to expert: excellence and power in clinical nursing practice. Commemorative ed. Upper Saddle River, NJ: Prentice Hall; 2001.

Jensen GM, Gwyer J, Shepard KF, Hack LM. Expert practice in physical therapy. Phys Ther. 2000;80.

Huhn K, Gilliland SJ, Black LL, Wainwright SF, Christensen N. Clinical reasoning in physical therapy: a concept analysis. Phys Ther. 2019;99.

Hutting N, Kranenburg HA, Kerry R. Yes, we should abandon pre-treatment positional testing of the cervical spine. Musculoskelet Sci Pract. 2020;49:102181.

Kranenburg HA, Tyer R, Schmitt M, Luijckx GJ, van der Schans C, Hutting N, et al. Effects of head and neck positions on blood flow in the vertebral, internal carotid, and intracranial arteries: a systematic review. J Orthop Sports Phys Ther. 2019;49:688–97.

Hutting N, Kerry R, Coppieters MW, Scholten-Peeters GGM. Considerations to improve the safety of cervical spine manual therapy. Musculoskelet Sci Pract. 2018;33.

Wainwright SF, Shepard KF, Harman LB, Stephens J. Novice and experienced physical therapist clinicians: a comparison of how reflection is used to inform the clinical decision-making process. Phys Ther. 2010;90:75–88.

Dy SM, Purnell TS. Key concepts relevant to quality of complex and shared decision-making in health care: a literature review. Soc Sci Med. 2012;74:582–7.

Christensen N, Jones MA, Higgs J, Edwards I. Dimensions of clinical reasoning capability. In: Campbell S, Watkins V, editors. Clinical reasoning in the health professions. 3rd ed. Philadelphia: Elsevier; 2008. pp. 101–10.

Acknowledgements

The authors would like to acknowledge study participants and the transcriptionist for their time in completing and transcribing think aloud interviews.

Funding

No funding was received to conduct this research study.

Author information

Authors and affiliations

School of Physical Therapy, Western University, London, Ontario, Canada

Katie L. Kowalski, Heather Gillis, Katherine Henning, Paul Parikh, Jackie Sadi & Alison Rushton

Contributions

Katie Kowalski: Conceptualization, methodology, validation, formal analysis, investigation, data curation, writing – original draft, visualization, project administration. Heather Gillis: Validation, resources, writing – review & editing. Katherine Henning: Investigation, formal analysis, writing – review & editing. Paul Parikh: Validation, resources, writing – review & editing. Jackie Sadi: Validation, resources, writing – review & editing. Alison Rushton: Conceptualization, methodology, validation, writing – review & editing, supervision.

Corresponding author

Correspondence to Katie L. Kowalski.

Ethics declarations

Ethics approval and consent to participate

Western University Health Science Research Ethics Board granted ethical approval (Project ID: 119934). Participants provided written informed consent prior to participating in think aloud interviews.

Consent for publication

Not applicable.

Competing interests

Author AR is an author of the IFOMPT Cervical Framework. Authors JS and HG are lecturers on the AHCP CMP program. AR and JS led standardized teaching of the Framework. Measures to reduce the influence of potential competing interests on the conduct and results of this study included: the Framework representing international and multiprofessional consensus, recruitment of participants by author KK, data collection and analysis completed by KK with input from AR, JS and HG at the stage of data synthesis and interpretation, and wider peer scrutiny of initial findings. KK, KH and PP have no potential competing interests.

Authors’ information

The lead author of this study (AR) is the first author of the International IFOMPT Cervical Framework.

Additional information

Publisher’s note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Electronic supplementary material

Below are the links to the electronic supplementary material.

Supplementary Material 1

Supplementary Material 2

Supplementary Material 3

Supplementary Material 4

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article’s Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article’s Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/ . The Creative Commons Public Domain Dedication waiver ( http://creativecommons.org/publicdomain/zero/1.0/ ) applies to the data made available in this article, unless otherwise stated in a credit line to the data.

Reprints and permissions

About this article

Cite this article

Kowalski, K.L., Gillis, H., Henning, K. et al. Use of the International IFOMPT Cervical Framework to inform clinical reasoning in postgraduate level physiotherapy students: a qualitative study using think aloud methodology. BMC Med Educ 24, 486 (2024). https://doi.org/10.1186/s12909-024-05399-x

Received : 11 February 2024

Accepted : 08 April 2024

Published : 02 May 2024

DOI : https://doi.org/10.1186/s12909-024-05399-x

Keywords

  • International IFOMPT Cervical Framework
  • Clinical reasoning
  • Postgraduate students
  • Physiotherapy
  • Educational research
  • Qualitative research
  • Think aloud methodology

BMC Medical Education

ISSN: 1472-6920


  16. Hypothesis: Definition, Examples, and Types

    A hypothesis is a tentative statement about the relationship between two or more variables. It is a specific, testable prediction about what you expect to happen in a study. It is a preliminary answer to your question that helps guide the research process. Consider a study designed to examine the relationship between sleep deprivation and test ...

  17. The Scientific Method

    Hypotheses, Models, Theories, and Laws. While some people do incorrectly use words like "theory" and "hypotheses" interchangeably, the scientific community has very strict definitions of these terms. Hypothesis: A hypothesis is an observation, usually based on a cause and effect. It is the basic idea that has not been tested.

  18. Turbulence modeling

    Turbulence modeling. A simulation of a physical wind tunnel airplane model. In fluid dynamics, turbulence modeling is the construction and use of a mathematical model to predict the effects of turbulence. Turbulent flows are commonplace in most real-life scenarios. In spite of decades of research, there is no analytical theory to predict the ...

  19. Model-based learning: a synthesis of theory and research

    This article provides a review of theoretical approaches to model-based learning and related research. In accordance with the definition of model-based learning as an acquisition and utilization of mental models by learners, the first section centers on mental model theory. In accordance with epistemology of modeling the issues of semantics, ontology, and learning with models as well as ...

  20. How to Write a Hypothesis? Types and Examples

    A hypothesis is an assumption about an association between variables made based on limited evidence, which should be tested. A hypothesis has four parts—the research question, independent variable, dependent variable, and the proposed relationship between the variables. The statement should be clear, concise, testable, logical, and falsifiable.

  21. Model Setting and Interpretation of Results in Research Using

    Statistical hypothesis testing results for parameter estimates can be used. Equivalent model. A problem that researchers often overlook in the model modification stage is the existence of an equivalent model, which refers to a model that produces the same predicted covariance matrix although the established paths between variables may differ.

  22. Combined Rule-Based and Hypothesis-Based Method for Building Model

    Aiming at reconstructing a 3D building model at Level of Detail (LoD) 2 and even LoD3 with preferred geometry accuracy and affordable computation expense, in this paper, we propose a novel method for the efficient reconstruction of building models from the photogrammetric point clouds which combines the rule-based and the hypothesis-based ...

  23. Chapter 4 Research Model, Hypotheses, and Methodology

    PDF-1.6 %öäüß 1 0 obj /Type /Catalog /Version /1.6 /Pages 2 0 R /PageLayout /OneColumn /PageMode /UseOutlines /PageLabels 3 0 R >> endobj 4 0 obj /CreationDate (D ...

  24. The use and limitations of null-model-based hypothesis testing

    Null-model-based hypothesis testing in species co-occurrence studies. In order to answer the general question of the use and limitations of null-model-based hypothesis testing, it is necessary to properly detail how it is actually used in scientific research. In this section, I will use Connor and Simberloff's ( 1979) null model of species co ...

  25. Integrative metabolomics-genomics analysis identifies key ...

    In this study, we aimed to explore a presumed correlation between the transcriptome and the metabolome in a SCZ model based on patient-derived induced pluripotent stem cells (iPSCs).

  26. Use of the International IFOMPT Cervical Framework to inform clinical

    Background Vascular pathologies of the head and neck are rare but can present as musculoskeletal problems. The International Federation of Orthopedic Manipulative Physical Therapists (IFOMPT) Cervical Framework (Framework) aims to assist evidence-based clinical reasoning for safe assessment and management of the cervical spine considering potential for vascular pathology. Clinical reasoning is ...

  27. Animals

    The aim of this study was to analyse the bite forces of seven species from three carnivore families: Canidae, Felidae, and Ursidae. The material consisted of complete, dry crania and mandibles. A total of 33 measurements were taken on each skull, mandible, temporomandibular joint, and teeth. The area of the temporalis and masseter muscles was calculated, as was the length of the arms of the ...