Here is an illustration of an action hypothesis stated in different forms. Carefully observe the wording, the format, and the relationship between the factors in each form of the hypothesis.

Predictive form: If the III grade students receive "drill work" in the chapter "Addition of Whole Numbers", their progress in Arithmetic will be better.

Declarative or directional form: A "drill work" programme in the chapter "Addition of Whole Numbers" for III grade students will cause/influence better progress in Arithmetic. Or: Drill work in addition (whole numbers) and progress in Arithmetic are (positively) related to each other. Or: There is a (positive) relationship between "drill work" in addition (whole numbers) and progress in Arithmetic.

Question form: To what extent will a "drill work" programme in the chapter "Addition (Whole Numbers)" for III grade students improve their progress in Arithmetic? Or: Does a drill-work programme in addition (whole numbers) for III graders improve their progress in Arithmetic? If so, to what extent?

Null form: A "drill work" programme in the chapter "Addition (Whole Numbers)" for III grade students and their progress in Arithmetic are not related to each other. Or: There is no significant relationship between the "drill work" programme in the chapter "Addition (Whole Numbers)" and progress in Arithmetic among III grade students.

Exercises:
1. Replace the words "drill work" with "supervised study" in all the forms.
2. In all the forms, add "of two-digit numbers (with carrying)" after "addition".
Eight students in Class IV are not able to identify directions on a map. You have realized that inadequate exposure to map reading is the cause of the problem. Now write a hypothesis for finding a solution to this problem in all four forms. Your course of remedial action should be reflected in the hypothesis.
Activity Sheet on Formulation of action hypothesis
Saul McLeod, PhD
Editor-in-Chief for Simply Psychology
BSc (Hons) Psychology, MRes, PhD, University of Manchester
Saul McLeod, PhD., is a qualified psychology teacher with over 18 years of experience in further and higher education. He has been published in peer-reviewed journals, including the Journal of Clinical Psychology.
Olivia Guy-Evans, MSc
Associate Editor for Simply Psychology
BSc (Hons) Psychology, MSc Psychology of Education
Olivia Guy-Evans is a writer and associate editor for Simply Psychology. She has previously worked in healthcare and educational sectors.
A research hypothesis (plural: hypotheses) is a specific, testable prediction about the anticipated results of a study, established at its outset. It is a key component of the scientific method.
Hypotheses connect theory to data and guide the research process towards expanding scientific understanding.
Predictions typically arise from a thorough knowledge of the research literature, curiosity about real-world problems or implications, and integrating this to advance theory. They build on existing literature while providing new insight.
Alternative hypothesis.
The research hypothesis is often called the alternative or experimental hypothesis in experimental research.
It typically suggests a potential relationship between two key variables: the independent variable, which the researcher manipulates, and the dependent variable, which is measured based on those changes.
The alternative hypothesis states a relationship exists between the two variables being studied (one variable affects the other).
A hypothesis is a testable statement or prediction about the relationship between two or more variables, and a key component of the scientific method.
In summary, a hypothesis is a precise, testable statement of what researchers expect to happen in a study and why. Hypotheses connect theory to data and guide the research process towards expanding scientific understanding.
An experimental hypothesis predicts what change(s) will occur in the dependent variable when the independent variable is manipulated.
It states that the results are not due to chance and are significant in supporting the theory being investigated.
The alternative hypothesis can be directional, indicating a specific direction of the effect, or non-directional, suggesting a difference without specifying its nature. It’s what researchers aim to support or demonstrate through their study.
The null hypothesis states no relationship exists between the two variables being studied (one variable does not affect the other). There will be no changes in the dependent variable due to manipulating the independent variable.
It states results are due to chance and are not significant in supporting the idea being investigated.
The null hypothesis, positing no effect or relationship, is a foundational contrast to the research hypothesis in scientific inquiry. It establishes a baseline for statistical testing, promoting objectivity by initiating research from a neutral stance.
Many statistical methods are tailored to test the null hypothesis, determining the likelihood of observed results if no true effect exists.
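As a toy illustration of this idea, the sketch below computes, with nothing but the standard library, how likely an observed result would be if no true effect existed. The coin-flip scenario and all numbers are invented for illustration.

```python
# Sketch: likelihood of an observed result under the null hypothesis.
# Suppose 60 of 100 coin flips land heads. Under H0 ("the coin is fair"),
# how likely is a result at least this extreme in either direction?
from math import comb

n, k = 100, 60

# Exact binomial tail: P(X >= 60) under p = 0.5, doubled for two tails
tail = sum(comb(n, i) for i in range(k, n + 1)) / 2 ** n
p_value = 2 * tail

print(f"two-tailed p = {p_value:.4f}")
```

Here the p-value comes out just above the conventional 0.05 cutoff, so the null hypothesis would not be rejected despite the seemingly lopsided count.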
This dual-hypothesis approach provides clarity, ensuring that research intentions are explicit, and fosters consistency across scientific studies, enhancing the standardization and interpretability of research outcomes.
A non-directional hypothesis, also known as a two-tailed hypothesis, predicts that there is a difference or relationship between two variables but does not specify the direction of this relationship.
It merely indicates that a change or effect will occur without predicting which group will have higher or lower values.
For example, “There is a difference in performance between Group A and Group B” is a non-directional hypothesis.
A directional (one-tailed) hypothesis predicts the nature of the effect of the independent variable on the dependent variable: it predicts in which direction the change will take place (i.e., greater, smaller, more, less).
It specifies whether one variable is greater, lesser, or different from another, rather than just indicating that there’s a difference without specifying its nature.
For example, “Exercise increases weight loss” is a directional hypothesis.
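The distinction matters statistically: a directional hypothesis is tested with a one-tailed test, a non-directional one with a two-tailed test. A minimal sketch, assuming `scipy` is available; the data are invented for illustration.

```python
# Same invented data tested under a non-directional and a directional H1.
from scipy import stats

exercise = [3.1, 2.8, 3.5, 4.0, 2.9, 3.6]  # kg lost, exercise group
control = [1.9, 2.2, 1.5, 2.4, 2.0, 1.8]   # kg lost, control group

# Non-directional H1: the group means differ (two-tailed test)
t_stat, p_two = stats.ttest_ind(exercise, control)

# Directional H1: the exercise group loses MORE weight (one-tailed test)
t_stat, p_one = stats.ttest_ind(exercise, control, alternative="greater")

print(f"two-tailed p = {p_two:.4f}, one-tailed p = {p_one:.4f}")
```

For the same data the one-tailed p-value here is half the two-tailed one, which is why the direction must be predicted before seeing the data, not after.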
The Falsification Principle, proposed by Karl Popper, is a way of demarcating science from non-science. It suggests that for a theory or hypothesis to be considered scientific, it must be testable and refutable.
Falsifiability emphasizes that scientific claims shouldn’t just be confirmable but should also have the potential to be proven wrong.
It means that there should exist some potential evidence or experiment that could prove the proposition false.
However many confirming instances exist for a theory, it only takes one counter observation to falsify it. For example, the hypothesis that “all swans are white,” can be falsified by observing a black swan.
For Popper, science should attempt to disprove a theory rather than attempt to continually provide evidence to support a research hypothesis.
Hypotheses make probabilistic predictions. They state the expected outcome if a particular relationship exists. However, a study result supporting a hypothesis does not definitively prove it is true.
All studies have limitations. There may be unknown confounding factors or issues that limit the certainty of conclusions. Additional studies may yield different results.
In science, hypotheses can realistically only be supported with some degree of confidence, not proven. The process of science is to incrementally accumulate evidence for and against hypothesized relationships in an ongoing pursuit of better models and explanations that best fit the empirical data. But hypotheses remain open to revision and rejection if that is where the evidence leads.
We can never 100% prove the alternative hypothesis. Instead, we see if we can disprove, or reject the null hypothesis.
If we reject the null hypothesis, this doesn’t mean that our alternative hypothesis is correct but does support the alternative/experimental hypothesis.
Upon analysis of the results, an alternative hypothesis can be rejected or supported, but it can never be proven to be correct. We must avoid any reference to results proving a theory as this implies 100% certainty, and there is always a chance that evidence may exist which could refute a theory.
Consider a hypothesis many teachers might subscribe to: students work better on Monday morning than on Friday afternoon (IV=Day, DV= Standard of work).
Now, if we decide to study this by giving the same group of students a lesson on a Monday morning and a Friday afternoon and then measuring their immediate recall of the material covered in each session, we would end up with an alternative hypothesis ("students will recall more material from the Monday morning lesson than from the Friday afternoon lesson") and a null hypothesis ("there will be no difference in recall between the Monday morning and Friday afternoon lessons").
Published on May 6, 2022 by Shona McCombes . Revised on November 20, 2023.
A hypothesis is a statement that can be tested by scientific research. If you want to test a relationship between two or more variables, you need to write hypotheses before you start your experiment or data collection.
Example: Daily apple consumption leads to fewer doctor’s visits.
A hypothesis states your predictions about what your research will find. It is a tentative answer to your research question that has not yet been tested. For some research projects, you might have to write several hypotheses that address different aspects of your research question.
A hypothesis is not just a guess – it should be based on existing theories and knowledge. It also has to be testable, which means you can support or refute it through scientific research methods (such as experiments, observations and statistical analysis of data).
Hypotheses propose a relationship between two or more types of variables .
If there are any control variables , extraneous variables , or confounding variables , be sure to jot those down as you go to minimize the chances that research bias will affect your results.
Consider, for example, the hypothesis that spending more time in the sun increases happiness. Here the independent variable is exposure to the sun (the assumed cause), and the dependent variable is the level of happiness (the assumed effect).
Step 1: Ask a question.
Writing a hypothesis begins with a research question that you want to answer. The question should be focused, specific, and researchable within the constraints of your project.
Your initial answer to the question should be based on what is already known about the topic. Look for theories and previous studies to help you form educated assumptions about what your research will find.
At this stage, you might construct a conceptual framework to ensure that you’re embarking on a relevant topic . This can also help you identify which variables you will study and what you think the relationships are between them. Sometimes, you’ll have to operationalize more complex constructs.
Now you should have some idea of what you expect to find. Write your initial answer to the question in a clear, concise sentence.
You need to make sure your hypothesis is specific and testable. There are various ways of phrasing a hypothesis, but all the terms you use should have clear definitions, and the hypothesis should contain: the relevant variables, the specific group being studied, and the predicted outcome of the experiment or analysis.
To identify the variables, you can write a simple prediction in if…then form. The first part of the sentence states the independent variable and the second part states the dependent variable.
In academic research, hypotheses are more commonly phrased in terms of correlations or effects, where you directly state the predicted relationship between variables.
If you are comparing two groups, the hypothesis can state what difference you expect to find between them.
If your research involves statistical hypothesis testing, you will also have to write a null hypothesis. The null hypothesis is the default position that there is no association between the variables. The null hypothesis is written as H0, while the alternative hypothesis is H1 or Ha.
| Research question | Hypothesis | Null hypothesis |
|---|---|---|
| What are the health benefits of eating an apple a day? | Increasing apple consumption in over-60s will result in decreasing frequency of doctor’s visits. | Increasing apple consumption in over-60s will have no effect on frequency of doctor’s visits. |
| Which airlines have the most delays? | Low-cost airlines are more likely to have delays than premium airlines. | Low-cost and premium airlines are equally likely to have delays. |
| Can flexible work arrangements improve job satisfaction? | Employees who have flexible working hours will report greater job satisfaction than employees who work fixed hours. | There is no relationship between working hour flexibility and job satisfaction. |
| How effective is high school sex education at reducing teen pregnancies? | Teenagers who received sex education lessons throughout high school will have lower rates of unplanned pregnancy than teenagers who did not receive any sex education. | High school sex education has no effect on teen pregnancy rates. |
| What effect does daily use of social media have on the attention span of under-16s? | There is a negative correlation between time spent on social media and attention span in under-16s. | There is no relationship between social media use and attention span in under-16s. |
If you want to know more about the research process , methodology , research bias , or statistics , make sure to check out some of our other articles with explanations and examples.
Null and alternative hypotheses are used in statistical hypothesis testing . The null hypothesis of a test always predicts no effect or no relationship between variables, while the alternative hypothesis states your research prediction of an effect or relationship.
Hypothesis testing is a formal procedure for investigating our ideas about the world using statistics. It is used by scientists to test specific predictions, called hypotheses , by calculating how likely it is that a pattern or relationship between variables could have arisen by chance.
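One way to make the "arisen by chance" idea concrete is a permutation test: repeatedly shuffle the group labels and see how often chance alone produces a difference as large as the one observed. The recall scores below are invented, echoing the Monday/Friday lesson example earlier in this document.

```python
# Permutation test sketch: invented recall scores for two lesson times.
import random

monday = [14, 15, 13, 16, 15, 14]   # items recalled, Monday morning
friday = [11, 12, 10, 13, 11, 12]   # items recalled, Friday afternoon

def mean(xs):
    return sum(xs) / len(xs)

observed = mean(monday) - mean(friday)

# Under H0 the group labels are arbitrary: shuffle and re-split many times
random.seed(0)
pooled = monday + friday
hits = 0
trials = 10_000
for _ in range(trials):
    random.shuffle(pooled)
    if mean(pooled[:6]) - mean(pooled[6:]) >= observed:
        hits += 1

p_value = hits / trials
print(f"observed difference = {observed}, p = {p_value:.4f}")
```

The p-value is simply the fraction of shuffles in which chance matched or beat the observed difference; a small value means such a difference is unlikely if no true effect exists.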
McCombes, S. (2023, November 20). How to Write a Strong Hypothesis | Steps & Examples. Scribbr. Retrieved September 8, 2024, from https://www.scribbr.com/methodology/hypothesis/
In this article we provide a brief overview of the logic of action in philosophy, linguistics, computer science, and artificial intelligence. The logic of action is the formal study of action in which formal languages are the main tool of analysis.
The concept of action is of central interest to many disciplines: the social sciences including economics, the humanities including history and literature, psychology, linguistics, law, computer science, artificial intelligence, and probably others. In philosophy it has been studied since the beginning because of its importance for epistemology and, particularly, ethics; and for the past few decades it has even been studied for its own sake. But it is in the logic of action that action is studied in the most abstract way.
The logic of action began in philosophy. But it has also played a certain role in linguistics. And currently it is of great importance in computer science and artificial intelligence. For our purposes it is natural to separate the accounts of these developments.
1. The Logic of Action in Philosophy
Already St. Anselm studied the concept of action in a way that must be classified as logical; had he known symbolic logic, he would certainly have made use of it (Henry 1967; Walton 1976). In modern times the subject was introduced by, among others, Alan Ross Anderson, Frederick B. Fitch, Stig Kanger, and Georg Henrik von Wright; Kanger’s work was further developed by his students Ingmar Pörn and Lars Lindahl. The first clearly semantic account was given by Brian F. Chellas (1969). (For a more detailed account, see Segerberg 1992 or the mini-history in Belnap 2001.)
Today there are two rather different groups of theories that may be described as falling under the term logic of action. One, the result of the creation of Nuel Belnap and his many collaborators, may be called stit theory (a term that will be explained in the next paragraph). The other is dynamic logic . Both are connected with modal logic, but in different ways. Stit theory grew out of the philosophical tradition of modal logic. Dynamic logic, on the other hand, was invented by computer scientists in order to analyse computer action; only after the fact was it realized that it could be viewed as modal logic of a very general kind. One important difference between the two is that (for the most part) actions are not directly studied in stit theory: the ontology does not (usually) recognize a category of actions or events. But dynamic logic does. Among philosophers such ontological permissiveness has been unusual. Hector-Neri Castañeda, with his distinction between propositions and practitions, provides one notable exception.
The stit tradition is treated in this section, the dynamic logic one in the next.
The term “stit” is an acronym based on “sees to it that”. The idea is to add, to an ordinary classical propositional language, a new propositional operator \(\stit\), interpreting \(\stit_i\phi\), where \(i\) stands for an agent and \(\phi\) for a proposition, as \(i\) sees to it that \(\phi\). (The official notation of the Belnap school is more laborious: [\(i \mathop{\mathsf{stit:}} \phi\)].) Note that \(\phi\) is allowed to contain nestings of the new operator.
In order to develop formal meaning conditions for the stit operator \(\stit\) a semantics is defined. A stit frame has four components: a set \(T\), the nodes of which are called moments; an irreflexive tree ordering \(\lt\) of \(T\); a set of agents; and a choice function \(C\). A maximal branch through the tree is called a history.
The tree \((T,\lt)\) seems to correspond to a naïve picture familiar to us all: a moment \(m\) is a temporary present; the set \(\left\{n : n \lt m\right\}\) corresponds to the past of \(m\), which is unique; while the set \(\left\{n : m \lt n\right\}\) corresponds to the open future of \(m\), each particular maximal linear subset of which corresponds to a particular possible future.
To formalize the notion of action, begin with two general observations:
This is where the choice function \(C\) comes in: for each moment \(m\) and agent \(i, C\) yields a partitioning \(C_i^m\) of the set \(H_m\) of all histories through \(m\). An equivalence class in \(C_i^m\) is called a choice cell. (Note that two histories belonging to the same choice cell agree up to the moment in question but not necessarily later on.) If \(h\) is a history running through \(m\) we write \(C_i^m(h)\) for the choice cell of which \(h\) is a member. It is natural to associate \(C_i^m\) with the set of actions open to the agent \(i\) at \(m\), and to think of the choice cell \(C_i^m(h)\) as representing the action associated with \(h\).
A stit model has an additional component: a valuation. A valuation in a frame is a function that assigns to each propositional variable and each index either 1 (truth) or 0 (falsity), where an index is an ordered pair consisting of a history and a moment on that history. The notion of truth or falsity of a formula with respect to an index can now be defined. If \(V\) is the valuation we have the following basic truth-condition for atomic \(\phi\):

\[(h,m) \models \phi \text{ iff } V(\phi,(h,m)) = 1.\]
The truth-conditions for the Boolean connectives are as expected; for example,

\[(h,m) \models \phi \wedge \psi \text{ iff } (h,m) \models \phi \text{ and } (h,m) \models \psi.\]
Let us write \(\llbracket\phi\rrbracket_m\) for the set \(\left\{h \in H_m : (h,m) \models \phi\right\}\), that is, the set of histories through \(m\) such that \(\phi\) is true with respect to the index consisting of that history and \(m\). Defining formal truth-conditions for the stit operator there are at least two possibilities to be considered:

\[(h,m) \models \stit_i\phi \text{ iff } C_i^m(h) \subseteq \llbracket\phi\rrbracket_m;\]
\[(h,m) \models \stit_i\phi \text{ iff } C_i^m(h) \subseteq \llbracket\phi\rrbracket_m \text{ and } \llbracket\phi\rrbracket_m \ne H_m.\]
To distinguish the two different operators that those conditions define, the former operator is called the Chellas stit , written \(\cstit\), while the latter operator is called the deliberative stit , written \(\dstit\).
In words, \(\cstit_i \phi\) is true at an index \((h,m)\) if \(\phi\) is true with respect to \(h'\) and \(m\), for all histories \(h'\) in the same choice cell at \(m\) as \(h\); this is called the positive condition. The truth-condition for \(\dstit_i \phi\) is more exacting; not only the positive condition has to be satisfied but also what is called the negative condition: there must be some history through \(m\) such that \(\phi\) fails to be true with respect to that history and \(m\).
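The two conditions can be mimicked in a small toy model; the moment, histories, choice cells, and valuation below are all invented for illustration.

```python
# Toy stit evaluation at a single moment m.
H_m = {"h1", "h2", "h3", "h4"}            # histories through m

# Agent i's choice cells at m: a partition of H_m
choice_cells = [{"h1", "h2"}, {"h3", "h4"}]

# Histories where phi is true at m (an invented valuation)
phi_true = {"h1", "h2", "h3"}

def cell_of(h):
    return next(c for c in choice_cells if h in c)

def cstit(h):
    # positive condition only: phi holds throughout h's choice cell
    return cell_of(h) <= phi_true

def dstit(h):
    # positive condition plus negative condition: phi fails somewhere at m
    return cstit(h) and phi_true != H_m

print(cstit("h1"), dstit("h1"))  # True True: phi holds on h1's whole cell
print(cstit("h3"), dstit("h3"))  # False False: h4 falsifies phi in h3's cell
```
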
Both \(\cstit\) and \(\dstit\) are studied; it is claimed that they capture important aspects of the concept “sees to it that”. The two operators become interdefinable if one also introduces the concept “it is historically necessary that”. Using \(\Box\) for historical necessity, define

\[(h,m) \models \Box\phi \text{ iff } \llbracket\phi\rrbracket_m = H_m.\]

Then the formulas

\[\dstit_i\phi \leftrightarrow (\cstit_i\phi \wedge \neg\Box\phi)\]

and

\[\cstit_i\phi \leftrightarrow (\dstit_i\phi \vee \Box\phi)\]

are true with respect to all indices.
One advantage of stit theory is that the stit analysis of individual action can be extended in natural ways to cover group action.
A number of the initial papers defining the stit tradition are collected in the volume Belnap 2001. One important later work is John F. Horty’s book (2001). The logic of stit was axiomatized by Ming Xu (1998).
Michael Bratman’s philosophical analysis of the notion of intention has had a significant influence on the development of the logic of action within computer science. It will be discussed below.
In a series of papers Carlos Alchourrón, Peter Gärdenfors and David Makinson created what they called a logic of theory change, later known as the AGM paradigm. Two particular kinds of change inspired their work: change due to deontic actions (Alchourrón) and change due to doxastic actions (Gärdenfors and, before him, Isaac Levi). Examples of deontic actions are derogation and amendment (laws can be annulled or amended), while contraction and expansion are analogous doxastic actions (beliefs can be given up, new beliefs can be added). Later the modal logic of such actions was explored under the names dynamic deontic logic, dynamic doxastic logic and dynamic epistemic logic. (For the classic paper on AGM, see AGM 1985. For an introduction to dynamic deontic logic and dynamic doxastic logic, see Lindström and Segerberg 2006.) We will return to this topic in Section 4, where it is viewed from the perspective of the field of artificial intelligence.
In linguistics, there are two ways in which actions play a role: on the one hand, utterances are actions and on the other they can be used to talk about actions. The first leads to the study of speech acts, a branch of pragmatics, the second to the study of the semantics of action reports, hence is of a distinctly semantic nature. In addition to this, there is a special type of semantics, dynamic semantics, where meanings are not considered as state descriptions but as changes in the state of a hearer.
The study of speech acts goes back to Austin (1957) and Searle (1969). Both emphasise that using language is to perform certain acts. Moreover, there is not just one act but a whole gamut of them (Austin himself puts the number in the magnitude of \(10^3\)). The classification he gives involves acts that are nowadays not considered part of a separate science: the mere act of uttering a word or sentence (the phatic act) is part of phonetics (or phonology) and only of marginal concern here. By contrast, the illocutionary and perlocutionary acts have been the subject of intense study. An illocutionary act is the linguistic act performed by using a sentence; it is inherently communicative in nature. By contrast, a perlocutionary act is an act that needs a surrounding social context to be successful. The acts of naming a ship or christening a baby, for example, are perlocutionary. The sentence “I hereby pronounce you husband and wife” has the effect of marrying two people only under certain well-defined circumstances. By definition, perlocutionary acts take us outside the domain of language and communication.
Searle and Vanderveken (1985) develop a logic of speech acts which they call illocutionary logic. This was refined in Vanderveken 1990 and Vanderveken 1991. Already Frege used in his Begriffsschrift the notation “\(\vdash \phi\)”, where \(\phi\) denotes a proposition and “\(\vdash\)” the judgment sign. So, “\(\vdash \phi\)” says that \(\phi\) is provable, but other interpretations of \(\vdash\) are possible (accompanied by different notation; for example, “\(\models \phi\)” says that \(\phi\) is true (in the model), “\(\dashv \phi\)” says that \(\phi\) is refutable, and so on). An elementary speech act is of the form \(F(\phi)\), where \(F\) denotes an illocutionary force and \(\phi\) a proposition. In turn, an illocutionary force is identified by exactly seven elements: an illocutionary point, a degree of strength of the illocutionary point, a mode of achievement, propositional content conditions, preparatory conditions, sincerity conditions, and a degree of strength of the sincerity conditions.
There are exactly five illocutionary points according to Searle and Vanderveken (1985): the assertive, commissive, directive, declaratory, and expressive points.
Later treatments of this matter tend to disregard much of the complexity of this earlier approach for the reason that it fails to have any predictive power. Especially difficult to handle are “strengths”, for example. Modern models try to use update models instead (see Section 2.3 below). Van der Sandt 1991 uses a discourse model with three different slates (one for each speaker, plus one common slate). While each speaker is responsible for maintaining his own slate, changes to the common slate can only be made through communication with each other. Merin 1994 seeks to reduce the manipulations to a sequential combination of so-called elementary social acts: claim, concession, denial, and retraction.
Uttering a sentence is acting. This action can have various consequences, partly intended, partly not. The fact that utterances as actions are embedded in a bigger scheme of interaction between humans has been put into focus recently (see, for example, Clark 1996). Another important aspect that has been highlighted recently is the fact that by uttering a sentence we can change the knowledge state of an entire group of agents, see Balbiani et al. 2008. After publicly announcing \(\phi\), \(\phi\) becomes common knowledge among the entire group. This idea sheds new light on a problem of Gricean pragmatics, where certain speech acts can only be successful if certain facts are commonly known between speaker and hearer. It is by means of an utterance that a speaker can establish this common knowledge in case it wasn’t already there.
Davidson (1967) gave an account of action sentences in terms of what are now widely known as events. The basic idea is that an action sentence has the form \((\exists e)(\cdots)\), where \(e\) is a variable over acts. For example, “Brutus violently stabbed Caesar” is translated (ignoring tense) as \((\exists e) (\mathop{\mathrm{stab}}(e,\mathrm{Brutus},\mathrm{Caesar}) \wedge \mathop{\mathrm{violent}}(e))\). This makes it possible to capture the fact that this sentence logically entails that Brutus stabbed Caesar. The idea has been widely adopted in linguistics; moreover, it is now assumed that basically all verbs denote events (Parsons 1990). Thus action sentences are those that speak about special types of events, called eventualities.
Vendler (1957) classified verbs into four groups: (a) states (e.g., “know”), (b) activities (e.g., “run”), (c) accomplishments (e.g., “write a letter”), and (d) achievements (e.g., “reach the summit”).
Moens and Steedman (1988) add a fifth category: (e) points (e.g., “flash”, “knock”).
The main dividing line is between states and the others. The types (b)–(e) all refer to change. This division has been heavily influential in linguistic theory; mostly, however, research concentrated on its relation to aspect. It is to be noted, for example, that verbs of type (c) can be used with the progressive while verbs of type (d) cannot. In an attempt to explain this, Krifka 1986 and Krifka 1992 have introduced the notion of an incremental theme . The idea is that any eventuality has an underlying activity whose progress can be measured using some underlying participant of the event. If, for example, I write a letter then the progress is measured in amounts of words. The letter is therefore the incremental theme in “I write a letter” since it defines the progress. One implementation of the idea is the theory of aspect by Verkuyl (1993). Another way to implement the idea of change is constituted by a translation into propositional dynamic logic (see Naumann 2001). Van Lambalgen and Hamm (2005) have applied the event calculus by Shanahan (1990) to the description of events.
The idea that propositions can not only be viewed as state descriptions but also as updates has been advocated independently by many people. Consider the possible states of an agent to be (in the simplest case) a theory (that is, a deductively closed set of sentences). Then the update of a theory \(T\) by a proposition \(\phi\) is the deductive closure of \(T \cup \left\{\phi\right\}\).
Gärdenfors 1988 advocates this perspective with particular attention to belief revision. Veltman 1985 develops the update view for the treatment of conditionals. One advantage of the idea is that it is possible to show why the mini-discourse “It rains. It may not be raining.” is infelicitous in contrast to “It may not be raining. It rains.”. Given that an update is felicitous only if it results in a consistent theory, and that “may \(\phi\)” (with epistemic “may”) simply means “\(\phi\) is consistent” (written \(\diamond\phi\)), the first is the sequence of updates with \(\phi\) and \(\diamond \neg\phi\). The second step leads to inconsistency, since \(\phi\) has already been added. It is vital in this approach that the context is constantly changing.
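The order-sensitivity can be replayed in a toy possible-worlds version of update semantics (a common simplification of Veltman's system; the two-world model below is invented for illustration).

```python
# Toy update semantics: a state is a set of worlds; updating with phi
# keeps the worlds where phi holds; "may phi" is a test that returns the
# state unchanged if phi is consistent with it, and the empty state if not.
worlds = [{"rain": True}, {"rain": False}]

def update(state, prop):
    return [w for w in state if prop(w)]

def may(state, prop):
    return state if any(prop(w) for w in state) else []

rain = lambda w: w["rain"]
not_rain = lambda w: not w["rain"]

# "It may not be raining. It rains."  (felicitous)
s1 = update(may(worlds, not_rain), rain)
print(s1)  # [{'rain': True}]: a consistent, informative final state

# "It rains. It may not be raining."  (infelicitous)
s2 = may(update(worlds, rain), not_rain)
print(s2)  # []: the "may" test fails, the discourse crashes
```

The two orders give different results precisely because each update changes the context against which the next sentence is evaluated.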
Heim 1983 contains an attempt to make this idea fruitful for the treatment of presuppositions. In Heim’s proposal, a sentence has the potential to change the context, and this is why, for example, the sentence “If John is married his wife will be happy.” does not presuppose that John is married. Namely, the second part of the conditional (“his wife will be happy”) is evaluated against the context incremented by the antecedent (“John is married”). This of course is the standard way conditions are evaluated in computer languages. This parallel is exploited in Van Eijck 1994, see also Kracht 1993.
The idea of going dynamic was further developed in Dynamic Predicate Logic (DPL, see Groenendijk and Stokhof 1991), where all expressions are interpreted dynamically. The specific insight in this grammar is that existential quantifiers have a dynamically growing scope. This was first noted in Kamp 1981, where a semantics was given in terms of intermediate representations, so-called Discourse Representation Structures. Groenendijk and Stokhof replace these structures by introducing a dynamics into the evaluation of a formula, as proposed in Dynamic Logic (DL). An existential quantifier is translated as a random assignment “\(x \leftarrow\ ?\)” of DL, whose interpretation is a relation between assignments: it is the set of pairs \(\langle \beta ,\gamma \rangle\) such that \(\beta(y) = \gamma(y)\) for all \(y \ne x\) (in symbols \(\beta \sim_x \gamma\)). The translation of the sentence “A man walks.” is

\[\langle x \leftarrow\ ?\rangle \mathop{\mathrm{man}}'(x) \wedge \mathop{\mathrm{walk}}'(x). \tag{1}\]
This is a proposition, hence interpreted as a set. One can, however, push the dynamicity even lower and make all meanings relational. Then “A man walks.” is interpreted by the ‘program’

\[x \leftarrow\ ?\ ;\ \mathop{\mathrm{man}}'(x)?\ ;\ \mathop{\mathrm{walk}}'(x)? \tag{2}\]
Here, \(\mathop{\mathrm{man}}'(x)?\) uses the test constructor “\(?\)”: \(\phi ?\) is the set of all \(\langle \beta ,\beta \rangle\) such that \(\beta\) satisfies \(\phi\). The meaning of the entire program (2) therefore also is a relation between assignments. Namely, it is the set \(R\) of all pairs \(\langle \beta ,\gamma \rangle\) where \(\beta \sim_x \gamma\), and \(\gamma(x)\) walks and is a man. The meaning of (1) by contrast is the set of all \(\beta\) such that some \(\langle \beta ,\gamma \rangle \in R\). Existential quantifiers thus have ‘side effects’: the change in assignment is never undone by a quantifier over a different variable. Hence the open-endedness to the right of the existential. This explains the absence of brackets in (1). For an overview of dynamic semantics see Muskens et al. 1997.
The logic of action plays an important role in computer science. This becomes evident once one realizes that computers perform actions in the form of executing program statements written down in some programming language, changing computer internals and, through interfaces to the outside world, also that outside world. As such, a logic of action provides a means to reason about programs, or more precisely, about the execution of programs and their effects. This enables one to prove the correctness of programs. In principle, this is something very desirable: if we could prove all our software correct, we would know that it would function exactly the way we designed it. This was already realized by pioneers of computer programming such as Turing (1949) and Von Neumann (Goldstein and Von Neumann 1963). Of course, this ideal is too hard to establish in daily practice for all software. Verification is a nontrivial and time-consuming occupation, and there are also theoretical limitations to it. However, as the alternative is “just” massive experimental testing of programs, with no 100% guarantee of correctness, verification has remained an active area of research to this day.
Program verification has a long history. From the inception of the computer and its programming, researchers thought about ways of analyzing programs to be sure they did what they were supposed to do. In the 60s the development of a true mathematical theory of program correctness began to take serious shape (de Bakker 1980, 466). Remarkably, the work of John McCarthy, whom we will also encounter later on when we turn to the field of artificial intelligence, played an important role here, distinguishing and studying fundamental notions such as ‘state’ (McCarthy 1963a). This led on the one hand to the field of semantics of programming languages, and on the other to major advances in program correctness by Floyd (1967), Naur (1966), Hoare (1969) and Dijkstra (1976) (de Bakker 1980). Floyd and Naur used an elementary stepwise induction principle and predicates attached to program points to express invariant properties of imperative-style programs (Cousot 1990, 859), programs that are built up from basic assignment statements (of arithmetical expressions to program variables) and may be composed by sequencing, conditionals and repetitions. While the Floyd-Naur approach, called the inductive assertion method and giving rise to a systematic construction of verification conditions, was a method to prove the correctness of programs by means of logic, it was not a logic itself in the strict sense of the word. The way to a proper logic of programs was paved by Hoare, whose compositional proof method led to what is now known as Hoare logic. By exploiting the syntactic structure of (imperative-style) programs, Hoare was able to turn the Floyd-Naur method into a true logic whose assertions are so-called Hoare triples of the form \(\left\{P\right\}S\left\{Q\right\}\), where \(P\) and \(Q\) are first-order formulas and \(S\) is a program statement in an imperative-style programming language as mentioned above.
The intended reading is: if \(P\) holds before execution of the statement \(S\), then \(Q\) holds upon termination of (the execution of) \(S\). (Whether the execution of \(S\) terminates can be put into the reading of this Hoare triple either conditionally (partial correctness) or unconditionally (total correctness), giving rise to different logics; see Harel et al. 2000.) To give an impression of Hoare-style logics, we give here some rules for a simple programming language consisting of variable assignments of arithmetic expressions, and containing sequential (;), conditional \((\lif)\) and repetitive \((\lwhile)\) composition: the assignment axiom \(\{P[x/e]\}\ x := e\ \{P\}\) (where \(P[x/e]\) is \(P\) with the expression \(e\) substituted for the variable \(x\)), the sequencing rule (from \(\{P\}S_1\{Q\}\) and \(\{Q\}S_2\{R\}\) infer \(\{P\}S_1 ; S_2\{R\}\)), and the iteration rule (from \(\{P \wedge b\}S\{P\}\) infer \(\{P\}\ \lwhile\ b\ \mathrm{do}\ S\ \{P \wedge \neg b\}\)), which exhibits \(P\) as a loop invariant.
Later Pratt and Harel generalized Hoare logic to dynamic logic (Pratt 1976, Pratt 1979a, Harel 1979, Harel 1984, Kozen and Tiuryn 1990, Harel et al. 2000), which was realized to be in fact a form of modal logic, by viewing the input-output relation of a program \(S\) as an accessibility relation in the sense of Kripke-style semantics. A Hoare triple \(\left\{P\right\}S\left\{Q\right\}\) becomes in dynamic logic the following formula: \(P \rightarrow[S] Q\), where \([S]\) is the modal box operator associated with (the accessibility relation associated with) the input-output relation of program \(S\). The propositional version of Dynamic Logic, PDL, was introduced by Fischer and Ladner (1977), and became an important topic of research in itself. The key axiom of PDL is the induction axiom

\[[S^*](P \rightarrow [S]P) \rightarrow (P \rightarrow [S^*]P)\]
where \(^*\) stands for the iteration operator, \(S^*\) denoting an arbitrary (finite) number of iterations of the program \(S\). The axiom expresses that if after any number of iterations of \(S\) the truth of \(P\) is preserved by the execution of \(S\), then, if \(P\) is true at the current state, it will also be true after any number of iterations of \(S\). A weaker form of PDL, called HML, with only an atomic action box and diamond and the propositional connectives, was introduced by Hennessy and Milner to reason about concurrent processes, and in particular to analyze process equivalence (Hennessy and Milner 1980).
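The validity of the induction axiom can be checked mechanically on a small finite model. The following sketch (model and encoding are ours, purely illustrative) interprets \([S]\) and \([S^*]\) over a step relation and verifies the axiom for every interpretation of \(P\):

```python
# Finite-model check of the PDL induction axiom (toy model, illustrative).
from itertools import combinations

states = {0, 1, 2, 3}
S = {(0, 1), (1, 2), (2, 3), (3, 3)}          # a simple step program

def box(R, P):
    """[R]P : the states all of whose R-successors lie in P."""
    return {s for s in states if all(t in P for (u, t) in R if u == s)}

def star(R):
    """Interpretation of R*: the reflexive-transitive closure of R."""
    closure = {(s, s) for s in states} | set(R)
    while True:
        extra = {(a, d) for (a, b) in closure for (c, d) in closure if b == c}
        if extra <= closure:
            return closure
        closure |= extra

def implies(P, Q):
    """Pointwise material implication on sets of states."""
    return (states - P) | Q

def induction_axiom(P):
    """The set of states where [S*](P -> [S]P) -> (P -> [S*]P) holds."""
    lhs = box(star(S), implies(P, box(S, P)))
    rhs = implies(P, box(star(S), P))
    return implies(lhs, rhs)

# The axiom is valid: it holds at every state for every choice of P.
subsets = [set(c) for r in range(len(states) + 1)
           for c in combinations(states, r)]
assert all(induction_axiom(P) == states for P in subsets)
```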
It is also worth mentioning here that the work of Dijkstra (1976) on the weakest precondition calculus is closely related to dynamic logic (and Hoare’s logic). In fact, what Dijkstra calls the weakest liberal precondition, denoted \(\mathbf{wlp}(S,Q)\), is the same as the box operator in dynamic logic: \(\mathbf{wlp}(S,Q) = [S]Q\), while his weakest precondition, denoted \(\mathbf{wp}(S,Q)\), is the total correctness variant of this, meaning that this expression also entails the termination of statement \(S\) (Cousot 1990).
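The difference between \(\mathbf{wlp}\) and \(\mathbf{wp}\) can be sketched for deterministic programs modeled as state transformers (an illustrative encoding of ours, not a real verification API; divergence is modeled by returning `None`):

```python
# Sketch: weakest (liberal) preconditions for deterministic programs.
# A state maps program variables to values; a predicate is a boolean
# function on states; a program maps a state to its final state, or to
# None if it diverges.

def wlp(S, Q):
    """wlp(S, Q): Q holds after S *whenever* S terminates."""
    def pre(s):
        t = S(s)
        return t is None or Q(t)
    return pre

def wp(S, Q):
    """wp(S, Q): S terminates *and* Q holds afterwards (total correctness)."""
    def pre(s):
        t = S(s)
        return t is not None and Q(t)
    return pre

# Example: the assignment  x := x + 1  and the postcondition  x > 0.
inc_x = lambda s: {**s, "x": s["x"] + 1}
pos_x = lambda s: s["x"] > 0

# wlp(x := x + 1, x > 0) is equivalent to x + 1 > 0, i.e. x > -1:
assert wlp(inc_x, pos_x)({"x": 0}) is True
assert wlp(inc_x, pos_x)({"x": -1}) is False

# A diverging program satisfies every wlp but no wp:
loop = lambda s: None
assert wlp(loop, pos_x)({"x": -5}) is True
assert wp(loop, pos_x)({"x": 5}) is False
```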
It was later realized that the application of dynamic logic goes beyond program verification and reasoning about programs: in fact, it constitutes a logic of general action. In Meyer 2000 a number of other applications of dynamic logic are given, including deontic logic (see also Meyer 1988), reasoning about database updates, and the semantics of reasoning systems such as reflective architectures. As an aside we note here that the use of dynamic logic for deontic logic as proposed in Meyer 1988 needed an extension of the action language, in particular the addition of the ‘action negation’ operator. The rather controversial nature of this operator triggered work on action negation in its own right (see, e.g., Broersen 2004). Below we will also encounter the use of dynamic logic in artificial intelligence when specifying intelligent agents.
The logics discussed thus far are adequate for reasoning about programs that are supposed to terminate and display a certain input/output behavior. However, in the late seventies it was realized that there are also programs that are not of this kind. Reactive programs are designed to react to input streams that in theory may be infinite, and thus ideally show nonterminating behavior. What is relevant here is not so much input-output behavior as the behavior of programs over time. Therefore Pnueli (1977) proposed a different way of reasoning about programs in this style, based on the idea of a logic of time, viz. (linear-time) temporal logic. (Since reactivity often involves concurrent or parallel programming, temporal logic is often associated with this style of programming. However, it should be noted that a line of research continued to extend the use of Hoare logic to concurrent programs (Lamport 1977, Cousot 1990, de Roever et al. 2001).) Linear-time temporal logic typically has temporal operators such as next-time, always (in the future), sometime (in the future), until and since.
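The typical linear-time operators can be sketched as evaluators over a trace. This is a simplification (genuine linear-time temporal logic is interpreted over infinite runs, and all names here are ours), but it shows how each operator quantifies over positions:

```python
# A minimal evaluator for linear-time temporal operators over a *finite*
# trace (a sketch). A trace is a list of states (sets of atoms); a
# formula is a function evaluated at a position i of a trace.

def nxt(phi):
    return lambda tr, i: i + 1 < len(tr) and phi(tr, i + 1)

def always(phi):
    return lambda tr, i: all(phi(tr, j) for j in range(i, len(tr)))

def sometime(phi):
    return lambda tr, i: any(phi(tr, j) for j in range(i, len(tr)))

def until(phi, psi):
    return lambda tr, i: any(
        psi(tr, j) and all(phi(tr, k) for k in range(i, j))
        for j in range(i, len(tr)))

def atom(p):
    return lambda tr, i: p in tr[i]

# A toy run of a reactive program: every 'req' is eventually followed by 'ack'.
trace = [{"req"}, {"req"}, {"ack"}, set()]
responsive = always(lambda tr, i: (not atom("req")(tr, i))
                    or sometime(atom("ack"))(tr, i))
assert responsive(trace, 0)
assert until(atom("req"), atom("ack"))(trace, 0)
```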
An interesting difference between temporal logic on the one hand, and dynamic logic and Hoare logic on the other, is that the former is what in the literature is called an endogenous logic, while the latter are so-called exogenous logics. A logic is exogenous if programs are explicit in the logical language, while for endogenous logics this is not the case. In an endogenous logic such as temporal logic the program is assumed to be fixed, and is considered part of the structure over which the logic is interpreted (Harel et al. 2000, 157). Exogenous logics are compositional and have the advantage of allowing analysis by structural induction. Later Pratt (1979b) tried to blend temporal and dynamic logic into what he called process logic, an exogenous logic for reasoning about temporal behavior.
At the moment the field of temporal logic as applied in computer science has developed into a complete subfield of its own, including techniques and tools for (semi-)automatic reasoning and model checking (cf. Emerson 1990). Variants of the basic linear-time models have also been proposed for verification, such as branching-time temporal logic, in particular the logic CTL (computation tree logic) and its extension CTL* (Emerson 1990), in which one can reason explicitly about (quantification over) alternative paths in nondeterministic computations. More recently an extension of CTL has been proposed, called alternating-time temporal logic (ATL), with a modality expressing that a group of agents has a joint strategy to ensure its argument; it is used to reason about so-called open systems, systems whose behavior depends also on the behavior of their environment (see Alur et al. 1998).
Finally we mention yet other logics to reason about programs, viz. fixpoint logics, with as typical example the so-called \(\mu\)-calculus, dating back to Scott and de Bakker (1969), and further developed in Hitchcock and Park 1972, Park 1976, de Bakker 1980, and Meyer 1985. The basic operator is the least fixed point operator \(\mu\), capturing iteration and recursion: if \(\phi(X)\) is a logical expression with a free relation variable \(X\), then the expression \(\mu X\phi(X)\) represents the least \(X\) such that \(\phi(X) = X\), if such an \(X\) exists. A propositional version of the \(\mu\)-calculus, called the propositional or modal \(\mu\)-calculus, comprising the propositional constructs \(\rightarrow\) and false together with the atomic (action) modality \([a]\) and the \(\mu\) operator, is completely axiomatized by propositional modal logic plus the axiom \(\phi[X/\mu X\phi] \rightarrow \mu X\phi\), where \(\phi[X/Y]\) stands for the expression \(\phi\) in which \(X\) is substituted by \(Y\), and the rule: from \(\phi[X/\psi] \rightarrow \psi\) infer \(\mu X\phi \rightarrow \psi\)
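On a finite state space the least fixed point can be computed by Kleene iteration from the empty set, which converges for monotone operators. The following sketch (model and encoding are ours) computes \(\mu X. \mathit{goal} \vee \langle a\rangle X\), the \(\mu\)-calculus rendering of the PDL formula \(\langle a^*\rangle \mathit{goal}\):

```python
# Sketch: the least-fixpoint operator mu on a finite state space,
# computed by Kleene iteration from the empty set (converges for
# monotone operators on a finite powerset).

states = {0, 1, 2, 3, 4}
step = {(0, 1), (1, 2), (2, 2), (3, 4)}       # an atomic action 'a'

def mu(phi):
    X = set()
    while True:
        Y = phi(X)
        if Y == X:
            return X
        X = Y

def diamond(R, P):
    """<a>P : the states with some R-successor in P."""
    return {s for (s, t) in R if t in P}

# mu X. goal \/ <a>X : the states from which 'goal' is reachable.
goal = {2}
reach = mu(lambda X: goal | diamond(step, X))
assert reach == {0, 1, 2}
```

This also illustrates the remark below that the \(\mu\)-calculus subsumes PDL: iteration \(a^*\) becomes a least fixed point.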
(Kozen 1983, Bradfield and Stirling 2007). This logic is known to subsume PDL (cf. Harel et al. 2000).
In the field of artificial intelligence (AI), the aim is to devise intelligently behaving computer-based artifacts (whether with the purpose of understanding human intelligence or just of making intelligent computer systems and programs). In order to achieve this, there is a tradition within AI of trying to construct these systems based on symbolic representations of all the relevant factors involved. This tradition is called symbolic AI or ‘good old-fashioned’ AI (GOFAI). In this tradition the sub-area of knowledge representation (KR) is obviously of major importance: it has played an important role since the inception of AI, and has developed into a substantial field of its own. One of the prominent areas in KR concerns the representation of actions, performed either by the system to be devised itself or by the actors in its environment. Of course, besides their pure representation, reasoning about actions is also important, since representation and reasoning with these representations are deemed to be closely connected within KR (which is sometimes also called KR&R, knowledge representation & reasoning). A related, more recent development within AI is that of basing the construction of intelligent systems on the concept of an (intelligent) agent, an autonomously acting entity, for which, by its very nature, logics of action play a crucial role in obtaining a logical description and specification.
As said above, the representation of actions and the formalisms/logics to reason with them are central to AI and particularly the field of KR. One of the main problems that one encounters in the literature on reasoning about actions in AI, and much more so than in mainstream computer science, is the so-called frame problem (McCarthy and Hayes 1969). Although this problem has been generalized by philosophers such as Dennett (1984) to a general problem of relevance and salience of properties pertaining to action, the heart of the problem is that in a ‘common-sense’ setting such as one encounters in AI, it is virtually impossible to specify all the effects of the actions of concern, as well as, notably, all non-effects. For instance, given an action, think about what changes if the action is performed and what does not: generally the latter is much more difficult to produce than the former, leading to large, complex attempts to specify the non-effects. But there is of course also the problem of relevance: what aspects are relevant for the problem at hand; which properties do we need to take into consideration? In particular, this also pertains to the preconditions of an action that would guarantee its successful performance/execution. Again, in a common-sense environment, these are formidable, and one can always think of another (pre)condition that should be incorporated. For instance, for successfully starting the motor of a car, there should be a charged battery, sufficient fuel, …, but also not too cold weather, or even sufficient power in your fingers to be able to turn the starting key, the presence of a motor in the car, … etc. In AI one tries to solve the frame problem by finding the smallest possible specification of effects, non-effects and preconditions.
Although this problem gave rise to so-called defeasible or non-monotonic solutions such as defaults (‘normally a car has a motor’), which in turn gave rise to a whole new realm within AI called nonmonotonic or commonsense reasoning, this is beyond the scope of this entry (we refer the interested reader to the article by Thomason (2003) in this encyclopedia). We focus here on a solution that does not (directly) appeal to nonmonotonicity.
Reiter (2001) has proposed a (partial) solution within a framework called the situation calculus, which has been very popular in KR, especially in North America, since it was proposed by John McCarthy, one of the founding fathers of AI (McCarthy 1963b, McCarthy 1986). The situation calculus is a dialect of first-order logic with some mild second-order features, especially designed to reason about actions. (One of its distinctive features is the so-called reification of semantic notions such as states or possible worlds (as well as truth predicates) into syntactic entities (‘situations’) in the object language.) For the sake of conformity in this entry and for reasons of space, we will try rendering Reiter’s idea within (first-order) dynamic logic, or rather a slight extension of it: we need action variables to denote action expressions, equalities between action variables and actions (or rather action expressions), as well as (universal) quantification over action variables.
What is known as Reiter’s solution to the frame problem assumes a so-called closed system, that is to say, a system in which all (relevant) actions and changeable properties (in this setting often called ‘fluents’ to emphasize their changeability over time) are known. Under this assumption it is possible to express the (non)change resulting from performing actions, as well as the check of the preconditions that ensure successful performance, in a very succinct and elegant manner, and to coin it in a so-called successor state axiom of the form

\[\Poss(A) \rightarrow \bigl([A]f(\boldsymbol{x}) \leftrightarrow \gamma_{f}^{+}(\boldsymbol{x}, A) \vee (f(\boldsymbol{x}) \wedge \neg\gamma_{f}^{-}(\boldsymbol{x}, A))\bigr)\]
where \(A\) is an action variable, and \(\gamma_{f}^+ (\boldsymbol{x}, A)\) and \(\gamma_{f}^- (\boldsymbol{x}, A)\) are ‘simple’ expressions without action modalities expressing the conditions for \(f\) becoming true and false, respectively. So the formula is read informally as: under certain preconditions pertaining to the action \(A\) at hand, the fluent (predicate) \(f\) becomes true of arguments \(\boldsymbol{x}\) if and only if either the condition \(\gamma_{f}^+ (\boldsymbol{x}, A)\) holds, or \(f(\boldsymbol{x})\) holds (before the execution of \(A\)) and the condition \(\gamma_{f}^- (\boldsymbol{x}, A)\) (which would cause it to become false) does not hold. Furthermore, the expression \(\Poss(A)\) is used schematically in such axioms, and the whole action theory should be complemented with so-called precondition axioms of the form \(\phi_A \rightarrow \Poss(A)\) for concrete expressions \(\phi_A\) stating the actual preconditions needed for a successful execution of \(A\).
To see how this works out in practice we consider a little example in a domain with a vase \(v\) that may be broken or not (so we have “broken” as a fluent), and actions drop and repair. We also assume the (non-changeable) predicates fragile and held-in-hand of an object. Now the successor state axiom becomes

\[\Poss(A) \rightarrow \bigl([A]\mathop{\mathrm{broken}}(x) \leftrightarrow (A = \mathop{\mathrm{drop}}(x) \wedge \mathop{\mathrm{fragile}}(x)) \vee (\mathop{\mathrm{broken}}(x) \wedge A \ne \mathop{\mathrm{repair}}(x))\bigr)\]
and as precondition axioms we have \(\textrm{held-in-hand}(x) \rightarrow \Poss(\mathop{\mathrm{drop}}(x))\) and \(\mathop{\mathrm{broken}}(x) \rightarrow \Poss(\mathop{\mathrm{repair}}(x))\). This action theory is very succinct: one needs only one successor state axiom per fluent and one precondition axiom per action.
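The successor state axiom for the vase example can be read as a state-update procedure. The following sketch (our encoding, not Reiter’s notation) implements it directly, together with the precondition axioms:

```python
# Sketch of Reiter-style successor state axioms as a state update
# (illustrative encoding). A state records the truth of the fluent
# 'broken' per object; 'fragile' and 'held_in_hand' are rigid predicates.

fragile      = {"vase"}
held_in_hand = {"vase"}

def poss(action, x, state):
    """Precondition axioms for drop and repair."""
    if action == "drop":
        return x in held_in_hand
    if action == "repair":
        return x in state["broken"]
    return False

def do(action, x, state):
    """Successor state axiom for 'broken': broken(x) becomes true iff
    (A = drop(x) and fragile(x)) or (broken(x) held and A != repair(x))."""
    if not poss(action, x, state):
        raise ValueError(f"{action}({x}) is not possible here")
    gamma_plus  = action == "drop" and x in fragile
    gamma_minus = action == "repair"
    broken = set(state["broken"])
    if gamma_plus:
        broken.add(x)
    elif x in broken and gamma_minus:
        broken.discard(x)
    return {"broken": broken}

s0 = {"broken": set()}
s1 = do("drop", "vase", s0)        # the fragile vase breaks
s2 = do("repair", "vase", s1)      # and is repaired again
assert s1["broken"] == {"vase"}
assert s2["broken"] == set()
```

Note how the single `do` clause covers both change and non-change of the fluent, mirroring the succinctness of the axiomatic formulation.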
Finally in this subsection we must mention some other well-known approaches to reasoning about action and change. The event calculus (Kowalski and Sergot 1986, Shanahan 1990, Shanahan 1995) and the fluent calculus (Thielscher 2005) are alternatives to the situation-based representation of actions in the situation calculus. The reader is also referred to Sandewall and Shoham 1994 for historical and methodological issues as well as the relation with non-monotonic reasoning. These ideas have led to very efficient planning systems (e.g., TALplanner, Kvarnström and Doherty 2000) and practical ways to program robotic agents (e.g., the GOLOG family of languages (Reiter 2001) based on the situation calculus, and FLUX (Thielscher 2005) based on the fluent calculus).
In the last two decades the notion of an intelligent agent has emerged as a unifying concept to discuss the theory and practice of artificial intelligence (cf. Russell and Norvig 1995, Nilsson 1998). In short, agents are software entities that display forms of intelligence/rationality and autonomy. They are able to take initiative and make decisions to take action on their own, without the direct control of a human user. In this subsection we will see how a logic (of action) is used to describe/specify the (desired) behavior of agents (cf. Wooldridge 2002). First we focus on single agents, after which we turn to settings with multiple agents, called multi-agent systems (MAS) or even agent societies.
Interestingly, the origin of the intelligent agent concept lies in philosophy.
First of all there is a direct link with practical reasoning in the classical philosophical tradition going back to Aristotle. Here one is concerned with reasoning about action in a syllogistic manner, such as the following example taken from Audi 1999, p. 729:
Would that I exercise.
Jogging is exercise.
Therefore, I shall go jogging.
Although this has the form of a deductive syllogism in the familiar Aristotelian tradition of theoretical reasoning, on closer inspection it appears that this syllogism does not express a purely logical deduction. (The conclusion does not follow logically from the premises.) It rather constitutes a representation of a decision of the agent (going to jog), where this decision is based on mental attitudes of the agent, viz. his/her beliefs (jogging is exercise) and his/her desires or goals (would that I exercise). So practical reasoning is reasoning directed toward action: the process of figuring out what to do, as Wooldridge (2000) puts it. The process of reasoning about what to do next on the basis of mental states such as beliefs and desires is called deliberation.
Dennett (1971) has put forward the notion of the intentional stance: the strategy of interpreting the behaviour of an entity by treating it as if it were a rational agent that governs its choice of action by a consideration of its beliefs and desires. As such it is an anthropomorphic instance of the so-called design (functionality) stance, as opposed to the physical stance, towards systems. This stance has proved extremely influential, not only in cognitive science and biology/ethology (in connection with animal behavior), but also as a starting point for thinking about artificial agents.
Finally, and most importantly, there is the work of the philosopher Michael Bratman (1987), which, although in the first instance aimed at human agents, lays the foundation of the BDI approach to artificial agents. In particular, Bratman makes a case for incorporating the notion of intention when describing agent behavior. Intentions play the important role of selecting actions that are desired, with a distinct commitment attached to the actions thus selected. Unless there is a rationale for dropping a commitment (such as the belief that the intention has already been achieved or the belief that it is impossible to achieve), the agent should persist/persevere in its commitment, stick to it, so to speak, and try to realize it.
After Bratman’s philosophy was published, researchers tried to formalize this theory using logical means. We mention here three well-known approaches. Cohen and Levesque (1991) tried to capture Bratman’s theory in a linear-time style temporal logic to which they added primitive operators for belief and goal as well as some operators to cater for actions, such as operators expressing that an action is about to be performed \((\lhappens \alpha)\), has just been performed \((\ldone \alpha)\), and which agent is the actor of a primitive action (\(\lact i\ \alpha\): agent \(i\) is the actor of \(\alpha\)). From this basic set-up they build a framework in which ultimately the notion of intention is defined in terms of the other notions. In fact they define two notions: an intention-to-do and an intention-to-be. First they define the notion of an achievement goal (A-Goal): an A-Goal is something that is a goal to hold later, but is believed not to be true now. Then they define a persistent goal (P-Goal): a P-Goal is an A-Goal that is not dropped before it is believed to be achieved or believed to be impossible. The intention to do an action is then defined as the P-Goal of having done the action in such a way that the agent was aware of it happening. The intention to achieve a state satisfying \(\phi\) is the P-Goal of having done some action that results in \(\phi\), where the agent was aware of something happening leading to \(\phi\), and where what actually happened was not something that the agent explicitly did not have as a goal.
Next there is Rao & Georgeff’s formalization of BDI agents using the branching-time temporal logic CTL (Rao and Georgeff 1991, Rao and Georgeff 1998, Wooldridge 2000). On top of CTL they introduce modal operators for Belief \((\lbel)\), Goal \((\lgoal)\) (sometimes replaced by Desire \((\ldes)\)) and Intention (of the to-be kind, \(\lintend\)) as well as operators to talk about the success \((\lsucceeded(e))\) and failure \((\lfailed)\) of elementary actions \(e\). So they do not try to define intention in terms of other notions, but rather introduce intention as a separate operator, whose meaning is later constrained by ‘reasonable’ axioms. The formal semantics is based on Kripke models with accessibility relations between worlds for the belief, goal and intention operators. However, possible worlds here are complete time trees (modeling the various behaviors of the agent) on which CTL formulas are interpreted in the usual way. Next they propose a number of postulates/axioms that they regard as reasonable interactions between the operators, and constrain the models of the logic accordingly so that these axioms become validities. For example, they propose the formulas \(\lgoal(\alpha) \rightarrow \lbel(\alpha)\) and \(\lintend(\alpha) \rightarrow \lgoal(\alpha)\), for a certain class of formulas \(\alpha\), of which \(\alpha = \mathop{\mathbf{E}}(\psi)\) is a typical example. Here \(\mathop{\mathbf{E}}\) stands for the existential path quantifier in CTL. Rao and Georgeff also show that one can express commitment strategies in their logic. For example, the following expresses a ‘single-minded committed’ agent, which keeps committed to its intention until it believes it has achieved it or thinks it is impossible (which is very close to what we saw in the definition of intention in the approach of Cohen and Levesque):

\[\lintend(\mathop{\mathbf{A}}\Diamond\phi) \rightarrow \mathop{\mathbf{A}}\bigl(\lintend(\mathop{\mathbf{A}}\Diamond\phi)\ \mathrm{until}\ (\lbel(\phi) \vee \neg\lbel(\mathop{\mathbf{E}}\Diamond\phi))\bigr)\]
where \(\mathbf{A}\) stands for the universal path quantifier in CTL.
Finally there is the KARO approach by Van Linder et al. (Van der Hoek et al. 1998, Meyer et al. 1999), which takes dynamic logic as a basis instead of a temporal logic. First a core is built, consisting of the language of propositional dynamic logic augmented with modal operators for knowledge \((\mathbf{K})\), belief \((\mathbf{B})\) and desire \((\mathbf{D})\), as well as an operator \((\mathbf{A})\) standing for the ability to perform an action. Next the language is extended, mostly by abbreviations (definitions in terms of the other operators), to obtain a fully-fledged BDI-like logic; the most prominent derived operators include the ‘possibly intend’ operator \(\mathbf{I}(\alpha, \phi)\) and the commitment operator \(\mathbf{Com}(\alpha)\) discussed below.
The framework furthermore has special actions \(\lcommit\) and \(\luncommit\) to control the agent’s commitments. The semantics of these actions is such that the agent can only commit to an action \(\alpha\) if there is good reason for it, viz. that there is a possible intention of \(\alpha\) with a known goal \(\phi\) as result. Furthermore, the agent cannot uncommit to a certain action \(\alpha\) that is part of the agent’s commitments as long as there is a good reason for it to be committed to \(\alpha\), i.e., as long as there is some possible intention in which \(\alpha\) is involved. This results in the following validities in KARO. (Here \(\mathbf{I}(\alpha, \phi)\) denotes the ‘possibly intend’ operator and \(\mathbf{Com}(\alpha)\) is an operator expressing that the agent is committed to the action \(\alpha\), which is similar to Cohen & Levesque’s intention-to-do operator \(\lintend_1\) in Cohen and Levesque 1990.)
Informally these axioms say the following: if the agent possibly intends an action for fulfilling a certain goal, then it has the opportunity to commit to this action, after which it is recorded on its agenda; as long as an agent possibly intends an action, it is not able to uncommit to it (this reflects a form of persistence of commitments: as long as there is a good reason for a plan on the agenda, it will have to stay on); if the agent is committed to an action, it has the opportunity to uncommit to it (but it may lack the ability to do this, cf. the previous axiom); if an agent is committed to a sequence of two actions, then it knows that it is committed to the first and it also knows that after performing the first action it will be committed to the second.
Besides this focus on motivational attitudes in the tradition of agent logics in BDI style, the KARO framework also provides an extensive account of epistemic and doxastic attitudes. This is worked out most completely in Van Linder et al. 1995. This work hooks into a different strand of research between artificial intelligence and philosophy, viz. Dynamic Epistemic Logic, the roots of which lie in philosophy, linguistics, computer science and artificial intelligence. Dynamic Epistemic Logic (DEL) is the logic of knowledge change; it is not one particular logical system, but a whole family of logics that allow us to specify static and dynamic aspects of the knowledge and beliefs of agents (cf. Van Ditmarsch et al. 2007). The field combines insights from philosophy (about belief revision, AGM-style (AGM 1985), as we have seen in Section 1), dynamic semantics in linguistics and the philosophy of language (as we have seen in Section 2), and reasoning about programs using dynamic logic (as we have seen in Section 3), with ideas in artificial intelligence about how knowledge and actions influence each other (Moore 1977). More generally we can see the influence of the logical analysis of information change as advocated by van Benthem and colleagues (van Benthem 1989, van Benthem 1994, Faller et al. 2000). Also Veltman’s update semantics for default reasoning (Veltman 1996), an important reasoning method in artificial intelligence (Reiter 1980, Russell and Norvig 1995), can be viewed as part of this paradigm.
For the purpose of this entry, it is interesting to note that the general approach taken is to apply a logic of action, viz. dynamic logic, to model information change. This amounts to an approach in which the epistemic (or doxastic) updates are reified in the logic as actions that change the epistemic/doxastic state of the agent. So, for example, in Van Linder et al. 1995 we encounter actions such as \(\lexpand(\phi)\), \(\lcontract(\phi)\), \(\lrevise(\phi)\), referring to expanding, contracting and revising, respectively, one’s beliefs with the formula \(\phi\). These can be reasoned about by putting them in dynamic logic boxes and diamonds, so that basically extensions of dynamic logic are employed for reasoning about these updates. It is further shown that these actions satisfy the AGM postulates, so that this approach can be viewed as a modal counterpart of the AGM framework. Very similar in spirit is the work of Segerberg (1995) on Dynamic Doxastic Logic (DDL), the modal logic of belief change. In DDL modal operators of the form \([+\phi]\), \([*\phi]\) and \([-\phi]\) are introduced, with the informal meanings “after the agent has expanded/revised/contracted his beliefs by \(\phi\)”, respectively. Combined with the ‘standard’ doxastic operator \(B\), where \(B\phi\) is interpreted as “\(\phi\) is in the agent’s belief set”, one can now express properties like \([+\phi]B\psi\), expressing that after having expanded its beliefs by \(\phi\) the agent believes \(\psi\) (also cf. Hendricks and Symons 2006).
Finally in this subsection we mention recent work in which the KARO formalism is used as a basis for describing other aspects of the cognitive behavior of agents, going ‘beyond BDI’, viz. attitudes regarding emotions (Meyer 2006, Steunebrink et al. 2007, Steunebrink et al. 2012). The upshot of this approach is that an expressive logic of action such as KARO can be fruitfully employed to describe how emotions such as joy, gratification, anger and remorse are triggered by certain informational and motivational attitudes such as beliefs and goals (‘emotion elicitation’) and how, once elicited, the emotional state of an agent may influence its behavior, in particular its decisions about the next action to take.
Apart from logics that specify the attitudes of single agents, work has also been done to describe the attitudes of multi-agent systems as wholes. First we mention the work by Cohen & Levesque in this direction (Levesque et al. 1990, Cohen and Levesque 1991). This work was a major influence on a multi-agent version of KARO (Aldewereld et al. 2004). An important complication in the notion of a joint goal is the persistence of the goal: whereas in the single-agent case an agent pursues its goal until it believes it has achieved it or believes it can never be achieved, in a multi-agent setting the agent that realizes this must inform the other members of the team, so that the group/team as a whole comes to believe it and may drop the goal. This is captured in the approaches mentioned above. Related work, but not a logic of action in the strict sense, concerns the logical treatment of collective intentions (Keplicz and Verbrugge 2002).
It must also be mentioned that a whole new subfield has arisen, which can be seen as the multi-agent (counter)part of Dynamic Epistemic Logic. It was inspired by several sources: the work on knowledge and belief updates for individual agents as described by DEL and DDL, combined with work on knowledge in groups of agents, such as common knowledge (see, e.g., Meyer and Van der Hoek 1995). This subfield deals with matters such as the logic of public announcement and, more generally, actions that affect the knowledge of groups of agents. It has generated considerable work by different authors, such as Plaza (1989), Baltag (1999), Gerbrandy (1998), Van Ditmarsch (2000), and Kooi (2003). For example, public announcement logic (Plaza 1989) contains an operator of the form \([\phi]\psi\), where both \(\phi\) and \(\psi\) are formulas of the logic, expressing “after announcement of \(\phi\), it holds that \(\psi\)”. This logic can again be seen as a form of dynamic logic, where the semantic clause for \([\phi]\psi\) reads (in informal terms): \([\phi]\psi\) is true in a model-state pair iff the truth of \(\phi\) in that model-state pair implies the truth of \(\psi\) in a model-state pair where the state is the same, but the model is transformed to capture the information contained in \(\phi\). In the other approaches, too, the transformation of models induced by communicated information plays an important role, notably in the approach by Baltag et al. on action models (Baltag 1999, Baltag and Moss 2004). A typical element in this approach is that action model logic has both epistemic models and action models, and that the update of an epistemic model by an epistemic action (an action that affects the epistemic state of a group of agents) is represented by a (restricted) modal product of that epistemic model and an action model associated with that action. (See Van Ditmarsch et al. 2007, p. 151; this book is a recent comprehensive reference to the field.)
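The public announcement update just described can be made concrete with a toy model. The sketch below is a drastic simplification for a single agent (all names are invented for the illustration; real PAL models carry one accessibility relation per agent and arbitrary formulas): worlds are named valuations, announcing a formula simply deletes the worlds where it is false, and knowledge is truth in all accessible surviving worlds.

```python
# Toy sketch of a public announcement update for one agent. A model is a set
# of named worlds with valuations; announcing phi restricts the model to the
# worlds where phi holds, which is how information change is captured.

worlds = {"w1": {"p": True}, "w2": {"p": False}}   # agent cannot tell w1 from w2
access = {"w1": {"w1", "w2"}, "w2": {"w1", "w2"}}  # indistinguishability relation

def knows(model, access, w, prop):
    """K prop at world w: prop holds in every accessible world still in the model."""
    return all(model[v][prop] for v in access[w] if v in model)

def announce(model, phi):
    """Public announcement of phi: keep only the worlds satisfying phi."""
    return {w: val for w, val in model.items() if phi(val)}

print(knows(worlds, access, "w1", "p"))             # False: w2 is still possible
updated = announce(worlds, lambda val: val["p"])    # publicly announce p
print(knows(updated, access, "w1", "p"))            # True: after [p], Kp holds
```

The second check instantiates the semantic clause: \([\phi]K\psi\) is evaluated not in the original model but in the model transformed by the announcement.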
Finally we mention logics that incorporate notions from game theory to reason about multi-agent systems, such as game logic, coalition logic (Pauly 2001), and alternating-time temporal logic (ATL, which we also encountered at the end of the section on mainstream computer science!), along with its epistemic variant ATEL (Van der Hoek and Wooldridge 2003, Van der Hoek et al. 2007). For instance, game logic is an extension of PDL to reason about so-called determined 2-player games. Interestingly, there is a connection between these logics and the stit approach we have encountered in philosophy. For instance, Broersen, partially jointly with Herzig and Troquard, has shown several connections, such as embeddings of Coalition Logic and ATL in forms of stit logic (Broersen et al. 2006a,b), and extensions of stit (and ATL) to cater for reasoning about interesting properties of multi-agent systems (Broersen 2009, 2010). This area is currently growing fast, also aimed at the application of verifying multi-agent systems (cf. Van der Hoek et al. 2007, Dastani et al. 2010). The latter still constitutes something of a holy grail in agent technology: on the one hand there are many logics to reason about both single and multiple agents, while on the other hand multi-agent systems are being built that need to be verified. To this day there is still a gap between theory and practice. Much work is being done to devise logical means that combine the agent logics discussed here with the logical techniques from mainstream computer science for the verification of distributed systems (from section 3), but we are not there yet!
In this entry we have briefly reviewed the history of the logic of action, in philosophy, in linguistics, in computer science and in artificial intelligence. Although the ideas and techniques we have considered were developed in these separate communities in a quite independent way, we feel that they are nevertheless very much related, and by putting them together in this entry we hope we have contributed in a modest way to some cross-fertilization between these communities regarding this interesting and important subject.
Copyright © 2013 by Krister Segerberg, John-Jules Meyer, and Marcus Kracht
Any research begins with a research question and a research hypothesis. A research question alone may not suffice to design the experiment(s) needed to answer it. A hypothesis is central to the scientific method. But what is a hypothesis? A hypothesis is a testable statement that proposes a possible explanation for a phenomenon, and it may include a prediction. Next, you may ask, what is a research hypothesis? Simply put, a research hypothesis is a prediction or educated guess about the relationship between the variables that you want to investigate.
It is important to be thorough when developing your research hypothesis. Shortcomings in the framing of a hypothesis can affect the study design and the results. A better understanding of the research hypothesis definition and of the characteristics of a good hypothesis will make it easier for you to develop your own hypothesis for your research. Let’s dive in to learn more about the types of research hypotheses, how to write a research hypothesis, and some research hypothesis examples.
A hypothesis is based on the existing body of knowledge in a study area. Framed before the data are collected, a hypothesis states the tentative relationship between independent and dependent variables, along with a prediction of the outcome.
Young researchers starting out their journey are usually brimming with questions like “ What is a hypothesis ?” “ What is a research hypothesis ?” “How can I write a good research hypothesis ?”
A research hypothesis is a statement that proposes a possible explanation for an observable phenomenon or pattern. It guides the direction of a study and predicts the outcome of the investigation. A research hypothesis is testable, i.e., it can be supported or disproven through experimentation or observation.
Here are the characteristics of a good hypothesis:
A study begins with the formulation of a research question. A researcher then performs background research. This background information forms the basis for building a good research hypothesis . The researcher then performs experiments, collects, and analyzes the data, interprets the findings, and ultimately, determines if the findings support or negate the original hypothesis.
Let’s look at each step for creating an effective, testable, and good research hypothesis:
Remember that creating a research hypothesis is an iterative process, i.e., you might have to revise it based on the data you collect. You may need to test and reject several hypotheses before answering the research problem.
When you start writing a research hypothesis, you use an “if–then” statement format, which states the predicted relationship between two or more variables. Clearly identify the independent variables (the variables being changed) and the dependent variables (the variables being measured), as well as the population you are studying. Review and revise your hypothesis as needed.
An example of a research hypothesis in this format is as follows:
“If [athletes] follow [cold water showers daily], then their [endurance] increases.”
Population: athletes
Independent variable: daily cold water showers
Dependent variable: endurance
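The three components above slot mechanically into the template. As a purely illustrative aid (the helper function and its name are invented for this sketch, not part of any standard tool), the format can even be rendered programmatically:

```python
# Illustrative helper (invented for this example): renders the "if-then"
# hypothesis template from its three components, keeping each piece explicit.
def if_then_hypothesis(population, independent_var, dependent_var,
                       direction="increases"):
    return (f"If [{population}] follow [{independent_var}], "
            f"then their [{dependent_var}] {direction}.")

print(if_then_hypothesis("athletes", "cold water showers daily", "endurance"))
# If [athletes] follow [cold water showers daily], then their [endurance] increases.
```

Making the population, independent variable, and dependent variable separate inputs mirrors the advice above: if any component is missing, the hypothesis cannot be stated.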
You may have understood the characteristics of a good hypothesis. But note that a research hypothesis is not always confirmed; a researcher should be prepared to accept or reject the hypothesis based on the study findings.
Following from the above, here is a 10-point checklist for a good research hypothesis:
By following this research hypothesis checklist, you will be able to create a research hypothesis that is strong, well constructed, and more likely to yield meaningful results.
Different types of research hypotheses are used in scientific research:
A null hypothesis states that there is no change in the dependent variable due to changes in the independent variable, i.e., any observed effect is attributed to chance and is not significant. A null hypothesis is denoted as H0 and is stated as the opposite of the alternative hypothesis.
Example: “The newly identified virus is not zoonotic.”
An alternative hypothesis states that there is a significant difference or relationship between the variables being studied. It is denoted as H1 or Ha and is accepted when the null hypothesis is rejected.
Example: “The newly identified virus is zoonotic.”
A directional hypothesis specifies the direction of the relationship or difference between variables; therefore, it tends to use terms like increase, decrease, positive, negative, more, or less.
Example: “The inclusion of intervention X decreases infant mortality compared to the original treatment.”
A non-directional hypothesis states that a relationship or difference exists between variables but does not specify its direction, nature, or magnitude. A non-directional hypothesis may be used when there is no underlying theory or when findings contradict previous research.
Example: “Cats and dogs differ in the amount of affection they express.”
A simple hypothesis predicts the relationship between a single independent variable and a single dependent variable.
Example: “Applying sunscreen every day slows skin aging.”
A complex hypothesis states the relationship or difference between two or more independent and/or dependent variables.
Example: “Applying sunscreen every day slows skin aging, reduces sunburn, and reduces the chances of skin cancer.” (Here, the three dependent variables are slowing skin aging, reducing sunburn, and reducing the chances of skin cancer.)
An associative hypothesis states that the variables change together: when one variable changes, the other changes with it. It defines an interdependency between the variables without claiming that one causes the other.
Example: “There is a positive association between physical activity levels and overall health.”
A causal hypothesis proposes a cause-and-effect relationship between the variables.
Example: “Long-term alcohol use causes liver damage.”
Note that some of the types of research hypotheses mentioned above may overlap. The type of hypothesis chosen will depend on the research question and the objective of the study.
Here are some good research hypothesis examples:
“The use of a specific type of therapy will lead to a reduction in symptoms of depression in individuals with a history of major depressive disorder.”
“Providing educational interventions on healthy eating habits will result in weight loss in overweight individuals.”
“Plants that are exposed to certain types of music will grow taller than those that are not exposed to music.”
“The use of the plant growth regulator X will lead to an increase in the number of flowers produced by plants.”
Characteristics that make a research hypothesis weak are unclear variables, unoriginality, being too general or too vague, and being untestable. A weak hypothesis leads to weak research and improper methods.
Some bad research hypothesis examples (and the reasons why they are “bad”) are as follows:
“This study will show that treatment X is better than any other treatment.” (This statement is not testable, too broad, and does not consider other treatments that may be effective.)
“This study will prove that this type of therapy is effective for all mental disorders.” (This statement is too broad and not testable, as mental disorders are complex and different disorders may respond differently to different types of therapy.)
“Plants can communicate with each other through telepathy.” (This statement is not testable and lacks a scientific basis.)
If a research hypothesis is not testable, the results will not prove or disprove anything meaningful. The conclusions will be vague at best. A testable hypothesis helps a researcher focus on the study outcome and understand the implication of the question and the different variables involved. A testable hypothesis helps a researcher make precise predictions based on prior research.
To be considered testable, there must be a way to prove that the hypothesis is true or false; further, the results of the hypothesis must be reproducible.
1. What is the difference between a research question and a research hypothesis?
A research question defines the problem and helps outline the study objective(s). It is an open-ended statement that is exploratory or probing in nature. Therefore, it does not make predictions or assumptions. It helps a researcher identify what information to collect. A research hypothesis, however, is a specific, testable prediction about the relationship between variables. Accordingly, it guides the study design and data analysis approach.
2. When should the null hypothesis be rejected?
A null hypothesis should be rejected when the evidence from a statistical test shows that it is unlikely to be true. This happens when the p-value derived from the test is less than the defined significance level (e.g., 0.05). Rejecting the null hypothesis does not necessarily mean that the alternative hypothesis is true; it simply means that the evidence found is not compatible with the null hypothesis.
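As a concrete illustration of this decision rule, the sketch below runs an exact two-sample permutation test using only the Python standard library; the measurements and group labels are invented for the example. H0 says the two group means are equal, and H0 is rejected because the p-value falls below the 0.05 significance level.

```python
from itertools import combinations
from statistics import mean

# Hypothetical measurements, invented for this illustration.
treated = [8.1, 7.9, 8.4, 8.6, 8.2]
control = [7.0, 7.2, 6.8, 7.1, 6.9]
observed = mean(treated) - mean(control)

pooled = treated + control
n = len(treated)

# Exact permutation test: under H0 the group labels are arbitrary, so recompute
# the mean difference for every way of splitting the pooled data in two.
diffs = []
for idx in combinations(range(len(pooled)), n):
    group_a = [pooled[i] for i in idx]
    group_b = [pooled[i] for i in range(len(pooled)) if i not in idx]
    diffs.append(mean(group_a) - mean(group_b))

# p-value: the fraction of relabellings at least as extreme as what we observed.
p_value = sum(abs(d) >= abs(observed) for d in diffs) / len(diffs)
print(round(p_value, 4))   # 0.0079
print(p_value < 0.05)      # True -> reject H0 at the 0.05 significance level
```

With ten observations there are only 252 possible splits, so the test is exact; for larger samples one would sample random permutations instead.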
3. How can I be sure my hypothesis is testable?
A testable hypothesis should be specific and measurable, and it should state a clear relationship between variables that can be tested with data. To ensure that your hypothesis is testable, consider the following:
4. How do I revise my research hypothesis if my data does not support it?
If your data does not support your research hypothesis, you will need to revise it or develop a new one. You should examine your data carefully and identify any patterns or anomalies, re-examine your research question, and/or revisit your theory to look for alternative explanations for your results. Based on your review of the data, literature, and theories, modify your research hypothesis to better align it with the results you obtained. Use your revised hypothesis to guide your research design and data collection. It is important to remain objective throughout the process.
5. I am performing exploratory research. Do I need to formulate a research hypothesis?
As opposed to “confirmatory” research, where a researcher has some idea about the relationship between the variables under investigation, exploratory research (or hypothesis-generating research) looks into a completely new topic about which limited information is available. Therefore, the researcher will not have any prior hypotheses. In such cases, the researcher may instead develop a post-hoc hypothesis, i.e., a research hypothesis generated after the results of the study are known.
6. How is a research hypothesis different from a research question?
A research question is an inquiry about a specific topic or phenomenon, typically expressed as a question. It seeks to explore and understand a particular aspect of the research subject. In contrast, a research hypothesis is a specific statement or prediction that suggests an expected relationship between variables. It is formulated based on existing knowledge or theories and guides the research design and data analysis.
7. Can a research hypothesis change during the research process?
Yes, research hypotheses can change during the research process. As researchers collect and analyze data, new insights and information may emerge that require modification or refinement of the initial hypotheses. This can be due to unexpected findings, limitations in the original hypotheses, or the need to explore additional dimensions of the research topic. Flexibility is crucial in research, allowing for adaptation and adjustment of hypotheses to align with the evolving understanding of the subject matter.
8. How many hypotheses should be included in a research study?
The number of research hypotheses in a research study varies depending on the nature and scope of the research. It is not necessary to have multiple hypotheses in every study. Some studies may have only one primary hypothesis, while others may have several related hypotheses. The number of hypotheses should be determined based on the research objectives, research questions, and the complexity of the research topic. It is important to ensure that the hypotheses are focused, testable, and directly related to the research aims.
9. Can research hypotheses be used in qualitative research?
Yes, research hypotheses can be used in qualitative research, although they are more commonly associated with quantitative research. In qualitative research, hypotheses may be formulated as tentative or exploratory statements that guide the investigation. Instead of testing hypotheses through statistical analysis, qualitative researchers may use the hypotheses to guide data collection and analysis, seeking to uncover patterns, themes, or relationships within the qualitative data. The emphasis in qualitative research is often on generating insights and understanding rather than confirming or rejecting specific research hypotheses through statistical testing.
Nature Reviews Neuroscience, volume 2, pages 661–670 (2001)
What are the neural bases of action understanding? Although this capacity could merely involve visual analysis of the action, it has been argued that we actually map this visual information onto its motor representation in our nervous system. Here we discuss evidence for the existence of a system, the 'mirror system', that seems to serve this mapping function in primates and humans, and explore its implications for the understanding and imitation of action.
Tanji, J. New concepts of the supplementary motor area. Curr. Opin. Neurobiol. 6 , 782–787 (1996).
Tanji, J., Shima, K. & Mushiake, H. Multiple cortical motor areas and temporal sequencing of movements. Brain Res. Cogn. Brain Res. 5 , 117–122 (1996).
Shima, K. & Tanji, J. Neuronal activity in the supplementary and presupplementary motor areas for temporal organization of multiple movements. J. Neurophysiol. 84 , 2148–2160 (2000).
Wolpert, D. M. Computational approaches to motor control. Trends Cogn. Sci. 1 , 209–216 (1997).
Wolpert, D. M., Ghahramani, Z. & Jordan, M. I. An internal model for sensorimotor integration. Science 269 , 1880–1882 (1995).
Kawato, M. Internal models for motor control and trajectory planning. Neuroreport 9 , 718–727 (1999).
CAS Google Scholar
Arbib, M. E. & Rizzolatti, G. in The Nature of Concepts. Evolution, Structure, and Representation (ed. Van Loocke, P.) 128–154 (Routledge, London, 1999).
Greenwald, A. G. Sensory feedback mechanisms in performance control: with special reference to the ideo-motor mechanism. Psychol. Rev. 77 , 73–99 (1970).
Prinz, W. Perception and action planning. Eur. J. Cogn. Psychol. 9 , 129–154 (1997).
Brass, M., Bekkering, H., Wohlschlager, A. & Prinz, W. Compatibility between observed and executed finger movements: comparing symbolic, spatial and imitative cues. Brain Cogn 44 , 124–143 (2000).
Iacoboni, M. et al. Mirror properties in a sulcus angularis area. Neuroimage 5 , S821 (2000).
Gallese, V. & Goldman, A. Mirror neurons and the simulation theory of mind-reading. Trends Cogn. Sci. 12 , 493–501 (1998).
Frith, C. D. & Frith, U. Interacting minds: a biological basis. Science 286 , 1692–1695 (1999).
Blakemore, S.-J. & Decety, J. From the perception of action to the understanding of intention. Nature Rev. Neurosci. 2 , 561–567 (2001).
Article CAS Google Scholar
Williams, J. H. G., Whiten, A., Suddendorf, T. & Perrett, D. I. Imitation, mirror neurons, and autism. Neurosci. Biobehav. Rev. 25 , 287–295 (2001).
Von Economo, C. The Cytoarchitectonics of the Human Cerebral Cortex (Oxford Univ. Press, London, 1929).
Authors and affiliations: Giacomo Rizzolatti, Leonardo Fogassi & Vittorio Gallese, Istituto di Fisiologia Umana, Università di Parma, Via Volturno 39, I-43100 Parma, Italy.
Glossary
Double-pulse transcranial magnetic stimulation: A variant of the transcranial magnetic stimulation technique, in which two coils are used to generate magnetic fields in quick succession over the same cortical region or in different regions at the same time.
H reflex: Also known as the Hoffmann reflex, the H reflex results from the stimulation of sensory fibres, which causes an excitatory potential in the motor neuron pool after a synaptic delay. Exceeding the potential threshold for a given motor neuron generates an action potential. The resulting discharge causes the muscle fibres innervated by that neuron to be activated.
Intransitive movement: A movement not directed towards an object.
Möbius syndrome: A disorder characterized by facial paralysis, attributed to defects in the development of the sixth (abducens) and seventh (facial) cranial nerves.
Phenomenology: A philosophical movement founded by the German philosopher Edmund Husserl, dedicated to describing the structures of experience as they present themselves to consciousness, without recourse to theory, deduction or assumptions from other disciplines, such as the natural sciences.
Point-light stimuli: Stimuli devised by the Swedish psychologist Gunnar Johansson to study biological motion without interference from shape. Light sources are attached to the joints of people and their movements are recorded in a dark environment.
Transcranial magnetic stimulation: A technique used to stimulate relatively restricted areas of the human cerebral cortex. It is based on the generation of a strong magnetic field near the area of interest which, if changed rapidly enough, will induce an electric field sufficient to stimulate neurons.
Cite this article.
Rizzolatti, G., Fogassi, L. & Gallese, V. Neurophysiological mechanisms underlying the understanding and imitation of action. Nat Rev Neurosci 2 , 661–670 (2001). https://doi.org/10.1038/35090060
Published: 01 September 2001
Hypothesis Definition, Format, Examples, and Tips
A hypothesis is a tentative statement about the relationship between two or more variables. It is a specific, testable prediction about what you expect to happen in a study. It is a preliminary answer to your question that helps guide the research process.
Consider a study designed to examine the relationship between sleep deprivation and test performance. The hypothesis might be: "This study is designed to assess the hypothesis that sleep-deprived people will perform worse on a test than individuals who are not sleep-deprived."
A hypothesis is crucial to scientific research because it offers a clear direction for what the researchers are looking to find. This allows them to design experiments to test their predictions and add to our scientific knowledge about the world. This article explores how a hypothesis is used in psychology research, how to write a good hypothesis, and the different types of hypotheses you might use.
In the scientific method, whether it involves research in psychology, biology, or some other area, a hypothesis represents what the researchers think will happen in an experiment. The scientific method involves the following steps: forming a question, performing background research, creating a hypothesis, designing and performing an experiment, collecting and analyzing the data, and drawing conclusions.
The hypothesis is a prediction, but it involves more than a guess. Most of the time, the hypothesis begins with a question which is then explored through background research. At this point, researchers then begin to develop a testable hypothesis.
Unless you are creating an exploratory study, your hypothesis should always explain what you expect to happen.
In a study exploring the effects of a particular drug, the hypothesis might be that researchers expect the drug to have some type of effect on the symptoms of a specific illness. In psychology, the hypothesis might focus on how a certain aspect of the environment might influence a particular behavior.
Remember, a hypothesis does not have to be correct. While the hypothesis predicts what the researchers expect to see, the goal of the research is to determine whether this guess is right or wrong. When conducting an experiment, researchers might explore numerous factors to determine which ones might contribute to the ultimate outcome.
In many cases, researchers may find that the results of an experiment do not support the original hypothesis. When writing up these results, the researchers might suggest other options that should be explored in future studies.
In many cases, researchers might draw a hypothesis from a specific theory or build on previous research. For example, prior research has shown that stress can impact the immune system. So a researcher might hypothesize: "People with high-stress levels will be more likely to contract a common cold after being exposed to the virus than people who have low-stress levels."
In other instances, researchers might look at commonly held beliefs or folk wisdom. "Birds of a feather flock together" is one example of a folk adage that a psychologist might try to investigate. The researcher might pose a specific hypothesis that "People tend to select romantic partners who are similar to them in interests and educational level."
So how do you write a good hypothesis? When trying to come up with a hypothesis for your research or experiments, ask yourself a few key questions: Is it grounded in background research? Can it be tested? Does it clearly identify your independent and dependent variables?
Before you come up with a specific hypothesis, spend some time doing background research. Once you have completed a literature review, start thinking about potential questions you still have. Pay attention to the discussion section in the journal articles you read. Many authors will suggest questions that still need to be explored.
To form a hypothesis, start with a question, explore it through background research, and then state a tentative, testable answer that your study can evaluate.
In the scientific method, falsifiability is an important part of any valid hypothesis. In order to test a claim scientifically, it must be possible that the claim could be proven false.
Students sometimes confuse the idea of falsifiability with the idea that it means that something is false, which is not the case. What falsifiability means is that if something was false, then it is possible to demonstrate that it is false.
One of the hallmarks of pseudoscience is that it makes claims that cannot be refuted or proven false.
A variable is a factor or element that can be changed and manipulated in ways that are observable and measurable. However, the researcher must also define how the variable will be manipulated and measured in the study.
Operational definitions are specific definitions for all relevant factors in a study. This process helps make vague or ambiguous concepts detailed and measurable.
For example, a researcher might operationally define the variable "test anxiety" as the results of a self-report measure of anxiety experienced during an exam. A "study habits" variable might be defined by the amount of studying that actually occurs as measured by time.
These precise descriptions are important because many things can be measured in various ways. Clearly defining these variables and how they are measured helps ensure that other researchers can replicate your results.
One of the basic principles of any type of scientific research is that the results must be replicable.
Replication means repeating an experiment in the same way to produce the same results. By clearly detailing the specifics of how the variables were measured and manipulated, other researchers can better understand the results and repeat the study if needed.
Some variables are more difficult than others to define. For example, how would you operationally define a variable such as aggression? For obvious ethical reasons, researchers cannot create a situation in which a person behaves aggressively toward others.
To measure this variable, the researcher must devise a measurement that assesses aggressive behavior without harming others. The researcher might utilize a simulated task to measure aggressiveness in this situation.
The hypothesis you use will depend on what you are investigating and hoping to find. Some of the main types of hypotheses that you might use include the null hypothesis, the alternative hypothesis, and directional or nondirectional hypotheses.
A hypothesis often follows a basic format of "If {this happens} then {this will happen}." One way to structure your hypothesis is to describe what will happen to the dependent variable if you change the independent variable.
The basic format might be: "If {these changes are made to a certain independent variable}, then we will observe {a change in a specific dependent variable}."
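As a toy illustration, this if-then template can even be filled in programmatically. The function name and example wording below are invented for the sketch:

```python
def format_hypothesis(independent_change: str, dependent_effect: str) -> str:
    """Fill in the basic 'If {...}, then {...}' hypothesis template."""
    return f"If {independent_change}, then we will observe {dependent_effect}."

# Hypothetical example in the spirit of the drill-work hypothesis above.
hypothesis = format_hypothesis(
    "third-grade students receive daily drill work in addition",
    "better progress in arithmetic",
)
print(hypothesis)
```

The template forces you to name both variables explicitly, which is exactly what a testable hypothesis needs.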
Once a researcher has formed a testable hypothesis, the next step is to select a research design and start collecting data. The research method depends largely on exactly what they are studying. There are two basic types of research methods: descriptive research and experimental research.
Descriptive research such as case studies, naturalistic observations, and surveys are often used when conducting an experiment is difficult or impossible. These methods are best used to describe different aspects of a behavior or psychological phenomenon.
Once a researcher has collected data using descriptive methods, a correlational study can examine how the variables are related. This research method might be used to investigate a hypothesis that is difficult to test experimentally.
Experimental methods are used to demonstrate causal relationships between variables. In an experiment, the researcher systematically manipulates a variable of interest (known as the independent variable) and measures the effect on another variable (known as the dependent variable).
Unlike correlational studies, which can only be used to determine if there is a relationship between two variables, experimental methods can be used to determine the actual nature of the relationship—whether changes in one variable actually cause another to change.
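To make the correlational side of this distinction concrete, here is a minimal sketch in plain Python. The study data are invented for illustration, and a real analysis would also test whether the coefficient differs significantly from zero:

```python
import statistics

def pearson_r(xs, ys):
    """Pearson correlation coefficient: strength and direction of the linear
    relationship between two variables (it says nothing about causation)."""
    mx, my = statistics.mean(xs), statistics.mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / len(xs)
    return cov / (statistics.pstdev(xs) * statistics.pstdev(ys))

# Hypothetical data: hours studied and exam scores for six students.
hours = [1, 2, 3, 4, 5, 6]
scores = [55, 60, 62, 70, 71, 78]
r = pearson_r(hours, scores)  # close to +1: a strong positive association
```

Even a value of r near 1 here would only show that the variables move together; establishing that studying causes higher scores would require an experiment that manipulates study time directly.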
The hypothesis is a critical part of any scientific exploration. It represents what researchers expect to find in a study or experiment. In situations where the hypothesis is unsupported by the research, the research still has value. Such research helps us better understand how different aspects of the natural world relate to one another. It also helps us develop new hypotheses that can then be tested in the future.
Thompson WH, Skau S. On the scope of scientific hypotheses . R Soc Open Sci . 2023;10(8):230607. doi:10.1098/rsos.230607
Taran S, Adhikari NKJ, Fan E. Falsifiability in medicine: what clinicians can learn from Karl Popper [published correction appears in Intensive Care Med. 2021 Jun 17;:]. Intensive Care Med . 2021;47(9):1054-1056. doi:10.1007/s00134-021-06432-z
Eyler AA. Research Methods for Public Health . 1st ed. Springer Publishing Company; 2020. doi:10.1891/9780826182067.0004
Nosek BA, Errington TM. What is replication ? PLoS Biol . 2020;18(3):e3000691. doi:10.1371/journal.pbio.3000691
Aggarwal R, Ranganathan P. Study designs: Part 2 - Descriptive studies . Perspect Clin Res . 2019;10(1):34-36. doi:10.4103/picr.PICR_154_18
Nevid J. Psychology: Concepts and Applications. Wadworth, 2013.
By Kendra Cherry, MSEd Kendra Cherry, MS, is a psychosocial rehabilitation specialist, psychology educator, and author of the "Everything Psychology Book."
Writing a hypothesis is one of the essential elements of a scientific research paper. It needs to be to the point, clearly communicating what your research is trying to accomplish. A blurry, drawn-out, or complexly-structured hypothesis can confuse your readers. Or worse, the editor and peer reviewers.
A captivating hypothesis is not too intricate. This blog will take you through the process so that, by the end of it, you have a better idea of how to convey your research paper's intent in just one sentence.
The first step in your scientific endeavor, a hypothesis, is a strong, concise statement that forms the basis of your research. It is not the same as a thesis statement, which is a brief summary of your research paper.
The sole purpose of a hypothesis is to predict your paper's findings, data, and conclusion. It comes from a place of curiosity and intuition. When you write a hypothesis, you're essentially making an educated guess based on prior scientific knowledge and evidence, which is then proven or disproven through the scientific method.
The reason for undertaking research is to observe a specific phenomenon. A hypothesis, therefore, lays out what the said phenomenon is. And it does so through two variables, an independent and dependent variable.
The independent variable is the cause behind the observation, while the dependent variable is the effect of the cause. A good example of this is “mixing red and blue forms purple.” In this hypothesis, mixing red and blue is the independent variable as you're combining the two colors at your own will. The formation of purple is the dependent variable as, in this case, it is conditional to the independent variable.
Types of hypotheses
Some would stand by the notion that there are only two types of hypotheses: a null hypothesis and an alternative hypothesis. While that has some truth to it, it is better to distinguish all of the common forms, since these terms come up so often, and not knowing them might leave you out of context.
Apart from null and alternative, there are complex, simple, directional, non-directional, statistical, and associative and causal hypotheses. They don't necessarily have to be exclusive, as one hypothesis can tick many boxes, but knowing the distinctions between them will make it easier for you to construct your own.
A null hypothesis proposes no relationship between two variables. Denoted by H0, it is a negative statement like "Attending physiotherapy sessions does not affect athletes' on-field performance." Here, the author claims physiotherapy sessions have no effect on on-field performances; even if one appears, it's only a coincidence.
Considered the opposite of a null hypothesis, an alternative hypothesis is denoted as H1 or Ha. It explicitly states that the independent variable affects the dependent variable. A good alternative hypothesis example is "Attending physiotherapy sessions improves athletes' on-field performance" or "Water evaporates at 100 °C." The alternative hypothesis further branches into directional and non-directional.
A simple hypothesis is a statement made to reflect the relation between exactly two variables. One independent and one dependent. Consider the example, “Smoking is a prominent cause of lung cancer." The dependent variable, lung cancer, is dependent on the independent variable, smoking.
In contrast to a simple hypothesis, a complex hypothesis implies the relationship between multiple independent and dependent variables. For instance, "Individuals who eat more fruits tend to have higher immunity, lower cholesterol, and a higher metabolism." The independent variable is eating more fruits, while the dependent variables are higher immunity, lower cholesterol, and a higher metabolism.
Associative and causal hypotheses don't specify how many variables there will be; they define the relationship between the variables. In an associative hypothesis, changing any one variable, dependent or independent, affects the others. In a causal hypothesis, the independent variable directly affects the dependent variable.
Also referred to as the working hypothesis, an empirical hypothesis claims a theory's validation via experiments and observation. This way, the statement appears justifiable and different from a wild guess.
Say the hypothesis is "Women who take iron tablets face a lesser risk of anemia than those who take vitamin B12." This is an example of an empirical hypothesis, where the researcher validates the statement after assessing a group of women who take iron tablets and charting the findings.
The point of a statistical hypothesis is to test an already existing hypothesis by studying a population sample. Hypotheses like "44% of the Indian population belong in the age group of 22-27" leverage evidence to prove or disprove a particular statement.
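A statistical hypothesis about a proportion can be checked against sample data with an exact binomial test. The sketch below uses only the Python standard library; the survey numbers are invented for illustration:

```python
from math import comb

def binomial_p_value(n, k, p0):
    """One-sided p-value P(X >= k) under H0: X ~ Binomial(n, p0).
    A small value is evidence against the hypothesized proportion p0."""
    return sum(comb(n, i) * p0**i * (1 - p0)**(n - i) for i in range(k, n + 1))

# Hypothetical survey: 230 of 500 respondents are aged 22-27 (46%),
# tested against the claimed population proportion of 44%.
p_value = binomial_p_value(500, 230, 0.44)
```

Here the observed 46% is close enough to the claimed 44% that the p-value stays well above conventional significance thresholds, so the sample alone gives no reason to reject the claim.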
Writing a hypothesis is essential, as it can make or break your research, including your chances of getting published in a journal. So when you're designing one, keep it predictive, concise, testable, and relevant to your research question.
Outside of academia, hypothesis and prediction are often used interchangeably. In research writing, this is not only confusing but also incorrect. And although a hypothesis and prediction are guesses at their core, there are many differences between them.
A hypothesis is an educated guess or even a testable prediction validated through research. It aims to analyze the gathered evidence and facts to define a relationship between variables and put forth a logical explanation behind the nature of events.
Predictions are assumptions or expected outcomes made without any backing evidence. They are more speculative, regardless of where they originate.
For this reason, a hypothesis holds much more weight than a prediction. It sticks to the scientific method rather than pure guesswork. "Planets revolve around the Sun" is an example of a hypothesis, as it is based on previous knowledge and observed trends. Additionally, we can test it through the scientific method.
Whereas "COVID-19 will be eradicated by 2030." is a prediction. Even though it results from past trends, we can't prove or disprove it. So, the only way this gets validated is to wait and watch if COVID-19 cases end by 2030.
Quick tips on writing a hypothesis
A hypothesis should instantly address the research question or the problem statement. To do so, you need to ask a question. Understand the constraints of your undertaken research topic and then formulate a simple and topic-centric problem. Only after that can you develop a hypothesis and further test for evidence.
Once you have your research's foundation laid out, it would be best to conduct preliminary research. Go through previous theories, academic papers, data, and experiments before you start curating your research hypothesis. It will give you an idea of your hypothesis's viability or originality.
Making use of references from relevant research papers helps draft a good research hypothesis. SciSpace Discover offers a repository of over 270 million research papers to browse through and gain a deeper understanding of related studies on a particular topic. Additionally, you can use SciSpace Copilot, an AI research assistant, to read lengthy research papers and get a summarized context of each; a hypothesis can be formed after evaluating many such summaries. Copilot also explains theories and equations, presents papers in simplified form, lets you highlight text or clip math equations and tables, and provides a clearer understanding of what is being said. This can improve your hypothesis by helping you identify potential research gaps.
Variables are an essential part of any reasonable hypothesis. So, identify your independent and dependent variable(s) and form a correlation between them. The ideal way to do this is to write the hypothetical assumption in the ‘if-then' form. If you use this form, make sure that you state the predefined relationship between the variables.
Alternatively, you can present your hypothesis as a comparison between two variables. Here, you must specify the difference you expect to observe in the results.
Now that everything is in place, it's time to write your hypothesis. For starters, create the first draft. In this version, write what you expect to find from your research.
Clearly separate your independent and dependent variables and the link between them. Don't fixate on syntax at this stage. The goal is to ensure your hypothesis addresses the issue.
After preparing the first draft of your hypothesis, you need to inspect it thoroughly. It should tick all the boxes, like being concise, straightforward, relevant, and accurate. Your final hypothesis has to be well-structured as well.
Research projects are an exciting and crucial part of being a scholar. And once you have your research question, you need a great hypothesis to begin conducting research. Thus, knowing how to write a hypothesis is very important.
Now that you have a firmer grasp on what a good hypothesis constitutes, the different kinds there are, and what process to follow, you will find it much easier to write your hypothesis, which ultimately helps your research.
Now it's easier than ever to streamline your research workflow with SciSpace Discover. Its integrated, comprehensive end-to-end platform for research allows scholars to easily discover, write, and publish their research and fosters collaboration.
It includes everything you need, including a repository of over 270 million research papers across disciplines, SEO-optimized summaries and public profiles to show your expertise and experience.
If you found these tips on writing a research hypothesis useful, head over to our blog on Statistical Hypothesis Testing to learn about the top researchers, papers, and institutions in this domain.
1. What is the definition of a hypothesis?
According to the Oxford dictionary, a hypothesis is defined as “An idea or explanation of something that is based on a few known facts, but that has not yet been proved to be true or correct”.
The hypothesis is a statement that proposes a relationship between two or more variables. An example: "If we increase the number of new users who join our platform by 25%, then we will see an increase in revenue."
A null hypothesis is a statement that there is no relationship between two variables. The null hypothesis is written as H0. The null hypothesis states that there is no effect. For example, if you're studying whether or not a particular type of exercise increases strength, your null hypothesis will be "there is no difference in strength between people who exercise and people who don't."
• Fundamental research
• Applied research
• Qualitative research
• Quantitative research
• Mixed research
• Exploratory research
• Longitudinal research
• Cross-sectional research
• Field research
• Laboratory research
• Fixed research
• Flexible research
• Action research
• Policy research
• Classification research
• Comparative research
• Causal research
• Inductive research
• Deductive research
• Your hypothesis should be able to predict the relationship and outcome.
• Avoid wordiness by keeping it simple and brief.
• Your hypothesis should contain observable and testable outcomes.
• Your hypothesis should be relevant to the research question.
• Null hypotheses are used to test the claim that "there is no difference between two groups of data".
• Alternative hypotheses test the claim that "there is a difference between two data groups".
A research question is a broad, open-ended question you will try to answer through your research. A hypothesis is a statement based on prior research or theory that you expect to be true due to your study. Example - Research question: What are the factors that influence the adoption of the new technology? Research hypothesis: There is a positive relationship between age, education and income level with the adoption of the new technology.
The plural of hypothesis is hypotheses. Here's an example of how it would be used in a statement, "Numerous well-considered hypotheses are presented in this part, and they are supported by tables and figures that are well-illustrated."
The Red Queen hypothesis in evolutionary biology states that species must constantly evolve to avoid extinction, because if they don't, they will be outcompeted by other species that are evolving. Leigh Van Valen first proposed it in 1973; since then, it has been tested and substantiated many times.
The father of the null hypothesis is Sir Ronald Fisher. He published a paper in 1925 that introduced the concept of null hypothesis testing, and he was also the first to use the term itself.
You need to find a significant difference between your two populations to reject the null hypothesis. You can determine that by running statistical tests such as an independent sample t-test or a dependent sample t-test. You should reject the null hypothesis if the p-value is less than 0.05.
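As an alternative sketch of the same decision rule, a permutation test approximates a p-value by resampling, without assuming a particular distribution. The data and function below are invented for illustration; the 0.05 threshold is the conventional one mentioned above:

```python
import random
import statistics

def permutation_test(group_a, group_b, n_rounds=2000, seed=42):
    """Approximate the p-value for H0: 'both groups come from the same
    distribution' by repeatedly shuffling the group labels and recomputing
    the absolute difference in means."""
    rng = random.Random(seed)
    observed = abs(statistics.mean(group_a) - statistics.mean(group_b))
    pooled = list(group_a) + list(group_b)
    n_a = len(group_a)
    hits = 0
    for _ in range(n_rounds):
        rng.shuffle(pooled)
        diff = abs(statistics.mean(pooled[:n_a]) - statistics.mean(pooled[n_a:]))
        if diff >= observed:
            hits += 1
    return hits / n_rounds

# Hypothetical exam scores: sleep-deprived vs. well-rested students.
deprived = [52, 61, 48, 55, 58, 50, 47, 59]
rested = [68, 72, 65, 70, 74, 66, 71, 69]
p = permutation_test(deprived, rested)
# p falls below 0.05, so we would reject the null hypothesis of no difference.
```

Because the two invented groups barely overlap, almost no relabeling reproduces a gap as large as the observed one, which is exactly what a small p-value expresses.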
Think about something strange and unexplainable in your life. Maybe you get a headache right before it rains, or maybe you think your favorite sports team wins when you wear a certain color. If you wanted to see whether these are just coincidences or scientific fact, you would form a hypothesis, then create an experiment to see whether that hypothesis is true or not.
But what is a hypothesis, anyway? If you're not sure about what a hypothesis is, or how to test for one, you're in the right place. This article will teach you everything you need to know about hypotheses, including what they are, how they fit into the scientific method, and how to write and test one of your own.
So let’s get started!
Merriam-Webster defines a hypothesis as "an assumption or concession made for the sake of argument." In other words, a hypothesis is an educated guess. Scientists make a reasonable assumption, or a hypothesis, then design an experiment to test whether it's true or not. Keep in mind that in science, a hypothesis should be testable. You have to be able to design an experiment that tests your hypothesis in order for it to be valid.
As you could assume from that statement, it's easy to make a bad hypothesis. But when you're conducting an experiment, it's even more important that your guesses be good... after all, you're spending time (and maybe money!) to figure out more about your observation. That's why we refer to a hypothesis as an educated guess: good hypotheses are based on existing data and research to make them as sound as possible.
Hypotheses are one part of what's called the scientific method. Every (good) experiment or study is based in the scientific method. The scientific method gives order and structure to experiments and ensures that interference from scientists or outside influences does not skew the results. It's important that you understand the concepts of the scientific method before conducting your own experiment. Though it may vary among scientists, the scientific method is generally made up of six steps, in order: make an observation, ask a question, do background research, form a hypothesis, conduct an experiment, and analyze the results.
You’ll notice that the hypothesis comes pretty early on when conducting an experiment. That’s because experiments work best when they’re trying to answer one specific question. And you can’t conduct an experiment until you know what you’re trying to prove!
After doing your research, you're ready for another important step in forming your hypothesis: identifying variables. Variables are basically any factor that could influence the outcome of your experiment. Variables have to be measurable and related to the topic being studied.
There are two types of variables: independent variables and dependent variables. Independent variables remain constant. For example, age is an independent variable; it will stay the same, and researchers can look at different ages to see if it has an effect on the dependent variable.
Speaking of dependent variables... dependent variables are subject to the influence of the independent variable, meaning that they are not constant. Let's say you want to test whether a person's age affects how much sleep they need. In that case, the independent variable is age (like we mentioned above), and the dependent variable is how much sleep a person gets.
Variables will be crucial in writing your hypothesis. You need to be able to identify which variable is which, as both the independent and dependent variables will be written into your hypothesis. For instance, in a study about exercise, the independent variable might be the speed at which the respondents walk for thirty minutes, and the dependent variable would be their heart rate. In your study and in your hypothesis, you’re trying to understand the relationship between the two variables.
The best hypotheses start by asking the right questions. For instance, if you’ve observed that the grass is greener when it rains twice a week, you could ask what kind of grass it is, what elevation it’s at, and whether the grass across the street responds to rain in the same way. Any of these questions could become the backbone of experiments to test why the grass gets greener when it rains fairly frequently.
As you’re asking more questions about your first observation, make sure you’re also making more observations. If it doesn’t rain for two weeks and the grass still looks green, that’s an important observation that could influence your hypothesis. You'll continue observing all throughout your experiment, but until the hypothesis is finalized, every observation should be noted.
Finally, you should consult secondary research before writing your hypothesis. Secondary research comprises results found and published by other people; you can usually find this information online or at your library. Make sure the research you find is credible and related to your topic. If you’re studying the correlation between rain and grass growth, it would help to research rain patterns over the past twenty years for your county, published by a local agricultural association. You should also research the types of grass common in your area, the type of grass in your lawn, and whether anyone else has conducted experiments about your hypothesis. Also be sure you’re checking the quality of your research. Research done by a middle school student about what minerals can be found in rainwater would be less useful than an article published by a local university.
Once you’ve considered all of the factors above, you’re ready to start writing your hypothesis. Hypotheses usually take a certain form when they’re written out in a research report.
When you boil down your hypothesis statement, you are writing down your best guess and not the question at hand . This means that your statement should be written as if it is fact already, even though you are simply testing it.
The reason for this is that, after you have completed your study, you'll either accept or reject your if-then or your null hypothesis. All hypothesis testing examples should be measurable and able to be confirmed or denied. You cannot confirm a question, only a statement!
In fact, you come up with hypothesis examples all the time! For instance, when you guess on the outcome of a basketball game, you don’t say, “Will the Miami Heat beat the Boston Celtics?” but instead, “I think the Miami Heat will beat the Boston Celtics.” You state it as if it is already true, even if it turns out you’re wrong. You do the same thing when writing your hypothesis.
Additionally, keep in mind that hypotheses can range from very specific to very broad. If your experiment involves a narrow cause and effect, your hypothesis can be specific; if it involves a broad range of causes and effects, your hypothesis can be broad as well.
Now that you understand what goes into a hypothesis, it’s time to look more closely at the two most common types of hypothesis: the if-then hypothesis and the null hypothesis.
First of all, if-then hypotheses typically follow this formula:
If ____ happens, then ____ will happen.
The goal of this type of hypothesis is to test the causal relationship between the independent and dependent variable. It’s fairly simple, and each hypothesis can vary in how detailed it can be. We create if-then hypotheses all the time with our daily predictions. Here are some examples of hypotheses that use an if-then structure from daily life:
In each of these situations, you’re making a guess on how an independent variable (sleep, time, or studying) will affect a dependent variable (the amount of work you can do, making it to a party on time, or getting better grades).
You may still be asking, “What is an example of a hypothesis used in scientific research?” Take one of the hypothesis examples from a real-world study on whether using technology before bed affects children’s sleep patterns. The hypothesis reads:
“We hypothesized that increased hours of tablet- and phone-based screen time at bedtime would be inversely correlated with sleep quality and child attention.”
It might not look like it, but this is an if-then statement. The researchers basically said, “If children have more screen usage at bedtime, then their quality of sleep and attention will be worse.” The sleep quality and attention are the dependent variables and the screen usage is the independent variable. (Usually, the independent variable comes after the “if” and the dependent variable comes after the “then,” as it is the independent variable that affects the dependent variable.) This is an excellent example of how flexible hypothesis statements can be, as long as the general idea of “if-then” and the independent and dependent variables are present.
Your if-then hypothesis is not the only one needed to complete a successful experiment, however. You also need a null hypothesis to test it against. In its most basic form, the null hypothesis is the opposite of your if-then hypothesis. When you write your null hypothesis, you are writing a hypothesis that suggests that your guess is not true, and that the independent and dependent variables have no relationship.
One null hypothesis for the cell phone and sleep study from the last section might say:
“If children have more screen usage at bedtime, their quality of sleep and attention will not be worse.”
In this case, this is a null hypothesis because it states the opposite of the original hypothesis!
Conversely, if your if-then hypothesis suggests that your two variables have no relationship, then your null hypothesis would suggest that there is one. So, pretend that there is a study asking the question, “Does the number of followers on Instagram influence how long people spend on the app?” The independent variable is the number of followers, and the dependent variable is the time spent. But if you, as the researcher, don’t think there is a relationship between the number of followers and time spent, you might write an if-then hypothesis that reads:
“If people have many followers on Instagram, they will not spend more time on the app than people who have fewer.”
In this case, the if-then suggests there isn’t a relationship between the variables, so one of the null hypothesis examples might say:
“If people have many followers on Instagram, they will spend more time on the app than people who have fewer.”
You then test both the if-then and the null hypothesis to gauge if there is a relationship between the variables, and if so, how much of a relationship.
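To see how this testing plays out numerically, here is a minimal sketch in Python of a permutation test on invented screen-time and sleep-quality data. The scores, group sizes, and the 0.05 cutoff are all illustrative assumptions, not values from any real study:

```python
import random

# Hypothetical sleep-quality scores (1-10) for two made-up groups.
high_screen_time = [4, 5, 3, 6, 4, 5, 3, 4]   # more screen use at bedtime
low_screen_time = [7, 6, 8, 5, 7, 6, 8, 7]    # less screen use at bedtime

def permutation_test(group_a, group_b, n_permutations=10_000, seed=0):
    """Estimate how often a difference in means at least this large
    would appear if the null hypothesis (no relationship) were true."""
    rng = random.Random(seed)
    mean = lambda xs: sum(xs) / len(xs)
    observed = abs(mean(group_a) - mean(group_b))
    pooled = list(group_a) + list(group_b)
    count = 0
    for _ in range(n_permutations):
        rng.shuffle(pooled)                      # pretend the labels are random
        a, b = pooled[:len(group_a)], pooled[len(group_a):]
        if abs(mean(a) - mean(b)) >= observed:
            count += 1
    return count / n_permutations

p_value = permutation_test(high_screen_time, low_screen_time)
decision = "reject the null" if p_value < 0.05 else "fail to reject the null"
```

A small p-value means a difference this large would rarely appear if the null were true, so you would reject the null hypothesis; otherwise you fail to reject it.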
If you’re going to take the time to conduct an experiment, whether in school or by yourself, you’re also going to want to take the time to make sure your hypothesis is a good one. The best hypotheses have four major elements in common: plausibility, defined concepts, observability, and general explanation.
At first glance, this quality of a hypothesis might seem obvious. When your hypothesis is plausible, that means it’s possible given what we know about science and general common sense. However, improbable hypotheses are more common than you might think.
Imagine you’re studying weight gain and television watching habits. If you hypothesize that people who watch more than twenty hours of television a week will gain two hundred pounds or more over the course of a year, this is improbable (though not strictly impossible). Common sense can tell us the results of such a study before the study even begins.
Improbable hypotheses generally go against science, as well. Take this hypothesis example:
“If a person smokes one cigarette a day, then they will have lungs just as healthy as the average person’s.”
This hypothesis is obviously untrue, as studies have shown again and again that cigarettes negatively affect lung health. You must be careful that your hypotheses do not reflect your own personal opinion more than they do scientifically supported findings. This requirement of plausibility is another reason to do research before writing your hypothesis, to make sure it has not already been disproven.
The more advanced you are in your studies, the more likely that the terms you’re using in your hypothesis are specific to a limited set of knowledge. One of the hypothesis testing examples might include the readability of printed text in newspapers, where you might use words like “kerning” and “x-height.” Unless your readers have a background in graphic design, it’s likely that they won’t know what you mean by these terms. Thus, it’s important to either write what they mean in the hypothesis itself or in the report before the hypothesis.
Here’s what we mean. Which of the following sentences makes more sense to the common person?
If the kerning is greater than average, more words will be read per minute.
If the space between letters is greater than average, more words will be read per minute.
For people reading your report that are not experts in typography, simply adding a few more words will be helpful in clarifying exactly what the experiment is all about. It’s always a good idea to make your research and findings as accessible as possible.
Good hypotheses ensure that you can observe the results.
In order to measure the truth or falsity of your hypothesis, you must be able to see your variables and the way they interact. For instance, if your hypothesis is that the flight patterns of satellites affect the strength of certain television signals, yet you don’t have a telescope to view the satellites or a television to monitor the signal strength, you cannot properly observe your hypothesis and thus cannot continue your study.
Some variables may seem easy to observe, but if you do not have a system of measurement in place, you cannot observe your hypothesis properly. Here’s an example: if you’re experimenting on the effect of healthy food on overall happiness, but you don’t have a way to monitor and measure what “overall happiness” means, your results will not reflect the truth. Monitoring how often someone smiles for a whole day is not reasonably observable, but having the participants state how happy they feel on a scale of one to ten is more observable.
In writing your hypothesis, always keep in mind how you'll execute the experiment.
Perhaps you’d like to study what color your best friend wears the most often by observing and documenting the colors she wears each day of the week. This might be fun information for her and you to know, but beyond you two, there aren’t many people who could benefit from this experiment. When you start an experiment, you should note how generalizable your findings may be if they are confirmed. Generalizability is basically how common a particular phenomenon is to other people’s everyday life.
Let’s say you’re asking a question about the health benefits of eating an apple for one day only. You need to realize that the experiment may be too specific to be helpful; it does not help to explain a phenomenon that many people experience. If you find yourself with too specific a hypothesis, go back to asking the big question: what is it that you want to know, and what do you think will happen between your two variables?
We know it can be hard to write a good hypothesis unless you’ve seen some good hypothesis examples. We’ve included four hypothesis examples based on some made-up experiments. Use these as templates or launch pads for coming up with your own hypotheses.
You are a student at PrepScholar University. When you walk around campus, you notice that, when the temperature is above 60 degrees, more students study in the quad. You want to know when your fellow students are more likely to study outside. With this information, how do you make the best hypothesis possible?
You must remember to make additional observations and do secondary research before writing your hypothesis. In doing so, you notice that no one studies outside when it’s 75 degrees and raining, so this should be included in your experiment. Also, studies done on the topic beforehand suggested that students are more likely to study in temperatures less than 85 degrees. With this in mind, you feel confident that you can identify your variables and write your hypotheses:
If-then: “If the temperature in Fahrenheit is less than 60 degrees, significantly fewer students will study outside.”
Null: “If the temperature in Fahrenheit is less than 60 degrees, the same number of students will study outside as when it is more than 60 degrees.”
These hypotheses are plausible, as the temperatures are reasonably within the bounds of what is possible. The number of people in the quad is also easily observable. It is also not a phenomenon specific to only one person or at one time, but instead can explain a phenomenon for a broader group of people.
To complete this experiment, you pick the month of October to observe the quad. Every day (except on the days when it’s raining), from 3 to 4 PM, when most classes have let out for the day, you observe how many people are on the quad. You measure how many people come and how many leave. You also write down the temperature on the hour.
After writing down all of your observations and putting them on a graph, you find that the most students study on the quad when it is 70 degrees outside, and that the number of students drops sharply once the temperature reaches 60 degrees or below. In this case, your research report would state that your findings support your first hypothesis, and that you reject the null hypothesis.
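As a quick sketch of how you might tally observations like these, here is a Python snippet. The temperatures and head counts below are made up purely for illustration:

```python
# Hypothetical October observations: (temperature in Fahrenheit, students on the quad)
observations = [(72, 31), (68, 28), (75, 30), (58, 12), (55, 9),
                (61, 25), (70, 33), (52, 7), (66, 27), (59, 11)]

# Split the counts by the 60-degree cutoff from the hypotheses.
cold = [n for temp, n in observations if temp < 60]
warm = [n for temp, n in observations if temp >= 60]

cold_avg = sum(cold) / len(cold)   # average turnout below 60 degrees
warm_avg = sum(warm) / len(warm)   # average turnout at 60 degrees and above

# The if-then hypothesis predicts cold_avg < warm_avg;
# the null predicts roughly equal averages.
supports_if_then = cold_avg < warm_avg
```

With this invented data, average turnout below 60 degrees is far lower than above it, which is the pattern the if-then hypothesis predicts.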
Let’s say that you work at a bakery. You specialize in cupcakes, and you make only two colors of frosting: yellow and purple. You want to know what kind of customers are more likely to buy what kind of cupcake, so you set up an experiment. Your independent variable is the customer’s gender, and the dependent variable is the color of the frosting. What is an example of a hypothesis that might answer the question of this study?
Here’s what your hypotheses might look like:
If-then: “If customers’ gender is female, then they will buy more yellow cupcakes than purple cupcakes.”
Null: “If customers’ gender is female, then they will be just as likely to buy purple cupcakes as yellow cupcakes.”
This is a pretty simple experiment! It passes the test of plausibility (there could easily be a difference), defined concepts (there’s nothing complicated about cupcakes!), observability (both color and gender can be easily observed), and general explanation (this would potentially help you make better business decisions).
While watching your backyard bird feeder, you realized that different birds come on the days when you change the types of seeds. You decide that you want to see more cardinals in your backyard, so you decide to see what type of food they like the best and set up an experiment.
However, one morning, you notice that, while some cardinals are present, blue jays are eating out of your backyard feeder filled with millet. You decide that, of all of the other birds, you would like to see the blue jays the least. This means you'll have more than one variable in your hypothesis. Your new hypotheses might look like this:
If-then: “If sunflower seeds are placed in the bird feeders, then more cardinals will come than blue jays. If millet is placed in the bird feeders, then more blue jays will come than cardinals.”
Null: “If either sunflower seeds or millet are placed in the bird feeders, equal numbers of cardinals and blue jays will come.”
Through simple observation, you actually find that cardinals come as often as blue jays when either sunflower seeds or millet is in the bird feeder. In this case, you would reject your “if-then” hypothesis and “fail to reject” your null hypothesis. You cannot accept your first hypothesis, because it’s clearly not true; instead, you found that there was actually no relationship between your variables. Consequently, you would need to run more experiments with different variables to see if the new variables impact the results.
You’re about to give a speech in one of your classes about the importance of paying attention. You want to take this opportunity to test a hypothesis you’ve had for a while:
If-then: If students sit in the first two rows of the classroom, then they will listen better than students who do not.
Null: If students sit in the first two rows of the classroom, then they will not listen better or worse than students who do not.
You give your speech and then ask your teacher if you can hand out a short survey to the class. On the survey, you’ve included questions about some of the topics you talked about. When you get back the results, you’re surprised to see that not only do the students in the first two rows not pay better attention, but they also scored worse than students in other parts of the classroom! Here, both your if-then and your null hypotheses are not representative of your findings. What do you do?
This is when you reject both your if-then and null hypotheses and instead create an alternative hypothesis. This type of hypothesis is used in the rare circumstance that neither of your hypotheses is able to capture your findings. Now you can use what you’ve learned to draft new hypotheses and test again!
The more comfortable you become with writing hypotheses, the better they will become. The structure of hypotheses is flexible and may need to change depending on what topic you are studying. The most important thing to remember is the purpose of your hypothesis and the difference between the if-then and the null. From there, in forming your hypothesis, you should constantly be asking questions, making observations, doing secondary research, and considering your variables. After you have written your hypothesis, be sure to edit it so that it is plausible, clearly defined, observable, and helpful in explaining a general phenomenon.
Writing a hypothesis is something that everyone, from elementary school children competing in a science fair to professional scientists in a lab, needs to know how to do. Hypotheses are vital in experiments and in properly executing the scientific method. When done correctly, hypotheses will set up your studies for success and help you to understand the world a little better, one experiment at a time.
If you’re studying for the science portion of the ACT, there’s definitely a lot you need to know. We’ve got the tools to help, though! Start by checking out our ultimate study guide for the ACT Science subject test. Once you read through that, be sure to download our recommended ACT Science practice tests , since they’re one of the most foolproof ways to improve your score. (And don’t forget to check out our expert guide book , too.)
If you love science and want to major in a scientific field, you should start preparing in high school . Here are the science classes you should take to set yourself up for success.
If you’re trying to think of science experiments you can do for class (or for a science fair!), here’s a list of 37 awesome science experiments you can do at home.
Ashley Sufflé Robinson has a Ph.D. in 19th Century English Literature. As a content writer for PrepScholar, Ashley is passionate about giving college-bound students the in-depth information they need to get into the school of their dreams.
Author: Jana Vasković, MD • Reviewer: Francesca Salvador, MSc • Last reviewed: November 03, 2023 • Reading time: 12 minutes
For a long time, the process of communication between the nerves and their target tissues was a big unknown for physiologists. With the development of electrophysiology and the discovery of electrical activity of neurons, it was discovered that the transmission of signals from neurons to their target tissues is mediated by action potentials.
An action potential is defined as a sudden, fast, transitory, and propagating change of the resting membrane potential. Only neurons and muscle cells are capable of generating an action potential; that property is called excitability.
| Key facts | |
| --- | --- |
| Definition | Sudden, fast, transitory and propagating change of the resting membrane potential |
| Stimuli | Subthreshold, threshold, suprathreshold |
| Phases | Depolarization, overshoot, repolarization |
| Refractoriness | Absolute: depolarization and first 2/3 of repolarization; relative: last 1/3 of repolarization |
| Synapse | Presynaptic membrane, synaptic cleft, postsynaptic membrane |
This article will discuss the definition, steps and phases of the action potential.
Propagation of action potential.
Action potentials are nerve signals. Neurons generate and conduct these signals along their processes in order to transmit them to the target tissues. Upon receiving the signal, the target tissues are either stimulated, inhibited, or modulated in some way.
Learn the structure and the types of the neurons with the following study unit.
But what causes the action potential? From an electrical aspect, it is caused by a stimulus with a certain value expressed in millivolts [mV]. Not all stimuli can cause an action potential. An adequate stimulus must have a sufficient electrical value to reduce the negativity of the nerve cell to the threshold of the action potential. In this manner, there are subthreshold, threshold, and suprathreshold stimuli. Subthreshold stimuli cannot cause an action potential. Threshold stimuli have enough energy or potential to produce an action potential (nerve impulse). Suprathreshold stimuli also produce an action potential, but their strength is higher than that of threshold stimuli.
So, an action potential is generated when a stimulus changes the membrane potential to the value of the threshold potential. The threshold potential is usually around -50 to -55 mV. It is important to know that the action potential obeys the all-or-none law. This means that any subthreshold stimulus will cause nothing, while threshold and suprathreshold stimuli produce a full response of the excitable cell.
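The all-or-none law can be sketched as a tiny Python function. The -55 mV threshold is the approximate value mentioned above, and the response labels are deliberate simplifications:

```python
THRESHOLD_MV = -55.0  # approximate threshold potential (about -50 to -55 mV)

def classify_stimulus(peak_membrane_potential_mv):
    """Classify a stimulus by the membrane potential it drives the cell to."""
    if peak_membrane_potential_mv < THRESHOLD_MV:
        # Stays below threshold: no action potential at all.
        return "subthreshold: no action potential"
    # At or above threshold: the cell fires a full, identical action potential.
    return "threshold/suprathreshold: full action potential"
```

Note that a “bigger” suprathreshold stimulus does not return a “bigger” response here; the next section explains what does change with stimulus strength.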
Is an action potential different depending on whether it’s caused by a threshold or suprathreshold stimulus? The answer is no. The length and amplitude of an action potential are always the same. However, increasing the stimulus strength causes an increase in the frequency of action potentials. An action potential propagates along the nerve fiber without any decrease or weakening of its amplitude or length. In addition, after one action potential is generated, neurons become refractory to stimuli for a certain period of time in which they cannot generate another action potential.
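These properties (identical spikes, higher firing frequency with stronger stimuli, and a refractory pause) can be illustrated with a simple leaky integrate-and-fire model. This is a standard textbook simplification, not this article's own model, and all parameter values below are toy assumptions:

```python
def count_spikes(input_current, t_steps=1000, dt=0.1):
    """Leaky integrate-and-fire neuron: count action potentials in t_steps."""
    v_rest, v_threshold = -70.0, -55.0   # mV, illustrative values
    tau, resistance = 10.0, 10.0         # ms and arbitrary units, toy parameters
    v = v_rest
    spikes, refractory = 0, 0
    for _ in range(t_steps):
        if refractory > 0:               # absolute refractory period:
            refractory -= 1              # no new action potential possible
            continue
        # Leaky integration toward v_rest + resistance * input_current
        v += (-(v - v_rest) + resistance * input_current) * dt / tau
        if v >= v_threshold:             # all-or-none: every spike is identical
            spikes += 1
            v = v_rest                   # reset after the spike
            refractory = 20              # short refractory period, in steps
    return spikes
```

With these toy numbers, a weak current of 1.0 never reaches threshold and produces no spikes, while stronger suprathreshold currents produce more frequent (but otherwise identical) spikes.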
From the aspect of ions, an action potential is caused by temporary changes in membrane permeability for diffusible ions. These changes cause ion channels to open and the ions to move down their concentration gradients. The value of the threshold potential depends on the membrane permeability, the intra- and extracellular concentrations of ions, and the properties of the cell membrane.
An action potential has three phases: depolarization, overshoot, repolarization. There are two more states of the membrane potential related to the action potential. The first one is hypopolarization which precedes the depolarization, while the second one is hyperpolarization , which follows the repolarization.
Hypopolarization is the initial increase of the membrane potential to the value of the threshold potential. The threshold potential opens voltage-gated sodium channels and causes a large influx of sodium ions. This phase is called depolarization. During depolarization, the inside of the cell becomes more and more electropositive, until the potential approaches the electrochemical equilibrium potential for sodium of +61 mV. This phase of extreme positivity is the overshoot phase.
After the overshoot, the sodium permeability suddenly decreases due to the closing of its channels. The overshoot value of the cell potential opens voltage-gated potassium channels, which causes a large potassium efflux, decreasing the cell’s electropositivity. This phase is the repolarization phase, whose purpose is to restore the resting membrane potential. Repolarization first leads to hyperpolarization, a state in which the membrane potential is more negative than the resting membrane potential. But soon after that, the membrane re-establishes the resting membrane potential.
After reviewing the roles of ions, we can now define the threshold potential more precisely as the value of the membrane potential at which the voltage-gated sodium channels open. In excitable tissues, the threshold potential is around 10 to 15 mV above (less negative than) the resting membrane potential.
The refractory period is the time after an action potential is generated, during which the excitable cell cannot produce another action potential. There are two subphases of this period, absolute and relative refractoriness.
Absolute refractoriness overlaps the depolarization phase and around 2/3 of the repolarization phase. A new action potential cannot be generated during depolarization because all the voltage-gated sodium channels are already open or opening at their maximum speed. During early repolarization, a new action potential is impossible because the sodium channels are inactivated and need the membrane to return toward the resting potential before they can shift back to the closed state, from which they can open once again. Absolute refractoriness ends when enough sodium channels recover from their inactive state.
Relative refractoriness is the period when the generation of a new action potential is possible, but only upon a suprathreshold stimulus. This period overlaps the final 1/3 of repolarization.
An action potential is generated in the body of the neuron and propagated through its axon. Propagation doesn’t decrease or affect the quality of the action potential in any way, so the target tissue gets the same impulse no matter how far it is from the neuronal body.
The action potential generates at one spot of the cell membrane. It propagates along the membrane with every next part of the membrane being sequentially depolarized. This means that the action potential doesn’t move but rather causes a new action potential of the adjacent segment of the neuronal membrane.
We need to emphasize that the action potential always propagates forward, never backward. This is due to the refractoriness of the parts of the membrane that were already depolarized, so that the only possible direction of propagation is forward. Because of this, an action potential always propagates from the neuronal body, through the axon, to the target tissue.
The speed of propagation largely depends on the thickness of the axon and whether it’s myelinated or not. The larger the diameter, the higher the speed of propagation. Propagation is also faster if an axon is myelinated. Myelin increases the propagation speed partly because it increases the effective thickness of the fiber. In addition, myelin enables saltatory conduction of the action potential: only the nodes of Ranvier depolarize, and the myelinated segments between them are skipped. In unmyelinated fibers, every part of the axonal membrane needs to undergo depolarization, making propagation significantly slower.
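As a rough numerical illustration of the diameter and myelination effects, the sketch below uses common empirical rules of thumb: roughly 6 m/s per µm of diameter for myelinated fibers, and a square-root dependence on diameter for unmyelinated ones. These constants are textbook approximations assumed here, not values from this article:

```python
import math

def conduction_velocity_m_s(diameter_um, myelinated):
    """Approximate propagation speed of an action potential along an axon."""
    if myelinated:
        # Saltatory conduction: speed roughly proportional to fiber diameter.
        return 6.0 * diameter_um
    # Continuous conduction: speed grows only with the square root of diameter.
    return 1.8 * math.sqrt(diameter_um)
```

Even with these crude formulas, the two qualitative claims above hold: a thicker axon conducts faster than a thinner one, and a myelinated axon conducts far faster than an unmyelinated axon of the same diameter.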
Do you want to learn faster all the parts and the functions of the nervous system? Go to our nervous system quiz article and ace your next exam.
A synapse is a junction between a nerve cell and its target tissue. In humans, synapses are chemical, meaning that the nerve impulse is transmitted from the axon ending to the target tissue by chemical substances called neurotransmitters (ligands). If a neurotransmitter stimulates the target cell to an action, it is an excitatory neurotransmitter. On the other hand, if it inhibits the target cell, it is an inhibitory neurotransmitter.
Depending on the type of target tissue, there are central and peripheral synapses. Central synapses are between two neurons in the central nervous system, while peripheral synapses occur between a neuron and muscle fiber, peripheral nerve, or gland.
Each synapse consists of the presynaptic membrane, the synaptic cleft, and the postsynaptic membrane.
Numerous vesicles that contain neurotransmitters are produced and stored inside the terminal button of the nerve fiber. When the presynaptic membrane is depolarized by an action potential, the voltage-gated calcium channels open. This leads to an influx of calcium, which changes the state of certain membrane proteins in the presynaptic membrane and results in exocytosis of the neurotransmitter into the synaptic cleft.
The postsynaptic membrane contains receptors for the neurotransmitters. Once the neurotransmitter binds to its receptor, the ligand-gated channels of the postsynaptic membrane either open or close. These ligand-gated channels are ion channels, and their opening or closing causes a redistribution of ions in the postsynaptic cell. Depending on whether the neurotransmitter is excitatory or inhibitory, this will result in different responses.
Learn the types of the neurons with the following quiz.
An action potential is caused by either threshold or suprathreshold stimuli upon a neuron. It consists of three phases: depolarization, overshoot, and repolarization.
An action potential propagates along the cell membrane of an axon until it reaches the terminal button. Once the terminal button is depolarized, it releases a neurotransmitter into the synaptic cleft. The neurotransmitter binds to its receptors on the postsynaptic membrane of the target cell, causing its response either in terms of stimulation or inhibition.
Action potentials propagate faster through thick, myelinated axons than through thin, unmyelinated axons. After one action potential is generated, a neuron is briefly unable to generate a new one due to its refractoriness to stimuli.
COMMENTS
Action Hypotheses are a form of "if/then" hypothesis that we use in Emergent Learning. While scientific hypotheses propose an explanation of phenomena based on evidence from the past, action hypotheses look ahead to explain what we expect to happen as a result of future action. In both cases, the goal is to articulate something that is ...
FORMULATION OF AN ACTION HYPOTHESIS. To form a hypothesis, the investigator should:
1. Have a thorough knowledge of the problem.
2. Be clear about the desired goal (solution).
3. Make a real effort to look at the problem in new ways other than the regular practices (break out of conventional thinking).
4. Give importance to imagination and speculation.
A hypothesis is the statement that you are testing. Among the models of action research, practical action research involves a practitioner working with the researcher to identify a research problem, propose an intervention, and design methods. It is important that the practitioner as well as the researcher ...
"A hypothesis is a conjectural statement of the relation between two or more variables". (Kerlinger, 1956) "Hypothesis is a formal statement that presents the expected relationship between an independent and dependent variable."(Creswell, 1994) "A research question is essentially a hypothesis asked in the form of a question."
Phrase your hypothesis in if…then form to identify the variables: the first part of the sentence states the independent variable and the second part states the dependent variable. For example: if a first-year student starts attending more lectures, then their exam scores will improve.
Merriam-Webster defines a hypothesis as "an assumption or concession made for the sake of argument." In other words, a hypothesis is an educated guess. Scientists make a reasonable assumption (a hypothesis), then design an experiment to test whether it is true or not.
Review existing literature. A thorough literature review helps you understand the current state of knowledge on your chosen topic: it lets you identify what is already known and what gaps exist, and identifying those gaps can inspire your hypothesis.