Turk J Anaesthesiol Reanim. 2016 Aug; 44(4).

What is Scientific Research and How Can it be Done?

Scientific research comprises studies that must be systematically planned before being performed. In this review, the classification and description of scientific studies are explained, along with randomisation at the planning stage and bias.

Research conducted in a planned manner for the purpose of contributing to science through the systematic collection, interpretation and evaluation of data is called scientific research; a researcher is one who conducts such research. The results obtained from a small group through scientific studies are generalised to society, and new information is revealed with respect to diagnosis, treatment and the reliability of applications. The purpose of this review is to provide information about the definition, classification and methodology of scientific research.

Before beginning scientific research, the researcher should determine the subject, plan the study and specify the methodology. The Declaration of Helsinki states that ‘the primary purpose of medical research on volunteers is to understand the causes, development and effects of diseases and to develop protective, diagnostic and therapeutic interventions (methods, procedures and therapies). Even the best proven interventions should be evaluated continuously through research with regard to reliability, effectiveness, efficiency, accessibility and quality’ ( 1 ).

The questions, methods of response to questions and difficulties in scientific research may vary, but the design and structure are generally the same ( 2 ).

Classification of Scientific Research

Scientific research can be classified in several ways. Classification can be made according to the data collection technique, causality, the relationship with time and the medium in which the research is conducted.

  • Observational
  • Experimental
  • Descriptive
  • Retrospective
  • Prospective
  • Cross-sectional
  • Social descriptive research ( 3 )

Another method is to classify the research according to its descriptive or analytical features. This review is written according to this classification method.

I. Descriptive research

  • Case series
  • Surveillance studies

II. Analytical research

  • Observational studies: cohort, case control and cross- sectional research
  • Interventional research: quasi-experimental and clinical research
  • Case Report: this is the most common type of descriptive study. It is the examination of a single case with features that distinguish it within society, e.g. conducting general anaesthesia in a pregnant patient with mucopolysaccharidosis.
  • Case Series: this is the description of repeated cases with common features, for instance, a case series involving interscapular pain related to neuraxial labour analgesia. Notably, malignant hyperthermia cases are not accepted as a case series, since they have been seen only rarely over the course of their historical development.
  • Surveillance Studies: these are the results obtained from the databases that follow and record a health problem for a certain time, e.g. the surveillance of cross-infections during anaesthesia in the intensive care unit.

Moreover, some studies may be experimental: the researcher intervenes, then waits, observes and collects the resulting data. Experimental studies most often take the form of clinical trials or laboratory animal trials ( 2 ).

Analytical observational research can be classified as cohort, case-control and cross-sectional studies.

  • Cohort Studies: firstly, the participants are screened with regard to the disease under investigation, and patients who already have the disease are excluded. The healthy participants are then evaluated with regard to their exposure to the effect under study, and the group (cohort) is followed up for a sufficient period of time with respect to the occurrence of the disease, whose progress is studied. The rate at which healthy participants develop the disease is the incidence. In cohort studies, the risk of disease in the groups exposed and not exposed to the effect is calculated, and the ratio of the two risks is taken. This ratio is called the relative risk; it indicates the strength of the effect of the exposure on the disease.

Cohort research may be observational or experimental. Following patients forward in time is called a prospective cohort study; the results are obtained after the research starts. Following cohort subjects from a certain point backwards into the past is called a retrospective cohort study. Prospective cohort studies are more valuable than retrospective ones because, in the former, the researcher plans the study in advance, determines what data will be collected, and then observes and records the data personally. In retrospective studies, by contrast, the research is based on previously recorded data, and no new data can be added.

In fact, the terms retrospective and prospective do not describe whether a study is observational; they describe the relationship between the date on which the researcher began the study and the disease development period. The most critical disadvantage of this type of research is that, if the follow-up period is long, participants may leave the study of their own accord or due to physical circumstances. Cohort studies that begin after exposure but before disease development are called ambidirectional studies. Public health studies generally fall within this group, e.g. lung cancer development in smokers.

  • Case-Control Studies: these studies are retrospective cohort studies. They examine the cause and effect relationship from the effect to the cause. The detection or determination of data depends on the information recorded in the past. The researcher has no control over the data ( 2 ).

  • Cross-Sectional Studies: these studies are characterised by their timing; the exposure and the result are evaluated simultaneously. They are advantageous in that they can be concluded relatively quickly, although it may be difficult to obtain reliable results for rare diseases ( 2 ). While cross-sectional designs are of limited use in anaesthesia studies (since the process of exposure is short), they can be used in studies conducted in intensive care units.

  • Quasi-Experimental Research: this is conducted when a quick result is required and the participants or research areas cannot be randomised, e.g. giving hand-washing training and comparing the frequency of nosocomial infections before and after.
  • Clinical Research: these are prospective studies carried out with a control group for the purpose of comparing the effect and value of an intervention in a clinical setting. ‘Clinical study’ and ‘clinical research’ have the same meaning. Drugs, invasive interventions, medical devices, operations, diets, physical therapy and diagnostic tools are relevant in this context ( 6 ).

Clinical studies are conducted by a responsible researcher, generally a physician. The research team may include other healthcare staff besides physicians. Clinical studies may be financed by healthcare institutes, drug companies, academic medical centres, volunteer groups, physicians, healthcare service providers and other individuals. They may be conducted in several places, including hospitals, universities, physicians’ offices and community clinics, depending on the researcher’s requirements. The participants are informed of the duration of the study before their inclusion. Clinical studies should include the evaluation of recommendations (drug, device or surgical) for the treatment of a disease or syndrome, a comparison of one or more applications, and the search for different ways of recognising a disease or condition and preventing its recurrence ( 7 ).

Clinical Research

In this review, clinical research is explained in more detail, since it is the most valuable type of study in scientific research.

Clinical research starts with forming a hypothesis. A hypothesis can be defined as a claim put forward about the value of a population parameter based on sampling. There are two types of hypotheses in statistics.

  • The H 0 hypothesis is called the control or null hypothesis. It is the hypothesis put forward in research which states that there is no difference between the groups under consideration. If this hypothesis is rejected at the end of the study, a difference is taken to exist between the two treatments under consideration.
  • The H 1 hypothesis is called the alternative hypothesis. It is hypothesised against the null hypothesis and states that a difference exists between the groups under consideration. For example, consider the claim that drug A has an analgesic effect. The null hypothesis (H 0 ) is that there is no difference between drug A and placebo with regard to the analgesic effect; the alternative hypothesis (H 1 ) is that a difference exists between drug A and placebo with regard to the analgesic effect.
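
One minimal way to make the H 0 /H 1 logic concrete is an exact binomial (sign) test, sketched below. The trial numbers are invented purely for illustration: suppose that in a paired comparison, drug A relieved pain better than placebo in 15 of 18 patients, and H 0 states that drug A and placebo are equally likely to win (probability 0.5):

```python
from math import comb

def binomial_two_sided_p(k, n, p0=0.5):
    """Exact two-sided p-value for H0: the success probability equals p0."""
    # Probability of each possible outcome (0..n successes) under H0
    probs = [comb(n, i) * p0**i * (1 - p0)**(n - i) for i in range(n + 1)]
    observed = probs[k]
    # Sum the probabilities of every outcome at least as extreme as the observed one
    return sum(p for p in probs if p <= observed + 1e-12)

# Hypothetical paired trial: drug A beat placebo in 15 of 18 patients.
p_value = binomial_two_sided_p(15, 18)
print(round(p_value, 4))  # 0.0075
# Since p < 0.05, H0 (no difference between drug A and placebo) is rejected.
```

If drug A had won in only 9 of 18 patients, the p-value would be large and H 0 could not be rejected.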

The planning phase comes after the determination of a hypothesis. A clinical research plan is called a protocol . A protocol should state the rationale for the research, the number and characteristics of the participants, the tests to be applied, the study duration and the information to be gathered from the participants, and eligibility criteria should be developed.

The selection of the participant groups to be included in the study is important, and the study’s inclusion and exclusion criteria should be determined. Inclusion criteria should be defined in terms of the demographic characteristics (age, gender, etc.) of the participant group, and the exclusion criteria in terms of diseases that may influence the study, age ranges, pregnancy and lactation, continuously used drugs and the participants’ ability to cooperate.

The next stage is methodology. Methodology can be grouped under subheadings, namely, the calculation of number of subjects, blinding (masking), randomisation, selection of operation to be applied, use of placebo and criteria for stopping and changing the treatment.

I. Calculation of the Number of Subjects

The entire source from which the data are obtained is called a universe or population . A small group selected from a population according to certain rules, and accepted as highly representative of the population from which it is selected, is called a sample , and the characteristics on which data are collected are called variables. A numerical summary calculated from the entire population is called a parameter . Conducting a study on a sample rather than the entire population is easier and less costly.

Many factors influence the determination of the sample size. Firstly, the type of variable should be determined. Variables are classified as categorical (qualitative, non-numerical) or numerical (quantitative). Individuals measured on categorical variables are classified according to their characteristics. Categorical variables are either nominal or ordinal (ordered). In nominal variables, the order of the categories is arbitrary and depends on the researcher’s preference; for instance, female participants may be listed before male participants, or vice versa. An ordinal (ordered) variable is ordered from small to large or vice versa (e.g. ordering obese patients by weight, from the lightest to the heaviest or vice versa). A categorical variable with only two possible categories is called binary or dichotomous (e.g. female/male, obese/non-obese).

If the variable has numerical (quantitative) characteristics and these characteristics cannot be categorised, then it is called a numerical variable. Numerical variables are either discrete or continuous. For example, the number of operations with spinal anaesthesia represents a discrete variable. The haemoglobin value or height represents a continuous variable.

The statistical analyses to be employed depend on the type of variable, so determining the variable types is necessary for selecting the statistical method, as well as the appropriate procedure in software such as SPSS. Categorical variables are presented as numbers and percentages, whereas numerical variables are summarised using measures such as the mean and standard deviation. Some cases require care: although a rating instrument such as the Visual Analogue Scale (VAS) might appear categorical (qualitative, non-numerical), it yields a numerical value and is therefore classified as a numerical variable, so such measurements are averaged.
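
The two kinds of summary mentioned above can be sketched as follows; the ASA-class and haemoglobin values are hypothetical, used only to show a categorical variable reported as counts and percentages next to a numerical variable reported as mean and standard deviation:

```python
from statistics import mean, stdev
from collections import Counter

# Hypothetical data: ASA class is categorical; haemoglobin (g/dL) is numerical.
asa_class = ["I", "II", "I", "III", "II", "I"]
haemoglobin = [13.2, 11.8, 14.1, 12.5, 13.0, 12.7]

# Categorical variable: numbers and percentages
counts = Counter(asa_class)
for category, n in counts.items():
    print(f"ASA {category}: n={n} ({100 * n / len(asa_class):.1f}%)")

# Numerical variable: mean and standard deviation
print(f"Hb: mean={mean(haemoglobin):.2f}, SD={stdev(haemoglobin):.2f}")
```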

Clinical research is carried out on the sample and generalised to the population. Accordingly, the number of samples should be correctly determined. Different sample size formulas are used on the basis of the statistical method to be used. When the sample size increases, error probability decreases. The sample size is calculated based on the primary hypothesis. The determination of a sample size before beginning the research specifies the power of the study. Power analysis enables the acquisition of realistic results in the research, and it is used for comparing two or more clinical research methods.

Because the formulas used for power analysis and sample size calculation differ between clinical research designs, it is convenient to use computer programs for these calculations.

It is necessary to know certain parameters in order to calculate the number of samples by power analysis.

  • Type-I (α) and type-II (β) error levels
  • Difference between groups (d-difference) and effect size (ES)
  • Allocation ratio of groups
  • Direction of research hypothesis (H1)

a. Type-I (α) and Type-II (β) Error Levels

Two types of errors can be made when accepting or rejecting the H 0 hypothesis in a hypothesis test. The type-I error (α) level is the probability of finding a difference at the end of the research when there is no difference between the two applications; in other words, it is the rejection of the H 0 hypothesis when it is actually correct, and it is known as the α error or p value. When the sample size is determined, the type-I error level is typically set at 0.05 or 0.01.

Another error that can be made during a hypothesis test is the type-II error: the acceptance of the H 0 hypothesis when it is actually wrong. In other words, it is the probability of failing to find a difference when a difference between the two applications truly exists. The power of a test is its ability to find a difference that actually exists; it is therefore directly related to the type-II error level.

Since the type-II error risk is expressed as β, the power of the test is defined as 1−β. When the type-II error is 0.20, the power of the test is 0.80. The type-I (α) and type-II (β) error levels may be chosen deliberately, for example when it is necessary to look at the events from the opposite perspective.

b. Difference between Groups and ES

The ES expresses whether a statistical difference also has clinical significance: an ES≥0.5 is desirable. The difference between groups is the absolute difference between the groups compared in the clinical research.

c. Allocation Ratio of Groups

The allocation ratio of the groups affects the required number of samples. To keep the total sample size at its lowest, the ratio should be kept at 1:1.

d. Direction of Hypothesis (H1)

The direction of the hypothesis in clinical research may be one-sided or two-sided. One-sided hypotheses test for a difference in a specified direction, whereas two-sided hypotheses test for a difference in either direction. For the same sample size, the power of the test is lower for two-sided hypotheses than for one-sided hypotheses.

After these four quantities are determined, they are entered into an appropriate computer program and the number of samples is calculated. Statistical packages such as Statistica, NCSS and G-Power may be used for power analysis and for calculating the number of samples. Other things being equal, power decreases when α, the difference between groups, the ES or the number of samples decreases, or when the standard deviation increases; the power of a two-sided test is also lower. It is ethically appropriate to determine the sample size at the beginning of the study, particularly in animal experiments.

The phase of the study is also important in determining the number of subjects to be included in drug studies. Phase-I studies are usually used to determine the safety profile of a drug or product, and they are generally conducted on a small number of healthy volunteers. If no unacceptable toxicity is detected during phase-I studies, phase-II studies may be carried out. Phase-II studies are proof-of-concept studies conducted on a larger number (100–500) of volunteer patients. When the effectiveness of the drug or product is evident in phase-II studies, phase-III studies can be initiated. These are randomised, double-blinded, placebo- or standard-treatment-controlled studies, in which volunteer patients are periodically followed up with respect to the effectiveness and side effects of the drug. A phase-III study generally lasts 1–4 years and is valuable for licensing and releasing the drug to the general market. Then, phase-IV studies begin, in which long-term safety (indication, dose, mode of application, safety, effectiveness, etc.) is investigated in thousands of volunteer patients.
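
As a sketch of such a calculation, the following hand-rolled function uses the common normal-approximation formula for comparing two means, n per group = 2(z α + z β )²/ES². It is not the output of any particular package, and the α, power and ES values are simply the conventional choices mentioned above:

```python
from math import ceil
from statistics import NormalDist

def n_per_group(alpha, power, effect_size, two_sided=True):
    """Approximate sample size per group for comparing two means
    (normal approximation; effect_size = difference between groups / SD)."""
    z = NormalDist()
    z_alpha = z.inv_cdf(1 - alpha / 2) if two_sided else z.inv_cdf(1 - alpha)
    z_beta = z.inv_cdf(power)  # power = 1 - beta
    return ceil(2 * (z_alpha + z_beta) ** 2 / effect_size ** 2)

# Conventional choices: two-sided alpha = 0.05, power = 0.80, ES = 0.5
print(n_per_group(0.05, 0.80, 0.5))                    # 63 participants per group
print(n_per_group(0.05, 0.80, 0.5, two_sided=False))   # 50 -- one-sided needs fewer
```

The formula also makes the relationships in the text visible: shrinking α or the effect size, or demanding more power, all drive the required sample size up.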

II. Blinding (Masking) and Randomisation Methods

When the methodology of clinical research is prepared, precautions should be taken to prevent bias. For this reason, techniques such as randomisation and blinding (masking) are used. Comparative studies are the most ideal in clinical research.

Blinding Method

Keeping the treatments applied to the participants of clinical research unknown is called the blinding method . If the participants do not know which treatment they receive, it is a single-blind study; if the researcher does not know either, it is a double-blind study. When the drug could be identified from its order of administration, having uninformed staff administer it is called in-house blinding. If the study drug is recognisable by its pharmaceutical form, a double-dummy design is used: one group receives the active intravenous drug together with a placebo tablet, while the comparison group receives a placebo intravenous preparation together with the active tablet. In this manner, each group receives both an intravenous form and a tablet form, and neither can tell which contains the active drug. When a third party involved in the study (such as the statistician) is also kept unaware of the treatment allocation, this is called third-party blinding.

Randomisation Method

The selection of patients for the study groups should be random. Randomisation methods are used for such selection, which prevent conscious or unconscious manipulations in the selection of patients ( 8 ).

During randomisation, no factor pertaining to the patient should favour the assignment of one treatment over the other. This characteristic is the most important difference separating randomised clinical studies from prospective and synchronous studies with experimental groups. Randomisation strengthens the study design and enables the production of reliable scientific knowledge ( 2 ).

The simplest method is simple randomisation, e.g. determining the type of anaesthesia to be administered by tossing a coin. When the number of samples is high, this method creates a balanced distribution; when the number of samples is low, an imbalance may arise between the groups. In that case, stratification and blocking have to be added to the randomisation. Stratification is the classification of patients one or more times according to prognostic features determined by the researcher, and blocking is the selection of a certain number of patients for each stratum. The number of strata should be determined at the beginning of the study.

As the number of strata increases, performing the study and balancing the groups become more difficult. For this reason, the stratification characteristics and their limits should be carefully determined at the beginning of the study. The strata need not have equal intervals. Despite all precautions, an imbalance might still exist between the groups at the start of the research; in such circumstances, post-stratification or restandardisation may be conducted according to the prognostic factors.
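
The contrast between simple randomisation and blocking can be sketched as follows (a minimal illustration, not a production allocation system; for stratified randomisation, the same block procedure would simply be run separately within each stratum):

```python
import random

random.seed(42)  # fixed seed so this sketch is reproducible

def simple_randomisation(n_patients):
    """Assign each patient to group A or B by a virtual coin toss."""
    return [random.choice("AB") for _ in range(n_patients)]

def block_randomisation(n_patients, block_size=4):
    """Permuted blocks: every consecutive block contains equal numbers of A and B,
    so the groups stay balanced even when the number of patients is small."""
    assignments = []
    while len(assignments) < n_patients:
        block = list("AB" * (block_size // 2))
        random.shuffle(block)
        assignments.extend(block)
    return assignments[:n_patients]

simple = simple_randomisation(12)   # may be unbalanced by chance
blocked = block_randomisation(12)   # exactly 6 A and 6 B
print(simple.count("A"), blocked.count("A"))
```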

The main characteristic of applying blinding (masking) and randomisation is the prevention of bias. Therefore, it is worthwhile to comprehensively examine bias at this stage.

Bias and Chicanery

While conducting clinical research, errors can be introduced voluntarily or involuntarily at a number of stages, such as design, population selection, calculation of the number of samples, non-compliance with the study protocol, data entry and selection of the statistical method. Bias is the taking of sides by individuals in line with their own decisions, views and ideological preferences ( 9 ). For an error to lead to bias, it has to be a systematic error. Systematic errors in controlled studies generally cause the results of one group to shift in a different direction from those of the other. It has to be understood that scientific research is generally prone to errors; however, random errors (in other words, ‘the luck factor’), in which any deviation is unintended, do not lead to bias ( 10 ).

Another issue, distinct from bias, is chicanery: voluntarily changing the interventions, results or data of patients in an unethical manner, or copying data from other studies. Bias, by contrast, is not necessarily introduced consciously.

If unexpected results or outliers are found while the study is being analysed, such data should, if possible, be re-examined rather than discarded, since the wholesale exclusion of data from a study endangers its reliability. In such a case, the evaluation should be made both with and without the outliers. If no difference is found, the outliers are unimportant. If there is a difference, the results including the outliers are re-evaluated; if no error is detected, the outliers are retained in the study (as an outlier may itself be a genuine result). It should be noted that in anaesthesiology, re-collecting the data is usually not possible.

Statistical evaluation methods should be determined at the design stage so that unexpected results are not encountered in the clinical research. In time-consuming studies involving many samples, the data may be evaluated before the end of the study, without going into detail; this is called an interim analysis . The date of the interim analysis should be determined at the beginning of the study. The purpose of an interim analysis is to prevent unnecessary cost and effort, since it may be necessary to stop the research after the interim analysis, e.g. in studies for which there is no possibility of validating the hypothesis at the end, or when unexpected side effects of the drug under study occur. At the interim analysis, the accuracy of the hypothesis and the number of samples are reassessed. The statistical significance level used in an interim analysis is very important: if the data are significant at this level, the hypothesis may be considered validated even if the result turns out to be insignificant after the date of the analysis.

Another important point is the necessity of concluding the participants’ treatment within the period specified in the study protocol. If the result of the study is achieved earlier, or unexpected situations develop, the treatment is concluded earlier. Moreover, a participant may leave the study of their own accord, may die, or unforeseeable situations (e.g. pregnancy) may develop. Participants can also quit the study whenever they want, even if the study has not ended ( 7 ).

When the results of a study contradict already known or expected results, the quality level expected of the study suggesting the contradiction may be set higher than that of studies supporting what is already known; this type of bias is called confirmation bias. The presence of well-known mechanisms, and logical inference from them, may create problems in the evaluation of data; this is called plausibility bias.

Another type of bias is expectation bias: a result that differs from the known results, or that runs against the editor’s expectations, may be challenged. Bias may also be introduced during the publication of studies, for example by publishing only positive results, selecting study results so as to support a particular view, or preventing publication; some editors may publish only research that reports positive results or the results they desire.

Bias may also be introduced for advertising or economic reasons. Economic pressure may be applied to the editor, particularly in studies involving drugs and new medical devices; this is called commercial bias.

In recent years, it has been recommended that studies be registered at www.clinicaltrials.gov before they begin, for the purposes of facilitating systematic interpretation and analysis in scientific research, informing other researchers, preventing bias, providing a standard reporting format, enhancing the contribution of research results to the general literature and enabling early institutional support. This Web site is a service of the US National Institutes of Health.

The last stage in the methodology of clinical studies is the selection of the intervention to be conducted. Placebo use has an important place among interventions. In Latin, placebo means ‘I shall please’. In the medical literature, it refers to substances that are not curative, contain no active ingredient and come in various pharmaceutical forms. Although placebos have no active drug characteristics, they have shown effective analgesic properties, particularly in algology applications; moreover, their use prevents bias in comparative studies. If a placebo has a positive impact on a participant, this is called the placebo effect ; if it has a negative impact, it is called the nocebo effect . Another type of intervention that can be used in clinical research is the sham application: although the researcher does not actually treat the patient, those who receive the therapy can be compared with those who undergo the sham. Sham therapies have also been shown to exhibit a placebo effect; they are used particularly in acupuncture applications ( 11 ). While a placebo is a substance, a sham is a type of clinical application.

Ethically, the patient has to receive appropriate therapy. For this reason, if placebo or sham use prevents effective treatment, it causes great problems with regard to patient health and the law.

Before medical research is conducted with human subjects, predictable risks, drawbacks and benefits must be evaluated for individuals or groups participating in the study. Precautions must be taken for reducing the risk to a minimum level. The risks during the study should be followed, evaluated and recorded by the researcher ( 1 ).

After the methodology for a clinical study is determined, approval by the ethics committee forms the next stage. The purpose of the ethics committee is to protect the rights, safety and well-being of the volunteers taking part in the clinical research, considering the scientific method and the concerns of society. The ethics committee examines the studies presented to it in a timely, comprehensive and independent manner, with regard to ethics and science, in line with the Declaration of Helsinki and with national and international standards concerning ‘Good Clinical Practice’. The ethics committee should be constituted without any kind of prejudice and should examine applications with regard to ethics and science within the framework of the Regulation on Clinical Trials and Good Clinical Practice ( www.iku.com ). The documents to be presented to the ethics committee are the research protocol, volunteer consent form, budget contract, Declaration of Helsinki, curricula vitae of the researchers, similar or explanatory literature samples, supporting institution approval certificate and patient follow-up form.

At most one member of the same immediate family (sibling, parent, child or spouse) can serve on the same ethics committee. A rector, vice rector, dean, deputy dean, provincial healthcare director or chief physician cannot be a member of an ethics committee.

Members of the ethics committee can work as researchers or coordinators in clinical research. However, during meetings concerning research in which they are researchers or coordinators, they must leave the session and cannot sign off on the decisions. If so many members of the ethics committee are involved in a particular piece of research that a decision cannot be taken, the clinical research is presented to another ethics committee in the same province; if there is no other ethics committee in the same province, an ethics committee in the closest settlement is approached.

Thereafter, the researchers need to inform the participants using an informed consent form. This form should explain the content of the clinical study, its potential benefits, the alternatives and the risks (if any). It should be simple, comprehensible, correctly spelled and written in plain language understandable by the participant.

This form assists the participants in deciding whether to take part in the study, and it should aim to protect them. A participant should be included in the study only after signing the informed consent form, and may quit the study at any time, even before the study has ended ( 7 ).

Peer-review: Externally peer-reviewed.

Author Contributions: Concept - C.Ö.Ç., A.D.; Design - C.Ö.Ç.; Supervision - A.D.; Resource - C.Ö.Ç., A.D.; Materials - C.Ö.Ç., A.D.; Analysis and/or Interpretation - C.Ö.Ç., A.D.; Literature Search - C.Ö.Ç.; Writing Manuscript - C.Ö.Ç.; Critical Review - A.D.; Other - C.Ö.Ç., A.D.

Conflict of Interest: No conflict of interest was declared by the authors.

Financial Disclosure: The authors declared that this study has received no financial support.

Scientific Research – Types, Purpose and Guide

Scientific Research

Definition:

Scientific research is the systematic and empirical investigation of phenomena, theories, or hypotheses, using various methods and techniques in order to acquire new knowledge or to validate existing knowledge.

It involves the collection, analysis, interpretation, and presentation of data, as well as the formulation and testing of hypotheses. Scientific research can be conducted in various fields, such as natural sciences, social sciences, and engineering, and may involve experiments, observations, surveys, or other forms of data collection. The goal of scientific research is to advance knowledge, improve understanding, and contribute to the development of solutions to practical problems.

Types of Scientific Research

There are different types of scientific research, which can be classified based on their purpose, method, and application. The four main types are discussed below.

Descriptive Research

Descriptive research aims to describe or document a particular phenomenon or situation, without altering it in any way. This type of research is usually done through observation, surveys, or case studies. Descriptive research is useful in generating ideas, understanding complex phenomena, and providing a foundation for future research. However, it does not provide explanations or causal relationships between variables.

Exploratory Research

Exploratory research aims to explore a new area of inquiry or develop initial ideas for future research. This type of research is usually conducted through observation, interviews, or focus groups. Exploratory research is useful in generating hypotheses, identifying research questions, and determining the feasibility of a larger study. However, it does not provide conclusive evidence or establish cause-and-effect relationships.

Experimental Research

Experimental research aims to test cause-and-effect relationships between variables by manipulating one variable and observing the effects on another variable. This type of research involves the use of an experimental group, which receives a treatment, and a control group, which does not receive the treatment. Experimental research is useful in establishing causal relationships, replicating results, and controlling extraneous variables. However, it may not be feasible or ethical to manipulate certain variables in some contexts.

Correlational Research

Correlational research aims to examine the relationship between two or more variables without manipulating them. This type of research involves the use of statistical techniques to determine the strength and direction of the relationship between variables. Correlational research is useful in identifying patterns, predicting outcomes, and testing theories. However, it does not establish causation or control for confounding variables.
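To make the idea concrete, Pearson's correlation coefficient r quantifies the strength and direction of a linear relationship between two variables. A minimal sketch in Python, using made-up study-hours and exam-score data (the numbers are illustrative, not from any real study):

```python
import math

def pearson_r(xs, ys):
    """Pearson correlation coefficient between two equal-length samples."""
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    # Covariance term (numerator) and the two standard-deviation terms (denominator)
    cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    sd_x = math.sqrt(sum((x - mean_x) ** 2 for x in xs))
    sd_y = math.sqrt(sum((y - mean_y) ** 2 for y in ys))
    return cov / (sd_x * sd_y)

# Hypothetical data: hours studied vs. exam score
hours = [1, 2, 3, 4, 5]
scores = [52, 55, 61, 64, 70]
print(round(pearson_r(hours, scores), 3))
```

A value near +1 or -1 indicates a strong linear relationship, and a value near 0 a weak one; as the text notes, even r close to 1 says nothing about which variable causes which.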

Scientific Research Methods

Scientific research methods are used in scientific research to investigate phenomena, acquire knowledge, and answer questions using empirical evidence. Here are some commonly used scientific research methods:

Observational Studies

This method involves observing and recording phenomena as they occur in their natural setting. It can be done through direct observation or by using tools such as cameras, microscopes, or sensors.

Experimental Studies

This method involves manipulating one or more variables to determine the effect on the outcome. This type of study is often used to establish cause-and-effect relationships.

Survey Research

This method involves collecting data from a large number of people by asking them a set of standardized questions. Surveys can be conducted in person, over the phone, or online.

Case Studies

This method involves in-depth analysis of a single individual, group, or organization. Case studies are often used to gain insights into complex or unusual phenomena.

Meta-analysis

This method involves combining data from multiple studies to arrive at a more reliable conclusion. This technique can be used to identify patterns and trends across a large number of studies.
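One common way to combine studies is fixed-effect, inverse-variance weighting: each study's effect size is weighted by the reciprocal of its variance, so more precise studies count for more. A minimal sketch with invented effect sizes and variances (not from any real meta-analysis):

```python
def inverse_variance_pooled(effects, variances):
    """Fixed-effect meta-analysis: pool effect sizes weighted by 1/variance."""
    weights = [1.0 / v for v in variances]
    pooled = sum(w * e for w, e in zip(weights, effects)) / sum(weights)
    pooled_var = 1.0 / sum(weights)  # variance of the pooled estimate
    return pooled, pooled_var

# Hypothetical effect sizes (mean differences) and variances from three studies
effects = [0.30, 0.45, 0.25]
variances = [0.04, 0.09, 0.02]
pooled, var = inverse_variance_pooled(effects, variances)
print(round(pooled, 3), round(var, 4))
```

Note that the pooled variance is smaller than any single study's variance, which is the sense in which combining studies yields a more reliable conclusion.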

Qualitative Research

This method involves collecting and analyzing non-numerical data, such as interviews, focus groups, or observations. This type of research is often used to explore complex phenomena and to gain an understanding of people’s experiences and perspectives.

Quantitative Research

This method involves collecting and analyzing numerical data using statistical techniques. This type of research is often used to test hypotheses and to establish cause-and-effect relationships.

Longitudinal Studies

This method involves following a group of individuals over a period of time to observe changes and to identify patterns and trends. This type of study can be used to investigate the long-term effects of a particular intervention or exposure.

Data Analysis Methods

There are many different data analysis methods used in scientific research, and the choice of method depends on the type of data being collected and the research question. Here are some commonly used data analysis methods:

  • Descriptive statistics: This involves using summary statistics such as mean, median, mode, standard deviation, and range to describe the basic features of the data.
  • Inferential statistics: This involves using statistical tests to make inferences about a population based on a sample of data. Examples of inferential statistics include t-tests, ANOVA, and regression analysis.
  • Qualitative analysis: This involves analyzing non-numerical data such as interviews, focus groups, and observations. Qualitative analysis may involve identifying themes, patterns, or categories in the data.
  • Content analysis: This involves analyzing the content of written or visual materials such as articles, speeches, or images. Content analysis may involve identifying themes, patterns, or categories in the content.
  • Data mining: This involves using automated methods to analyze large datasets to identify patterns, trends, or relationships in the data.
  • Machine learning: This involves using algorithms to analyze data and make predictions or classifications based on the patterns identified in the data.
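As a small illustration of the first two items in the list above, the sketch below computes descriptive statistics for two hypothetical groups and then a Welch two-sample t statistic (an inferential statistic that does not assume equal variances). The data are invented for demonstration:

```python
import math
from statistics import mean, stdev

# Hypothetical scores from a treatment group and a control group
treatment = [78, 82, 85, 88, 90, 91]
control = [70, 74, 75, 79, 80, 83]

# Descriptive statistics: summarize each sample
print("treatment mean:", round(mean(treatment), 2), "sd:", round(stdev(treatment), 2))
print("control mean:", round(mean(control), 2), "sd:", round(stdev(control), 2))

# Inferential statistics: Welch's two-sample t statistic
def welch_t(a, b):
    """t = difference in means divided by its standard error."""
    se = math.sqrt(stdev(a) ** 2 / len(a) + stdev(b) ** 2 / len(b))
    return (mean(a) - mean(b)) / se

print("t =", round(welch_t(treatment, control), 2))
```

In practice the t statistic would be compared against a t distribution (or a library routine would report a p-value) to decide whether the group difference is larger than chance would explain.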

Application of Scientific Research

Scientific research has numerous applications in many fields, including:

  • Medicine and healthcare: Scientific research is used to develop new drugs, medical treatments, and vaccines. It is also used to understand the causes and risk factors of diseases, as well as to develop new diagnostic tools and medical devices.
  • Agriculture: Scientific research is used to develop new crop varieties, to improve crop yields, and to develop more sustainable farming practices.
  • Technology and engineering: Scientific research is used to develop new technologies and engineering solutions, such as renewable energy systems, new materials, and advanced manufacturing techniques.
  • Environmental science: Scientific research is used to understand the impacts of human activity on the environment and to develop solutions for mitigating those impacts. It is also used to monitor and manage natural resources, such as water and air quality.
  • Education: Scientific research is used to develop new teaching methods and educational materials, as well as to understand how people learn and develop.
  • Business and economics: Scientific research is used to understand consumer behavior, to develop new products and services, and to analyze economic trends and policies.
  • Social sciences: Scientific research is used to understand human behavior, attitudes, and social dynamics. It is also used to develop interventions to improve social welfare and to inform public policy.

How to Conduct Scientific Research

Conducting scientific research involves several steps, including:

  • Identify a research question: Start by identifying a question or problem that you want to investigate. This question should be clear, specific, and relevant to your field of study.
  • Conduct a literature review: Before starting your research, conduct a thorough review of existing research in your field. This will help you identify gaps in knowledge and develop hypotheses or research questions.
  • Develop a research plan: Once you have a research question, develop a plan for how you will collect and analyze data to answer that question. This plan should include a detailed methodology, a timeline, and a budget.
  • Collect data: Depending on your research question and methodology, you may collect data through surveys, experiments, observations, or other methods.
  • Analyze data: Once you have collected your data, analyze it using appropriate statistical or qualitative methods. This will help you draw conclusions about your research question.
  • Interpret results: Based on your analysis, interpret your results and draw conclusions about your research question. Discuss any limitations or implications of your findings.
  • Communicate results: Finally, communicate your findings to others in your field through presentations, publications, or other means.

Purpose of Scientific Research

The purpose of scientific research is to systematically investigate phenomena, acquire new knowledge, and advance our understanding of the world around us. Scientific research has several key goals, including:

  • Exploring the unknown: Scientific research is often driven by curiosity and the desire to explore uncharted territory. Scientists investigate phenomena that are not well understood, in order to discover new insights and develop new theories.
  • Testing hypotheses: Scientific research involves developing hypotheses or research questions, and then testing them through observation and experimentation. This allows scientists to evaluate the validity of their ideas and refine their understanding of the phenomena they are studying.
  • Solving problems: Scientific research is often motivated by the desire to solve practical problems or address real-world challenges. For example, researchers may investigate the causes of a disease in order to develop new treatments, or explore ways to make renewable energy more affordable and accessible.
  • Advancing knowledge: Scientific research is a collective effort to advance our understanding of the world around us. By building on existing knowledge and developing new insights, scientists contribute to a growing body of knowledge that can be used to inform decision-making, solve problems, and improve our lives.

Examples of Scientific Research

Here are some examples of scientific research that are currently ongoing or have recently been completed:

  • Clinical trials for new treatments: Scientific research in the medical field often involves clinical trials to test new treatments for diseases and conditions. For example, clinical trials may be conducted to evaluate the safety and efficacy of new drugs or medical devices.
  • Genomics research: Scientists are conducting research to better understand the human genome and its role in health and disease. This includes research on genetic mutations that can cause diseases such as cancer, as well as the development of personalized medicine based on an individual’s genetic makeup.
  • Climate change: Scientific research is being conducted to understand the causes and impacts of climate change, as well as to develop solutions for mitigating its effects. This includes research on renewable energy technologies, carbon capture and storage, and sustainable land use practices.
  • Neuroscience: Scientists are conducting research to understand the workings of the brain and the nervous system, with the goal of developing new treatments for neurological disorders such as Alzheimer’s disease and Parkinson’s disease.
  • Artificial intelligence: Researchers are working to develop new algorithms and technologies to improve the capabilities of artificial intelligence systems. This includes research on machine learning, computer vision, and natural language processing.
  • Space exploration: Scientific research is being conducted to explore the cosmos and learn more about the origins of the universe. This includes research on exoplanets, black holes, and the search for extraterrestrial life.

When to use Scientific Research

Some specific situations where scientific research may be particularly useful include:

  • Solving problems: Scientific research can be used to investigate practical problems or address real-world challenges. For example, scientists may investigate the causes of a disease in order to develop new treatments, or explore ways to make renewable energy more affordable and accessible.
  • Decision-making: Scientific research can provide evidence-based information to inform decision-making. For example, policymakers may use scientific research to evaluate the effectiveness of different policy options or to make decisions about public health and safety.
  • Innovation: Scientific research can be used to develop new technologies, products, and processes. For example, research on materials science can lead to the development of new materials with unique properties that can be used in a range of applications.
  • Knowledge creation: Scientific research is an important way of generating new knowledge and advancing our understanding of the world around us. This can lead to new theories, insights, and discoveries that can benefit society.

Advantages of Scientific Research

There are many advantages of scientific research, including:

  • Improved understanding: Scientific research allows us to gain a deeper understanding of the world around us, from the smallest subatomic particles to the largest celestial bodies.
  • Evidence-based decision making: Scientific research provides evidence-based information that can inform decision-making in many fields, from public policy to medicine.
  • Technological advancements: Scientific research drives technological advancements in fields such as medicine, engineering, and materials science. These advancements can improve quality of life, increase efficiency, and reduce costs.
  • New discoveries: Scientific research can lead to new discoveries and breakthroughs that can advance our knowledge in many fields. These discoveries can lead to new theories, technologies, and products.
  • Economic benefits: Scientific research can stimulate economic growth by creating new industries and jobs, and by generating new technologies and products.
  • Improved health outcomes: Scientific research can lead to the development of new medical treatments and technologies that can improve health outcomes and quality of life for people around the world.
  • Increased innovation: Scientific research encourages innovation by promoting collaboration, creativity, and curiosity. This can lead to new and unexpected discoveries that can benefit society.

Limitations of Scientific Research

Scientific research has some limitations that researchers should be aware of. These limitations can include:

  • Research design limitations: The design of a research study can impact the reliability and validity of the results. Poorly designed studies can lead to inaccurate or inconclusive results. Researchers must carefully consider the study design to ensure that it is appropriate for the research question and the population being studied.
  • Sample size limitations: The size of the sample being studied can impact the generalizability of the results. Small sample sizes may not be representative of the larger population, and may lead to incorrect conclusions.
  • Time and resource limitations: Scientific research can be costly and time-consuming. Researchers may not have the resources necessary to conduct a large-scale study, or may not have sufficient time to complete a study with appropriate controls and analysis.
  • Ethical limitations: Certain types of research may raise ethical concerns, such as studies involving human or animal subjects. Ethical concerns may limit the scope of the research that can be conducted, or require additional protocols and procedures to ensure the safety and well-being of participants.
  • Limitations of technology: Technology may limit the types of research that can be conducted, or the accuracy of the data collected. For example, certain types of research may require advanced technology that is not yet available, or may be limited by the accuracy of current measurement tools.
  • Limitations of existing knowledge: Existing knowledge may limit the types of research that can be conducted. For example, if there is limited knowledge in a particular field, it may be difficult to design a study that can provide meaningful results.

About the author


Muhammad Hassan

Researcher, Academic Writer, Web developer


Science and the scientific method: Definitions and examples

Here's a look at the foundation of doing science — the scientific method.

The scientific method

Science is a systematic and logical approach to discovering how things in the universe work. It is also the body of knowledge accumulated through the discoveries about all the things in the universe. 

The word "science" is derived from the Latin word "scientia," which means knowledge based on demonstrable and reproducible data, according to the Merriam-Webster dictionary. True to this definition, science aims for measurable results through testing and analysis, a process known as the scientific method. Science is based on fact, not opinion or preferences. The process of science is designed to challenge ideas through research. One important aspect of the scientific process is that it focuses only on the natural world, according to the University of California, Berkeley. Anything that is considered supernatural, or beyond physical reality, does not fit into the definition of science.

When conducting research, scientists use the scientific method to collect measurable, empirical evidence in an experiment related to a hypothesis (often in the form of an if/then statement) that is designed to support or contradict a scientific theory.

"As a field biologist, my favorite part of the scientific method is being in the field collecting the data," Jaime Tanner, a professor of biology at Marlboro College, told Live Science. "But what really makes that fun is knowing that you are trying to answer an interesting question. So the first step in identifying questions and generating possible answers (hypotheses) is also very important and is a creative process. Then once you collect the data you analyze it to see if your hypothesis is supported or not."


The steps of the scientific method go something like this, according to Highline College:

  • Make an observation or observations.
  • Form a hypothesis — a tentative description of what's been observed, and make predictions based on that hypothesis.
  • Test the hypothesis and predictions in an experiment that can be reproduced.
  • Analyze the data and draw conclusions; accept or reject the hypothesis or modify the hypothesis if necessary.
  • Reproduce the experiment until there are no discrepancies between observations and theory. "Replication of methods and results is my favorite step in the scientific method," Moshe Pritsker, a former post-doctoral researcher at Harvard Medical School and CEO of JoVE, told Live Science. "The reproducibility of published experiments is the foundation of science. No reproducibility — no science."

Some key underpinnings to the scientific method:

  • The hypothesis must be testable and falsifiable, according to North Carolina State University. Falsifiable means that there must be a possible negative answer to the hypothesis.
  • Research must involve deductive reasoning and inductive reasoning. Deductive reasoning is the process of using true premises to reach a logical true conclusion, while inductive reasoning uses observations to infer an explanation for those observations.
  • An experiment should include an independent variable (which the researcher deliberately changes) and a dependent variable (which is measured and may change in response), according to the University of California, Santa Barbara.
  • An experiment should include an experimental group and a control group. The control group is what the experimental group is compared against, according to Britannica.

The process of generating and testing a hypothesis forms the backbone of the scientific method. When an idea has been confirmed over many experiments, it can be called a scientific theory. While a theory provides an explanation for a phenomenon, a scientific law provides a description of a phenomenon, according to The University of Waikato. One example is the law of conservation of energy, the first law of thermodynamics, which states that energy can neither be created nor destroyed.

A law describes an observed phenomenon, but it doesn't explain why the phenomenon exists or what causes it. "In science, laws are a starting place," said Peter Coppinger, an associate professor of biology and biomedical engineering at the Rose-Hulman Institute of Technology. "From there, scientists can then ask the questions, 'Why and how?'"

Laws are generally considered to be without exception, though some laws have been modified over time after further testing found discrepancies. For instance, Newton's laws of motion describe everything we've observed in the macroscopic world, but they break down at the subatomic level.

This does not mean theories are not meaningful. For a hypothesis to become a theory, scientists must conduct rigorous testing, typically across multiple disciplines by separate groups of scientists. Saying something is "just a theory" confuses the scientific definition of "theory" with the layperson's definition. To most people a theory is a hunch. In science, a theory is the framework for observations and facts, Tanner told Live Science.

This Copernican heliocentric solar system, from 1708, shows the orbit of the moon around the Earth, and the orbits of the Earth and planets round the sun, including Jupiter and its moons, all surrounded by the 12 signs of the zodiac.

The earliest evidence of science can be found as far back as records exist. Early tablets contain numerals and information about the solar system, which were derived by using careful observation, prediction and testing of those predictions. Science became decidedly more "scientific" over time, however.

1200s: Robert Grosseteste developed the framework for the proper methods of modern scientific experimentation, according to the Stanford Encyclopedia of Philosophy. His works included the principle that an inquiry must be based on measurable evidence that is confirmed through testing.

1400s: Leonardo da Vinci began his notebooks in pursuit of evidence that the human body is microcosmic. The artist, scientist and mathematician also gathered information about optics and hydrodynamics.

1500s: Nicolaus Copernicus advanced the understanding of the solar system with his discovery of heliocentrism. This is a model in which Earth and the other planets revolve around the sun, which is the center of the solar system.

1600s: Johannes Kepler built upon those observations with his laws of planetary motion. Galileo Galilei improved on a new invention, the telescope, and used it to study the sun and planets. The 1600s also saw advancements in the study of physics as Isaac Newton developed his laws of motion.

1700s: Benjamin Franklin discovered that lightning is electrical. He also contributed to the study of oceanography and meteorology. The understanding of chemistry also evolved during this century as Antoine Lavoisier, dubbed the father of modern chemistry, developed the law of conservation of mass.

1800s: Milestones included Alessandro Volta's discoveries regarding the electrochemical series, which led to the invention of the battery. John Dalton also introduced atomic theory, which stated that all matter is composed of atoms that combine to form molecules. The basis of the modern study of genetics advanced as Gregor Mendel unveiled his laws of inheritance. Later in the century, Wilhelm Conrad Röntgen discovered X-rays, while Georg Ohm's law provided the basis for understanding how to harness electrical charges.

1900s: The discoveries of Albert Einstein, who is best known for his theory of relativity, dominated the beginning of the 20th century. Einstein's theory of relativity is actually two separate theories. His special theory of relativity, which he outlined in a 1905 paper, "The Electrodynamics of Moving Bodies," concluded that time must change according to the speed of a moving object relative to the frame of reference of an observer. His second theory of general relativity, which he published as "The Foundation of the General Theory of Relativity," advanced the idea that matter causes space to curve.

In 1952, Jonas Salk developed the polio vaccine, which reduced the incidence of polio in the United States by nearly 90%, according to Britannica. The following year, James D. Watson and Francis Crick discovered the structure of DNA, which is a double helix formed by base pairs attached to a sugar-phosphate backbone, according to the National Human Genome Research Institute.

2000s: The 21st century saw the first draft of the human genome completed, leading to a greater understanding of DNA. This advanced the study of genetics, its role in human biology and its use as a predictor of diseases and other disorders, according to the National Human Genome Research Institute.

  • This video from City University of New York delves into the basics of what defines science.
  • Learn about what makes science science in this book excerpt from Washington State University.
  • This resource from the University of Michigan–Flint explains how to design your own scientific study.

Merriam-Webster Dictionary, Scientia. 2022. https://www.merriam-webster.com/dictionary/scientia

University of California, Berkeley, "Understanding Science: An Overview." 2022. https://undsci.berkeley.edu/article/0_0_0/intro_01

Highline College, "Scientific method." July 12, 2015. https://people.highline.edu/iglozman/classes/astronotes/scimeth.htm  

North Carolina State University, "Science Scripts." https://projects.ncsu.edu/project/bio183de/Black/science/science_scripts.html  

University of California, Santa Barbara, "What is an Independent variable?" October 31, 2017. http://scienceline.ucsb.edu/getkey.php?key=6045

Encyclopedia Britannica, "Control group." May 14, 2020. https://www.britannica.com/science/control-group  

The University of Waikato, "Scientific Hypothesis, Theories and Laws." https://sci.waikato.ac.nz/evolution/Theories.shtml  

Stanford Encyclopedia of Philosophy, Robert Grosseteste. May 3, 2019. https://plato.stanford.edu/entries/grosseteste/  

Encyclopedia Britannica, "Jonas Salk." October 21, 2021. https://www.britannica.com/biography/Jonas-Salk

National Human Genome Research Institute, "​Phosphate Backbone." https://www.genome.gov/genetics-glossary/Phosphate-Backbone  

National Human Genome Research Institute, "What is the Human Genome Project?" https://www.genome.gov/human-genome-project/What  

‌ Live Science contributor Ashley Hamer updated this article on Jan. 16, 2022.


Alina Bradford



The scientific method

Introduction

  • Make an observation.
  • Ask a question.
  • Form a hypothesis , or testable explanation.
  • Make a prediction based on the hypothesis.
  • Test the prediction.
  • Iterate: use the results to make new hypotheses or predictions.

Scientific method example: Failure to toast

1. Make an observation.

  • Observation: the toaster won't toast.

2. Ask a question.

  • Question: Why won't my toaster toast?

3. Propose a hypothesis.

  • Hypothesis: Maybe the outlet is broken.

4. Make predictions.

  • Prediction: If I plug the toaster into a different outlet, then it will toast the bread.

5. Test the predictions.

  • Test of prediction: Plug the toaster into a different outlet and try again.
  • If the toaster does toast, then the hypothesis is supported—likely correct.
  • If the toaster doesn't toast, then the hypothesis is not supported—likely wrong.

6. Iterate.

  • Iteration time!
  • If the hypothesis was supported, we might do additional tests to confirm it, or revise it to be more specific. For instance, we might investigate why the outlet is broken.
  • If the hypothesis was not supported, we would come up with a new hypothesis. For instance, the next hypothesis might be that there's a broken wire in the toaster.


Scientific Method

Illustration by J.R. Bee. ThoughtCo. 


The scientific method is a series of steps followed by scientific investigators to answer specific questions about the natural world. It involves making observations, formulating a hypothesis, and conducting scientific experiments. Scientific inquiry starts with an observation followed by the formulation of a question about what has been observed. The steps of the scientific method are as follows:

Observation

The first step of the scientific method involves making an observation about something that interests you. This is very important if you are doing a science project because you want your project to be focused on something that will hold your attention. Your observation can be on anything from plant movement to animal behavior, as long as it is something you really want to know more about. This is where you come up with the idea for your science project.

Question

Once you've made your observation, you must formulate a question about what you have observed. Your question should tell what it is that you are trying to discover or accomplish in your experiment. When stating your question you should be as specific as possible. For example, if you are doing a project on plants, you may want to know how plants interact with microbes. Your question may be: Do plant spices inhibit bacterial growth?

Hypothesis

The hypothesis is a key component of the scientific process. A hypothesis is an idea that is suggested as an explanation for a natural event, a particular experience, or a specific condition that can be tested through definable experimentation. It states the purpose of your experiment, the variables used, and the predicted outcome of your experiment. It is important to note that a hypothesis must be testable: you should be able to support or falsify it through experimentation. An example of a good hypothesis is: If there is a relation between listening to music and heart rate, then listening to music will cause a person's resting heart rate to either increase or decrease.

Experiment

Once you've developed a hypothesis, you must design and conduct an experiment that will test it. You should develop a procedure that states very clearly how you plan to conduct your experiment, and that procedure should identify your variables. The independent variable is the one thing you deliberately change, the dependent variable is what you measure in response, and controls are conditions that are kept unchanged. Because controls do not change, they let you test a single variable at a time: you can compare the control condition against the condition where the independent variable changed and draw an accurate conclusion.
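The role of the control can be sketched with the plant-spice question from the Question step. The two conditions below are identical except for the single independent variable (spice present or absent); the dependent variable is the bacterial colony count. All values are hypothetical.

```python
# Hypothetical controlled experiment: does a spice extract inhibit
# bacterial growth? Only one variable differs between the conditions.
control = {"spice": False, "temp_c": 37, "hours": 24, "colonies": 112}
treatment = {"spice": True, "temp_c": 37, "hours": 24, "colonies": 41}

# Sanity check: everything except the independent variable is held constant.
for factor in ("temp_c", "hours"):
    assert control[factor] == treatment[factor]

# Compare the dependent variable across the two conditions.
inhibition = 1 - treatment["colonies"] / control["colonies"]
print(f"Colony count reduced by {inhibition:.0%} relative to control")
```

Because only the spice differs, any difference in colony count can be attributed to that one variable rather than to temperature or incubation time.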

Results

The results are where you report what happened in the experiment. That includes detailing all observations made and data collected during your experiment. Most people find it easier to visualize the data by charting or graphing the information.
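Even a plain-text chart can make a trend easier to see than a list of numbers. A small Python sketch using made-up plant-growth data:

```python
def bar_chart(data):
    """Render labeled measurements as text bars so trends stand out."""
    return [f"{label} | {'#' * value} ({value} cm)" for label, value in data.items()]

# Hypothetical results: plant height measured once per day.
results = {"Day 1": 2, "Day 2": 5, "Day 3": 9, "Day 4": 14, "Day 5": 21}
print("\n".join(bar_chart(results)))
```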

Conclusion

The final step of the scientific method is developing a conclusion. This is where all of the results from the experiment are analyzed and a determination is reached about the hypothesis. Did the experiment support or reject your hypothesis? If your hypothesis was supported, great. If not, repeat the experiment or think of ways to improve your procedure.
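One standard way to decide whether the data support a hypothesis is a permutation test: if randomly relabeling the measurements rarely reproduces a difference as large as the observed one, chance alone is an unlikely explanation. A Python sketch with invented data:

```python
import random
from statistics import mean

def permutation_p_value(a, b, trials=10_000, seed=0):
    """Estimate how often random relabeling of the pooled data yields a
    group difference at least as large as the observed one."""
    rng = random.Random(seed)
    observed = abs(mean(a) - mean(b))
    pooled = list(a) + list(b)
    hits = 0
    for _ in range(trials):
        rng.shuffle(pooled)
        diff = abs(mean(pooled[:len(a)]) - mean(pooled[len(a):]))
        if diff >= observed:
            hits += 1
    return hits / trials

# Invented colony counts for a control and a treatment group.
control = [112, 105, 118, 109, 114]
treatment = [41, 52, 47, 39, 50]

p = permutation_p_value(control, treatment)
print(f"p = {p:.4f}")  # a small p suggests the difference is not due to chance
```

A conventional cutoff is p < 0.05, but the cutoff is a judgment call, not a law of nature.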


National Institutes of Health (NIH) - Turning Discovery into Health


Science, health, and public trust.

September 8, 2021

Explaining How Research Works

[Infographic: Understanding Research]

We’ve heard “follow the science” a lot during the pandemic. But it seems science has taken us on a long and winding road filled with twists and turns, even changing directions at times. That’s led some people to feel they can’t trust science. But when what we know changes, it often means science is working.


Explaining the scientific process may be one way that science communicators can help maintain public trust in science. Placing research in the bigger context of its field and where it fits into the scientific process can help people better understand and interpret new findings as they emerge. A single study usually uncovers only a piece of a larger puzzle.

Questions about how the world works are often investigated on many different levels. For example, scientists can look at the different atoms in a molecule, cells in a tissue, or how different tissues or systems affect each other. Researchers often must choose one or a finite number of ways to investigate a question. It can take many different studies using different approaches to start piecing the whole picture together.

Sometimes it might seem like research results contradict each other. But often, studies are just looking at different aspects of the same problem. Researchers can also investigate a question using different techniques or timeframes. That may lead them to arrive at different conclusions from the same data.

Using the data available at the time of their study, scientists develop different explanations, or models. New information may mean that a novel model needs to be developed to account for it. The models that prevail are those that can withstand the test of time and incorporate new information. Science is a constantly evolving and self-correcting process.

Scientists gain more confidence about a model through the scientific process. They replicate each other’s work. They present at conferences. And papers undergo peer review, in which experts in the field review the work before it can be published in scientific journals. This helps ensure that the study is up to current scientific standards and maintains a level of integrity. Peer reviewers may find problems with the experiments or think different experiments are needed to justify the conclusions. They might even offer new ways to interpret the data.

It’s important for science communicators to consider which stage a study is at in the scientific process when deciding whether to cover it. Some studies are posted on preprint servers for other scientists to start weighing in on and haven’t yet been fully vetted. Results that haven't yet been subjected to scientific scrutiny should be reported on with care and context to avoid confusion or frustration from readers.

We’ve developed a one-page guide, "How Research Works: Understanding the Process of Science" to help communicators put the process of science into perspective. We hope it can serve as a useful resource to help explain why science changes—and why it’s important to expect that change. Please take a look and share your thoughts with us by sending an email to  [email protected].

Below are some additional resources:

  • Discoveries in Basic Science: A Perfectly Imperfect Process
  • When Clinical Research Is in the News
  • What is Basic Science and Why is it Important?
  • What is a Research Organism?
  • What Are Clinical Trials and Studies?
  • Basic Research – Digital Media Kit
  • Decoding Science: How Does Science Know What It Knows? (NAS)
  • Can Science Help People Make Decisions? (NAS)


1.2 Scientific Research in Psychology

Learning Objectives

  • Describe a general model of scientific research in psychology and give specific examples that fit the model.
  • Explain who conducts scientific research in psychology and why they do it.
  • Distinguish between basic research and applied research.

A Model of Scientific Research in Psychology

Figure 1.2 “A Simple Model of Scientific Research in Psychology” presents a more specific model of scientific research in psychology. The researcher (who more often than not is really a small group of researchers) formulates a research question, conducts a study designed to answer the question, analyzes the resulting data, draws conclusions about the answer to the question, and publishes the results so that they become part of the research literature. Because the research literature is one of the primary sources of new research questions, this process can be thought of as a cycle. New research leads to new questions, which lead to new research, and so on. Figure 1.2 “A Simple Model of Scientific Research in Psychology” also indicates that research questions can originate outside of this cycle either with informal observations or with practical problems that need to be solved. But even in these cases, the researcher would start by checking the research literature to see if the question had already been answered and to refine it based on what previous research had already found.

Figure 1.2 A Simple Model of Scientific Research in Psychology


The research by Mehl and his colleagues is described nicely by this model. Their question—whether women are more talkative than men—was suggested to them both by people’s stereotypes and by published claims about the relative talkativeness of women and men. When they checked the research literature, however, they found that this question had not been adequately addressed in scientific studies. They conducted a careful empirical study, analyzed the results (finding very little difference between women and men), and published their work so that it became part of the research literature. The publication of their article is not the end of the story, however, because their work suggests many new questions (about the reliability of the result, about potential cultural differences, etc.) that will likely be taken up by them and by other researchers inspired by their work.

[Photo: A woman using her cell phone while driving]

Scientific research has confirmed that cell phone use impairs a variety of driving behaviors.

Indiana Stan – CC BY-NC 2.0.

As another example, consider that as cell phones became more widespread during the 1990s, people began to wonder whether, and to what extent, cell phone use had a negative effect on driving. Many psychologists decided to tackle this question scientifically (Collet, Guillot, & Petit, 2010). It was clear from previously published research that engaging in a simple verbal task impairs performance on a perceptual or motor task carried out at the same time, but no one had studied the effect specifically of cell phone use on driving. Under carefully controlled conditions, these researchers compared people’s driving performance while using a cell phone with their performance while not using a cell phone, both in the lab and on the road. They found that people’s ability to detect road hazards, reaction time, and control of the vehicle were all impaired by cell phone use. Each new study was published and became part of the growing research literature on this topic.

Who Conducts Scientific Research in Psychology?

Scientific research in psychology is generally conducted by people with doctoral degrees (usually the doctor of philosophy [PhD]) and master’s degrees in psychology and related fields, often supported by research assistants with bachelor’s degrees or other relevant training. Some of them work for government agencies (e.g., the National Institute of Mental Health), for nonprofit organizations (e.g., the American Cancer Society), or in the private sector (e.g., in product development). However, the majority of them are college and university faculty, who often collaborate with their graduate and undergraduate students. Although some researchers are trained and licensed as clinicians—especially those who conduct research in clinical psychology—the majority are not. Instead, they have expertise in one or more of the many other subfields of psychology: behavioral neuroscience, cognitive psychology, developmental psychology, personality psychology, social psychology, and so on. Doctoral-level researchers might be employed to conduct research full-time or, like many college and university faculty members, to conduct research in addition to teaching classes and serving their institution and community in other ways.

Of course, people also conduct research in psychology because they enjoy the intellectual and technical challenges involved and the satisfaction of contributing to scientific knowledge of human behavior. You might find that you enjoy the process too. If so, your college or university might offer opportunities to get involved in ongoing research as either a research assistant or a participant. Of course, you might find that you do not enjoy the process of conducting scientific research in psychology. But at least you will have a better understanding of where scientific knowledge in psychology comes from, an appreciation of its strengths and limitations, and an awareness of how it can be applied to solve practical problems in psychology and everyday life.

Scientific Psychology Blogs

A fun and easy way to follow current scientific research in psychology is to read any of the many excellent blogs devoted to summarizing and commenting on new findings. Among them are the following:

  • Child-Psych, http://www.child-psych.org
  • PsyBlog, http://www.spring.org.uk
  • Research Digest, http://bps-research-digest.blogspot.com
  • Social Psychology Eye, http://socialpsychologyeye.wordpress.com
  • We’re Only Human, http://www.psychologicalscience.org/onlyhuman

You can also browse to http://www.researchblogging.org, select psychology as your topic, and read entries from a wide variety of blogs.

The Broader Purposes of Scientific Research in Psychology

People have always been curious about the natural world, including themselves and their behavior. (In fact, this is probably why you are studying psychology in the first place.) Science grew out of this natural curiosity and has become the best way to achieve detailed and accurate knowledge. Keep in mind that most of the phenomena and theories that fill psychology textbooks are the products of scientific research. In a typical introductory psychology textbook, for example, one can learn about specific cortical areas for language and perception, principles of classical and operant conditioning, biases in reasoning and judgment, and people’s surprising tendency to obey authority. And scientific research continues because what we know right now only scratches the surface of what we can know.

Scientific research is often classified as being either basic or applied. Basic research in psychology is conducted primarily for the sake of achieving a more detailed and accurate understanding of human behavior, without necessarily trying to address any particular practical problem. The research of Mehl and his colleagues falls into this category. Applied research is conducted primarily to address some practical problem. Research on the effects of cell phone use on driving, for example, was prompted by safety concerns and has led to the enactment of laws to limit this practice. Although the distinction between basic and applied research is convenient, it is not always clear-cut. For example, basic research on sex differences in talkativeness could eventually have an effect on how marriage therapy is practiced, and applied research on the effect of cell phone use on driving could produce new insights into basic processes of perception, attention, and action.

Key Takeaways

  • Research in psychology can be described by a simple cyclical model. A research question based on the research literature leads to an empirical study, the results of which are published and become part of the research literature.
  • Scientific research in psychology is conducted mainly by people with doctoral degrees in psychology and related fields, most of whom are college and university faculty members. They do so for professional and for personal reasons, as well as to contribute to scientific knowledge about human behavior.
  • Basic research is conducted to learn about human behavior for its own sake, and applied research is conducted to solve some practical problem. Both are valuable, and the distinction between the two is not always clear-cut.
  • Practice: Find a description of an empirical study in a professional journal or in one of the scientific psychology blogs. Then write a brief description of the research in terms of the cyclical model presented here. One or two sentences for each part of the cycle should suffice.
  • Practice: Based on your own experience or on things you have already learned about psychology, list three basic research questions and three applied research questions of interest to you.

Collet, C., Guillot, A., & Petit, C. (2010). Phoning while driving I: A review of epidemiological, psychological, behavioural and physiological studies. Ergonomics, 53 , 589–601.

Research Methods in Psychology Copyright © 2016 by University of Minnesota is licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License , except where otherwise noted.


Understanding Science

How science REALLY works...

  • Scientific ideas lead to ongoing research.
  • Answering one scientific question frequently leads to additional questions to be investigated.

Misconception:  Science is complete.

Correction:  Science is an ongoing process. There is much more yet to learn.  Read more about it.

Scientific ideas lead to ongoing research

Most typically in science, answering one question inspires deeper and more detailed questions for further research. Similarly, coming up with a fruitful idea to explain a previously anomalous observation frequently leads to new expectations and areas of research. So, in a sense, the more we know, the more aware we become of what we don’t yet understand. For example, James Watson and Francis Crick’s proposal (based on evidence collected by Rosalind Franklin) that DNA takes the form of a double helix helped answer a burning question in biology about the chemical structure of DNA. While it helped answer one question, it also generated new expectations (e.g., that DNA is copied via base pairing), raised many new questions (e.g., how does DNA store information?), and contributed to whole new fields of research (e.g., genetic engineering). Like this work on the structure of DNA, most scientific research generates new expectations, inspires new questions, and leads to new discoveries.

A SCIENCE PROTOTYPE: RUTHERFORD AND THE ATOM

Niels Bohr built upon Ernest Rutherford’s work to develop the model of the atom most commonly portrayed in textbooks: a nucleus orbited by electrons at different levels. Despite the new questions it raised (e.g., why do orbiting, negatively-charged electrons not spiral into the positively-charged nucleus?), this model was powerful and, with further modification, led to a wide range of accurate predictions and new discoveries, including predicting the outcome of chemical reactions, determining the composition of distant stars, and conceiving of the atomic bomb.

Rutherford’s story continues as we examine each item on the Science Checklist. To find out how this investigation measures up to the last item of the checklist, read on.

  • Science in action
  • Teaching resources

Learn more about how investigations of the structure of DNA inspired new questions and further research in  The structure of DNA: Cooperation and competition .

  • Learn strategies for building lessons and activities around the Science Checklist: Grades 6-8 Grades 9-12 Grades 13-16
  • Get  graphics and pdfs of the Science Checklist  to use in your classroom.


What Is Research, and Why Do People Do It?

  • Open Access
  • First Online: 03 December 2022


  • James Hiebert 6 ,
  • Jinfa Cai 7 ,
  • Stephen Hwang 7 ,
  • Anne K Morris 6 &
  • Charles Hohensee 6  

Part of the book series: Research in Mathematics Education ((RME))


Abstract

Every day people do research as they gather information to learn about something of interest. In the scientific world, however, research means something different than simply gathering information. Scientific research is characterized by its careful planning and observing, by its relentless efforts to understand and explain, and by its commitment to learn from everyone else seriously engaged in research. We call this kind of research scientific inquiry and define it as “formulating, testing, and revising hypotheses.” By “hypotheses” we do not mean the hypotheses you encounter in statistics courses. We mean predictions about what you expect to find and rationales for why you made these predictions. Throughout this and the remaining chapters we make clear that the process of scientific inquiry applies to all kinds of research studies and data, both qualitative and quantitative.


Part I. What Is Research?

Have you ever studied something carefully because you wanted to know more about it? Maybe you wanted to know more about your grandmother’s life when she was younger so you asked her to tell you stories from her childhood, or maybe you wanted to know more about a fertilizer you were about to use in your garden so you read the ingredients on the package and looked them up online. According to the dictionary definition, you were doing research.

Recall your high school assignments asking you to “research” a topic. The assignment likely included consulting a variety of sources that discussed the topic, perhaps including some “original” sources. Often, the teacher referred to your product as a “research paper.”

Were you conducting research when you interviewed your grandmother or wrote high school papers reviewing a particular topic? Our view is that you were engaged in part of the research process, but only a small part. In this book, we reserve the word “research” for what it means in the scientific world, that is, for scientific research or, more pointedly, for scientific inquiry .

Exercise 1.1

Before you read any further, write a definition of what you think scientific inquiry is. Keep it short—two to three sentences. You will periodically update this definition as you read this chapter and the remainder of the book.

This book is about scientific inquiry—what it is and how to do it. For starters, scientific inquiry is a process, a particular way of finding out about something that involves a number of phases. Each phase of the process constitutes one aspect of scientific inquiry. You are doing scientific inquiry as you engage in each phase, but you have not done scientific inquiry until you complete the full process. Each phase is necessary but not sufficient.

In this chapter, we set the stage by defining scientific inquiry—describing what it is and what it is not—and by discussing what it is good for and why people do it. The remaining chapters build directly on the ideas presented in this chapter.

A first thing to know is that scientific inquiry is not all or nothing. “Scientificness” is a continuum. Inquiries can be more scientific or less scientific. What makes an inquiry more scientific? You might be surprised there is no universally agreed upon answer to this question. None of the descriptors we know of are sufficient by themselves to define scientific inquiry. But all of them give you a way of thinking about some aspects of the process of scientific inquiry. Each one gives you different insights.


Exercise 1.2

As you read about each descriptor below, think about what would make an inquiry more or less scientific. If you think a descriptor is important, use it to revise your definition of scientific inquiry.

Creating an Image of Scientific Inquiry

We will present three descriptors of scientific inquiry. Each provides a different perspective and emphasizes a different aspect of scientific inquiry. We will draw on all three descriptors to compose our definition of scientific inquiry.

Descriptor 1. Experience Carefully Planned in Advance

Sir Ronald Fisher, often called the father of modern statistical design, once referred to research as “experience carefully planned in advance” (1935, p. 8). He said that humans are always learning from experience, from interacting with the world around them. Usually, this learning is haphazard rather than the result of a deliberate process carried out over an extended period of time. Research, Fisher said, was learning from experience, but experience carefully planned in advance.

This phrase can be fully appreciated by looking at each word. The fact that scientific inquiry is based on experience means that it is based on interacting with the world. These interactions could be thought of as the stuff of scientific inquiry. In addition, it is not just any experience that counts. The experience must be carefully planned . The interactions with the world must be conducted with an explicit, describable purpose, and steps must be taken to make the intended learning as likely as possible. This planning is an integral part of scientific inquiry; it is not just a preparation phase. It is one of the things that distinguishes scientific inquiry from many everyday learning experiences. Finally, these steps must be taken beforehand and the purpose of the inquiry must be articulated in advance of the experience. Clearly, scientific inquiry does not happen by accident, by just stumbling into something. Stumbling into something unexpected and interesting can happen while engaged in scientific inquiry, but learning does not depend on it and serendipity does not make the inquiry scientific.

Descriptor 2. Observing Something and Trying to Explain Why It Is the Way It Is

When we were writing this chapter and googled “scientific inquiry,” the first entry was: “Scientific inquiry refers to the diverse ways in which scientists study the natural world and propose explanations based on the evidence derived from their work.” The emphasis is on studying, or observing, and then explaining. This descriptor takes the image of scientific inquiry beyond carefully planned experience and includes explaining what was experienced.

According to the Merriam-Webster dictionary, “explain” means “(a) to make known, (b) to make plain or understandable, (c) to give the reason or cause of, and (d) to show the logical development or relations of” (Merriam-Webster, n.d.). We will use all these definitions. Taken together, they suggest that to explain an observation means to understand it by finding reasons (or causes) for why it is as it is. In this sense of scientific inquiry, the following are synonyms: explaining why, understanding why, and reasoning about causes and effects. Our image of scientific inquiry now includes planning, observing, and explaining why.


We need to add a final note about this descriptor. We have phrased it in a way that suggests “observing something” means you are observing something in real time—observing the way things are or the way things are changing. This is often true. But, observing could mean observing data that already have been collected, maybe by someone else making the original observations (e.g., secondary analysis of NAEP data or analysis of existing video recordings of classroom instruction). We will address secondary analyses more fully in Chap. 4. For now, what is important is that the process requires explaining why the data look like they do.

We must note that for us, the term “data” is not limited to numerical or quantitative data such as test scores. Data can also take many nonquantitative forms, including written survey responses, interview transcripts, journal entries, video recordings of students, teachers, and classrooms, text messages, and so forth.


Exercise 1.3

What are the implications of the statement that just “observing” is not enough to count as scientific inquiry? Does this mean that a detailed description of a phenomenon is not scientific inquiry?

Find sources that define research in education that differ with our position, that say description alone, without explanation, counts as scientific research. Identify the precise points where the opinions differ. What are the best arguments for each of the positions? Which do you prefer? Why?

Descriptor 3. Updating Everyone’s Thinking in Response to More and Better Information

This descriptor focuses on a third aspect of scientific inquiry: updating and advancing the field’s understanding of phenomena that are investigated. This descriptor foregrounds a powerful characteristic of scientific inquiry: the reliability (or trustworthiness) of what is learned and the ultimate inevitability of this learning to advance human understanding of phenomena. Humans might choose not to learn from scientific inquiry, but history suggests that scientific inquiry always has the potential to advance understanding and that, eventually, humans take advantage of these new understandings.

Before exploring these bold claims a bit further, note that this descriptor uses “information” in the same way the previous two descriptors used “experience” and “observations.” These are the stuff of scientific inquiry and we will use them often, sometimes interchangeably. Frequently, we will use the term “data” to stand for all these terms.

An overriding goal of scientific inquiry is for everyone to learn from what one scientist does. Much of this book is about the methods you need to use so others have faith in what you report and can learn the same things you learned. This aspect of scientific inquiry has many implications.

One implication is that scientific inquiry is not a private practice. It is a public practice available for others to see and learn from. Notice how different this is from everyday learning. When you happen to learn something from your everyday experience, often only you gain from the experience. The fact that research is a public practice means it is also a social one. It is best conducted by interacting with others along the way: soliciting feedback at each phase, taking opportunities to present work-in-progress, and benefitting from the advice of others.

A second implication is that you, as the researcher, must be committed to sharing what you are doing and what you are learning in an open and transparent way. This allows all phases of your work to be scrutinized and critiqued. This is what gives your work credibility. The reliability or trustworthiness of your findings depends on your colleagues recognizing that you have used all appropriate methods to maximize the chances that your claims are justified by the data.

A third implication of viewing scientific inquiry as a collective enterprise is the reverse of the second—you must be committed to receiving comments from others. You must treat your colleagues as fair and honest critics even though it might sometimes feel otherwise. You must appreciate their job, which is to remain skeptical while scrutinizing what you have done in considerable detail. To provide the best help to you, they must remain skeptical about your conclusions (when, for example, the data are difficult for them to interpret) until you offer a convincing logical argument based on the information you share. A rather harsh but good-to-remember statement of the role of your friendly critics was voiced by Karl Popper, a well-known twentieth century philosopher of science: “. . . if you are interested in the problem which I tried to solve by my tentative assertion, you may help me by criticizing it as severely as you can” (Popper, 1968, p. 27).

A final implication of this third descriptor is that, as someone engaged in scientific inquiry, you have no choice but to update your thinking when the data support a different conclusion. This applies to your own data as well as to those of others. When data clearly point to a specific claim, even one that is quite different than you expected, you must reconsider your position. If the outcome is replicated multiple times, you need to adjust your thinking accordingly. Scientific inquiry does not let you pick and choose which data to believe; it mandates that everyone update their thinking when the data warrant an update.

Doing Scientific Inquiry

We define scientific inquiry in an operational sense—what does it mean to do scientific inquiry? What kind of process would satisfy all three descriptors: carefully planning an experience in advance; observing and trying to explain what you see; and, contributing to updating everyone’s thinking about an important phenomenon?

We define scientific inquiry as formulating , testing , and revising hypotheses about phenomena of interest.

Of course, we are not the only ones who define it in this way. The definition for the scientific method posted by the editors of Britannica is: “a researcher develops a hypothesis, tests it through various means, and then modifies the hypothesis on the basis of the outcome of the tests and experiments” (Britannica, n.d.).


Notice how defining scientific inquiry this way satisfies each of the descriptors. “Carefully planning an experience in advance” is exactly what happens when formulating a hypothesis about a phenomenon of interest and thinking about how to test it. “Observing a phenomenon” occurs when testing a hypothesis, and “explaining” what is found is required when revising a hypothesis based on the data. Finally, “updating everyone’s thinking” comes from comparing publicly the original with the revised hypothesis.

Doing scientific inquiry, as we have defined it, underscores the value of accumulating knowledge rather than generating random bits of knowledge. Formulating, testing, and revising hypotheses is an ongoing process, with each revised hypothesis begging for another test, whether by the same researcher or by new researchers. The editors of Britannica signaled this cyclic process by adding the following phrase to their definition of the scientific method: “The modified hypothesis is then retested, further modified, and tested again.” Scientific inquiry creates a process that encourages each study to build on the studies that have gone before. Through collective engagement in this process of building study on top of study, the scientific community works together to update its thinking.

Before exploring more fully the meaning of “formulating, testing, and revising hypotheses,” we need to acknowledge that this is not the only way researchers define research. Some researchers prefer a less formal definition, one that includes more serendipity, less planning, less explanation. You might have come across more open definitions such as “research is finding out about something.” We prefer the tighter hypothesis formulation, testing, and revision definition because we believe it provides a single, coherent map for conducting research that addresses many of the thorny problems educational researchers encounter. We believe it is the most useful orientation toward research and the most helpful to learn as a beginning researcher.

A final clarification of our definition is that it applies equally to qualitative and quantitative research. This is a familiar distinction in education that has generated much discussion. You might think our definition favors quantitative methods over qualitative methods because the language of hypothesis formulation and testing is often associated with quantitative methods. In fact, we do not favor one method over another. In Chap. 4, we will illustrate how our definition fits research using a range of quantitative and qualitative methods.

Exercise 1.4

Look for ways to extend what the field knows in an area that has already received attention by other researchers. Specifically, you can search for a program of research carried out by more experienced researchers that has some revised hypotheses that remain untested. Identify a revised hypothesis that you might like to test.

Unpacking the Terms Formulating, Testing, and Revising Hypotheses

To get a full sense of the definition of scientific inquiry we will use throughout this book, it is helpful to spend a little time with each of the key terms.

We first want to make clear that we use the term “hypothesis” as it is defined in most dictionaries and as it is used in many scientific fields rather than as it is usually defined in educational statistics courses. By “hypothesis,” we do not mean a null hypothesis that is accepted or rejected by statistical analysis. Rather, we use “hypothesis” in the sense conveyed by the following definitions: “An idea or explanation for something that is based on known facts but has not yet been proved” (Cambridge University Press, n.d.), and “An unproved theory, proposition, or supposition, tentatively accepted to explain certain facts and to provide a basis for further investigation or argument” (Agnes & Guralnik, 2008).

We distinguish two parts to “hypotheses.” Hypotheses consist of predictions and rationales . Predictions are statements about what you expect to find when you inquire about something. Rationales are explanations for why you made the predictions you did, why you believe your predictions are correct. So, for us “formulating hypotheses” means making explicit predictions and developing rationales for the predictions.

“Testing hypotheses” means making observations that allow you to assess in what ways your predictions were correct and in what ways they were incorrect. In education research, it is rarely useful to think of your predictions as either right or wrong. Because of the complexity of most issues you will investigate, most predictions will be right in some ways and wrong in others.

By studying the observations you make (data you collect) to test your hypotheses, you can revise your hypotheses to better align with the observations. This means revising your predictions plus revising your rationales to justify your adjusted predictions. Even though you might not run another test, formulating revised hypotheses is an essential part of conducting a research study. Comparing your original and revised hypotheses informs everyone of what you learned by conducting your study. In addition, a revised hypothesis sets the stage for you or someone else to extend your study and accumulate more knowledge of the phenomenon.
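The formulate–test–revise structure described above can be sketched in code. This is only our illustration: the `Hypothesis` class, its field names, the `revise` function, and the example strings are all hypothetical, not part of the authors' framework. The sketch simply makes concrete the idea that a hypothesis is a prediction paired with a rationale, and that revision updates both parts together.

```python
from dataclasses import dataclass

# Hypothetical sketch: a hypothesis modeled as a prediction plus a rationale.
# The class and field names are our own, not the chapter authors' notation.
@dataclass(frozen=True)
class Hypothesis:
    prediction: str  # what you expect to observe
    rationale: str   # why you expect it

def revise(original: Hypothesis, finding: str) -> Hypothesis:
    """Return a revised hypothesis whose prediction and rationale
    both account for what the test actually showed."""
    return Hypothesis(
        prediction=f"{original.prediction}, but only where {finding}",
        rationale=f"{original.rationale}; revised because testing showed {finding}",
    )

# A made-up example of one formulate-test-revise cycle.
h0 = Hypothesis(
    prediction="students taught meaningfully will outperform the comparison group",
    rationale="understanding why a procedure works supports retention",
)
h1 = revise(h0, "gains appeared on transfer items")

print(h1.prediction)
print(h1.rationale)
```

Comparing `h0` with `h1` plays the role the chapter assigns to comparing original and revised hypotheses: the difference between them is a record of what the study taught you.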

We should note that not everyone makes a clear distinction between predictions and rationales as two aspects of hypotheses. In fact, common, non-scientific uses of the word “hypothesis” may limit it to only a prediction or only an explanation (or rationale). We choose to explicitly include both prediction and rationale in our definition of hypothesis, not because we assert this should be the universal definition, but because we want to foreground the importance of both parts acting in concert. Using “hypothesis” to represent both prediction and rationale could hide the two aspects, but we make them explicit because they provide different kinds of information. It is usually easier to make predictions than develop rationales because predictions can be guesses, hunches, or gut feelings about which you have little confidence. Developing a compelling rationale requires careful thought plus reading what other researchers have found plus talking with your colleagues. Often, while you are developing your rationale you will find good reasons to change your predictions. Developing good rationales is the engine that drives scientific inquiry. Rationales are essentially descriptions of how much you know about the phenomenon you are studying. Throughout this guide, we will elaborate on how developing good rationales drives scientific inquiry. For now, we simply note that it can sharpen your predictions and help you to interpret your data as you test your hypotheses.


Hypotheses in education research take a variety of forms or types. This is because there are a variety of phenomena that can be investigated. Investigating educational phenomena is sometimes best done using qualitative methods, sometimes using quantitative methods, and most often using mixed methods (e.g., Hay, 2016; Weis et al., 2019a; Weisner, 2005). This means that, given our definition, hypotheses are equally applicable to qualitative and quantitative investigations.

Hypotheses take different forms when they are used to investigate different kinds of phenomena. Two very different activities in education research could be labeled conducting experiments and conducting descriptive studies. In an experiment, a hypothesis makes a prediction about anticipated changes, say the changes that occur when a treatment or intervention is applied. You might investigate how students’ thinking changes during a particular kind of instruction.

A second type of hypothesis, relevant for descriptive research, makes a prediction about what you will find when you investigate and describe the nature of a situation. The goal is to understand a situation as it exists rather than to understand a change from one situation to another. In this case, your prediction is what you expect to observe. Your rationale is the set of reasons for making this prediction; it is your current explanation for why the situation will look like it does.

You will probably read, if you have not already, that some researchers say you do not need a prediction to conduct a descriptive study. We will discuss this point of view in Chap. 2. For now, we simply claim that scientific inquiry, as we have defined it, applies to all kinds of research studies. Descriptive studies, like others, do not merely benefit from formulating, testing, and revising hypotheses; they require it.

One reason we define research as formulating, testing, and revising hypotheses is that if you think of research in this way you are less likely to go wrong. It is a useful guide for the entire process, as we will describe in detail in the chapters ahead. For example, as you build the rationale for your predictions, you are constructing the theoretical framework for your study (Chap. 3). As you work out the methods you will use to test your hypothesis, every decision you make will be based on asking, “Will this help me formulate or test or revise my hypothesis?” (Chap. 4). As you interpret the results of testing your predictions, you will compare them to what you predicted and examine the differences, focusing on how you must revise your hypotheses (Chap. 5). By anchoring the process to formulating, testing, and revising hypotheses, you will make smart decisions that yield a coherent and well-designed study.

Exercise 1.5

Compare the concept of formulating, testing, and revising hypotheses with the descriptions of scientific inquiry contained in Scientific Research in Education (NRC, 2002). How are they similar or different?

Exercise 1.6

Provide an example to illustrate and emphasize the differences between everyday learning/thinking and scientific inquiry.

Learning from Doing Scientific Inquiry

We noted earlier that a measure of what you have learned by conducting a research study is found in the differences between your original hypothesis and your revised hypothesis based on the data you collected to test your hypothesis. We will elaborate this statement in later chapters, but we preview our argument here.

Even before collecting data, scientific inquiry requires cycles of making a prediction, developing a rationale, refining your predictions, reading and studying more to strengthen your rationale, refining your predictions again, and so forth. And, even if you have run through several such cycles, you still will likely find that when you test your prediction you will be partly right and partly wrong. The results will support some parts of your predictions but not others, or the results will “kind of” support your predictions. A critical part of scientific inquiry is making sense of your results by interpreting them against your predictions. Carefully describing what aspects of your data supported your predictions, what aspects did not, and what data fell outside of any predictions is not an easy task, but you cannot learn from your study without doing this analysis.


Analyzing the matches and mismatches between your predictions and your data allows you to formulate different rationales that would have accounted for more of the data. The best revised rationale is the one that accounts for the most data. Once you have revised your rationales, you can think about the predictions they best justify or explain. It is by comparing your original rationales to your new rationales that you can sort out what you learned from your study.

Suppose your study was an experiment. Maybe you were investigating the effects of a new instructional intervention on students’ learning. Your original rationale was your explanation for why the intervention would change the learning outcomes in a particular way. Your revised rationale explained why the changes that you observed occurred like they did and why your revised predictions are better. Maybe your original rationale focused on the potential of the activities if they were implemented in ideal ways and your revised rationale included the factors that are likely to affect how teachers implement them. By comparing the before and after rationales, you are describing what you learned—what you can explain now that you could not before. Another way of saying this is that you are describing how much more you understand now than before you conducted your study.

Revised predictions based on carefully planned and collected data usually exhibit some of the following features compared with the originals: more precision, more completeness, and broader scope. Revised rationales have more explanatory power and become more complete, more aligned with the new predictions, sharper, and overall more convincing.

Part II. Why Do Educators Do Research?

Doing scientific inquiry is a lot of work. Each phase of the process takes time, and you will often cycle back to improve earlier phases as you engage in later phases. Because of the significant effort required, you should make sure your study is worth it. So, from the beginning, you should think about the purpose of your study. Why do you want to do it? And, because research is a social practice, you should also think about whether the results of your study are likely to be important and significant to the education community.

If you are doing research in the way we have described—as scientific inquiry—then one purpose of your study is to understand, not just to describe or evaluate or report. As we noted earlier, when you formulate hypotheses, you are developing rationales that explain why things might be like they are. In our view, trying to understand and explain is what separates research from other kinds of activities, like evaluating or describing.

One reason understanding is so important is that it allows researchers to see how or why something works like it does. When you see how something works, you are better able to predict how it might work in other contexts, under other conditions. And, because conditions, or contextual factors, matter a lot in education, gaining insights into applying your findings to other contexts increases the contributions of your work and its importance to the broader education community.

Consequently, the purposes of research studies in education often include the more specific aim of identifying and understanding the conditions under which the phenomena being studied work like the observations suggest. A classic example of this kind of study in mathematics education was reported by William Brownell and Harold Moser in 1949. They were trying to establish which method of subtracting whole numbers could be taught most effectively—the regrouping method or the equal additions method. However, they realized that effectiveness might depend on the conditions under which the methods were taught—“meaningfully” versus “mechanically.” So, they designed a study that crossed the two instructional approaches with the two different methods (regrouping and equal additions). Among other results, they found that these conditions did matter. The regrouping method was more effective under the meaningful condition than the mechanical condition, but the same was not true for the equal additions algorithm.
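The crossed design just described can be made concrete with a small numerical sketch. The scores below are invented for illustration only (the original 1949 data are not reproduced here); the point is simply to show what “crossing” two factors means and how an interaction, the effect of instruction differing across algorithms, shows up in cell means.

```python
# Hypothetical 2x2 design inspired by Brownell & Moser (1949):
# factor A = subtraction algorithm (regrouping vs. equal additions),
# factor B = instruction (meaningful vs. mechanical).
# All scores are invented for illustration.
scores = {
    ("regrouping", "meaningful"): [14, 15, 13, 16],
    ("regrouping", "mechanical"): [9, 10, 8, 11],
    ("equal_additions", "meaningful"): [12, 11, 12, 13],
    ("equal_additions", "mechanical"): [11, 12, 10, 13],
}

def mean(xs):
    return sum(xs) / len(xs)

cell_means = {cell: mean(xs) for cell, xs in scores.items()}

# Simple effect of instruction within each algorithm.
effect_regrouping = (cell_means[("regrouping", "meaningful")]
                     - cell_means[("regrouping", "mechanical")])
effect_equal = (cell_means[("equal_additions", "meaningful")]
                - cell_means[("equal_additions", "mechanical")])

# The interaction contrast asks whether the instruction effect differs
# across algorithms; a value far from zero is the "conditions matter" pattern.
interaction = effect_regrouping - effect_equal

print(effect_regrouping)  # 5.0
print(effect_equal)       # 0.5
print(interaction)        # 4.5
```

With these invented numbers, the meaningful-versus-mechanical difference is large for regrouping and small for equal additions, mirroring the qualitative pattern the text reports: the conditions mattered for one algorithm but not the other.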

What do education researchers want to understand? In our view, the ultimate goal of education is to offer all students the best possible learning opportunities. So, we believe the ultimate purpose of scientific inquiry in education is to develop understanding that supports the improvement of learning opportunities for all students. We say “ultimate” because there are lots of issues that must be understood to improve learning opportunities for all students. Hypotheses about many aspects of education are connected, ultimately, to students’ learning. For example, formulating and testing a hypothesis that preservice teachers need to engage in particular kinds of activities in their coursework in order to teach particular topics well is, ultimately, connected to improving students’ learning opportunities. So is hypothesizing that school districts often devote relatively few resources to instructional leadership training or hypothesizing that positioning mathematics as a tool students can use to combat social injustice can help students see the relevance of mathematics to their lives.

We do not exclude the importance of research on educational issues more removed from improving students’ learning opportunities, but we do think the argument for their importance will be more difficult to make. If there is no way to imagine a connection between your hypothesis and improving learning opportunities for students, even a distant connection, we recommend you reconsider whether it is an important hypothesis within the education community.

Notice that we said the ultimate goal of education is to offer all students the best possible learning opportunities. For too long, educators have been satisfied with a goal of offering rich learning opportunities for lots of students, sometimes even for just the majority of students, but not necessarily for all students. Evaluations of success often are based on outcomes that show high averages. In other words, if many students have learned something, or even a smaller number have learned a lot, educators may have been satisfied. The problem is that there is usually a pattern in the groups of students who receive lower quality opportunities—students of color and students who live in poor areas, urban and rural. This is not acceptable. Consequently, we emphasize the premise that the purpose of education research is to offer rich learning opportunities to all students.

One way to make sure you will be able to convince others of the importance of your study is to consider investigating some aspect of teachers’ shared instructional problems. Historically, researchers in education have set their own research agendas, regardless of the problems teachers are facing in schools. It is increasingly recognized that teachers have had trouble applying to their own classrooms what researchers find. To address this problem, a researcher could partner with a teacher—better yet, a small group of teachers—and talk with them about instructional problems they all share. These discussions can create a rich pool of problems researchers can consider. If researchers pursued one of these problems (preferably alongside teachers), the connection to improving learning opportunities for all students could be direct and immediate. “Grounding a research question in instructional problems that are experienced across multiple teachers’ classrooms helps to ensure that the answer to the question will be of sufficient scope to be relevant and significant beyond the local context” (Cai et al., 2019b , p. 115).

As a beginning researcher, determining the relevance and importance of a research problem is especially challenging. We recommend talking with advisors, other experienced researchers, and peers to test the educational importance of possible research problems and topics of study. You will also learn much more about the issue of research importance when you read Chap. 5 .

Exercise 1.7

Identify a problem in education that is closely connected to improving learning opportunities and a problem that has a less close connection. For each problem, write a brief argument (like a logical sequence of if-then statements) that connects the problem to all students’ learning opportunities.

Part III. Conducting Research as a Practice of Failing Productively

Scientific inquiry involves formulating hypotheses about phenomena that are not fully understood—by you or anyone else. Even if you are able to inform your hypotheses with lots of knowledge that has already been accumulated, you are likely to find that your prediction is not entirely accurate. This is normal. Remember, scientific inquiry is a process of constantly updating your thinking. More and better information means revising your thinking, again, and again, and again. Because you never fully understand a complicated phenomenon and your hypotheses never produce completely accurate predictions, it is easy to believe you are somehow failing.

The trick is to fail upward, to fail to predict accurately in ways that inform your next hypothesis so you can make a better prediction. Some of the best-known researchers in education have been open and honest about the many times their predictions were wrong and, based on the results of their studies and those of others, they continuously updated their thinking and changed their hypotheses.

A striking example of publicly revising (actually reversing) hypotheses due to incorrect predictions is found in the work of Lee J. Cronbach, one of the most distinguished educational psychologists of the twentieth century. In 1957, Cronbach delivered his presidential address to the American Psychological Association. Titling it “The Two Disciplines of Scientific Psychology,” Cronbach proposed a rapprochement between two research approaches—correlational studies that focused on individual differences and experimental studies that focused on instructional treatments controlling for individual differences. (We will examine different research approaches in Chap. 4.) If these approaches could be brought together, reasoned Cronbach (1957), researchers could find interactions between individual characteristics and treatments (aptitude-treatment interactions, or ATIs), fitting the best treatments to different individuals.

In 1975, after years of research by many researchers looking for ATIs, Cronbach acknowledged that the evidence for simple, useful ATIs had not been found. Even when trying to find interactions between a few variables that could provide instructional guidance, the analysis, said Cronbach, creates “a hall of mirrors that extends to infinity, tormenting even the boldest investigators and defeating even ambitious designs” (Cronbach, 1975, p. 119).

As he was reflecting back on his work, Cronbach (1986) recommended moving away from documenting instructional effects through statistical inference (an approach he had championed for much of his career) and toward approaches that probe the reasons for these effects, approaches that provide a “full account of events in a time, place, and context” (Cronbach, 1986, p. 104). This is a remarkable change in hypotheses, a change based on data and made fully transparent. Cronbach understood the value of failing productively.

Closer to home, in a less dramatic example, one of us began a line of scientific inquiry into how to prepare elementary preservice teachers to teach early algebra. Teaching early algebra meant engaging elementary students in early forms of algebraic reasoning. Such reasoning should help them transition from arithmetic to algebra. To begin this line of inquiry, a set of activities for preservice teachers was developed. Even though the activities were based on well-supported hypotheses, they largely failed to engage preservice teachers as predicted because of unanticipated challenges the preservice teachers faced. To capitalize on this failure, follow-up studies were conducted, first to better understand elementary preservice teachers’ challenges with preparing to teach early algebra, and then to better support preservice teachers in navigating these challenges. In this example, the initial failure was a necessary step in the researchers’ scientific inquiry and furthered the researchers’ understanding of this issue.

We present another example of failing productively in Chap. 2 . That example emerges from recounting the history of a well-known research program in mathematics education.

Making mistakes is an inherent part of doing scientific research. Conducting a study is rarely a smooth path from beginning to end. We recommend that you keep the following things in mind as you begin a career of conducting research in education.

First, do not get discouraged when you make mistakes; do not fall into the trap of feeling like you are not capable of doing research because you make too many errors.

Second, learn from your mistakes. Do not ignore your mistakes or treat them as errors that you simply need to forget and move past. Mistakes are rich sites for learning—in research just as in other fields of study.

Third, by reflecting on your mistakes, you can learn to make better mistakes, mistakes that inform you about a productive next step. You will not be able to eliminate your mistakes, but you can set a goal of making better and better mistakes.

Exercise 1.8

How does scientific inquiry differ from everyday learning in giving you the tools to fail upward? You may find helpful perspectives on this question in other resources on science and scientific inquiry (e.g., Failure: Why Science Is So Successful by Firestein, 2015).

Exercise 1.9

Use what you have learned in this chapter to write a new definition of scientific inquiry. Compare this definition with the one you wrote before reading this chapter. If you are reading this book as part of a course, compare your definition with your colleagues’ definitions. Develop a consensus definition with everyone in the course.

Part IV. Preview of Chap. 2

Now that you have a good idea of what research is, at least of what we believe research is, the next step is to think about how to actually begin doing research. This means how to begin formulating, testing, and revising hypotheses. As for all phases of scientific inquiry, there are lots of things to think about. Because it is critical to start well, we devote Chap. 2 to getting started with formulating hypotheses.

Agnes, M., & Guralnik, D. B. (Eds.). (2008). Hypothesis. In Webster’s new world college dictionary (4th ed.). Wiley.


Britannica. (n.d.). Scientific method. In Encyclopaedia Britannica . Retrieved July 15, 2022 from https://www.britannica.com/science/scientific-method

Brownell, W. A., & Moser, H. E. (1949). Meaningful vs. mechanical learning: A study in grade III subtraction. Duke University Press.

Cai, J., Morris, A., Hohensee, C., Hwang, S., Robison, V., Cirillo, M., Kramer, S. L., & Hiebert, J. (2019b). Posing significant research questions. Journal for Research in Mathematics Education, 50 (2), 114–120. https://doi.org/10.5951/jresematheduc.50.2.0114


Cambridge University Press. (n.d.). Hypothesis. In Cambridge dictionary . Retrieved July 15, 2022 from https://dictionary.cambridge.org/us/dictionary/english/hypothesis

Cronbach, L. J. (1957). The two disciplines of scientific psychology. American Psychologist, 12, 671–684.

Cronbach, L. J. (1975). Beyond the two disciplines of scientific psychology. American Psychologist, 30 , 116–127.

Cronbach, L. J. (1986). Social inquiry by and for earthlings. In D. W. Fiske & R. A. Shweder (Eds.), Metatheory in social science: Pluralisms and subjectivities (pp. 83–107). University of Chicago Press.

Hay, C. M. (Ed.). (2016). Methods that matter: Integrating mixed methods for more effective social science research . University of Chicago Press.

Merriam-Webster. (n.d.). Explain. In Merriam-Webster.com dictionary . Retrieved July 15, 2022, from https://www.merriam-webster.com/dictionary/explain

National Research Council. (2002). Scientific research in education . National Academy Press.

Weis, L., Eisenhart, M., Duncan, G. J., Albro, E., Bueschel, A. C., Cobb, P., Eccles, J., Mendenhall, R., Moss, P., Penuel, W., Ream, R. K., Rumbaut, R. G., Sloane, F., Weisner, T. S., & Wilson, J. (2019a). Mixed methods for studies that address broad and enduring issues in education research. Teachers College Record, 121 , 100307.

Weisner, T. S. (Ed.). (2005). Discovering successful pathways in children’s development: Mixed methods in the study of childhood and family life . University of Chicago Press.


Author information

Authors and Affiliations

School of Education, University of Delaware, Newark, DE, USA

James Hiebert, Anne K. Morris & Charles Hohensee

Department of Mathematical Sciences, University of Delaware, Newark, DE, USA

Jinfa Cai & Stephen Hwang


Rights and permissions

Open Access This chapter is licensed under the terms of the Creative Commons Attribution 4.0 International License ( http://creativecommons.org/licenses/by/4.0/ ), which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license and indicate if changes were made.

The images or other third party material in this chapter are included in the chapter's Creative Commons license, unless indicated otherwise in a credit line to the material. If material is not included in the chapter's Creative Commons license and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder.


Copyright information

© 2023 The Author(s)

About this chapter

Hiebert, J., Cai, J., Hwang, S., Morris, A.K., Hohensee, C. (2023). What Is Research, and Why Do People Do It?. In: Doing Research: A New Researcher’s Guide. Research in Mathematics Education. Springer, Cham. https://doi.org/10.1007/978-3-031-19078-0_1


Published: 03 December 2022

Publisher Name: Springer, Cham

Print ISBN: 978-3-031-19077-3

Online ISBN: 978-3-031-19078-0





Scientific Method

Science is an enormously successful human enterprise. The study of scientific method is the attempt to discern the activities by which that success is achieved. Among the activities often identified as characteristic of science are systematic observation and experimentation, inductive and deductive reasoning, and the formation and testing of hypotheses and theories. How these are carried out in detail can vary greatly, but characteristics like these have been looked to as a way of demarcating scientific activity from non-science, where only enterprises which employ some canonical form of scientific method or methods should be considered science (see also the entry on science and pseudo-science ). Others have questioned whether there is anything like a fixed toolkit of methods which is common across science and only science. Some reject privileging one view of method as part of rejecting broader views about the nature of science, such as naturalism (Dupré 2004); some reject any restriction in principle (pluralism).

Scientific method should be distinguished from the aims and products of science, such as knowledge, predictions, or control. Methods are the means by which those goals are achieved. Scientific method should also be distinguished from meta-methodology, which includes the values and justifications behind a particular characterization of scientific method (i.e., a methodology) — values such as objectivity, reproducibility, simplicity, or past successes. Methodological rules are proposed to govern method and it is a meta-methodological question whether methods obeying those rules satisfy given values. Finally, method is distinct, to some degree, from the detailed and contextual practices through which methods are implemented. The latter might range over: specific laboratory techniques; mathematical formalisms or other specialized languages used in descriptions and reasoning; technological or other material means; ways of communicating and sharing results, whether with other scientists or with the public at large; or the conventions, habits, enforced customs, and institutional controls over how and what science is carried out.

While it is important to recognize these distinctions, their boundaries are fuzzy. Hence, accounts of method cannot be entirely divorced from their methodological and meta-methodological motivations or justifications. Moreover, each aspect plays a crucial role in identifying methods. Disputes about method have therefore played out at the detail, rule, and meta-rule levels. Changes in beliefs about the certainty or fallibility of scientific knowledge, for instance (which is a meta-methodological consideration of what we can hope for methods to deliver), have meant different emphases on deductive and inductive reasoning, or on the relative importance attached to reasoning over observation (i.e., differences over particular methods). Beliefs about the role of science in society will affect the place one gives to values in scientific method.

The issue which has shaped debates over scientific method the most in the last half century is the question of how pluralist we need to be about method. Unificationists continue to hold out for one method essential to science; nihilism is a form of radical pluralism, which considers the effectiveness of any methodological prescription to be so context sensitive as to render it not explanatory on its own. Some middle degree of pluralism regarding the methods embodied in scientific practice seems appropriate. But the details of scientific practice vary with time and place, from institution to institution, across scientists and their subjects of investigation. How significant are the variations for understanding science and its success? How much can method be abstracted from practice? This entry describes some of the attempts to characterize scientific method or methods, as well as arguments for a more context-sensitive approach to methods embedded in actual scientific practices.

  • 1. Overview and organizing themes
  • 2. Historical review: Aristotle to Mill
  • 3.1 Logical constructionism and operationalism
  • 3.2 H-D as a logic of confirmation
  • 3.3 Popper and falsificationism
  • 3.4 Meta-methodology and the end of method
  • 4. Statistical methods for hypothesis testing
  • 5.1 Creative and exploratory practices
  • 5.2 Computer methods and the ‘new ways’ of doing science
  • 6.1 “The scientific method” in science education and as seen by scientists
  • 6.2 Privileged methods and ‘gold standards’
  • 6.3 Scientific method in the court room
  • 6.4 Deviating practices
  • 7. Conclusion
  • Other Internet Resources
  • Related Entries

1. Overview and organizing themes

This entry could have been given the title Scientific Methods and gone on to fill volumes, or it could have been extremely short, consisting of a brief summary rejection of the idea that there is any such thing as a unique Scientific Method at all. Both unhappy prospects are due to the fact that scientific activity varies so much across disciplines, times, places, and scientists that any account which manages to unify it all will either consist of overwhelming descriptive detail, or trivial generalizations.

The choice of scope for the present entry is more optimistic, taking a cue from the recent movement in philosophy of science toward a greater attention to practice: to what scientists actually do. This “turn to practice” can be seen as the latest form of studies of methods in science, insofar as it represents an attempt at understanding scientific activity, but through accounts that are neither meant to be universal and unified, nor singular and narrowly descriptive. To some extent, different scientists at different times and places can be said to be using the same method even though, in practice, the details are different.

Whether the context in which methods are carried out is relevant, or to what extent, will depend largely on what one takes the aims of science to be and what one’s own aims are. For most of the history of scientific methodology the assumption has been that the most important output of science is knowledge and so the aim of methodology should be to discover those methods by which scientific knowledge is generated.

Science was seen to embody the most successful form of reasoning (but which form?) to the most certain knowledge claims (but how certain?) on the basis of systematically collected evidence (but what counts as evidence, and should the evidence of the senses take precedence, or rational insight?) Section 2 surveys some of the history, pointing to two major themes. One theme is seeking the right balance between observation and reasoning (and the attendant forms of reasoning which employ them); the other is how certain scientific knowledge is or can be.

Section 3 turns to 20th century debates on scientific method. In the second half of the 20th century the epistemic privilege of science faced several challenges and many philosophers of science abandoned the reconstruction of the logic of scientific method. Views changed significantly regarding which functions of science ought to be captured and why. For some, the success of science was better identified with social or cultural features. Historical and sociological turns in the philosophy of science were made, with a demand that greater attention be paid to the non-epistemic aspects of science, such as sociological, institutional, material, and political factors. Even outside of those movements there was an increased specialization in the philosophy of science, with more and more focus on specific fields within science. The combined upshot was that very few philosophers argued any longer for a grand unified methodology of science. Sections 3 and 4 survey the main positions on scientific method in 20th century philosophy of science, focusing on where they differ in their preference for confirmation or falsification or for waiving the idea of a special scientific method altogether.

In recent decades, attention has primarily been paid to scientific activities traditionally falling under the rubric of method, such as experimental design and general laboratory practice, the use of statistics, the construction and use of models and diagrams, interdisciplinary collaboration, and science communication. Sections 4–6 attempt to construct a map of the current domains of the study of methods in science.

As these sections illustrate, the question of method is still central to the discourse about science. Scientific method remains a topic for education, for science policy, and for scientists. It arises in the public domain where the demarcation or status of science is at issue. Some philosophers have recently returned, therefore, to the question of what it is that makes science a unique cultural product. This entry will close with some of these recent attempts at discerning and encapsulating the activities by which scientific knowledge is achieved.

Attempting a history of scientific method compounds the vast scope of the topic. This section briefly surveys the background to modern methodological debates. What can be called the classical view goes back to antiquity, and represents a point of departure for later divergences. [ 1 ]

We begin with a point made by Laudan (1968) in his historical survey of scientific method:

Perhaps the most serious inhibition to the emergence of the history of theories of scientific method as a respectable area of study has been the tendency to conflate it with the general history of epistemology, thereby assuming that the narrative categories and classificatory pigeon-holes applied to the latter are also basic to the former. (1968: 5)

To see knowledge about the natural world as falling under knowledge more generally is an understandable conflation. Histories of theories of method would naturally employ the same narrative categories and classificatory pigeon holes. An important theme of the history of epistemology, for example, is the unification of knowledge, a theme reflected in the question of the unification of method in science. Those who have identified differences in kinds of knowledge have often likewise identified different methods for achieving that kind of knowledge (see the entry on the unity of science ).

Different views on what is known, how it is known, and what can be known are connected. Plato distinguished the realms of things into the visible and the intelligible ( The Republic , 510a, in Cooper 1997). Only the latter, the Forms, could be objects of knowledge. The intelligible truths could be known with the certainty of geometry and deductive reasoning. What could be observed of the material world, however, was by definition imperfect and deceptive, not ideal. The Platonic way of knowledge therefore emphasized reasoning as a method, downplaying the importance of observation. Aristotle disagreed, locating the Forms in the natural world as the fundamental principles to be discovered through the inquiry into nature ( Metaphysics Z , in Barnes 1984).

Aristotle is recognized as giving the earliest systematic treatise on the nature of scientific inquiry in the western tradition, one which embraced observation and reasoning about the natural world. In the Prior and Posterior Analytics , Aristotle reflects first on the aims and then the methods of inquiry into nature. A number of features can be found which are still considered by most to be essential to science. For Aristotle, empiricism, careful observation (but passive observation, not controlled experiment), is the starting point. The aim is not merely recording of facts, though. For Aristotle, science ( epistêmê ) is a body of properly arranged knowledge or learning—the empirical facts, but also their ordering and display are of crucial importance. The aims of discovery, ordering, and display of facts partly determine the methods required of successful scientific inquiry. Also determinant is the nature of the knowledge being sought, and the explanatory causes proper to that kind of knowledge (see the discussion of the four causes in the entry on Aristotle on causality ).

In addition to careful observation, then, scientific method requires a logic as a system of reasoning for properly arranging, but also inferring beyond, what is known by observation. Methods of reasoning may include induction, prediction, or analogy, among others. Aristotle’s system (along with his catalogue of fallacious reasoning) was collected under the title the Organon . This title would be echoed in later works on scientific reasoning, such as Novum Organum by Francis Bacon, and Novum Organon Renovatum by William Whewell (see below). In Aristotle’s Organon reasoning is divided primarily into two forms, a rough division which persists into modern times. The division, known most commonly today as deductive versus inductive method, appears in other eras and methodologies as analysis/synthesis, non-ampliative/ampliative, or even confirmation/verification. The basic idea is there are two “directions” to proceed in our methods of inquiry: one away from what is observed, to the more fundamental, general, and encompassing principles; the other, from the fundamental and general to instances or implications of principles.

The basic aim and method of inquiry identified here can be seen as a theme running throughout the next two millennia of reflection on the correct way to seek after knowledge: carefully observe nature and then seek rules or principles which explain or predict its operation. The Aristotelian corpus provided the framework for a commentary tradition on scientific method independent of science itself (cosmos versus physics). During the medieval period, figures such as Albertus Magnus (1206–1280), Thomas Aquinas (1225–1274), Robert Grosseteste (1175–1253), Roger Bacon (1214/1220–1292), William of Ockham (1287–1347), Andreas Vesalius (1514–1546), Giacomo Zabarella (1533–1589) all worked to clarify the kind of knowledge obtainable by observation and induction, the source of justification of induction, and the best rules for its application. [ 2 ] Many of their contributions we now think of as essential to science (see also Laudan 1968). As Aristotle and Plato had employed a framework of reasoning either “to the forms” or “away from the forms”, medieval thinkers employed directions away from the phenomena or back to the phenomena. In analysis, a phenomenon was examined to discover its basic explanatory principles; in synthesis, explanations of a phenomenon were constructed from first principles.

During the Scientific Revolution these various strands of argument, experiment, and reason were forged into a dominant epistemic authority. The 16th–18th centuries were a period of not only dramatic advance in knowledge about the operation of the natural world—advances in mechanical, medical, biological, political, economic explanations—but also of self-awareness of the revolutionary changes taking place, and intense reflection on the source and legitimation of the method by which the advances were made. The struggle to establish the new authority included methodological moves. The Book of Nature, according to the metaphor of Galileo Galilei (1564–1642) or Francis Bacon (1561–1626), was written in the language of mathematics, of geometry and number. This motivated an emphasis on mathematical description and mechanical explanation as important aspects of scientific method. Through figures such as Henry More and Ralph Cudworth, a neo-Platonic emphasis on the importance of metaphysical reflection on nature behind appearances, particularly regarding the spiritual as a complement to the purely mechanical, remained an important methodological thread of the Scientific Revolution (see the entries on Cambridge platonists ; Boyle ; Henry More ; Galileo ).

In Novum Organum (1620), Bacon was critical of the Aristotelian method for leaping from particulars to universals too quickly. The syllogistic form of reasoning readily mixed those two types of propositions. Bacon aimed at the invention of new arts, principles, and directions. His method would be grounded in methodical collection of observations, coupled with correction of our senses (and particularly, directions for the avoidance of the Idols, as he called them, kinds of systematic errors to which naïve observers are prone). The community of scientists could then climb, by a careful, gradual and unbroken ascent, to reliable general claims.

Bacon’s method has been criticized as impractical and too inflexible for the practicing scientist. Whewell would later criticize Bacon in his Philosophy of the Inductive Sciences for paying too little attention to the practices of scientists. It is hard to find convincing examples of Bacon’s method being put into practice in the history of science, but there are a few who have been held up as real examples of 17th century scientific, inductive method, even if not in the rigid Baconian mold: figures such as Robert Boyle (1627–1691) and William Harvey (1578–1657) (see the entry on Bacon ).

It is to Isaac Newton (1642–1727), however, that historians of science and methodologists have paid greatest attention. Given the enormous success of his Principia Mathematica and Opticks , this is understandable. The study of Newton’s method has had two main thrusts: the implicit method of the experiments and reasoning presented in the Opticks, and the explicit methodological rules given as the Rules for Philosophising (the Regulae) in Book III of the Principia . [ 3 ] Newton’s law of gravitation, the linchpin of his new cosmology, broke with explanatory conventions of natural philosophy, first for apparently proposing action at a distance, but more generally for not providing “true”, physical causes. The argument for his System of the World ( Principia , Book III) was based on phenomena, not reasoned first principles. This was viewed (mainly on the continent) as insufficient for proper natural philosophy. The Regulae counter this objection, re-defining the aims of natural philosophy by re-defining the method natural philosophers should follow. (See the entry on Newton’s philosophy .)

To his list of methodological prescriptions should be added Newton’s famous phrase “hypotheses non fingo” (commonly translated as “I frame no hypotheses”). The scientist was not to invent systems but infer explanations from observations, as Bacon had advocated. This would come to be known as inductivism. In the century after Newton, significant clarifications of the Newtonian method were made. Colin Maclaurin (1698–1746), for instance, reconstructed the essential structure of the method as having complementary analysis and synthesis phases, one proceeding away from the phenomena in generalization, the other from the general propositions to derive explanations of new phenomena. Denis Diderot (1713–1784) and editors of the Encyclopédie did much to consolidate and popularize Newtonianism, as did Francesco Algarotti (1712–1764). The emphasis was often the same, as much on the character of the scientist as on their process, a character which is still commonly assumed. The scientist is humble in the face of nature, not beholden to dogma, obeys only his eyes, and follows the truth wherever it leads. It was certainly Voltaire (1694–1778) and du Châtelet (1706–1749) who were most influential in propagating the latter vision of the scientist and their craft, with Newton as hero. Scientific method became a revolutionary force of the Enlightenment. (See also the entries on Newton , Leibniz , Descartes , Boyle , Hume , enlightenment , as well as Shank 2008 for a historical overview.)

Not all 18 th century reflections on scientific method were so celebratory. Famous also are George Berkeley’s (1685–1753) attack on the mathematics of the new science, as well as the over-emphasis of Newtonians on observation; and David Hume’s (1711–1776) undermining of the warrant offered for scientific claims by inductive justification (see the entries on: George Berkeley ; David Hume ; Hume’s Newtonianism and Anti-Newtonianism ). Hume’s problem of induction motivated Immanuel Kant (1724–1804) to seek new foundations for empirical method, though as an epistemic reconstruction, not as any set of practical guidelines for scientists. Both Hume and Kant influenced the methodological reflections of the next century, such as the debate between Mill and Whewell over the certainty of inductive inferences in science.

The debate between John Stuart Mill (1806–1873) and William Whewell (1794–1866) has become the canonical methodological debate of the 19 th century. Although often characterized as a debate between inductivism and hypothetico-deductivism, the role of the two methods on each side is actually more complex. On the hypothetico-deductive account, scientists work to come up with hypotheses from which true observational consequences can be deduced—hence, hypothetico-deductive. Because Whewell emphasizes both hypotheses and deduction in his account of method, he can be seen as a convenient foil to the inductivism of Mill. However, equally if not more important to Whewell’s portrayal of scientific method is what he calls the “fundamental antithesis”. Knowledge is a product of the objective (what we see in the world around us) and subjective (the contributions of our mind to how we perceive and understand what we experience, which he called the Fundamental Ideas). Both elements are essential according to Whewell, and he was therefore critical of Kant for too much focus on the subjective, and John Locke (1632–1704) and Mill for too much focus on the senses. Whewell’s fundamental ideas can be discipline relative. An idea can be fundamental even if it is necessary for knowledge only within a given scientific discipline (e.g., chemical affinity for chemistry). This distinguishes fundamental ideas from the forms and categories of intuition of Kant. (See the entry on Whewell .)

Clarifying fundamental ideas would therefore be an essential part of scientific method and scientific progress. Whewell called this process “Discoverer’s Induction”. It was induction, following Bacon or Newton, but Whewell sought to revive Bacon’s account by emphasizing the role of ideas in the clear and careful formulation of inductive hypotheses. Whewell’s induction is not merely the collecting of objective facts. The subjective plays a role through what Whewell calls the Colligation of Facts, a creative act of the scientist, the invention of a theory. A theory is then confirmed by testing, where more facts are brought under the theory, called the Consilience of Inductions. Whewell felt that this was the method by which the true laws of nature could be discovered: clarification of fundamental concepts, clever invention of explanations, and careful testing. Mill, in his critique of Whewell, and others who have cast Whewell as a forerunner of the hypothetico-deductivist view, seem to have underestimated the importance of this discovery phase in Whewell’s understanding of method (Snyder 1997a,b, 1999). Downplaying the discovery phase would come to characterize methodology of the early 20th century (see section 3 ).

Mill, in his System of Logic , put forward a narrower view of induction as the essence of scientific method. For Mill, induction is the search first for regularities among events. Among those regularities, some will continue to hold for further observations, eventually gaining the status of laws. One can also look for regularities among the laws discovered in a domain, i.e., for a law of laws. Which “law of laws” will hold is time and discipline dependent and open to revision. One example is the Law of Universal Causation, and Mill put forward specific methods for identifying causes—now commonly known as Mill’s methods. These five methods look for circumstances which are common among the phenomena of interest, those which are absent when the phenomena are absent, or those for which both vary together. Mill’s methods are still seen as capturing basic intuitions about experimental methods for finding the relevant explanatory factors ( System of Logic (1843); see the entry on Mill ). The methods advocated by Whewell and Mill, in the end, look similar. Both involve inductive generalization to covering laws. They differ dramatically, however, with respect to the necessity of the knowledge arrived at; that is, at the meta-methodological level (see the entries on Whewell and Mill ).
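The first two of Mill’s methods can be given a crude computational gloss. This is a toy sketch with invented circumstance sets; Mill’s actual methods involve carefully controlled comparisons of cases, which this simplification glosses over:

```python
# Toy illustration of Mill's methods of agreement and difference.
# Each case records the circumstances present and whether the phenomenon occurred.

cases = [
    ({"A", "B", "C"}, True),
    ({"A", "D"},      True),
    ({"B", "D"},      False),
]

# Method of agreement: circumstances common to every case in which the
# phenomenon occurs are candidate causes.
occurs = [circs for circs, present in cases if present]
agreement = set.intersection(*occurs)

# Method of difference (roughly): keep only candidates that never appear
# in cases where the phenomenon is absent.
absent = [circs for circs, present in cases if not present]
difference = agreement - set().union(*absent)

print(agreement, difference)  # both single out circumstance "A"
```

The intersection and difference operations here mirror Mill’s intuition that a cause should be present whenever its effect is, and absent whenever its effect is.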

3. Logic of method and critical responses

The quantum and relativistic revolutions in physics in the early 20th century had a profound effect on methodology. Conceptual foundations of both theories were taken to show the defeasibility of even the most seemingly secure intuitions about space, time and bodies. Certainty of knowledge about the natural world was therefore recognized as unattainable. Instead a renewed empiricism was sought which rendered science fallible but still rationally justifiable.

Analyses of the reasoning of scientists emerged, according to which the aspects of scientific method which were of primary importance were the means of testing and confirming theories. A distinction in methodology was made between the contexts of discovery and justification. The distinction could be used as a wedge between the particularities of where and how theories or hypotheses are arrived at, on the one hand, and the underlying reasoning scientists use (whether or not they are aware of it) when assessing theories and judging their adequacy on the basis of the available evidence, on the other. By and large, for most of the 20th century, philosophy of science focused on the second context, although philosophers differed on whether to focus on confirmation or refutation as well as on the many details of how confirmation or refutation could or could not be brought about. By the mid-20th century these attempts at defining the method of justification and the context distinction itself came under pressure. During the same period, philosophy of science developed rapidly, and from section 4 this entry will therefore shift from a primarily historical treatment of the scientific method towards a primarily thematic one.

Advances in logic and probability held out promise of the possibility of elaborate reconstructions of scientific theories and empirical method, the best example being Rudolf Carnap’s The Logical Structure of the World (1928). Carnap attempted to show that a scientific theory could be reconstructed as a formal axiomatic system—that is, a logic. That system could refer to the world because some of its basic sentences could be interpreted as observations or operations which one could perform to test them. The rest of the theoretical system, including sentences using theoretical or unobservable terms (like electron or force) would then either be meaningful because they could be reduced to observations, or they had purely logical meanings (called analytic, like mathematical identities). This has been referred to as the verifiability criterion of meaning. According to the criterion, any statement not either analytic or verifiable was strictly meaningless. Although the view was endorsed by Carnap in 1928, he would later come to see it as too restrictive (Carnap 1956). Another familiar version of this idea is the operationalism of Percy Williams Bridgman. In The Logic of Modern Physics (1927) Bridgman asserted that every physical concept could be defined in terms of the operations one would perform to verify the application of that concept. Making good on the operationalization of a concept even as simple as length, however, can easily become enormously complex (for measuring very small lengths, for instance) or impractical (measuring large distances like light years).

Carl Hempel’s (1950, 1951) criticisms of the verifiability criterion of meaning had enormous influence. He pointed out that universal generalizations, such as most scientific laws, were not strictly meaningful on the criterion. Verifiability and operationalism both seemed too restrictive to capture standard scientific aims and practice. The tenuous connection between these reconstructions and actual scientific practice was criticized in another way. In both approaches, scientific methods were instead recast in methodological roles. Measurements, for example, were looked to as ways of giving meanings to terms. The aim of the philosopher of science was not to understand the methods per se, but to use them to reconstruct theories, their meanings, and their relation to the world. When scientists perform these operations, however, they will not report that they are doing them to give meaning to terms in a formal axiomatic system. This disconnect between methodology and the details of actual scientific practice would seem to violate the empiricism the Logical Positivists and Bridgman were committed to. The view that methodology should correspond to practice (to some extent) has been called historicism, or intuitionism. We turn to these criticisms and responses in section 3.4 . [ 4 ]

Positivism also had to contend with the recognition that a purely inductivist approach, along the lines of Bacon-Newton-Mill, was untenable. There was no pure observation, for starters. All observation was theory-laden. Theory is required to make any observation; therefore not all theory can be derived from observation alone. (See the entry on theory and observation in science .) Even granting an observational basis, Hume had already pointed out that one could not deductively justify inductive conclusions without begging the question by presuming the success of the inductive method. Likewise, positivist attempts at analyzing how a generalization can be confirmed by observations of its instances were subject to a number of criticisms. Goodman (1965) and Hempel (1965) both point to paradoxes inherent in standard accounts of confirmation. Recent attempts at explaining how observations can serve to confirm a scientific theory are discussed in section 4 below.

The standard starting point for a non-inductive analysis of the logic of confirmation is known as the Hypothetico-Deductive (H-D) method. In its simplest form, a sentence of a theory which expresses some hypothesis is confirmed by its true consequences. As noted in section 2, this method had been advanced by Whewell in the 19th century, as well as Nicod (1924) and others in the 20th century. Often, Hempel’s (1966) description of the H-D method, illustrated by the case of Semmelweis’ inferential procedures in establishing the cause of childbed fever, has been presented as a key account of H-D as well as a foil for criticism of the H-D account of confirmation (see, for example, Lipton’s (2004) discussion of inference to the best explanation; also the entry on confirmation ). Hempel described Semmelweis’ procedure as examining various hypotheses explaining the cause of childbed fever. Some hypotheses conflicted with observable facts and could be rejected as false immediately. Others needed to be tested experimentally by deducing which observable events should follow if the hypothesis were true (what Hempel called the test implications of the hypothesis), then conducting an experiment and observing whether or not the test implications occurred. If the experiment showed a test implication to be false, the hypothesis could be rejected. If the experiment showed the test implications to be true, however, this did not prove the hypothesis true. The confirmation of a test implication does not verify a hypothesis, though Hempel did allow that “it provides at least some support, some corroboration or confirmation for it” (Hempel 1966: 8). The degree of this support then depends on the quantity, variety and precision of the supporting evidence.
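The inferential pattern Hempel describes can be sketched in a few lines of code. This is an illustrative sketch only, not anything from Hempel; the function names and the single toy implication are invented for the example:

```python
# Sketch of the hypothetico-deductive schema: a hypothesis is tested by
# deducing observable test implications and checking each against observation.
# One failed implication refutes the hypothesis (modus tollens); implications
# that all hold lend support, but never prove the hypothesis true.

def hd_test(test_implications, observe):
    """Check each deduced test implication against observation."""
    for implication in test_implications:
        if not observe(implication):
            return "rejected"      # H entails I; not-I; therefore not-H
    return "corroborated"          # supported, not verified

# Hypothetical Semmelweis-style example: the hand-washing hypothesis predicts
# that mortality drops on the ward after washing is introduced.
observations = {"mortality_drops_after_washing": True}
result = hd_test(["mortality_drops_after_washing"],
                 lambda i: observations.get(i, False))
print(result)  # corroborated
```

Note the asymmetry built into the return values: "rejected" is a deductive verdict, while "corroborated" only records that the hypothesis has survived the tests run so far.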

Another approach that took off from the difficulties with inductive inference was Karl Popper’s critical rationalism or falsificationism (Popper 1959, 1963). Falsification is deductive and similar to H-D in that it involves scientists deducing observational consequences from the hypothesis under test. For Popper, however, the important point was not the degree of confirmation that successful prediction offered to a hypothesis. The crucial thing was the logical asymmetry between confirmation, based on inductive inference, and falsification, which can be based on a deductive inference. (This simple opposition was later questioned, by Lakatos, among others. See the entry on historicist theories of scientific rationality. )

Popper stressed that, regardless of the amount of confirming evidence, we can never be certain that a hypothesis is true without committing the fallacy of affirming the consequent. Instead, Popper introduced the notion of corroboration as a measure for how well a theory or hypothesis has survived previous testing—but without implying that this is also a measure for the probability that it is true.
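The asymmetry Popper relies on can be put schematically (a standard logic-textbook rendering, not a quotation from Popper):

\[
\frac{H \rightarrow O \qquad \neg O}{\therefore\ \neg H}\ \text{(modus tollens: valid)}
\qquad\qquad
\frac{H \rightarrow O \qquad O}{\therefore\ H}\ \text{(affirming the consequent: invalid)}
\]

However many true consequences \(O\) are observed, the right-hand inference never establishes \(H\); a single genuine \(\neg O\) lets the left-hand inference refute \(H\) deductively.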

Popper was also motivated by his doubts about the scientific status of theories like the Marxist theory of history or psycho-analysis, and so wanted to demarcate between science and pseudo-science. Popper saw this as an importantly different distinction than demarcating science from metaphysics. The latter demarcation was the primary concern of many logical empiricists. Popper used the idea of falsification to draw a line instead between pseudo and proper science. Science was science because its method involved subjecting theories to rigorous tests which offered a high probability of failing and thus refuting the theory.

A commitment to the risk of failure was important. Avoiding falsification could be done all too easily. If a consequence of a theory is inconsistent with observations, an exception can be added by introducing auxiliary hypotheses designed explicitly to save the theory, so-called ad hoc modifications. This Popper saw done in pseudo-science where ad hoc theories appeared capable of explaining anything in their field of application. In contrast, science is risky. If observations showed the predictions from a theory to be wrong, the theory would be refuted. Hence, scientific hypotheses must be falsifiable. Not only must there exist some possible observation statement which could falsify the hypothesis or theory, were it observed (Popper called these the hypothesis’ potential falsifiers); it is also crucial to the Popperian scientific method that such falsifications be sincerely attempted on a regular basis.

The more potential falsifiers of a hypothesis, the more falsifiable it would be, and the more the hypothesis claimed. Conversely, hypotheses without falsifiers claimed very little or nothing at all. Originally, Popper thought that this meant the introduction of ad hoc hypotheses only to save a theory should not be countenanced as good scientific method. These would undermine the falsifiability of a theory. However, Popper later came to recognize that the introduction of modifications (immunizations, he called them) was often an important part of scientific development. Responding to surprising or apparently falsifying observations often generated important new scientific insights. Popper’s own example was the observed motion of Uranus which originally did not agree with Newtonian predictions. The ad hoc hypothesis of an outer planet explained the disagreement and led to further falsifiable predictions. Popper sought to reconcile the view by blurring the distinction between falsifiable and not falsifiable, and speaking instead of degrees of testability (Popper 1985: 41f.).

From the 1960s on, sustained meta-methodological criticism emerged that drove philosophical focus away from scientific method. A brief look at those criticisms follows, with recommendations for further reading at the end of the entry.

Thomas Kuhn’s The Structure of Scientific Revolutions (1962) begins with a well-known shot across the bow for philosophers of science:

History, if viewed as a repository for more than anecdote or chronology, could produce a decisive transformation in the image of science by which we are now possessed. (1962: 1)

The image Kuhn thought needed transforming was the a-historical, rational reconstruction sought by many of the Logical Positivists, though Carnap and other positivists were actually quite sympathetic to Kuhn’s views. (See the entry on the Vienna Circle.) Kuhn shares with others of his contemporaries, such as Feyerabend and Lakatos, a commitment to a more empirical approach to philosophy of science. Namely, the history of science provides important data, and necessary checks, for philosophy of science, including any theory of scientific method.

The history of science reveals, according to Kuhn, that scientific development occurs in alternating phases. During normal science, the members of the scientific community adhere to the paradigm in place. Their commitment to the paradigm means a commitment to the puzzles to be solved and the acceptable ways of solving them. Confidence in the paradigm remains so long as steady progress is made in solving the shared puzzles. Method in this normal phase operates within a disciplinary matrix (Kuhn’s later concept of a paradigm) which includes standards for problem solving, and defines the range of problems to which the method should be applied. An important part of a disciplinary matrix is the set of values which provide the norms and aims for scientific method. The main values that Kuhn identifies are prediction, problem solving, simplicity, consistency, and plausibility.

An important by-product of normal science is the accumulation of puzzles which cannot be solved with resources of the current paradigm. Once accumulation of these anomalies has reached some critical mass, it can trigger a communal shift to a new paradigm and a new phase of normal science. Importantly, the values that provide the norms and aims for scientific method may have transformed in the meantime. Method may therefore be relative to discipline, time or place.

Feyerabend also identified the aims of science as progress, but argued that any methodological prescription would only stifle that progress (Feyerabend 1988). His arguments are grounded in re-examining accepted “myths” about the history of science. Heroes of science, like Galileo, are shown to be just as reliant on rhetoric and persuasion as they are on reason and demonstration. Others, like Aristotle, are shown to be far more reasonable and far-reaching in their outlooks than they are given credit for. As a consequence, the only rule that could provide what he took to be sufficient freedom was the vacuous “anything goes”. More generally, even the methodological restriction that science is the best way to pursue knowledge, and to increase knowledge, is too restrictive. Feyerabend suggested instead that science might, in fact, be a threat to a free society, because it and its myth had become so dominant (Feyerabend 1978).

An even more fundamental kind of criticism was offered by several sociologists of science from the 1970s onwards who rejected the methodology of providing philosophical accounts for the rational development of science and sociological accounts of the irrational mistakes. Instead, they adhered to a symmetry thesis on which any causal explanation of how scientific knowledge is established needs to be symmetrical in explaining truth and falsity, rationality and irrationality, success and mistakes, by the same causal factors (see, e.g., Barnes and Bloor 1982, Bloor 1991). Movements in the Sociology of Science, like the Strong Programme, or in the social dimensions and causes of knowledge more generally led to extended and close examination of detailed case studies in contemporary science and its history. (See the entries on the social dimensions of scientific knowledge and social epistemology.) Well-known examinations by Latour and Woolgar (1979/1986), Knorr-Cetina (1981), Pickering (1984), Shapin and Schaffer (1985) seem to bear out that it was social ideologies (on a macro-scale) or individual interactions and circumstances (on a micro-scale) which were the primary causal factors in determining which beliefs gained the status of scientific knowledge. As they saw it therefore, explanatory appeals to scientific method were not empirically grounded.

A late, and largely unexpected, criticism of scientific method came from within science itself. Beginning in the early 2000s, a number of scientists attempting to replicate the results of published experiments could not do so. There may be a close conceptual connection between reproducibility and method. For example, if reproducibility means that the same scientific methods ought to produce the same result, and all scientific results ought to be reproducible, then whatever it takes to reproduce a scientific result ought to be called scientific method. Space limits us to the observation that, insofar as reproducibility is a desired outcome of proper scientific method, it is not strictly a part of scientific method. (See the entry on reproducibility of scientific results.)

By the close of the 20th century the search for the scientific method was flagging. Nola and Sankey (2000b) could introduce their volume on method by remarking that “For some, the whole idea of a theory of scientific method is yester-year’s debate …”.

Despite the many difficulties that philosophers encountered in trying to provide a clear methodology of confirmation (or refutation), important progress has still been made on understanding how observation can provide evidence for a given theory. Work in statistics has been crucial for understanding how theories can be tested empirically, and in recent decades a huge literature has developed that attempts to recast confirmation in Bayesian terms. Here these developments can be covered only briefly, and we refer to the entry on confirmation for further details and references.

Statistics has come to play an increasingly important role in the methodology of the experimental sciences from the 19th century onwards. At that time, statistics and probability theory took on a methodological role as an analysis of inductive inference, and attempts to ground the rationality of induction in the axioms of probability theory have continued throughout the 20th century and into the present. Developments in the theory of statistics itself, meanwhile, have had a direct and immense influence on the experimental method, including methods for measuring the uncertainty of observations such as the Method of Least Squares developed by Legendre and Gauss in the early 19th century, criteria for the rejection of outliers proposed by Peirce by the mid-19th century, and the significance tests developed by Gosset (a.k.a. “Student”), Fisher, Neyman & Pearson and others in the 1920s and 1930s (see, e.g., Swijtink 1987 for a brief historical overview; and also the entry on C.S. Peirce).
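The Method of Least Squares mentioned above can be made concrete with a short sketch (the data points below are invented for illustration): fitting a straight line to observations by choosing the intercept and slope that minimize the sum of squared residuals, via the closed-form normal equations.

```python
# Ordinary least squares for a straight line y = a + b*x, using the
# closed-form normal equations (slope = cov(x, y) / var(x)).

def least_squares_line(xs, ys):
    """Return (intercept, slope) minimizing the sum of squared residuals."""
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    sxy = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    sxx = sum((x - mean_x) ** 2 for x in xs)
    slope = sxy / sxx
    intercept = mean_y - slope * mean_x
    return intercept, slope

# Invented noisy observations of (roughly) y = 1 + 2x:
xs = [0.0, 1.0, 2.0, 3.0, 4.0]
ys = [1.1, 2.9, 5.2, 6.8, 9.1]
a, b = least_squares_line(xs, ys)
print(round(a, 2), round(b, 2))  # → 1.04 1.99
```

Measurement-uncertainty methods of this kind are exactly what the text describes as feeding back into experimental practice: the residuals left over after the fit quantify how far the observations scatter around the law being estimated.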

These developments within statistics then in turn led to a reflective discussion among both statisticians and philosophers of science on how to perceive the process of hypothesis testing: whether it was a rigorous statistical inference that could provide a numerical expression of the degree of confidence in the tested hypothesis, or if it should be seen as a decision between different courses of actions that also involved a value component. This led to a major controversy between Fisher on the one side and Neyman and Pearson on the other (see especially Fisher 1955, Neyman 1956 and Pearson 1955, and for analyses of the controversy, e.g., Howie 2002, Marks 2000, Lenhard 2006). On Fisher’s view, hypothesis testing was a methodology for when to accept or reject a statistical hypothesis, namely that a hypothesis should be rejected by evidence if this evidence would be unlikely relative to other possible outcomes, given the hypothesis were true. In contrast, on Neyman and Pearson’s view, the consequence of error also had to play a role when deciding between hypotheses. Introducing the distinction between the error of rejecting a true hypothesis (type I error) and accepting a false hypothesis (type II error), they argued that it depends on the consequences of the error to decide whether it is more important to avoid rejecting a true hypothesis or accepting a false one. Hence, Fisher aimed for a theory of inductive inference that enabled a numerical expression of confidence in a hypothesis. To him, the important point was the search for truth, not utility. In contrast, the Neyman-Pearson approach provided a strategy of inductive behaviour for deciding between different courses of action. Here, the important point was not whether a hypothesis was true, but whether one should act as if it was.
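The contrast can be illustrated with a toy significance test (the coin-tossing data and the alpha level are invented assumptions, not drawn from the historical debate). A Fisherian reports the p-value as a graded measure of evidence against the null hypothesis; a Neyman-Pearson tester fixes a significance level alpha in advance, reflecting the cost attached to a type I error, and simply decides how to act:

```python
from math import comb

def binom_tail(n, k, p=0.5):
    """One-sided p-value: P(X >= k) for X ~ Binomial(n, p)."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

# Fisher-style report: the p-value as a graded measure of evidence
# against the null hypothesis that the coin is fair.
n, k = 20, 15                     # 15 heads in 20 tosses (invented data)
p_value = binom_tail(n, k)

# Neyman-Pearson-style decision: fix alpha in advance (reflecting the
# cost of a type I error) and act on the comparison, true or not.
alpha = 0.05
decision = "reject" if p_value <= alpha else "accept"
print(round(p_value, 4), decision)  # → 0.0207 reject
```

The same number plays two roles: for Fisher it expresses how surprising the data would be were the null true; for Neyman and Pearson it is only an input to a pre-committed rule whose long-run error rates are controlled.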

Similar discussions are found in the philosophical literature. On the one side, Churchman (1948) and Rudner (1953) argued that because scientific hypotheses can never be completely verified, a complete analysis of the methods of scientific inference includes ethical judgments in which the scientists must decide whether the evidence is sufficiently strong or that the probability is sufficiently high to warrant the acceptance of the hypothesis, which again will depend on the importance of making a mistake in accepting or rejecting the hypothesis. Others, such as Jeffrey (1956) and Levi (1960) disagreed and instead defended a value-neutral view of science on which scientists should bracket their attitudes, preferences, temperament, and values when assessing the correctness of their inferences. For more details on this value-free ideal in the philosophy of science and its historical development, see Douglas (2009) and Howard (2003). For a broad set of case studies examining the role of values in science, see e.g. Elliott & Richards 2017.

In recent decades, philosophical discussions of the evaluation of probabilistic hypotheses by statistical inference have largely focused on Bayesianism that understands probability as a measure of a person’s degree of belief in an event, given the available information, and frequentism that instead understands probability as a long-run frequency of a repeatable event. Hence, for Bayesians probabilities refer to a state of knowledge, whereas for frequentists probabilities refer to frequencies of events (see, e.g., Sober 2008, chapter 1 for a detailed introduction to Bayesianism and frequentism as well as to likelihoodism). Bayesianism aims at providing a quantifiable, algorithmic representation of belief revision, where belief revision is a function of prior beliefs (i.e., background knowledge) and incoming evidence. Bayesianism employs a rule based on Bayes’ theorem, a theorem of the probability calculus which relates conditional probabilities. The probability that a particular hypothesis is true is interpreted as a degree of belief, or credence, of the scientist. There will also be a probability and a degree of belief that a hypothesis will be true conditional on a piece of evidence (an observation, say) being true. Bayesianism prescribes that it is rational for the scientist to update their belief in the hypothesis to that conditional probability should it turn out that the evidence is, in fact, observed (see, e.g., Sprenger & Hartmann 2019 for a comprehensive treatment of Bayesian philosophy of science). Originating in the work of Neyman and Pearson, frequentism aims at providing the tools for reducing long-run error rates, such as the error-statistical approach developed by Mayo (1996) that focuses on how experimenters can avoid both type I and type II errors by building up a repertoire of procedures that detect errors if and only if they are present.
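The Bayesian updating rule just described can be sketched in a few lines; the numerical degrees of belief below are invented for illustration only.

```python
def bayes_update(prior, likelihood_h, likelihood_not_h):
    """Posterior P(H|E) via Bayes' theorem, from the prior P(H) and the
    likelihoods P(E|H) and P(E|~H)."""
    evidence = likelihood_h * prior + likelihood_not_h * (1 - prior)
    return likelihood_h * prior / evidence

# Invented credences: a scientist's degree of belief in hypothesis H is
# 0.3; the observed evidence E was expected with probability 0.9 if H is
# true, but only 0.2 if H is false.  Conditionalizing on E:
posterior = bayes_update(prior=0.3, likelihood_h=0.9, likelihood_not_h=0.2)
print(round(posterior, 3))  # → 0.659
```

The update is exactly the conditionalization the text describes: the scientist’s new credence in the hypothesis is the old conditional probability of the hypothesis given the evidence, once that evidence is actually observed.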
Both Bayesianism and frequentism have developed over time, they are interpreted in different ways by their various proponents, and their relations to earlier criticisms of attempts at defining scientific method are seen differently by proponents and critics. The literature, surveys, reviews and criticism in this area are vast and the reader is referred to the entries on Bayesian epistemology and confirmation.

5. Method in Practice

Attention to scientific practice, as we have seen, is not itself new. However, the turn to practice in the philosophy of science of late can be seen as a correction to the pessimism with respect to method in philosophy of science in later parts of the 20th century, and as an attempted reconciliation between sociological and rationalist explanations of scientific knowledge. Much of this work sees method as detailed and context specific problem-solving procedures, and methodological analyses to be at the same time descriptive, critical and advisory (see Nickles 1987 for an exposition of this view). The following section surveys some of these practice-focused approaches. In this section we turn fully to topics rather than chronology.

5.1 Creative and exploratory practices

A problem with the distinction between the contexts of discovery and justification that figured so prominently in philosophy of science in the first half of the 20th century (see section 2) is that no such distinction can be clearly seen in scientific activity (see Arabatzis 2006). Thus, in recent decades, it has been recognized that the study of conceptual innovation and change should not be confined to psychology and sociology of science; conceptual innovation and change are also important aspects of scientific practice which philosophy of science should address (see also the entry on scientific discovery). Looking for the practices that drive conceptual innovation has led philosophers to examine both the reasoning practices of scientists and the wide realm of experimental practices that are not directed narrowly at testing hypotheses, that is, exploratory experimentation.

Examining the reasoning practices of historical and contemporary scientists, Nersessian (2008) has argued that new scientific concepts are constructed as solutions to specific problems by systematic reasoning, and that analogy, visual representation and thought-experimentation are among the important reasoning practices employed. These ubiquitous forms of reasoning are reliable—but also fallible—methods of conceptual development and change. On her account, model-based reasoning consists of cycles of construction, simulation, evaluation and adaption of models that serve as interim interpretations of the target problem to be solved. Often, this process will lead to modifications or extensions, and a new cycle of simulation and evaluation. However, Nersessian also emphasizes that

creative model-based reasoning cannot be applied as a simple recipe, is not always productive of solutions, and even its most exemplary usages can lead to incorrect solutions. (Nersessian 2008: 11)

Thus, while on the one hand she agrees with many previous philosophers that there is no logic of discovery, discoveries can derive from reasoned processes, such that a large and integral part of scientific practice is

the creation of concepts through which to comprehend, structure, and communicate about physical phenomena …. (Nersessian 1987: 11)

Similarly, work on heuristics for discovery and theory construction by scholars such as Darden (1991) and Bechtel & Richardson (1993) present science as problem solving and investigate scientific problem solving as a special case of problem-solving in general. Drawing largely on cases from the biological sciences, much of their focus has been on reasoning strategies for the generation, evaluation, and revision of mechanistic explanations of complex systems.

Addressing another aspect of the context distinction, namely the traditional view that the primary role of experiments is to test theoretical hypotheses according to the H-D model, other philosophers of science have argued for additional roles that experiments can play. The notion of exploratory experimentation was introduced to describe experiments driven by the desire to obtain empirical regularities and to develop concepts and classifications in which these regularities can be described (Steinle 1997, 2002; Burian 1997; Waters 2007). However the difference between theory driven experimentation and exploratory experimentation should not be seen as a sharp distinction. Theory driven experiments are not always directed at testing hypotheses, but may also be directed at various kinds of fact-gathering, such as determining numerical parameters. Vice versa, exploratory experiments are usually informed by theory in various ways and are therefore not theory-free. Instead, in exploratory experiments phenomena are investigated without first limiting the possible outcomes of the experiment on the basis of extant theory about the phenomena.

The development of high throughput instrumentation in molecular biology and neighbouring fields has given rise to a special type of exploratory experimentation that collects and analyses very large amounts of data, and these new ‘omics’ disciplines are often said to represent a break with the ideal of hypothesis-driven science (Burian 2007; Elliott 2007; Waters 2007; O’Malley 2007) and instead described as data-driven research (Leonelli 2012; Strasser 2012) or as a special kind of “convenience experimentation” in which many experiments are done simply because they are extraordinarily convenient to perform (Krohs 2012).

5.2 Computer methods and ‘new ways’ of doing science

The field of omics just described is possible because of the ability of computers to process, in a reasonable amount of time, the huge quantities of data required. Computers allow for more elaborate experimentation (higher speed, better filtering, more variables, sophisticated coordination and control), but also, through modelling and simulations, might constitute a form of experimentation themselves. Here, too, we can pose a version of the general question of method versus practice: does the practice of using computers fundamentally change scientific method, or merely provide a more efficient means of implementing standard methods?

Because computers can be used to automate measurements, quantifications, calculations, and statistical analyses where, for practical reasons, these operations cannot be otherwise carried out, many of the steps involved in reaching a conclusion on the basis of an experiment are now made inside a “black box”, without the direct involvement or awareness of a human. This has epistemological implications, regarding what we can know, and how we can know it. To have confidence in the results, computer methods are therefore subjected to tests of verification and validation.

The distinction between verification and validation is easiest to characterize in the case of computer simulations. In a typical computer simulation scenario computers are used to numerically integrate differential equations for which no analytic solution is available. The equations are part of the model the scientist uses to represent a phenomenon or system under investigation. Verifying a computer simulation means checking that the equations of the model are being correctly approximated. Validating a simulation means checking that the equations of the model are adequate for the inferences one wants to make on the basis of that model.
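A minimal sketch of what verification involves, assuming a toy model: forward-Euler integration of dy/dt = -y, whose exact solution exp(-t) is known, so we can check that the numerical approximation error shrinks as the step size is refined. (Validation, by contrast, would ask whether dy/dt = -y is an adequate model of the target system in the first place, which no amount of such checking can establish.)

```python
from math import exp

def euler(f, y0, t_end, n_steps):
    """Forward-Euler integration of dy/dt = f(t, y) from t = 0 to t_end."""
    h = t_end / n_steps
    t, y = 0.0, y0
    for _ in range(n_steps):
        y += h * f(t, y)
        t += h
    return y

# Verification check: against the known solution y(t) = exp(-t), the
# approximation error should shrink as the step size is refined.
f = lambda t, y: -y
err_coarse = abs(euler(f, 1.0, 1.0, 100) - exp(-1.0))
err_fine = abs(euler(f, 1.0, 1.0, 1000) - exp(-1.0))
print(err_fine < err_coarse)  # → True
```

In realistic simulations no exact solution is available, which is why verification in practice relies on indirect checks such as convergence studies like the one above.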

A number of issues related to computer simulations have been raised. The identification of validation and verification as the testing methods has been criticized. Oreskes et al. (1994) raise concerns that “validation”, because it suggests deductive inference, might lead to over-confidence in the results of simulations. The distinction itself is probably too clean, since actual practice in the testing of simulations mixes and moves back and forth between the two (Weissart 1997; Parker 2008a; Winsberg 2010). Computer simulations do seem to have a non-inductive character, given that the principles by which they operate are built in by the programmers, and any results of the simulation follow from those in-built principles in such a way that those results could, in principle, be deduced from the program code and its inputs. The status of simulations as experiments has therefore been examined (Kaufmann and Smarr 1993; Humphreys 1995; Hughes 1999; Norton and Suppe 2001). This literature considers the epistemology of these experiments: what we can learn by simulation, and also the kinds of justifications which can be given in applying that knowledge to the “real” world (Mayo 1996; Parker 2008b). As pointed out, part of the advantage of computer simulation derives from the fact that huge numbers of calculations can be carried out without requiring direct observation by the experimenter/simulator. At the same time, many of these calculations are approximations to the calculations which would be performed first-hand in an ideal situation. Both factors introduce uncertainties into the inferences drawn from what is observed in the simulation.

For many of the reasons described above, computer simulations do not seem to belong clearly to either the experimental or theoretical domain. Rather, they seem to crucially involve aspects of both. This has led some authors, such as Fox Keller (2003: 200), to argue that we ought to consider computer simulation a “qualitatively different way of doing science”. The literature in general tends to follow Kaufmann and Smarr (1993) in referring to computer simulation as a “third way” for scientific methodology (theoretical reasoning and experimental practice are the first two ways). It should also be noted that the debates around these issues have tended to focus on the form of computer simulation typical in the physical sciences, where models are based on dynamical equations. Other forms of simulation might not have the same problems, or have problems of their own (see the entry on computer simulations in science).

In recent years, the rapid development of machine learning techniques has prompted some scholars to suggest that the scientific method has become “obsolete” (Anderson 2008, Carrol and Goodstein 2009). This has resulted in an intense debate on the relative merit of data-driven and hypothesis-driven research (for samples, see e.g. Mazzocchi 2015 or Succi and Coveney 2018). For a detailed treatment of this topic, we refer to the entry on scientific research and big data.

6. Discourse on scientific method

Despite philosophical disagreements, the idea of the scientific method still figures prominently in contemporary discourse on many different topics, both within science and in society at large. Often, reference to scientific method is used in ways that convey either the legend of a single, universal method characteristic of all science, or grants to a particular method or set of methods privilege as a special ‘gold standard’, often with reference to particular philosophers to vindicate the claims. Discourse on scientific method also typically arises when there is a need to distinguish between science and other activities, or for justifying the special status conveyed to science. In these areas, the philosophical attempts at identifying a set of methods characteristic for scientific endeavors are closely related to the philosophy of science’s classical problem of demarcation (see the entry on science and pseudo-science) and to the philosophical analysis of the social dimension of scientific knowledge and the role of science in democratic society.

One of the settings in which the legend of a single, universal scientific method has been particularly strong is science education (see, e.g., Bauer 1992; McComas 1996; Wivagg & Allchin 2002).[5] Often, ‘the scientific method’ is presented in textbooks and educational web pages as a fixed four or five step procedure starting from observations and description of a phenomenon and progressing over formulation of a hypothesis which explains the phenomenon, designing and conducting experiments to test the hypothesis, analyzing the results, and ending with drawing a conclusion. Such references to a universal scientific method can be found in educational material at all levels of science education (Blachowicz 2009), and numerous studies have shown that the idea of a general and universal scientific method often forms part of both students’ and teachers’ conception of science (see, e.g., Aikenhead 1987; Osborne et al. 2003). In response, it has been argued that science education needs to focus more on teaching about the nature of science, although views have differed on whether this is best done through student-led investigations, contemporary cases, or historical cases (Allchin, Andersen & Nielsen 2014).

Although occasionally phrased with reference to the H-D method, important historical roots of the legend in science education of a single, universal scientific method are the American philosopher and psychologist Dewey’s account of inquiry in How We Think (1910) and the British mathematician Karl Pearson’s account of science in Grammar of Science (1892). On Dewey’s account, inquiry is divided into the five steps of

(i) a felt difficulty, (ii) its location and definition, (iii) suggestion of a possible solution, (iv) development by reasoning of the bearing of the suggestions, (v) further observation and experiment leading to its acceptance or rejection. (Dewey 1910: 72)

Similarly, on Pearson’s account, scientific investigations start with measurement of data and observation of their correlation and sequence from which scientific laws can be discovered with the aid of creative imagination. These laws have to be subject to criticism, and their final acceptance will have equal validity for “all normally constituted minds”. Both Dewey’s and Pearson’s accounts should be seen as generalized abstractions of inquiry and not restricted to the realm of science—although both Dewey and Pearson referred to their respective accounts as ‘the scientific method’.

Occasionally, scientists make sweeping statements about a simple and distinct scientific method, as exemplified by Feynman’s simplified version of a conjectures and refutations method presented, for example, in the last of his 1964 Cornell Messenger lectures.[6] However, just as often scientists have come to the same conclusion as recent philosophy of science that there is not any unique, easily described scientific method. For example, the physicist and Nobel Laureate Weinberg described in the paper “The Methods of Science … And Those By Which We Live” (1995) how

The fact that the standards of scientific success shift with time does not only make the philosophy of science difficult; it also raises problems for the public understanding of science. We do not have a fixed scientific method to rally around and defend. (1995: 8)

Interview studies with scientists on their conceptions of method show that scientists often find it hard to figure out whether available evidence confirms their hypothesis, and that there are no direct translations between general ideas about method and specific strategies to guide how research is conducted (Schickore & Hangel 2019, Hangel & Schickore 2017).

Reference to the scientific method has also often been used to argue for the scientific nature or special status of a particular activity. Philosophical positions that argue for a simple and unique scientific method as a criterion of demarcation, such as Popperian falsification, have often attracted practitioners who felt that they had a need to defend their domain of practice. For example, references to conjectures and refutation as the scientific method are abundant in much of the literature on complementary and alternative medicine (CAM)—alongside the competing position that CAM, as an alternative to conventional biomedicine, needs to develop its own methodology different from that of science.

Also within mainstream science, reference to the scientific method is used in arguments regarding the internal hierarchy of disciplines and domains. A frequently seen argument is that research based on the H-D method is superior to research based on induction from observations because in deductive inferences the conclusion follows necessarily from the premises. (See, e.g., Parascandola 1998 for an analysis of how this argument has been made to downgrade epidemiology compared to the laboratory sciences.) Similarly, based on an examination of the practices of major funding institutions such as the National Institutes of Health (NIH), the National Science Foundation (NSF) and the Biotechnology and Biological Sciences Research Council (BBSRC) in the UK, O’Malley et al. (2009) have argued that funding agencies seem to have a tendency to adhere to the view that the primary activity of science is to test hypotheses, while descriptive and exploratory research is seen as merely preparatory activities that are valuable only insofar as they fuel hypothesis-driven research.

In some areas of science, scholarly publications are structured in a way that may convey the impression of a neat and linear process of inquiry from stating a question, devising the methods by which to answer it, collecting the data, to drawing a conclusion from the analysis of data. For example, the codified format of publications in most biomedical journals known as the IMRAD format (Introduction, Methods, Results, And Discussion) is explicitly described by the journal editors as “not an arbitrary publication format but rather a direct reflection of the process of scientific discovery” (see the so-called “Vancouver Recommendations”, ICMJE 2013: 11). However, scientific publications do not in general reflect the process by which the reported scientific results were produced. For example, under the provocative title “Is the scientific paper a fraud?”, Medawar argued that scientific papers generally misrepresent how the results have been produced (Medawar 1963/1996). Similar views have been advanced by philosophers, historians and sociologists of science (Gilbert 1976; Holmes 1987; Knorr-Cetina 1981; Schickore 2008; Suppe 1998) who have argued that scientists’ experimental practices are messy and often do not follow any recognizable pattern. Publications of research results, they argue, are retrospective reconstructions of these activities that often do not preserve the temporal order or the logic of these activities, but are instead often constructed in order to screen off potential criticism (see Schickore 2008 for a review of this work).

Philosophical positions on the scientific method have also made it into the courtroom, especially in the US, where judges have drawn on philosophy of science in deciding when to confer special status on scientific expert testimony. A key case is Daubert v. Merrell Dow Pharmaceuticals (92–102, 509 U.S. 579, 1993). In its 1993 ruling, the Supreme Court argued that trial judges must ensure that expert testimony is reliable, and that in doing this the court must look at the expert’s methodology to determine whether the proffered evidence is actually scientific knowledge. Further, referring to the works of Popper and Hempel, the court stated that

ordinarily, a key question to be answered in determining whether a theory or technique is scientific knowledge … is whether it can be (and has been) tested. (Justice Blackmun, Daubert v. Merrell Dow Pharmaceuticals; see Other Internet Resources for a link to the opinion)

But as argued by Haack (2005a,b, 2010) and by Foster & Huber (1999), by equating the question of whether a piece of testimony is reliable with the question of whether it is scientific, as indicated by a special methodology, the court produced an inconsistent mixture of Popper’s and Hempel’s philosophies, and this has led to considerable confusion in subsequent case rulings that drew on the Daubert case (see Haack 2010 for a detailed exposition).

The difficulties around identifying the methods of science are also reflected in the difficulties of identifying scientific misconduct in the form of improper application of the method or methods of science. One of the first and most influential attempts at defining misconduct in science was the US definition from 1989 that defined misconduct as

fabrication, falsification, plagiarism, or other practices that seriously deviate from those that are commonly accepted within the scientific community . (Code of Federal Regulations, part 50, subpart A., August 8, 1989, italics added)

However, the “other practices that seriously deviate” clause was heavily criticized because it could be used to suppress creative or novel science. For example, the National Academy of Sciences stated in its report Responsible Science (1992) that it

wishes to discourage the possibility that a misconduct complaint could be lodged against scientists based solely on their use of novel or unorthodox research methods. (NAS: 27)

This clause was therefore later removed from the definition. For an entry into the key philosophical literature on conduct in science, see Shamoo & Resnik (2009).

The question of the source of the success of science has been at the core of philosophy since the beginning of modern science. If viewed as a matter of epistemology more generally, scientific method is a part of the entire history of philosophy. Over that time, science and whatever methods its practitioners may employ have changed dramatically. Today, many philosophers have taken up the banners of pluralism or of practice to focus on what are, in effect, fine-grained and contextually limited examinations of scientific method. Others hope to shift perspectives in order to provide a renewed general account of what characterizes the activity we call science.

One such perspective has been offered recently by Hoyningen-Huene (2008, 2013), who argues from the history of philosophy of science that after three lengthy phases of characterizing science by its method, we are now in a phase where the belief in the existence of a positive scientific method has eroded and what has been left to characterize science is only its fallibility. First was a phase from Plato and Aristotle up until the 17th century where the specificity of scientific knowledge was seen in its absolute certainty established by proof from evident axioms; next was a phase up to the mid-19th century in which the means to establish the certainty of scientific knowledge had been generalized to include inductive procedures as well. In the third phase, which lasted until the last decades of the 20th century, it was recognized that empirical knowledge was fallible, but it was still granted a special status due to its distinctive mode of production. But now in the fourth phase, according to Hoyningen-Huene, historical and philosophical studies have shown how “scientific methods with the characteristics as posited in the second and third phase do not exist” (2008: 168), and there is no longer any consensus among philosophers and historians of science about the nature of science. For Hoyningen-Huene, this is too negative a stance, and he therefore urges the question about the nature of science anew. His own answer to this question is that “scientific knowledge differs from other kinds of knowledge, especially everyday knowledge, primarily by being more systematic” (Hoyningen-Huene 2013: 14). Systematicity can have several different dimensions: among them are more systematic descriptions, explanations, predictions, defense of knowledge claims, epistemic connectedness, ideal of completeness, knowledge generation, representation of knowledge and critical discourse.
Hence, what characterizes science is the greater care in excluding possible alternative explanations, the more detailed elaboration with respect to data on which predictions are based, the greater care in detecting and eliminating sources of error, the more articulate connections to other pieces of knowledge, etc. On this position, what characterizes science is not that the methods employed are unique to science, but that the methods are more carefully employed.

Another, similar approach has been offered by Haack (2003). Like Hoyningen-Huene, she sets out from a dissatisfaction with the recent clash between what she calls Old Deferentialism and New Cynicism. The Old Deferentialist position is that science progressed inductively by accumulating true theories confirmed by empirical evidence, or deductively by testing conjectures against basic statements; the New Cynics’ position is that science has no epistemic authority and no uniquely rational method and is merely politics. Haack insists that, contrary to the views of the New Cynics, there are objective epistemic standards, and there is something epistemologically special about science, even though the Old Deferentialists pictured this in the wrong way. Instead, she offers a new Critical Commonsensist account on which standards of good, strong, supportive evidence and well-conducted, honest, thorough and imaginative inquiry are not exclusive to the sciences, but are the standards by which we judge all inquirers. In this sense, science does not differ in kind from other kinds of inquiry, but it may differ in the degree to which it requires broad and detailed background knowledge and a familiarity with a technical vocabulary that only specialists may possess.

  • Aikenhead, G.S., 1987, “High-school graduates’ beliefs about science-technology-society. III. Characteristics and limitations of scientific knowledge”, Science Education , 71(4): 459–487.
  • Allchin, D., H.M. Andersen and K. Nielsen, 2014, “Complementary Approaches to Teaching Nature of Science: Integrating Student Inquiry, Historical Cases, and Contemporary Cases in Classroom Practice”, Science Education , 98: 461–486.
  • Anderson, C., 2008, “The end of theory: The data deluge makes the scientific method obsolete”, Wired magazine , 16(7): 16–07
  • Arabatzis, T., 2006, “On the inextricability of the context of discovery and the context of justification”, in Revisiting Discovery and Justification , J. Schickore and F. Steinle (eds.), Dordrecht: Springer, pp. 215–230.
  • Barnes, J. (ed.), 1984, The Complete Works of Aristotle, Vols I and II , Princeton: Princeton University Press.
  • Barnes, B. and D. Bloor, 1982, “Relativism, Rationalism, and the Sociology of Knowledge”, in Rationality and Relativism , M. Hollis and S. Lukes (eds.), Cambridge: MIT Press, pp. 1–20.
  • Bauer, H.H., 1992, Scientific Literacy and the Myth of the Scientific Method , Urbana: University of Illinois Press.
  • Bechtel, W. and R.C. Richardson, 1993, Discovering complexity , Princeton, NJ: Princeton University Press.
  • Berkeley, G., 1734, The Analyst in De Motu and The Analyst: A Modern Edition with Introductions and Commentary , D. Jesseph (trans. and ed.), Dordrecht: Kluwer Academic Publishers, 1992.
  • Blachowicz, J., 2009, “How science textbooks treat scientific method: A philosopher’s perspective”, The British Journal for the Philosophy of Science , 60(2): 303–344.
  • Bloor, D., 1991, Knowledge and Social Imagery , Chicago: University of Chicago Press, 2nd edition.
  • Boyle, R., 1682, New experiments physico-mechanical, touching the air , Printed by Miles Flesher for Richard Davis, bookseller in Oxford.
  • Bridgman, P.W., 1927, The Logic of Modern Physics , New York: Macmillan.
  • –––, 1956, “The Methodological Character of Theoretical Concepts”, in The Foundations of Science and the Concepts of Science and Psychology , Herbert Feigl and Michael Scriven (eds.), Minnesota: University of Minneapolis Press, pp. 38–76.
  • Burian, R., 1997, “Exploratory Experimentation and the Role of Histochemical Techniques in the Work of Jean Brachet, 1938–1952”, History and Philosophy of the Life Sciences , 19(1): 27–45.
  • –––, 2007, “On microRNA and the need for exploratory experimentation in post-genomic molecular biology”, History and Philosophy of the Life Sciences , 29(3): 285–311.
  • Carnap, R., 1928, Der logische Aufbau der Welt , Berlin: Bernary, transl. by R.A. George, The Logical Structure of the World , Berkeley: University of California Press, 1967.
  • –––, 1956, “The methodological character of theoretical concepts”, Minnesota studies in the philosophy of science , 1: 38–76.
  • Carrol, S., and D. Goodstein, 2009, “Defining the scientific method”, Nature Methods , 6: 237.
  • Churchman, C.W., 1948, “Science, Pragmatics, Induction”, Philosophy of Science , 15(3): 249–268.
  • Cooper, J. (ed.), 1997, Plato: Complete Works , Indianapolis: Hackett.
  • Darden, L., 1991, Theory Change in Science: Strategies from Mendelian Genetics , Oxford: Oxford University Press
  • Dewey, J., 1910, How we think , New York: Dover Publications (reprinted 1997).
  • Douglas, H., 2009, Science, Policy, and the Value-Free Ideal , Pittsburgh: University of Pittsburgh Press.
  • Dupré, J., 2004, “Miracle of Monism ”, in Naturalism in Question , Mario De Caro and David Macarthur (eds.), Cambridge, MA: Harvard University Press, pp. 36–58.
  • Elliott, K.C., 2007, “Varieties of exploratory experimentation in nanotoxicology”, History and Philosophy of the Life Sciences , 29(3): 311–334.
  • Elliott, K. C., and T. Richards (eds.), 2017, Exploring inductive risk: Case studies of values in science , Oxford: Oxford University Press.
  • Falcon, Andrea, 2005, Aristotle and the science of nature: Unity without uniformity , Cambridge: Cambridge University Press.
  • Feyerabend, P., 1978, Science in a Free Society , London: New Left Books.
  • –––, 1988, Against Method , London: Verso, 2nd edition.
  • Fisher, R.A., 1955, “Statistical Methods and Scientific Induction”, Journal of The Royal Statistical Society. Series B (Methodological) , 17(1): 69–78.
  • Foster, K. and P.W. Huber, 1999, Judging Science. Scientific Knowledge and the Federal Courts , Cambridge: MIT Press.
  • Fox Keller, E., 2003, “Models, Simulation, and ‘computer experiments’”, in The Philosophy of Scientific Experimentation , H. Radder (ed.), Pittsburgh: Pittsburgh University Press, 198–215.
  • Gilbert, G., 1976, “The transformation of research findings into scientific knowledge”, Social Studies of Science , 6: 281–306.
  • Gimbel, S., 2011, Exploring the Scientific Method , Chicago: University of Chicago Press.
  • Goodman, N., 1965, Fact, Fiction, and Forecast , Indianapolis: Bobbs-Merrill.
  • Haack, S., 1995, “Science is neither sacred nor a confidence trick”, Foundations of Science , 1(3): 323–335.
  • –––, 2003, Defending science—within reason , Amherst: Prometheus.
  • –––, 2005a, “Disentangling Daubert: an epistemological study in theory and practice”, Journal of Philosophy, Science and Law , 5, available online. doi:10.5840/jpsl2005513
  • –––, 2005b, “Trial and error: The Supreme Court’s philosophy of science”, American Journal of Public Health , 95: S66-S73.
  • –––, 2010, “Federal Philosophy of Science: A Deconstruction-and a Reconstruction”, NYUJL & Liberty , 5: 394.
  • Hangel, N. and J. Schickore, 2017, “Scientists’ conceptions of good research practice”, Perspectives on Science , 25(6): 766–791
  • Harper, W.L., 2011, Isaac Newton’s Scientific Method: Turning Data into Evidence about Gravity and Cosmology , Oxford: Oxford University Press.
  • Hempel, C., 1950, “Problems and Changes in the Empiricist Criterion of Meaning”, Revue Internationale de Philosophie , 41(11): 41–63.
  • –––, 1951, “The Concept of Cognitive Significance: A Reconsideration”, Proceedings of the American Academy of Arts and Sciences , 80(1): 61–77.
  • –––, 1965, Aspects of scientific explanation and other essays in the philosophy of science , New York–London: Free Press.
  • –––, 1966, Philosophy of Natural Science , Englewood Cliffs: Prentice-Hall.
  • Holmes, F.L., 1987, “Scientific writing and scientific discovery”, Isis , 78(2): 220–235.
  • Howard, D., 2003, “Two left turns make a right: On the curious political career of North American philosophy of science at midcentury”, in Logical Empiricism in North America , G.L. Hardcastle & A.W. Richardson (eds.), Minneapolis: University of Minnesota Press, pp. 25–93.
  • Hoyningen-Huene, P., 2008, “Systematicity: The nature of science”, Philosophia , 36(2): 167–180.
  • –––, 2013, Systematicity. The Nature of Science , Oxford: Oxford University Press.
  • Howie, D., 2002, Interpreting probability: Controversies and developments in the early twentieth century , Cambridge: Cambridge University Press.
  • Hughes, R., 1999, “The Ising Model, Computer Simulation, and Universal Physics”, in Models as Mediators , M. Morgan and M. Morrison (eds.), Cambridge: Cambridge University Press, pp. 97–145
  • Hume, D., 1739, A Treatise of Human Nature , D. Fate Norton and M.J. Norton (eds.), Oxford: Oxford University Press, 2000.
  • Humphreys, P., 1995, “Computational science and scientific method”, Minds and Machines , 5(1): 499–512.
  • ICMJE, 2013, “Recommendations for the Conduct, Reporting, Editing, and Publication of Scholarly Work in Medical Journals”, International Committee of Medical Journal Editors, available online, accessed August 13, 2014.
  • Jeffrey, R.C., 1956, “Valuation and Acceptance of Scientific Hypotheses”, Philosophy of Science , 23(3): 237–246.
  • Kaufmann, W.J., and L.L. Smarr, 1993, Supercomputing and the Transformation of Science , New York: Scientific American Library.
  • Knorr-Cetina, K., 1981, The Manufacture of Knowledge , Oxford: Pergamon Press.
  • Krohs, U., 2012, “Convenience experimentation”, Studies in History and Philosophy of Biological and Biomedical Sciences , 43: 52–57.
  • Kuhn, T.S., 1962, The Structure of Scientific Revolutions , Chicago: University of Chicago Press
  • Latour, B. and S. Woolgar, 1986, Laboratory Life: The Construction of Scientific Facts , Princeton: Princeton University Press, 2nd edition.
  • Laudan, L., 1968, “Theories of scientific method from Plato to Mach”, History of Science , 7(1): 1–63.
  • Lenhard, J., 2006, “Models and statistical inference: The controversy between Fisher and Neyman-Pearson”, The British Journal for the Philosophy of Science , 57(1): 69–91.
  • Leonelli, S., 2012, “Making Sense of Data-Driven Research in the Biological and the Biomedical Sciences”, Studies in the History and Philosophy of the Biological and Biomedical Sciences , 43(1): 1–3.
  • Levi, I., 1960, “Must the scientist make value judgments?”, Philosophy of Science , 57(11): 345–357
  • Lindley, D., 1991, Theory Change in Science: Strategies from Mendelian Genetics , Oxford: Oxford University Press.
  • Lipton, P., 2004, Inference to the Best Explanation , London: Routledge, 2nd edition.
  • Marks, H.M., 2000, The progress of experiment: science and therapeutic reform in the United States, 1900–1990 , Cambridge: Cambridge University Press.
  • Mazzochi, F., 2015, “Could Big Data be the end of theory in science?”, EMBO reports , 16: 1250–1255.
  • Mayo, D.G., 1996, Error and the Growth of Experimental Knowledge , Chicago: University of Chicago Press.
  • McComas, W.F., 1996, “Ten myths of science: Reexamining what we think we know about the nature of science”, School Science and Mathematics , 96(1): 10–16.
  • Medawar, P.B., 1963/1996, “Is the scientific paper a fraud”, in The Strange Case of the Spotted Mouse and Other Classic Essays on Science , Oxford: Oxford University Press, 33–39.
  • Mill, J.S., 1963, Collected Works of John Stuart Mill , J. M. Robson (ed.), Toronto: University of Toronto Press
  • NAS, 1992, Responsible Science: Ensuring the integrity of the research process , Washington DC: National Academy Press.
  • Nersessian, N.J., 1987, “A cognitive-historical approach to meaning in scientific theories”, in The process of science , N. Nersessian (ed.), Berlin: Springer, pp. 161–177.
  • –––, 2008, Creating Scientific Concepts , Cambridge: MIT Press.
  • Newton, I., 1726, Philosophiae naturalis Principia Mathematica (3rd edition), in The Principia: Mathematical Principles of Natural Philosophy: A New Translation , I.B. Cohen and A. Whitman (trans.), Berkeley: University of California Press, 1999.
  • –––, 1704, Opticks or A Treatise of the Reflections, Refractions, Inflections & Colors of Light , New York: Dover Publications, 1952.
  • Neyman, J., 1956, “Note on an Article by Sir Ronald Fisher”, Journal of the Royal Statistical Society. Series B (Methodological) , 18: 288–294.
  • Nickles, T., 1987, “Methodology, heuristics, and rationality”, in Rational changes in science: Essays on Scientific Reasoning , J.C. Pitt (ed.), Berlin: Springer, pp. 103–132.
  • Nicod, J., 1924, Le problème logique de l’induction , Paris: Alcan. (Engl. transl. “The Logical Problem of Induction”, in Foundations of Geometry and Induction , London: Routledge, 2000.)
  • Nola, R. and H. Sankey, 2000a, “A selective survey of theories of scientific method”, in Nola and Sankey 2000b: 1–65.
  • –––, 2000b, After Popper, Kuhn and Feyerabend. Recent Issues in Theories of Scientific Method , London: Springer.
  • –––, 2007, Theories of Scientific Method , Stocksfield: Acumen.
  • Norton, S., and F. Suppe, 2001, “Why atmospheric modeling is good science”, in Changing the Atmosphere: Expert Knowledge and Environmental Governance , C. Miller and P. Edwards (eds.), Cambridge, MA: MIT Press, 88–133.
  • O’Malley, M., 2007, “Exploratory experimentation and scientific practice: Metagenomics and the proteorhodopsin case”, History and Philosophy of the Life Sciences , 29(3): 337–360.
  • O’Malley, M., C. Haufe, K. Elliot, and R. Burian, 2009, “Philosophies of Funding”, Cell , 138: 611–615.
  • Oreskes, N., K. Shrader-Frechette, and K. Belitz, 1994, “Verification, Validation and Confirmation of Numerical Models in the Earth Sciences”, Science , 263(5147): 641–646.
  • Osborne, J., S. Simon, and S. Collins, 2003, “Attitudes towards science: a review of the literature and its implications”, International Journal of Science Education , 25(9): 1049–1079.
  • Parascandola, M., 1998, “Epidemiology—2nd-Rate Science”, Public Health Reports , 113(4): 312–320.
  • Parker, W., 2008a, “Franklin, Holmes and the Epistemology of Computer Simulation”, International Studies in the Philosophy of Science , 22(2): 165–83.
  • –––, 2008b, “Computer Simulation through an Error-Statistical Lens”, Synthese , 163(3): 371–84.
  • Pearson, K. 1892, The Grammar of Science , London: J.M. Dents and Sons, 1951
  • Pearson, E.S., 1955, “Statistical Concepts in Their Relation to Reality”, Journal of the Royal Statistical Society , B, 17: 204–207.
  • Pickering, A., 1984, Constructing Quarks: A Sociological History of Particle Physics , Edinburgh: Edinburgh University Press.
  • Popper, K.R., 1959, The Logic of Scientific Discovery , London: Routledge, 2002
  • –––, 1963, Conjectures and Refutations , London: Routledge, 2002.
  • –––, 1985, Unended Quest: An Intellectual Autobiography , La Salle: Open Court Publishing Co..
  • Rudner, R., 1953, “The Scientist Qua Scientist Making Value Judgments”, Philosophy of Science , 20(1): 1–6.
  • Rudolph, J.L., 2005, “Epistemology for the masses: The origin of ‘The Scientific Method’ in American Schools”, History of Education Quarterly , 45(3): 341–376
  • Schickore, J., 2008, “Doing science, writing science”, Philosophy of Science , 75: 323–343.
  • Schickore, J. and N. Hangel, 2019, “‘It might be this, it should be that…’ uncertainty and doubt in day-to-day science practice”, European Journal for Philosophy of Science , 9(2): 31. doi:10.1007/s13194-019-0253-9
  • Shamoo, A.E. and D.B. Resnik, 2009, Responsible Conduct of Research , Oxford: Oxford University Press.
  • Shank, J.B., 2008, The Newton Wars and the Beginning of the French Enlightenment , Chicago: The University of Chicago Press.
  • Shapin, S. and S. Schaffer, 1985, Leviathan and the air-pump , Princeton: Princeton University Press.
  • Smith, G.E., 2002, “The Methodology of the Principia”, in The Cambridge Companion to Newton , I.B. Cohen and G.E. Smith (eds.), Cambridge: Cambridge University Press, 138–173.
  • Snyder, L.J., 1997a, “Discoverers’ Induction”, Philosophy of Science , 64: 580–604.
  • –––, 1997b, “The Mill-Whewell Debate: Much Ado About Induction”, Perspectives on Science , 5: 159–198.
  • –––, 1999, “Renovating the Novum Organum: Bacon, Whewell and Induction”, Studies in History and Philosophy of Science , 30: 531–557.
  • Sober, E., 2008, Evidence and Evolution. The logic behind the science , Cambridge: Cambridge University Press
  • Sprenger, J. and S. Hartmann, 2019, Bayesian philosophy of science , Oxford: Oxford University Press.
  • Steinle, F., 1997, “Entering New Fields: Exploratory Uses of Experimentation”, Philosophy of Science (Proceedings), 64: S65–S74.
  • –––, 2002, “Experiments in History and Philosophy of Science”, Perspectives on Science , 10(4): 408–432.
  • Strasser, B.J., 2012, “Data-driven sciences: From wonder cabinets to electronic databases”, Studies in History and Philosophy of Science Part C: Studies in History and Philosophy of Biological and Biomedical Sciences , 43(1): 85–87.
  • Succi, S. and P.V. Coveney, 2018, “Big data: the end of the scientific method?”, Philosophical Transactions of the Royal Society A , 377: 20180145. doi:10.1098/rsta.2018.0145
  • Suppe, F., 1998, “The Structure of a Scientific Paper”, Philosophy of Science , 65(3): 381–405.
  • Swijtink, Z.G., 1987, “The objectification of observation: Measurement and statistical methods in the nineteenth century”, in The probabilistic revolution. Ideas in History, Vol. 1 , L. Kruger (ed.), Cambridge MA: MIT Press, pp. 261–285.
  • Waters, C.K., 2007, “The nature and context of exploratory experimentation: An introduction to three case studies of exploratory research”, History and Philosophy of the Life Sciences , 29(3): 275–284.
  • Weinberg, S., 1995, “The methods of science… and those by which we live”, Academic Questions , 8(2): 7–13.
  • Weissert, T., 1997, The Genesis of Simulation in Dynamics: Pursuing the Fermi-Pasta-Ulam Problem , New York: Springer Verlag.
  • Harvey, W., 1628, Exercitatio Anatomica de Motu Cordis et Sanguinis in Animalibus , in On the Motion of the Heart and Blood in Animals , R. Willis (trans.), Buffalo: Prometheus Books, 1993.
  • Winsberg, E., 2010, Science in the Age of Computer Simulation , Chicago: University of Chicago Press.
  • Wivagg, D. & D. Allchin, 2002, “The Dogma of the Scientific Method”, The American Biology Teacher , 64(9): 645–646
  • Blackmun opinion , in Daubert v. Merrell Dow Pharmaceuticals (92–102), 509 U.S. 579 (1993).


Copyright © 2021 by Brian Hepburn <brian.hepburn@wichita.edu> and Hanne Andersen <hanne.andersen@ind.ku.dk>




Social Sci LibreTexts

1.5: Types of Scientific Research


  • Anol Bhattacherjee
  • University of South Florida via Global Text Project


Depending on the purpose of research, scientific research projects can be grouped into three types: exploratory, descriptive, and explanatory. Exploratory research is often conducted in new areas of inquiry, where the goals of the research are: (1) to scope out the magnitude or extent of a particular phenomenon, problem, or behavior, (2) to generate some initial ideas (or “hunches”) about that phenomenon, or (3) to test the feasibility of undertaking a more extensive study of that phenomenon. For instance, if the citizens of a country are generally dissatisfied with governmental policies during an economic recession, exploratory research may be directed at measuring the extent of citizens’ dissatisfaction, understanding how such dissatisfaction is manifested (such as in the frequency of public protests), and identifying the presumed causes of that dissatisfaction (such as ineffective government policies on inflation, interest rates, unemployment, or taxes). Such research may include examination of publicly reported figures, such as estimates of economic indicators like gross domestic product (GDP), unemployment, and the consumer price index archived by third-party sources, insights obtained through interviews of experts, eminent economists, or key government officials, and/or lessons derived from studying historical examples of dealing with similar problems. This research may not lead to a very accurate understanding of the target problem, but it may be worthwhile in scoping out the nature and extent of the problem and may serve as a useful precursor to more in-depth research.

Descriptive research is directed at making careful observations and detailed documentation of a phenomenon of interest. These observations must be based on the scientific method (i.e., they must be replicable, precise, etc.), and they are therefore more reliable than casual observations by untrained people. Examples of descriptive research are the tabulation of demographic statistics by the United States Census Bureau or of employment statistics by the Bureau of Labor Statistics, both of which use the same or similar instruments for estimating employment by sector or population growth by ethnicity across multiple employment surveys or censuses. If any changes are made to the measuring instruments, estimates are provided with and without the changed instrumentation to allow readers to make a fair before-and-after comparison of population or employment trends. Other descriptive research may include chronicling ethnographic reports of gang activities among adolescent youth in urban populations, the persistence or evolution of religious, cultural, or ethnic practices in select communities, and the role of technologies such as Twitter and instant messaging in the spread of democracy movements in Middle Eastern countries.

Explanatory research seeks explanations for observed phenomena, problems, or behaviors. While descriptive research examines the what, where, and when of a phenomenon, explanatory research seeks answers to why and how questions. It attempts to “connect the dots” in research by identifying the causal factors and outcomes of the target phenomenon. Examples include understanding the reasons behind adolescent crime or gang violence, with the goal of prescribing strategies to overcome such societal ailments. Most academic or doctoral research falls into the explanatory category, though some amount of exploratory and/or descriptive research may also be needed during the initial phases of academic research. Seeking explanations for observed events requires strong theoretical and interpretation skills, along with intuition, insight, and personal experience. Those who can do it well are among the most prized scientists in their disciplines.


What is Research: Definition, Methods, Types & Examples


The search for knowledge is closely linked to the object of study; that is, to the reconstruction of the facts that provide an explanation of an observed event, one that at first sight may be considered a problem. It is very human to seek answers and satisfy our curiosity. Let’s talk about research.

Content Index

  • What is Research?
  • What are the characteristics of research?
  • Comparative analysis chart
  • Qualitative methods
  • Quantitative methods
  • 8 tips for conducting accurate research

Research is the careful consideration of study regarding a particular concern or research problem using scientific methods. According to the American sociologist Earl Robert Babbie, “research is a systematic inquiry to describe, explain, predict, and control the observed phenomenon. It involves inductive and deductive methods.”

Inductive methods analyze an observed event, while deductive methods verify the observed event. Inductive approaches are associated with qualitative research, and deductive methods are more commonly associated with quantitative analysis.

Research is conducted with a purpose to:

  • Identify potential and new customers
  • Understand existing customers
  • Set pragmatic goals
  • Develop productive market strategies
  • Address business challenges
  • Put together a business expansion plan
  • Identify new business opportunities
What are the characteristics of research?

  • Good research follows a systematic approach to capture accurate data. Researchers need to practice ethics and a code of conduct while making observations or drawing conclusions.
  • The analysis is based on logical reasoning and involves both inductive and deductive methods.
  • Real-time data and knowledge is derived from actual observations in natural settings.
  • There is an in-depth analysis of all data collected so that there are no anomalies associated with it.
  • It creates a path for generating new questions. Existing data helps create more research opportunities.
  • It is analytical and uses all the available data so that there is no ambiguity in inference.
  • Accuracy is one of the most critical aspects of research. The information must be accurate and correct. For example, laboratories provide a controlled environment to collect data. Accuracy is measured in the instruments used, the calibrations of instruments or tools, and the experiment’s final result.

What is the purpose of research?

There are three main purposes:

  • Exploratory: As the name suggests, researchers conduct exploratory studies to explore a group of questions. The answers and analytics may not offer a conclusion to the perceived problem. It is undertaken to handle new problem areas that haven’t been explored before. This exploratory data analysis process lays the foundation for more conclusive data collection and analysis.


  • Descriptive: It focuses on expanding knowledge on current issues through a process of data collection. Descriptive research describes the behavior of a sample population. Only one variable is required to conduct the study. The three primary purposes of descriptive studies are describing, explaining, and validating the findings. For example, a study conducted to determine whether top-level management leaders in the 21st century possess the moral right to receive a considerable sum of money from company profits.


  • Explanatory: Causal research or explanatory research is conducted to understand the impact of specific changes in existing standard procedures. Running experiments is the most popular form. For example, a study that is conducted to understand the effect of rebranding on customer loyalty.

Here is a comparative analysis chart for a better understanding:

It begins by asking the right questions and choosing an appropriate method to investigate the problem. After collecting answers to your questions, you can analyze the findings or observations to draw reasonable conclusions.

When it comes to customers and market studies, the more thorough your questions, the better the analysis. You get essential insights into brand perception and product needs by thoroughly collecting customer data through surveys and questionnaires. You can use this data to make smart decisions about your marketing strategies to position your business effectively.

To make sense of your study and get insights faster, it helps to use a research repository as a single source of truth in your organization and manage your research data in one centralized data repository.

Types of research methods and Examples


Research methods are broadly classified as Qualitative and Quantitative.

Both methods have distinctive properties and data collection methods.

Qualitative research is a method that collects data using conversational methods, usually open-ended questions. The responses collected are essentially non-numerical. This method helps a researcher understand what participants think and why they think in a particular way.

Types of qualitative methods include:

  • One-to-one Interview
  • Focus Groups
  • Ethnographic studies
  • Text Analysis

Quantitative methods deal with numbers and measurable forms. They use a systematic way of investigating events or data. They answer questions to justify relationships with measurable variables to either explain, predict, or control a phenomenon.

Types of quantitative methods include:

  • Survey research
  • Descriptive research
  • Correlational research


Remember, research is only valuable and useful when it is valid, accurate, and reliable. Incorrect results can lead to customer churn and a decrease in sales.

It is essential to ensure that your data is:

  • Valid – founded, logical, rigorous, and impartial.
  • Accurate – free of errors and including required details.
  • Reliable – other people who investigate in the same way can produce similar results.
  • Timely – current and collected within an appropriate time frame.
  • Complete – includes all the data you need to support your business decisions.

8 tips for conducting accurate research

  • Identify the main trends, issues, opportunities, and problems you observe. Write a sentence describing each one.
  • Keep track of the frequency with which each of the main findings appears.
  • Make a list of your findings from the most common to the least common.
  • Evaluate a list of the strengths, weaknesses, opportunities, and threats identified in a SWOT analysis.
  • Prepare conclusions and recommendations about your study.
  • Act on your strategies.
  • Look for gaps in the information, and consider doing additional inquiry if necessary.
  • Plan to review the results and consider efficient methods to analyze and interpret them.

Review your goals before making any conclusions about your study. Remember how the process you have completed and the data you have gathered help answer your questions. Ask yourself if what your analysis revealed facilitates the identification of your conclusions and recommendations.


AlphaFold 3 predicts the structure and interactions of all of life’s molecules

May 08, 2024


Introducing AlphaFold 3, a new AI model developed by Google DeepMind and Isomorphic Labs. By accurately predicting the structure of proteins, DNA, RNA, ligands and more, and how they interact, we hope it will transform our understanding of the biological world and drug discovery.


Inside every plant, animal and human cell are billions of molecular machines. They’re made up of proteins, DNA and other molecules, but no single piece works on its own. Only by seeing how they interact together, across millions of types of combinations, can we start to truly understand life’s processes.

In a paper published in Nature, we introduce AlphaFold 3, a revolutionary model that can predict the structure and interactions of all life’s molecules with unprecedented accuracy. For the interactions of proteins with other molecule types we see at least a 50% improvement compared with existing prediction methods, and for some important categories of interaction we have doubled prediction accuracy.

We hope AlphaFold 3 will help transform our understanding of the biological world and drug discovery. Scientists can access the majority of its capabilities, for free, through our newly launched AlphaFold Server, an easy-to-use research tool. To build on AlphaFold 3’s potential for drug design, Isomorphic Labs is already collaborating with pharmaceutical companies to apply it to real-world drug design challenges and, ultimately, develop new life-changing treatments for patients.

Our new model builds on the foundations of AlphaFold 2, which in 2020 made a fundamental breakthrough in protein structure prediction. So far, millions of researchers globally have used AlphaFold 2 to make discoveries in areas including malaria vaccines, cancer treatments and enzyme design. AlphaFold has been cited more than 20,000 times and its scientific impact recognized through many prizes, most recently the Breakthrough Prize in Life Sciences. AlphaFold 3 takes us beyond proteins to a broad spectrum of biomolecules. This leap could unlock more transformative science, from developing biorenewable materials and more resilient crops, to accelerating drug design and genomics research.

7PNM - Spike protein of a common cold virus (Coronavirus OC43): AlphaFold 3’s structural prediction for a spike protein (blue) of a cold virus as it interacts with antibodies (turquoise) and simple sugars (yellow), accurately matches the true structure (gray). The animation shows the protein interacting with an antibody, then a sugar. Advancing our knowledge of such immune-system processes helps better understand coronaviruses, including COVID-19, raising possibilities for improved treatments.

How AlphaFold 3 reveals life’s molecules

Given an input list of molecules, AlphaFold 3 generates their joint 3D structure, revealing how they all fit together. It models large biomolecules such as proteins, DNA and RNA, as well as small molecules, also known as ligands — a category encompassing many drugs. Furthermore, AlphaFold 3 can model chemical modifications to these molecules which control the healthy functioning of cells and which, when disrupted, can lead to disease.

AlphaFold 3’s capabilities come from its next-generation architecture and training that now covers all of life’s molecules. At the core of the model is an improved version of our Evoformer module — a deep learning architecture that underpinned AlphaFold 2’s incredible performance. After processing the inputs, AlphaFold 3 assembles its predictions using a diffusion network, akin to those found in AI image generators. The diffusion process starts with a cloud of atoms, and over many steps converges on its final, most accurate molecular structure.
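The diffusion idea described above can be illustrated with a toy denoising loop. This is not AlphaFold 3's actual architecture (which uses a trained neural network over atom coordinates); it is a minimal sketch, assuming an invented 1-D "structure," a hand-picked step size and a linearly decaying noise schedule, of how a diffusion process starts from a random cloud and converges over many steps.

```python
import random

def toy_diffusion(target, steps=200, seed=0):
    """Illustrative denoising loop: start from random 'atom' positions
    and refine them toward a target structure over many small steps."""
    rng = random.Random(seed)
    # Start with a random "cloud of atoms" (here, 1-D coordinates).
    cloud = [rng.gauss(0.0, 10.0) for _ in target]
    for step in range(steps):
        noise_scale = 1.0 - step / steps  # noise shrinks as we converge
        cloud = [
            x + 0.1 * (t - x) + rng.gauss(0.0, 0.05 * noise_scale)
            for x, t in zip(cloud, target)
        ]
    return cloud

target = [1.0, -2.0, 0.5]  # invented "true structure" for illustration
final = toy_diffusion(target)
print([round(x, 2) for x in final])  # close to the target coordinates
```

In the real model, the pull toward the target is replaced by a learned denoising network, but the overall shape — noisy cloud in, refined structure out after many steps — is the same.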

AlphaFold 3’s predictions of molecular interactions surpass the accuracy of all existing systems. As a single model that computes entire molecular complexes in a holistic way, it’s uniquely able to unify scientific insights.

7R6R - DNA binding protein: AlphaFold 3’s prediction for a molecular complex featuring a protein (blue) bound to a double helix of DNA (pink) is a near-perfect match to the true molecular structure discovered through painstaking experiments (gray).

Leading drug discovery at Isomorphic Labs

AlphaFold 3 creates capabilities for drug design with predictions for molecules commonly used in drugs, such as ligands and antibodies, that bind to proteins to change how they interact in human health and disease.

AlphaFold 3 achieves unprecedented accuracy in predicting drug-like interactions, including the binding of proteins with ligands and antibodies with their target proteins. AlphaFold 3 is 50% more accurate than the best traditional methods on the PoseBusters benchmark without needing the input of any structural information, making AlphaFold 3 the first AI system to surpass physics-based tools for biomolecular structure prediction. The ability to predict antibody-protein binding is critical to understanding aspects of the human immune response and the design of new antibodies — a growing class of therapeutics.

Using AlphaFold 3 in combination with a complementary suite of in-house AI models, Isomorphic Labs is working on drug design for internal projects as well as with pharmaceutical partners. Isomorphic Labs is using AlphaFold 3 to accelerate and improve the success of drug design — by helping understand how to approach new disease targets, and developing novel ways to pursue existing ones that were previously out of reach.

AlphaFold Server: A free and easy-to-use research tool

8AW3 - RNA modifying protein: AlphaFold 3’s prediction for a molecular complex featuring a protein (blue), a strand of RNA (purple), and two ions (yellow) closely matches the true structure (gray). This complex is involved with the creation of other proteins — a cellular process fundamental to life and health.

Google DeepMind’s newly launched AlphaFold Server is the most accurate tool in the world for predicting how proteins interact with other molecules throughout the cell. It is a free platform that scientists around the world can use for non-commercial research. With just a few clicks, biologists can harness the power of AlphaFold 3 to model structures composed of proteins, DNA, RNA and a selection of ligands, ions and chemical modifications.

AlphaFold Server helps scientists make novel hypotheses to test in the lab, speeding up workflows and enabling further innovation. Our platform gives researchers an accessible way to generate predictions, regardless of their access to computational resources or their expertise in machine learning.

Experimental protein-structure prediction can take about the length of a PhD and cost hundreds of thousands of dollars. Our previous model, AlphaFold 2, has been used to predict hundreds of millions of structures, which would have taken hundreds of millions of researcher-years at the current rate of experimental structural biology.

Demo video showing the capabilities of the server.

Sharing the power of AlphaFold 3 responsibly

With each AlphaFold release, we’ve sought to understand the broad impact of the technology , working together with the research and safety community. We take a science-led approach and have conducted extensive assessments to mitigate potential risks and share the widespread benefits to biology and humanity.

Building on the external consultations we carried out for AlphaFold 2, we’ve now engaged with more than 50 domain experts, in addition to specialist third parties, across biosecurity, research and industry, to understand the capabilities of successive AlphaFold models and any potential risks. We also participated in community-wide forums and discussions ahead of AlphaFold 3’s launch.

AlphaFold Server reflects our ongoing commitment to share the benefits of AlphaFold, including our free database of 200 million protein structures. We’ll also be expanding our free AlphaFold education online course with EMBL-EBI and partnerships with organizations in the Global South to equip scientists with the tools they need to accelerate adoption and research, including on underfunded areas such as neglected diseases and food security. We’ll continue to work with the scientific community and policy makers to develop and deploy AI technologies responsibly.

Opening up the future of AI-powered cell biology

7BBV - Enzyme: AlphaFold 3’s prediction for a molecular complex featuring an enzyme protein (blue), an ion (yellow sphere) and simple sugars (yellow), along with the true structure (gray). This enzyme is found in a soil-borne fungus (Verticillium dahliae) that damages a wide range of plants. Insights into how this enzyme interacts with plant cells could help researchers develop healthier, more resilient crops.

AlphaFold 3 brings the biological world into high definition. It allows scientists to see cellular systems in all their complexity, across structures, interactions and modifications. This new window on the molecules of life reveals how they’re all connected and helps understand how those connections affect biological functions — such as the actions of drugs, the production of hormones and the health-preserving process of DNA repair.

The impacts of AlphaFold 3 and our free AlphaFold Server will be realized through how they empower scientists to accelerate discovery across open questions in biology and new lines of research. We’re just beginning to tap into AlphaFold 3’s potential and can’t wait to see what the future holds.



May 13, 2024


AI-assisted writing is quietly booming in academic journals—here's why that's OK

by Julian Koplin, The Conversation


If you search Google Scholar for the phrase "as an AI language model," you'll find plenty of AI research literature and also some rather suspicious results. For example, one paper on agricultural technology says:

"As an AI language model, I don't have direct access to current research articles or studies. However, I can provide you with an overview of some recent trends and advancements …"

Obvious gaffes like this aren't the only signs that researchers are increasingly turning to generative AI tools when writing up their research. A recent study examined the frequency of certain words in academic writing (such as "commendable," "meticulously" and "intricate"), and found they became far more common after the launch of ChatGPT—so much so that 1% of all journal articles published in 2023 may have contained AI-generated text.

(Why do AI models overuse these words? There is speculation it's because they are more common in English as spoken in Nigeria, where key elements of model training often occur.)
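The kind of frequency analysis the study performed can be sketched in a few lines. The marker words match those quoted in the article, but the sample sentences, the per-1,000-words metric, and the `marker_rate` helper are illustrative assumptions, not the study's actual corpus or methodology.

```python
import re

# Words the article reports spiking in post-ChatGPT academic writing.
MARKERS = {"commendable", "meticulously", "intricate"}

def marker_rate(text):
    """Occurrences of marker words per 1,000 words of text."""
    words = re.findall(r"[a-z]+", text.lower())
    if not words:
        return 0.0
    hits = sum(1 for w in words if w in MARKERS)
    return 1000.0 * hits / len(words)

# Invented example sentences, standing in for pre- and post-2023 corpora.
pre_2023 = "The results were solid and the methods were careful."
post_2023 = "The commendable results were meticulously analyzed, revealing intricate patterns."
print(marker_rate(pre_2023), marker_rate(post_2023))
```

Run over large year-by-year corpora, a jump in this rate after a fixed date is the signal the study used to estimate how much published text may be AI-assisted.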

The aforementioned study also looks at preliminary data from 2024, which indicates that AI writing assistance is only becoming more common. Is this a crisis for modern scholarship, or a boon for academic productivity?

Who should take credit for AI writing?

Many people are worried by the use of AI in academic papers. Indeed, the practice has been described as "contaminating" scholarly literature.

Some argue that using AI output amounts to plagiarism. If your ideas are copy-pasted from ChatGPT, it is questionable whether you really deserve credit for them.

But there are important differences between "plagiarizing" text authored by humans and text authored by AI. Those who plagiarize humans' work receive credit for ideas that ought to have gone to the original author.

By contrast, it is debatable whether AI systems like ChatGPT can have ideas, let alone deserve credit for them. An AI tool is more like your phone's autocomplete function than a human researcher.

The question of bias

Another worry is that AI outputs might be biased in ways that could seep into the scholarly record. Infamously, older language models tended to portray people who are female, black and/or gay in distinctly unflattering ways, compared with people who are male, white and/or straight.

This kind of bias is less pronounced in the current version of ChatGPT.

However, other studies have found a different kind of bias in ChatGPT and other large language models: a tendency to reflect a left-liberal political ideology.

Any such bias could subtly distort scholarly writing produced using these tools.

The hallucination problem

The most serious worry relates to a well-known limitation of generative AI systems: that they often make serious mistakes.

For example, when I asked ChatGPT-4 to generate an ASCII image of a mushroom, it provided me with the following output.

[Image: ChatGPT-4's attempted ASCII-art "mushroom."]

It then confidently told me I could use this image of a "mushroom" for my own purposes.

These kinds of overconfident mistakes have been referred to as "AI hallucinations" and "AI bullshit." While it is easy to spot that the above ASCII image looks nothing like a mushroom (and quite a bit like a snail), it may be much harder to identify any mistakes ChatGPT makes when surveying scientific literature or describing the state of a philosophical debate.

Unlike (most) humans, AI systems are fundamentally unconcerned with the truth of what they say. If used carelessly, their hallucinations could corrupt the scholarly record.

Should AI-produced text be banned?

One response to the rise of text generators has been to ban them outright. For example, Science—one of the world's most influential academic journals—disallows any use of AI-generated text.

I see two problems with this approach.

The first problem is a practical one: current tools for detecting AI-generated text are highly unreliable. This includes the detector created by ChatGPT's own developers, which was taken offline after it was found to have only a 26% accuracy rate (and a 9% false positive rate). Humans also make mistakes when assessing whether something was written by AI.

It is also possible to circumvent AI text detectors. Online communities are actively exploring how to prompt ChatGPT in ways that allow the user to evade detection. Human users can also superficially rewrite AI outputs, effectively scrubbing away the traces of AI (like its overuse of the words "commendable," "meticulously" and "intricate").

The second problem is that banning generative AI outright prevents us from realizing these technologies' benefits. Used well, generative AI can boost academic productivity by streamlining the writing process. In this way, it could help further human knowledge. Ideally, we should try to reap these benefits while avoiding the problems.

The problem is poor quality control, not AI

The most serious problem with AI is the risk of introducing unnoticed errors, leading to sloppy scholarship. Instead of banning AI, we should try to ensure that mistaken, implausible or biased claims cannot make it onto the academic record.

After all, humans can also produce writing with serious errors, and mechanisms such as peer review often fail to prevent its publication.

We need to get better at ensuring academic papers are free from serious mistakes, regardless of whether these mistakes are caused by careless use of AI or sloppy human scholarship. Not only is this more achievable than policing AI usage, it will improve the standards of academic research as a whole.

This would be (as ChatGPT might say) a commendable and meticulously intricate solution.

Provided by The Conversation


  • Open access
  • Published: 13 May 2024

First in vitro measurement of VHEE relative biological effectiveness (RBE) in lung and prostate cancer cells using the ARES linac at DESY

  • Hannah C. Wanstall 1,2,3,
  • Florian Burkart 4,
  • Hannes Dinter 4,
  • Max Kellermeier 4,
  • Willi Kuropka 4,
  • Frank Mayet 4,
  • Thomas Vinatier 4,
  • Elham Santina 5,
  • Amy L. Chadwick 5,
  • Michael J. Merchant 5,
  • Nicholas T. Henthorn 5,
  • Michael Köpke 4,
  • Blae Stacey 4,
  • Sonja Jaster-Merz 4 &
  • Roger M. Jones 1,3

Scientific Reports volume 14, Article number: 10957 (2024)

  • Biological physics
  • Cell biology
  • Medical research

Very high energy electrons (VHEE) are a potential candidate for radiotherapy applications. This includes tumours in inhomogeneous regions such as lung and prostate cancers, due to the insensitivity of VHEE to inhomogeneities. This study explores how electrons in the VHEE range can be used to perform successful in vitro radiobiological studies. The ARES (accelerator research experiment at SINBAD) facility at DESY, Hamburg, Germany was used to deliver 154 MeV electrons to both prostate (PC3) and lung (A549) cancer cells in suspension. Dose was delivered to samples with repeatability and uniformity, quantified with Gafchromic film. Cell survival in response to VHEE was measured using the clonogenic assay to determine the biological effectiveness of VHEE in cancer cells for the first time using this method. Equivalent experiments were performed using 300 kVp X-rays, to enable VHEE irradiated cells to be compared with conventional photons. VHEE irradiated cancer cell survival was fitted to the linear quadratic (LQ) model (R² = 0.96–0.97). The damage from VHEE and X-ray irradiated cells at doses between 1.41 and 6.33 Gy are comparable, suggesting similar relative biological effectiveness (RBE) between the two modalities. This suggests VHEE is as damaging as photon radiotherapy and therefore could be used to successfully damage cancer cells during radiotherapy. The RBE of VHEE was quantified as the relative doses required for 50% (D0.5) and 10% (D0.1) cell survival. Using these values, VHEE RBE was measured as 0.93 (D0.5) and 0.99 (D0.1) for A549 and 0.74 (D0.5) and 0.93 (D0.1) for PC3 cell lines respectively. For the first time, this study has shown that 154 MeV electrons can be used to effectively kill lung and prostate cancer cells, suggesting that VHEE would be a viable radiotherapy modality.
Several studies have shown that VHEE has characteristics that would offer significant improvements over conventional photon radiotherapy: for example, electrons are relatively easy to steer and can be used to deliver dose rapidly and with high efficiency. Treatment planning studies have shown improved dose distribution with VHEE in comparison to VMAT, indicating that VHEE can offer improved and safer treatment plans with reduced side effects. The biological response of cancer cells to VHEE has not yet been sufficiently studied; this study provides some initial insights into cell damage. Given the significant potential benefits of VHEE over photon radiotherapy, more studies are required to fully understand its biological effectiveness.


Introduction

Very high energy electron (VHEE) radiotherapy is typically described as electrons accelerated to energies in the 100–250 MeV range. The idea of using VHEE as a novel radiation to treat cancer was first developed by Desrosiers et al. 1 over 20 years ago. Since this initial investigation, interest in VHEE as a novel radiotherapy technique has expanded, with the development of the first VHEE radiotherapy device announced in 2022, as a collaboration between CERN, Centre Hospitalier Universitaire Vaudois (CHUV) and industry partner THERYQ. The radiotherapy device is expected to be operational by 2024, with first clinical trials planned for 2025 2 , 3 . The VHEE radiotherapy device under development will deliver VHEE at ultra-high dose rates (UHDR) with the aim to deliver FLASH radiotherapy, a novel treatment that uses UHDR to spare healthy tissue. A key driver for this collaboration is that VHEE is thought to be an ideal candidate for FLASH radiotherapy due to the fast and efficient dose delivery capabilities of electrons.

Another benefit of VHEE would be potential advantages during irradiation of cancers located in inhomogeneous regions, such as lung and prostate 4 , 5 . This is due to VHEE having relative insensitivity to regions of varying densities, such as air pockets, in comparison to the dose deposited as a result of irradiation with photons or protons. Increasing electron energy results in a reduced penumbra 1 , 4 and therefore reduced dose scatter through a patient, indicating higher beam energies could be ideal for radiotherapy. Comparisons between VMAT and VHEE treatment plans indicate that VHEE resulted in similar or superior dose distribution for cases that include lung and prostate cancers 6 .

Electron energies in the range of 100–250 MeV significantly increase penetration depth, making the treatment of deep seated tumours possible 4 . Electrons in the 6–20 MeV energy range have a long history of clinical use for various superficial radiotherapy treatments due to their lower energy and therefore reduced penetration 7 , 8 .

Although significant progress has been made in the development of VHEE for radiotherapy treatment, one aspect with extremely limited data is radiobiology. To our knowledge, there is no published in vitro or in vivo data at the time of writing. First investigations into the biological effectiveness of VHEE have been completed using theoretical models and by measuring damage to plasmid DNA, as a simplistic biological model. These studies aim to quantify the relative biological effectiveness (RBE) of VHEE. RBE is defined as the ratio of two doses where the radiation of interest (VHEE) is compared to a reference modality 9 , typically 250 kVp X-rays. The first experimental investigation into VHEE measured single strand (SSB) and double strand (DSB) DNA breaks to pBR322 plasmid DNA in response to 100–200 MeV electron irradiation (in comparison to 60 Co X-rays) 10 . The RBE of VHEE was measured to be ~ 1.1–1.2, with the yield of DSBs as the biological measure. This result was validated in response to 35 MeV electrons in an identical plasmid model, with SSB yield as the biological endpoint 11 . Monte Carlo simulations of VHEE have predicted their RBE to be ~ 1.0, with no significant difference relative to photons 12 .

If VHEE radiotherapy is to be implemented clinically, characterisation of VHEE RBE is critical in both cancer and healthy tissue. As the field progresses, it is expected that RBE measurements will be completed across both in vitro and in vivo models, to fully understand the interaction of this novel radiotherapy modality with biological matter, ranging from DNA to tissues. An important step in this process is an RBE measurement of cells in vitro. This will provide initial measurements that can be used to direct in vivo studies, as well as patient research and treatment.

Currently, investigative studies into VHEE radiobiology are extremely limited. One of the most critical obstacles is the lack of biological facilities in close proximity to VHEE accelerators. The overlap of physics and biology research means that very few facilities have the infrastructure for good aseptic technique required to support repeatable radiobiology. This experiment was therefore completed at ARES, DESY, due to the availability of such facilities in close proximity to the VHEE beamline.

To produce radiobiology results with statistical significance, a minimum of three repeats of any in vitro experiment is typically required, with all samples undergoing identical experimental conditions. Repeated sample irradiation can require considerable VHEE beam time, which is typically competitive and limited. The ability to replicate exact irradiation conditions can also present a problem for VHEE accelerators: the machine needs to be highly stable between irradiations and ideally maintain consistent beam energy, shape and alignment throughout all experimental repeats. This can pose a problem in facilities with a rotation of users, as beam conditions will typically be altered frequently and replicating a very specific previous set of conditions can take significant additional time. This is a symptom of current VHEE research machines; development towards clinical use necessitates beam consistency, which will improve beam stability and functionality, improving the feasibility of radiobiology experiments.

Another hurdle with current VHEE accelerators in a research setting is achieving the required field size, which in most cases will be significantly larger than the electron beam. Electron beam size varies between accelerators. At DESY’s ARES facility we used a Gaussian beam with σ ≃ 1.3 mm. Current VHEE accelerators including the CERN linear accelerator for research (CLEAR), the sources for plasma accelerators and radiation compton with lasers and beams (SPARC) and the next linear collider test accelerator (NLCTA) have a Gaussian beam within a range of σ ≃ 1–5 mm 13 . This is considerably smaller than the irradiation area required for most typical in vitro experiments where irradiations are commonly performed in cell culture flasks with cells adherent to the flask surface (ranging between 25 and 225 cm 2 culture area) or well plates (typical area of ~ 13.0 × 8.5 cm). Approaches to increase beam size include using materials such as foils or water to scatter the beam, otherwise pencil beam scanning can provide an overlapping dose profile. Both methods increase irradiation time of the sample which can be problematic when short time points post-irradiation are being investigated. This is particularly an issue when trying to obtain ultra-high dose rates. One way to achieve these dose rates would be by using traditional scattering methods. If spot-scanning methods were to be used to achieve ultra-high dose rates, an extremely high scanning speed (estimated at ~ 5.1 m/s) would be required to create dose rates within the FLASH range 13 .

The consideration of all these features can make a successful radiobiology experiment with VHEE more difficult, expensive and lengthy than experiments with more established modalities such as photons or protons. Fortunately, the current interest into using VHEE for medical applications has yielded several VHEE accelerator development projects worldwide, giving an optimistic outlook on the suitability of VHEE accelerators for radiobiology research. To progress the translational pathway for VHEE, there is a scientific need for radiobiological studies, which will allow informed treatment planning evaluations, and provide evidence to underpin an ethical plan for in vivo experiments. Ideally, experiments would be guided by a base of radiobiological studies, of which there are a limited number, due to the limitations discussed.

A collaboration between the University of Manchester (UK) and DESY (Germany) was initiated to attempt the in vitro irradiation of cancer cells using scanning methods. The aim was a radiobiology experiment measuring cell survival, to develop a VHEE irradiation protocol for further radiobiology experiments at DESY, as well as to measure the biological response of cancer cells to VHEE for the first time. This initial investigation into cancer cell survival was completed at the ARES RF linear accelerator with target energies of 100–155 MeV electrons, achieved following its finalised construction in 2021 14 . ARES demonstrated low energy jitter, with a momentum stability of 6E-5 over a 16 h interval at 155 MeV 14 . The availability of a dedicated BSL-2 biology laboratory in the nearby PETRA III experimental hall, together with a highly stable VHEE beam, meant that a protocol for the irradiation of cancer cells in vitro could be successfully developed.

Dose uniformity

To obtain a uniform dose profile over the sample areas, various spot spacings (0.8–2.6 mm) were tested using a constant Gaussian beam with σ = 1.3 mm. All samples were irradiated in a 'serpentine pattern', the irradiation spot pattern represented in Fig.  1 .

figure 1

Example of the stage movement to irradiate using the ‘serpentine’ scanning pattern. The scanning pattern was used to create rectangular uniform dose fields over the sample area. Blue dots represent the irradiation spots and the black arrows represent direction of stage movement.

A spot spacing of 1.8 mm was quantified as having the highest dose uniformity, based on X and Y dose profiles and the standard deviation across all pixel values from EBT3 film data. This spot spacing was therefore used for irradiation of all samples. An example of this irradiation area using EBT3 Gafchromic film is indicated in Fig.  2 .

figure 2

( a ) Figure represents an example of the irradiated film pattern shown initially in the accelerator hall with irradiated samples. ( b ) Example of scanned film 24 h post irradiation. The scanned film is representative of the 7 × 12 irradiation pattern with 1.8 mm spot spacing that was used to irradiate all cell samples. The uniformity is indicated by the consistent darkening of the film within the rectangular area which is quantified further for all irradiated samples in Table 1 . EBT3 Gafchromic film scanned using the Epson perfection V850 pro scanner. ( c ) A plot representing the percentage dose to sample. Average dose uniformity of the irradiated area is 4.54% (σ). A 3D representation of the scanned film image indicated. The pixel values from the scanned data have been converted to dose (%). X and Y axis indicates the size of the irradiated area.

Figure  2 provides a visual example of the uniformity, which is quantified in Table 1 . The average dose for each irradiated sample area is supplied for each individual sample, along with the standard deviation of pixels across the irradiated area of interest. Uniformity is presented as the standard deviation from the mean (σ) across all pixels on the film for each sample irradiation (within the sample area of interest). These measurements show that the mean standard deviation across all samples is 4.54%, with a maximum deviation of 4.93 ± 1.25% for the lowest dose and a minimum of 3.99 ± 0.60% for the 4.0 Gy dose point. It should also be noted that the dose uniformity of the EBT3 film itself is quoted at 2–3%, based on the manufacturer's measurements 13 . Mean dose uniformity is also presented with respect to each dose; no trends are observed in correlation with increasing or decreasing average dose. Uniformity is therefore observed to be consistent between individual irradiations at each dose point to within 1.30% error. The homogeneity index (defined by Eq. ( 1 )) across samples ranged from 0.19 ± 0.02 (4.0 Gy) to 0.30 ± 0.08 (1.5 Gy). These results are consistent with the uniformity measurements, suggesting that the lowest dose is the least uniform, whereas the intermediate 4.0 Gy dose is the most uniform.
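The uniformity metrics above can be reproduced from scanned film data with a few lines of analysis. A minimal sketch in Python: the pixel array, noise level and the (P max − P min)/(P max + P min) form of the homogeneity index are illustrative assumptions for this example, not the study's actual film data.

```python
import numpy as np

def uniformity_and_hi(pixel_dose):
    """Quantify field uniformity from a film pixel-dose array.

    Uniformity: standard deviation of pixel doses as a percentage of the mean.
    Homogeneity index: assumed here as (Pmax - Pmin) / (Pmax + Pmin), a common
    two-parameter definition using only the quantities named in the text.
    """
    d = np.asarray(pixel_dose, dtype=float)
    mean = d.mean()
    uniformity_pct = 100.0 * d.std() / mean          # sigma as % of mean dose
    hi = (d.max() - d.min()) / (d.max() + d.min())   # homogeneity index
    return mean, uniformity_pct, hi

# Synthetic film region (not measured data): 4 Gy field with ~4% pixel noise
rng = np.random.default_rng(0)
dose_map = rng.normal(4.0, 0.16, size=(84, 144))
mean, sigma_pct, hi = uniformity_and_hi(dose_map)
print(f"mean={mean:.2f} Gy, uniformity={sigma_pct:.1f}% (sigma), HI={hi:.2f}")
```

With ~4% simulated noise, the sketch returns a uniformity close to the 4.54% reported for the real films.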

Dose repeatability

Another critical factor was the ability to repeat specific doses to obtain experimental repeats that can be compared. This was tested by analysing the mean dose to each individual sample for each dose and experimental repeat. Comparisons were also made between two experimental runs several months apart (January and May 2023), where different beam charges were used. Dose repeatability was measured as the mean dose ± standard deviation (σ) across six irradiated samples for each supplied charge. The film-measured doses were 1.5 ± 0.1, 2.5 ± 0.2, 3.2 ± 0.3, 4.0 ± 0.1, 6.0 ± 0.3 and 6.7 ± 0.4 Gy, as shown in Fig.  3 .

figure 3

( a ) All points represent measured dose from EBT3 Gafchromic film, within the irradiated sample area. Dose to sample was altered by the number of 18.3 pC electron pulses at each spot within the rectangular irradiation pattern. The number of pulses that corresponds to each dose is indicated on the x axis. Six repeats of each dose point was completed, represented by six separate points for each number of pulses. Error bars are indicative of the standard deviation across pixels in the measured area, specified in Table 1 . The mean across six samples for each number of pulses is indicated by the dotted black line and corresponding black number. Points represent those that were measured in the May 2023 experimental run only. ( b ) Graph represents the dose measured from EBT3 Gafchromic film in response to increasing charge during experimental runs at ARES in both January and May 2023. Individual points represent mean values across six repeats and error bars are indicative of standard deviation measured across six irradiation repeats. During the January and May 2023 experimental runs, pulses with charges of 22.5 and 18.3 pC respectively were used, which is the factor responsible for the differing total charges between the two data sets.

The correlation between charge and dose has been plotted with linear fits. Information regarding the fits is specified in Table 2 below.
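The charge-to-dose fits of this kind can be sketched as follows. The mean doses and the 18.3 pC pulse charge are taken from the text above; the fit itself is a generic least-squares sketch, not the authors' exact analysis from Table 2.

```python
import numpy as np

# Pulses per spot and the corresponding film-measured mean doses (May 2023 run)
pulses = np.array([1, 2, 3, 4, 6, 7])
charge_pc = pulses * 18.3                            # total charge per spot (pC)
dose_gy = np.array([1.5, 2.5, 3.2, 4.0, 6.0, 6.7])   # film-measured mean doses (Gy)

# Least-squares linear fit: dose = m * charge + c
m, c = np.polyfit(charge_pc, dose_gy, 1)
pred = m * charge_pc + c
r2 = 1 - np.sum((dose_gy - pred) ** 2) / np.sum((dose_gy - dose_gy.mean()) ** 2)
print(f"slope = {m:.4f} Gy/pC, intercept = {c:.2f} Gy, R^2 = {r2:.3f}")
```

The near-unity R² reflects the linear charge-dose relationship visible in Fig. 3.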

Cell survival of A549 and PC3 cells in response to VHEE and X-ray irradiation

A549 and PC3 cells were irradiated with doses of 154 MeV electrons and 300 kVp X-rays in matched experimental conditions. At higher doses, PC3 cells showed low colony formation, so cell survival in response to the two highest doses has not been included for this cell line. Results are presented below in Table 3 and Fig.  4 .

figure 4

Curves indicate the proportion of cell survival ( S ) of A549 ( a ) and PC3 ( b ) cells in vitro in response to dose ( D ) of 154 MeV electrons and 300 kVp X-rays. Error bars are standard deviation where n = 3 for cell survival error and n = 6 for electron dose error. Fitted lines are the linear quadratic (LQ) model, with fitting parameters and goodness of fit indicated in Table 4 .

Differences in cell survival were not found to be significant when using a two-way ANOVA test to compare between modalities at each specific dose for either cell line. The data is shown in Fig.  4 , fitted to the linear quadratic (LQ) model.

Measuring the relative biological effectiveness of VHEE

The RBE of VHEE was determined using values taken from the LQ fits to VHEE and X-ray cell survival data. Fitting parameters to the LQ models to each data set are detailed below in Table 4 . Goodness of fit to the LQ model is also presented as well as D 0.5 and D 0.1 , which represent the dose required to obtain 50% and 10% cell survival respectively. Values for VHEE RBE have been calculated from the D 0.5 and D 0.1 values to provide a quantification of the biological effectiveness of 154 MeV electrons in comparison to photons.

As indicated in Table 4 , the RBE of VHEE can be observed to be 0.99 (D 0.5 ) and 0.93 (D 0.1 ) for A549 lung cancer cells, and 0.74 (D 0.5 ) and 0.93 (D 0.1 ) for PC3 prostate cancer cells. All sets of data were indicated to fit the LQ model with an R 2 value > 0.95. α and β values varied significantly. A549 α values were 0.06 and 0.10 for X-rays and VHEE respectively. The respective X-ray and VHEE β values were 0.07 and 0.05, resulting in α/β ratios of 0.84 and 2.13. A major difference was in the α value for the PC3 cell line, with 0.30 (X-ray) and 0.01 (VHEE) calculated as the best fitting parameters available. Combined with β values of 0.11 and 0.16 for X-ray and VHEE respectively, this resulted in highly different α/β ratios of 2.83 and 0.06.
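As a consistency check, the D 0.5 and D 0.1 doses (and hence RBE) follow from inverting the LQ model with the fitted parameters. A sketch in Python using the rounded A549 α and β values quoted above; because the published parameters are rounded, the results differ slightly from the tabulated RBE values.

```python
import numpy as np

def dose_for_survival(alpha, beta, s):
    """Invert the LQ model S = exp(-(a*D + b*D^2)) for the dose D giving survival s."""
    # a*D + b*D^2 = -ln(s)  ->  take the positive root of the quadratic
    ln_s = -np.log(s)
    if beta == 0:
        return ln_s / alpha
    return (-alpha + np.sqrt(alpha**2 + 4 * beta * ln_s)) / (2 * beta)

# Rounded LQ parameters quoted in the text for A549
alpha_x, beta_x = 0.06, 0.07   # 300 kVp X-rays
alpha_e, beta_e = 0.10, 0.05   # 154 MeV electrons

rbe = {}
for s in (0.5, 0.1):
    d_x = dose_for_survival(alpha_x, beta_x, s)
    d_e = dose_for_survival(alpha_e, beta_e, s)
    rbe[s] = d_x / d_e   # RBE = reference dose / VHEE dose for the same survival
    print(f"S={s}: D_X={d_x:.2f} Gy, D_VHEE={d_e:.2f} Gy, RBE={rbe[s]:.2f}")
```

The rounded inputs give RBE ≈ 0.96 (D 0.5) and ≈ 0.91 (D 0.1), close to the 0.99 and 0.93 obtained from the unrounded fits.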

This study shows for the first time that cell cultures can be successfully irradiated with VHEE using a spot scanning method, to complete cell survival experiments. Dose uniformity across the irradiated sample area was measured to be 4.54% when using EBT3 Gafchromic films. This was considered a small error, given that the inherent uniformity error of the Gafchromic film is quoted as 2–3% in optimal conditions 15 . Gafchromic films have previously been shown to be a reliable dosimetry method for VHEE within their intended dose range 16 , 17 and have been used for several experimental VHEE studies thus far 5 , 10 . However, the film error is a limitation across all measurements in this study. Eventually, higher accuracy dosimetry could be achieved using ionisation chambers. Although not an issue for the dose rates used in this study, ultra-high dose rates do currently present a problem for standard chambers due to inefficient charge detection 18 . Developments such as the novel flashDiamond detector 19 provide options to advance the precision and accuracy of dose to samples in an optimised experimental set up.

Separate from measuring accurate and precise dose to samples, repeatability of dose is one of the most important aspects of radiobiology. Dose to samples can vary significantly even with consistent beam parameters and conditions. Even small amounts of position jitter in the beam can change the obtained dose by a significant amount, especially when irradiating within a small area. Changes in the amount and shape of dark current spots also need to be considered and measured in experimental VHEE linacs. High beam stability is required to obtain repeatable results, and this was provided by the ARES linear accelerator, along with low dark current throughout the experimental runs. To quantify repeatability, the average dose to each sample was measured across six irradiated samples, as well as the standard deviation from the mean (σ) of all pixel values in the irradiated area of interest as a measure of uniformity. Overall, the average standard deviation from the mean (σ) when combining all irradiation repeats at each dose is 4.54%. This varies slightly between doses, with the 1.5 Gy dose having the largest standard deviation over six irradiated samples (4.93%) and the 4.0 Gy dose the lowest (3.99%). Again, these values must be considered alongside the 2–3% dose error of the film.

This was determined to be a successful response; however, the dose error does limit the ability of radiobiologists to explore more nuanced responses to VHEE. For example, if we aim to explore and quantify differences in RBE that are most likely within 0–10% of our reference modality, then a large number of studies will have to be performed to demonstrate statistical significance given typical dose uncertainty. The development of VHEE machines with highly stable beams for medical applications is an absolute requirement for clinical application. Higher accuracy dosimetry for VHEE machines would also be beneficial, to improve on current radiobiological studies and drive clinical translation.

Another limitation of this study is the fact that experiments across modalities were completed at different laboratories and times. RBE studies with VHEE would be more scientifically rigorous if a photon reference modality were available in the same location. An ideal facility would allow scientists to perform comparable sets of experiments with X-rays alongside those with VHEE, to have matched controls, timings and protocols, and to reduce inter-lab variation.

The spot scanning method was used to complete the irradiations, with the cells in suspension within 0.5 ml Eppendorf tubes. This method was chosen to maintain a small irradiation area (the serpentine pattern covered a ~ 10 × 20 mm area) and keep the irradiation time for each sample under 5 min. This method could be utilised to cover larger areas such as flasks and well plates; however, the considerably longer irradiation times would have to be taken into account, and their effect on the cells measured.

During the VHEE irradiation, cells remained in the accelerator hall for ~ 1 h. It must be considered that the Eppendorf tube environment is sealed and at room temperature, and that the cells are in suspension. For these reasons, the same protocol was recreated for X-ray irradiated samples, with cells maintained in identical Eppendorf tubes for the same length of time. The effect of these environmental conditions was tested in unirradiated samples, and any effects on cell survival were measured using the plating efficiency of these unirradiated cells. There were no statistical differences between cells plated immediately after counting and those stored in suspension within the Eppendorf tubes. Plating efficiency had a larger variance in A549 cells than in PC3 cells, but no differences were recognised between the two conditions. This test was critical for ensuring that the alternate methodology was not introducing unpredicted levels of stress to the cells, manifesting as a loss of proliferative capability, which could impact the overall result. The lack of difference between conditions was reassuring, and the implication was that we could irradiate in the comparably small area of the 0.5 ml Eppendorf tube rather than a flask or well plate.

The cell survival was then measured in response to several doses and the LQ model was fitted to this data, as represented in Fig.  4 . A high quality of fit to the LQ model indicated that both cell lines responded to both VHEE and X-ray irradiation as per the commonly described radiobiological model. The α/β values varied considerably between modalities, even in the case of A549 cells where the data points for VHEE and X-ray were noticeably similar. Due to the high goodness of fit of the LQ model to both cell lines and modalities, the fits were used to determine values for D 0.5 and D 0.1 .

The quantification of VHEE RBE was completed by calculating D 0.5 and D 0.1 , the dose required to kill 50% and 90% of cells respectively. The ratio of these doses was taken to calculate VHEE RBE values of 0.99 and 0.93 for A549 and 0.74 and 0.93 for PC3 cells. Average values for A549 and PC3 cells between the two conditions are 0.96 and 0.84 respectively, suggesting that the efficiency of VHEE cell killing is higher for lung cancer than prostate cancer in this case. Overall, the results indicate that VHEE have an RBE that is slightly less than, but close to 1.0. More investigations must be completed to add to the landscape of VHEE RBE.

Experimental investigations of VHEE RBE with plasmid DNA suggest an RBE of 1.1–1.2 10 . It is possible that the RBE > 1 for plasmids does not translate into a cancer cell model, and that the RBE for cell killing is closer to 0.9–1.0 based on the LQ fits. On the other hand, when measuring cell death at each dose point, there was no significant difference between VHEE and X-ray irradiated cells, suggesting that the RBE of VHEE is the same as that of photons. Our result is similar to another study investigating electron RBE using cell survival as the biological endpoint. Herskind et al. 20 measured the RBE of 10 MeV electrons to be 0.98 and 0.91 for MCF7 (breast cancer) and HUVEC (endothelium) cells respectively, suggesting that electrons across a range of energies have an RBE slightly below 1 when measuring cell survival. An RBE value of 0.84 for cell survival has also been predicted for electron energies in the 6–18 MeV range using Monte Carlo modelling 21 . It should be noted that clinically, an RBE of 1 has been used for electrons for several decades.

Micronuclei are markers of DNA damage and are commonly used to measure RBE. Micronuclei frequency has been used as a biological endpoint to predict electron RBE as 1.1–1.3 across three studies 22 , 23 , 24 for electrons in the 1.5–8 MeV range. The cell types measured were human lymphocytes and an ovarian cancer cell line. A recent systematic review of the literature did, however, highlight micronuclei frequency as an unreliable assay for quantifying biological effect between radiation modalities 25 . Naturally, more data is required, as this is the first published response of cancer cells to VHEE and an overall picture of electron RBE is needed to predict biological effects accurately. Similar experiments with other cell types, including healthy cells, and eventually in vivo models, are certainly required to fully understand the biological effect of VHEE.

Cell culture

A549 (human lung adenocarcinoma) and PC3 (human prostate adenocarcinoma) cells were cultured under sterile conditions in Roswell Park Memorial Institute (RPMI) 1640 medium (Gibco, 11875093) supplemented with L-glutamine and 10% fetal bovine serum (FBS) (Gibco, 10270-106). Cells were cultured at 37 °C, 5% CO 2 .

Cell samples irradiated with 154 MeV electrons were cultured and prepared in the Biology Laboratory located in the PETRA III experimental hall, Deutsches Elektronen–Synchrotron (DESY) facility in Hamburg, Germany. In the case of cell samples irradiated with 300 kVp, cell culture and sample preparation took place in the Oglesby Cancer Research Centre (OCRB).

Cells were authenticated and routinely tested for mycoplasma contamination.

Irradiation of A549 and PC3 cells in vitro

A549 and PC3 cells were irradiated with a range of doses across two research centres. Irradiations with 154 MeV electrons were completed at ARES and irradiations with 300 kVp X-rays were completed at the Oglesby Cancer Research Centre (OCRB), using an Xstrahl CIX3 cell irradiator.

Cells were prepared in suspension to a concentration of 5 × 10 5 cells/ml in a 200 μl volume of cells. Cells were irradiated in 0.5 ml Eppendorf tubes (Eppendorf, 0030121023) at doses of 1.4, 2.3, 3.0, 3.7, 5.7 and 6.3 Gy for both 154 MeV electrons and 300 kVp X-rays. Three statistical repeats were completed for each dose and cell line. Physical beam parameters for the VHEE and X-ray irradiations are specified in Table 5 below.

Once samples were prepared, cells remained in suspension at room temperature for approximately 2 h including transport time to and from irradiation source, irradiation and seeding time. Figures referring to plating efficiency using this protocol, as well as images of colony formation are available in the Supplementary section of this manuscript.

X-ray experimental setup on Xstrahl CIX3 cell irradiator at OCRB

0.5 ml Eppendorf tubes containing A549 or PC3 cells in suspension were irradiated with 300 kVp X-rays by lying tubes flat on the internal turntable within the Xstrahl CIX3 cell irradiator. The turntable ensured uniform irradiation over the samples from the vertical X-ray source. Dose to samples was measured based on the X-ray exposure time, with a dose rate of 2.13 Gy/min ± 0.8% used. A 0.7 mm copper filter was used.

Electron experimental set up at ARES

Cells were prepared and irradiated in 0.5 ml Eppendorf tubes and transported from the Biology Laboratory to the ARES accelerator hall in a polystyrene box. Samples were loaded in the custom made C250 aluminium sample holder as indicated in Fig.  5 . Rectangles of EBT3 Gafchromic film were secured in front and behind the samples in the irradiated area to measure dose for each irradiation. The sample holder was attached to a Thorlabs translation stage (Thorlabs, LTS300/M) to ensure precise movements of the samples, therefore creating a uniform scanned dose over the Eppendorf tube volume. Figure  6 shows the schematic of the beamline as well as the samples in the experimental area.

figure 5

A labelled photograph of the experimental area during the May 2023 run after irradiating cancer cells. Significant components of the experimental area are indicated including the sample location, the aluminium sample holder and EBT3 Gafchromic film for measurement of dose to samples. Note that during the irradiation, an identical rectangular section of EBT3 Gafchromic film was placed behind the sample, but has been removed here for visibility of the Eppendorf tubes and sample holder.

figure 6

A schematic representation of the ARES beamline is indicated. Electrons are generated by a normal conducting RF photoinjector, and are then accelerated using an S-band system. Focussing and steering of the beam are provided by several quadrupole magnets, as well as a dipole and corrector magnets. Current measurements are provided by the turbo integration current transformer (ICT). A 50 μm thick Titanium foil separates the accelerator vacuum from air. The electrons then terminate in the experimental area at an energy of approximately 154 MeV.

Samples were irradiated in a pre-optimised scanning pattern that consisted of overlapping Gaussian beam spots, achieved by the movement of the stage in a ‘serpentine’ pattern. Beam size was maintained at 1.3 mm σ for all experiments and the scanning pattern consisted of a 7 × 12 spot pattern using 1.8 mm spot spacing. Beam charge was maintained at 18.3 pC per pulse. Dose to samples was altered by varying the number of pulses administered at each spot in the 7 × 12 pattern. 1, 2, 3, 4, 6, and 7 pulses per spot corresponded to doses of 1.4, 2.3, 3.0, 3.7, 5.7 and 6.3 Gy respectively. Post-irradiation, the accelerator hall was accessed immediately, cells were removed and transported to the biology laboratory for processing.
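The stage movement described above can be sketched as a simple coordinate generator. The grid size and spacing follow the 7 × 12 pattern with 1.8 mm spacing given in the text; the function name and coordinate origin are illustrative choices, not part of the actual control software.

```python
def serpentine_coords(n_cols=7, n_rows=12, spacing_mm=1.8):
    """Stage positions (x, y) in mm for a serpentine raster scan.

    Rows alternate direction so the stage never sweeps back across the
    field, producing the overlapping-spot pattern used to build a
    uniform dose field over the sample.
    """
    coords = []
    for row in range(n_rows):
        # Even rows scan left-to-right, odd rows right-to-left
        cols = range(n_cols) if row % 2 == 0 else range(n_cols - 1, -1, -1)
        for col in cols:
            coords.append((col * spacing_mm, row * spacing_mm))
    return coords

spots = serpentine_coords()
print(len(spots), spots[:3], spots[7])  # 84 spots; row 1 starts from the far side
```

The resulting 7 × 12 grid spans roughly 10.8 × 19.8 mm, matching the ~ 10 × 20 mm serpentine area described in the discussion.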

Measuring cell survival using clonogenic method

Cell survival was measured in both cell lines using a clonogenic assay. Cells were seeded into six well plates within 1 h post-irradiation. Three seeding densities were used per dose, with each seeding density prepared in duplicate, using pre-optimised seeding densities. Cells were then incubated for 8 (A549) or 11 (PC3) days at 37 °C, 5% CO 2 . After the incubation time, cells were washed with PBS and colonies were fixed and stained with 0.7% crystal violet solution (Sigma–Aldrich, V5265) prepared in 30% methanol (Fisher Scientific, M/4000/21). Colonies were counted, with a colony defined as a cluster of > 50 cells.

The Xstrahl machine for irradiations with 300 kVp X-rays was calibrated twice per annum to current national standards by the Christie Medical Physics team using an ionisation chamber; the ionisation chamber and probes are themselves calibrated annually. At the time of writing, the most recent dosimetry checks measured the X-ray dose rate at 2.13 Gy/min, with a percentage error of 0.8%. Collating dosimetry data from the previous 2 years shows that the maximum percentage error on the dose is 1.3%, which has therefore been used to plot the X-ray error bars in Fig.  4 . Dose measurements were also completed using EBT3 Gafchromic film to validate average dose and uniformity of the irradiation field.

Dosimetry of VHEE at ARES was completed by simulating the dose delivered for a given charge using TOPAS Monte Carlo simulation (version 3.7.0) 29 , 30 , with validation using EBT3 Gafchromic film. EBT3 film was calibrated using a medical 15 MeV electron linac at the Christie Hospital, Manchester, UK. All calibration and reference films were scanned on an Epson perfection V850 pro scanner (Epson, B11B224401) at 300 dpi. Measured dose refers to the average of red and green colour channels in every instance.

Film was placed directly in front of and behind samples to measure dose received in the irradiated region directly behind the Eppendorf tube. The difference between the measured dose behind and in front of the tube was calculated to be 6.3%, which was applied uniformly to the dose measured behind the sample to calculate values for the dose received by the sample volume.

Dose uniformity in the irradiated area was measured as the standard deviation across all pixels on the EBT3 film within the irradiated area, as measured using Image J software.

The homogeneity index was calculated using the equation:

HI = (Pmax − Pmin) / (Pmax + Pmin)

where HI is the homogeneity index, and Pmax and Pmin are the maximum and minimum pixel doses on the Gafchromic film in the sample area.

Dose repeatability was calculated by measuring the standard deviation (σ) of the average doses of six individually irradiated samples, with access to the accelerator hall in between each irradiation.
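The three film-dosimetry metrics above (uniformity, homogeneity index, repeatability) can be computed as below. The pixel-dose array and sample means are synthetic, and the (Pmax − Pmin)/(Pmax + Pmin) form of the homogeneity index is an assumption based on common film-dosimetry practice, not a quotation of this paper's definition.

```python
import numpy as np

# Synthetic pixel doses (Gy) within the irradiated film area; real values
# would come from scanned EBT3 film.
pixel_dose = np.array([[2.00, 2.05, 1.98],
                       [2.02, 2.10, 1.95],
                       [2.01, 2.04, 1.99]])

# Uniformity: standard deviation across all pixels in the irradiated area.
uniformity = pixel_dose.std()

# Homogeneity index, assuming the common (max - min)/(max + min) definition.
hi = (pixel_dose.max() - pixel_dose.min()) / (pixel_dose.max() + pixel_dose.min())

# Repeatability: std (sigma) of the mean doses of six separately irradiated samples.
sample_means = np.array([2.01, 2.03, 1.98, 2.00, 2.02, 1.99])
repeatability = sample_means.std(ddof=1)

print(round(uniformity, 4), round(hi, 4), round(repeatability, 4))
```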

Fitting of radiobiological model to cell survival data

The linear quadratic (LQ) model was fitted to the cell survival data as a function of radiation dose:

S = exp(−αD − βD²)

where S is the proportion of surviving cells, D is the dose (Gy), and α (Gy⁻¹) and β (Gy⁻²) are fitting parameters, described further for each data set in Table 4. All fits were completed using GraphPad Prism (version 8) software.
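As a sketch of the same fit outside Prism, the LQ model can be fitted with SciPy's curve_fit; the dose–survival values here are invented placeholders, not the measured data reported in Table 4.

```python
import numpy as np
from scipy.optimize import curve_fit

def lq(D, alpha, beta):
    """Linear quadratic model: surviving fraction S = exp(-alpha*D - beta*D^2)."""
    return np.exp(-alpha * D - beta * D ** 2)

# Placeholder dose-survival points (not the study's measurements).
dose = np.array([0.0, 1.4, 2.3, 3.0, 3.7, 5.7, 6.3])            # Gy
survival = np.array([1.0, 0.72, 0.55, 0.42, 0.30, 0.10, 0.07])

(alpha, beta), _ = curve_fit(lq, dose, survival, p0=(0.2, 0.02))
print(f"alpha = {alpha:.3f} Gy^-1, beta = {beta:.4f} Gy^-2")
```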

Statistical analysis

A Student’s paired t test was used to compare the unirradiated plating efficiencies of the two plating methods.

A two-way analysis of variance (ANOVA) was applied to the irradiated cell survival data for the A549 and PC3 datasets separately to determine differences between radiation modalities at each dose. This was followed by Sidak’s multiple comparisons test to identify statistical differences between VHEE and X-ray cell survival. p-values < 0.05 were considered statistically significant.

The statistical analysis for both tests was completed using GraphPad Prism (version 8).
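The per-dose follow-up comparisons can be sketched in Python with a Šidák-adjusted significance threshold. This is a simplified stand-in for the Prism two-way ANOVA plus Sidak procedure, using synthetic replicate survival values rather than the study's data.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
doses = [1.4, 2.3, 3.0]  # Gy, a subset for illustration

# Synthetic surviving-fraction replicates for two modalities (not real data).
vhee = {d: rng.normal(0.7 - 0.1 * d, 0.02, size=3) for d in doses}
xray = {d: rng.normal(0.7 - 0.1 * d, 0.02, size=3) for d in doses}

m = len(doses)                            # number of per-dose comparisons
alpha_sidak = 1 - (1 - 0.05) ** (1 / m)   # Sidak-adjusted per-test threshold

for d in doses:
    t, p = stats.ttest_ind(vhee[d], xray[d])
    print(f"{d} Gy: p = {p:.3f}, significant = {p < alpha_sidak}")
```

The Šidák adjustment keeps the family-wise error rate at 0.05 across the m comparisons, which is why each individual test uses a stricter threshold.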

Data availability

The data underlying this article are available in the article, presented in table format throughout. Any other data or specific information underlying this article will be shared on reasonable request to the corresponding author.

DesRosiers, C., Moskvin, V., Bielajew, A. F. & Papiez, L. 150–250 MeV electron beams in radiation therapy. Phys. Med. Biol. 45(7), 1781–1805 (2000).


CHUV, CERN and THERYQ collaborate on FLASH radiotherapy device. Appl. Rad. Oncol. (2022). https://www.appliedradiationoncology.com/articles/chuv-cern-and-theryq-collaborate-on-flash-radiotherapy-device. Accessed June 2023.

Wuensch, W. The CHUV-CERN collaboration on a high-energy electron FLASH therapy facility. In UK Accelerator Institutes Seminar Series (2021).

Lagzda, A. VHEE radiotherapy studies at CLARA and CERN facilities. University of Manchester (2019). https://www.research.manchester.ac.uk/portal/files/156333514/FULL_TEXT.PDF. Accessed June 2023.

Lagzda, A. et al. Influence of heterogeneous media on very high energy electron (VHEE) dose penetration and a Monte Carlo-based comparison with existing radiotherapy modalities. Nucl. Instrum. Methods Phys. Res. Sect. B Beam Interact. Mater. Atoms 482 , 70–81 (2020).


Bazalova-Carter, M. et al. Treatment planning for radiotherapy with very high-energy electron beams and comparison of VHEE and VMAT plans. Med. Phys. 42 (5), 2615–2625 (2015).


Kudchadker, R. J., Antolak, J. A., Morrison, W. H., Wong, P. F. & Hogstrom, K. R. Utilization of custom electron bolus in head and neck radiotherapy. J. Appl. Clin. Med. Phys. 4 (4), 321–333 (2003).


Haas, L. L., Laughlin, J. B. & Harvey, R. A. Biological effectiveness of highspeed electron beam in man. Radiology 62 (6), 845–851 (1954).

Laramore, G. E., Rockhill, J. K. & Komarnicky Kocher, L. T. Relative biological effectiveness (RBE). In Encyclopedia of Radiation Oncology (ed. Brady, L. W.) (Springer, 2013).


Small, K. L. et al. Evaluating very high energy electron RBE from nanodosimetric pBR322 plasmid DNA damage. Sci. Rep. https://doi.org/10.1038/s41598-021-82772-6 (2021).


Wanstall, H. C. et al. Quantification of damage to plasmid DNA from 35 MeV electrons, 228 MeV protons and 300 kVp X-rays in varying hydroxyl radical scavenging environments. J. Radiat. Res. 64 (3), 547–557 (2023).


Delorme, R., Masilela, T. A. M., Etoh, C., Smekens, F. & Prezado, Y. First theoretical determination of relative biological effectiveness of very high energy electrons. Sci. Rep. 11 (1), 11242 (2021).

Ronga, M. G. et al. Back to the future: Very high-energy electrons (vhees) and their potential application in radiation therapy. Cancers 13 (19), 4942 (2021).

Burkart, F. et al . The ARES Linac at DESY. In 31st Int Linear Accel Conf . (JACoW Publishing, 2022).

Ashland. EBT3 Specification and User Guide (2023). http://www.gafchromic.com/documents/EBT3_Specifications.pdf . Accessed June 2023.

Subiel, A. et al. Dosimetry of very high energy electrons (VHEE) for radiotherapy applications: Using radiochromic film measurements and Monte Carlo simulations. Phys. Med. Biol. 59 (19), 5811–5829 (2014).

Rieker, V. F. et al . Developments of reliable VHEE/FLASH passive dosimetry methods and procedures at CLEAR. In 14th Int Particle Accel Conf; Venezia (JACoW Publishing, 2023).

McManus, M. et al. The challenge of ionisation chamber dosimetry in ultra-short pulsed high dose-rate very high energy electron beams. Sci. Rep. 10 (1), 9089 (2020).

Verona, R. G. et al. Application of a novel diamond detector for commissioning of FLASH radiotherapy electron beams. Med. Phys. 49 (8), 5513–5522 (2022).

Herskind, C. et al. Biology of high single doses of IORT: RBE, 5 R’s, and other biological aspects. Radiat. Oncol. 12 (1), 24 (2017).

Chattaraj, A. & Selvam, T. P. Microdosimetry-based relative biological effectiveness calculations for radiotherapeutic electron beams: A FLUKA-based study. Radiol. Phys. Technol. 14 (3), 297–308 (2021).

Acharya, S., Sanjeev, G., Bhat, N. N., Siddappa, K. & Narayana, Y. The effect of electron and gamma irradiation on the induction of micronuclei in cytokinesis-blocked human blood lymphocytes. Radiat. Environ. Biophys. 48 (2), 197–203 (2009).

Andreassi, M. G. et al. Radiobiological effectiveness of ultrashort laser-driven electron bunches: Micronucleus frequency, telomere shortening and cell viability. Radiat. Res. 186 (3), 245–253 (2016).


Nairy, R. K., Bhat, N. N., Sanjeev, G. & Yerol, N. Dose-response study using micronucleus cytome assay: A tool for biodosimetry application. Radiat. Prot. Dosim. 174 (1), 79–87 (2017).


Heaven, C. J. et al. The suitability of micronuclei as markers of relative biological effect. Mutagenesis 37 (1), 3–12 (2022).

National Institute of Standards and Technology. ESTAR: Stopping Power and Range Tables for Electrons . https://physics.nist.gov/cgi-bin/Star/e_table.pl (2024). Accessed June 2023.

Vassiliev, O. N. On calculation of the average linear energy transfer for radiobiological modelling. Biomed. Phys. Eng. Express 7 (1), 015001 (2021).

International Atomic Energy Agency. Radiation Biology: A Handbook for Teachers and Students 20–21 (Springer, 2010).

Faddegon, B. et al. The TOPAS tool for particle simulation, a Monte Carlo simulation tool for physics, biology and clinical research. Phys. Med. 72 , 114–121 (2020).

Perl, J., Shin, J., Schumann, J., Faddegon, B. & Paganetti, H. TOPAS: An innovative proton Monte Carlo platform for research and clinical applications. Med. Phys. 39 (11), 6818–6837 (2012).


Acknowledgements

This work was supported by the UK Research and Innovation (UKRI), Engineering and Physical Sciences Research Council (EPSRC) [EP/T517823/1] and the UK Research and Innovation (UKRI), Science and Technology Facilities Council (STFC), Cockcroft Institute [ST/V001612/1]. The authors also acknowledge support from DESY (Hamburg, Germany), a member of the Helmholtz Association HGF. We are pleased to acknowledge support from several groups including Rob Bristow’s group at the Oglesby Cancer Research Centre (OCRB), UK and the Centre for Structural Systems Biology (CSSB), Germany for the kind donation of A549 and PC3 cells. Thank you to all technical groups at DESY for their work and support in the ARES implementation, maintenance and operation.

Author information

Authors and Affiliations

Department of Physics and Astronomy, Faculty of Science and Engineering, The University of Manchester, Oxford Road, Manchester, M13 9PL, UK

Hannah C. Wanstall & Roger M. Jones

Manchester Academic Health Science Centre, The Christie NHS Foundation Trust, Wilmslow Road, Manchester, M20 4BX, UK

Hannah C. Wanstall

Daresbury Laboratory, The Cockcroft Institute, Daresbury, Warrington, WA4 4AD, UK

Deutsches Elektronen Synchrotron (DESY), Notkestrasse 85, 22607, Hamburg, Germany

Florian Burkart, Hannes Dinter, Max Kellermeier, Willi Kuropka, Frank Mayet, Thomas Vinatier, Michael Köpke, Blae Stacey & Sonja Jaster-Merz

Division of Cancer Sciences, Faculty of Biology, Medicine and Health, School of Medical Sciences, The University of Manchester, Oxford Road, Manchester, M13 9PL, UK

Elham Santina, Amy L. Chadwick, Michael J. Merchant & Nicholas T. Henthorn


Contributions

Biological experiments, analysis, dosimetry work and preparation of the main manuscript was completed by H.C.W. At ARES, H.D., F.M., W.K., M.Ke., T.V., B.S. and S. J-M., operated the beam for the duration of all cell irradiations and testing. F.B. provided supervision and advice throughout the experiment at ARES. M.Ko. supervised biology work at DESY and maintained the laboratory facilities located in the PETRA III experimental hall. R.M.J., M.J.M., E.S., N.T.H. and A.L.C. provided supervision throughout the project and contributed to the ideas and development of the overall work. Reviews of the manuscript were made by all authors.

Corresponding author

Correspondence to Hannah C. Wanstall.

Ethics declarations

Competing interests

The authors declare no competing interests.

Additional information

Publisher's note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Supplementary Information

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/ .


About this article

Cite this article

Wanstall, H.C., Burkart, F., Dinter, H. et al. First in vitro measurement of VHEE relative biological effectiveness (RBE) in lung and prostate cancer cells using the ARES linac at DESY. Sci Rep 14 , 10957 (2024). https://doi.org/10.1038/s41598-024-60585-7


Received: 07 December 2023

Accepted: 24 April 2024

Published: 13 May 2024

DOI: https://doi.org/10.1038/s41598-024-60585-7




medRxiv

Scientific machine learning for predicting plasma concentrations in anti-cancer therapy

Diego Valderrama, Olga Teplytska, Luca Marie Koltermann, Elena Trunz, Eduard Schmulenson, Achim Fritsch, Ulrich Jaehde and Holger Fröhlich

A variety of classical machine learning approaches have been developed over the past ten years with the aim of individualizing drug dosages based on measured plasma concentrations. However, the interpretability of these models is limited because they do not incorporate information on pharmacokinetic (PK) drug disposition. In this work we compare well-known population PK modelling with classical machine learning and with a newly proposed scientific machine learning (SciML) framework, which combines knowledge of drug disposition with data-driven modelling. Our approach estimates population PK parameters and their inter-individual variability (IIV) from multimodal covariate data for each patient. A dataset of 549 fluorouracil (5FU) plasma concentrations, as an example of an intravenously administered drug, and a dataset of 308 sunitinib concentrations, as an example of an orally administered drug, were used for the analysis. Whereas classical machine learning models were unable to describe the data sufficiently, the proposed model yielded highly accurate predictions even for new patients. Additionally, we demonstrated that, given enough training data, our model can outperform traditional population PK models in accuracy while offering greater flexibility in learning population parameters.
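For readers unfamiliar with the structural models a population-PK analysis builds on, a minimal one-compartment IV-bolus model looks like the following; the parameter values are invented for illustration and do not reflect the 5FU or sunitinib models in this work.

```python
import numpy as np

def conc_one_compartment(t, dose, CL, V):
    """One-compartment IV-bolus model: C(t) = (dose / V) * exp(-(CL / V) * t)."""
    return (dose / V) * np.exp(-(CL / V) * t)

t = np.linspace(0.0, 12.0, 5)                             # hours
c = conc_one_compartment(t, dose=1000.0, CL=5.0, V=30.0)  # mg, L/h, L (made-up values)
print(np.round(c, 2))
```

A population model layers inter-individual variability on top of such a structural model, e.g. by letting CL and V vary per patient around population-typical values.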

Competing Interest Statement

H.F. received grants from UCB and AbbVie. The other authors declare no competing interest for this work.

Funding Statement

This work was partially funded by Federal Ministry of Education and Research within the projects BNTrAinee (funding code 16DHBK1022).

Author Declarations

I confirm all relevant ethical guidelines have been followed, and any necessary IRB and/or ethics committee approvals have been obtained.

The details of the IRB/oversight body that provided approval or exemption for the research described are given below:

The 5FU study (protocol-No: CESAR C-II-005, EudraCT-No: 2008-001515-37), retrospective in nature, was approved on 26.05.2008 by the ethics committee of the Albert-Ludwigs-Universitaet Freiburg, Freiburg, Germany. The C-IV-001 sunitinib study (EudraCT-No: 2012-001415-23) was approved on 17.10.2012 by the ethics committee of the department of medicine of the Johann Wolfgang Goethe-Universitaet Frankfurt am Main, Frankfurt, Germany. It was a phase IV PK/PD substudy of the non-interventional EuroTARGET project, which recruited patients with mRCC at nine medical centres in Germany and the Netherlands.


* Shared first authorship

# Shared senior authorship


Data Availability

Data used in the present study are available upon reasonable request to the authors.

https://github.com/SCAI-BIO/MMPK-SciML


Subject Area

  • Pharmacology and Therapeutics


The Office of Dietary Supplements (ODS) of the National Institutes of Health (NIH)

Dietary Supplements for Immune Function and Infectious Diseases

This is a general overview. For more in-depth information, see our health professional fact sheet.

How does your immune system work?

Your immune system is made up of cells, tissues, and organs that help fight viruses, bacteria, and other germs that cause infections and other diseases. For example, your skin helps prevent germs from getting inside your body. Cells that line your digestive tract also help protect against harmful germs that cause diseases. White blood cells try to destroy substances they recognize as foreign to your body. Some white blood cells also recognize germs they have been exposed to before and develop antibodies to defend against them in the future.

What do we know about specific dietary supplement ingredients and immune function?

Your immune system needs certain vitamins and minerals to work properly. These include vitamin C, vitamin D, and zinc. Herbal supplements, probiotics, and other dietary supplement ingredients might also affect your immune system.

Eating a variety of nutritious foods can give you enough vitamins, minerals, and other nutrients for a healthy immune system. However, you might wonder whether taking certain dietary supplements can improve your body’s immune system and its ability to fight infections.

This fact sheet describes what we know about the effectiveness and safety of common vitamins, minerals, and other dietary supplement ingredients that might affect immune function.

Dietary supplement ingredients are presented in each section in alphabetical order.

The health professional version of this fact sheet includes more details and references to the scientific literature.

Vitamins and Minerals

Getting enough vitamins and minerals through the foods and beverages you consume is important for a healthy immune system. It’s especially important to get enough of vitamins A, B6, B12, C, D, E, and K as well as folate, copper, iodine, iron, magnesium, selenium, and zinc.

If your diet doesn’t include adequate amounts of certain vitamins and minerals, your immune system will not be able to function as well as it could, you might be more likely to get infections, and you might not recover as well. If your health care provider determines that you are not getting enough of a specific nutrient, vitamin and mineral supplements can help increase intakes to recommended amounts. In most cases, however, if you don’t have a deficiency, increasing your intake of vitamins and minerals through dietary supplements doesn’t help prevent infections or help you recover from them any faster.

Vitamin A is an essential nutrient found in many foods. It exists in two different forms:

  • Preformed vitamin A is found in fish, organ meats (such as liver), dairy products, and eggs.
  • Provitamin A carotenoids are turned into vitamin A by your body. They are found in fruits, vegetables, and other plant-based products. The most common provitamin A carotenoid in foods and dietary supplements is beta-carotene.

Vitamin A is important for healthy immune function as well as vision, reproduction, growth, and development.

Vitamin A deficiency is rare in the United States, but it is common in many low- and middle-income countries.

The recommended daily amount (known as the Recommended Dietary Allowance, or RDA) ranges from 300 to 1,200 micrograms (mcg) retinol activity equivalents (RAE) for infants, children, and teens, depending on age, and from 700 to 1,300 mcg RAE for adults.

Does it work?

Diarrhea in children

Children with a vitamin A deficiency are more likely to get diarrhea caused by germs. These children also have a higher chance of dying of diarrhea, especially in sub-Saharan Africa and south Asia.

Research suggests that vitamin A supplements lower the risk and severity of diarrhea in children in low- and middle-income countries. However, vitamin A supplementation might not help very young infants in these countries.

HIV infection

HIV infection can decrease your appetite and weaken your body’s ability to use nutrients from food. HIV can also increase the risk of related health problems, such as diarrhea and respiratory diseases.

It’s not clear if vitamin A supplements lower the risk of spreading HIV or keep the disease from getting worse. Some studies in young children with HIV have found that vitamin A supplements help lower the risk of death. However, it’s not clear whether vitamin A supplements affect the risk of diarrhea or respiratory infections in young children with HIV. Other studies in adults with HIV have found that vitamin A supplements do not improve immune function.

Research in pregnant people with HIV has found that vitamin A supplements do not help reduce the chance of passing HIV from mother to infant. However, one study found that pregnant people with HIV who took vitamin A were more likely to carry their babies to full-term.

Measles in children

In low- and middle-income countries where vitamin A deficiency is common, children with measles are more likely to have severe symptoms and may die from the disease. In these children, vitamin A supplements might help prevent measles, but it’s unclear whether they lower the risk of dying from measles.

Pneumonia and other respiratory infections in children

Is it safe?

Preformed vitamin A is safe at daily intakes up to 600 to 2,800 mcg for infants, children, and teens, depending on age, and up to 3,000 mcg for adults. There are no upper limits for beta-carotene and other forms of provitamin A.

Getting too much preformed vitamin A can cause severe headache, blurred vision, nausea, dizziness, muscle aches, and problems with coordination. In severe cases, getting too much preformed vitamin A can even lead to coma and death.

If you are pregnant, taking too much preformed vitamin A can cause birth defects, including abnormal eyes, skull, lungs, and heart. If you are or might be pregnant or breastfeeding, you should not take high-dose supplements of preformed vitamin A.

High intakes of beta-carotene (provitamin A) do not cause the same problems as preformed vitamin A. Consuming high amounts of beta-carotene can turn the skin yellow-orange, but this condition is harmless and goes away when you eat less of it. However, several studies have shown that smokers, former smokers, and people exposed to asbestos who take high-dose beta-carotene supplements have a higher risk of lung cancer and death.

Vitamin A supplements might interact with some medications such as orlistat (used for weight loss), acitretin (used to treat psoriasis), and bexarotene (used to treat the skin effects of T-cell lymphoma).

More information about vitamin A is available in the ODS consumer fact sheet on vitamin A.

Vitamin C is an essential nutrient found in citrus fruits and many other fruits and vegetables. Vitamin C is an antioxidant and is important for healthy immune function. The body also needs vitamin C to make collagen.

The RDA ranges from 15 to 115 milligrams (mg) for infants, children, and teens, depending on age, and from 75 to 120 mg for nonsmoking adults. People who smoke need 35 mg more than the RDA per day.

Common cold

Taking vitamin C regularly might help decrease cold symptoms and reduce the number of days a cold lasts. It might also help reduce the risk of getting a cold in people who undergo extreme physical stress, such as marathon runners and soldiers stationed in very cold locations. However, taking vitamin C after coming down with a cold may not be helpful.

Research suggests that vitamin C supplements might be more effective in people who do not get enough vitamin C from foods and beverages.

Sepsis (using intravenous vitamin C, not vitamin C supplements)

Sepsis is a life-threatening complication of an infection that can damage the body’s organs and tissues. It’s not clear whether high-dose intravenous (IV) vitamin C helps treat sepsis, and in some cases it might be harmful. In some studies, IV vitamin C reduced the risk of death, but in other studies it did not affect the risk of death or the amount of organ damage. Other research suggests that IV vitamin C might increase the risk of death or organ damage.

Vitamin C is safe at daily intakes up to 400 to 1,800 mg for children and teens, depending on age, and up to 2,000 mg for adults. Taking higher amounts of vitamin C can cause diarrhea, nausea, and stomach cramps, and it might also cause false readings on blood sugar monitors, which are used by people with diabetes. In people with hemochromatosis (an iron overload disorder), high amounts of vitamin C might cause iron build-up in the body, which can damage body tissues.

Vitamin C supplements might decrease the effectiveness of radiation therapy and chemotherapy.

More information about vitamin C is available in the ODS consumer fact sheet on vitamin C.

For information about vitamin C and COVID-19, see the ODS consumer fact sheet, Dietary Supplements in the Time of COVID-19.

Vitamin D is an essential nutrient that is naturally present in fatty fish and fish liver oils and in small amounts in beef liver, egg yolks, and cheese. It’s also added to some foods, such as fortified milk. Your body can also make vitamin D when your skin is exposed to the sun. Vitamin D is important for healthy bones and immune function.

The RDA ranges from 10 to 15 mcg (400 to 600 International Units [IU]) for infants, children, and teens, depending on age, and from 15 to 20 mcg (600 to 800 IU) for adults.

Flu, pneumonia, and other respiratory infections

People with low vitamin D levels might be more likely to get respiratory infections and might have a higher chance of dying from these infections. Some studies suggest that taking vitamin D supplements regularly might slightly reduce the risk of getting a respiratory infection, especially in people with low vitamin D levels. However, other studies have not found that taking vitamin D supplements reduces the risk of respiratory infections. In addition, vitamin D supplements do not appear to help treat respiratory infections.

People with HIV have a higher risk of vitamin D deficiency partly because many HIV medications cause the body to break down vitamin D faster than normal. Having a vitamin D deficiency might also worsen HIV infection. However, studies haven’t shown that vitamin D supplements improve the health of people with HIV.

Vitamin D is safe at daily intakes up to 25 to 100 mcg (1,000 to 4,000 IU) for infants, children, and teens, depending on age, and up to 100 mcg (4,000 IU) for adults. Taking higher amounts can cause nausea, vomiting, muscle weakness, confusion, pain, loss of appetite, dehydration, excessive urination and thirst, and kidney stones. Extremely high doses can cause kidney failure, damaged blood vessels and heart valves, heart rhythm problems, and death.

Vitamin D supplements might interact with some medications such as orlistat (used for weight loss), statins (used to lower cholesterol levels), thiazide diuretics (used for high blood pressure), and steroids.

More information about vitamin D is available in the ODS consumer fact sheet on vitamin D.

For information about vitamin D and COVID-19, see the ODS consumer fact sheet, Dietary Supplements in the Time of COVID-19.

Vitamin E (also called alpha-tocopherol) is an essential nutrient found in nuts, seeds, vegetable oils, and green leafy vegetables. It acts as an antioxidant and helps your immune system function properly. Vitamin E deficiency is rare.

The RDA is 4 to 15 mg for infants, children, and teens, depending on age, and 15 to 19 mg for adults.

Pneumonia and other respiratory infections

It’s not clear whether vitamin E supplements reduce the risk or severity of respiratory infections. Some studies have found that vitamin E supplements might help but others have not, and the effects might depend on whether someone has low vitamin E levels. One study in people who had normal vitamin E levels found that those who took high-dose vitamin E supplements had worse respiratory symptoms and were sick longer.

Vitamin E from food is safe at any level. In supplements, vitamin E is safe at daily intakes up to 200 to 800 mg for children and teens, depending on age, and up to 1,000 mg for adults. Taking higher amounts can increase the risk of bleeding and stroke.

Vitamin E supplements might interact with blood thinners and might reduce the effectiveness of radiation therapy and chemotherapy.

More information about vitamin E is available in the ODS consumer fact sheet on vitamin E.

For information about vitamin E and COVID-19, see the ODS consumer fact sheet, Dietary Supplements in the Time of COVID-19.

Selenium is an essential mineral found in many foods, including Brazil nuts, seafood, meat, poultry, eggs, dairy products, bread, cereals, and other grain products. It acts as an antioxidant and is important for reproduction, thyroid gland function, and DNA production.

The RDA ranges from 15 to 70 micrograms (mcg) for infants, children, and teens, depending on age, and from 55 to 70 mcg for adults.

People with HIV have higher risk of selenium deficiency than other people, and this might worsen their infection and increase the risk of death. However, it’s not clear whether taking selenium supplements improves the health of people with HIV. Some studies have found that selenium supplements might improve immune function slightly in people with HIV, but other studies have not.

Selenium is safe at daily intakes up to 45 to 400 mcg for infants, children, and teens, depending on age, and up to 400 mcg for adults. Taking higher amounts can cause a garlic odor in the breath, a metallic taste in the mouth, hair and nail loss or brittleness, skin rash, nausea, diarrhea, fatigue, irritability, and nervous system problems.

Selenium might interact with cisplatin (a drug used in chemotherapy).

More information about selenium is available in the ODS consumer fact sheet on selenium.

For information about selenium and COVID-19, see the ODS consumer fact sheet, Dietary Supplements in the Time of COVID-19.

Zinc is an essential nutrient found in seafood, meat, beans, nuts, whole grains, and dairy products. It's important for a healthy immune system, for making proteins and DNA, for healing wounds, and for a proper sense of taste.

The RDA ranges from 2 to 13 mg for infants, children, and teens, depending on age, and from 8 to 12 mg for adults.

Some studies suggest that zinc lozenges and zinc syrup speed recovery from the common cold if you begin taking them at the start of a cold. However, these products don't seem to affect the severity of cold symptoms. More research is needed to determine the best dose and form of zinc for the common cold, as well as how often and how long it should be taken.

Pneumonia in children

Some studies in lower income countries show that zinc supplements lower the risk of pneumonia in young children. However, zinc doesn’t seem to speed recovery or reduce the number of deaths from pneumonia.

Studies show that zinc supplements help shorten the duration of diarrhea in children in low-income countries, where zinc deficiency is common. The World Health Organization and UNICEF recommend that children with diarrhea take zinc for 10 to 14 days (20 mg/day, or 10 mg/day for infants under 6 months). However, it’s not clear if zinc supplements help children with diarrhea who already get enough zinc, such as most children in the United States.

Many people with HIV have low zinc levels. This occurs because they have trouble absorbing zinc from food and they often have diarrhea, which increases zinc loss. Some studies have found that supplemental zinc decreases diarrhea and complications of HIV, but other studies have not. Zinc supplements do not appear to reduce the risk of death in people with HIV.

Zinc is safe at daily intakes up to 4 to 34 mg for infants, children, and teens, depending on age, and up to 40 mg for adults. Taking higher amounts can cause nausea, vomiting, loss of appetite, stomach cramps, diarrhea, and headaches. High intakes of zinc over a long time can cause low blood levels of copper and impair immune function.

Zinc supplements might interact with antibiotics, penicillamine (used to treat rheumatoid arthritis), and thiazide diuretics (used to treat high blood pressure).

More information about zinc is available in the ODS consumer fact sheet on zinc.

For information about zinc and COVID-19, see the ODS consumer fact sheet, Dietary Supplements in the Time of COVID-19.

Andrographis

Andrographis is an herb native to Southeast Asia. It might help your body fight viruses, reduce inflammation, and strengthen your immune system.

Common cold and other respiratory infections

Some studies have found that taking andrographis after getting a cold or other respiratory infection might lessen the severity of symptoms and shorten the length of time symptoms last. However, additional studies are needed to confirm these findings.

No safety concerns have been reported when andrographis is used as directed. Side effects of andrographis can include nausea, vomiting, dizziness, skin rashes, diarrhea, and fatigue.

Andrographis might decrease blood pressure and thin the blood, so it could interact with blood pressure and blood thinning medications.

Andrographis might also decrease the effectiveness of medications that suppress the immune system. Andrographis might affect fertility, so some scientists recommend avoiding it if you are pregnant or planning to have a baby.

For information about andrographis and COVID-19, see the ODS consumer fact sheet, Dietary Supplements in the Time of COVID-19.

Echinacea is an herb that grows in North America and Europe. It might help stop the growth or spread of some types of viruses and other germs. It might also help strengthen your immune system and reduce inflammation.

Common cold and flu

Studies have found that echinacea might slightly reduce the risk of catching a cold, but it doesn’t reduce the severity of symptoms or shorten the length of time symptoms last.

It’s unclear whether echinacea is helpful for the flu.

Echinacea appears to be safe. Side effects can include stomach upset, diarrhea, trouble sleeping, and skin rashes. In rare cases, echinacea might cause allergic reactions.

Echinacea might reduce the effectiveness of some medications, including medications that suppress the immune system. Scientists don’t know if echinacea is safe to take during pregnancy.

For information about echinacea and COVID-19, see the ODS consumer fact sheet, Dietary Supplements in the Time of COVID-19.

Elderberry (European Elder)

Elderberry (or elder berry) is the fruit of a tree that grows in North America, Europe, and parts of Africa and Asia. Elderberry might help your body fight viruses and other germs, reduce inflammation, and strengthen your immune system.

Elderberry doesn’t appear to reduce the risk of coming down with the common cold. However, some studies have found that elderberry might help relieve symptoms of colds and flu and help people recover more quickly.

Elderberry flowers and ripe fruit appear to be safe to eat. However, the bark, leaves, seeds, and raw or unripe elderberry fruit can be poisonous and can cause nausea, vomiting, diarrhea, and dehydration. Cooked elderberry fruit and properly manufactured supplements do not have this safety concern.

Elderberry might affect insulin and blood sugar levels. It might also reduce the effectiveness of medications that suppress the immune system. Scientists don’t know if elderberry is safe to take during pregnancy.

For information about elderberry and COVID-19, see the ODS consumer fact sheet, Dietary Supplements in the Time of COVID-19.

Garlic is a vegetable that has been used in cooking throughout history. It is also available as a dietary supplement.

Garlic might help your body fight viruses and other germs.

Only a few studies have looked at whether garlic supplements help prevent the common cold or flu, and it’s not clear if garlic is helpful.

Garlic is considered safe. Side effects can include bad breath, body odor, and skin rash.

Garlic might interact with blood thinners and blood pressure medications.

Ginseng ( Panax ginseng or Panax quinquefolius ) is a plant used in traditional Chinese medicine. It might help your body fight viruses, reduce inflammation, and strengthen your immune system.

Another botanical, eleuthero ( Eleutherococcus senticosus ), has sometimes been called Siberian ginseng, but it is not related to true ginseng.

Common cold, flu, and other respiratory infections

Ginseng might reduce the risk of coming down with the common cold, flu, or other respiratory infections. However, it’s unclear whether ginseng helps relieve symptoms or affects the length of time symptoms last.

Ginseng appears to be safe. Side effects can include headache, trouble sleeping, and digestive upset. However, high doses (more than 2.5 grams [g]/day) of ginseng might cause insomnia, rapid heartbeat, high blood pressure, and nervousness.

Ginseng might interact with diabetes medications, stimulants, and medications that suppress the immune system.

For information about ginseng and COVID-19, see the ODS consumer fact sheet, Dietary Supplements in the Time of COVID-19.

Tea and tea catechins

Tea ( Camellia sinensis ) is a popular beverage that may have health benefits. Tea extracts are also available as dietary supplements.

Green, black, and oolong tea leaves are processed in different ways. Green tea is made from dried and steamed tea leaves, and black and oolong teas are made from fermented tea leaves.

Tea, especially green tea, has high amounts of substances called catechins. Catechins might help fight viruses and other germs.

Flu and other respiratory infections

Based on only a few studies, it’s unclear whether tea or tea catechins are helpful for the flu or other respiratory infections. Some studies have found that tea and tea catechins might reduce the risk of coming down with upper respiratory infections. They might also reduce the length and severity of some symptoms, but not others.

Tea is safe to drink. Side effects of green tea extract can include nausea, constipation, stomach discomfort, and increased blood pressure. Some green tea extracts might damage your liver, especially if you take them on an empty stomach.

Tea also contains caffeine, which can disturb your sleep and cause nervousness, jitteriness, and shakiness. Caffeine is safe for healthy adults at doses up to 400 to 500 mg/day, and up to 200 mg/day for people who are pregnant.

Tea might interact with atorvastatin (a cholesterol-lowering drug) and stimulants, such as bitter orange or ephedrine.

Other Ingredients

Glutamine is an amino acid found in many foods including beef, fish, poultry, dried beans, eggs, rice, grains, and dairy products. Your body makes enough glutamine to meet your needs, except under rare conditions (for example, if you are critically ill in an intensive care unit [ICU] or have had major surgery).

Glutamine helps your immune system work properly.

Critical illness (giving glutamine as an IV or tube feeding)

It’s unclear whether glutamine helps people who are critically ill. Some studies in hospitalized patients who were critically ill or had undergone major surgery found that glutamine given as an IV or tube feeding reduced the risk of getting an infection, but it did not reduce the risk of death.

Glutamine is considered safe. Side effects can include nausea, bloating, burping, pain, gas, and vomiting. These side effects are more likely to occur with higher doses of glutamine.

No interactions between glutamine and medications have been reported.

N-acetylcysteine and glutathione

N-acetylcysteine (NAC) is similar to cysteine, an amino acid. It acts as an antioxidant and helps reduce mucus in the respiratory tract.

NAC raises levels in your body of a substance called glutathione, which also acts as an antioxidant. NAC and glutathione might also help your body fight viruses and other germs, reduce inflammation, and strengthen your immune system.

People with HIV may have low levels of glutathione, which might increase the risk of certain diseases, including tuberculosis. However, there is very little research on NAC supplements in people with HIV, so scientists don’t know whether NAC is helpful.

NAC appears to be safe. Side effects can include nausea, vomiting, stomach pain, diarrhea, indigestion, and heartburn.

NAC might interact with blood thinners and blood pressure medications. Taking NAC with nitroglycerine (used to treat chest pain) might cause low blood pressure and severe headaches.

For information about NAC and COVID-19, see the ODS consumer fact sheet, Dietary Supplements in the Time of COVID-19.

Omega-3 fatty acids

Omega-3s are types of fats, including alpha-linolenic acid (ALA), eicosapentaenoic acid (EPA), and docosahexaenoic acid (DHA). ALA is found mainly in plant oils, such as flaxseed, soybean, and canola oils. EPA and DHA are found mainly in fatty fish and fish oils.

Omega-3s are important for healthy cell membranes and proper function of the heart, lungs, brain, immune system, and endocrine system.

The recommended amount of omega-3s is 0.5 g per day for infants and 0.7 to 1.6 g per day of ALA for children, teens, and adults, depending on age. There are no individual recommendations for EPA and DHA.

Omega-3s might help your body fight viruses and other germs, reduce inflammation, and strengthen your immune system.

Acute respiratory distress syndrome (giving omega-3s as an IV or tube feeding)

Acute respiratory distress syndrome (ARDS) is a serious lung condition that can lead to death. In people who do recover, ARDS often causes long-term physical and mental health problems.

Researchers have studied whether giving omega-3s as an IV or tube feeding is helpful for people with ARDS, but results from these studies are not clear. Some studies have found that omega-3s given in this manner might help the lungs work better, but they don’t appear to lower the risk of dying from ARDS. In addition, it’s not clear whether omega-3s given in this manner affect the length of time people are hospitalized with ARDS and need a ventilator to help them breathe.

Respiratory infections in infants and young children

The immune system continues to develop in babies after birth, and their immune cells contain the omega-3s EPA and DHA. However, it’s not clear whether adding omega-3s to infant formula improves immune function or reduces the risk of getting respiratory infections.

A study in school-age children found that children who consumed milk with added EPA and DHA had fewer upper respiratory infections than those who did not consume omega-3s. In another study, however, using an infant formula containing DHA and another fatty acid had no effect on the risk of respiratory infections in infants.

Omega-3s are considered safe. Side effects can include a bad taste in the mouth, bad breath, heartburn, nausea, digestive discomfort, diarrhea, headache, and smelly sweat. Omega-3s might interact with blood thinners, blood pressure medications, and medications that suppress the immune system.

More information about omega-3s is available in the ODS consumer fact sheet on omega-3 fatty acids.

For information about omega-3s and COVID-19, see the ODS consumer fact sheet, Dietary Supplements in the Time of COVID-19.

Probiotics are live microorganisms (bacteria and yeasts) that provide health benefits. They are naturally present in certain fermented foods, added to some food products, and available as dietary supplements. Probiotics act mostly in the stomach and intestines. They might improve immune function and help fight viruses.

Acute diarrhea in infants and children

Acute infectious diarrhea in infants and children causes loose or liquid stools and three or more bowel movements within 24 hours. This condition is often caused by a viral infection and can last for up to a week. Some infants and children also develop fever and vomiting. Some studies have shown that probiotics shorten acute diarrhea by about 1 day, but other studies have not.

Some studies have reported that two strains of probiotics— Lactobacillus rhamnosus GG (LGG) and Saccharomyces boulardii —were most likely to benefit children with acute infectious diarrhea, but other studies have not.

Probiotics might reduce the risk of some respiratory infections and shorten the length of illness. Some studies in infants, children, and adults have found that probiotics reduce the risk of getting a cold and help relieve some symptoms, such as fever and cough. Other studies in children reported fewer sick days from school and quicker recovery. However, formulations of probiotics vary, and the effects of one product may not be the same as another.

Ventilator-associated pneumonia

It’s not clear whether probiotics help people who are critically ill. Some studies have found that probiotics lower the risk of developing pneumonia in people who are critically ill and need a ventilator to help them breathe, but other studies have not.

Probiotics are considered safe for most people. Side effects can include gas and other digestive symptoms. In people who are very ill or have immune system problems, probiotics might cause severe illness. Probiotics might also cause infections or even life-threatening illness in preterm infants. Although probiotics don’t appear to interact with medications, taking antibiotics or antifungal medications might decrease the effectiveness of some probiotics.

More information about probiotics is available in the ODS consumer fact sheet on probiotics.

For information about probiotics and COVID-19, see the ODS consumer fact sheet, Dietary Supplements in the Time of COVID-19.

Do dietary supplements interact with medications or other supplements?

Yes, some supplements can interact or interfere with medicines you take.

Tell your doctor, pharmacist, and other health care providers about any dietary supplements and prescription or over-the-counter medicines you take. They can tell you if the dietary supplements might interact with your medicines or if the medicines might interfere with how your body absorbs, uses, or breaks down nutrients.

Where can I find out more about dietary supplements and immune function?

  • Office of Dietary Supplements (ODS) Health Professional Fact Sheet on Dietary Supplements for Immune Function and Infectious Diseases
  • Herbs at a Glance, National Center for Complementary and Integrative Health
  • ODS Frequently Asked Questions: Which brand(s) of dietary supplements should I purchase?

This fact sheet by the National Institutes of Health (NIH) Office of Dietary Supplements (ODS) provides information that should not take the place of medical advice. We encourage you to talk to your health care providers (doctor, registered dietitian, pharmacist, etc.) about your interest in, questions about, or use of dietary supplements and what may be best for your overall health. Any mention in this publication of a specific brand name is not an endorsement of the product.

Updated: November 14, 2023
