The first three examples highlight that while the name of the dependent variable is the same, namely daily calorific intake, the way that this dependent variable is written out differs in each case.
All comparative research questions have at least two groups. You need to identify these groups. In the examples below, we have identified the groups in bold:
What is the difference in the daily calorific intake of **American men and women**?
What is the difference in the weekly photo uploads on Facebook between **British male and female university students**?
What are the differences in perceptions towards Internet banking security between **adolescents and pensioners**?
What are the differences in attitudes towards music piracy when pirated music is **freely distributed or purchased**?
It is often easy to identify groups because they reflect different types of people (e.g., men and women, adolescents and pensioners), as highlighted by the first three examples. However, sometimes the two groups you are interested in reflect two different conditions, as highlighted by the final example. In this final example, the two conditions (i.e., groups) are pirated music that is freely distributed and pirated music that is purchased. So we are interested in how the attitudes towards music piracy differ when pirated music is freely distributed as opposed to when pirated music is purchased.
Before you write out the groups you are interested in comparing, you typically need to include some adjoining text. Typically, this adjoining text includes the words between or amongst, but other words may be more appropriate, as the examples above highlight.
Once you have these details - (1) the starting phrase, (2) the name of the dependent variable, (3) the name of the groups you are interested in comparing, and (4) any potential adjoining words - you can write out the comparative research question in full. The example comparative research questions discussed above are written out in full below:
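The four components above can be assembled mechanically. Below is a minimal Python sketch of that template (purely illustrative; the function and argument names are not from the original guide):

```python
def comparative_question(start, dependent_variable, adjoining, groups):
    """Join the four components of a comparative research question:
    (1) starting phrase, (2) dependent variable, (3) adjoining text,
    (4) the groups being compared."""
    return f"{start} {dependent_variable} {adjoining} {' and '.join(groups)}?"

print(comparative_question(
    "What is the difference in",    # starting phrase
    "the daily calorific intake",   # dependent variable
    "of",                           # adjoining text
    ["American men", "women"],      # groups
))
# -> What is the difference in the daily calorific intake of American men and women?
```

The point of the sketch is simply that once the four details are pinned down, writing the question out in full is a mechanical joining step.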
In the section that follows, the structure of relationship-based research questions is discussed.
There are six steps required to construct a relationship-based research question: (1) choose your starting phrase; (2) identify the independent variable(s); (3) identify the dependent variable(s); (4) identify the group(s); (5) identify the appropriate adjoining text; and (6) write out the relationship-based research question. Each of these steps is discussed in turn.
Identify the independent variable(s)
Identify the dependent variable(s)
Identify the group(s)
Write out the relationship-based research question
Relationship-based research questions typically start with one of two phrases:
| Number of variables | Starting phrase |
|---|---|
| Two | What is the relationship between …? |
| Three or more | What are the relationships of …? |
What is the relationship between gender and attitudes towards music piracy amongst adolescents?
What is the relationship between study time and exam scores amongst university students?
What is the relationship of career prospects, salary and benefits, and physical working conditions on job satisfaction between managers and non-managers?
All relationship-based research questions have at least one independent variable. You need to identify what this is. In the example that follows, the independent variables are highlighted in bold.
What is the relationship of **career prospects**, **salary and benefits**, and **physical working conditions** on job satisfaction between managers and non-managers?
When doing a dissertation at the undergraduate and master's level, it is likely that your research question will only have one or two independent variables, but this is not always the case.
All relationship-based research questions also have at least one dependent variable. You also need to identify what this is. At the undergraduate and master's level, it is likely that your research question will only have one dependent variable.
All relationship-based research questions have at least one group, but can have multiple groups. You need to identify this group(s). In the examples below, we have identified the group(s) in bold.
What is the relationship between gender and attitudes towards music piracy amongst **adolescents**?
What is the relationship between study time and exam scores amongst **university students**?
What is the relationship of career prospects, salary and benefits, and physical working conditions on job satisfaction between **managers and non-managers**?
Before you write out the groups you are interested in comparing, you typically need to include some adjoining text (i.e., usually the words between or amongst):
| Number of groups | Adjoining text |
|---|---|
| One | amongst [e.g., group 1] |
| Two or more | between [e.g., group 1 and group 2] |
Examples of this adjoining text can be seen in the research questions above.
Once you have these details - (1) the starting phrase, (2) the name of the dependent variable, (3) the name of the independent variable, (4) the name of the group(s) you are interested in, and (5) any potential adjoining words - you can write out the relationship-based research question in full. The example relationship-based research questions discussed above are written out in full below:
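As a rough illustration of the five details coming together, the following hypothetical Python sketch picks the starting phrase from the total number of variables and the adjoining word from the number of groups, then assembles the question (the rules are inferred from the worked examples, not stated verbatim in the guide):

```python
def relationship_question(independent_vars, dependent_var, groups):
    """Assemble a relationship-based research question from its parts
    (a sketch; phrasing rules inferred from the worked examples)."""
    # Starting phrase: "between" for two variables in total,
    # "of" for three or more (assumption based on the examples).
    n_vars = len(independent_vars) + 1  # independent variables + dependent
    start = ("What is the relationship between" if n_vars == 2
             else "What is the relationship of")
    ivs = ", ".join(independent_vars)
    # Adjoining text: "amongst" for one group, "between" for two or more.
    adjoining = "amongst" if len(groups) == 1 else "between"
    return f"{start} {ivs} and {dependent_var} {adjoining} {' and '.join(groups)}?"

print(relationship_question(["study time"], "exam scores", ["university students"]))
# -> What is the relationship between study time and exam scores amongst university students?
```

With more than two variables or groups, the exact adjoining words would still need a human eye; the sketch only captures the common cases from the examples above.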
In the previous section, we illustrated how to write out the three types of research question (i.e., descriptive, comparative and relationship-based research questions). Whilst these rules should help you when writing out your research question(s), the main thing you should keep in mind is whether your research question(s) flow and are easy to read.
Last updated: 18 April 2023
Reviewed by Jean Kaluza
Comparative analysis is a valuable tool for acquiring deep insights into your organization’s processes, products, and services so you can continuously improve them.
Similarly, if you want to streamline, price appropriately, and ultimately be a market leader, you’ll likely need to draw on comparative analyses quite often.
When faced with multiple options or solutions to a given problem, a thorough comparative analysis can help you compare and contrast your options and make a clear, informed decision.
If you want to get up to speed on conducting a comparative analysis or need a refresher, here’s your guide.
A comparative analysis is a side-by-side comparison that systematically compares two or more things to pinpoint their similarities and differences. The focus of the investigation might be conceptual—a particular problem, idea, or theory—or perhaps something more tangible, like two different data sets.
For instance, you could use comparative analysis to investigate how your product features measure up to the competition.
After a successful comparative analysis, you should be able to identify strengths and weaknesses and clearly understand which product is more effective.
You could also use comparative analysis to examine different methods of producing that product and determine which way is most efficient and profitable.
The potential applications for using comparative analysis in everyday business are almost unlimited. That said, a comparative analysis is most commonly used to examine:
Emerging trends and opportunities (new technologies, marketing)
Competitor strategies
Financial health
Effects of trends on a target audience
Comparative analysis can help narrow your focus so your business pursues the most meaningful opportunities rather than attempting dozens of improvements simultaneously.
A comparative approach also helps frame up data to illuminate interrelationships. For example, comparative research might reveal nuanced relationships or critical contexts behind specific processes or dependencies that wouldn’t be well-understood without the research.
For instance, if your business compares the cost of producing several existing products relative to which ones have historically sold well, that should provide helpful information once you’re ready to look at developing new products or features.
Comparative analysis is generally divided into three subtypes, which use quantitative or qualitative data and then extend the findings to a larger group. These include:
Pattern analysis —identifying patterns or recurrences of trends and behavior across large data sets.
Data filtering —analyzing large data sets to extract an underlying subset of information. It may involve rearranging, excluding, and apportioning comparative data to fit different criteria.
Decision tree —flowcharting to visually map and assess potential outcomes, costs, and consequences.
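The decision-tree subtype can be made concrete with a tiny expected-value calculation, sketched below in Python (all branch names, probabilities, and payoffs are hypothetical):

```python
def expected_value(outcomes):
    """Expected payoff of one decision branch, given (probability, payoff) pairs."""
    return sum(p * payoff for p, payoff in outcomes)

# Each decision branch maps to its chance outcomes (all figures invented).
options = {
    "launch feature": [(0.6, 120_000), (0.4, -50_000)],  # success / flop
    "do nothing": [(1.0, 0)],
}
best = max(options, key=lambda name: expected_value(options[name]))
print(best, expected_value(options[best]))
# -> launch feature 52000.0
```

Each branch's expected value weighs its potential outcomes, costs, and consequences, which is exactly what the flowcharted tree is meant to help you visualize and compare.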
In contrast, competitive analysis is a type of comparative analysis in which you deeply research one or more of your industry competitors. In this case, you’re using qualitative research to explore what the competition is up to across one or more dimensions.
For example:
Service delivery —metrics like Net Promoter Score indicate customer satisfaction levels.
Market position — the share of the market that the competition has captured.
Brand reputation —how well-known or recognized your competitors are within their target market.
Thorough, independent research is a significant asset when doing comparative analysis. It provides evidence to support your findings and may present a perspective or angle not considered previously.
To get the maximum benefit from comparative research, make it a regular practice, and establish a cadence you can realistically stick to. Some business areas you could plan to analyze regularly include:
Profitability
Competition
In addition to simply comparing and contrasting, explore how different variables might affect your outcomes.
For example, controllable variables would include offering a seasonal feature, like a shopping bot to assist with holiday shopping, or raising or lowering the selling price of a product.
Uncontrollable variables include weather, changing regulations, the current political climate, or global pandemics.
Most people enter into comparative research with a particular idea or hypothesis already in mind that they want to validate. For instance, you might be trying to prove that launching a new service is worthwhile. So, you may be disappointed if your analysis results don’t support your plan.
However, in any comparative analysis, try to maintain an unbiased approach by spending equal time debating the merits and drawbacks of any decision. Ultimately, this will be a practical, more long-term sustainable approach for your business than focusing only on the evidence that favors pursuing your argument or strategy.
To put together a coherent, insightful analysis that goes beyond a list of pros and cons or similarities and differences, try organizing the information into these five components:
1. Frame of reference
Here is where you provide context. First, what driving idea or problem is your research anchored in? Then, for added substance, cite existing research or insights from a subject matter expert, such as a thought leader in marketing, startup growth, or investment.
2. Grounds for comparison
Why have you chosen to examine the two things you’re analyzing instead of focusing on two entirely different things? What are you hoping to accomplish?
3. Thesis
What argument or choice are you advocating for? What will be the before and after effects of going with either decision? What do you anticipate happening with and without this approach?
For example, “If we release an AI feature for our shopping cart, we will have an edge over the rest of the market before the holiday season.” The finished comparative analysis will weigh all the pros and cons of choosing to build the new, expensive AI feature, including variables like how “intelligent” it will be, what it “pushes” customers to use, and how much work it takes off the plate of the customer service team.
Ultimately, you will gauge whether building an AI feature is the right plan for your e-commerce shop.
4. Organize the scheme
Typically, there are two ways to organize a comparative analysis report. First, you can discuss everything about comparison point “A” and then go into everything about aspect “B.” Or, you alternate back and forth between points “A” and “B,” sometimes referred to as point-by-point analysis.
Using the AI feature as an example again, you could cover all the pros and cons of building the AI feature, then discuss the benefits and drawbacks of building and maintaining the feature. Or you could compare and contrast each aspect of the AI feature, one at a time. For example, a side-by-side comparison of the AI feature to shopping without it, then proceeding to another point of differentiation.
5. Connect the dots
Tie it all together in a way that either confirms or disproves your hypothesis.
For instance, “Building the AI bot would allow our customer service team to save 12% on returns in Q3 while offering optimizations and savings in future strategies. However, it would also increase the product development budget by 43% in both Q1 and Q2. Our budget for product development won’t increase again until series 3 of funding is reached, so despite its potential, we will hold off building the bot until funding is secured and more opportunities and benefits can be proved effective.”
Edward Barroga
1 Department of General Education, Graduate School of Nursing Science, St. Luke’s International University, Tokyo, Japan.
2 Department of Biological Sciences, Messiah University, Mechanicsburg, PA, USA.
The development of research questions and the subsequent hypotheses are prerequisites to defining the main research purpose and specific objectives of a study. Consequently, these objectives determine the study design and research outcome. The development of research questions is a process based on knowledge of current trends, cutting-edge studies, and technological advances in the research field. Excellent research questions are focused and require a comprehensive literature search and in-depth understanding of the problem being investigated. Initially, research questions may be written as descriptive questions which could be developed into inferential questions. These questions must be specific and concise to provide a clear foundation for developing hypotheses. Hypotheses are more formal predictions about the research outcomes. These specify the possible results that may or may not be expected regarding the relationship between groups. Thus, research questions and hypotheses clarify the main purpose and specific objectives of the study, which in turn dictate the design of the study, its direction, and outcome. Studies developed from good research questions and hypotheses will have trustworthy outcomes with wide-ranging social and health implications.
Scientific research is usually initiated by posing evidenced-based research questions which are then explicitly restated as hypotheses. 1 , 2 The hypotheses provide directions to guide the study, solutions, explanations, and expected results. 3 , 4 Both research questions and hypotheses are essentially formulated based on conventional theories and real-world processes, which allow the inception of novel studies and the ethical testing of ideas. 5 , 6
It is crucial to have knowledge of both quantitative and qualitative research 2 as both types of research involve writing research questions and hypotheses. 7 However, these crucial elements of research are sometimes overlooked or, if not overlooked, framed without the forethought and meticulous attention they need. Planning and careful consideration are needed when developing quantitative or qualitative research, particularly when conceptualizing research questions and hypotheses. 4
There is a continuing need to support researchers in the creation of innovative research questions and hypotheses, as well as for journal articles that carefully review these elements. 1 When research questions and hypotheses are not carefully thought of, unethical studies and poor outcomes usually ensue. Carefully formulated research questions and hypotheses define well-founded objectives, which in turn determine the appropriate design, course, and outcome of the study. This article then aims to discuss in detail the various aspects of crafting research questions and hypotheses, with the goal of guiding researchers as they develop their own. Examples from the authors and peer-reviewed scientific articles in the healthcare field are provided to illustrate key points.
A research question is what a study aims to answer after data analysis and interpretation. The answer is written in length in the discussion section of the paper. Thus, the research question gives a preview of the different parts and variables of the study meant to address the problem posed in the research question. 1 An excellent research question clarifies the research writing while facilitating understanding of the research topic, objective, scope, and limitations of the study. 5
On the other hand, a research hypothesis is an educated statement of an expected outcome. This statement is based on background research and current knowledge. 8 , 9 The research hypothesis makes a specific prediction about a new phenomenon 10 or a formal statement on the expected relationship between an independent variable and a dependent variable. 3 , 11 It provides a tentative answer to the research question to be tested or explored. 4
Hypotheses employ reasoning to predict a theory-based outcome. 10 These can also be developed from theories by focusing on components of theories that have not yet been observed. 10 The validity of hypotheses is often based on the testability of the prediction made in a reproducible experiment. 8
Conversely, hypotheses can also be rephrased as research questions. Several hypotheses based on existing theories and knowledge may be needed to answer a research question. Developing ethical research questions and hypotheses creates a research design that has logical relationships among variables. These relationships serve as a solid foundation for the conduct of the study. 4 , 11 Haphazardly constructed research questions can result in poorly formulated hypotheses and improper study designs, leading to unreliable results. Thus, the formulations of relevant research questions and verifiable hypotheses are crucial when beginning research. 12
Excellent research questions are specific and focused. These integrate collective data and observations to confirm or refute the subsequent hypotheses. Well-constructed hypotheses are based on previous reports and verify the research context. These are realistic, in-depth, sufficiently complex, and reproducible. More importantly, these hypotheses can be addressed and tested. 13
There are several characteristics of well-developed hypotheses. Good hypotheses are 1) empirically testable 7, 10, 11, 13; 2) backed by preliminary evidence 9; 3) testable by ethical research 7, 9; 4) based on original ideas 9; 5) grounded in evidence-based logical reasoning 10; and 6) predictive. 11 Good hypotheses can infer ethical and positive implications, indicating the presence of a relationship or effect relevant to the research theme. 7, 11 These are initially developed from a general theory and branch into specific hypotheses by deductive reasoning. In the absence of a theory on which to base the hypotheses, inductive reasoning based on specific observations or findings forms more general hypotheses. 10
Research questions and hypotheses are developed according to the type of research, which can be broadly classified into quantitative and qualitative research. We provide a summary of the types of research questions and hypotheses under quantitative and qualitative research categories in Table 1 .
| Quantitative research questions | Quantitative research hypotheses |
|---|---|
| Descriptive research questions | Simple hypothesis |
| Comparative research questions | Complex hypothesis |
| Relationship research questions | Directional hypothesis |
|  | Non-directional hypothesis |
|  | Associative hypothesis |
|  | Causal hypothesis |
|  | Null hypothesis |
|  | Alternative hypothesis |
|  | Working hypothesis |
|  | Statistical hypothesis |
|  | Logical hypothesis |
|  | Hypothesis-testing |
| **Qualitative research questions** | **Qualitative research hypotheses** |
| Contextual research questions | Hypothesis-generating |
| Descriptive research questions |  |
| Evaluation research questions |  |
| Explanatory research questions |  |
| Exploratory research questions |  |
| Generative research questions |  |
| Ideological research questions |  |
| Ethnographic research questions |  |
| Phenomenological research questions |  |
| Grounded theory questions |  |
| Qualitative case study questions |  |
In quantitative research, research questions inquire about the relationships among variables being investigated and are usually framed at the start of the study. These are precise and typically linked to the subject population, dependent and independent variables, and research design. 1 Research questions may also attempt to describe the behavior of a population in relation to one or more variables, or describe the characteristics of variables to be measured (descriptive research questions). 1, 5, 14 These questions may also aim to discover differences between groups within the context of an outcome variable (comparative research questions), 1, 5, 14 or elucidate trends and interactions among variables (relationship research questions). 1, 5 We provide examples of descriptive, comparative, and relationship research questions in quantitative research in Table 2.
Quantitative research questions

**Descriptive research question**
- Measures responses of subjects to variables
- Presents variables to measure, analyze, or assess

Example: What is the proportion of resident doctors in the hospital who have mastered ultrasonography (response of subjects to a variable) as a diagnostic technique in their clinical training?

**Comparative research question**
- Clarifies the difference between one group with the outcome variable and another group without the outcome variable

Example: Is there a difference in the reduction of lung metastasis in osteosarcoma patients who received the vitamin D adjunctive therapy (group with outcome variable) compared with osteosarcoma patients who did not receive the vitamin D adjunctive therapy (group without outcome variable)?

- Compares the effects of variables

Example: How does the vitamin D analogue 22-Oxacalcitriol (variable 1) mimic the antiproliferative activity of 1,25-Dihydroxyvitamin D (variable 2) in osteosarcoma cells?

**Relationship research question**
- Defines trends, associations, relationships, or interactions between a dependent variable and an independent variable

Example: Is there a relationship between the number of medical student suicides (dependent variable) and the level of medical student stress (independent variable) in Japan during the first wave of the COVID-19 pandemic?
In quantitative research, hypotheses predict the expected relationships among variables. 15 Relationships among variables that can be predicted include 1) between a single dependent variable and a single independent variable (simple hypothesis) or 2) between two or more independent and dependent variables (complex hypothesis). 4, 11 Hypotheses may also specify the expected direction to be followed and imply an intellectual commitment to a particular outcome (directional hypothesis). 4 On the other hand, hypotheses may not predict the exact direction and are used in the absence of a theory, or when findings contradict previous studies (non-directional hypothesis). 4 In addition, hypotheses can 1) define interdependency between variables (associative hypothesis), 4 2) propose an effect on the dependent variable from manipulation of the independent variable (causal hypothesis), 4 3) state that there is no relationship or difference between two variables (null hypothesis), 4, 11, 15 4) replace the working hypothesis if rejected (alternative hypothesis), 15 5) explain the relationship of phenomena to possibly generate a theory (working hypothesis), 11 6) involve quantifiable variables that can be tested statistically (statistical hypothesis), 11 or 7) express a relationship whose interlinks can be verified logically (logical hypothesis). 11 We provide examples of simple, complex, directional, non-directional, associative, causal, null, alternative, working, statistical, and logical hypotheses in quantitative research, as well as the definition of quantitative hypothesis-testing research, in Table 3.
Quantitative research hypotheses

**Simple hypothesis**
- Predicts a relationship between a single dependent variable and a single independent variable

Example: If the dose of the new medication (single independent variable) is high, blood pressure (single dependent variable) is lowered.

**Complex hypothesis**
- Foretells a relationship between two or more independent and dependent variables

Example: The higher the use of anticancer drugs, radiation therapy, and adjunctive agents (3 independent variables), the higher would be the survival rate (1 dependent variable).

**Directional hypothesis**
- Identifies the study direction based on theory towards a particular outcome to clarify the relationship between variables

Example: Privately funded research projects will have a larger international scope (study direction) than publicly funded research projects.

**Non-directional hypothesis**
- The nature of the relationship between two variables or the exact study direction is not identified
- Does not involve a theory

Example: Women and men are different in terms of helpfulness. (Exact study direction is not identified.)

**Associative hypothesis**
- Describes variable interdependency
- Change in one variable causes change in another variable

Example: A larger number of people vaccinated against COVID-19 in the region (change in independent variable) will reduce the region’s incidence of COVID-19 infection (change in dependent variable).

**Causal hypothesis**
- An effect on the dependent variable is predicted from manipulation of the independent variable

Example: A change to a high-fiber diet (independent variable) will reduce the blood sugar level (dependent variable) of the patient.

**Null hypothesis**
- A negative statement indicating no relationship or difference between 2 variables

Example: There is no significant difference in the severity of pulmonary metastases between the new drug (variable 1) and the current drug (variable 2).

**Alternative hypothesis**
- Following a null hypothesis, an alternative hypothesis predicts a relationship between 2 study variables

Example: The new drug (variable 1) is better on average in reducing the level of pain from pulmonary metastasis than the current drug (variable 2).

**Working hypothesis**
- A hypothesis that is initially accepted for further research to produce a feasible theory

Example: Dairy cows fed with concentrates of different formulations will produce different amounts of milk.

**Statistical hypothesis**
- An assumption about the value of a population parameter or the relationship among several population characteristics
- Validity tested by a statistical experiment or analysis

Example: The mean recovery rate from COVID-19 infection (value of population parameter) is not significantly different between population 1 and population 2.

Example: There is a positive correlation between the level of stress at the workplace and the number of suicides (population characteristics) among working people in Japan.

**Logical hypothesis**
- Offers or proposes an explanation with limited or no extensive evidence

Example: If healthcare workers provide more educational programs about contraception methods, the number of adolescent pregnancies will be lower.

**Hypothesis-testing (quantitative hypothesis-testing research)**
- Quantitative research uses deductive reasoning.
- This involves the formation of a hypothesis, collection of data in the investigation of the problem, analysis and use of the data from the investigation, and drawing of conclusions to validate or nullify the hypotheses.
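The deductive loop just described (form a hypothesis, collect data, analyze, conclude) can be sketched with a simple two-sample permutation test. This is a hypothetical illustration in plain Python, not from the article; the pain-score data are invented. The test estimates how often a random relabeling of the pooled data yields a group difference at least as large as the observed one, giving the p-value used to retain or reject the null hypothesis:

```python
import random

def permutation_test(group_a, group_b, n_iter=10_000, seed=0):
    """Estimate the p-value for the null hypothesis that two groups
    come from the same distribution (no difference in means)."""
    rng = random.Random(seed)

    def mean(xs):
        return sum(xs) / len(xs)

    observed = abs(mean(group_a) - mean(group_b))
    pooled = list(group_a) + list(group_b)
    n_a = len(group_a)
    extreme = 0
    for _ in range(n_iter):
        rng.shuffle(pooled)  # random relabeling under the null hypothesis
        if abs(mean(pooled[:n_a]) - mean(pooled[n_a:])) >= observed:
            extreme += 1
    return extreme / n_iter

# Hypothetical pain-reduction scores: new drug vs. current drug.
new_drug = [7, 8, 6, 9, 8, 7]
current_drug = [5, 6, 5, 7, 6, 5]
p = permutation_test(new_drug, current_drug)
print(f"p = {p:.4f}")  # a small p-value argues against the null hypothesis
```

In this sketch, a small p-value would lead to rejecting the null hypothesis (“no difference between the drugs”) in favor of the alternative; in a real study the choice of test statistic and significance level would follow from the study design.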
Unlike research questions in quantitative research, research questions in qualitative research are usually continuously reviewed and reformulated. The central question and associated subquestions are stated more than the hypotheses. 15 The central question broadly explores a complex set of factors surrounding the central phenomenon, aiming to present the varied perspectives of participants. 15
There are varied goals for which qualitative research questions are developed. These questions can function in several ways, such as to 1) identify and describe existing conditions (contextual research questions); 2) describe a phenomenon (descriptive research questions); 3) assess the effectiveness of existing methods, protocols, theories, or procedures (evaluation research questions); 4) examine a phenomenon or analyze the reasons or relationships between subjects or phenomena (explanatory research questions); or 5) focus on unknown aspects of a particular topic (exploratory research questions). 5 In addition, some qualitative research questions provide new ideas for the development of theories and actions (generative research questions) or advance specific ideologies of a position (ideological research questions). 1 Other qualitative research questions may build on a body of existing literature and become working guidelines (ethnographic research questions). Research questions may also be broadly stated without specific reference to the existing literature or a typology of questions (phenomenological research questions), may be directed towards generating a theory of some process (grounded theory questions), or may address a description of the case and the emerging themes (qualitative case study questions). 15 We provide examples of contextual, descriptive, evaluation, explanatory, exploratory, generative, ideological, ethnographic, phenomenological, grounded theory, and qualitative case study research questions in qualitative research in Table 4, and the definition of qualitative hypothesis-generating research in Table 5.
Table 4. Qualitative research questions

Contextual research question
- Asks about the nature of what already exists
- Individuals or groups are studied to further clarify and understand the natural context of real-world problems
- Example: What are the experiences of nurses working night shifts in healthcare during the COVID-19 pandemic? (natural context of real-world problems)

Descriptive research question
- Aims to describe a phenomenon
- Example: What are the different forms of disrespect and abuse (phenomenon) experienced by Tanzanian women when giving birth in healthcare facilities?

Evaluation research question
- Examines the effectiveness of existing practice or accepted frameworks
- Example: How effective are decision aids (effectiveness of existing practice) in helping decide whether to give birth at home or in a healthcare facility?

Explanatory research question
- Clarifies a previously studied phenomenon and explains why it occurs
- Example: Why is there an increase in teenage pregnancy (phenomenon) in Tanzania?

Exploratory research question
- Explores areas that have not been fully investigated to gain a deeper understanding of the research problem
- Example: What factors affect the mental health of medical students (area not yet fully investigated) during the COVID-19 pandemic?

Generative research question
- Develops an in-depth understanding of people's behavior by asking 'how would' or 'what if' to identify problems and find solutions
- Example: How would the extensive research experience of the behavior of new staff impact the success of the novel drug initiative?

Ideological research question
- Aims to advance specific ideas or ideologies of a position
- Example: Are Japanese nurses who volunteer in remote African hospitals able to promote humanized care of patients (specific ideas or ideologies) in the areas of safe patient environment, respect of patient privacy, and provision of accurate information related to health and care?

Ethnographic research question
- Clarifies peoples' nature, activities, interactions, and the outcomes of their actions in specific settings
- Example: What are the demographic characteristics, rehabilitative treatments, community interactions, and disease outcomes (nature, activities, interactions, and outcomes) of people in China who are suffering from pneumoconiosis?

Phenomenological research question
- Seeks to know more about the phenomena that have impacted an individual
- Example: What are the lived experiences of parents who have been living with and caring for children with a diagnosis of autism? (phenomena that have impacted an individual)

Grounded theory question
- Focuses on social processes, asking what happens and how people interact, or uncovering the social relationships and behaviors of groups
- Example: What are the problems that pregnant adolescents face in terms of social and cultural norms (social processes), and how can these be addressed?

Qualitative case study question
- Assesses a phenomenon using different sources of data to answer "why" and "how" questions
- Considers how the phenomenon is influenced by its contextual situation
- Example: How does quitting work and assuming the role of a full-time mother (phenomenon assessed) change the lives of women in Japan?
Table 5. Qualitative research hypotheses

Hypothesis-generating (qualitative hypothesis-generating research)
- Qualitative research uses inductive reasoning.
- This involves collecting data from study participants or the literature regarding a phenomenon of interest, using the collected data to develop a formal hypothesis, and then using that formal hypothesis as a framework for testing.
- Qualitative exploratory studies explore areas in greater depth, clarifying subjective experience and allowing the formulation of a formal hypothesis that is potentially testable in a future quantitative approach.
Qualitative studies usually pose at least one central research question and several subquestions starting with How or What . These research questions use exploratory verbs such as explore or describe . They also focus on one central phenomenon of interest and may mention the participants and research site. 15
Hypotheses in qualitative research are stated in the form of a clear statement concerning the problem to be investigated. Unlike in quantitative research where hypotheses are usually developed to be tested, qualitative research can lead to both hypothesis-testing and hypothesis-generating outcomes. 2 When studies require both quantitative and qualitative research questions, this suggests an integrative process between both research methods wherein a single mixed-methods research question can be developed. 1
Research questions followed by hypotheses should be developed before the start of the study. 1 , 12 , 14 It is crucial to develop feasible research questions on a topic that is interesting to both the researcher and the scientific community. This can be achieved by a meticulous review of previous and current studies to establish a novel topic. Specific areas are subsequently focused on to generate ethical research questions. The relevance of the research questions is evaluated in terms of clarity of the resulting data, specificity of the methodology, objectivity of the outcome, depth of the research, and impact of the study. 1 , 5 These aspects constitute the FINER criteria (i.e., Feasible, Interesting, Novel, Ethical, and Relevant). 1 Clarity and effectiveness are achieved if research questions meet the FINER criteria. In addition to the FINER criteria, Ratan et al. described focus, complexity, novelty, feasibility, and measurability for evaluating the effectiveness of research questions. 14
The PICOT and PEO frameworks are also used when developing research questions. 1 The following elements are addressed in these frameworks, PICOT: P-population/patients/problem, I-intervention or indicator being studied, C-comparison group, O-outcome of interest, and T-timeframe of the study; PEO: P-population being studied, E-exposure to preexisting conditions, and O-outcome of interest. 1 Research questions are also considered good if these meet the “FINERMAPS” framework: Feasible, Interesting, Novel, Ethical, Relevant, Manageable, Appropriate, Potential value/publishable, and Systematic. 14
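As a toy illustration of how the PICOT elements described above slot together, the sketch below fills a generic question template from the five components. The function name, template wording, and example values are our own invention for illustration, not taken from any cited framework or study.

```python
# Hypothetical sketch: assembling a draft research question from the five
# PICOT elements (population, intervention, comparison, outcome, timeframe).
# The template and example values are illustrative assumptions only.

def picot_question(population, intervention, comparison, outcome, timeframe):
    """Combine the five PICOT elements into one draft research question."""
    return (f"In {population}, does {intervention}, compared with {comparison}, "
            f"affect {outcome} over {timeframe}?")

q = picot_question(
    population="adults with type 2 diabetes",
    intervention="a structured exercise programme",
    comparison="standard dietary advice alone",
    outcome="glycaemic control (HbA1c)",
    timeframe="12 months",
)
print(q)
```

Writing the elements out explicitly like this makes it easy to spot a missing comparison group or an unstated timeframe before the question is finalised.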
As we indicated earlier, research questions and hypotheses that are not carefully formulated result in unethical studies or poor outcomes. To illustrate this, we provide some examples of ambiguous research questions and hypotheses that result in unclear and weak research objectives in quantitative research ( Table 6 ) 16 and qualitative research ( Table 7 ) 17 , and show how to transform these ambiguous research questions and hypotheses into clear and good statements.
Table 6. Unclear and weak statements (Statement 1) versus clear and good statements (Statement 2) in quantitative research

Research question
- Unclear and weak statement (Statement 1): Which is more effective between smoke moxibustion and smokeless moxibustion?
- Clear and good statement (Statement 2): "Moreover, regarding smoke moxibustion versus smokeless moxibustion, it remains unclear which is more effective, safe, and acceptable to pregnant women, and whether there is any difference in the amount of heat generated."
- Points to avoid: 1) vague and unfocused questions; 2) closed questions simply answerable by yes or no; 3) questions requiring a simple choice

Hypothesis
- Unclear and weak statement (Statement 1): The smoke moxibustion group will have higher cephalic presentation.
- Clear and good statement (Statement 2): "Hypothesis 1. The smoke moxibustion stick group (SM group) and smokeless moxibustion stick group (SLM group) will have higher rates of cephalic presentation after treatment than the control group. Hypothesis 2. The SM group and SLM group will have higher rates of cephalic presentation at birth than the control group. Hypothesis 3. There will be no significant differences in the well-being of the mother and child among the three groups in terms of the following outcomes: premature birth, premature rupture of membranes (PROM) at < 37 weeks, Apgar score < 7 at 5 min, umbilical cord blood pH < 7.1, admission to neonatal intensive care unit (NICU), and intrauterine fetal death."
- Points to avoid: 1) unverifiable hypotheses; 2) incompletely stated groups of comparison; 3) insufficiently described variables or outcomes

Research objective
- Unclear and weak statement (Statement 1): To determine which is more effective between smoke moxibustion and smokeless moxibustion.
- Clear and good statement (Statement 2): "The specific aims of this pilot study were (a) to compare the effects of smoke moxibustion and smokeless moxibustion treatments with the control group as a possible supplement to ECV for converting breech presentation to cephalic presentation and increasing adherence to the newly obtained cephalic position, and (b) to assess the effects of these treatments on the well-being of the mother and child."
- Points to avoid: 1) poor understanding of the research question and hypotheses; 2) insufficient description of population, variables, or study outcomes
a These statements were composed for comparison and illustrative purposes only.
b These statements are direct quotes from Higashihara and Horiuchi. 16
Table 7. Unclear and weak statements (Statement 1) versus clear and good statements (Statement 2) in qualitative research

Research question
- Unclear and weak statement (Statement 1): Does disrespect and abuse (D&A) occur in childbirth in Tanzania?
- Clear and good statement (Statement 2): How does disrespect and abuse (D&A) occur and what are the types of physical and psychological abuses observed in midwives' actual care during facility-based childbirth in urban Tanzania?
- Points to avoid: 1) ambiguous or oversimplistic questions; 2) questions unverifiable by data collection and analysis

Hypothesis
- Unclear and weak statement (Statement 1): Disrespect and abuse (D&A) occur in childbirth in Tanzania.
- Clear and good statement (Statement 2): Hypothesis 1: Several types of physical and psychological abuse by midwives in actual care occur during facility-based childbirth in urban Tanzania. Hypothesis 2: Weak nursing and midwifery management contribute to the D&A of women during facility-based childbirth in urban Tanzania.
- Points to avoid: 1) statements simply expressing facts; 2) insufficiently described concepts or variables

Research objective
- Unclear and weak statement (Statement 1): To describe disrespect and abuse (D&A) in childbirth in Tanzania.
- Clear and good statement (Statement 2): "This study aimed to describe from actual observations the respectful and disrespectful care received by women from midwives during their labor period in two hospitals in urban Tanzania."
- Points to avoid: 1) statements unrelated to the research question and hypotheses; 2) unattainable or unexplorable objectives
a This statement is a direct quote from Shimoda et al. 17
b The other statements were composed for comparison and illustrative purposes only.
To construct effective research questions and hypotheses, it is very important to 1) clarify the background and 2) identify the research problem at the outset of the research, within a specific timeframe. 9 Then, 3) review or conduct preliminary research to collect all available knowledge about the possible research questions by studying theories and previous studies. 18 Afterwards, 4) construct research questions to investigate the research problem. Identify variables to be accessed from the research questions 4 and make operational definitions of constructs from the research problem and questions. Thereafter, 5) construct specific deductive or inductive predictions in the form of hypotheses. 4 Finally, 6) state the study aims . This general flow for constructing effective research questions and hypotheses prior to conducting research is shown in Fig. 1 .
Research questions are used more frequently in qualitative research than objectives or hypotheses. 3 These questions seek to discover, understand, explore or describe experiences by asking "What" or "How." The questions are open-ended to elicit a description rather than to relate variables or compare groups. The questions are continually reviewed, reformulated, and changed during the qualitative study. 3 In quantitative research, by contrast, research questions are used more frequently in survey projects, while hypotheses are used more frequently in experiments, to compare variables and their relationships.
Hypotheses are constructed from the identified variables as if-then statements, following the template, 'If a specific action is taken, then a certain outcome is expected.' At this stage, some ideas regarding expectations from the research to be conducted must be drawn. 18 Then, the variables to be manipulated (independent) and influenced (dependent) are defined. 4 Thereafter, the hypothesis is stated and refined, and reproducible data tailored to the hypothesis are identified, collected, and analyzed. 4 Hypotheses must be testable and specific, 18 and should describe the variables and their relationships, the specific group being studied, and the predicted research outcome. 18 Hypothesis construction involves a testable proposition deduced from theory, with independent and dependent variables separated and measured separately. 3 Therefore, good hypotheses must be based on good research questions constructed at the start of a study or trial. 12
In summary, research questions are constructed after establishing the background of the study. Hypotheses are then developed based on the research questions. Thus, it is crucial to have excellent research questions to generate superior hypotheses. In turn, these would determine the research objectives and the design of the study, and ultimately, the outcome of the research. 12 Algorithms for building research questions and hypotheses are shown in Fig. 2 for quantitative research and in Fig. 3 for qualitative research.
Research questions and hypotheses are crucial components to any type of research, whether quantitative or qualitative. These questions should be developed at the very beginning of the study. Excellent research questions lead to superior hypotheses, which, like a compass, set the direction of research, and can often determine the successful conduct of the study. Many research studies have floundered because the development of research questions and subsequent hypotheses was not given the thought and meticulous attention needed. The development of research questions and hypotheses is an iterative process based on extensive knowledge of the literature and insightful grasp of the knowledge gap. Focused, concise, and specific research questions provide a strong foundation for constructing hypotheses which serve as formal predictions about the research outcomes. Research questions and hypotheses are crucial elements of research that should not be overlooked. They should be carefully thought of and constructed when planning research. This avoids unethical studies and poor outcomes by defining well-founded objectives that determine the design, course, and outcome of the study.
Disclosure: The authors have no potential conflicts of interest to disclose.
Author Contributions:
While there is an abundant use of macro data in the social sciences, little attention is given to the sources or the construction of these data. Owing to the restricted number of indices or items, researchers most often apply the 'available data at hand'. Since the opportunities to analyse data are constantly increasing and the availability of macro indicators is improving as well, one may be enticed to incorporate even qualitatively inferior indicators for the sake of statistically significant results. The pitfalls of applying biased indicators or using instruments with unknown methodological characteristics are biased estimates, false statistical inferences and, as one potential consequence, the derivation of misleading policy recommendations. This Special Issue assembles contributions that attempt to stimulate the missing debate about the criteria for assessing aggregate data and their measurement properties for comparative analyses.
The social sciences are witnessing an ever increasing supply of data at the aggregate level on several key dimensions of societal progress and politico-institutional conditions. Next to standardised sources for comparing countries worldwide ( Solt, 2014 ), a wealth of indicators has been introduced over the past three decades to allow for comparative analyses of such issues as levels of perceived corruption, quality of governance, environmental sustainability, political rights and democratic freedom. And while there is an abundant use of these macro data, less attention has been given to the sources or to the construction of these data. Despite the spike in data availability, information on countries or regions often remains restricted to only a handful of indicators compiled by organisations that have the resources and know-how to offer worldwide coverage of countries. Due to this restricted number of indices or items, researchers for the most part apply the 'available data at hand' with little consideration of their measurement properties.
There have already been attempts to address questions of data quality within the community of comparative political science. Herrera and Kapur (2007) try to foster the debate about the quality of comparative data sets by highlighting the three components of validity, coverage and accuracy. Mudde and Schedler (2010) discuss the challenges of data choice, distinguishing between procedural and outcome-oriented criteria when data quality is to be assessed. They relate the procedural criterion to aspects of transparency, reliability and replicability of data. The latter criterion is connected to validity, accuracy and precision ( Mudde and Schedler, 2010 : 411). Both groups of authors agree that research on data properties usually offers few scientific rewards, but that the debate about the measures is crucial and requires constant stimulation.
A few landmark books and articles have laid out fundamental guidelines and approaches concerning case selection, operationalisation and implications for comparative model testing at the macro level (see for instance King et al, 1994 ; Adcock and Collier, 2001 ; Gerring, 2001 ). Yet the discussion within comparative research about the measurement properties of different indicators appears to lag behind the ongoing application of numerous indices in all sorts of comparative empirical research. That is, theoretical and empirical work with new and improved measurements has so far passed up the opportunity to foster an exchange about the conceptual framework for comparative multivariate modelling. Furthermore, it often remains difficult to grasp the core intentions of different streams of knowledge production, especially when the computation of new cross-country indices was performed in response to prior criticism of existing measures.
Judging data properties from a qualitative and quantitative perspective, King et al (1994 : 63, 97) propose the criteria of unbiasedness, efficiency and consistency. In particular, they concentrate on the inferential performance of measures. Here, bias relates to a measure's tendency to introduce systematic variance, which in turn leads to non-random variation between different or repeated applications of the measure in inferential tasks. For example, Hawken and Munck (2011 : 4) report that ratings on perceived corruption made by commercial risk assessment agencies systematically rate economies as more corrupt than surveys of business executives do, representing a bias 'which does not seem consistent with random measurement error'. Efficiency relates to the variance of a measure when taken as an estimator. The simple idea is that an increase in sample size will likely reduce the variance of a measure and thus capture a phenomenon more efficiently. But even King et al (1994 : 66) emphasise that these two properties come with a trade-off that is not always easily reconciled: researchers may accept more bias in their measure if this buys sufficiently large improvements in efficiency. They do not elaborate further on consistency, although they obviously relate it to reliability, which points towards traditional criteria or properties of measurement theory.
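The bias-efficiency trade-off that King et al describe can be made concrete with a small simulation. The sketch below is our own illustration, not drawn from the cited work: it compares a plain sample mean (unbiased) with a deliberately shrunken estimator that accepts some bias in exchange for lower variance.

```python
# Illustrative simulation (our addition): two estimators of a population mean.
# Estimator A is the plain sample mean (unbiased); Estimator B shrinks the
# sample mean towards zero, trading bias for a smaller variance.
import random

random.seed(42)
true_mean = 10.0

def mean(xs):
    return sum(xs) / len(xs)

def variance(xs):
    m = mean(xs)
    return mean([(x - m) ** 2 for x in xs])

est_a, est_b = [], []
for _ in range(2000):
    draw = [random.gauss(true_mean, 5.0) for _ in range(20)]
    m = mean(draw)
    est_a.append(m)        # Estimator A: unbiased
    est_b.append(0.9 * m)  # Estimator B: biased, but with 0.81x the variance

bias_a = mean(est_a) - true_mean   # close to 0
bias_b = mean(est_b) - true_mean   # close to -1.0
print(f"bias:     A={bias_a:+.2f}  B={bias_b:+.2f}")
print(f"variance: A={variance(est_a):.2f}  B={variance(est_b):.2f}")
```

Estimator B is more efficient (lower variance) but systematically off target; whether the trade pays off depends on how much bias is introduced relative to the variance saved, which is exactly the reconciliation problem King et al point to.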
This traditional approach of (psychometric) test or measurement theory usually provides social scientists with a framework for thinking about the properties of measures or data. That is, the criteria of validity and reliability remain the cornerstones of any discussion about measurement properties. One can define reliability as the 'agreement between two efforts to measure the same trait through maximally similar methods' ( Campbell and Fiske, 1959 : 83). Usually, this translates into a test of the internal consistency of an indicator, or into test-retest approaches that check whether the systematic variation of an observed phenomenon can be captured by an empirical measure at several points in time or across different (sub-)samples ( Nunnally and Bernstein, 1978 : 191). Validity represents a more demanding measurement criterion. A few authors have put forward conceptual approaches to address the problems of constructing indices from the perspective of measurement validity (e.g., Bollen, 1989 ; Adcock and Collier, 2001 ). While measurement validity may be broadly defined as the achievement that '… scores (including the results of qualitative classification) meaningfully capture the ideas contained in the corresponding concept' ( Adcock and Collier, 2001 : 530), it consists of various subcategories such as content, construct, internal/external and convergent/discriminant validity, and even touches upon more ambitious concepts such as ecological validity. These various dimensions also reflect a variety of sources of measurement error, whether stemming from the process of data collection (randomisation versus case selection), the survey mode and origin of the data, data operationalisation, or the aggregation of different data sources.
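As a minimal sketch of the internal-consistency check mentioned above, the following computes Cronbach's alpha, a standard reliability coefficient, for a small battery of questionnaire items. The formula is the conventional one; the item scores are fabricated purely for illustration.

```python
# Minimal sketch (our illustration): Cronbach's alpha as a measure of the
# internal consistency of a multi-item indicator. Item scores are invented.

def cronbach_alpha(items):
    """items: list of equal-length score lists, one list per questionnaire item."""
    k = len(items)
    n = len(items[0])

    def var(xs):  # population variance
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / len(xs)

    sum_item_vars = sum(var(item) for item in items)
    totals = [sum(item[i] for item in items) for i in range(n)]
    return (k / (k - 1)) * (1 - sum_item_vars / var(totals))

# Three items scored by five respondents; the items broadly agree, so alpha is high.
items = [
    [4, 5, 3, 2, 4],
    [4, 4, 3, 1, 5],
    [5, 5, 2, 2, 4],
]
a = cronbach_alpha(items)
print(f"alpha = {a:.2f}")  # 0.92
```

A high alpha only says the items covary; as the text stresses, it says nothing about whether they capture the intended concept, which is why reliability is necessary but not sufficient for validity.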
Three aspects require us to think harder about the feasibility of these classical concepts of measurement theory. First, the increasing availability of data for the computation or aggregation of macro indicators should improve the reliability of measurements. In fact, it seems that econometricians have completely abandoned the idea of measurement validity and instead focus on statistical techniques for aggregating data. For instance, a recent debate has yielded the impression that reliability remains the main goal to be established, while the concept of validity is not treated as equally important (see the discussion between Kaufmann et al (2010) and Thomas (2010) ). The problem with the idea of increasing the reliability of measures arises at the point when validity is sacrificed to 'methodological contamination' ( Sullivan and Feldman, 1979 : 19), especially with regard to the notion that reliability 'represents a necessary but not sufficient condition for validity' ( Nunnally and Bernstein, 1978 : 192, italics in the original). Hence, aggregated or broadly defined measures that are unable to discriminate between concepts that are theoretically distinct – and hence are not supposed to be measured by the initial approaches – do not necessarily represent threats to reliability, but rather to validity. This is especially the case in empirical tests of theoretical predictions regarding the determinants or consequences of certain politico-institutional conditions, where invalid measures are likely to generate biased coefficients due to measurement error among independent or even dependent variables ( Herrera and Kapur, 2007 ). To this end, results will subsequently lack generalisability.
For example, combining several reliable measures of the same phenomenon to increase the reliability of the aggregate measure can only claim to be unbiased if all underlying measures capture the same portion of systematic variation in the phenomenon and are able to exclude random measurement error equally well. Testing theories with aggregate measures always comes with the caveat of introducing random measurement error into a measure that is supposed to represent only systematic variation in a phenomenon (see for instance Bollen, 2009 for a discussion), despite the measure being highly reliable.
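The biasing effect of random measurement error on coefficients, mentioned above, can be shown with a short simulation. This is our own illustrative sketch, not drawn from the cited sources: adding noise to an independent variable attenuates its estimated regression slope towards zero.

```python
# Illustrative simulation (our addition): classical attenuation bias.
# y depends on x with slope 2.0; observing x with random error shrinks
# the estimated OLS slope towards zero, biasing substantive conclusions.
import random

random.seed(1)
n = 5000
true_slope = 2.0

x = [random.gauss(0.0, 1.0) for _ in range(n)]
y = [true_slope * xi + random.gauss(0.0, 1.0) for xi in x]
# Observed version of x, contaminated with random measurement error:
x_obs = [xi + random.gauss(0.0, 1.0) for xi in x]

def ols_slope(xs, ys):
    """Simple bivariate OLS slope: cov(x, y) / var(x)."""
    mx = sum(xs) / len(xs)
    my = sum(ys) / len(ys)
    cov = sum((a - mx) * (b - my) for a, b in zip(xs, ys))
    var = sum((a - mx) ** 2 for a in xs)
    return cov / var

slope_clean = ols_slope(x, y)      # close to 2.0
slope_noisy = ols_slope(x_obs, y)  # close to 1.0 (attenuated by half here)
print(f"slope with true x:  {slope_clean:.2f}")
print(f"slope with noisy x: {slope_noisy:.2f}")
```

With equal signal and error variances the expected attenuation factor is var(x) / (var(x) + var(error)) = 0.5, so even a perfectly "reliable-looking" estimation procedure produces a badly biased coefficient once the measure itself is invalid.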
The potential trade-off between reliability and components of validity leads to the second aspect to keep in mind when thinking about measurement properties: a lack of validity may only bother researchers who follow a theory-driven approach to quantitative analyses. The shift towards a data-driven approach puts less emphasis on the underlying theory from which one derives hypotheses to be tested. Hypothesis testing may even be the least important aspect of statistical modelling ( Varian, 2014 : 5). Instead, the goals of data analyses are prediction and the forecasting of specific behaviours, events or outcomes based on large sets of data, prior knowledge or prior evidence. Given the large amounts of data available and the increasing computing capacities that have enabled the widespread use of Bayesian approaches and machine learning techniques in the social sciences (see Gelman et al, 2014 ; Jackman, 2009 ), claims can be made that measurement properties derived from a theory-driven perspective may lose their relevance. This shift implies an increasing importance for concepts such as reliability or predictive validity that sit closer to the data-driven approach.
The third challenge confronts comparative scholars working with individual-level data. Here, the extension and longevity of survey programmes such as the World Values Survey or the International Social Survey Programme (ISSP) have made the application of multilevel models for comparative cross-sectional longitudinal analyses feasible ( Beck, 2007 ; Fairbrother, 2014 ). Given these opportunities, one core assumption is that measurement invariance holds across countries. That is, questionnaire items capture the same underlying concept across different contexts of data collection in a similar way. On the other hand, the theoretical emphasis on the contextuality of social phenomena creates a desire to reflect such idiosyncratic characteristics of a society within the subsequent measurement approaches.
This creates another trade-off for scholars within the respective research communities. As in the case of reliability and validity, contextually reliable measures can come with a lack of measurement invariance. Given that measurement invariance is tested via its discrepancy to some theoretical model, the shift to data-driven approaches may affect the importance of this particular measurement property in a similar fashion as illustrated for the relationship between reliability and validity.
We perceive this development as neither definitive nor one-dimensional. Measurement theory and concepts like validity remain crucial for evaluating and applying the right instruments and for knowing where to look when research questions are to be answered. That is, how to think about or assess the properties of data becomes one crucial aspect of any empirical endeavour. But such criteria seldom represent the only basis for assessing the characteristics of data. Our own work has concentrated on comparing different indices by their measurement properties ( Neumann and Graeff, 2010 , 2013 ). One conclusion from this work is that researchers face certain incentives that require decisions on how to cope with the aforementioned trade-offs when measures from comparative data are applied.
Despite the known problems with comparative data, only a few of these questions have been answered, and the stream of new indicators constantly poses new challenges for current comparative research. Some key problems can be summarised as follows: How can the contextuality of measuring country characteristics be accounted for while maintaining comparability? What are the consequences when prior knowledge and existing empirical findings are included in the derivation of existing and new indicators? How can the accuracy of an index be assessed, and how can accuracy even be defined or measured in a measurement sense?
This edited issue comprises papers in which the properties of the applied aggregate data and their underlying sources are explicitly reflected upon. As the authors bring in different methodological backgrounds, the papers apply a variety of contemporary approaches to reliability and validity. This does not always coincide with a psychometric notion of constructs or measurement criteria. The authors do not, however, fall prey to typical publication strategies such as reporting only significant and/or theoretically congruent results instead of null results ( Gelman and Loken, 2014 ). All papers share the ambition to accurately reflect the underlying theoretical meaning of the constructs of interest. In doing so, they address the above-mentioned key questions in their own way.
Susanne Pickel et al (2015) present a new framework for comparative social scientists that tackles one of the most prominent topics in political research: the quality of democracy. In particular, the authors propose a framework to assess the measurement properties of three prominent indices of the quality of democracy. This evaluative process requires both the integration of theoretical considerations about the definitional clarity and validity of the underlying concepts and empirical concerns about the choice of data sources or the procedures of operationalisation and aggregation. Their contribution picks up several important points in the measurement of macro phenomena. First, although the definition of a concept that encompasses concept validity may vary between researchers or research schools, an assessment of measurement properties remains tied to rather objective criteria like reliability, transparency, parsimony or replicability. Second, the assessment of a concept and its measurement characteristics ultimately faces the challenge of measuring the contextual characteristics of a political system as closely as possible while adhering to more general measurement principles. The latter represents a task for researchers who want to investigate the comparability of indices. Pickel et al apply a framework that includes twenty criteria, focusing on three indices of quality of democracy. The authors state that a theory-based conceptualisation represents the necessary condition for facing the (potential) trade-off between the adequacy of a measure and its capacity to be compared with other measures in a meaningful way.
Mark David Nieman and Jonathan Ring (2015) pick up another of the big topics of political research: human rights. Their starting point is that all researchers dealing with country data on human rights have to rely on a restricted number of data sources. Namely, the Cingranelli-Richards (CIRI) dataset and the Political Terror Scale (PTS) represent two widely used indices that are both constructed from the same country reports on human rights violations by the United States State Department and Amnesty International. Their main concern is that if data sources share systematic measurement error, for instance due to politico-ideological or geopolitical bias in the country reports, these properties will likely be reflected in the indices constructed from them. After clarifying why the reports of the US State Department possess such undesirable measurement properties, they propose specific remedies for the problem. Nieman and Ring discuss possible solutions such as data truncation as well as strategies for correcting systematic bias using an instrumental variable approach. Their replication analysis reveals that applying the corrected version indeed changes the results of prior analyses. Their work highlights the importance of the decisions made during indicator choice and subsequent analysis, as some choice sets and their consequences for inferential reasoning pose conflicting incentives for researchers, given the publication bias favouring statistically significant findings ( Brodeur et al, 2012 ).
Joakim Kreutz (2015) also scrutinises the methodological foundations of the PTS and CIRI. By referring to both indices, he tries to clarify the connection between human rights and the level of state repression in eighteen West African countries. But instead of focusing on repression levels, Kreutz focuses on changes in repression. By highlighting the importance of repression dynamics, he extends prior evidence on the connection of state repression and politico-institutional factors. From a measurement perspective, disaggregating levels of repression by the direction of change (increase/decrease) and by the nature of repressive actions (indiscriminate, selective targeting) may improve our understanding of the contextual features of repression dynamics. His study provides several implications for current research efforts that try to disentangle the relationship between levels of democracy and state repression.
Alexander Schmotz identifies a gap in the political science literature concerning the measurement of co-optation, the process by which non-members are absorbed into a ruling elite. Concepts of co-optation are particularly important for explaining the persistence of autocratic regimes. As such, issues of co-optation are at the heart of political science research but are only seldom operationalised, especially across time. Schmotz develops an index that is capable of measuring several threats posed to autocratic regimes by social pressure groups; co-optation is a way of dealing with these threats. This topic illustrates some general problems in social science research, namely that theoretical ideas, their predictions about causes and effects, and their testing in empirical research are often intertwined. In such a situation, measurement quality (e.g., content validity) is also related to the performance of the index, in particular if the concept of co-optation refers to a ‘seemingly unrelated set of indicators’ (Schmotz, 2015). Counterintuitive findings are then of particular importance, as in Schmotz’s study. He concludes that the concept of co-optation might not be as important as the relevant literature suggests. Such a finding – based on a new index with the potential for testing and improving its measurement features – will stimulate the discussion in this field and will most likely lead to refinements of theoretical ideas and their operationalisations.
Barbara Bechter and Bernd Brandl (2015) start with the observation that comparative research is mainly based on aggregates at the national level. This ‘methodological nationalism’ comes to a dead end if the variance between countries for the variable of interest vanishes (as typically occurs for political regime indicators for western countries, such as the Polity index). They provide an excellent example of an answer to the question of what accounts for the contextuality of comparative research measures, as they find that, in the field of industrial relations, relevant variables reveal more variability across industrial sectors than across countries. This does not imply that cross-country comparisons are meaningless. Rather, it opens the perspective towards alternative levels of analysis, and not only in the field of industrial relations.
William Pollock, Jason Barabas, Jennifer Jerit, Martijn Schoonvelde, Susan Banducci and Daniel Stevens (2015) introduce their study of media effects with the statement that results from analyses of the degree of media exposure on certain attitudes or public opinion are affected by ‘data issues related to the number of observations, the timing of the inquiry, and (most importantly) the design choices that lead to alternative counterfactuals’ (Pollock et al, 2015). In an attempt to provide a comprehensive overview, two identification strategies for causal claims from cross- or single-country survey data (a difference-in-difference estimator versus a within-survey/within-subject design) are compared to a traditional approach of statistical inference from regression analyses. Using the European Social Survey and information about media-related events during the data collection process allows them to investigate media effects of political or economic events across countries, across types and numbers of events, as well as across time. With a focus on the external validity of such (quasi-)experimental uses of survey data, they are able to generate partly counterintuitive results regarding the impact of sample size and design effects. Their study emphasises that the process of data collection and design choices have an important impact on subsequent data analyses.
By referring to psychometric techniques, Jan Cieciuch et al (2015) raise the question of reliable ways of testing measurement invariance. As a precondition for comparing data, measurement invariance can be determined at the level of theoretical constructs (or latent variables), at the level of relations between the theoretical constructs and their indicators, or at the level of indicators themselves. Standard methods to pinpoint measurement invariance based on factor-analytical techniques are prone to produce false inferences due to model misspecification. Cieciuch and his colleagues pick up the discussion in the literature about model misspecification and show how one can assess whether a certain level of measurement invariance is obtained. As misspecification must be considered a matter of degree, their study stimulates the discussion about how much misspecification is acceptable.
King et al (1994: 25) clarified early on that the achievement of reliability and validity represents a key goal in any social inquiry, whether qualitative or quantitative in nature.
This change does not imply a shift from deductive to inductive reasoning (from data to theories), because researchers remain bound to deriving their results from a theoretical framework. The nomological core of the data-driven approach stems from the distributional characteristics of different probability distributions. See Gelman and Shalizi (2014) for more details on this line of reasoning.
Adcock, R. and Collier, D. (2001) ‘Measurement validity: A shared standard for qualitative and quantitative research’, American Political Science Review 95 (3): 529–546.
Beck, N. (2007) ‘From statistical nuisances to serious modeling: Changing how we think about the analysis of time-series–cross-section data’, Political Analysis 15 (2): 97–100. doi:10.1093/pan/mpm001.
Bechter, B. and Brandl, B. (2015) ‘Measurement and analysis of industrial relations aggregates: What is the relevant unit of analysis in comparative research?’ European Political Science 14(4): 422–438.
Bollen, K.A. (1989) Structural Equations with Latent Variables, New York, NY: Wiley.
Bollen, K.A. (2009) ‘Liberal democracy series I, 1972–1988: Definition, measurement, and trajectories’, Electoral Studies 28 (3): 368–374.
Brodeur, A., Lé, M., Sangnier, M. and Zylberberg, Y. (2012) ‘Star wars: The empirics strike back’, Paris School of Economics Working Paper 2012–29.
Campbell, D.T. and Fiske, D.W. (1959) ‘Convergent and discriminant validation by the multitrait-multimethod matrix’, Psychological Bulletin 56 (2): 81–105.
Cieciuch, J., Davidov, E., Oberski, D.L. and Algersheimer, R. (2015) ‘Testing for measurement invariance by detecting local misspecification and an illustration across online and paper-and-pencil samples’, European Political Science 14(4): 521–538.
Fairbrother, M. (2014) ‘Two multilevel modeling techniques for analyzing comparative longitudinal survey datasets’, Political Science Research and Methods 2 (1): 119–140.
Gelman, A., Carlin, J., Stern, H., Dunson, D.B., Vehtari, A. and Rubin, D. (2014) Bayesian Data Analysis, 3rd edn. London: CRC Press.
Gelman, A. and Shalizi, C. (2014) ‘Philosophy and the practice of Bayesian statistics’, British Journal of Mathematical and Statistical Psychology 66 (1): 8–38.
Gelman, A. and Loken, E. (2014) ‘The statistical crisis in science: Data-dependent analysis – a “garden of forking paths” – explains why many statistically significant comparisons don't hold up’, American Scientist 102 (6): 460. doi:10.1511/2014.111.460.
Gerring, J. (2001) Social Science Methodology: A Criterial Framework, Cambridge: Cambridge University Press.
Hawken, A. and Munck, G.L. (2011) ‘Does the evaluator make a difference? Measurement validity in corruption research’, working paper.
Herrera, Y.M. and Kapur, D. (2007) ‘Improving data quality: Actors, incentives, and capabilities’, Political Analysis 15 (4): 365–386.
Jackman, S. (2009) Bayesian Analysis for the Social Sciences, New York: John Wiley & Sons.
Kaufmann, D., Kraay, A. and Mastruzzi, M. (2010) ‘Response to ‘what do the worldwide governance indicators measure?’’, European Journal of Development Research 22 (1): 55–58.
King, G., Keohane, R.O. and Verba, S. (1994) Designing Social Inquiry: Scientific Inference in Qualitative Research, Princeton, NJ: Princeton University Press.
Kreutz, J. (2015) ‘Separating dirty war from dirty peace: Revisiting the conceptualization of state repression in quantitative data’, European Political Science 14(4): 458–472.
Mudde, C. and Schedler, A. (2010) ‘Introduction: Rational data choice’, Political Research Quarterly 63 (2): 410–416.
Neumann, R. and Graeff, P. (2010) ‘A multitrait-multimethod approach to pinpoint the validity of aggregated governance indicators’, Quality & Quantity 44 (5): 849–864.
Neumann, R. and Graeff, P. (2013) ‘Method bias in comparative research: Problems of construct validity as exemplified by the measurement of ethnic diversity’, Journal of Mathematical Sociology 37 (2): 85–112.
Nieman, M.D. and Ring, J.J. (2015) ‘The construction of human rights: Accounting for systematic bias in common human rights measures’, European Political Science 14(4): 473–495.
Nunnally, J.C. and Bernstein, I.H. (1978) Psychometric Theory, New York: McGraw-Hill.
Pickel, S., Stark, T. and Breustedt, W. (2015) ‘Assessing the quality of quality measures of democracy: a theoretical framework and its empirical application’, European Political Science 14(4): 496–520.
Pollock, W., Barabas, J., Jerit, J., Schoonvelde, M., Banducci, S. and Stevens, D. (2015) ‘Studying media events in the European social surveys across research designs, countries, time, issues, and outcomes’, European Political Science 14(4): 394–421.
Schmotz, A. (2015) ‘Vulnerability and compensation – Constructing an index of co-optation in autocratic regimes’, European Political Science 14(4): 439–457.
Solt, F. (2014) ‘The Standardized World Income Inequality Database’, working paper, SWIID Version 5.0, October 2014, http://myweb.uiowa.edu/fsolt/index.html.
Sullivan, J.L. and Feldman, S. (1979) ‘Multiple indicators – An introduction’, Sage University Paper Series in Quantitative Applications in the Social Sciences No. 07–15, Beverly Hills and London: Sage.
Thomas, M. (2010) ‘What do the worldwide governance indicators measure?’ European Journal of Development Research 22 (1): 31–54.
Varian, H.R. (2014) ‘Big data: New tricks for econometrics’, The Journal of Economic Perspectives 28 (2): 3–28.
Parts of this Special Issue follow upon the symposium ‘The Quality of Measurement – Validity, Reliability and its Ramifications for Multivariate Modelling in Social Sciences’ held at Technische Universität Dresden from 21 to 22 September 2012. Videos of the presentations from the Symposium can be accessed through the website of the symposium at http://tinyurl.com/vwmeasurement . This symposium was financed by the Volkswagen Foundation, which supported the publication of this special issue as well. We thank all participants of the symposium for their remarks and contributions. Foremost, we thank the Volkswagen Foundation for their financial support.
Authors and affiliations.
Technische Universität Dresden, Dresden, 01069, Germany
Robert Neumann
University of Kiel, Christian-Albrechts-Platz 4, Kiel, 24118, Germany
Peter Graeff
Correspondence to Robert Neumann.
The online version of this article is available Open Access
This work is licensed under a Creative Commons Attribution 3.0 Unported License. The images or other third party material in this article are included in the article’s Creative Commons license, unless indicated otherwise in the credit line; if the material is not included under the Creative Commons license, users will need to obtain permission from the license holder to reproduce the material. To view a copy of this license, visit http://creativecommons.org/licenses/by/3.0/
Neumann, R. and Graeff, P. (2015) ‘Quantitative approaches to comparative analyses: Data properties and their implications for theory, measurement and modelling’, European Political Science 14(4): 385–393. https://doi.org/10.1057/eps.2015.59
Published: 06 November 2015
Issue date: 01 December 2015
DOI: https://doi.org/10.1057/eps.2015.59
Published on April 12, 2019 by Raimo Streefkerk. Revised on June 22, 2023.
When collecting and analyzing data, quantitative research deals with numbers and statistics, while qualitative research deals with words and meanings. Both are important for gaining different kinds of knowledge.
Common quantitative methods include experiments, observations recorded as numbers, and surveys with closed-ended questions.
Quantitative research is at risk for research biases including information bias, omitted variable bias, sampling bias, or selection bias.

Qualitative research

Qualitative research is expressed in words. It is used to understand concepts, thoughts or experiences. This type of research enables you to gather in-depth insights on topics that are not well understood.
Common qualitative methods include interviews with open-ended questions, observations described in words, and literature reviews that explore concepts and theories.
Quantitative and qualitative research use different research methods to collect and analyze data, and they allow you to answer different kinds of research questions.
Quantitative and qualitative data can be collected using various methods. It is important to use a data collection method that will help answer your research question(s).
Many data collection methods can be either qualitative or quantitative. For example, in surveys, observational studies or case studies, your data can be represented as numbers (e.g., using rating scales or counting frequencies) or as words (e.g., with open-ended questions or descriptions of what you observe).
However, some methods are more commonly used in one type or the other.
A rule of thumb for deciding whether to use qualitative or quantitative data is:
For most research topics you can choose a qualitative, quantitative or mixed methods approach. Which type you choose depends on, among other things, whether you’re taking an inductive vs. deductive research approach; your research question(s); whether you’re doing experimental, correlational, or descriptive research; and practical considerations such as time, money, availability of data, and access to respondents.
You survey 300 students at your university and ask them questions such as: “On a scale from 1 to 5, how satisfied are you with your professors?”
You can perform statistical analysis on the data and draw conclusions such as: “on average students rated their professors 4.4”.
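As a minimal sketch of the statistical step described above — computing an average satisfaction rating from closed-ended survey responses — the following Python snippet uses invented data (ten ratings rather than 300) purely for illustration:

```python
# Toy analysis of 1-5 satisfaction ratings (invented data, not a real survey).
from statistics import mean, stdev

ratings = [5, 4, 4, 5, 3, 5, 4, 5, 4, 5]  # one rating per respondent

avg = mean(ratings)       # central tendency
spread = stdev(ratings)   # sample standard deviation

print(f"On average, students rated their professors {avg:.1f} (sd = {spread:.1f}).")
```

With real survey data the list would simply be longer; the analysis itself is unchanged.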
You conduct in-depth interviews with 15 students and ask them open-ended questions such as: “How satisfied are you with your studies?”, “What is the most positive aspect of your study program?” and “What can be done to improve the study program?”
Based on the answers you get you can ask follow-up questions to clarify things. You transcribe all interviews using transcription software and try to find commonalities and patterns.
You conduct interviews to find out how satisfied students are with their studies. Through open-ended questions you learn things you never thought about before and gain new insights. Later, you use a survey to test these insights on a larger scale.
It’s also possible to start with a survey to find out the overall trends, followed by interviews to better understand the reasons behind the trends.
Qualitative or quantitative data by itself can’t prove or demonstrate anything, but has to be analyzed to show its meaning in relation to the research questions. The method of analysis differs for each type of data.
Quantitative data is based on numbers. Simple math or more advanced statistical analysis is used to discover commonalities or patterns in the data. The results are often reported in graphs and tables.
Applications such as Excel, SPSS, or R can be used to calculate things like the average score, how often a particular answer was given, or the correlation between two variables.
Qualitative data is more difficult to analyze than quantitative data. It consists of text, images or videos instead of numbers.
Some common approaches to analyzing qualitative data include content analysis, thematic analysis, and discourse analysis.
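One small, deliberately simplified step of such analysis — tallying how often predefined theme keywords occur across interview excerpts — can be sketched in code; the themes and excerpts below are invented, and real qualitative coding is of course far more interpretive than keyword counting:

```python
# Toy illustration of one step in coding qualitative data:
# counting predefined theme keywords across interview excerpts (all invented).
from collections import Counter

themes = {
    "workload": ["workload", "busy", "deadline"],
    "support": ["support", "help", "mentor"],
}

excerpts = [
    "The workload is heavy and every deadline feels too close.",
    "My mentor gives real support when I ask for help.",
    "I am busy, but the support from staff keeps me going.",
]

counts = Counter()
for text in excerpts:
    lowered = text.lower()
    for theme, keywords in themes.items():
        counts[theme] += sum(lowered.count(k) for k in keywords)

print(counts.most_common())  # which themes dominate this tiny corpus
```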
If you want to know more about statistics, methodology, or research bias, make sure to check out some of our other articles with explanations and examples.
Quantitative research deals with numbers and statistics, while qualitative research deals with words and meanings.
Quantitative methods allow you to systematically measure variables and test hypotheses. Qualitative methods allow you to explore concepts and experiences in more detail.
In mixed methods research, you use both qualitative and quantitative data collection and analysis methods to answer your research question.
The research methods you use depend on the type of data you need to answer your research question.
Data collection is the systematic process by which observations or measurements are gathered in research. It is used in many different contexts by academics, governments, businesses, and other organizations.
There are various approaches to qualitative data analysis, but they all share five steps in common.
The specifics of each step depend on the focus of the analysis. Some common approaches include textual analysis, thematic analysis, and discourse analysis.
A research project is an academic, scientific, or professional undertaking to answer a research question. Research projects can take many forms, such as qualitative or quantitative, descriptive, longitudinal, experimental, or correlational. What kind of research approach you choose will depend on your topic.
Streefkerk, R. (2023, June 22). Qualitative vs. Quantitative Research | Differences, Examples & Methods. Scribbr. Retrieved September 2, 2024, from https://www.scribbr.com/methodology/qualitative-quantitative-research/
Humanities and Social Sciences Communications, volume 11, Article number: 1118 (2024)
The present systematic review provides an overview and analysis of the methodological underpinnings of self-regulated learning (SRL) research in ESL/EFL contexts. A search of five academic databases was conducted for studies published from 2017 to 2022. Following the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) guidelines, the search yielded 31 studies conducted in various countries and educational settings. Informed by a 16-item coding scheme, the analysis found that SRL research is largely concentrated in higher education. The results provided evidence to substantiate the idea that quantitative approaches to SRL research are in the ascendancy. Experimental and survey designs were identified as the most preferred research designs. The results revealed an absolute dominance of the questionnaire/scale as the most frequently utilised data collection instrument. As for data analysis software, SPSS and Mplus were applied in the majority of studies. The results demonstrated that correlation, confirmatory factor analysis (CFA), and structural equation modelling (SEM) were among the most widely applied statistical tests. Finally, writing, compared to other language skills/subskills, was found to have received a surge of interest in SRL research. The study concludes with some suggestions for further research.
Introduction.
With the global popularisation of English medium instruction in higher education institutions in non-English-speaking countries around the world, researchers have inevitably been drawn to the significance of maximising students’ learning opportunities in English as a second or foreign language (ESL/EFL) contexts. To this end, identifying variables which may help promote learning outcomes has turned out to become a sine qua non for theory, policy, and practice, exerting influence on students’ learning experience (Ardasheva et al., 2017, p. 544). Indeed, a wide array of factors and variables conducive to learners’ learning experiences has been reported and documented in the literature, namely mode of instructional delivery such as blended learning (e.g., Bouilheres et al., 2020), gamification (e.g., Kian Tan et al., 2023), virtual and augmented reality (e.g., Jiawei et al., 2024; Videnovik et al., 2020), flipped classroom (e.g., Sointu et al., 2023), classroom climate (e.g., Li et al., 2023), artificial intelligence (AI) (e.g., Díaz and Nussbaum, 2024), and instructional quality and student satisfaction (e.g., Yang et al., 2023), to name but a few.
In a similar vein, one variable, amongst others, which has long been considered an element in students’ success is self-regulated learning (e.g., Zimmerman, 1990, p. 4). As a “desirable educational outcome” (Paris and Newman, 1990, p. 87) encompassing a good number of variables influential in learning (Panadero, 2017, p. 1), self-regulated learning has been evidenced as a precious asset to students. The surge of interest in self-regulated learning research over the past decades has culminated in the emergence of several models (e.g., Boekaerts, 2017; Winne and Hadwin, 1998; Zimmerman, 1989), each of which has been the focus of several review studies (e.g., Panadero, 2017; Puustinen and Pulkkinen, 2001).
Nevertheless, since coming into its own in the 1980s, self-regulated learning research has mostly resided in the fields of mainstream general education and educational psychology. Only later did the notion gain momentum in ESL/EFL contexts, especially over the last decade (e.g., Bai and Wang, 2023; Kondo et al., 2012). Given that contextual constraints tend to thwart and exert impact on efforts at regulation (Pintrich, 2004, p. 387), examining how self-regulated learning strategies and models transpire and manifest themselves in ESL/EFL contexts, where the medium of instruction differs from students’ mother tongue and is imbued with idiosyncratic subtleties (Mazandarani and Troudi, 2022), is seemingly of great importance. Despite all the endeavours made so far, drawing solid conclusions as to how self-regulated learning interacts with diverse educational variables and covariates (e.g., different types of language skills, subskills, and components, level of education, gender, age, level of proficiency) remains rather enigmatic in language teaching contexts. As such, several researchers have referred to instances of inconsistent findings, leaving lacunae in the SRL literature (see e.g., Chen, 2022; Guo et al., 2023; Shen and Bai, 2022). For instance, in their study on self-regulated learning strategies in a flipped course, Öztürk and Çakıroğlu (2021, p. 1) found that whereas students’ speaking, reading, writing, and grammar performances benefited significantly from SRL, their listening performance showed no significant difference.
This predicament could be partly due to the fact that research into the dynamics and mechanisms of self-regulated learning in ESL/EFL contexts, compared to mainstream general education, appears to be in its infancy, especially when it comes to understanding the paradigmatic underpinnings of SRL research. In the current educational research milieu in which, as Pring (2000a) eloquently contends, there is a bulk of “bad research” (p. 5), gaining deep insights into how to design, collect, analyse, and interpret accurate data, and draw robust conclusions, is of high significance. As a prized asset to researchers (Mazandarani, 2022a, p. 217), awareness of the paradigmatic nature of what is to be researched is quite seminal, on the very grounds that rigorous findings in a research project tend to be contingent upon solid ontological, epistemological, and methodological assumptions, which per se lay the groundwork for the selection of appropriate methods and instruments. Yet, research has shown that rarely do researchers make the underlying philosophical assumptions explicit in their works (Mazandarani, 2022a). On such grounds, therefore, one recommended course of action for enabling researchers to make sense of the past, present, and future directions of what they research is to appreciate the importance of the methodological approaches of their research topics. One way of doing so lies with conducting meta-analyses and systematic review studies. Whilst the literature on different dimensions of SRL in mainstream education hosts various meta-analysis and systematic review studies offering rich perspectives (e.g., Broadbent and Poon, 2015; Dignath et al., 2008; Jansen et al., 2019; Panadero, 2017; Sitzmann and Ely, 2011; Theobald, 2021), it is somewhat young and in a state of flux in ESL/EFL contexts, with some recent meta-analytic works (e.g., Ardasheva et al., 2017; Chen, 2022; Yang et al., 2023) adopting a limited methodological-analytical framework.
This study is, therefore, one of the first of its kind to delve into the different philosophical and methodological dimensions of SRL research.
Educational research is a messy and convoluted enterprise with trade-offs (Cohen et al., 2018, p. 3). Despite several decades of philosophical discussion, methodological concepts remain ambiguous and opaque (Hammersley, 2023, p. 12). Such vagueness in terminology has led to the interchangeable use of methodology and method not only by some researchers but, more surprisingly, by some journals (Mazandarani, 2022a, p. 218). Of note is that researchers’ methodological choices cannot be exercised in a vacuum, devoid of philosophical positions. Inasmuch as philosophical assumptions exert a deep influence on the conduct of research (Pring, 2000b, p. 88), with implications for researchers’ methodological concerns (Cohen et al., 2018, p. 6), probing into them is a high priority for researchers, who inevitably bring to their adopted methodologies a number of assumptions (Crotty, 1998, p. 7). Despite serving as a desideratum in educational research, however, philosophy along with its underlying assumptions tends to escape researchers’ attention, presumably due to the intricacy and abstractness of philosophical assumptions (Mazandarani, 2022a, p. 218). Understanding the methodology of research projects is vital, in that not only does it tell us about researchers’ philosophical stances, but it also provides the rationale that lies behind the chosen methods (Crotty, 1998, p. 7) and the instrumentation and data collection (Cohen et al., 2018, p. 3). Good researchers are those who are responsible and disciplined (Dörnyei, 2007, p. 17), and accountable for what they add to the literature. To achieve this, research needs to be philosophically and methodologically well informed. As such, the methodological analysis of state-of-the-art research on a given topic can provide invaluable information as to what research philosophies and worldviews dominate research on that topic.
This is quite important, as how researchers view ‘truth’ and what they consider ‘knowledge’ tend to have direct and indirect implications for theory, policy, and, more importantly, practice. However, tracing back the philosophical underpinnings of a research study may not be straightforward. In so doing, one is required to understand beforehand the competing research paradigms, which, as Lincoln, Lynham and Guba (2018, p. 214) argue, have begun to “interbreed”. Irrespective of the type of research paradigm (e.g., positivist, interpretivist, pragmatist) undergirding a given research project, researchers need to be cognisant of the impact of philosophical stances on their methodological decisions. For instance, those who espouse a positivist position will favour experiments and surveys, whereas those who give countenance to anti-positivist standpoints will opt for interpretive approaches such as observation (Cohen et al., 2018, p. 6). A review of the literature on SRL simply substantiates the paucity of research on philosophical and methodological issues in SRL studies in the field of ESL/EFL research. It is, therefore, the aim of the present study to address this gap through a systematic review of the articles published on SRL in the past few years. Systematic reviews have come to prominence in recent years (Bryman, 2012, p. 103). As Andrews (2005, p. 404) posits, the existing body of knowledge deserves reviewing, and in so doing, systematic reviews provide an opportunity to synthesise research findings in the existing literature. Among the several functions of systematic reviews, as Andrews (2005, p. 409) continues, is the analysis of the methodological approaches adopted towards a research topic, exploring where the methodological flaws lie.
As mentioned, silence prevails in the literature on systematic review and meta-analysis studies pertinent to SRL in ESL/EFL contexts, especially when it comes to methodological reflexivity. This section, therefore, provides an overview of the most salient attempts mentioned in the literature. In their meta-analysis of 37 articles, Ardasheva et al. (2017) explored how language learning strategy instruction is associated with self-regulated learning. Supporting the link between the two variables, the results called for further attention to self-regulated learning in strategy instruction research (Ardasheva et al., 2017, p. 544). In her meta-analytic study of 16 articles, Chen (2022) investigated to what extent SRL interventions are effective in students’ achievement, strategy employment, and self-efficacy, the results of which gave support to the effectiveness of SRL interventions (Chen, 2022, p. 14). In a similar study, Xu et al. (2023a) addressed the effect of SRL interventions on students’ academic achievement in both online and blended learning environments across different levels of education. A review of 50 articles showed a moderate and positive effect exerted by SRL interventions on students’ performance in elementary, secondary, and higher education contexts as well as in informal settings (Xu et al., 2023a, p. 2911). Perhaps the most relevant systematic review which partly addressed the methodological issues surrounding SRL is that of Yang, Wen and Song (2023). Focusing on technology-enhanced SRL strategies, their systematic review of 34 studies conducted from 2011 to 2020 substantiated the preponderance of quantitative methods, placing emphasis on outcome rather than process in SRL learning (Yang et al., 2023, p. 31).
As can be seen in the literature reviewed above, various methodological aspects of SRL research in ESL/EFL contexts have not yet been addressed. To address this lacuna, this systematic review provides an overall picture of the status quo of the epistemological and methodological approaches of state-of-the-art ESL/EFL-specific SRL research. In particular, the paper delves into the methodological issues surrounding SRL research, such as which research paradigms, designs, data collection instruments, data analysis software, and statistical techniques, amongst others, tend to be adopted by researchers. To partly bridge the gap, the following research questions were posed:
What are the paradigmatic and methodological features of SRL research in ESL/EFL context?
What is the geographical distribution of SRL research?
To what extent are different levels of education addressed in SRL research?
What aspects and variables of language teaching germane to SRL are investigated?
In order to ensure the rigour and robustness of the review process, this systematic review was informed by PRISMA (Page et al., 2021) as its guiding framework.
The adopted multi-phase search strategy encompassed searching the most relevant terms and queries in five major electronic academic databases: ScienceDirect, SpringerLink, Taylor & Francis Online, Wiley Online Library, and SageJournals. The rationale is that these publishers’ journals are mostly indexed by the two best-known academic indexing databases, Elsevier’s Scopus and Clarivate Analytics’ Web of Science. As the most frequently used databases for bibliometric analysis (Singh et al., 2021), the journals covered by Scopus and Web of Science ensured that rigorous, high-quality articles were extracted for this study. In order to minimise search bias, a multi-phase searching strategy was applied. Given that different abbreviations (“EFL”, “ESL”, and “L2”) are used inconsistently and interchangeably in the literature to refer to English language education contexts, I used the three abbreviations separately, together with the “self-regulated learning” query. This means that the search was repeated 15 times (three abbreviations across five databases) to maximise the search hits. However, given that the initial search yielded abundant results embracing irrelevant studies, and in order to make the search more precise, the search was restricted to “self-regulated learning” in the “title” field AND “EFL” OR “ESL” OR “L2” appearing “anywhere” (see Table 1). This modification allowed for finding the most relevant studies on self-regulated learning in English language education contexts. Finally, in order to capture the state-of-the-art trends of research on SRL, the search, conducted in July 2023, was set to cover studies published from 2017 to 2022. The rationale behind the selection of a six-year period was twofold. First, the present study is an attempt to present an updated, state-of-the-art understanding of SRL research.
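The 15 search runs described above can be sketched as a short script. This is purely an illustrative reconstruction: the database names come from the text, while the `title` and `anywhere` field labels mimic the parameters reported in Table 1 rather than any database’s actual search syntax.

```python
from itertools import product

# The five electronic databases named in the search strategy
databases = ["ScienceDirect", "SpringerLink", "Taylor & Francis Online",
             "Wiley Online Library", "SageJournals"]

# The three context abbreviations, searched separately
abbreviations = ["EFL", "ESL", "L2"]

# Each abbreviation is combined with the SRL query in each database:
# 5 databases x 3 abbreviations = 15 search runs.
searches = [
    {
        "database": db,
        "title": '"self-regulated learning"',  # restricted to the title field
        "anywhere": abbr,                      # abbreviation may appear anywhere
    }
    for db, abbr in product(databases, abbreviations)
]

assert len(searches) == 15
```

The restriction of “self-regulated learning” to the title field is what prunes the abundant irrelevant hits mentioned in the text, while letting the abbreviation match anywhere keeps recall high.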
Second, the literature shows that SRL research in L2 contexts has been booming in the past few years, as a consequence of which the selected period is deemed sufficiently saturated with relevant, high-quality studies. As Gan, Liu and Yang (2020) contend, SRL has in recent years come into its own as an educational innovation.
In order to exclude the gray literature and eliminate irrelevant articles, a set of inclusion/exclusion criteria was applied (see Table 2). For a study to be eligible for the article pool, it had to investigate SRL in English language education contexts. Being peer-reviewed, the paper had to report an original study. Therefore, all other types of academic publication, including reviews, conceptual/theoretical papers, short communications, book chapters, conference proceedings, and editorials, were excluded. In line with the PRISMA 2020 flowchart (Page et al., 2021, p. 6), the search was performed in four stages, as illustrated in Fig. 1.
PRISMA 2020 flow diagram adopted for systematic review (Page et al., 2021 , p. 6).
After proposing a set of parameters and running a rigorous search of the electronic databases, an initial pool of articles relevant to the search strings was aggregated, yielding 69 articles. In the next phase, the titles and abstracts of all extracted articles were screened against a screening guide in which all eligibility criteria were identified. Where it was not possible to judge the relevance of an article from its title and abstract, the full text was examined against the eligibility criteria to make the final decision. Subsequent to several stages of screening and pruning, informed by the PRISMA framework, 38 articles were removed, leaving 31 eligible articles for the final data analysis. In the next phase, the full texts of the extracted articles were subjected to content analysis using a pre-determined coding schedule, in consonance with the proposed research questions. To this end, the full text of each paper, labelled with a unique ID, was screened for publisher, journal title, year of publication, geographical context, level of education, methodological approach, data collection instruments, variables, data analysis software, study design, philosophical assumptions, type and number of participants, number of authors, and main data analysis tests and techniques. Apart from screening the relevant sections of each article, Adobe Acrobat’s FIND function was used to locate the information needed for content analysis, as indicated in the coding scheme.
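The screening arithmetic and the coding schedule described above can be summarised in a minimal sketch; the field names are abbreviated from the list in the text, and the record structure is an illustrative assumption rather than the study’s actual coding instrument.

```python
# PRISMA screening tally reported in the text
identified = 69        # initial pool after the database search
excluded = 38          # removed during title/abstract and full-text screening
eligible = identified - excluded
assert eligible == 31  # articles retained for the final content analysis

# Pre-determined coding schedule applied to each full text
# (field names abbreviated from the list in the text; illustrative only)
coding_schedule = [
    "publisher", "journal_title", "year", "geographical_context",
    "education_level", "methodological_approach",
    "data_collection_instruments", "variables", "analysis_software",
    "study_design", "philosophical_assumptions",
    "participants_type_and_number", "number_of_authors",
    "analysis_tests_and_techniques",
]

# One blank record per article, keyed by its unique ID
record = {field: None for field in coding_schedule}
```

Each article’s record is then filled in during full-text screening, yielding the tabulations reported in the Results section.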
The statistical analysis and thematic content analysis of the final 31 articles were conducted using SPSS version 27, NVivo version 12, and Microsoft Excel. SPSS was used to produce a descriptive profile of the articles, codifying the thematic content analysis carried out for each article. Adopting an inductive thematic analysis approach, NVivo was additionally used to complement the data exploration. In order to identify the most frequent words and concepts mentioned in the relevant sections of the selected articles, the summary function of NVivo was used. To this end, the titles, abstracts, and keywords sections of all articles were extracted from the full texts, ensuring the elimination of redundant information.
Distribution of articles by publishers.
The number of articles extracted from each publisher’s database is shown in Table 3. The majority of articles were obtained from Taylor & Francis Online, with 12 articles (38.7%).
As for the journals included in the analysis, Fig. 2 shows that the final 31 articles were published in 24 journals, with System, Cogent Education, International Journal of Educational Research, Computer Assisted Language Learning, and International Journal of Bilingual Education and Bilingualism leading the distribution.
Journal-wise distribution of articles.
Figure 3 illustrates the dispersion of articles over the six-year span from 2017 to 2022.
Year-wise distribution of articles.
The countries in which the selected studies were conducted are analysed in Table 4, representing all continents except Africa and Antarctica. Of particular note is the number of studies conducted in Hong Kong and China, which accounts for nearly half the studies (48.4%). Indeed, an absolute majority of studies (71%) were conducted in Asian countries. The analysis also showed that 11,160 participants took part in the selected studies, which were conducted by 76 researchers.
The analysis of context and participants of the selected studies demonstrated that SRL research targeted both K-12 and higher education contexts, with 13 studies (41.9%) and 17 studies (54.8%), respectively (see Table 5 ).
Methodological approaches of SRL research.
As shown in Fig. 4, a large majority of articles (80.6%) were conducted quantitatively (e.g., Cho et al., 2020; Lin and Dai, 2022), showing researchers’ inclination to adopt a positivist-quantitative paradigm. In contrast, qualitative studies (e.g., Hu and Gao, 2020; Nakata, 2019) and mixed methods studies (e.g., Onah et al., 2020; Xu, 2021) accounted for a very small proportion of articles, just under 10% each.
Methodological approaches of studies.
The analysis of the research designs adopted in the selected studies revealed that ‘design’ appears to go unnoticed by researchers, inasmuch as a substantial share of articles (41.9%) made no clear reference to the type of design used. Of the remaining articles, as seen in Table 6, (quasi-)experimental designs (e.g., Ferreira et al., 2017; Öztürk and Çakıroğlu, 2021; Teng and Zhang, 2020) and survey designs (e.g., Yi, 2021) were the most adopted, at 22.6% and 16.1%, respectively.
As for the instruments and materials used for collecting data, the analysis revealed that researchers deployed a variety of data collection instruments and tools, many of which were used in only a single study. However, as shown in Fig. 5, ‘questionnaire/scale’ was used in 28 of the 31 articles (90.3%) (e.g., Bai and Guo, 2018; Guo et al., 2021; Teng, 2021), making it the most widely used data collection instrument. Researchers also utilised ‘test’ in 15 studies (48.4%) (e.g., Öztürk and Çakıroğlu, 2021), the second most used instrument, followed by ‘interview’ in 8 studies (25.8%).
Data collection instrument(s).
The obtained results indicated that researchers incorporated various software tools for data analysis. As can be seen in Table 7, however, more than one-third of the studies (35.5%) made no mention of any statistical analysis software. Among the remaining articles, ‘SPSS’ and ‘MPlus’ were the most utilised data analysis software, used in 25.8% (e.g., Ferreira et al., 2017; Lin and Dai, 2022) and 19.4% (e.g., Bai and Wang, 2021; Bai et al., 2021; Yi, 2021) of the selected studies, respectively. As for qualitative data analysis, NVivo was the only analysis software reported in the selected studies (e.g., Alvi and Gillies, 2023; Zhang, 2017).
The analysis of the methods and results sections of the selected articles revealed a wide range of statistical procedures, techniques, and tests used by researchers to answer the proposed research questions. As highlighted in Table 8, correlation and regression were used in 12 studies (38.7%) (e.g., Lin and Dai, 2022), followed by Confirmatory Factor Analysis (CFA) in 10 articles (32.3%) (e.g., Şahin Kızıl and Savran, 2018) and Structural Equation Modelling (SEM) in 10 articles (32.3%) (e.g., Tse et al., 2022). The use of ANOVA, MANOVA, ANCOVA, and MANCOVA was also reported, in 12.9%, 12.9%, 9.7%, and 3.2% of articles, respectively.
An important yet little-researched dimension of the data analysis revolved around the variables (dependent, independent, moderator, etc.) and the language skills and components addressed alongside SRL in the selected studies. As presented in Table 9, in terms of language skills, ‘writing’ was found to be of the highest priority in SRL research, inasmuch as 12 articles (38.7%) addressed ‘writing’ in one way or another (e.g., Guo et al., 2021; Teng and Zhang, 2020). Three articles (9.7%) targeted the ‘reading’ skill in relation to SRL (e.g., Tse et al., 2022). Online and blended learning were also the focus of three studies (9.7%) (e.g., Lin and Dai, 2022; Zhu et al., 2020).
As for the thematic analysis of the selected articles, the summary function of NVivo was run. Table 10 presents the 20 most frequently used words in the titles, abstracts, and keywords sections of the selected papers. Quite expectedly, words such as ‘self’, ‘learning’, ‘regulated’, ‘strategies’, ‘students’, ‘writing’, ‘motivation’, ‘efficacy’, ‘assessment’, ‘instruction’, ‘online’, and ‘reading’ were among the most frequently mentioned concepts.
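NVivo’s summary function essentially produces a word-frequency profile of the selected sections. The following much-simplified Python sketch shows the idea; the tokenisation, stop-word list, and sample texts are illustrative assumptions, not NVivo’s actual algorithm or the study’s data.

```python
import re
from collections import Counter

# Illustrative stop-word list; NVivo uses its own, larger list
STOPWORDS = frozenset({"the", "a", "an", "of", "in", "and", "to", "for", "on"})

def top_words(texts, n=20):
    """Count the most frequent words across a set of text sections."""
    counts = Counter()
    for text in texts:
        # Crude tokenisation: lowercase, letters only (splits hyphenated words)
        tokens = re.findall(r"[a-z]+", text.lower())
        counts.update(t for t in tokens if t not in STOPWORDS)
    return counts.most_common(n)

# Example inputs standing in for extracted titles/abstracts/keywords
sections = [
    "Self-regulated learning strategies and writing self-efficacy",
    "This study investigates self-regulated learning in EFL writing",
]
print(top_words(sections, n=5))
```

Note that this crude tokenisation splits hyphenated terms, so ‘self-regulated’ contributes to both ‘self’ and ‘regulated’, which is consistent with both appearing separately among the top words in Table 10.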
This systematic review was conducted to cast new light on the methodological underpinnings of SRL research in ESL/EFL contexts. Following the PRISMA guidelines, the search yielded 31 articles published from 2017 to 2022, the data from which were subjected to content analysis using SPSS 27, NVivo 12, and Microsoft Excel. The obtained results revealed several gaps and issues underlying SRL research in ESL/EFL contexts, which are discussed in response to the research questions posed for this study.
The results (see Table 4) showed that Asian countries led the state-of-the-art research on SRL, with China and Hong Kong accounting for approximately half the selected studies. This result corroborates the literature on other dimensions of ESL/EFL education. For instance, in his study on L2 teacher education, Mazandarani (2022b, p. 1) found that research on L2 teacher education, compared to mainstream general teacher education, is more nested within Asian countries. Similarly, reviewing technological, pedagogical, and content knowledge (TPACK) research, Tseng et al. (2022, p. 948) identified Asia as the context where most of their selected studies were conducted. A note of emphasis herein is that, with the advent of internationalisation, English as a medium of instruction (EMI) has been in the ascendancy in many non-English-dominant Asian countries. As such, EMI policies in countries such as China are considered a vital ingredient of the internationalisation of higher education (Zhang, 2018, p. 542). It is therefore not surprising that various aspects of ESL/EFL education have become the focus of academic research in non-English-speaking countries, and SRL research is no exception.
The extent to which different educational levels have been targeted by SRL research was also addressed in this study, the results of which gave precedence to higher education over K-12 education (see Table 5). This is consistent with the results of the scoping review conducted by Xu et al. (2023b, p. 8), which identified higher education as the most widely investigated level of education (71.17%) in SRL research in blended or online educational contexts. This result also echoes Yang, Wen and Song’s (2023, p. 35) systematic review of technology-enhanced SRL, in which higher education was the context of 73.5% of studies. The surge of interest in self-regulated learning in higher education could be partly due to students’ age-specific learning needs, which differ in higher education vis-à-vis K-12 education. Higher education students, for instance, may need more support to acquire self-regulated learning strategies and skills in order to avail themselves of artificial intelligence technology (Koć-Januchta et al., 2022, p. 18), which is currently a feature of higher education. Another plausible scenario could be the convenience of conducting intervention research with adults (Xu et al., 2023b, p. 8).
The preponderance of quantitative research methodology was evident, with 25 studies (80.6%), followed by qualitative and mixed methods methodologies (9.7% each) (see Fig. 4). This was expected, in that hypothesis testing, treatments, interventions, causal relationships, correlations, and predictions, informed by an explanatory approach and used frequently in SRL research, are the epitome of quantitative research. This finding is consistent with other SRL studies in which quantitative studies were the most prevalent type of research (e.g., Junaštíková, 2023; Xu et al., 2023b; Yang et al., 2023).
As for study design, it was quite surprising that 13 articles (41.9%) had no specific section on, or even a clear reference to, the type of design used. A lack of clear and correct reference to study design is a common error in many manuscripts (Praharaj, 2023). One possible explanation is that some journals’ author guidelines are rather silent on the ‘design’ of the study, or make it optional for authors to refer to the design in the ‘methods’ section. Of the remaining articles, which did highlight the design of the study, experimental designs were used in seven (22.6%), followed by survey and mixed methods designs, at 16.1% and 9.7%, respectively (see Table 6).
A variety of instruments utilised for data collection purposes were identified in this study (see Fig. 5). Questionnaires and scales were found to be the dominant data collection instruments in SRL research, with 28 studies (90.3%). This finding is consistent with Junaštíková’s (2023) review of empirical studies on self-regulation of learning. One explanation for such ubiquitous use of questionnaires/scales is that, closely aligned with the dominance of the quantitative research approach, several well-established, valid, and reliable questionnaires and tests exist in the literature, which is convenient for researchers in that they can be easily and quickly used in the pre- and post-intervention phases of research, serving as reliable tools for obtaining numerical data for hypothesis testing. As one of the main data collection tools in a survey design, self-completion questionnaires offer several advantages, such as cheap and quick administration and convenience (Bryman, 2012, p. 233), making them a suitable instrument for SRL research.
When it comes to software used for data analysis, this review revealed that researchers in more than one-third of the selected studies (35.5%) had a lackadaisical approach towards reporting the quantitative and/or qualitative data analysis software they employed. One underlying reason for such heedlessness, in a similar vein, might emanate from some journals’ policies and author guidelines. Another plausible scenario is that it is customary in academia for some researchers to turn to statisticians for assistance with statistical and/or thematic analyses, inasmuch as they see it as a technical domain of enquiry requiring expert statistical knowledge (Mazandarani, 2024, p. 408). As a consequence, the results and reports of data analysis tend to be a prime concern for researchers, rather than the type or name of the data analysis software per se. Of the remaining articles, researchers used a variety of data analysis software, with SPSS leading in frequency (25.8%) (see Table 7). Such a result was not uncommon, in that SPSS is the most frequently applied statistical analysis software in the social sciences (Cohen et al., 2018, p. 725; Dörnyei and Csizér, 2012, p. 83). The literature on SRL, however, is rather silent about the usefulness of the various statistical software packages used for data analysis in SRL research. Further research is therefore needed to investigate which data analysis software can best accommodate SRL researchers’ needs in ESL/EFL contexts.
As for statistical tests and procedures, correlation and regression analyses, CFA, and SEM were found to be the most frequently applied by researchers (see Table 8). This finding was expected, on the grounds that the questionnaire/scale is one of the main data collection instruments for investigating SRL and its pertinent variables in survey and correlational (associational) designs, and correlation analysis and CFA are typical statistical techniques for questionnaire/scale development and validation. Unfortunately, there is a dearth of literature on this aspect of SRL research against which this finding could be compared and contrasted.
The analysis of the data offered some new insights into the different variables and domains involved in SRL research. As shown in Table 9, ‘writing’ was the main language skill with respect to which SRL-related investigations were conducted. There are some possible reasons for such a surge of interest in writing compared to other language skills and components. First, writing is usually a compulsory course across a wide range of academic programmes in higher education in ESL/EFL contexts. Second, compared to other academic tasks, writing assignments are reported to be more connected with students’ procrastination (Fritzsche et al., 2003, p. 1550). Third, as a multidimensional phenomenon (Bai and Wang, 2021) of the utmost significance for academic success and future occupation (Bai et al., 2021, p. 65), writing demands high self-regulation abilities embracing an intricate framework of interdependent processes (Zimmerman and Risemberg, 1997, p. 97). Fourth, research has demonstrated that metacognition is a key ingredient of SRL (e.g., Meyer et al., 2010; Senko, Perry and Greiser, 2022), while cognitive and metacognitive strategies rest at the heart of writing quality (Wischgoll, 2016, p. 1). It is therefore explicable why the literature on SRL has witnessed an upswing in studies targeting the interconnection between SRL strategies and writing, the two concepts which were among the most frequently mentioned words in the selected articles.
This review study has several limitations. First, it was limited to papers published in English, leaving out contexts where publications are in other languages. Second, in order to avoid gray and low-quality literature, the search was limited to five well-known academic databases. Although this strategy led to a collection of quality research papers, it may have resulted in the underrepresentation of many others. Third, notwithstanding being chosen carefully, the combination of search keywords might have excluded some relevant, quality articles. Fourth, the adopted coding protocol could have included more, or even different, items, generating a richer analysis. Finally, some important information may have been missed while scanning and searching the texts for keywords with Adobe Acrobat’s FIND function.
In view of the surge of attention to SRL research in mainstream general education and, in particular, ESL/EFL education in recent years, there is a need to understand the status quo of SRL research, identifying the associated strengths, weaknesses, opportunities, and threats. As such, methodological reflexivity provides an opportunity for researchers to reflect on the consequences of their adopted methods, values, biases, and decisions throughout their knowledge production mission (Bryman, 2012, p. 393). This systematic review brings to the fore several facts and gaps underlying SRL research in ESL/EFL contexts, informed by a 16-item coding scheme. One main issue, amongst others, highlighted in this study was the hegemony of etic philosophical positions in SRL research; future research should therefore bring into play more emic approaches. In the same fashion, the existing understanding of SRL research relies heavily on data obtained via questionnaires, scales, and tests, which can limit researchers’ insights into the underlying issues of SRL. Further utilisation of various types of data collection instruments can deepen researchers’ views. Finally, from among the English language skills, subskills, and components, writing has gained much momentum; further research is expected to address SRL in relation to the other language skills and components more evenly.
Alvi E, Gillies RM (2023) Self-regulated learning (SRL) perspectives and strategies of Australian primary school students: a qualitative exploration at different year levels. Educ Rev 75(4):680–702. https://doi.org/10.1080/00131911.2021.1948390
Andrews R (2005) The place of systematic reviews in education research. Br J Educ Stud 53(4):399–416. https://doi.org/10.1111/j.1467-8527.2005.00303.x
Ardasheva Y, Wang Z, Adesope OO, Valentine JC (2017) Exploring effectiveness and moderators of language learning strategy instruction on second language and self-regulated learning outcomes. Rev Educ Res 87(3):544–582. https://doi.org/10.3102/0034654316689135
Bai B, Guo W (2018) Influences of self-regulated learning strategy use on self-efficacy in primary school students’ English writing in Hong Kong. Read Writ Q 34(6):523–536. https://doi.org/10.1080/10573569.2018.1499058
Bai B, Wang J (2021) Hong Kong secondary students’ self-regulated learning strategy use and English writing: influences of motivational beliefs. System 96:102404. https://doi.org/10.1016/j.system.2020.102404
Bai B, Wang J (2023) The role of growth mindset, self-efficacy and intrinsic value in self-regulated learning and English language learning achievements. Lang Teach Res 27(1):207–228. https://doi.org/10.1177/1362168820933190
Bai B, Wang J, Nie Y (2021) Self-efficacy, task values and growth mindset: what has the most predictive power for primary school students’ self-regulated learning in English writing and writing competence in an Asian Confucian cultural context? Camb J Educ 51(1):65–84. https://doi.org/10.1080/0305764X.2020.1778639
Boekaerts M (2017) Cognitive load and self-regulation: attempts to build a bridge. Learn Instr 51:90–97. https://doi.org/10.1016/j.learninstruc.2017.07.001
Bouilheres F, Le LTVH, McDonald S, Nkhoma C, Jandug-Montera L (2020) Defining student learning experience through blended learning. Educ Inf Technol 25(4):3049–3069. https://doi.org/10.1007/s10639-020-10100-y
Broadbent J, Poon WL (2015) Self-regulated learning strategies & academic achievement in online higher education learning environments: a systematic review. Internet High Educ 27:1–13. https://doi.org/10.1016/j.iheduc.2015.04.007
Bryman A (2012) Social research methods, 4th edn. Oxford University Press
Chen J (2022) The effectiveness of self-regulated learning (SRL) interventions on L2 learning achievement, strategy employment and self-efficacy: a meta-analytic study [Systematic Review]. Front Psychol https://doi.org/10.3389/fpsyg.2022.1021101
Cho HJ, Yough M, Levesque-Bristol C (2020) Relationships between beliefs about assessment and self-regulated learning in second language learning. Int J Educ Res 99:101505. https://doi.org/10.1016/j.ijer.2019.101505
Cohen L, Manion L, Morrison K (2018) Research methods in education, 8th edn. Routledge
Crotty M (1998) The foundations of social research: Meaning and perspective in the research process. Sage Publications
Díaz B, Nussbaum M (2024) Artificial intelligence for teaching and learning in schools: the need for pedagogical intelligence. Computers Educ 217:105071. https://doi.org/10.1016/j.compedu.2024.105071
Dignath C, Buettner G, Langfeldt H-P (2008) How can primary school students learn self-regulated learning strategies most effectively?: A meta-analysis on self-regulation training programmes. Educ Res Rev 3(2):101–129. https://doi.org/10.1016/j.edurev.2008.02.003
Dörnyei Z (2007) Research methods in applied linguistics: Quantitative, qualitative, and mixed methodologies. Oxford University Press
Dörnyei Z, Csizér K (2012) How to design and analyze surveys in second language acquisition research. In: A Mackey, SM Gass (eds) Research methods in second language acquisition: a practical guide. Wiley-Blackwell, pp 74–94
Ferreira PC, Simão AMV, da Silva AL (2017) How and with what accuracy do children report self-regulated learning in contemporary EFL instructional settings? Eur J Psychol Educ 32(4):589–615. https://doi.org/10.1007/s10212-016-0313-x
Fritzsche BA, Rapp Young B, Hickson KC (2003) Individual differences in academic procrastination tendency and writing success. Personal Individ Differ 35(7):1549–1557. https://doi.org/10.1016/S0191-8869(02)00369-0
Gan Z, Liu F, Yang CCR (2020) Student-teachers’ self-efficacy for instructing self-regulated learning in the classroom. J Educ Teach 46(1):120–123. https://doi.org/10.1080/02607476.2019.1708632
Guo W, Bai B, Song H (2021) Influences of process-based instruction on students’ use of self-regulated learning strategies in EFL writing. System 101:102578. https://doi.org/10.1016/j.system.2021.102578
Guo W, Lau KL, Wei J, Bai B (2023) Academic subject and gender differences in high school students’ self-regulated learning of language and mathematics. Curr Psychol 42(10):7965–7980. https://doi.org/10.1007/s12144-021-02120-9
Hammersley M (2023) Methodological concepts: a critical guide. Routledge
Hu J, Gao X (2020) Appropriation of resources by bilingual students for self-regulated learning of science. Int J Bilingual Educ Bilingualism 23(5):567–583. https://doi.org/10.1080/13670050.2017.1386615
Jansen RS, van Leeuwen A, Janssen J, Jak S, Kester L (2019) Self-regulated learning partially mediates the effect of self-regulated learning interventions on achievement in higher education: a meta-analysis. Educ Res Rev 28:100292. https://doi.org/10.1016/j.edurev.2019.100292
Jiawei W, Mokmin NAM, Shaorong J (2024) Enhancing higher education art students’ learning experience through virtual reality: a comprehensive literature review of product design courses. Interactive Learn Environ 1–17. https://doi.org/10.1080/10494820.2024.2315125
Junaštíková J (2023) Self-regulation of learning in the context of modern technology: a review of empirical studies. Interactive Technol Smart Educ. https://doi.org/10.1108/ITSE-02-2023-0030
Kian Tan W, Shahrizal Sunar M, Su Goh E (2023) Analysis of the college underachievers’ transformation via gamified learning experience. Entertain Comput 44:100524. https://doi.org/10.1016/j.entcom.2022.100524
Koć-Januchta MM, Schönborn KJ, Roehrig C, Chaudhri VK, Tibell LAE, Heller HC (2022) “Connecting concepts helps put main ideas together”: cognitive load and usability in learning biology with an AI-enriched textbook. Int J Educ Technol High Educ 19(1):11. https://doi.org/10.1186/s41239-021-00317-3
Kondo M, Ishikawa Y, Smith C, Sakamoto K, Shimomura H, Wada N (2012) Mobile assisted language learning in university EFL courses in Japan: developing attitudes and skills for self-regulated learning. ReCALL 24(2):169–187. https://doi.org/10.1017/S0958344012000055
Authors and affiliations.
Department of English Language Teaching, Aliabad Katoul Branch, Islamic Azad University, Aliabad Katoul, Iran
Omid Mazandarani
The author contributed to and supervised this work.
Correspondence to Omid Mazandarani .
Competing interests.
The author declares no competing interests.
This article does not contain any studies with human participants performed by any of the authors.
Additional information.
Publisher’s note Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
Open Access This article is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License, which permits any non-commercial use, sharing, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if you modified the licensed material. You do not have permission under this licence to share adapted material derived from this article or parts of it. The images or other third party material in this article are included in the article’s Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article’s Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by-nc-nd/4.0/ .
Cite this article.
Mazandarani, O. Self-regulated learning in ESL/EFL contexts: a methodological exploration. Humanit Soc Sci Commun 11, 1118 (2024). https://doi.org/10.1057/s41599-024-03617-x
Received: 11 January 2024
Accepted: 19 August 2024
Published: 31 August 2024
BMC Medical Ethics volume 25, Article number: 94 (2024)
In the years to come, artificial intelligence will become an indispensable tool in medical practice. The digital transformation will undoubtedly affect today’s medical students. This study focuses on trust from the perspective of three groups of medical students - students from Croatia, students from Slovakia, and international students studying in Slovakia.
A paper-pen survey was conducted using a non-probabilistic convenience sample. In the second half of 2022, 1715 students were surveyed at five faculties in Croatia and three in Slovakia.
Specifically, 38.2% of students indicated familiarity with the concept of AI, while 44.8% believed they would use AI in the future. Patient readiness for the implementation of these technologies was mostly assessed as low. More than half of the students (59.1%) believe that the implementation of digital technology (AI) will negatively impact the patient-physician relationship, and 51.3% believe that patients will trust physicians less; the lowest agreement with this statement was observed among international students, with higher agreement among Slovak and Croatian students. Regarding trust in the healthcare system, 40.9% of Croatian students believe that users do not trust the healthcare system, 56.9% of Slovak students agree with this view, while only 17.3% of international students share this opinion. The self-rated ability to explain to patients how AI works, if asked, was statistically significantly different between the student groups: international students expressed the lowest agreement, while the Slovak and Croatian students showed higher agreement.
This study provides insight into the attitudes of medical students from Croatia and Slovakia, as well as international students, regarding the role of artificial intelligence (AI) in the future healthcare system, with a particular emphasis on the concept of trust. Notable differences were observed between the three groups, with international students differing from their Croatian and Slovak colleagues. The study also highlights the importance of integrating AI topics into the medical curriculum, taking into account national social and cultural specificities that could negatively affect AI implementation if not carefully addressed.
Technological advancements and artificial intelligence (AI) have transformed healthcare over the past few years. There has been a broad range of applications for AI in medicine, ranging from appointment scheduling and digitising health records to using algorithms to determine drug dosage [ 1 ]. The enthusiasm for the application of AI has extended to various medical specialties, such as radiology [ 2 , 3 ], oncology [ 4 ], neurology [ 5 ], and nephrology [ 6 ]. These changes have also prompted many studies to focus on students' attitudes and their choice of specialisation. Interesting results that have emerged from this research include a shift in interest toward these specialisations, anticipated changes in daily work, and a consideration of fears and expectations [ 7 , 8 , 9 ]. Students represent an interesting group when researching the future of healthcare and perceptions regarding the use of AI. Research has shown that in most cases, medical students agree with statements indicating that they understand what AI is [ 10 , 11 ]. However, when asked to define it themselves, the majority are unable to do so [ 12 ]. The existing literature recognises the necessity of incorporating education on the use of AI into medical curricula, highlighting that current education in this area is neither sufficient nor satisfactory [ 11 , 12 , 13 , 14 ]. Although medical students expect AI to transform and revolutionise healthcare, they note that current education on this topic is inadequate [ 15 ]. In Croatia, most medical faculties include medical informatics as a mandatory course in their curriculum (in the 2nd or 5th year of study), while no course directly focused on AI has been found. However, several elective courses, such as "Robotics in Medicine" and "Digital Technologies in the Healthcare System and E-Health", introduce students to AI through practical applications.
Although there are no specific subjects on AI in the medical curricula in Slovakia, medical faculties organise lectures and workshops on AI for medical students. At the largest Slovak medical faculty, in Bratislava, the topic of AI has been addressed for the last four years in the first-year medical ethics course. Medical students' readiness for AI, which they should develop during their studies, has received more attention in the form of the Medical Artificial Intelligence Readiness Scale for Medical Students (MAIRS-MS) [ 16 ]. While some studies suggest what medical students should know about artificial intelligence in medicine [ 17 ], others highlight the need for health AI ethics in medical school education [ 18 ]. Students believe that AI will make medicine more exciting in the future and that AI should be a partner rather than a competitor [ 19 ]. They also think that receiving education in AI will greatly benefit their careers [ 20 ]. While significant progress has been observed in implementing AI across various applications, these are still early stages that require validation and solutions for emerging ethical and social challenges [ 21 ]. Students have expressed fear about reduced interaction with patients due to the integration of AI [ 14 ], decreased job opportunities, and the emergence of new ethical and social challenges [ 10 ]. They are also concerned that AI will increase patient risks, reduce physicians' skills, and harm patients [ 22 ].
Implementing AI brings about changes that will impact the patient and physician relationship [ 23 ]. Adopting AI involves a patient-centred approach that promotes informed choices [ 24 ]. The relationship between physicians and patients has been evolving under the influence of social circumstances and technological progress. The information and digital age have provided patients with tools empowering them to take on an active role as co-decision-makers, unlike when a paternalistic model prevailed and only physicians had exclusive access to medical information [ 25 , 26 ].
Trust is a crucial factor in the current model of the patient-physician relationship. As a complex concept from the perspective of both physicians and patients, trust is the foundation for successful health outcomes and a quality relationship between them [ 27 ]. Trust is deeply embedded in the physician-patient relationship, making it a fiduciary relationship. Inserting a new actor will bring disruption and potentially even the creation of new dyadic or triadic trusting relationships between physicians and AI, patients and AI, or even between patients, the physician and AI [ 35 ]. Due to technological advancements, trust relationships in healthcare will become even more of an issue, necessitating active reflection and action [ 28 ].
One of the most critical ethical values in the design, development, and deployment of medical AI is transparency. It is not merely a recommendation but a necessity, tied to the informed consent of the user (physician) who may or may not be fully aware of the underlying processes in the algorithmic decision-making. Thus, one of the most pressing issues, alongside transparency, is explainability [ 29 ]. Explainability and transparency are closely linked with the level of trust and trustworthiness; trust mainly refers to the belief that we can depend on someone or something, hence a gradual increase in reliability may lead to trust [ 30 ]. From a phenomenological perspective, trust in medical AI is an affective-cognitive state of the entities involved in these relationships, namely the trustor (the person who trusts) and the trustee (the entity to be trusted) [ 31 ]. In this instance, the trustor is a physician, and the trustee would be the medical AI system. As for the current ongoing discussion on whether medical AI can be trusted or only relied on [ 32 , 33 , 34 ], an interesting research question has emerged, specifically the need to examine whether future physicians perceive that this trust is possible or will be disruptive.
In our study, we aimed to focus on the medical students’ attitudes towards the role of AI in the future of healthcare, particularly focusing on the concept of trust.
This study aims to explore:
How students perceive the phenomenon of trust in the physician-patient relationship.
The perception of their own medical expertise in the context of AI use.
Students’ estimation of patient preparedness to embrace AI as part of everyday healthcare provision.
Additionally, the study investigated whether trust is a prerequisite for the physician-patient relationship in the context of AI implementation.
This study involved medical students from Croatia and Slovakia, two Eastern European countries with many similarities, such as in their history and states’ development, social circumstances, and healthcare challenges. International students from different societal backgrounds have also been included in the study and were observed in the analysis as a third group. This study was conducted between May 2022 and November 2022 at five medical schools in Croatia and three in Slovakia (Table 1 ). This study was conducted using a non-probabilistic convenience sample. The inclusion criteria were being a medical student in one of the medical schools in Croatia or Slovakia and being physically present at lectures where the researchers conducted the research. The study included students from all years of study, as was the practice in some other studies conducted on this topic [ 15 , 20 , 33 , 36 , 37 , 39 ]. The survey was conducted using the paper-pen method, except at one university in Slovakia where the students, after signing an informed consent form, received a URL link to the survey on the LimeSurvey platform. In agreement with the lecturers, the researchers arrived at the beginning of lectures, introduced the research, and asked for the students’ voluntary participation. Students who were interested in the study were asked to sign the informed consent form. In total, 1715 medical students participated. Fourteen responses were excluded from the statistical analysis due to insufficient survey completion. The final sample consisted of 1701 medical students.
The research team developed a questionnaire, and the English version is available in supplementary files (Additional file 1). The survey and the questions were based on a prior qualitative study conducted in 2021 in Croatia [ 35 ], as well as a literature review of previous surveys involving medical students, patients, and physicians [ 23 , 36 , 37 , 38 , 39 , 40 , 41 ]. As in our qualitative study [ 35 ], the anticipatory ethics approach [ 42 ] was followed with the same scenario. To preserve the continuity between the qualitative and quantitative studies, we deliberately decided to focus primarily on the ethical, legal and social issues by not using the existing MAIRS-MS [ 16 ]. The survey focused on six broad topics and explored the following regarding the participants: (1) their motivation for enrolling in medical studies and the self-reported knowledge of medical ethics and/or bioethics; (2) the attitudes related to the impact of AI on the patient-physician relationship; (3) their self-reported perception of understanding of artificial intelligence; (4) their propensity to use AI and digital technologies in future medical practice; (5) the perceived utility of AI in the future, and societal readiness and preparedness for implementation; and (6) their demographic characteristics. The questions included multiple-choice answers on a 5-point Likert scale (the participants were instructed to read the statements and express their agreement or disagreement). At the beginning of the survey, a short scenario (Additional file 2) was presented to the medical students based on the anticipatory ethics approach [ 42 ], followed by the survey questions. This short scenario focused on an AI-based virtual assistant used in a hospital context in 2030. The survey was pilot-tested with a small sample of first-year students from the researcher’s university to ensure questionnaire comprehension, clarity, and an estimate of the time taken to answer the questionnaire.
The survey was available in Croatian, Slovak, and English, the latter particularly for the international students studying Medicine in the English program. The part of the questionnaire related to the perception of patient readiness, which was taken for further analysis, consisted of four questions with a high level of internal consistency, as determined by the Cronbach’s alpha score of 0.810.
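The internal-consistency check reported above can be reproduced from the raw item scores. The sketch below is a minimal Python implementation of Cronbach's alpha (the study itself used SPSS); the example matrix is invented purely for illustration.

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach's alpha for an (n_respondents, n_items) matrix of Likert scores."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)      # variance of each item
    total_var = items.sum(axis=1).var(ddof=1)  # variance of the summed scale
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

# Toy data: four perfectly correlated items give the maximum alpha of 1
perfect = np.tile(np.array([[1], [2], [3], [4], [5]]), (1, 4))
print(round(cronbach_alpha(perfect), 3))  # 1.0
```

A real check would pass the four patient-readiness items from the survey; values around 0.8, like the 0.810 reported, are conventionally read as good internal consistency.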
All statistical analyses were conducted using SPSS version 25 (IBM Corp., Armonk, NY, USA). Simple descriptive statistics are presented as percentages. An independent t-test and one-way ANOVA were conducted to examine group differences based on demographic determinants. Principal axis factoring was run on the questions about attitudes towards using AI technology in their future work.
A total of 1701 responses were collected from eight Schools of Medicine (Table 1 ). Among these, 771 students (45.3%) were from Croatia, and 930 (54.7%) were from Slovakia, comprising 587 (34.5%) Slovak students and 343 (20.2%) international students mainly arriving from Western European and Scandinavian countries. Overall, 63.7% (1084) were female, 34.5% (587) were male, while 30 (1.8%) participants’ answers for gender were missing. In this study, female students were more represented than male students, which is in line with gender structure trends in medical studies. The Eurostudent VI survey for Croatia (2019) shows that 77.6% of students in medicine and social care are female compared to 22.1% male [ 43 ]. In some other studies on medical students in Croatia, similar ratios between male and female students have been observed [ 44 , 45 ]. Recent studies in Slovakia on the population of medical students also have a higher proportion of women than men in their samples [ 46 , 47 ]. The most represented group consisted of first-year students, followed by fourth-year and fifth-year students. The lowest representation was among sixth-year students, which is attributed to the sampling approach that included only students attending lectures at the Faculty of Medicine. Given the specificities of medical education, this group was often located in hospital centres and clinics, making them less accessible to researchers.
Regarding their acquaintance with the concept of artificial intelligence, a significant portion of students (38.6%) remained neutral, indicating neither agreement nor disagreement with the statement (Fig. 1 ). Additionally, 38.2% of students agreed with the assertion, while 23.2% negatively assessed their familiarity with the concept of AI. There was a statistically significant difference in the mean acquaintance score between males and females, t(1162.09) = 7.928, P < .001, with males scoring higher (M = 3.45, SD = 1.014) than females (M = 3.05, SD = 0.977). Similar results were also seen for the statement, “I expect to actively use artificial intelligence in my medical practice.” In this context, 39% of students remained neutral, 44.8% expressed an expectation to actively utilise artificial intelligence in their future medical practice, while 16.2% disagreed.
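The gender comparison above can be sanity-checked from the published summary statistics alone. The sketch below (Python; the study itself used SPSS) computes Welch's t and its degrees of freedom from the reported group means, SDs, and group sizes (587 male, 1084 female); a small discrepancy from the reported t of roughly 7.9 is expected because the published means and SDs are rounded.

```python
import math

def welch_t_from_summary(m1, s1, n1, m2, s2, n2):
    """Welch's t statistic and degrees of freedom from summary statistics."""
    se2_1, se2_2 = s1 ** 2 / n1, s2 ** 2 / n2        # squared standard errors
    t = (m1 - m2) / math.sqrt(se2_1 + se2_2)
    # Welch-Satterthwaite approximation for the degrees of freedom
    df = (se2_1 + se2_2) ** 2 / (se2_1 ** 2 / (n1 - 1) + se2_2 ** 2 / (n2 - 1))
    return t, df

# Males (n = 587) vs. females (n = 1084), self-rated AI familiarity
t, df = welch_t_from_summary(3.45, 1.014, 587, 3.05, 0.977, 1084)
print(round(t, 2), round(df, 1))  # ≈ 7.8 and ≈ 1164, close to the reported values
```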
Students’ attitudes toward AI
Regarding trust within the patient-physician relationship, the medical students exhibit pronounced affirmative attitudes (Fig. 2 ). In response to the statement, “The patient and the physician should trust each other,” 80% of students strongly agreed, 16.8% agreed, 2.1% were neutral, and only 1.1% disagreed. For the statement, “The patient should trust the physician upon consulting him/her,” only 0.8% of students disagreed, 3% were neutral, while 96.2% of students agreed. Among the medical students who participated in this study, 2.9% disagreed with the assertion that “The physician is required to clarify to the patient how he or she came to a certain conclusion.” Here, 8.9% were neutral, and 89.2% agreed.
Students’ attitudes toward different aspects of the patient-physician relationship
Based on the provided statements, a statistically significant difference was found among the Croatian, Slovak, and international students, as illustrated in Table 2 . The international students were less likely than their Croatian and Slovak counterparts to agree with the statements that patients should trust the physician during consultations and must rely entirely on the physician’s opinion. Conversely, they were more inclined to agree that patients respect physicians’ time, with which the Croatian and Slovak students agreed to a lesser extent.
Table 3 presents the percentage of agreement with the question, “To what extent do you think users trust the healthcare system in the country you study in?” Here, 40.9% of Croatian students believe that users do not trust the healthcare system, 56.9% of Slovak students agree with this view, while only 17.3% of international students share this opinion. A one-way ANOVA was conducted to determine whether the student groups’ perceptions of patient trust differed. The perception of patient trust in the healthcare system was statistically significantly different for the different student groups, Welch’s F(2, 106.211) = 901.153, P < .001. There was a statistically significant difference (P < .001) in means between the Slovak students (M = 2.51, SD = 0.737), Croatian students (M = 2.75, SD = 0.847), and international students (M = 3.28, SD = 0.798). Interestingly, the international students believe that users trust the Slovak healthcare system more than the Slovak students themselves do, with a mean increase of 0.77, 95% CI [0.64, 0.9].
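Welch's ANOVA, used here because the three student groups differ in size and variance, follows directly from its textbook formulas. The sketch below is a minimal Python implementation (the study used SPSS); the toy data are invented for illustration.

```python
import numpy as np

def welch_anova(*groups):
    """Welch's one-way ANOVA, robust to unequal group variances.
    Returns (F, df1, df2)."""
    k = len(groups)
    n = np.array([len(g) for g in groups], dtype=float)
    m = np.array([np.mean(g) for g in groups])
    v = np.array([np.var(g, ddof=1) for g in groups])
    w = n / v                                   # precision weights
    grand = np.sum(w * m) / np.sum(w)           # weighted grand mean
    num = np.sum(w * (m - grand) ** 2) / (k - 1)
    a = np.sum((1 - w / w.sum()) ** 2 / (n - 1))
    f_stat = num / (1 + 2 * (k - 2) * a / (k ** 2 - 1))
    df2 = (k ** 2 - 1) / (3 * a)                # approximate denominator df
    return f_stat, k - 1, df2

# Identical group means give F = 0, with df1 = k - 1 = 2
same = ([1, 2, 3, 4, 5],) * 3
f0, df1, _ = welch_anova(*same)
print(df1, round(f0, 6))  # 2 0.0
```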
The construct of patient readiness consisted of the students’ perception of patient trust in technology, adaptability, digital literacy, and medical literacy. These aspects have been recognised as necessary for patients to be ready to use the technology. Scores ranged from a minimum of 4 to a maximum of 20: a score of 4 was obtained if the student responded to all statements with “strongly disagree”, and 20 if the student responded to all statements with “strongly agree”. A statistically significant difference (P < .001) in the perception of patient readiness was observed among Croatian, Slovak, and international students. The Croatian students gave, on average, the lowest scores for patient readiness (M = 8.40, SD = 2.814), followed by the Slovak students (M = 8.79, SD = 2.689), while the international students expressed the highest confidence in patients’ readiness to use AI technology in the future (M = 9.62, SD = 2.829).
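The composite scoring above is simple summation of the four 5-point items, which fixes the 4-20 range. A minimal sketch of that scoring rule (the function name is ours, not the study's):

```python
def readiness_score(responses):
    """Sum four 5-point Likert items (1 = strongly disagree … 5 = strongly agree).
    Valid composite scores therefore range from 4 to 20."""
    assert len(responses) == 4 and all(1 <= r <= 5 for r in responses)
    return sum(responses)

print(readiness_score([1, 1, 1, 1]))  # 4  (all "strongly disagree")
print(readiness_score([5, 5, 5, 5]))  # 20 (all "strongly agree")
```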
Here, 59.1% of students agreed that implementing digital technologies will have a negative impact on the patient-physician relationship (M = 3.62, SD = 1.009). No statistically significant difference was found based on the students’ country of origin. On the other hand, there was a statistically significant difference (P < .001) among the students regarding the belief that patients will trust physicians less as more digital technologies are implemented; 51.3% of students believe that patients will trust physicians less. The lowest agreement with this statement was observed among international students (M = 3.09, SD = 1.006), while higher agreement was expressed by Slovak (M = 3.50, SD = 1.030) and Croatian students (M = 3.51, SD = 1.006).
The third aspect of trust focused on confidence in use. Here, 53.6% of students believe that, if asked by a patient, they would be able to explain how the technology works. The ability to explain to patients how AI works was statistically significantly different for the different student groups, Welch’s F(2, 856.821) = 12.294, P < .001. International students expressed the lowest agreement with the statement (M = 3.09, SD = 1.215), while the Slovak (M = 3.41, SD = 1.048) and Croatian (M = 3.47, SD = 1.096) students showed higher agreement.
In the scenario (Annex I), AI was presented through the virtual assistant Cronko. The students were asked to assess how likely it was that they would react in a specific way if the diagnosis they provided significantly differed from that of the virtual assistant (AI) (Table 4 ). A statistically significant difference was found among the Slovak, Croatian, and international students. In this case, the international students expressed a lower likelihood of standing by their diagnostic conclusion and a higher mean score for rejecting their conclusion, favouring the AI’s opinion.
The students were also required to decide how patients should react if the diagnosis of the physician and AI significantly differed (Table 5 ). Here, 49.4% of students believe that patients should seek a third (expert) opinion, 42.1% thought that they should trust the physician, and 7.4% believe that they should consider both diagnoses and decide for themselves. Only a small number thought that they should trust the AI (0.7%) or seek a third opinion from another artificial intelligence system (0.4%).
The crosstabulation analysis revealed that a lower percentage of international students believe that patients should trust the physician compared to Croatian and Slovak students. Based on Pearson’s chi-square test (χ² = 43.731, df = 8, P < .001), it was concluded that the students’ country of origin and the opinion that the patient should trust the physician are associated. The measure of association (Cramér’s V) indicates a statistically significant but weak association between the variables (φ = 0.114, P < .001).
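The chi-square test and Cramér's V used above can be computed from any r × c contingency table. The sketch below uses Python's SciPy (the study used SPSS); the table is invented to stand in for the three student groups crossed with the five response options, which yields df = (3 − 1)(5 − 1) = 8, matching the reported degrees of freedom.

```python
import numpy as np
from scipy.stats import chi2_contingency

def chi2_cramers_v(table):
    """Pearson chi-square test plus Cramér's V effect size for an r x c table."""
    table = np.asarray(table, dtype=float)
    chi2, p, dof, _expected = chi2_contingency(table)
    n = table.sum()
    v = np.sqrt(chi2 / (n * (min(table.shape) - 1)))  # Cramér's V
    return chi2, p, dof, v

# Hypothetical counts: 3 student groups (rows) x 5 answer options (columns)
table = [[120, 300, 20, 2, 1],
         [250, 310, 25, 5, 2],
         [ 90, 200, 50, 3, 0]]
chi2, p, dof, v = chi2_cramers_v(table)
print(dof)  # 8
```

By convention, V below roughly 0.2 (as with the reported 0.114) is read as a weak association even when the test itself is significant, which large samples make likely.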
As far as the authors are aware, this is the first study providing the perspective of Eastern European countries regarding medical students’ attitudes toward the use of AI in medical practice. Previous studies have focused on Western countries such as Germany [ 48 , 49 , 50 ], Switzerland [ 37 ], the United Kingdom [ 39 , 40 ], and Canada [ 7 , 10 , 12 ], as well as Asian countries [ 11 , 13 , 51 , 52 , 53 , 54 , 55 , 56 , 57 , 58 ]. Although many expect that AI’s implementation in healthcare will occur in the coming years, only 44.8% of students believe they will use AI in the future. Here, 53.6% of students believed they would be able to explain to patients how AI technology works, and only 38.2% emphasised that they were (currently) familiar with the concept of AI. These results align with a study in Germany, where 64.3% of students expressed that they did not feel well informed about AI in medicine [ 48 ]. It is important to note that previous research has observed a discrepancy between the perceived understanding of AI and actual knowledge among medical students [ 9 ]. In the current era, medical education should set a goal to develop the skills that enable students to acquire knowledge about AI and successfully apply it in patient interactions, allowing them to convey information to patients in an understandable manner [ 59 ].
The prevailing view among Croatian and Slovak students was that users do not trust the healthcare system. This perception of a lack of trust aligns with research conducted on the general population. The EVS survey indicated that only 43% of Croatian citizens trust the healthcare system [60]. Studies have shown that a quarter of the population considers the healthcare system completely ineffective, that the majority believes fundamental changes are needed, and that the lowest levels of trust are expressed by social groups with the least education [61]. The general level of satisfaction with the healthcare system in Slovakia recently reached 44%. When asked "To what extent do you trust conventional medicine in doctors and hospitals?", Slovakia fell to the bottom of the ranking, with 55% of the population trusting conventional medicine, below the European average. The main reasons Slovaks cited for their dissatisfaction were the inability to get an appointment with a doctor (57%) and a negative personal or mediated experience with the care provided (51%) [62]. As previously highlighted, most international students come from Norway and other Scandinavian countries, where many studies show that trust in healthcare is exceptionally high [63, 64, 65]. International students can therefore be expected to project the same perception of trust onto the healthcare system of their host country.
In Croatia and Slovakia, where trust in the healthcare system is relatively low and students perceive that patients do not have much trust in the system, students are more likely to believe that patients must fully trust their physicians during consultations and that patients do not respect the physician's time. The implementation of AI requires collaborative cooperation between the patient and the physician, which necessitates mutual trust and understanding [66]. Trust has been defined as "individuals' calculated exposure to the risk of harm from the actions of an influential other" [31, 67], where harm signifies the extent of physical and/or psychological damage that can result from incorrectly calibrated trust decisions [31]. In the physician's use of medical AI, however, the damage primarily manifests as harm to the patient and directly affects the physician-patient relationship [35, 68]. This also affects the reliability aspect and the physician's trust in medical AI, as well as its acceptability and future use, which are directly related to trustworthiness.
The views of international students on issues of AI and medical trust may also differ because these individuals mostly come from Western and Northern European countries, where the shared decision-making model of the patient-physician relationship is widely used in medical practice. The shared decision-making model avoids the trap of the two extremes in which, on the one hand, the physician has a dominant role as the decision-maker and, on the other, the patient has an absolute position and makes the decision alone. Modern medicine has moved from a paternalistic approach to a physician-patient partnership based on mutual discussion. It is very likely that international students from Western Europe are more accustomed to a system in which patient autonomy and ethical communication are emphasised. The persistence of a paternalistic mentality in the healthcare system is noticeable in some post-communist or transitional countries [69, 70]. Although these countries are transforming and increasingly involving patients in decision-making, remnants of the old mentality still exist. The Slovak and Croatian students expressed more negative attitudes than international students regarding patients respecting physicians' time. Similarly, they were more inclined to believe that patients should fully trust physicians' opinions. The attitudes of both Croatian and Slovak students towards trust between patient and physician in the context of AI can be partly explained by the paternalistic model of the patient-physician relationship, which is still present to some extent in these countries. Transitional countries, including Croatia and Slovakia, have specific cultural patterns in patient-physician communication, such as a lack of information sharing and a paternalistic approach to the patient [71]. In the region of Central and South-Eastern Europe, these issues have not been studied systematically [71].
However, Croatian researchers, following the Slovak research team [72], have carried out a study of patient rights focusing on patient-physician communication and the informed consent process [71]. The results showed that communication during the informed consent process in selected Croatian hospitals was based on the shared decision-making model, but the paternalistic relationship was still present. We assume that, given the similar cultural and political background, the situation is probably analogous in Slovakia, although to the best of our knowledge such research has not been conducted there recently. A case of persisting medical paternalism in Slovakia that sparked public debate was the involuntary sterilisation of Roma women, which began in communist Czechoslovakia and continued into the 2000s. This case has contributed to ongoing mistrust of the national health system among Roma, impacting vaccine uptake and highlighting the need for improved communication and informed consent practices [73, 74].
In cases of conflict between the judgements of the physician and the AI, our results demonstrate that most medical students consider that patients should seek a third (expert) opinion (49.4%) or trust the physician (42.1%). These results are similar to a German study [48] in which the majority (82.5%) stated that the physician's decision should be followed. In such a disagreement, the international students were keener than the Croatian and Slovak students to set aside their own decisions in favour of the AI, despite attending the same programme as their Slovak colleagues. The new insights from our study represent a valuable contribution to the ongoing discussion [32, 33, 34] on the possibility of trusting medical AI from the perspective of future physicians who will probably use AI in their everyday work.
In cases of differing diagnoses, Croatian and Slovak students were more likely to believe that patients should rely on the physician's opinion. Almost 90% of students thought the physician must explain to the patient how they reached a conclusion; however, only 53.6% of students believed they could explain to a patient how AI technology works. This gap may pose a problem in healthcare, since inadequate explanations could undermine patients' and future physicians' understanding and acceptance of AI diagnostic conclusions, especially when the diagnoses differ. Future physicians must know how to use AI, understand and interpret its results, be aware of all the risks, and explain them to patients in an understandable way [75].
To the best of our knowledge, no similar research focusing on Eastern Europe, specifically Croatia and Slovakia, has emphasised the various aspects of trust that are crucial to consider in the context of medical AI. This study highlights the differences between medical students' perceptions of trust and the patient-physician relationship. The main limitation of this research is the sample selection: the results cannot be generalised due to the sample's non-probabilistic nature. Due to technical and organisational difficulties, a convenience sample was the only available option. It is also essential to consider that the research was conducted at the end of 2022, during the ongoing COVID-19 pandemic, which could have influenced the students' attitudes towards the healthcare system. Finally, international students filled out the questionnaire in English (not their first language), which could lead to misinterpretation or misunderstanding of specific questions.
This study provides insight into the attitudes of medical students from Croatia and Slovakia, as well as international students, regarding the role of artificial intelligence (AI) in the future healthcare system, with a particular emphasis on the concept of trust. The insights from our study represent a valuable contribution to the ongoing debate on the possibility of trust in medical AI from the perspective of future physicians. Students agree that physicians and patients must trust each other; however, they also believe that implementing digital technologies will negatively impact the patient-physician relationship. A notable difference was observed between the three groups of students, with international students differing from their Croatian and Slovak colleagues. Croatian and Slovak students are more inclined to believe that patients will trust them less once AI is implemented, and they also express certain paternalistic views. Additionally, Croatian and Slovak students exhibit higher confidence in their own abilities (accuracy of diagnosis, ability to explain how AI functions) than international students. This study also highlights the importance of integrating AI topics into the medical curriculum, taking into account national specificities that could negatively impact AI implementation if not carefully addressed. Increasing explainability and trust through education about AI will contribute to better acceptance in the future, as well as to a stronger relationship between patients and physicians.
The dataset generated by the survey research is available at the link: https://osf.io/2pyv9/files/osfstorage/6606a02b58fa490843e4f06b.
Amisha F, Malik P, Pathania M, Rathaur VK. Overview of artificial intelligence in medicine. J Family Med Prim Care. 2019;8(7):2328.
Reyes M, Meier R, Pereira S, et al. On the Interpretability of Artificial Intelligence in Radiology: challenges and opportunities. Radiol Artif Intell. 2020;2(3):e190043.
Mehrizi MHR, Van Ooijen PMA, Homan M. Applications of artificial intelligence (AI) in diagnostic radiology: a technography study. Eur Radiol. 2020;31(4):1805–11.
Dlamini Z, Francies FZ, Hull R, Marima R. Artificial intelligence (AI) and big data in cancer and precision oncology. Comput Struct Biotechnol J. 2020;18:2300–11.
Kalani M, Anjankar A. Revolutionizing neurology: the role of artificial intelligence in advancing diagnosis and treatment. Cureus. 2024.
Bajaj T, Koyner JL. Artificial intelligence in acute kidney injury prediction. Adv Chronic Kidney Dis. 2022;29(5):450–60.
Gong B, Nugent J, Guest W, et al. Influence of artificial intelligence on Canadian medical students’ preference for radiology specialty: a National Survey study. Acad Radiol. 2019;26(4):566–77.
Capparos Galán G, Portero FS. Medical students’ perceptions of the impact of artificial intelligence in radiology. Radiología. 2022;64(6):516–24.
Bin Dahmash A, Alabdulkareem M, Alfutais A, Kamel AM, Alkholaiwi F, Alshehri S et al. Artificial intelligence in radiology: does it impact medical students preference for radiology as their future career? BJR|Open. 2020;2(1):20200037.
Mehta N, Harish V, Bilimoria K et al. Knowledge and attitudes on Artificial intelligence in Healthcare: a provincial survey study of medical students. MedEdPublish. 2021;10(1).
Al Hadithy ZA, Al Lawati A, Al-Zadjali R et al. Knowledge, attitudes, and perceptions of Artificial Intelligence in Healthcare among Medical students at Sultan Qaboos University. Cureus. 2023;15(9).
Teng M, Singla R, Yau O, Lamoureux D, Gupta A, Hu Z, et al. Health Care Students’ perspectives on Artificial Intelligence: Countrywide Survey in Canada. JMIR Med Educ. 2022;8(1):e33390.
Abid S, Awan B, Ismail T, Sarwar N, Sarwar G, Tariq M. Artificial Intelligence: medical students attitude in District Peshawar Pakistan. Pakistan J Public Health. 2019;9(1):19–21.
Bisdas S, Topriceanu C, Zakrzewska Z et al. Artificial Intelligence in Medicine: a multinational Multi-center survey on the medical and dental students’ perception. Front Public Health. 2021;9.
Jebreen K, Radwan E, Kammoun-Rebai W, Alattar E, Radwan A, Safi W et al. Perceptions of undergraduate medical students on artificial intelligence in medicine: mixed-methods survey study from Palestine. BMC Med Educ. 2024;24(1).
Karaca O, Çalişkan S, Demir K. Medical artificial intelligence readiness scale for medical students (MAIRS-MS) – development, validity and reliability study. BMC Med Educ. 2021;21(1).
Park SH, Hyun K, Kim S, Park JH, Lim YS. What should medical students know about artificial intelligence in medicine? J Educational Evaluation Health Professions. 2019;16:18.
Katznelson G, Gerke S. The need for health AI ethics in medical school education. Adv Health Sci Educ. 2021;26(4):1447–58.
Bisdas S, Topriceanu CC, Zakrzewska Z, Irimia AV, Shakallis L, Subhash J et al. Artificial Intelligence in Medicine: a multinational Multi-center survey on the medical and dental students’ perception. Front Public Health. 2021;9.
Tung AYZ, Dong LW. Malaysian medical students’ attitudes and readiness toward AI (Artificial Intelligence): a cross-sectional study. J Med Educ Curric Dev. 2023;10.
Rajpurkar P, Chen E, Banerjee O, Topol EJ. AI in health and medicine. Nat Med. 2022;28(1):31–8.
Boillat T, Nawaz FA, Rivas H. Readiness to Embrace Artificial intelligence among medical doctors and students: questionnaire-based study. JMIR Med Educ. 2022;8(2):e34973.
Ongena Y, Haan M, Yakar D, Kwee TC. Patients’ views on the implementation of artificial intelligence in radiology: development and validation of a standardized questionnaire. Eur Radiol. 2019;30(2):1033–40.
Quinn TP, Senadeera M, Jacobs S, Coghlan S, Le V. Trust and medical AI: the challenges we face and the expertise needed to overcome them. J Am Med Inform Assoc. 2020;28(4):890–4.
Gerber BS, Eiser AR. The patient-physician relationship in the internet age: future prospects and the research agenda. J Med Internet Res. 2001;3(2):e15.
Agarwal AK, Murinson BB. New dimensions in patient–physician interaction: values, autonomy, and medical information in the patient-centered clinical encounter. Rambam Maimonides Med J. 2012;3(3):e0017.
Chandra S, Mohammadnezhad M, Ward P. Trust and Communication in a doctor- patient relationship: a literature review. J Healthc Commun. 2018;03(03).
Cado V. Trust as a factor for higher performance in healthcare: COVID 19, digitalization, and positive patient experiences. IJQHC Commun. 2022;2(2).
Gerdes A. The role of explainability in AI-supported medical decision-making. Discover Artif Intell. 2024;4(1).
De Fine Licht K, Brülde B. On defining Reliance and Trust: purposes, conditions of adequacy, and new definitions. Philosophia. 2021;49(5):1981–2001.
Hancock PA, Kessler TT, Kaplan AD, Stowers K, Brill JC, Billings DR et al. How and why humans trust: a meta-analysis and elaborated model. Front Psychol. 2023;14.
Hatherley J. Limits of trust in medical AI. J Med Ethics. 2020;46(7):478–81.
Kerasidou C, Kerasidou A, Büscher M, Wilkinson S. Before and beyond trust: reliance in medical AI. J Med Ethics. 2021;48(11):852–6.
Ferrario A, Loi M, Viganò E. Trust does not need to be human: it is possible to trust medical AI. J Med Ethics. 2020;47(6):437–8.
Čartolovni A, Malešević A, Poslon L. Critical analysis of the AI impact on the patient–physician relationship: a multi-stakeholder qualitative study. Digit Health. 2023;9.
Coppola F, Faggioni L, Regge D, et al. Artificial intelligence: radiologists’ expectations and opinions gleaned from a nationwide online survey. Radiol Med. 2020;126(1):63–71.
Van Der Hoek J, Huber AT, Leichtle AB, et al. A survey on the future of radiology among radiologists, medical students and surgeons: students and surgeons tend to be more skeptical about artificial intelligence and radiologists may fear that other disciplines take over. Eur J Radiol. 2019;121:108742.
Abdullah R, Fakieh B. Health care employees’ perceptions of the use of artificial intelligence applications: survey study. J Med Internet Res. 2020;22(5):e17620.
Blease C, Bernstein MH, Gaab J, et al. Computerization and the future of primary care: a survey of general practitioners in the UK. PLoS ONE. 2018;13(12):e0207418.
Sit C, Srinivasan R, Amlani A et al. Attitudes and perceptions of UK medical students towards artificial intelligence and radiology: a multicentre survey. Insights into Imaging. 2020;11(1).
Oh S, Kim JH, Choi SK, Lee HJ, Hong J, Kwon SH. Physician confidence in Artificial Intelligence: an online mobile survey. J Med Internet Res. 2019;21(3):e12422.
York E, Conley SN. Creative anticipatory ethical reasoning with scenario analysis and design fiction. Sci Eng Ethics. 2020;26(6):2985–3016.
Rimac I, Bovan K, Ogresta J. Nacionalno izvješće istraživanja EUROSTUDENT VI za Hrvatsku. Ministarstvo znanosti i obrazovanja; 2019.
Dragun R, Veček NN, Marendić M, Pribisalić A, Đivić G, Cena H, et al. Have Lifestyle habits and Psychological Well-being changed among adolescents and medical students due to COVID-19 Lockdown in Croatia? Nutrients. 2020;13(1):97.
Đogaš V, Jerončić A, Marušić M, Marušić A. Who would students ask for help in academic cheating? Cross-sectional study of medical students in Croatia. BMC Med Educ. 2014;14(1).
Sovicova M, Zibolenova J, Svihrova V, Hudeckova H. Odds ratio estimation of Medical Students’ attitudes towards COVID-19 vaccination. Int J Environ Res Public Health. 2021;18(13):6815.
Faixová D, Jurinová Z, Faixová Z, Kyselovič J, Gažová A. Dietary changes during the examination period in medical students. EAS J Pharm Pharmacol. 2023;5(03):78–86.
McLennan S, Meyer A, Schreyer K, Buyx A. German medical students´ views regarding artificial intelligence in medicine: a cross-sectional survey. PLOS Digit Health. 2022;1(10):e0000114.
Gillissen A, Kochanek T, Zupanic M, Ehlers JP. Medical students’ perceptions towards digitization and Artificial Intelligence: a mixed-methods study. Healthcare. 2022;10(4):723.
Moldt JA, Loda T, Mamlouk AM, Nieselt K, Fuhl W, Herrmann-Werner A. Chatbots for future docs: exploring medical students’ attitudes and knowledge towards artificial intelligence and medical chatbots. Med Educ Online. 2023;28(1).
Syed W, Basil A, Al-Rawi M. Assessment of awareness, perceptions, and opinions towards Artificial Intelligence among Healthcare students in Riyadh, Saudi Arabia. Medicina. 2023;59(5):828.
Komasawa N, Nakano T, Terasaki F, Kawata R. Attitude survey toward artificial intelligence in medicine among Japanese medical students. Bull Osaka Med Pharm Univ. 2021;67(1–2):9–16.
Jha N, Shankar PR, Al-Betar MA, Mukhia R, Hada K, Palaian S. Undergraduate medical students’ and interns’ knowledge and perception of artificial intelligence in medicine. Adv Med Educ Pract. 2022;13:927–37.
Swed S, Alibrahim H, Elkalagi NKH et al. Knowledge, attitude, and practice of artificial intelligence among doctors and medical students in Syria: a cross-sectional online survey. Front Artif Intell. 2022;5.
Doumat G, Daher D, Ghanem NN, Khater B. Knowledge and attitudes of medical students in Lebanon toward artificial intelligence: a national survey study. Front Artif Intell. 2022;5.
Buabbas AJ, Miskin B, Alnaqi A, et al. Investigating students’ perceptions towards Artificial Intelligence in Medical Education. Healthcare. 2023;11(9):1298.
Kansal R, Bawa A, Bansal A et al. Differences in knowledge and perspectives on the usage of artificial intelligence among doctors and medical students of a developing country: a cross-sectional study. Cureus. Published online January 19, 2022.
AlZaabi A, AlMaskari S, AalAbdulsalam A. Are physicians and medical students ready for artificial intelligence applications in healthcare? Digit Health. 2023;9:205520762311521.
Pupic N, Ghaffari-Zadeh A, Hu R, et al. An evidence-based approach to artificial intelligence education for medical students: a systematic review. PLOS Digit Health. 2023;2(11):e0000255.
Baloban J, Črpić G, Ježovita J. Vrednote u Hrvatskoj od 1999. do 2018. prema European Values Study. Kršćanska sadašnjost; 2019.
Popović S. Determinants of citizen’s attitudes and satisfaction with the Croatian health care system. Medicina. 2017;53(1):85–100.
STADA Health Report 2024: satisfaction with healthcare systems continues to decline. 2024.
Price D, Bonsaksen T, Leung J, McClure-Thomas C, Ruffolo M, Lamph G et al. Factors Associated with Trust in Public Authorities among adults in Norway, United Kingdom, United States, and Australia two years after the COVID-19 outbreak. Int J Public Health. 2023;68.
Skirbekk H, Magelssen M, Conradsen S. Trust in healthcare before and during the COVID-19 pandemic. BMC Public Health. 2023;23(1).
Baroudi M, Goicolea I, Hurtig AK, San-Sebastian M. Social factors associated with trust in the health system in northern Sweden: a cross-sectional study. BMC Public Health. 2022;22(1).
Chin JJ. Doctor-patient relationship: from medical paternalism to enhanced autonomy. Singapore Med J. 2002;43(3):152–5.
Hancock PA, Billings DR, Schaefer KE, Chen JYC, De Visser EJ, Parasuraman R. A Meta-analysis of factors affecting Trust in Human-Robot Interaction. Hum Factors. 2011;53(5):517–27.
Čartolovni A, Tomičić A, Mosler EL. Ethical, legal, and social considerations of AI-based medical decision-support tools: a scoping review. Int J Med Informatics. 2022;161:104738.
Vyshka G, Kruja J. Inapplicability of advance directives in a paternalistic setting: the case of a post-communist health system. BMC Med Ethics. 2011;12(1).
Murgic L, Hébert PC, Sovic S, Pavlekovic G. Paternalism and autonomy: views of patients and providers in a transitional (post-communist) country. BMC Med Ethics. 2015;16(1).
Vučemilo L, Ćurković M, Milošević M, Mustajbegović J, Borovečki A. Are physician-patient communication practices slowly changing in Croatia? – a cross-sectional questionnaire study. Croatian Med J. 2013;54(2):185–91.
Nemcekova M, Ziakova K, Mistuna D, Kudlicka J. Respecting patients’ rights. Bull Med Ethics. 1998;140:13–8.
Hammarberg T. Report by the Commissioner for Human Rights of the Council of Europe. 2011. https://rm.coe.int/16806db7c5
Advisory Committee on the Framework Convention for the Protection of National Minorities. Fifth opinion on the Slovak Republic. 2022.
McCoy LG, Nagaraj S, Morgado F, Harish V, Das S, Celi LA. What do medical students actually need to know about artificial intelligence? Npj Digit Med. 2020;3(1).
This work was supported by the Hrvatska zaklada za znanost (Croatian Science Foundation (CSF)) [grant number UIP-2019-04-3212] “(New) Ethical and Social Challenges of Digital Technologies in the Healthcare Domain”. The funder had no role in the design of this study and its execution, analyses, interpretation of the data, or decision to submit results.
Authors and affiliations.
Digital Healthcare Ethics Laboratory (Digit-HeaL), Catholic University of Croatia, Zagreb, Croatia
Anamaria Malešević & Anto Čartolovni
Institute of Social Medicine and Medical Ethics, School of Medicine, Comenius University in Bratislava, Bratislava, Slovakia
Mária Kolesárová
School of Medicine, Catholic University of Croatia, Zagreb, Croatia
Anto Čartolovni
AČ and AM planned the study. MK assisted in the research implementation process. AM analysed the data, with contributions from MK and AČ. All authors contributed to the data interpretation and writing of the manuscript. All authors read and approved the final manuscript.
Correspondence to Anamaria Malešević.
Ethics approval and consent to participate.
This study was approved by the Catholic University of Croatia’s Ethics Committee on 21 January 2022 (Classification number: 641-03/21 − 03/03; registration number: 498 − 16/2-22-06). Participation in the research was anonymous and voluntary. Before completing the survey, participants were informed about the research objectives, data processing, and storage procedures and signed an informed consent form.
Not applicable.
The authors declare no competing interests.
Publisher’s note.
Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
Supplementary Material 2.
Rights and permissions.
Open Access This article is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License, which permits any non-commercial use, sharing, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if you modified the licensed material. You do not have permission under this licence to share adapted material derived from this article or parts of it. The images or other third party material in this article are included in the article’s Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article’s Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by-nc-nd/4.0/ .
Cite this article.
Malešević, A., Kolesárová, M. & Čartolovni, A. Encompassing trust in medical AI from the perspective of medical students: a quantitative comparative study. BMC Med Ethics 25, 94 (2024). https://doi.org/10.1186/s12910-024-01092-2
Received: 03 May 2024
Accepted: 23 August 2024
Published: 02 September 2024
DOI: https://doi.org/10.1186/s12910-024-01092-2
ISSN: 1472-6939