Online: Use Zoom for synchronous discussions, Canvas for text-based role playing.
Using active learning techniques in your teaching requires only a willingness to try something new in the classroom, gather feedback, and plan an activity that furthers your course learning goals. With any use of active learning, it is important that the activity be more than “busy work” or a “break from lecture.” Rather, the approach should be intentionally selected to allow students to practice a key idea or skill with peer or instructor feedback (Messineo, 2017).
Some instructors report that they need specially designed classrooms to teach using active learning strategies. For the low- and moderate-complexity strategies listed above, a purpose-built facility is not needed. For higher-complexity strategies, an intentionally designed space facilitates the process, but there is mixed evidence on its necessity for improved student satisfaction and learning outcomes. Low-tech elements of active learning classrooms, such as multiple whiteboards and flexible seating that allows for collaboration, appear to be the most critical (Soneral & Wyse, 2017).
Another commonly cited barrier to active learning is student resistance. Student reactions to any new teaching methods are not uniform, and reactions may even vary over the term, moving, for example, from concerns about grades to peers’ involvement in activities (Ellis, 2015). Faculty’s use of specific explanation and facilitation strategies has been found to be positively associated with student participation in and feedback about active learning (Tharayil et al., 2018). Helpful strategies to mitigate resistance include (DeMonbrun et al., 2017; Wiggins et al., 2017):
Sometimes, a few vocal students may give the impression that there is more discontent than there actually is, so collecting student feedback (such as with an exit ticket) can give a more accurate picture of the range of student experience.
Instructional approaches that promote student interaction are most likely to enhance student learning in a diverse classroom (Gurin, 2000), and active learning can be a powerful way to promote that exchange. However, whether due to factors such as student-to-student climate issues or uneven participation, good ideas for active learning do not always translate into inclusive learning. Key strategies for making active learning more inclusive include:
For teams and pairs that will be meeting over time, construct the group intentionally. One strategy is to ask students to respond to questions in a 3-2-1 format to help compose groups: (1) What are three characteristics of successful groups for you? (2) What are two strengths that you would bring to the group? (3) Who is one student in the class with whom you would or would not like to work? (adapted from Reid & Garson, 2017). Some instructors also find CATME to be a helpful tool for intentional group assignment. Although there may be times when same- or cross-identity teams are beneficial (Freeman, Theobald, Crowe, & Wenderoth, 2017), it is clear that isolating women or members of underrepresented groups on a team tends to negatively affect their performance and should therefore be avoided (Meadows & Sekaquaptewa, 2013).
Professor of biology and engineering Sharon Swartz uses a survey to build teams, asking students to select topics of interest (and not of interest) and provide some background information about themselves, such as academic area. She then uses the surveys to compose the group and finds that "shy students and those who didn't know many class members no longer felt anxious about finding a group to work with, and with 'leaders' distributed among the groups, the group projects improved hugely."
Check in with the group periodically. Scheduled check-ins with group members allow faculty to make adjustments when needed and also provide some accountability for group members. Sharon Swartz also distributes evaluation sheets that allow each student to assess contributions made by each member of the team in terms of (1) intellectual involvement in planning/research, (2) effort toward achieving group goals, (3) cooperation and support of others, and (4) their own contribution. She finds that “students are reassured by knowing that they will have a chance to talk about any challenges that arose in their groups.” Swartz adds, “Typically, knowing that they will be telling me about their experiences with each other ensures that everyone pulls their weight!”
Assign clear roles and expectations. Some research indicates that, especially in STEM contexts, men tend to answer more questions in group presentations, take more technical roles, and underestimate their female classmates’ performance (Grunspan et al., 2016; Meadows & Sekaquaptewa, 2013). However, one study promisingly suggests that showing students examples of balanced group work in advance (e.g., a video of a presentation or a sample paper) can mitigate these tendencies (Meadows et al., 2015). Faculty may also wish to assign roles and deliberately rotate them. Defining clear expectations (both verbally and in writing) for classroom participation and group work can also help to include learners who have previously been educated in cultural contexts where active learning techniques may not be as common.
If you would like to discuss active learning strategies for your own classroom, please contact the Sheridan Center for Teaching and Learning for a consultation: [email protected] .
This resource was authored by Dr. Mary Wright, Associate Provost for Teaching and Learning, Executive Director of Sheridan Center for Teaching and Learning, and Professor (Research) in Sociology, with input from Sheridan Center colleagues.
Angelo, T. A., & Cross, K. P. (1993). Classroom assessment techniques: A handbook for college teachers . San Francisco: Jossey-Bass Publishers.
Barkley, E.F. (2010). Student engagement techniques: A handbook for college faculty . San Francisco: Jossey-Bass.
Bonwell C. C. & Eison, J.A. (1991). Active learning: Creating excitement in the classroom . ASHE-ERIC Higher Education Report No. 1. Washington, DC: The George Washington University, School of Education and Human Development.
Campbell, C.M. (2023). Great college teaching: Where it happens and how to foster it everywhere . Cambridge, MA: Harvard Education Press.
Campbell, C.M., Cabrera, A.F., Michel, J.O., & Patel, S. (2017). From comprehensive to singular: A latent class analysis of college teaching practices. Research in Higher Education, 58 : 581-604.
Connell, G.L., Donovan, D.A., & Chambers, T.G. (2016). Increasing the use of student-centered pedagogies from moderate to high improves student learning and attitudes about biology. CBE - Life Sciences Education, 15 : 1-15.
Davidson, C.N. (2017). The new education: How to revolutionize the university to prepare students for a world in flux . New York: Basic Books.
DeMonbrun, M., Finelli., C.J., Prince, M., Borrego, M., Shekhar, P., Henderson, C., & Waters, C. (2017). Creating an instrument to measure student response to instructional practices. Journal of Engineering Education, 106 (2): 273-298.
Eddy, S.L., & Hogan, K.A. (2014). Getting under the hood: How and for whom does increasing course structure work. CBE Life Sciences Education, 13 :453-468.
Ellis, D.E. (2015). What discourages students from engaging with innovative instructional methods: Creating a barrier framework. Innovative Higher Education, 40 : 111-125.
Freeman, S., Eddy, S.L., McDonough, M., Smith, M.K., Okoroafor, N., Jordt, H., & Wenderoth, M.P. (2014). Active learning increases student performance in science, engineering, and mathematics. Proceedings of the National Academy of Sciences , 111: 8410–8415.
Freeman, S., O’Connor, E., Parks, J.W., Cunningham, M., Hurley, D., Haak, D., Dirks, C., & Wenderoth, M.P. (2007). Prescribed active learning increases performance in introductory biology. CBE-Life Sciences Education, 6 : 132-139.
Freeman, S., Theobald, R., Crowe, A.J., Wenderoth, M.P. (2017). Likes attract: Students self-sort in a classroom by gender, demography, and academic characteristics. Active Learning in Higher Education , 1-12.
Grunspan, D.Z., Eddy, S.L., Brownell, S.E., Wiggins, B.L., Crowe, A.J., & Goodreau, S.M. (2016). Males underestimate academic performance of their female peers in undergraduate biology classrooms. PLOS One . Available: http://journals.plos.org/plosone/article?id=10.1371/journal.pone.0148405
Gurin, P. (2000). Expert Report in the Matter of Gratz et al. v. Bollinger et al . No. 97-75321(E.D. Mich.) and No. 97-75928 (E.D. Mich.). Available: http://diversity.umich.edu/admissions/legal/expert/gurintoc.html
Hake, R.R. (1998). Interactive-engagement versus traditional methods: A six-thousand-student survey of mechanics test data for introductory physics courses. American Journal of Physics, 66 : 64-74.
Johnson, D., Johnson, R.T., & Smith, K.A. (2014). Cooperative learning methods: A meta-analysis. Available: https://www.researchgate.net/profile/David_Johnson50/publication/2200403...
Lee, V. (2007). Sequence activity. Workshop on inquiry-based learning.
Major, C.H., Harris, M.S., & Zakrajsek, T. (2016). Teaching for learning: 101 intentionally designed educational activities to put students on the path to success . New York: Routledge.
Meadows, L., & Sekaquaptewa, D. (2013). The influence of gender stereotypes on role adoption in student teams . ASEE Annual Conference and Exposition, Atlanta, GA. Paper #: 6744.
Messineo, M. (2017). Using the science of learning to improve student learning in sociology classes. Teaching Sociology , 46(1): 1-11.
Michaelson L, Bauman-Knight B, Fink D (2003). Team-based learning: A transformative use of small groups in college teaching . Sterling, VA: Stylus.
National Academies of Sciences, Engineering, and Medicine. (2017). Indicators for monitoring undergraduate STEM education . Available: http://nap.edu/24943
Prince, M. (2004). Does active learning work? A review of the research. Journal of Engineering Education, 93 (3): 223-231.
Reid, R., & Garson, K. (2017). Rethinking multicultural group work as intercultural learning. Journal of Studies in International Education, 21 (3): 195-212.
Soneral, P.A.G., & Wyse, S.A. (2017). A SCALE-UP mock-up: Comparison of student learning gains in high- and low-tech active-learning environments. CBE Life Sciences Education, 16(1): 1-15.
Tharayil, S., Borrego, M., Prince, M., Nguyen, K.A., Shekhar, P., Finelli, C.J., & Waters, C. (2018). Strategies to mitigate student resistance to active learning. International Journal of STEM Education, 5(7). Available: https://stemeducationjournal.springeropen.com/articles/10.1186/s40594-01...
Wiggins, B.L., Eddy, S.L., Wener-Fligner, L., Freisem, K., Grunspan, D.Z., Theobald, E.J., Timbrook, J., & Crowe, A.J. (2017). ASPECT: A survey to assess student perspective of engagement in an active-learning classroom. CBE Life Sciences Education, 16 (2).
Wittwer, J., & Renkl, A. (2008). Why instructional explanations often do not work: A framework for understanding the effectiveness of instructional explanations. Educational Psychologist, 43 (1): 49-64.
This newsletter was originally published in March 2018 and revised in September 2020 and July 2023.
The retention of fundamental mathematical skills is imperative to provide a foundation on which new skills are developed, yet educators often lament students’ poor retention. Cognitive scientists and educators have explored teaching methods that produce learning which endures over time. We wanted to know if using spaced recall quizzes would prevent our students from forgetting fundamental mathematical concepts at a post-high-school preparatory school where students attend for 1 year in preparation for entering the United States Military Academy (USMA). This approach was implemented in a Precalculus course to determine if it would improve students’ long-term retention. Our goal was to identify an effective classroom strategy that led to student recall of fundamental mathematical concepts through the end of the academic year. The concepts considered for long-term retention were 12 concepts identified by USMA’s mathematics department as being fundamental for entering students. These concepts are taught during quarter one of the Precalculus with Introduction to Calculus course at the United States Military Academy Preparatory School, and students are expected to remember them when they take the post-test 6 months later. Our research shows that spaced recall in the form of quizzing had a statistically significant impact on reducing the forgetting of the fundamental concepts while not adversely affecting performance on current instructional concepts. Additionally, these results persisted across multiple sections of the course taught at different times of the day by six instructors with varying teaching styles and years of teaching experience.
It has long been established that memory declines over time (Ebbinghaus, 1964). Although this is a normal human condition, it is problematic in the field of education, particularly in disciplines where coursework requires students to possess knowledge accumulated from previous classes. When foundational concepts are forgotten, new learning can be stunted (Kamuche & Ledman, 2005; Taylor et al., 2017). This is particularly true in mathematics and in other courses of study that require a strong mathematical background (Pearson Jr & Miller, 2012). For this reason, we were interested in finding strategies that could be implemented in a mathematics course to help students retain and recall fundamental mathematical concepts. We were most interested in strategies that could be easily implemented and would not require a major overhaul of the current syllabus.
Based on the work of cognitive psychologists, Lang (2016) recommends several classroom strategies for improving instruction and learning that can be easily implemented within existing course structures. One such strategy is spending a small portion of each class asking students questions on material from any previous lesson. He considers this to be low-level interleaving and claims that asking students to regularly recall previous content, along with spaced learning, improves long-term retention.
The role that spaced learning plays in the durability of memory is also well known (Ebbinghaus, 1964). Since Ebbinghaus’ work in 1885, studies have continued to investigate the role of spaced learning in retention. It has been found that retrieval attempts are more beneficial when repeated in spaced-out sessions rather than massed sessions (Cepeda et al., 2006). The benefits of quizzing as a method of asking students to regularly recall previous course content have also been studied. A meta-analysis by Rowland (2014) and a review of several laboratory and educational studies by Roediger III and Karpicke (2006a) both conclude that the benefits of quizzing outweigh those of other study activities such as homework and re-reading. This phenomenon has been named the “testing effect.” There is a connection between effortful recall and memory: memory is made more durable when effort is needed to recall information. Re-reading material with the intent to increase retrieval takes little cognitive effort and is therefore less effective than retrieval activities such as quizzing (Brown et al., 2014).
However, to achieve long-term retention through quizzing, it is important to space the quizzing effectively. Karpicke and Roediger III (2007) studied the effects of expanding versus equally spaced retrieval intervals and showed that what matters most is not whether the intervals are equally spaced or expanding but that the initial retrieval attempt is effortful. If, for example, a quiz on a previous concept is administered too soon after the concept was learned, effortful retrieval, the key to retention, is diminished, making the exercise less effective. Benjamin and Tullis (2010) identified that the benefit of spaced retrieval is maximized at the “sweet spot” where students must use effort to recall, but not so much effort that they cannot remember. Cepeda et al. (2008) developed a model called the “retention surface,” which plots student performance as a function of the study gap; the model can be used to determine the optimal spacing interval between retrieval attempts for a desired retention interval.
Although studies that compared the effects on learning of open response (recall) versus multiple choice (recognition) quizzes found mixed results (Karpicke, 2017), we decided to use open response for our research. McDaniel et al. (2007) found that both multiple choice and open response yielded positive effects on retrieval over simply re-reading material; however, it was the open response questions that produced the greater positive effect. No matter the mode, Roediger III et al. (2011) claim that testing creates new retrieval routes, which increases the effort of retrieval, and this increased effort establishes the conditions for long-term retention. The term “desired difficulty,” first identified by psychologists Elizabeth and Robert Bjork (Brown et al., 2014), describes this key factor in establishing deeper connections so that learning is more durable over time.
Some instructors feel that quizzing students too often or quizzing students when they do not have a solid grasp on the material may reinforce misunderstandings. Kornell et al. ( 2009 ) studied this concern and found that unsuccessful attempts in testing that were followed up by feedback produced a significant improvement in follow-on tests. They also found that the effort required to recall material for a test (even if the answer is wrong and feedback is given) produces deep processing, corrects the brain’s retrieval route for that information, and serves as a cue for recall in the future. Additionally, Benjamin and Tullis ( 2010 ) and Karpicke and Roediger III ( 2007 ) concluded in their research that providing feedback on the retrieval attempts helps to mitigate the encoding opportunity that is lost from an unsuccessful retrieval attempt.
The research mentioned here supports the potential for quizzing to improve students’ long-term retention. However, with the large number of factors that can influence education in classroom settings, it is not surprising that efforts to understand how quizzing influences long-term retention have been conducted in laboratory settings. In fact, of the research cited so far, the majority were conducted in laboratory settings, with only two studies being conducted in a classroom (McDaniel et al., 2007 ; Roediger III et al., 2011 ). This does not include the four reviews of existing literature (Benjamin & Tullis, 2010 ; Cepeda et al., 2006 ; Roediger III & Karpicke, 2006a ; Rowland, 2014 ).
While the studies mentioned so far show the potential of spaced quizzing to positively impact student retention, what is needed is promising results with educationally relevant tasks in actual classroom environments. Some studies made a step in that direction by using educationally relevant tasks in a lab setting. The Roediger III and Karpicke ( 2006b ) lab study asked students to read passages on scientific topics and then students restudied the passage or took a recall test. They found that testing, and not studying, improved retention. Additionally, Arnold and McDermott ( 2013 ) used English and Russian word associations in their lab study and found that more recall attempts increased recall ability. Finally, Rohrer and Taylor ( 2007 ) also noted benefits to long-term retention that come from spaced learning in the form of frequent quizzing. Their study used mathematical problems with college students, but again, this work was performed in a lab setting and not as a component of a regular course routine.
Recent years have seen an increase in classroom research on improving retention. Karpicke (2017) summarized 10 years of research on retention, including studies that considered the effects of quizzing on retention in educational classrooms. That summary reports 14 studies conducted between 2009 and 2016 addressing the benefits of quizzing in classrooms, all of which produced positive results. Of those studies, nine were conducted in college-level courses and five in middle schools, but none in high schools. Only two of the studies involved mathematical content, both of which were conducted in college courses (Hopkins et al., 2016; Lyle & Crawford, 2011). Yang et al. (2021) conducted a meta-analytic review of quizzing’s effect on classroom learning, examining 222 studies comprising 573 effects to more fully understand classroom moderators. They reviewed classroom research conducted in or since the year 2000. Their data are not organized in a manner that links course level to subject matter, but most of the studies they reviewed were conducted in middle school and university or college courses; only 8% of the effects were obtained from high school studies, and only 4.9% from studies involving mathematical content. They provide an extensive review of 19 moderating variables and implementation considerations, but two potential moderating variables they did not report on were class hour and teacher experience.
Of the studies conducted with mathematical content in classroom settings, the study conducted by Hopkins et al. ( 2016 ) is especially notable. They evaluated massed versus spaced retrieval practice of mathematical concepts in a college introductory calculus course for engineers. They showed that spaced retrieval led to retention of concepts that persisted into the following semester. Although the calculus course did not use the traditional classroom format of lectures, their findings indicate a similar strategy used in a Precalculus course may be equally effective at improving students’ long-term retention of fundamental mathematical concepts.
To this end, the objective of this research was to explore the effectiveness of using spaced recall quizzes in a classroom setting to reduce the forgetting of fundamental mathematical concepts that usually occurs when students are not asked to revisit those concepts. Many studies on retention take place in a lab; we instead set out to investigate the use of weekly quizzes in an in-person classroom, specifically a high-school-level precalculus course composed of post-high-school students in a preparatory school environment. In addition, we sought to determine if the results persisted across multiple sections of the course taught at different times of the day by six instructors with varying teaching styles and years of teaching experience. The literature we reviewed did not contain classroom studies with the range of teaching experience and number of instructors that we had; our results therefore add to the body of knowledge in this area, showing that spaced recall quizzes can reduce forgetting regardless of the teaching experience of the educator. Using the study of Hopkins et al. (2016) as a model, we anticipated that intentionally spaced recall, in the form of quizzing, would improve students’ long-term retention of fundamental mathematical concepts in a traditional classroom setting. For this study, long-term is defined as 6 months: the time between the completion of quarter one instruction and the administration of the post-test exam.
The paper is organized as follows. The method we used is presented in Sect. 2 and includes a description of the student population and the sequence and timeline of key components. The results of the quizzes’ effect on long-term retention are presented in Sect. 3. In Sect. 4 we discuss the results in light of moderating factors (e.g., teacher experience, class hour). Practical implications of using spaced recall quizzing in the classroom are discussed in Sect. 5, and Sect. 6 concludes.
We conducted our study at the United States Military Academy Preparatory School (USMAPS) located at West Point, New York during the 2020–2021 academic year. When students are not yet qualified for direct admission to the United States Military Academy (USMA) but show great potential, they are given the opportunity to attend USMAPS. Students admitted to USMAPS are deficient in one or more of three areas: academics, physical fitness, or leadership. The purpose of the school is to develop students in those areas to meet the rigorous admission standards of USMA. For most students, the prevalent deficiency is in academics. Students who attend USMAPS live in rooms on campus. On average, the 240 students who enter the 1-year program are demographically and geographically diverse (within the United States and its territories). In 2020–2021, 235 students attended USMAPS, approximately 25% of whom were prior-service soldiers, with most of the others having graduated high school the previous year. Approximately 48% were African American, 4% were Hispanic, 2% were Asian, and 1% identified as “Other Minorities.”
At USMAPS, there are three levels of mathematics courses: Calculus, Precalculus with an Introduction to Calculus (PIC), and a Precalculus course that has an emphasis on Algebra/Trigonometry (PCAT). When students arrive at USMAPS, they take a pretest and fill out a survey to identify their previous mathematical exposure and competency. The pretest, survey, and student Scholastic Assessment Test (SAT) and/or American College Testing (ACT) scores are used to place students in the Calculus, PIC, or PCAT course.
Our study began with \(N=157\) students enrolled in the PIC course. Six instructors taught this course. Five of the instructors taught three sections each. As our hypothesis was that the spaced recall quizzes would reduce forgetting, we wanted to rule out the potential that the reduction in forgetting was a result of the instructor. We designed our study to have the instructors’ sections randomly assigned as follows: one section assigned as treatment, one as control, and one section as mixed. For the sixth instructor who only taught two sections, one section was randomly assigned as treatment and the other as control. In the mixed sections, students were randomly assigned to either the treatment or control group. Enrollment numbers for each section were equally distributed (plus or minus one student). This placed \(N=76\) students in the control group and \(N=81\) students in the treatment group. The study received approval (20-115-1) from the Institutional Review Board (IRB) at USMA and was deemed exempt because it was conducted in an established educational setting and involved normal educational practices. Instructors were not informed which sections were treatment, control, or mixed or which students belonged to each group. Two of the instructors were authors and had access to which students and sections were treatment or control; however, they did not examine that information before the end of the experiment. This was to ensure that there was no unintentional bias on the part of the study leads. The potential for study lead bias was investigated during data analysis (see Sect. 4.4 ).
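The randomization described above — three-section instructors receiving one treatment, one control, and one mixed section, the two-section instructor receiving treatment and control, and mixed-section rosters split in half — can be sketched as follows. This is an illustrative reconstruction, not the authors' actual procedure; the instructor labels, section identifiers, and seeded random generator are assumptions.

```python
import random

def assign_sections(instructor_sections, seed=1):
    """Randomly assign each instructor's sections to a condition,
    mirroring the design above: three-section instructors get one
    treatment, one control, and one mixed section; a two-section
    instructor gets treatment and control only."""
    rng = random.Random(seed)  # seeded for reproducibility (an assumption)
    assignment = {}
    for sections in instructor_sections.values():
        conditions = (["treatment", "control", "mixed"]
                      if len(sections) == 3 else ["treatment", "control"])
        rng.shuffle(conditions)
        for section, condition in zip(sections, conditions):
            assignment[section] = condition
    return assignment

def split_mixed(students, seed=1):
    """Within a mixed section, randomly split the roster into
    treatment and control halves."""
    rng = random.Random(seed)
    roster = list(students)
    rng.shuffle(roster)
    half = len(roster) // 2
    return roster[:half], roster[half:]
```

A two-instructor example: `assign_sections({"A": ["A1", "A2", "A3"], "F": ["F1", "F2"]})` yields one section per condition for instructor A and a treatment/control pair for instructor F, with the specific mapping determined by the shuffle.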
Although we were able to randomly assign students, scheduling logistics prevented us from redistributing students to control for race, gender, standardized tests, and high school grade point average (GPA). Consequently, we had to maintain the assignment of students obtained by randomly assigning students to sections of treatment, control, or mixed, as previously explained. This was a departure from the approach employed by Hopkins et al. ( 2016 ), who balanced student assignment by also considering racial and gender composition, mean ACT score, and mean high school GPA. Nevertheless, the method we used for obtaining randomness is acceptable and has been used in other studies when additional measures were not available or could not be used to achieve balanced groups (Begolli et al., 2021 ; Jaeger et al., 2015 ; Liming & Cuevas, 2017 ; Shirvani, 2009 ; Sanchez et al., 2020 ).
During the academic year, some students who were originally placed in PIC demonstrated that their fundamental skills were not at the level initially expected based on ACT/SAT scores, previous course work, and pretest performance. In these cases, those students were moved to the PCAT course and thereby removed from the study. In addition, some students were separated from USMAPS during the academic year removing them from the study. There were also a few cases where students were removed due to incomplete data. Consequently, our initial study of \(N=157\) students reduced to \(N=128\) students. This left \(N=60\) students in the control group and \(N=68\) in the treatment group.
This study was conducted within 17 sections of an in-person class that met daily. Classroom (action) research adds elements that are more challenging to control than in a lab setting. These elements included student aptitude, instructor experience, and the hour of the day the class met; we address these in Sect. 4. Another challenge of doing research in the classroom (instead of a lab) was how to handle giving feedback and how to control when and if students accessed the feedback later. We did not give feedback immediately after the quizzes to prevent students from sharing answers between classes, as that would affect the exposure gap (Cepeda et al., 2008) and desired difficulty (Bjork & Bjork, 1992) we had designed into the treatment.
There were advantages to conducting the study in the classroom rather than in the lab. We were fortunate to belong to a mathematics department in which all instructors were willing and interested in this study, which allowed us to have a sample size of 128 students. We also had the benefit of studying the retention of mathematical concepts while drawing on insights on retention gained through lab studies using Swahili–Swedish word pairs (Bertilsson et al., 2017) or loosely connected word pairs (Kornell et al., 2009).
Our experiment consisted of the progression outlined in Fig. 1 . First, a pretest was administered to all students to establish a baseline comprehension of fundamental concepts. Then, the students were instructed in the fundamental concepts of mathematics in quarter one. Next, all students took weekly quizzes. The treatment group took a weekly quiz on the spaced fundamental concepts and the control students took a weekly quiz on the current topics being learned. Finally, all students were given a post-test to determine their retention of the fundamental concepts. Each of the components are discussed in detail in the following sections.
Progression of the spaced recall experiment. Students take different weekly quizzes based on whether they are in the control or treatment group
To assess long-term retention we considered 12 mathematical concepts identified by the United States Military Academy’s mathematics department as being fundamental for entering students. These topics are listed in Table 1 and are the fundamental concepts taught in quarter one of the PIC course. The expectation is that students will remember these concepts when they are assessed on the USMAPS post-test exam 6 months later.
The topics that we cycled in the treatment group come from a document published by the United States Military Academy entitled, “Required Mathematical Skills for Entering [Students].” From this document we chose 12 of the 40 skills based on the fact that those 12 skills are taught in the first 8 weeks of the academic year. This would allow us to examine the exposure gap of quizzing three of the topics every 28 days with a goal of reduced forgetting when these topics were tested again at the end of the year on a multiple choice post-test. If we chose topics taught after the first 8 weeks of school, we would run out of time in the school year to test this exposure gap that was designed based on the research of Cepeda et al. ( 2008 ). We tested three different topics in each retention quiz to give each of the twelve topics the same exposure.
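The rotation implied above — 12 topics, three per weekly quiz, so each topic recurs on every fourth quiz for a 28-day exposure gap — might be generated as in the sketch below. The grouping of topics into fixed triples and the placeholder skill names are assumptions; the paper does not specify how topics were grouped.

```python
def quiz_schedule(topics, per_quiz=3, num_quizzes=12):
    """Cycle through the topic list, placing `per_quiz` topics on each
    weekly quiz. With 12 topics and 3 per quiz, every topic reappears
    on every 4th weekly quiz, i.e., with a 28-day exposure gap."""
    if len(topics) % per_quiz != 0:
        raise ValueError("number of topics must be a multiple of per_quiz")
    # Partition topics into fixed groups (an assumed grouping).
    groups = [topics[i:i + per_quiz] for i in range(0, len(topics), per_quiz)]
    # Rotate through the groups week by week.
    return [groups[week % len(groups)] for week in range(num_quizzes)]

# Placeholder names; the actual 12 skills come from the USMA document.
topics = [f"Skill {i}" for i in range(1, 13)]
schedule = quiz_schedule(topics)
```

With weekly quizzes, `schedule[0]`, `schedule[4]`, and `schedule[8]` contain the same three topics, which is what produces the 28-day gap between retrieval attempts for each topic.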
When the students complete our program and are admitted to the United States Military Academy, they take a multiple choice Fundamental Concepts Exam within the first week or so of classes. This Fundamental Concepts Exam is built from the 40 “Required Mathematical Skills for Entering [Students].” Historically, our student body scores very low on the Fundamental Concepts Exam despite the fact that our curriculum covers all 40 of the required skills. The frustration of knowing our students continued to score poorly is what led us to research ways to improve the long-term retention of these skills. In addition, these fundamental skills are the building blocks for future course work at the United States Military Academy in advanced mathematics, physics, chemistry, and engineering. We were experiencing a dilemma similar to that described by Hopkins et al. (2016), whose study compared spaced versus massed practice in a course titled Introductory Calculus for Engineers.
The pretest that students took when they arrived at USMAPS in July served as the initial measurement of knowledge for 10 of the 12 fundamental concepts. The pretest is a 50-problem multiple choice exam that students have 80 min to complete. The post-test is the same exam given the following April. We mapped the 50 pretest problems against the 12 identified fundamental concepts and found that two of the topics were not directly assessed; although those two topics were included in the weekly quizzes during the study, they were not analyzed as part of the study results.
During quarter one, and prior to the start of the study, all students in the PIC classes experienced instruction in the same mathematical concepts including the 12 fundamental concepts. At the end of quarter one, all PIC students took a final exam that was an open response exam (no multiple choice). The final exam was “group-graded.” Each instructor was given a rubric for a specific problem on the final exam and they graded that problem for every student in the course. This is our practice for grading major exams to build in consistency and fairness in our grading procedure. Performance on this final exam was one metric used to establish the initial statistical equivalence between the treatment and control groups (see Sect. 4.1 ).
Once a week during quarters two, three, and the first two weeks of quarter four, all PIC students took a retention quiz (RQ). The RQs were administered and graded on a web-based platform. The quizzes were given during class at a time left to the instructor’s discretion. Students logged into the platform, and the instructor provided a password that allowed them to begin the 10-minute timed quiz. The platform displayed a timer and no longer accepted submissions once time had expired. It automatically graded the open-response submissions and released the students’ scores later that day, after all students had taken the RQ. All sections of PIC were coordinated to take the weekly RQ on the same day. For motivational purposes, the quizzes were worth points but counted for only 4% of students’ total grade in quarters two and three, with no effect on the final grade in quarter four. Students in the control group received RQs with three questions on the mathematical content currently being covered in the course (massed recall). Students in the treatment group received RQs with three questions drawn only from the 12 identified fundamental concepts taught in quarter one (spaced recall). Each quiz addressed three fundamental concepts, and the concepts cycled every 28 days based on the exposure gap suggested by Cepeda et al. (2008) to achieve a retention interval of 6 months. We also staggered the three concepts being quizzed each class period to prevent any loss of the desired difficulty that might come from students sharing what was on their RQ. Regardless of content, all quiz questions were formatted as open response. Scores and worked solutions were available to students at the end of the academic day.
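The cycling scheme described above can be sketched in a few lines. This is an illustrative reconstruction, not the authors' actual scheduling code; the placeholder concept names and the grouping into fixed triples are assumptions.

```python
from collections import Counter

# The 12 fundamental concepts (placeholder names; the real topics are
# listed in Table 1 of the paper).
concepts = [f"concept_{i}" for i in range(1, 13)]

# Fixed groups of three concepts; each group fills one weekly RQ.
groups = [concepts[i:i + 3] for i in range(0, 12, 3)]  # 4 groups of 3

# With one RQ per week, cycling the four groups yields a 28-day exposure
# gap per concept and four appearances across the 16 RQs.
schedule = [groups[week % 4] for week in range(16)]

counts = Counter(c for quiz in schedule for c in quiz)
print(all(n == 4 for n in counts.values()))  # True: each concept quizzed 4 times
```

Under this scheme, consecutive appearances of any concept are exactly four weeks (28 days) apart, matching the exposure gap drawn from Cepeda et al. (2008).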
Class time was not used to discuss quiz solutions, as our design goal was a classroom strategy that would not take too much time away from teaching new material, as Lang (2016) suggested. Each of the fundamental concepts was cycled four times over the course of the 16 RQs. Tables 2 and 3 provide examples of a treatment RQ and a control RQ.
Six months after the quarter one final exam and after the 16 RQs were administered, all students took the post-test simultaneously under identical conditions. We analyzed each group’s post-test scores to determine whether there was any significant difference in performance between the treatment and control groups. Again, two of the 12 mathematical concepts cycled in the treatment RQs were not assessed on the post-test, so we did not measure those two concepts: radicals to rational exponents and distance/rate/time.
To measure whether the spaced weekly RQs led to long-term retention, we compared the performance of all 128 students on 24 selected problems from the pre/post-test. These 24 problems assessed 10 of the fundamental concepts that cycled in the treatment group RQs. We determined the treatment to be a success or failure as shown in Fig. 2 .
Classification of retention success or failure used to determine whether RQs led to long-term retention
The pretest and post-test were identical multiple choice assessments containing 50 questions. We analyzed only the 24 questions that assessed one of the 10 topics cycled in the treatment group. Because the questions were multiple choice, a problem was scored correct if the student selected the right one of the four options and incorrect otherwise. As our hypothesis was that the spaced recall quizzes would help reduce forgetting, a student was credited with a success if they answered the problem correctly on the post-test, even if they answered it incorrectly on the pretest. That is, a correct answer on the post-test counts as a retention success and an incorrect answer as a retention failure, regardless of the student’s performance on that topic on the pretest given before quarter one instruction. After tallying the successes and failures on each of the 24 problems for each student, we calculated the proportion of successes each student achieved on these 24 questions and named this the student’s retention index. Comparing the retention index of students in the control group to that of students in the treatment group allowed us to measure the effect of spaced recall, in the form of quizzing, on long-term retention.
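Concretely, the retention index described above reduces to a simple proportion. The sketch below is a minimal illustration with hypothetical answer data; the function name and scoring representation are assumptions, not the authors' code.

```python
def retention_index(posttest_answers, answer_key):
    """Proportion of the analyzed problems answered correctly on the
    post-test; pretest performance is deliberately ignored."""
    successes = sum(
        1 for given, key in zip(posttest_answers, answer_key) if given == key
    )
    return successes / len(answer_key)

# Hypothetical student: 18 of the 24 analyzed problems correct on the post-test.
key = ["A"] * 24            # placeholder answer key
answers = ["A"] * 18 + ["B"] * 6
print(retention_index(answers, key))  # 0.75
```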
The success in reducing forgetting was pronounced. To quantify these gains, we used several statistical techniques: data visualization with density plots, t-tests, and analysis of variance (ANOVA). The results are reported in this section and summarized in Table 4. We used the 50-question post-test to establish retention of the fundamental mathematics concepts. Figure 3 shows the distribution of scores for the treatment and control groups. The treatment group’s mean score \((72.4\%)\) is higher than the control group’s \((68.5\%)\), and the difference is statistically significant at the 0.05 level. We compared mean scores using a conservative hypothesis test of whether the true population mean scores were the same; the p-value of 0.011 provides significant evidence against this hypothesis. The treatment group outperformed the control group by \(3.9\%\) in the mean, a statistically significant increase.
Distribution of post-test scores by group. Mean score difference of \(3.9\%\) is statistically significant and shows the treatment group scored higher on the post-test
Next, we examined the treatment and control groups’ performance on the 24 problems within the 50-problem post-test that directly assessed 10 of the 12 fundamental concepts, using the same conservative hypothesis test as in the full post-test analysis. As shown in Fig. 4, the mean performance (retention index) of the treatment group \((81.6\%)\) is higher than that of the control group \((77.3\%)\). The p-value of 0.012 provides statistically significant evidence that the control and treatment groups did not perform the same on those 24 problems. The mean score of the treatment group was \(4.2\%\) greater than that of the control group, indicating the treatment group was more successful in reducing forgetting of the fundamental concepts.
Distribution of the success of reducing forgetting of the 10 fundamental concepts within the post-test by group. Mean score difference of \(4.2\%\) is statistically significant and shows the treatment group scored higher on the fundamental concepts and was more successful in reducing forgetting
In this section we examine other factors that could have influenced our results: latent (prior) student aptitude, instructor experience level, class hour of the day, and study-lead bias, using common statistical techniques. Ruling these out strengthens our conclusions. A statistical summary of these factors is included in Table 4.
Given that new mathematical knowledge builds on prior mathematical understanding, we wanted to verify that our random assignment resulted in two groups with relatively equal mathematical foundations prior to treatment. Without the opportunity to balance our groups for mean SAT/ACT and mean high school grade point average, two measures that might indicate the strength of a student’s mathematical foundation, we used student performance on a pretest to demonstrate that the two groups could be considered equally balanced in their mathematical preparedness. The pretest was administered in late July, simultaneously to all students under identical conditions. Figure 5 shows the distribution of scores for the treatment and control groups. A t-test assuming the two mean scores were the same produced a p-value of 0.841 (see Table 4). Consequently, although the treatment group’s mean score \((47.0\%)\) is slightly below the control group’s \((48.8\%)\), the difference is not statistically significant. The random assignment of students to treatment or control created two groups with relatively equal mathematical foundations.
Distribution of pretest scores by group. Mean score difference of 1.8% is not statistically significant; therefore, the groups have relatively equal mathematical foundations
We also examined student performance on the quarter one final exam which students took after receiving instruction in the 12 fundamental concepts and prior to the start of the study. We wanted to ensure that quarter one instruction did not favor one group over another and therefore create an imbalance between the two groups before the treatment began. Figure 6 shows the quarter one final exam performance (distribution of scores) between the treatment and control groups. Note that the mean scores for each group are almost identical with the treatment group mean score \((79.5\%)\) only slightly above the control group \((79.2\%)\) . As before, we used a t-test and assumed that the means were the same. The p -value of 0.823 (see Table 4 ) indicates that the slight difference in means is not statistically significant. Taken together, results from the pretest and quarter one final exam establish that the initial assignment of students to the treatment and control groups produced two groups that were equally balanced in their mathematical preparedness before the treatment began.
Distribution of quarter 1 final exam scores by group. Mean score difference is 0.3%, so both groups were balanced in their mathematical preparedness at the end of quarter 1
The faculty members who taught the Precalculus class have teaching experience spanning 5 to 20 years. Is it possible that the more experienced instructors are better skilled at their craft and, as a result, their students scored higher on the retention quizzes regardless of whether they were in the treatment or control group? A two-way ANOVA showed almost no evidence that student performance was affected by the instructor’s level of teaching experience (see Table 4). This result is powerful because it shows that spaced recall in the form of retention quizzes can be effective for any instructor, regardless of years of teaching experience.
The hour at which students attend class is a possible concern, as students are often said to perform better when they do not have a mathematics class early in the morning, right after lunch, or at the end of the day. Eliasson et al. (2010) studied how earlier wake times affect student performance. Trockel et al. (2000) found early wake-up times were the biggest contributor to differences in grade point averages. There is even a recent claim that early morning classes may impede performance (Yeo et al., 2021). To explore this influence, we conducted a two-way ANOVA and found little evidence that student performance in this experiment varied by class hour (see Table 4). This could be because, although students have class at different times of the day, the whole student body is required to wake at the same time to attend a morning accountability formation. Importantly, the claim that the treatment was effective in reducing forgetting holds regardless of the time at which students attend class.
Two of the six instructors for this Precalculus course designed the experiment and spent significant time reading the research on the effectiveness of spaced learning techniques and mitigating factors. Both instructors taught a control, a treatment, and a mixed section. To determine whether the instructors subconsciously changed their techniques in response to their understanding of the topic being studied, we used a t-test that assumed student performance between study leads and non-study leads was the same. A p-value of 0.899 indicated almost no evidence that student performance varied as a result of being in a study lead’s class (see Table 4).
The results of this study clearly indicate the effectiveness of spaced recall quizzing at reducing student forgetting. This is similar to the result of Hopkins et al. (2016); however, we obtained it without the presence of immediate feedback. Used as a classroom strategy, spaced recall quizzing takes only minimal time away from the current curriculum. Lang (2016) describes this as a “small teaching” tool that can be added to your current syllabus without a major overhaul of your curriculum. This small activity had a large positive influence on our students’ ability to recall fundamental mathematical concepts at the end of the course. There are some implications worth considering when spaced recall quizzing is used as a classroom strategy: the impact on instructional time, the practice of providing feedback on student solution attempts, and the effect the quizzes may have on student affect, all of which we discuss below.
A possible argument against spaced recall quizzes is that they take time away from instruction of current concepts and therefore reduce how well students learn new material. We anticipated that we might observe this effect because of the different content of the retention quizzes (RQs) for control versus treatment: the control group RQs addressed current material and the treatment group RQs did not. Because of the control group’s extra exposure to and time with the current material, we expected them to outperform the treatment group on the quarter three final exam. To check for this possibility, we collected data on the quarter three final exam for both groups. At the end of quarter three, all PIC students took a final exam that was open response (no multiple choice) and, as with the quarter one final, was “group-graded”: each instructor was given a rubric for a specific problem and graded that problem for every student in the course, our standard practice for building consistency and fairness into the grading of major exams. Figure 7 shows the distribution of scores for the treatment and control groups on that exam. Although the control group’s mean score \((76.4\%)\) is slightly above the treatment group’s \((74.9\%)\), a t-test that examined whether the two groups performed the same produced a p-value of 0.400. This indicates that the mean quarter three final exam scores were similar between the two groups and that both groups understood the new concepts equally well (see Table 4). So, not only did revisiting fundamental concepts on the spaced recall quizzes reduce forgetting of those concepts for the treatment group, but the lost instructional time also did not negatively affect their ability to learn new concepts. This is where our results differ from those of the Hopkins et al.
(2016) study, where spaced versus massed practice was used in a course titled Introductory Calculus for Engineers. Although both studies found that the spaced (treatment) students outperformed the massed (control) students on the spaced material, we found that the control and treatment groups performed the same on the current material, whereas Hopkins et al. (2016) found the spaced learners also outperformed the massed learners on the current material.
Distribution of quarter 3 final exam scores by group. Mean score difference of 1.5% is not statistically significant; consequently, both groups performed equally
As previously mentioned, several studies have addressed the role that feedback plays in retrieval attempts, particularly when retrieval attempts are unsuccessful (Benjamin & Tullis, 2010; Karpicke & Roediger III, 2007; Kornell et al., 2009). These studies claim that the learning benefits of retrieval attempts are enhanced when students receive feedback on their solution attempts. The design of our study prevented us from discussing quiz solutions during class time because students could share solutions with other students who had not yet taken their retention quiz that day. Any such discussion would interfere with the designed exposure gap informed by the research of Cepeda et al. (2008) and with the “desired difficulty” mentioned by Roediger III and Karpicke (2006b), a phrase coined by Bjork and Bjork (1992). This precluded students from receiving immediate feedback on their solution attempts. Students could log into the web-based platform later in the day to see their scores and solutions to the problems, but we could not ensure that they did so. This is worth noting because, even without a way of providing immediate feedback or ensuring that students engaged with delayed feedback, the RQs produced statistically significant improvements in students’ long-term retention (see Sect. 3). Consequently, even in classroom situations where providing immediate feedback is not possible, quizzing alone is still effective in improving long-term retention. This is consistent with the findings of Roediger III and Karpicke (2006b) and Karpicke and Roediger III (2007) that the testing effect occurs even when feedback is not given. RQs have been adopted as part of our curriculum due to their success in reducing forgetting.
As we implement the RQs as a practice rather than a research study, we plan to provide immediate feedback and look forward to analyzing the retention with immediate feedback in future studies. One reviewer noted it could be possible that the success of the retention quizzes was a result of whether the student logged into the web-based platform to review the feedback. Whether feedback played a role in increasing retention is an item for further investigation.
In addition to the quantitative measures used to determine the efficacy of the RQs in reducing forgetting, we used the standard end-of-quarter surveys to ask students to comment on the utility of the RQs. Although our research did not formally address student affect, what we observed is worth noting. The feelings of many students in the study can be summed up by one student who said,
“I have a ton of [n]egative feelings towards the retention quizzes. These quizzes are the dumbest thing I have ever taken in school. I think the idea of going over past knowledge that will not be on the [current exam] is not worth the wasted 10 minutes at the beginning of class. I think that the retention quizzes are a waste of time and are an easy F in the grade book. This might just be because I am forgetting the minor past stuff and trying to focus more on what we are learning now but I believe that the retention quiz is not necessary.”
Other students expressed similar sentiments, noting the difficulty of the RQs, and expressed concern about their grade.
“I felt negatively about retention quizzes because I often did poorly on a subject that I had previously done very well on.”
“It was a stressful thing to jump into in class and generally brought everyone’s grade down.”
“I feel like the retention quizzes only takes away from our grade. Our minds are focused on what we are learning currently then we have to revert back to old stuff.”
These sentiments did not surprise us. Lang (2016) explained that struggling to recall information a student thought they had already mastered can be frustrating. He suggested informing students about the research-supported benefits of quizzing and retrieval practice. Because these quizzes are a recall attempt and not a formative assessment, the instructor can also reduce potential negative effects on students’ grades by making them low stakes, giving them less weight in the overall grade.
Another option might be not to grade the quizzes at all, as long as there is some other motivation for students to put forth effort. A key factor in the success of the RQs is effortful recall (Brown et al., 2014), so motivating students to expend effort is an important consideration. Helping students understand the benefits that come from even unsuccessful retrieval attempts could also resolve some of their frustration and combat any negative learning effects due to poor affect. Future research could investigate the influence that recall quizzes have on student affect and consider effective ways to address any negative aspects. Based on what we observed informally, any attempt to use spaced recall in the form of quizzing to improve long-term retention in a classroom setting should include a plan to address students’ frustration with the experience. If the RQs are to be graded, student concerns about the impact of the quizzes on their grades should also be addressed. One reviewer suggested that we find ways to make the effort students put into the RQs a rewarding experience that allows them to see the benefit of effortful retrieval rather than a punishment in their grade. This is a suggestion we plan to implement as our school continues the use of RQs for all students, given the significant reduction in forgetting of fundamental mathematical concepts.
This study demonstrated the effectiveness of spaced recall quizzing in reducing forgetting and thereby improving students' long-term retention. Specifically, identifying essential knowledge such as fundamental concepts and then giving students weekly spaced quizzes in which the identified concepts cycle monthly positively affects students’ ability to recall those concepts at the end of the academic year. Importantly, implementing this small teaching strategy does not reduce the level at which students learn new material, despite the loss of instructional time due to quizzing. Additionally, in situations where it is not possible to provide students with feedback on retrieval attempts, quizzing alone is still effective in improving long-term retention. The effect of quizzing on student affect could not be formally addressed, but end-of-course surveys exposed student frustration with the retention quizzes. The potential for negative responses from students is not something to ignore; anticipating and addressing negative student feelings is an important consideration if spaced recall quizzing is to be implemented in the classroom.
The datasets and R codes generated during this study are available from the corresponding author on reasonable request.
Arnold, K. M., & McDermott, K. B. (2013). Test-potentiated learning: Distinguishing between direct and indirect effects of tests. Journal of Experimental Psychology: Learning, Memory, and Cognition, 39 (3), 940.
Begolli, K. N., Dai, T., McGinn, K. M., & Booth, J. L. (2021). Could probability be out of proportion? Self-explanation and example-based practice help students with lower proportional reasoning skills learn probability. Instructional Science, 49 , 441–473.
Benjamin, A. S., & Tullis, J. (2010). What makes distributed practice effective? Cognitive Psychology, 61 (3), 228–247.
Bertilsson, F., Wiklund-Hörnqvist, C., Stenlund, T., & Jonsson, B. (2017). The testing effect and its relation to working memory capacity and personality characteristics. Journal of Cognitive Education and Psychology, 16 (3), 241–259.
Bjork, R. A., & Bjork, E. L. (1992). A new theory of disuse and an old theory of stimulus fluctuation. From Learning Processes to Cognitive Processes: Essays Inhonor of William K Estes, 2 , 35–67.
Brown, P.C., Roediger III, H.L., & McDaniel, M.A. (2014). Make it stick: The science of successful learning.
Cepeda, N. J., Pashler, H., Vul, E., Wixted, J. T., & Rohrer, D. (2006). Distributed practice in verbal recall tasks: A review and quantitative synthesis. Psychological Bulletin, 132 (3), 354.
Cepeda, N. J., Vul, E., Rohrer, D., Wixted, J. T., & Pashler, H. (2008). Spacing effects in learning: A temporal ridgeline of optimal retention. Psychological Science, 19 (11), 1095–1102.
Ebbinghaus, H. (1964). Memory: A contribution to experimental psychology (H. A. Ruger & C. E. Bussenius, Trans.). New York, NY: Teachers College. (Original work published as Das Gedächtnis, 1885).
Eliasson, A. H., Lettieri, C. J., & Eliasson, A. H. (2010). Early to bed, early to rise! Sleep habits and academic performance in college students. Sleep and Breathing, 14 (1), 71–75.
Hopkins, R. F., Lyle, K. B., Hieb, J. L., & Ralston, P. A. (2016). Spaced retrieval practice increases college students’ short-and long-term retention of mathematics knowledge. Educational Psychology Review, 28 , 853–873.
Jaeger, A., Eisenkraemer, R. E., & Stein, L. M. (2015). Test-enhanced learning in third-grade children. Educational Psychology, 35 (4), 513–521.
Kamuche, F. U., & Ledman, R. E. (2005). Relationship of time and learning retention. Journal of College Teaching & Learning (TLC), 2 (8), 10.
Karpicke, J.D. (2017). Retrieval-based learning: A decade of progress. Grantee Submission.
Karpicke, J. D., & Roediger, H. L., III. (2007). Expanding retrieval practice promotes short-term retention, but equally spaced retrieval enhances long-term retention. Journal of Experimental Psychology: Learning, Memory, and Cognition, 33 (4), 704.
Kornell, N., Hays, M. J., & Bjork, R. A. (2009). Unsuccessful retrieval attempts enhance subsequent learning. Journal of Experimental Psychology: Learning, Memory, and Cognition, 35 (4), 989.
Lang, J. M. (2016). Small teaching: Everyday lessons from the science of learning . Wiley.
Liming, M. C., & Cuevas, J. (2017). An examination of the testing and spacing effects in a middle grades social studies classroom. Georgia Educational Researcher, 14 (1), 103–136.
Lyle, K. B., & Crawford, N. A. (2011). Retrieving essential material at the end of lectures improves performance on statistics exams. Teaching of Psychology, 38 (2), 94–97.
McDaniel, M. A., Anderson, J. L., Derbish, M. H., & Morrisette, N. (2007). Testing the testing effect in the classroom. European Journal of Cognitive Psychology, 19 (4–5), 494–513.
Pearson, W., Jr., & Miller, J. D. (2012). Pathways to an engineering career. Peabody Journal of Education, 87 (1), 46–61.
Roediger, H. L., III., Agarwal, P. K., McDaniel, M. A., & McDermott, K. B. (2011). Test-enhanced learning in the classroom: Long-term improvements from quizzing. Journal of Experimental Psychology: Applied, 17 (4), 382.
Roediger, H. L., III., & Karpicke, J. D. (2006a). The power of testing memory: Basic research and implications for educational practice. Perspectives on Psychological Science, 1 (3), 181–210.
Roediger, H. L., III., & Karpicke, J. D. (2006b). Test-enhanced learning: Taking memory tests improves long-term retention. Psychological Science, 17 (3), 249–255.
Rohrer, D., & Taylor, K. (2007). The shuffling of mathematics problems improves learning. Instructional Science, 35 (6), 481–498.
Rowland, C. A. (2014). The effect of testing versus restudy on retention: a meta-analytic review of the testing effect. Psychological Bulletin, 140 (6), 1432.
Sanchez, D. R., Langer, M., & Kaur, R. (2020). Gamification in the classroom: Examining the impact of gamified quizzes on student learning. Computers & Education, 144 , 103666.
Shirvani, H. (2009). Examining an assessment strategy on high school mathematics achievement: Daily quizzes vs. weekly tests (pp. 34–45). American Secondary Education.
Taylor, A. T., Olofson, E. L., & Novak, W. R. (2017). Enhancing student retention of prerequisite knowledge through pre-class activities and in-class reinforcement. Biochemistry and Molecular Biology Education, 45 (2), 97–104.
Trockel, M. T., Barnes, M. D., & Egget, D. L. (2000). Health-related variables and academic performance among first-year college students: Implications for sleep and other behaviors. Journal of American College Health, 49 (3), 125–131.
Yang, C., Luo, L., Vadillo, M. A., Yu, R., & Shanks, D. R. (2021). Testing (quizzing) boosts classroom learning: A systematic and meta-analytic review. Psychological Bulletin, 147 (4), 399.
Yeo, S. C., Lai, C. K., Tan, J., Lim, S., Chandramoghan, Y., & Gooley, J. J. (2021). Large-scale digital traces of university students show that morning classes are bad for attendance, sleep, and academic performance. BioRxiv . https://doi.org/10.1101/2021.05.14.444124
We would like to thank the USMAPS Math Department head, Dr. Alex Heidenberg, for adopting the retention quizzes into the Precalculus course. We also appreciate the support and participation of the instructors: Elizabeth Giebler, Justyna Marciniak, and Fran Teague. Because of their involvement along with the rich student feedback, the data was more robust than anticipated.
No funding was received for conducting this study.
Diane S. Lindquist, Brenda E. Sparrow and Joseph M. Lindquist have contributed equally to this work.
Mathematics Department, US Military Academy Preparatory School, 950 Reynolds Road, West Point, NY, 10996, USA
Diane S. Lindquist & Brenda E. Sparrow
Department of Mathematical Sciences, US Military Academy, 601 Thayer Road, West Point, NY, 10996, USA
Joseph M. Lindquist
Lindquist, D. and Sparrow, B. jointly conceived the study, trained instructor participants, and collected and interpreted the data. Lindquist, J. conducted all statistical analyses.
Correspondence to Diane S. Lindquist .
Competing interests.
The authors have no financial or proprietary interests in any material discussed in this article.
This research was submitted to the West Point Institutional Review Board, and the protocol was deemed to meet the requirements for exempt status under 32 CFR 219.104(d)(1). Since the study involved secondary analysis of routinely collected data, consent was not required.
Publisher's note.
Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
While descriptive statistics may clearly show a difference between the control and treatment groups, the reader may wonder whether the results generalize to a broader population (say, high school students, or a similar stratum of college students). For this, we rely on inferential statistics to determine the likelihood of similar results given a different sample of students. This section briefly outlines the tests used to assess this generalizability.
Data Visualization: While summary statistics (maximum, minimum, mean, variance, etc.) are useful point estimates of student performance, they may mask key features. A common tool for visualizing these features is the density plot, which estimates and plots the underlying probability density function, producing a "smoothed" histogram that is not sensitive to bin-size selection. This type of analysis is shown in Fig. 4 .
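As a concrete sketch, the density estimate behind such a plot can be computed with SciPy's `gaussian_kde`; the score arrays below are hypothetical stand-ins, not the study's data.

```python
# Hedged sketch of a density-plot comparison; scores are hypothetical.
import numpy as np
from scipy.stats import gaussian_kde

rng = np.random.default_rng(0)
control = rng.normal(78, 8, 120)    # hypothetical control-group scores
treatment = rng.normal(82, 8, 115)  # hypothetical treatment-group scores

grid = np.linspace(40, 110, 300)
for label, scores in [("control", control), ("treatment", treatment)]:
    # gaussian_kde estimates the probability density function, giving a
    # "smoothed" histogram with no bin-size choice to make
    density = gaussian_kde(scores)(grid)
    print(f"{label}: density peaks near a score of {grid[np.argmax(density)]:.0f}")
```

Plotting `density` against `grid` (for example, with matplotlib) reproduces smoothed curves of the kind shown in Fig. 4.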
T-Test: When comparing two groups (e.g., control and treatment ), one is often interested in whether an observed performance difference ( \(\mu _1-\mu _2\) ) is significant. The t-test examines the null hypothesis

\(H_0: \mu _1 = \mu _2\)

that the mean performances are the same, and returns a p -value that can be interpreted as the probability that the observed difference between the two groups is due to simple chance. A p -value of 0.05 indicates that the difference could be explained by chance in just 5 out of every 100 experiments; given such an unlikely outcome, the null hypothesis should be rejected. In most contexts, a p -value greater than 0.05 suggests insufficient evidence to reject the null hypothesis. For this analysis, all t-tests were computed using Welch's t-test, which requires approximately normal responses (evident from Figs. 3 , 4 , and others) but relaxes the equal-variance assumption of Student's t-test.
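Welch's t-test is available in SciPy as `ttest_ind` with `equal_var=False`; the two samples below are hypothetical, chosen only to illustrate the call.

```python
# Welch's t-test: equal_var=False drops Student's equal-variance
# assumption. Both samples are hypothetical.
import numpy as np
from scipy.stats import ttest_ind

rng = np.random.default_rng(1)
control = rng.normal(75, 8, 120)     # hypothetical control scores
treatment = rng.normal(83, 10, 115)  # deliberately different variance

t_stat, p_value = ttest_ind(treatment, control, equal_var=False)  # Welch
print(f"t = {t_stat:.2f}, p = {p_value:.4g}")
if p_value < 0.05:
    print("reject H0: the mean difference is unlikely to be simple chance")
```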
Analysis of Variance (ANOVA): When comparing more than two groups (e.g., among more than two instructors ), one is often interested in whether an observed performance difference is significant. ANOVA tests the null hypothesis

\(H_0: \mu _1 = \mu _2 = \cdots = \mu _n\)

that the mean performances are the same, where \(\mu _n\) is the mean performance of the \(n^{th}\) group. If the test returns a statistically significant result, there is evidence to reject the null hypothesis ( \(H_0\) ), implying that at least one group differs from the others. One-way ANOVA tests for differences across one independent variable (say, control/treatment); two-way ANOVA tests across two independent variables (say, control/treatment and hour of the day taught). When reporting degrees of freedom for ANOVA, we adopt the notation ( a,b ), where a is the degrees of freedom for the "between-group" variance and b is the degrees of freedom for the "within-group" variance.
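A one-way ANOVA of this kind can be run with SciPy's `f_oneway`; the three instructor groups below are hypothetical, and the degrees of freedom follow the ( a,b ) convention above.

```python
# One-way ANOVA for three hypothetical instructor groups, reporting
# degrees of freedom as (between-group, within-group) = (k - 1, N - k).
import numpy as np
from scipy.stats import f_oneway

rng = np.random.default_rng(2)
groups = [rng.normal(mu, 8, 40) for mu in (76, 80, 84)]  # 3 instructors

f_stat, p_value = f_oneway(*groups)
k = len(groups)                       # number of groups
n_total = sum(len(g) for g in groups)
dof = (k - 1, n_total - k)            # here (2, 117)
print(f"F{dof} = {f_stat:.2f}, p = {p_value:.4g}")
```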
Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/ .
Lindquist, D.S., Sparrow, B.E. & Lindquist, J.M. Spaced recall reduces forgetting of fundamental mathematical concepts in a post high school precalculus course. Instr Sci (2024). https://doi.org/10.1007/s11251-024-09680-w
Received : 17 June 2022
Accepted : 05 July 2024
Published : 03 September 2024
DOI : https://doi.org/10.1007/s11251-024-09680-w
BMC Medical Education volume 24, Article number: 949 (2024)
Effective education is one of the main concerns of every society and, in nursing, contributes to preparing capable practitioners; developing more effective teaching and learning methods is therefore an educational priority in every country. The present study aimed to compare the effect of education using the flipped class, gamification and gamification in the flipped learning environment on the performance of nursing students in a client health assessment.
The present study was a parallel randomized clinical trial. The participants were 166 nursing students. The clinical trial data were collected from December 14, 2023, to February 20, 2024. The inclusion criteria were nursing students who had passed the first semester, who were willing to participate and to install the app on their mobile devices, and who had no experience with the application designed for this study. The participants were allocated to four groups using colored cards. In the first group, teaching was performed via gamification in a flipped learning environment; in the second group, via the gamification method alone. A flipped class was implemented in the third group. In the fourth group, the usual lecture method was used. Practical performance in physical health assessment was evaluated with 10 key-feature questions, and student satisfaction and self-efficacy were measured with questionnaires.
In this study, 166 nursing students (99 female and 67 male), with a mean (standard deviation) age of 21.29 (1.45) years, participated. There was no statistically significant difference in the demographic characteristics of the participants across the four intervention groups ( P > 0.05). Comparing the results before and after the intervention, the paired t test indicated a significant difference in the satisfaction, learning and self-efficacy of the learners ( P < 0.001). In the comparison of the four groups, the ANOVA results for the mean knowledge evaluation and satisfaction scores after the intervention indicated a statistically significant difference ( P < 0.001). When the knowledge evaluation scores of the groups were compared, the scores for gamification in the flipped learning environment differed significantly from those of the other methods ( P < 0.05), and there was no significant difference between the flipped class and lecture methods ( P = 0.43). According to the ANOVA results for the satisfaction scores, the students in the gamification in the flipped learning environment and gamification groups were more satisfied than those in the flipped class and lecture groups ( P < 0.01).
Based on the results of the present research, it can be concluded that teaching methods have an effect on students’ learning and satisfaction. The teaching method has an effect on the satisfaction of the students, and the use of the flipped class method with the use of gamification was associated with more attractiveness and satisfaction in addition to learning. Teachers can improve the effectiveness of education with their creativity, depending on situation, time, cost, and available resources, by using and integrating educational methods.
Effective education is one of the main concerns of every society [ 1 ]. Because traditional methods of teaching, learning and management have limited effectiveness [ 2 ], active learning strategies and the use of technologies are recommended [ 3 , 4 , 5 ]; among these, it is helpful to integrate the flipped classroom approach with game-based methods [ 6 , 7 ]. The flipped classroom was introduced in 2007 by Bergmann and Sams, two chemistry teachers at Woodland Park High School in Colorado (USA). Their goal was to ensure that students who could not attend class for various reasons could keep pace with the course and not be harmed by their absence [ 8 ]. Bergmann and Sams videotaped and distributed instructional content and found that this model allowed the teacher to focus more attention on the individual learning needs of each student [ 5 , 8 ].
In 2014, the Flipped Learning Network (FLN) defined flipped learning as "an educational approach in which direct instruction is transferred from the group learning dimension to individual learning, and in a dynamic and interactive learning environment, where the instructor guides students in applying concepts and engaging creatively with course content". Flexible environment, learning culture, purposeful content and professional instructor have been described as the four pillars of flipped learning [ 9 , 10 ]. In addition to the ever-increasing complexity of the healthcare environment and the rapid advancement of healthcare technology, a global pandemic (COVID-19) has affected educational structures. The pandemic caused a global educational movement toward blended learning to meet students' technological and hands-on learning needs; indeed, at no time in history has there been such a sudden transition to this type of learning [ 11 ], in which the flipped classroom was widely used [ 9 ].
In nursing education, the use of flipped classrooms [ 9 , 12 ] and technologies [ 3 , 5 ] has been emphasized. A systematic review of the effect of the flipped classroom on academic performance in nursing education indicated a positive effect, and most students' opinions of this method cited aspects such as its usefulness, flexibility, greater independence and greater participation [ 13 , 14 , 15 , 16 , 17 , 18 , 19 ]. In terms of the cognitive levels of Bloom's taxonomy, with the flipped classroom method the student works through the simplest stage of the learning process at home, and then, through active learning with the help of the teacher and classmates, uses class time for the higher and more demanding levels [ 20 , 21 ]. In addition, the flipped classroom has certain advantages over traditional learning: it is student-centered and makes students responsible for their own learning [ 22 ], and its use in nursing has been emphasized in systematic review studies [ 3 , 23 , 24 ].
One of the interactive teaching methods using computers is the gamification method. Gamification in education includes the use of game elements to increase motivation and participation and to involve students in the personal learning process [ 1 , 25 ]. Gamification is an active education method. The gamification system increases the level of engagement and motivation of learners by provoking excitement and creating challenges for them. Additionally, with this method, it is possible to provide an opportunity for testing, and in that test, in addition to creating a challenge, learners are given the opportunity to display their achievements through competition [ 26 ].
Nursing education institutions are obliged to improve the ability of nursing students to make correct clinical judgments through various educational programs and the use of new teaching methods [ 27 , 28 ] so that when nursing students enter the clinic, they can fulfill their role as members of the medical team [ 27 ]. Therefore, it is necessary to carry out more research regarding the identification of effective teaching methods that can improve the attractiveness of education and its satisfaction among nursing students [ 1 , 27 ].
This study addresses the lack of comparative research on the effectiveness of flipped classrooms and gamification in nursing education, an area that has not been sufficiently explored. These methods can also be combined so that their advantages complement one another [ 6 , 7 ]: when the flipped class is combined with gamification, the flipped class provides more study time while gamification provides attractiveness [ 7 ]. Therefore, given the appeal of a new application delivered in a flipped class, the current research compared the effects of education using the flipped class, gamification and gamification in the flipped learning environment on the performance of nursing students in a client health assessment.
The present study was a parallel randomized clinical trial research aimed at comparing the effect of education using the flipped class, gamification and gamification in the flipped learning environment on the performance of nursing students in a client health assessment. The clinical trial data was collected from December 14th, 2023, until February 20th, 2024.
First, in a call for participants, 247 nursing students registered for the study, of whom 188 met the entry criteria. The inclusion criteria were nursing students who had passed the first semester, who were willing to participate and to install the app on their mobile devices, and who had no experience with the application designed for this study. The exclusion criteria were loss of the mobile device and dropping out of the study, for example, because of transfer, migration or unwillingness to continue participating. Accordingly, 18 students were excluded for unwillingness to continue, two because of migration, and two because they lost their mobile devices (Fig. 1 ).
Study and sampling process
The participants were allocated to four groups using colored cards. Before sampling, 188 cards in four colors (blue, red, black and white; 45 cards of each color) were prepared in an envelope. After completing the informed consent form and pre-test questionnaires, each student drew a colored card from the envelope. By lottery, it was determined that participants with a blue card joined the gamification in a flipped learning environment group, red the gamification group, black the flipped class group, and white the lecture group. The study and sampling process is shown in Fig. 1 .
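The card-draw allocation above can be sketched as shuffling an equal mix of four card colors and dealing one card per student. This is a minimal illustration with assumed details (group labels follow the paper's color mapping; 47 cards per color is illustrative, not the study's exact counts), not the study's actual procedure or code.

```python
# Minimal sketch (assumed details) of the colored-card allocation.
import random

color_to_group = {
    "blue": "gamification in a flipped learning environment",
    "red": "gamification",
    "black": "flipped class",
    "white": "lecture",
}
cards = [color for color in color_to_group for _ in range(47)]  # 188 cards
random.seed(0)          # for a reproducible example
random.shuffle(cards)   # the envelope of mixed cards

students = [f"student_{i:03d}" for i in range(len(cards))]
allocation = {s: color_to_group[c] for s, c in zip(students, cards)}
print(allocation["student_000"])
```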
The course consisted of four 60-minute classes on health status assessment over 4 weeks, with each group attending one class per week. The educational content came from the health assessment and clinical examination courses of the Bachelor of Nursing curriculum, and the course plan was developed from that curriculum.
For the intervention, the application was designed using the waterfall (cascade) model: initial analysis, system analysis, design, programming, testing (alpha and beta), implementation and modification [ 29 , 30 ]. In the initial analysis stage, the need or problem (here, improving education) is raised and it is asked whether technical solutions can address it; if possible solutions exist, their practicality is evaluated. In the system analysis stage, the visual appeal, currency of information, simplicity of language and comprehensiveness of the educational content are checked. In the design phase, the design of the system was written, and a program was implemented by the programmers according to that initial design.
The educational content of the application was prepared based on the health assessment and clinical examination courses of the Bachelor of Nursing program and approved by an expert panel. The application was designed in two parts: education and scenario-based games. The education section presented the instructional content, and the scenario-based game section contained 10 health status assessment and clinical examination scenarios designed from real situations.
In the scenario-based game section, the application was built as a game in which the student first observes the patient's chief complaint and must then complete the patient examinations and choose the correct answer. A correct choice earns a green card and a mistake earns a red card; up to 4 green cards can be earned in each scenario. A yellow card is shown when the answer is not incorrect but is not the exact answer. In each scenario, the student must find the correct nursing diagnosis, stated according to the priority of care in the scenario.
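The green/red/yellow card scoring can be sketched as a simple grading rule. This is an illustrative sketch with assumed names and a hypothetical scenario, not the application's actual code.

```python
# Illustrative sketch (assumed names) of the card scoring described above:
# green for a correct answer, yellow for a near-miss, red for a mistake.
def grade_answer(chosen: str, correct: str, near_misses: set) -> str:
    if chosen == correct:
        return "green"
    if chosen in near_misses:
        return "yellow"  # not incorrect, but not the exact answer
    return "red"

# one hypothetical four-step scenario: (student's choice, key, near-misses)
steps = [
    ("auscultate lungs", "auscultate lungs", set()),
    ("check radial pulse", "check radial pulse", set()),
    ("inspect skin", "palpate abdomen", {"inspect skin"}),
    ("ineffective breathing pattern", "ineffective breathing pattern", set()),
]
cards = [grade_answer(chosen, key, nm) for chosen, key, nm in steps]
print(cards, "-> green cards:", cards.count("green"))
```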
The fundamental elements of gamification are mechanics (motivating students through points, badges and rewards), dynamics (engaging users through stories and narratives), and aesthetics (the user's experience of the application as user-friendly and attractive) [ 31 , 32 , 33 ]. The mechanics element was implemented through the green cards earned at each stage, the dynamics element through the scenarios, and the aesthetics element was checked in the alpha and beta tests.
In the test phase, the application was checked for errors and tested for user acceptance in two parts, the alpha and beta tests. In the alpha test, the program was used by the designers (four academic nurses and four IT specialists) as users; in the beta test, it was used by a group of users (20 nursing students). It was improved based on their opinions, and the application approved by the designers and users was then used in this study.
The fundamental elements of a flipped class are that students read the content before class and do the assignments in class. In this study, these elements were respected: the content was provided to participants in advance, students read it before each class, and they solved the assignments in class. The content for the flipped class group was provided as PowerPoint files; for the gamification in the flipped learning environment group, it was provided in the application.
In the lecture group, the educational content was delivered by lecture, and at the end of each class one of the designed scenarios was given to the students as an assignment to solve by the next week. By the end of the study, the students in this group had completed four scenarios as assignments.
In the flipped class group, the content was prepared as four narrated PowerPoint files and given to the students in the first session. Students read the content before each class, and in class they discussed the educational content and solved the scenarios as assignments. The students in this group discussed eight scenarios as assignments.
In the gamification group, after the educational content was presented in each class, the homework was assigned and the students played one application scenario in class. The students in this group completed four scenarios as assignments.
In the gamification in the flipped learning environment group, the designed mobile application was presented in the first session of the course. Students read the content of each session before class, and in class they discussed the educational content and solved the scenarios as assignments. Eight scenarios were completed as homework by the students in the gamified environment.
In this study, a questionnaire with 10 key-feature questions (KFQs) was designed by an expert panel of 10 academic nurses. After the KFQ questionnaire was designed, its validity and reliability were examined. Validity was confirmed with the content validity ratio (CVR) from 14 experts (academic nurses) and qualitative validity with 7 academic and 7 clinical nurses; reliability was checked by test-retest. The CVR of the questionnaire was 0.96 and was confirmed. All seven academic and seven clinical nurses confirmed the qualitative validity of the questionnaire. According to the Lawshe tables, the minimum acceptable content validity coefficient for the number of participating experts (at least 10 people) is 0.49 [ 18 , 19 ], so the necessity of the tool's items was confirmed.
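The CVR values above come from Lawshe's formula, CVR = (n_e - N/2) / (N/2), where n_e of N panelists rate an item "essential". A minimal sketch, with hypothetical panel counts for illustration:

```python
# Lawshe's content validity ratio for a single item.
def content_validity_ratio(n_essential: int, n_experts: int) -> float:
    half = n_experts / 2
    return (n_essential - half) / half

print(content_validity_ratio(14, 14))  # unanimous 14-expert panel -> 1.0
print(round(content_validity_ratio(11, 14), 2))  # 11 of 14 rate it essential
```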
For the test-retest of the KFQ questionnaire, 10 nursing students completed the questionnaire twice, two weeks apart. The Spearman correlation coefficient between their answers was 0.93; a correlation coefficient above 0.7 is good [ 34 , 35 ].
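A test-retest check of this kind can be computed with SciPy's `spearmanr`; the two score lists below are hypothetical, not the study's data.

```python
# Test-retest reliability sketch: Spearman's rank correlation between
# two administrations of a questionnaire (hypothetical scores).
from scipy.stats import spearmanr

first_run = [7, 9, 6, 8, 10, 5, 7, 9, 8, 6]
second_run = [7, 8, 6, 9, 10, 5, 8, 9, 7, 6]
rho, p = spearmanr(first_run, second_run)
print(f"rho = {rho:.2f}")  # above 0.7 counts as good reliability
```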
Additionally, satisfaction with the education was investigated with the Measuring Student Satisfaction Scale from the Student Outcomes Survey [ 27 ], which includes 20 items. Its validity was confirmed with the CVR, and its reliability was checked by Cronbach's alpha. The CVR of the questionnaire was 0.91 and was confirmed. Cronbach's alpha was 0.69; a coefficient above 0.7 is good, 0.3–0.7 is acceptable, and less than 0.3 is poor [ 34 , 35 ], so the overall Cronbach's alpha indicated appropriate reliability.
The Sherer questionnaire was used to assess the self-efficacy of the nursing students [ 36 ]. This tool contains 17 items on a five-point Likert scale. Sherer et al. confirmed the reliability of the questionnaire with a Cronbach's alpha of 0.76 [ 36 ]. For this questionnaire as well, validity was confirmed with the CVR and reliability was checked by Cronbach's alpha; the CVR was 0.90 and was confirmed, and Cronbach's alpha was 0.45.
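For reference, Cronbach's alpha can be computed directly from a respondents-by-items score matrix as alpha = k/(k-1) * (1 - sum of item variances / variance of total scores). The 30x5 response matrix below is simulated for illustration, not the study's data.

```python
# Cronbach's alpha from a respondents x items score matrix (simulated data).
import numpy as np

def cronbach_alpha(scores: np.ndarray) -> float:
    """scores: rows are respondents, columns are items."""
    k = scores.shape[1]
    item_vars = scores.var(axis=0, ddof=1)         # per-item variances
    total_var = scores.sum(axis=1).var(ddof=1)     # variance of total score
    return k / (k - 1) * (1 - item_vars.sum() / total_var)

rng = np.random.default_rng(3)
trait = rng.integers(1, 6, size=(30, 1))                      # shared trait
items = np.clip(trait + rng.integers(-1, 2, (30, 5)), 1, 5).astype(float)
print(round(cronbach_alpha(items), 2))
```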
The research data were analyzed using the Statistical Package for the Social Sciences, version 20. The Kolmogorov-Smirnov test was used to assess the normality of the data. Data analysis used descriptive statistics, such as percentages, means and standard deviations, and statistical tests, such as the chi-square test, paired t test, and ANOVA. In all statistical tests, a significance level of less than 0.05 was used.
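The normality check followed by a pre/post comparison can be sketched with SciPy (the paper used SPSS; this is an equivalent illustration on simulated scores, not the study's data).

```python
# Sketch of the pipeline: Kolmogorov-Smirnov normality check, then a
# paired t test on pre/post scores at the 0.05 level (simulated data).
import numpy as np
from scipy.stats import kstest, ttest_rel

rng = np.random.default_rng(4)
pre = rng.normal(60, 10, 40)
post = pre + rng.normal(8, 5, 40)   # hypothetical post-intervention gain

# KS test of the pre-test scores against a normal fitted to the sample
ks_stat, p_norm = kstest(pre, "norm", args=(pre.mean(), pre.std(ddof=1)))
print(f"KS normality p = {p_norm:.3f}")

t_stat, p_paired = ttest_rel(post, pre)
print(f"paired t = {t_stat:.2f}, p = {p_paired:.4g}")
print("significant" if p_paired < 0.05 else "not significant")
```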
In the present study, 166 nursing students, 99 women and 67 men, with a mean (standard deviation) age of 21.29 (1.45) years, participated. The demographic characteristics of the participants are shown in Table 1 . The homogeneity of the intervention and control groups was checked with statistical methods, and the results are reported in Table 1 . There was no statistically significant difference in the demographic characteristics of the participants across the groups ( P > 0.05).
Comparing scores before and after the intervention, the paired t test indicated a significant difference in the satisfaction, learning and self-efficacy of the learners ( P < 0.001). Table 2 shows the results of the paired t tests.
The ANOVA showed a statistically significant difference between the mean knowledge and satisfaction scores after the intervention across the four groups ( P < 0.001). There was no significant difference in mean self-efficacy after the intervention across the four groups ( P = 0.101).
In the between-group analysis, there was a significant difference in the knowledge evaluation scores: the means of the gamification in the flipped learning environment group and the gamification group differed from those of the flipped class and lecture groups, assuming equal variances ( P < 0.001). There was also a significant difference at the 0.05 level between the gamification in the flipped learning environment group and the gamification group ( P = 0.03). Gamification and the flipped class did not differ significantly ( P = 0.054), and there was no significant difference between the flipped class and the lecture method ( P = 0.43).
According to the ANOVA results for the satisfaction scores, there was no significant difference between the means of the gamification in the flipped learning environment and gamification groups ( P = 0.49); however, both differed significantly from the flipped class and lecture groups. There was also a significant difference between the flipped class and the lecture method ( P < 0.01).
This study aimed to compare the effects of the lecture method, flipped class, gamification and gamification in a flipped learning environment on the performance of nursing students in assessing the health status of clients. The demographic characteristics of the participants (gender, age, academic semester, grade point average and theory course score) had the same distribution across the four groups, with no statistically significant difference ( P > 0.05).
Comparing the results before and after the training, the paired t test indicated a significant difference in the satisfaction, learning and self-efficacy of the learners ( P < 0.001), indicating that all four teaching methods affected students' learning, satisfaction and self-efficacy in evaluating clients' health status. However, in the comparison of the four groups, ANOVA revealed a statistically significant difference ( P < 0.001). Comparing the knowledge evaluation scores, the gamification in the flipped learning environment group differed significantly from the other groups ( P < 0.05), while there was no significant difference between the flipped class and lecture methods ( P = 0.439). According to the ANOVA results, satisfaction scores were greater in the gamification in the flipped learning environment and gamification groups than in the flipped class and lecture groups ( P < 0.01). These results indicate that teaching methods affect students' learning and satisfaction.
Rachayon and colleagues used task-based learning combined with digital games in a flipped learning environment to develop students' English language skills, and their results also indicated the success of combining these methods [ 7 ]. Muntrikaeo and colleagues used a similar model of task-based learning combined with games in a flipped environment for teaching English, and their findings were likewise positive [ 6 ]. The results of the current research, which integrated gamification into the flipped learning environment for teaching health status assessment to nursing students, are consistent with the above research.
Zou et al., in their systematic review, found that success in the flipped classroom is related to teachers’ creativity in making the classroom interactive, students’ readiness, and the use of technology [ 37 ]. In the present study, the flipped class, along with the use of gamification in the flipped learning environment, increased learner satisfaction and learning. Therefore, their findings are similar to the findings of the present study.
Hernon and colleagues reported that the use of technology plays a significant role in the development of nursing students' skills [ 4 ]. Regarding the use of educational applications for health assessment, their results agree with the current research: technology plays a role not only in learning but also in satisfaction with education. Considering the results of the present study and similar studies, we can conclude that gamification in the flipped learning environment is an interactive teaching method that can be used to improve nursing education. Gamification can increase the attractiveness of education, and if a good application is designed for a flipped environment, it frees more classroom time for discussion, interaction and scenario-based education and promotes satisfaction with education.
In this study, the satisfaction with education had a significant difference between the groups, but the students’ self-efficacy, despite the significant difference before and after the intervention, did not have a significant difference between the groups. Since all three studied methods were effective in students’ learning and self-efficacy, it can be said that teachers can improve educational effectiveness and satisfaction by using different methods and combining them in educational situations by considering resources and conditions.
The gamification method was associated with higher satisfaction, but it requires more resources, equipment and skilled personnel. The flipped class method requires fewer resources, is more cost-effective, and provides more time for practice and group discussion. Combining the two methods captures the advantages of both, which is confirmed by the results of the present study. The flipped environment appears to provide a good opportunity for lifelong learning, including the promotion of interaction and teamwork, and, combined with other methods, is associated with greater effectiveness and benefits.
In this study, knowledge and satisfaction with education differed significantly between the groups, but students' self-efficacy did not. This may be because the participants were second- and third-semester nursing students and the interactive skills of the students were not assessed. The researchers therefore recommend further research investigating interactive and communication skills using gamification in a flipped environment.
Therefore, this method is helpful in nursing education as well as in other medical fields. It could also be combined with other educational methods, such as task-based and team-based approaches, to develop team-based and task-based education. Gamification integrated into the flipped learning environment with mobile applications offers greater attractiveness and satisfaction alongside effective education, and appropriate applications are needed to create a sense of competition and learning. However, in this study, the interactive skills of students were not assessed. Finally, it is emphasized that teachers can improve the effectiveness of education with their creativity, depending on the situation, time, cost and available resources, by using and integrating educational methods.
Based on the results of the present research, it can be concluded that teaching methods affect students' learning and satisfaction, and that gamification in the flipped learning environment is more effective than the flipped class, gamification or lecture methods alone. The use of the flipped class method together with gamification was associated with greater attractiveness and satisfaction in addition to learning. Teachers can improve the effectiveness of education with their creativity, depending on the situation, time, cost and available resources, by using and integrating educational methods.
Not installing the program on IOS phones made it impossible for these users to use the application and drop out study, so we recommended that designed application for android and IOS. The ability of the professor to teach with the method of gamification in the flipped learning environment and his mastery of the application are necessary to provide necessary training to the teachers regarding the above methods.
Integrated gamification methods in the flipped learning environment with mobile applications have greater attractiveness and satisfaction. But, in this study, the interactive skills of students were not assessed. So the researchers recommended that more research be conducted with the aim of investigating interactive and communication skills using the gamification method in an upside-down environment.
Data is provided within the manuscript or supplementary information files.
The authors also wish to thank all the participants and everyone who helped us carry out the research, especially the staff of the Department of Medical Surgical Nursing, School of Nursing & Midwifery, Shahid Beheshti University of Medical Sciences.
The authors received no specific funding for this work.
Authors and affiliations.
Department of Medical Surgical Nursing, School of Nursing & Midwifery, Shahid Beheshti University of Medical Sciences, Vali-Asr Avenue, Cross of Vali-Asr Avenue and Hashemi Rafsanjani (Neiaiesh) Highway, Opposite to Rajaee Heart Hospital, Tehran, Iran
Raziyeh Ghafouri & Vahid Zamanzadeh
Department of Basic Sciences, School of Nursing & Midwifery, Shahid Beheshti University of Medical Sciences, Tehran, Iran
Malihe Nasiri
VZ and RG formulated the research question representing the study objective. VZ and RG prepared the proposal and reports. RG collected the data. MN performed the data analysis. All authors read and approved the final manuscript.
Correspondence to Raziyeh Ghafouri.
Ethics approval and consent to participate.
This study was approved by the ethics committee of Shahid Beheshti University of Medical Science (IR.SBMU.PHARMACY.REC.1402.152), and all methods were carried out in accordance with the research ethical codes of the Iran National Committee for Ethics in Biomedical Research. The authors guarantee that they have followed the ethical principles stated in the Declaration of Helsinki (to protect the life, health, dignity, integrity, right to self-determination, privacy, and confidentiality of personal information of research subjects) in all stages of the research. This is the online certificate of the research ethical code: https://ethics.research.ac.ir/ProposalCertificateEn.php?id=404003&Print=true&NoPrintHeader=true&NoPrintFooter=true&NoPrintPageBorder=true&LetterPrint=true . This study was registered in the Iranian Registry of Clinical Trials ( https://irct.behdasht.gov.ir ) on 14/12/2023, with the IRCT ID: IRCT20210131050189N7. To observe ethical considerations, School of Nursing & Midwifery of Shahid Beheshti University of Medical Sciences agreed to participate in the study; the research goals and procedures were elucidated to the participants, the participants were assured of information anonymity and confidentiality, and informed written consent was obtained from each participant and documented. They participated in the study voluntarily and could leave the study at any stage.
Not applicable.
The authors declare no competing interests.
Publisher’s note.
Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
Below is the link to the electronic supplementary material.
Supplementary material 2.
Rights and permissions.
Open Access This article is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License, which permits any non-commercial use, sharing, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if you modified the licensed material. You do not have permission under this licence to share adapted material derived from this article or parts of it. The images or other third party material in this article are included in the article’s Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article’s Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by-nc-nd/4.0/ .
Cite this article.
Ghafouri, R., Zamanzadeh, V. & Nasiri, M. Comparison of education using the flipped class, gamification and gamification in the flipped learning environment on the performance of nursing students in a client health assessment: a randomized clinical trial. BMC Med Educ 24 , 949 (2024). https://doi.org/10.1186/s12909-024-05966-2
Received : 15 March 2024
Accepted : 28 August 2024
Published : 30 August 2024
ISSN: 1472-6920
Speaker 1: Welcome to this overview of quantitative research methods. This tutorial will give you the big picture of quantitative research and introduce key concepts that will help you determine if quantitative methods are appropriate for your project study. First, what is educational research? Educational research is a process of scholarly inquiry designed to investigate the process of instruction and learning, the behaviors, perceptions, and attributes of students and teachers, the impact of institutional processes and policies, and all other areas of the educational process. The research design may be quantitative, qualitative, or a mixed methods design. The focus of this overview is quantitative methods. The general purpose of quantitative research is to explain, predict, investigate relationships, describe current conditions, or to examine possible impacts or influences on designated outcomes. Quantitative research differs from qualitative research in several ways. It works to achieve different goals and uses different methods and designs. This table illustrates some of the key differences. Qualitative research generally uses a small sample to explore and describe experiences through the use of thick, rich descriptions of detailed data in an attempt to understand and interpret human perspectives. It is less interested in generalizing to the population as a whole. For example, when studying bullying, a qualitative researcher might learn about the experience of the victims and the experience of the bully by interviewing both bullies and victims and observing them on the playground. Quantitative studies generally use large samples to test numerical data by comparing or finding correlations among sample attributes so that the findings can be generalized to the population. For example, one might describe a population of learners by gathering data on their age, gender, socioeconomic status, and attitudes toward their learning experiences.
If quantitative researchers were studying bullying, they might measure the effects of a bully on the victim by comparing students who are victims and students who are not victims of bullying using an attitudinal survey. In conducting quantitative research, the researcher first identifies the problem. For Ed.D. research, this problem represents a gap in practice. For Ph.D. research, this problem represents a gap in the literature. In either case, the problem needs to be of importance in the professional field. Next, the researcher establishes the purpose of the study. Why do you want to do the study, and what do you intend to accomplish? This is followed by research questions which help to focus the study. Once the study is focused, the researcher needs to review both seminal works and current peer-reviewed primary sources. Based on the research question and on a review of prior research, a hypothesis is created that predicts the relationship between the study's variables. Next, the researcher chooses a study design and methods to test the hypothesis. These choices should be informed by a review of methodological approaches used to address similar questions in prior research. Finally, appropriate analytical methods are used to analyze the data, allowing the researcher to draw conclusions and inferences about the data, and answer the research question that was originally posed. In quantitative research, research questions are typically descriptive, relational, or causal. Descriptive questions constrain the researcher to describing what currently exists. With a descriptive research question, one can examine perceptions or attitudes as well as more concrete variables such as achievement. For example, one might describe a population of learners by gathering data on their age, gender, socioeconomic status, and attributes towards their learning experiences. Relational questions examine the relationship between two or more variables. 
The X variable has some linear relationship to the Y variable. Causal inferences cannot be made from this type of research. For example, one could study the relationship between students' study habits and achievements. One might find that students using certain kinds of study strategies demonstrate greater learning, but one could not state conclusively that using certain study strategies will lead to or cause higher achievement. Causal questions, on the other hand, are designed to allow the researcher to draw a causal inference. A causal question seeks to determine if a treatment variable in a program had an effect on one or more outcome variables. In other words, the X variable influences the Y variable. For example, one could design a study that answered the question of whether a particular instructional approach caused students to learn more. The research question serves as a basis for posing a hypothesis, a predicted answer to the research question that incorporates operational definitions of the study's variables and is rooted in the literature. An operational definition matches a concept with a method of measurement, identifying how the concept will be quantified. For example, in a study of instructional strategies, the hypothesis might be that students of teachers who use Strategy X will exhibit greater learning than students of teachers who do not. In this study, one would need to operationalize learning by identifying a test or instrument that would measure learning. This approach allows the researcher to create a testable hypothesis. Relational and causal research relies on the creation of a null hypothesis, a version of the research hypothesis that predicts no relationship between variables or no effect of one variable on another. When writing the hypothesis for a quantitative question, the null hypothesis and the research or alternative hypothesis use parallel sentence structure. 
In this example, the null hypothesis states that there will be no statistical difference between groups, while the research or alternative hypothesis states that there will be a statistical difference between groups. Note also that both hypothesis statements operationalize the critical thinking skills variable by identifying the measurement instrument to be used. Once the research questions and hypotheses are solidified, the researcher must select a design that will create a situation in which the hypotheses can be tested and the research questions answered. Ideally, the research design will isolate the study's variables and control for intervening variables so that one can be certain of the relationships being tested. In educational research, however, it is extremely difficult to establish sufficient controls in the complex social settings being studied. In our example of investigating the impact of a certain instructional strategy in the classroom on student achievement, each day the teacher uses a specific instructional strategy. After school, some of the students in her class receive tutoring. Other students have parents that are very involved in their child's academic progress and provide learning experiences in the home. These students may do better because they received extra help, not because the teacher's instructional strategy is more effective. Unless the researcher can control for the intervening variable of extra help, it will be impossible to effectively test the study's hypothesis. Quantitative research designs can fall into two broad categories, experimental and quasi-experimental. Classic experimental designs are those that randomly assign subjects to either a control or treatment comparison group. The researcher can then compare the treatment group to the control group to test for an intervention's effect, known as a between-subject design. 
It is important to note that the control group may receive a standard treatment or may not receive any treatment at all. Quasi-experimental designs do not randomly assign subjects to groups, but rather take advantage of existing groups. A researcher can still have a control and comparison group, but assignment to the groups is not random. The use of a control group is not required. However, the researcher may choose a design in which a single group is pre- and post-tested, known as a within-subjects design. Or a single group may receive only a post-test. Since quasi-experimental designs lack random assignment, the researcher should be aware of the threats to validity. Educational research often attempts to measure abstract variables such as attitudes, beliefs, and feelings. Surveys can capture data about these hard-to-measure variables, as well as other self-reported information such as demographic factors. A survey is an instrument used to collect verifiable information from a sample population. In quantitative research, surveys typically include questions that ask respondents to choose a rating from a scale, select one or more items from a list, or other responses that result in numerical data. Studies that use surveys or tests need to include strategies that establish the validity of the instrument used. There are many types of validity that need to be addressed. Face validity. Does the test appear at face value to measure what it is supposed to measure? Content validity. Content validity includes both item validity and sampling validity. Item validity ensures that the individual test items deal only with the subject being addressed. Sampling validity ensures that the range of item topics is appropriate to the subject being studied. For example, item validity might be high, but if all the items only deal with one aspect of the subject, then sampling validity is low. Content validity can be established by having experts in the field review the test.
Concurrent validity. Does a new test correlate with an older, established test that measures the same thing? Predictive validity. Does the test correlate with another related measure? For example, GRE tests are used at many colleges because these schools believe that a good grade on this test increases the probability that the student will do well at the college. Linear regression can establish the predictive validity of a test. Construct validity. Does the test measure the construct it is intended to measure? Establishing construct validity can be a difficult task when the constructs being measured are abstract. But it can be established by conducting a number of studies in which you test hypotheses regarding the construct, or by completing a factor analysis to ensure that you have the number of constructs that you say you have. In addition to ensuring the validity of instruments, the quantitative researcher needs to establish their reliability as well. Strategies for establishing reliability include Test retest. Correlates scores from two different administrations of the same test. Alternate forms. Correlates scores from administrations of two different forms of the same test. Split half reliability. Treats each half of one test or survey as a separate administration and correlates the results from each. Internal consistency. Uses Cronbach's coefficient alpha to calculate the average of all possible split halves. Quantitative research almost always relies on a sample that is intended to be representative of a larger population. There are two basic sampling strategies, random and non-random, and a number of specific strategies within each of these approaches. This table provides examples of each of the major strategies. The next section of this tutorial provides an overview of the procedures in conducting quantitative data analysis. 
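As an illustration of the internal-consistency idea described above, Cronbach's coefficient alpha can be computed directly from item scores. This is a minimal sketch in plain Python with invented survey data; a real analysis would normally use a statistics package.

```python
from statistics import variance

def cronbach_alpha(items):
    """Cronbach's alpha for k items answered by the same n respondents.

    `items` is a list of k lists; items[i][j] is respondent j's score on
    item i.  alpha = k/(k-1) * (1 - sum(item variances) / variance(totals)).
    """
    k = len(items)
    n = len(items[0])
    # Sum of the sample variances of the individual items.
    item_var_sum = sum(variance(item) for item in items)
    # Each respondent's total score across all items.
    totals = [sum(item[j] for item in items) for j in range(n)]
    return (k / (k - 1)) * (1 - item_var_sum / variance(totals))

# Three Likert-style items answered by five (invented) respondents:
alpha = cronbach_alpha([[4, 5, 3, 5, 4], [4, 4, 3, 5, 4], [3, 5, 3, 4, 4]])
# alpha ≈ 0.867 — conventionally read as acceptable internal consistency
```

Split-half reliability follows the same spirit: alpha is the average of all possible split-half correlations, which is why it is the most commonly reported coefficient.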
There are specific procedures for conducting the data collection, preparing for and analyzing data, presenting the findings, and connecting to the body of existing research. This process ensures that the research is conducted as a systematic investigation that leads to credible results. Data comes in various sizes and shapes, and it is important to know about these so that the proper analysis can be used on the data. In 1946, S.S. Stevens first described the properties of measurement systems that allowed decisions about the type of measurement and about the attributes of objects that are preserved in numbers. These four types of data are referred to as nominal, ordinal, interval, and ratio. First, let's examine nominal data. With nominal data, there is no number value that indicates quantity. Instead, a number has been assigned to represent a certain attribute, like the number 1 to represent male and the number 2 to represent female. In other words, the number is just a label. You could also assign numbers to represent race, religion, or any other categorical information. Nominal data only denotes group membership. With ordinal data, there is again no indication of quantity. Rather, a number is assigned for ranking order. For example, satisfaction surveys often ask respondents to rank order their level of satisfaction with services or programs. The next level of measurement is interval data. With interval data, there are equal distances between two values, but there is no natural zero. A common example is the Fahrenheit temperature scale. Differences between the temperature measurements make sense, but ratios do not. For instance, 20 degrees Fahrenheit is not twice as hot as 10 degrees Fahrenheit. You can add and subtract interval level data, but they cannot be divided or multiplied. Finally, we have ratio data. Ratio is the same as interval, however ratios, means, averages, and other numerical formulas are all possible and make sense. 
Zero has a logical meaning, which shows the absence of, or having none of. Examples of ratio data are height, weight, speed, or any quantities based on a scale with a natural zero. In summary, nominal data can only be counted. Ordinal data can be counted and ranked. Interval data can also be added and subtracted, and ratio data can also be used in ratios and other calculations. Determining what type of data you have is one of the most important aspects of quantitative analysis. Depending on the research question, hypotheses, and research design, the researcher may choose to use descriptive and or inferential statistics to begin to analyze the data. Descriptive statistics are best illustrated when viewed through the lens of America's pastimes. Sports, weather, economy, stock market, and even our retirement portfolio are presented in a descriptive analysis. Basic terminology for descriptive statistics are terms that we are most familiar in this discipline. Frequency, mean, median, mode, range, variance, and standard deviation. Simply put, you are describing the data. Some of the most common graphic representations of data are bar graphs, pie graphs, histograms, and box and whisker graphs. Attempting to reach conclusions and make causal inferences beyond graphic representations or descriptive analyses is referred to as inferential statistics. In other words, examining the college enrollment of the past decade in a certain geographical region would assist in estimating what the enrollment for the next year might be. Frequently in education, the means of two or more groups are compared. When comparing means to assist in answering a research question, one can use a within-group, between-groups, or mixed-subject design. In a within-group design, the researcher compares measures of the same subjects across time, therefore within-group, or under different treatment conditions. This can also be referred to as a dependent-group design. 
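The descriptive measures listed earlier (mean, median, mode, range, standard deviation) are all available in Python's standard library; the sketch below is illustrative only, and the score list is invented.

```python
from statistics import mean, median, mode, stdev

def describe(values):
    """Return basic descriptive statistics for a list of numeric scores."""
    return {
        "n": len(values),
        "mean": mean(values),            # arithmetic average
        "median": median(values),        # middle value of the sorted data
        "mode": mode(values),            # most frequent value
        "range": max(values) - min(values),
        "stdev": stdev(values),          # sample standard deviation
    }

# Invented exam scores for eight students:
summary = describe([72, 85, 85, 90, 64, 78, 85, 70])
# summary["mean"] == 78.625, summary["median"] == 81.5, summary["mode"] == 85
```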
The most basic example of this type of quasi-experimental design would be if a researcher conducted a pretest of a group of students, subjected them to a treatment, and then conducted a post-test. The group has been measured at different points in time. In a between-group design, subjects are assigned to one of the two or more groups. For example, Control, Treatment 1, Treatment 2. Ideally, the sampling and assignment to groups would be random, which would make this an experimental design. The researcher can then compare the means of the treatment group to the control group. When comparing two groups, the researcher can gain insight into the effects of the treatment. In a mixed-subjects design, the researcher is testing for significant differences between two or more independent groups while subjecting them to repeated measures. Choosing a statistical test to compare groups depends on the number of groups, whether the data are nominal, ordinal, or interval, and whether the data meet the assumptions for parametric tests. Nonparametric tests are typically used with nominal and ordinal data, while parametric tests use interval and ratio-level data. In addition to this, some further assumptions are made for parametric tests that the data are normally distributed in the population, that participant selection is independent, and the selection of one person does not determine the selection of another, and that the variances of the groups being compared are equal. The assumption of independent participant selection cannot be violated, but the others are more flexible. The t-test assesses whether the means of two groups are statistically different from each other. This analysis is appropriate whenever you want to compare the means of two groups, and especially appropriate as the method of analysis for a quasi-experimental design. When choosing a t-test, the assumptions are that the data are parametric. 
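As a sketch of the two-group comparison just described, the pooled-variance (equal-variances) t statistic can be computed by hand; in practice a package such as SciPy would also supply the p-value. The two samples below are invented.

```python
from statistics import mean, variance
from math import sqrt

def student_t(sample_a, sample_b):
    """Independent-samples (pooled-variance) t statistic and degrees of freedom.

    Assumes the parametric conditions mentioned in the tutorial: roughly
    normal data and equal group variances.  The statistic is then compared
    against a t distribution with the returned df to obtain a p-value.
    """
    na, nb = len(sample_a), len(sample_b)
    va, vb = variance(sample_a), variance(sample_b)
    # Pooled estimate of the common variance.
    sp2 = ((na - 1) * va + (nb - 1) * vb) / (na + nb - 2)
    t = (mean(sample_a) - mean(sample_b)) / sqrt(sp2 * (1 / na + 1 / nb))
    return t, na + nb - 2

# Invented post-test scores for a treatment and a control group:
t, df = student_t([5, 6, 7, 8, 9], [1, 2, 3, 4, 5])  # → (4.0, 8)
```

When the equal-variance assumption is doubtful, Welch's variant (the default in many packages) adjusts the standard error and degrees of freedom instead.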
The analysis of variance, or ANOVA, assesses whether the means of more than two groups are statistically different from each other. When choosing an ANOVA, the assumptions are that the data are parametric. The chi-square test can be used when you have non-parametric data and want to compare differences between groups. The Kruskal-Wallis test can be used when there are more than two groups and the data are non-parametric. Correlation analysis is a set of statistical tests to determine whether there are linear relationships between two or more sets of variables from the same list of items or individuals, for example, achievement and performance of students. The tests provide a statistical yes or no as to whether a significant relationship or correlation exists between the variables. A correlation test consists of calculating a correlation coefficient between two variables. Again, there are parametric and non-parametric choices based on the assumptions of the data. Pearson r correlation is widely used in statistics to measure the strength of the relationship between linearly related variables. Spearman rank correlation is a non-parametric test that is used to measure the degree of association between two variables. The Spearman rank correlation test does not make any assumptions about the distribution and is used when the Pearson test would give misleading results. Kendall's tau is also often included in this list of non-parametric correlation tests to examine the strength of the relationship if there are fewer than 20 rankings. Linear regression and correlation are similar and often confused. Sometimes your methodologist will encourage you to examine both the calculations. Calculate linear correlation if you measured both variables, x and y. Make sure to use the Pearson parametric correlation coefficient if you are certain you are not violating the test assumptions. Otherwise, choose the Spearman non-parametric correlation coefficient.
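The parametric/non-parametric pairing above can be sketched directly: Pearson's r is computed on the raw values, and Spearman's rho is simply Pearson's r applied to the ranks. This simple version ignores tied ranks, which library implementations handle; the data in the usage lines are invented.

```python
from statistics import mean
from math import sqrt

def pearson_r(xs, ys):
    """Pearson correlation coefficient between two equal-length lists."""
    mx, my = mean(xs), mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sqrt(sum((x - mx) ** 2 for x in xs))
    sy = sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

def spearman_rho(xs, ys):
    """Spearman rank correlation: Pearson's r on the ranks (no tie correction)."""
    def ranks(vs):
        order = sorted(range(len(vs)), key=lambda i: vs[i])
        r = [0] * len(vs)
        for rank, i in enumerate(order, start=1):
            r[i] = rank
        return r
    return pearson_r(ranks(xs), ranks(ys))

pearson_r([1, 2, 3], [2, 4, 6])       # → 1.0 (perfect linear relationship)
spearman_rho([10, 20, 30], [1, 4, 9]) # → 1.0 (perfect monotonic relationship)
```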
If either variable has been manipulated using an intervention, do not calculate a correlation. While linear regression does indicate the nature of the relationship between two variables, like correlation, it can also be used to make predictions because one variable is considered explanatory while the other is considered a dependent variable. Establishing validity is a critical part of quantitative research. As with the nature of quantitative research, there is a defined approach or process for establishing validity. This also supports the generalizability of the findings. For a study to be valid, the evidence must support the interpretations of the data, the data must be accurate, and their use in drawing conclusions must be logical and appropriate. Construct validity concerns whether what you did for the program was what you wanted to do, or whether what you observed was what you wanted to observe. Construct validity concerns whether the operationalization of your variables are related to the theoretical concepts you are trying to measure. Are you actually measuring what you want to measure? Internal validity means that you have evidence that what you did in the study, i.e., the program, caused what you observed, i.e., the outcome, to happen. Conclusion validity is the degree to which conclusions drawn about relationships in the data are reasonable. External validity concerns the process of generalizing, or the degree to which the conclusions in your study would hold for other persons in other places and at other times. Establishing reliability and validity in your study is one of the most critical elements of the research process. Once you have decided to embark upon the process of conducting a quantitative study, use the following steps to get started. First, review research studies that have been conducted on your topic to determine what methods were used. Consider the strengths and weaknesses of the various data collection and analysis methods.
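The distinction drawn earlier between correlation and regression comes down to fitting a least-squares line and then using it for prediction; a minimal sketch, with invented x and y values in the usage lines:

```python
from statistics import mean

def fit_line(xs, ys):
    """Least-squares slope and intercept for simple linear regression y = a + b*x."""
    mx, my = mean(xs), mean(ys)
    b = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
         / sum((x - mx) ** 2 for x in xs))
    a = my - b * mx
    return a, b

def predict(a, b, x):
    """Predicted y for a new x, given the fitted intercept a and slope b."""
    return a + b * x

# Invented data that happen to lie exactly on y = 1 + 2x:
a, b = fit_line([1, 2, 3, 4], [3, 5, 7, 9])  # → (1.0, 2.0)
predict(a, b, 5)                             # → 11.0
```

Unlike a correlation coefficient, the fitted line is directional: x is treated as explanatory and y as the outcome, which is what makes prediction meaningful.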
Next, review the literature on quantitative research methods. Every aspect of your research has a body of literature associated with it. Just as you would not confine yourself to your course textbooks for your review of research on your topic, you should not limit yourself to your course texts for your review of the methodological literature. Read broadly and deeply in the scholarly literature to gain expertise in quantitative research. Additional self-paced tutorials have been developed on the different methodologies and techniques associated with quantitative research; make sure that you complete all of them and review them as often as needed. You will then be prepared to complete a literature review of the specific methodologies and techniques that you will use in your study. Thank you for watching.