CURRICULUM, INSTRUCTION, AND PEDAGOGY article

Research methods in teacher education: meaningful engagement through service-learning.

Dominik E. Froehlich

  • 1 Department of Education and Centre for Teacher Education, University of Vienna, Vienna, Austria
  • 2 Ecological Economics & RCE Vienna, Vienna University of Economics and Business, Vienna, Austria
  • 3 Centre for Teacher Education, University of Vienna, Vienna, Austria

Competence in research methods is a major contribution to (future) teachers’ professionalism. In the pedagogical approach presented here, which we call the Teaching Clinic, we combine service-learning and design-based research to create meaningful learning engagements. We present two cases to illustrate the objectives, processes, and outcomes of the service-learning projects and reflect on both in terms of learning and service outcomes. This includes discussions of how this pedagogical approach fostered motivation and engagement, how principles of transfer of training are observed, and what this means more generally for school-university relationships.

Introduction

Research skills, such as the knowledge and skills necessary to pose clear (scientific) questions, to critically review the literature, and to collect, analyze, and interpret data, are important for navigating the complexity of daily life. This is also true for teachers, for whom research skills are increasingly seen as important elements of professionalism (Amirova et al., 2020) and of the establishment of evidence-based teaching practices (Burke et al., 2005). In this article, we present case studies of a pedagogical approach that helps increase the perceived relevance of discussing methodological issues within teacher education (Davidson and Palermo, 2015): the Teaching Clinic (TC).

TCs are designed as semester-long courses in which teachers in training (henceforth “students”) collaborate with practicing teachers (henceforth “teachers”) on pedagogical innovations in the teachers’ classrooms through design-based research (Bakker, 2018). They can be seen as instances of service-learning in the domain of teacher education (Stoecker, 2016). TCs address the combined needs of students, such as the wish for more formal experiences directly in the school context and for well-transferable competences and knowledge, and of teachers, who may want more direct access to state-of-the-art knowledge and support in implementing pedagogical innovations.

The objective of this article is to present the pedagogical approach taken in the TC and to explore its outcomes in terms of research competence and service through the accounts of stakeholders in the TC. Specifically, we present two exemplary projects that were conducted within the TC and showcase the reflections of students, teachers, and the course facilitators.

Context and Frameworks

The Teaching Clinic (TC) is a course for Master students in a teacher education curriculum at an Austrian university. Established teachers submit research questions about current professional challenges. These questions are then picked up by students, who conceptualize and execute research projects to find evidence-based solutions.

The primary objective of doing research at this very local and practical level is to instill a scientific mindset in the students. Research skills are increasingly seen as tools of professional practice, not as something confined to academic research. Importantly, this perspective is shared not only with the students who work on the projects, but also with the teachers who submit them. In that sense, the TC is about transfer from university to practice (see also the current debate about the “Third Mission”; Schober et al., 2016).

In terms of research methods, two secondary objectives of the format exist. First, the TC presents a clear purpose, and, therefore, motivation, to apply research methods. Second, the students apply research methods in a context that is almost identical to the context of their later work. This facilitates the transfer from the training context to the subsequent professional work as teachers (Blume et al., 2010; Quesada-Pallarès and Gegenfurtner, 2015).

Main Pedagogical Approach: Service-Learning

The main pedagogical frame used to conduct the TC was service-learning (Sotelino-Losada et al., 2021). There are numerous definitions of service-learning, but perhaps the most cited is the one formulated by Bringle and Hatcher (1995), who define service-learning as a

“...course-based, credit-bearing educational experience in which students (a) participate in an organized service activity that meets identified community needs and (b) reflect on the service activity in such a way as to gain further understanding of course content, a broader appreciation of the discipline, and an enhanced sense of personal values and civic responsibility” (p. 212).

Service-learning is an experience-based learning approach (Biberhofer and Rammel, 2017) that combines learning objectives with community service and emphasizes individual development and social responsibility through providing a service for others; service situations are viewed as learning settings and opportunities for public engagement (Forman and Wilkinson, 1997). According to Furco (1996), the key lies in the equal benefit for providers (TC: students) and recipients (TC: teachers). The TC could also be discussed from the perspective of transformative learning (Mezirow and Taylor, 2009), as the learning goal of the seminar is not pure knowledge acquisition, but “building the capacity of students as agents of change” (Biberhofer and Rammel, 2017, p. 66). The TC provides a rather open learning environment, in which students engage in an open dialogue with each other, with the teachers, and with the course facilitator, who does not necessarily possess the relevant subject-matter expertise but provides feedback and guidance throughout this process of dialectic inquiry.

Useful Methodological Lens: Design-Based Research and Action Research

As stipulated above, the main objective of the TC is to implement research projects at a local level in the teaching context. One methodological perspective that is very well suited to this aim is design-based research (DBR). DBR is a research approach that claims to overcome “the gap between educational practice and theory, because it aims both at developing theories about domain-specific learning and the means that are designed to support that learning” (Bakker and Van Eerde, 2015, p. 430). In DBR, the design of learning environments proceeds in a reflective and cyclic process, simultaneous with the testing or development of theory. The design includes the selection and creation of interventions, which is done in cooperation with practitioners while retaining only limited control over the situation. This research approach aims to explain and advise on ways of teaching and learning that have proved useful and to develop theories that can be predictive for educational practice. Because of its interventionist nature, researchers conducting this type of research are often referred to as “didactical engineers” (Anderson and Shattuck, 2012; Bakker and Van Eerde, 2015).

In the TC, we use DBR as a methodological frame to set up the projects. On a micro level, different projects feature very different data (e.g., video recordings, surveys among pupils, interviews, texts) and methods (e.g., field experiments, statistical analyses, content analysis). The students need to decide which ones to use, collect the appropriate data, and run the analyses.

In this section, we present how TCs are a useful context for future teachers to develop research competences. Since this is the first discussion of TCs, we use case studies to explore the outcomes of this pedagogical format. The case studies presented here contain reflections of students, teachers, and course facilitators based on a set of guiding questions concerning research methods and service-learning. Specifically, two independent TC projects will be presented. The first project focused on implementing concepts of Education for Sustainable Development (ESD) in the context of socio-economically disadvantaged classes in the field of Science, Technology, Engineering, and Mathematics (STEM). A team of two students and two teachers collaborated to further develop and evaluate lesson plans based on classroom experience, pupils’ feedback, and expert knowledge. The second project focused on improving feedback strategies in response to pupils’ writing in language classrooms. Using an experimental research design, a team of four students generated data to allow for the evidence-based improvement of personal feedback and marking strategies.

Both projects will be reflected on from the perspectives of multiple stakeholders; the team of authors of this article includes a Master student, a teacher, and a researcher (and facilitator of the course). This reflects the nature of the TC as a learning experience that is co-created by multiple stakeholders; the participating students are not just learners, but also co-researchers and pedagogical co-designers (see Bovill et al., 2016). In the context of this publication, students not only helped by providing additional reflections and data (see Case 1), but also took the position of co-author (Case 2 was written by a student of the project; the course facilitator, an experienced researcher and first author of this text, provided feedback but otherwise did not interfere in the writing process; for faculty-student co-authoring see also Abbott et al., 2021). For each case, we will first describe the objectives as laid out by the submitting teacher(s), then the methodological process used to find answers to the questions posed, and finally the outcomes as reported back to the teacher(s).

Case 1: Education for Sustainable Development for Socio-Economically Disadvantaged Classes in the Prevocational School Sector (Teacher’s View)

This first case about Education for Sustainable Development (ESD) for socio-economically disadvantaged classes in the prevocational school sector is presented from the point of view of the teacher (who submitted the problem to the TC).

The relevance of the global educational environment for social fields of action was already taken up by the United Nations (UN) before the turn of the millennium and led to the years 2005–2014 being declared the World Decade of Education for Sustainable Development (WDESD) (Combes, 2005). In the German-speaking countries, the Orientation framework for the learning area “Global Development” serves as an essential contribution to explicit didactics for ESD in the secondary education sector (Schreiber and Siege, 2016).

This pedagogical concept was used by the students to support two teachers at a prevocational school in implementing ESD didactics in Science, Technology, Engineering, and Mathematics (STEM) lessons. The aim of the curriculum was to help low-income pupils at a vulnerable prevocational school location develop in-demand competences in the field of STEM professions.

Another objective was to strengthen the students’ sense of responsibility for society and the environment, in alignment with the bottom-up drive of the “Fridays for Future” movement. The starting point for the research needs in schools was a study conducted by the German Federal Environment Agency, which asked whether environmental protection as a motive can be used to address young people’s motivation to enter STEM professions more successfully than before (Örtl, 2017). The results of the study imply, among other aspects, that STEM didactics have close links to ESD and that synergetic overlaps in this area seem to be a promising approach for STEM lessons at prevocational schools.

Throughout the TC, promising learning formats in STEM lessons were tested at the chosen school site in iterative cycles of implementation, evaluation, and adjustment in the sense of DBR. Learning journals produced by the students for the different learning formats in STEM lessons on the topics of “Renewable Energies” and “Climate Change” served as the primary data for creating a scientifically and empirically driven curriculum for motivating students to pursue STEM professions (see Figure 1).


FIGURE 1. Overview of the work process in Case 1.

The method of structured qualitative content analysis (Mayring, 2014) was used to search for indicators that make learning formats subjectively interesting for students. The students were required to document steps and problems that occurred during the implementation and to rate the learning opportunities on an ordinal scale from one to five upon completing the learning journal. The underlying learning formats included problem-centered films, concrete technical tasks (programming, mechanics, construction, electronics, and applied computer science), and external workshops and lectures with companies from the technology sector.
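To illustrate the kind of aggregation such an analysis involves, the tallying step can be sketched in a few lines. All entries, format names, indicator labels, and ratings below are invented purely for illustration; they are not the project’s data, and a real analysis would derive its categories through Mayring-style coding of the journals.

```python
from collections import defaultdict
from statistics import mean

# Hypothetical journal entries: (learning format, coded indicators, rating 1-5).
entries = [
    ("problem-centered film", ["real-world relevance"], 4),
    ("programming task", ["hands-on activity", "real-world relevance"], 5),
    ("external workshop", ["contact with professionals"], 3),
    ("programming task", ["hands-on activity"], 4),
]

ratings = defaultdict(list)          # ordinal ratings per learning format
indicator_counts = defaultdict(int)  # frequency of each coded indicator

for fmt, indicators, rating in entries:
    ratings[fmt].append(rating)
    for indicator in indicators:
        indicator_counts[indicator] += 1

for fmt, rs in sorted(ratings.items()):
    print(f"{fmt}: mean rating {mean(rs):.1f} (n={len(rs)})")
for indicator, n in sorted(indicator_counts.items(), key=lambda kv: -kv[1]):
    print(f"{indicator}: coded {n}x")
```

Such a tally only supports the qualitative interpretation; the decisive step remains the content-analytic coding itself.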

After an initial review of the data, implications for the indicator “perceived as subjectively interesting by the students” could be drawn. The findings showed that individual, isolated learning opportunities on the topic of climate change do not necessarily lead to the desired effect of students showing intrinsic motivation to acquire the professional skills relevant to finding solutions.

Based on these interim results, the TC students consulted the scientific literature, which allowed for the contextualization of socially relevant and scientific-technical dimensions in the acquisition of competencies in the sense of ESD. The framework for this approach was provided by the German Federal Ministry for Economic Cooperation and Development (BMZ) and the German Conference of Ministers of Education and Cultural Affairs (KMK) (Schreiber and Siege, 2016).

Through the continuous cycles of the DBR approach using different learning formats in ongoing school lessons, the students were able to develop a hybrid ESD/STEM curriculum step by step by evaluating the data material. Decisive input for the concrete lesson plans was derived from the indicators identified through the structured content analysis according to Mayring (2014), that is, those perceived by the students as subjectively interesting and motivating. Due to COVID-19-related school closures, it was not possible to complete a full annual curriculum. Nevertheless, a total of 16 lessons that met the requirements of the research assignment based on the identified indicators were designed. The curriculum created through the cooperation of school and TC now serves as a preliminary study and basis for a fully empirical main study, to be carried out at several school locations after the worldwide COVID-19 pandemic. In the final step, the students used the results of the qualitative preliminary study to formulate hypotheses for the main study in accordance with the underlying research question, following Mayring’s (2014) methodological approach.

In addition to the general results of the qualitative preliminary study, various positive effects on the stakeholders could be identified in the present case as a result of the service-learning offering. The question posed in the case study, whether environmental protection issues can contextually contribute to motivating young people in STEM classes to take an interest in related professions, could not be fully answered. However, the DBR approach served to identify those indicators, based on different learning formats, that were rated as subjectively interesting and motivating by students. As a pedagogical and didactic core concept, the “Recognize-Evaluate-Act” principle from the Orientation framework for the learning area “Global Development” (Schreiber and Siege, 2015) turned out to be particularly promising. Furthermore, a concrete follow-up research assignment for the TC could be derived from the results: to initiate a fully comprehensive empirical study based on the pre-formulated hypotheses.

The TC research semester was described by the students as an eye-opening experience at the intersection of university teaching and practical school experience. In this case, the service-learning project enabled the Master students to implement theoretically learned scientific methods in a practical way within the school environment. The scholarly exposure to ESD content, along with instructional development using STEM learning opportunities, gave the Master students a holistic view of practice-based teaching and learning research.

“The exchange, especially the feedback, with teaching staff at a preparatory vocational school with difficult socio-economic conditions was far more informative and practically relevant to me than most frontal lectures at the university. In addition to the practical and school-relevant part of the research semester, the TC together with ESD principles was an enriching support for me to be able to conduct current educationally relevant research in a scientifically and methodologically correct way. The balance between the cornerstones of school practice, TC and the final research work has given me a new perspective and understanding of the profession of a teacher and the different places of work.” (Student)

Furthermore, the underlying DBR approach was identified as promising for adapting hybrid ESD-STEM learning formats and teaching content and for determining their learning effects on students.

The service-learning format freed up additional resources for instructional development that would have been difficult to find during the regular school year due to administrative duties and other teacher commitments. This gave teachers the opportunity not only to get ideas for lesson design, but also to further develop their own teaching based on sound and up-to-date scientific methodology. Reported learnings included new ways “to inspire the students with new approaches and to show them that STEM cannot be purely theoretical, but that it is important for them and society.”

Through the joint development of the curriculum, it was possible to link subject-related STEM lessons with social relevance, which often seems intangible for students, especially in STEM subjects. Teachers attributed great importance to this interconnection for developing students’ ability to explore, reflect on, and critically evaluate scientific content from multiple perspectives.

“Experiencing values such as sustainability, environmental awareness and solidarity […] provide a good basis for developing into independent and responsible personalities.” (Teacher)

The context of teaching at a socially vulnerable school site in particular calls for a value-oriented attitude and a precise conception of learning formats, alongside topics that are relevant to the realities of the young people’s lives.

The composition of the student body in the underlying pre-vocational preparation class showed a high degree of heterogeneity in terms of country of origin and socio-economic status: 90% of the students had a first language other than German, and the vast majority could be classified as belonging to an underprivileged segment of the population. The empowerment of working on a curriculum for their own future careers led to increased motivation in the lessons, which was evident in the learning journals. This motivation was also reflected in a steeper learning curve in STEM-related subject knowledge. Moreover, the students’ engagement with the environmental and social issues of the 21st century led to an observably increased interest in technical and scientific career profiles.

Case 2: Effective Feedback (Student’s View)

The second case illustrates the students’ perspective and is written by a student of the TC. The project was carried out in the summer term of 2020 by a team of four students. The collaboration took place in a school in Vienna with two teachers and two of their lower-secondary classes.

At the beginning of the TC, we were confronted with a common problem of teachers: an English and a German teacher reported that they spent much time correcting their pupils’ assignments while suspecting that their pupils did not use the feedback for their own progress. In close exchange with the teachers and after an initial evaluation of the problem, we formulated a project goal: an intervention should be designed to counteract this problem and improve the situation for both learners and teachers.

As a first step, a thorough literature review was necessary to find appropriate strategies to tackle the problem. There exists a plethora of publications about feedback strategies; to narrow down our focus, we opted for the “minimal marking” approach because this strategy directly addresses both issues voiced by the teachers. As Hyland (1990) puts it succinctly:

Many teachers find marking to be a tedious and unrewarding chore. While it is a crucial aspect of the classroom writing process, our diligent attention and careful comments only rarely seem to bring about improvements in subsequent work (p. 279).

Besides Hyland (1990), Haswell (1983) also dedicated a publication to the same issue. Both suggested minimal marking as a solution that reduces the teacher’s workload while simultaneously increasing the positive effect of the feedback. The basic principle of minimal marking is that, instead of detailed feedback, only a cross or a check is placed beside the line in which a mistake occurred; subsequently, it is the pupil’s task to correct their own text by identifying the mistakes and correcting them using prior knowledge or a dictionary (p. 600). Thus, the pupil receives only as much information as necessary and is encouraged to edit the text independently (Haswell, 1983, p. 601). Through this approach, pupils should be enabled to edit their texts without much support from the teacher or other adults, which should help them develop essential writing skills. This method is considered especially effective because it requires the pupil to act on the feedback received from the teacher (Hyland, 1990, p. 279). Although this approach is still applied today (McNeilly, 2014), it has received little attention in research. Due to this research gap and our personal interest in the topic as future language teachers, we ventured out to explore the effects of minimal marking on (a) learners’ mistake awareness, (b) the time teachers spend on giving feedback, and (c) the quality of feedback as perceived by pupils and teachers.

To answer these research questions, we chose a set of quantitative and qualitative methods (see Figure 2). The procedure can be described as follows: pupils of both classes were divided into an experimental group and a control group. All pupils of one class were then asked to write a text in response to the same task. The teachers gave feedback to the control group using their traditional method, in which they indicate every mistake and write a short comment on each one, and to the experimental group using the minimal marking strategy. Both teachers measured the time they needed to correct each text. The pupils then got their texts back and edited them, and the edited texts were collected again. Discovered and undiscovered mistakes in the texts of both groups were counted and analyzed. The time spent on correction was analyzed for each group and juxtaposed. After the experiment, the pupils were asked for their opinion through an online survey consisting of closed and open questions. Finally, the teachers shared their experience in a narrative interview, which was also conducted online.
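The quantitative part of this comparison boils down to two group-level summaries: the share of marked mistakes that pupils corrected in their edited texts, and the mean correction time per text. A minimal sketch of that aggregation follows; every record below is invented for illustration and does not reproduce the project’s data.

```python
from statistics import mean

# Hypothetical per-pupil records: treatment group, number of marked mistakes,
# number of those mistakes the pupil corrected in the edited text, and the
# teacher's correction time for the first draft in seconds.
records = [
    {"group": "experimental", "marked": 10, "corrected": 7, "time_s": 99},
    {"group": "experimental", "marked": 8,  "corrected": 6, "time_s": 105},
    {"group": "control",      "marked": 9,  "corrected": 5, "time_s": 194},
    {"group": "control",      "marked": 12, "corrected": 7, "time_s": 180},
]

def summarize(group):
    """Return (share of marked mistakes corrected, mean correction time) for a group."""
    rows = [r for r in records if r["group"] == group]
    rate = sum(r["corrected"] for r in rows) / sum(r["marked"] for r in rows)
    return rate, mean(r["time_s"] for r in rows)

for group in ("experimental", "control"):
    rate, seconds = summarize(group)
    print(f"{group}: {rate:.0%} of marked mistakes corrected, "
          f"mean correction time {seconds:.0f}s per text")
```

With a sample of this size, such summaries are descriptive only; a real analysis would add a significance test once group sizes permit.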


FIGURE 2. Experimental setup of Case 2.

Although the results of this small-scale study were not statistically significant, some interesting insights could be gained. While an increased mistake awareness among pupils could not be demonstrated through the number of mistakes occurring in the first and second texts, it can be inferred from the pupils’ answers in the questionnaire. Several pupils highlighted the positive aspects of the minimal marking approach and considered it (really) helpful; one of the learners summarized: “I thought more about my mistakes than I usually do and, hence, I could improve my writing skills. It was more difficult to find the mistakes, but it also helped me to get better” (translated by the authors).

Regarding the time spent on correction, an advantage of the minimal marking approach could be detected in only one of the two classes. The German teacher reported an average correction time of 3:14 (first round) and 2:12 (second round) per text using the traditional method, and an average of 1:39 (first round) and 1:14 (second round) per text using the minimal marking approach. The English teacher measured similar times for both feedback strategies. Limiting factors, such as the small sample size of the project and the pupils’ unfamiliarity with the new method, need to be kept in mind when drawing conclusions. However, this only highlights the need for further research on the effects of this specific feedback method.
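Taking the German teacher’s reported averages at face value, the per-text time saving of minimal marking can be worked out with a simple arithmetic check (this calculation is ours, not part of the original analysis):

```python
def to_seconds(mmss: str) -> int:
    """Convert an m:ss string (e.g., '3:14') to seconds."""
    minutes, seconds = mmss.split(":")
    return int(minutes) * 60 + int(seconds)

# Reported averages per text (German teacher), rounds 1 and 2
traditional = [to_seconds(t) for t in ("3:14", "2:12")]  # 194 s, 132 s
minimal = [to_seconds(t) for t in ("1:39", "1:14")]      # 99 s, 74 s

for rnd, (trad, mini) in enumerate(zip(traditional, minimal), start=1):
    saving = 1 - mini / trad
    print(f"round {rnd}: minimal marking saved {saving:.0%} per text")
```

In this class the saving amounts to roughly 44-49% per text, which makes the absence of any difference in the English class all the more interesting for follow-up research.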

Probably the most promising outcomes concern the attitudes of teachers and pupils. Both teachers described the collaboration with the students as enriching, and both want to continue the collaboration with the university. Additionally, they reported an increased interest in action research for themselves but also among their colleagues. Through the questionnaire, we could also observe a positive attitude toward the experiment among pupils, which gives reason for further projects in class and for further involving pupils in research. Finally, we students were able to develop a deeper understanding of teacher professionalism and a more positive attitude toward the application of research in teaching. All members of the team were convinced that they want to apply this method in their own future practice as teachers.

As outlined above, the objectives of the pedagogical/didactical concept of the TC are to create a highly effective (co-creative) learning environment for teachers in training while at the same time delivering valuable service to practice.

Reflections on Learning

Several themes of learning emerged in the cases above. The opportunity to combine one’s own learning with delivering a service was described as important. Being of service to someone matters and enhances students’ motivation to engage with the methodological parts of the course as well. The ESD case confirms the findings of Biberhofer and Rammel (2017), derived in the context of their two-semester “Sustainability Challenge”, which has been run successfully since 2010. Participation in real-life problems increases intrinsic motivation to investigate solutions (and to apply the methods needed to carry out this investigation). The Master students who carried out the minimal marking project pointed out that this collaboration enabled them to apply research methods in an authentic environment and thus rendered these methods more meaningful. Additionally, the project allowed them to engage with research methods in a demanding but motivating manner, which is a frequently neglected part of teacher education. It enabled them to practice teaching methods and evaluate them systematically through scientific methods; this led to a better understanding of the vital symbiosis between research and practical teaching (Paran, 2017). The exploratory case studies presented in this paper cannot give an in-depth account of the learning processes and outcomes. Future research could seek to further explore the impact of service-learning approaches on students’ motivation (cf. Medina and Gordon, 2014; Huber, 2015).

There are some indications that the course had a positive impact on the students’ future professionalism as teachers and on their scientific attitude. In the ESD case, students’ engagement with the dimensions of ecological and social justice (which are integral components of ESD) was evident primarily at two levels. On the one hand, the professional engagement with ESD led to the desire to make its impact on students measurable through scientific methodologies. On the other hand, a reframing in the sense of transformative learning (Mezirow and Taylor, 2009) of the subjective role perception as a teacher shaping a sustainable worldview for future generations of pupils could be observed. The Master students of the minimal marking project considered the collaboration with already practicing teachers especially helpful. Firstly, this special constellation allowed them to gain practical experience beyond the obligatory practicum and to benefit from the teachers’ experience. Secondly, conducting the project within the university context required them to combine academic research and practical teaching. This supported them in their professional development as teachers because it made them realize the importance of research methods in evidence-based teaching (Paran, 2017).

Having learned and experienced how to utilize research methods not just for the purpose of “pure research”, but as a practice of evidence-based teaching, is expected to make teachers perform better in an increasingly VUCA (volatile, uncertain, complex, and ambiguous; Johansen and Euchner, 2013; LeBlanc, 2018) world. The COVID-19 pandemic is a case in point. While all teachers needed to react to the changing context, not all did so in a thoughtful and effective manner (Hodges et al., 2020). Different levels of professionalism and of having a scientific mindset could be a major factor in this equation. As described above, fostering this mindset was the primary objective of the course format, and we hypothesize that it is an important ingredient of teachers’ lifelong learning (Bakkenes et al., 2010; Hoekstra and Korthagen, 2011).

An important strategy used in the TC to increase the learning outcomes for students is to work directly in the context in which the knowledge should later be applied. Put differently, the Master students support an individual teacher in overcoming a teaching-related challenge. This similarity of contexts aids the transfer of what is learned and helps to make it more applicable (Blume et al., 2010; Burns, 2008). Additionally, the learning process is highly social, including peers, the course facilitator(s), and the teachers. Previous research indicates that this social dimension may further increase the odds of “successful” learning (Daniel et al., 2013; Penuel et al., 2012). In the ESD case, confirmation of the success of the underlying ESD/STEM concept for increasing young people’s enthusiasm for STEM careers was a transformational realization for the teachers and Master students in the collaborative learning process. Similarly, the minimal marking group reported an unprecedented feeling of responsibility in comparison with other university courses. Because they knew that the collaborating teachers and their pupils profited from the project, the students’ work felt meaningful and important. Simultaneously, this environment enabled them to gain experience in research methods and practical teaching that would not have been possible without this unique course format.

At the same time, students gained practical experience in non-university organizations during their studies. In their feedback, the students commented both on gaining experience from the school context in cooperation with the teachers and on the methodological and research-oriented support provided by the TC itself. The work at local school sites additionally motivates students to further develop their pedagogical and didactic knowledge in a practical setting based on current school challenges. Furthermore, the interplay of the scientific approach with practical school experience enables students to internalize their own values for contemporary teaching in their future role as teachers.

Reflections on Service

The teachers involved in the projects described above reported high satisfaction with the project results and an increased interest in research that even extended to their colleagues. They valued the opportunity to develop and improve their teaching formats and methods based on evidence and through scientific methods. Through the collaboration with students, time-consuming research tasks could be outsourced, and the desired role of the teacher researcher could be fulfilled despite time constraints.

As shown in Figure 1 of the ESD case, the clear division of roles, with clear lines of communication and distribution of action items, was a major advantage in the creation and implementation of the ESD/STEM curriculum. The well-structured DBR approach allowed for proper planning at each point in the semester around the scarce time resources of the various stakeholders involved during the school year. The teachers engaged in the minimal marking project also reported an increased interest in research and collaboration with the university that further spread among their colleagues. At the end of the project, they expressed the wish to continue collaborating with students in subsequent courses to further improve their teaching. This illustrates the positive influence on the teachers' attitudes toward research and even a potential multiplier effect of the Teaching Clinic on teachers beyond the active participants.

Finally, the TC can function as a promising channel for maintaining communication between teachers and researchers (Paran, 2017). The collaboration between teachers working in the field and the university may lead to the identification of new research gaps on the one side and more evidence-based teaching on the other. Introducing teachers to the concept of evidence-based teaching at early stages of their education may have a positive effect on their attitude toward research in teaching and be the key to developing the professional role of the teacher as researcher (Paran, 2017). This illustrates the close interconnections between university research, teacher education, and practicing teachers, and their potential to profit from collaborating with each other.

Other Outcomes

The TC aims to create value in terms of enhancing university social responsibility (Vasilescu et al., 2010). Universities play an important role in addressing global challenges, such as growing socio-economic differences, the climate crisis, or the current COVID-19 pandemic. Irrespective of the specific disciplines, the concept of university social responsibility suggests that universities should not limit themselves to research and teaching but should commit to solving economic, social, and ecological problems. Universities also play a central role in raising students' awareness of social responsibility and helping them develop into socially responsible individuals (Bokhari, 2017). In that sense, special attention must be paid to teacher education because of its promise of multiplication effects that will eventually reach all educational levels. The principle of the "Third Mission" provides a key point of reference in this context: it emphasizes the targeted use of scientific findings to address a wide range of societal challenges and proposes the transfer of technologies and innovations to non-academic institutions (Schober et al., 2016).

The approach at hand uses a university research service to address concrete issues in the local field of schooling. Its benefit lies in the possibility of merging educational theory with socially relevant topics from multiple perspectives. The TC initiates a sustainable, circular process that facilitates mutual learning between the university system and the school system by applying research at an evidence-based level. Thereby, the TC acts as a door opener for practice-oriented researchers, giving them access to the otherwise hard-to-reach compulsory school system.

Data Availability Statement

The original contributions presented in the study are included in the article/Supplementary Material; further inquiries can be directed to the corresponding author.

Author Contributions

DF designed the didactical concept and wrote the conceptual part of this article. UH and KM were stakeholders in the two case studies presented in the article. Both collected data for their case study and offered their own reflections. All authors contributed to the overall discussion.

Conflict of Interest

The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.

Abbott, L., Andes, A., Pattani, A., and Mabrouk, P. A. (2021). An Authorship Partnership with Undergraduates Studying Authorship in Undergraduate Research Experiences. Teaching and Learning Together in Higher Education 1 (32).


Amirova, A., Iskakovna, J. M., Zakaryanovna, T. G., Nurmakhanovna, Z. T., and Elmira, U. (2020). Creative and research competence as a factor of professional training of future teachers: Perspective of learning technology. Wjet 12 (4), 278–289. doi:10.18844/wjet.v12i4.5181


Anderson, T., and Shattuck, J. (2012). Design-Based Research. Educ. Res. 41 (1), 16–25. doi:10.3102/0013189X11428813

Bakkenes, I., Vermunt, J. D., and Wubbels, T. (2010). Teacher learning in the context of educational innovation: Learning activities and learning outcomes of experienced teachers. Learn. Instruct. 20 (6), 533–548. doi:10.1016/j.learninstruc.2009.09.001

Bakker, A. (2018). Design research in education: A practical guide for early career researchers. Abingdon, UK: Routledge. doi:10.4324/9780203701010


Bakker, A., and Van Eerde, D. (2015). “An introduction to design-based research with an example from statistics education”, in Approaches to qualitative research in mathematics education . Springer , 429–466. doi:10.1007/978-94-017-9181-6_16

Biberhofer, P., and Rammel, C. (2017). Transdisciplinary learning and teaching as answers to urban sustainability challenges. Ijshe 18 (1), 63–83. doi:10.1108/IJSHE-04-2015-0078

Blume, B. D., Ford, J. K., Baldwin, T. T., and Huang, J. L. (2010). Transfer of Training: A Meta-Analytic Review. J. Manag. 36 (4), 1065–1105. doi:10.1177/0149206309352880

Bokhari, A. A. H. (2017). Universities' Social Responsibility (USR) and Sustainable Development: A Conceptual Framework. Ijems 4 (12), 8–16. doi:10.14445/23939125/IJEMS-V4I12P102

Bovill, C., Cook-Sather, A., Felten, P., Millard, L., and Moore-Cherry, N. (2016). Addressing potential challenges in co-creating learning and teaching: overcoming resistance, navigating institutional norms and ensuring inclusivity in student-staff partnerships. High Educ. 71 (2), 195–208. doi:10.1007/s10734-015-9896-4

Bringle, R. G., and Hatcher, J. A. (1995). A Service-Learning Curriculum for Faculty. Michigan Journal of Community Service Learning 2 (1), 112–122.

Burke, L. E., Schlenk, E. A., Sereika, S. M., Cohen, S. M., Happ, M. B., and Dorman, J. S. (2005). Developing research competence to support evidence-based practice. J. Professional Nursing 21 (6), 358–363. doi:10.1016/j.profnurs.2005.10.011


Burns, J. Z. (2008). Informal learning and transfer of learning: How new trade and industrial teachers perceive their professional growth and development. Career Techn. Educ. Res. 33, 3–24. doi:10.5328/cter33.1.3

Combes, B. P. Y. (2005). The United Nations Decade of Education for Sustainable Development (2005-2014): Learning to Live Together Sustainably. Appl. Environ. Educ. Commun. 4 (3), 215–219. doi:10.1080/15330150591004571

Daniel, G. R., Auhl, G., and Hastings, W. (2013). Collaborative feedback and reflection for professional growth: Preparing first-year pre-service teachers for participation in the community of practice. Asia-Pac. J. Teacher Educ. 41 (2), 159–172. doi:10.1080/1359866X.2013.777025

Davidson, Z. E., and Palermo, C. (2015). Developing Research Competence in Undergraduate Students through Hands on Learning. J. Biomed. Educ. 2015, 1–9. doi:10.1155/2015/306380

Forman, S. G., and Wilkinson, L. C. (1997). Educational policy through service learning: Preparation for citizenship and civic participation. Innov. High Educ. 21 (4), 275–286. doi:10.1007/BF01192276

Furco, A. (1996). Service-Learning: A Balanced Approach to Experiential Education, 7.

Haswell, R. H. (1983). Minimal Marking. College English 45 (6), 600–604. doi:10.2307/377147

Hodges, C., Moore, S., Lockee, B., Trust, T., and Bond, A. (2020). The Difference Between Emergency Remote Teaching and Online Learning. EDUCAUSE Quarterly 15.

Hoekstra, A., and Korthagen, F. (2011). Teacher Learning in a Context of Educational Change: Informal Learning Versus Systematically Supported Learning. J. Teacher Educ. 62 (1), 76–92. doi:10.1177/0022487110382917

Huber, A. M. (2015). Diminishing the Dread: Exploring service learning and student motivation. Ijdl 6 (1). doi:10.14434/ijdl.v6i1.13364

Hyland, K. (1990). Providing productive feedback. ELT J. 44 (4), 279–285. doi:10.1093/elt/44.4.279

Johansen, B., and Euchner, J. (2013). Navigating the VUCA World. Res.-Technol. Manag. 56 (1), 10–15. doi:10.5437/08956308X5601003

LeBlanc, P. J. (2018). Higher Education in a VUCA World. Change Magazine Higher Learn. 50 (3–4), 23–26. doi:10.1080/00091383.2018.1507370

Mayring, P. (2014). Qualitative content analysis. doi:10.4135/9781446282243

McNeilly, A. (2014). Minimal Marking: A Success Story. cjsotl-rcacea 5 (1). doi:10.5206/cjsotl-rcacea.2014.1.7

Medina, A., and Gordon, L. (2014). Service Learning, Phonemic Perception, and Learner Motivation: A Quantitative Study. Foreign Language Annals 47 (2), 357–371. doi:10.1111/flan.12086

Mezirow, J., and Taylor, E. W. (2009). Transformative learning in practice: Insights from community, workplace, and higher education . John Wiley & Sons .

Örtl, E. (2017). MINT the gap – Umweltschutz als Motivation für technische Berufsbiographien? Umweltbundesamt . https://www.umweltbundesamt.de/publikationen/mint-the-gap-umweltschutz-als-motivation-fuer

Paran, A. (2017). 'Only connect': researchers and teachers in dialogue. ELT J. 71 (4), 499–508. doi:10.1093/elt/ccx033

Penuel, W. R., Sun, M., Frank, K. A., and Gallagher, H. A. (2012). Using social network analysis to study how collegial interactions can augment teacher learning from external professional development. Am. J. Educ. 119 (1), 103–136. doi:10.1086/667756

Quesada-Pallarès, C., and Gegenfurtner, A. (2015). Toward a unified model of motivation for training transfer: A phase perspective. Zeitschrift Für Erziehungswissenschaft 18 (S1), 107–121. doi:10.1007/s11618-014-0604-4

Schober, B., Brandt, L., Kollmayer, M., and Spiel, C. (2016). Overcoming the ivory tower: Transfer and societal responsibility as crucial aspects of the Bildung-Psychology approach. Eur. J. Dev. Psychol. 13 (6), 636–651. doi:10.1080/17405629.2016.1231061

Schreiber, J.-R., and Siege, H. (Eds.) (2015). Orientierungsrahmen für den Lernbereich Globale Entwicklung [Orientation Framework for the Learning Area Global Development], p. 468.

Schreiber, J.-R., and Siege, H. (Eds.) (2015). Curriculum Framework: Education for Sustainable Development. 2nd edn. Engagement Global gGmbH.

Sotelino-Losada, A., Arbués-Radigales, E., García-Docampo, L., and González-Geraldo, J. L. (2021). Service-Learning in Europe. Dimensions and Understanding From Academic Publication. Front. Educ. 6. doi:10.3389/feduc.2021.604825

Stoecker, R. (2016). Liberating service learning and the rest of higher education civic engagement . Philadelphia, PA: Temple University Press .

Vasilescu, R., Barna, C., Epure, M., and Baicu, C. (2010). Developing university social responsibility: A model for the challenges of the new civil society. Proc. Soc. Behav. Sci. 2 (2), 4177–4182. doi:10.1016/j.sbspro.2010.03.660

Keywords: service-learning, design-based research, research methods, teacher education, engagement

Citation: Froehlich DE, Hobusch U and Moeslinger K (2021) Research Methods in Teacher Education: Meaningful Engagement Through Service-Learning. Front. Educ. 6:680404. doi: 10.3389/feduc.2021.680404

Received: 14 March 2021; Accepted: 05 May 2021; Published: 18 May 2021.

Copyright © 2021 Froehlich, Hobusch and Moeslinger. This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) and the copyright owner(s) are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.

*Correspondence: Dominik E. Froehlich, [email protected]

Disclaimer: All claims expressed in this article are solely those of the authors and do not necessarily represent those of their affiliated organizations, or those of the publisher, the editors and the reviewers. Any product that may be evaluated in this article or claim that may be made by its manufacturer is not guaranteed or endorsed by the publisher.

Researching your teaching practice: an introduction to pedagogic research

What is pedagogic research, why should you do it and what effect can it have on your academic career? 

The words Teaching toolkits ucl arena centre on a blue background

1 August 2019

The Academic Careers Framework at UCL recognises that education activities which support students to learn can strengthen an application for promotion. This includes contributing to pedagogic research.

When applying for UCL Arena Fellowships (nationally recognised teaching awards accredited by the Higher Education Academy), contributing to pedagogic research is recognised in the UK Professional Standards Framework (UKPSF) as an area of activity [A5] and as a professional value [V3].

At the heart of both the UKPSF and pedagogic research is a philosophy of reflective practice, dissemination of research, engagement of students, and attention to disciplinary specificity.

  • The Academic Careers Framework at UCL 
  • The UK Professional Standards Framework (UKPSF) 

What pedagogic research means

Also known as the scholarship of teaching and learning (SoTL), or education enquiry, pedagogic research is an established field of academic discourse involving carefully investigating your teaching practice and in turn developing the curriculum.

It requires a systematic and evidence-based study of student learning, often through a small-scale research projects engaging students.

Pedagogic research is a form of self-study, and/or action research involving critical reflection and reflexivity on current practice, which gives way to new knowledge. It encourages investigating learning, including what works and what does not.

As with any rigorous research endeavour, you will need to be well-informed and critically reflective.

Pedagogic research has the goal of improving the quality of education locally and further afield, through dissemination of best practice to colleagues at UCL and beyond, in conferences and in either discipline-specific education journals or education-focused journals.

Pedagogic research brings together key objectives in UCL’s Education Strategy , by encouraging:

  • active connections between education and research
  • reflection on and development of our education provision
  • connections between staff and students in partnership to improve education.

Pedagogic research allows educators to examine their own practice, reflect on successes and challenges, and share experiences so others can learn from this, improving education more widely.

Consider aligning your research to UCL’s education strategy

A number of pedagogic research projects focus on research-based education, specifically through uncovering answers to the following:

“What kinds of impact, if any, does UCL’s research-based education strategy (Connected Curriculum) have on changing real practice within and across the disciplines, at UCL and beyond?”

Pedagogic research will support a community of scholars

Making transparent how learning happens and developing practice may well involve collaborating with students in research activities and data collection. Students are well suited to be co-researchers on pedagogic research projects.

Engaging with the existing body of scholarship will position your work in a larger field and allow you to contribute to the community while learning from others.

Finally, sharing your findings in public forums to help others develop practice will support community-based and shared knowledge construction.

Pedagogic research resembles rigorous disciplinary research

“You spend some time looking at different approaches to teaching and learning within a specific field of knowledge and about learning in general in that area. You research how the knowledge is known and practised and applied within the discipline and you consider what others have done and then you plan your program and you monitor the results and improve it. It is also about writing about it and communicating it to others in the larger arena. You communicate what you do locally so other students within the discipline or profession can be helped to learn and more can be known about how the learning is achieved and how thinking and knowledge is structured in the areas. It’s about reflective practice and it’s about active dissemination of that practice for the benefit of learning and teaching.” (Trigwell et al. 2000: 167)

Subject disciplines have distinctive approaches to conducting research into education.

6 key steps to develop your own pedagogic research project

1. Identify the problem and set clear goals

Identify the focused problem you wish to consider. You may already know the intervention or practice you would like to improve, but it is important to have clear goals in mind.

You may focus on overcoming a challenge you face in your education practice. Taking a problem-based approach will connect pedagogic research to discipline-specific issues. For example, you could focus on massification and large-class teaching, or on developing cross-cultural understanding in diverse political science courses.

A helpful place to start is to identify a gap in the existing pedagogic research.

It’s also useful at this early stage to begin thinking about potential audiences for disseminating your work. This will allow you to strategically frame the project in line with what stakeholders need to know; demonstrating the initiative has value will make the work more publishable and relevant to your career development.

  • What do I want to know about student learning in my discipline and/or how do I want to develop it?
  • What do I want to do to develop my practice?
  • Who will I communicate my findings to?
  • How will this goal advance the work of other scholars?

2. Prepare adequately and begin to implement your development

You’ll want to be as prepared as possible.

Conducting a literature review relevant to your discipline and education context will help ensure your project has not already been done and help you refine the study and methodology.

Begin to implement your enhancement activity, for example through revising rubrics, assessment criteria or learning activities.

Avoid conducting a controlled experiment, where only some students receive the benefit of development.

Set a research question that allows you to explore, understand and improve student learning in specific contexts.

Discuss your plans with colleagues and students. Consider engaging collaborators.

Find out if an ethics application is required. At UCL, education research is generally considered ‘low-risk’, which involves completing a simple ‘low-risk’ ethics application form for Chair’s review. Allow on average two weeks for review.

As part of the application process, a participant information sheet and consent form need to be produced if you are recruiting participants to your study. Data protection registration is required only if you are using ‘personal data’.

  • What will my students learn and why is it worth learning?
  • Who are my students and how do students learn effectively?
  • What can I do to support students to learn effectively?
  • What does the literature tell me about this issue?
  • What activities will I design to improve education?
  • What ethical implications are there?
  • How will I measure and evaluate the impact of my practice on student learning?

The British Educational Research Association (BERA) offers a wealth of information on ethics in their online guide.

3. Establish and employ appropriate methods of enquiry

In order to investigate changes to education practice, a range of methods could be employed, including:

  • reflection and analysis
  • focus groups
  • questionnaires and surveys
  • content analysis of text
  • ethnography
  • phenomenography
  • observational research and speculation.

Capturing students’ views is important; they will value the opportunity to be involved in improving education at UCL.

Treat your programme as a source of data to answer interesting questions about learning: collect data available at your fingertips.

Your colleagues may also be able to contribute to the research.

Be sure to gain participants’ consent.

  • What methods do I need to employ to measure my practice?
  • Who will I engage?
  • What are my students doing as a result of my practice?

For more on methods:

  • Cohen, L., Manion, L., and Morrison, K. (2007). Research Methods in Education. London: Routledge.
  • Stierer, B. and Antoniou, M. (2004). Are there distinctive methodologies for pedagogic research in higher education? Teaching in Higher Education 9, no. 3: 275–285.

4. Evaluate results

Analyse your data using appropriate strategies.

Draw appropriate conclusions and critically reflect on your findings and intervention.

Return to earlier stages if further development or data collection is needed, before continuing with the project.

  • How has student learning changed as a result of my practice, and what evidence do I have?
  • What lessons have I learned?
  • What adjustments have been made to my teaching?

5. Prepare your presentation

Begin to write up your work, presenting the evidence and results of your intervention.

Use the evidence you gathered to design and refine new activities, assignments and assessments for further iterations. Be critically reflective.

  • What worked and what did not go according to plan?
  • What can others learn from my project?
  • How has enhancement developed student learning?
  • What makes my intervention worth implementing?

6. Share your project with others

Go public with your project and communicate your findings (whether work-in-progress or complete) with peers, who can comment, critique and build on this work.

Engage your students in the work and invite feedback.

Share results internally (at teaching committees, or in reports), across UCL (at the UCL Education Conference, or a UCL Arena event), or internationally (in open-access publications, and through conference presentations).

More dissemination ideas can be found below.

  • What can engaging others tell me about this development?
  • What impact does my work actually have on others interested in developing their practice?

This may lead you to examine the medium- and long-term impact of the education development project.

Engaging multiple stakeholders over a long period of time may result in returning to step 1, through another iteration of development.

How to disseminate your pedagogic research

Sharing your findings and intervention is an important part of pedagogic research.

Look to disseminate through the following forums.

With the UCL community

  • Local teaching committees.
  • Faculty education events.
  • Write a case study for the UCL Teaching & Learning Portal .
  • Propose to deliver an Arena event. If you'd like to run an event, submit a proposal by completing the form (Word document) or emailing [email protected]
  • Present at the annual UCL Education Conference .

At a higher education conference

Within the UK

  • Assessment in Higher Education 
  • British Educational Research Association 
  • Higher Education Academy Annual Conference  
  • Higher Education Conference & Exhibition
  • Society for Research into Higher Education
  • Staff and Education Development Association
  • Universities UK

Wonkhe  has a calendar of many major UK events and conferences.

Outside the UK 

  • Educause (Information Technology in Higher Education, USA)
  • Higher Education Research and Development Society of Australia
  • International Society for the Scholarship of Teaching and Learning
  • Society for Teaching and Learning in Higher Education (Canada)

Through publication

In a pedagogy-based book series:

  • Palgrave’s Critical University Studies Series

In a higher education journal, cross-disciplinary or discipline-specific:

  • Active Learning in Higher Education
  • Assessment and Evaluation in Higher Education
  • Biochemistry and Molecular Biology Education
  • Studies in Higher Education
  • Teaching & Learning Enquiry

The IOE, UCL's Faculty of Education and Society website has an updated long list of journals, both cross-disciplinary and discipline-specific.

Successful pedagogic research

Projects with maximum impact:

  • investigate learning processes
  • partner with students in the research and education development
  • engage the body of pedagogic research
  • critically reflect on changes
  • are relevant to a wide audience
  • communicate through open-access forums.
“Teaching is the most impactful thing we do as academics in higher education. The sheer number of students we encounter and influence over our careers is incredible.

Pedagogic research (SoTL) offers an opportunity for us as academics to refine our practice and to generate understanding through evidence of what works and doesn’t in student learning.

In a research-intensive institution, like UCL, pedagogic research offers us the chance to link the teaching and learning space more clearly with our research agendas, whilst at the same time contributing to opening up new opportunities to foster student learning.” David J. Hornsby, Deputy Head of Department (Education), UCL STEAPP

An example of pedagogic research at UCL

“Recognising that students could better engage with core writing concepts through acting like a teacher, I designed peer review exercises to follow draft submissions of work, as part of a module I coordinate in The Bartlett School of Architecture. After consulting the literature, I realised that there was very little by way of guidance on how to set this up. 

Following the implementation phase, I held a focus group with students to find out their views, which were overwhelmingly positive. This enhancement project also improved students’ marks. I published this work and placed it on the module reading list, which helps underscore the value of this pedagogic tool and makes transparent the learning process.”  Brent Carnell, UCL Arena Centre for Research-based Education and The Bartlett School of Architecture  

  • Carnell, B. (2016). Aiming for autonomy: Formative peer assessment in a final-year undergraduate course . Assessment & Evaluation in Higher Education 41, no. 8: 1269–1283. 

Case studies of interest on the Teaching & Learning Portal:

  • A hybrid teaching approach transforms the functional anatomy module
  • Novel assessment on anatomy module inspires reconfiguration of assessment on entire programme
  • Peer instruction transforms the medical science classroom

Where to find help and support

The following initiatives and opportunities are available to colleagues to support research:

  • Meet with colleagues experienced in pedagogic research, including from the IOE or the Arena Centre for Research-based Education.
  • Funding from UCL ChangeMakers to work in partnership with students to develop education.  
  • Funding from the Arena Centre for Research-based Education. Sign up to the monthly newsletter to hear about the latest funding opportunities.
  • A Guide to Scholarship of Teaching and Learning (SOTL), Vanderbilt University  
  • International Society for the Scholarship of Teaching and Learning resources 
  • Early-career researcher information and resources from the British Educational Research Association (BERA) 
  • Bass, R. (1999). “ The scholarship of teaching: What’s the problem? ” Inventio: Creative Thinking about Learning and Teaching 1 (February), no. 1. 
  • Boyer, E. (1990). Scholarship Reconsidered: Priorities of the Professoriate . Princeton, New Jersey: Carnegie Foundation for the Advancement of Teaching. 
  • Cleaver, E., Lintern, M. and McLinden, M. (2014). Teaching and Learning in Higher Education: Disciplinary Approaches to Educational Enquiry . London: Sage. 
  • Fanghanel, J., McGowan, S., Parker, P., McConnell, C., Potter, J., Locke, W., Healey, M. (2015). “ Defining and supporting the Scholarship of Teaching and Learning (SoTL): A sector wide study .” York, UK: Higher Education Academy. 
  • Felten, P. (2013). “ Principles of good practice in SoTL .” Teaching & Learning Inquiry 1, no. 1: 121–125. 
  • Fung, D. (2017). “Strength-based scholarship and good education: The scholarship circle.” Innovations in Education and Teaching International 54, no. 2: 101–110.
  • Greene, M. J. (2014). “ On the inside looking in: Methodological insights and challenges in conducting qualitative insider research .” The Qualitative Report 19, no. 29: 1–13. 
  • Healey, M. (2000). “Developing the scholarship of teaching in higher education: A discipline-based approach.” Higher Education Research & Development 19, no. 2: 169–189.
  • Healey, M. Resources from Professor Mick Healey  (Higher Education Consultant and Researcher) - a range of resources including bibliographies and handouts. 
  • Healey, M., Matthews, K. E., & Cook-Sather, A. (2019). Writing Scholarship of Teaching and Learning Articles for Peer-Reviewed Journals .  Teaching & Learning Inquiry ,  7 (2), 28-50.
  • Hutchings, P. (2000). “Approaching the scholarship of teaching and learning.” In Opening Lines: Approaches to the Scholarship of Teaching and Learning, by P. Hutchings, 1–10. Menlo Park: The Carnegie Foundation.
  • Hutchings, P., Huber, M. and Ciccone, A. (2011). The Scholarship of Teaching and Learning Reconsidered . San Francisco: Jossey-Bass. 
  • Koster, B. and van den Berg, B. (2014). “ Increasing professional self-understanding: Self-study research by teachers with the help of biography, core reflection and dialogue. ” Studying Teacher Education 10, no. 1: 86–100. 
  • O’Brien, M. (2008). “ Navigating the SoTL landscape: A compass, map and some tools for getting started .” International Journal for the Scholarship of Teaching and Learning 2 (July), no. 2: 1–20.  
  • Rowland, S. and Myatt, P. (2014). “ Getting started in the scholarship of teaching and learning: A “how to” guide for science academics .” Biochemistry and Molecular Biology Education 42, no. 1: 6–14. 
  • Tight, M. (2012). Researching Higher Education. Milton Keynes, UK: Open University Press. 
  • Trigwell, K., Martin, E. Benjamin, J. and Prosser, M. (2000). “ Scholarship of teaching: A model .” Higher Education Research & Development 19, no. 2: 155–168.

This guide has been produced by  UCL Arena . You are welcome to use this guide if you are from another educational facility, but you must credit UCL Arena. 

Further information

More teaching toolkits  - back to the toolkits menu

[email protected]

Learning and Development at UCL  

Academic Careers Framework  

Gain recognition for your role in education at UCL. There are pathways for teaching staff, researchers, postgraduate teaching assistants and professional services staff:

Arena one: for postgraduate teaching assistants (PGTAs)  - enables you to apply to become an Associate Fellow of the Higher Education Academy (HEA). 

Arena two: for Lecturers and Teaching Fellows on probation  - enables you to apply to become a UCL Arena Fellow and Fellow of the HEA. 

Arena open: for all other staff who teach, supervise, assess or support students’ learning  at UCL - accredited by the HEA. 

Sign up to the monthly UCL education e-newsletter  to get the latest teaching news, events & resources.  


  • Tutorial Review
  • Open access
  • Published: 24 January 2018

Teaching the science of learning

Yana Weinstein (ORCID: orcid.org/0000-0002-5144-968X), Christopher R. Madan & Megan A. Sumeracki

Cognitive Research: Principles and Implications, volume 3, Article number: 2 (2018)


The science of learning has made a considerable contribution to our understanding of effective teaching and learning strategies. However, few instructors outside of the field are privy to this research. In this tutorial review, we focus on six specific cognitive strategies that have received robust support from decades of research: spaced practice, interleaving, retrieval practice, elaboration, concrete examples, and dual coding. We describe the basic research behind each strategy and relevant applied research, present examples of existing and suggested implementation, and make recommendations for further research that would broaden the reach of these strategies.

Significance

Education does not currently adhere to the medical model of evidence-based practice (Roediger, 2013 ). However, over the past few decades, our field has made significant advances in applying cognitive processes to education. From this work, specific recommendations can be made for students to maximize their learning efficiency (Dunlosky, Rawson, Marsh, Nathan, & Willingham, 2013 ; Roediger, Finn, & Weinstein, 2012 ). In particular, a review published 10 years ago identified a limited number of study techniques that have received solid evidence from multiple replications testing their effectiveness in and out of the classroom (Pashler et al., 2007 ). A recent textbook analysis (Pomerance, Greenberg, & Walsh, 2016 ) took the six key learning strategies from this report by Pashler and colleagues, and found that very few teacher-training textbooks cover any of these six principles – and none cover them all, suggesting that these strategies are not systematically making their way into the classroom. This is the case in spite of multiple recent academic (e.g., Dunlosky et al., 2013 ) and general audience (e.g., Dunlosky, 2013 ) publications about these strategies. In this tutorial review, we present the basic science behind each of these six key principles, along with more recent research on their effectiveness in live classrooms, and suggest ideas for pedagogical implementation. The target audience of this review is (a) educators who might be interested in integrating the strategies into their teaching practice, (b) science of learning researchers who are looking for open questions to help determine future research priorities, and (c) researchers in other subfields who are interested in the ways that principles from cognitive psychology have been applied to education.

While the typical teacher may not be exposed to this research during teacher training, a small cohort of teachers intensely interested in cognitive psychology has recently emerged. These teachers are mainly based in the UK, and, anecdotally (e.g., Dennis (2016), personal communication), appear to have taken an interest in the science of learning after reading Make it Stick (Brown, Roediger, & McDaniel, 2014 ; see Clark ( 2016 ) for an enthusiastic review of this book on a teacher’s blog, and “Learning Scientists” ( 2016c ) for a collection). In addition, a grassroots teacher movement has led to the creation of “researchED” – a series of conferences on evidence-based education (researchED, 2013 ). The teachers who form part of this network frequently discuss cognitive psychology techniques and their applications to education on social media (mainly Twitter; e.g., Fordham, 2016 ; Penfound, 2016 ) and on their blogs, such as Evidence Into Practice ( https://evidenceintopractice.wordpress.com/ ), My Learning Journey ( http://reflectionsofmyteaching.blogspot.com/ ), and The Effortful Educator ( https://theeffortfuleducator.com/ ). In general, the teachers who write about these issues pay careful attention to the relevant literature, often citing some of the work described in this review.

These informal writings, while allowing teachers to explore their approach to teaching practice (Luehmann, 2008 ), give us a unique window into the application of the science of learning to the classroom. By examining these blogs, we can not only observe how basic cognitive research is being applied in the classroom by teachers who are reading it, but also how it is being misapplied, and what questions teachers may be posing that have gone unaddressed in the scientific literature. Throughout this review, we illustrate each strategy with examples of how it can be implemented (see Table  1 and Figs.  1 , 2 , 3 , 4 , 5 , 6 and 7 ), as well as with relevant teacher blog posts that reflect on its application, and draw upon this work to pin-point fruitful avenues for further basic and applied research.

Fig. 1. Spaced practice schedule for one week. This schedule is designed to represent a typical timetable of a high-school student. The schedule includes four one-hour study sessions, one longer study session on the weekend, and one rest day. Notice that each subject is studied one day after it is covered in school, to create spacing between classes and study sessions. Copyright note: this image was produced by the authors

Fig. 2. (a) Blocked practice and interleaved practice with fraction problems. In the blocked version, students answer four multiplication problems consecutively. In the interleaved version, students answer a multiplication problem followed by a division problem and then an addition problem, before returning to multiplication. For an experiment with a similar setup, see Patel et al. (2016). Copyright note: this image was produced by the authors. (b) Illustration of interleaving and spacing. Each color represents a different homework topic. Interleaving involves alternating between topics, rather than blocking. Spacing involves distributing practice over time, rather than massing. Interleaving inherently involves spacing as other tasks naturally “fill” the spaces between interleaved sessions. Copyright note: this image was produced by the authors, adapted from Rohrer (2012)

Fig. 3. Concept map illustrating the process and resulting benefits of retrieval practice. Retrieval practice involves the process of withdrawing learned information from long-term memory into working memory, which requires effort. This produces direct benefits via the consolidation of learned information, making it easier to remember later and causing improvements in memory, transfer, and inferences. Retrieval practice also produces indirect benefits of feedback to students and teachers, which in turn can lead to more effective study and teaching practices, with a focus on information that was not accurately retrieved. Copyright note: this figure originally appeared in a blog post by the first and third authors ( http://www.learningscientists.org/blog/2016/4/1-1 )

Fig. 4. Illustration of “how” and “why” questions (i.e., elaborative interrogation questions) students might ask while studying the physics of flight. To help figure out how physics explains flight, students might ask themselves the following questions: “How does a plane take off?”; “Why does a plane need an engine?”; “How does the upward force (lift) work?”; “Why do the wings have a curved upper surface and a flat lower surface?”; and “Why is there a downwash behind the wings?”. Copyright note: the image of the plane was downloaded from Pixabay.com and is free to use, modify, and share

Fig. 5. Three examples of physics problems that would be categorized differently by novices and experts. The problems in (a) and (c) look similar on the surface, so novices would group them together into one category. Experts, however, will recognize that the problems in (b) and (c) both relate to the principle of energy conservation, and so will group those two problems into one category instead. Copyright note: the figure was produced by the authors, based on figures in Chi et al. (1981)

Fig. 6. Example of how to enhance learning through use of a visual example. Students might view this visual representation of neural communications with the words provided, or they could draw a similar visual representation themselves. Copyright note: this figure was produced by the authors

Fig. 7. Example of word properties associated with visual, verbal, and motor coding for the word “SPOON”. A word can evoke multiple types of representation (“codes” in dual coding theory). Viewing a word will automatically evoke verbal representations related to its component letters and phonemes. Words representing objects (i.e., concrete nouns) will also evoke visual representations, including information about similar objects, component parts of the object, and information about where the object is typically found. In some cases, additional codes can also be evoked, such as motor-related properties of the represented object, where contextual information related to the object’s functional intention and manipulation action may also be processed automatically when reading the word. Copyright note: this figure was produced by the authors and is based on Aylwin (1990; Fig. 2) and Madan and Singhal (2012a, Fig. 3)

Spaced practice

The benefits of spaced (or distributed) practice to learning are arguably one of the strongest contributions that cognitive psychology has made to education (Kang, 2016). The effect is simple: the same amount of repeated studying of the same information spaced out over time will lead to greater retention of that information in the long run, compared with repeated studying of the same information for the same amount of time in one study session. The benefits of distributed practice were first empirically demonstrated in the 19th century. As part of his extensive investigation into his own memory, Ebbinghaus (1885/1913) found that when he spaced out repetitions across 3 days, he could almost halve the number of repetitions necessary to relearn a series of 12 syllables in one day (Chapter 8). He thus concluded that “a suitable distribution of [repetitions] over a space of time is decidedly more advantageous than the massing of them at a single time” (Section 34). For those who want to read more about Ebbinghaus’s contribution to memory research, Roediger (1985) provides an excellent summary.

Since then, hundreds of studies have examined spacing effects both in the laboratory and in the classroom (Kang, 2016 ). Spaced practice appears to be particularly useful at large retention intervals: in the meta-analysis by Cepeda, Pashler, Vul, Wixted, and Rohrer ( 2006 ), all studies with a retention interval longer than a month showed a clear benefit of distributed practice. The “new theory of disuse” (Bjork & Bjork, 1992 ) provides a helpful mechanistic explanation for the benefits of spacing to learning. This theory posits that memories have both retrieval strength and storage strength. Whereas retrieval strength is thought to measure the ease with which a memory can be recalled at a given moment, storage strength (which cannot be measured directly) represents the extent to which a memory is truly embedded in the mind. When studying is taking place, both retrieval strength and storage strength receive a boost. However, the extent to which storage strength is boosted depends upon retrieval strength, and the relationship is negative: the greater the current retrieval strength, the smaller the gains in storage strength. Thus, the information learned through “cramming” will be rapidly forgotten due to high retrieval strength and low storage strength (Bjork & Bjork, 2011 ), whereas spacing out learning increases storage strength by allowing retrieval strength to wane before restudy.

Teachers can introduce spacing to their students in two broad ways. One involves creating opportunities to revisit information throughout the semester, or even in future semesters. This does involve some up-front planning, and can be difficult to achieve, given time constraints and the need to cover a set curriculum. However, spacing can be achieved with no great costs if teachers set aside a few minutes per class to review information from previous lessons. The second method involves putting the onus to space on the students themselves. Of course, this would work best with older students – high school and above. Because spacing requires advance planning, it is crucial that the teacher helps students plan their studying. For example, teachers could suggest that students schedule study sessions on days that alternate with the days on which a particular class meets (e.g., schedule review sessions for Tuesday and Thursday when the class meets Monday and Wednesday; see Fig. 1 for a more complete weekly spaced practice schedule). It is important to note that the spacing effect refers to information that is repeated multiple times, rather than the idea of studying different material in one long session versus spaced out in small study sessions over time. However, for teachers and particularly for students planning a study schedule, the subtle difference between the two situations (spacing out restudy opportunities, versus spacing out studying of different information over time) may be lost. Future research should address the effects of spacing out studying of different information over time, whether the same considerations apply in this situation as compared to spacing out restudy opportunities, and how important it is for teachers and students to understand the difference between these two types of spaced practice.

It is important to note that students may feel less confident when they space their learning (Bjork, 1999 ) than when they cram. This is because spaced learning is harder – but it is this “desirable difficulty” that helps learning in the long term (Bjork, 1994 ). Students tend to cram for exams rather than space out their learning. One explanation for this is that cramming does “work”, if the goal is only to pass an exam. In order to change students’ minds about how they schedule their studying, it might be important to emphasize the value of retaining information beyond a final exam in one course.

Ideas for how to apply spaced practice in teaching have appeared in numerous teacher blogs (e.g., Fawcett, 2013 ; Kraft, 2015 ; Picciotto, 2009 ). In England in particular, as of 2013, high-school students need to be able to remember content from up to 3 years back on cumulative exams (General Certificate of Secondary Education (GCSE) and A-level exams; see CIFE, 2012 ). A-levels in particular determine what subject students study in university and which programs they are accepted into, and thus shape the path of their academic career. A common approach for dealing with these exams has been to include a “revision” (i.e., studying or cramming) period of a few weeks leading up to the high-stakes cumulative exams. Now, teachers who follow cognitive psychology are advocating a shift of priorities to spacing learning over time across the 3 years, rather than teaching a topic once and then intensely reviewing it weeks before the exam (Cox, 2016a ; Wood, 2017 ). For example, some teachers have suggested using homework assignments as an opportunity for spaced practice by giving students homework on previous topics (Rose, 2014 ). However, questions remain, such as whether spaced practice can ever be effective enough to completely alleviate the need or utility of a cramming period (Cox, 2016b ), and how one can possibly figure out the optimal lag for spacing (Benney, 2016 ; Firth, 2016 ).

There has been considerable research on the question of optimal lag, and much of it is quite complex; broadly, a gap between study sessions that is neither too short (i.e., cramming) nor too long appears to be ideal for retention. In a large-scale study, Cepeda, Vul, Rohrer, Wixted, and Pashler ( 2008 ) examined the effects of the gap between study sessions and the interval between study and test across long periods, and found that the optimal gap between study sessions was contingent on the retention interval. Thus, it is not clear how teachers can apply the complex findings on lag to their own classrooms.

A useful avenue of research would be to simplify the research paradigms that are used to study optimal lag, with the goal of creating a flexible, spaced-practice framework that teachers could apply and tailor to their own teaching needs. For example, an Excel macro spreadsheet was recently produced to help teachers plan for lagged lessons (Weinstein-Jones & Weinstein, 2017 ; see Weinstein & Weinstein-Jones ( 2017 ) for a description of the algorithm used in the spreadsheet), and has been used by teachers to plan their lessons (Penfound, 2017 ). However, one teacher who found this tool helpful also wondered whether the more sophisticated plan was any better than his own method of manually selecting poorly understood material from previous classes for later review (Lovell, 2017 ). This direction is being actively explored within personalized online learning environments (Kornell & Finn, 2016 ; Lindsey, Shroyer, Pashler, & Mozer, 2014 ), but teachers in physical classrooms might need less technologically-driven solutions to teach cohorts of students.
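The planning problem such tools address can be made concrete with a small sketch. The following Python function is a hypothetical illustration only – it is not the algorithm used in the Weinstein-Jones and Weinstein (2017) spreadsheet – and simply spreads a fixed number of review sessions between the first lesson and the exam with expanding gaps:

```python
from datetime import date, timedelta

def plan_reviews(first_lesson, exam, n_reviews=4):
    """Spread n_reviews between the first lesson and the exam with
    expanding gaps (each gap roughly doubles), so early reviews come
    soon after teaching and later ones fall closer to the exam.
    A simplified sketch, not a validated spacing algorithm."""
    total_days = (exam - first_lesson).days
    # Expanding weights 1, 2, 4, ... scaled to fit the available window.
    weights = [2 ** i for i in range(n_reviews)]
    scale = total_days / sum(weights)
    reviews, elapsed = [], 0.0
    for w in weights:
        elapsed += w * scale
        reviews.append(first_lesson + timedelta(days=round(elapsed)))
    return reviews

# Hypothetical school year: topic taught in September, exam in May.
reviews = plan_reviews(date(2024, 9, 2), date(2025, 5, 19))
```

A real planner would also have to respect term breaks and, per the lag research above, adjust the gaps to the retention interval; the sketch only shows the scheduling skeleton.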

It seems teachers would greatly appreciate a set of guidelines for how to implement spacing in the curriculum in the most effective, but also the most efficient manner. While the cognitive field has made great advances in terms of understanding the mechanisms behind spacing, what teachers need more of are concrete evidence-based tools and guidelines for direct implementation in the classroom. These could include more sophisticated and experimentally tested versions of the software described above (Weinstein-Jones & Weinstein, 2017 ), or adaptable templates of spaced curricula. Moreover, researchers need to evaluate the effectiveness of these tools in a real classroom environment, over a semester or academic year, in order to give pedagogically relevant evidence-based recommendations to teachers.

Interleaving

Another scheduling technique that has been shown to increase learning is interleaving. Interleaving occurs when different ideas or problem types are tackled in a sequence, as opposed to the more common method of attempting multiple versions of the same problem in a given study session (known as blocking). Interleaving as a principle can be applied in many different ways. One such way involves interleaving different types of problems during learning, which is particularly applicable to subjects such as math and physics (see Fig.  2 a for an example with fractions, based on a study by Patel, Liu, & Koedinger, 2016 ). For example, in a study with college students, Rohrer and Taylor ( 2007 ) found that shuffling math problems that involved calculating the volume of different shapes resulted in better test performance 1 week later than when students answered multiple problems about the same type of shape in a row. This pattern of results has also been replicated with younger students, for example 7th-grade students learning to solve graph and slope problems (Rohrer, Dedrick, & Stershic, 2015 ). The proposed explanation for the benefit of interleaving is that switching between different problem types allows students to acquire the ability to choose the right method for solving different types of problems, rather than learning only the method itself without learning when to apply it.
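The contrast between the two schedules can be sketched in a few lines of Python (the problem names below are invented for illustration, loosely echoing the volume problems in Rohrer and Taylor (2007)):

```python
from itertools import zip_longest

def blocked(problems_by_type):
    """Blocked schedule: every problem of one type, then the next (AAABBBCCC)."""
    return [p for problems in problems_by_type.values() for p in problems]

def interleaved(problems_by_type):
    """Interleaved schedule: round-robin across types (ABCABCABC), so the
    student must re-select the appropriate solution method on each problem."""
    rounds = zip_longest(*problems_by_type.values())
    return [p for rnd in rounds for p in rnd if p is not None]

# Hypothetical homework set on volumes of different solids.
volumes = {
    "sphere": ["sphere-1", "sphere-2", "sphere-3"],
    "cone": ["cone-1", "cone-2", "cone-3"],
    "prism": ["prism-1", "prism-2", "prism-3"],
}
```

Both schedules contain exactly the same problems; only the ordering differs, which is what the interleaving studies manipulate.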

Do the benefits of interleaving extend beyond problem solving? The answer appears to be yes. Interleaving can be helpful in other situations that require discrimination, such as inductive learning. Kornell and Bjork ( 2008 ) examined the effects of interleaving in a task that might be pertinent to a student of the history of art: the ability to match paintings to their respective painters. Students who studied different painters’ paintings interleaved at study were more successful on a later identification test than were participants who studied the paintings blocked by painter. Birnbaum, Kornell, Bjork, and Bjork ( 2013 ) proposed the discriminative-contrast hypothesis to explain that interleaving enhances learning by allowing the comparison between exemplars of different categories. They found support for this hypothesis in a set of experiments with bird categorization: participants benefited from interleaving and also from spacing, but not when the spacing interrupted side-by-side comparisons of birds from different categories.

Another type of interleaving involves the interleaving of study and test opportunities. This type of interleaving has been applied, once again, to problem solving, whereby students alternate between attempting a problem and viewing a worked example (Trafton & Reiser, 1993 ); this pattern appears to be superior to answering a string of problems in a row, at least with respect to the amount of time it takes to achieve mastery of a procedure (Corbett, Reed, Hoffmann, MacLaren, & Wagner, 2010 ). The benefits of interleaving study and test opportunities – rather than blocking study followed by attempting to answer problems or questions – might arise due to a process known as “test-potentiated learning”. That is, a study opportunity that immediately follows a retrieval attempt may be more fruitful than when that same studying was not preceded by retrieval (Arnold & McDermott, 2013 ).

For problem-based subjects, the interleaving technique is straightforward: simply mix questions on homework and quizzes with previous materials (which takes care of spacing as well); for languages, mix vocabulary themes rather than blocking by theme (Thomson & Mehring, 2016 ). But interleaving as an educational strategy ought to be presented to teachers with some caveats. Research has focused on interleaving material that is somewhat related (e.g., solving different mathematical equations, Rohrer et al., 2015 ), whereas students sometimes ask whether they should interleave material from different subjects – a practice that has not received empirical support (Hausman & Kornell, 2014 ). When advising students how to study independently, teachers should thus proceed with caution. Since it is easy for younger students to confuse this type of unhelpful interleaving with the more helpful interleaving of related information, it may be best for teachers of younger grades to create opportunities for interleaving in homework and quiz assignments rather than putting the onus on the students themselves to make use of the technique. Technology can be very helpful here, with apps such as Quizlet, Memrise, Anki, Synap, Quiz Champ, and many others (see also “Learning Scientists”, 2017 ) that not only allow instructor-created quizzes to be taken by students, but also provide built-in interleaving algorithms so that the burden does not fall on the teacher or the student to carefully plan which items are interleaved when.

An important point to consider is that in educational practice, the distinction between spacing and interleaving can be difficult to delineate. The gap between the scientific and classroom definitions of interleaving is demonstrated by teachers’ own writings about this technique. When they write about interleaving, teachers often extend the term to connote a curriculum that involves returning to topics multiple times throughout the year (e.g., Kirby, 2014 ; see “Learning Scientists” ( 2016a ) for a collection of similar blog posts by several other teachers). The “interleaving” of topics throughout the curriculum produces an effect that is more akin to what cognitive psychologists call “spacing” (see Fig.  2 b for a visual representation of the difference between interleaving and spacing). However, cognitive psychologists have not examined the effects of structuring the curriculum in this way, and open questions remain: does repeatedly circling back to previous topics throughout the semester interrupt the learning of new information? What are some effective techniques for interleaving old and new information within one class? And how does one determine the balance between old and new information?

Retrieval practice

While tests are most often used in educational settings for assessment, a lesser-known benefit of tests is that they actually improve memory of the tested information. If we think of our memories as libraries of information, then it may seem surprising that retrieval (which happens when we take a test) improves memory; however, we know from a century of research that retrieving knowledge actually strengthens it (see Karpicke, Lehman, & Aue, 2014 ). Testing was shown to strengthen memory as early as 100 years ago (Gates, 1917 ), and there has been a surge of research in the last decade on the mnemonic benefits of testing, or retrieval practice . Most of the research on the effectiveness of retrieval practice has been done with college students (see Roediger & Karpicke, 2006 ; Roediger, Putnam, & Smith, 2011 ), but retrieval-based learning has been shown to be effective at producing learning for a wide range of ages, including preschoolers (Fritz, Morris, Nolan, & Singleton, 2007 ), elementary-aged children (e.g., Karpicke, Blunt, & Smith, 2016 ; Karpicke, Blunt, Smith, & Karpicke, 2014 ; Lipko-Speed, Dunlosky, & Rawson, 2014 ; Marsh, Fazio, & Goswick, 2012 ; Ritchie, Della Sala, & McIntosh, 2013 ), middle-school students (e.g., McDaniel, Thomas, Agarwal, McDermott, & Roediger, 2013 ; McDermott, Agarwal, D’Antonio, Roediger, & McDaniel, 2014 ), and high-school students (e.g., McDermott et al., 2014 ). In addition, the effectiveness of retrieval-based learning has been extended beyond simple testing to other activities in which retrieval practice can be integrated, such as concept mapping (Blunt & Karpicke, 2014 ; Karpicke, Blunt, et al., 2014 ; Ritchie et al., 2013 ).

A debate is currently ongoing as to the effectiveness of retrieval practice for more complex materials (Karpicke & Aue, 2015 ; Roelle & Berthold, 2017 ; Van Gog & Sweller, 2015 ). Practicing retrieval has been shown to improve the application of knowledge to new situations (e.g., Butler, 2010 ; Dirkx, Kester, & Kirschner, 2014 ; McDaniel et al., 2013 ; Smith, Blunt, Whiffen, & Karpicke, 2016 ); but see Tran, Rohrer, and Pashler ( 2015 ) and Wooldridge, Bugg, McDaniel, and Liu ( 2014 ) for retrieval practice studies that showed limited or no increased transfer compared to restudy. Retrieval practice effects on higher-order learning may be more sensitive than fact learning to encoding factors, such as the way material is presented during study (Eglington & Kang, 2016 ). In addition, retrieval practice may be more beneficial for higher-order learning if it includes more scaffolding (Fiechter & Benjamin, 2017 ; but see Smith, Blunt, et al., 2016 ) and targeted practice with application questions (Son & Rivas, 2016 ).

How does retrieval practice help memory? Figure  3 illustrates both the direct and indirect benefits of retrieval practice identified by the literature. The act of retrieval itself is thought to strengthen memory (Karpicke, Blunt, et al., 2014 ; Roediger & Karpicke, 2006 ; Smith, Roediger, & Karpicke, 2013 ). For example, Smith et al. ( 2013 ) showed that if students brought information to mind without actually producing it (covert retrieval), they remembered the information just as well as if they overtly produced the retrieved information (overt retrieval). Importantly, both overt and covert retrieval practice improved memory over control groups without retrieval practice, even when feedback was not provided. The fact that bringing information to mind in the absence of feedback or restudy opportunities improves memory leads researchers to conclude that it is the act of retrieval – thinking back to bring information to mind – that improves memory of that information.

The benefit of retrieval practice depends to a certain extent on successful retrieval (see Karpicke, Lehman, et al., 2014 ). For example, in Experiment 4 of Smith et al. ( 2013 ), students successfully retrieved 72% of the information during retrieval practice. Of course, retrieving 72% of the information was compared to a restudy control group, during which students were re-exposed to 100% of the information, creating a bias in favor of the restudy condition. Yet retrieval led to superior memory later compared to the restudy control. However, if retrieval success is extremely low, then it is unlikely to improve memory (e.g., Karpicke, Blunt, et al., 2014 ), particularly in the absence of feedback. On the other hand, if retrieval-based learning situations are constructed in such a way that ensures high levels of success, the act of bringing the information to mind may be undermined, thus making it less beneficial. For example, if a student reads a sentence and then immediately covers the sentence and recites it out loud, they are likely not retrieving the information but rather just keeping the information in their working memory long enough to recite it again (see Smith, Blunt, et al., 2016 for a discussion of this point). Thus, it is important to balance success of retrieval with overall difficulty in retrieving the information (Smith & Karpicke, 2014 ; Weinstein, Nunes, & Karpicke, 2016 ). If initial retrieval success is low, then feedback can help improve the overall benefit of practicing retrieval (Kang, McDermott, & Roediger, 2007 ; Smith & Karpicke, 2014 ). Kornell, Klein, and Rawson ( 2015 ), however, found that it was the retrieval attempt and not the correct production of information that produced the retrieval practice benefit – as long as the correct answer was provided after an unsuccessful attempt, the benefit was the same as for a successful retrieval attempt in this set of studies. 
From a practical perspective, it would be helpful for teachers to know when retrieval attempts in the absence of success are helpful, and when they are not. There may also be additional reasons beyond retrieval benefits that would push teachers towards retrieval practice activities that produce some success amongst students; for example, teachers may hesitate to give students retrieval practice exercises that are too difficult, as this may negatively affect self-efficacy and confidence.

In addition to the fact that bringing information to mind directly improves memory for that information, engaging in retrieval practice can produce indirect benefits as well (see Roediger et al., 2011 ). For example, research by Weinstein, Gilmore, Szpunar, and McDermott ( 2014 ) demonstrated that when students expected to be tested, the increased test expectancy led to better-quality encoding of new information. Frequent testing can also serve to decrease mind-wandering – that is, thoughts that are unrelated to the material that students are supposed to be studying (Szpunar, Khan, & Schacter, 2013 ).

Practicing retrieval is a powerful way to improve meaningful learning of information, and it is relatively easy to implement in the classroom. For example, requiring students to practice retrieval can be as simple as asking students to put their class materials away and try to write out everything they know about a topic. Retrieval-based learning strategies are also flexible. Instructors can give students practice tests (e.g., short-answer or multiple-choice, see Smith & Karpicke, 2014 ), provide open-ended prompts for the students to recall information (e.g., Smith, Blunt, et al., 2016 ), or ask their students to create concept maps from memory (e.g., Blunt & Karpicke, 2014 ). In one study, Weinstein et al. ( 2016 ) looked at the effectiveness of inserting simple short-answer questions into online learning modules to see whether they improved student performance. Weinstein and colleagues also manipulated the placement of the questions. For some students, the questions were interspersed throughout the module, and for other students the questions were all presented at the end of the module. Initial success on the short-answer questions was higher when the questions were interspersed throughout the module. However, on a later test of learning from that module, the original placement of the questions did not matter for performance: both groups of students who answered questions performed better on the delayed test than a control group without question opportunities during the module. As with spaced practice, where the optimal gap between study sessions is contingent on the retention interval, the optimum difficulty and level of success during retrieval practice may also depend on the retention interval. Thus, the important thing is for instructors to provide opportunities for retrieval practice during learning. Based on previous research, any activity that promotes the successful retrieval of information should improve learning.
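The two module layouts compared by Weinstein et al. (2016) can be represented schematically; in this Python sketch, the section and question contents are placeholders, not materials from the study:

```python
def interspersed_module(sections, questions):
    """Retrieval layout: one short-answer question directly after each section."""
    module = []
    for section, question in zip(sections, questions):
        module.append(("content", section))
        module.append(("question", question))
    return module

def questions_at_end_module(sections, questions):
    """Comparison layout: all content first, then all questions at the end."""
    return [("content", s) for s in sections] + [("question", q) for q in questions]

# Placeholder module on neural communication (illustrative only).
sections = ["synapses", "neurotransmitters", "action potentials"]
questions = [
    "What is a synapse?",
    "Name two neurotransmitters.",
    "What triggers an action potential?",
]
```

The two layouts contain identical content and questions; only their arrangement differs, mirroring the placement manipulation described above.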

Retrieval practice has received a lot of attention in teacher blogs (see “Learning Scientists” ( 2016b ) for a collection). A common theme seems to be an emphasis on low-stakes (Young, 2016 ) and even no-stakes (Cox, 2015 ) testing, the goal of which is to increase learning rather than assess performance. In fact, one well-known charter school in the UK has an official homework policy grounded in retrieval practice: students are to test themselves on subject knowledge for 30 minutes every day in lieu of standard homework (Michaela Community School, 2014 ). The utility of homework, particularly for younger children, is often a hotly debated topic outside of academia (e.g., Shumaker, 2016 ; but see Jones ( 2016 ) for an opposing viewpoint and Cooper ( 1989 ) for the original research the blog posts were based on). Whereas some research shows clear links between homework and academic achievement (Valle et al., 2016 ), other researchers have questioned the effectiveness of homework (Dettmers, Trautwein, & Lüdtke, 2009 ). Perhaps amending homework to involve retrieval practice might make it more effective; this remains an open empirical question.

One final consideration is that of test anxiety. While retrieval practice can be very powerful at improving memory, some research shows that pressure during retrieval can undermine some of the learning benefit. For example, Hinze and Rapp ( 2014 ) manipulated pressure during quizzing to create high-pressure and low-pressure conditions. On the quizzes themselves, students performed equally well. However, those in the high-pressure condition did not perform as well on a criterion test later compared to the low-pressure group. Thus, test anxiety may reduce the learning benefit of retrieval practice. Eliminating all high-pressure tests is probably not possible, but instructors can provide a number of low-stakes retrieval opportunities for students to help increase learning. The use of low-stakes testing can serve to decrease test anxiety (Khanna, 2015 ), and has recently been shown to negate the detrimental impact of stress on learning (Smith, Floerke, & Thomas, 2016 ). This is a particularly important line of inquiry to pursue for future research, because many teachers who are not familiar with the effectiveness of retrieval practice may be put off by the implied pressure of “testing”, which evokes the much maligned high-stakes standardized tests (e.g., McHugh, 2013 ).

Elaboration

Elaboration involves connecting new information to pre-existing knowledge. Anderson ( 1983 , p.285) made the following claim about elaboration: “One of the most potent manipulations that can be performed in terms of increasing a subject’s memory for material is to have the subject elaborate on the to-be-remembered material.” Postman ( 1976 , p. 28) defined elaboration most parsimoniously as “additions to nominal input”, and Hirshman ( 2001 , p. 4369) provided an elaboration on this definition (pun intended!), defining elaboration as “A conscious, intentional process that associates to-be-remembered information with other information in memory.” However, in practice, elaboration could mean many different things. The common thread in all the definitions is that elaboration involves adding features to an existing memory.

One possible instantiation of elaboration is thinking about information on a deeper level. The levels (or “depth”) of processing framework, proposed by Craik and Lockhart ( 1972 ), predicts that information will be remembered better if it is processed more deeply in terms of meaning, rather than shallowly in terms of form. The levels of processing framework has, however, received a number of criticisms (Craik, 2002 ). One major problem with this framework is that it is difficult to measure “depth”. And if we are not able to actually measure depth, then the argument can become circular: is it that something was remembered better because it was studied more deeply, or do we conclude that it must have been studied more deeply because it is remembered better? (See Lockhart & Craik, 1990 , for further discussion of this issue).

Another mechanism by which elaboration can confer a benefit to learning is via improvement in organization (Bellezza, Cheesman, & Reddy, 1977 ; Mandler, 1979 ). By this view, elaboration involves making information more integrated and organized with existing knowledge structures. By connecting and integrating the to-be-learned information with other concepts in memory, students can increase the extent to which the ideas are organized in their minds, and this increased organization presumably facilitates the reconstruction of the past at the time of retrieval.

Elaboration is such a broad term and can include so many different techniques that it is hard to claim that elaboration will always help learning. There is, however, a specific technique under the umbrella of elaboration for which there is relatively strong evidence in terms of effectiveness (Dunlosky et al., 2013 ; Pashler et al., 2007 ). This technique is called elaborative interrogation, and involves students questioning the materials that they are studying (Pressley, McDaniel, Turnure, Wood, & Ahmad, 1987 ). More specifically, students using this technique would ask “how” and “why” questions about the concepts they are studying (see Fig.  4 for an example on the physics of flight). Then, crucially, students would try to answer these questions – either from their materials or, eventually, from memory (McDaniel & Donnelly, 1996 ). The process of figuring out the answer to the questions – with some amount of uncertainty (Overoye & Storm, 2015 ) – can help learning. When using this technique, however, it is important that students check their answers with their materials or with the teacher; when the content generated through elaborative interrogation is poor, it can actually hurt learning (Clinton, Alibali, & Nathan, 2016 ).

Students can also be encouraged to self-explain concepts to themselves while learning (Chi, De Leeuw, Chiu, & LaVancher, 1994 ). This might involve students simply saying out loud what steps they need to perform to solve an equation. Aleven and Koedinger ( 2002 ) conducted two classroom studies in which students were either prompted by a “cognitive tutor” to provide self-explanations during a problem-solving task or not, and found that the self-explanations led to improved performance. According to the authors, this approach could scale well to real classrooms. If possible and relevant, students could even perform actions alongside their self-explanations (Cohen, 1981 ; see also the enactment effect, Hainselin, Picard, Manolli, Vankerkore-Candas, & Bourdin, 2017 ). Instructors can scaffold students in these types of activities by providing self-explanation prompts throughout to-be-learned material (O’Neil et al., 2014 ). Ultimately, the greatest potential benefit of accurate self-explanation or elaboration is that the student will be able to transfer their knowledge to a new situation (Rittle-Johnson, 2006 ).

The technical term “elaborative interrogation” has not made it into the vernacular of educational bloggers (a search on https://educationechochamberuncut.wordpress.com , which consolidates over 3,000 UK-based teacher blogs, yielded zero results for that term). However, a few teachers have blogged about elaboration more generally (e.g., Hobbiss, 2016 ) and deep questioning specifically (e.g., Class Teaching, 2013 ), just without using the specific terminology. This strategy in particular may benefit from a more open dialog between researchers and teachers to facilitate the use of elaborative interrogation in the classroom and to address possible barriers to implementation. In terms of advancing the scientific understanding of elaborative interrogation in a classroom setting, it would be informative to conduct a larger-scale intervention to see whether having students elaborate during reading actually helps their understanding. It would also be useful to know whether the students really need to generate their own elaborative interrogation (“how” and “why”) questions, versus answering questions provided by others. How long should students persist to find the answers? When is the right time to have students engage in this task, given the levels of expertise required to do it well (Clinton et al., 2016 )? Without knowing the answers to these questions, it may be too early for us to instruct teachers to use this technique in their classes. Finally, elaborative interrogation takes a long time. Is this time efficiently spent? Or, would it be better to have the students try to answer a few questions, pool their information as a class, and then move to practicing retrieval of the information?

Concrete examples

Providing supporting information can improve the learning of key ideas and concepts. Specifically, using concrete examples to supplement content that is more conceptual in nature can make the ideas easier to understand and remember. Concrete examples can provide several advantages to the learning process: (a) they can concisely convey information, (b) they can provide students with more concrete information that is easier to remember, and (c) they can take advantage of the superior memorability of pictures relative to words (see “Dual Coding”).

Words that are more concrete are both recognized and recalled better than abstract words (e.g., “button” and “bound,” respectively; Gorman, 1961 ). Furthermore, it has been demonstrated that information that is more concrete and imageable enhances the learning of associations, even with abstract content (Caplan & Madan, 2016 ; Madan, Glaholt, & Caplan, 2010 ; Paivio, 1971 ). Following from this, providing concrete examples during instruction should improve retention of related abstract concepts, rather than the concrete examples alone being remembered better. Concrete examples can be useful both during instruction and during practice problems. Having students actively explain how two examples are similar and encouraging them to extract the underlying structure on their own can also help with transfer. In a laboratory study, Berry ( 1983 ) demonstrated that students performed well when given concrete practice problems, regardless of the use of verbalization (akin to elaborative interrogation), but that verbalization helped students transfer understanding from concrete to abstract problems. One particularly important area of future research is determining how students can best make the link between concrete examples and abstract ideas.

Since abstract concepts are harder to grasp than concrete information (Paivio, Walsh, & Bons, 1994 ), it follows that teachers ought to illustrate abstract ideas with concrete examples. However, care must be taken when selecting the examples. LeFevre and Dixon ( 1986 ) provided students with both concrete examples and abstract instructions and found that when these were inconsistent, students followed the concrete examples rather than the abstract instructions, potentially constraining the application of the abstract concept being taught. Lew, Fukawa-Connelly, Mejía-Ramos, and Weber ( 2016 ) used an interview approach to examine why students may have difficulty understanding a lecture. Responses indicated that some issues were related to understanding the overarching topic rather than the component parts, and to the use of informal colloquialisms that did not clearly follow from the material being taught. Both of these issues could have potentially been addressed through the inclusion of a greater number of relevant concrete examples.

One concern with using concrete examples is that students might only remember the examples – especially if they are particularly memorable, such as fun or gimmicky examples – and will not be able to transfer their understanding from one example to another, or more broadly to the abstract concept. However, there does not seem to be any evidence that fun, relevant examples actually hurt learning by harming memory for important information. Instead, fun examples and jokes tend to be more memorable, but this boost in memory for the joke does not seem to come at a cost to memory for the underlying concept (Baldassari & Kelley, 2012 ). That said, two important caveats need to be highlighted. First, to the extent that the more memorable content is not relevant to the concepts of interest, learning of the target information can be compromised (Harp & Mayer, 1998 ). Thus, care must be taken to ensure that all examples and gimmicks are, in fact, related to the core concepts that the students need to acquire, and do not contain irrelevant perceptual features (Kaminski & Sloutsky, 2013 ).

The second issue is that novices often notice and remember the surface details of an example rather than the underlying structure. Experts, on the other hand, can extract the underlying structure from examples that have divergent surface features (Chi, Feltovich, & Glaser, 1981 ; see Fig.  5 for an example from physics). Gick and Holyoak ( 1983 ) tried to get students to apply a rule from one problem to another problem that appeared different on the surface, but was structurally similar. They found that providing multiple examples helped with this transfer process compared to only using one example – especially when the examples provided had different surface details. More work is also needed to determine how many examples are sufficient for generalization to occur (and this, of course, will vary with contextual factors and individual differences). Further research on the continuum between concrete/specific examples and more abstract concepts would also be informative. That is, if an example is not concrete enough, it may be too difficult to understand. On the other hand, if the example is too concrete, that could be detrimental to generalization to the more abstract concept (although a diverse set of very concrete examples may be able to help with this). In fact, in a controversial article, Kaminski, Sloutsky, and Heckler ( 2008 ) claimed that abstract examples were more effective than concrete examples. Later rebuttals of this paper contested whether the abstract versus concrete distinction was clearly defined in the original study (see Reed, 2008 , for a collection of letters on the subject). This ideal point along the concrete-abstract continuum might also interact with development.

Finding teacher blog posts on concrete examples proved to be more difficult than for the other strategies in this review. One optimistic possibility is that teachers frequently use concrete examples in their teaching, and thus do not think of this as a specific contribution from cognitive psychology; the one blog post we were able to find that discussed concrete examples suggests that this might be the case (Boulton, 2016 ). The idea of “linking abstract concepts with concrete examples” is also covered in 25% of teacher-training textbooks used in the US, according to the report by Pomerance et al. ( 2016 ); this is the second most frequently covered of the six strategies, after “posing probing questions” (i.e., elaborative interrogation). A useful direction for future research would be to establish how teachers are using concrete examples in their practice, and whether we can make any suggestions for improvement based on research into the science of learning. For example, if two examples are better than one (Bauernschmidt, 2017 ), are additional examples also needed, or are there diminishing returns from providing more examples? And, how can teachers best ensure that concrete examples are consistent with prior knowledge (Reed, 2008 )?

Dual coding

Both the memory literature and folk psychology support the notion of visual examples being beneficial—the adage of “a picture is worth a thousand words” (traced back to an advertising slogan from the 1920s; Meider, 1990 ). Indeed, it is well-understood that more information can be conveyed through a simple illustration than through several paragraphs of text (e.g., Barker & Manji, 1989 ; Mayer & Gallini, 1990 ). Illustrations can be particularly helpful when the described concept involves several parts or steps and is intended for individuals with low prior knowledge (Eitel & Scheiter, 2015 ; Mayer & Gallini, 1990 ). Figure  6 provides a concrete example of this, illustrating how information can flow through neurons and synapses.

In addition to being able to convey information more succinctly, pictures are also more memorable than words (Paivio & Csapo, 1969 , 1973 ). In the memory literature, this is referred to as the picture superiority effect , and dual coding theory was developed in part to explain this effect. Dual coding follows from the notion of text being accompanied by complementary visual information to enhance learning. Paivio ( 1971 , 1986 ) proposed dual coding theory as a mechanistic account for the integration of multiple information “codes” to process information. In this theory, a code corresponds to a modal or otherwise distinct representation of a concept—e.g., “mental images for ‘book’ have visual, tactual, and other perceptual qualities similar to those evoked by the referent objects on which the images are based” (Clark & Paivio, 1991 , p. 152). Aylwin ( 1990 ) provides a clear example of how the word “dog” can evoke verbal, visual, and enactive representations (see Fig.  7 for a similar example for the word “SPOON”, based on Aylwin, 1990 (Fig.  2 ) and Madan & Singhal, 2012a (Fig.  3 )). Codes can also correspond to emotional properties (Clark & Paivio, 1991 ; Paivio, 2013 ). Clark and Paivio ( 1991 ) provide a thorough review of dual coding theory and its relation to education, while Paivio ( 2007 ) provides a comprehensive treatise on dual coding theory. Broadly, dual coding theory suggests that providing multiple representations of the same information enhances learning and memory, and that information that more readily evokes additional representations (through automatic imagery processes) receives a similar benefit.

Paivio and Csapo ( 1973 ) suggest that verbal and imaginal codes have independent and additive effects on memory recall. Using visuals to improve learning and memory has been particularly applied to vocabulary learning (Danan, 1992 ; Sadoski, 2005 ), but has also shown success in other domains such as in health care (Hartland, Biddle, & Fallacaro, 2008 ). To take advantage of dual coding, verbal information should be accompanied by a visual representation when possible. However, while the studies discussed all indicate that the use of multiple representations of information is favorable, it is important to acknowledge that each representation also increases cognitive load and can lead to over-saturation (Mayer & Moreno, 2003 ).
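The claim that verbal and imaginal codes make independent, additive contributions can be made concrete with a toy probability model. This sketch is purely illustrative (it is not a model from the studies reviewed, and the probabilities are invented): if each code independently supports retrieval, an item encoded with both codes is recalled whenever either code succeeds.

```python
# Toy illustration of independent memory "codes" (assumed values, not data):
# if verbal and imaginal codes fail independently, an item is recalled
# whenever at least one code succeeds.
def recall_probability(p_verbal: float, p_image: float) -> float:
    """P(recall) for an item encoded with two independent codes."""
    return 1 - (1 - p_verbal) * (1 - p_image)

# A word studied alone vs. the same word paired with a matching picture:
word_only = recall_probability(0.40, 0.00)
word_plus_picture = recall_probability(0.40, 0.30)
```

Under these invented numbers, adding the second code raises recall from 0.40 to 0.58, which is the additive-benefit intuition in its simplest form; it also shows why each extra representation yields diminishing returns, consistent with the cognitive-load caveat above.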

Given that pictures are generally remembered better than words, it is important to ensure that the pictures students are provided with are helpful and relevant to the content they are expected to learn. McNeill, Uttal, Jarvin, and Sternberg ( 2009 ) found that providing visual examples decreased conceptual errors. However, McNeill et al. also found that when students were given visually rich examples, they performed more poorly than students who were not given any visual example, suggesting that visual details can at times become a distraction and hinder performance. Thus, it is important to ensure that images used in teaching are clear and unambiguous in their meaning (Schwartz, 2007 ).

Further broadening the scope of dual coding theory, Engelkamp and Zimmer ( 1984 ) suggest that motor movements, such as “turning the handle,” can provide an additional motor code that can improve memory, linking studies of motor actions (enactment) with dual coding theory (Clark & Paivio, 1991 ; Engelkamp & Cohen, 1991 ; Madan & Singhal, 2012c ). Indeed, enactment effects appear to primarily occur during learning, rather than during retrieval (Peterson & Mulligan, 2010 ). Along similar lines, Wammes, Meade, and Fernandes ( 2016 ) demonstrated that generating drawings can provide memory benefits beyond what could otherwise be explained by visual imagery, picture superiority, and other memory enhancing effects. Providing convergent evidence, even when overt motor actions are not critical in themselves, words representing functional objects have been shown to enhance later memory (Madan & Singhal, 2012b ; Montefinese, Ambrosini, Fairfield, & Mammarella, 2013 ). This indicates that motoric processes can improve memory in much the same way as visual imagery does, paralleling the memory differences between concrete and abstract words. Further research suggests that automatic motor simulation for functional objects is likely responsible for this memory benefit (Madan, Chen, & Singhal, 2016 ).

When teachers combine visuals and words in their educational practice, however, they may not always be taking advantage of dual coding – at least, not in the optimal manner. For example, a recent discussion on Twitter centered around one teacher’s decision to have 7th-grade students replace certain words in their science laboratory report with a picture of that word (e.g., the instructions read “using a syringe …” and a picture of a syringe replaced the word; Turner, 2016a ). Other teachers argued that this was not dual coding (Beaven, 2016 ; Williams, 2016 ), because there were no longer two different representations of the information. The first teacher maintained that dual coding was preserved, because this laboratory report with pictures was to be used alongside the original, fully verbal report (Turner, 2016b ). This particular implementation – having students replace individual words with pictures – has not been examined in the cognitive literature, presumably because no benefit would be expected. In any case, we need to be clearer about implementations for dual coding, and more research is needed to clarify how teachers can make use of the benefits conferred by multiple representations and picture superiority.

Critically, dual coding theory is distinct from the notion of “learning styles,” which describe the idea that individuals benefit from instruction that matches their modality preference. While this idea is pervasive and individuals often subjectively feel that they have a preference, evidence indicates that the learning styles theory is not supported by empirical findings (e.g., Kavale, Hirshoren, & Forness, 1998 ; Pashler, McDaniel, Rohrer, & Bjork, 2008 ; Rohrer & Pashler, 2012 ). That is, there is no evidence that instructing students in their preferred learning style leads to an overall improvement in learning (the “meshing” hypothesis). Moreover, learning styles have come to be described as a myth or urban legend within psychology (Coffield, Moseley, Hall, & Ecclestone, 2004 ; Hattie & Yates, 2014 ; Kirschner & van Merriënboer, 2013 ; Kirschner, 2017 ); skepticism about learning styles is a common stance amongst evidence-informed teachers (e.g., Saunders, 2016 ). Providing evidence against the notion of learning styles, Kraemer, Rosenberg, and Thompson-Schill ( 2009 ) found that individuals who scored as “verbalizers” and “visualizers” did not perform any better on experimental trials matching their preference. Instead, it has recently been shown that learning through one’s preferred learning style is associated with elevated subjective judgements of learning, but not objective performance (Knoll, Otani, Skeel, & Van Horn, 2017 ). In contrast to learning styles, dual coding is based on providing additional, complementary forms of information to enhance learning, rather than tailoring instruction to individuals’ preferences.

Genuine educational environments present many opportunities for combining the strategies outlined above. Spacing can be particularly potent for learning if it is combined with retrieval practice. The additive benefits of retrieval practice and spacing can be gained by engaging in retrieval practice multiple times (also known as distributed practice; see Cepeda et al., 2006 ). Interleaving naturally entails spacing if students interleave old and new material. Concrete examples can be both verbal and visual, making use of dual coding. In addition, the strategies of elaboration, concrete examples, and dual coding all work best when used as part of retrieval practice. For example, in the concept-mapping studies mentioned above (Blunt & Karpicke, 2014 ; Karpicke, Blunt, et al., 2014 ), creating concept maps while looking at course materials (e.g., a textbook) was not as effective for later memory as creating concept maps from memory. When practicing elaborative interrogation, students can start off answering the “how” and “why” questions they pose for themselves using class materials, and work their way up to answering them from memory. And when interleaving different problem types, students should be practicing answering them rather than just looking over worked examples.
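One of these combinations, spacing plus retrieval practice (distributed practice), has a natural procedural form: schedule each self-test, and lengthen the gap after every successful retrieval. The sketch below is a minimal Leitner-style scheduler written to illustrate that combination; the interval values and class names are our own assumptions, not parameters drawn from the studies reviewed.

```python
from dataclasses import dataclass

# Illustrative sketch only: spacing + retrieval practice combined.
# Each review is a retrieval attempt (a self-test); spacing grows after
# each success. The box intervals below are arbitrary assumed values.
INTERVALS_DAYS = [1, 2, 4, 8, 16]  # box index -> days until next review


@dataclass
class Card:
    prompt: str
    answer: str
    box: int = 0      # higher box = longer spacing between reviews
    due_day: int = 0  # study day on which the card is next due


def review(card: Card, today: int, retrieved_correctly: bool) -> None:
    """Update a card after one retrieval attempt."""
    if retrieved_correctly:
        card.box = min(card.box + 1, len(INTERVALS_DAYS) - 1)
    else:
        card.box = 0  # failed retrieval -> return to short spacing
    card.due_day = today + INTERVALS_DAYS[card.box]


def due_cards(deck: list[Card], today: int) -> list[Card]:
    """Cards whose scheduled review day has arrived."""
    return [c for c in deck if c.due_day <= today]
```

The design choice mirrors the text: the learner practices answering (retrieval) rather than re-reading, and successful items are pushed to progressively longer delays, while failed items return to short spacing.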

But while these ideas for strategy combinations have empirical bases, it has not yet been established whether the benefits of the strategies to learning are additive, super-additive, or, in some cases, incompatible. Thus, future research needs to (a) better formalize the definition of each strategy (particularly critical for elaboration and dual coding), (b) identify best practices for implementation in the classroom, (c) delineate the boundary conditions of each strategy, and (d) strategically investigate interactions between the six strategies we outlined in this manuscript.

Aleven, V. A., & Koedinger, K. R. (2002). An effective metacognitive strategy: learning by doing and explaining with a computer-based cognitive tutor. Cognitive Science, 26 , 147–179.

Anderson, J. R. (1983). A spreading activation theory of memory. Journal of Verbal Learning and Verbal Behavior, 22 , 261–295.

Arnold, K. M., & McDermott, K. B. (2013). Test-potentiated learning: distinguishing between direct and indirect effects of tests. Journal of Experimental Psychology: Learning, Memory, and Cognition, 39 , 940–945.

Aylwin, S. (1990). Imagery and affect: big questions, little answers. In P. J. Thompson, D. E. Marks, & J. T. E. Richardson (Eds.), Imagery: Current developments . New York: International Library of Psychology.

Baldassari, M. J., & Kelley, M. (2012). Make’em laugh? The mnemonic effect of humor in a speech. Psi Chi Journal of Psychological Research, 17 , 2–9.

Barker, P. G., & Manji, K. A. (1989). Pictorial dialogue methods. International Journal of Man-Machine Studies, 31 , 323–347.

Bauernschmidt, A. (2017). GUEST POST: two examples are better than one. [Blog post]. The Learning Scientists Blog . Retrieved from http://www.learningscientists.org/blog/2017/5/30-1 . Accessed 25 Dec 2017.

Beaven, T. (2016). @doctorwhy @FurtherEdagogy @doc_kristy Right, I thought the whole point of dual coding was to use TWO codes: pics + words of the SAME info? [Tweet]. Retrieved from https://twitter.com/TitaBeaven/status/807504041341308929 . Accessed 25 Dec 2017.

Bellezza, F. S., Cheesman, F. L., & Reddy, B. G. (1977). Organization and semantic elaboration in free recall. Journal of Experimental Psychology: Human Learning and Memory, 3 , 539–550.

Benney, D. (2016). (Trying to apply) spacing in a content heavy subject [Blog post]. Retrieved from https://mrbenney.wordpress.com/2016/10/16/trying-to-apply-spacing-in-science/ . Accessed 25 Dec 2017.

Berry, D. C. (1983). Metacognitive experience and transfer of logical reasoning. Quarterly Journal of Experimental Psychology, 35A , 39–49.

Birnbaum, M. S., Kornell, N., Bjork, E. L., & Bjork, R. A. (2013). Why interleaving enhances inductive learning: the roles of discrimination and retrieval. Memory & Cognition, 41 , 392–402.

Bjork, R. A. (1999). Assessing our own competence: heuristics and illusions. In D. Gopher & A. Koriat (Eds.), Attention and performance XVII. Cognitive regulation of performance: Interaction of theory and application (pp. 435–459). Cambridge, MA: MIT Press.

Bjork, R. A. (1994). Memory and metamemory considerations in the training of human beings. In J. Metcalfe & A. Shimamura (Eds.), Metacognition: Knowing about knowing (pp. 185–205). Cambridge, MA: MIT Press.

Bjork, R. A., & Bjork, E. L. (1992). A new theory of disuse and an old theory of stimulus fluctuation. From learning processes to cognitive processes: Essays in honor of William K. Estes, 2 , 35–67.

Bjork, E. L., & Bjork, R. A. (2011). Making things hard on yourself, but in a good way: creating desirable difficulties to enhance learning. Psychology and the real world: Essays illustrating fundamental contributions to society , 56–64.

Blunt, J. R., & Karpicke, J. D. (2014). Learning with retrieval-based concept mapping. Journal of Educational Psychology, 106 , 849–858.

Boulton, K. (2016). What does cognitive overload look like in the humanities? [Blog post]. Retrieved from https://educationechochamberuncut.wordpress.com/2016/03/05/what-does-cognitive-overload-look-like-in-the-humanities-kris-boulton-2/ . Accessed 25 Dec 2017.

Brown, P. C., Roediger, H. L., & McDaniel, M. A. (2014). Make it stick . Cambridge, MA: Harvard University Press.

Butler, A. C. (2010). Repeated testing produces superior transfer of learning relative to repeated studying. Journal of Experimental Psychology: Learning, Memory, and Cognition, 36 , 1118–1133.

Caplan, J. B., & Madan, C. R. (2016). Word-imageability enhances association-memory by recruiting hippocampal activity. Journal of Cognitive Neuroscience, 28 , 1522–1538.

Cepeda, N. J., Pashler, H., Vul, E., Wixted, J. T., & Rohrer, D. (2006). Distributed practice in verbal recall tasks: a review and quantitative synthesis. Psychological Bulletin, 132 , 354–380.

Cepeda, N. J., Vul, E., Rohrer, D., Wixted, J. T., & Pashler, H. (2008). Spacing effects in learning: A temporal ridgeline of optimal retention. Psychological Science, 19 , 1095–1102.

Chi, M. T., De Leeuw, N., Chiu, M. H., & LaVancher, C. (1994). Eliciting self-explanations improves understanding. Cognitive Science, 18 , 439–477.

Chi, M. T., Feltovich, P. J., & Glaser, R. (1981). Categorization and representation of physics problems by experts and novices. Cognitive Science, 5 , 121–152.

CIFE. (2012). No January A level and other changes. Retrieved from http://www.cife.org.uk/cife-general-news/no-january-a-level-and-other-changes/ . Accessed 25 Dec 2017.

Clark, D. (2016). One book on learning that every teacher, lecturer & trainer should read (7 reasons) [Blog post]. Retrieved from http://donaldclarkplanb.blogspot.com/2016/03/one-book-on-learning-that-every-teacher.html . Accessed 25 Dec 2017.

Clark, J. M., & Paivio, A. (1991). Dual coding theory and education. Educational Psychology Review, 3 , 149–210.

Class Teaching. (2013). Deep questioning [Blog post]. Retrieved from https://classteaching.wordpress.com/2013/07/12/deep-questioning/ . Accessed 25 Dec 2017.

Clinton, V., Alibali, M. W., & Nathan, M. J. (2016). Learning about posterior probability: do diagrams and elaborative interrogation help? The Journal of Experimental Education, 84 , 579–599.

Coffield, F., Moseley, D., Hall, E., & Ecclestone, K. (2004). Learning styles and pedagogy in post-16 learning: a systematic and critical review . London: Learning & Skills Research Centre.

Cohen, R. L. (1981). On the generality of some memory laws. Scandinavian Journal of Psychology, 22 , 267–281.

Cooper, H. (1989). Synthesis of research on homework. Educational Leadership, 47 , 85–91.

Corbett, A. T., Reed, S. K., Hoffmann, R., MacLaren, B., & Wagner, A. (2010). Interleaving worked examples and cognitive tutor support for algebraic modeling of problem situations. In Proceedings of the Thirty-Second Annual Meeting of the Cognitive Science Society (pp. 2882–2887).

Cox, D. (2015). No stakes testing – not telling students their results [Blog post]. Retrieved from https://missdcoxblog.wordpress.com/2015/06/06/no-stakes-testing-not-telling-students-their-results/ . Accessed 25 Dec 2017.

Cox, D. (2016a). Ditch revision. Teach it well [Blog post]. Retrieved from https://missdcoxblog.wordpress.com/2016/01/09/ditch-revision-teach-it-well/ . Accessed 25 Dec 2017.

Cox, D. (2016b). ‘They need to remember this in three years time’: spacing & interleaving for the new GCSEs [Blog post]. Retrieved from https://missdcoxblog.wordpress.com/2016/03/25/they-need-to-remember-this-in-three-years-time-spacing-interleaving-for-the-new-gcses/ . Accessed 25 Dec 2017.

Craik, F. I. (2002). Levels of processing: past, present… future? Memory, 10 , 305–318.

Craik, F. I., & Lockhart, R. S. (1972). Levels of processing: a framework for memory research. Journal of Verbal Learning and Verbal Behavior, 11 , 671–684.

Danan, M. (1992). Reversed subtitling and dual coding theory: new directions for foreign language instruction. Language Learning, 42 , 497–527.

Dettmers, S., Trautwein, U., & Lüdtke, O. (2009). The relationship between homework time and achievement is not universal: evidence from multilevel analyses in 40 countries. School Effectiveness and School Improvement, 20 , 375–405.

Dirkx, K. J., Kester, L., & Kirschner, P. A. (2014). The testing effect for learning principles and procedures from texts. The Journal of Educational Research, 107 , 357–364.

Dunlosky, J. (2013). Strengthening the student toolbox: study strategies to boost learning. American Educator, 37 (3), 12–21.

Dunlosky, J., Rawson, K. A., Marsh, E. J., Nathan, M. J., & Willingham, D. T. (2013). Improving students’ learning with effective learning techniques: promising directions from cognitive and educational psychology. Psychological Science in the Public Interest, 14 , 4–58.

Ebbinghaus, H. (1913). Memory (HA Ruger & CE Bussenius, Trans.). New York: Columbia University, Teachers College. (Original work published 1885) . Retrieved from http://psychclassics.yorku.ca/Ebbinghaus/memory8.htm . Accessed 25 Dec 2017.

Eglington, L. G., & Kang, S. H. (2016). Retrieval practice benefits deductive inference. Educational Psychology Review , 1–14.

Eitel, A., & Scheiter, K. (2015). Picture or text first? Explaining sequential effects when learning with pictures and text. Educational Psychology Review, 27 , 153–180.

Engelkamp, J., & Cohen, R. L. (1991). Current issues in memory of action events. Psychological Research, 53 , 175–182.

Engelkamp, J., & Zimmer, H. D. (1984). Motor programme information as a separable memory unit. Psychological Research, 46 , 283–299.

Fawcett, D. (2013). Can I be that little better at……using cognitive science/psychology/neurology to plan learning? [Blog post]. Retrieved from http://reflectionsofmyteaching.blogspot.com/2013/09/can-i-be-that-little-better-atusing.html . Accessed 25 Dec 2017.

Fiechter, J. L., & Benjamin, A. S. (2017). Diminishing-cues retrieval practice: a memory-enhancing technique that works when regular testing doesn’t. Psychonomic Bulletin & Review , 1–9.

Firth, J. (2016). Spacing in teaching practice [Blog post]. Retrieved from http://www.learningscientists.org/blog/2016/4/12-1 . Accessed 25 Dec 2017.

Fordham, M. [mfordhamhistory]. (2016). Is there a meaningful distinction in psychology between ‘thinking’ & ‘critical thinking’? [Tweet]. Retrieved from https://twitter.com/mfordhamhistory/status/809525713623781377 . Accessed 25 Dec 2017.

Fritz, C. O., Morris, P. E., Nolan, D., & Singleton, J. (2007). Expanding retrieval practice: an effective aid to preschool children’s learning. The Quarterly Journal of Experimental Psychology, 60 , 991–1004.

Gates, A. I. (1917). Recitation as a factor in memorizing. Archives of Psychology, 6.

Gick, M. L., & Holyoak, K. J. (1983). Schema induction and analogical transfer. Cognitive Psychology, 15 , 1–38.

Gorman, A. M. (1961). Recognition memory for nouns as a function of abstractedness and frequency. Journal of Experimental Psychology, 61 , 23–39.

Hainselin, M., Picard, L., Manolli, P., Vankerkore-Candas, S., & Bourdin, B. (2017). Hey teacher, don’t leave them kids alone: action is better for memory than reading. Frontiers in Psychology , 8 .

Harp, S. F., & Mayer, R. E. (1998). How seductive details do their damage. Journal of Educational Psychology, 90 , 414–434.

Hartland, W., Biddle, C., & Fallacaro, M. (2008). Audiovisual facilitation of clinical knowledge: A paradigm for dispersed student education based on Paivio’s dual coding theory. AANA Journal, 76 , 194–198.

Hattie, J., & Yates, G. (2014). Visible learning and the science of how we learn . New York: Routledge.

Hausman, H., & Kornell, N. (2014). Mixing topics while studying does not enhance learning. Journal of Applied Research in Memory and Cognition, 3 , 153–160.

Hinze, S. R., & Rapp, D. N. (2014). Retrieval (sometimes) enhances learning: performance pressure reduces the benefits of retrieval practice. Applied Cognitive Psychology, 28 , 597–606.

Hirshman, E. (2001). Elaboration in memory. In N. J. Smelser & P. B. Baltes (Eds.), International encyclopedia of the social & behavioral sciences (pp. 4369–4374). Oxford: Pergamon.

Hobbiss, M. (2016). Make it meaningful! Elaboration [Blog post]. Retrieved from https://hobbolog.wordpress.com/2016/06/09/make-it-meaningful-elaboration/ . Accessed 25 Dec 2017.

Jones, F. (2016). Homework – is it really that useless? [Blog post]. Retrieved from http://www.learningscientists.org/blog/2016/4/5-1 . Accessed 25 Dec 2017.

Kaminski, J. A., & Sloutsky, V. M. (2013). Extraneous perceptual information interferes with children’s acquisition of mathematical knowledge. Journal of Educational Psychology, 105 (2), 351–363.

Kaminski, J. A., Sloutsky, V. M., & Heckler, A. F. (2008). The advantage of abstract examples in learning math. Science, 320 , 454–455.

Kang, S. H. (2016). Spaced repetition promotes efficient and effective learning: Policy implications for instruction. Policy Insights from the Behavioral and Brain Sciences, 3 , 12–19.

Kang, S. H. K., McDermott, K. B., & Roediger, H. L. (2007). Test format and corrective feedback modify the effects of testing on long-term retention. European Journal of Cognitive Psychology, 19 , 528–558.

Karpicke, J. D., & Aue, W. R. (2015). The testing effect is alive and well with complex materials. Educational Psychology Review, 27 , 317–326.

Karpicke, J. D., Blunt, J. R., Smith, M. A., & Karpicke, S. S. (2014). Retrieval-based learning: The need for guided retrieval in elementary school children. Journal of Applied Research in Memory and Cognition, 3 , 198–206.

Karpicke, J. D., Lehman, M., & Aue, W. R. (2014). Retrieval-based learning: an episodic context account. In B. H. Ross (Ed.), Psychology of Learning and Motivation (Vol. 61, pp. 237–284). San Diego, CA: Elsevier Academic Press.

Karpicke, J. D., Blunt, J. R., & Smith, M. A. (2016). Retrieval-based learning: positive effects of retrieval practice in elementary school children. Frontiers in Psychology, 7 .

Kavale, K. A., Hirshoren, A., & Forness, S. R. (1998). Meta-analytic validation of the Dunn and Dunn model of learning-style preferences: a critique of what was Dunn. Learning Disabilities Research & Practice, 13 , 75–80.

Khanna, M. M. (2015). Ungraded pop quizzes: test-enhanced learning without all the anxiety. Teaching of Psychology, 42 , 174–178.

Kirby, J. (2014). One scientific insight for curriculum design [Blog post]. Retrieved from https://pragmaticreform.wordpress.com/2014/05/05/scientificcurriculumdesign/ . Accessed 25 Dec 2017.

Kirschner, P. A. (2017). Stop propagating the learning styles myth. Computers & Education, 106 , 166–171.

Kirschner, P. A., & van Merriënboer, J. J. G. (2013). Do learners really know best? Urban legends in education. Educational Psychologist, 48 , 169–183.

Knoll, A. R., Otani, H., Skeel, R. L., & Van Horn, K. R. (2017). Learning style, judgments of learning, and learning of verbal and visual information. British Journal of Psychology, 108 , 544-563.

Kornell, N., & Bjork, R. A. (2008). Learning concepts and categories: Is spacing the “enemy of induction”? Psychological Science, 19 , 585–592.

Kornell, N., & Finn, B. (2016). Self-regulated learning: an overview of theory and data. In J. Dunlosky & S. Tauber (Eds.), The Oxford Handbook of Metamemory (pp. 325–340). New York: Oxford University Press.

Kornell, N., Klein, P. J., & Rawson, K. A. (2015). Retrieval attempts enhance learning, but retrieval success (versus failure) does not matter. Journal of Experimental Psychology: Learning, Memory, and Cognition, 41 , 283–294.

Kraemer, D. J. M., Rosenberg, L. M., & Thompson-Schill, S. L. (2009). The neural correlates of visual and verbal cognitive styles. Journal of Neuroscience, 29 , 3792–3798.

Kraft, N. (2015). Spaced practice and repercussions for teaching. Retrieved from http://nathankraft.blogspot.com/2015/08/spaced-practice-and-repercussions-for.html . Accessed 25 Dec 2017.

Learning Scientists. (2016a). Weekly Digest #3: How teachers implement interleaving in their curriculum [Blog post]. Retrieved from http://www.learningscientists.org/blog/2016/3/28/weekly-digest-3 . Accessed 25 Dec 2017.

Learning Scientists. (2016b). Weekly Digest #13: how teachers implement retrieval in their classrooms [Blog post]. Retrieved from http://www.learningscientists.org/blog/2016/6/5/weekly-digest-13 . Accessed 25 Dec 2017.

Learning Scientists. (2016c). Weekly Digest #40: teachers’ implementation of principles from “Make It Stick” [Blog post]. Retrieved from http://www.learningscientists.org/blog/2016/12/18-1 . Accessed 25 Dec 2017.

Learning Scientists. (2017). Weekly Digest #54: is there an app for that? Studying 2.0 [Blog post]. Retrieved from http://www.learningscientists.org/blog/2017/4/9/weekly-digest-54 . Accessed 25 Dec 2017.

LeFevre, J.-A., & Dixon, P. (1986). Do written instructions need examples? Cognition and Instruction, 3 , 1–30.

Lew, K., Fukawa-Connelly, T., Mejía-Ramos, J. P., & Weber, K. (2016). Lectures in advanced mathematics: Why students might not understand what the mathematics professor is trying to convey. Journal of Research in Mathematics Education, 47 , 162–198.

Lindsey, R. V., Shroyer, J. D., Pashler, H., & Mozer, M. C. (2014). Improving students’ long-term knowledge retention through personalized review. Psychological Science, 25 , 639–647.

Lipko-Speed, A., Dunlosky, J., & Rawson, K. A. (2014). Does testing with feedback help grade-school children learn key concepts in science? Journal of Applied Research in Memory and Cognition, 3 , 171–176.

Lockhart, R. S., & Craik, F. I. (1990). Levels of processing: a retrospective commentary on a framework for memory research. Canadian Journal of Psychology, 44 , 87–112.

Lovell, O. (2017). How do we know what to put on the quiz? [Blog Post]. Retrieved from http://www.ollielovell.com/olliesclassroom/know-put-quiz/ . Accessed 25 Dec 2017.

Luehmann, A. L. (2008). Using blogging in support of teacher professional identity development: a case study. The Journal of the Learning Sciences, 17 , 287–337.

Madan, C. R., Glaholt, M. G., & Caplan, J. B. (2010). The influence of item properties on association-memory. Journal of Memory and Language, 63 , 46–63.

Madan, C. R., & Singhal, A. (2012a). Motor imagery and higher-level cognition: four hurdles before research can sprint forward. Cognitive Processing, 13 , 211–229.

Madan, C. R., & Singhal, A. (2012b). Encoding the world around us: motor-related processing influences verbal memory. Consciousness and Cognition, 21 , 1563–1570.

Madan, C. R., & Singhal, A. (2012c). Using actions to enhance memory: effects of enactment, gestures, and exercise on human memory. Frontiers in Psychology, 3 .

Madan, C. R., Chen, Y. Y., & Singhal, A. (2016). ERPs differentially reflect automatic and deliberate processing of the functional manipulability of objects. Frontiers in Human Neuroscience, 10 .

Mandler, G. (1979). Organization and repetition: organizational principles with special reference to rote learning. In L. G. Nilsson (Ed.), Perspectives on Memory Research (pp. 293–327). New York: Academic Press.

Marsh, E. J., Fazio, L. K., & Goswick, A. E. (2012). Memorial consequences of testing school-aged children. Memory, 20 , 899–906.

Mayer, R. E., & Gallini, J. K. (1990). When is an illustration worth ten thousand words? Journal of Educational Psychology, 82 , 715–726.

Mayer, R. E., & Moreno, R. (2003). Nine ways to reduce cognitive load in multimedia learning. Educational Psychologist, 38 , 43–52.

McDaniel, M. A., & Donnelly, C. M. (1996). Learning with analogy and elaborative interrogation. Journal of Educational Psychology, 88 , 508–519.

McDaniel, M. A., Thomas, R. C., Agarwal, P. K., McDermott, K. B., & Roediger, H. L. (2013). Quizzing in middle-school science: successful transfer performance on classroom exams. Applied Cognitive Psychology, 27 , 360–372.

McDermott, K. B., Agarwal, P. K., D’Antonio, L., Roediger, H. L., & McDaniel, M. A. (2014). Both multiple-choice and short-answer quizzes enhance later exam performance in middle and high school classes. Journal of Experimental Psychology: Applied, 20 , 3–21.

McHugh, A. (2013). High-stakes tests: bad for students, teachers, and education in general [Blog post]. Retrieved from https://teacherbiz.wordpress.com/2013/07/01/high-stakes-tests-bad-for-students-teachers-and-education-in-general/ . Accessed 25 Dec 2017.

McNeill, N. M., Uttal, D. H., Jarvin, L., & Sternberg, R. J. (2009). Should you show me the money? Concrete objects both hurt and help performance on mathematics problems. Learning and Instruction, 19 , 171–184.


Acknowledgements

Not applicable.

Funding

YW and MAS were partially supported by a grant from The IDEA Center.

Availability of data and materials

Author information

Authors and Affiliations

Department of Psychology, University of Massachusetts Lowell, Lowell, MA, USA

Yana Weinstein

Department of Psychology, Boston College, Chestnut Hill, MA, USA

Christopher R. Madan

School of Psychology, University of Nottingham, Nottingham, UK

Department of Psychology, Rhode Island College, Providence, RI, USA

Megan A. Sumeracki


Contributions

YW took the lead on writing the “Spaced practice”, “Interleaving”, and “Elaboration” sections. CRM took the lead on writing the “Concrete examples” and “Dual coding” sections. MAS took the lead on writing the “Retrieval practice” section. All authors edited each others’ sections. All authors were involved in the conception and writing of the manuscript. All authors gave approval of the final version.

Corresponding author

Correspondence to Yana Weinstein.

Ethics declarations

Competing interests

YW and MAS run a blog, “The Learning Scientists Blog”, which is cited in the tutorial review. The blog does not make money. Free resources on the strategies described in this tutorial review are provided on the blog. Occasionally, YW and MAS are invited by schools/school districts to present research findings from cognitive psychology applied to education.

Publisher’s Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Rights and permissions

Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License ( http://creativecommons.org/licenses/by/4.0/ ), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.


About this article

Cite this article.

Weinstein, Y., Madan, C.R., & Sumeracki, M.A. Teaching the science of learning. Cogn. Research 3, 2 (2018). https://doi.org/10.1186/s41235-017-0087-y


Received : 20 December 2016

Accepted : 02 December 2017

Published : 24 January 2018




The effectiveness of different teaching methods on medical or nursing students

a School of Nursing, Lanzhou University, China

b Department of Nursing, Gansu Provincial Hospital, China

Yi-Tong Cai

Chao-Ran Qu

Background:

One of the major challenges in nursing and medical education is to foster students' critical thinking ability and autonomous learning ability. However, the effects of different teaching methods on these abilities in nursing or medical students remain inconclusive, and few studies have directly compared different teaching methods. It is therefore necessary to evaluate the impact of different teaching methods on critical thinking ability and autonomous learning ability.

Methods:

A systematic search will be performed in the Chinese National Knowledge Infrastructure, Wanfang Data (Chinese database), VIP Information (Chinese database), and Chinese Biomedical Literature databases, as well as the English-language databases PubMed, Embase, Web of Science, CINAHL Complete (EBSCO), and the Cochrane Library, to identify relevant studies from inception to July 10, 2020. We will include randomized controlled trials that evaluated different teaching methods. The Quality Assessment of Diagnostic Accuracy Studies 2 tool will be used to assess the risk of bias in each study. Standard pairwise meta-analysis and network meta-analysis will be performed using STATA V.12.0, MetaDiSc 1.40, and R 3.4.1 software to compare the effectiveness of different teaching methods.

Results:

The results of this study will be published in a peer-reviewed journal.

Conclusion:

This study will summarize the direct and indirect evidence to determine the effectiveness of different teaching methods for medical or nursing students and attempt to find the most effective teaching method.

Ethics and dissemination:

Ethics approval and patient consent are not required, because this study is a meta-analysis based on published studies.

INPLASY registration number:

INPLASY202070017

1. Introduction

Critical thinking is the ability to be deliberate about thinking and to actively assess and regulate one's own cognition,[1,2] which is vital for nursing and medical students because it prepares them for clinical practice:[3] critical thinking enables them to respond quickly to patients' urgent problems, make the best clinical decision, and provide safe, high-quality care.[4] Students with critical thinking skills also have capabilities such as information seeking, data analysis, decision making, and responding to feedback.[5] Absent critical thinking, one typically relies on heuristics, quick methods or shortcuts for problem-solving, and can fall victim to cognitive biases.[6] Cognitive biases can lead to diagnostic errors, which result in increased patient morbidity and mortality and in adverse nursing events.[7] Critical thinking and experience with technology have therefore been noted as important qualities for graduates transitioning into professional roles.[8]

Bandura,[9] the American psychologist representative of the social-cognitive school, holds that autonomous learning is a process in which learners constantly monitor and adjust their cognitive and emotional states, observe and apply various learning strategies, adjust their learning behaviors, and strive to create and use the material and social resources that contribute to learning. Autonomous learning has also been defined as a process in which the learner is motivationally, behaviourally, and meta-cognitively proactive.[10] In the clinical environment, autonomous learning has been linked with academic achievement,[11-13] success in clinical skills,[14] and emotional health.[15] One of the major challenges in nursing and medical education is therefore to develop an effective teaching method that fosters critical thinking and autonomous learning ability in students.

In medical education, different teaching methods have different effects on nursing or medical students' critical thinking and autonomous learning ability. More and more medical educators have recognized the shortcomings of traditional teaching methods and have tried a variety of alternatives, for example case-based learning, problem-based learning, and simulation-based learning, each of which has its own advantages over traditional teaching. At present, however, the effects of different teaching methods on critical thinking and autonomous learning ability remain inconclusive, and few studies have directly compared them. Consequently, it is necessary and practical to evaluate the impact of different teaching methods on the critical thinking ability and autonomous learning ability of nursing or medical students.

A network meta-analysis (NMA) will be conducted to compare the effectiveness of different teaching methods. We have registered the protocol on the International Platform of Registered Systematic Review and Meta-analysis Protocols (INPLASY),[16] and the registration number is INPLASY202070017. We will follow the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) statement[17] in reporting our NMA.

2. Methods

2.1. Eligibility criteria

2.1.1. Types of participants

Medical students or nursing students will be included. There will also be no restrictions based on other conditions, such as age, educational attainment, gender, different courses.

2.1.2. Types of studies

We will consider only randomized controlled trials (RCTs) of teaching methods for medical or nursing students. We will exclude non-RCTs, quasi-RCTs, uncontrolled trials, and reviews.

2.1.3. Types of interventions

Studies that evaluated any kind of teaching method (eg, case-based learning, problem-based learning, simulation-based learning) will be included. We will exclude trials in which a teaching method was not the primary intervention. The control intervention will be traditional teaching.

2.1.4. Types of outcome measures

2.1.4.1. Primary outcomes

The primary outcomes are critical thinking (CT) and autonomous learning ability. CT will be evaluated with the California Critical Thinking Disposition Inventory (CCTDI),[18] and autonomous learning ability with the Self-Directed Learning Instrument (SDLI) for nursing students.[19]

2.1.4.2. Secondary outcomes

The secondary outcome measures will include:

  • 1. Student satisfaction: the Undergraduate Nursing Student Academic Satisfaction Scale [20]
  • 2. Score: self-developed scales based on each study's content

2.2. Search methods and the identification of studies

2.2.1. Electronic searches

We will search the Chinese National Knowledge Infrastructure, Wanfang Data (Chinese database), VIP Information (Chinese database), and Chinese Biomedical Literature databases, as well as the English-language databases PubMed, Embase, Web of Science, CINAHL Complete (EBSCO), and the Cochrane Library, from inception to July 10, 2020, for different teaching methods. The search terms comprise 3 parts: teaching methods (Training Technique∗; Training Technic∗; Problem Based Learning; Problem-Based Curriculum; Problem Based Curriculum; Problem-Based Curricula; Problem Based Curricula; Experiential Learning; Active Learning; Self Directed Learning as Topic; Simulation Training; Interactive Learning), critical thinking or autonomous learning ability (Thinking Skill∗; Thought∗; Critical Thinking; Independent learning capability; autonomous learning ability; Self-learning ability; Learner Autonomy; Self-regulated ability), and medical or nursing students (Medical Student∗ OR Pupil Nurse∗ OR Nursing Student∗). Equivalent search entries will be used when searching the Chinese databases. The fully reproducible search strategy provided in Table 1 is for PubMed and will be adapted appropriately for the other databases. The flow chart of searching and screening studies is shown in Fig. 1.
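The 3-part boolean structure described above can be sketched as follows; the term lists here are short illustrative excerpts, not the full strategy in Table 1.

```python
# Sketch: composing the 3-part boolean query (teaching methods AND
# outcomes AND population). Term lists are illustrative excerpts only.
methods_terms = ["Problem Based Learning", "Simulation Training", "Active Learning"]
outcome_terms = ["Critical Thinking", "Learner Autonomy", "Self-learning ability"]
population_terms = ["Medical Student*", "Pupil Nurse*", "Nursing Student*"]

def or_block(terms):
    """Join synonyms with OR and wrap the block in parentheses."""
    return "(" + " OR ".join(f'"{t}"' for t in terms) + ")"

# The three blocks are intersected with AND, mirroring the strategy above.
query = " AND ".join(or_block(t) for t in (methods_terms, outcome_terms, population_terms))
```

A query built this way can be pasted into a database search box; field tags and controlled-vocabulary expansions would still need to be added per database.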

Search strategy used in the PubMed database.


Flow chart of searching and screening studies.

2.2.2. Searching other resources

In addition, we will search dissertations and grey literature to identify systematic reviews or clinical trials related to teaching methods. Related journals and conference proceedings will also be searched manually.

2.3. Data collection and analysis

2.3.1. Selection of studies and data extraction

Initial search records will be imported into ENDNOTE X9 literature management software, then the titles and abstracts of records will be screened to identify potential trials according to eligibility criteria. Next, full-text versions of all potentially relevant trials will be obtained and reviewed to ensure eligibility.
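As a rough sketch of this two-stage screening (the record fields and the duplicate-removal key are assumptions for illustration; EndNote performs deduplication semi-automatically):

```python
# Hypothetical search records; a real database export carries many more fields.
records = [
    {"title": "Problem-based learning improves critical thinking in nursing students",
     "design": "RCT"},
    {"title": "Simulation training for medical students: a quasi-experimental study",
     "design": "quasi-RCT"},
    {"title": "Problem-Based Learning Improves Critical Thinking in Nursing Students",
     "design": "RCT"},  # duplicate retrieved from a second database
]

# 1) Deduplicate by normalized title.
unique = list({r["title"].strip().lower(): r for r in records}.values())

# 2) Keep only randomized controlled trials, per the eligibility criteria.
included = [r for r in unique if r["design"] == "RCT"]
```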

A standard data extraction form will be created in Microsoft Excel 2013 to collect the data of interest, including study characteristics (eg, name of the first author, year of publication, country in which the study was conducted), intervention characteristics (eg, name of the teaching method, intervention time, duration), population characteristics (eg, gender, mean age, sample size), and outcomes (eg, CT, autonomous learning ability, student satisfaction, score).
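A minimal sketch of one row of such an extraction form (the field names are assumptions based on the categories listed above):

```python
from dataclasses import dataclass, field

@dataclass
class ExtractionRecord:
    """One row of the extraction form: study, intervention, population, outcomes."""
    first_author: str
    year: int
    country: str
    teaching_method: str
    duration_weeks: float        # intervention time / duration
    sample_size: int
    mean_age: float
    outcomes: dict = field(default_factory=dict)  # e.g. {"CCTDI": 289.5}

# Illustrative, invented entry:
row = ExtractionRecord("Li", 2019, "China", "problem-based learning",
                       12, 120, 20.1, {"CCTDI": 289.5})
```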

Study selection and data extraction will be performed by 1 reviewer (B.Y.) and checked by 2 other reviewers (Y.T.C. and Q.S.). Any conflicts will be resolved by discussion.

2.3.2. Assessment of risk of bias

Two reviewers (B.Y. and Y.T.C.) will independently assess the risk of bias for each study as low, moderate, or high using the Quality Assessment of Diagnostic Accuracy Studies tool.[21] Conflicts will again be resolved by discussion.

2.3.3. Geometry of the network

Using R software V.3.4.1, a network plot will be drawn. In network plots, the size of a node is proportional to the number of studies evaluating that teaching method, and the thickness of the line between 2 nodes is proportional to the number of direct comparisons between them. The network is connected when every method is evaluated in at least 1 study together with at least 1 of the other methods.[22] A loop connecting 3 methods indicates that at least 1 study compares those 3 methods simultaneously.
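The two proportionality rules above can be sketched by counting studies per node and direct comparisons per edge; the study list here is invented for illustration.

```python
from collections import Counter
from itertools import combinations

# Hypothetical trials: the teaching methods compared in each study.
studies = [
    ("traditional", "problem-based"),
    ("traditional", "simulation"),
    ("traditional", "problem-based", "simulation"),  # 3-arm trial closes a loop
]

node_size = Counter()    # node size ~ number of studies evaluating a method
edge_weight = Counter()  # edge thickness ~ number of direct comparisons

for arms in studies:
    for method in set(arms):
        node_size[method] += 1
    for pair in combinations(sorted(set(arms)), 2):
        edge_weight[pair] += 1
```

Feeding these counts to a plotting routine reproduces the geometry described in the text: "traditional" is the largest node here, and the 3-arm trial creates a closed loop.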

2.3.4. Statistical analysis

2.3.4.1. Pairwise meta-analyses

We will construct forest plots showing pooled effect estimates and their corresponding 95% confidence intervals for each outcome using STATA V.12.0 (Stata) and MetaDiSc 1.40. Cochran's Q and the inconsistency index (I² test) will be used to estimate heterogeneity across studies. If I² is ≤50%, statistical heterogeneity can be considered negligible and the fixed-effects model will be used. If I² is >50%, we will explore sources of heterogeneity by subgroup analysis and meta-regression; if there is no clinical heterogeneity, the random-effects model will be used to perform the meta-analysis. In addition, we will use STATA V.12.0 (Stata) and Review Manager 5.30 (RevMan) to build summary graphics for each comparison.
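The model-selection rule stated above can be written out directly; this is a sketch of the decision logic only, not of the STATA/MetaDiSc computation.

```python
def i_squared(q: float, df: int) -> float:
    """Inconsistency index I^2 (in %) from Cochran's Q and its degrees of freedom."""
    if q <= 0:
        return 0.0
    return max(0.0, (q - df) / q) * 100.0

def choose_model(i2: float) -> str:
    """Protocol rule: I^2 <= 50% -> fixed effects; otherwise explore
    heterogeneity (subgroups, meta-regression) and use random effects."""
    return "fixed" if i2 <= 50.0 else "random"
```

For example, Q = 20 on 5 degrees of freedom gives I² = 75%, which under this rule triggers the random-effects model.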

2.3.4.2. Indirect comparisons between competing teaching methods

We will use STATA V.12.0 (Stata) software to calculate relative effects between teaching methods that have not been compared head-to-head, and then use these estimates to conduct indirect comparisons.
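One standard way to form such an indirect comparison is the Bucher adjusted method, sketched below with invented numbers (the protocol itself performs this in STATA).

```python
import math

def bucher_indirect(d_ac, se_ac, d_bc, se_bc):
    """Indirect effect of A vs B through the common comparator C:
    d_AB = d_AC - d_BC, with the variances adding."""
    d_ab = d_ac - d_bc
    se_ab = math.sqrt(se_ac**2 + se_bc**2)
    ci = (d_ab - 1.96 * se_ab, d_ab + 1.96 * se_ab)
    return d_ab, se_ab, ci

# e.g. problem-based (A) vs simulation (B), each trialled against traditional (C)
d, se, ci = bucher_indirect(0.50, 0.15, 0.30, 0.20)
```

Note how the indirect estimate inherits uncertainty from both direct comparisons, which is why its confidence interval is wider than either input's.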

2.3.4.3. Subgroup analysis

If sufficient studies are available, we will conduct meta-regression or subgroup analyses based on age, intervention time and duration, the country in which the study was conducted, and characteristics of the teaching methods.

3. Discussion

To the best of our knowledge, this is the first NMA protocol to compare different teaching methods for fostering critical thinking ability and autonomous learning ability in nursing or medical students using RCTs. The study will provide a ranking of teaching methods, and we hope the results will offer recommendations for education managers seeking to foster students' abilities. This protocol is designed in adherence to guidelines for NMA protocols and will be conducted and reported strictly according to the PRISMA extension statement for NMA.[23]

Acknowledgments

We are grateful for the helpful reviewer comments on this paper.

Author contributions

Bei Yun, Yi-Tong Cai, and Qian Su planned and designed the research. Bei Yun, Yi-Tong Cai, Qian Su, and Lin Han tested the feasibility of the study. Yi-Tong Cai, Qian Su, Lian Chen, Chao-Ran Qu, and Lin Han provided methodological advice and polished and revised the manuscript. Bei Yun and Qian Su wrote the manuscript; all authors approved the final version of the manuscript.

Competing interests: None declared.

Conceptualization: Lin Han.

Investigation: Lin Han.

Methodology: Bei Yun, Yi-Tong Cai, Qian Su, Lian Chen, Chao-Ran Qu, and Lin Han.


Resources: Bei Yun, Yi-Tong Cai, Qian Su.

Software: Bei Yun, Yi-Tong Cai.

Supervision: Qian Su, Lin Han.

Validation: Lian Chen, Chao-Ran Qu.

Writing – original draft: Bei Yun, Qian Su.

Writing – review & editing: Bei Yun, Qian Su.

Provenance and peer review: Not commissioned; externally peer reviewed.

Abbreviation: NMA = network meta-analysis.

How to cite this article: Yun B, Su Q, Cai YT, Chen L, Qu CR, Han L. The effectiveness of different teaching methods on medical or nursing students: Protocol for a systematic review and network meta-analysis. Medicine . 2020;99:40(e21668).

BY and QS Contributed equally to this work.

This work was supported by the Scientific Research Project of the health industry in Gansu Province (grant number: GSWSKY-2019-41).

Ethical approval and patient consent are not required since this is a network meta-analysis based on published studies.

The results of this network meta-analysis will be submitted to a peer-reviewed journal for publication.

The authors have no conflicts of interest to disclose.

All data generated or analyzed during this study are included in this published article and its supplementary information files.

The Harriet W. Sheridan Center for Teaching and Learning

Inclusive Teaching through Active Learning


Across courses of different sizes, levels and audiences (concentrators and non-concentrators), research suggests that students learn more in classes that integrate active learning (Freeman et al., 2014; Hake, 1998). In fact, research supporting the use of active learning is so compelling that some have suggested it is unethical for instructors to continue to use a purely lecture-based approach (Freeman et al.).

Fortunately, most instructors tend to use a combination of lecture and active learning strategies (Campbell, Cabrera, Michel, & Patel, 2017; Campbell, 2023).

I have found the "pair and share" active learning technique to be incredibly effective in my courses. It helps me pace a lecture, maintain student attention, engage students, and teach material to a class where the proficiency level may vary widely among the students.

What is “active learning”? The term generally refers to teaching strategies that:

  • “involve students in doing things and thinking about the things they are doing” (Bonwell & Eison, 1991, p. 2).
  • require “students to do meaningful learning activities and think about what they are doing” (Prince, 2004, p. 1).
  • “cognitively engage students in building understanding at the highest levels of Bloom’s taxonomy” i.e., critical thinking skills (National Academies, 2017, p. 3-3).

Active learning allows students to make their own sense of ideas they are encountering and to integrate ideas with what they already know. It also gives students opportunities to practice and apply course concepts, to understand what they have learned, and to identify where there is room to improve (Ambrose, Bridges, DiPietro, Lovett, & Norman, 2010; Davidson, 2017). Although simply pausing to ask for questions can achieve this goal for a single student at a time, active learning techniques are valuable for allowing a full class to check and deepen their understanding.

Active learning strategies are also an important component of inclusive teaching because they promote multiple modes of engagement to reach all students — including historically underrepresented groups (Eddy & Hogan, 2014; Freeman et al., 2007; Freeman et al., 2014; Hake, 1998). More extensive use of active learning is associated with higher learning gains (Connell, Donovan, & Chambers, 2015), but as with any teaching strategy, quality of implementation is more important than quantity. Because any new teaching approach takes some adjustment, it works well to start small, trying one or two active learning strategies per class, before engaging in more intensive active learning.

One common misconception is that in order to implement active learning techniques, an instructor must spend all class time on student-centered activities. Although active learning is a critical teaching tool, brief lectures or explanations are also important components of many classes, especially to establish a basic understanding for students new to a subject or, for intermediate learners, to address misconceptions (Wittwer & Renkl, 2008).

Historian  Nancy Jacobs  uses an effective combination of lecture and active learning in her course on South Africa. For the first eight weeks of the term, students learn the history of the country through lectures and discussions. Then, for two weeks, students engage in a role play about the collapse of apartheid. She indicates that the  Reacting to the Past  approach is effective for teaching historical thinking, as well as "deeply empathetic learning" as students embody their roles.

Because “active learning” refers to such a broad range of strategies, the approach is very elastic, requiring anywhere from a few minutes of class time to the entire class period. Below, we list sample evidence-based active learning activities for a variety of class contexts, from small discussion sections to large lectures, organized by in-class time commitment.

Low time commitment

  • Use classroom assessment techniques, such as minute papers (Angelo & Cross, 1993). Online: incorporate low-stakes Canvas discussions, such as those that ask students to share thoughts and opinions via text, audio, or video.
  • Pause the lecture to allow students time to review their notes and identify questions or to compare notes with a peer (Major, Harris, & Zakrajsek, 2016; Prince, 2004). Online: pause your lecture and have students ask questions via Zoom chat, or have them write down questions to share later in a Canvas discussion.

Medium time commitment

  • Design a gallery walk, which places prompts around the room and asks students to walk from station to station to synthesize written answers on large post-it sheets (Major, Harris, & Zakrajsek, 2016). Online: conduct a gallery walk via a Google Doc, with headers as the stations.
  • Use polling questions, individually or as a team. Online: use Zoom polling.
  • Ask students to put a sequence of events or steps in order to test their understanding of historical or scientific processes (Lee, 2007). Online: students can engage in a sequence activity in Zoom breakout rooms.

High time commitment

  • Have students discuss readings or types of problems via a jigsaw (Johnson, Johnson, & Smith, 2014).
  • Use team-based learning to flip your classroom so that a majority of class time is spent with students in groups working on focused tasks or problem-solving (Michaelson, Bauman-Knight, & Fink, 2003). Online: have students work in groups on an activity that they will later present to the class as a whole via Canvas discussions or Zoom.
  • Divide students into small groups or pairs and pass a sheet of paper with a prompt or problem down the row; after several rounds, students report on the "best" responses (Barkley, 2010). Online: divide students into groups, have each group address a prompt or problem and post their solution to a Canvas discussion, and invite students to "upvote" the best solution.
  • Build in role-playing activities, such as grant review panels. Online: use Zoom for synchronous discussions and Canvas for text-based role playing.

Using active learning techniques in your teaching requires only a willingness to try something new in the classroom, gather feedback, and plan an activity that furthers your course learning goals. With any use of active learning, it is important that the activity be more than “busy work” or a “break from lecture.” Rather, the approach should be intentionally selected to allow students to practice a key idea or skill with peer or instructor feedback (Messineo, 2017).

Some instructors report that they need specially designed classrooms to teach using active learning strategies. For the low- and moderate-complexity strategies listed above, a purpose-built facility is not needed. For higher-complexity strategies, an intentionally designed space facilitates the process, but the evidence is mixed on whether it is necessary for improved student satisfaction and learning outcomes. Low-tech elements of active learning classrooms, such as multiple whiteboards and flexible seating that allows for collaboration, appear to be the most critical (Soneral & Wyse, 2017).

Another commonly cited barrier to active learning is student resistance. Student reactions to any new teaching methods are not uniform, and reactions may even vary over the term, moving, for example, from concerns about grades to peers’ involvement in activities (Ellis, 2015). Faculty’s use of specific explanation and facilitation strategies has been found to be positively associated with student participation in and feedback about active learning (Tharayil et al., 2018). Helpful strategies to mitigate resistance include (DeMonbrun et al., 2017; Wiggins et al., 2017):

  • Explaining the purpose or value of an activity. If it is an activity that you have implemented in past years, student quotes about key outcomes can be particularly powerful.
  • Previewing what might be challenging.
  • Clearly describing the process and what students are expected to produce.
  • Inviting questions.
  • Walking around the room during an activity, being mindful to check in with non-participating students by asking questions or seeing if they are stuck.

Sometimes, a few vocal students may give the impression that there is more discontent than there is, so collecting student feedback (such as by an  exit ticket ) can give a more accurate picture of the range of student experience.

Instructional approaches that promote student interaction are most likely to enhance student learning in a diverse classroom (Gurin, 2000), and active learning can be a powerful way to promote that exchange. However, whether due to factors such as student-to-student climate issues or lack of participation, good ideas for active learning do not always translate to inclusive learning. Key strategies for making active learning more inclusive include:

For teams and pairs that will be meeting over time, construct the group intentionally. One strategy is to ask students to respond to questions in a 3-2-1 format to help compose groups: (1) What are three characteristics of successful groups for you? (2) What are two strengths that you would bring to the group? (3) Who is one student in the class with whom you would or would not like to work? (adapted from Reid & Garson, 2017). Some instructors also find CATME to be a helpful tool for intentional group assignment. Although there may be times where same- or cross-identity teams are beneficial (Freeman, Theobald, Crowe, & Wenderoth, 2017), it is clear that isolating women or underrepresented groups on a team tends to negatively affect their performance and therefore, should be avoided (Meadows & Sekaquaptewa, 2013).
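The caution against isolating underrepresented students on a team can be built into programmatic group assignment. A minimal Python sketch (the roster, attribute name, and group size are illustrative assumptions, not part of the cited studies):

```python
import random

def assign_groups(students, group_size=4, attr="is_underrepresented", max_tries=1000):
    """Randomly partition students into groups, rejecting any partition that
    leaves exactly one underrepresented student alone in a group (the 'solo
    status' that research suggests harms performance)."""
    for _ in range(max_tries):
        pool = students[:]
        random.shuffle(pool)
        groups = [pool[i:i + group_size] for i in range(0, len(pool), group_size)]
        # Accept only if no group contains exactly one underrepresented member
        if all(sum(s[attr] for s in g) != 1 for g in groups):
            return groups
    raise RuntimeError("No valid grouping found; relax constraints or assign manually")

# Hypothetical 16-student roster with 6 underrepresented students
roster = [{"name": f"s{i}", "is_underrepresented": i < 6} for i in range(16)]
groups = assign_groups(roster)
```

Rejection sampling keeps the assignment random while excluding only the solo-status configurations; instructors may of course layer in the survey-based criteria described above.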

Professor of biology and engineering Sharon Swartz uses a survey to build teams, asking students to select topics of interest (and not of interest) and provide some background information about themselves, such as academic area. She then uses the surveys to compose the group and finds that "shy students and those who didn't know many class members no longer felt anxious about finding a group to work with, and with 'leaders' distributed among the groups, the group projects improved hugely."

Check in with the group periodically. Scheduled check-ins with group members allow faculty to make adjustments when needed and also provide some accountability for group members. Sharon Swartz also distributes evaluation sheets that allow each student to assess contributions made by each member of the team in terms of (1) intellectual involvement in planning/research, (2) effort toward achieving group goals, (3) cooperation and support of others, and (4) their own contribution. She finds that “students are reassured by knowing that they will have a chance to talk about any challenges that arose in their groups.” Swartz adds, “Typically, knowing that they will be telling me about their experiences with each other ensures that everyone pulls their weight!”

Assign clear roles and expectations. Some research indicates that, especially in STEM contexts, men tend to answer more questions in group presentations, take more technical roles, and underestimate their female classmates’ performance (Grunspan, et al., 2016; Meadows & Sekaquaptewa, 2013). However, one study promisingly suggests that showing students examples of balanced group work in advance (e.g., a video of a presentation or a sample paper) can mitigate these tendencies (Meadows, et al., 2015). Faculty may also wish to assign roles and deliberately rotate them. Defining clear expectations (both verbally and in writing) for classroom participation and group work can also help to include learners who have previously been educated in cultural contexts where active learning techniques may not be as common.

If you would like to discuss active learning strategies for your own classroom, please contact the Sheridan Center for Teaching and Learning for a consultation:  [email protected] .


This resource was authored by Dr. Mary Wright, Associate Provost for Teaching and Learning, Executive Director of Sheridan Center for Teaching and Learning, and Professor (Research) in Sociology, with input from Sheridan Center colleagues.

Angelo, T. A., & Cross, K. P. (1993).  Classroom assessment techniques: A handbook for college teachers . San Francisco: Jossey-Bass Publishers.

Barkley, E.F. (2010).  Student engagement techniques: A handbook for college faculty . San Francisco: Jossey-Bass.

Bonwell C. C. & Eison, J.A. (1991).  Active learning: Creating excitement in the classroom . ASHE-ERIC Higher Education Report No. 1. Washington, DC: The George Washington University, School of Education and Human Development.

Campbell, C.M. (2023).  Great college teaching: Where it happens and how to foster it everywhere . Cambridge, MA: Harvard Education Press.

Campbell, C.M., Cabrera, A.F., Michel, J.O., & Patel, S. (2017). From comprehensive to singular: A latent class analysis of college teaching practices.  Research in Higher Education, 58 : 581-604.

Connell, G.L., Donovan, D.A., & Chambers, T.G. (2016). Increasing the use of student-centered pedagogies from moderate to high improves student learning and attitudes about biology.  CBE - Life Sciences Education, 15 : 1-15.

Davidson, C.N. (2017).  The new education: How to revolutionize the university to prepare students for a world in flux . New York: Basic Books.

DeMonbrun, M., Finelli., C.J., Prince, M., Borrego, M., Shekhar, P., Henderson, C., & Waters, C. (2017). Creating an instrument to measure student response to instructional practices.  Journal of Engineering Education, 106 (2): 273-298.

Eddy, S.L., & Hogan, K.A. (2014). Getting under the hood: How and for whom does increasing course structure work.  CBE Life Sciences Education, 13 :453-468.

Ellis, D.E. (2015). What discourages students from engaging with innovative instructional methods: Creating a barrier framework.  Innovative Higher Education, 40 : 111-125.

Freeman, S., Eddy, S.L., McDonough, M., Smith, M.K., Okoroafor, N., Jordt, H., & Wenderoth, M.P. (2014). Active learning increases student performance in science, engineering, and mathematics.  Proceedings of the National Academy of Sciences , 111: 8410–8415.

Freeman, S., O’Connor, E., Parks, J.W., Cunningham, M., Hurley, D., Haak, D., Dirks, C., & Wenderoth, M.P. (2007). Prescribed active learning increases performance in introductory biology.  CBE-Life Sciences Education, 6 : 132-139.

Freeman, S., Theobald, R., Crowe, A.J., Wenderoth, M.P. (2017). Likes attract: Students self-sort in a classroom by gender, demography, and academic characteristics.  Active Learning in Higher Education , 1-12.

Grunspan, D.Z., Eddy, S.L., Brownell, S.E., Wiggins, B.L., Crowe, A.J., & Goodreau, S.M. (2016). Males underestimate academic performance of their female peers in undergraduate biology classrooms.  PLOS One . Available:  http://journals.plos.org/plosone/article?id=10.1371/journal.pone.0148405

Gurin, P. (2000).  Expert Report in the Matter of Gratz et al. v. Bollinger et al . No. 97-75321(E.D. Mich.) and No. 97-75928 (E.D. Mich.). Available:  http://diversity.umich.edu/admissions/legal/expert/gurintoc.html

Hake, R.R. (1998). Interactive-engagement versus traditional methods: A six-thousand-student survey of mechanics test data for introductory physics courses.  American Journal of Physics, 66 : 64-74.

Johnson, D., Johnson, R.T., & Smith, K.A. (2014). Cooperative learning methods: A meta-analysis. Available:  https://www.researchgate.net/profile/David_Johnson50/publication/2200403...

Lee, V. (2007). Sequence activity. Workshop on inquiry-based learning.

Major, C.H., Harris, M.S., & Zakrajsek, T. (2016).  Teaching for learning: 101 intentionally designed educational activities to put students on the path to success . New York: Routledge.

Meadows, L., & Sekaquaptewa, D. (2013).  The influence of gender stereotypes on role adoption in student teams . ASEE Annual Conference and Exposition, Atlanta, GA. Paper #: 6744.

Messineo, M. (2017). Using the science of learning to improve student learning in sociology classes.  Teaching Sociology , 46(1): 1-11.

Michaelson L, Bauman-Knight B, Fink D (2003).  Team-based learning: A transformative use of small groups in college teaching . Sterling, VA: Stylus.

National Academies of Sciences, Engineering, and Medicine. (2017).  Indicators for monitoring undergraduate STEM education . Available:  http://nap.edu/24943

Prince, M. (2004). Does active learning work? A review of the research.  Journal of Engineering Education, 93 (3): 223-231.

Reid, R., & Garson, K. (2017). Rethinking multicultural group work as intercultural learning.  Journal of Studies in International Education, 21 (3): 195-212.

Soneral, P.A.G., & Wyse, S.A. (2017). A SCALE-UP mock-up: Comparison of student learning gains in high- and low-tech active-learning environments.  CBE Life Sciences Education , 16(1): 1-15.

Tharayil, S., Borrego, M., Prince, M., Nguyen, K.A., Shekhar, P., Finelli, C.J., & Waters, C. (2018). Strategies to mitigate student resistance to active learning.  International Journal of STEM Education, 5 (7). Available:  https://stemeducationjournal.springeropen.com/articles/10.1186/s40594-01...

Wiggins, B.L., Eddy, S.L., Wener-Fligner, L., Freisem, K., Grunspan, D.Z., Theobald, E.J., Timbrook, J., & Crowe, A.J. (2017). ASPECT: A survey to assess student perspective of engagement in an active-learning classroom.  CBE Life Sciences Education, 16 (2).

Wittwer, J., & Renkl, A. (2008). Why instructional explanations often do not work: A framework for understanding the effectiveness of instructional explanations.  Educational Psychologist, 43 (1): 49-64.

This newsletter was originally published in March 2018 and revised in September 2020 and July 2023.

Spaced recall reduces forgetting of fundamental mathematical concepts in a post high school precalculus course

  • Original Research
  • Open access
  • Published: 03 September 2024


  • Diane S. Lindquist (ORCID: orcid.org/0000-0002-1645-2947),
  • Brenda E. Sparrow (ORCID: orcid.org/0000-0003-0640-1094) &
  • Joseph M. Lindquist (ORCID: orcid.org/0000-0003-4172-9773)

The retention of fundamental mathematical skills is imperative to provide a foundation on which new skills are developed, yet educators often lament students' failure to retain prior material. Cognitive scientists and educators have explored teaching methods that produce learning which endures over time. We wanted to know whether spaced recall quizzes would prevent our students from forgetting fundamental mathematical concepts at a post-high-school preparatory school where students attend for 1 year in preparation for entering the United States Military Academy (USMA). This approach was implemented in a Precalculus course to determine if it would improve students’ long-term retention. Our goal was to identify an effective classroom strategy that led to student recall of fundamental mathematical concepts through the end of the academic year. The concepts considered for long-term retention were 12 concepts identified by USMA’s mathematics department as fundamental for entering students. These concepts are taught during quarter one of the Precalculus with Introduction to Calculus course at the United States Military Academy Preparatory School, and students are expected to remember them when they take the post-test 6 months later. Our research shows that spaced recall in the form of quizzing had a statistically significant impact on reducing the forgetting of the fundamental concepts while not adversely affecting performance on current instructional concepts. Additionally, these results persisted across multiple sections of the course taught at different times of the day by six instructors with varying teaching styles and years of teaching experience.


Introduction

It has long been established that memory declines over time (Ebbinghaus,  1964 ). Although this is a normal human condition, it is problematic in the field of education, particularly in disciplines where coursework requires students to possess knowledge accumulated from previous classes. When foundational concepts are forgotten, new learning can be stunted (Kamuche & Ledman,  2005 ; Taylor et al.,  2017 ). This is particularly true in mathematics, and it can be seen in other courses of study that require a strong mathematical background (Pearson Jr & Miller,  2012 ). For this reason, we were interested in finding strategies that could be implemented in a mathematics course to help students retain and recall fundamental mathematical concepts. We were most interested in strategies that could be easily implemented and would not require a major overhaul of the current syllabus.
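Ebbinghaus's observation is commonly summarized by an exponential forgetting curve, in which retention decays smoothly with the time elapsed since study. A toy Python illustration (the stability parameter S is an arbitrary assumption for demonstration, not fitted to any data from this study):

```python
import math

def retention(t_days, stability=7.0):
    """Fraction of material retained after t_days without review, per the
    classic exponential forgetting curve R = exp(-t / S), where S (the
    'stability') grows as a memory is restudied or successfully retrieved."""
    return math.exp(-t_days / stability)

# Retention falls steeply at first, then levels off; spaced retrieval
# practice aims to intervene before R drops too low to recall with effort.
week_one = retention(7)
month_one = retention(28)
```

The curve motivates the design question taken up below: how far apart should retrieval attempts be spaced so that recall is still possible but remains effortful?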

Based on the work of cognitive psychologists, Lang ( 2016 ) recommends several classroom strategies for improving instruction and learning that can be easily implemented within existing course structures. One of these is spending a small portion of each class asking students questions about material from previous lessons. He considers this a low-level form of interleaving and claims that asking students to regularly recall previous content, along with spaced learning, improves long-term retention.

The role that spaced learning plays in the durability of memory is also well known (Ebbinghaus,  1964 ). Since Ebbinghaus’ work in 1885, studies have continued to investigate the role of spaced learning on retention. It has been found that retrieval attempts are more beneficial when repeated in spaced-out sessions versus massed sessions (Cepeda et al.,  2006 ). The benefits of quizzing as a method of asking students to regularly recall previous course content has also been studied. A meta-analysis by Rowland ( 2014 ) and a review of several laboratory and educational studies by Roediger III and Karpicke ( 2006a ) both conclude that the benefits of quizzing outweigh the benefits of other study activities such as homework and re-reading. This phenomenon has been named the “testing effect.” There is a connection between effortful recall and memory. Memory is made more durable when effort is needed to recall information. Re-reading material with the intent to increase retrieval takes little cognitive effort and is therefore less effective than retrieval activities such as quizzing (Brown et al.,  2014 ).

However, to achieve long-term retention through quizzing, it is important to space the quizzing effectively. Karpicke and Roediger III ( 2007 ) studied the effects of expanding versus equally spaced retrieval intervals and showed that what matters most is not whether the intervals are equally spaced or expanded but that the initial retrieval attempt is effortful. If, for example, a quiz on previous concepts is administered too soon after the material was learned, effortful retrieval, the key to retention, is diminished, making the exercise less effective. Benjamin and Tullis ( 2010 ) identified that the benefit of spaced retrieval is maximized at a “sweet spot”: students must use effort to recall, but not so much effort that they cannot remember at all. Cepeda et al. ( 2008 ) developed a model called the “retention surface,” which plots student performance as a function of the gap between study sessions and the retention interval; the model can be used to determine the optimal spacing between retrieval attempts for a desired retention interval.

Although studies comparing the effects on learning of open-response (recall) versus multiple-choice (recognition) quizzes have found mixed results (Karpicke,  2017 ), we decided to use open response for our research. McDaniel et al. ( 2007 ) found that both multiple-choice and open-response quizzes yielded positive effects on retrieval compared with simply re-reading material; however, the open-response questions produced the greater positive effect. No matter the mode, Roediger III et al. ( 2011 ) claim that testing creates new retrieval routes, which increases the effort required for retrieval. Increasing the effort required to retrieve information creates the “desired difficulty” needed for long-term retention. The term “desired difficulty,” coined by psychologists Elizabeth and Robert Bjork (Brown et al.,  2014 ), refers to a key factor in establishing the deeper connections that make learning more durable over time.

Some instructors feel that quizzing students too often or quizzing students when they do not have a solid grasp on the material may reinforce misunderstandings. Kornell et al. ( 2009 ) studied this concern and found that unsuccessful attempts in testing that were followed up by feedback produced a significant improvement in follow-on tests. They also found that the effort required to recall material for a test (even if the answer is wrong and feedback is given) produces deep processing, corrects the brain’s retrieval route for that information, and serves as a cue for recall in the future. Additionally, Benjamin and Tullis ( 2010 ) and Karpicke and Roediger III ( 2007 ) concluded in their research that providing feedback on the retrieval attempts helps to mitigate the encoding opportunity that is lost from an unsuccessful retrieval attempt.

The research mentioned here supports the potential for quizzing to improve students’ long-term retention. However, with the large number of factors that can influence education in classroom settings, it is not surprising that efforts to understand how quizzing influences long-term retention have been conducted in laboratory settings. In fact, of the research cited so far, the majority were conducted in laboratory settings, with only two studies being conducted in a classroom (McDaniel et al.,  2007 ; Roediger III et al.,  2011 ). This does not include the four reviews of existing literature (Benjamin & Tullis,  2010 ; Cepeda et al.,  2006 ; Roediger III & Karpicke,  2006a ; Rowland,  2014 ).

While the studies mentioned so far show the potential of spaced quizzing to positively impact student retention, what is needed is promising results with educationally relevant tasks in actual classroom environments. Some studies made a step in that direction by using educationally relevant tasks in a lab setting. The Roediger III and Karpicke ( 2006b ) lab study asked students to read passages on scientific topics and then students restudied the passage or took a recall test. They found that testing, and not studying, improved retention. Additionally, Arnold and McDermott ( 2013 ) used English and Russian word associations in their lab study and found that more recall attempts increased recall ability. Finally, Rohrer and Taylor ( 2007 ) also noted benefits to long-term retention that come from spaced learning in the form of frequent quizzing. Their study used mathematical problems with college students, but again, this work was performed in a lab setting and not as a component of a regular course routine.

Recent years have seen an increase in classroom research on improving retention. Karpicke ( 2017 ) summarized 10 years of research on retention, including studies that considered the effects of quizzing in educational classrooms. That summary reports 14 studies conducted between 2009 and 2016 addressing the benefits of quizzing in classrooms, all of which produced positive results. Of those studies, nine were conducted in college-level courses and five in middle schools, but none in high schools. Only two involved mathematical content, both conducted in college courses (Hopkins et al.,  2016 ; Lyle & Crawford,  2011 ). Yang et al. ( 2021 ) conducted a meta-analytic review of quizzing’s effect on classroom learning, examining 222 studies comprising 573 effects to more fully understand classroom moderators. They reviewed classroom research conducted in or since the year 2000. Their data are not organized in a manner that links course level to subject matter, but most of the studies they reviewed were conducted in middle school and university or college courses; only 8% of the effects came from high school studies and only 4.9% from studies involving mathematical content. They provide a very extensive review of 19 moderating variables and implementation considerations, but two potential moderating variables they did not report on were class hour and teacher experience.

Of the studies conducted with mathematical content in classroom settings, the study conducted by Hopkins et al. ( 2016 ) is especially notable. They evaluated massed versus spaced retrieval practice of mathematical concepts in a college introductory calculus course for engineers. They showed that spaced retrieval led to retention of concepts that persisted into the following semester. Although the calculus course did not use the traditional classroom format of lectures, their findings indicate a similar strategy used in a Precalculus course may be equally effective at improving students’ long-term retention of fundamental mathematical concepts.

To this end, the objective of this research was to explore the effectiveness of using spaced recall quizzes in a classroom setting to reduce the forgetting of fundamental mathematical concepts that usually occurs when students are not asked to revisit those concepts. Many studies on retention take place in a lab; we instead investigated the use of weekly quizzes in an in-person classroom, specifically a high school level precalculus course composed of post-high-school students in a preparatory school environment. In addition, we sought to determine whether the results persisted across multiple sections of the course taught at different times of the day by six instructors with varying teaching styles and years of teaching experience. The literature we reviewed did not contain classroom studies with the range of teaching experience and number of instructors that we had; our results therefore add to the body of knowledge in this area, showing that spaced recall quizzes can reduce forgetting regardless of the teaching experience of the educator. Using the study of Hopkins et al. ( 2016 ) as a model, we anticipated that intentionally spaced recall, in the form of quizzing, would improve students’ long-term retention of fundamental mathematical concepts in a traditional classroom setting. For this study, long-term is defined as 6 months: the time between the completion of quarter one instruction and the administration of the post-test exam.

The paper is organized as follows. Section  2 describes the method, including the student population and the sequence and timeline of key components. Section  3 presents the results of the quizzes’ effect on long-term retention. Section  4 discusses the results in light of moderating factors (e.g., teacher experience and class hour). Section  5 discusses practical implications of using spaced recall quizzing in the classroom, and Sect.  6 concludes.

Participants

We conducted our study at the United States Military Academy Preparatory School (USMAPS), located at West Point, New York, during the 2020–2021 academic year. Students who are not yet qualified for direct admission to the United States Military Academy (USMA) but show great potential are given the opportunity to attend USMAPS. Students admitted to USMAPS are deficient in one or more of three areas: academics, physical fitness, or leadership. The purpose of the school is to develop students in those areas to meet the rigorous admission standards of USMA. For most students, the prevalent deficiency is in academics. Students who attend USMAPS live in rooms on campus. On average, the roughly 240 students who enter the 1-year program are demographically and geographically diverse (within the United States and its territories). In 2020–2021, 235 students attended USMAPS; approximately 25% were prior-service soldiers, and most of the others had graduated from high school the previous year. Approximately 48% were African American, 4% were Hispanic, 2% were Asian, and 1% identified as “Other Minorities.”

At USMAPS, there are three levels of mathematics courses: Calculus, Precalculus with an Introduction to Calculus (PIC), and a Precalculus course that has an emphasis on Algebra/Trigonometry (PCAT). When students arrive at USMAPS, they take a pretest and fill out a survey to identify their previous mathematical exposure and competency. The pretest, survey, and student Scholastic Assessment Test (SAT) and/or American College Testing (ACT) scores are used to place students in the Calculus, PIC, or PCAT course.

Our study began with \(N=157\) students enrolled in the PIC course. Six instructors taught this course. Five of the instructors taught three sections each. As our hypothesis was that the spaced recall quizzes would reduce forgetting, we wanted to rule out the potential that the reduction in forgetting was a result of the instructor. We designed our study to have the instructors’ sections randomly assigned as follows: one section assigned as treatment, one as control, and one section as mixed. For the sixth instructor who only taught two sections, one section was randomly assigned as treatment and the other as control. In the mixed sections, students were randomly assigned to either the treatment or control group. Enrollment numbers for each section were equally distributed (plus or minus one student). This placed \(N=76\) students in the control group and \(N=81\) students in the treatment group. The study received approval (20-115-1) from the Institutional Review Board (IRB) at USMA and was deemed exempt because it was conducted in an established educational setting and involved normal educational practices. Instructors were not informed which sections were treatment, control, or mixed or which students belonged to each group. Two of the instructors were authors and had access to which students and sections were treatment or control; however, they did not examine that information before the end of the experiment. This was to ensure that there was no unintentional bias on the part of the study leads. The potential for study lead bias was investigated during data analysis (see Sect.  4.4 ).
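The section-level randomization described above can be sketched programmatically. A minimal Python sketch (instructor and section names, and the fixed seed, are illustrative; the authors describe their actual procedure only in prose):

```python
import random

def assign_conditions(instructor_sections, rng):
    """Randomly assign each instructor's sections to experimental conditions.
    An instructor with three sections gets one treatment, one control, and
    one mixed section; an instructor with two sections gets one treatment
    and one control. (In mixed sections, students are then individually
    randomized to treatment or control.)"""
    assignment = {}
    for instructor, sections in instructor_sections.items():
        conditions = (["treatment", "control", "mixed"] if len(sections) == 3
                      else ["treatment", "control"])
        rng.shuffle(conditions)
        assignment.update(dict(zip(sections, conditions)))
    return assignment

rng = random.Random(0)  # seeded only for reproducibility of this sketch
sections = {f"instr{i}": [f"instr{i}-sec{j}" for j in range(3)] for i in range(5)}
sections["instr5"] = ["instr5-sec0", "instr5-sec1"]  # the two-section instructor
plan = assign_conditions(sections, rng)  # 17 sections in total, as in the study
```

Keeping one section per condition within each instructor is what lets the authors separate the quiz effect from instructor effects.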

Although we were able to randomly assign students, scheduling logistics prevented us from redistributing students to control for race, gender, standardized tests, and high school grade point average (GPA). Consequently, we had to maintain the assignment of students obtained by randomly assigning students to sections of treatment, control, or mixed, as previously explained. This was a departure from the approach employed by Hopkins et al. ( 2016 ), who balanced student assignment by also considering racial and gender composition, mean ACT score, and mean high school GPA. Nevertheless, the method we used for obtaining randomness is acceptable and has been used in other studies when additional measures were not available or could not be used to achieve balanced groups (Begolli et al.,  2021 ; Jaeger et al.,  2015 ; Liming & Cuevas,  2017 ; Shirvani,  2009 ; Sanchez et al.,  2020 ).

During the academic year, some students who were originally placed in PIC demonstrated that their fundamental skills were not at the level initially expected based on ACT/SAT scores, previous course work, and pretest performance. In these cases, those students were moved to the PCAT course and thereby removed from the study. In addition, some students were separated from USMAPS during the academic year removing them from the study. There were also a few cases where students were removed due to incomplete data. Consequently, our initial study of \(N=157\) students reduced to \(N=128\) students. This left \(N=60\) students in the control group and \(N=68\) in the treatment group.

This study was conducted within 17 sections of an in-person class that met daily. Classroom (action) research adds elements that are more challenging to control than if the research was done in a lab setting. These elements included student aptitude, instructor experience, and the hour of the day the class met. We address these in Sect.  4 . Another challenge we faced when doing research in the classroom (instead of a lab) was how to handle giving feedback and how to control when and if students accessed the feedback later. We did not give feedback immediately after the quizzes to prevent students from sharing answers in between classes as that would affect the exposure gap (Cepeda et al.  2008 ) and desired difficulty (Bjork and Bjork  1992 ) we had designed into the treatment.

There were also advantages to conducting the study in the classroom rather than in the lab. We were fortunate to belong to a mathematics department in which all instructors were willing and interested in this study, which allowed us a sample size of 128 students. We also had the benefit of studying the retention of mathematical concepts while drawing on insights into retention gained from lab studies using Swahili–Swedish word pairs (Bertilsson et al. 2017 ) and loosely connected word pairs (Kornell et al. 2009 ).

Our experiment followed the progression outlined in Fig. 1. First, a pretest was administered to all students to establish a baseline comprehension of fundamental concepts. Then, the students were instructed in the fundamental concepts of mathematics in quarter one. Next, all students took weekly quizzes: the treatment group was quizzed on the spaced fundamental concepts, and the control group on the topics currently being learned. Finally, all students were given a post-test to determine their retention of the fundamental concepts. Each component is discussed in detail in the following sections.

Figure 1. Progression of the spaced recall experiment. Students take different weekly quizzes based on whether they are in the control or treatment group.

Pretest: establish a baseline comprehension of fundamental concepts

To assess long-term retention we considered 12 mathematical concepts identified by the United States Military Academy's mathematics department as fundamental for entering students. These topics are listed in Table 1 and are the fundamental concepts taught in quarter one of the PIC course. The expectation is that students will remember these concepts when they are assessed on the USMAPS post-test exam 6 months later.

The topics that we cycled in the treatment group come from a document published by the United States Military Academy entitled "Required Mathematical Skills for Entering [Students]." From this document we chose 12 of the 40 skills because those 12 are taught in the first 8 weeks of the academic year. This allowed us to examine an exposure gap of quizzing three of the topics every 28 days, with the goal of reduced forgetting when the topics were tested again at the end of the year on a multiple-choice post-test. Had we chosen topics taught after the first 8 weeks of school, we would have run out of time in the school year to test this exposure gap, which was designed based on the research of Cepeda et al. ( 2008 ). We quizzed three different topics in each retention quiz so that each of the twelve topics received the same exposure.
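The rotation described above can be sketched in code. This is a hypothetical illustration (the placeholder topic names and the exact ordering are ours, not the study's actual quiz generator): with 12 topics split into four groups of 3, weekly quizzes cycle through the groups so that each topic reappears every fourth week, producing the ~28-day exposure gap.

```python
# Hypothetical sketch of the treatment quiz rotation (not the study's
# actual scheduling code): 12 fundamental topics, 3 per weekly quiz,
# so each topic recurs every 4 weeks (the ~28-day exposure gap).
TOPICS = [f"topic_{i}" for i in range(1, 13)]  # placeholder topic names

def quiz_topics(week: int) -> list[str]:
    """Return the 3 topics quizzed in a given week (weeks are 1-indexed)."""
    group = (week - 1) % 4              # position within the 4-week cycle
    return TOPICS[3 * group : 3 * group + 3]

# Over the 16 retention quizzes, every topic is quizzed exactly 4 times.
schedule = [quiz_topics(w) for w in range(1, 17)]
```

With this rotation, weeks 1 and 5 quiz the same three topics, matching the design in which each concept is cycled four times across the 16 quizzes.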

When the students complete our program and are admitted to the United States Military Academy, they take a multiple-choice Fundamental Concepts Exam within roughly the first week of classes. This exam is built from the 40 "Required Mathematical Skills for Entering [Students]." Historically, our student body scores very low on the Fundamental Concepts Exam despite the fact that our curriculum covers all 40 required skills. The frustration of knowing that our students continued to score poorly is what led us to research ways to improve long-term retention of these skills. These fundamental skills are also the building blocks for future coursework at the United States Military Academy in advanced mathematics, physics, chemistry, and engineering. We were experiencing a dilemma similar to the one described by Hopkins et al. ( 2016 ), whose study of spaced versus massed practice took place in a course titled Introductory Calculus for Engineers.

The pretest that students took when they arrived at USMAPS in July served as the initial measurement of knowledge for 10 of the 12 fundamental concepts. The pretest is a 50-problem multiple-choice exam that students have 80 min to complete; the post-test is the same exam given the following April. We mapped the 50 pretest problems against the 12 identified fundamental concepts and found that two of the topics were not directly assessed. Although those two topics were included in the weekly quizzes during the study, they were not analyzed as part of the study results.

Instruction in fundamental concepts

During quarter one, and prior to the start of the study, all students in the PIC classes received instruction in the same mathematical concepts, including the 12 fundamental concepts. At the end of quarter one, all PIC students took a final exam consisting entirely of open-response problems (no multiple choice). The final exam was "group-graded": each instructor was given a rubric for a specific problem on the final exam and graded that problem for every student in the course. This is our practice for grading major exams, building consistency and fairness into our grading procedure. Performance on this final exam was one metric used to establish the initial statistical equivalence between the treatment and control groups (see Sect. 4.1).

Weekly quizzes

Once a week during quarters two and three and the first two weeks of quarter four, all PIC students took a retention quiz (RQ). The RQs were administered and graded using a web-based platform. The quizzes were given during class at a time of the instructor's discretion: students logged into the platform, and the instructor provided a password that allowed them to begin the 10-minute timed quiz. The platform displayed a timer and no longer accepted submissions once the time expired. It automatically graded the open-response submissions and released the students' scores later that day, after all students had taken the RQ. All sections of PIC took the weekly RQ on the same day.

For motivational purposes the quizzes were worth points, but they counted for only 4% of students' total grade in quarters two and three and had no effect on the final grade in quarter four. Students in the control group were given RQs with three questions on the mathematical content currently being covered in the course (massed recall). Students in the treatment group were given RQs with three questions drawn only from the 12 identified fundamental concepts taught in quarter one (spaced recall). Each treatment quiz addressed three fundamental concepts, and the concepts cycled every 28 days, based on the exposure gap suggested by Cepeda et al. ( 2008 ) to achieve the retention interval of 6 months. We also staggered the three concepts being quizzed each class period to prevent any loss of the desired difficulty that might come from students sharing what was on their RQ. Regardless of content, all quiz questions were open response. Scores and worked solutions were available to students at the end of the academic day.

Class time was not used to discuss quiz solutions, because our design sought a classroom strategy that would not take too much time away from teaching new material, as Lang ( 2016 ) suggested. Each fundamental concept was cycled four times over the course of the 16 RQs. Tables 2 and 3 provide examples of a treatment RQ and a control RQ.

Post-test: establish retention of fundamental concepts

Six months after the quarter one final exam, and after the 16 RQs had been administered, all students took the post-test simultaneously under identical conditions. We analyzed each group's post-test scores to determine whether there was any significant difference in performance between the treatment and control groups. Again, two of the 12 mathematical concepts cycled during the treatment RQs were not assessed on the post-test, so those two concepts, radicals to rational exponents and distance/rate/time, were not measured.

Measures (retention index)

To measure whether the spaced weekly RQs led to long-term retention, we compared the performance of all 128 students on 24 selected problems from the pre/post-test. These 24 problems assessed 10 of the fundamental concepts cycled in the treatment group RQs. Success or failure on each problem was classified as shown in Fig. 2.

Figure 2. Classification of retention success or failure, used to determine whether RQs led to long-term retention.

The pretest and post-test were identical 50-question multiple-choice assessments. We analyzed only the 24 questions that assessed one of the 10 topics cycled in the treatment group. A problem was considered correct if the student chose the right answer out of the four options, and incorrect otherwise. Because our hypothesis was that the spaced recall quizzes would reduce forgetting, a student was credited with a success for any problem answered correctly on the post-test, even if it had been answered incorrectly on the pretest; likewise, an incorrect post-test answer counted as a retention failure regardless of the student's pretest performance on that topic. After tallying successes and failures on the 24 problems for each student, we calculated the proportion of successes and named this the student's retention index. Comparing the retention indices of students in the control and treatment groups allowed us to measure the effect of spaced recall, in the form of quizzing, on long-term retention.
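As a concrete sketch of this scoring rule (a hypothetical helper, not the authors' analysis code): the retention index is simply the proportion of the 24 tracked post-test problems a student answers correctly, with the pretest playing no role in the success/failure classification.

```python
def retention_index(post_correct: list[bool]) -> float:
    """Proportion of the tracked post-test problems answered correctly.

    A correct post-test answer counts as a retention success regardless
    of the student's pretest answer on the same problem.
    """
    return sum(post_correct) / len(post_correct)

# Example: a student who answers 18 of the 24 tracked problems correctly
# has a retention index of 18 / 24 = 0.75.
example = [True] * 18 + [False] * 6
```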

The reduction in forgetting was pronounced. To quantify the gains, we used several statistical techniques: data visualization with density plots, t-tests, and analysis of variance (ANOVA). The results are reported in this section and summarized in Table 4. We used the 50-question post-test to establish retention of the fundamental mathematics concepts. Figure 3 shows the distribution of scores for the treatment and control groups. The treatment group's mean score \((72.4\%)\) is higher than the control group's \((68.5\%)\), and the difference is statistically significant at the 0.05 level. We compared mean scores using a conservative hypothesis test of whether the true population mean scores were the same; the p-value of 0.011 provides strong evidence against this hypothesis. The treatment group outperformed the control group by \(3.9\%\) in mean score.
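The original analysis was done in R (see Data availability); the group comparison can be sketched with a hand-rolled two-sample t statistic. Welch's variant, which does not assume equal variances, is one standard conservative choice, though the paper does not name the exact test used, so this is an assumption; the function and data below are illustrative, not the study's.

```python
from math import sqrt
from statistics import mean, variance

def welch_t(a: list[float], b: list[float]) -> float:
    """Welch's two-sample t statistic (no equal-variance assumption).

    Illustrative sketch of a group mean comparison; the study's exact
    test variant is not specified, so Welch's is an assumption here.
    """
    va, vb = variance(a), variance(b)   # sample variances
    return (mean(a) - mean(b)) / sqrt(va / len(a) + vb / len(b))
```

A large-magnitude t statistic (small p-value) indicates that a mean difference as large as the observed one would be unlikely if the two groups' population means were equal.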

Figure 3. Distribution of post-test scores by group. The mean score difference of \(3.9\%\) is statistically significant; the treatment group scored higher on the post-test.

Next, we examined the treatment and control groups' performance on the 24 problems within the 50-problem post-test that directly assessed 10 of the 12 fundamental concepts, using the same conservative hypothesis test as in the post-test analysis. As shown in Fig. 4, the mean retention index of the treatment group \((81.6\%)\) is higher than that of the control group \((77.3\%)\). The p-value of 0.012 provides statistically significant evidence that the control and treatment groups did not perform the same on those 24 problems. The treatment group's mean was \(4.2\%\) greater than the control group's, indicating that the treatment group was more successful in reducing forgetting of the fundamental concepts.

Figure 4. Distribution of the success in reducing forgetting of the 10 fundamental concepts within the post-test, by group. The mean difference of \(4.2\%\) is statistically significant; the treatment group scored higher on the fundamental concepts and was more successful in reducing forgetting.

In this section we explore other factors that bear on the strength of our results: latent (prior) student aptitude, instructor experience level, class hour of the day, and study lead bias, examined using common statistical techniques. A statistical summary of these factors is included in Table 4.

Student aptitude

Given that new mathematical knowledge builds on prior mathematical understanding, we wanted to verify that our random assignment resulted in two groups with relatively equal mathematical foundations prior to treatment. Without the opportunity to balance our groups for mean SAT/ACT score and mean high school grade point average, two measures that might indicate the strength of a student's mathematical foundation, we used performance on the pretest to demonstrate that the two groups could be considered equally balanced in mathematical preparedness. The pretest was administered in late July, simultaneously and under identical conditions for all students. Figure 5 shows the distribution of scores for the treatment and control groups. A t-test under the hypothesis that the two mean scores were equal produced a p-value of 0.841 (see Table 4). Consequently, although the treatment group's mean score \((47.0\%)\) is slightly below the control group's \((48.8\%)\), the difference is not statistically significant. The random assignment of students to treatment or control created two groups with relatively equal mathematical foundations.

Figure 5. Distribution of pretest scores by group. The mean score difference of 1.8% is not statistically significant; the groups have relatively equal mathematical foundations.

We also examined student performance on the quarter one final exam, which students took after receiving instruction in the 12 fundamental concepts and prior to the start of the study. We wanted to ensure that quarter one instruction did not favor one group over the other and thereby create an imbalance before the treatment began. Figure 6 shows the distribution of quarter one final exam scores for the treatment and control groups. The mean scores are almost identical, with the treatment group \((79.5\%)\) only slightly above the control group \((79.2\%)\). As before, we used a t-test under the hypothesis that the means were equal; the p-value of 0.823 (see Table 4) indicates that the slight difference is not statistically significant. Taken together, the pretest and quarter one final exam results establish that the initial assignment produced two groups equally balanced in mathematical preparedness before the treatment began.

Figure 6. Distribution of quarter one final exam scores by group. The mean score difference is 0.3%, so both groups were balanced in mathematical preparedness at the end of quarter one.

Instructor experience level

The faculty members who taught the Precalculus class have teaching experience spanning 5 to 20 years. Is it possible that more experienced instructors are better skilled at their craft and that, as a result, their students scored higher on the retention quizzes regardless of treatment or control assignment? We conducted a two-way ANOVA and found almost no evidence that student performance was affected by the instructor's level of teaching experience (see Table 4). This result is powerful because it suggests that spaced recall in the form of recall quizzes can be effective for any instructor, regardless of years of teaching experience.
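The study used a two-way ANOVA (treatment condition crossed with instructor experience). As a simplified illustration of the underlying idea, the one-way ANOVA F statistic below compares between-group to within-group variability; this sketch is ours, not the study's code, and it omits the second factor and any interaction term.

```python
from statistics import mean

def anova_f(groups: list[list[float]]) -> float:
    """One-way ANOVA F statistic: between-group mean square divided by
    within-group mean square. Values near 1 suggest the group means
    (e.g., scores grouped by instructor) do not differ beyond chance."""
    k = len(groups)                                  # number of groups
    n = sum(len(g) for g in groups)                  # total observations
    grand = mean(x for g in groups for x in g)       # grand mean
    ss_between = sum(len(g) * (mean(g) - grand) ** 2 for g in groups)
    ss_within = sum((x - mean(g)) ** 2 for g in groups for x in g)
    return (ss_between / (k - 1)) / (ss_within / (n - k))
```

In the full two-way design, an analogous F statistic is computed for each factor (and their interaction) so that an instructor effect can be tested while accounting for the treatment/control split.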

Class hour of the day

The hour at which students attend class is a possible concern: one often hears that students would perform better without a mathematics class early in the morning, right after lunch, or at the end of the day. Eliasson et al. ( 2010 ) studied how earlier wake times affect student performance, Trockel et al. ( 2000 ) found early wake-up times to be the biggest contributor to differences in grade point averages, and a recent claim suggests early morning classes may impede performance (Yeo et al., 2021). To explore this influence, we conducted a two-way ANOVA and found little evidence that student performance in this experiment varied by class hour (see Table 4). This may be because, although students have class at different times of the day, the whole student body is required to wake at the same time for a morning accountability formation. Importantly, the claim that the treatment was effective in reducing forgetting holds regardless of the time at which students attend class.

Study lead bias

Two of the six instructors for this Precalculus course designed the experiment and spent significant time reading the research on the effectiveness of spaced learning techniques and mitigating factors. Both instructors taught a control, a treatment, and a mixed section. To determine whether these instructors subconsciously changed their techniques in response to their understanding of the topic under study, we used a t-test under the hypothesis that student performance was the same for study leads and non-study leads. A p-value of 0.899 indicated almost no evidence that student performance varied as a result of being in a study lead's class (see Table 4).

Classroom implications

The results of this study clearly indicate the effectiveness of spaced recall quizzing at reducing student forgetting. This is similar to the result of Hopkins et al. ( 2016 ); however, we obtained it without immediate feedback. Used as a classroom strategy, spaced recall quizzing takes only a minimal amount of time away from the current curriculum. Lang ( 2016 ) describes this kind of approach as a "small teaching" tool that can be added to a syllabus without a major overhaul of the curriculum. This small activity had a large positive influence on our students' ability to recall fundamental mathematical concepts at the end of the course. Some implications are worth considering when spaced recall quizzing is used as a classroom strategy: the impact on instructional time, the practice of providing feedback on student solution attempts, and the effect the quizzes may have on student affect. We discuss each below.

A possible argument against spaced recall quizzes is that they take time away from the instruction of current concepts and therefore reduce how well students learn new material. We anticipated that we might observe this because the retention quizzes (RQs) differed in content between conditions: the control group's RQs addressed current material and the treatment group's did not. Because of the control group's extra exposure to and time with the current material, we expected it to perform better than the treatment group on the quarter three final exam. To check for this possibility, we collected data on the quarter three final exam for both groups. At the end of quarter three, all PIC students took a final exam consisting entirely of open-response problems (no multiple choice), group-graded with rubrics in the same manner as the quarter one final to build consistency and fairness into our grading procedure. Figure 7 shows the distribution of scores for the treatment and control groups on that exam. Although the control group's mean score \((76.4\%)\) is slightly above the treatment group's \((74.9\%)\), a t-test of whether the two groups performed the same produced a p-value of 0.400. This provides evidence that the mean quarter three final exam scores were similar between the two groups, indicating that both groups understood the new concepts equally well (see Table 4). So not only did revisiting fundamental concepts on the spaced recall quizzes reduce forgetting of those concepts for the treatment group; the lost instructional time also did not harm their ability to learn new concepts. Here our results differ from those of Hopkins et al. ( 2016 ): although both studies found that the spaced (treatment) students outperformed the massed (control) students on the spaced material, we found the control and treatment groups to perform the same on current material, whereas Hopkins et al. ( 2016 ) found the spaced learners outperformed the massed learners on current material as well.

Figure 7. Distribution of quarter three final exam scores by group. The mean score difference of 1.5% is not statistically significant; both groups performed equally well on current material.

As previously mentioned, several studies have addressed the role that feedback plays in retrieval attempts, particularly unsuccessful ones (Benjamin & Tullis, 2010; Karpicke and Roediger III, 2007; Kornell et al., 2009). These studies report that the learning benefits of retrieval attempts are enhanced when students receive feedback on their solution attempts. The design of our study prevented us from discussing quiz solutions during class time, because students might discuss the solutions with classmates who had not yet taken their retention quiz that day. Any such discussion would interfere with the designed exposure gap informed by the research of Cepeda et al. ( 2008 ) and with the "desired difficulty" mentioned by Roediger III and Karpicke ( 2006b ), a phrase coined by Bjork and Bjork ( 1992 ). This precluded immediate feedback on solution attempts. Students could log into the web-based platform later in the day to see their scores and the solutions, but we could not ensure that they did so. This is worth noting because, even without immediate feedback or any guarantee that students engaged with feedback later, the RQs produced statistically significant improvements in students' long-term retention (see Sect. 3). Consequently, even in classroom situations where immediate feedback is not possible, quizzing alone is still effective in improving long-term retention. This is consistent with the findings of Roediger III and Karpicke ( 2006b ) and Karpicke and Roediger III ( 2007 ) that the testing effect occurs even when feedback is not given.

RQs have been adopted as part of our curriculum because of their success in reducing forgetting. As we implement the RQs as a practice rather than a research study, we plan to provide immediate feedback and look forward to analyzing retention with immediate feedback in future studies. One reviewer noted that the success of the retention quizzes could depend on whether students logged into the web-based platform to review the feedback; whether feedback played a role in increasing retention is an item for further investigation.

In addition to the quantitative measures used to determine the efficacy of the RQs in reducing forgetting, we used the standard end-of-quarter surveys to ask students to comment on the utility of the RQs. Although our research did not formally address student affect, what we observed is worth noting. The feelings of many students in the study can be summed up by one who said,

“I have a ton of [n]egative feelings towards the retention quizzes. These quizzes are the dumbest thing I have ever taken in school. I think the idea of going over past knowledge that will not be on the [current exam] is not worth the wasted 10 minutes at the beginning of class. I think that the retention quizzes are a waste of time and are an easy F in the grade book. This might just be because I am forgetting the minor past stuff and trying to focus more on what we are learning now but I believe that the retention quiz is not necessary.”

Other students expressed similar sentiments, noting the difficulty of the RQs, and expressed concern about their grade.

“I felt negatively about retention quizzes because I often did poorly on a subject that I had previously done very well on.”
“It was a stressful thing to jump into in class and generally brought everyone’s grade down.”
“I feel like the retention quizzes only takes away from our grade. Our minds are focused on what we are learning currently then we have to revert back to old stuff.”

These sentiments did not surprise us. Lang ( 2016 ) explained that struggling to recall information a student thought they had already mastered can be frustrating, and he suggested informing students about the research-supported benefits of quizzing and retrieval practice. Because these quizzes are a recall exercise rather than a formal assessment, the instructor can also reduce potential negative effects on students' grades by making them low stakes, giving them little weight in the overall grade.

Another option might be not to grade the quizzes at all, as long as there is some other motivation for students to put forth effort. A key factor in the success of the RQs is effortful recall (Brown et al., 2014), so motivating students to expend effort is an important consideration. Helping students understand the benefits that come even from unsuccessful retrieval attempts could also resolve some of their frustration and counter any negative learning effects of poor affect. Future research could investigate the influence that recall quizzes have on student affect and consider effective ways of addressing any negative aspects. Based on what we observed informally, any attempt to use spaced recall quizzing to improve long-term retention in a classroom setting should include a plan to address students' frustration with the experience. If the RQs are to be graded, student concerns about the impact of the quizzes on their grades should also be addressed. One reviewer suggested finding ways to make the effort students put into the RQs a rewarding experience that lets them see the benefit of effortful retrieval rather than a punishment to their grade. This is a suggestion we intend to implement, as our school plans to continue using RQs for all students given the significant reduction in forgetting of fundamental mathematical concepts.

This study demonstrated the effectiveness of spaced recall quizzing in reducing forgetting and thereby improving students' long-term retention. Specifically, identifying essential knowledge, such as fundamental concepts, and then giving students weekly spaced quizzes in which the identified concepts cycle monthly positively affects students' ability to recall those concepts at the end of the academic year. Importantly, implementing this small teaching strategy does not reduce how well students learn new material, despite the instructional time lost to quizzing. Additionally, in situations where it is not possible to provide students with feedback on retrieval attempts, quizzing alone is still effective in improving long-term retention. We could not formally address the effects of quizzing on student affect, but end-of-course surveys exposed student frustration with the retention quizzes. The potential for negative responses from students is not something to ignore; anticipating and addressing negative student feelings is an important consideration if spaced recall quizzing is to be implemented in the classroom.

Data availability

The datasets and R codes generated during this study are available from the corresponding author on reasonable request.

Arnold, K. M., & McDermott, K. B. (2013). Test-potentiated learning: Distinguishing between direct and indirect effects of tests. Journal of Experimental Psychology: Learning, Memory, and Cognition, 39 (3), 940.


Begolli, K. N., Dai, T., McGinn, K. M., & Booth, J. L. (2021). Could probability be out of proportion? Self-explanation and example-based practice help students with lower proportional reasoning skills learn probability. Instructional Science, 49 , 441–473.


Benjamin, A. S., & Tullis, J. (2010). What makes distributed practice effective? Cognitive Psychology, 61 (3), 228–247.

Bertilsson, F., Wiklund-Hörnqvist, C., Stenlund, T., & Jonsson, B. (2017). The testing effect and its relation to working memory capacity and personality characteristics. Journal of Cognitive Education and Psychology, 16 (3), 241–259.

Bjork, R. A., & Bjork, E. L. (1992). A new theory of disuse and an old theory of stimulus fluctuation. In From learning processes to cognitive processes: Essays in honor of William K. Estes (Vol. 2, pp. 35–67).

Brown, P. C., Roediger, H. L., III., & McDaniel, M. A. (2014). Make it stick: The science of successful learning. Belknap Press of Harvard University Press.

Cepeda, N. J., Pashler, H., Vul, E., Wixted, J. T., & Rohrer, D. (2006). Distributed practice in verbal recall tasks: A review and quantitative synthesis. Psychological Bulletin, 132 (3), 354.

Cepeda, N. J., Vul, E., Rohrer, D., Wixted, J. T., & Pashler, H. (2008). Spacing effects in learning: A temporal ridgeline of optimal retention. Psychological Science, 19 (11), 1095–1102.

Ebbinghaus, H. (1964). Memory: A contribution to experimental psychology (H. A. Ruger & C. E. Bussenius, Trans.). New York, NY: Teachers College. (Original work published 1885 as Über das Gedächtnis.)

Eliasson, A. H., Lettieri, C. J., & Eliasson, A. H. (2010). Early to bed, early to rise! Sleep habits and academic performance in college students. Sleep and Breathing, 14 (1), 71–75.

Hopkins, R. F., Lyle, K. B., Hieb, J. L., & Ralston, P. A. (2016). Spaced retrieval practice increases college students’ short- and long-term retention of mathematics knowledge. Educational Psychology Review, 28 , 853–873.

Jaeger, A., Eisenkraemer, R. E., & Stein, L. M. (2015). Test-enhanced learning in third-grade children. Educational Psychology, 35 (4), 513–521.

Kamuche, F. U., & Ledman, R. E. (2005). Relationship of time and learning retention. Journal of College Teaching & Learning (TLC), 2 (8), 10.

Karpicke, J.D. (2017). Retrieval-based learning: A decade of progress. Grantee Submission.

Karpicke, J. D., & Roediger, H. L., III. (2007). Expanding retrieval practice promotes short-term retention, but equally spaced retrieval enhances long-term retention. Journal of Experimental Psychology: Learning, Memory, and Cognition, 33 (4), 704.

Kornell, N., Hays, M. J., & Bjork, R. A. (2009). Unsuccessful retrieval attempts enhance subsequent learning. Journal of Experimental Psychology: Learning, Memory, and Cognition, 35 (4), 989.

Lang, J. M. (2016). Small teaching: Everyday lessons from the science of learning . Wiley.

Liming, M. C., & Cuevas, J. (2017). An examination of the testing and spacing effects in a middle grades social studies classroom. Georgia Educational Researcher, 14 (1), 103–136.

Lyle, K. B., & Crawford, N. A. (2011). Retrieving essential material at the end of lectures improves performance on statistics exams. Teaching of Psychology, 38 (2), 94–97.

McDaniel, M. A., Anderson, J. L., Derbish, M. H., & Morrisette, N. (2007). Testing the testing effect in the classroom. European Journal of Cognitive Psychology, 19 (4–5), 494–513.

Pearson, W., Jr., & Miller, J. D. (2012). Pathways to an engineering career. Peabody Journal of Education, 87 (1), 46–61.

Roediger, H. L., III., Agarwal, P. K., McDaniel, M. A., & McDermott, K. B. (2011). Test-enhanced learning in the classroom: Long-term improvements from quizzing. Journal of Experimental Psychology: Applied, 17 (4), 382.

Roediger, H. L., III., & Karpicke, J. D. (2006a). The power of testing memory: Basic research and implications for educational practice. Perspectives on Psychological Science, 1 (3), 181–210.

Roediger, H. L., III., & Karpicke, J. D. (2006b). Test-enhanced learning: Taking memory tests improves long-term retention. Psychological Science, 17 (3), 249–255.

Rohrer, D., & Taylor, K. (2007). The shuffling of mathematics problems improves learning. Instructional Science, 35 (6), 481–498.

Rowland, C. A. (2014). The effect of testing versus restudy on retention: a meta-analytic review of the testing effect. Psychological Bulletin, 140 (6), 1432.

Sanchez, D. R., Langer, M., & Kaur, R. (2020). Gamification in the classroom: Examining the impact of gamified quizzes on student learning. Computers & Education, 144 (103), 666.

Shirvani, H. (2009). Examining an assessment strategy on high school mathematics achievement: Daily quizzes vs. weekly tests (pp. 34–45). American Secondary Education.

Taylor, A. T., Olofson, E. L., & Novak, W. R. (2017). Enhancing student retention of prerequisite knowledge through pre-class activities and in-class reinforcement. Biochemistry and Molecular Biology Education, 45 (2), 97–104.

Trockel, M. T., Barnes, M. D., & Egget, D. L. (2000). Health-related variables and academic performance among first-year college students: Implications for sleep and other behaviors. Journal of American College Health, 49 (3), 125–131.

Yang, C., Luo, L., Vadillo, M. A., Yu, R., & Shanks, D. R. (2021). Testing (quizzing) boosts classroom learning: A systematic and meta-analytic review. Psychological Bulletin, 147 (4), 399.

Yeo, S. C., Lai, C. K., Tan, J., Lim, S., Chandramoghan, Y., & Gooley, J. J. (2021). Large-scale digital traces of university students show that morning classes are bad for attendance, sleep, and academic performance. BioRxiv . https://doi.org/10.1101/2021.05.14.444124

Download references

Acknowledgements

We would like to thank the USMAPS Math Department head, Dr. Alex Heidenberg, for adopting the retention quizzes into the Precalculus course. We also appreciate the support and participation of the instructors: Elizabeth Giebler, Justyna Marciniak, and Fran Teague. Because of their involvement along with the rich student feedback, the data was more robust than anticipated.

No funding was received for conducting this study.

Author information

Diane S. Lindquist, Brenda E. Sparrow and Joseph M. Lindquist have contributed equally to this work.

Authors and Affiliations

Mathematics Department, US Military Academy Preparatory School, 950 Reynolds Road, West Point, NY, 10996, USA

Diane S. Lindquist & Brenda E. Sparrow

Department of Mathematical Sciences, US Military Academy, 601 Thayer Road, West Point, NY, 10996, USA

Joseph M. Lindquist


Contributions

Lindquist, D and Sparrow, B jointly conceived the study, trained instructor participants, and collected and interpreted data findings. Lindquist, J conducted all statistical explorations.

Corresponding author

Correspondence to Diane S. Lindquist .

Ethics declarations

Competing interests.

The authors have no financial or proprietary interests in any material discussed in this article.

Ethical approval

This research was submitted to the West Point Institutional Review Board and the protocol was deemed to meet the requirements for exempt status under 32CFR219.104(d)(1). Since data involved secondary analysis of routinely collected data, consent was not required.


Statistical tests

While descriptive statistics may clearly show a difference between the control and treatment groups, the reader may be interested in whether the results generalize to a broader population (say, high school students, or a similar stratum of college students). For this, we rely upon inferential statistics to determine the likelihood of similar results given a different sampling of students. This section briefly outlines the tests used to assess this generalizability.

Data Visualization: While summary statistics (maximum, minimum, mean, variance, etc.) are useful point estimates of student performance, they may mask key features. A common tool to visualize these features is the density plot, which estimates and plots the underlying probability density function, resulting in a “smoothed” histogram that is not sensitive to bin-size selection. This analysis is of the type shown in Fig.  4 .
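As an illustration of the idea (a sketch of ours, not the analysis code used in the study), a Gaussian kernel density estimate averages a Gaussian “bump” centered at each observation; the bandwidth h plays the role that bin width plays for a histogram:

```python
import math

def kde(data, h):
    """Gaussian kernel density estimate with bandwidth h (a 'smoothed' histogram)."""
    norm = len(data) * h * math.sqrt(2 * math.pi)
    def density(x):
        # average of Gaussian bumps centered at each observation
        return sum(math.exp(-0.5 * ((x - d) / h) ** 2) for d in data) / norm
    return density

scores = [72.0, 75.0, 78.0, 81.0, 90.0]  # hypothetical quiz scores
f = kde(scores, h=3.0)
```

Evaluating f over a grid of x values and plotting the result yields the smoothed curve; in practice, a library routine such as scipy.stats.gaussian_kde can also choose the bandwidth automatically.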

T-Test: When comparing two groups (i.e., between control and treatment ), one is often interested in whether an observed performance difference ( \(\mu _1-\mu _2\) ) is significant. The t-test examines the null hypothesis

\(H_0: \mu _1 = \mu _2\)

that the mean performances are the same, and returns a p -value that can be interpreted as the probability that the observed difference between the two groups is due to simple chance. A p -value of 0.05 indicates that the difference could be explained by simple chance in just 5 out of every 100 experiments; given this unlikely outcome, the null hypothesis should be rejected. In most contexts, a p -value greater than 0.05 suggests insufficient evidence to reject the null hypothesis. For this analysis, all t-tests were computed using Welch’s t-test, which requires generally normal responses (evident from Figs.  3 ,  4 , and others) but relaxes the assumption of equal variances between the two groups required by Student’s t-test.
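A minimal sketch of the Welch statistic (our illustration; the study would have used standard statistical software). The t statistic divides the mean difference by its standard error, and the Welch-Satterthwaite formula gives the degrees of freedom used to look up the p-value in the t distribution:

```python
from statistics import mean, variance

def welch_t(x, y):
    """Welch's t statistic and Welch-Satterthwaite degrees of freedom
    for two independent samples with possibly unequal variances."""
    v1, v2 = variance(x) / len(x), variance(y) / len(y)  # squared standard errors
    t = (mean(x) - mean(y)) / (v1 + v2) ** 0.5
    df = (v1 + v2) ** 2 / (v1 ** 2 / (len(x) - 1) + v2 ** 2 / (len(y) - 1))
    return t, df

t, df = welch_t([1, 2, 3, 4], [2, 3, 4, 5])  # toy data
```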

Analysis of Variance (ANOVA): When comparing more than two groups (i.e., between more than 2 instructors ), one is often interested in whether an observed performance difference is significant. ANOVA tests the null hypothesis

\(H_0: \mu _1 = \mu _2 = \cdots = \mu _n\)

that the mean performances are the same. Here, \(\mu _n\) is the mean performance of the \(n^{th}\) group. If the test returns a statistically significant result, there is evidence to reject the null hypothesis ( \(H_0\) ), implying that at least one group is different from the others. One-way ANOVA tests for differences in one independent variable (say, control/treatment). Two-way ANOVA tests for differences in two independent variables (say, control/treatment and hour of the day taught). When reporting the degrees of freedom for ANOVA, we adopt the notation ( a,b ), where a represents the degrees of freedom for the “between-group” variance and b represents the degrees of freedom for the “within-group” variance.
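The one-way ANOVA F statistic and its ( a,b ) degrees of freedom can be sketched as follows (our illustration, standard library only; the p-value would come from the F distribution with these degrees of freedom):

```python
from statistics import mean

def one_way_anova(groups):
    """One-way ANOVA F statistic with (between, within) degrees of freedom."""
    k = len(groups)
    n = sum(len(g) for g in groups)
    grand = sum(x for g in groups for x in g) / n
    # variability of group means around the grand mean
    ss_between = sum(len(g) * (mean(g) - grand) ** 2 for g in groups)
    # variability of observations around their own group mean
    ss_within = sum((x - mean(g)) ** 2 for g in groups for x in g)
    df = (k - 1, n - k)  # (between-group, within-group)
    f_stat = (ss_between / df[0]) / (ss_within / df[1])
    return f_stat, df

f_stat, df = one_way_anova([[1, 2, 3], [2, 3, 4], [7, 8, 9]])  # toy data
```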

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/ .


About this article

Lindquist, D.S., Sparrow, B.E. & Lindquist, J.M. Spaced recall reduces forgetting of fundamental mathematical concepts in a post high school precalculus course. Instr Sci (2024). https://doi.org/10.1007/s11251-024-09680-w


Received : 17 June 2022

Accepted : 05 July 2024

Published : 03 September 2024

DOI : https://doi.org/10.1007/s11251-024-09680-w


Keywords

  • Spaced recall
  • Fundamental concepts
  • Mathematics

  • Open access
  • Published: 30 August 2024

Comparison of education using the flipped class, gamification and gamification in the flipped learning environment on the performance of nursing students in a client health assessment: a randomized clinical trial

  • Raziyeh Ghafouri 1 ,
  • Vahid Zamanzadeh 1 &
  • Malihe Nasiri 2  

BMC Medical Education, volume 24, Article number: 949 (2024)


Since effective education is one of the main concerns of every society and, in nursing, leads to the education of successful practitioners, the development of more effective learning and teaching methods is an educational priority in every country. The present study aimed to compare the effects of education using the flipped class, gamification, and gamification in a flipped learning environment on the performance of nursing students in a client health assessment.

The present study was a parallel randomized clinical trial. The participants were 166 nursing students. The clinical trial data were collected from December 14, 2023, to February 20, 2024. The inclusion criteria were nursing students who had passed the first semester, who were willing to participate and install the app on their mobile devices, and who had no experience with the application designed for this study. The participants were allocated to four groups using colored cards. In the first group, teaching was performed via gamification in a flipped learning environment; in the second group, via the gamification method; in the third group, via a flipped class; and in the fourth group, via the usual lecture method. Practical performance in physical health assessment was measured with 10 key-feature questions, and the satisfaction and self-efficacy of the students were assessed with questionnaires.

In this study, 166 nursing students (99 female and 67 male), with an average (standard deviation) age of 21.29 (1.45) years, participated. There was no statistically significant difference in the demographic characteristics of the participants in the four intervention groups ( P  > 0.05). Comparing the results before and after the intervention, the paired t test indicated a significant difference in the satisfaction, learning and self-efficacy of the learners ( P  < 0.001). In the comparison of the four groups, the ANOVA results for the average scores of knowledge evaluation and satisfaction after the intervention indicated a statistically significant difference ( P  < 0.001). When the knowledge evaluation scores of the groups were compared, the scores for gamification in the flipped learning environment differed significantly from those of the other methods ( P  < 0.05), and there was no significant difference between the scores for the flipped class and lecture methods ( P  = 0.43). According to the ANOVA results, when comparing the satisfaction scores of the groups, the students in the gamification in the flipped learning environment and gamification groups were more satisfied than those in the flipped class and lecture groups ( P  < 0.01).

Based on the results of the present research, it can be concluded that teaching methods affect students’ learning and satisfaction. In particular, using the flipped class method together with gamification was associated with greater attractiveness and satisfaction in addition to learning. Teachers can improve the effectiveness of education with their creativity, depending on the situation, time, cost, and available resources, by using and integrating educational methods.


Introduction

Effective education is one of the main concerns of every society [ 1 ]. Because traditional methods of teaching, learning and management have limited effectiveness [ 2 ], multiple active-learning strategies and the use of technologies have been recommended [ 3 , 4 , 5 ]; among these, it is helpful to integrate the flipped classroom approach with game-based methods [ 6 , 7 ]. The flipped classroom was introduced in 2007 by Bergmann and Sams, two chemistry teachers at Woodland Park High School in Colorado (USA). Their goal was to ensure that students who could not attend class for various reasons could proceed at the pace of the course and not be harmed by their absence [ 8 ]. Bergmann and Sams videotaped and distributed instructional content and found that this model allowed the teacher to focus more attention on the individual learning needs of each student [ 5 , 8 ].

In 2014, the Flipped Learning Network (FLN) was introduced, in which flipped learning was defined as “an educational approach in which direct instruction is transferred from the group learning dimension to individual learning, and in a dynamic and interactive learning environment, where the instructor guides students in applying concepts and engaging creatively with course content”. The four pillars of flipped learning, namely a flexible environment, a learning culture, purposeful content and a professional instructor, have been described [ 9 , 10 ]. In addition to the ever-increasing complexity of the healthcare environment and the rapid advancement of healthcare technology, a global pandemic (COVID-19) has affected educational structures. The pandemic caused a global educational movement toward blended learning to meet students’ technological and hands-on learning needs; indeed, at no time in history has there been such a sudden transition to this type of learning [ 11 ], in which the flipped classroom was widely used [ 9 ].

In nursing education, the use of flipped classrooms [ 9 , 12 ] and technologies [ 3 , 5 ] has been emphasized. The results of a systematic review of the effect of the flipped classroom on academic performance in nursing education indicated a positive effect, and the opinions of most students about this method included aspects such as its usefulness, flexibility, greater independence and greater participation [ 13 , 14 , 15 , 16 , 17 , 18 , 19 ]. According to the cognitive levels of Bloom’s taxonomy, with the flipped classroom method the student works through the simplest stage of the learning process at home, and the second stage, active learning with the help of the teacher and classmates, takes place during class time, which is used to deepen and consolidate learning [ 20 , 21 ]. In addition, the flipped classroom method has certain advantages over traditional learning: it is student-centered and makes students responsible for their own learning [ 22 ], and its use in nursing has been emphasized in systematic review studies [ 3 , 23 , 24 ].

One of the interactive, computer-based teaching methods is gamification. Gamification in education includes the use of game elements to increase motivation and participation and to involve students in the personal learning process [ 1 , 25 ]. It is an active education method: the gamification system increases the engagement and motivation of learners by provoking excitement and creating challenges for them. Additionally, this method provides an opportunity for testing in which, beyond the challenge itself, learners can display their achievements through competition [ 26 ].

Nursing education institutions are obliged to improve the ability of nursing students to make correct clinical judgments through various educational programs and the use of new teaching methods [ 27 , 28 ] so that when nursing students enter the clinic, they can fulfill their role as members of the medical team [ 27 ]. Therefore, it is necessary to carry out more research regarding the identification of effective teaching methods that can improve the attractiveness of education and its satisfaction among nursing students [ 1 , 27 ].

This study addresses the lack of comparative research on the effectiveness of flipped classrooms and gamification in nursing education, an area that has not been sufficiently explored. An advantage of these education methods is that they can be combined [ 6 , 7 ]: by combining the flipped class with gamification, more study time is provided by the flipped class, and the attractiveness of the method is provided by gamification [ 7 ]. Therefore, considering the attractiveness of a new application prepared for a flipped class, the current research was conducted to compare the effects of education using the flipped class, gamification and gamification in the flipped learning environment on the performance of nursing students in a client health assessment.

The present study was a parallel randomized clinical trial aimed at comparing the effect of education using the flipped class, gamification and gamification in the flipped learning environment on the performance of nursing students in a client health assessment. The clinical trial data were collected from December 14, 2023, to February 20, 2024.

Participants

First, in a call, 247 nursing students registered to participate in the study. After the entry criteria were checked, 188 people met the criteria for the study. The inclusion criteria were nursing students who had passed the first semester, who were willing to participate and install the app on their mobile devices, and who had no experience with the application designed for this study. The exclusion criteria were losing one’s mobile device or dropping out of the study, for example, because of transfer or migration, or no longer wishing to continue participating. Accordingly, 18 students were excluded for unwillingness to continue, 2 because of migration, and 2 because they lost their mobile devices (Fig.  1 ).

Fig. 1: Study and sampling process

The participants were allocated to four groups using colored cards. Before sampling, 188 cards in 4 colors (blue, red, black and white; 45 cards of each color) were prepared in one sealed pouch. After completing the informed consent and pre-test questionnaires, each student took a colored card from the pouch. Then, by lottery, it was determined that the participants with blue cards would join the gamification in a flipped learning environment group, those with red cards the gamification group, those with black cards the flipped class group, and those with white cards the lecture group. The study and sampling process is shown in Fig.  1 .

Intervention

The education course consisted of four 60-minute classes on health status assessment over 4 weeks; each group had one class per week. The educational content came from the health assessment and clinical examination courses of the Bachelor of Nursing Education curriculum, and the course plan was developed based on that curriculum.

For the intervention, the application was designed using the cascade (waterfall) model (initial analysis, system analysis, design, programming, testing (alpha and beta), implementation and modification) [ 29 , 30 ]. In the initial analysis stage, the need or problem, here the improvement of education, is raised, and it is asked whether technical solutions can be provided for it. If possible solutions exist, their practicality is evaluated, and in the system analysis the visual appeal, up-to-date information, simple language, and comprehensiveness of the information provided in the educational content are checked. In the design phase, the design of the desired system was written, and a program was written by the programmers according to the initial design of the system.

The educational content of the application was prepared based on the health assessment and clinical examination courses of the Bachelor of Nursing Education Program and approved by an expert panel. The application was designed in two parts: education and scenario-based games. In the education section of the application, the content of the education was presented, and in the scenario-based game section, 10 scenarios of health status assessment and clinical examination were designed based on real situations.

In the scenario-based game section of the application, each scenario was embedded as a game in such a way that the student first observes the patient’s chief complaint and must then complete the patient examinations and choose the correct answer. If they choose correctly, they receive a green card, and if they make a mistake, they receive a red card; they could collect up to 4 green cards in each scenario. A yellow card was shown when an answer was not incorrect but was not exact. In each scenario, students had to find the correct nursing diagnosis, providing a nursing diagnosis based on the priority of care in the scenario.
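The card-awarding logic described above might be sketched as follows (a hypothetical illustration; the function names and scenario content are ours, not the study's application code):

```python
def award_card(chosen, correct, close):
    """Return the feedback card for one examination step: green for the
    correct answer, yellow for a close but inexact answer, red for a mistake."""
    if chosen == correct:
        return "green"
    if chosen in close:
        return "yellow"
    return "red"

def score_scenario(answers, key):
    """Count green cards over a scenario's steps (up to four per scenario)."""
    cards = [award_card(a, k["correct"], k["close"]) for a, k in zip(answers, key)]
    return cards.count("green"), cards

# hypothetical scenario with four examination steps
key = [{"correct": "check airway", "close": []},
       {"correct": "measure BP", "close": ["check pulse"]},
       {"correct": "auscultate lungs", "close": []},
       {"correct": "assess pain", "close": []}]
greens, cards = score_scenario(
    ["check airway", "check pulse", "auscultate lungs", "give analgesic"], key)
```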

The fundamental elements of gamification are mechanics (motivating students through points, badges and rewards), dynamics (engaging users through stories and narratives), and aesthetics (the user’s experience of the application as user-friendly and attractive) [ 31 , 32 , 33 ]. The mechanics element was implemented in the application through the green cards at each stage, the dynamics element through the scenarios, and the aesthetics element was considered and checked in the alpha and beta tests.

In the test phase, the application was checked for errors and tested for user acceptance in two parts, the alpha and beta tests. In the alpha test, the program was used by the designers (four academic nurses and four IT specialists) as users, and in the beta test by a group of users (20 nursing students). The application was improved based on their feedback, and in the next stage the version approved by the designers and users was used in this study.

The fundamental elements of a flipped class are that the students read the content before the class and do the assignments in the class. In this study, this element was implemented: the provided content was given to participants at the start, the students read the content for each class beforehand, and they solved the assignments in class. The content for the flipped class group was provided as PowerPoint files, and for the gamification in the flipped learning environment group it was provided in the application.

Lecture group

In the lecture group, the educational content was delivered by the lecture method, and in each section, at the end of class, one of the designed scenarios was given to the students as an assignment to solve by the next week. By the end of the study, four scenarios had been completed by the students in this group as assignments.

Flipped class group

In the flipped class group, the content was prepared as four narrated PowerPoint presentations and given to the students in the first session. Students read the content of each class beforehand, and in class they discussed the educational content and solved the scenarios as assignments. Eight scenarios were discussed by the students in this group as assignments.

Gamification group

In the gamification group, in each class, after the educational content was presented, the homework was assigned, and students played one scenario of the application in class. Four scenarios were completed by the students in this group as assignments.

Gamification in the flipped learning environment group

In the gamification in the flipped learning environment group, the designed mobile application was provided in the first session of the course. Students had to read the content of each session before the class, and in class they discussed the educational content and solved the scenarios as assignments. Eight scenarios were completed as homework by the students in the gamification environment.

Data collection tools

In this study, a questionnaire with 10 key-feature questions (KFQs) was designed by an expert panel of 10 academic nurses. After the KFQ questionnaire was designed, its validity and reliability were examined. Validity was confirmed with the content validity ratio (CVR) by 14 experts (academic nurses), qualitative validity was assessed by 7 academic and 7 clinical nurses, and reliability was checked by test-retest. The CVR of the questionnaire was 0.96 and was confirmed. All seven academic and seven clinical nurses confirmed the qualitative validity of the questionnaire. According to the Lawshe tables, for a panel of this size (at least 10 people) the minimum acceptable content validity coefficient is 0.49 (18, 19), and the necessity of the items of the tool was confirmed.
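For reference, Lawshe's content validity ratio for a single item is computed from the number of experts who rate the item as essential; a minimal sketch (our illustration, not the study's code):

```python
def cvr(n_essential, n_experts):
    """Lawshe content validity ratio: ranges from -1 to 1; higher values
    mean stronger panel agreement that the item is essential."""
    half = n_experts / 2
    return (n_essential - half) / half

# e.g., 13 of 14 experts rating an item essential
value = cvr(13, 14)
```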

For the test-retest of the KFQ questionnaire, 10 nursing students participated. They filled out the questionnaire twice, with an interval of two weeks. The Spearman correlation coefficient between their answers was 0.93; a correlation coefficient above 0.7 is considered good [ 34 , 35 ].

Additionally, education satisfaction was investigated with the Measuring Student Satisfaction Scale from the Student Outcomes Survey [ 27 ], which includes 20 items. Its validity was confirmed with the CVR, and its reliability was checked by Cronbach’s alpha. The CVR of the questionnaire was 0.91 and was confirmed. Cronbach’s alpha was 0.69. A Cronbach’s alpha coefficient above 0.7 is considered good, 0.3–0.7 acceptable, and less than 0.3 poor [ 34 , 35 ]; the overall Cronbach’s alpha therefore indicated acceptable reliability.

The Sherer questionnaire was used to assess the self-efficacy of the nursing students [ 36 ]. This tool contains 17 items on a five-point Likert scale. Sherer et al. confirmed the reliability of the questionnaire with a Cronbach’s alpha of 0.76 [ 36 ]. For this questionnaire as well, the validity was confirmed with the CVR, and the reliability was checked by Cronbach’s alpha. The CVR of the questionnaire was 0.90 and was confirmed, and Cronbach’s alpha was 0.45.

Data analysis

The analysis of the research data was performed using the Statistical Package for the Social Sciences, version 20. The Kolmogorov-Smirnov test was used to assess the normality of the data. Data analysis was performed using descriptive statistics, such as percentages, means and standard deviations, and statistical tests, such as the chi-square test, paired t test, and ANOVA. In all statistical tests, the significance level was set at 0.05.
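For reference, the paired t statistic used for the pre-post comparisons reduces to a one-sample test on the pre-post differences; a minimal sketch (our illustration; the study used SPSS, and the p-value would come from the t distribution with n - 1 degrees of freedom):

```python
from statistics import mean, stdev

def paired_t(pre, post):
    """Paired t statistic: mean difference divided by its standard error."""
    diffs = [b - a for a, b in zip(pre, post)]
    n = len(diffs)
    return mean(diffs) / (stdev(diffs) / n ** 0.5), n - 1

t, df = paired_t([10, 12, 14], [12, 15, 16])  # toy pre/post scores
```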

In the present study, 166 nursing students, 99 women and 67 men, with an average (standard deviation) age of 21.29 (1.45) years, participated. The demographic characteristics of the participants are shown in Table  1 . The homogeneity of the intervention and control groups was checked with statistical methods, and the results are reported in Table  1 . There was no statistically significant difference in the demographic characteristics of the participants in the groups ( P  > 0.05).

Comparing the results before and after the intervention, the results of the paired t test indicated a significant difference in the satisfaction, learning and self-efficacy of the learners ( P  < 0.001). Table  2 shows the results of paired t tests.

The ANOVA showed a statistically significant difference between the mean scores of knowledge and satisfaction after the intervention in the four groups ( P  < 0.001). The ANOVA showed no significant difference between the mean self-efficacy scores after the intervention in the four groups ( P  = 0.101).

In the between-group analysis of the knowledge evaluation scores, there was a significant difference between the gamification in the flipped learning environment and gamification groups on the one hand and the flipped class and lecture groups on the other, assuming equal variances ( P  < 0.001). There was also a significant difference at the 0.05 level between the gamification in the flipped learning environment group and the gamification group ( P  = 0.03). Gamification and the flipped class showed no significant difference ( P  = 0.054), and there was no significant difference between the flipped class and lecture methods ( P  = 0.43).

According to the ANOVA results, when comparing the satisfaction scores of the groups, there was no significant difference between the means of the gamification in the flipped learning environment and gamification groups ( P  = 0.49); however, both differed significantly from the flipped class and lecture groups. Additionally, there was a significant difference between the flipped class and the lecture method ( P  < 0.01).

Discussions

This study aimed to compare the effects of the lecture method, flipped class, gamification and gamification in a flipped learning environment on the performance of nursing students in assessing the health status of clients. The demographic characteristics of the participants (gender, age, academic semester, grade point average and theory course score) had the same distribution among the four groups, with no statistically significant difference ( P  > 0.05).

Comparing the results before and after the training, the paired t test indicated a significant difference in the satisfaction, learning and self-efficacy of the learners ( P  < 0.001). These results indicate that all four teaching methods affected the learning, satisfaction and self-efficacy of students in evaluating the health status of their clients. However, in the comparison of the 4 groups, ANOVA revealed a statistically significant difference ( P  < 0.001). In the analysis comparing the knowledge evaluation scores of the gamification group with those of the other groups, there were significant differences ( P  < 0.05), and there was no significant difference between the flipped class and lecture methods ( P  = 0.439). According to the ANOVA results, the satisfaction scores were greater for the gamification in the flipped learning environment and gamification groups than for the flipped class and lecture groups ( P  < 0.01). The results of the present research indicate that teaching methods have an effect on students’ learning and satisfaction.

Rachayon and colleagues also used a task-based learning method in combination with digital games in a flipped learning environment to develop students’ English language skills, and their results likewise indicated the success of combining these methods [ 7 ]. Muntrikaeo and colleagues used a similar model of task-based learning in combination with games in a flipped environment for teaching English, and their findings were also successful [ 6 ]. The results of the current research, which integrated gamification into the flipped learning environment for teaching health status assessment to nursing students, are similar to those of the above studies.

Zou et al., in their systematic review, found that success in the flipped classroom is related to teachers’ creativity in making the classroom interactive, students’ readiness, and the use of technology [ 37 ]. In the present study, the flipped class, along with the use of gamification in the flipped learning environment, increased learner satisfaction and learning. Therefore, their findings are similar to the findings of the present study.

Hernon et al. reported that technology plays a significant role in developing nursing students' skills [ 4 ]. Their results regarding educational applications for health assessment align with the current research: technology contributes not only to learning but also to satisfaction with education. Considering the results of the present study and similar studies, we can conclude that gamification in the flipped learning environment is an interactive teaching method that can be used to improve nursing education. Gamification can increase the attractiveness of education. If a well-designed application is used in a flipped environment, it frees classroom time for discussion, interaction, and scenario-based education and promotes satisfaction with education.

In this study, satisfaction with education differed significantly between the groups, but students' self-efficacy, despite improving significantly from before to after the intervention, did not differ significantly between the groups. Since all of the studied methods were effective for students' learning and self-efficacy, teachers can improve educational effectiveness and satisfaction by selecting and combining different methods according to the available resources and conditions.

The gamification method was associated with higher satisfaction, but it requires more resources, equipment, and skilled personnel. The flipped class method requires fewer resources, is more cost-effective, and provides more time for practice and group discussion. Combining the two methods captures the advantages of both, which is confirmed by the results of the present study. The flipped learning environment appears to provide a good opportunity for lifelong learning, including the promotion of interaction and teamwork, and, combined with other methods, is associated with greater effectiveness and benefit.

In this study, knowledge and satisfaction with education differed significantly between groups, but students' self-efficacy did not. This may be because the participants were second- and third-semester nursing students and because students' interactive skills were not assessed. We therefore recommend further research investigating interactive and communication skills when using gamification in a flipped environment.

This method is therefore helpful in nursing education as well as in other medical fields. It could be combined with other educational approaches, such as task-based and team-based methods, to further develop team-based and task-based education. Gamification integrated into the flipped learning environment via mobile applications offers greater attractiveness and satisfaction alongside effective education, provided that appropriate applications are used to create a sense of competition and learning. However, in this study, students' interactive skills were not assessed. Finally, we emphasize that teachers can improve the effectiveness of education through their creativity, depending on the situation, time, cost, and available resources, by using and integrating educational methods.

The teaching method affects students' learning and their satisfaction with education, and gamification in the flipped learning environment was more effective than the flipped class, gamification alone, or the lecture method. The flipped class combined with gamification was associated with greater attractiveness and satisfaction in addition to learning. Teachers can improve the effectiveness of education through their creativity, depending on the situation, time, cost, and available resources, by using and integrating educational methods.

Limitations

Because the application could not be installed on iOS phones, those users were unable to use it and dropped out of the study; we therefore recommend designing applications for both Android and iOS. In addition, teaching with gamification in a flipped learning environment requires the instructor's proficiency with the method and mastery of the application, so teachers need appropriate training in these methods.

Gamification integrated into the flipped learning environment via mobile applications offers greater attractiveness and satisfaction. However, students' interactive skills were not assessed in this study, so we recommend further research investigating interactive and communication skills when using gamification in a flipped environment.

Data availability

Data is provided within the manuscript or supplementary information files.

Khaledi A, Ghafouri R, Anboohi SZ, Nasiri M, Ta’atizadeh M. Comparison of gamification and role-playing education on nursing students’ cardiopulmonary resuscitation self-efficacy. BMC Med Educ. 2024;24(1):1–6.

Pellegrino JL, Vance J, Asselin N. The value of songs for teaching and learning cardiopulmonary resuscitation (CPR) competencies: a systematic review. Cureus. 2021;13(5).

Chi M, Wang N, Wu Q, Cheng M, Zhu C, Wang X, et al. Implementation of the flipped classroom combined with problem-based learning in a medical nursing course: a quasi-experimental design. Healthcare. MDPI; 2022.

Hernon O, McSharry E, MacLaren I, Carr PJ. The use of educational technology in teaching and assessing clinical psychomotor skills in nursing and midwifery education: a state-of-the-art literature review. J Prof Nurs. 2023;45:35–50.

River J, Currie J, Crawford T, Betihavas V, Randall S. A systematic review examining the effectiveness of blending technology with team-based learning. Nurse Educ Today. 2016;45:185–92.

Muntrikaeo K, Poonpon K. The effects of Task-based instruction using online Language games in a flipped learning environment (TGF) on English oral communication ability of Thai secondary students. Engl Lang Teach. 2022;15(3):9–21.

Rachayon S, Soontornwipast K. The effects of task-based instruction using a digital game in a flipped learning environment on English oral communication ability of Thai undergraduate nursing students. Engl Lang Teach. 2019;12(7):12–32.

Bergmann J, Sams A. Flip your classroom: Reach every student in every class every day. International society for technology in education; 2012.

Barbour C, Schuessler JB. A preliminary framework to guide implementation of the flipped Classroom Method in nursing education. Nurse Educ Pract. 2019;34:36–42.

Talbert R, Mor-Avi A. A space for learning: an analysis of research on active learning spaces. Heliyon. 2019;5(12).

Jowsey T, Foster G, Cooper-Ioelu P, Jacobs S. Blended learning via distance in pre-registration nursing education: a scoping review. Nurse Educ Pract. 2020;44:102775.

Blegur J, Ma’mun A, Mahendra A, Mahardika IMS, Tlonaen ZA. Bibliometric analysis of micro-teaching model research trends in 2013–2023. J Innov Educational Cult Res. 2023;4(3):523–33.

Yun S, Min S. A study on learning immersion, online class satisfaction, and perceived academic achievement of flip-learning online classes. J Surv Fisheries Sci. 2023;10(4S):432–41.

Sullivan JM. Flipping the classroom: an innovative approach to graduate nursing education. J Prof Nurs. 2022;38:40–4.

Ng EKL. Student engagement in flipped classroom in nursing education: an integrative review. Nurse Educ Pract. 2023:103585.

Kazeminia M, Salehi L, Khosravipour M, Rajati F. Investigation flipped classroom effectiveness in teaching anatomy: a systematic review. J Prof Nurs. 2022;42:15–25.

Özbay Ö, Çınar S. Effectiveness of flipped classroom teaching models in nursing education: a systematic review. Nurse Educ Today. 2021;102:104922.

Betihavas V, Bridgman H, Kornhaber R, Cross M. The evidence for ‘flipping out’: a systematic review of the flipped classroom in nursing education. Nurse Educ Today. 2016;38:15–21.

Tan C, Yue W-G, Fu Y. Effectiveness of flipped classrooms in nursing education: systematic review and meta-analysis. Chin Nurs Res. 2017;4(4):192–200.

Sari NARM, Winarto, Wu T-T, editors. Exemplifying Formative Assessment in Flipped Classroom Learning: The Notion of Bloom’s Taxonomy. International Conference on Innovative Technologies and Learning; 2022: Springer.

SivaKumar A. Augmenting the flipped classroom experience by integrating revised Bloom’s taxonomy: a faculty perspective. Rev Educ. 2023;11(1):e3388.

Merrett CG. Analysis of flipped Classroom techniques and Case Study Based Learning in an introduction to Engineering materials Course. Adv Eng Educ. 2023;11:2–29.

Banks L, Kay R. Exploring flipped classrooms in undergraduate nursing and health science: a systematic review. Nurse Educ Pract. 2022:103417.

Sezer TA, Esenay FI. Impact of flipped classroom approach on undergraduate nursing student’s critical thinking skills. J Prof Nurs. 2022;42:201–8.

Nevin CR, Westfall AO, Rodriguez JM, Dempsey DM, Cherrington A, Roy B, et al. Gamification as a tool for enhancing graduate medical education. Postgrad Med J. 2014;90(1070):685–93.

Verkuyl M, Romaniuk D, Atack L, Mastrilli P. Virtual gaming simulation for nursing education: an experiment. Clin Simul Nurs. 2017;13(5):238–44.

Jang K, Kim SH, Oh JY, Mun JY. Effectiveness of self-re-learning using video recordings of advanced life support on nursing students’ knowledge, self-efficacy, and skills performance. BMC Nurs. 2021;20(1):1–10.

Roel S, Bjørk IT. Comparing nursing student competence in CPR before and after a pedagogical intervention. Nurs Res Pract. 2020;2020.

Ali WNAW, Yahaya WAJW. Waterfall-ADDIE model: an integration of software development model and instructional systems design in developing a digital video learning application. 2023.

Rodríguez S, Sanz AM, Llano G, Navarro A, Parra-Lara LG, Krystosik AR, et al. Acceptability and usability of a mobile application for management and surveillance of vector-borne diseases in Colombia: an implementation study. PLoS ONE. 2020;15(5):e0233269.

Govender T, Arnedo-Moreno J, editors. A survey on gamification elements in mobile language-learning applications. Eighth international conference on technological ecosystems for enhancing multiculturality; 2020.

Landers RN, Armstrong MB, Collmus AB. How to use game elements to enhance learning: Applications of the theory of gamified learning. Serious Games and Edutainment Applications: Volume II. 2017:457 – 83.

Toda AM, Klock AC, Oliveira W, Palomino PT, Rodrigues L, Shi L, et al. Analysing gamification elements in educational environments using an existing Gamification taxonomy. Smart Learn Environ. 2019;6(1):1–14.

Kellar SP, Kelvin EA. Munro’s statistical methods for health care research. Wolters Kluwer Health/Lippincott Williams & Wilkins; 2013.

Polit DF, Yang F. Measurement and the measurement of change: a primer for the health professions. Wolters Kluwer Health; 2015.

Sherer M, Adams CH. Construct validation of the self-efficacy scale. Psychol Rep. 1983;53(3):899–902.

Zou D, Luo S, Xie H, Hwang G-J. A systematic review of research on flipped language classrooms: theoretical foundations, learning activities, tools, research topics and findings. Comput Assist Lang Learn. 2022;35(8):1811–37.

Acknowledgements

The authors wish to thank all the participants and everyone who helped carry out the research, especially the staff of the Department of Medical-Surgical Nursing, School of Nursing & Midwifery, Shahid Beheshti University of Medical Sciences.

The authors received no specific funding for this work.

Author information

Authors and affiliations

Department of Medical Surgical Nursing, School of Nursing & Midwifery, Shahid Beheshti University of Medical Sciences, Vali-Asr Avenue, Cross of Vali-Asr Avenue and Hashemi Rafsanjani (Neiaiesh) Highway, Opposite to Rajaee Heart Hospital, Tehran, Iran

Raziyeh Ghafouri & Vahid Zamanzadeh

Department of Basic Sciences, School of Nursing & Midwifery, Shahid Beheshti University of Medical Sciences, Tehran, Iran

Malihe Nasiri

Contributions

VZ and RG formulated the research question and study objective. VZ and RG prepared the proposal and reports. RG collected the data. MN analyzed the data. All authors read and approved the final manuscript.

Corresponding author

Correspondence to Raziyeh Ghafouri .

Ethics declarations

Ethics approval and consent to participate

This study was approved by the ethics committee of Shahid Beheshti University of Medical Sciences (IR.SBMU.PHARMACY.REC.1402.152), and all methods were carried out in accordance with the research ethical codes of the Iran National Committee for Ethics in Biomedical Research. The authors guarantee that they have followed the ethical principles stated in the Declaration of Helsinki (to protect the life, health, dignity, integrity, right to self-determination, privacy, and confidentiality of personal information of research subjects) in all stages of the research. This is the online certificate of the research ethical code: https://ethics.research.ac.ir/ProposalCertificateEn.php?id=404003&Print=true&NoPrintHeader=true&NoPrintFooter=true&NoPrintPageBorder=true&LetterPrint=true . This study was registered in the Iranian Registry of Clinical Trials ( https://irct.behdasht.gov.ir ) on 14/12/2023, with the IRCT ID: IRCT20210131050189N7. To observe ethical considerations, the School of Nursing & Midwifery of Shahid Beheshti University of Medical Sciences agreed to participate in the study; the research goals and procedures were explained to the participants, the participants were assured of information anonymity and confidentiality, and informed written consent was obtained from each participant and documented. Participation was voluntary, and participants could leave the study at any stage.

Consent for publication

Not applicable.

Competing interests

The authors declare no competing interests.

Additional information

Publisher's note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Electronic supplementary material

Below is the link to the electronic supplementary material.

Supplementary Material 1

Supplementary Material 2

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License, which permits any non-commercial use, sharing, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if you modified the licensed material. You do not have permission under this licence to share adapted material derived from this article or parts of it. The images or other third party material in this article are included in the article’s Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article’s Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by-nc-nd/4.0/ .

About this article

Cite this article

Ghafouri, R., Zamanzadeh, V. & Nasiri, M. Comparison of education using the flipped class, gamification and gamification in the flipped learning environment on the performance of nursing students in a client health assessment: a randomized clinical trial. BMC Med Educ 24 , 949 (2024). https://doi.org/10.1186/s12909-024-05966-2

Received : 15 March 2024

Accepted : 28 August 2024

Published : 30 August 2024

DOI : https://doi.org/10.1186/s12909-024-05966-2

  • Flipped classroom
  • Gamification
  • Flipped learning environment
  • Mobile application

BMC Medical Education

ISSN: 1472-6920

Speaker 1: Welcome to this overview of quantitative research methods. This tutorial will give you the big picture of quantitative research and introduce key concepts that will help you determine if quantitative methods are appropriate for your project study. First, what is educational research? Educational research is a process of scholarly inquiry designed to investigate the process of instruction and learning, the behaviors, perceptions, and attributes of students and teachers, the impact of institutional processes and policies, and all other areas of the educational process. The research design may be quantitative, qualitative, or a mixed methods design. The focus of this overview is quantitative methods. The general purpose of quantitative research is to explain, predict, investigate relationships, describe current conditions, or to examine possible impacts or influences on designated outcomes. Quantitative research differs from qualitative research in several ways. It works to achieve different goals and uses different methods and design. This table illustrates some of the key differences. Qualitative research generally uses a small sample to explore and describe experiences through the use of thick, rich descriptions of detailed data in an attempt to understand and interpret human perspectives. It is less interested in generalizing to the population as a whole. For example, when studying bullying, a qualitative researcher might learn about the experience of the victims and the experience of the bully by interviewing both bullies and victims and observing them on the playground. Quantitative studies generally use large samples to test numerical data by comparing or finding correlations among sample attributes so that the findings can be generalized to the population. 
If quantitative researchers were studying bullying, they might measure the effects of a bully on the victim by comparing students who are victims and students who are not victims of bullying using an attitudinal survey. In conducting quantitative research, the researcher first identifies the problem. For Ed.D. research, this problem represents a gap in practice. For Ph.D. research, this problem represents a gap in the literature. In either case, the problem needs to be of importance in the professional field. Next, the researcher establishes the purpose of the study. Why do you want to do the study, and what do you intend to accomplish? This is followed by research questions, which help to focus the study. Once the study is focused, the researcher needs to review both seminal works and current peer-reviewed primary sources. Based on the research question and on a review of prior research, a hypothesis is created that predicts the relationship between the study's variables. Next, the researcher chooses a study design and methods to test the hypothesis. These choices should be informed by a review of methodological approaches used to address similar questions in prior research. Finally, appropriate analytical methods are used to analyze the data, allowing the researcher to draw conclusions and inferences about the data, and answer the research question that was originally posed. In quantitative research, research questions are typically descriptive, relational, or causal. Descriptive questions constrain the researcher to describing what currently exists. With a descriptive research question, one can examine perceptions or attitudes as well as more concrete variables such as achievement. For example, one might describe a population of learners by gathering data on their age, gender, socioeconomic status, and attitudes towards their learning experiences. Relational questions examine the relationship between two or more variables.
The X variable has some linear relationship to the Y variable. Causal inferences cannot be made from this type of research. For example, one could study the relationship between students' study habits and achievements. One might find that students using certain kinds of study strategies demonstrate greater learning, but one could not state conclusively that using certain study strategies will lead to or cause higher achievement. Causal questions, on the other hand, are designed to allow the researcher to draw a causal inference. A causal question seeks to determine if a treatment variable in a program had an effect on one or more outcome variables. In other words, the X variable influences the Y variable. For example, one could design a study that answered the question of whether a particular instructional approach caused students to learn more. The research question serves as a basis for posing a hypothesis, a predicted answer to the research question that incorporates operational definitions of the study's variables and is rooted in the literature. An operational definition matches a concept with a method of measurement, identifying how the concept will be quantified. For example, in a study of instructional strategies, the hypothesis might be that students of teachers who use Strategy X will exhibit greater learning than students of teachers who do not. In this study, one would need to operationalize learning by identifying a test or instrument that would measure learning. This approach allows the researcher to create a testable hypothesis. Relational and causal research relies on the creation of a null hypothesis, a version of the research hypothesis that predicts no relationship between variables or no effect of one variable on another. When writing the hypothesis for a quantitative question, the null hypothesis and the research or alternative hypothesis use parallel sentence structure. 
In this example, the null hypothesis states that there will be no statistical difference between groups, while the research or alternative hypothesis states that there will be a statistical difference between groups. Note also that both hypothesis statements operationalize the critical thinking skills variable by identifying the measurement instrument to be used. Once the research questions and hypotheses are solidified, the researcher must select a design that will create a situation in which the hypotheses can be tested and the research questions answered. Ideally, the research design will isolate the study's variables and control for intervening variables so that one can be certain of the relationships being tested. In educational research, however, it is extremely difficult to establish sufficient controls in the complex social settings being studied. In our example of investigating the impact of a certain instructional strategy in the classroom on student achievement, each day the teacher uses a specific instructional strategy. After school, some of the students in her class receive tutoring. Other students have parents that are very involved in their child's academic progress and provide learning experiences in the home. These students may do better because they received extra help, not because the teacher's instructional strategy is more effective. Unless the researcher can control for the intervening variable of extra help, it will be impossible to effectively test the study's hypothesis. Quantitative research designs can fall into two broad categories, experimental and quasi-experimental. Classic experimental designs are those that randomly assign subjects to either a control or treatment comparison group. The researcher can then compare the treatment group to the control group to test for an intervention's effect, known as a between-subject design. 
It is important to note that the control group may receive a standard treatment or may not receive any treatment at all. Quasi-experimental designs do not randomly assign subjects to groups, but rather take advantage of existing groups. A researcher can still have a control and comparison group, but assignment to the groups is not random. The use of a control group is not required; instead, the researcher may choose a design in which a single group is pre- and post-tested, known as a within-subjects design, or a single group may receive only a post-test. Since quasi-experimental designs lack random assignment, the researcher should be aware of the threats to validity. Educational research often attempts to measure abstract variables such as attitudes, beliefs, and feelings. Surveys can capture data about these hard-to-measure variables, as well as other self-reported information such as demographic factors. A survey is an instrument used to collect verifiable information from a sample population. In quantitative research, surveys typically include questions that ask respondents to choose a rating from a scale, select one or more items from a list, or other responses that result in numerical data. Studies that use surveys or tests need to include strategies that establish the validity of the instrument used. There are many types of validity that need to be addressed. Face validity. Does the test appear at face value to measure what it is supposed to measure? Content validity. Content validity includes both item validity and sampling validity. Item validity ensures that the individual test items deal only with the subject being addressed. Sampling validity ensures that the range of item topics is appropriate to the subject being studied. For example, item validity might be high, but if all the items only deal with one aspect of the subjects, then sampling validity is low. Content validity can be established by having experts in the field review the test.
Concurrent validity. Does a new test correlate with an older, established test that measures the same thing? Predictive validity. Does the test correlate with another related measure? For example, GRE tests are used at many colleges because these schools believe that a good grade on this test increases the probability that the student will do well at the college. Linear regression can establish the predictive validity of a test. Construct validity. Does the test measure the construct it is intended to measure? Establishing construct validity can be a difficult task when the constructs being measured are abstract. But it can be established by conducting a number of studies in which you test hypotheses regarding the construct, or by completing a factor analysis to ensure that you have the number of constructs that you say you have. In addition to ensuring the validity of instruments, the quantitative researcher needs to establish their reliability as well. Strategies for establishing reliability include Test retest. Correlates scores from two different administrations of the same test. Alternate forms. Correlates scores from administrations of two different forms of the same test. Split half reliability. Treats each half of one test or survey as a separate administration and correlates the results from each. Internal consistency. Uses Cronbach's coefficient alpha to calculate the average of all possible split halves. Quantitative research almost always relies on a sample that is intended to be representative of a larger population. There are two basic sampling strategies, random and non-random, and a number of specific strategies within each of these approaches. This table provides examples of each of the major strategies. The next section of this tutorial provides an overview of the procedures in conducting quantitative data analysis. 
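As a concrete illustration of internal consistency, Cronbach's coefficient alpha can be computed directly from the item variances and the variance of the total scores. The sketch below uses only Python's standard library and made-up item scores (not data from any study mentioned here):

```python
import statistics

def cronbach_alpha(items):
    """Cronbach's coefficient alpha.

    items[i][j] is the score of respondent j on item i.
    Population variance (pvariance) is used consistently throughout.
    """
    k = len(items)
    sum_item_vars = sum(statistics.pvariance(item) for item in items)
    totals = [sum(scores) for scores in zip(*items)]  # total score per respondent
    return k / (k - 1) * (1 - sum_item_vars / statistics.pvariance(totals))

# Three hypothetical survey items answered by four respondents.
# Perfectly consistent responses yield alpha = 1.0.
perfect = [[1, 2, 3, 4], [1, 2, 3, 4], [1, 2, 3, 4]]
print(cronbach_alpha(perfect))  # → 1.0 (up to floating-point rounding)

# A noisier (still hypothetical) item set gives alpha between 0 and 1.
noisy = [[1, 2, 3, 4], [2, 1, 4, 3], [1, 2, 3, 4]]
print(cronbach_alpha(noisy))
```

Split-half reliability could be sketched the same way: correlate the totals of the two test halves and apply the Spearman-Brown correction.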
There are specific procedures for conducting the data collection, preparing for and analyzing data, presenting the findings, and connecting to the body of existing research. This process ensures that the research is conducted as a systematic investigation that leads to credible results. Data comes in various sizes and shapes, and it is important to know about these so that the proper analysis can be used on the data. In 1946, S.S. Stevens first described the properties of measurement systems that allowed decisions about the type of measurement and about the attributes of objects that are preserved in numbers. These four types of data are referred to as nominal, ordinal, interval, and ratio. First, let's examine nominal data. With nominal data, there is no number value that indicates quantity. Instead, a number has been assigned to represent a certain attribute, like the number 1 to represent male and the number 2 to represent female. In other words, the number is just a label. You could also assign numbers to represent race, religion, or any other categorical information. Nominal data only denotes group membership. With ordinal data, there is again no indication of quantity. Rather, a number is assigned for ranking order. For example, satisfaction surveys often ask respondents to rank order their level of satisfaction with services or programs. The next level of measurement is interval data. With interval data, there are equal distances between two values, but there is no natural zero. A common example is the Fahrenheit temperature scale. Differences between the temperature measurements make sense, but ratios do not. For instance, 20 degrees Fahrenheit is not twice as hot as 10 degrees Fahrenheit. You can add and subtract interval level data, but they cannot be divided or multiplied. Finally, we have ratio data. Ratio is the same as interval, however ratios, means, averages, and other numerical formulas are all possible and make sense. 
Zero has a logical meaning, which shows the absence of, or having none of. Examples of ratio data are height, weight, speed, or any quantities based on a scale with a natural zero. In summary, nominal data can only be counted. Ordinal data can be counted and ranked. Interval data can also be added and subtracted, and ratio data can also be used in ratios and other calculations. Determining what type of data you have is one of the most important aspects of quantitative analysis. Depending on the research question, hypotheses, and research design, the researcher may choose to use descriptive and/or inferential statistics to begin to analyze the data. Descriptive statistics are best illustrated through familiar examples: sports, weather, the economy, the stock market, and even our retirement portfolios are all presented using descriptive analysis. The basic terminology of descriptive statistics is already familiar: frequency, mean, median, mode, range, variance, and standard deviation. Simply put, you are describing the data. Some of the most common graphic representations of data are bar graphs, pie graphs, histograms, and box and whisker graphs. Attempting to reach conclusions and make causal inferences beyond graphic representations or descriptive analyses is referred to as inferential statistics. In other words, examining the college enrollment of the past decade in a certain geographical region would assist in estimating what the enrollment for the next year might be. Frequently in education, the means of two or more groups are compared. When comparing means to assist in answering a research question, one can use a within-group, between-groups, or mixed-subject design. In a within-group design, the researcher compares measures of the same subjects across time, therefore within-group, or under different treatment conditions. This can also be referred to as a dependent-group design.
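The descriptive measures just listed map directly onto Python's standard library. A minimal sketch with hypothetical test scores (not data from any study discussed here):

```python
import statistics

# Hypothetical test scores for eight students.
scores = [72, 85, 85, 90, 68, 77, 85, 90]

print(statistics.mean(scores))      # arithmetic mean → 81.5
print(statistics.median(scores))    # middle value of the sorted scores
print(statistics.mode(scores))      # most frequent value → 85
print(max(scores) - min(scores))    # range → 22
print(statistics.variance(scores))  # sample variance
print(statistics.stdev(scores))     # sample standard deviation
```

Frequencies (for a bar graph or histogram) could be tallied the same way with `collections.Counter(scores)`.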
The most basic example of this type of design is a quasi-experiment in which a researcher administers a pretest to a group of students, subjects them to a treatment, and then administers a post-test: the same group has been measured at different points in time. In a between-groups design, subjects are assigned to one of two or more groups, for example control, treatment 1, and treatment 2. Ideally, the sampling and assignment to groups are random, which makes this an experimental design. The researcher can then compare the mean of each treatment group to that of the control group; comparing the two groups gives insight into the effects of the treatment. In a mixed-subjects design, the researcher tests for significant differences between two or more independent groups while also subjecting them to repeated measures.

Choosing a statistical test to compare groups depends on the number of groups, whether the data are nominal, ordinal, or interval, and whether the data meet the assumptions for parametric tests. Nonparametric tests are typically used with nominal and ordinal data, while parametric tests require interval- or ratio-level data. In addition, parametric tests assume that the data are normally distributed in the population, that participant selection is independent (the selection of one person does not determine the selection of another), and that the variances of the groups being compared are equal. The assumption of independent participant selection cannot be violated, but the others are more flexible.

The t-test assesses whether the means of two groups are statistically different from each other. This analysis is appropriate whenever you want to compare the means of two groups, and it is especially appropriate as the method of analysis for a quasi-experimental design. When choosing a t-test, the data must meet the parametric assumptions.
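To make the t-test concrete, here is a minimal sketch of an independent-samples t statistic with pooled variance, written in plain Python. The group data and function name are hypothetical; in practice one would use a statistics package that also reports the p-value.

```python
import statistics
from math import sqrt

def independent_t(group_a, group_b):
    """Independent-samples t statistic with pooled variance.
    Assumes parametric data: interval/ratio scale, normality,
    equal variances, independent observations."""
    n_a, n_b = len(group_a), len(group_b)
    mean_a, mean_b = statistics.mean(group_a), statistics.mean(group_b)
    var_a, var_b = statistics.variance(group_a), statistics.variance(group_b)
    # Pooled variance combines the variability of both samples,
    # weighted by their degrees of freedom.
    pooled = ((n_a - 1) * var_a + (n_b - 1) * var_b) / (n_a + n_b - 2)
    t = (mean_a - mean_b) / sqrt(pooled * (1 / n_a + 1 / n_b))
    return t, n_a + n_b - 2  # t statistic and degrees of freedom

# Hypothetical post-test scores for a control and a treatment group.
control = [70, 68, 75, 72, 74]
treatment = [78, 80, 76, 82, 79]
t, df = independent_t(control, treatment)
```

The sign of `t` simply reflects which group's mean is larger; its magnitude, judged against the t distribution with `df` degrees of freedom, indicates whether the difference is statistically significant.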
The analysis of variance, or ANOVA, assesses whether the means of more than two groups are statistically different from each other; like the t-test, it requires that the data meet the parametric assumptions. The chi-square test can be used when you have nonparametric data and want to compare differences between groups. The Kruskal-Wallis test can be used when there are more than two groups and the data are nonparametric.

Correlation analysis is a set of statistical tests for determining whether there are linear relationships between two or more sets of variables from the same list of items or individuals, for example, students' achievement and performance. These tests provide a statistical yes or no as to whether a significant relationship (correlation) exists between the variables. A correlation test consists of calculating a correlation coefficient between two variables, and again there are parametric and nonparametric choices depending on the assumptions the data can meet. The Pearson r correlation is widely used in statistics to measure the strength of the relationship between linearly related variables. The Spearman rank correlation is a nonparametric test that measures the degree of association between two variables; it makes no assumptions about the distribution of the data and is used when the Pearson test would give misleading results. Kendall's tau is often included in this list of nonparametric correlation tests; it examines the strength of the relationship when there are fewer than 20 rankings.

Linear regression and correlation are similar and often confused, and sometimes your methodologist will encourage you to examine both calculations. Calculate a linear correlation if you measured both variables, x and y. Use the parametric Pearson correlation coefficient if you are certain you are not violating its assumptions; otherwise, choose the nonparametric Spearman correlation coefficient.
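The contrast between the two coefficients can be sketched in plain Python: Spearman's rho is simply Pearson's r computed on ranks rather than raw values. The data (study hours versus achievement scores) are hypothetical.

```python
from math import sqrt

def pearson_r(x, y):
    """Pearson product-moment correlation
    (parametric: assumes linearity and interval/ratio data)."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sqrt(sum((a - mx) ** 2 for a in x))
    sy = sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

def ranks(values):
    """Assign ranks (1 = smallest); ties share the average of their ranks."""
    order = sorted(range(len(values)), key=lambda i: values[i])
    r = [0.0] * len(values)
    i = 0
    while i < len(order):
        j = i
        while j + 1 < len(order) and values[order[j + 1]] == values[order[i]]:
            j += 1
        avg = (i + j) / 2 + 1  # average rank for the tied block
        for k in range(i, j + 1):
            r[order[k]] = avg
        i = j + 1
    return r

def spearman_rho(x, y):
    """Spearman rank correlation: Pearson r on the ranks
    (nonparametric, no distributional assumptions)."""
    return pearson_r(ranks(x), ranks(y))

# Hypothetical data: study hours vs. achievement scores.
hours = [2, 4, 5, 7, 9]
score = [55, 60, 62, 70, 80]
r = pearson_r(hours, score)
rho = spearman_rho(hours, score)
```

Because the hypothetical scores rise monotonically with hours, Spearman's rho is a perfect 1.0, while Pearson's r is slightly below 1 because the relationship is not exactly linear.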
If either variable has been manipulated through an intervention, do not calculate a correlation. While linear regression, like correlation, indicates the nature of the relationship between two variables, it can also be used to make predictions, because one variable is treated as explanatory while the other is treated as the dependent variable.

Establishing validity is a critical part of quantitative research. In keeping with the nature of quantitative research, there is a defined process for establishing validity, and this also supports the generalizability of the findings. For a study to be valid, the evidence must support the interpretations of the data, the data must be accurate, and their use in drawing conclusions must be logical and appropriate. Construct validity concerns whether the operationalization of your variables relates to the theoretical concepts you are trying to measure: are you actually measuring what you intend to measure, and did the program you delivered (or the behavior you observed) correspond to what you intended? Internal validity means that you have evidence that what you did in the study (the program) caused what you observed (the outcome). Conclusion validity is the degree to which conclusions drawn about relationships in the data are reasonable. External validity concerns generalization: the degree to which the conclusions of your study would hold for other persons, in other places, and at other times. Establishing the reliability and validity of your study is one of the most critical elements of the research process.

Once you have decided to conduct a quantitative study, use the following steps to get started. First, review research studies that have been conducted on your topic to determine what methods were used, and consider the strengths and weaknesses of the various data collection and analysis methods.
Next, review the literature on quantitative research methods. Every aspect of your research has a body of literature associated with it. Just as you would not confine yourself to your course textbooks for your review of research on your topic, you should not limit yourself to your course texts for your review of the methodological literature. Read broadly and deeply in the scholarly literature to gain expertise in quantitative research. Additional self-paced tutorials have been developed on the different methodologies and techniques associated with quantitative research; complete them and review them as often as needed. You will then be prepared to complete a literature review of the specific methodologies and techniques that you will use in your study.
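Linear regression, discussed above as the tool that turns a measured relationship into a prediction, can be sketched as an ordinary least-squares fit in plain Python. The data (hours of tutoring versus exam score) and the function name are hypothetical.

```python
def least_squares(x, y):
    """Ordinary least-squares fit: y ≈ slope * x + intercept.
    x is the explanatory variable, y the dependent variable."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    # Slope = covariance of x and y divided by the variance of x.
    slope = (sum((a - mx) * (b - my) for a, b in zip(x, y))
             / sum((a - mx) ** 2 for a in x))
    intercept = my - slope * mx  # line passes through the means
    return slope, intercept

# Hypothetical data: hours of tutoring (explanatory) vs. exam score (dependent).
hours = [1, 2, 3, 4, 5]
score = [52, 55, 61, 64, 68]
slope, intercept = least_squares(hours, score)

# Unlike correlation, the fitted line supports prediction:
predicted = slope * 6 + intercept  # predicted score for 6 hours of tutoring
```

This is what distinguishes regression from correlation in practice: the correlation coefficient only quantifies the strength of the relationship, while the fitted line extrapolates a dependent value for a new explanatory value.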
