• Tutorial Review
  • Open access
  • Published: 24 January 2018

Teaching the science of learning

  • Yana Weinstein (ORCID: orcid.org/0000-0002-5144-968X),
  • Christopher R. Madan &
  • Megan A. Sumeracki

Cognitive Research: Principles and Implications, volume 3, Article number: 2 (2018)


Abstract

The science of learning has made a considerable contribution to our understanding of effective teaching and learning strategies. However, few instructors outside of the field are privy to this research. In this tutorial review, we focus on six specific cognitive strategies that have received robust support from decades of research: spaced practice, interleaving, retrieval practice, elaboration, concrete examples, and dual coding. We describe the basic research behind each strategy and relevant applied research, present examples of existing and suggested implementation, and make recommendations for further research that would broaden the reach of these strategies.

Significance

Education does not currently adhere to the medical model of evidence-based practice (Roediger, 2013 ). However, over the past few decades, our field has made significant advances in applying cognitive processes to education. From this work, specific recommendations can be made for students to maximize their learning efficiency (Dunlosky, Rawson, Marsh, Nathan, & Willingham, 2013 ; Roediger, Finn, & Weinstein, 2012 ). In particular, a review published 10 years ago identified a limited number of study techniques that have received solid evidence from multiple replications testing their effectiveness in and out of the classroom (Pashler et al., 2007 ). A recent textbook analysis (Pomerance, Greenberg, & Walsh, 2016 ) took the six key learning strategies from this report by Pashler and colleagues, and found that very few teacher-training textbooks cover any of these six principles – and none cover them all, suggesting that these strategies are not systematically making their way into the classroom. This is the case in spite of multiple recent academic (e.g., Dunlosky et al., 2013 ) and general audience (e.g., Dunlosky, 2013 ) publications about these strategies. In this tutorial review, we present the basic science behind each of these six key principles, along with more recent research on their effectiveness in live classrooms, and suggest ideas for pedagogical implementation. The target audience of this review is (a) educators who might be interested in integrating the strategies into their teaching practice, (b) science of learning researchers who are looking for open questions to help determine future research priorities, and (c) researchers in other subfields who are interested in the ways that principles from cognitive psychology have been applied to education.

While the typical teacher may not be exposed to this research during teacher training, a small cohort of teachers intensely interested in cognitive psychology has recently emerged. These teachers are mainly based in the UK, and, anecdotally (e.g., Dennis (2016), personal communication), appear to have taken an interest in the science of learning after reading Make it Stick (Brown, Roediger, & McDaniel, 2014 ; see Clark ( 2016 ) for an enthusiastic review of this book on a teacher’s blog, and “Learning Scientists” ( 2016c ) for a collection). In addition, a grassroots teacher movement has led to the creation of “researchED” – a series of conferences on evidence-based education (researchED, 2013 ). The teachers who form part of this network frequently discuss cognitive psychology techniques and their applications to education on social media (mainly Twitter; e.g., Fordham, 2016 ; Penfound, 2016 ) and on their blogs, such as Evidence Into Practice ( https://evidenceintopractice.wordpress.com/ ), My Learning Journey ( http://reflectionsofmyteaching.blogspot.com/ ), and The Effortful Educator ( https://theeffortfuleducator.com/ ). In general, the teachers who write about these issues pay careful attention to the relevant literature, often citing some of the work described in this review.

These informal writings, while allowing teachers to explore their approach to teaching practice (Luehmann, 2008 ), give us a unique window into the application of the science of learning to the classroom. By examining these blogs, we can not only observe how basic cognitive research is being applied in the classroom by teachers who are reading it, but also how it is being misapplied, and what questions teachers may be posing that have gone unaddressed in the scientific literature. Throughout this review, we illustrate each strategy with examples of how it can be implemented (see Table  1 and Figs.  1 , 2 , 3 , 4 , 5 , 6 and 7 ), as well as with relevant teacher blog posts that reflect on its application, and draw upon this work to pin-point fruitful avenues for further basic and applied research.

Fig. 1 Spaced practice schedule for one week. This schedule is designed to represent a typical timetable of a high-school student. The schedule includes four one-hour study sessions, one longer study session on the weekend, and one rest day. Notice that each subject is studied one day after it is covered in school, to create spacing between classes and study sessions. Copyright note: this image was produced by the authors

Fig. 2 (a) Blocked practice and interleaved practice with fraction problems. In the blocked version, students answer four multiplication problems consecutively. In the interleaved version, students answer a multiplication problem followed by a division problem and then an addition problem, before returning to multiplication. For an experiment with a similar setup, see Patel et al. (2016). Copyright note: this image was produced by the authors. (b) Illustration of interleaving and spacing. Each color represents a different homework topic. Interleaving involves alternating between topics, rather than blocking. Spacing involves distributing practice over time, rather than massing. Interleaving inherently involves spacing, as other tasks naturally "fill" the spaces between interleaved sessions. Copyright note: this image was produced by the authors, adapted from Rohrer (2012)

Fig. 3 Concept map illustrating the process and resulting benefits of retrieval practice. Retrieval practice involves the process of withdrawing learned information from long-term memory into working memory, which requires effort. This produces direct benefits via the consolidation of learned information, making it easier to remember later and causing improvements in memory, transfer, and inferences. Retrieval practice also produces indirect benefits of feedback to students and teachers, which in turn can lead to more effective study and teaching practices, with a focus on information that was not accurately retrieved. Copyright note: this figure originally appeared in a blog post by the first and third authors (http://www.learningscientists.org/blog/2016/4/1-1)

Fig. 4 Illustration of "how" and "why" questions (i.e., elaborative interrogation questions) students might ask while studying the physics of flight. To help figure out how physics explains flight, students might ask themselves the following questions: "How does a plane take off?"; "Why does a plane need an engine?"; "How does the upward force (lift) work?"; "Why do the wings have a curved upper surface and a flat lower surface?"; and "Why is there a downwash behind the wings?". Copyright note: the image of the plane was downloaded from Pixabay.com and is free to use, modify, and share

Fig. 5 Three examples of physics problems that would be categorized differently by novices and experts. The problems in (a) and (c) look similar on the surface, so novices would group them together into one category. Experts, however, will recognize that the problems in (b) and (c) both relate to the principle of energy conservation, and so will group those two problems into one category instead. Copyright note: the figure was produced by the authors, based on figures in Chi et al. (1981)

Fig. 6 Example of how to enhance learning through use of a visual example. Students might view this visual representation of neural communications with the words provided, or they could draw a similar visual representation themselves. Copyright note: this figure was produced by the authors

Fig. 7 Example of word properties associated with visual, verbal, and motor coding for the word "SPOON". A word can evoke multiple types of representation ("codes" in dual coding theory). Viewing a word will automatically evoke verbal representations related to its component letters and phonemes. Words representing objects (i.e., concrete nouns) will also evoke visual representations, including information about similar objects, component parts of the object, and information about where the object is typically found. In some cases, additional codes can also be evoked, such as motor-related properties of the represented object, where contextual information related to the object's functional intention and manipulation action may also be processed automatically when reading the word. Copyright note: this figure was produced by the authors and is based on Aylwin (1990, Fig. 2) and Madan and Singhal (2012a, Fig. 3)

Spaced practice

The benefit of spaced (or distributed) practice is arguably one of the strongest contributions that cognitive psychology has made to education (Kang, 2016). The effect is simple: the same amount of repeated studying of the same information spaced out over time will lead to greater retention of that information in the long run, compared with repeated studying of the same information for the same amount of time in one study session. The benefits of distributed practice were first empirically demonstrated in the 19th century. As part of his extensive investigation into his own memory, Ebbinghaus (1885/1913) found that when he spaced out repetitions across 3 days, he could almost halve the number of repetitions necessary to relearn a series of 12 syllables in one day (Chapter 8). He thus concluded that "a suitable distribution of [repetitions] over a space of time is decidedly more advantageous than the massing of them at a single time" (Section 34). For those who want to read more about Ebbinghaus's contribution to memory research, Roediger (1985) provides an excellent summary.

Since then, hundreds of studies have examined spacing effects both in the laboratory and in the classroom (Kang, 2016 ). Spaced practice appears to be particularly useful at large retention intervals: in the meta-analysis by Cepeda, Pashler, Vul, Wixted, and Rohrer ( 2006 ), all studies with a retention interval longer than a month showed a clear benefit of distributed practice. The “new theory of disuse” (Bjork & Bjork, 1992 ) provides a helpful mechanistic explanation for the benefits of spacing to learning. This theory posits that memories have both retrieval strength and storage strength. Whereas retrieval strength is thought to measure the ease with which a memory can be recalled at a given moment, storage strength (which cannot be measured directly) represents the extent to which a memory is truly embedded in the mind. When studying is taking place, both retrieval strength and storage strength receive a boost. However, the extent to which storage strength is boosted depends upon retrieval strength, and the relationship is negative: the greater the current retrieval strength, the smaller the gains in storage strength. Thus, the information learned through “cramming” will be rapidly forgotten due to high retrieval strength and low storage strength (Bjork & Bjork, 2011 ), whereas spacing out learning increases storage strength by allowing retrieval strength to wane before restudy.

Teachers can introduce spacing to their students in two broad ways. One involves creating opportunities to revisit information throughout the semester, or even in future semesters. This does involve some up-front planning, and can be difficult to achieve, given time constraints and the need to cover a set curriculum. However, spacing can be achieved at no great cost if teachers set aside a few minutes per class to review information from previous lessons. The second method involves putting the onus to space on the students themselves. Of course, this would work best with older students – high school and above. Because spacing requires advance planning, it is crucial that the teacher helps students plan their studying. For example, teachers could suggest that students schedule study sessions on days that alternate with the days on which a particular class meets (e.g., schedule review sessions for Tuesday and Thursday when the class meets Monday and Wednesday; see Fig. 1 for a more complete weekly spaced practice schedule). It is important to note that the spacing effect refers to information that is repeated multiple times, rather than the idea of studying different material in one long session versus spaced out in small study sessions over time. However, for teachers and particularly for students planning a study schedule, the subtle difference between the two situations (spacing out restudy opportunities, versus spacing out studying of different information over time) may be lost. Future research should address the effects of spacing out studying of different information over time, whether the same considerations apply in this situation as compared to spacing out restudy opportunities, and how important it is for teachers and students to understand the difference between these two types of spaced practice.
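The day-after review pattern described above (and in Fig. 1) can be sketched as a short script. This is an illustration only; the subjects and timetable below are hypothetical examples, not a recommended curriculum.

```python
# Sketch: build a one-week spaced-practice schedule in which each
# subject is reviewed on the day after it is covered in class.
# The timetable below is a hypothetical example.

WEEK = ["Mon", "Tue", "Wed", "Thu", "Fri", "Sat", "Sun"]

def day_after_schedule(timetable):
    """Map each subject's class days to review sessions on the following day."""
    schedule = {day: [] for day in WEEK}
    for subject, class_days in timetable.items():
        for day in class_days:
            next_day = WEEK[(WEEK.index(day) + 1) % len(WEEK)]
            schedule[next_day].append(subject)
    return schedule

timetable = {"Math": ["Mon", "Wed"], "History": ["Tue", "Thu"]}
schedule = day_after_schedule(timetable)
# Math is reviewed on Tue and Thu; History on Wed and Fri.
```

A teacher could extend the same idea to the weekend study session and rest day shown in Fig. 1 by adjusting which days receive review sessions.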

It is important to note that students may feel less confident when they space their learning (Bjork, 1999 ) than when they cram. This is because spaced learning is harder – but it is this “desirable difficulty” that helps learning in the long term (Bjork, 1994 ). Students tend to cram for exams rather than space out their learning. One explanation for this is that cramming does “work”, if the goal is only to pass an exam. In order to change students’ minds about how they schedule their studying, it might be important to emphasize the value of retaining information beyond a final exam in one course.

Ideas for how to apply spaced practice in teaching have appeared in numerous teacher blogs (e.g., Fawcett, 2013 ; Kraft, 2015 ; Picciotto, 2009 ). In England in particular, as of 2013, high-school students need to be able to remember content from up to 3 years back on cumulative exams (General Certificate of Secondary Education (GCSE) and A-level exams; see CIFE, 2012 ). A-levels in particular determine what subject students study in university and which programs they are accepted into, and thus shape the path of their academic career. A common approach for dealing with these exams has been to include a “revision” (i.e., studying or cramming) period of a few weeks leading up to the high-stakes cumulative exams. Now, teachers who follow cognitive psychology are advocating a shift of priorities to spacing learning over time across the 3 years, rather than teaching a topic once and then intensely reviewing it weeks before the exam (Cox, 2016a ; Wood, 2017 ). For example, some teachers have suggested using homework assignments as an opportunity for spaced practice by giving students homework on previous topics (Rose, 2014 ). However, questions remain, such as whether spaced practice can ever be effective enough to completely alleviate the need or utility of a cramming period (Cox, 2016b ), and how one can possibly figure out the optimal lag for spacing (Benney, 2016 ; Firth, 2016 ).

There has been considerable research on the question of optimal lag, and much of it is quite complex; in general, two sessions that are neither too close together (i.e., cramming) nor too far apart are ideal for retention. In a large-scale study, Cepeda, Vul, Rohrer, Wixted, and Pashler (2008) examined the effects of the gap between study sessions and the interval between study and test across long periods, and found that the optimal gap between study sessions was contingent on the retention interval. Thus, it is not clear how teachers can apply the complex findings on lag to their own classrooms.

A useful avenue of research would be to simplify the research paradigms that are used to study optimal lag, with the goal of creating a flexible, spaced-practice framework that teachers could apply and tailor to their own teaching needs. For example, an Excel macro spreadsheet was recently produced to help teachers plan for lagged lessons (Weinstein-Jones & Weinstein, 2017 ; see Weinstein & Weinstein-Jones ( 2017 ) for a description of the algorithm used in the spreadsheet), and has been used by teachers to plan their lessons (Penfound, 2017 ). However, one teacher who found this tool helpful also wondered whether the more sophisticated plan was any better than his own method of manually selecting poorly understood material from previous classes for later review (Lovell, 2017 ). This direction is being actively explored within personalized online learning environments (Kornell & Finn, 2016 ; Lindsey, Shroyer, Pashler, & Mozer, 2014 ), but teachers in physical classrooms might need less technologically-driven solutions to teach cohorts of students.
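One simple instance of the kind of flexible framework proposed here is an expanding review schedule. The lag sequence below (1, 3, 7, and 14 days) is an illustrative assumption, not the algorithm used in the spreadsheet described above.

```python
from datetime import date, timedelta

# Sketch: generate lagged review dates for a topic. The expanding
# gaps (1, 3, 7, 14 days) are an illustrative assumption; because the
# optimal gap depends on the retention interval, a teacher would
# tailor these values to the date of the cumulative exam.

def review_dates(first_taught, gaps_in_days=(1, 3, 7, 14)):
    dates, current = [], first_taught
    for gap in gaps_in_days:
        current += timedelta(days=gap)
        dates.append(current)
    return dates

# A topic first taught on 8 January 2018 would be reviewed on
# 9 Jan, 12 Jan, 19 Jan, and 2 Feb.
print(review_dates(date(2018, 1, 8)))
```

A tool like this only schedules restudy opportunities; it does not address Lovell's question of whether a fixed lag sequence outperforms manually selecting poorly understood material for review.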

It seems teachers would greatly appreciate a set of guidelines for how to implement spacing in the curriculum in the most effective, but also the most efficient manner. While the cognitive field has made great advances in terms of understanding the mechanisms behind spacing, what teachers need more of are concrete evidence-based tools and guidelines for direct implementation in the classroom. These could include more sophisticated and experimentally tested versions of the software described above (Weinstein-Jones & Weinstein, 2017 ), or adaptable templates of spaced curricula. Moreover, researchers need to evaluate the effectiveness of these tools in a real classroom environment, over a semester or academic year, in order to give pedagogically relevant evidence-based recommendations to teachers.

Interleaving

Another scheduling technique that has been shown to increase learning is interleaving. Interleaving occurs when different ideas or problem types are tackled in a sequence, as opposed to the more common method of attempting multiple versions of the same problem in a given study session (known as blocking). Interleaving as a principle can be applied in many different ways. One such way involves interleaving different types of problems during learning, which is particularly applicable to subjects such as math and physics (see Fig. 2a for an example with fractions, based on a study by Patel, Liu, & Koedinger, 2016). For example, in a study with college students, Rohrer and Taylor (2007) found that shuffling math problems that involved calculating the volume of different shapes resulted in better test performance 1 week later than when students answered multiple problems about the same type of shape in a row. This pattern of results has also been replicated with younger students, for example 7th-grade students learning to solve graph and slope problems (Rohrer, Dedrick, & Stershic, 2015). The proposed explanation for the benefit of interleaving is that switching between problem types allows students to learn how to choose the right method for each type of problem, rather than learning only the method itself without learning when to apply it.

Do the benefits of interleaving extend beyond problem solving? The answer appears to be yes. Interleaving can be helpful in other situations that require discrimination, such as inductive learning. Kornell and Bjork ( 2008 ) examined the effects of interleaving in a task that might be pertinent to a student of the history of art: the ability to match paintings to their respective painters. Students who studied different painters’ paintings interleaved at study were more successful on a later identification test than were participants who studied the paintings blocked by painter. Birnbaum, Kornell, Bjork, and Bjork ( 2013 ) proposed the discriminative-contrast hypothesis to explain that interleaving enhances learning by allowing the comparison between exemplars of different categories. They found support for this hypothesis in a set of experiments with bird categorization: participants benefited from interleaving and also from spacing, but not when the spacing interrupted side-by-side comparisons of birds from different categories.

Another type of interleaving involves the interleaving of study and test opportunities. This type of interleaving has been applied, once again, to problem solving, whereby students alternate between attempting a problem and viewing a worked example (Trafton & Reiser, 1993 ); this pattern appears to be superior to answering a string of problems in a row, at least with respect to the amount of time it takes to achieve mastery of a procedure (Corbett, Reed, Hoffmann, MacLaren, & Wagner, 2010 ). The benefits of interleaving study and test opportunities – rather than blocking study followed by attempting to answer problems or questions – might arise due to a process known as “test-potentiated learning”. That is, a study opportunity that immediately follows a retrieval attempt may be more fruitful than when that same studying was not preceded by retrieval (Arnold & McDermott, 2013 ).

For problem-based subjects, the interleaving technique is straightforward: simply mix questions on homework and quizzes with previous materials (which takes care of spacing as well); for languages, mix vocabulary themes rather than blocking by theme (Thomson & Mehring, 2016 ). But interleaving as an educational strategy ought to be presented to teachers with some caveats. Research has focused on interleaving material that is somewhat related (e.g., solving different mathematical equations, Rohrer et al., 2015 ), whereas students sometimes ask whether they should interleave material from different subjects – a practice that has not received empirical support (Hausman & Kornell, 2014 ). When advising students how to study independently, teachers should thus proceed with caution. Since it is easy for younger students to confuse this type of unhelpful interleaving with the more helpful interleaving of related information, it may be best for teachers of younger grades to create opportunities for interleaving in homework and quiz assignments rather than putting the onus on the students themselves to make use of the technique. Technology can be very helpful here, with apps such as Quizlet, Memrise, Anki, Synap, Quiz Champ, and many others (see also “Learning Scientists”, 2017 ) that not only allow instructor-created quizzes to be taken by students, but also provide built-in interleaving algorithms so that the burden does not fall on the teacher or the student to carefully plan which items are interleaved when.
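A minimal sketch of what such an interleaving algorithm might look like: cycle round-robin across related problem types so that no type is practiced in a long block. This is an illustration only, not the algorithm used by any of the apps named above.

```python
# Sketch: interleave practice items from related problem types by
# cycling through the types round-robin, so consecutive items come
# from different types (contrast with blocking, which exhausts one
# type before moving to the next).

def interleave(items_by_type):
    """items_by_type: dict mapping a topic to an ordered list of items."""
    pools = {topic: list(items) for topic, items in items_by_type.items()}
    sequence = []
    while any(pools.values()):
        for topic in pools:
            if pools[topic]:
                sequence.append(pools[topic].pop(0))
    return sequence

homework = {
    "multiplication": ["2/3 x 1/2", "3/4 x 2/5"],
    "division": ["2/3 ÷ 1/2", "3/4 ÷ 2/5"],
    "addition": ["2/3 + 1/2", "3/4 + 2/5"],
}
# Yields a multiplication, division, addition, multiplication, ... sequence,
# as in the interleaved condition of Fig. 2a.
sequence = interleave(homework)
```

Consistent with the caveats above, a teacher would restrict `items_by_type` to related material (e.g., fraction operations), since interleaving unrelated subjects has not received empirical support.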

An important point to consider is that in educational practice, the distinction between spacing and interleaving can be difficult to delineate. The gap between the scientific and classroom definitions of interleaving is demonstrated by teachers’ own writings about this technique. When they write about interleaving, teachers often extend the term to connote a curriculum that involves returning to topics multiple times throughout the year (e.g., Kirby, 2014 ; see “Learning Scientists” ( 2016a ) for a collection of similar blog posts by several other teachers). The “interleaving” of topics throughout the curriculum produces an effect that is more akin to what cognitive psychologists call “spacing” (see Fig.  2 b for a visual representation of the difference between interleaving and spacing). However, cognitive psychologists have not examined the effects of structuring the curriculum in this way, and open questions remain: does repeatedly circling back to previous topics throughout the semester interrupt the learning of new information? What are some effective techniques for interleaving old and new information within one class? And how does one determine the balance between old and new information?

Retrieval practice

While tests are most often used in educational settings for assessment, a lesser-known benefit of tests is that they actually improve memory of the tested information. If we think of our memories as libraries of information, then it may seem surprising that retrieval (which happens when we take a test) improves memory; however, we know from a century of research that retrieving knowledge actually strengthens it (see Karpicke, Lehman, & Aue, 2014 ). Testing was shown to strengthen memory as early as 100 years ago (Gates, 1917 ), and there has been a surge of research in the last decade on the mnemonic benefits of testing, or retrieval practice . Most of the research on the effectiveness of retrieval practice has been done with college students (see Roediger & Karpicke, 2006 ; Roediger, Putnam, & Smith, 2011 ), but retrieval-based learning has been shown to be effective at producing learning for a wide range of ages, including preschoolers (Fritz, Morris, Nolan, & Singleton, 2007 ), elementary-aged children (e.g., Karpicke, Blunt, & Smith, 2016 ; Karpicke, Blunt, Smith, & Karpicke, 2014 ; Lipko-Speed, Dunlosky, & Rawson, 2014 ; Marsh, Fazio, & Goswick, 2012 ; Ritchie, Della Sala, & McIntosh, 2013 ), middle-school students (e.g., McDaniel, Thomas, Agarwal, McDermott, & Roediger, 2013 ; McDermott, Agarwal, D’Antonio, Roediger, & McDaniel, 2014 ), and high-school students (e.g., McDermott et al., 2014 ). In addition, the effectiveness of retrieval-based learning has been extended beyond simple testing to other activities in which retrieval practice can be integrated, such as concept mapping (Blunt & Karpicke, 2014 ; Karpicke, Blunt, et al., 2014 ; Ritchie et al., 2013 ).

A debate is currently ongoing as to the effectiveness of retrieval practice for more complex materials (Karpicke & Aue, 2015; Roelle & Berthold, 2017; Van Gog & Sweller, 2015). Practicing retrieval has been shown to improve the application of knowledge to new situations (e.g., Butler, 2010; Dirkx, Kester, & Kirschner, 2014; McDaniel et al., 2013; Smith, Blunt, Whiffen, & Karpicke, 2016; but see Tran, Rohrer, & Pashler, 2015, and Wooldridge, Bugg, McDaniel, & Liu, 2014, for retrieval practice studies that showed limited or no increased transfer compared to restudy). Retrieval practice effects on higher-order learning may be more sensitive than fact learning to encoding factors, such as the way material is presented during study (Eglington & Kang, 2016). In addition, retrieval practice may be more beneficial for higher-order learning if it includes more scaffolding (Fiechter & Benjamin, 2017; but see Smith, Blunt, et al., 2016) and targeted practice with application questions (Son & Rivas, 2016).

How does retrieval practice help memory? Figure  3 illustrates both the direct and indirect benefits of retrieval practice identified by the literature. The act of retrieval itself is thought to strengthen memory (Karpicke, Blunt, et al., 2014 ; Roediger & Karpicke, 2006 ; Smith, Roediger, & Karpicke, 2013 ). For example, Smith et al. ( 2013 ) showed that if students brought information to mind without actually producing it (covert retrieval), they remembered the information just as well as if they overtly produced the retrieved information (overt retrieval). Importantly, both overt and covert retrieval practice improved memory over control groups without retrieval practice, even when feedback was not provided. The fact that bringing information to mind in the absence of feedback or restudy opportunities improves memory leads researchers to conclude that it is the act of retrieval – thinking back to bring information to mind – that improves memory of that information.

The benefit of retrieval practice depends to a certain extent on successful retrieval (see Karpicke, Lehman, et al., 2014 ). For example, in Experiment 4 of Smith et al. ( 2013 ), students successfully retrieved 72% of the information during retrieval practice. Of course, retrieving 72% of the information was compared to a restudy control group, during which students were re-exposed to 100% of the information, creating a bias in favor of the restudy condition. Yet retrieval led to superior memory later compared to the restudy control. However, if retrieval success is extremely low, then it is unlikely to improve memory (e.g., Karpicke, Blunt, et al., 2014 ), particularly in the absence of feedback. On the other hand, if retrieval-based learning situations are constructed in such a way that ensures high levels of success, the act of bringing the information to mind may be undermined, thus making it less beneficial. For example, if a student reads a sentence and then immediately covers the sentence and recites it out loud, they are likely not retrieving the information but rather just keeping the information in their working memory long enough to recite it again (see Smith, Blunt, et al., 2016 for a discussion of this point). Thus, it is important to balance success of retrieval with overall difficulty in retrieving the information (Smith & Karpicke, 2014 ; Weinstein, Nunes, & Karpicke, 2016 ). If initial retrieval success is low, then feedback can help improve the overall benefit of practicing retrieval (Kang, McDermott, & Roediger, 2007 ; Smith & Karpicke, 2014 ). Kornell, Klein, and Rawson ( 2015 ), however, found that it was the retrieval attempt and not the correct production of information that produced the retrieval practice benefit – as long as the correct answer was provided after an unsuccessful attempt, the benefit was the same as for a successful retrieval attempt in this set of studies. 
From a practical perspective, it would be helpful for teachers to know when retrieval attempts in the absence of success are helpful, and when they are not. There may also be additional reasons beyond retrieval benefits that would push teachers towards retrieval practice activities that produce some success amongst students; for example, teachers may hesitate to give students retrieval practice exercises that are too difficult, as this may negatively affect self-efficacy and confidence.

In addition to the fact that bringing information to mind directly improves memory for that information, engaging in retrieval practice can produce indirect benefits as well (see Roediger et al., 2011 ). For example, research by Weinstein, Gilmore, Szpunar, and McDermott ( 2014 ) demonstrated that when students expected to be tested, the increased test expectancy led to better-quality encoding of new information. Frequent testing can also serve to decrease mind-wandering – that is, thoughts that are unrelated to the material that students are supposed to be studying (Szpunar, Khan, & Schacter, 2013 ).

Practicing retrieval is a powerful way to improve meaningful learning of information, and it is relatively easy to implement in the classroom. For example, requiring students to practice retrieval can be as simple as asking them to put their class materials away and try to write out everything they know about a topic. Retrieval-based learning strategies are also flexible. Instructors can give students practice tests (e.g., short-answer or multiple-choice; see Smith & Karpicke, 2014), provide open-ended prompts for the students to recall information (e.g., Smith, Blunt, et al., 2016), or ask their students to create concept maps from memory (e.g., Blunt & Karpicke, 2014). In one study, Weinstein et al. (2016) looked at the effectiveness of inserting simple short-answer questions into online learning modules to see whether they improved student performance. Weinstein and colleagues also manipulated the placement of the questions: for some students the questions were interspersed throughout the module, and for others they were all presented at the end. Initial success on the short-answer questions was higher when the questions were interspersed throughout the module; however, on a later test of learning from that module, the original placement of the questions did not matter for performance. (As with spaced practice, where the optimal gap between study sessions is contingent on the retention interval, the optimal difficulty and level of success during retrieval practice may also depend on the retention interval.) Importantly, both groups of students who answered questions performed better on the delayed test than a control group that had no question opportunities during the module. Thus, the important thing is for instructors to provide opportunities for retrieval practice during learning; based on previous research, any activity that promotes the successful retrieval of information should improve learning.
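The practice-test idea above can be sketched as a short self-quizzing loop with feedback after every attempt (feedback is most helpful when initial retrieval success is low). The quiz items and scoring rule below are placeholder assumptions, not part of any study described here.

```python
# Sketch: a low-stakes retrieval practice loop. The student attempts
# to retrieve each answer before it is shown, and feedback follows
# every attempt. The items here are placeholder examples.

def run_quiz(items, get_answer):
    """items: list of (prompt, answer) pairs; get_answer: prompt -> str."""
    correct = 0
    for prompt, answer in items:
        attempt = get_answer(prompt)          # the retrieval attempt
        if attempt.strip().lower() == answer.lower():
            correct += 1
        print(f"{prompt} -> correct answer: {answer}")  # feedback
    return correct / len(items)

items = [
    ("Which practice aids long-term retention: spaced or massed?", "spaced"),
    ("What is the act of bringing information to mind called?", "retrieval"),
]
# In a classroom tool, get_answer would collect the student's typed response.
score = run_quiz(items, get_answer=lambda prompt: "spaced")
```

Because the quiz returns a proportion rather than a grade, it fits the low-stakes, learning-oriented uses of testing discussed in the following paragraphs.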

Retrieval practice has received a lot of attention in teacher blogs (see “Learning Scientists” ( 2016b ) for a collection). A common theme seems to be an emphasis on low-stakes (Young, 2016 ) and even no-stakes (Cox, 2015 ) testing, the goal of which is to increase learning rather than assess performance. In fact, one well-known charter school in the UK has an official homework policy grounded in retrieval practice: students are to test themselves on subject knowledge for 30 minutes every day in lieu of standard homework (Michaela Community School, 2014 ). The utility of homework, particularly for younger children, is often a hotly debated topic outside of academia (e.g., Shumaker, 2016 ; but see Jones ( 2016 ) for an opposing viewpoint and Cooper ( 1989 ) for the original research the blog posts were based on). Whereas some research shows clear links between homework and academic achievement (Valle et al., 2016 ), other researchers have questioned the effectiveness of homework (Dettmers, Trautwein, & Lüdtke, 2009 ). Perhaps amending homework to involve retrieval practice might make it more effective; this remains an open empirical question.

One final consideration is that of test anxiety. While retrieval practice can be very powerful at improving memory, some research shows that pressure during retrieval can undermine some of the learning benefit. For example, Hinze and Rapp ( 2014 ) manipulated pressure during quizzing to create high-pressure and low-pressure conditions. On the quizzes themselves, students performed equally well. However, those in the high-pressure condition did not perform as well on a later criterion test as the low-pressure group. Thus, test anxiety may reduce the learning benefit of retrieval practice. Eliminating all high-pressure tests is probably not possible, but instructors can provide a number of low-stakes retrieval opportunities to help increase learning. The use of low-stakes testing can serve to decrease test anxiety (Khanna, 2015 ), and has recently been shown to negate the detrimental impact of stress on learning (Smith, Floerke, & Thomas, 2016 ). This is a particularly important line of inquiry for future research, because many teachers who are unfamiliar with the effectiveness of retrieval practice may be put off by the implied pressure of “testing”, which evokes the much-maligned high-stakes standardized tests (e.g., McHugh, 2013 ).

Elaboration

Elaboration involves connecting new information to pre-existing knowledge. Anderson ( 1983 , p. 285) made the following claim about elaboration: “One of the most potent manipulations that can be performed in terms of increasing a subject’s memory for material is to have the subject elaborate on the to-be-remembered material.” Postman ( 1976 , p. 28) defined elaboration most parsimoniously as “additions to nominal input”, and Hirshman ( 2001 , p. 4369) provided an elaboration on this definition (pun intended!), defining elaboration as “A conscious, intentional process that associates to-be-remembered information with other information in memory.” However, in practice, elaboration could mean many different things. The common thread in all the definitions is that elaboration involves adding features to an existing memory.

One possible instantiation of elaboration is thinking about information on a deeper level. The levels (or “depth”) of processing framework, proposed by Craik and Lockhart ( 1972 ), predicts that information will be remembered better if it is processed more deeply in terms of meaning, rather than shallowly in terms of form. The levels of processing framework has, however, received a number of criticisms (Craik, 2002 ). One major problem with this framework is that it is difficult to measure “depth”. And if we are not able to actually measure depth, then the argument can become circular: is it that something was remembered better because it was studied more deeply, or do we conclude that it must have been studied more deeply because it is remembered better? (See Lockhart & Craik, 1990 , for further discussion of this issue).

Another mechanism by which elaboration can confer a benefit to learning is via improvement in organization (Bellezza, Cheesman, & Reddy, 1977 ; Mandler, 1979 ). By this view, elaboration involves making information more integrated and organized with existing knowledge structures. By connecting and integrating the to-be-learned information with other concepts in memory, students can increase the extent to which the ideas are organized in their minds, and this increased organization presumably facilitates the reconstruction of the past at the time of retrieval.

Elaboration is such a broad term and can include so many different techniques that it is hard to claim that elaboration will always help learning. There is, however, a specific technique under the umbrella of elaboration for which there is relatively strong evidence in terms of effectiveness (Dunlosky et al., 2013 ; Pashler et al., 2007 ). This technique is called elaborative interrogation, and involves students questioning the materials that they are studying (Pressley, McDaniel, Turnure, Wood, & Ahmad, 1987 ). More specifically, students using this technique would ask “how” and “why” questions about the concepts they are studying (see Fig.  4 for an example on the physics of flight). Then, crucially, students would try to answer these questions – either from their materials or, eventually, from memory (McDaniel & Donnelly, 1996 ). The process of figuring out the answer to the questions – with some amount of uncertainty (Overoye & Storm, 2015 ) – can help learning. When using this technique, however, it is important that students check their answers with their materials or with the teacher; when the content generated through elaborative interrogation is poor, it can actually hurt learning (Clinton, Alibali, & Nathan, 2016 ).

Students can also be encouraged to self-explain concepts to themselves while learning (Chi, De Leeuw, Chiu, & LaVancher, 1994 ). This might involve students simply saying out loud what steps they need to perform to solve an equation. Aleven and Koedinger ( 2002 ) conducted two classroom studies in which students either were or were not prompted by a “cognitive tutor” to provide self-explanations during a problem-solving task, and found that the self-explanations led to improved performance. According to the authors, this approach could scale well to real classrooms. If possible and relevant, students could even perform actions alongside their self-explanations (Cohen, 1981 ; see also the enactment effect, Hainselin, Picard, Manolli, Vankerkore-Candas, & Bourdin, 2017 ). Instructors can scaffold students in these types of activities by providing self-explanation prompts throughout to-be-learned material (O’Neil et al., 2014 ). Ultimately, the greatest potential benefit of accurate self-explanation or elaboration is that the student will be able to transfer their knowledge to a new situation (Rittle-Johnson, 2006 ).

The technical term “elaborative interrogation” has not made it into the vernacular of educational bloggers (a search on https://educationechochamberuncut.wordpress.com , which consolidates over 3,000 UK-based teacher blogs, yielded zero results for that term). However, a few teachers have blogged about elaboration more generally (e.g., Hobbiss, 2016 ) and deep questioning specifically (e.g., Class Teaching, 2013 ), just without using the specific terminology. This strategy in particular may benefit from a more open dialog between researchers and teachers to facilitate the use of elaborative interrogation in the classroom and to address possible barriers to implementation. In terms of advancing the scientific understanding of elaborative interrogation in a classroom setting, it would be informative to conduct a larger-scale intervention to see whether having students elaborate during reading actually helps their understanding. It would also be useful to know whether the students really need to generate their own elaborative interrogation (“how” and “why”) questions, versus answering questions provided by others. How long should students persist to find the answers? When is the right time to have students engage in this task, given the levels of expertise required to do it well (Clinton et al., 2016 )? Without knowing the answers to these questions, it may be too early for us to instruct teachers to use this technique in their classes. Finally, elaborative interrogation takes a long time. Is this time efficiently spent? Or, would it be better to have the students try to answer a few questions, pool their information as a class, and then move to practicing retrieval of the information?

Concrete examples

Providing supporting information can improve the learning of key ideas and concepts. Specifically, using concrete examples to supplement content that is more conceptual in nature can make the ideas easier to understand and remember. Concrete examples can provide several advantages to the learning process: (a) they can concisely convey information, (b) they can provide students with more concrete information that is easier to remember, and (c) they can take advantage of the superior memorability of pictures relative to words (see “Dual Coding”).

Words that are more concrete are both recognized and recalled better than abstract words (Gorman, 1961 ; e.g., “button” and “bound,” respectively). Furthermore, it has been demonstrated that information that is more concrete and imageable enhances the learning of associations, even with abstract content (Caplan & Madan, 2016 ; Madan, Glaholt, & Caplan, 2010 ; Paivio, 1971 ). Following from this, providing concrete examples during instruction should improve retention of related abstract concepts, rather than the concrete examples alone being remembered better. Concrete examples can be useful both during instruction and during practice problems. Having students actively explain how two examples are similar and encouraging them to extract the underlying structure on their own can also help with transfer. In a laboratory study, Berry ( 1983 ) demonstrated that students performed well when given concrete practice problems, regardless of the use of verbalization (akin to elaborative interrogation), but that verbalization helped students transfer understanding from concrete to abstract problems. One particularly important area of future research is determining how students can best make the link between concrete examples and abstract ideas.

Since abstract concepts are harder to grasp than concrete information (Paivio, Walsh, & Bons, 1994 ), it follows that teachers ought to illustrate abstract ideas with concrete examples. However, care must be taken when selecting the examples. LeFevre and Dixon ( 1986 ) provided students with both concrete examples and abstract instructions and found that when these were inconsistent, students followed the concrete examples rather than the abstract instructions, potentially constraining the application of the abstract concept being taught. Lew, Fukawa-Connelly, Mejía-Ramos, and Weber ( 2016 ) used an interview approach to examine why students may have difficulty understanding a lecture. Responses indicated that some issues were related to understanding the overarching topic rather than the component parts, and to the use of informal colloquialisms that did not clearly follow from the material being taught. Both of these issues could potentially have been addressed through the inclusion of a greater number of relevant concrete examples.

One concern with using concrete examples is that students might only remember the examples – especially if they are particularly memorable, such as fun or gimmicky examples – and will not be able to transfer their understanding from one example to another, or more broadly to the abstract concept. However, there does not seem to be any evidence that fun relevant examples actually hurt learning by harming memory for important information. Instead, fun examples and jokes tend to be more memorable, but this boost in memory for the joke does not seem to come at a cost to memory for the underlying concept (Baldassari & Kelley, 2012 ). However, two important caveats need to be highlighted. First, to the extent that the more memorable content is not relevant to the concepts of interest, learning of the target information can be compromised (Harp & Mayer, 1998 ). Thus, care must be taken to ensure that all examples and gimmicks are, in fact, related to the core concepts that the students need to acquire, and do not contain irrelevant perceptual features (Kaminski & Sloutsky, 2013 ).

The second issue is that novices often notice and remember the surface details of an example rather than the underlying structure. Experts, on the other hand, can extract the underlying structure from examples that have divergent surface features (Chi, Feltovich, & Glaser, 1981 ; see Fig.  5 for an example from physics). Gick and Holyoak ( 1983 ) tried to get students to apply a rule from one problem to another problem that appeared different on the surface, but was structurally similar. They found that providing multiple examples helped with this transfer process compared to only using one example – especially when the examples provided had different surface details. More work is also needed to determine how many examples are sufficient for generalization to occur (and this, of course, will vary with contextual factors and individual differences). Further research on the continuum between concrete/specific examples and more abstract concepts would also be informative. That is, if an example is not concrete enough, it may be too difficult to understand. On the other hand, if the example is too concrete, that could be detrimental to generalization to the more abstract concept (although a diverse set of very concrete examples may be able to help with this). In fact, in a controversial article, Kaminski, Sloutsky, and Heckler ( 2008 ) claimed that abstract examples were more effective than concrete examples. Later rebuttals of this paper contested whether the abstract versus concrete distinction was clearly defined in the original study (see Reed, 2008 , for a collection of letters on the subject). This ideal point along the concrete-abstract continuum might also interact with development.

Finding teacher blog posts on concrete examples proved to be more difficult than for the other strategies in this review. One optimistic possibility is that teachers frequently use concrete examples in their teaching, and thus do not think of this as a specific contribution from cognitive psychology; the one blog post we were able to find that discussed concrete examples suggests that this might be the case (Boulton, 2016 ). The idea of “linking abstract concepts with concrete examples” is also covered in 25% of teacher-training textbooks used in the US, according to the report by Pomerance et al. ( 2016 ); this is the second most frequently covered of the six strategies, after “posing probing questions” (i.e., elaborative interrogation). A useful direction for future research would be to establish how teachers are using concrete examples in their practice, and whether we can make any suggestions for improvement based on research into the science of learning. For example, if two examples are better than one (Bauernschmidt, 2017 ), are additional examples also needed, or are there diminishing returns from providing more examples? And, how can teachers best ensure that concrete examples are consistent with prior knowledge (Reed, 2008 )?

Dual coding

Both the memory literature and folk psychology support the notion that visual examples are beneficial—hence the adage “a picture is worth a thousand words” (traced back to an advertising slogan from the 1920s; Meider, 1990 ). Indeed, it is well understood that more information can be conveyed through a simple illustration than through several paragraphs of text (e.g., Barker & Manji, 1989 ; Mayer & Gallini, 1990 ). Illustrations can be particularly helpful when the described concept involves several parts or steps and is intended for individuals with low prior knowledge (Eitel & Scheiter, 2015 ; Mayer & Gallini, 1990 ). Figure  6 provides a concrete example of this, illustrating how information can flow through neurons and synapses.

In addition to being able to convey information more succinctly, pictures are also more memorable than words (Paivio & Csapo, 1969 , 1973 ). In the memory literature, this is referred to as the picture superiority effect , and dual coding theory was developed in part to explain this effect. Dual coding follows from the notion of text being accompanied by complementary visual information to enhance learning. Paivio ( 1971 , 1986 ) proposed dual coding theory as a mechanistic account for the integration of multiple information “codes” to process information. In this theory, a code corresponds to a modal or otherwise distinct representation of a concept—e.g., “mental images for ‘book’ have visual, tactual, and other perceptual qualities similar to those evoked by the referent objects on which the images are based” (Clark & Paivio, 1991 , p. 152). Aylwin ( 1990 ) provides a clear example of how the word “dog” can evoke verbal, visual, and enactive representations (see Fig.  7 for a similar example for the word “SPOON”, based on Aylwin, 1990 (Fig.  2 ) and Madan & Singhal, 2012a (Fig.  3 )). Codes can also correspond to emotional properties (Clark & Paivio, 1991 ; Paivio, 2013 ). Clark and Paivio ( 1991 ) provide a thorough review of dual coding theory and its relation to education, while Paivio ( 2007 ) provides a comprehensive treatise on dual coding theory. Broadly, dual coding theory suggests that providing multiple representations of the same information enhances learning and memory, and that information that more readily evokes additional representations (through automatic imagery processes) receives a similar benefit.

Paivio and Csapo ( 1973 ) suggest that verbal and imaginal codes have independent and additive effects on memory recall. Using visuals to improve learning and memory has been particularly applied to vocabulary learning (Danan, 1992 ; Sadoski, 2005 ), but has also shown success in other domains such as in health care (Hartland, Biddle, & Fallacaro, 2008 ). To take advantage of dual coding, verbal information should be accompanied by a visual representation when possible. However, while the studies discussed all indicate that the use of multiple representations of information is favorable, it is important to acknowledge that each representation also increases cognitive load and can lead to over-saturation (Mayer & Moreno, 2003 ).

Given that pictures are generally remembered better than words, it is important to ensure that the pictures students are provided with are helpful and relevant to the content they are expected to learn. McNeill, Uttal, Jarvin, and Sternberg ( 2009 ) found that providing visual examples decreased conceptual errors. However, McNeill et al. also found that when students were given visually rich examples, they performed more poorly than students who were not given any visual example, suggesting that visual details can at times become a distraction and hinder performance. Thus, it is important to ensure that images used in teaching are clear and unambiguous in their meaning (Schwartz, 2007 ).

Further broadening the scope of dual coding theory, Engelkamp and Zimmer ( 1984 ) suggest that motor movements, such as “turning the handle,” can provide an additional motor code that can improve memory, linking studies of motor actions (enactment) with dual coding theory (Clark & Paivio, 1991 ; Engelkamp & Cohen, 1991 ; Madan & Singhal, 2012c ). Indeed, enactment effects appear to primarily occur during learning, rather than during retrieval (Peterson & Mulligan, 2010 ). Along similar lines, Wammes, Meade, and Fernandes ( 2016 ) demonstrated that generating drawings can provide memory benefits beyond what could otherwise be explained by visual imagery, picture superiority, and other memory-enhancing effects. Providing convergent evidence, even when overt motor actions are not critical in themselves, words representing functional objects have been shown to enhance later memory (Madan & Singhal, 2012b ; Montefinese, Ambrosini, Fairfield, & Mammarella, 2013 ). This indicates that motoric processes can improve memory in a manner similar to visual imagery, paralleling the memory differences observed for concrete versus abstract words. Further research suggests that automatic motor simulation for functional objects is likely responsible for this memory benefit (Madan, Chen, & Singhal, 2016 ).

When teachers combine visuals and words in their educational practice, however, they may not always be taking advantage of dual coding – at least, not in the optimal manner. For example, a recent discussion on Twitter centered around one teacher’s decision to have 7th Grade students replace certain words in their science laboratory report with a picture of that word (e.g., the instructions read “using a syringe …” and a picture of a syringe replaced the word; Turner, 2016a ). Other teachers argued that this was not dual coding (Beaven, 2016 ; Williams, 2016 ), because there were no longer two different representations of the information. The first teacher maintained that dual coding was preserved, because this laboratory report with pictures was to be used alongside the original, fully verbal report (Turner, 2016b ). This particular implementation – having students replace individual words with pictures – has not been examined in the cognitive literature, presumably because no benefit would be expected. In any case, we need to be clearer about implementations for dual coding, and more research is needed to clarify how teachers can make use of the benefits conferred by multiple representations and picture superiority.

Critically, dual coding theory is distinct from the notion of “learning styles,” which describes the idea that individuals benefit from instruction that matches their modality preference. While this idea is pervasive and individuals often subjectively feel that they have a preference, the learning styles theory is not supported by empirical findings (e.g., Kavale, Hirshoren, & Forness, 1998 ; Pashler, McDaniel, Rohrer, & Bjork, 2008 ; Rohrer & Pashler, 2012 ). That is, there is no evidence that instructing students in their preferred learning style leads to an overall improvement in learning (the “meshing” hypothesis). Moreover, learning styles have come to be described as a myth or urban legend within psychology (Coffield, Moseley, Hall, & Ecclestone, 2004 ; Hattie & Yates, 2014 ; Kirschner & van Merriënboer, 2013 ; Kirschner, 2017 ); skepticism about learning styles is a common stance amongst evidence-informed teachers (e.g., Saunders, 2016 ). Providing evidence against the notion of learning styles, Kraemer, Rosenberg, and Thompson-Schill ( 2009 ) found that individuals who scored as “verbalizers” and “visualizers” did not perform any better on experimental trials matching their preference. Instead, it has recently been shown that learning through one’s preferred learning style is associated with elevated subjective judgements of learning, but not objective performance (Knoll, Otani, Skeel, & Van Horn, 2017 ). In contrast to learning styles, dual coding is based on providing additional, complementary forms of information to enhance learning, rather than tailoring instruction to individuals’ preferences.

Genuine educational environments present many opportunities for combining the strategies outlined above. Spacing can be particularly potent for learning if it is combined with retrieval practice. The additive benefits of retrieval practice and spacing can be gained by engaging in retrieval practice multiple times (also known as distributed practice; see Cepeda et al., 2006 ). Interleaving naturally entails spacing if students interleave old and new material. Concrete examples can be both verbal and visual, making use of dual coding. In addition, the strategies of elaboration, concrete examples, and dual coding all work best when used as part of retrieval practice. For example, in the concept-mapping studies mentioned above (Blunt & Karpicke, 2014 ; Karpicke, Blunt, et al., 2014 ), creating concept maps while looking at course materials (e.g., a textbook) was not as effective for later memory as creating concept maps from memory. When practicing elaborative interrogation, students can start off answering the “how” and “why” questions they pose for themselves using class materials, and work their way up to answering them from memory. And when interleaving different problem types, students should be practicing answering them rather than just looking over worked examples.

But while these ideas for strategy combinations have empirical bases, it has not yet been established whether the benefits of the strategies to learning are additive, super-additive, or, in some cases, incompatible. Thus, future research needs to (a) better formalize the definition of each strategy (particularly critical for elaboration and dual coding), (b) identify best practices for implementation in the classroom, (c) delineate the boundary conditions of each strategy, and (d) strategically investigate interactions between the six strategies we outlined in this manuscript.

Aleven, V. A., & Koedinger, K. R. (2002). An effective metacognitive strategy: learning by doing and explaining with a computer-based cognitive tutor. Cognitive Science, 26 , 147–179.

Anderson, J. R. (1983). A spreading activation theory of memory. Journal of Verbal Learning and Verbal Behavior, 22 , 261–295.

Arnold, K. M., & McDermott, K. B. (2013). Test-potentiated learning: distinguishing between direct and indirect effects of tests. Journal of Experimental Psychology: Learning, Memory, and Cognition, 39 , 940–945.

Aylwin, S. (1990). Imagery and affect: big questions, little answers. In P. J. Thompson, D. E. Marks, & J. T. E. Richardson (Eds.), Imagery: Current developments . New York: International Library of Psychology.

Baldassari, M. J., & Kelley, M. (2012). Make’em laugh? The mnemonic effect of humor in a speech. Psi Chi Journal of Psychological Research, 17 , 2–9.

Barker, P. G., & Manji, K. A. (1989). Pictorial dialogue methods. International Journal of Man-Machine Studies, 31 , 323–347.

Bauernschmidt, A. (2017). GUEST POST: two examples are better than one. [Blog post]. The Learning Scientists Blog . Retrieved from http://www.learningscientists.org/blog/2017/5/30-1 . Accessed 25 Dec 2017.

Beaven, T. (2016). @doctorwhy @FurtherEdagogy @doc_kristy Right, I thought the whole point of dual coding was to use TWO codes: pics + words of the SAME info? [Tweet]. Retrieved from https://twitter.com/TitaBeaven/status/807504041341308929 . Accessed 25 Dec 2017.

Bellezza, F. S., Cheesman, F. L., & Reddy, B. G. (1977). Organization and semantic elaboration in free recall. Journal of Experimental Psychology: Human Learning and Memory, 3 , 539–550.

Benney, D. (2016). (Trying to apply) spacing in a content heavy subject [Blog post]. Retrieved from https://mrbenney.wordpress.com/2016/10/16/trying-to-apply-spacing-in-science/ . Accessed 25 Dec 2017.

Berry, D. C. (1983). Metacognitive experience and transfer of logical reasoning. Quarterly Journal of Experimental Psychology, 35A , 39–49.

Birnbaum, M. S., Kornell, N., Bjork, E. L., & Bjork, R. A. (2013). Why interleaving enhances inductive learning: the roles of discrimination and retrieval. Memory & Cognition, 41 , 392–402.

Bjork, R. A. (1994). Memory and metamemory considerations in the training of human beings. In J. Metcalfe & A. Shimamura (Eds.), Metacognition: Knowing about knowing (pp. 185–205). Cambridge, MA: MIT Press.

Bjork, R. A. (1999). Assessing our own competence: heuristics and illusions. In D. Gopher & A. Koriat (Eds.), Attention and performance XVII. Cognitive regulation of performance: Interaction of theory and application (pp. 435–459). Cambridge, MA: MIT Press.

Bjork, R. A., & Bjork, E. L. (1992). A new theory of disuse and an old theory of stimulus fluctuation. From learning processes to cognitive processes: Essays in honor of William K. Estes, 2 , 35–67.

Bjork, E. L., & Bjork, R. A. (2011). Making things hard on yourself, but in a good way: creating desirable difficulties to enhance learning. Psychology and the real world: Essays illustrating fundamental contributions to society , 56–64.

Blunt, J. R., & Karpicke, J. D. (2014). Learning with retrieval-based concept mapping. Journal of Educational Psychology, 106 , 849–858.

Boulton, K. (2016). What does cognitive overload look like in the humanities? [Blog post]. Retrieved from https://educationechochamberuncut.wordpress.com/2016/03/05/what-does-cognitive-overload-look-like-in-the-humanities-kris-boulton-2/ . Accessed 25 Dec 2017.

Brown, P. C., Roediger, H. L., & McDaniel, M. A. (2014). Make it stick . Cambridge, MA: Harvard University Press.

Butler, A. C. (2010). Repeated testing produces superior transfer of learning relative to repeated studying. Journal of Experimental Psychology: Learning, Memory, and Cognition, 36 , 1118–1133.

Caplan, J. B., & Madan, C. R. (2016). Word-imageability enhances association-memory by recruiting hippocampal activity. Journal of Cognitive Neuroscience, 28 , 1522–1538.

Cepeda, N. J., Pashler, H., Vul, E., Wixted, J. T., & Rohrer, D. (2006). Distributed practice in verbal recall tasks: a review and quantitative synthesis. Psychological Bulletin, 132 , 354–380.

Cepeda, N. J., Vul, E., Rohrer, D., Wixted, J. T., & Pashler, H. (2008). Spacing effects in learning: a temporal ridgeline of optimal retention. Psychological Science, 19 , 1095–1102.

Chi, M. T., De Leeuw, N., Chiu, M. H., & LaVancher, C. (1994). Eliciting self-explanations improves understanding. Cognitive Science, 18 , 439–477.

Chi, M. T., Feltovich, P. J., & Glaser, R. (1981). Categorization and representation of physics problems by experts and novices. Cognitive Science, 5 , 121–152.

CIFE. (2012). No January A level and other changes. Retrieved from http://www.cife.org.uk/cife-general-news/no-january-a-level-and-other-changes/ . Accessed 25 Dec 2017.

Clark, D. (2016). One book on learning that every teacher, lecturer & trainer should read (7 reasons) [Blog post]. Retrieved from http://donaldclarkplanb.blogspot.com/2016/03/one-book-on-learning-that-every-teacher.html . Accessed 25 Dec 2017.

Clark, J. M., & Paivio, A. (1991). Dual coding theory and education. Educational Psychology Review, 3 , 149–210.

Class Teaching. (2013). Deep questioning [Blog post]. Retrieved from https://classteaching.wordpress.com/2013/07/12/deep-questioning/ . Accessed 25 Dec 2017.

Clinton, V., Alibali, M. W., & Nathan, M. J. (2016). Learning about posterior probability: do diagrams and elaborative interrogation help? The Journal of Experimental Education, 84 , 579–599.

Coffield, F., Moseley, D., Hall, E., & Ecclestone, K. (2004). Learning styles and pedagogy in post-16 learning: a systematic and critical review . London: Learning & Skills Research Centre.

Cohen, R. L. (1981). On the generality of some memory laws. Scandinavian Journal of Psychology, 22 , 267–281.

Cooper, H. (1989). Synthesis of research on homework. Educational Leadership, 47 , 85–91.

Corbett, A. T., Reed, S. K., Hoffmann, R., MacLaren, B., & Wagner, A. (2010). Interleaving worked examples and cognitive tutor support for algebraic modeling of problem situations. In Proceedings of the Thirty-Second Annual Meeting of the Cognitive Science Society (pp. 2882–2887).

Cox, D. (2015). No stakes testing – not telling students their results [Blog post]. Retrieved from https://missdcoxblog.wordpress.com/2015/06/06/no-stakes-testing-not-telling-students-their-results/ . Accessed 25 Dec 2017.

Cox, D. (2016a). Ditch revision. Teach it well [Blog post]. Retrieved from https://missdcoxblog.wordpress.com/2016/01/09/ditch-revision-teach-it-well/ . Accessed 25 Dec 2017.

Cox, D. (2016b). ‘They need to remember this in three years time’: spacing & interleaving for the new GCSEs [Blog post]. Retrieved from https://missdcoxblog.wordpress.com/2016/03/25/they-need-to-remember-this-in-three-years-time-spacing-interleaving-for-the-new-gcses/ . Accessed 25 Dec 2017.

Craik, F. I. (2002). Levels of processing: past, present… future? Memory, 10 , 305–318.

Craik, F. I., & Lockhart, R. S. (1972). Levels of processing: a framework for memory research. Journal of Verbal Learning and Verbal Behavior, 11 , 671–684.

Danan, M. (1992). Reversed subtitling and dual coding theory: new directions for foreign language instruction. Language Learning, 42 , 497–527.

Dettmers, S., Trautwein, U., & Lüdtke, O. (2009). The relationship between homework time and achievement is not universal: evidence from multilevel analyses in 40 countries. School Effectiveness and School Improvement, 20 , 375–405.

Dirkx, K. J., Kester, L., & Kirschner, P. A. (2014). The testing effect for learning principles and procedures from texts. The Journal of Educational Research, 107 , 357–364.

Dunlosky, J. (2013). Strengthening the student toolbox: study strategies to boost learning. American Educator, 37 (3), 12–21.

Dunlosky, J., Rawson, K. A., Marsh, E. J., Nathan, M. J., & Willingham, D. T. (2013). Improving students’ learning with effective learning techniques: promising directions from cognitive and educational psychology. Psychological Science in the Public Interest, 14 , 4–58.

Ebbinghaus, H. (1913). Memory (HA Ruger & CE Bussenius, Trans.). New York: Columbia University, Teachers College. (Original work published 1885) . Retrieved from http://psychclassics.yorku.ca/Ebbinghaus/memory8.htm . Accessed 25 Dec 2017.

Eglington, L. G., & Kang, S. H. (2016). Retrieval practice benefits deductive inference. Educational Psychology Review , 1–14.

Eitel, A., & Scheiter, K. (2015). Picture or text first? Explaining sequential effects when learning with pictures and text. Educational Psychology Review, 27 , 153–180.

Engelkamp, J., & Cohen, R. L. (1991). Current issues in memory of action events. Psychological Research, 53 , 175–182.

Engelkamp, J., & Zimmer, H. D. (1984). Motor programme information as a separable memory unit. Psychological Research, 46 , 283–299.

Fawcett, D. (2013). Can I be that little better at……using cognitive science/psychology/neurology to plan learning? [Blog post]. Retrieved from http://reflectionsofmyteaching.blogspot.com/2013/09/can-i-be-that-little-better-atusing.html . Accessed 25 Dec 2017.

Fiechter, J. L., & Benjamin, A. S. (2017). Diminishing-cues retrieval practice: a memory-enhancing technique that works when regular testing doesn’t. Psychonomic Bulletin & Review , 1–9.

Firth, J. (2016). Spacing in teaching practice [Blog post]. Retrieved from http://www.learningscientists.org/blog/2016/4/12-1 . Accessed 25 Dec 2017.

Fordham, M. [mfordhamhistory]. (2016). Is there a meaningful distinction in psychology between ‘thinking’ & ‘critical thinking’? [Tweet]. Retrieved from https://twitter.com/mfordhamhistory/status/809525713623781377 . Accessed 25 Dec 2017.

Fritz, C. O., Morris, P. E., Nolan, D., & Singleton, J. (2007). Expanding retrieval practice: an effective aid to preschool children’s learning. The Quarterly Journal of Experimental Psychology, 60 , 991–1004.

Gates, A. I. (1917). Recitation as a factor in memorizing. Archives of Psychology, 6.

Gick, M. L., & Holyoak, K. J. (1983). Schema induction and analogical transfer. Cognitive Psychology, 15 , 1–38.

Gorman, A. M. (1961). Recognition memory for nouns as a function of abstractedness and frequency. Journal of Experimental Psychology, 61 , 23–39.

Hainselin, M., Picard, L., Manolli, P., Vankerkore-Candas, S., & Bourdin, B. (2017). Hey teacher, don’t leave them kids alone: action is better for memory than reading. Frontiers in Psychology , 8 .

Harp, S. F., & Mayer, R. E. (1998). How seductive details do their damage. Journal of Educational Psychology, 90 , 414–434.

Hartland, W., Biddle, C., & Fallacaro, M. (2008). Audiovisual facilitation of clinical knowledge: A paradigm for dispersed student education based on Paivio’s dual coding theory. AANA Journal, 76 , 194–198.

Hattie, J., & Yates, G. (2014). Visible learning and the science of how we learn . New York: Routledge.

Hausman, H., & Kornell, N. (2014). Mixing topics while studying does not enhance learning. Journal of Applied Research in Memory and Cognition, 3 , 153–160.

Hinze, S. R., & Rapp, D. N. (2014). Retrieval (sometimes) enhances learning: performance pressure reduces the benefits of retrieval practice. Applied Cognitive Psychology, 28 , 597–606.

Hirshman, E. (2001). Elaboration in memory. In N. J. Smelser & P. B. Baltes (Eds.), International encyclopedia of the social & behavioral sciences (pp. 4369–4374). Oxford: Pergamon.


Hobbiss, M. (2016). Make it meaningful! Elaboration [Blog post]. Retrieved from https://hobbolog.wordpress.com/2016/06/09/make-it-meaningful-elaboration/ . Accessed 25 Dec 2017.

Jones, F. (2016). Homework – is it really that useless? [Blog post]. Retrieved from http://www.learningscientists.org/blog/2016/4/5-1 . Accessed 25 Dec 2017.

Kaminski, J. A., & Sloutsky, V. M. (2013). Extraneous perceptual information interferes with children’s acquisition of mathematical knowledge. Journal of Educational Psychology, 105 (2), 351–363.

Kaminski, J. A., Sloutsky, V. M., & Heckler, A. F. (2008). The advantage of abstract examples in learning math. Science, 320 , 454–455.

Kang, S. H. (2016). Spaced repetition promotes efficient and effective learning: policy implications for instruction. Policy Insights from the Behavioral and Brain Sciences, 3, 12–19.

Kang, S. H. K., McDermott, K. B., & Roediger, H. L. (2007). Test format and corrective feedback modify the effects of testing on long-term retention. European Journal of Cognitive Psychology, 19 , 528–558.

Karpicke, J. D., & Aue, W. R. (2015). The testing effect is alive and well with complex materials. Educational Psychology Review, 27 , 317–326.

Karpicke, J. D., Blunt, J. R., Smith, M. A., & Karpicke, S. S. (2014). Retrieval-based learning: The need for guided retrieval in elementary school children. Journal of Applied Research in Memory and Cognition, 3 , 198–206.

Karpicke, J. D., Lehman, M., & Aue, W. R. (2014). Retrieval-based learning: an episodic context account. In B. H. Ross (Ed.), Psychology of Learning and Motivation (Vol. 61, pp. 237–284). San Diego, CA: Elsevier Academic Press.

Karpicke, J. D., Blunt, J. R., & Smith, M. A. (2016). Retrieval-based learning: positive effects of retrieval practice in elementary school children. Frontiers in Psychology, 7 .

Kavale, K. A., Hirshoren, A., & Forness, S. R. (1998). Meta-analytic validation of the Dunn and Dunn model of learning-style preferences: a critique of what was Dunn. Learning Disabilities Research & Practice, 13 , 75–80.

Khanna, M. M. (2015). Ungraded pop quizzes: test-enhanced learning without all the anxiety. Teaching of Psychology, 42 , 174–178.

Kirby, J. (2014). One scientific insight for curriculum design [Blog post]. Retrieved from https://pragmaticreform.wordpress.com/2014/05/05/scientificcurriculumdesign/ . Accessed 25 Dec 2017.

Kirschner, P. A. (2017). Stop propagating the learning styles myth. Computers & Education, 106 , 166–171.

Kirschner, P. A., & van Merriënboer, J. J. G. (2013). Do learners really know best? Urban legends in education. Educational Psychologist, 48 , 169–183.

Knoll, A. R., Otani, H., Skeel, R. L., & Van Horn, K. R. (2017). Learning style, judgments of learning, and learning of verbal and visual information. British Journal of Psychology, 108, 544–563.

Kornell, N., & Bjork, R. A. (2008). Learning concepts and categories: is spacing the “enemy of induction”? Psychological Science, 19, 585–592.

Kornell, N., & Finn, B. (2016). Self-regulated learning: an overview of theory and data. In J. Dunlosky & S. Tauber (Eds.), The Oxford Handbook of Metamemory (pp. 325–340). New York: Oxford University Press.

Kornell, N., Klein, P. J., & Rawson, K. A. (2015). Retrieval attempts enhance learning, but retrieval success (versus failure) does not matter. Journal of Experimental Psychology: Learning, Memory, and Cognition, 41 , 283–294.

Kraemer, D. J. M., Rosenberg, L. M., & Thompson-Schill, S. L. (2009). The neural correlates of visual and verbal cognitive styles. Journal of Neuroscience, 29 , 3792–3798.


Kraft, N. (2015). Spaced practice and repercussions for teaching. Retrieved from http://nathankraft.blogspot.com/2015/08/spaced-practice-and-repercussions-for.html . Accessed 25 Dec 2017.

Learning Scientists. (2016a). Weekly Digest #3: How teachers implement interleaving in their curriculum [Blog post]. Retrieved from http://www.learningscientists.org/blog/2016/3/28/weekly-digest-3 . Accessed 25 Dec 2017.

Learning Scientists. (2016b). Weekly Digest #13: how teachers implement retrieval in their classrooms [Blog post]. Retrieved from http://www.learningscientists.org/blog/2016/6/5/weekly-digest-13 . Accessed 25 Dec 2017.

Learning Scientists. (2016c). Weekly Digest #40: teachers’ implementation of principles from “Make It Stick” [Blog post]. Retrieved from http://www.learningscientists.org/blog/2016/12/18-1 . Accessed 25 Dec 2017.

Learning Scientists. (2017). Weekly Digest #54: is there an app for that? Studying 2.0 [Blog post]. Retrieved from http://www.learningscientists.org/blog/2017/4/9/weekly-digest-54 . Accessed 25 Dec 2017.

LeFevre, J.-A., & Dixon, P. (1986). Do written instructions need examples? Cognition and Instruction, 3 , 1–30.

Lew, K., Fukawa-Connelly, T., Mejía-Ramos, J. P., & Weber, K. (2016). Lectures in advanced mathematics: Why students might not understand what the mathematics professor is trying to convey. Journal for Research in Mathematics Education, 47, 162–198.

Lindsey, R. V., Shroyer, J. D., Pashler, H., & Mozer, M. C. (2014). Improving students’ long-term knowledge retention through personalized review. Psychological Science, 25 , 639–647.

Lipko-Speed, A., Dunlosky, J., & Rawson, K. A. (2014). Does testing with feedback help grade-school children learn key concepts in science? Journal of Applied Research in Memory and Cognition, 3 , 171–176.

Lockhart, R. S., & Craik, F. I. (1990). Levels of processing: a retrospective commentary on a framework for memory research. Canadian Journal of Psychology, 44 , 87–112.

Lovell, O. (2017). How do we know what to put on the quiz? [Blog Post]. Retrieved from http://www.ollielovell.com/olliesclassroom/know-put-quiz/ . Accessed 25 Dec 2017.

Luehmann, A. L. (2008). Using blogging in support of teacher professional identity development: a case study. The Journal of the Learning Sciences, 17 , 287–337.

Madan, C. R., Glaholt, M. G., & Caplan, J. B. (2010). The influence of item properties on association-memory. Journal of Memory and Language, 63 , 46–63.

Madan, C. R., & Singhal, A. (2012a). Motor imagery and higher-level cognition: four hurdles before research can sprint forward. Cognitive Processing, 13 , 211–229.

Madan, C. R., & Singhal, A. (2012b). Encoding the world around us: motor-related processing influences verbal memory. Consciousness and Cognition, 21 , 1563–1570.

Madan, C. R., & Singhal, A. (2012c). Using actions to enhance memory: effects of enactment, gestures, and exercise on human memory. Frontiers in Psychology, 3 .

Madan, C. R., Chen, Y. Y., & Singhal, A. (2016). ERPs differentially reflect automatic and deliberate processing of the functional manipulability of objects. Frontiers in Human Neuroscience, 10 .

Mandler, G. (1979). Organization and repetition: organizational principles with special reference to rote learning. In L. G. Nilsson (Ed.), Perspectives on Memory Research (pp. 293–327). New York: Academic Press.

Marsh, E. J., Fazio, L. K., & Goswick, A. E. (2012). Memorial consequences of testing school-aged children. Memory, 20 , 899–906.

Mayer, R. E., & Gallini, J. K. (1990). When is an illustration worth ten thousand words? Journal of Educational Psychology, 82 , 715–726.

Mayer, R. E., & Moreno, R. (2003). Nine ways to reduce cognitive load in multimedia learning. Educational Psychologist, 38 , 43–52.

McDaniel, M. A., & Donnelly, C. M. (1996). Learning with analogy and elaborative interrogation. Journal of Educational Psychology, 88 , 508–519.

McDaniel, M. A., Thomas, R. C., Agarwal, P. K., McDermott, K. B., & Roediger, H. L. (2013). Quizzing in middle-school science: successful transfer performance on classroom exams. Applied Cognitive Psychology, 27 , 360–372.

McDermott, K. B., Agarwal, P. K., D’Antonio, L., Roediger, H. L., & McDaniel, M. A. (2014). Both multiple-choice and short-answer quizzes enhance later exam performance in middle and high school classes. Journal of Experimental Psychology: Applied, 20 , 3–21.

McHugh, A. (2013). High-stakes tests: bad for students, teachers, and education in general [Blog post]. Retrieved from https://teacherbiz.wordpress.com/2013/07/01/high-stakes-tests-bad-for-students-teachers-and-education-in-general/ . Accessed 25 Dec 2017.

McNeill, N. M., Uttal, D. H., Jarvin, L., & Sternberg, R. J. (2009). Should you show me the money? Concrete objects both hurt and help performance on mathematics problems. Learning and Instruction, 19 , 171–184.

Mieder, W. (1990). “A picture is worth a thousand words”: from advertising slogan to American proverb. Southern Folklore, 47, 207–225.

Michaela Community School. (2014). Homework. Retrieved from http://mcsbrent.co.uk/homework-2/ . Accessed 25 Dec 2017.

Montefinese, M., Ambrosini, E., Fairfield, B., & Mammarella, N. (2013). The “subjective” pupil old/new effect: is the truth plain to see? International Journal of Psychophysiology, 89 , 48–56.

O’Neil, H. F., Chung, G. K., Kerr, D., Vendlinski, T. P., Buschang, R. E., & Mayer, R. E. (2014). Adding self-explanation prompts to an educational computer game. Computers In Human Behavior, 30 , 23–28.

Overoye, A. L., & Storm, B. C. (2015). Harnessing the power of uncertainty to enhance learning. Translational Issues in Psychological Science, 1 , 140–148.

Paivio, A. (1971). Imagery and verbal processes . New York: Holt, Rinehart and Winston.

Paivio, A. (1986). Mental representations: a dual coding approach . New York: Oxford University Press.

Paivio, A. (2007). Mind and its evolution: a dual coding theoretical approach . Mahwah: Erlbaum.

Paivio, A. (2013). Dual coding theory, word abstractness, and emotion: a critical review of Kousta et al. (2011). Journal of Experimental Psychology: General, 142 , 282–287.

Paivio, A., & Csapo, K. (1969). Concrete image and verbal memory codes. Journal of Experimental Psychology, 80 , 279–285.

Paivio, A., & Csapo, K. (1973). Picture superiority in free recall: imagery or dual coding? Cognitive Psychology, 5 , 176–206.

Paivio, A., Walsh, M., & Bons, T. (1994). Concreteness effects on memory: when and why? Journal of Experimental Psychology: Learning, Memory, and Cognition, 20 , 1196–1204.

Pashler, H., McDaniel, M., Rohrer, D., & Bjork, R. (2008). Learning styles: concepts and evidence. Psychological Science in the Public Interest, 9 , 105–119.

Pashler, H., Bain, P. M., Bottge, B. A., Graesser, A., Koedinger, K., McDaniel, M., & Metcalfe, J. (2007). Organizing instruction and study to improve student learning. IES practice guide. NCER 2007–2004. National Center for Education Research .

Patel, R., Liu, R., & Koedinger, K. (2016). When to block versus interleave practice? Evidence against teaching fraction addition before fraction multiplication. In Proceedings of the 38th Annual Meeting of the Cognitive Science Society, Philadelphia, PA .

Penfound, B. (2017). Journey to interleaved practice #2 [Blog Post]. Retrieved from https://fullstackcalculus.com/2017/02/03/journey-to-interleaved-practice-2/ . Accessed 25 Dec 2017.

Penfound, B. [BryanPenfound]. (2016). Does blocked practice/learning lessen cognitive load? Does interleaved practice/learning provide productive struggle? [Tweet]. Retrieved from https://twitter.com/BryanPenfound/status/808759362244087808 . Accessed 25 Dec 2017.

Peterson, D. J., & Mulligan, N. W. (2010). Enactment and retrieval. Memory & Cognition, 38 , 233–243.

Picciotto, H. (2009). Lagging homework [Blog post]. Retrieved from http://blog.mathedpage.org/2013/06/lagging-homework.html . Accessed 25 Dec 2017.

Pomerance, L., Greenberg, J., & Walsh, K. (2016). Learning about learning: what every teacher needs to know. Retrieved from http://www.nctq.org/dmsView/Learning_About_Learning_Report . Accessed 25 Dec 2017.

Postman, L. (1976). Methodology of human learning. In W. K. Estes (Ed.), Handbook of learning and cognitive processes (Vol. 3). Hillsdale: Erlbaum.

Pressley, M., McDaniel, M. A., Turnure, J. E., Wood, E., & Ahmad, M. (1987). Generation and precision of elaboration: effects on intentional and incidental learning. Journal of Experimental Psychology: Learning, Memory, and Cognition, 13 , 291–300.

Reed, S. K. (2008). Concrete examples must jibe with experience. Science, 322 , 1632–1633.

researchED. (2013). How it all began. Retrieved from http://www.researched.org.uk/about/our-story/ . Accessed 25 Dec 2017.

Ritchie, S. J., Della Sala, S., & McIntosh, R. D. (2013). Retrieval practice, with or without mind mapping, boosts fact learning in primary school children. PLoS One, 8 (11), e78976.

Rittle-Johnson, B. (2006). Promoting transfer: effects of self-explanation and direct instruction. Child Development, 77 , 1–15.

Roediger, H. L. (1985). Remembering Ebbinghaus. [Retrospective review of the book On Memory , by H. Ebbinghaus]. Contemporary Psychology, 30 , 519–523.

Roediger, H. L. (2013). Applying cognitive psychology to education: translational educational science. Psychological Science in the Public Interest, 14, 1–3.

Roediger, H. L., & Karpicke, J. D. (2006). The power of testing memory: basic research and implications for educational practice. Perspectives on Psychological Science, 1 , 181–210.

Roediger, H. L., Putnam, A. L., & Smith, M. A. (2011). Ten benefits of testing and their applications to educational practice. In J. Mester & B. Ross (Eds.), The psychology of learning and motivation: cognition in education (pp. 1–36). Oxford: Elsevier.

Roediger, H. L., Finn, B., & Weinstein, Y. (2012). Applications of cognitive science to education. In Della Sala, S., & Anderson, M. (Eds.), Neuroscience in education: the good, the bad, and the ugly . Oxford, UK: Oxford University Press.

Roelle, J., & Berthold, K. (2017). Effects of incorporating retrieval into learning tasks: the complexity of the tasks matters. Learning and Instruction, 49 , 142–156.

Rohrer, D. (2012). Interleaving helps students distinguish among similar concepts. Educational Psychology Review, 24(3), 355–367.

Rohrer, D., Dedrick, R. F., & Stershic, S. (2015). Interleaved practice improves mathematics learning. Journal of Educational Psychology, 107 , 900–908.

Rohrer, D., & Pashler, H. (2012). Learning styles: Where’s the evidence? Medical Education, 46 , 34–35.

Rohrer, D., & Taylor, K. (2007). The shuffling of mathematics problems improves learning. Instructional Science, 35 , 481–498.

Rose, N. (2014). Improving the effectiveness of homework [Blog post]. Retrieved from https://evidenceintopractice.wordpress.com/2014/03/20/improving-the-effectiveness-of-homework/ . Accessed 25 Dec 2017.

Sadoski, M. (2005). A dual coding view of vocabulary learning. Reading & Writing Quarterly, 21 , 221–238.

Saunders, K. (2016). It really is time we stopped talking about learning styles [Blog post]. Retrieved from http://martingsaunders.com/2016/10/it-really-is-time-we-stopped-talking-about-learning-styles/ . Accessed 25 Dec 2017.

Schwartz, D. (2007). If a picture is worth a thousand words, why are you reading this essay? Social Psychology Quarterly, 70 , 319–321.

Shumaker, H. (2016). Homework is wrecking our kids: the research is clear, let’s ban elementary homework. Salon. Retrieved from http://www.salon.com/2016/03/05/homework_is_wrecking_our_kids_the_research_is_clear_lets_ban_elementary_homework . Accessed 25 Dec 2017.

Smith, A. M., Floerke, V. A., & Thomas, A. K. (2016). Retrieval practice protects memory against acute stress. Science, 354 , 1046–1048.

Smith, M. A., Blunt, J. R., Whiffen, J. W., & Karpicke, J. D. (2016). Does providing prompts during retrieval practice improve learning? Applied Cognitive Psychology, 30 , 784–802.

Smith, M. A., & Karpicke, J. D. (2014). Retrieval practice with short-answer, multiple-choice, and hybrid formats. Memory, 22 , 784–802.

Smith, M. A., Roediger, H. L., & Karpicke, J. D. (2013). Covert retrieval practice benefits retention as much as overt retrieval practice. Journal of Experimental Psychology: Learning, Memory, and Cognition, 39 , 1712–1725.

Son, J. Y., & Rivas, M. J. (2016). Designing clicker questions to stimulate transfer. Scholarship of Teaching and Learning in Psychology, 2 , 193–207.

Szpunar, K. K., Khan, N. Y., & Schacter, D. L. (2013). Interpolated memory tests reduce mind wandering and improve learning of online lectures. Proceedings of the National Academy of Sciences, 110 , 6313–6317.

Thomson, R., & Mehring, J. (2016). Better vocabulary study strategies for long-term learning. Kwansei Gakuin University Humanities Review, 20 , 133–141.

Trafton, J. G., & Reiser, B. J. (1993). Studying examples and solving problems: contributions to skill acquisition . Technical report, Naval HCI Research Lab, Washington, DC, USA.

Tran, R., Rohrer, D., & Pashler, H. (2015). Retrieval practice: the lack of transfer to deductive inferences. Psychonomic Bulletin & Review, 22 , 135–140.

Turner, K. [doc_kristy]. (2016a). My dual coding (in red) and some y8 work @AceThatTest they really enjoyed practising the technique [Tweet]. Retrieved from https://twitter.com/doc_kristy/status/807220355395977216 . Accessed 25 Dec 2017.

Turner, K. [doc_kristy]. (2016b). @FurtherEdagogy @doctorwhy their work is revision work, they already have the words on a different page, to compliment not replace [Tweet]. Retrieved from https://twitter.com/doc_kristy/status/807360265100599301 . Accessed 25 Dec 2017.

Valle, A., Regueiro, B., Núñez, J. C., Rodríguez, S., Piñeiro, I., & Rosário, P. (2016). Academic goals, student homework engagement, and academic achievement in elementary school. Frontiers in Psychology, 7 .

Van Gog, T., & Sweller, J. (2015). Not new, but nearly forgotten: the testing effect decreases or even disappears as the complexity of learning materials increases. Educational Psychology Review, 27 , 247–264.

Wammes, J. D., Meade, M. E., & Fernandes, M. A. (2016). The drawing effect: evidence for reliable and robust memory benefits in free recall. Quarterly Journal of Experimental Psychology, 69 , 1752–1776.

Weinstein, Y., Gilmore, A. W., Szpunar, K. K., & McDermott, K. B. (2014). The role of test expectancy in the build-up of proactive interference in long-term memory. Journal of Experimental Psychology: Learning, Memory, and Cognition, 40 , 1039–1048.

Weinstein, Y., Nunes, L. D., & Karpicke, J. D. (2016). On the placement of practice questions during study. Journal of Experimental Psychology: Applied, 22 , 72–84.

Weinstein, Y., & Weinstein-Jones, F. (2017). Topic and quiz spacing spreadsheet: a planning tool for teachers [Blog Post]. Retrieved from http://www.learningscientists.org/blog/2017/5/11-1 . Accessed 25 Dec 2017.

Weinstein-Jones, F., & Weinstein, Y. (2017). Topic spacing spreadsheet for teachers [Excel macro]. Zenodo. http://doi.org/10.5281/zenodo.573764 . Accessed 25 Dec 2017.

Williams, D. [FurtherEdagogy]. (2016). @doctorwhy @doc_kristy word accompanying the visual? I’m unclear how removing words benefit? Would a flow chart better suit a scientific exp? [Tweet]. Retrieved from https://twitter.com/FurtherEdagogy/status/807356800509104128 . Accessed 25 Dec 2017.

Wood, B. (2017). And now for something a little bit different….[Blog post]. Retrieved from https://justateacherstandinginfrontofaclass.wordpress.com/2017/04/20/and-now-for-something-a-little-bit-different/ . Accessed 25 Dec 2017.

Wooldridge, C. L., Bugg, J. M., McDaniel, M. A., & Liu, Y. (2014). The testing effect with authentic educational materials: a cautionary note. Journal of Applied Research in Memory and Cognition, 3 , 214–221.

Young, C. (2016). Mini-tests. Retrieved from https://colleenyoung.wordpress.com/revision-activities/mini-tests/ . Accessed 25 Dec 2017.


Acknowledgements

Not applicable.

Funding

YW and MAS were partially supported by a grant from The IDEA Center.


Author information

Authors and Affiliations

Department of Psychology, University of Massachusetts Lowell, Lowell, MA, USA

Yana Weinstein

Department of Psychology, Boston College, Chestnut Hill, MA, USA

Christopher R. Madan

School of Psychology, University of Nottingham, Nottingham, UK

Department of Psychology, Rhode Island College, Providence, RI, USA

Megan A. Sumeracki


Contributions

YW took the lead on writing the “Spaced practice”, “Interleaving”, and “Elaboration” sections. CRM took the lead on writing the “Concrete examples” and “Dual coding” sections. MAS took the lead on writing the “Retrieval practice” section. All authors edited each others’ sections. All authors were involved in the conception and writing of the manuscript. All authors gave approval of the final version.

Corresponding author

Correspondence to Yana Weinstein .

Ethics declarations

Competing interests

YW and MAS run a blog, “The Learning Scientists Blog”, which is cited in the tutorial review. The blog does not make money. Free resources on the strategies described in this tutorial review are provided on the blog. Occasionally, YW and MAS are invited by schools/school districts to present research findings from cognitive psychology applied to education.

Publisher’s Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Rights and permissions

Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License ( http://creativecommons.org/licenses/by/4.0/ ), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.


About this article

Cite this article.

Weinstein, Y., Madan, C. R., & Sumeracki, M. A. Teaching the science of learning. Cognitive Research: Principles and Implications, 3, 2 (2018). https://doi.org/10.1186/s41235-017-0087-y


Received : 20 December 2016

Accepted : 02 December 2017

Published : 24 January 2018

DOI : https://doi.org/10.1186/s41235-017-0087-y


Setting Clear Learning Targets to Guide Instruction for All Students

Research output: Contribution to journal › Article › peer-review

As more states adopt the Common Core State Standards, teachers face new challenges. Teachers must unpack these standards and develop explicit learning targets to make these rigorous standards accessible to their students. This task can be especially challenging for special educators who must balance standards-based education with individualized instruction. This paper describes the value of clarifying learning targets, defines different types of targets, and provides strategies and resources to assist practitioners in unpacking standards to develop learning targets. In addition, the authors suggest how the standards can be used to drive individualized education program planning to maximize learning for students with disabilities and increase the likelihood of student success.

Keywords

  • Common Core State Standards
  • IEP goals
  • learning targets
  • objectives
  • planning

DOI: 10.1177/1053451214536042

Konrad, M., Keesey, S., Ressa, V. A., Alexeeff, M., Chan, P. E., & Peters, M. T. (2014). Setting clear learning targets to guide instruction for all students. Intervention in School and Clinic. https://doi.org/10.1177/1053451214536042 (ISSN 1053-4512; © Hammill Institute on Disabilities 2014)

Research Supporting Proficiency-Based Learning: Learning Standards

When educators talk about “proficiency-based learning,” they are referring to a variety of diverse instructional practices—many of which have been used by the world’s best schools and teachers for decades—and to organizational structures that support or facilitate the application of those practices in schools. Proficiency-based learning may take different forms from school to school—there is no universal model or approach—and educators may use some or all of the beliefs and practices of proficiency-based learning identified by the Great Schools Partnership.

On this page, we have provided a selection of statements and references that support and describe one foundational feature of proficiency-based learning systems, Learning Standards. In a few cases, we have also included additional explanation to help readers better understand the statements or the studies from which they were excerpted. The list is not intended to be either comprehensive or authoritative—our goal is merely to give school leaders and educators a brief, accessible introduction to available research.

“Clear learning goals help students learn better (Seidel, Rimmele, & Prenzel, 2005). When students understand exactly what they’re supposed to learn and what their work will look like when they learn it, they’re better able to monitor and adjust their work, select effective strategies, and connect current work to prior learning (Black, Harrison, Lee, Marshall, & Wiliam, 2004; Moss, Brookhart, & Long, 2011). This point has been demonstrated for all age groups, from young children (Higgins, Harris, & Kuehn, 1994) through high school students (Ross & Starling, 2008), and in a variety of subjects—in writing (Andrade, Du, & Mycek, 2010); mathematics (Ross, Hogaboam-Gray, & Rolheiser, 2002); and social studies (Ross & Starling, 2008). The important point here is that students should have clear goals. If the teacher is the only one who understands where learning should be headed, students are flying blind. In all the studies we just cited, students were taught the learning goals and criteria for success, and that’s what made the difference.” —Brookhart, S. M., & Moss, C. M. (2014, October). Learning targets on parade. Educational Leadership, 72(7), 28–33.

“The most effective teaching and the most meaningful student learning happen when teachers design the right learning target for today’s lesson and use it along with their students to aim for and assess understanding. Our theory grew from continuous research with educators focused on raising student achievement through formative assessment processes (e.g., Brookhart, Moss, & Long, 2009, 2010, 2011; Moss, Brookhart, & Long 2011a, 2011b, 2011c). What we discovered and continue to refine is an understanding of the central role that learning targets play in schools. Learning targets are student-friendly descriptions—via words, pictures, actions, or some combination of the three—of what you intend students to learn or accomplish in a given lesson. When shared meaningfully, they become actual targets that students can see and direct their efforts toward. They also serve as targets for the adults in the school whose responsibility it is to plan, monitor, assess, and improve the quality of learning opportunities to raise the achievement of all students.” —Brookhart, S. M., & Moss, C. M. (2012). Learning targets: Helping students aim for understanding in today’s lesson . Alexandria, VA: Association for Supervision and Curriculum Development.

“Setting objectives and providing feedback work in tandem. Teachers need to identify success criteria for learning objectives so students know when they have achieved those objectives (Hattie & Timperley, 2007). Similarly, feedback should be provided for tasks that are related to the learning objectives; this way, students understand the purpose of the work they are asked to do, build a coherent understanding of a content domain, and develop high levels of skill in a specific domain.” —Dean, C. B., Hubbell, E. R., Pitler, H., & Stone, B. (2012). Classroom instruction that works: Research-based strategies for increasing student achievement . Alexandria, VA: Association for Supervision and Curriculum Development.

“Setting objectives is the process of establishing a direction to guide learning (Pintrich & Schunk, 2002). When teachers communicate objectives for student learning, students can see more easily the connections between what they are doing in class and what they are supposed to learn. They can gauge their starting point in relation to the learning objectives and determine what they need to pay attention to and where they might need help from the teacher or others. This clarity helps decrease anxiety about their ability to succeed. In addition, students build intrinsic motivation when they set personal learning objectives.” —Dean, C. B., Hubbell, E. R., Pitler, H., & Stone, B. (2012). Classroom instruction that works: Research-based strategies for increasing student achievement . Alexandria, VA: Association for Supervision and Curriculum Development.

“Providing specific feedback that helps students know how to improve their performance requires teachers to identify and understand the learning objectives (Stiggins, 2001). If teachers do not understand the learning objectives, it is difficult for them to provide students with information about what good performance or high-quality work looks like…. Effective feedback should also provide information about how close students come to meeting the criterion and details about what they need to do to attain the next level of performance (Shirbagi, 2007; Shute, 2008). Teachers can provide elaboration in the form of worked examples, questions, or prompts—such as ‘What’s this problem all about?’—or as information about the correct answer (Kramarski & Zeichner, 2001; Shute, 2008).” —Dean, C. B., Hubbell, E. R., Pitler, H., & Stone, B. (2012). Classroom instruction that works: Research-based strategies for increasing student achievement . Alexandria, VA: Association for Supervision and Curriculum Development.

“[Learning targets] convey to students the destination for the lesson—what to learn, how deeply to learn it, and exactly how to demonstrate their new learning. In our estimation (Moss & Brookhart, 2009) and that of others (Seidel, Rimmele, & Prenzel, 2005; Stiggins, Arter, Chappuis, & Chappuis, 2009), the intention for the lesson is one of the most important things students should learn. Without a precise description of where they are headed, too many students are ‘flying blind’…. A shared learning target unpacks a ‘lesson-sized’ amount of learning—the precise ‘chunk’ of the particular content students are to master (Leahy, Lyon, Thompson, & Wiliam, 2005). It describes exactly how well we expect them to learn it and how we will ask them to demonstrate that learning…. Instructional objectives are about instruction, derived from content standards, written in teacher language, and used to guide teaching during a lesson or across a series of lessons. They are not designed for students but for the teacher. A shared learning target, on the other hand, frames the lesson from the students’ point of view. A shared learning target helps students grasp the lesson’s purpose—why it is crucial to learn this chunk of information, on this day, and in this way.” —Brookhart, S. M., Long, B. A., & Moss, C. M. (2011, March). Know your learning target. Educational Leadership, 68(6), 66–69.

“Students who have clear pictures of the learning target and of the criteria for success are likely to also have a sense of what they can and should do to make their work measure up to those criteria and that goal. Clear learning targets direct both teachers and students toward specific goals. Students can meet goals only if they are actually working toward them, and they can’t work toward them until they understand what they are. Once students understand where they are headed, they are more likely to feel that they can be successful, can actually reach the goal. Students’ belief that they can be successful at a particular task or assignment is called self-efficacy (Bandura, 1997). Students who have self-efficacy are more likely to persist in their work and especially more likely to persist in the face of challenge (Pajares, 1996).” —Moss, C. M., & Brookhart, S. M. (2009). Advancing formative assessment in every classroom: A guide for instructional leaders . Alexandria, VA: Association for Supervision and Curriculum Development.

“Although they have different labels (standards, learning results, expectations, and outcomes), every state has standards that are determined at the state level. These standards are published, and all teachers, parents, and students should be familiar with them. This is essential because the research shows that ‘it is very difficult for students to achieve a learning goal unless they understand that goal and can assess what they need to do to reach it’ (Black et al., 2003).” —O’Connor, K. (2009, January). Reforming grading practices in secondary schools. Principal’s Research Review, 4(1), 1–7.

“Arguably the most basic issue a teacher can consider is what he or she will do to establish and communicate learning goals, track student progress, and celebrate success. In effect, this design question includes three distinct but highly related elements: (1) setting and communicating learning goals, (2) tracking student progress, and (3) celebrating success. These elements have a fairly straightforward relationship. Establishing and communicating learning goals are the starting place. After all, for learning to be effective, clear targets in terms of information and skill must be established…. For example, the Lipsey and Wilson (1993) study synthesizes findings from 204 reports. Consider the average effect size of 0.55 from those 204 effect sizes. This means that in the 204 studies they examined, the average score in classes where goal setting was effectively employed was 0.55 standard deviations greater than the average score in classes where goal setting was not employed…. For the Lipsey and Wilson effect size of 0.55, the percentile gain is 21. This means that the average score in classes where goal setting was effectively employed would be 21 percentile points higher than the average score in classes where goal setting was not employed.” —Marzano, R. J., & Brown, J. L. (2007). The art and science of teaching: A comprehensive framework for effective instruction . Alexandria, VA: Association for Supervision and Curriculum Development.
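The effect-size-to-percentile conversion in the Marzano and Brown excerpt assumes normally distributed scores: a student at the control-group mean sits at the 50th percentile, and a shift of d standard deviations moves the treatment-group mean to the percentile Φ(d). A minimal sketch of that arithmetic (the function name `percentile_gain` is ours, not Marzano's):

```python
from math import erf, sqrt

def percentile_gain(effect_size: float) -> float:
    """Convert a standardized mean difference (Cohen's d) into a
    percentile gain, assuming normally distributed scores: the average
    treatment-group student sits at the standard normal CDF of d in the
    control distribution, so the gain over the 50th percentile is
    100 * Phi(d) - 50."""
    phi = 0.5 * (1 + erf(effect_size / sqrt(2)))  # standard normal CDF
    return 100 * phi - 50

# Lipsey and Wilson's average effect size of 0.55 corresponds to
# roughly a 21-point percentile gain, matching the quoted figure:
print(round(percentile_gain(0.55)))  # 21
```

The same function reproduces the familiar benchmarks: an effect size of 0 yields a gain of 0 percentile points, and larger effects yield diminishing percentile returns as Φ flattens in the tails.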

“Equipped with state standards that had been clarified and specified in the district’s written curriculum, teachers in the higher performing schools carefully studied, further detailed and effectively implemented those standards using tools such as curriculum maps, pacing guides, aligned instructional programs and materials, and formative benchmark assessments. Higher performing schools deeply integrated the state standards into their written curriculum, but viewed state standards as the floor for student achievement, not the target. Educators did not see those standards as a digression from the real curriculum, but as the foundation of the curriculum. With a focus on core learning skills, grade-level and vertical teams continually reviewed and revised the curriculum. That curriculum communicated high expectations for all students, not just the academically advanced.” — Dolejs, C. (2006). Report on key practices and policies of consistently higher performing high schools . Washington, DC: National High School Center. (NOTE: Based on an analysis of 74 average and higher performing high schools in 10 states that identified the fundamental teaching and learning practices shared across higher performing high schools.)

©2024 Great Schools Partnership


J Med Libr Assoc, 103(3), July 2015

Bloom’s taxonomy of cognitive learning objectives

Information professionals who train or instruct others can use Bloom’s taxonomy to write learning objectives that describe the skills and abilities that they desire their learners to master and demonstrate. Bloom’s taxonomy differentiates between cognitive skill levels and calls attention to learning objectives that require higher levels of cognitive skills and, therefore, lead to deeper learning and transfer of knowledge and skills to a greater variety of tasks and contexts.

As learners, we know from experience that some learning tasks are more difficult than others. To take an example from elementary school, knowing our multiplication tables by rote requires a qualitatively different type of thinking than does applying our multiplication skills through solving “word problems.” And in both cases, a teacher could assess our knowledge and skills in either of these types of thinking by asking us to demonstrate those skills in action, in other words, by doing something that is observable and measurable. With the publication in 1956 of the Taxonomy of Educational Objectives: The Classification of Educational Goals , an educational classic was born that powerfully incorporated these concepts to create a classification of cognitive skills [ 1 ]. The classification system came to be called Bloom’s taxonomy, after Benjamin Bloom, one of the editors of the volume, and has had significant and lasting influence on the teaching and learning process at all levels of education to the present day.

Bloom’s taxonomy contains six categories of cognitive skills ranging from lower-order skills that require less cognitive processing to higher-order skills that require deeper learning and a greater degree of cognitive processing ( Figure 1 ). The differentiation into categories of higher-order and lower-order skills arose later; Bloom himself did not use these terms.

[Figure 1: Bloom’s taxonomy]

Knowledge is the foundational cognitive skill and refers to the retention of specific, discrete pieces of information like facts and definitions or methodology, such as the sequence of events in a step-by-step process. Knowledge can be assessed by straightforward means, for example, multiple choice or short-answer questions that require the retrieval or recognition of information, for example, “Name five sources of drug information.” Health professionals must have command of vast amounts of knowledge such as protocols, interactions, and medical terminology that are committed to memory, but simple recall of facts does not provide evidence of comprehension, which is the next higher level in Bloom’s taxonomy.

Learners show comprehension of the meaning of the information that they encounter by paraphrasing it in their own words, classifying items in groups, comparing and contrasting items with other similar entities, or explaining a principle to others. For example, librarians might probe a learner’s understanding of information sources by asking the learner to compare and contrast the information found in those sources. Comprehension requires more cognitive processing than simply remembering information, and learning objectives that address comprehension will help learners begin to incorporate knowledge into their existing cognitive schemas by which they understand the world [ 2 ]. This allows learners to use knowledge, skills, or techniques in new situations through application, the third level of Bloom’s taxonomy. An example of application familiar to medical librarians is the ability to use best practices in the literature searching process, such as using Medical Subject Headings (MeSH) terms for key concepts in a search.

Moving to higher levels of the taxonomy, we next see learning objectives relating to analysis. Here is where the skills that we commonly think of as critical thinking enter. Distinguishing between fact and opinion and identifying the claims upon which an argument is built require analysis, as does breaking down an information need into its component parts in order to identify the most appropriate search terms.

Following analysis is the level of synthesis, which entails creating a novel product in a specific situation. An example of an evidence-based medicine–related task requiring synthesis is formulating a well-built clinical question after analyzing a clinician’s information gaps [ 3 ]. The formulation of a management plan for a specific patient is another clinical task involving synthesis.

Finally, the pinnacle of Bloom’s taxonomy is evaluation, which is also important to critical thinking. When instructors reflect on a teaching session and use learner feedback and assessment results to judge the value of the session, they engage in evaluation. Critically appraising the validity of a clinical study and judging the relevance of its results for application to a specific patient also require evaluative skills. It is important to recognize that higher-level skills in the taxonomy incorporate many lower-level skills as well: to critically appraise the medical literature (evaluation), one must have knowledge and comprehension of various study designs, apply that knowledge to a specific published study to recognize the study design that has been used, and then analyze it to isolate the various components of internal validity such as blinding and randomization. For an illustrative list of learning objectives from evidence-based medicine curricula at US and Canadian medical schools categorized according to Bloom’s taxonomy, refer to the 2014 Journal of the Medical Library Association article by Blanco et al. [ 3 ].

CHANGES IN BLOOM’S TAXONOMY

Based on findings of cognitive science following the original publication, a later revision of the taxonomy changes the nomenclature and order of the cognitive processes in the original version. In this later version, the levels are remember, understand, apply, analyze, evaluate, and create. This reorganization places the skill of synthesis (now called create) rather than evaluation at the highest level of the hierarchy [ 2 ]. Furthermore, this revision adds a new dimension across all six cognitive processes. It specifies the four types of knowledge that might be addressed by a learning activity: factual (terminology and discrete facts); conceptual (categories, theories, principles, and models); procedural (knowledge of a technique, process, or methodology); and metacognitive (including self-assessment ability and knowledge of various learning skills and techniques).
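The revised taxonomy is thus a two-dimensional grid: every learning objective pairs one cognitive process with one knowledge type. A small illustrative sketch of that structure (the `classify` helper and the example objective are ours, not part of the revision itself):

```python
# The six cognitive processes of the revised taxonomy, ordered from
# lower-order to higher-order, and the four knowledge types.
PROCESSES = ("remember", "understand", "apply", "analyze", "evaluate", "create")
KNOWLEDGE = ("factual", "conceptual", "procedural", "metacognitive")

def classify(objective: str, process: str, knowledge: str) -> tuple:
    """Place a learning objective in a cell of the taxonomy grid,
    validating the process and knowledge-type names against the lists."""
    if process not in PROCESSES:
        raise ValueError(f"unknown cognitive process: {process}")
    if knowledge not in KNOWLEDGE:
        raise ValueError(f"unknown knowledge type: {knowledge}")
    return (PROCESSES.index(process), KNOWLEDGE.index(knowledge), objective)

# An objective like "use MeSH terms for key concepts in a search"
# applies procedural knowledge, i.e. cell (2, 2) of the grid:
print(classify("use MeSH terms in a search", "apply", "procedural")[:2])  # (2, 2)
```

Representing objectives this way makes the point in the surrounding text concrete: curricula can be audited by counting how many objectives land in the lower-order rows versus the higher-order ones.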

It is important to note that the most common usage of Bloom’s taxonomy focuses on cognitive learning skills rather than psychomotor or affective skills, two domains that are crucial to the success of health professionals. Examples of psychomotor and affective skills are knot tying in surgery and empathy toward patients, respectively.


The taxonomy is useful in two important ways. First, use of the taxonomy encourages instructors to think of learning objectives in behavioral terms to consider what the learner can do as a result of the instruction. A learning objective written using action verbs will indicate the best method of assessing the skills and knowledge taught. Lists of action verbs that are appropriate for learning objectives at each level of Bloom’s taxonomy are widely available on the Internet [ 4 ]. Second, considering learning goals in light of Bloom’s taxonomy highlights the need for including learning objectives that require higher levels of cognitive skills that lead to deeper learning and transfer of knowledge and skills to a greater variety of tasks and contexts.

Today’s health professions educators wish to develop learners’ skills at the higher levels of Bloom’s taxonomy that require demonstration of deeper cognitive processing such as critical thinking and evaluative judgments, but studies have shown that learning objectives in many training programs and curricula focus overwhelmingly on the lower levels of the taxonomy, knowledge and comprehension [ 3 , 5 ]. This shortcoming must be considered by educators if health professionals are to achieve increasing levels of skill and function.

Nancy E. Adams, MLIS, ude.usp.cmh@smadan , Associate Director and Coordinator of Education and Instruction, George T. Harrell Health Sciences Library, Penn State Hershey Milton S. Hershey Medical Center, 500 University Drive, Mail Code H127, Hershey, PA 17033-0850


Making Learning Targets Clear to Students

When students clearly understand classroom expectations, they’re better able to assess and improve their performance.


The third-grade classroom is busy. Students are in the middle of making slime, and you can sense the level of engagement across the class. The teacher is working very hard to make sure that students are clear on the goals of learning: to understand the relationships between solids, liquids, and gases. The students can see success criteria posted on the wall and examples of successful work spread across the room. Visiting teachers are engaging in random checks of understanding to ensure that students are clear on expectations.

As part of the school’s improvement process, teachers and principals visit classrooms and interview students about their understanding of the learning expectations, their self-assessment of their progress, and the next steps they need to take to meet established goals. One team visited this third-grade classroom, filmed student interviews, and discussed the key findings with the teacher after class.

Here are samples of two interviews of students in the same classroom.

Interview 1

Teacher:  What are you learning?

Student:  We’re learning about slime.

Teacher:  How do you know if you are successful?

Student:  We don’t want the slime to stick to us.

Interview 2

Teacher:  What are you learning?

Student:  We’re learning about solids, liquids, and gases.

Teacher:  How do you know if you are successful?

Student:  We can define and relate each phase. Right now we’re learning about solids, liquids, and gases by creating slime.

The teacher is doing great things. The question is whether all students are following the teacher’s lead. Building student clarity is a constant pursuit: We never arrive, but it’s worth our focus.

Clarity Research Snapshot

When students are clear on the expectations of learning, they tend to learn at roughly double the typical rate. They also have a better chance of accurately assessing their current performance and using feedback well. As John Hattie explains in Visible Learning, self-assessment, feedback, and student clarity yield substantial growth in student learning. Yet implementing this idea is extremely difficult.

Let’s take a look at some challenges and solutions to implementation.

Challenge 1. Novices don’t really understand rubrics:  Experts love rubrics because the bullet points clearly delineate the core expectations that they want students to learn. Novices don’t fully appreciate the bullet points because they simply don’t have a concrete example of what’s written. As such, they scan the tool for items that are familiar to them: activities (such as getting into groups), tasks (such as completing six problems correctly), and contexts (such as exploring bridges). Because of this scanning for the familiar, students begin thinking about group work, assignment completion, and bridges, and come away no clearer about the standards they’re learning.

To focus students on the actual learning, teachers are encouraged to start with examples of great work that meets the expectations of the teacher. The more students see examples of great work in multiple contexts, the better able students are to use rubrics to evaluate their own work as well as to give, receive, and use feedback. This also makes it easier for them to focus on the core content that the teacher is after.

Challenge 2. Telling people the expectations clearly doesn’t mean the expectations are clear to them:  As a school leader, I have found that simply sending a clear message doesn’t ensure that people are clear on my message. Almost every message I send must be followed up with clarifying emails, meetings with parents and/or faculty, and a series of communication check-ins. Developing clarity, in other words, isn’t a one-way transmission: clarity is interactive, built through multiple engagements.

Co-construction is an interactive process that enables students to build clarity about expectations: rather than presenting success criteria to students, the teacher builds the success criteria with them. Here is one example of co-construction:

  • Providing students with work samples that illustrate success
  • Asking students to identify the parts of the work sample that make it successful (i.e., criteria for success)
  • Writing out the criteria for success with students 

Challenge 3. The classroom is mostly hidden from teachers, and students give each other inaccurate feedback:  As Graham Nuthall tells us in The Hidden Lives of Learners, the majority of the classroom experience is hidden from the teacher’s observation. In that hidden classroom, students exchange most of their feedback with peers, and most of that feedback is incorrect. To address this, we should consider having students do the following:

  • Use the fishbowl protocol to share their feedback, and process the accuracy of the feedback to the expectations of the lesson or unit
  • Begin units with discussions on what mastery looks like, and have students evaluate the differences between various levels of mastery

Students’ achievement and attitude improve when they have the tools to own their own learning. As educators, it’s up to us to provide those tools. 


Learning Targets: Helping Kindergartners Focus on Letter Names and Sounds (Voices)


Thoughts on the Article | Andrew J. Stremmel, Voices Executive Editor

In the following article, reading intervention teacher Heidi Heath DeStefano reflects on her struggles to help children learn letter names and sounds. She then sets out to conduct research to determine if establishing learning targets motivates and focuses the children in learning letters and sounds of the alphabet. Like all teachers who seek to be effective, she wants to help children learn, be engaged, and take ownership of their learning. Her study is guided by a theory of action, a comprehensive review of the literature, and a strategic process of instruction and assessment that is monitored, recorded, and reflected upon systematically and intentionally.

The first and arguably most important step in the teacher research process is identifying a problem of meaning (something that puzzles or perplexes). The problems and dilemmas of teaching are all around: as Heidi demonstrates, problems arise from an interest in finding out more about one’s teaching, or about how children learn or attempt to understand; while trying to develop some new teaching method; or when faced with needing to make a decision and act upon it. Heidi’s research represents a distinctive way of knowing about teaching and learning that alters not only her teaching practice and understanding of children’s learning (in this case an ability to internalize learning goals) but also her notion of what it means to be a learner in her own classroom. Here, she reflects deeply on the value of learning goals and gains insights into the need to teach more intentionally and responsively.

Teacher research, at its best, is a good story that begins with something we wonder about—something that perplexes or astonishes us—and ends with something we want to make known. It always leaves us with the questions What did I learn? and What can I change or do differently? These questions and conclusions are what make teacher research a vehicle for change, transformation, and improvement.

As a reading intervention teacher in an elementary school, one of my responsibilities is to help students fill in their learning gaps. Struggling readers often come to me with negative attitudes toward reading, so I need to present learning activities in creative ways in order to capture their interest.

One teaching assignment I had was to help a group of 11 very active kindergartners learn letter names and a subset of letter sounds. After several weeks of engaging the children in a variety of letter-acquisition activities—most of which I drew from my school district’s literacy programs—the assessment information revealed that my instruction was not making a significant impact on student learning. More alarmingly, I noticed the children were not motivated to learn the letters or to answer accurately when I assessed them. For example, they were using the magnetic letters as toys, and I was spending more time on behavior management than on teaching. I found that activities like sorting letters and tracing letters in sand helped keep students from rough-and-tumble play, but these activities were not bringing them any closer to the instructional objective.

I realized I had not effectively conveyed the purpose of our intervention sessions. Consequently, I needed to find a way to focus their efforts and encourage them to take ownership of their learning.

In professional development sessions, my school district had begun training teachers on providing students with clear statements of intended learning outcomes when we introduced new units or activities. These outcomes provide a shared direction: different students may take different paths (building on their unique prior experiences), but they all end up mastering the same essential knowledge and skills. I challenged myself to adapt the idea of a learning outcome in a format these kindergartners would relate to. I decided to attach the statement, “I CAN NAME ALL OF MY LETTERS AND SOUNDS,” to a colorful 10-ring target and refer to it as a learning target .

In the past, I had goals for my lessons, but I had never tried anything this explicit to engage the kindergartners in thinking about the goals. Since this was a new strategy for me, I approached the work as both a teacher and a researcher. I mapped out a study to determine whether the learning target would motivate the children to shift from rough-and-tumble play to focused learning. I still planned to use playful activities and games, but I hoped that the children’s attention and thinking would shift if I could guide them more clearly on the purpose of our sessions.


Literature review

Prompted by education researcher Robert J. Marzano (2003), who explains that when students can identify what they’re learning, they significantly outscore students who cannot, my research on using learning outcomes began with two sources: Learning Targets: Helping Students Aim for Understanding in Today’s Lesson (Moss & Brookhart 2012) and Visible Learning for Literacy: Implementing the Practices That Work Best to Accelerate Student Learning (Fisher, Frey, & Hattie 2016).

The learning target theory of action in Learning Targets involves using clear, student-friendly descriptions of what students will learn, then incorporating a cycle of instruction, monitoring, and assessing until they reach their goals (Moss & Brookhart 2012). Learners are actively engaged with the teacher in this cycle. As an experienced teacher, the reasoning behind setting targets resonated with me: “no matter what we decide students need to learn, not much will happen until students understand what they are supposed to learn during a lesson and set their sights on learning it” (Moss, Brookhart, & Long 2011, 1).

Visible Learning for Literacy (Fisher, Frey, & Hattie 2016) offers similar advice, suggesting that teachers need to set learning intentions and provide success criteria. After a comprehensive review of the literature, the authors found that instructional strategies that incorporate key aspects of learning intentions—such as teacher clarity, formative assessment, and prompt feedback—are among the top 10 most effective ways to increase student achievement. Much like the cycle described in the learning target theory of action, the keys with learning intentions are clarity in goals, monitoring progress (i.e., formative assessment, which may be teacher-made quizzes or other means of determining students’ recent learning), and feedback that is timely enough to guide the teacher and learners. This feedback loop allows teachers to use information from formative assessments to adjust instruction and reteach material so students meet their learning objectives before their final evaluations. (Or, in the case of my kindergartners, before they became frustrated and started to worry that they wouldn’t be able to learn to read.) Focusing on how feedback supports learners’ self-efficacy, the authors discuss “three internal questions that drive learners”:

  • Where am I going? What are my goals?
  • How am I going there? What progress is being made toward the goal?
  • Where to next? What activities need to be undertaken next to make better progress? (Fisher, Frey, & Hattie 2016, 101)

These ideas about clear goals and actionable feedback are not new. Four decades ago, Benjamin Bloom found a similar instructional cycle to be quite effective. While searching for methods of group instruction as effective as one-on-one tutoring, he showed that mastery learning (an approach that combines typical large-group instruction with frequent assessments, timely feedback, and customized supports) significantly improved students’ learning compared with conventional methods (in which assessments are only given at the end of a unit to provide grades). Although mastery learning conducted with the whole class was not as effective as one-on-one tutoring, it was nearly as effective and far more efficient (Bloom 1984).

Adapting this body of research for the kindergarten intervention group, I hoped to dramatically increase the impact of my instruction on the students’ learning.

Studying the effect of learning targets with kindergartners

The existing literature convinced me that learning targets are important, but much of the research has been done with older children. I was interested in answering the following questions:

  • Would the kindergartners in my reading intervention class understand the concept of a learning target?
  • Would using the learning target help motivate the students to take ownership of their learning and aid them in acquiring the letters and sounds of the alphabet?
  • If using a learning target were successful for the students’ alphabet learning, could it be used for other concepts?


Setting and participants

The elementary school where I conducted this action research study is located in a small, predominantly white and middle-class community in Wyoming. In recent years, we have enrolled one or two Hispanic children who speak Spanish and English in each classroom, but who often need English vocabulary development (DeStefano 2017). The demographics of our school are gradually changing: the school has long served children whose parents are well-educated professionals and is now serving a larger proportion of children from working-class families.

The learning environment for this study was my small intervention room, containing a table, magnetic whiteboards propped around the room, and a few storage closets on which I often post word lists.

The children became more enthusiastic as they took ownership of their learning.

My teaching schedule included three kindergarten reading intervention groups with three to five students in each group, for a total of 11 students (four girls and seven boys). Two of the children spoke Spanish and English (neither qualified for dual language learner services); the remaining nine children only spoke English. All 11 children were identified by their regular classroom teachers as needing extra instruction to master letter names and sounds.

My study of the learning target approach (which I explain in detail in the following sections) spanned nine weeks. I met with each of the small groups for 15 to 20 minutes a day, four to five days per week.

Data collection

Throughout the study, I collected three types of data:

  • A letter-naming assessment that consisted of a random assortment of all uppercase and lowercase letters, plus typewritten g and a. This resulted in 54 letters that the children had to be able to name to meet this part of the target.
  • A sound identification assessment that consisted of a random assortment of all uppercase and lowercase letters, plus typewritten g and a, resulting in 54 letters for which the children had to provide sounds. I was teaching and assessing one sound for each consonant (e.g., /k/ for C and c) and the short vowel sounds; therefore, the children needed to learn just one sound for each letter during this intervention.
  • My observations of the students’ reactions to the intervention.

I met with each group of children and showed them the colorful 10-ring target posted on the wall of our room. We discussed that the purpose of a target is to help us practice getting a bull’s-eye. Then we discussed that the bull’s-eye in our classroom was to name all of the letters and their main sounds. (I also briefly explained that they would learn more sounds in first grade and that in kindergarten, we were learning the most important sounds that each letter makes.) I modeled what meeting the target looks like, pointing to each letter in a randomly sorted list and quickly saying the name and associated sound. I told the students that the purpose in this learning space was to aim for our target: “I CAN NAME ALL OF MY LETTERS AND SOUNDS.” At the beginning and end of each session, we reviewed this target by saying it aloud as I pointed to the words.

After my initial explanation of the learning target, I assessed each child on the letter names and sounds to determine which ones each child already knew. On average, students named 41.5 out of 54 letter names and 32 out of 54 letter sounds.

Next, I met with each student individually and we discussed which letter names and sounds they needed to acquire in order to reach the learning target. I created customized little booklets containing the letters to be learned by each child and discussed the specific activities that would help them learn those letters.

Every week, I administered the same assessments to each available student individually and discussed his or her progress. During this feedback discussion, I prompted the children to think about the questions that drive learners, from “Where am I going?” to “What activities need to be undertaken next to make better progress?” (Fisher, Frey, & Hattie 2016, 101). (While I met with children individually, the others in the group engaged in the learning activities I provided, focusing their efforts on the letters they personally needed to acquire.) To help motivate the children, I was sure to find some progress in each assessment. After the assessments, we celebrated as a group. As I stated what a child had accomplished that week, the child got to throw a small pillow at our target. When a student correctly provided a letter’s sound and name, there was an added celebration: we dramatically ripped that page out of the booklet.

As a standard teaching procedure, I took notice of the students’ comments and observable behaviors. I recorded my observations on a daily basis as short, anecdotal notes. I maintained a record for each child so that I could efficiently track progress. Using my observations and weekly assessments, I created the little booklets that contained the specific letters each child needed to learn and planned the activities that would help them learn those letters. My notes also helped me be more specific with my feedback, and I could more easily relate the children’s efforts to their success.

Within the first week of implementation, it appeared that the learning target would be highly effective. I felt rising elation as I saw the children becoming more enthusiastic and motivated as they took ownership of their learning. Assessment scores also steadily improved.

One child immediately understood his purpose for being with me and met his goal early on in the intervention. Discussing his strengths and needs with his teacher, we agreed that he was overall doing well in learning to read and therefore not in need of additional interventions. I had him return to his classroom, victorious. Another boy noticed and asked, “Hey! Where is he going?” In response, I asked, “Why are we all working so hard in my room?” He repeated what we had been saying every day when we entered the class: “We are here because we need to learn to name all of our letters and sounds.” I told him, “He hit the bull’s-eye, so he is going back to his classroom to work on other things.” After thinking for a moment, he set his mouth in a firm line and went to work on his activity with obvious resolve. It took a while, but all of the children finally understood that everyone in our school is required to know their letters and sounds—“even the principal,” as we often said while making the connection to learning to read and write (Duke & Mesmer 2018).

Finding 1: Every child acquired knowledge of the letter names and sounds

Based on the weekly assessment scores, by the end of my nine-week study, every child could name all the letters (uppercase and lowercase, plus typewritten g and a) and virtually all of the target sounds—just one student lacked one letter sound. For both letter names and sounds, the kindergartners’ knowledge increased sharply in the first four to five weeks and most of the children had reached the target by week six.

Finding 2: The children understood the connection between effort and outcome and were able to focus their efforts

While it may seem redundant to ask students to state the learning target before each session, to review it and their progress at the end of each session, and to individually assess the children and discuss the target every week, I found that repetition was required for the children to internalize the purpose of their presence in the intervention room. Although I was surprised by how explicit and repetitive I needed to be, I also saw it working. As understanding of their learning target deepened, the students engaged in far less unfocused play and became determined to reach their learning target. During our discussions, I repeatedly reinforced the connection between their efforts and their successes. Students were able to understand the connection between their work and their expanding knowledge of the alphabet. My observations are consistent with prior research indicating that teachers who focused on formative assessment saw an increase in students’ learning, self-efficacy, and self-regulation (Brookhart, Moss, & Long 2008).

Finding 3: The children understood that using learning targets can be a lifelong strategy and were able to identify personal goals

The children found the novelty of throwing a small pillow at the target to celebrate progress highly motivating. Just as I was careful to note the specific progress that each child made, I also explicitly tied this sought-after event to the cause and effect relationship of goal-setting, practice, feedback, and success.

Once this understanding was solid, we discussed that anyone, at any age, can have a learning target. I told them about my personal learning target of speaking another language fluently, and I shared a few of the activities I was doing to reach it. Each student identified a personal learning target and we brainstormed ways they could work toward them. For example, one child wanted to learn how to make his own sack lunch and another wanted to ride a bike without training wheels.

After students reached their alphabet learning target (which virtually all of the children did by week six), I introduced a new target for reading: “I CAN MAKE WHAT I SAY MATCH WHAT I SEE.” This new target would help students focus on each letter so they could accurately decode the text in their books. The concept of a learning target was easier for students to understand with the second target, and once again it seemed to enhance their learning.

When the school principal asked me if I had any celebrations that could be presented at the monthly school board meeting, I could barely contain my enthusiasm! The students and their families stood in front of the school board a month later, and we proudly presented their achievements along with their personal learning targets. Before the learning target intervention, I would have been worried about bringing my rambunctious kindergartners to a school board meeting, but the pride they felt for their achievements was strong, and they conducted themselves with dignity.

Finding 4: Learning targets benefited families, older students, and me

Using learning targets to help motivate and focus the children was well received by families from the beginning. Throughout the study, seeing their children’s excitement and growing confidence as they worked toward and reached a goal was positive for the whole family.

Because of this success, I also began using learning targets with my older students, and it worked well with all of my reading intervention groups. I believe the targets helped me as much as the children because creating the learning targets aided in focusing my instruction. I gave myself the professional learning target of providing a learning target for each new concept to be taught!

Implications

A great deal of evidence indicates that different approaches are needed for different students and situations and that teachers must adjust instruction to meet the needs of their students (Fisher, Frey, & Hattie 2016). When first working with these energetic kindergartners, I began to think they might not be developmentally ready to learn letter names and sounds. It was an eye-opening teaching experience to witness how, given a clear and motivating learning target, unfocused children became strategic learners in a positive and fun way. In addition, I am hopeful that establishing the process of setting a goal, working strategically toward that goal, and celebrating success at the beginning of students’ academic careers will set them up to repeat this pattern of success in the future. After all, research shows that self-efficacy as a learner (“the confidence or strength of belief that we have in ourselves that we can make our learning happen”) has a big impact on achievement (Fisher, Frey, & Hattie 2016, 24).

The increases I observed in my kindergartners’ self-regulation and self-efficacy have been reported in other studies (Brookhart, Moss, & Long 2008; Moss, Brookhart, & Long 2011). I’ve also learned I’m not the first teacher to use colorful 10-ring targets to represent learning objectives (Moss, Brookhart, & Long 2011). In my experience, goal-setting is a concept most teachers are familiar with, but in the struggle to focus my active kindergarten students, I had forgotten how powerful it can be. In light of this study, I now have effective methods for helping young children internalize learning goals.

This experience prompted me to do some deep reflecting on my teaching practice. I had been presenting activities without conveying strong learning goals. Had the students’ initial lack of focus mirrored my own? Had my teaching become a bit automatic over the years? Revisiting the effectiveness of learning goals and experiencing again how powerful they can be for children was an important learning touchstone for me. This, along with furthering my understanding of learning targets by studying research on the topic, helped return me to teaching intentionally and with greater responsiveness to children’s needs.

One unexpected result of this study is that I learned to be more culturally sensitive. During an activity in which students were using magnetic letters to match initial sounds to picture cards, a Hispanic student had placed the letter d in front of a picture of money. She stood, frustrated, staring at the remaining picture of a dog and then at the letter m in her hand. I asked her, “What is the first sound you hear in money?” Her eyes widened in comprehension, then narrowed as she drew herself up and proudly said, “WE call it dinero!” Because she spoke English, I had been instructing her without considering her cultural lens. She was magnificent, and I was humbled.

Using colorful 10-ring targets can make learning goals more concrete for young learners. My Wyoming students—many of whom come from hunting backgrounds and all of whom are familiar with hunting—were highly motivated by this particular visual. For students with different backgrounds, other visuals may resonate more strongly. However, the learning target theory of action is a research-based tool teachers can consider when designing instruction to have greater impact on student self-regulation, self-efficacy, and learning.

Bloom, B.S. 1984. “The Search for Methods of Group Instruction as Effective as One-to-One Tutoring.” Educational Leadership 41 (8): 4–17.

Brookhart, S., C.M. Moss, & B. Long. 2008. “Formative Assessment That Empowers.” Educational Leadership 66 (3): 52–57.

DeStefano, H.H. 2017. “Supporting Struggling Readers: Using Vocabulary Cartoons During Transition Times.” Young Children 72 (5): 64–72.

Duke, N.K., & H.A.E. Mesmer. 2018. “Phonics Faux Pas: Avoiding Instructional Missteps in Teaching Letter-Sound Relationships.” American Educator 42 (4): 12–16. https://www.aft.org/ae/winter2018-2019/duke_mesmer .

Fisher, D., N. Frey, & J. Hattie. 2016. Visible Learning for Literacy: Implementing the Practices That Work Best to Accelerate Student Learning . Thousand Oaks, CA: Corwin.

Marzano, R.J. 2003. What Works in Schools: Translating Research Into Action . Alexandria, VA: ASCD.

Moss, C.M., S.M. Brookhart, & B.A. Long. 2011. “Knowing Your Learning Target.” Educational Leadership 68 (6): 66–69.                                     

Moss, C.M., & S.M. Brookhart. 2012. Learning Targets: Helping Students Aim for Understanding in Today’s Lesson . Alexandria, VA: ASCD.

Heidi Heath DeStefano, MEd, has taught kindergarten and first grade and is currently a Reading Recovery teacher at Pronghorn Elementary, in Gillette, Wyoming. [email protected]


Vol. 74, No. 5


Call for High School Projects

Machine learning for social impact.

The Thirty-Eighth Annual Conference on Neural Information Processing Systems (NeurIPS 2024) is an interdisciplinary conference that brings together researchers in machine learning, neuroscience, statistics, optimization, computer vision, natural language processing, life sciences, natural sciences, social sciences, and other adjacent fields. 

This year, we invite high school students to submit research papers on the topic of machine learning for social impact.  A subset of finalists will be selected to present their projects virtually and will have their work spotlighted on the NeurIPS homepage.  In addition, the leading authors of up to five winning projects will be invited to attend an award ceremony at NeurIPS 2024 in Vancouver.  

Each submission must describe independent work wholly performed by the high school student authors.  We expect each submission to highlight either demonstrated positive social impact or the potential for positive social impact using machine learning. Application areas may include but are not limited to the following:

  • Agriculture
  • Climate change
  • Homelessness
  • Food security
  • Mental health
  • Water quality

Authors will be asked to confirm that their submissions accord with the NeurIPS code of conduct and the NeurIPS code of ethics.

Submission deadline: All submissions must be made by June 27th, 4pm EDT. The system will close after this time, and no further submissions will be possible.

We are using OpenReview to manage submissions. Papers should be submitted here. Submission will open June 1st. Submissions under review will be visible only to their assigned program committee. We will not be soliciting comments from the general public during the reviewing process. Anyone who plans to submit a paper as an author or a co-author will need to create (or update) their OpenReview profile by the full paper submission deadline.

Formatting instructions: All submissions must be in PDF format. Submissions are limited to four content pages, including all figures and tables; additional pages containing only references are allowed. You must format your submission using the NeurIPS 2024 LaTeX style file using the “preprint” option for non-anonymous submission. The maximum file size for submissions is 50MB. Submissions that violate the NeurIPS style (e.g., by decreasing margins or font sizes) or page limits may be rejected without further review. Papers may be rejected without consideration of their merits if they fail to meet the submission requirements, as described in this document.
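As an illustration, a minimal preamble for a non-anonymous submission using the NeurIPS style file might look like the sketch below; the title and author are placeholders, and the style file name assumes the conventional `neurips_2024.sty` distributed with the call.

```latex
\documentclass{article}

% "preprint" produces a non-anonymous layout, as this track requires;
% omitting all options would produce the anonymous submission format instead.
\usepackage[preprint]{neurips_2024}

\title{Machine Learning for Water Quality Monitoring}  % placeholder title
\author{A.~Student \\ Example High School}              % placeholder author

\begin{document}
\maketitle

\begin{abstract}
  Abstract text goes here.
\end{abstract}

% Up to four content pages, including figures and tables;
% pages containing only references do not count toward the limit.

\end{document}
```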

Mentorship and collaboration:  The submitted research can be a component of a larger research endeavor involving external collaborators, but the submission should describe only the authors’ contributions.  The authors can also have external mentors but must disclose the nature of the mentorship.  At the time of submission, the authors will be asked to describe the involvement of any mentors or external collaborators and to distinguish mentor and collaborator contributions from those of the authors.  In addition, the authors may (optionally) include an acknowledgements section acknowledging the contributions of others following the content sections of the submission. The acknowledgements section will not count toward the submission page limit.

Proof of high school attendance: Submitting authors will also be asked to upload a signed letter, on school letterhead, from each author’s high school confirming that the author was enrolled in high school during the 2023-2024 academic year.

Supplementary artifacts:  In their submission, authors may link to supplementary artifacts including videos, working demonstrations, digital posters, websites, or source code.  Please do not link to additional text.  All such supplementary material should be wholly created by the authors and should directly support the submission content. 

Review process:   Each submission will be reviewed by anonymous referees. The authors, however, should not be anonymous. No written feedback will be provided to the authors.  

Use of Large Language Models (LLMs): We welcome authors to use any tool that is suitable for preparing high-quality papers and research. However, we ask authors to keep in mind two important criteria. First, we expect papers to fully describe their methodology.  Any tool that is important to that methodology, including the use of LLMs, should be described also. For example, authors should mention tools (including LLMs) that were used for data processing or filtering, visualization, facilitating or running experiments, or proving theorems. It may also be advisable to describe the use of LLMs in implementing the method (if this corresponds to an important, original, or non-standard component of the approach). Second, authors are responsible for the entire content of the paper, including all text and figures, so while authors are welcome to use any tool they wish for writing the paper, they must ensure that all text is correct and original.

Dual submissions:  Submissions that are substantially similar to papers that the authors have previously published or submitted in parallel to other peer-reviewed venues with proceedings or journals may not be submitted to NeurIPS. Papers previously presented at workshops or science fairs are permitted, so long as they did not appear in a conference proceedings (e.g., CVPRW proceedings), a journal, or a book.  However, submissions will not be published in formal proceedings, so work submitted to this call may be published elsewhere in the future. Plagiarism is prohibited by the NeurIPS Code of Conduct .

Paper checklist: In order to improve the rigor and transparency of research submitted to and published at NeurIPS, authors are required to complete a paper checklist . The paper checklist is intended to help authors reflect on a wide variety of issues relating to responsible machine learning research, including reproducibility, transparency, research ethics, and societal impact. The checklist does not count towards the page limit and will be entered in OpenReview.

Contact:   [email protected]



Title: Differentially Private Federated Learning: A Systematic Review

Abstract: In recent years, privacy and security concerns in machine learning have promoted trusted federated learning to the forefront of research. Differential privacy has emerged as the de facto standard for privacy protection in federated learning due to its rigorous mathematical foundation and provable guarantees. Despite extensive research on algorithms that incorporate differential privacy within federated learning, there remains an evident deficiency in systematic reviews that categorize and synthesize these studies. Our work presents a systematic overview of differentially private federated learning. Existing taxonomies have not adequately considered the objects and levels of privacy protection provided by various differential privacy models in federated learning. To rectify this gap, we propose a new taxonomy of differentially private federated learning based on the definitions and guarantees of various differential privacy models and federated scenarios. Our classification allows for a clear delineation of the protected objects across various differential privacy models and their respective neighborhood levels within federated learning environments. Furthermore, we explore the applications of differential privacy in federated learning scenarios. Our work provides valuable insights into privacy-preserving federated learning and suggests practical directions for future research.
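The core mechanism that most differentially private federated learning algorithms share can be sketched in a few lines: each client's model update is clipped to bound its sensitivity, and calibrated Gaussian noise is added before aggregation. This is an illustrative sketch of that general pattern (in the spirit of DP-FedAvg), not an algorithm from the paper; the function name and parameter choices are assumptions for the example.

```python
import numpy as np

def dp_federated_average(client_updates, clip_norm=1.0, noise_multiplier=1.1, seed=0):
    """Aggregate client model updates with Gaussian-mechanism noise.

    Clipping each update to L2 norm `clip_norm` bounds the contribution of
    any single client (the sensitivity); the noise standard deviation is then
    proportional to that bound, which is what yields a differential privacy
    guarantee for the aggregate.
    """
    rng = np.random.default_rng(seed)
    clipped = []
    for update in client_updates:
        norm = np.linalg.norm(update)
        # Scale down any update whose L2 norm exceeds the clipping bound.
        clipped.append(update * min(1.0, clip_norm / max(norm, 1e-12)))
    total = np.sum(clipped, axis=0)
    # Noise scale is calibrated to the per-client sensitivity (clip_norm).
    noise = rng.normal(0.0, noise_multiplier * clip_norm, size=total.shape)
    return (total + noise) / len(client_updates)
```

Which object this protects (a client's whole dataset vs. a single record) and at what neighborhood level is exactly the kind of distinction the proposed taxonomy categorizes.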


  • Open access
  • Published: 14 May 2024

Protocol for a scoping review study on learning plan use in undergraduate medical education

  • Anna Romanova   ORCID: orcid.org/0000-0003-1118-1604 1 ,
  • Claire Touchie 1 ,
  • Sydney Ruller 2 ,
  • Victoria Cole 3 &
  • Susan Humphrey-Murto 4  

Systematic Reviews volume 13, Article number: 131 (2024)


The current paradigm of competency-based medical education and learner-centredness requires learners to take an active role in their training. However, deliberate and planned continual assessment and performance improvement is hindered by the fragmented nature of many medical training programs. Attempts to bridge this continuity gap between supervision and feedback through learner handover have been controversial. Learning plans are an alternate educational tool that helps trainees identify their learning needs and facilitate longitudinal assessment by providing supervisors with a roadmap of their goals. Informed by self-regulated learning theory, learning plans may be the answer to track trainees’ progress along their learning trajectory. The purpose of this study is to summarise the literature regarding learning plan use specifically in undergraduate medical education and explore the student’s role in all stages of learning plan development and implementation.

Following Arksey and O’Malley’s framework, a scoping review will be conducted to explore the use of learning plans in undergraduate medical education. Literature searches will be conducted using multiple databases by a librarian with expertise in scoping reviews. Through an iterative process, inclusion and exclusion criteria will be developed and a data extraction form refined. Data will be analysed using quantitative and qualitative content analyses.

By summarising the literature on learning plan use in undergraduate medical education, this study aims to better understand how to support self-regulated learning in undergraduate medical education. The results from this project will inform future scholarly work in competency-based medical education at the undergraduate level and have implications for improving feedback and supporting learners at all levels of competence.

Scoping review registration:

Open Science Framework osf.io/wvzbx.


Competency-based medical education (CBME) has transformed the approach to medical education to focus on demonstration of acquired competencies rather than time-based completion of rotations [ 1 ]. As a result, undergraduate and graduate medical training programs worldwide have adopted outcomes-based assessments in the form of entrustable professional activities (EPAs) comprised of competencies to be met [ 2 ]. These assessments are completed longitudinally by multiple different evaluators to generate an overall impression of a learner’s competency.

In CBME, trainees will progress along their learning trajectory at individual speeds and some may excel while others struggle to achieve the required knowledge, skills or attitudes. Therefore, deliberate and planned continual assessment and performance improvement is required. However, due to the fragmented nature of many medical training programs where learners rotate through different rotations and work with many supervisors, longitudinal observation is similarly fragmented. This makes it difficult to determine where trainees are on their learning trajectories and can affect the quality of feedback provided to them, which is a known major influencer of academic achievement [ 3 ]. As a result, struggling learners may not be identified until late in their training and the growth of high-performing learners may be stifled [ 4 , 5 , 6 ].

Bridging this continuity gap between supervision and feedback through some form of learner handover or forward feeding has been debated since the 1970s and continues to this day [ 5 , 7 , 8 , 9 , 10 , 11 ]. The goal of learner handover is to improve trainee assessment and feedback by sharing a trainee's performance and learning needs between supervisors or across rotations. However, several concerns have been raised about this approach, including that it could inappropriately bias subsequent assessments of the learner's abilities [ 9 , 11 , 12 ]. A different approach to keeping track of trainees' learning goals and progress along their learning trajectories is therefore required. Learning plans (LPs) informed by self-regulated learning (SRL) theory may be the answer.

SRL has been defined as a cyclical process in which learners actively control their thoughts, actions and motivation to achieve their goals [ 13 ]. Several models of SRL exist, but all entail that the trainee is responsible for setting, planning, executing, monitoring and reflecting on their learning goals [ 13 ]. According to Zimmerman's SRL model, this process occurs in three stages: a forethought phase before an activity, a performance phase during an activity and a self-reflection phase after an activity [ 13 ]. Since each trainee leads their own learning process and has an individual trajectory towards competence, this theory relates well to the CBME paradigm, which is grounded in learner-centredness [ 1 ]. However, medical students and residents have difficulty identifying their own learning goals and therefore need guidance to engage effectively in SRL [ 14 , 15 , 16 , 17 ]. Motivation has also emerged as a key component of SRL, and numerous studies have explored factors that influence student engagement in learning [ 18 , 19 ]. In addition to having their basic psychological needs of autonomy, relatedness and competence met, trainees have been shown to engage more in their learning when they perceive it as relevant through meaningful learning activities [ 19 ].

LPs are a well-known tool across many educational fields, including CBME, that can provide trainees with meaningful learning activities because they help them direct their own learning goals in a guided fashion [ 20 ]. Also known as personal learning plans, learning contracts, personal action plans, personal development plans, and learning goals, LPs are documents that outline the learner's roadmap for achieving their learning goals. They require learners to identify what they need to learn and why, how they will learn it, and how they will know when they are finished, as well as to define a timeframe for goal achievement and assess the impact of their learning [ 20 ]. In so doing, LPs give more autonomy to the learner and facilitate objective, targeted feedback from supervisors. This approach has been described as "most congruent with the assumptions we make about adults as learners" [ 21 ].

LP use has been explored across various clinical settings and at all levels of medical education; however, most of the experience lies in postgraduate medical education [ 22 ]. Medical students are a unique learner population whose learning needs appear well suited to LP use for two main reasons. First, their education is often divided between classroom and clinical settings. During clinical training, students need to be more independent in setting learning goals to meet desired competencies, as their education is no longer outlined for them in detail by the medical school curriculum [ 23 ]. SRL in the workplace also differs from SRL in the classroom because of the additional complexities of clinical care, which can impact students' ability to self-regulate their learning [ 24 ]. Second, although most medical trainees have difficulty with goal setting, medical students in particular need more guidance than residents because they have relatively little experience upon which to build within the SRL framework [ 25 ]. LPs can therefore provide much-needed structure to their learning but should be guided by an experienced tutor to be effective [ 15 , 24 ].

LPs fit well within the learner-centred educational framework of CBME by helping trainees identify their learning needs and by providing supervisors with a roadmap of trainees' goals that facilitates longitudinal assessment. In so doing, they can address current issues with learner handover as well as the identification and remediation of struggling learners. Moreover, they have the potential to help trainees develop lifelong skills in continuing professional development after graduation, which many medical licensing bodies require.

An initial search of the JBI Database, Cochrane Database, MEDLINE (PubMed) and Google Scholar conducted in July–August 2022 revealed a paucity of research on LP use in undergraduate medical education (UGME). A related systematic review by van Houten–Schat et al. [ 24 ] on SRL in the clinical setting identified three interventions used by medical students and residents in SRL—coaching, LPs and supportive tools. However, only a couple of the included studies looked specifically at medical students’ use of LPs, so this remains an area in need of more exploration. A scoping review would provide an excellent starting point to map the body of literature on this topic.

The objective of this scoping review will therefore be to explore LP use in UGME. In doing so, it will address a gap in knowledge and help determine additional areas for research.

This study will follow Arksey and O'Malley's [ 26 ] five-step framework for scoping review methodology. It will not include the optional sixth step, stakeholder consultation, because relevant stakeholders are intentionally included in the research team (a member of UGME leadership, a medical student and a first-year resident).

Step 1—Identifying the research question

The overarching purpose of this study is to “explore the use of LPs in UGME”. More specifically we seek to achieve the following:

Summarise the literature regarding the use of LPs in UGME (including context, students targeted, frameworks used)

Explore the role of the student in all stages of the LP development and implementation

Determine existing research gaps

Step 2—Identifying relevant studies

An experienced health sciences librarian (VC) will conduct all searches and develop the initial search strategy. The preliminary search strategy is shown in Appendix A (see Additional file 2). Articles will be included if they meet the following criteria [ 27 ]:

Participants

Medical students enrolled at a medical school at the undergraduate level.

Concept

Any use of LPs by medical students. LPs are defined as a document, usually presented in a table format, that outlines the learner's roadmap to achieve their learning goals [ 20 ].

Context

Any stage of UGME in any geographic setting.

Types of evidence sources

We will search existing published and unpublished (grey) literature. This may include research studies, reviews, or expert opinion pieces.

Search strategy

With the assistance of an experienced librarian (VC), a pilot search will be conducted to inform the final search strategy. A search will be conducted in the following electronic databases: MEDLINE, Embase, Education Source, APA PsycInfo and Web of Science. The search terms will be developed in consultation with the research team and librarian. The search strategy will proceed according to the JBI Manual for Evidence Synthesis three-step search strategy for reviews [ 27 ]. First, we will conduct a limited search in two appropriate online databases and analyse text words from the title, abstracts and index terms of relevant papers. Next, we will conduct a second search using all identified key words in all databases. Third, we will review reference lists of all included studies to identify further relevant studies to include in the review. We will also contact the authors of relevant papers for further information if required. This will be an iterative process as the research team becomes more familiar with the literature and will be guided by the librarian. Any modifications to the search strategy as it evolves will be described in the scoping review report. As a measure of rigour, the search strategy will be peer-reviewed by another librarian using the PRESS checklist [ 28 ]. No language or date limits will be applied.

Step 3—Study selection

Screening will follow a two-step approach: titles and abstracts will be screened first, and records that meet the inclusion criteria will then undergo full-text review. All screening will be done by two members of the research team, with any disagreements resolved by an independent third member of the team. Based on the preliminary inclusion criteria, the whole research team will first pilot the screening process by reviewing a random sample of 25 titles/abstracts. The search strategy, eligibility criteria and study objectives will be refined in an iterative process; we anticipate several meetings, as the topic is not well described in the literature. A flowchart of the review process will be generated, and any modifications to the study selection process will be described in the scoping review report. Papers will be excluded if the full text is not available. Search results will be managed using Covidence software.
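The dual-reviewer screening rule described above can be sketched as a small function. This is an illustrative sketch only: the protocol manages the actual workflow in Covidence, and the function name and decision labels here are hypothetical.

```python
def resolve_screening(reviewer_a, reviewer_b, arbiter=None):
    """Resolve a dual-reviewer screening decision ("include" / "exclude").

    If both reviewers agree, that decision stands; otherwise the record is
    flagged as a conflict for an independent third team member to settle.
    """
    if reviewer_a == reviewer_b:
        return reviewer_a
    if arbiter is None:
        return "conflict"  # awaiting third-reviewer adjudication
    return arbiter


# Agreement passes through; disagreement defers to the third reviewer.
print(resolve_screening("include", "include"))                     # include
print(resolve_screening("include", "exclude"))                     # conflict
print(resolve_screening("include", "exclude", arbiter="exclude"))  # exclude
```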

Step 4—Charting the data

A preliminary data extraction tool is shown in Appendix B (see Additional file 3 ). Data will be extracted into Excel and will include demographic information and specific details about the population, concept, context, study methods and outcomes as they relate to the scoping review objectives. The whole research team will pilot the data extraction tool on ten articles selected for full-text review. Through an iterative process, the final data extraction form will be refined. Subsequently, two members of the team will independently extract data from all articles included for full-text review using this tool. Charting disagreements will be resolved by the principal and senior investigators. Google Translate will be used for any included articles that are not in the English language.

Step 5—Collating, summarising and reporting the results

Quantitative and qualitative analyses will be used to summarise the results. Quantitative analysis will capture descriptive statistics with details about the population, concept, context, study methods and outcomes being examined in this scoping review. Qualitative content analysis will enable interpretation of text data through the systematic classification process of coding and identifying themes and patterns [ 29 ]. Several team meetings will be held to review potential themes to ensure an accurate representation of the data. The PRISMA Extension for Scoping Reviews (PRISMA-ScR) will be used to guide the reporting of review findings [ 30 ]. Data will be presented in tables and/or diagrams as applicable. A descriptive summary will explain the presented results and how they relate to the scoping review objectives.
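The quantitative side of Step 5 reduces to descriptive counts over the charted fields. A minimal sketch, assuming hypothetical field names and values (the actual charting will use the Excel-based extraction form described in Step 4):

```python
from collections import Counter

# Hypothetical charted records, one per included article.
records = [
    {"context": "clinical clerkship", "study_method": "qualitative"},
    {"context": "preclinical", "study_method": "survey"},
    {"context": "clinical clerkship", "study_method": "survey"},
]

# Descriptive statistics: frequency of each context and study method.
by_context = Counter(r["context"] for r in records)
by_method = Counter(r["study_method"] for r in records)

print(by_context.most_common())  # [('clinical clerkship', 2), ('preclinical', 1)]
print(by_method.most_common())   # [('survey', 2), ('qualitative', 1)]
```

In practice the same tallies would be produced per scoping-review objective (population, concept, context, methods, outcomes) and presented in the summary tables.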

By summarising the literature on LP use in UGME, this study will contribute to a better understanding of how to support SRL amongst medical students. The results from this project will also inform future scholarly work in CBME at the undergraduate level and have implications for improving feedback as well as supporting learners at all levels of competence. In doing so, this study may have practical applications by informing learning plan incorporation into CBME-based curricula.

We do not anticipate any practical or operational issues at this time, as we have assembled a team with the necessary expertise and tools to complete this project.

Availability of data and materials

All data generated or analysed during this study will be included in the published scoping review article.

Abbreviations

  • CBME: Competency-based medical education
  • EPA: Entrustable professional activity
  • LP: Learning plan
  • SRL: Self-regulated learning
  • UGME: Undergraduate medical education

Frank JR, Snell LS, Cate OT, et al. Competency-based medical education: theory to practice. Med Teach. 2010;32(8):638–45.


Shorey S, Lau TC, Lau ST, Ang E. Entrustable professional activities in health care education: a scoping review. Med Educ. 2019;53(8):766–77.

Hattie J, Timperley H. The power of feedback. Rev Educ Res. 2007;77(1):81–112.


Dudek NL, Marks MB, Regehr G. Failure to fail: the perspectives of clinical supervisors. Acad Med. 2005;80(10 Suppl):S84–7.

Warm EJ, Englander R, Pereira A, Barach P. Improving learner handovers in medical education. Acad Med. 2017;92(7):927–31.

Spooner M, Duane C, Uygur J, et al. Self-regulatory learning theory as a lens on how undergraduate and postgraduate learners respond to feedback: a BEME scoping review : BEME Guide No. 66. Med Teach. 2022;44(1):3–18.

Frellsen SL, Baker EA, Papp KK, Durning SJ. Medical school policies regarding struggling medical students during the internal medicine clerkships: results of a National Survey. Acad Med. 2008;83(9):876–81.

Humphrey-Murto S, LeBlanc A, Touchie C, et al. The influence of prior performance information on ratings of current performance and implications for learner handover: a scoping review. Acad Med. 2019;94(7):1050–7.

Morgan HK, Mejicano GC, Skochelak S, et al. A responsible educational handover: improving communication to improve learning. Acad Med. 2020;95(2):194–9.

Dory V, Danoff D, Plotnick LH, et al. Does educational handover influence subsequent assessment? Acad Med. 2021;96(1):118–25.

Humphrey-Murto S, Lingard L, Varpio L, et al. Learner handover: who is it really for? Acad Med. 2021;96(4):592–8.

Shaw T, Wood TJ, Touchie T, Pugh D, Humphrey-Murto S. How biased are you? The effect of prior performance information on attending physician ratings and implications for learner handover. Adv Health Sci Educ Theory Pract. 2021;26(1):199–214.

Artino AR, Brydges R, Gruppen LD. Chapter 14: Self-regulated learning in health professional education: theoretical perspectives and research methods. In: Cleland J, Duning SJ, editors. Researching Medical Education. 1st ed. John Wiley & Sons; 2015. p. 155–66.


Cleland J, Arnold R, Chesser A. Failing finals is often a surprise for the student but not the teacher: identifying difficulties and supporting students with academic difficulties. Med Teach. 2005;27(6):504–8.

Reed S, Lockspeiser TM, Burke A, et al. Practical suggestions for the creation and use of meaningful learning goals in graduate medical education. Acad Pediatr. 2016;16(1):20–4.

Wolff M, Stojan J, Cranford J, et al. The impact of informed self-assessment on the development of medical students’ learning goals. Med Teach. 2018;40(3):296–301.

Sawatsky AP, Halvorsen AJ, Daniels PR, et al. Characteristics and quality of rotation-specific resident learning goals: a prospective study. Med Educ Online. 2020;25(1):1714198.


Pintrich PR. Chapter 14: The role of goal orientation in self-regulated learning. In: Boekaerts M, Pintrich PR, Zeidner M, editors. Handbook of self-regulation. 1st ed. Academic Press; 2000. p. 451–502.

Kassab SE, El-Sayed W, Hamdy H. Student engagement in undergraduate medical education: a scoping review. Med Educ. 2022;56(7):703–15.

Challis M. AMEE medical education guide No. 19: Personal learning plans. Med Teach. 2000;22(3):225–36.

Knowles MS. Using learning contracts. 1st ed. San Francisco: Jossey-Bass; 1986.

Parsell G, Bligh J. Contract learning, clinical learning and clinicians. Postgrad Med J. 1996;72(847):284–9.


Teunissen PW, Scheele F, Scherpbier AJJA, et al. How residents learn: qualitative evidence for the pivotal role of clinical activities. Med Educ. 2007;41(8):763–70.


van Houten-Schat MA, Berkhout JJ, van Dijk N, Endedijk MD, Jaarsma ADC, Diemers AD. Self-regulated learning in the clinical context: a systematic review. Med Educ. 2018;52(10):1008–15.

Taylor DCM, Hamdy H. Adult learning theories: Implications for learning and teaching in medical education: AMEE Guide No. 83. Med Teach. 2013;35(11):e1561–72.

Arksey H, O’Malley L. Scoping studies: towards a methodological framework. Int J Soc Res Methodol. 2005;8(1):19–32.

Peters MDJ, Godfrey C, McInerney P, Munn Z, Tricco AC, Khalil H. Chapter 11: Scoping reviews. In: Aromataris E, Munn Z, editors. JBI Manual for Evidence Synthesis. JBI; 2020. https://synthesismanual.jbi.global . Accessed 30 Aug 2022.

McGowan J, Sampson M, Salzwedel DM, Cogo E, Foerster V, Lefebvre C. PRESS Peer Review of Electronic Search Strategies: 2015 Guideline Statement. J Clin Epidemiol. 2016;75:40–6.

Hsieh HF, Shannon SE. Three approaches to qualitative content analysis. Qual Health Res. 2005;15(9):1277–88.

Tricco AC, Lillie E, Zarin W, et al. PRISMA extension for scoping reviews (PRISMA-ScR): checklist and explanation. Ann Intern Med. 2018;169(7):467–73.

Venables M, Larocque A, Sikora L, Archibald D, Grudniewicz A. Understanding indigenous health education and exploring indigenous anti-racism approaches in undergraduate medical education: a scoping review protocol. OSF; 2022. https://osf.io/umwgr/ . Accessed 26 Oct 2022.


Acknowledgements

Not applicable.

Funding

This study will be supported through grants from the Department of Medicine at the Ottawa Hospital and the University of Ottawa. The funding bodies had no role in the study design and will not have any role in the collection, analysis and interpretation of data or writing of the manuscript.

Author information

Authors and Affiliations

The Ottawa Hospital – General Campus, 501 Smyth Rd, PO Box 209, Ottawa, ON, K1H 8L6, Canada

Anna Romanova & Claire Touchie

The Ottawa Hospital Research Institute, Ottawa, Canada

Sydney Ruller

The University of Ottawa, Ottawa, Canada

Victoria Cole

The Ottawa Hospital – Riverside Campus, Ottawa, Canada

Susan Humphrey-Murto


Contributions

AR designed and drafted the protocol. CT and SH contributed to the refinement of the research question, study methods and editing of the manuscript. VC designed the initial search strategy. All authors reviewed the manuscript for final approval. The review guarantors are CT and SH. The corresponding author is AR.

Authors’ information

AR is a clinician teacher and Assistant Professor with the Division of General Internal Medicine at the University of Ottawa. She is also the Associate Director for the internal medicine clerkship rotation at the General campus of the Ottawa Hospital.

CT is a Professor of Medicine with the Divisions of General Internal Medicine and Infectious Diseases at the University of Ottawa. She is also a member of the UGME Competence Committee at the University of Ottawa and an advisor for the development of a new school of medicine at Toronto Metropolitan University.

SH is an Associate Professor with the Department of Medicine at the University of Ottawa and holds a Tier 2 Research Chair in Medical Education. She is also the Interim Director for the Research Support Unit within the Department of Innovation in Medical Education at the University of Ottawa.

CT and SH have extensive experience with medical education research and have numerous publications in this field.

SR is a Research Assistant with the Division of General Internal Medicine at the Ottawa Hospital Research Institute.

VC is a Health Sciences Research Librarian at the University of Ottawa.

SR and VC have extensive experience in systematic and scoping reviews.

Corresponding author

Correspondence to Anna Romanova .

Ethics declarations

Ethics approval and consent to participate

Consent for publication

Competing interests

The authors declare that they have no competing interests.

Additional information

Publisher's note.

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Supplementary Information

Additional file 1: PRISMA-P 2015 checklist.

Additional file 2: Appendix A. Preliminary search strategy [ 31 ].

Additional file 3: Appendix B. Preliminary data extraction tool.

Rights and permissions.

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/ . The Creative Commons Public Domain Dedication waiver ( http://creativecommons.org/publicdomain/zero/1.0/ ) applies to the data made available in this article, unless otherwise stated in a credit line to the data.


About this article

Cite this article.

Romanova, A., Touchie, C., Ruller, S. et al. Protocol for a scoping review study on learning plan use in undergraduate medical education. Syst Rev 13 , 131 (2024). https://doi.org/10.1186/s13643-024-02553-w

Download citation

Received : 29 November 2022

Accepted : 03 May 2024

Published : 14 May 2024

DOI : https://doi.org/10.1186/s13643-024-02553-w


Systematic Reviews

ISSN: 2046-4053


  • Published: 08 May 2024

Accurate structure prediction of biomolecular interactions with AlphaFold 3

  • Josh Abramson   ORCID: orcid.org/0009-0000-3496-6952 1   na1 ,
  • Jonas Adler   ORCID: orcid.org/0000-0001-9928-3407 1   na1 ,
  • Jack Dunger 1   na1 ,
  • Richard Evans   ORCID: orcid.org/0000-0003-4675-8469 1   na1 ,
  • Tim Green   ORCID: orcid.org/0000-0002-3227-1505 1   na1 ,
  • Alexander Pritzel   ORCID: orcid.org/0000-0002-4233-9040 1   na1 ,
  • Olaf Ronneberger   ORCID: orcid.org/0000-0002-4266-1515 1   na1 ,
  • Lindsay Willmore   ORCID: orcid.org/0000-0003-4314-0778 1   na1 ,
  • Andrew J. Ballard   ORCID: orcid.org/0000-0003-4956-5304 1 ,
  • Joshua Bambrick   ORCID: orcid.org/0009-0003-3908-0722 2 ,
  • Sebastian W. Bodenstein 1 ,
  • David A. Evans 1 ,
  • Chia-Chun Hung   ORCID: orcid.org/0000-0002-5264-9165 2 ,
  • Michael O’Neill 1 ,
  • David Reiman   ORCID: orcid.org/0000-0002-1605-7197 1 ,
  • Kathryn Tunyasuvunakool   ORCID: orcid.org/0000-0002-8594-1074 1 ,
  • Zachary Wu   ORCID: orcid.org/0000-0003-2429-9812 1 ,
  • Akvilė Žemgulytė 1 ,
  • Eirini Arvaniti 3 ,
  • Charles Beattie   ORCID: orcid.org/0000-0003-1840-054X 3 ,
  • Ottavia Bertolli   ORCID: orcid.org/0000-0001-8578-3216 3 ,
  • Alex Bridgland 3 ,
  • Alexey Cherepanov   ORCID: orcid.org/0000-0002-5227-0622 4 ,
  • Miles Congreve 4 ,
  • Alexander I. Cowen-Rivers 3 ,
  • Andrew Cowie   ORCID: orcid.org/0000-0002-4491-1434 3 ,
  • Michael Figurnov   ORCID: orcid.org/0000-0003-1386-8741 3 ,
  • Fabian B. Fuchs 3 ,
  • Hannah Gladman 3 ,
  • Rishub Jain 3 ,
  • Yousuf A. Khan   ORCID: orcid.org/0000-0003-0201-2796 3 ,
  • Caroline M. R. Low 4 ,
  • Kuba Perlin 3 ,
  • Anna Potapenko 3 ,
  • Pascal Savy 4 ,
  • Sukhdeep Singh 3 ,
  • Adrian Stecula   ORCID: orcid.org/0000-0001-6914-6743 4 ,
  • Ashok Thillaisundaram 3 ,
  • Catherine Tong   ORCID: orcid.org/0000-0001-7570-4801 4 ,
  • Sergei Yakneen   ORCID: orcid.org/0000-0001-7827-9839 4 ,
  • Ellen D. Zhong   ORCID: orcid.org/0000-0001-6345-1907 3 ,
  • Michal Zielinski 3 ,
  • Augustin Žídek   ORCID: orcid.org/0000-0002-0748-9684 3 ,
  • Victor Bapst 1   na2 ,
  • Pushmeet Kohli   ORCID: orcid.org/0000-0002-7466-7997 1   na2 ,
  • Max Jaderberg   ORCID: orcid.org/0000-0002-9033-2695 2   na2 ,
  • Demis Hassabis   ORCID: orcid.org/0000-0003-2812-9917 1 , 2   na2 &
  • John M. Jumper   ORCID: orcid.org/0000-0001-6169-6580 1   na2  

Nature ( 2024 ) Cite this article

261k Accesses

1 Citations

1189 Altmetric

Metrics details

We are providing an unedited version of this manuscript to give early access to its findings. Before final publication, the manuscript will undergo further editing. Please note there may be errors present which affect the content, and all legal disclaimers apply.

  • Drug discovery
  • Machine learning
  • Protein structure predictions
  • Structural biology

The introduction of AlphaFold 2 [1] has spurred a revolution in modelling the structure of proteins and their interactions, enabling a huge range of applications in protein modelling and design [2–6]. In this paper, we describe our AlphaFold 3 model with a substantially updated diffusion-based architecture, which is capable of joint structure prediction of complexes including proteins, nucleic acids, small molecules, ions, and modified residues. The new AlphaFold model demonstrates significantly improved accuracy over many previous specialised tools: far greater accuracy on protein–ligand interactions than state-of-the-art docking tools, much higher accuracy on protein–nucleic acid interactions than nucleic-acid-specific predictors, and significantly higher antibody–antigen prediction accuracy than AlphaFold-Multimer v2.3 [7,8]. Together these results show that high-accuracy modelling across biomolecular space is possible within a single unified deep learning framework.



Author information

These authors contributed equally: Josh Abramson, Jonas Adler, Jack Dunger, Richard Evans, Tim Green, Alexander Pritzel, Olaf Ronneberger, Lindsay Willmore

These authors jointly supervised this work: Victor Bapst, Pushmeet Kohli, Max Jaderberg, Demis Hassabis, John M. Jumper

Authors and Affiliations

Core Contributor, Google DeepMind, London, UK

Josh Abramson, Jonas Adler, Jack Dunger, Richard Evans, Tim Green, Alexander Pritzel, Olaf Ronneberger, Lindsay Willmore, Andrew J. Ballard, Sebastian W. Bodenstein, David A. Evans, Michael O’Neill, David Reiman, Kathryn Tunyasuvunakool, Zachary Wu, Akvilė Žemgulytė, Victor Bapst, Pushmeet Kohli, Demis Hassabis & John M. Jumper

Core Contributor, Isomorphic Labs, London, UK

Joshua Bambrick, Chia-Chun Hung, Max Jaderberg & Demis Hassabis

Google DeepMind, London, UK

Eirini Arvaniti, Charles Beattie, Ottavia Bertolli, Alex Bridgland, Alexander I. Cowen-Rivers, Andrew Cowie, Michael Figurnov, Fabian B. Fuchs, Hannah Gladman, Rishub Jain, Yousuf A. Khan, Kuba Perlin, Anna Potapenko, Sukhdeep Singh, Ashok Thillaisundaram, Ellen D. Zhong, Michal Zielinski & Augustin Žídek

Isomorphic Labs, London, UK

Alexey Cherepanov, Miles Congreve, Caroline M. R. Low, Pascal Savy, Adrian Stecula, Catherine Tong & Sergei Yakneen


Corresponding authors

Correspondence to Max Jaderberg , Demis Hassabis or John M. Jumper .

Supplementary information


This Supplementary Information file contains the following 9 sections: (1) Notation; (2) Data pipeline; (3) Model architecture; (4) Auxiliary heads; (5) Training and inference; (6) Evaluation; (7) Differences to AlphaFold2 and AlphaFold-Multimer; (8) Supplemental Results; and (9) Appendix: CCD Code and PDB ID tables.

Reporting Summary


About this article

Cite this article.

Abramson, J., Adler, J., Dunger, J. et al. Accurate structure prediction of biomolecular interactions with AlphaFold 3. Nature (2024). https://doi.org/10.1038/s41586-024-07487-w

Download citation

Received : 19 December 2023

Accepted : 29 April 2024

Published : 08 May 2024

DOI : https://doi.org/10.1038/s41586-024-07487-w


This article is cited by

Major AlphaFold upgrade offers boost for drug discovery

  • Ewen Callaway

Nature (2024)




  10. Proficiency or Growth? An Exploration of Two Approaches for Writing

    An Exploration of Two Approaches for Writing Student Learning Targets ... These examples, and this paper as whole, aim to make explicit the inherent—but sometimes overlooked—policy decision (i.e., whether to prioritize proficiency or growth) that often accompanies the building of an evaluation system that includes measures of student ...

  11. Writing and Using Learning Objectives

    Abstract. Learning objectives (LOs) are used to communicate the purpose of instruction. Done well, they convey the expectations that the instructor—and by extension, the academic field—has in terms of what students should know and be able to do after completing a course of study. As a result, they help students better understand course ...

  12. PDF Learning Targets: Helping Students Aim for Understanding In ...

    Learning Targets, Student Look‐Fors, Performance of Understanding, Formative Learning Cycle 1. Learning Targets • If students are not using it (aiming for understanding of important concepts and becoming more proficient in targeted skills) they are not engaged in the

  13. How to Write Well-Defined Learning Objectives

    Well-defined learning objectives outline the desired outcome for learners, which will help specify the instructional method. For example, if we want the learners to demonstrate correct intubation procedure in a normal adult 100% of the time, we need the instructional method to involve some sort of hands-on experience so that learners can ...

  14. Day

    The Curriculum Journal publishes research in the field of curriculum studies, including curriculum theory, curriculum-making practices, and issues relating to policy development. This article extends currently reported theory and practice in the use of learning goals or targets with students in secondary and further education.

  15. Full article: Is research-based learning effective? Evidence from a pre

    The effectiveness of research-based learning. Conducting one's own research project involves various cognitive, behavioural, and affective experiences (Lopatto, Citation 2009, 29), which in turn lead to a wide range of benefits associated with RBL. RBL is associated with long-term societal benefits because it can foster scientific careers: Students participating in RBL reported a greater ...

  16. Research Supporting Proficiency-Based Learning: Learning Standards

    A shared learning target, on the other hand, frames the lesson from the students' point of view. A shared learning target helps students grasp the lesson's purpose—why it is crucial to learn this chunk of information, on this day, and in this way." —Brookhart, S. M., Long, B. A., & Moss, C. M. (2011, March). Know your learning target.

  17. Bloom's taxonomy of cognitive learning objectives

    Bloom's taxonomy. Knowledge is the foundational cognitive skill and refers to the retention of specific, discrete pieces of information like facts and definitions or methodology, such as the sequence of events in a step-by-step process. Knowledge can be assessed by straightforward means, for example, multiple choice or short-answer questions ...

  18. How to Make Learning Targets Clear to Students

    Here is one example of co-construction: Providing students with work samples that illustrate success. Asking students to identify the parts of the work sample that make it successful (i.e., criteria for success) Writing out the criteria for success with students. Challenge 3.

  19. PDF Providing Clarity: Using Learning Targets and Success Criteria to

    Putting the standard on the board. Having the students break down the nouns and verbs. Writing the essential questions on the board. Writing the learning targets on the board. Stating the learning targets and standard. As leaders, we are doing the following: Conducting focus walks to see if they are posted.

  20. PDF Series Learning Targets

    Learning targets should reflect different levels of thinking, from the foundational knowledge level (e.g., name, identify, describe) to higher-order skills (e.g., analyze, compare and contrast, and evaluate). Check to see that sets of learning targets ramp up the rigor in the classroom.

  21. Learning Targets: Helping Kindergartners Focus on Letter Names ...

    At the beginning and end of each session, we reviewed this target by saying it aloud as I pointed to the words. After my initial explanation of the learning target, I assessed each child on the letter names and sounds to determine which ones each child already knew. On the average, students named 41.5 out of 54 letter names and 32 out of 54 ...

  22. PDF Take a Closer Look: Learning Targets

    What is a Learning Target. "Learning targets are student-friendly descriptions— via words, pictures, actions, or some combination of the three—of what you intend students to learn or accomplish in a given lesson. When shared meaningfully, they become actual targets students can see and direct here efforts toward.". -Moss & Brookhart.

  23. PDF Student Learning Target

    The content teachers then administered a short research project pre-assessment (one social studies and one science related) that required the identified 222 students to engage in the inquiry-based research process using skills highlighted as priority content. We scored these using a rubric aligned to the priority content developed in

  24. Bloom's taxonomy

    Bloom's taxonomy is a set of three hierarchical models used for classification of educational learning objectives into levels of complexity and specificity. The three lists cover the learning objectives in cognitive, affective and psychomotor domains. The cognitive domain list has been the primary focus of most traditional education and is frequently used to structure curriculum learning ...

  25. 2024 Call for High School Projects

    Call for High School Projects Machine Learning for Social Impact The Thirty-Eighth Annual Conference on Neural Information Processing Systems (NeurIPS 2024) is an interdisciplinary conference that brings together researchers in machine learning, neuroscience, statistics, optimization, computer vision, natural language processing, life sciences, natural sciences, social sciences, and other ...

  26. Research-Based Learning for Pre-Service Teachers' Meta ...

    Keywords: research-based learning, meta-reflexivity, Pre-service teachers, English language teacher education Suggested Citation: Suggested Citation Çomoğlu, Irem and Ceylan, Eda and Dikilitaş, Kenan and Akgün Özpolat, Eda, Research-Based Learning for Pre-Service Teachers' Meta-Reflexivity: Insights from English Language Teacher ...

  27. Differentially Private Federated Learning: A Systematic Review

    In recent years, privacy and security concerns in machine learning have promoted trusted federated learning to the forefront of research. Differential privacy has emerged as the de facto standard for privacy protection in federated learning due to its rigorous mathematical foundation and provable guarantee. Despite extensive research on algorithms that incorporate differential privacy within ...

  28. Protocol for a scoping review study on learning plan use in

    The current paradigm of competency-based medical education and learner-centredness requires learners to take an active role in their training. However, deliberate and planned continual assessment and performance improvement is hindered by the fragmented nature of many medical training programs. Attempts to bridge this continuity gap between supervision and feedback through learner handover ...

  29. Training, validation, and test data sets

    A training data set is a data set of examples used during the learning process and is used to fit the parameters (e.g., weights) of, for example, a classifier.. For classification tasks, a supervised learning algorithm looks at the training data set to determine, or learn, the optimal combinations of variables that will generate a good predictive model.

  30. Accurate structure prediction of biomolecular interactions with

    Abstract. The introduction of AlphaFold 2 1 has spurred a revolution in modelling the structure of proteins and their interactions, enabling a huge range of applications in protein modelling and ...