
Short answer and essay questions

Short answer and essay questions are types of assessment that are commonly used to evaluate a student’s understanding and knowledge.

Tips for creating short answer and essay questions

  • Consider the course learning outcomes. Design questions that appropriately assess the relevant learning objectives.
  • Make sure the content measures knowledge appropriate to the desired learner level and learning goal.
  • When students think critically, they are required to step beyond recalling factual information: incorporating evidence and examples to corroborate and/or dispute the validity of assertions, and comparing and contrasting multiple perspectives on the same argument. E.g., What is __? How could __ be put into practice?
  • Specify how many marks each question is worth.
  • Word limits should be applied within Canvas for discursive or essay-type responses.
  • Check that your language and instructions are appropriate to the student population and discipline of study. Not all students have English as their first language.
  • Ensure the instructions to students are clear, including optional and compulsory questions, the various components of the assessment, and the expected response format. E.g., paragraphs? Sentences? Is bullet point format acceptable, or does it have to be essay format?

Questions that promote deeper thinking

Use “open-ended” questions to provoke divergent thinking.

These questions will allow for a variety of possible answers and encourage students to think at a deeper level. Some generic question stems that trigger or stimulate different forms of critical thinking include:

  • “What are the implications of …?”
  • “Why is it important …?”
  • “What is another way to look at …?”

Use questions that deliberately target the types of higher-order thinking you want to promote or assess

Rather than promoting recall of facts, use questions that allow students to demonstrate their comprehension, application and analysis of the concepts.

Generic question stems that can be used to trigger and assess higher order thinking

Comprehension

Convert information into a form that makes sense to the individual.

  • How would you put __ into your own words?
  • What would be an example of __?

Application

Apply abstract or theoretical principles to concrete, practical situations.

  • How can you make use of __?
  • How could __ be put into practice?

Analysis

Break down or dissect information.

  • What are the most important/significant ideas or elements of __?
  • What assumptions/biases underlie or are hidden within __?

Synthesis

Build up or connect separate pieces of information to form a larger, more coherent pattern.

  • How can these different ideas be grouped together into a more general category?  

Evaluation

Critically judge the validity or aesthetic value of ideas, data, or products.

  • How would you judge the accuracy or validity of __? 
  • How would you evaluate the ethical (moral) implications or consequences of __?

Deduction

Draw conclusions about particular instances that are logically consistent.

  • What specific conclusions can be drawn from this general __? 
  • What particular actions would be consistent with this general __? 

Balanced thinking

Carefully consider arguments/evidence for and against a particular position.

  • What evidence supports and contradicts __? 
  • What are arguments for and counterarguments against __? 

Causal reasoning

Identify cause-effect relationships between different ideas or actions.

  • How would you explain why __ occurred? 
  • How would __ affect or influence __? 

Creative thinking

Generate imaginative ideas or novel approaches to traditional practices.

  • What might be a metaphor or analogy for __? 
  • What might happen if __? (hypothetical reasoning)

Redesign test questions for open-book format

It is important to redesign assessment tasks so that they authentically assess the intended learning outcomes in a way that suits this mode of assessment. Replacing questions that simply recall facts with questions that require higher-level cognitive skills (for example, analysing and explaining why and how an answer was reached) also creates opportunities for reflective questions based on students’ own experiences.

Quick, focused problem-solving and analysis tasks, completed with restricted access to a limited set of allocated resources, should require students to demonstrate a thoughtful, research-based approach and/or the ability to negotiate an understanding of more complex problems, sometimes in an open-book format.

Layers can be added to the problem/process, and the inclusion of a reflective aspect can help achieve these goals, whether administered in an oral test or written examination format.

Example 1: Alternative format, focusing on explanation

Original multiple choice question:

The strongest and most resilient connective tissue is
A. adipose tissue
B. reticular connective tissue
C. fibrocartilage
D. elastic connective tissue
E. areolar connective tissue

Redesigned as short answer questions:

  • What is the strongest and most resilient connective tissue? (Answer: fibrocartilage)
  • Why is fibrocartilage tissue the strongest and most resilient connective tissue?
  • Comparing adipose tissue and fibrocartilage tissue, discuss reasons for the relative strength and resilience of these connective tissues.

Example 2: Analytic style multiple choice question or short answer

Original multiple choice question:

In a study aimed at identifying factors associated with risk of developing dementia, a group of elderly people with a formal diagnosis of dementia were compared with a group of elderly people without dementia for a range of factors related to health, lifestyle and occupation. The patients with dementia were matched with those without dementia by age, sex and area of residence. Data collection was by interview. For the patients with severe dementia, where the dementia interfered with data collection, surrogates (usually a family member) assisted with data collection.

This study is a
A. Case-control study
B. Cohort study
C. Cross-sectional survey
D. Field study

Redesigned as short answer questions (each using the same scenario):

  • What type of study is this? (Answer: a case-control study)
  • With the scenario reworded to begin “In a case-control study aimed at identifying factors associated with risk of developing dementia…”: What makes this a case-control study?
  • What type of study is this? Why do you think this?

Acknowledgement: Deakin University; original multiple choice questions by Jennifer Lindley, Monash University.

Setting word limits for discursive or essay-type responses

Try to set a fair and reasonable word count for long answer and essay questions. Some points to consider are:

  • Weighting – what is the relative weighting of the question in the assessment?
  • Level of study – what is the suggested word count for written assessments in your discipline, at that level of study?
  • Skills development – what skills are you requiring students to demonstrate? Higher-level cognitive skills, such as evaluation and analysis, tend to require a higher word count to adequately respond to the assessment prompt.
  • Referencing – will you require students to reference their sources? This takes time, which should be accounted for in the total time to complete the assessment. References generally do not count towards the word count. Include clear marking guidelines for referencing in rubrics, including criteria for skills such as critical thinking and evaluation of information.

Communicate your expectations around word count to students in your assessment instructions, including how you will deal with submissions that are outside the word count.

E.g., Write 600-800 words evaluating the key concepts of XYZ. Text exceeding the word limit will not be marked.

Let students know how to check the word count in their submission:

  • Show word count in Inspera – question type: Essay.

Canvas shows the word count at the bottom of the text editor.

Multi-choice questions

Write MCQs that assess reasoning, rather than recall.

Page updated 16/03/2023 (added open-book section)


Center for Teaching

Student assessment in teaching and learning.


Much scholarship has focused on the importance of student assessment in teaching and learning in higher education. Student assessment is a critical aspect of the teaching and learning process. Whether teaching at the undergraduate or graduate level, it is important for instructors to strategically evaluate the effectiveness of their teaching by measuring the extent to which students in the classroom are learning the course material.

This teaching guide addresses the following: 1) defines student assessment and why it is important, 2) identifies the forms and purposes of student assessment in the teaching and learning process, 3) discusses methods in student assessment, and 4) makes an important distinction between assessment and grading.

What is Student Assessment and Why is it Important?

In their handbook for course-based review and assessment, Martha L. A. Stassen et al. define assessment as “the systematic collection and analysis of information to improve student learning.” (Stassen et al., 2001, pg. 5) This definition captures the essential task of student assessment in the teaching and learning process. Student assessment enables instructors to measure the effectiveness of their teaching by linking student performance to specific learning objectives. As a result, teachers are able to institutionalize effective teaching choices and revise ineffective ones in their pedagogy.

The measurement of student learning through assessment is important because it provides useful feedback to both instructors and students about the extent to which students are successfully meeting course learning objectives. In their book Understanding by Design, Grant Wiggins and Jay McTighe offer a framework for classroom instruction—what they call “Backward Design”—that emphasizes the critical role of assessment. For Wiggins and McTighe, assessment enables instructors to determine the metrics of measurement for student understanding of and proficiency in course learning objectives. They argue that assessment provides the evidence needed to document and validate that meaningful learning has occurred in the classroom. Assessment is so vital in their pedagogical design that their approach “encourages teachers and curriculum planners to first ‘think like an assessor’ before designing specific units and lessons, and thus to consider up front how they will determine if students have attained the desired understandings.” (Wiggins and McTighe, 2005, pg. 18)

For more on Wiggins and McTighe’s “Backward Design” model, see our Understanding by Design teaching guide.

Student assessment also buttresses critical reflective teaching. Stephen Brookfield, in Becoming a Critically Reflective Teacher, contends that critical reflection on one’s teaching is an essential part of developing as an educator and enhancing the learning experience of students. Critical reflection on one’s teaching has a multitude of benefits for instructors, including the development of rationale for teaching practices. According to Brookfield, “A critically reflective teacher is much better placed to communicate to colleagues and students (as well as to herself) the rationale behind her practice. She works from a position of informed commitment.” (Brookfield, 1995, pg. 17) Student assessment, then, not only enables teachers to measure the effectiveness of their teaching, but is also useful in developing the rationale for pedagogical choices in the classroom.

Forms and Purposes of Student Assessment

There are generally two forms of student assessment that are most frequently discussed in the scholarship of teaching and learning. The first, summative assessment , is assessment that is implemented at the end of the course of study. Its primary purpose is to produce a measure that “sums up” student learning. Summative assessment is comprehensive in nature and is fundamentally concerned with learning outcomes. While summative assessment is often useful to provide information about patterns of student achievement, it does so without providing the opportunity for students to reflect on and demonstrate growth in identified areas for improvement and does not provide an avenue for the instructor to modify teaching strategy during the teaching and learning process. (Maki, 2002) Examples of summative assessment include comprehensive final exams or papers.

The second form, formative assessment , involves the evaluation of student learning over the course of time. Its fundamental purpose is to estimate students’ level of achievement in order to enhance student learning during the learning process. By interpreting students’ performance through formative assessment and sharing the results with them, instructors help students to “understand their strengths and weaknesses and to reflect on how they need to improve over the course of their remaining studies.” (Maki, 2002, pg. 11) Pat Hutchings refers to this form of assessment as assessment behind outcomes. She states, “the promise of assessment—mandated or otherwise—is improved student learning, and improvement requires attention not only to final results but also to how results occur. Assessment behind outcomes means looking more carefully at the process and conditions that lead to the learning we care about…” (Hutchings, 1992, pg. 6, original emphasis). Formative assessment includes course work—where students receive feedback that identifies strengths, weaknesses, and other things to keep in mind for future assignments—discussions between instructors and students, and end-of-unit examinations that provide an opportunity for students to identify important areas for necessary growth and development for themselves. (Brown and Knight, 1994)

It is important to recognize that both summative and formative assessment indicate the purpose of assessment, not the method. Different methods of assessment (discussed in the next section) can be either summative or formative in orientation depending on how the instructor implements them. Sally Brown and Peter Knight, in their book Assessing Learners in Higher Education, caution against conflating the purpose of assessment with its method. “Often the mistake is made of assuming that it is the method which is summative or formative, and not the purpose. This, we suggest, is a serious mistake because it turns the assessor’s attention away from the crucial issue of feedback.” (Brown and Knight, 1994, pg. 17) If an instructor believes that a particular method is formative, he or she may fall into the trap of using the method without taking the requisite time to review the implications of the feedback with students. In such cases, the method in question effectively functions as a form of summative assessment despite the instructor’s intentions. (Brown and Knight, 1994) Indeed, feedback and discussion are the critical factors that distinguish formative from summative assessment.

Methods in Student Assessment

Below are a few common methods of assessment identified by Brown and Knight that can be implemented in the classroom. [1] It should be noted that these methods work best when learning objectives have been identified, shared, and clearly articulated to students.

Self-Assessment

The goal of implementing self-assessment in a course is to enable students to develop their own judgement. In self-assessment students are expected to assess both process and product of their learning. While the assessment of the product is often the task of the instructor, implementing student assessment in the classroom encourages students to evaluate their own work as well as the process that led them to the final outcome. Moreover, self-assessment facilitates a sense of ownership of one’s learning and can lead to greater investment by the student. It enables students to develop transferable skills in other areas of learning that involve group projects and teamwork, critical thinking and problem-solving, as well as leadership roles in the teaching and learning process.

Things to Keep in Mind about Self-Assessment

  • Self-assessment is different from self-grading. According to Brown and Knight, “Self-assessment involves the use of evaluative processes in which judgement is involved, where self-grading is the marking of one’s own work against a set of criteria and potential outcomes provided by a third person, usually the [instructor].” (Pg. 52)
  • Students may initially resist attempts to involve them in the assessment process. This is usually due to insecurities or lack of confidence in their ability to objectively evaluate their own work. Brown and Knight note, however, that when students are asked to evaluate their work, frequently student-determined outcomes are very similar to those of instructors, particularly when the criteria and expectations have been made explicit in advance.
  • Methods of self-assessment vary widely and can be as eclectic as the instructor. Common forms of self-assessment include the portfolio, reflection logs, instructor-student interviews, learner diaries and dialog journals, and the like.

Peer Assessment

Peer assessment is a type of collaborative learning technique where students evaluate the work of their peers and have their own work evaluated by peers. This dimension of assessment is significantly grounded in theoretical approaches to active learning and adult learning. Like self-assessment, peer assessment gives learners ownership of learning and focuses on the process of learning, as students are able to “share with one another the experiences that they have undertaken.” (Brown and Knight, 1994, pg. 52)

Things to Keep in Mind about Peer Assessment

  • Students can use peer assessment as a tactic of antagonism or conflict with other students by giving unmerited low evaluations. Conversely, students can also provide overly favorable evaluations of their friends.
  • Students can occasionally apply unsophisticated judgements to their peers. For example, students who are boisterous and loquacious may receive higher grades than those who are quieter, reserved, and shy.
  • Instructors should implement systems of evaluation in order to ensure that peer assessment is valid and based on evidence and identifiable criteria.

Essays

According to Euan S. Henderson, essays make two important contributions to learning and assessment: the development of skills and the cultivation of a learning style. (Henderson, 1980) Essays are a common form of writing assignment in courses and can be either a summative or formative form of assessment depending on how the instructor utilizes them in the classroom.

Things to Keep in Mind about Essays

  • A common challenge of the essay is that students can use them simply to regurgitate rather than analyze and synthesize information to make arguments.
  • Instructors commonly assume that students know how to write essays and can encounter disappointment or frustration when they discover that this is not the case for some students. For this reason, it is important for instructors to make their expectations clear and be prepared to assist or expose students to resources that will enhance their writing skills.

Exams and time-constrained, individual assessment

Examinations have traditionally been viewed as a gold standard of assessment in education, particularly in university settings. Like essays, they can be summative or formative forms of assessment.

Things to Keep in Mind about Exams

  • Exams can make significant demands on students’ factual knowledge and can have the side-effect of encouraging cramming and surface learning. On the other hand, they can also facilitate student demonstration of deep learning if essay questions or topics are appropriately selected. Different formats include in-class tests, open-book, take-home exams and the like.
  • In the process of designing an exam, instructors should consider the following questions. What are the learning objectives that the exam seeks to evaluate? Have students been adequately prepared to meet exam expectations? What are the skills and abilities that students need to do well? How will this exam be utilized to enhance the student learning process?

As Brown and Knight assert, utilizing multiple methods of assessment, including more than one assessor, improves the reliability of data. However, a primary challenge of the multiple-methods approach is how to weigh the scores produced by the different methods. When particular methods produce a higher range of marks than others, instructors can potentially misinterpret their assessment of overall student performance. When multiple methods produce different messages about the same student, instructors should be mindful that the methods are likely assessing different forms of achievement. (Brown and Knight, 1994)

For additional methods of assessment not listed here, see “Assessment on the Page” and “Assessment Off the Page” in Assessing Learners in Higher Education.

In addition to the various methods of assessment listed above, classroom assessment techniques also provide a useful way to evaluate student understanding of course material in the teaching and learning process. For more on these, see our Classroom Assessment Techniques teaching guide.

Assessment is More than Grading

Instructors often conflate assessment with grading. This is a mistake. It must be understood that student assessment is more than just grading. Remember that assessment links student performance to specific learning objectives in order to provide useful information to instructors and students about student achievement. Traditional grading, on the other hand, according to Stassen et al., does not provide the level of detailed and specific information essential to link student performance with improvement. “Because grades don’t tell you about student performance on individual (or specific) learning goals or outcomes, they provide little information on the overall success of your course in helping students to attain the specific and distinct learning objectives of interest.” (Stassen et al., 2001, pg. 6) Instructors, therefore, must always remember that grading is an aspect of student assessment but does not constitute its totality.

Teaching Guides Related to Student Assessment

Below is a list of other CFT teaching guides that supplement this one. They include:

  • Active Learning
  • An Introduction to Lecturing
  • Beyond the Essay: Making Student Thinking Visible in the Humanities
  • Bloom’s Taxonomy
  • How People Learn
  • Syllabus Construction

References and Additional Resources

This teaching guide draws upon a number of resources listed below. These sources should prove useful for instructors seeking to enhance their pedagogy and effectiveness as teachers.

Angelo, Thomas A., and K. Patricia Cross. Classroom Assessment Techniques: A Handbook for College Teachers. 2nd edition. San Francisco: Jossey-Bass, 1993. Print.

Brookfield, Stephen D. Becoming a Critically Reflective Teacher . San Francisco: Jossey-Bass, 1995. Print.

Brown, Sally, and Peter Knight. Assessing Learners in Higher Education. London; Philadelphia: Routledge, 1994. Print.

Cameron, Jeanne et al. “Assessment as Critical Praxis: A Community College Experience.” Teaching Sociology 30.4 (2002): 414–429. JSTOR . Web.

Gibbs, Graham, and Claire Simpson. “Conditions under which Assessment Supports Student Learning.” Learning and Teaching in Higher Education 1 (2004): 3–31.

Henderson, Euan S. “The Essay in Continuous Assessment.” Studies in Higher Education 5.2 (1980): 197–203. Taylor and Francis. Web.

Maki, Peggy L. “Developing an Assessment Plan to Learn about Student Learning.” The Journal of Academic Librarianship 28.1 (2002): 8–13. ScienceDirect. Web.

Sharkey, Stephen, and William S. Johnson. Assessing Undergraduate Learning in Sociology . ASA Teaching Resource Center, 1992. Print.

Wiggins, Grant, and Jay McTighe. Understanding By Design . 2nd Expanded edition. Alexandria, VA: Assn. for Supervision & Curriculum Development, 2005. Print.

[1] Brown and Knight discuss the first two in their chapter entitled “Dimensions of Assessment.” However, because this chapter begins the second part of the book that outlines assessment methods, I have collapsed the two under the category of methods for the purposes of continuity.



Assessing Student Learning: 6 Types of Assessment and How to Use Them


Assessing student learning is a critical component of effective teaching and plays a significant role in fostering academic success. We will explore six different types of assessment and evaluation strategies that can help K-12 educators, school administrators, and educational organizations enhance both student learning experiences and teacher well-being.

We will provide practical guidance on how to implement and utilize various assessment methods, such as formative and summative assessments, diagnostic assessments, performance-based assessments, self-assessments, and peer assessments.

Additionally, we will discuss the importance of implementing standards-based assessments and offer tips for choosing the right assessment strategy for your specific needs.

Importance of Assessing Student Learning

Assessment plays a crucial role in education, as it allows educators to measure students’ understanding, track their progress, and identify areas where intervention may be necessary. Assessing student learning not only helps educators make informed decisions about instruction but also contributes to student success and teacher well-being.

Assessments provide insight into student knowledge, skills, and progress while also highlighting necessary adjustments in instruction. Effective assessment practices ultimately contribute to better educational outcomes and promote a culture of continuous improvement within schools and classrooms.

1. Formative assessment


Formative assessment is a type of assessment that focuses on monitoring student learning during the instructional process. Its primary purpose is to provide ongoing feedback to both teachers and students, helping them identify areas of strength and areas in need of improvement. This type of assessment is typically low-stakes and does not contribute to a student’s final grade.

Some common examples of formative assessments include quizzes, class discussions, exit tickets, and think-pair-share activities. This type of assessment allows educators to track student understanding throughout the instructional period and identify gaps in learning and intervention opportunities.

To effectively use formative assessments in the classroom, teachers should implement them regularly and provide timely feedback to students.

This feedback should be specific and actionable, helping students understand what they need to do to improve their performance. Teachers should use the information gathered from formative assessments to refine their instructional strategies and address any misconceptions or gaps in understanding. Formative assessments play a crucial role in supporting student learning and helping educators make informed decisions about their instructional practices.


2. Summative assessment

Summative assessment evaluates student learning at the end of an instructional unit or period, typically contributing to a student’s final grade. Examples of summative assessments include final exams, end-of-unit tests, standardized tests, and research papers. To effectively use summative assessments in the classroom, it’s important to ensure that they are aligned with the learning objectives and content covered during instruction.

This will help to provide an accurate representation of a student’s understanding and mastery of the material. Providing students with clear expectations and guidelines for the assessment can help reduce anxiety and promote optimal performance.

Summative assessments should be used in conjunction with other assessment types, such as formative assessments, to provide a comprehensive evaluation of student learning and growth.

3. Diagnostic assessment

Diagnostic assessment, often used at the beginning of a new unit or term, helps educators identify students’ prior knowledge, skills, and understanding of a particular topic.

This type of assessment enables teachers to tailor their instruction to meet the specific needs and learning gaps of their students. Examples of diagnostic assessments include pre-tests, entry tickets, and concept maps.

To effectively use diagnostic assessments in the classroom, teachers should analyze the results to identify patterns and trends in student understanding.

This information can be used to create differentiated instruction plans and targeted interventions for students struggling with the upcoming material. Sharing the results with students can help them understand their strengths and areas for improvement, fostering a growth mindset and encouraging active engagement in their learning.

4. Performance-based assessment

Performance-based assessment is a type of evaluation that requires students to demonstrate their knowledge, skills, and abilities through the completion of real-world tasks or activities.

The main purpose of this assessment is to assess students’ ability to apply their learning in authentic, meaningful situations that closely resemble real-life challenges. Examples of performance-based assessments include projects, presentations, portfolios, and hands-on experiments.

These assessments allow students to showcase their understanding and application of concepts in a more active and engaging manner compared to traditional paper-and-pencil tests.

To effectively use performance-based assessments in the classroom, educators should clearly define the task requirements and assessment criteria, providing students with guidelines and expectations for their work. Teachers should also offer support and feedback throughout the process, allowing students to revise and improve their performance.

Incorporating opportunities for peer feedback and self-reflection can further enhance the learning process and help students develop essential skills such as collaboration, communication, and critical thinking.

5. Self-assessment

Self-assessment is a valuable tool for encouraging students to engage in reflection and take ownership of their learning. This type of assessment requires students to evaluate their own progress, skills, and understanding of the subject matter. By promoting self-awareness and critical thinking, self-assessment can contribute to the development of lifelong learning habits and foster a growth mindset.

Examples of self-assessment activities include reflective journaling, goal setting, self-rating scales, or checklists. These tools provide students with opportunities to assess their strengths, weaknesses, and areas for improvement. When implementing self-assessment in the classroom, it is important to create a supportive environment where students feel comfortable and encouraged to be honest about their performance.

Teachers can guide students by providing clear criteria and expectations for self-assessment, as well as offering constructive feedback to help them set realistic goals for future learning.

Incorporating self-assessment as part of a broader assessment strategy can reinforce learning objectives and empower students to take an active role in their education.

Reflecting on their performance and understanding the assessment criteria can help them recognize both short-term successes and long-term goals. This ongoing process of self-evaluation can help students develop a deeper understanding of the material, as well as cultivate valuable skills such as self-regulation, goal setting, and critical thinking.

6. Peer assessment

Peer assessment, also known as peer evaluation, is a strategy where students evaluate and provide feedback on their classmates’ work. This type of assessment allows students to gain a better understanding of their own work, as well as that of their peers.

Examples of peer assessment activities include group projects, presentations, written assignments, or online discussion boards.

In these settings, students can provide constructive feedback on their peers’ work, identify strengths and areas for improvement, and suggest specific strategies for enhancing performance.

Constructive peer feedback can help students gain a deeper understanding of the material and develop valuable skills such as working in groups, communicating effectively, and giving constructive criticism.

To successfully integrate peer assessment in the classroom, consider incorporating a variety of activities that allow students to practice evaluating their peers’ work, while also receiving feedback on their own performance.

Encourage students to focus on both strengths and areas for improvement, and emphasize the importance of respectful, constructive feedback. Provide opportunities for students to reflect on the feedback they receive and incorporate it into their learning process. Monitor the peer assessment process to ensure fairness, consistency, and alignment with learning objectives.

Implementing Standards-Based Assessments

Standards-based assessments are designed to measure students’ performance relative to established learning standards, such as those generated by the Common Core State Standards Initiative or individual state education guidelines.

By implementing these types of assessments, educators can ensure that students meet the necessary benchmarks for their grade level and subject area, providing a clearer picture of student progress and learning outcomes.

To successfully implement standards-based assessments, it is essential to align assessment tasks with the relevant learning standards.

This involves creating assessments that directly measure students’ knowledge and skills in relation to the standards rather than relying solely on traditional testing methods.

As a result, educators can obtain a more accurate understanding of student performance and identify areas that may require additional support or instruction. Grading formative and summative assessments within a standards-based framework requires a shift in focus from assigning letter grades or percentages to evaluating students’ mastery of specific learning objectives.

This approach encourages educators to provide targeted feedback that addresses individual student needs and promotes growth and improvement. By utilizing rubrics or other assessment tools, teachers can offer clear, objective criteria for evaluating student work, ensuring consistency and fairness in the grading process.
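
As a concrete illustration of the rubric-based grading described above, the sketch below totals weighted performance-level scores across criteria. The criterion names, weights, and the 1-4 scale are hypothetical, not drawn from any particular framework:

```python
# Hypothetical rubric weights for an essay task (must sum to 1.0).
# Criterion names and weights are illustrative only.
RUBRIC_WEIGHTS = {
    "claim_and_focus": 0.30,
    "use_of_evidence": 0.40,
    "organization": 0.20,
    "conventions": 0.10,
}

def rubric_score(scores):
    """Weighted average of per-criterion scores on a 1-4 performance
    scale (1 = beginning, 4 = mastery)."""
    for criterion, level in scores.items():
        if not 1 <= level <= 4:
            raise ValueError(f"{criterion}: level must be 1-4, got {level}")
    return round(sum(RUBRIC_WEIGHTS[c] * scores[c] for c in RUBRIC_WEIGHTS), 2)

# One student's marked essay:
print(rubric_score({"claim_and_focus": 3, "use_of_evidence": 4,
                    "organization": 3, "conventions": 2}))  # 3.3
```

Because the criteria, weights, and levels are explicit, two markers applying the same rubric should arrive at the same score, which is the consistency and fairness the paragraph above refers to.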

Tips For Choosing the Right Assessment Strategy

When selecting an assessment strategy, it’s crucial to consider its purpose. Ask yourself what you want to accomplish with the assessment and how it will contribute to student learning. This will help you determine the most appropriate assessment type for your specific situation.

Aligning assessments with learning objectives is another critical factor. Ensure that the assessment methods you choose accurately measure whether students have met the desired learning outcomes. This alignment will provide valuable feedback to both you and your students on their progress. Diversifying assessment methods is essential for a comprehensive evaluation of student learning.

By using a variety of assessment types, you can gain a more accurate understanding of students’ strengths and weaknesses. This approach also helps support different learning styles and reduces the risk of overemphasis on a single assessment method.

Incorporating multiple forms of assessment, such as formative, summative, diagnostic, performance-based, self-assessment, and peer assessment, can provide a well-rounded understanding of student learning. By doing so, educators can make informed decisions about instruction, support, and intervention strategies to enhance student success and overall classroom experience.

Challenges and Solutions in Assessment Implementation

Implementing various assessment strategies can present several challenges for educators. One common challenge is the limited time and resources available for creating and administering assessments. To address this issue, teachers can collaborate with colleagues to share resources, divide the workload, and discuss best practices.

Utilizing technology and online platforms can also streamline the assessment process and save time. Another challenge is ensuring that assessments are unbiased and inclusive.

To overcome this, educators should carefully review assessment materials for potential biases and design assessments that are accessible to all students, regardless of their cultural backgrounds or learning abilities.

Offering flexible assessment options for the varying needs of learners can create a more equitable and inclusive learning environment. It is essential to continually improve assessment practices and seek professional development opportunities.

Seeking support from colleagues, attending workshops and conferences related to assessment practices, or enrolling in online courses can help educators stay up-to-date on best practices while also providing opportunities for networking with other professionals.

Ultimately, these efforts will contribute to an improved understanding of the assessments used as well as their relevance in overall student learning.

Assessing student learning is a crucial component of effective teaching and should not be overlooked. By understanding and implementing the various types of assessments discussed in this article, you can create a more comprehensive and effective approach to evaluating student learning in your classroom.

Remember to consider the purpose of each assessment, align them with your learning objectives, and diversify your methods for a well-rounded evaluation of student progress.

If you’re looking to further enhance your assessment practices and overall professional development, Strobel Education offers workshops , courses , keynotes , and coaching  services tailored for K-12 educators. With a focus on fostering a positive school climate and enhancing student learning,  Strobel Education can support your journey toward improved assessment implementation and greater teacher well-being.

Best Practices for Assessing Student Learning

Varying Assessment Types

Best practices in assessing student learning include using several types of assessment that enable students to show evidence of their learning in various ways as they learn the content and achieve the learning outcomes.

Effective assessment types include:

  • Short-answer questions
  • Research papers
  • Multiple-choice questions
  • Multiple-response questions (more than one correct or partially correct answer)
  • True/false questions
  • Matching terms with definitions/concepts
  • Performances
  • Presentations

Short and Frequent Assessments

Assessment of student learning should be frequent throughout the course. These frequent experiences should require students to perform a task, answer questions, or take some other action that gives evidence of their learning. The most important thing is to give immediate and ongoing feedback so students know what they are doing well and what improvements they need to make. Evaluate the assessments quickly so that students receive immediate feedback to guide their continued learning. Although some of these assessments will provide data and scores to be included in student grades, it is not necessary to record scores for all assessments.

Culminating Assessments

Culminating assessments are not necessarily comprehensive exams. The BYU policy for final exams includes this statement (see https://policy.byu.edu/view/index.php?p=64 ):

Final examinations:

  • A final examination or comparable culminating evaluation of student learning is expected for every course. Exceptions must be approved by the dean and chair.
  • Scheduled final examinations are to be administered in accord with the published Final Examination Schedule as to date, time, and place. They are not to be given or taken early.
  • Finals taken in the Testing Center must be completed during the Final Examination Period. Finals should be designed so that they can be completed within a period of time equivalent to a regularly scheduled final.
  • Any other culminating evaluation of student learning (e.g., oral examination, take-home examination, portfolio review, juried performance, etc.), must be completed during the Final Examination Period, should not require more time to complete than a regular final examination, and must not conflict with another scheduled final examination.

A culminating assessment occurs at the end of the course and focuses on the student’s achievement of the learning outcomes. The student’s experience with this concluding assessment should be both representative of their learning and inspiring for continued growth. You can provide a culminating assessment in a variety of ways, including presentations, research papers, group/panel discussions, poster presentations, oral exams, traditional final exams, etc.

Using Assessments to Improve Student Learning

By aligning your assessments with the learning outcomes, you can evaluate student performance and organize feedback and plans for your students’ further learning. There are several ways you can analyze and use student-performance data to help improve your students’ learning. Two main ways of doing this are using item-analysis data in Learning Suite Exams (for exams scored online or at the Testing Center), or using scoring rubrics.

Using student-performance data will enable you to generate focused feedback for students to use as they improve their learning and move forward in your class. This data will also help you identify gaps in what you are intending for students to be able to know and do versus their actual achievement of those outcomes. You will then be able to make decisions and identify areas where you can improve your course organization and teaching.
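
The item analysis mentioned above can be sketched in a few lines. Assuming scored responses can be exported as a 0/1 matrix (students by items) — the exact export format from Learning Suite is an assumption here, and the response data below is invented — item difficulty is the proportion of students answering correctly, and a simple upper-minus-lower discrimination index compares top- and bottom-scoring groups:

```python
# Hypothetical 0/1 response matrix: rows = students, columns = exam items.
responses = [
    [1, 1, 0, 1],
    [1, 0, 0, 1],
    [1, 1, 1, 1],
    [0, 0, 0, 1],
    [1, 1, 0, 0],
    [0, 1, 1, 1],
]

def item_difficulty(matrix):
    """Proportion of students answering each item correctly (p-value)."""
    n = len(matrix)
    return [sum(row[i] for row in matrix) / n for i in range(len(matrix[0]))]

def discrimination_index(matrix, fraction=1/3):
    """Upper-minus-lower discrimination: difficulty among the top scorers
    minus difficulty among the bottom scorers, per item."""
    ranked = sorted(matrix, key=sum, reverse=True)
    k = max(1, int(len(matrix) * fraction))
    top, bottom = ranked[:k], ranked[-k:]
    return [u - l for u, l in zip(item_difficulty(top), item_difficulty(bottom))]

print(item_difficulty(responses))
print(discrimination_index(responses))
```

Items with very high or very low difficulty, or with low discrimination, are the ones worth revising or reteaching; the gap analysis described above is essentially a reading of these two statistics.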

Essay assessments ask students to demonstrate a point of view supported by evidence. They allow students to demonstrate what they've learned and build their writing skills.

An essay question prompts a written response, which may vary from a few paragraphs to a number of pages.

Essay questions are generally open-ended. They differ from short answer questions in that they:

  • require more time
  • are less structured
  • require students to integrate information and interpretation.

When to use an essay

Essays can be used to test students' higher order thinking.

Advantages and limitations

Advantages

  • Test analysis, reasoning, synthesis and evaluation skills.
  • Are open ended, allowing students to answer the question in a variety of ways and demonstrate depth and creativity.
  • Allow for deep learning and connections.
  • Allow students to draw on research and reasoning to provide justification and show integration.
  • Provide an opportunity to assess a student’s writing ability.
  • Can be quicker to prepare than other item/assessment types.
  • Can be structured in different ways.

Limitations

  • Can limit the range of assessable content and the number of assessment items that can be used.
  • Favour students with good writing skills.
  • Can allow for plagiarism.
  • Can be difficult to moderate.
  • Time consuming to assess.
  • Markers need to identify knowledge and understanding despite levels of expression, i.e. elegant language can mask superficial thinking, while clumsy language can disguise understanding of ideas.

Guidelines for developing essay assessments

Essay questions

Effective essay questions provide students with a focus (types of thinking and content) to use in their response.

Make sure your essay question:

  • is aligned with the intended learning outcome
  • is an appropriate length
  • contains a clear task or a specific problem situation
  • is worded and structured in such a way that it will be clear to the students what they are expected to do
  • is not indeterminate, vague or open to numerous and/or subjective interpretations
  • contains verbs that match the intended learning outcomes (if you use verbs like discuss or explain, indicate which points should be discussed/explained)
  • defines the scope of the task to avoid students going off on an unrelated tangent
  • allows for answers at different levels, i.e. a basic, satisfactory response and an extended, high level response
  • includes differentiating aspects in the way the question is written.

Review the question and improve it using the following checks:

  • Does the question align with the learning outcome?
  • Is the focus clear?
  • Is the scope specific and clear enough?
  • Is there enough direction to guide the student to the expected response?

Alignment to learning outcomes

To ensure the assessment item aligns with learning outcomes:

  • prepare a model answer or an outline of major points that should be included in the answer
  • critically review the essay item for clarity
  • check the question is aligned with the intended learning outcome and model answer.

Student preparation

Make sure your students are prepared by:

  • teaching them how to approach essays
  • scaffolding learning so there are opportunities to guide and practise essay writing
  • ensuring students know the recommended time for completing their answer
  • ensuring students know the weighting of the essay.

Examples of essay question verbs

Below are lists of verbs that are commonly used in essay questions. These words:

  • relate to learning outcomes
  • can be thought of as aligning with critical essay questions or descriptive essay questions
  • can be used as starting points for the development of essay questions.

Descriptive question words:

  • Define
  • Demonstrate
  • Describe
  • Elaborate
  • Explain
  • Explore
  • Identify
  • Outline
  • Summarise

Critical question words:

  • Analyse
  • Evaluate
  • Justify
  • Critically evaluate
  • Review
  • Assess
  • Discuss
  • Examine
  • To what extent
  • Compare
  • Contrast

Students in School: Importance of Assessment Essay

Are tests important for students, and why? How should learning be assessed? Essays like the one below aim to answer these questions.

Introduction

Assessment of students is a vital exercise aimed at evaluating their knowledge, talents, thoughts, or beliefs (Harlen, 2007). It involves testing part of the content taught in class to ascertain students’ learning progress. Assessment should take into consideration both students’ class work and their work outside class. For younger kids, the teacher should focus on language development.

This will enhance the kids’ confidence when expressing their ideas whenever asked. As in organizations, checks on students’ progress should be undertaken regularly. Organizations that fail to do so risk investing effort in futility because they lack opportunities for correction.

In schools, by contrast, there are more chances to correct mistakes. Similarly, teachers and parents should have a basis for nurturing and correcting students. This is only possible through assessment at certain intervals during the learning process. Equally, parents or teachers can use tests during instruction as a means of offering quick solutions to the challenges students experience while learning.

All trainers should work together with their students with the aim of achieving certain goals. To evaluate whether the goals are met, trainers use various assessment methods depending on the profession. This holds true for assessment in schools. Assessment should focus on students’ learning progress.

It should be employed from kindergarten up to the highest levels of learning institutions, such as the university. The most essential fact about assessment is that it has to be specific: each test should evaluate whether a student can demonstrate understanding of certain concepts taught in class. Contrary to what most examiners believe, assessment should never be used as a means of ranking students.

Otherwise, the key aims of assessment will be lost. Ranking is not bad in itself, but to some extent it can create a negative impression and demoralize students who are not ranked at the top of the class; they may feel they are foolish, which is not the case. In general, assessment should be used to evaluate results and thus to create and formulate strategies for improving students’ learning and performance.

Importance of assessment in school

Assessment forms an important part of learning that determines whether the objectives of education have been attained or not (Salvia, 2001). For important decision making concerning a student’s performance, assessment is inevitable. It is crucial since it helps determine which course or career a student can pursue, depending on class performance.

This is not possible without exam assessment. It engages instructors with a number of questions, including whether they are teaching students what they are supposed to be taught, and whether their teaching approach suits their students.

Students should be subjected to assessment beyond class work, because the world is changing and they are supposed to adapt to dynamics they encounter in their everyday lives. Assessment is important for parents, students, and teachers.

Teachers should be able to identify the students’ level of knowledge and their special needs. They should be able to identify skills, design lesson plans, and come up with the goals of learning. Similarly, instructors should be able to create new learning arrangements and select appropriate learning materials to meet individual student’s needs.

Teachers have to inform parents about the student’s progress in class. This is only possible with the assessment of the students through either exam or group assessment. The assessment will make teachers improve learning mechanisms to meet the needs and abilities of all students. It provides teachers with a way of informing the public about the student’s progress in school.

Whenever parents are informed about their children’s results, they can contribute to decision making concerning the students’ education needs (Harlen, 2007). Parents are able to select and pay for the relevant curriculum for their children. They can hire personal tutors or pay tuition to promote the student’s learning.

Students should be able to evaluate their performance and learning in school with the use of assessment results. These form the basis of self-motivation, as through them students are able to put in extra effort to improve their exam performance. Without results, a student might be tempted to assume that he or she has mastered everything taught in class.

Methods of assessment

Various mechanisms can be used to assess students in school. These include both group assessment and various examinations issued during the learning session. Exams could be given on a weekly, monthly, or termly basis, with the student required to submit a written paper or give an oral presentation. Assignments are normally given with a fixed date of submission.

The teacher determines the amount of time required depending on the complexity of the assignment. It can take a day, a week, or even a month, which ensures that the student does not rely only on class work. It promotes research and instills self-direction in the student. In addition, short, timed exams give quick feedback to the teacher about student performance.

Exam methods of assessment

Before looking at the various methods of exam assessment, it is important to understand the major role that the assessment plays in the learning of the student. Carrying out an assessment at regular intervals allows the teachers to know how their students are progressing over time with respect to their previous assessments (Harlen, 2007).

Actually, testing helps students learn and creates motivation to learn more and improve their performance in future examinations. It also guides the teacher on ways of passing knowledge on to the students. There are three purposes of assessment: assessment for learning, assessment as learning, and assessment of learning.

All these help the teacher in planning lessons and obtaining feedback from students. Moreover, these three modes of assessment join the efforts of parents, students, and teachers in the process of learning. There are several repercussions when parents do not closely monitor their children’s performance.

Education experts assert that parents who fail to monitor their children’s learning progress are like farmers who sow seeds during the planting season and then wait to reap at harvest without doing anything in between. The success of the student is most easily achieved when there is harmony among parents, teachers, and students.

Methods of assessment can be categorized into three steps: baseline, formative, and summative (Stefanakis, 2010). The baseline assessment establishes a starting point and marks the beginning of learning. The summative assessment carries more weight than the formative in the overall performance of the student: it carries more marks and is usually done at the end of the teaching period, for example as a term paper.

The aim is to check the student’s overall understanding of the unit or topic. As the formative assessment is a continuous process during classroom learning, the instructor should use general feedback and observations while teaching. It can provide an immediate solution, because the area that troubles the student is easily identified and the teacher can take appropriate action.

Teachers should never ignore formative assessment or wait only for the summative assessment at the end of the term. If a teacher discovers a student’s weakness only then, the discovery is of little use, since there is no room left for improvement; the summative assessment is a reactive rather than a proactive measure. Various mechanisms can be used to carry out formative assessment.

These include surveys, which involve collecting students’ opinions, attitudes, and behaviors during class (Nitko, 2001). They help the instructor interact with students more closely, creating a supportive learning environment. The teacher is able to clear up any misconceptions students hold from prior knowledge. It can also involve student reflection.

Here, the student is required to take some time and reflect on what was taught. The student is prompted to ask several questions about the lesson, for instance about the hottest topic, new concepts, or questions left unanswered. Formative assessment also involves the teacher asking questions during a teaching session. This helps the teacher identify the areas the students have not understood.

By doing so, the teacher is able to focus and put more effort into some topics compared to others. The teacher can also decide to issue homework or assignments to students. This gives students an opportunity to build confidence in the knowledge acquired during class work (Stefanakis, 2010).

Most importantly, the teacher could include the objectives and expectations of each lesson, for example in the form of questions. These questions raise students’ awareness of and curiosity about the topic.

For the above methods of assessment, various formats have been adopted. The first is the baseline assessment, which aims at examining an individual’s experience as well as prior knowledge. The second is the pencil-and-paper assessment, a written test; it can be a short essay or multiple-choice questions and checks the student’s understanding of certain concepts.

The third is the embedded assessment. It deals with testing the students in contextual learning and it is done in the formative stage. The fourth involves oral reports that aim at capturing the student’s communication and scientific skills. They are carried out in the formative stage. Interviews evaluate the group and individual performance during the formative stage.

There is also the performance task, which requires the student to carry out an action related to the problem while explaining a scientific idea. Usually, it is assessed in both the summative and formative stages. All these formats ensure the objective of the assessment is achieved (Harlen, 2007). Taken together, these exam methods promote learning and the acquisition of knowledge among students.

Group methods of assessment

Assessment is a flexible activity: what is done with an individual during assessment can also be done with a group and still achieve the objectives of the assessment. Group work aims to ensure that students work together. The method is not as smooth as individual assessment, since awarding grades is trickier and not straightforward.

Instructors cannot easily tell which student contributed most to the group work; awarding the same grade to all members is one way to keep the process even-handed, but it does not reflect individual effort (Paquette, 2010). It is advisable to consider both the process and the finished product when assessing group work.

By just looking at the final work of the group, no one can tell who did what. Individual contributions are implicit in the final project. The teacher should employ other measures to be able to distribute grades fairly.

Solutions for assessing groups include considering both the process and the final work. The instructor should assess the process involved in the development of the final product. Aspects of the process include punctuality, cooperation, and each student’s contribution to the group work (Stefanakis, 2010). The participation of each student and the group’s teamwork should be assessed.

Fair grading requires looking at the achievement of the project’s objectives. In addition, instructors can let students assess and evaluate themselves through group participation. This enhances group teamwork and yields a fairer distribution of grades, since group members know best who researched and wrote up each part of the work.

Self-assessment aims at realizing respect, promptness, and listening to minority views within the group. Another effective way of ensuring that group work succeeds is holding group members accountable. This curbs the issue of free riding among group members, since individuals are each allocated a certain portion of the entire job.

This involves asking members to demonstrate what they have learned and how they have contributed to the group. In addition, both the products and the processes are assessed. Another interesting scenario arises when the instructor gives students the opportunity to evaluate the work of other team members. Gauging individuals involves investigating various aspects of the project.

These include communication skills, efforts, cooperation, and participation of individual members. It is facilitated by the use of forms, which are completed by the students.

Group work aims at improving both individual accountability and the quality of information that emerges from group dynamics. To some extent, an instructor can also draw on external feedback, which is finally incorporated into the final score of the student’s group grade.

There are various mechanisms for assessing and grading the group. First, there is shared grading: the group’s submitted work is assessed and the same grade is awarded to all members, without considering individual contributions. Secondly, there is averaging of the group grade: each member submits their allocated portion, individual marks are assessed and averaged, and that average is awarded to all group members.

This average group grade encourages members to focus on both group and individual work. There is also individual grading, where each student’s allocated work is assessed and a grade is given to the individual.

This encourages effort from every member; in fact, this method is the fairest way of grading group work. There is also individual report grading, in which each member is required to write an individual report. After submission, the report is assessed and a grade is given to the student.

Finally, there is individual examination grading, where questions are examined based on the project. This encourages students to participate fully during the project, since it is hard to answer the questions without having participated in the group work.
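
The shared, averaged, and individual grading mechanisms just described can be sketched in a few lines; the member names and marks below are hypothetical:

```python
# Hypothetical marks for one group: each member's individually assessed
# portion, plus a single mark for the group's submitted product.
individual_marks = {"Amani": 85, "Ben": 70, "Chen": 90}
group_product_mark = 80

def shared_grade(product_mark, members):
    """Shared grading: every member receives the product mark."""
    return {m: product_mark for m in members}

def averaged_grade(marks):
    """Averaged grading: every member receives the mean of the
    individually assessed portions."""
    avg = sum(marks.values()) / len(marks)
    return {m: round(avg, 1) for m in marks}

def individual_grade(marks):
    """Individual grading: each member keeps their own portion's mark."""
    return dict(marks)

print(shared_grade(group_product_mark, individual_marks))
print(averaged_grade(individual_marks))
print(individual_grade(individual_marks))
```

Comparing the three outputs makes the fairness trade-off concrete: shared grading hides the strong and weak contributions that individual grading exposes, while averaging sits between the two.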

How assessment prepares students for higher education, the workforce, and character development

In any institution, examinations are an unavoidable means of assessing students. Whatever system a country's government adopts, exams matter because they allow teachers to let students who perform well progress in their learning (Stefanakis, 2010). Students who have not met the minimum grade will require extra tuition before they are promoted.

This may involve parents taking the initiative to hire tutors for the student. Exam-based assessment prepares students for higher levels of learning, because institutions of higher learning use exams too. It is therefore important for students to become accustomed to exams as well as research, which will strengthen their understanding during lectures at university or college.

Similarly, at the end of a university degree course, students are required to carry out a project either individually or as group work. The knowledge and experience of teamwork gained at lower levels of study will play a great role in the successful completion of tasks at university.

Another important function of assessment is that it helps a student develop his or her character from childhood to adulthood. From the first time a student joins school, assessment begins.

Through the small tasks set by the teacher or by peers, the student learns how to work with other students, especially during group tasks. The student learns and embraces teamwork, cooperation, and accountability; these virtues are a foundation for character. In addition, the student acquires communication skills, especially when presenting project work or during class sessions.

These small lessons accumulate and carry over into life outside school, enabling the student to work in any environment. Exam credentials are also vital requirements in the job market: many firms base their hiring qualifications on exams, and employers often choose workers on the strength of their exam results.

This approach persists because employers may not have time to let every candidate demonstrate their skills directly (Stefanakis, 2010). The underlying basis is therefore both exam and group assessment. Group assessment helps to build teamwork, a vital virtue in the workplace: most projects in an organization are done in groups, so teamwork is crucial during implementation.

The student draws on the knowledge and experience of group work gained at school, and the working environment is not so different from socialization in school. In any organization, success is determined by the teamwork and unity of its workers. These virtues are learnt and developed in school and are reinforced by assessment.

Harlen, W. (2007). Assessment of learning. Los Angeles, CA: SAGE Publications.

Nitko, A. J. (2001). Educational assessment of students (3rd ed.). Upper Saddle River, N.J.: Merrill.

Paquette, K. R. (2010). Striving for the perfect classroom: Instructional and assessment strategies to meet the needs of today's diverse learners. New York: Nova Science Publishers.

Salvia, J. (2001). Assessment (8th ed.). Boston: Houghton Mifflin.

Stefanakis, E. H. (2010). Differentiated assessment: How to assess the learning potential of every student. San Francisco: Jossey-Bass.


IvyPanda. (2019, April 22). Students in School: Importance of Assessment Essay. https://ivypanda.com/essays/assessment-of-students-in-schools-essay/



Constructing tests

Whether you use low-stakes assessments, such as practice quizzes, or high-stakes assessments, such as midterms and finals, the careful design of your tests and quizzes can provide you with better information on what and how much students have learned, as well as whether they are able to apply what they have learned.

On this page, you can explore strategies for:

  • Designing your test or quiz
  • Creating multiple choice questions
  • Creating essay and short answer questions
  • Helping students succeed on your test/quiz
  • Promoting academic integrity
  • Assessing your assessment

Designing your test or quiz

Tests and quizzes can help instructors work toward a number of different goals. For example, a frequent cadence of quizzes can help motivate students, give you insight into students’ progress, and identify aspects of the course you might need to adjust.

Understanding what you want to accomplish with the test or quiz will help guide your decision-making about things like length, format, level of detail expected from students, and the time frame for providing feedback to the students. Regardless of what type of test or quiz you develop, it is good to:

  • Align your test/quiz with your course learning outcomes and objectives. For example, if your course goals focus primarily on building students’ synthesis skills, make sure your test or quiz asks students to demonstrate their ability to connect concepts.
  • Design questions that allow students to demonstrate their level of learning. To determine which concepts you might need to reinforce, create questions that give you insight into a student’s level of competency. For example, if you want students to understand a 4-step process, develop questions that show you which of the steps they grasp and which they need more help understanding.
  • Develop questions that map to what you have explored and discussed in class. If you are using publisher-provided question banks or assessment tools, be sure to review and select questions carefully to ensure alignment with what students have encountered in class or in your assignments.
  • Incorporate Universal Design for Learning (UDL) principles into your test or quiz design. UDL embraces a commitment to offering students multiple means of demonstrating their understanding. Consider offering practice tests/quizzes and creating tests/quizzes with different types of questions (e.g., multiple choice, written, diagram-based). Also, offer options in the test/quiz itself. For example, give students a choice of questions to answer or let them choose between writing or speaking their essay response. Valuing different communication modes helps create an inclusive environment that supports all students.

Creating multiple choice questions

While it is not advisable to rely solely on multiple choice questions to gauge student learning, they are often necessary in large-enrollment courses. And multiple choice questions can add value to any course by providing instructors with quick insight into whether students have a basic understanding of key information. They are also a great way to incorporate more formative assessment into your teaching .

Creating effective multiple choice questions can be difficult, especially if you want students to go beyond simply recalling information. But it is possible to develop multiple choice questions that require higher-order thinking. Here are some strategies for writing effective multiple choice questions:

  • Design questions that ask students to evaluate information (e.g., use an example, case study, or real-world dataset).
  • Make sure the answer options are consistent in length and detail. If the correct answer is noticeably different, students may end up choosing the right answer for the wrong reasons.
  • Design questions focused on your learning goals. Avoid writing “gotcha” questions – the goal is not to trip students up, but to assess their learning and progress toward your learning outcomes.
  • Test drive your questions with a colleague or teaching assistant to determine if your intention is clear.

Creating essay and short answer questions

Essay and short answer questions that require students to compose written responses of several sentences or paragraphs offer instructors insights into students’ ability to reason, synthesize, and evaluate information. Here are some strategies for writing effective essay questions:

  • Signal the type of thinking you expect students to demonstrate. Use specific words and phrases (e.g., identify, compare, critique) that guide students in how to respond to the question.
  • Write questions that can be reasonably answered in the time allotted. Consider offering students some guideposts on how much effort they should give to a question (e.g., “Write no more than 2 short paragraphs” or “Each short answer question is worth 20 points/10% of the test”).
  • Share your grading criteria before the test or quiz. Rubrics are a great way to help students prepare for an essay- or short answer-based test.

Strategies for grading essay and short answer questions

Although essay and short paragraph questions are more labor-intensive to grade than multiple-choice questions, the pay-off is often greater – they provide more insight into students’ critical thinking skills. Here are some strategies to help streamline the essay/short answer grading process:

  • Develop a rubric to keep you focused on your core criteria for success. Identify the point value/percentage associated with each criterion to streamline scoring.
  • Grade all student responses to the same question before moving to the next question. For example, if your test has two essay questions, grade all essay #1 responses first. Then grade all essay #2 responses. This promotes grading equity and may provide a more holistic view of how the class as a whole answered each question.
  • Focus on assessing students’ ideas and arguments. Few people write beautifully in a timed test. Unless it is key to your learning outcomes, don’t grade on grammar or polish.
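Attaching a point value to each rubric criterion, as suggested above, makes scoring almost mechanical. A hypothetical sketch (criterion names and weights are invented for illustration):

```python
# Hypothetical rubric: each criterion maps to its maximum point value.
rubric = {"thesis": 5, "evidence": 10, "analysis": 10, "organization": 5}

def score(awarded: dict) -> int:
    """Total an essay's score, capping each criterion at its maximum."""
    return sum(min(points, rubric[criterion])
               for criterion, points in awarded.items())

# One student's marks against the four criteria: 4 + 8 + 9 + 5 = 26 of 30.
essay_total = score({"thesis": 4, "evidence": 8, "analysis": 9, "organization": 5})
```

Sharing this table with students before the test doubles as the transparency measure described above: they can see exactly where the points are.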

Helping students succeed

While important in university settings, tests aren’t commonly found outside classroom settings. Think about your own work – how often are you expected to sit down, turn over a sheet of paper, and demonstrate the isolated skill or understanding listed on the paper in less than an hour? Sound stressful? It is! And sometimes that stress can be compounded by students’ lives beyond the classroom.

“Giving a traditional test feels fair from the vantage point of an instructor….The students take it at the same time and work in the same room. But their lives outside the test spill into it. Some students might have to find child care to take an evening exam. Others…have ADHD and are under additional stress in a traditional testing environment. So test conditions can be inequitable. They are also artificial: People are rarely required to solve problems under similar pressure, without outside resources, after they graduate. They rarely have to wait for a professor to grade them to understand their own performance.” “ What fixing a snowblower taught one professor about teaching ” Chronicle of Higher Education

Many students understandably experience stress, anxiety, and apprehension about taking tests and that can affect their performance. Here are some strategies for reducing stress in testing environments:

  • Set expectations. In your syllabus and on the first day of class, share your course learning outcomes and clearly define what constitutes cheating. As the quarter progresses, talk with students about the focus, time, location, and format of each test or quiz.
  • Provide study questions or low-/no-stakes practice tests that scaffold to the actual test. Practice questions should ask students to engage in the same kind of thinking you will be assessing on the actual test or quiz.
  • Co-create questions with your students . Ask students (individually or in small groups) to develop and answer potential test questions. Collect these and use them to create the questions for the actual test.
  • Develop relevant tests/quiz questions that assess skills and knowledge you explored during lecture or discussion.
  • Share examples of successful answers and provide explanations of why those answers were successful.

Promoting academic integrity

The primary goal of a test or quiz is to provide insight into a student’s understanding or ability. If a student cheats, the instructor has no opportunity to assess learning and help the student grow. There are a variety of strategies instructors can employ to create a culture of academic integrity . Here are a few specifically related to developing tests and quizzes:

  • Avoid high-stakes tests. High-stakes tests (e.g., a test worth more than 25% of the course grade) make it hard for a student to rebound from a mistake. If students worry that a disastrous exam will make it impossible to pass the course, they are more likely to cheat. Opt for assessments that provide students with opportunities to grow and rebound from missteps (e.g., more low- or no-stakes assignments ).
  • Require students to reference specific materials or assignments in their answers. Asking students to draw on specific materials in their answers makes it harder for students to copy/paste answers from other sources.
  • Require students to explain or justify their answers . If it isn’t feasible to do this during the test, consider developing a post-test assignment that asks learners to justify their test answers. Instructors need not spend a lot of time grading these; just spot check them to get a sense of whether learners are thinking for themselves.
  • Prompt students to apply their learning through questions built around discrete scenarios, real-world problems, or unique datasets.
  • Use question banks and randomize questions when building your quiz or test.
  • Include a certification question. Ask students to acknowledge their academic integrity responsibilities at the beginning of a quiz/exam with a question like this one: “The work I submit is my own work. I will not consult with, discuss the contents of this quiz/test with, or show the quiz/test to anyone else, including other students. I understand that doing so is a violation of UW’s academic integrity policy and may subject me to disciplinary action, including suspension and dismissal.”
  • Allow learners to work together. Ask learners to collaborate to answer quiz/exam questions. By helping each other, learners engage with the course’s content and develop valuable collaboration skills.
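Randomizing from a question bank, as suggested above, means each student sees a different selection and ordering of questions. A minimal sketch using Python's standard library (the bank contents are invented for illustration):

```python
import random

# Hypothetical question bank grouped by topic.
bank = {
    "photosynthesis": ["Q1", "Q2", "Q3", "Q4"],
    "respiration": ["Q5", "Q6", "Q7"],
}

def build_quiz(bank, per_topic=2, seed=None):
    """Draw a random sample of questions from each topic, then shuffle the order."""
    rng = random.Random(seed)
    quiz = []
    for questions in bank.values():
        quiz.extend(rng.sample(questions, per_topic))
    rng.shuffle(quiz)
    return quiz

quiz = build_quiz(bank, per_topic=2, seed=42)
```

In practice a learning management system such as Canvas handles this for you via question groups; the sketch just shows the underlying idea of sampling without replacement per topic.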

Assessing your assessment

Observation and iteration are key parts of a reflective teaching practice . Take time after you’ve graded a test or quiz to examine its effectiveness and identify ways to improve it. Start by asking yourself some basic questions:

  • Did the test/quiz assess what I wanted it to assess? Based on students’ performance on your questions, are you confident that students grasp the concepts and skills you designed the test/quiz to measure?
  • Did the test align with my course goals and learning objectives? Does student performance indicate that they have made progress toward your learning goals? If not, you may need to revise your questions.
  • Were there particular questions that many students missed? If so, reconsider how you’ve worded the question or examine whether the question asked students about something you didn’t discuss (or didn’t discuss enough) in class.
  • Were students able to finish the test or quiz in the time allotted? If not, reconsider the number and difficulty of your questions.


Designing Assessments of Student Learning

Hollie Nyseth Brehm, Associate Professor, Department of Sociology. Professor Hollie Nyseth Brehm was a graduate student the first time she taught a class: “I didn’t have any training on how to teach, so I assigned a final paper and gave them instructions: ‘Turn it in at the end of course.’ That was sort of it.” Brehm didn’t have a rubric or a process to check in with students along the way. Needless to say, the assignment didn’t lead to any major breakthroughs for her students. But it was a learning experience for Brehm. As she grew her teaching skills, she began to carefully craft assignments to align to course goals, make tasks realistic and meaningful, and break down large assignments into manageable steps. "Now I always have rubrics. … I always scaffold the assignment such that they’ll start by giving me their paper topic and a couple of sources and then turn in a smaller portion of it, and we write it in pieces. And that leads to a much better learning experience for them—and also for me, frankly, when I turn to grade it.”

Reflect  

Have you ever planned a big assignment that didn’t turn out as you’d hoped? What did you learn, and how would you design that assignment differently now? 

What are students learning in your class? Are they meeting your learning outcomes? You simply cannot answer these questions without assessment of some kind.

As educators, we measure student learning through many means, including assignments, quizzes, and tests. These assessments can be formal or informal, graded or ungraded. But assessment is not simply about awarding points and assigning grades. Learning is a process, not a product, and that process takes place during activities such as recall and practice. Assessing skills in varied ways helps you adjust your teaching throughout your course to support student learning.


Research tells us that our methods of assessment don’t only measure how much students have learned. They also play an important role in the learning process. A phenomenon known as the “testing effect” suggests students learn more from repeated testing than from repeated exposure to the material they are trying to learn (Karpicke & Roediger, 2008). While exposure to material, such as during lecture or study, helps students store new information, it’s crucial that students actively practice retrieving that information and putting it to use. Frequent assessment throughout a course provides students with the practice opportunities that are essential to learning.

In addition, we can’t assume students can transfer what they have practiced in one context to a different context. Successful transfer of learning requires understanding of deep, structural features and patterns that novices to a subject are still developing (Barnett & Ceci, 2002; Bransford & Schwartz, 1999). If we want students to be able to apply their learning in a wide variety of contexts, they must practice what they’re learning in a wide variety of contexts.

Providing a variety of assessment types gives students multiple opportunities to practice and demonstrate learning. One way to categorize the range of assessment options is as formative or summative.

Formative and Summative Assessment

Opportunities not simply to practice, but to receive feedback on that practice, are crucial to learning (Ambrose et al., 2010). Formative assessment facilitates student learning by providing frequent low-stakes practice coupled with immediate and focused feedback. Whether graded or ungraded, formative assessment helps you monitor student progress and guide students to understand which outcomes they’ve mastered, which they need to focus on, and what strategies can support their learning. Formative assessment also informs how you modify your teaching to better meet student needs throughout your course.

Technology Tip

Design quizzes in CarmenCanvas to provide immediate and useful feedback to students based on their answers. Learn more about setting up quizzes in Carmen. 

Summative assessment measures student learning by comparing it to a standard. Usually these types of assessments evaluate a range of skills or overall performance at the end of a unit, module, or course. Unlike formative assessment, they tend to focus more on product than process. These high-stakes experiences are typically graded and should be less frequent (Ambrose et al., 2010).


Using Bloom's Taxonomy

[Figure: Bloom's Taxonomy depicted as the layers of a cake. From bottom to top: Remember (recognizing and recalling facts); Understand (understanding what the facts mean); Apply (applying the facts, rules, concepts, and ideas); Analyze (breaking down information into component parts); Evaluate (judging the value of information or ideas); Create (combining parts to make a new whole).]

Bloom’s Taxonomy is a common framework for thinking about how students can demonstrate their learning on assessments, as well as for articulating course and lesson learning outcomes .

Benjamin Bloom (alongside collaborators Max Englehart, Edward Furst, Walter Hill, and David Krathwohl) published Taxonomy of Educational Objectives in 1956. The taxonomy provided a system for categorizing educational goals with the intent of aiding educators with assessment. Commonly known as Bloom’s Taxonomy, the framework has been widely used to guide and define instruction in both K-12 and university settings. The original taxonomy from 1956 included a cognitive domain made up of six categories: Knowledge, Comprehension, Application, Analysis, Synthesis, and Evaluation. The categories after Knowledge were presented as “skills and abilities,” with the understanding that knowledge was the necessary precondition for putting these skills and abilities into practice.

A revised Bloom's Taxonomy from 2001 updated these six categories to reflect how learners interact with knowledge. In the revised version, students can:  Remember content, Understand ideas, Apply information to new situations, Analyze relationships between ideas, Evaluate information to justify perspectives or decisions, and Create new ideas or original work. In the graphic pictured here, the categories from the revised taxonomy are imagined as the layers of a cake.

Assessing students on a variety of Bloom's categories will give you a better sense of how well they understand your course content. The taxonomy can be a helpful guide to predicting which tasks will be most difficult for students so you can provide extra support where it is needed. It can also be used to craft more transparent assignments and test questions by homing in on the specific skills you want to assess and finding the right language to communicate exactly what you want students to do. See the Sample Bloom's Verbs in the Examples section below.

Diving deeper into Bloom's Taxonomy

Like most aspects of our lives, activities and assessments in today’s classroom are inextricably linked with technology. In 2008, Andrew Churches extended Bloom’s Taxonomy to address the emerging changes in learning behaviors and opportunities as “technology advances and becomes more ubiquitous.” Consult Bloom’s Digital Taxonomy for ideas on using digital tools to facilitate and assess learning across the six categories of learning.

Did you know that the cognitive domain (commonly referred to simply as Bloom's Taxonomy) was only one of three domains in the original Bloom's Taxonomy (1956)? While it is certainly the most well-known and widely used, the other two domains— psychomotor and affective —may be of interest to some educators. The psychomotor domain relates to physical movement, coordination, and motor skills—it might apply to the performing arts or other courses that involve movement, manipulation of objects, and non-discursive communication like body language. The affective domain pertains to feelings, values, motivations, and attitudes and is used more often in disciplines like medicine, social work, and education, where emotions and values are integral aspects of learning. Explore the full taxonomy in  Three Domains of Learning: Cognitive, Affective, and Psychomotor (Hoque, 2017).

In Practice

Consider the following to make your assessments of student learning effective and meaningful.

Align assignments, quizzes, and tests closely to learning outcomes.

It goes without saying that you want students to achieve the learning outcomes for your course. The testing effect implies, then, that your assessments must help them retrieve the knowledge and practice the skills that are relevant to those outcomes.

Plan assessments that measure specific outcomes for your course. Instead of choosing quizzes and tests that are easy to grade or assignment types common to your discipline, carefully consider what assessments will best help students practice important skills. When assignments and feedback are aligned to learning outcomes, and you share this alignment with students, they have a greater appreciation for your course and develop more effective strategies for study and practice targeted at achieving those outcomes (Wang, et al., 2013).


Provide authentic learning experiences.

Consider how far removed from “the real world” traditional assessments like academic essays, standard textbook problems, and multiple-choice exams feel to students. In contrast, assignments that are authentic resemble real-world tasks. They feel relevant and purposeful, which can increase student motivation and engagement (Fink, 2013). Authentic assignments also help you assess whether students will be able to transfer what they learn into realistic contexts beyond your course.

Integrate assessment opportunities that prepare students to be effective and successful once they graduate, whether as professionals, as global citizens, or in their personal lives.

To design authentic assignments:

  • Choose real-world content . If you want students to be able to apply disciplinary methods, frameworks, and terminology to solve real-world problems after your course, you must have them engage with real-world examples, procedures, and tools during your course. Include actual case studies, documents, data sets, and problems from your field in your assessments.
  • Target a real-world audience . Ask students to direct their work to a tangible reader, listener or viewer, rather than to you. For example, they could write a blog for their peers or create a presentation for a future employer.
  • Use real-world formats . Have students develop content in formats used in professional or real-life discourse. For example, instead of a conventional paper, students could write an email to a colleague or a letter to a government official, develop a project proposal or product pitch for a community-based company, post a how-to video on YouTube, or create an infographic to share on social media.

Simulations, role plays, case studies, portfolios, project-based learning, and service learning are all great avenues to bring authentic assessment into your course.

Make sure assignments are achievable.

Your students juggle coursework from several classes, so it’s important to be conscious of workload. Assign tasks they can realistically handle at a given point in the term. If it takes you three hours to do something, it will likely take your students six hours or more. Choose assignments that assess multiple learning outcomes from your course to keep your grading manageable and your feedback useful (Rayner et al., 2016).

Scaffold assignments so students can develop knowledge and skills over time.

For large assignments, use scaffolding to integrate multiple opportunities for feedback, reflection, and improvement. Scaffolding means breaking a complex assignment down into component parts or smaller progressive tasks over time. Practicing these smaller tasks individually before attempting to integrate them into a completed assignment supports student learning by reducing the amount of information they need to process at a given time (Salden et al., 2006).

Scaffolding ensures students will start earlier and spend more time on big assignments. And it provides you more opportunities to give feedback and guidance to support their ultimate success. Additionally, scaffolding can draw students’ attention to important steps in a process that are often overlooked, such as planning and revision, leading them to be more independent and thoughtful about future work.

A familiar example of scaffolding is a research paper. You might ask students to submit a topic or thesis in Week 3 of the semester, an annotated bibliography of sources in Week 6, a detailed outline in Week 9, a first draft on which they can get peer feedback in Week 11, and the final draft in the last week of the semester.

Your course journey is decided in part by how you sequence assignments. Consider where students are in their learning and place assignments at strategic points throughout the term. Scaffold across the course journey by explaining how each assignment builds upon the learning achieved in previous ones (Walvoord & Anderson, 2011). 

Be transparent about assignment instructions and expectations. 

Communicate clearly to students about the purpose of each assignment, the process for completing the task, and the criteria you will use to evaluate it before they begin the work. Studies have shown that transparent assignments support students to meet learning goals and result in especially large increases in success and confidence for underserved students (Winkelmes et al., 2016).

To increase assignment transparency:


  • Explain how the assignment links to one or more course learning outcomes . Understanding why the assignment matters and how it supports their learning can increase student motivation and investment in the work.
  • Outline steps of the task in the assignment prompt . Clear directions help students structure their time and effort. This is also a chance to call out disciplinary standards with which students are not yet familiar or guide them to focus on steps of the process they often neglect, such as initial research.
  • Provide a rubric with straightforward evaluation criteria . Rubrics make transparent which parts of an assignment you care most about. Sharing clear criteria sets students up for success by giving them the tools to self-evaluate and revise their work before submitting it. Be sure to explain your rubric, and particularly to unpack new or vague terms; for example, language like "argue," “close reading,” "list significant findings," and "document" can mean different things in different disciplines. It is helpful to show exemplars and non-exemplars along with your rubric to highlight differences in unacceptable, acceptable, and exceptional work.

Engage students in reflection or discussion to increase assignment transparency. Have them consider how the assessed outcomes connect to their personal lives or future careers. In-class activities that ask them to grade sample assignments and discuss the criteria they used, compare exemplars and non-exemplars, engage in self- or peer-evaluation, or complete steps of the assignment when you are present to give feedback can all support student success.

Technology Tip   

Enter all assignments and due dates in your Carmen course to increase transparency. When assignments are entered in Carmen, they also populate the Calendar, Syllabus, and Grades areas, so students can easily track their upcoming work. Carmen also allows you to develop rubrics for every assignment in your course.


Include frequent low-stakes assignments and assessments throughout your course to provide the opportunities for practice and feedback that are essential to learning. Consider a variety of formative and summative assessment types so students can demonstrate learning in multiple ways. Use Bloom’s Taxonomy to determine—and communicate—the specific skills you want to assess.

Remember that effective assessments of student learning are:

  • Aligned to course learning outcomes
  • Authentic, or resembling real-world tasks
  • Achievable and realistic
  • Scaffolded so students can develop knowledge and skills over time
  • Transparent in purpose, tasks, and criteria for evaluation

Additional resources:

  • Collaborative learning techniques: A handbook for college faculty (book)
  • Cheating Lessons (book)
  • Minds online: Teaching effectively with technology (book)
  • Assessment: The Silent Killer of Learning (video)
  • TILT Higher Ed Examples and Resources (website)
  • Writing to Learn: Critical Thinking Activities for Any Classroom (guide)

Ambrose, S. A., Bridges, M. W., Lovett, M. C., DiPietro, M., & Norman, M. K. (2010). How learning works: Seven research-based principles for smart teaching. John Wiley & Sons.

Barnett, S. M., & Ceci, S. J. (2002). When and where do we apply what we learn? A taxonomy for far transfer. Psychological Bulletin, 128(4), 612–637. doi.org/10.1037/0033-2909.128.4.612

Bransford, J. D., & Schwartz, D. L. (1999). Rethinking transfer: A simple proposal with multiple implications. Review of Research in Education, 24, 61–100. doi.org/10.3102/0091732X024001061

Fink, L. D. (2013). Creating significant learning experiences: An integrated approach to designing college courses. John Wiley & Sons.

Karpicke, J. D., & Roediger, H. L., III. (2008). The critical importance of retrieval for learning. Science, 319, 966–968. doi.org/10.1126/science.1152408

Rayner, K., Schotter, E. R., Masson, M. E., Potter, M. C., & Treiman, R. (2016). So much to read, so little time: How do we read, and can speed reading help? Psychological Science in the Public Interest, 17(1), 4–34. doi.org/10.1177/1529100615623267

Salden, R. J. C. M., Paas, F., & van Merriënboer, J. J. G. (2006). A comparison of approaches to learning task selection in the training of complex cognitive skills. Computers in Human Behavior, 22(3), 321–333. doi.org/10.1016/j.chb.2004.06.003

Walvoord, B. E., & Anderson, V. J. (2010). Effective grading: A tool for learning and assessment in college. John Wiley & Sons.

Wang, X., Su, Y., Cheung, S., Wong, E., & Kwong, T. (2013). An exploration of Biggs' constructive alignment in course design and its impact on students' learning approaches. Assessment & Evaluation in Higher Education, 38(4), 477–491.

Winkelmes, M., Bernacki, M., Butler, J., Zochowski, M., Golanics, J., & Weavil, K. H. (2016). A teaching intervention that increases underserved college students' success. Peer Review, 18(1/2), 31–36. https://www.aacu.org/peerreview/2016/winter-spring/Winkelmes


Assessing Student Writing

What Does It Mean to Assess Writing?


Assessment is the gathering of information about student learning. It can be used for formative purposes (to adjust instruction) or for summative purposes (to render a judgment about the quality of student work). It is a key instructional activity, and teachers engage in it every day in a variety of informal and formal ways.

Assessment of student writing is a process. Assessment of student writing and performance in the class should occur at many different stages throughout the course and can take many different forms. At various points in the assessment process, teachers usually take on different roles, such as motivator, collaborator, critic, and evaluator (see Brooke Horvath for more on these roles), and give different types of response.

One of the major purposes of writing assessment is to provide feedback to students. We know that feedback is crucial to writing development. The 2004 Harvard Study of Writing concluded, "Feedback emerged as the hero and the anti-hero of our study--powerful enough to convince students that they could or couldn't do the work in a given field, to push them toward or away from selecting their majors, and contributed, more than any other single factor, to students' sense of academic belonging or alienation" (http://www.fas.harvard.edu/~expos/index.cgi?section=study).

Source: Horvath, Brooke K. "The Components of Written Response: A Practical Synthesis of Current Views." Rhetoric Review 2 (January 1985): 136-56. Rpt. in Corbett, Edward P. J., Nancy Myers, and Gary Tate, eds. The Writing Teacher's Sourcebook. 4th ed. New York: Oxford Univ. Press, 2000.

Suggestions for Assessing Student Writing

Be sure to know what you want students to be able to do and why. Good assessment practices start with a pedagogically sound assignment description and learning goals for the writing task at hand. The type of feedback given on any task should depend on the learning goals you have for students and the purpose of the assignment. Think early on about why you want students to complete a given writing project (see the guide to writing strong assignments). What do you want them to know? What do you want students to be able to do? Why? How will you know when they have reached these goals? What methods of assessment will allow you to see that students have accomplished these goals (portfolio assessment, multiple drafts, rubrics, etc.)? What will distinguish the strongest projects from the weakest?

Begin designing writing assignments with your learning goals and methods of assessment in mind.

Plan and implement activities that support students in meeting the learning goals. How will you support students in meeting these goals? What writing activities will you allow time for?

Begin giving feedback early in the writing process. Give multiple types of feedback early in the writing process: for example, talk with students about their ideas, write responses on drafts, and have students respond to their peers' drafts in process. These are all ways for students to receive feedback while they are still in the process of revising.

Structure opportunities for feedback at various points in the writing process. Students should also have opportunities to receive feedback on their writing at various stages in the writing process. This does not mean that teachers need to respond to every draft of a writing project. Structuring time for peer response and group workshops can be a very effective way for students to receive feedback from other writers in the class and for them to begin to learn to revise and edit their own writing.

Be open with students about your expectations and the purposes of the assignments. Students respond better to writing projects when they understand why the project is important and what they can learn through the process of completing it. Be explicit about your goals for them as writers and why those goals are important to their learning. Additionally, talk with students about methods of assessment. Some teachers have students help collaboratively design rubrics for the grading of writing. Whatever methods of assessment you choose, be sure to let students in on how they will be evaluated.

Do not burden students with excessive feedback. Our instinct as teachers, especially when we are really interested in students' writing, is to offer as many comments and suggestions as we can. However, providing too much feedback can leave students feeling daunted and uncertain where to start in terms of revision. Try to choose one or two things to focus on when responding to a draft, and offer students concrete possibilities or strategies for revision.

Allow students to maintain control over their paper. Instead of acting as an editor, suggest options or open-ended alternatives the student can choose for their revision path. Help students learn to assess their own writing and the advice they get about it.

Purposes of Responding

We provide different kinds of response at different moments. But we might also fall into a kind of "default" mode, working to get through the papers without making a conscious choice about how and why we want to respond to a given assignment. So it might be helpful to identify the two major kinds of response we provide:

  • Formative Response: response that aims primarily to help students develop their writing. It might focus on confidence-building, or on engaging the student in a conversation about her ideas or writing choices so as to help her see herself as a successful and promising writer. It might focus on helping the student develop a particular writing project from one draft to the next, or it might suggest some general skills she could focus on developing over the course of a semester.
  • Evaluative Response: response that focuses on evaluating how well a student has done. It might be related to a grade, and might be used primarily on a final product or portfolio. It tends to emphasize whether or not the student has met the criteria for the specific assignment and to explain that judgment.

We respond to many kinds of writing and at different stages in the process, from reading responses, to exercises, to generation or brainstorming, to drafts, to source critiques, to final drafts. It is also helpful to think of the various forms that response can take.

  • Conferencing: verbal, interactive response. This might happen in class or during scheduled sessions in our offices. Conferencing can be more dynamic than written response: we can ask students questions about their work, modeling a process of reflecting on and revising a piece of writing. Students can also ask us questions and receive immediate feedback. Conferencing is typically a formative response mechanism, but it can also serve usefully to convey evaluative response.
  • Written Comments on Drafts
  • Local: when we focus on "local" moments in a piece of writing, we are calling attention to specifics in the paper: perhaps certain patterns of grammar, or moments where the essay takes a sudden, unexpected turn. We might also use local comments to emphasize a powerful turn of phrase or a compelling and well-developed moment in a piece. Local commenting tends to happen in the margins, calling attention to specific moments in the piece by highlighting them and explaining their significance. We tend to use local commenting more often on drafts and when doing formative response.
  • Global: when we focus more on the overall piece of writing and less on specific moments in and of themselves. Global comments tend to come at the end of a piece, in narrative-form response. We might use these to step back and tell the writer what we learned overall, or to comment on a piece's general organizational structure or focus. We tend to use these for evaluative response and often, deliberately or not, as a means of justifying the grade we assigned.
  • Rubrics: charts or grids on which we identify the central requirements or goals of a specific project and then evaluate whether or not, and how effectively, students met those criteria. These can be written with students as a means of helping them see and articulate the goals of a given project.

Rubrics are tools teachers and students use to evaluate and classify writing, whether individual pieces or portfolios. They identify and articulate what is being evaluated in the writing, and offer "descriptors" to classify writing into certain categories (1-5, for instance, or A-F). Narrative rubrics and chart rubrics are the two most common forms. Here is an example of each, using the same classification descriptors:

Example: Narrative Rubric for Inquiring into Family & Community History

An "A" project clearly and compellingly demonstrates how the public event influenced the family/community. It shows strong audience awareness, engaging readers throughout. The form and structure are appropriate for the purpose(s) and audience(s) of the piece. The final product is virtually error-free. The piece seamlessly weaves in several other voices, drawn from appropriate archival, secondary, and primary research. Drafts - at least two beyond the initial draft - show extensive, effective revision. Writer's notes and final learning letter demonstrate thoughtful reflection and growing awareness of writer's strengths and challenges.

A "B" project clearly and compellingly demonstrates how the public event influenced the family/community. It shows strong audience awareness, and usually engages readers. The form and structure are appropriate for the audience(s) and purpose(s) of the piece, though the organization may not be tight in a couple of places. The final product includes a few errors, but these do not interfere with readers' comprehension. The piece effectively, if not always seamlessly, weaves in several other voices, drawn from appropriate archival, secondary, and primary research. One area of research may not be as strong as the other two. Drafts - at least two beyond the initial draft - show extensive, effective revision. Writer's notes and final learning letter demonstrate thoughtful reflection and growing awareness of writer's strengths and challenges.

A "C" project demonstrates how the public event influenced the family/community. It shows audience awareness, sometimes engaging readers. The form and structure are appropriate for the audience(s) and purpose(s), but the organization breaks down at times. The piece includes several apparent errors, which at times compromise the clarity of the piece. The piece incorporates other voices, drawn from at least two kinds of research, but in a generally forced or awkward way. There is unevenness in the quality and appropriateness of the research. Drafts - at least one beyond the initial draft - show some evidence of revision. Writer's notes and final learning letter show some reflection and growth in awareness of writer's strengths and challenges.

A "D" project discusses a public event and a family/community, but the connections may not be clear. It shows little audience awareness. The form and structure are poorly chosen or poorly executed. The piece includes many errors, which regularly compromise the comprehensibility of the piece. There is an attempt to incorporate other voices, but this is done awkwardly or is drawn from incomplete or inappropriate research. There is little evidence of revision. Writer's notes and learning letter are missing or show little reflection or growth.

An "F" project is not responsive to the prompt. It shows little or no audience awareness. The purpose is unclear and the form and structure are poorly chosen and poorly executed. The piece includes many errors, compromising the clarity of the piece throughout. There is little or no evidence of research. There is little or no evidence of revision. Writer's notes and learning letter are missing or show no reflection or growth.

Chart Rubric for Community/Family History Inquiry Project

Each criterion below is described across the five grade levels (A, B, C, D, F):

Influence of event: (A) Clearly and compellingly demonstrates influence of event; (B) Clearly and compellingly demonstrates influence of event; (C) Demonstrates influence of event; (D) Discusses event, but connections unclear; (F) Not responsive to prompt.

Audience awareness: (A) Strong audience awareness, engages throughout; (B) Strong audience awareness, usually engages; (C) Audience awareness, sometimes engages; (D) Little audience awareness; (F) Little or no audience awareness.

Form and structure: (A) Appropriate for audience(s) and purpose(s); (B) Appropriate for audience(s) and purpose(s), organization occasionally not tight; (C) Appropriate for audience(s) and purpose(s), organization breaks down at times; (D) Poorly chosen or poorly executed; (F) Poorly chosen and executed.

Errors: (A) Virtually error-free; (B) Few, unobtrusive errors; (C) Several apparent, sometimes obtrusive errors; (D) Many obtrusive errors; (F) Many obtrusive errors.

Research and voices: (A) Seamlessly weaves voices, three kinds of research; (B) Effectively weaves voices, three kinds of research, one may not be as strong; (C) Incorporates other voices, but awkwardly, with at least two kinds of research; (D) Attempts to incorporate voices, but awkwardly, with poor research; (F) Little or no evidence of research.

Revision: (A) Extensive, effective revision (at least two drafts beyond the first); (B) Extensive, effective revision (at least two drafts beyond the first); (C) Some evidence of revision; (D) Little evidence of revision; (F) No evidence of revision.

Reflection: (A) Thoughtful reflection, growing self-awareness; (B) Thoughtful reflection, growing self-awareness; (C) Some evidence of reflection and growth; (D) Little evidence of reflection; (F) Little or no evidence of reflection.

All good rubrics begin (and end) with solid criteria. We always start working on rubrics by generating a list - by ourselves or with students - of what we value for a particular project or portfolio. We generally list far more items than we could use in a single rubric. Then, we narrow this list down to the most important items - between 5 and 7, ideally. We do not usually rank these items in importance, but it is certainly possible to create a hierarchy of criteria on a rubric (usually by listing the most important criteria at the top of the chart or at the beginning of the narrative description).

Once we have our final list of criteria, we begin to imagine how writing would fit into a certain classification category (1-5, A-F, etc.). How would an "A" essay differ from a "B" essay in Organization? How would a "B" story differ from a "C" story in Character Development? The key here is to identify useful descriptors, drawing the line at appropriate places. Sometimes these gradations will be precise: the difference between handing in 80% and 90% of weekly writing, for instance. Other times they will be vague: the difference between "effective revisions" and "mostly effective revisions," for instance. While it is important to be as precise as possible, it is also important to remember that rubric writing (especially in writing classrooms) is more art than science, and it will never (nor should it) be reduced to an algorithm. When we find ourselves getting caught up in minute gradations, we tend to be overlegislating students' writing and losing sight of the purpose of the exercise: to support students' development as writers. At the moment when rubric writing thwarts rather than supports students' writing, we should discontinue the practice. Until then, many students will find rubrics helpful -- and sometimes even motivating.

Essay Exams

Essay exams provide opportunities to evaluate students' reasoning skills, such as the ability to compare and contrast concepts, justify a position on a topic, interpret cases from the perspective of different theories or models, evaluate a claim or assertion with evidence, design an experiment, and exercise other higher-order cognitive skills. They can reveal whether students understand the theory behind course material and how different concepts and theories relate to each other.

+ Advantages and Challenges of essay exams

Advantages:

  • Can be used to measure higher order cognitive skills
  • Questions take relatively little time to write
  • Difficult for respondents to get correct answers by guessing

Challenges:

  • Can be time consuming to administer and to score
  • Can be challenging to identify measurable, reliable criteria for assessing student responses
  • Limited range of content can be sampled during any one testing period
  • Timed exams in general add stress unrelated to a student's mastery of the material

+ Creating an essay exam

  • Limit the use of essay questions to learning aims that require learners to share their thinking processes, connect and analyze information, and communicate their understanding for a specific purpose. 
  • Write each item so that students clearly understand the specific task and what deliverables are required for a complete answer (e.g., a diagram, the amount of evidence, the number of examples).
  • Indicate the relative amount of time and effort students should spend on each essay item, for example: "2-3 sentences should suffice for this question."
  • Consider using several narrowly focused items rather than one broad item.
  • Consider offering students choice among essay questions, while ensuring that all learning aims are assessed.

When designing essay exams, consider the reasoning skills you want to assess in your students. The following table lists different skills to measure with example prompts to guide assessment questions. 

Table from Piontek, 2008
Skills to assess (each paired in Piontek's table with possible question stems):

  • Comparing
  • Relating cause and effect
  • Justifying
  • Summarizing
  • Generalizing
  • Inferring
  • Classifying
  • Creating
  • Applying
  • Analyzing
  • Synthesizing

+ Preparing students for an essay exam

Adapted from Piontek, 2008

Prior to the essay exam

  • Administer a formative assessment that asks students to do a brief write on a question similar to one you will use on an exam and provide them with feedback on their responses.
  • Provide students with examples of essay responses that do and do not meet your criteria and standards. 
  • Provide students with the learning aims they will be responsible for mastering to help them focus their preparation appropriately.
  • Have students apply the scoring rubric to sample essay responses and provide them with feedback on their work.

Resource video : 2-minute video description of a formative assessment that helps prepare students for an essay exam. 

+ Administering an essay exam

  • Provide adequate time for students to take the assessment. A strategy some instructors use is to time themselves answering the exam questions completely and then multiply that time by 3-4.
  • Endeavor to create a distraction-free environment.
  • Review the suggestions for informal accommodations for multilingual learners, which may be helpful in setting up an essay exam for all learners.

+ Grading an essay exam

To ensure essays are graded fairly and without bias:

  • Outline what constitutes an acceptable answer (criteria for knowledge and skills).
  • Select an appropriate scoring method based on the criteria.
  • Clarify the role of writing mechanics and other factors independent of the learning aims being measured.
  • Share the criteria and scoring approach with students ahead of time.
  • Use a systematic process for scoring each essay item. For instance, score all responses to a single question in one sitting.
  • Anonymize student work (if possible) to ensure fairer and more objective feedback. For example, students could use their student ID number in place of their name.

+ References & Resources

  • For more information on setting criteria, preparing students, and grading essay exams, read: Boye, A. (2019). Writing Better Essay Exams, IDEA Paper #76.
  • For more detailed descriptions of how to develop and score essay exams, read: Piontek, M. E. (2008). Best Practices for Designing and Grading Exams, CRLT Occasional Paper #24.

Web resources

  • Designing Effective Writing Assignments  (Teaching with Writing Program - UMNTC ) 
  • Writing Assignment Checklist (Teaching with Writing Program - UMNTC)
  • Designing and Using Rubrics (Center for Writing - UMTC)
Example of an Assessment Topic

Below is an example of an assessment reading and question in the style you can expect for the BWA. 

  • Read the passage and the essay topic that follows. Respond to the topic by writing an essay that is controlled by a central idea and is developed by discussing specific examples.
  • You will have two hours to read the passage and complete your essay. You may print out the passage, make notes, or highlight parts of the passage. Plan your essay before you start writing. Allow time to reread and proofread your essay to make any revisions or corrections.
  • Your essay will be evaluated on the basis of your ability to develop the central idea, to express yourself clearly, and to use the conventions of written English. The topic has no "correct" response.

Sample Reading Passage

Introductory Note

Daniel J. Levitin is a professor of psychology and neuroscience at McGill University, where he directs the Laboratory for Musical Perception, Cognition, and Expertise. The following passage is adapted from This Is Your Brain on Music by Daniel J. Levitin, copyright © 2006 by Daniel J. Levitin. Used by permission of Dutton, a division of Penguin Group (USA) Inc.

Expertise Dissected

How do people become expert musicians? And why is it that of the millions of people who take music lessons as children, relatively few continue to play music as adults? When they find out what I do for a living, many people tell me that they love music, but that their music lessons "didn't take." I think they're being too hard on themselves. Although many people say that music lessons didn't take, cognitive neuroscientists have found otherwise in their laboratories. Even a small exposure to music lessons as a child creates neural circuits for music processing that are more efficient than those of people who lack training. Music lessons teach us to listen better, and they accelerate our ability to discern structure and form in music, making it easier for us to tell what music we like and what we don't like.

But what about those classes of people that we all acknowledge are true musical experts--the Alfred Brendels, Sarah Changs, Wynton Marsalises, and Tori Amoses? How did they get what most of us don't have, an extraordinary facility to play and perform?

The scientific study of expertise has been a major topic within cognitive science for the past thirty years, and musical expertise has tended to be studied within the context of general expertise. In almost all cases, musical expertise has been defined as technical achievement--mastery of an instrument or of compositional skills. The late Michael Howe, and his collaborators Jane Davidson and John Sloboda, launched an international debate when they asked whether the popular notion of "talent" is scientifically defensible. They assumed the following alternatives: either high levels of musical achievement are based on innate brain structures (what people refer to as talent) or they are simply the result of training and practice. They define talent as something (1) that originates in genetic structures and (2) that is identifiable at an early stage by trained people who can recognize it even before exceptional levels of performance have been acquired.

It is evident that some children acquire skills more rapidly than others: the ages of onset for walking, talking, and toilet training vary widely from one child to another, even within the same household. There may be genetic factors at work, but it is difficult to separate genetic factors from factors with a presumably environmental component, such as motivation, personality, and family dynamics. Similar factors can influence musical development and can mask the contributions of genetics to musical ability. Brain studies, so far, haven't been of much use in sorting out the issue because it has been difficult to separate cause from effect. For example, studies of violin players by Thomas Elbert have shown that the region of the brain responsible for moving the left hand--the hand that requires the most precision in violin playing--increases in size as a result of practice. We do not know yet if the propensity for increase preexists in the genetics of some people and not others.

The strongest evidence for the talent position is that some people simply acquire musical skills more rapidly than others. The evidence against the talent position--or, rather, in favor of the view that practice makes perfect--comes from research on how much training experts and high achievers actually do. Like experts in mathematics, chess, or sports, experts in music require lengthy periods of instruction, and they practice the most, sometimes twice as much as those who weren't judged as good.

In one study, students were secretly divided into two groups (not revealed to the students so as not to bias them) based on teachers' perceptions of their talent. Several years later, the students who achieved the highest performance ratings were those who had practiced the most, irrespective of which "talent" group they had been assigned to previously. This suggests that practice is the cause of achievement, not merely something correlated with it. It further suggests that talent is a label that we're using in a circular fashion: when we say that someone is talented, we think we mean that they have some innate predisposition to excel, but in the end, we only apply the term retrospectively, after they have made significant achievements.

Anders Ericsson at Florida State University and his colleagues approach the topic of musical expertise as a general problem in cognitive psychology involving how humans become experts in general. In other words, he takes as a starting assumption that there are certain issues involved in becoming an expert at anything, and that we can learn about musical expertise by studying expert writers, chess players, athletes, artists, and mathematicians, in addition to musicians. The emerging picture from studies of high achievers in many fields is that ten thousand hours of practice are required to achieve the level of mastery associated with being a world-class expert--in anything. In study after study--of composers, basketball players, fiction writers, ice skaters, concert pianists, chess players, master criminals, and what have you--this number comes up again and again. Ten thousand hours is equivalent to roughly three hours a day, or twenty hours a week, of practice over ten years. Of course, this doesn't address why some people don't seem to get anywhere when they practice, and why some people get more out of their practice sessions than others. But no one has yet found a case in which true world-class expertise was accomplished in less time. It seems that it takes the brain this long to assimilate all that it needs to know to achieve true mastery.

[Copyright© 2014 by the University of California. All rights reserved. Produced for the University of California.]

Essay Topic

According to Levitin, what roles do talent and practice play in enabling people to reach outstanding achievements in any field? What do you think of his views? Write an essay responding to these two questions. To develop your own position, be sure to discuss your own specific examples. Those examples can be drawn from anything you've read as well as from your own observation and experience.

PSYC 7010: LEARNING AND ASSESSMENT SPECIFIC LEARNING OBJECTIVES


The PowerPoint presentations can be viewed directly using Internet Explorer, or can be downloaded by right-clicking and viewed using PowerPoint or the PowerPoint 97 Viewer. Exams will all be essays; guidelines for writing an essay are provided at the end of the objectives.

The following textbooks have student study guides with practice tests that can help you organize your knowledge:

  • Eggen & Kauchak, Educational psychology: Windows on classrooms (5th ed.)
  • Slavin, Educational psychology (7th ed.)

1. Define educational psychology and discuss how it can help teachers and administrators to carry out their respective roles. Include in your discussion how the use of the scientific method and research have impacted the development of a knowledge base for educational psychology. [ PPT Presentation: Introduction ] [ PPT Presentation: Research ]

2. Compare and contrast the following terms: a. learning, maturation, and development; b. aptitude and achievement; c. instruction, assessment, and evaluation; d. reliability and validity; e. traditional assessment and performance assessment; f. formative assessment and summative assessment; g. norm-referenced evaluation and criterion-referenced evaluation. Why are assessment and evaluation of aptitude and achievement such important issues in today's schools? Make some recommendations for how assessment and evaluation should be done at the grade level and subject you teach. [Dietel et al., 1991]

3. Draw and discuss a model of the teaching-learning process. Name and define each of the categories of variables in your model and identify some of the research that has been used to build the model. (Be certain to identify the source of your model.) [ PPT Presentation #1 ] [ PPT Presentation #2 ] [ PPT Presentation #3 ] [McIlrath & Huitt, 1995; Huitt, 1999]

4. Define and discuss academic learning time (ALT) and explain how it might be improved. (Be specific about whether the proposed changes relate to the school or to the classroom; if to the classroom, whether they relate to instruction or management.) How does ALT relate to John Carroll's model of school learning? To which theoretical approach to learning is the concept of ALT most closely linked? [PPT Presentation]

5. Provide an overview of the systems model of human development presented in class, describing how the behavioral, cognitive, humanistic and learning/development theories address different factors in this model. [ PPT Presentation - Basics ] [ PPT Presentation - Context ]

6. Discuss at least 5 important trends that are presently influencing or are likely to influence education during the next 20 years. Include in your discussion the impact the transition to the information age is having on the required knowledge, attitudes, and skills required for successful adult living. Make recommendations on how professional educators should respond to changing conditions. [ PPT Presentation ] [ Connections video ][ PPT in Spanish ]

7. Name and discuss the essential foundations and competencies needed to work effectively in the information age as developed by the (U.S. Department of Labor) Secretary's Commission on Achieving Necessary Skills (SCANS). Discuss Dr. Huitt's critique of the SCANS report in terms of important attitudes, knowledge, and skills for being successful in the information age. How do these compare to the concept "Becoming a Brilliant Star"? What evidence would you use to persuade students, parents, educators, and community members that these are indeed important? How is this view similar to or different from Dr. Clarken's model "Becoming Our True Selves"? [See #4 above; Huitt, 1997; Huitt, 2004] [ Brilliant Star ] [ Clarken, 2004 ]

PROCESS VARIABLES AND LEARNING THEORIES

8. Define learning and describe the major focus and assumptions of the behavioral, cognitive, constructivistic, humanistic, and social cognitive theories of learning. Describe some implications of classroom and school practice that can be derived from these theories. [PPT Presentation] 

9. Compare and contrast three major behavioral theories of learning, giving examples of how each of these can be used in the teaching-learning process. [ PPT Presentation ]

10. Define operant learning and give original examples of four different methods for altering behavior using this theory. Discuss how this theory can be applied to the teaching-learning process, including how the Premack principle can be used to determine reinforcers. Additionally, name and define each of the schedules of reinforcement, and give an example of each kind as it might be used in the classroom. [ PPT Presentation ]

11. Define and discuss the major viewpoints and theories related to cognition and memory in the systems model of human behavior. [ PPT Presentation -- Overview ] [ PPT Presentation--Constructivism ][ PPT-Constructivism in Spanish ]

12. Define and differentiate the stage, levels-of-processing, parallel distributed processing, and connectionist models of information processing. Draw and discuss the information-processing model of memory and give an example of how it works. Discuss the kinds of stimuli likely to arouse the orienting response and describe how short-term memory and long-term memory operate. Discuss some principles for getting information into both types of memory. How might these principles be implemented to improve instruction? [ PPT Presentation -- Information Processing ] [ PPT Presentation -- Stage Model ] [ PPT Presentation --Using the Theory ] [ PPT Presentation - Terms ]

13. Describe each of the six levels of Bloom's taxonomy of the cognitive domain, providing an original example of actions students might take to demonstrate competency at each level. [ PPT Presentation--Domains ]

14. Describe intellectual development according to Piaget, including a discussion of both the process and the stages of development. Note behavioral characteristics of each stage, describing how assimilation and accommodation are exemplified for each stage of development. Describe specific actions that teachers can take to incorporate Piaget's theory into the classroom. [183-217] [ PPT Presentation-Process ] [ PPT Presentation-Stages ]

15. Describe cognitive development according to Jerome Bruner and Lev Vygotsky. Compare Piaget's theory to Vygotsky's sociohistorical theory of cognitive development. How do these relate to other cognitive theories and to constructivism? What does Bruner add to this discussion? [PPT Presentation]

16. Define critical thinking and discuss why it is an important topic to be addressed by today's educators. How is critical thinking similar to and different from creativity? [Hummel & Huitt, 1994] [ PPT Presentation ]

17. Define metacognition and describe five ways to help students increase their metacognitive skills. [PPT Presentation]

18. What is the SQ3R/SQ4R/PQ4R method of study? What is the relationship of study habits and attitudes to achievement? What can you as a teacher do to improve these in students? [PPT Presentation]

Classroom Processes

19. Discuss J. B. Carroll's model of school learning. Relate Carroll's model to Bloom's model of mastery learning and to the concept of Academic Learning Time. [ PPT Presentation ]

20. Describe why planning is an important classroom activity. Name and define the major steps often used by educators in the planning process. Provide an original example of how you might use this process. Write an instructional objective for six different topics according to the standards set forth by either Mager or Gronlund. [ PPT Presentation--Planning ] [ PPT Presentation--Objectives ]

Instruction

21. Name and discuss four different categories of models of instruction, relating each category to a theoretical approach to learning and to a component of mind or behavior. [Huitt, Monetti & Hummel, in press] [ PPT Presentation--Overview ] [ PPT Presentation--Direct Instruction ]

22. Describe how the cognitive approach has impacted the direct instruction model. How can mastery learning combined with direct instruction utilize both a behavioral and a cognitive approach? [PPT Presentation]

23. Name and describe at least three general categories of classroom management activities a teacher might use if implementing a behavioral approach to classroom management, giving original examples of each. State which of these principles would be especially important during the first week of the school year, and why. How does this conceptualization relate to the three aspects of classroom management: obedience, responsibility, and solidarity? [Randolph & Evertson, 1994] [ PPT Presentation--Overview ]

24. Using the research on the first-week management behavior of effective classroom teachers, state what you would do during your first week as a new teacher and why you would do it. Discuss the difference between focusing on increasing on-task behavior and decreasing off-task behavior (give specific, original examples, not just generalities). [ PPT Presentation ]

25. Name and discuss the major viewpoints and theories related to affect and emotion in the systems model of human behavior. [PPT Presentation]

26. Describe each of the five levels of Krathwohl's affective domain, providing an original example of actions students might take to demonstrate competency at each level. [PPT Presentation]

27. Name and define five values you believe are especially important for students in the 21st century. Support your proposal with research, theory, and statements about the requirements for being successful in an information-age economy. How would you recommend educators go about teaching those values? How is values education related to character education? [ PPT Presentation ]

28. Name and describe Erikson's theory of psychosocial development. Note behaviors associated with each stage and the implications of the theory for classroom practice. Evaluate the theory--that is, what evidence exists for its validation, or what evidence would lead you to reject it? [ PPT Presentation ]

29. Describe how optimism, enthusiasm, and empathy might influence the teaching/learning process.  [PPT Presentation]

30. Name and discuss the major principles and objectives of humanistic education. Describe what a teacher might do in order to implement these principles. Summarize the findings from the meta-analyses examining the outcomes of open education discussed in your text. Include findings regarding both achievement and affective outcomes. [PPT Presentation]

Conation, Volition, and Self-Regulation

31. Name and discuss the major viewpoints and theories related to conation, volition, and self-regulation in the systems model of human behavior. [PPT Presentation]
32. Define the terms self-concept and self-esteem and discuss how these might influence learning. [PPT Presentation]

33. Name and discuss the stages in Maslow's hierarchy of needs. How does this theory relate to achieving excellence in the nine areas of life presented in the "Becoming a Brilliant Star" exercise discussed in class? [PPT Presentation]

34. Discuss social learning theory and the social cognition theory of learning. Define conation, describe how it works and how it might develop. How does goal-setting impact conation and learning? How does conation relate to self-regulation and self-control? What can educators do to help students develop conation? [ PPT Presentation-Social Learning and Social Cognition ] [PPT Presentation-Conation]

35. Describe some implications of social learning theory and social cognition theory for instructional practice. How can cooperative learning be incorporated into a direct instruction format? What would we have to do to encourage the development of self-regulation? How does this theory relate to the research on the impact of teacher expectations and efficacy on student performance? What can teachers do to maximize the positive effects of teacher efficacy? [Ashton, 1984] [PPT Presentation]

36. Define learning and compare and contrast the factors that behavioral, cognitive, humanistic, and social cognitive theorists believe influence the learning process. Mention ways in which the theories are alike and ways in which they are different, and how each can be used by educators to improve student learning. [PPT Presentation]

37. Name and discuss at least 5 principles of learning that most learning theorists agree on, regardless of their theoretical orientation. Give specific examples of how these principles could be used in the classroom. [PPT Presentation]

38. Discuss how your view of humankind's spiritual nature might influence your interpretation of human growth and development literature as well as the teaching/learning process. [ PPT Presentation ]

39. Describe the structure and functioning of the brain and other components of humankind's biological nature that might influence human growth and development as well as the teaching/learning process. [PPT Presentation] [ PPT Presentation - Physical Development ]

40. Name and define the major aspects of the systems model of human behavior presented in class. Be certain to distinguish between internal and external influences on development. Explain why context is such an important aspect of human behavior at this point in history. How can such a model help educators in their professional roles? [see #3 above; summarize #6, #16, #21, #24, #25] [PPT Presentation]

41. Compare and contrast: 1. reliability and validity; 2. formative and summative evaluation; and 3. criterion-referenced and norm-referenced tests.

Discuss how and when these concepts would be important in classroom assessment, evaluation, and grading practices relative to the important variables discussed in this course. [Dietel et al., 1991; Popham, 1999][PPT Presentation]

42. Compare and contrast the behavioral, cognitive, social cognitive, and humanistic learning theories and describe important assessment issues highlighted by each theory. Describe some assessment procedures that can address these issues. [PPT Presentation]

a. Completion and Short-Answer Items; b. Essay Items; c. Multiple-Choice, Matching and True/False Items.

  • Ashton, P. (1984). Teacher efficacy: A motivational paradigm for effective teacher education. Journal of Teacher Education, 35(5), 28-32.
  • Clarken, R. (2004, October 30). The process of becoming our true selves. Presentation at the Forum for Integrated Education and Educational Reform sponsored by the Council for Global Integrative Education, Santa Cruz, CA. Retrieved November 2004, from http://mediasite.nmu.edu/MediasiteLive30/LiveViewer/NoPopupRedirector.aspx?peid=0a145154-c21e-487b-a109-86e1498f5006&shouldResize=False
  • Dietel, R., Herman, J., & Knuth, R. (1991). What does research say about assessment? Oak Brook, IL: North Central Regional Educational Laboratory (NCREL).
  • Huitt, W. (1995, November 6). Success in the information age: A paradigm shift. Workshop presentation at the Georgia Independent School Association, Atlanta, GA. Available online at http://chiron.valdosta.edu/whuitt/col/context/infoage.html
  • Huitt, W. (1997, April 18). The SCANS report revisited. Paper delivered at the Fifth Annual Gulf South Business and Vocational Education Conference, Valdosta State University, Valdosta, GA. Available online at http://chiron.valdosta.edu/whuitt/col/student/scanspap.html
  • Huitt, W. (1999, April 20). Implementing effective school achievement reform: Four principles. Paper presented at the School Counseling Summit, Valdosta State University, Valdosta, GA. Available online at http://chiron.valdosta.edu/whuitt/files/school_reform.html
  • Huitt, W. (2006, April 26). Becoming a Brilliant Star: A model of formative holistic education. Paper presented at the International Networking for Educational Transformation (iNet) Conference, Augusta, GA. Available at http://chiron.valdosta.edu/whuitt/brilstar.html [ PowerPoint ] [ mp3 ]

  • Huitt, W. (2006, April 25). Educational accountability in an era of global decentralization. Paper presented at the International Networking for Educational Transformation (iNet) Conference, Augusta, GA. Available at http://chiron.valdosta.edu/whuitt/papers/edaccount.doc [ PowerPoint ] [ mp3-Part1 ] [ mp3-Part2 ]
  • Huitt, W., Monetti, D., & Hummel, J. (in press). Designing direct instruction. In C. Reigeluth & A. Carr-Chellman (Eds.), Instructional-design theories and models: Volume III, Building a common knowledge base. Mahwah, NJ: Lawrence Erlbaum Associates. Available online at http://chiron.valdosta.edu/whuitt/papers/designing_direct_instruction.doc
  • Hummel, J., & Huitt, W. (1994). What you measure is what you get. GaASCD Newsletter: The Reporter, 10-11.

  • McIlrath, D., & Huitt, W. (1995, December). The teaching-learning process: A discussion of models. Educational Psychology Interactive . Valdosta, GA: Valdosta State University.
  • Popham, W. J. (1999). Why standardized tests don't measure educational quality. Educational Leadership, 56(6), 8-15.
  • Randolph, C., & Evertson, C. (1994). Images of management for learner-centered classrooms. Action in Teacher Education , 55-64.
  • Use complete sentences
  • Use proper punctuation
  • Use proper spelling
  • Introduction -- What is the issue to be addressed? Why is this issue important? What will be included in your answer, and how will your answer be organized?
  • Body -- Present information in a clear, concise, and logical manner
  • Conclusion -- Summarize your main points and relate the information in the body to the original proposition (why is this issue important?)

Some examples of good essay writing are provided by the Educational Testing Service, developer of the GRE-Writing Test.

There are a number of common errors in student writing that you should avoid. If you are not comfortable with writing, you might want to write some sample essays and have them checked by someone in the student writing center located in West Hall, Room 204.

Last updated: January 2005


Essay on Learning Assessment

Students are often asked to write an essay on Learning Assessment in their schools and colleges. And if you’re also looking for the same, we have created 100-word, 250-word, and 500-word essays on the topic.

Let’s take a look…

100 Words Essay on Learning Assessment

What is Learning Assessment?

Learning assessment is a way teachers check what students understand. It’s like a snapshot of a student’s knowledge at a certain time. Assessments can be tests, quizzes, or even class participation.

Types of Assessments

There are different kinds of assessments. Some happen during learning, called formative assessments. Others occur at the end, known as summative assessments. Projects and presentations can also be used to assess learning.

Why Assessments Matter

Assessments help teachers know if students are learning what they should. They show which areas students are good at and which need more work. This helps teachers decide what to teach next.

Feedback from Assessments

After assessments, students get feedback. This means teachers tell students how they did and how they can improve. Good feedback helps students learn better and feel confident.

250 Words Essay on Learning Assessment

Learning assessment is like a teacher using a map to find out how far a student has traveled on their education journey. It’s a way to see what a student knows and can do after a lesson or a series of lessons. Think of it as a progress report that helps teachers and students understand more about the learning process.

There are two main types of assessments: tests and projects. Tests are usually a set of questions that you answer to show what you remember and understand. Projects, on the other hand, are like big tasks where you create something, like a model or a report, to show your skills and knowledge.

Assessments are important because they give feedback. Feedback is like getting directions on what to do next. It tells you what you’re good at and what you need to work on. This helps you learn better and helps teachers know how to help you.

How Assessments Help Students

When you get your test or project back, you can see your mistakes and learn from them. This is how you grow and get better at different subjects. Assessments also help you get ready for the next class or the next grade by showing you what you need to practice more.

In conclusion, learning assessment is a key part of school. It’s not just about getting grades. It’s about understanding your strengths and areas to improve. This way, both you and your teacher can work together to make sure you’re learning and growing every day.

500 Words Essay on Learning Assessment

Types of Learning Assessments

There are two main types of assessments: formative and summative. Formative assessments are like practice runs. They happen while students are still learning, and they help teachers see where students might need more help. Examples are quick quizzes or in-class activities. Summative assessments are like the final race. They measure what students have learned after a period of teaching. These are usually big tests or final projects at the end of a unit or term.

Why are Assessments Important?

Assessments are important because they help students and teachers in many ways. For students, they show what they have learned and what they still need to work on. For teachers, assessments give information to plan their lessons better and help each student improve. They also let parents know how their children are doing in school.

How to Make Assessments Fair and Useful

Challenges with Assessments

Sometimes, tests can make students feel nervous, and they might not do as well as they could. Also, some students might be good at taking tests but not as good at other important things like working in groups or being creative. It’s important for schools to use different kinds of assessments to get the full picture of what a student can do.

Technology and Assessment

Technology is changing how we do assessments. Now, students can take tests on computers, and teachers can use apps to check homework quickly. This can make learning more interesting and give teachers more time to help students instead of grading papers.

The Future of Assessments

In conclusion, learning assessment is a tool that helps students grow and succeed. It’s not just about giving grades but about understanding and improving the learning journey. By making assessments fair, clear, and varied, we can make sure they are helpful for everyone involved.

That’s it! I hope the essay helped you.


Happy studying!



How to Nurture a Sense of Belonging for Students With Disabilities

Prioritizing the inclusion of students with disabilities into all aspects of the school community ensures a welcoming learning environment.


School is a place where everyone should feel that they belong. However, for students with disabilities, this has not always been the case: the education system has a long history of exclusion and segregation when it comes to these students. It wasn’t that long ago that there was no expectation that children with disabilities should or could attend public schools. The Individuals with Disabilities Education Act changed that by mandating that children with disabilities are entitled to a free appropriate public education alongside their peers without disabilities, in the general education setting or least restrictive environment, to the greatest extent possible. This is often easier said than done.

Moving From Exclusion Toward Inclusion

Children with disabilities have been brought into the public school setting amid their same-age peers without disabilities. However, the lives of students with and without disabilities still rarely intersect. In the absence of shared activities, strong social connections are unlikely to form. Integration falls short of fostering true belonging for students with disabilities within their school communities.

While situations have greatly improved, it has been a long journey from exclusion to segregation to integration to inclusion. The journey won’t be complete until we all embrace the next step, a sense of belonging. Belonging comes when each person in the school community feels valued and accepted by their peers and teachers. It is when everyone strives to create connections among students that reciprocal relationships can form and all parties can feel like true members of their school.

Simple Ways to Promote a Sense of Belonging

So how can this be accomplished? As Cheryl M. Jorgensen, Michael McSheehan, and Rae M. Sonnenmeir have written, school community members must go beyond simply allowing students with disabilities to be present in all school activities. They need to take an active role in promoting an atmosphere of belonging. Some of these efforts can easily be integrated into daily classroom routines.

Consider the following six options to support a sense of belonging:

  • During class discussions, regularly ask students with and without disabilities to share stories of when they felt welcomed by others. Students can learn from each other’s experiences.
  • Engineer occasions for students with and without disabilities to collaborate on projects and assignments that offer both independent and group accountability so that students can learn to value every group member.
  • Consider how peers can provide natural support to one another. Often adults are assigned to help and support children with disabilities. This can be marginalizing and exclusive. Encourage peers to support their classmates as friends and colleagues, not as helpers.
  • Combine “ universal supports ” (those that benefit everyone) and “individualized supports” (those that an individual student might need) to make it more viable to meet the educational needs of all the students, such as visual schedules, timers, and flexible seating options.
  • Integrate student choice, goal setting, and preferences into lessons and other class activities every day, such as offering flexible seating during independent work or options for self-assessment like rubrics and checklists for class and homework assignments.
  • Make sure that students with disabilities are considered for schoolwide recognition, awards, and accolades available to any student.

When students with and without disabilities have plenty of well-supported opportunities to spend time together within and beyond the classroom, many preconceived notions or misconceptions about people with disabilities can be turned around simply through the experience of sharing space. Sometimes, though, it may take a bit more planning and collaboration. 

Know Your Students and Build Community 

Researcher Eric W. Carter advocates for expanding dimensions of belonging for students with disabilities . One way this can be done is by having teachers work together to create student profiles for all learners that emphasize student strengths. This ensures that everyone knows the positive qualities of students with (and without) disabilities. Ask parents, other teachers, and other students about students’ interests, preferences, desires, likes, dislikes, abilities, and talents. 

Teachers can then use this information to group students for projects, assignments, or other social activities. For instance, rather than randomly grouping students, assign them to groups by a common like or dislike, favorite food, or least favorite school subject. This allows students to connect socially before taking on the assigned task. People tend to collaborate more effectively when they share a connection.

Design the School Environment to Meet Students’ Needs

According to research, creating a true culture of belonging needs to extend beyond the classroom and permeate the entire school environment . Schoolwide efforts might include conducting a walk-through of your school buildings and surrounding areas to identify any physical or environmental barriers that could prevent students with disabilities from accessing the location and the people within it.

Pay attention to how people at your school talk about students with disabilities. Do they emphasize the disability labels over the students? (For example, do they use phrases like “IEP [individualized education program] students” or “special ed students”?) Do they equate disability with deficit? Do their words and actions communicate acceptance and belonging, or do they tend more toward exclusion, discomfort, or intolerance?

Gently and respectfully interrupt these connotations each time you encounter them. Educate people voicing them, rather than scolding. When you hear “IEP students,” respond with “students with disabilities.” If you hear someone describing what a student cannot do, remind them of all the things the student can do.

It’s also helpful to plan schoolwide events and activities aligned with national awareness days and months: Developmental Disabilities (March), Down Syndrome (March 21 and October), Cerebral Palsy (March), Autism (April), Disability Pride Month (July), the UN International Day of Persons With Disabilities (December 3), and Inclusive Schools (December). 

With our long history of exclusion and segregation of students with disabilities in education, shifting to a sense of belonging may feel like a big leap. We have come a long way, but we have a bit further to go.


Center for Excellence in Teaching and Learning


Moodle Response Templates Improve Student Responses and Speed Up Grading

Consider this scenario: you add open-ended questions (called “Essay” questions) to Moodle quizzes to add variety in question types. When you read student responses, however, you find they are problematic. Students missed answering certain parts of a question or wrote their responses in a difficult-to-comprehend format, leading to point deductions and lost time grading the questions.

These problems can be minimized by specifying a Response Template for essay questions in Moodle quizzes. I use a table format with cells for different parts of the response. Cells ensure that students provide clear and complete responses. A set format for responses streamlines grading.

Essay Questions in Moodle Quizzes

It is important to understand the nature of Essay questions in Moodle quizzes before discussing response templates. Importantly, essay questions are not just for essays. They are simply open-ended questions without a specific length or format for responses (unlike Short Answer questions, which have a strict format). Essay questions can be used to elicit responses of widely varying lengths. Responses can be anywhere from one word to multiple paragraphs depending on the nature of the question.

Although essay questions offer considerable flexibility, I have encountered difficulties when using them for questions with multiple parts requiring a longer answer. Students may fail to provide all of the requested information, mix up different parts of the question, or write one long, undifferentiated paragraph that is difficult to comprehend.

Enter the response template!

Using Moodle Response Templates 

How to add a response template in an essay question.

Response templates are easy to add into Moodle essay questions. As you create an essay question in a Moodle quiz, click on Response Template in the menu of options for the question and provide the template you have created that you want students to use for their responses.

I suggest using tables with cells that students fill in with their responses. Create the tables in Word or Google Docs and copy them into the Response Template cell when drafting the essay question. When students view the essay question in the quiz, the response template will appear in the response cell for them to fill in.

Pro Tip! Students need to see the entire response template when they view the essay question in the quiz. Under Response Options, select an Input Box size (5 lines to 40 lines) that will show the entire response template.

If you need help with response templates in Moodle Quizzes, reach out to e-LIS with a help request or schedule a one-on-one appointment with an e-LIS Instructional Designer.

Creating Effective Response Templates

Response templates for essay questions should: 1. enable students to provide clear, complete, and coherent answers to open-ended questions and 2. specify a unified format for student answers that is efficient for the instructor to read. Follow these tips for creating effective response templates:

  • Basic set-up – I use response templates for essay questions that have two or more parts. After drafting this type of essay question, I review how many parts the question has and in what order. For clarity, I create a response template that mirrors this information exactly, using the same key phrases in the response template as in the question and placing them in the same order as in the question.
  • Examples – Let’s say an essay question has the following three parts: state the definition of codeswitching, describe an example of codeswitching provided by the author, and describe your own example of codeswitching. In the response template, I might say definition, author’s example, and your example, in that order. I also include a descriptive header in the response template, such as Understanding Codeswitching. Within each sub-question, I specify how long each part of the response should be so that students provide the level of detail I am looking for.

Response templates can also be used for solving a problem in a series of steps that need to be shown in a particular order. For example, in a phonology contrastive distribution problem, I might ask students to identify the sounds being compared, list the minimal pairs in the data (if any), state the distribution of the sounds, etc. Each step of the problem represents one item in the response template.

  • An efficient format – In my experience, the best format for response templates is a two-column table with a row of cells for each sub-question. (A header is also needed to link the response table with the question.) Numbered prompts go in the cells of the left-hand column (e.g., definition, author’s example, your example or sounds, minimal pairs (if any), distribution). Student answers are written in the corresponding cells in the right-hand column. This format clearly shows whether the student answered all parts of the question and organizes student responses into chunks that are easy for the instructor to read and understand.
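Since Moodle's Response Template field accepts HTML, the two-column table described above can also be generated programmatically rather than copied from Word or Google Docs. The following is a minimal sketch; the helper name, header, and prompts are illustrative, not part of Moodle's API:

```python
# Sketch: build a two-column HTML table to paste into Moodle's
# Response Template field. Header and prompts here are illustrative.
def build_response_template(header, prompts):
    # One row per sub-question: numbered prompt on the left,
    # an empty cell on the right for the student's answer.
    rows = "".join(
        f"<tr><td><strong>{i}. {p}</strong></td><td><br><br></td></tr>"
        for i, p in enumerate(prompts, start=1)
    )
    return (
        f"<p><strong>{header}</strong></p>"
        f"<table border='1'><tbody>{rows}</tbody></table>"
    )

template = build_response_template(
    "Understanding Codeswitching",
    ["Definition", "Author's example", "Your example"],
)
print(template)
```

The printed HTML can be pasted into the Response Template editor (via its HTML source view) when drafting the essay question.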

Conclusion 

Response templates are an invaluable tool for both students and instructors. They are helpful for encouraging students to organize their thinking for open-ended (Essay) questions in Moodle quizzes, eliciting complete and coherent student responses, and speeding up grading. Use them whenever an open-ended question requires multiple parts that need to be structured in a particular way.

Related Teaching Tips

Essays Your Students Want to Write proposes strategies for structuring assignments with questions that engage students in critical thinking and reflection. Stepping in as a Student talks about establishing your teaching presence by taking the role of a student and posting some of the same written work students are doing. “Two Buckets” Assessment Activities discusses the use of a bonus question to ascertain additional information a student might have known on an exam. If the exam is administered in Moodle, a response template could be used for the student’s response, facilitating efficient review by the instructor.


About the Author

Helena Riha, Ph.D. teaches Linguistics and International Studies. She has taught over 3,500 students in 17 different courses. Helena won the OU Online Teaching Excellence Award and the Excellence in Teaching Award. This is her eighteenth teaching tip. Outside of class, Helena maintains her streak in Wordle.

Helena Riha is the current guest editor for the Grizz Tips for Teaching Effectiveness series on the CETL Teaching Blog at Oakland University.

Others may share and adapt under Creative Commons License CC BY-NC.


  • Study Protocol
  • Open access
  • Published: 26 August 2024

Learning effect of online versus onsite education in health and medical scholarship – protocol for a cluster randomized trial

  • Rie Raffing 1 ,
  • Lars Konge 2 &
  • Hanne Tønnesen 1  

BMC Medical Education volume  24 , Article number:  927 ( 2024 ) Cite this article


The disruption of health and medical education by the COVID-19 pandemic made educators question the effect of the online setting on students’ learning, motivation, self-efficacy and preference. In light of the health care staff shortage, scalable online education seemed relevant. Reviews on the effect of online medical education called for high-quality RCTs, which are increasingly relevant with rapid technological development and the widespread adoption of online learning in universities. The objective of this trial is to compare standardized and feasible outcomes of an online and an onsite setting of a research course regarding the efficacy for PhD students within health and medical sciences: primarily on learning of research methodology, and secondly on preference, motivation and self-efficacy in the short term and academic achievements in the long term. Based on the authors’ experience of conducting courses during the pandemic, the hypothesis is that outcomes in the student-preferred onsite setting differ from those in the online setting.

Cluster randomized trial with two parallel groups. Two PhD research training courses at the University of Copenhagen are randomized to an online (Zoom) or onsite (The Parker Institute, Denmark) setting. Enrolled students are invited to participate in the study. The primary outcome is short-term learning. Secondary outcomes are short-term preference, motivation and self-efficacy, and long-term academic achievements. Standardized, reproducible and feasible outcomes will be measured by tailor-made multiple choice questionnaires, an evaluation survey, the frequently used Intrinsic Motivation Inventory, the Single Item Self-Efficacy Question, and Google Scholar publication data. The sample size is calculated at 20 clusters, and courses are randomized by a computer random number generator. Statistical analyses will be performed blinded by an external statistical expert.

The primary outcome and significant secondary outcomes will be compared and contrasted with relevant literature. Limitations include the geographical setting; biases include lack of blinding, and strengths are robust assessment methods within a well-established conceptual framework. Generalizability to PhD education in other disciplines is high. The results of this study will have implications both for students and educators involved in research training courses in health and medical education and for the patients who ultimately benefit from this training.

Trial registration

Retrospectively registered at ClinicalTrials.gov: NCT05736627. SPIRIT guidelines are followed.


Medical education was utterly disrupted for two years by the COVID-19 pandemic. In the midst of rearranging courses and adapting to online platforms we, with lecturers and course managers around the globe, wondered what the conversion to an online setting did to students’ learning, motivation and self-efficacy [ 1 , 2 , 3 ]. What the long-term consequences would be [ 4 ] and whether scalable online medical education should play a greater role in the future [ 5 ] seemed relevant and appealing questions at a time when health care professionals are in demand. Our experience of delivering research training during the pandemic was that although PhD students were grateful for courses being available, they found it difficult to concentrate due to the long screen hours. We sensed that most students preferred an onsite setting and perceived online courses as a temporary and inferior necessity. The question is whether this impacted their learning.

Since the internet came into common use in medical education, systematic reviews have sought to determine whether there is a difference in learning effect when students are taught online compared to onsite. Although authors conclude that online learning may be equivalent in effect to onsite learning, they agree that studies are heterogeneous and small [ 6 , 7 ], with low quality of evidence [ 8 , 9 ]. They therefore call for more robust and adequately powered high-quality RCTs to confirm their findings and suggest that students’ preferences in online learning should be investigated [ 7 , 8 , 9 ].

This uncovers two knowledge gaps: I) High-quality RCTs on online versus onsite learning in health and medical education and II) Studies on students’ preferences in online learning.

Recently, solid RCTs have been performed on web-based theoretical learning of research methods among health professionals [ 10 , 11 ]. However, these studies concern asynchronous courses among medical or master’s students with short-term outcomes.

This uncovers three additional knowledge gaps: III) Studies on synchronous online learning IV) among PhD students of health and medical education V) with long term measurement of outcomes.

The rapid technological development, including artificial intelligence (AI), and the widespread adoption and application of online learning forced by the pandemic have made online learning well established. It offers high-resolution live synchronous settings, available on a variety of platforms with integrated AI, options for interaction with and among students, chat and breakout rooms, and external digital tools for teachers [ 12 , 13 , 14 ]. Thus, investigating online learning today may be quite different from doing so before the pandemic. On the one hand, it could seem plausible that this technological development would make a difference in favour of online learning that could not be found in previous reviews of the evidence. On the other hand, the personal face-to-face interaction during onsite learning may still be more beneficial for the learning process; combined with our experience of students finding it difficult to concentrate online during the pandemic, we hypothesize that outcomes of the onsite setting are different from the online setting.

To support a robust study, we design it as a cluster randomized trial. Moreover, we use the well-established and widely used Kirkpatrick’s conceptual framework for evaluating learning as a lens to assess our outcomes [ 15 ]. Thus, to fill the above-mentioned knowledge gaps, the objective of this trial is to compare a synchronous online and an in-person onsite setting of a research course regarding the efficacy for PhD students within the health and medical sciences:

Primarily on theoretical learning of research methodology and

Secondly on

◦ Preference, motivation, self-efficacy on short term

◦ Academic achievements on long term

Trial design

This study protocol covers synchronous online and in-person onsite settings of research courses, testing the efficacy for PhD students. It is a cluster randomized trial with two parallel arms (Fig.  1 ).

Fig. 1: Consort flow diagram

The study measures baseline and post intervention. Baseline variables and knowledge scores are obtained at the first day of the course, post intervention measurement is obtained the last day of the course (short term) and monthly for 24 months (long term).

Randomization is stratified, giving a 1:1 allocation ratio of the courses. As the number of participants within each course may differ, the allocation of participants in the study will not be fully 1:1 balanced.

Study setting

The study site is The Parker Institute at Bispebjerg and Frederiksberg Hospital, University of Copenhagen, Denmark. From here the courses are organized and run, online and onsite. The course programmes and time schedules, the learning objectives, the course management, the lecturers, and the delivery are identical in the two settings. The teachers use the same introductory presentations followed by training in breakout groups, feedback and discussions. For the online group, the setting is organized as meetings in the online collaboration tool Zoom® [ 16 ] using the basic available technicalities such as screen sharing, a chat function for comments, breakout rooms, and other basic digital tools if preferred. The online version of the course is synchronous, with live education and interaction. For the onsite group, the setting is the physical classroom at the learning facilities at the Parker Institute. Coffee and tea as well as simple sandwiches and bottles of water, which facilitate sociality, are available in the onsite setting. The participants in the online setting must get their food and drink themselves, but online sociality is made possible by not closing down the online room during the breaks. The research methodology courses included in the study are “Practical Course in Systematic Review Technique in Clinical Research” (see course programme in appendix 1) and “Getting started: Writing your first manuscript for publication” [ 17 ] (see course programme in appendix 2). The two courses both have 12 seats and last either three or three and a half days, resulting in 2.2 and 2.6 ECTS credits, respectively. They are offered by the PhD School of the Faculty of Health and Medical Sciences, University of Copenhagen. Both courses are available and covered by the annual tuition fee for all PhD students enrolled at a Danish university.

Eligibility criteria

Inclusion criteria for participants: All PhD students enrolled in the PhD courses “Practical Course in Systematic Review Technique in Clinical Research” and “Getting started: Writing your first manuscript for publication” at the PhD School of the Faculty of Health and Medical Sciences, University of Copenhagen, Denmark, participate after informed consent.

Exclusion criteria for participants: Declining to participate and withdrawal of informed consent.

Informed consent

The PhD students at the PhD School at the Faculty of Health Sciences, University of Copenhagen participate after informed consent, taken by the daily project leader, allowing evaluation data from the course to be used after pseudo-anonymization in the project. They are informed in a welcome letter approximately three weeks prior to the course and again in the introduction the first course day. They register their consent on the first course day (Appendix 3). Declining to participate in the project does not influence their participation in the course.

Interventions

Online course settings will be compared to onsite course settings. We test whether the onsite setting differs from the online setting. Online learning is increasing, but onsite learning is still the preferred educational setting in a medical context; in this case, onsite learning represents “usual care”. The online course setting consists of meetings in Zoom using the available technicalities such as chat and breakout rooms. The onsite setting is the learning facilities at the Parker Institute, Bispebjerg and Frederiksberg Hospital, The Capital Region, University of Copenhagen, Denmark.

The course settings are not expected to harm the participants, but should a request be made to discontinue the course or change setting this will be met, and the participant taken out of the study. Course participants are allowed to take part in relevant concomitant courses or other interventions during the trial.

Strategies to improve adherence to interventions

Course participants are motivated to complete the course irrespective of the setting because it bears ECTS points for their PhD education and adds to the mandatory number of ECTS points. Thus, we expect adherence to be the same in both groups. However, we monitor their presence in the course and allocate time during class for testing the short-term outcomes (motivation, self-efficacy, preference and learning). We encourage and, if necessary, repeatedly remind them to register with Google Scholar for our testing of the long-term outcome (academic achievement).

Outcomes are related to the Kirkpatrick model for evaluating learning (Fig.  2 ), which divides outcomes into four levels: Reaction, which includes for example motivation, self-efficacy and preferences; Learning, which includes knowledge acquisition; Behaviour, the practical application of skills back on the job (not included in our outcomes); and Results, the impact for end-users, which includes for example academic achievements in the form of scientific articles [ 18 , 19 , 20 ].

Fig. 2: The Kirkpatrick model

Primary outcome

The primary outcome is short term learning (Kirkpatrick level 2).

Learning is assessed by a Multiple-Choice Questionnaire (MCQ) developed prior to the RCT specifically for this setting (Appendix 4). First, the lecturers of the two courses were contacted and asked to provide five multiple choice questions, each presented as a stem with three answer options: one correct answer and two distractors. The questions should be related to core elements of their teaching under the heading of research training. The questions were set up to test the cognition of the students at the levels of "Knows" or "Knows how" according to Miller's Pyramid of Competence, not their behaviour [ 21 ]. Six of the course lecturers responded, and from this material all the questions covering the curriculum of both courses were selected. The questionnaire was tested on 10 PhD students and within the lecturer group, revised after an item analysis, and language-edited. The final MCQ contains 25 questions. The MCQ is filled in at baseline and repeated at the end of the course. The primary outcome based on the MCQ is the learning score, calculated as the number of correct answers out of 25 after the course. A decrease in MCQ points in the intervention groups denotes a deterioration of learning. The minimum MCQ score is 0 and the maximum is 25, where 19 indicates passing the course.

Furthermore, as a secondary outcome, this measurement will be categorized as a binary outcome (passed/failed), with a pass defined as at least 75% (19/25) correct answers.

The learning score will be computed at group and individual level. The continuous outcome will be compared by the Mann–Whitney test between the learning scores of the online and onsite groups. The binomial outcome of learning (passed/failed) will be analysed by Fisher’s exact test on an intention-to-treat basis between the online and onsite groups. The results will be presented as median and range and as mean and standard deviation, for possible future use in meta-analyses.
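As a sketch of this comparison, the rank-based test can be run on illustrative (entirely made-up) MCQ scores. The normal-approximation Mann–Whitney below stands in for the exact test a statistics package would use, and the pass threshold of 19/25 is taken from the protocol:

```python
# Sketch of the short-term learning comparison on made-up scores.
from statistics import NormalDist

online = [17, 19, 21, 18, 20, 22, 16, 19]   # hypothetical scores out of 25
onsite = [20, 22, 19, 23, 21, 24, 18, 22]

def mann_whitney_u(a, b):
    """U statistic and two-sided p (normal approximation, no tie correction)."""
    n1, n2 = len(a), len(b)
    pooled = sorted(a + b)
    # Assign midranks so that tied values share the same rank.
    ranks, i = {}, 0
    while i < len(pooled):
        j = i
        while j < len(pooled) and pooled[j] == pooled[i]:
            j += 1
        ranks[pooled[i]] = (i + 1 + j) / 2
        i = j
    r1 = sum(ranks[x] for x in a)
    u = r1 - n1 * (n1 + 1) / 2
    mean = n1 * n2 / 2
    sd = (n1 * n2 * (n1 + n2 + 1) / 12) ** 0.5
    z = (u - mean) / sd
    return u, 2 * (1 - NormalDist().cdf(abs(z)))

u, p = mann_whitney_u(online, onsite)

# Binary outcome: pass defined as at least 19/25 correct.
passed_online = sum(s >= 19 for s in online)
passed_onsite = sum(s >= 19 for s in onsite)
print(f"U={u}, p={p:.3f}; passed online {passed_online}/8, onsite {passed_onsite}/8")
```

The resulting 2x2 pass/fail table would then feed Fisher's exact test in the actual analysis.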

Secondary outcomes

Motivation assessment post course: Motivation level is measured by the Intrinsic Motivation Inventory (IMI) Scale [ 22 ] (Appendix 5). The IMI items were randomized by random.org on the 4th of August 2022. The scale contains 12 items to be assessed by the students on a 7-point Likert scale, where 1 is “Not at all true”, 4 is “Somewhat true” and 7 is “Very true”. The motivation score will be computed at group and individual level and then tested by the Mann–Whitney test comparing the online and onsite groups.

Self-efficacy assessment post course: Self-efficacy level is measured by a single-item measure developed and validated by Williams and Smith [ 23 ] (Appendix 6). It is assessed by the students on a scale from 1–10, where 1 is “Strongly disagree” and 10 is “Strongly agree”. The self-efficacy score will be computed at group and individual level and tested by a Mann–Whitney test comparing the online and onsite groups.

Preference assessment post course: Preference is measured as part of the general course satisfaction evaluation with the question “If you had the option to choose, which form would you prefer this course to have?” with the options “onsite form” and “online form”.

Academic achievement assessment is based on 24 monthly measurements post course of the number of publications, number of citations, h-index, and i10-index. These data are collected through the Google Scholar Profiles [ 24 ] of the students, as this database covers most scientific journals. Associations between the onsite/online setting and long-term academic achievement will be examined with Kaplan–Meier curves and the log-rank test at a significance level of 0.05.

Participant timeline

Enrolment for the course at the Faculty of Health Sciences, University of Copenhagen, Denmark, becomes available when it is published in the course catalogue. In the course description the course location is “To be announced”. Approximately 3–4 weeks before the course begins, the participant list is finalized, and students receive a welcome letter containing course details, including their allocation to either the online or onsite setting. On the first day of the course, oral information is provided, and participants provide informed consent, baseline variables, and baseline knowledge scores.

On the last day of scheduled activities, the following scores are collected: knowledge, motivation, self-efficacy, setting preference, and academic achievement. To track students' long-term academic achievements, follow-ups are conducted monthly for a period of 24 months, with assessments occurring within one week of the last course day (Table  1 ).

Sample size

The power calculation is based on the main outcome, short-term theoretical learning. For the sample size determination, we considered the 12 available seats for participants in each course. To achieve statistical power, we aimed for 8 clusters in each of the online and onsite arms (16 clusters in total) to detect an increase in learning outcome of 20% (a learning outcome increase of 5 points). We assumed an intraclass correlation coefficient of 0.02, a standard deviation of 10, a power of 80%, and a two-sided alpha level of 5%. The allocation ratio was set at 1, implying an equal number of subjects in the online and onsite groups.

Considering a dropout of up to 2 students per course, equivalent to 17%, we determined that a total of 112 participants would be needed. This calculation factored in 10 clusters of 12 participants per study arm, which we deemed sufficient to assess any changes in learning outcome.

The sample size was estimated using the function n4means from the R package CRTSize [ 25 ].
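The stated inputs can be sanity-checked with the textbook two-sample formula for means, inflated by the cluster design effect. This back-of-envelope sketch uses only those inputs; the protocol itself relies on n4means from CRTSize, which may apply further adjustments and hence arrive at a slightly larger number of clusters:

```python
# Back-of-envelope check of the sample-size inputs from the protocol
# (delta=5, sd=10, alpha=0.05, power=0.80, ICC=0.02, cluster size 12).
from math import ceil
from statistics import NormalDist

delta, sd = 5, 10          # detectable difference and standard deviation
alpha, power = 0.05, 0.80
icc, cluster_size = 0.02, 12

z = NormalDist().inv_cdf
# Unadjusted per-arm n for a two-sample comparison of means.
n_per_arm = 2 * (z(1 - alpha / 2) + z(power)) ** 2 * (sd / delta) ** 2
# Inflate by the design effect for clustering.
design_effect = 1 + (cluster_size - 1) * icc
n_adjusted = n_per_arm * design_effect
clusters_per_arm = ceil(n_adjusted / cluster_size)
print(f"{n_per_arm:.0f} -> {n_adjusted:.0f} per arm, {clusters_per_arm} clusters per arm")
```

This crude calculation lands at roughly 63 unadjusted and 77 adjusted participants per arm, i.e. about 7 clusters of 12 per arm, in the same neighbourhood as the protocol's 8 clusters per arm before the dropout allowance.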

Recruitment

Participants are PhD students enrolled in 10 courses of “Practical Course in Systematic Review Technique in Clinical Research” and 10 courses of “Getting started: Writing your first manuscript for publication” at the PhD School of the Faculty of Health Sciences, University of Copenhagen, Denmark.

Assignment of interventions: allocation

Randomization will be performed at course level. The courses are randomized by a computer random number generator [ 26 ]. To obtain a balanced randomization per year, 2 sets with 2 unique random integers each, taken from the range 1–4, are requested.

The setting is not included in the course catalogue of the PhD School, and thus allocation to online or onsite is concealed until 3–4 weeks before course commencement, when a welcome letter with course information, including allocation to the online or onsite setting, is distributed to the students. The lecturers are also informed of the course setting at this time point. If students withdraw from the course after being informed of the setting, a letter is sent to them enquiring about the reason for withdrawal, and the reason is recorded (Appendix 7).

The allocation sequence is generated by a computer random number generator (random.org). The participants and the lecturers sign up for the course without knowing the course setting (online or onsite) until 3–4 weeks before the course.
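The yearly 2:2 split over four course runs can be sketched as follows. The trial draws its integers from random.org; Python's pseudorandom generator here is only a stand-in, and the seed exists purely to make the sketch reproducible:

```python
# Sketch of the course-level allocation: four course runs per year are
# split 2:2 between settings by drawing unique random integers from 1-4.
import random

def allocate_courses(year_seed=None):
    rng = random.Random(year_seed)            # stand-in for random.org
    courses = [1, 2, 3, 4]
    online = sorted(rng.sample(courses, 2))   # first drawn set -> online
    onsite = sorted(c for c in courses if c not in online)
    return {"online": online, "onsite": onsite}

allocation = allocate_courses(year_seed=2024)
print(allocation)
```

Drawing two unique integers for one arm and assigning the remainder to the other guarantees the 1:1 allocation ratio at course level described above.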

Assignment of interventions: blinding

Due to the nature of the study, it is not possible to blind trial participants or lecturers. The outcomes are reported by the participants directly in an online form; the outcome assessor is thus blinded, but the individual participants are not. The data collection for the long-term follow-up regarding academic achievements is conducted without blinding. However, the external researcher analysing the data will be blinded.

Data collection and management

Data will be collected by the project leader (Table  1 ). Baseline variables and post course knowledge, motivation, and self-efficacy are self-reported through questionnaires in SurveyXact® [ 27 ]. Academic achievements are collected through Google Scholar profiles of the participants.

Given that we are using participant assessments and evaluations for research purposes, all data collection – except for monthly follow-up of academic achievements after the course – takes place either in the immediate beginning or ending of the course and therefore we expect participant retention to be high.

Data will be downloaded from SurveyXact and stored in a locked and logged drive on a computer belonging to the Capital Region of Denmark. Only the project leader has access to the data.

The project is conducted in accordance with the Danish Data Protection Agency's guidelines under the European GDPR throughout the trial. Following the end of the trial, data will be stored at the Danish National Data Archive, which fulfils Danish and European guidelines for data protection and management.

Statistical methods

Data are anonymized and blinded before the analyses. Analyses are performed by a researcher not otherwise involved in the inclusion, randomization, data collection or handling. All statistical tests will test the null hypothesis that the two arms of the trial are equal, based on the corresponding estimates. Analysis of the primary outcome, short-term learning, will begin once all data have been collected for all individuals in the last included course. Analyses of long-term academic achievement will begin at the end of follow-up.

Baseline characteristics including both course- and individual level information will be presented. Table 2 presents the available data on baseline.

We will use multivariate analysis to identify the most important predictors (motivation, self-efficacy, sex, educational background, and knowledge) of effect in the short and long term. Results will be presented as risk ratios (RR) with 95% confidence intervals (CI) and will be considered significant if the CI does not include one.
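The significance rule above can be illustrated with a small sketch. The protocol's analyses will run in R; this hypothetical Python fragment (with made-up counts, not trial data) merely shows how a risk ratio and its 95% CI are formed on the log scale:

```python
import math

def risk_ratio_ci(events_a, n_a, events_b, n_b):
    """Risk ratio of arm A vs arm B with a Wald-type 95% CI on the log scale."""
    rr = (events_a / n_a) / (events_b / n_b)
    # Delta-method standard error of log(RR)
    se_log_rr = math.sqrt(1 / events_a - 1 / n_a + 1 / events_b - 1 / n_b)
    z = 1.959963984540054  # 97.5th percentile of the standard normal
    lower = math.exp(math.log(rr) - z * se_log_rr)
    upper = math.exp(math.log(rr) + z * se_log_rr)
    return rr, lower, upper

# Hypothetical example: 30/50 vs 20/50 participants reaching some threshold.
rr, lower, upper = risk_ratio_ci(30, 50, 20, 50)
```

With these toy counts the interval narrowly includes one, which under the stated rule would be read as non-significant.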

All data processing and analyses will be conducted using R statistical software version 4.1.0 (2021-05-18; R Foundation for Statistical Computing, Vienna, Austria).

If possible, all analyses will be performed separately for “Practical Course in Systematic Review Technique in Clinical Research” and for “Getting started: Writing your first manuscript for publication”.

Primary analyses will follow the intention-to-treat approach and include all individuals with valid data, regardless of whether they attended the complete course. Missing data will be handled with multiple imputation [ 28 ].
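Multiple imputation ends with pooling the per-dataset estimates across the imputed datasets using Rubin's rules. A minimal illustrative sketch (in Python with hypothetical numbers, not the protocol's R workflow) of that pooling step is:

```python
import statistics

def pool_rubin(estimates, variances):
    """Pool m point estimates and their within-imputation variances (Rubin's rules)."""
    m = len(estimates)
    q_bar = statistics.fmean(estimates)    # pooled point estimate
    w = statistics.fmean(variances)        # average within-imputation variance
    b = statistics.variance(estimates)     # between-imputation variance
    total_var = w + (1 + 1 / m) * b        # total variance of the pooled estimate
    return q_bar, total_var

# Hypothetical: the same effect estimated on m = 3 imputed datasets.
estimate, variance = pool_rubin([1.0, 1.2, 1.1], [0.04, 0.04, 0.04])
```

The between-imputation term inflates the variance to reflect the uncertainty introduced by the missing data themselves.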

Upon reasonable request, public access will be granted to the protocol, the datasets analysed during the current study, and the statistical code (Table 3).

Oversight, monitoring, and adverse events

This project is coordinated in collaboration between the WHO CC (DEN-62) at the Parker Institute, CAMES, and the PhD School at the Faculty of Health and Medical Sciences, University of Copenhagen. The project leader manages the day-to-day running of the trial. The steering committee, which includes the principal investigators from WHO CC (DEN-62) and CAMES and the project leader, meets approximately three times a year.

Data monitoring is done daily by the project leader and checked by an external independent researcher.

An adverse event is “a harmful and negative outcome that happens when a patient has been provided with medical care” [ 29 ]. Since this trial does not involve patients in medical care, we do not expect adverse events. If participants decline to take part in the course after receiving information about the course setting, the reason for declining is sought; if the reason is the setting, this can be considered an unintended effect. Information on unintended effects of the online setting (the intervention) will be recorded. Participants are encouraged to contact the project leader with any response to the course in general, both during and after the course.

The trial description has been sent to the Scientific Ethical Committee of the Capital Region of Denmark (VEK) (21041907), which assessed that notification was not necessary and that the trial could proceed without VEK permission according to Danish law and regulation of scientific research. The trial is registered with the Danish Data Protection Agency (P-2022-158). Important protocol modifications will be communicated to relevant parties, as well as to VEK, the Joint Regional Information Security, and ClinicalTrials.gov, within as short a timeframe as possible.

Dissemination plans

The results (positive, negative, or inconclusive) will be disseminated in educational, scientific, and clinical fora and in international scientific peer-reviewed journals, and ClinicalTrials.gov will be updated upon completion of the trial. After scientific publication, the results will be disseminated to the public via the press and social media, including the websites of the hospital and other organizations, as well as internationally via the WHO CC (DEN-62) at the Parker Institute and WHO Europe.

All authors will fulfil the ICMJE recommendations for authorship, and RR will be first author of the articles as a part of her PhD dissertation. Contributors who do not fulfil these recommendations will be offered acknowledgement in the article.

Discussion

This cluster randomized trial investigates whether an onsite setting of a research course for PhD students within the health and medical sciences differs from an online setting. The outcomes measured are learning of research methodology (primary) and preference, motivation, and self-efficacy (secondary) in the short term, and academic achievements (secondary) in the long term.

The results of this study will be discussed as follows:

Discussion of primary outcome

The primary outcome will be compared and contrasted with similar studies, including recent RCTs and mixed-methods studies on online and onsite research methodology courses within health and medical education [ 10 , 11 , 30 ] and, for inspiration, outside the field [ 31 , 32 ]: Tokalic finds similar outcomes for online and onsite settings; Martinic finds that a web-based educational intervention improves knowledge; Cheung concludes that the evidence is insufficient to say that the two modes have different learning outcomes; Kofoed finds that an online setting has a negative impact on learning; and Rahimi-Ardabili reports positive self-reported student knowledge. These conflicting results will be discussed in the context of this study's learning outcome. The relevant literature may change as further studies are published.

Discussion of secondary outcomes

Significant secondary outcomes will be compared and contrasted with similar studies.

Limitations, generalizability, bias and strengths

It is a limitation of this study that an onsite curriculum designed for a full day is delivered identically online, as this may favour the onsite course due to screen fatigue [ 33 ]. At the same time, it is a strength that the time schedules are similar in both settings. The offer of coffee, tea, water, and a plain sandwich in the onsite course may better facilitate socializing. Another limitation is that the study is performed in Denmark within a specific educational culture, with institutional policies and resources that might affect the outcome and limit generalization to other geographical settings. However, international students are welcome in the class.

In educational interventions it is generally difficult to blind participants and this inherent limitation also applies to this trial [ 11 ]. Thus, the participants are not blinded to their assigned intervention, and neither are the lecturers in the courses. However, the external statistical expert will be blinded when doing the analyses.

We chose to compare an in-person onsite setting with a synchronous online setting; the results therefore cannot be expected to generalize to asynchronous online settings. Asynchronous delivery has in some cases shown positive results, possibly because students could move back and forth through the modules in the interface without time limits [ 11 ].

We will report on all the outcomes defined prior to conducting the study to avoid selective reporting bias.

It is a strength of the study that it seeks to report outcomes at levels 1, 2, and 4 of the Kirkpatrick conceptual framework, and not solely at level 1. It is also a strength that the study is cluster randomized, which reduces contamination between the two settings, has an adequately powered sample size, and looks for a relevant educational difference of 20% between the online and onsite settings.
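The power reasoning can be sketched as follows. The protocol's actual calculation used the CRTSize R package; this Python fragment is only a simplified stand-in that assumes, purely for illustration, a binary outcome with a 20-percentage-point difference, courses of 20 students, and an intraclass correlation of 0.05 (all hypothetical values):

```python
import math

Z_ALPHA = 1.959963984540054  # two-sided alpha = 0.05
Z_BETA = 0.8416212335729143  # power = 0.80

def n_per_arm(p1: float, p2: float) -> int:
    """Unclustered sample size per arm for comparing two proportions (normal approximation)."""
    p_bar = (p1 + p2) / 2
    numerator = (Z_ALPHA * math.sqrt(2 * p_bar * (1 - p_bar))
                 + Z_BETA * math.sqrt(p1 * (1 - p1) + p2 * (1 - p2))) ** 2
    return math.ceil(numerator / (p1 - p2) ** 2)

def design_effect(cluster_size: int, icc: float) -> float:
    """Variance inflation from randomizing whole courses rather than individuals."""
    return 1 + (cluster_size - 1) * icc

# Hypothetical inputs: 50% vs 70% reaching a learning threshold.
n_unadjusted = n_per_arm(0.5, 0.7)
n_clustered = math.ceil(n_unadjusted * design_effect(20, 0.05))
```

The design effect shows why a cluster randomized trial needs more participants than an individually randomized one for the same detectable difference.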

Perspectives with implications for practice

The results of this study may have implications for students when choosing an educational setting. Learning and preference results have implications for lecturers, course managers, and curriculum developers deciding which setting to plan for in health and medical education. The results may also inspire teaching and training in other disciplines. From a societal perspective, they also have implications because we will know the effect of, and preferences for, online learning in case of a future lockdown.

Future research could investigate academic achievements from online and onsite research training in the long run (Kirkpatrick level 4); the effect of blended learning versus online or onsite delivery (level 2); lecturers’ preferences for online and onsite settings within health and medical education (level 1); and resource use in synchronous and asynchronous online learning.

Trial status

This trial collected pilot data from August to September 2021 and opened for inclusion in January 2022. Completion of recruitment is expected in April 2024 and long-term follow-up in April 2026. Protocol version 1, dated 3 June 2022, with amendments of 30 November 2023.

Availability of data and materials

The project leader will have access to the final trial dataset, which will be available upon reasonable request. An exception is the qualitative raw data, which might contain information leading to personal identification.

Abbreviations

AI: Artificial intelligence

CAMES: Copenhagen Academy for Medical Education and Simulation

CI: Confidence interval

COVID: Coronavirus disease

ECTS: European Credit Transfer and Accumulation System

ICMJE: International Committee of Medical Journal Editors

IMI: Intrinsic Motivation Inventory

MCQ: Multiple-choice questionnaire

MD: Doctor of Medicine

MSc: Master of Science

RCT: Randomized controlled trial

VEK: Scientific Ethical Committee of the Capital Region of Denmark

WHO CC: WHO Collaborating Centre for Evidence-Based Clinical Health Promotion

Samara M, Algdah A, Nassar Y, Zahra SA, Halim M, Barsom RMM. How did online learning impact the academic. J Technol Sci Educ. 2023;13(3):869–85.


Nejadghaderi SA, Khoshgoftar Z, Fazlollahi A, Nasiri MJ. Medical education during the coronavirus disease 2019 pandemic: an umbrella review. Front Med (Lausanne). 2024;11:1358084. https://doi.org/10.3389/fmed.2024.1358084 .

Madi M, Hamzeh H, Abujaber S, Nawasreh ZH. Have we failed them? Online learning self-efficacy of physiotherapy students during COVID-19 pandemic. Physiother Res Int. 2023;5:e1992. https://doi.org/10.1002/pri.1992 .

Torda A. How COVID-19 has pushed us into a medical education revolution. Intern Med J. 2020;50(9):1150–3.

Alhat S. Virtual Classroom: A Future of Education Post-COVID-19. Shanlax Int J Educ. 2020;8(4):101–4.

Cook DA, Levinson AJ, Garside S, Dupras DM, Erwin PJ, Montori VM. Internet-based learning in the health professions: A meta-analysis. JAMA. 2008;300(10):1181–96. https://doi.org/10.1001/jama.300.10.1181 .

Pei L, Wu H. Does online learning work better than offline learning in undergraduate medical education? A systematic review and meta-analysis. Med Educ Online. 2019;24(1):1666538. https://doi.org/10.1080/10872981.2019.1666538 .

Richmond H, Copsey B, Hall AM, Davies D, Lamb SE. A systematic review and meta-analysis of online versus alternative methods for training licensed health care professionals to deliver clinical interventions. BMC Med Educ. 2017;17(1):227. https://doi.org/10.1186/s12909-017-1047-4 .

George PP, Zhabenko O, Kyaw BM, Antoniou P, Posadzki P, Saxena N, Semwal M, Tudor Car L, Zary N, Lockwood C, Car J. Online Digital Education for Postregistration Training of Medical Doctors: Systematic Review by the Digital Health Education Collaboration. J Med Internet Res. 2019;21(2):e13269. https://doi.org/10.2196/13269 .

Tokalić R, Poklepović Peričić T, Marušić A. Similar Outcomes of Web-Based and Face-to-Face Training of the GRADE Approach for the Certainty of Evidence: Randomized Controlled Trial. J Med Internet Res. 2023;25:e43928. https://doi.org/10.2196/43928 .

Krnic Martinic M, Čivljak M, Marušić A, Sapunar D, Poklepović Peričić T, Buljan I, et al. Web-Based Educational Intervention to Improve Knowledge of Systematic Reviews Among Health Science Professionals: Randomized Controlled Trial. J Med Internet Res. 2022;24(8): e37000.

https://www.mentimeter.com/ . Accessed 4 Dec 2023.

https://www.sendsteps.com/en/ . Accessed 4 Dec 2023.

https://da.padlet.com/ . Accessed 4 Dec 2023.

Zackoff MW, Real FJ, Abramson EL, Li STT, Klein MD, Gusic ME. Enhancing Educational Scholarship Through Conceptual Frameworks: A Challenge and Roadmap for Medical Educators. Acad Pediatr. 2019;19(2):135–41. https://doi.org/10.1016/j.acap.2018.08.003 .

https://zoom.us/ . Accessed 20 Aug 2024.

Raffing R, Larsen S, Konge L, Tønnesen H. From Targeted Needs Assessment to Course Ready for Implementation-A Model for Curriculum Development and the Course Results. Int J Environ Res Public Health. 2023;20(3):2529. https://doi.org/10.3390/ijerph20032529 .

https://www.kirkpatrickpartners.com/the-kirkpatrick-model/ . Accessed 12 Dec 2023.

Smidt A, Balandin S, Sigafoos J, Reed VA. The Kirkpatrick model: A useful tool for evaluating training outcomes. J Intellect Dev Disabil. 2009;34(3):266–74.

Campbell K, Taylor V, Douglas S. Effectiveness of online cancer education for nurses and allied health professionals; a systematic review using kirkpatrick evaluation framework. J Cancer Educ. 2019;34(2):339–56.

Miller GE. The assessment of clinical skills/competence/performance. Acad Med. 1990;65(9 Suppl):S63–7.

Ryan RM, Deci EL. Self-Determination Theory and the Facilitation of Intrinsic Motivation, Social Development, and Well-Being. Am Psychol. 2000;55(1):68–78. https://doi.org/10.1037//0003-066X.55.1.68 .

Williams GM, Smith AP. Using single-item measures to examine the relationships between work, personality, and well-being in the workplace. Psychology. 2016;07(06):753–67.

https://scholar.google.com/intl/en/scholar/citations.html . Accessed 4 Dec 2023.

Rotondi MA. CRTSize: sample size estimation functions for cluster randomized trials. R package version 1.0. 2015. Available from: https://cran.r-project.org/package=CRTSize .

Random.org. Available from: https://www.random.org/

https://rambollxact.dk/surveyxact . Accessed 4 Dec 2023.

Sterne JAC, White IR, Carlin JB, Spratt M, Royston P, Kenward MG, et al. Multiple imputation for missing data in epidemiological and clinical research: Potential and pitfalls. BMJ (Online). 2009;339:157–60.


Skelly C, Cassagnol M, Munakomi S. Adverse Events. StatPearls Treasure Island: StatPearls Publishing. 2023. Available from: https://www.ncbi.nlm.nih.gov/books/NBK558963/ .

Rahimi-Ardabili H, Spooner C, Harris MF, Magin P, Tam CWM, Liaw ST, et al. Online training in evidence-based medicine and research methods for GP registrars: a mixed-methods evaluation of engagement and impact. BMC Med Educ. 2021;21(1):1–14. Available from:  https://www.ncbi.nlm.nih.gov/pmc/articles/PMC8439372/pdf/12909_2021_Article_2916.pdf .

Cheung YYH, Lam KF, Zhang H, Kwan CW, Wat KP, Zhang Z, et al. A randomized controlled experiment for comparing face-to-face and online teaching during COVID-19 pandemic. Front Educ. 2023;8. https://doi.org/10.3389/feduc.2023.1160430 .

Kofoed M, Gebhart L, Gilmore D, Moschitto R. Zooming to Class?: Experimental Evidence on College Students' Online Learning During Covid-19. SSRN Electron J. 2021;IZA Discussion Paper No. 14356.

Mutlu Aİ, Yüksel M. Listening effort, fatigue, and streamed voice quality during online university courses. Logop Phoniatr Vocol :1–8. Available from: https://doi.org/10.1080/14015439.2024.2317789


Acknowledgements

We thank the students who made their evaluations available for this trial, and Mie Sylow Liljendahl, MSc (Public Health), for statistical support.

Open access funding provided by Copenhagen University. The Parker Institute, which hosts the WHO CC (DEN-62), receives a core grant from the Oak Foundation (OCAY-18-774-OFIL). The Oak Foundation had no role in the design of the study; in the collection, analysis, and interpretation of the data; or in writing the manuscript.

Author information

Authors and affiliations

WHO Collaborating Centre (DEN-62), Clinical Health Promotion Centre, The Parker Institute, Bispebjerg & Frederiksberg Hospital, University of Copenhagen, Copenhagen, 2400, Denmark

Rie Raffing & Hanne Tønnesen

Copenhagen Academy for Medical Education and Simulation (CAMES), Centre for HR and Education, The Capital Region of Denmark, Copenhagen, 2100, Denmark

Lars Konge

Contributions

RR, LK, and HT made substantial contributions to the conception and design of the work; RR to the acquisition of data; and RR, LK, and HT to the interpretation of data. RR drafted the work, and RR, LK, and HT substantively revised it, approved the submitted version, and agreed to be personally accountable for their own contributions and for ensuring that any questions relating to the accuracy or integrity of the work are adequately investigated, resolved, and documented.

Corresponding author

Correspondence to Rie Raffing.

Ethics declarations

Ethics approval and consent to participate

The Danish National Committee on Health Research Ethics has assessed the study (journal no. 21041907, 21 September 2021) without objections or comments. The study has been approved by the Danish Data Protection Agency (journal no. P-2022-158, 4 May 2022).

All PhD students participate after giving informed consent. They can withdraw from the study at any time without explanation or consequences for their education. They will be offered information about the results at study completion. There are no risks for course participants, as the measurements in the course follow routine procedure and participants are not affected by the follow-up in Google Scholar. However, the 15 minutes spent filling in the forms may be considered an inconvenience.

The project will follow the GDPR and the Joint Regional Information Security Policy. Names and ID numbers are stored on a secure, logged server at the Capital Region of Denmark to avoid the risk of data leaks. All outcomes are part of the routine evaluation at the courses, except the follow-up of academic achievement through publications and related indexes. However, the publications are publicly available per se.

Competing interests

The authors declare no competing interests.

Additional information

Publisher’s note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Supplementary Information

Supplementary material 1.

Supplementary material 2.

Supplementary material 3.

Supplementary material 4.

Supplementary material 5.

Supplementary material 6.

Supplementary material 7.

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/ . The Creative Commons Public Domain Dedication waiver ( http://creativecommons.org/publicdomain/zero/1.0/ ) applies to the data made available in this article, unless otherwise stated in a credit line to the data.


About this article

Cite this article.

Raffing, R., Konge, L. & Tønnesen, H. Learning effect of online versus onsite education in health and medical scholarship – protocol for a cluster randomized trial. BMC Med Educ 24 , 927 (2024). https://doi.org/10.1186/s12909-024-05915-z


Received : 25 March 2024

Accepted : 14 August 2024

Published : 26 August 2024

DOI : https://doi.org/10.1186/s12909-024-05915-z


Keywords

  • Self-efficacy
  • Achievements
  • Health and medical education

BMC Medical Education

ISSN: 1472-6920

essay questions about assessment of learning

Democratic National Convention (DNC) in Chicago

Samantha Putterman, PolitiFact Samantha Putterman, PolitiFact

Leave your feedback

  • Copy URL https://www.pbs.org/newshour/politics/fact-checking-warnings-from-democrats-about-project-2025-and-donald-trump

Fact-checking warnings from Democrats about Project 2025 and Donald Trump

This fact check originally appeared on PolitiFact .

Project 2025 has a starring role in this week’s Democratic National Convention.

And it was front and center on Night 1.

WATCH: Hauling large copy of Project 2025, Michigan state Sen. McMorrow speaks at 2024 DNC

“This is Project 2025,” Michigan state Sen. Mallory McMorrow, D-Royal Oak, said as she laid a hardbound copy of the 900-page document on the lectern. “Over the next four nights, you are going to hear a lot about what is in this 900-page document. Why? Because this is the Republican blueprint for a second Trump term.”

Vice President Kamala Harris, the Democratic presidential nominee, has warned Americans about “Trump’s Project 2025” agenda — even though former President Donald Trump doesn’t claim the conservative presidential transition document.

“Donald Trump wants to take our country backward,” Harris said July 23 in Milwaukee. “He and his extreme Project 2025 agenda will weaken the middle class. Like, we know we got to take this seriously, and can you believe they put that thing in writing?”

Minnesota Gov. Tim Walz, Harris’ running mate, has joined in on the talking point.

“Don’t believe (Trump) when he’s playing dumb about this Project 2025. He knows exactly what it’ll do,” Walz said Aug. 9 in Glendale, Arizona.

Trump’s campaign has worked to build distance from the project, which the Heritage Foundation, a conservative think tank, led with contributions from dozens of conservative groups.

Much of the plan calls for extensive executive-branch overhauls and draws on both long-standing conservative principles, such as tax cuts, and more recent culture war issues. It lays out recommendations for disbanding the Commerce and Education departments, eliminating certain climate protections and consolidating more power to the president.

Project 2025 offers a sweeping vision for a Republican-led executive branch, and some of its policies mirror Trump’s 2024 agenda, But Harris and her presidential campaign have at times gone too far in describing what the project calls for and how closely the plans overlap with Trump’s campaign.

PolitiFact researched Harris’ warnings about how the plan would affect reproductive rights, federal entitlement programs and education, just as we did for President Joe Biden’s Project 2025 rhetoric. Here’s what the project does and doesn’t call for, and how it squares with Trump’s positions.

Are Trump and Project 2025 connected?

To distance himself from Project 2025 amid the Democratic attacks, Trump wrote on Truth Social that he “knows nothing” about it and has “no idea” who is in charge of it. (CNN identified at least 140 former advisers from the Trump administration who have been involved.)

The Heritage Foundation sought contributions from more than 100 conservative organizations for its policy vision for the next Republican presidency, which was published in 2023.

Project 2025 is now winding down some of its policy operations, and director Paul Dans, a former Trump administration official, is stepping down, The Washington Post reported July 30. Trump campaign managers Susie Wiles and Chris LaCivita denounced the document.

WATCH: A look at the Project 2025 plan to reshape government and Trump’s links to its authors

However, Project 2025 contributors include a number of high-ranking officials from Trump’s first administration, including former White House adviser Peter Navarro and former Housing and Urban Development Secretary Ben Carson.

A recently released recording of Russell Vought, a Project 2025 author and the former director of Trump’s Office of Management and Budget, showed Vought saying Trump’s “very supportive of what we do.” He said Trump was only distancing himself because Democrats were making a bogeyman out of the document.

Project 2025 wouldn’t ban abortion outright, but would curtail access

The Harris campaign shared a graphic on X that claimed “Trump’s Project 2025 plan for workers” would “go after birth control and ban abortion nationwide.”

The plan doesn’t call to ban abortion nationwide, though its recommendations could curtail some contraceptives and limit abortion access.

What’s known about Trump’s abortion agenda neither lines up with Harris’ description nor Project 2025’s wish list.

Project 2025 says the Department of Health and Human Services Department should “return to being known as the Department of Life by explicitly rejecting the notion that abortion is health care.”

It recommends that the Food and Drug Administration reverse its 2000 approval of mifepristone, the first pill taken in a two-drug regimen for a medication abortion. Medication is the most common form of abortion in the U.S. — accounting for around 63 percent in 2023.

If mifepristone were to remain approved, Project 2025 recommends new rules, such as cutting its use from 10 weeks into pregnancy to seven. It would have to be provided to patients in person — part of the group’s efforts to limit access to the drug by mail. In June, the U.S. Supreme Court rejected a legal challenge to mifepristone’s FDA approval over procedural grounds.

WATCH: Trump’s plans for health care and reproductive rights if he returns to White House The manual also calls for the Justice Department to enforce the 1873 Comstock Act on mifepristone, which bans the mailing of “obscene” materials. Abortion access supporters fear that a strict interpretation of the law could go further to ban mailing the materials used in procedural abortions, such as surgical instruments and equipment.

The plan proposes withholding federal money from states that don’t report to the Centers for Disease Control and Prevention how many abortions take place within their borders. The plan also would prohibit abortion providers, such as Planned Parenthood, from receiving Medicaid funds. It also calls for the Department of Health and Human Services to ensure that the training of medical professionals, including doctors and nurses, omits abortion training.

The document says some forms of emergency contraception — particularly Ella, a pill that can be taken within five days of unprotected sex to prevent pregnancy — should be excluded from no-cost coverage. The Affordable Care Act requires most private health insurers to cover recommended preventive services, which involves a range of birth control methods, including emergency contraception.

Trump has recently said states should decide abortion regulations and that he wouldn’t block access to contraceptives. Trump said during his June 27 debate with Biden that he wouldn’t ban mifepristone after the Supreme Court “approved” it. But the court rejected the lawsuit based on standing, not the case’s merits. He has not weighed in on the Comstock Act or said whether he supports it being used to block abortion medication, or other kinds of abortions.

Project 2025 doesn’t call for cutting Social Security, but proposes some changes to Medicare

“When you read (Project 2025),” Harris told a crowd July 23 in Wisconsin, “you will see, Donald Trump intends to cut Social Security and Medicare.”

The Project 2025 document does not call for Social Security cuts. None of its 10 references to Social Security addresses plans for cutting the program.

Harris also misleads about Trump’s Social Security views.

In his earlier campaigns and before he was a politician, Trump said about a half-dozen times that he’s open to major overhauls of Social Security, including cuts and privatization. More recently, in a March 2024 CNBC interview, Trump said of entitlement programs such as Social Security, “There’s a lot you can do in terms of entitlements, in terms of cutting.” However, he quickly walked that statement back, and his CNBC comment stands at odds with essentially everything else Trump has said during the 2024 presidential campaign.

Trump’s campaign website says that not “a single penny” should be cut from Social Security. We rated Harris’ claim that Trump intends to cut Social Security Mostly False.

Project 2025 does propose changes to Medicare, including making Medicare Advantage, the private insurance offering in Medicare, the “default” enrollment option. Unlike Original Medicare, Medicare Advantage plans have provider networks and can also require prior authorization, meaning that the plan can approve or deny certain services. Original Medicare plans don’t have prior authorization requirements.

The manual also calls for repealing health policies enacted under Biden, such as the Inflation Reduction Act. The law enabled Medicare to negotiate with drugmakers for the first time in history, and recently resulted in an agreement with drug companies to lower the prices of 10 expensive prescriptions for Medicare enrollees.

Trump, however, has said repeatedly during the 2024 presidential campaign that he will not cut Medicare.

Project 2025 would eliminate the Education Department, which Trump supports

The Harris campaign said Project 2025 would “eliminate the U.S. Department of Education” — and that’s accurate. Project 2025 says federal education policy “should be limited and, ultimately, the federal Department of Education should be eliminated.” The plan scales back the federal government’s role in education policy and devolves the functions that remain to other agencies.

Aside from eliminating the department, the project also proposes scrapping the Biden administration’s Title IX revision, which prohibits discrimination based on sexual orientation and gender identity. It also would let states opt out of federal education programs and calls for passing a federal parents’ bill of rights similar to ones passed in some Republican-led state legislatures.

Republicans, including Trump, have pledged to close the department, which gained its status in 1979 within Democratic President Jimmy Carter’s presidential Cabinet.

In one of his Agenda 47 policy videos, Trump promised to close the department and “to send all education work and needs back to the states.” Eliminating the department would have to go through Congress.

What Project 2025, Trump would do on overtime pay

In the graphic, the Harris campaign says Project 2025 allows “employers to stop paying workers for overtime work.”

The plan doesn’t call for banning overtime wages. It recommends changes to some Occupational Safety and Health Administration, or OSHA, regulations and to overtime rules. Some changes, if enacted, could result in some people losing overtime protections, experts told us.

The document proposes that the Labor Department maintain an overtime threshold “that does not punish businesses in lower-cost regions (e.g., the southeast United States).” This threshold is the amount of money executive, administrative or professional employees need to make for an employer to exempt them from overtime pay under the Fair Labor Standards Act.

In 2019, the Trump’s administration finalized a rule that expanded overtime pay eligibility to most salaried workers earning less than about $35,568, which it said made about 1.3 million more workers eligible for overtime pay. The Trump-era threshold is high enough to cover most line workers in lower-cost regions, Project 2025 said.

The Biden administration raised that threshold to $43,888 beginning July 1, and that will rise to $58,656 on Jan. 1, 2025. That would grant overtime eligibility to about 4 million workers, the Labor Department said.

It’s unclear how many workers Project 2025’s proposal to return to the Trump-era overtime threshold in some parts of the country would affect, but experts said some would presumably lose the right to overtime wages.

Other overtime proposals in Project 2025’s plan include allowing some workers to choose to accumulate paid time off instead of overtime pay, or to work more hours in one week and fewer in the next, rather than receive overtime.

Trump’s past with overtime pay is complicated. In 2016, the Obama administration said it would raise the overtime threshold, extending eligibility to salaried workers earning less than $47,476 a year, about double the $23,660 exemption level set in 2004.

But when a judge blocked the Obama rule, the Trump administration didn’t challenge the court ruling. Instead it set its own overtime threshold, which raised the amount, but by less than the Obama rule would have.

The Education Hub (blog)

https://educationhub.blog.gov.uk/2024/08/20/gcse-results-day-2024-number-grading-system/

GCSE results day 2024: Everything you need to know including the number grading system

Thousands of students across the country will soon be finding out their GCSE results and thinking about the next steps in their education.   

Here we explain everything you need to know about the big day, from when results day is, to the current 9-1 grading scale, to what your options are if your results aren’t what you’re expecting.  

When is GCSE results day 2024?  

GCSE results day will be taking place on Thursday 22 August.

The results will be made available to schools on Wednesday and available to pick up from your school by 8am on Thursday morning.  

Schools will issue their own instructions on how and when to collect your results.   

When did we change to a number grading scale?  

The shift to the numerical grading system was introduced in England in 2017, starting with English language, English literature and maths.

By 2020, all subjects had moved to number grades. This means anyone with GCSE results from 2017 to 2020 will have a combination of both letters and numbers.

The numerical grading system was introduced to signal more challenging GCSEs and to better differentiate between students’ abilities, particularly at the higher grades between A* and C. There used to be only four grades between A* and C; with the numerical scale there are now six.

What do the number grades mean?  

The grades are ranked from 1, the lowest, to 9, the highest.  

The grades don’t exactly translate, but the two grading scales meet at three points as illustrated below.  

[Image: A comparison chart from the UK Department for Education showing the new GCSE grades (9 to 1) alongside the old grades (A* to G). Grade 9 aligns with A*, grades 8 and 7 with A, and so on down to U, which remains unchanged.]

The bottom of grade 7 is aligned with the bottom of grade A, while the bottom of grade 4 is aligned to the bottom of grade C.    

Meanwhile, the bottom of grade 1 is aligned to the bottom of grade G.  
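For readers who want the correspondence in one place, the anchor points above can be sketched as a simple lookup. This is an illustrative mapping only: just the three official anchors (7/A, 4/C, 1/G) and the 9/A* pairing come from the chart, and the intermediate letter equivalents are approximate; the names here are invented for the example.

```python
# Illustrative sketch of the 9-1 to A*-G correspondence described above.
# Only the anchor points are official; intermediate pairings are approximate.
NUMBER_TO_OLD_GRADE = {
    9: "A*",      # grade 9 aligns with the top of the old A*
    8: "A*/A",
    7: "A",       # bottom of grade 7 = bottom of grade A
    6: "B",
    5: "B/C",
    4: "C",       # bottom of grade 4 = bottom of grade C
    3: "D/E",
    2: "E/F",
    1: "F/G",     # bottom of grade 1 = bottom of grade G
}

def old_grade(number_grade: int) -> str:
    """Return the approximate old letter grade; U is unchanged."""
    return NUMBER_TO_OLD_GRADE.get(number_grade, "U")
```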

What to do if your results weren’t what you were expecting

If your results weren’t what you were expecting, firstly don’t panic. You have options.  

First things first, speak to your school or college – they could be flexible on entry requirements if you’ve just missed your grades.   

They’ll also be able to give you the best tailored advice on whether re-sitting while studying for your next qualifications is a possibility.   

If you’re really unhappy with your results you can enter to resit all GCSE subjects in summer 2025. You can also take autumn exams in GCSE English language and maths.  

Speak to your sixth form or college to decide when it’s the best time for you to resit a GCSE exam.  

Look for other courses with different grade requirements     

Entry requirements vary depending on the college and course. Ask your school for advice, and call your college or another one in your area to see if there’s a space on a course you’re interested in.    

Consider an apprenticeship    

Apprenticeships combine a practical training job with study. They’re open to you if you’re 16 or over, living in England, and not in full-time education.

As an apprentice you’ll be a paid employee, have the opportunity to work alongside experienced staff, gain job-specific skills, and get time set aside for training and study related to your role.   

You can find out more about how to apply here.

Talk to a National Careers Service (NCS) adviser    

The National Careers Service is a free resource that can help you with your career planning. Give them a call to discuss potential routes into higher education, further education, or the workplace.

Whatever your results, if you want to find out more about all your education and training options, as well as get practical advice about your exam results, visit the National Careers Service page and Skills for Careers to explore your study and work choices.



COMMENTS

  1. Short answer and essay questions

    It is important to redesign the assessment tasks to authentically assess the intended learning outcomes in a way that is appropriate for this mode of assessment. Replacing questions that simply recall facts with questions that require higher level cognitive skills—for example analysis and explanation of why and how students reached an answer—provides opportunities for reflective questions ...

  2. PDF PREPARING EFFECTIVE ESSAY QUESTIONS

    from a list of possibilities, whereas essay questions require students to compose their own answer. However, requiring students to compose a response is not the only characteristic of an effective essay question. There are assessment items other than essay questions that require students to construct responses (e.g., short answer, fill in the ...

  3. Assessing Student Learning

    Learning assessment is like a magnifying glass we hold up to students' learning to discern whether the teaching and learning process is functioning well or is in need of change. ... For instance, were the test or essay questions confusing, yielding invalid and unreliable assessments of student knowledge? Reflect and revise.

  4. (PDF) Reflective Essay on Assessment

    Reflective Essay on Assessment. Kerwin Anthony Livingstone, PhD. Email: [email protected]. The goal of education is learning, and the vehicle used to accomplish this goal is ...

  5. Student Assessment in Teaching and Learning

    According to Euan S. Henderson, essays make two important contributions to learning and assessment: the development of skills and the cultivation of a learning style. (Henderson, 1980) Essays are a common form of writing assignment in courses and can be either a summative or formative form of assessment depending on how the instructor utilizes ...

  6. Assessing Student Learning: 6 Types of Assessment and How to Use Them

    1. Formative assessment. Formative assessment is a type of assessment that focuses on monitoring student learning during the instructional process. Its primary purpose is to provide ongoing feedback to both teachers and students, helping them identify areas of strength and areas in need of improvement. This type of assessment is typically low ...

  7. Best Practices for Assessing Student Learning

    Varying Assessment Types. Best practices in assessing student learning include using several types of assessment that enable students to show evidence of their learning in various ways as they learn the content and achieve the learning outcomes. Effective assessment types include: Essay; Short-answers; Research paper; Multiple-choice questions

  8. Assessment For Learning Essay

    Teaching, Learning and Assessment Assessment is a central point for teaching and learning as it is used to evaluate and develop "teaching and learning processes and outcomes" (Gordon, Rice & Heincke, 2012). Previously assessment was exclusively summative but has evolved into three areas of assessment; diagnostic, formative and summative ...

  9. Essay

    Essay. Essay assessments ask students to demonstrate a point of view supported by evidence. They allow students to demonstrate what they've learned and build their writing skills. An essay question prompts a written response, which may vary from a few paragraphs to a number of pages. Essay questions are generally open-ended.

  10. Essay on Assessment

    Introduction. Assessment of students is a vital exercise aimed at evaluating their knowledge, talents, thoughts, or beliefs (Harlen, 2007). It involves testing a part of the content taught in class to ascertain the students' learning progress. Assessment should put into consideration students' class work and outside class work.

  11. Constructing tests

    Ask students (individually or in small groups) to develop and answer potential test questions. Collect these and use them to create the questions for the actual test. Develop relevant tests/quiz questions that assess skills and knowledge you explored during lecture or discussion. Share examples of successful answers and provide explanations of ...

  12. Designing Assessments of Student Learning

    As educators, we measure student learning through many means, including assignments, quizzes, and tests. These assessments can be formal or informal, graded or ungraded. But assessment is not simply about awarding points and assigning grades. Learning is a process, not a product, and that process takes place during activities such as recall and ...

  13. Assessing Student Writing

    Assessment is the gathering of information about student learning. It can be used for formative purposes−−to adjust instruction−−or summative purposes: to render a judgment about the quality of student work. It is a key instructional activity, and teachers engage in it every day in a variety of informal and formal ways.

  14. Essay Exams

    Consider offering students choice among essay questions, while ensuring that all learning aims are assessed. When designing essay exams, consider the reasoning skills you want to assess in your students. The following table lists different skills to measure with example prompts to guide assessment questions.

  15. Example of an Assessment Topic

    Below is an example of an assessment reading and question in the style you can expect for the BWA. Directions. Read the passage and the essay topic that follows. Respond to the topic by writing an essay that is controlled by a central idea and is developed by discussing specific examples.

  16. PDF Application essays as an effective tool for assessing instruction in

    Abstract: The assessment of student learning in general education courses is of critical importance in higher education. This study examines the utility of a writing assignment (application essays) in a basic communication course as an effective assessment tool. The authors conducted a content analysis of student

  17. Learning & Assessment: Possible Essay Questions

    The PowerPoint presentations can be viewed directly using Internet Explorer or can be downloaded by clicking the right mouse button and viewed using PowerPoint or the PowerPoint 97 Viewer. Exams will all be essays; guidelines for writing an essay are provided at the end of the objectives. The following textbooks have student study guides with practice tests that can help you organize your ...

  18. Evidence And Assessment Of Student Learning

    Evidence and Assessment of Student Learning. How will you know whether students are making progress toward your learning goals for each of the following types of performance: exceeds expectations, meets expectations, and below expectations? (Be sure to include both content and language, assessed either ...

  19. PDF The Role of Essay Tests Assessment in e-Learning: A Japanese Case Study

    the form of multiple-choice questions, without any essay type of learning assessment. Major reasons for employing multiple-choice tasks in e-learning include ease of implementation and ease of managing learner's responses. To address this limitation in online assessment of learning, this study investigated an automatic assessment system

  20. Essay on Learning Assessment

    Learning assessment is like a teacher using a map to find out how far a student has traveled on their education journey. It's a way to see what a student knows and can do after a lesson or a series of lessons. Think of it as a progress report that helps teachers and students understand more about the learning process.

  21. Essay On Assessment In Education

    Essay On Assessment In Education. 767 Words4 Pages. Above I have discussed teaching and learning at great lengths but not much has been said about assessment. Assessment is an integral part of any education system. It is how one determines whether a learner has learnt or understood what has been taught, it is a means of quantifying a teachers ...

  22. Modern Assessment Techniques In eLearning

    Make sure the questions in your question bank represent a range of cultures, ethnicities, and standpoints. Employ a variety of assessment techniques: use a wide range of evaluation methods to accommodate uneven learning preferences and skill levels. Peer reviews, interactive simulations, and project-based exams are a few examples of this.

  23. Educationally authentic assessment: reframing authentic assessment in

    Nearly half of the 116 statements under the theme of choice emphasised choice of topic. In most of the examples, students described having relatively free rein in selecting topics that addressed module learning outcomes. One arts/humanities student wrote, 'I find essays quite engaging when I'm given the option to design my own question'.

  24. How to Nurture a Sense of Belonging for Students With ...

    Integrate student choice, goal setting, and preferences into lessons and other class activities every day, such as offering flexible seating during independent work or options for self-assessment like rubrics and checklists for class and homework assignments.

  25. sld

    Student Learning Development supports Trinity students in reaching their academic potential. We offer a range of services including individual appointments, workshops and skills events. These services are designed to develop your skills in areas such as academic writing, self and time management, and exam and assessment skills for PG and UG students.

  26. Moodle Response Templates Improve Student ...

    Essay Questions in Moodle Quizzes. It is important to understand the nature of Essay questions in Moodle quizzes before discussing response templates. Importantly, essay questions are not just for essays. They are simply open-ended questions without a specific length or format for responses (unlike Short Answer questions, which have a strict ...

  27. Learning effect of online versus onsite education in health and medical

    The disruption of health and medical education by the COVID-19 pandemic made educators question the effect of the online setting on students' learning, motivation, self-efficacy and preference. In light of the health care staff shortage, scalable online education seemed relevant. Reviews on the effect of online medical education called for high-quality RCTs, which are increasingly relevant with ...
