

Review Article: Appropriate Criteria: Key to Effective Rubrics


  • Department of Educational Foundations and Leadership, Duquesne University, Pittsburgh, PA, United States

True rubrics feature criteria appropriate to an assessment's purpose, and they describe these criteria across a continuum of performance levels. The presence of both criteria and performance level descriptions distinguishes rubrics from other kinds of evaluation tools (e.g., checklists, rating scales). This paper reviewed studies of rubrics in higher education from 2005 to 2017. Most of the rubrics studied in higher education to date have been analytic (considering each criterion separately) and descriptive, typically with four or five performance levels. Other types of rubrics have also been studied, and some studies called their assessment tool a “rubric” when in fact it was a rating scale. Further, for a few (7 out of 51) rubrics, performance level descriptions used rating-scale language or counted occurrences of elements instead of describing quality. Rubrics using this kind of language may be expected to be more useful for grading than for learning. Finally, no relationship was found between the type or quality of rubric and study results: all studies described positive outcomes for rubric use.

A rubric articulates expectations for student work by listing criteria for the work and performance level descriptions across a continuum of quality ( Andrade, 2000 ; Arter and Chappuis, 2006 ). Thus, a rubric has two parts: criteria that express what to look for in the work and performance level descriptions that describe what instantiations of those criteria look like in work at varying quality levels, from low to high.

Other assessment tools, like rating scales and checklists, are sometimes confused with rubrics. Rubrics, checklists, and rating scales all have criteria; the scale is what distinguishes them. Checklists ask for dichotomous decisions (typically has/doesn't have or yes/no) for each criterion. Rating scales ask for decisions across a scale that does not describe the performance. Common rating scales include numerical scales (e.g., 1–5), evaluative scales (e.g., Excellent-Good-Fair-Poor), and frequency scales (e.g., Always-Usually-Sometimes-Never). Frequency scales are sometimes useful for ratings of behavior, but none of the rating scales offer students a description of the quality of their performance that they can easily use to envision their next steps in learning. The purpose of this paper is to investigate the types of rubrics that have been studied in higher education.
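To make the structural distinction concrete, here is a minimal sketch (in Python, with invented criteria and wording; none of it comes from the studies reviewed) of the three tool types as simple data structures. All three share criteria; only the scale attached to each criterion differs.

```python
# Hypothetical examples of the three tool types. All share criteria;
# the scale attached to each criterion is what distinguishes them.

checklist = {
    "states a thesis": ["yes", "no"],  # dichotomous decision per criterion
}

rating_scale = {
    # Evaluative labels only; nothing describes what the work looks like.
    "states a thesis": ["Excellent", "Good", "Fair", "Poor"],
}

rubric = {
    # Performance level descriptions, high to low, describe the work itself.
    "states a thesis": [
        "States a compelling, arguable thesis that frames the whole paper.",
        "States a clear thesis, though it is not sustained throughout.",
        "Names a topic but does not take an arguable position.",
        "No identifiable thesis.",
    ],
}
```

Only the rubric gives a student a depiction of work at each level, which is the property this review treats as supporting learning.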

Rubrics have been analyzed in several different ways. One important characteristic of rubrics is whether they are general or task-specific ( Arter and McTighe, 2001 ; Arter and Chappuis, 2006 ; Brookhart, 2013 ). General rubrics apply to a family of similar tasks (e.g., persuasive writing prompts, mathematics problem solving). For example, a general rubric for an essay on characterization might include a performance level description that reads, “Used relevant textual evidence to support conclusions about a character.” Task-specific rubrics specify the specific facts, concepts, and/or procedures that students' responses to a task should contain. For example, a task-specific rubric for the characterization essay might specify which pieces of textual evidence the student should have located and what conclusions the student should have drawn from this evidence. The generality of the rubric is perhaps the most important characteristic, because general rubrics can be shared with students and used for learning as well as for grading.

The prevailing hypothesis about how rubrics help students is that they make explicit both the expectations for student work and, more generally, what learning looks like ( Andrade, 2000 ; Arter and McTighe, 2001 ; Arter and Chappuis, 2006 ; Bell et al., 2013 ; Brookhart, 2013 ; Nordrum et al., 2013 ; Panadero and Jonsson, 2013 ). In this way, rubrics play a role in the formative learning cycle (Where am I going? Where am I now? Where to next? Hattie and Timperley, 2007 ) and support student agency and self-regulation ( Andrade, 2010 ). Some research has borne out this idea, showing that rubrics do make expectations explicit for students ( Jonsson, 2014 ; Prins et al., 2016 ) and that students do use rubrics for this purpose ( Andrade and Du, 2005 ; Garcia-Ros, 2011 ). General rubrics should be written in descriptive language, as opposed to evaluative language (e.g., excellent, poor), because descriptive language helps students envision where they are in their learning and where they should go next.

Another important way to characterize rubrics is whether they are analytic or holistic. Analytic rubrics consider criteria one at a time, which makes them better for giving feedback to students ( Arter and McTighe, 2001 ; Arter and Chappuis, 2006 ; Brookhart, 2013 ; Brookhart and Nitko, 2019 ). Holistic rubrics consider all the criteria simultaneously, requiring only one decision on one scale. This makes them better for grading, when students will not need to use the feedback, because making only one decision is quicker and less cognitively demanding than making several.

Rubrics have been characterized by the number of criteria and number of levels they use. The number of criteria should be linked to the intended learning outcome(s) to be assessed, and the number of levels should be related to the types of decisions that need to be made and to the number of reliable distinctions in student work that are possible and helpful.

Dawson (2017) recently summarized a set of 14 rubric design elements that characterize both the rubrics themselves and their use in context. His intent was to provide more precision to discussions about rubrics and to future research in the area. His 14 areas included: specificity, secrecy, exemplars, scoring strategy, evaluative criteria, quality levels, quality definitions, judgment complexity, users and uses, creators, quality processes, accompanying feedback information, presentation, and explanation. In Dawson's terms, this study focused on specificity, evaluative criteria, quality levels, quality definitions, quality processes, and presentation (how the information is displayed).

Four recent literature reviews ( Jonsson and Svingby, 2007 ; Reddy and Andrade, 2010 ; Panadero and Jonsson, 2013 ; Brookhart and Chen, 2015 ) summarize research on rubrics. Brookhart and Chen (2015) updated Jonsson and Svingby's (2007) comprehensive literature review. Panadero and Jonsson (2013) specifically addressed the use of rubrics in formative assessment and the fact that formative assessment begins with students understanding expectations. They posited that rubrics help improve student learning through several mechanisms (p. 138): increasing transparency, reducing anxiety, aiding the feedback process, improving student self-efficacy, or supporting student self-regulation.

Reddy and Andrade (2010) addressed the use of rubrics in post-secondary education specifically. They noted that rubrics have the potential to identify needs in courses and programs, and have been found to support learning (although not in all studies). They found that the validity and reliability of rubrics can be established, but that this is not always done in higher education applications of rubrics. Finally, they found that some higher education faculty may resist the use of rubrics, which may be linked to a limited understanding of the purposes of rubrics. Students generally perceive that rubrics serve purposes of learning and achievement, while some faculty members think of rubrics primarily as grading schemes (p. 439). In fact, rubrics are not as easy to use for grading as some traditional rating or point schemes; the reason to use rubrics is that they can support learning and align learning with grading.

Some criticisms and challenges for rubrics have been noted. Nordrum et al. (2013) summarized words of caution from several scholars about the potential for the criteria used in rubrics to be subjective or vague, or to narrow students' understandings of learning (see also Torrance, 2007 ). In a backhanded way, these criticisms support the thesis of this review, namely, that appropriate criteria are the key to the effectiveness of a rubric. Such criticisms are reasonable and get their traction from the fact that many ineffective or poor-quality rubrics exist that do have vague or narrow criteria. A particularly dramatic example of this happens when the criteria in a rubric are about following the directions for an assignment rather than describing learning (e.g., “has three sources” rather than “uses a variety of relevant, credible sources”). Rubrics of this kind misdirect student efforts and mis-measure learning.

Sadler (2014) argued that codification of the qualities of good work into criteria cannot mean the same thing in all contexts and cannot be specific enough to guide student thinking. He suggested instantiation instead of codification, describing a process of induction in which the qualities of good work are inferred from a body of work samples. In fact, this method is already used in classrooms when teachers seek to clarify criteria for rubrics ( Arter and Chappuis, 2006 ) or when teachers co-create rubrics with students ( Andrade and Heritage, 2017 ).

Purpose of the Study

A number of scholars have published studies of the reliability, validity, and/or effectiveness of rubrics in higher education and provided the rubrics themselves for inspection. This allows for the investigation of several research questions, including:

(1) What are the types and quality of the rubrics studied in higher education?

(2) Are there any relationships between the type and quality of these rubrics and reported reliability, validity, and/or effects on learning and motivation?

Question 1 was of interest because, after doing the previous review ( Brookhart and Chen, 2015 ), I became aware that not all of the assessment tools in studies that claimed to be about rubrics were characterized by both criteria and performance level descriptions, as true rubrics are ( Andrade, 2000 ). The purpose of Research Question 1 was simply to describe the distribution of assessment tool types in a systematic manner.

Question 2 was of interest from a learning perspective. Various types of assessment tools can be used reliably ( Brookhart and Nitko, 2019 ) and be valid for specific purposes. An additional claim, however, is made about true rubrics. Because the performance level descriptions describe performance across a continuum of work quality, rubrics are intended to be useful for students' learning ( Andrade, 2000 ; Brookhart, 2013 ). The criteria and performance level descriptions, together, can help students conceptualize their learning goal, focus on important aspects of learning and performance, and envision where they are in their learning and what they should try to improve ( Falchikov and Boud, 1989 ). Thus I hypothesized that there would not be a relationship between type of rubric and conventional reliability and validity evidence. However, I did expect a relationship between type of rubric and the effects of rubrics on learning and motivation, expecting true descriptive rubrics to support student learning better than the other types of tools.

This study is a literature review. Study selection began with the database of studies selected for Brookhart and Chen (2015) , a previous review of literature on rubrics from 2005 to 2013. Thirty-six studies from that review were done in the context of higher education. I conducted an electronic search of the ERIC database for articles published from 2013 to 2017. This yielded 10 additional studies, for a total of 46 studies. The 46 studies share the following characteristics: (a) they were conducted in higher education, (b) they studied the rubrics themselves (i.e., they did not just use the rubrics to study something else, or give a description of “how-to-do-rubrics”), and (c) they included the rubrics in the article.

There are two reasons for limiting the studies to the higher education context. One, most published studies of rubrics have been conducted in higher education. I do not think this means fewer rubrics are being used in the K-12 context; I observe a lot of rubric use in K-12. Higher education users, however, are more likely to do a formal review of some kind and publish their results. Thus the number of available studies was large enough to support a review. Two, given that more published information on rubrics exists in higher education than K-12, limiting the review to higher education holds constant one possible source of complexity in understanding rubric use, because all of the students are adult learners. Rubrics used with K-12 students must be written at an appropriate developmental or educational level. The reason for limiting the studies to ones that included a copy of the rubrics in the article was that the analysis for this review required classifying the type and characteristics of the rubrics themselves.

Information about the 46 studies was entered into a spreadsheet. Information noted about the studies included country, level (undergraduate or graduate), type (rubric, rating scale, or point scheme), how the rubric considered criteria (analytic or holistic), whether the performance level descriptors were truly descriptive or used rating scale and/or numerical language in the levels, type of construct assessed by the rubrics (cognitive or behavioral), whether the rubrics were used with students or just by instructors for grading, sample, study method (e.g., case study, quasi-experimental), and findings. Descriptive and summary information about these classifications and study descriptions was used to address the research questions.
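As a rough sketch of this kind of coding and tallying (the column names and rows below are hypothetical illustrations, not the review's actual spreadsheet), each study becomes a row and the classifications are summarized with simple counts:

```python
import pandas as pd

# Hypothetical coding rows in the spirit of the review's spreadsheet;
# these are invented examples, not the actual 46 studies.
studies = pd.DataFrame([
    {"study": "A", "level": "undergraduate", "tool": "rubric",
     "structure": "analytic", "levels": 4, "descriptions": "descriptive",
     "used_with_students": True},
    {"study": "B", "level": "graduate", "tool": "rating scale",
     "structure": "analytic", "levels": 5, "descriptions": "evaluative",
     "used_with_students": False},
    {"study": "C", "level": "undergraduate", "tool": "rubric",
     "structure": "holistic", "levels": 4, "descriptions": "descriptive",
     "used_with_students": True},
])

# Descriptive summaries of the kind used to answer Research Question 1.
print(studies["tool"].value_counts())                             # tool types
print(pd.crosstab(studies["structure"], studies["descriptions"]))  # type by language
print(f"{studies['used_with_students'].mean():.0%} used with students")
```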

As an example of what is meant by descriptive language in a rubric, consider this excerpt from Prins et al. (2016) . This is the performance level description for Level 3 of the criterion Manuscript Structure from a rubric for research theses (p. 133):

All elements are logically connected and keypoints within sections are organized. Research questions, hypotheses, research design, results, inferences and evaluations are related and form a consistent and concise argumentation.

Notice that a key characteristic of the language in this performance level description is that it describes the work. Thus for students who aspire to this high level, the rubric depicts for them what their work needs to look like in order to reach that goal.

In contrast, if performance level descriptions are written in evaluative language (for example, if the performance level description above had read, “The paper shows excellent manuscript structure”), the rubric does not give students the information they need to further their learning. Rubrics written in evaluative language do not give students a depiction of work at that level and, therefore, do not provide a clear description of the learning goal. An example of evaluative language used in a rubric can be found in the performance level descriptions for one of the criteria of an oral communication rubric ( Avanzino, 2010 , p. 109). This is the performance level description for Level 2 (Adequate) on the criterion of Delivery:

Speaker's delivery style/use of notes (manuscript or extemporaneous) is average; inconsistent focus on audience.

Notice that the key word in the first part of the performance level description, “average,” does not give any information to the student about what average delivery looks like in regard to style and use of notes. The second part of the performance level description, “inconsistent focus on audience,” is descriptive and gives students information about what Level 2 performance looks like in regard to audience focus.

Results and Discussion

The 46 studies yielded 51 different rubrics because several studies included more than one rubric. The two sections below take up results for each research question in turn.

Type and Quality of Rubrics

Table 1 displays counts of the type and quality of rubrics found in the studies. Most of the rubrics (29 out of 51, 57%) were analytic, descriptive rubrics. This means they considered the criteria separately, requiring a separate decision about work quality for each criterion. In addition, it means that the performance level descriptions used descriptive, as opposed to evaluative, language, which is expected to be more supportive of learning. Most commonly, these rubrics described four (14) or five (8) performance levels.


Table 1 . Types of rubrics used in studies of rubrics in higher education.

Four of the 51 rubrics (8%) were holistic, descriptive rubrics. This means they considered the criteria simultaneously, requiring one decision about work quality across all criteria at once. In addition, the performance level descriptions used the desired descriptive language.

Three of the rubrics were descriptive and task-specific. One of these was an analytic rubric and two were holistic rubrics. None of the three could be shared with students, because they would “give away” answers. Such rubrics are more useful for grading than for formative assessment supporting learning. This does not necessarily mean the rubrics were of poor quality, because they served well the grading function for which they were designed. However, they represent a missed opportunity to support learning as well as grading.

A few of the rubrics were not written in a descriptive manner. Six of the analytic rubrics and one of the holistic rubrics used rating scale language and/or listed counts of occurrences of elements in the work, instead of describing the quality of student learning and performance. Thus 7 out of 51 (14%) of the rubrics were not of the quality that is expected to be best for student learning ( Arter and McTighe, 2001 ; Arter and Chappuis, 2006 ; Andrade, 2010 ; Brookhart, 2013 ).

Finally, eight of the 51 rubrics (16%) were not rubrics but rather rating scales (5) or point schemes for grading (3). It is possible that the authors were not aware of the more nuanced meaning of “rubric” currently used by educators and used the term in a more generic way to mean any scoring scheme.

As the heart of Research Question 1 was about the potential of the rubrics used to contribute to student learning, I also coded the studies according to whether the rubrics were used with students or whether they were just used by instructors for grading. Of the 46 studies, 26 (56%) reported using the rubrics with students and 20 (43%) did not use rubrics with students but rather used them only for grading.

Relation of Rubric Type to Reliability, Validity, and Learning

Different studies reported different characteristics of their rubrics. I charted studies that reported evidence for the reliability of information from rubrics (Table 2 ) and the validity of information from rubrics (Table 3 ). For the sake of completeness, Table 4 lists six studies that presented their work with rubrics in a descriptive case-study style that did not fit easily into Table 2 , Table 3 , or Table 5 (below) about the effects of rubrics on learning. With the inclusion of Table 4 , readers have descriptions of all 51 rubrics in all 46 studies reported under Research Question 1.


Table 2 . Reliability evidence for rubrics.


Table 3 . Validity evidence for rubrics.


Table 4 . Descriptive case studies about developing and using rubrics.


Table 5 . Studies of the effects of rubric use on student learning and motivation to learn.

Reliability was most commonly studied as inter-rater reliability, arguably the most important kind for rubrics because judgment is involved in matching student work with performance level descriptions, or as internal consistency among criteria. Construct validity was addressed with a variety of methods, from expert review to factor analysis; some studies also addressed consequential evidence for validity with student or faculty questionnaires. No discernible patterns were found indicating that one form of rubric was preferable to another in regard to reliability or validity. Although this conforms to my hypothesis, the result also reflects the fact that most studies reported positive results and experiences with rubrics, no matter what type of rubric was used.
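For readers unfamiliar with these statistics, the sketch below shows one common way inter-rater agreement might be computed for rubric scores. It is illustrative only: the ratings are invented, and the reviewed studies used a variety of reliability methods.

```python
from sklearn.metrics import cohen_kappa_score

# Invented scores from two raters applying a four-level rubric
# to the same ten pieces of student work.
rater_1 = [4, 3, 3, 2, 4, 1, 2, 3, 4, 2]
rater_2 = [4, 3, 2, 2, 4, 1, 2, 3, 3, 2]

# Weighted kappa penalizes near-misses (3 vs. 4) less than distant
# disagreements (1 vs. 4), which suits ordered performance levels.
kappa = cohen_kappa_score(rater_1, rater_2, weights="quadratic")
print(f"Quadratically weighted kappa: {kappa:.2f}")
```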

Table 5 describes 13 studies of the effects of rubrics on learning or motivation, all with positive results. Learning was most commonly operationalized as improvement in student work. Motivation was typically operationalized as student responses to questionnaires. In these studies as well, no discernible pattern was found regarding type of rubric. Despite the logical and learning-based arguments made in the literature and summarized in the introduction to this article, rubrics with descriptive performance level descriptions and rubrics with evaluative ones both led to at least some positive results for students. Eight of these studies used descriptive rubrics and five used evaluative rubrics. It is possible that the lack of association between type of rubric and study findings is a result of publication bias, because most of the studies had good things to say about rubrics and their effects. The small sample size (13 studies) may also be an issue.

Conclusions

Rubrics are becoming more and more evident as part of assessment in higher education. Evidence for that claim lies in the growing number of published studies of rubrics and in those studies' own assertions of rising interest in rubrics.

Research Question 1 asked about the type and quality of rubrics published in studies of rubrics in higher education. The number of criteria varies widely depending on the rubric and its purpose. Three, four, and five are the most common numbers of levels. While most of the rubrics are descriptive—the type of rubric generally expected to be most useful for learning—many are not. Perhaps most surprising, and potentially troubling, is that only 56% of the studies reported using rubrics with students. If all that is required is a grading scheme, traditional point schemes or rating scales are easier for instructors to use. The value of a rubric lies in its formative potential ( Panadero and Jonsson, 2013 ), where the same tool that students can use to learn and monitor their learning is then used for grading and final evaluation by instructors.

Research Question 2 asked whether rubric type and quality were related to measurement quality (reliability and validity) or to effects on learning and motivation to learn. Among studies in this review, reported reliability and validity were not related to type of rubric. Reported effects on learning and/or motivation were not related to type of rubric either. The discussion above speculated that part of the reason for these findings might be publication bias, because only studies with good effects—whatever the type of rubric they used—were reported.

However, we should not dismiss all the results with a hand-wave about publication bias. All of the tools in the studies of rubrics—true rubrics, rating scales, checklists—had criteria. The differences were in the type of scale and scale descriptions used. Criteria lay out for students and instructors what is expected in student work and, by extension, what it looks like when evidence of intended learning has been produced. Several of the articles stated explicitly that the point of rubrics was to make assignment expectations explicit (e.g., Andrade and Du, 2005 ; Fraser et al., 2005 ; Reynolds-Keefer, 2010 ; Vandenberg et al., 2010 ; Jonsson, 2014 ; Prins et al., 2016 ). The criteria are the assignment expectations: the qualities the final work should display. The performance level descriptions instantiate those expectations at different levels of competence. Thus, one firm conclusion from this review is that appropriate criteria are the key to effective rubrics. Trivial or surface-level criteria will not draw learning goals for students as clearly as substantive criteria. Students will try to produce what is expected of them. If the criterion is simply having or counting something in their work (e.g., “has 5 paragraphs”), students need not pay attention to the quality of what their work has. If the criterion is substantive (e.g., “states a compelling thesis”), attention to quality becomes part of the work.

It is likely that appropriate performance level descriptions are also key for effective rubrics, but this review could not establish that. A major recommendation for future research is to design studies that investigate how students use the performance level descriptions as they work, in monitoring their work, and in their self-assessment judgments. Future research might also focus on two additional characteristics of rubrics ( Dawson, 2017 ): users and uses, and judgment complexity. Several studies in this review established that students use rubrics to make expectations explicit. However, in only 56% of the studies were rubrics used with students, thus missing the opportunity to take advantage of this important rubric function. Therefore, it seems important to seek additional understanding of users and uses of rubrics. In this review, judgment complexity was a clear issue for one study ( Young, 2013 ). In that study, a complex rubric was found more useful for learning, but a holistic rating scale was easier to use once the learning had occurred. This hint from one study suggests that different degrees of judgment complexity might be more useful at different stages of learning.

Rubrics are one way to make learning expectations explicit for learners. Appropriate criteria are key. More research is needed that establishes how performance level descriptions function during learning and, more generally, how students use rubrics for learning, not just that they do.

Author Contributions

The author confirms being the sole contributor of this work and approved it for publication.

Conflict of Interest Statement

The author declares that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.

Andrade, H. G. (2000). Using rubrics to promote thinking and learning. Educational Leadership 57, 13–18. Available online at: http://www.ascd.org/publications/educational-leadership/feb00/vol57/num05/Using-Rubrics-to-Promote-Thinking-and-Learning.aspx


Andrade, H., and Du, Y. (2005). Student perspectives on rubric-referenced assessment. Pract. Assess. Res. Eval. 10, 1–11. Available online at: http://pareonline.net/pdf/v10n3.pdf

Andrade, H., and Heritage, M. (2017). Using Assessment to Enhance Learning, Achievement, and Academic Self-Regulation . New York, NY: Routledge.

Andrade, H. L. (2010). “Students as the definitive source of formative assessment: academic self-assessment and the self-regulation of learning,” in Handbook of Formative Assessment , eds H. L. Andrade and G. J. Cizek (New York, NY: Routledge), 90–105.

Arter, J. A., and Chappuis, J. (2006). Creating and Recognizing Quality Rubrics . Boston: Pearson.

Arter, J. A., and McTighe, J. (2001). Scoring Rubrics in the Classroom: Using Performance Criteria for Assessing and Improving Student Performance . Thousand Oaks, CA: Corwin.

Ash, S. L., Clayton, P. H., and Atkinson, M. P. (2005). Integrating reflection and assessment to capture and improve student learning. Mich. J. Comm. Serv. Learn. 11, 49–60. Available online at: http://hdl.handle.net/2027/spo.3239521.0011.204

Avanzino, S. (2010). Starting from scratch and getting somewhere: assessment of oral communication proficiency in general education across lower and upper division courses. Commun. Teach. 24, 91–110. doi: 10.1080/17404621003680898


Bauer, C. F., and Cole, R. (2012). Validation of an assessment rubric via controlled modification of a classroom activity. J. Chem. Educ. 89, 1104–1108. doi: 10.1021/ed2003324

Bell, A., Mladenovic, R., and Price, M. (2013). Students' perceptions of the usefulness of marking guides, grade descriptors and annotated exemplars. Assess. Eval. High. Educ. 38, 769–788. doi: 10.1080/02602938.2012.714738

Bissell, A. N., and Lemons, P. R. (2006). A new method for assessing critical thinking in the classroom. BioScience 56, 66–72. doi: 10.1641/0006-3568(2006)056[0066:ANMFAC]2.0.CO;2

Bowen, T. (2017). Assessing visual literacy: a case study of developing a rubric for identifying and applying criteria to undergraduate student learning. Teach. High. Educ. 22, 705–719. doi: 10.1080/13562517.2017.1289507

Britton, E., Simper, N., Leger, A., and Stephenson, J. (2017). Assessing teamwork in undergraduate education: a measurement tool to evaluate individual teamwork skills. Assess. Eval. High. Educ. 42, 378–397. doi: 10.1080/02602938.2015.1116497

Brookhart, S. M. (2013). How to Create and Use Rubrics for Formative Assessment and Grading . Alexandria, VA: ASCD.

Brookhart, S. M., and Chen, F. (2015). The quality and effectiveness of descriptive rubrics. Educ. Rev. 67, 343–368. doi: 10.1080/00131911.2014.929565

Brookhart, S. M., and Nitko, A. J. (2019). Educational Assessment of Students, 8th Edn. Boston, MA: Pearson.

Chasteen, S. V., Pepper, R. E., Caballero, M. D., Pollock, S. J., and Perkins, K. K. (2012). Colorado Upper-Division Electrostatics diagnostic: a conceptual assessment for the junior level. Phys. Rev. Spec. Top. Phys. Educ. Res. 8:020108. doi: 10.1103/PhysRevSTPER.8.020108

Cho, K., Schunn, C. D., and Wilson, R. W. (2006). Validity and reliability of scaffolded peer assessment of writing from instructor and student perspectives. J. Educ. Psychol. 98, 891–901. doi: 10.1037/0022-0663.98.4.891

Ciorba, C. R., and Smith, N. Y. (2009). Measurement of instrumental and vocal undergraduate performance juries using a multidimensional assessment rubric. J. Res. Music Educ. 57, 5–15. doi: 10.1177/0022429409333405

Davidowitz, B., Rollnick, M., and Fakudze, C. (2005). Development and application of a rubric for analysis of novice students' laboratory flow diagrams. Int. J. Sci. Educ. 27, 43–59. doi: 10.1080/0950069042000243754

Dawson, P. (2017). Assessment rubrics: towards clearer and more replicable design, research and practice. Assess. Eval. High. Educ. 42, 347–360. doi: 10.1080/02602938.2015.1111294

DeWever, B., Van Keer, H., Schellens, T., and Valke, M. (2011). Assessing collaboration in a wiki: the reliability of university students' peer assessment. Internet High. Educ. 14, 201–206. doi: 10.1016/j.iheduc.2011.07.003

Dinur, A., and Sherman, H. (2009). Incorporating outcomes assessment and rubrics into case instruction. J. Behav. Appl. Manag. 10, 291–311.

Facione, N. C., and Facione, P. A. (1996). Externalizing the critical thinking in knowledge development and clinical judgment. Nurs. Outlook 44, 129–136. doi: 10.1016/S0029-6554(06)80005-9


Falchikov, N., and Boud, D. (1989). Student self-assessment in higher education: a meta-analysis. Rev. Educ. Res. 59, 395–430.

Fraser, L., Harich, K., Norby, J., Brzovic, K., Rizkallah, T., and Loewy, D. (2005). Diagnostic and value-added assessment of business writing. Bus. Commun. Q. 68, 290–305. doi: 10.1177/1080569905279405

Garcia-Ros, R. (2011). Analysis and validation of a rubric to assess oral presentation skills in university contexts. Electr. J. Res. Educ. Psychol. 9, 1043–1062.

Hancock, A. B., and Brundage, S. B. (2010). Formative feedback, rubrics, and assessment of professional competency through a speech-language pathology graduate program. J. All. Health 39, 110–119.


Hattie, J., and Timperley, H. (2007). The power of feedback. Rev. Educ. Res. 77, 81–112. doi: 10.3102/003465430298487

Howell, R. J. (2011). Exploring the impact of grading rubrics on academic performance: findings from a quasi-experimental, pre-post evaluation. J. Excell. Coll. Teach. 22, 31–49.

Howell, R. J. (2014). Grading rubrics: hoopla or help? Innov. Educ. Teach. Int. 51, 400–410. doi: 10.1080/14703297.2013.785252

Jonsson, A. (2014). Rubrics as a way of providing transparency in assessment. Assess. Eval. High. Educ. 39, 840–852. doi: 10.1080/02602938.2013.875117

Jonsson, A., and Svingby, G. (2007). The use of scoring rubrics: Reliability, validity and educational consequences. Educ. Res. Rev. 2, 130–144. doi: 10.1016/j.edurev.2007.05.002

Kerby, D., and Romine, J. (2010). Develop oral presentation skills through accounting curriculum design and course-embedded assessment. J. Educ. Bus. 85, 172–179. doi: 10.1080/08832320903252389

Knight, L. A. (2006). Using rubrics to assess information literacy. Ref. Serv. Rev. 34, 43–55. doi: 10.1108/00907320610640752

Kocakülah, M. (2010). Development and application of a rubric for evaluating students' performance on Newton's Laws of Motion. J. Sci. Educ. Technol. 19, 146–164. doi: 10.1007/s10956-009-9188-9

Latifa, A., Rahman, A., Hamra, A., Jabu, B., and Nur, R. (2015). Developing a practical rating rubric of speaking test for university students of English in Parepare, Indonesia. Engl. Lang. Teach. 8, 166–177. doi: 10.5539/elt.v8n6p166

Lewis, L. K., Stiller, K., and Hardy, F. (2008). A clinical assessment tool used for physiotherapy students—is it reliable? Physiother. Theory Pract. 24, 121–134. doi: 10.1080/09593980701508894

McCormick, M. J., Dooley, K. E., Lindner, J. R., and Cummins, R. L. (2007). Perceived growth versus actual growth in executive leadership competencies: an application of the stair-step behaviorally anchored evaluation approach. J. Agric. Educ. 48, 23–35. doi: 10.5032/jae.2007.02023

Menéndez-Varela, J., and Gregori-Giralt, E. (2016). The contribution of rubrics to the validity of performance assessment: a study of the conservation-restoration and design undergraduate degrees. Assess. Eval. High. Educ. 41, 228–244. doi: 10.1080/02602938.2014.998169

Moni, R. W., Beswick, E., and Moni, K. B. (2005). Using student feedback to construct an assessment rubric for a concept map in physiology. Adv. Physiol. Educ. 29, 197–203. doi: 10.1152/advan.00066.2004

Newman, L. R., Lown, B. A., Jones, R. N., Johansson, A., and Schwartzstein, R. M. (2009). Developing a peer assessment of lecturing instrument: lessons learned. Acad. Med. 84, 1104–1110. doi: 10.1097/ACM.0b013e3181ad18f9

Nicholson, P., Gillis, S., and Dunning, A. M. (2009). The use of scoring rubrics to determine clinical performance in the operating suite. Nurse Educ. Today 29, 73–82. doi: 10.1016/j.nedt.2008.06.011

Nordrum, L., Evans, K., and Gustafsson, M. (2013). Comparing student learning experiences of in-text commentary and rubric-articulated feedback: strategies for formative assessment. Assess. Eval. High. Educ. 38, 919–940. doi: 10.1080/02602938.2012.758229

Pagano, N., Bernhardt, S. A., Reynolds, D., Williams, M., and McCurrie, M. (2008). An inter-institutional model for college writing assessment. Coll. Composition Commun. 60, 285–320.

Panadero, E., and Jonsson, A. (2013). The use of scoring rubrics for formative assessment purposes revisited: a review. Educ. Res. Rev. 9, 129–144. doi: 10.1016/j.edurev.2013.01.002

Petkov, D., and Petkova, O. (2006). Development of scoring rubrics for IS projects as an assessment tool. Issues Informing Sci. Inform. Technol. 3, 499–510. doi: 10.28945/910

Prins, F. J., de Kleijn, R., and van Tartwijk, J. (2016). Students' use of a rubric for research theses. Assess. Eval. High. Educ. 42, 128–150. doi: 10.1080/02602938.2015.1085954

Reddy, M. Y. (2011). Design and development of rubrics to improve assessment outcomes: a pilot study in a master's level Business program in India. Qual. Assur. Educ. 19, 84–104. doi: 10.1108/09684881111107771

Reddy, Y., and Andrade, H. (2010). A review of rubric use in higher education. Assess. Eval. High. Educ. 35, 435–448. doi: 10.1080/02602930902862859

Reynolds-Keefer, L. (2010). Rubric-referenced assessment in teacher preparation: an opportunity to learn by using. Pract. Assess. Res. Eval. 15, 1–9. Available online at: http://pareonline.net/getvn.asp?v=15&n=8

Rezaei, A., and Lovorn, M. (2010). Reliability and validity of rubrics for assessment through writing. Assess. Writing 15, 18–39. doi: 10.1016/j.asw.2010.01.003

Ritchie, S. M. (2016). Self-assessment of video-recorded presentations: does it improve skills? Act. Learn. High. Educ. 17, 207–221. doi: 10.1177/1469787416654807

Rochford, L., and Borchert, P. S. (2011). Assessing higher level learning: developing rubrics for case analysis. J. Educ. Bus. 86, 258–265. doi: 10.1080/08832323.2010.512319

Sadler, D. R. (2014). The futility of attempting to codify academic achievement standards. High. Educ. 67, 273–288. doi: 10.1007/s10734-013-9649-1

Schamber, J. F., and Mahoney, S. L. (2006). Assessing and improving the quality of group critical thinking exhibited in the final projects of collaborative learning groups. J. Gen. Educ. 55, 103–137. doi: 10.1353/jge.2006.0025

Schreiber, L. M., Paul, G. D., and Shibley, L. R. (2012). The development and test of the public speaking competence rubric. Commun. Educ. 61, 205–233. doi: 10.1080/03634523.2012.670709

Stellmack, M. A., Konheim-Kalkstein, Y. L., Manor, J. E., Massey, A. R., and Schmitz, J. P. (2009). An assessment of reliability and validity of a rubric for grading APA-style introductions. Teach. Psychol. 36, 102–107. doi: 10.1080/00986280902739776

Timmerman, B. E. C., Strickland, D. C., Johnson, R. L., and Payne, J. R. (2011). Development of a ‘universal’ rubric for assessing undergraduates' scientific reasoning skills using scientific writing. Assess. Eval. High. Educ. 36, 509–547. doi: 10.1080/02602930903540991

Torrance, H. (2007). Assessment as learning? How the use of explicit learning objectives, assessment criteria and feedback in post-secondary education and training can come to dominate learning. Assess. Educ. 14, 281–294. doi: 10.1080/09695940701591867

Urios, M. I., Rangel, E. R., Tomàs, R. B., Salvador, J. T., García, F. C., and Piquer, C. F. (2015). Generic skills development and learning/assessment process: use of rubrics and student validation. J. Technol. Sci. Educ. 5, 107–121. doi: 10.3926/jotse.147

Vandenberg, A., Stollak, M., McKeag, L., and Obermann, D. (2010). GPS in the classroom: using rubrics to increase student achievement. Res. High. Educ. J. 9, 1–10. Available online at: http://www.aabri.com/manuscripts/10522.pdf

Wald, H. S., Borkan, J. M., Taylor, J. S., Anthony, D., and Reis, S. P. (2012). Fostering and evaluating reflective capacity in medical education: developing the REFLECT rubric for assessing reflective writing. Acad. Med. 87, 41–50. doi: 10.1097/ACM.0b013e31823b55fa

Wallace, C. S., Prather, E. E., and Duncan, D. K. (2011). A study of general education Astronomy students' understandings of cosmology. Part II. Evaluating four conceptual cosmology surveys: a classical test theory approach. Astron. Educ. Rev. 10:010107. doi: 10.3847/AER2011030

Young, C. (2013). Initiating self-assessment strategies in novice physiotherapy students: a method case study. Assess. Eval. High. Educ. 38, 998–1011. doi: 10.1080/02602938.2013.771255

Keywords: criteria, rubrics, performance level descriptions, higher education, assessment expectations

Citation: Brookhart SM (2018) Appropriate Criteria: Key to Effective Rubrics. Front. Educ . 3:22. doi: 10.3389/feduc.2018.00022

Received: 01 February 2018; Accepted: 27 March 2018; Published: 10 April 2018.


Copyright © 2018 Brookhart. This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY) . The use, distribution or reproduction in other forums is permitted, provided the original author(s) and the copyright owner are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.

*Correspondence: Susan M. Brookhart, [email protected]

This article is part of the Research Topic

Transparency in Assessment – Exploring the Influence of Explicit Assessment Criteria

Center for Teaching Innovation



Using rubrics

A rubric is a type of scoring guide that assesses and articulates specific components and expectations for an assignment. Rubrics can be used for a variety of assignments: research papers, group projects, portfolios, and presentations.  

Why use rubrics? 

Rubrics help instructors: 

  • Assess assignments consistently from student to student. 
  • Save time in grading, both short-term and long-term. 
  • Give timely, effective feedback and promote student learning in a sustainable way. 
  • Clarify expectations and components of an assignment for both students and course teaching assistants (TAs). 
  • Refine teaching methods by evaluating rubric results. 

Rubrics help students: 

  • Understand expectations and components of an assignment. 
  • Become more aware of their learning process and progress. 
  • Improve work through timely and detailed feedback. 

Considerations for using rubrics 

When developing rubrics consider the following:

  • Although it takes time to build a rubric, time will be saved in the long run as grading and providing feedback on student work will become more streamlined.  
  • A rubric can be a fillable PDF that can easily be emailed to students. 
  • They can be used for oral presentations. 
  • They are a great tool to evaluate teamwork and individual contribution to group tasks. 
  • Rubrics facilitate peer-review by setting evaluation standards. Have students use the rubric to provide peer assessment on various drafts. 
  • Students can use them for self-assessment to improve personal performance and learning. Encourage students to use the rubrics to assess their own work. 
  • Motivate students to improve their work by having them resubmit it with the rubric feedback incorporated. 

Getting Started with Rubrics 

  • Start small by creating one rubric for one assignment in a semester.  
  • Ask colleagues if they have developed rubrics for similar assignments or adapt rubrics that are available online. For example, the AACU has rubrics for topics such as written and oral communication, critical thinking, and creative thinking. RubiStar helps you to develop your rubric based on templates. 
  • Examine an assignment for your course. Outline the elements or critical attributes to be evaluated (these attributes must be objectively measurable). 
  • Create an evaluative range for performance quality under each element; for instance, “excellent,” “good,” “unsatisfactory.” 
  • Avoid using subjective or vague criteria such as “interesting” or “creative.” Instead, outline objective indicators that would fall under these categories. 
  • The criteria must clearly differentiate one performance level from another. 
  • Assign a numerical scale to each level (a minimal sketch of how these steps can come together appears after this list). 
  • Give a draft of the rubric to your colleagues and/or TAs for feedback. 
  • Train students to use your rubric and solicit feedback. This will help you judge whether the rubric is clear to them and will identify any weaknesses. 
  • Rework the rubric based on the feedback. 
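
As a minimal sketch of how the steps above can come together (the criteria, level labels, and point values are invented examples, not a recommended scheme), a small analytic rubric with an evaluative range and a numerical scale per level might be modeled like this:

```python
# Hypothetical three-criterion rubric: an evaluative range per criterion
# ("excellent"/"good"/"unsatisfactory") with a numerical scale per level.
RUBRIC = {
    "argument":       {"excellent": 3, "good": 2, "unsatisfactory": 1},
    "use of sources": {"excellent": 3, "good": 2, "unsatisfactory": 1},
    "organization":   {"excellent": 3, "good": 2, "unsatisfactory": 1},
}

def score(ratings: dict) -> int:
    """Total an analytic rubric: one level decision per criterion."""
    return sum(RUBRIC[criterion][level] for criterion, level in ratings.items())

# One student's ratings across the three criteria.
print(score({"argument": "good", "use of sources": "excellent",
             "organization": "good"}))  # 7
```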

The use of assessment rubrics to enhance feedback in higher education: An integrative literature review

Affiliations.

  • 1 Florence Nightingale Faculty of Nursing, Midwifery and Palliative Care, Kings College London, James Clerk Maxwell Building, Waterloo Road, London SE1 8WA, United Kingdom. Electronic address: [email protected].
  • 2 Florence Nightingale Faculty of Nursing, Midwifery and Palliative Care, Kings College London, James Clerk Maxwell Building, Waterloo Road, London SE1 8WA, United Kingdom. Electronic address: [email protected].
  • PMID: 30007151
  • DOI: 10.1016/j.nedt.2018.06.022

Objective: To explore the literature relating to the use of rubrics in Higher Education.

Design: A systematic search using three databases was undertaken; the question used to guide the search strategy was: What are the benefits and challenges of using rubrics as part of the assessment process in Higher Education?

Data sources: Three electronic databases were searched: British Education Index, Education Resources Information Centre and the Cumulative Index to Nursing and Allied Health Literature.

Review methods: The review utilised an integrative approach to the retrieval and appraisal of the research. As the papers retrieved used different methodologies to explore the use of rubrics, they were analysed using either thematic analysis or narrative synthesis.

Results: Fifteen papers were identified that met the inclusion and exclusion criteria for the review; these spanned a range of disciplines including education, medicine and design. Four main themes related to the use of rubrics were identified: the reliability and validity of the rubric, student performance, students' perceptions of the rubric and the implementation of the rubric.

Conclusions: Student self-assessment, self-regulation and understanding of assessment criteria were all found to be enhanced by the use of rubrics. However, students also reported that rubrics could be restrictive and that assessment-related stress could be increased. Student involvement in the design and implementation of a rubric was identified as critical to its success. Rubrics were judged favourably by the studies reviewed in this paper; however, they were found to be most effective when used as part of an overall assessment strategy that was co-created with students.

Keywords: Assessment; Feedback; Higher education; Rubrics.

Crown Copyright © 2018. Published by Elsevier Ltd. All rights reserved.


Examples of Rubrics

Here are some rubric examples from different colleges and universities, as well as the Association of American Colleges and Universities (AACU) VALUE rubrics. We would also like to include examples from Syracuse University faculty and staff.

  • Art and Design Rubric (Rhode Island University)
  • Theater Arts Writing Rubric (California State University)

Class Participation

  • Holistic Participation Rubric (University of Virginia)
  • Large Lecture Courses with TAs (Carnegie Mellon University)

Doctoral Program Milestones

  • Qualifying Examination (Syracuse University)
  • Comprehensive Core Examination (Portland State University)
  • Dissertation Proposal (Portland State University)
  • Dissertation (Portland State University)

Experiential Learning

  • Key Competencies in Community-Engaged Learning and Teaching (Campus Compact)
  • Global Learning and Intercultural Knowledge (International Cross-Cultural Experiential Learning Evaluation Toolkit)

Humanities and Social Science

  • Anthropology Paper (Carnegie Mellon University)
  • Economics Paper (University of Kentucky)
  • History Paper (Carnegie Mellon University)
  • Literary Analysis (Minnesota State University)
  • Philosophy Paper (Carnegie Mellon University)
  • Psychology Paper (Loyola Marymount University)
  • Sociology Paper (University of California)

Media and Design

  • Media and Design Elements Rubric (Samford University)

Natural Science

  • Physics Paper (Illinois State University)
  • Chemistry Paper (Utah State University)
  • Biology Research Report (Loyola Marymount University)

Online Learning

  • Discussion Forums (Simmons College)

Syracuse University’s Shared Competencies

Ethics, Integrity, and Commitment to Diversity and Inclusion rubric (*pdf)

Critical and Creative Thinking rubric (*pdf)

Scientific Inquiry and Research Skills rubric (*pdf)

Civic and Global Responsibility rubric (*pdf)

Communication Skills rubric (*pdf)

Information Literacy and Technological Agility rubric (*pdf)

  • Journal Reflection (The State University of New Jersey)
  • Reflection Writing Rubric  and  Research Project Writing (Carnegie Mellon University)
  • Research Paper Rubric (Cornell College)
  • Assessment Rubric for Student Reflections

AACU VALUE Rubrics

VALUE (Valid Assessment of Learning in Undergraduate Education) is a national assessment initiative on college student learning sponsored by AACU as part of its Liberal Education and America’s Promise (LEAP) initiative.

Intellectual and Practical Skills

  • Inquiry and Analysis (*pdf)
  • Critical Thinking (*pdf)
  • Creative Thinking (*pdf)
  • Written Communication (*pdf)
  • Oral Communication (*pdf)
  • Reading (*pdf)
  • Quantitative Literacy (*pdf)
  • Information Literacy (*pdf)
  • Teamwork (*pdf)
  • Problem Solving (*pdf)

Personal and Social Responsibility

  • Civic Engagement (*pdf)
  • Intercultural Knowledge and Competence (*pdf)
  • Ethical Reasoning (*pdf)
  • Foundations and Skills for Lifelong Learning (*pdf)
  • Global Learning (*pdf)

Integrative and Applied Learning

  • Integrative Learning (*pdf)

Assessing Institution-Wide Diversity

  • Self-Assessment Rubric For the Institutionalization of Diversity, Equity, and Inclusion in Higher Education


Research in Higher Education

  • Open to studies using a wide range of methods, with a special interest in advanced quantitative research methods.
  • Covers topics such as student access, retention, success, faculty issues, institutional assessment, and higher education policy.
  • Encourages submissions from scholars in disciplines outside of higher education.
  • Publishes notes of a methodological nature, literature reviews and 'research and practice' studies.
  • Aims to inform decision-making in postsecondary education policy and administration.

This is a transformative journal; you may have access to funding.

  • William R. Doyle,
  • Lauren T. Schudde


Latest issue

Volume 65, Issue 2

Latest articles

Promoting age inclusivity in higher education: campus practices and perceptions by students, faculty, and staff.

  • Susan Krauss Whitbourne
  • Lauren Marshall Bowen
  • Jeffrey E. Stokes


Unpacking the Gap: Socioeconomic Background and the Stratification of College Applications in the United States

  • Wesley Jeffrey
  • Benjamin G. Gibbs


Exploring the Interplay Between Equity Groups, Mental Health and Perceived Employability Amongst Students at a Public Australian University

  • Chelsea Gill
  • Adrian Gepp


Performance-Based Funding and Certificates at Public Four-Year Institutions

  • Junghee Choi


Making the Band: Constructing Competitiveness in Faculty Hiring Decisions

  • Damani K. White-Lewis
  • KerryAnn O’Meara
  • Lindsey Templeton

Journal updates

Editorial team changes - July 2023

As of July 1, 2023, the Research in Higher Education editorial team has made several transitions

Editorial board changes - 2021

Effective 1 January 2021, the editorial board of Research in Higher Education will undergo several changes.

Editorial board changes 2011

Effective January 1, 2011, several important changes occurred with the journal Research in Higher Education.

Journal information

  • Current Contents/Social & Behavioral Sciences
  • Google Scholar
  • OCLC WorldCat Discovery Service
  • Research Papers in Economics (RePEc)
  • Social Science Citation Index
  • TD Net Discovery Service
  • UGC-CARE List (India)

© Springer Nature B.V.

