A common challenge faculty raise with me when students are assigned to work collaboratively in groups or teams is that students rate all of their peers as doing well despite clear evidence that some performed significantly better than others. “My teammates are all super!” might be what the peer reviews indicate based on the ratings students provide to you. But then one of those students comes to office hours to explain that they really did most of the work, because one teammate “didn’t have a clue,” another missed several meetings, and a third just wanted to “argue about useless details.” How can we get student peer reviews of collaborative work to reflect reality? In this blog post I describe six characteristics of a good peer review process that nudge students toward producing valuable, accurate, high-quality peer reviews of each other’s performance.
The six characteristics of a quality peer review process can be remembered with a mnemonic device: We want to help our students go FFAARR (pronounced far) in life. Most teachers I know want to help our students go as FFAARR as we can take them and, ideally, to help them build skills to go ever FFAARRther on their own. Conducting effective peer review can help our students on that journey.
The FFAARR acronym for peer review helps us remember that quality reviews are more likely to be generated when they are:
- Frequent
- Formatted constructively
- Accountable via grades
- Accurate
- Rated on a scale
- Relevant
So what do each of these characteristics mean in practice? A brief description of each characteristic follows, with references you can use to explore further.
Frequent

Frequent practice can lead to great improvement in quality for many skills, especially when conducted deliberately over long periods of time (Ericsson, Prietula, & Cokely, 2007). Dee Fink, who coined the term FIDeLity feedback (Frequent, Immediate, Discriminating feedback delivered Lovingly) as a means to help students achieve significant learning, placed frequent feedback as the cornerstone from which FIDeLity feedback begins (Fink, 2013). Giving students multiple opportunities to practice and improve their peer review skills is therefore one key to improving their performance. This can happen informally on a regular basis during class (perhaps weekly, or even daily if there is significant team interaction), as well as more formally as an assignment outside of class, perhaps 3-4 times per semester. In my own class I often have students practice in pairs during class by giving feedback on the activity they just completed, and then have them write formal peer reviews for all team members 3-4 times using an online system to collect and distribute the reviews.
Formatted constructively

In order to produce a peer review that is valuable and can be defended based on evidence, constructively formatted qualitative feedback should be included in the review. The SII feedback format (Wasserman & Beyerlein, 2007), outlining strengths, areas for improvement, and insights, is a simple and easy way to elevate the quality of peer reviews produced by students. In particular, when identifying strengths students should explicitly name the strength, provide evidence of the strength observed in their peer’s performance, and indicate why that strength helped the collaborative learning effort to be as successful as it was. When identifying areas for improvement, students should clearly state the concern, describe how an improved performance would look different, and provide a practical action plan for achieving that improvement to better help the group or team in their collaborative effort. Finally, insights can be offered that highlight something the peer reviewer learned while conducting the review.
Accountable via grades

Students tend to prioritize their learning effort based on what gets graded. Therefore, peer reviews should count toward a student’s overall course grade, to show that you value the reviews and want them taken seriously (Ambrose, Bridges, DiPietro, Lovett, & Norman, 2010, p. 84). However, to avoid the issue where students simply give all of their peers high marks, grades for peer review need to reflect the quality of the peer review itself, not the level at which the student was judged to perform by their peers. In other words, a student who performs poorly as a teammate but conducts a high-quality review of themselves and their teammates would earn high marks on their peer review grade. Conversely, a student who contributes strongly to their team’s effort but provides a cursory or inaccurate review would earn low marks on their peer review grade.
Accurate

Students produce higher-quality peer reviews when the accuracy of their review is considered in their grade. Three ways to help students be more accurate are to (1) set the expectation that ratings of team members will vary (unless a student explicitly provides a solid justification for rating everyone the same), (2) make consistency of ratings for the same person by different team members a component of the grade, and (3) ask students to review themselves in addition to their team members. These techniques push students to think deeply and to critically analyze each team member’s performance, including their own. Further, when ratings are accurate it becomes possible to use them to adjust the individual grades given for the same team project. Evidence that these guidelines can be successfully implemented is provided in a study by Kaufman, Felder, and Fuller (1999).
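To make the grade-adjustment idea concrete, here is a minimal Python sketch of an “autorating” multiplier in the spirit of Kaufman, Felder, and Fuller (1999): each student’s individual grade is the team grade scaled by how their average received rating compares to the team-wide average. The function names, the cap value, and the exact formula are illustrative assumptions, not the paper’s precise procedure.

```python
# Illustrative sketch only: scale a shared team grade by each
# student's peer ratings, in the spirit of Kaufman et al. (1999).

def adjustment_factors(ratings):
    """ratings[i][j] = rating student i gave student j (self-ratings included).

    Returns one multiplier per student: their mean received rating
    divided by the team-wide mean rating.
    """
    n = len(ratings)
    received = [sum(ratings[i][j] for i in range(n)) / n for j in range(n)]
    team_mean = sum(received) / n
    return [r / team_mean for r in received]

def individual_grades(team_grade, ratings, cap=1.05):
    # Cap the multiplier so a strong teammate's grade cannot run
    # far past the maximum possible score (a common practical tweak).
    return [team_grade * min(f, cap) for f in adjustment_factors(ratings)]

# Example: three teammates rate each other on a 0-5 scale.
ratings = [
    [4, 5, 2],
    [4, 5, 3],
    [5, 5, 2],
]
print(individual_grades(90, ratings))
```

With these hypothetical ratings, the two stronger contributors end up slightly above the team grade (capped) while the weakest contributor's grade drops well below it, which is exactly the differentiation the accuracy incentives above are meant to produce.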
Rated on a scale
Quality peer review should include some kind of scaled rating to indicate level of performance. Simple scales might involve rating team members from not helpful to extremely helpful, or distributing a fixed number of total points among team members according to their level of contribution to the overall effort. In these cases the ratings by themselves may not be particularly useful for improving future performance without qualitative justifications explaining the ratings. However, behaviorally anchored scales produce ratings based on specific, observable criteria. These scales range in complexity from single scales with language describing different levels of meeting responsibility to the team (Kaufman et al., 1999), to the CATME BARS system based on team effectiveness literature (Ohland et al., 2012) which consists of five scales (contribution, interaction, staying on track, expecting quality, and having relevant knowledge, skills, and abilities) each rated based on descriptions of typical student behaviors in these areas for different levels.
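As a small illustration of the fixed-point-distribution scheme described above, a validation routine can catch the two most common problems before reviews are accepted: a distribution that does not sum to the fixed total, and identical ratings for every teammate submitted without justification. This is a hypothetical sketch; the function name, messages, and threshold are my own, not taken from any of the cited systems.

```python
# Illustrative check for a "distribute N points among teammates"
# peer rating scheme (names and messages are hypothetical).

def check_distribution(points, total=100):
    """points: dict mapping teammate name -> points awarded.

    Returns a list of problems; an empty list means the
    distribution is acceptable as submitted.
    """
    problems = []
    awarded = sum(points.values())
    if awarded != total:
        problems.append(f"points sum to {awarded}, expected {total}")
    if len(set(points.values())) == 1:
        problems.append("identical ratings for every teammate: justification required")
    return problems

print(check_distribution({"Ana": 40, "Ben": 35, "Cho": 25}))   # acceptable -> []
print(check_distribution({"Ana": 25, "Ben": 25, "Cho": 25, "Di": 25}))
```

The second call flags the “everyone is equal” pattern so that, consistent with the accuracy guidelines above, the reviewer must either differentiate their ratings or supply a written justification.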
Relevant

A final characteristic of a quality peer review process is relevance to students’ development throughout the course and into the future. This can be accomplished by keeping the review’s purpose closely aligned with the goals of the assignment (Wessa & De Rycker, 2010). For example, ask students to provide their qualitative feedback from the perspective of how their performance, and their peers’ performances, affected the team’s ability to learn the needed content and do the cognitive work required to achieve the goals of the assignment. Additionally, if you assign group roles to students, as commonly occurs in many professional settings and other contexts relevant to students’ lives, you can direct students to produce feedback designed to grow their ability in those particular roles.
Together, these six characteristics of a quality peer review process can significantly improve student performance and add real value to your course. I recommend starting by asking students to practice frequently using the SII format, giving credit for completion to hold them accountable. From there you might add rating scales and build in stronger accountability for review quality using a rubric that outlines the criteria for good qualitative justifications of those ratings. Go ahead and try these ideas out!
Ambrose, S. A., Bridges, M. W., DiPietro, M., Lovett, M. C., & Norman, M. K. (2010). How learning works: Seven research-based principles for smart teaching (1st ed.). San Francisco, CA: Jossey-Bass.
Ericsson, K. A., Prietula, M. J., & Cokely, E. T. (2007, July 1). The Making of an Expert. Harvard Business Review, (July–August 2007). Retrieved from https://hbr.org/2007/07/the-making-of-an-expert
Fink, L. D. (2013). Creating significant learning experiences: an integrated approach to designing college courses (Revised and updated edition). San Francisco: Jossey-Bass.
Kaufman, D. B., Felder, R. M., & Fuller, H. (1999). Peer Ratings in Cooperative Learning Teams. Proceedings of the 1999 Annual ASEE Meeting. Retrieved from http://www4.ncsu.edu/unity/lockers/users/f/felder/public/Papers/kaufman-...
Ohland, M. W., Loughry, M. L., Woehr, D. J., Bullard, L. G., Felder, R. M., Finelli, C. J., … Schmucker, D. G. (2012). The Comprehensive Assessment of Team Member Effectiveness: Development of a Behaviorally Anchored Rating Scale for Self- and Peer Evaluation. Academy of Management Learning & Education, 11(4), 609–630. https://doi.org/10.5465/amle.2010.0177
Wasserman, J., & Beyerlein, S. (2007). SII Method for Assessment Reporting. In Faculty Guidebook: A Comprehensive Tool for Improving Faculty Performance (4th ed., p. 2). Pacific Crest.
Wessa, P., & De Rycker, A. (2010). Reviewing peer reviews - A rule-based approach. Proceedings of 5th International Conference on E-Learning, 408–418.