Thursday, November 13, 2014

Three Peer Review Tools at a Glance, and Four Other Options to Boot

Here's an e-mail that I thought Gmail wouldn't let me send to two discussion lists I'm on because it was flagged as spam. I posted it here so I could send the lists a link to this post instead. Turns out, the message did go through. Ah well. Since then, I've revised this a bit so it works better as a post.


When we (Bedford/St. Martin's, now an imprint of Macmillan Education) were ramping up our project with Eli (http://elireview.com), my colleague, Melissa Graham Meeks, did a quick review of the peer-review tools landscape, including Calibrated Peer Review from UCLA (http://cpr.molsci.ucla.edu/Home.aspx) and, from the University of Pittsburgh, SWoRD (now called Peerceptiv -- http://www.peerceptiv.com/).

Here're her notes on those:

  . . . Calibrated Peer Review (CPR) from the University of California and SWoRD from the University of Pittsburgh. Calibrated Peer Review works by training writers to read instructor-supplied models before completing a peer review activity and a self-reflective scoring of their own drafts; the system produces a single score that accounts for a student’s performance when calibrating to the models, when scoring drafts as a reviewer, and when self-evaluating (a comparison of peers’ ratings with the writer’s own rating). The appeal of CPR for instructors is the single score derived without instructor intervention once the models and rubrics are set up. Like CPR, SWoRD generates a single score. SWoRD uses an algorithm to assign a grade to students’ writing based on the feedback given (reviewer accuracy is accounted for) and students’ reviews (accuracy as indicated by other peers’ scoring and helpfulness as indicated by the writer are accounted for). But, like Eli, SWoRD allows instructors to design rating and comment prompts. SWoRD also has an established library of prompts, which instructors can customize.

These comparisons overlook the fundamental difference in these tools: whereas CPR and SWoRD are [primarily framed] as summative review tools with gradebooks, Eli takes a formative approach to a writing process that includes both review and revision. Eli’s patent-pending* approach to capturing review feedback and allowing writers to build a revision plan from that feedback distinguishes Eli from every other product on the market. Eli’s espoused purpose is to teach review and revision by giving instructors analytics that match their personal teaching goals and giving students a scaffolded process that mirrors expert writers’ behaviors; CPR and SWoRD simplify the review and grade it.

SWoRD/Peerceptiv:
SWoRD is a cloud-based, reciprocal, peer-to-peer student assessment system. Students upload their assignments into SWoRD, which automatically and anonymously assigns each document to three to six of the student’s classmates. SWoRD Peer is equally effective for presentations, videos, art projects, business plans, legal briefs, and any other task requiring formative feedback.
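
Just to make the mechanics concrete, here's a toy Python sketch of that kind of reciprocal, anonymous assignment. It's my own illustration, with made-up names and numbers, not Peerceptiv's actual code or algorithm:

    import random

    def assign_reviewers(student_ids, reviewers_per_draft=4):
        # Toy sketch of reciprocal, anonymous review assignment (my illustration,
        # not Peerceptiv's algorithm): arrange students in a shuffled ring and
        # send each draft to the next few classmates, so everyone reviews the
        # same number of drafts and no one reviews their own.
        ring = student_ids[:]
        random.shuffle(ring)
        n = len(ring)
        per_draft = min(reviewers_per_draft, n - 1)
        assignments = {}
        for i, author in enumerate(ring):
            assignments[author] = [ring[(i + k) % n] for k in range(1, per_draft + 1)]
        return assignments

    # Example: six students, each draft routed to four classmates
    print(assign_reviewers(["s1", "s2", "s3", "s4", "s5", "s6"]))

In a real system the drafts would, of course, reach reviewers without names attached.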

CPR:
CPR trains students to provide good feedback by having them rate three “source” texts first; reviewers must pass this calibration phase before they can rate three peers’ work, and then they rate their own work using the same criteria used to evaluate the source texts. The system compares reviewers’ scores to each other as well as each writer’s self-assessment to the reviewers’ assessments; these two scores are combined with the writer’s performance during calibration to derive a single score.
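
As a rough illustration of how a single score like that might be blended (the weights and formula below are invented for the sketch, not CPR's actual math), think of it as a weighted average of three signals:

    def cpr_style_score(calibration_accuracy, reviewing_accuracy, self_assessment_agreement,
                        weights=(0.4, 0.4, 0.2)):
        # Illustrative only -- not CPR's published formula. Each input is a 0-1
        # signal: how closely the student matched the instructor's ratings of the
        # source texts, how closely the student's peer ratings tracked other
        # reviewers', and how closely the self-rating matched peers' ratings of
        # the student's own draft. The weights are made up for this sketch.
        w_cal, w_rev, w_self = weights
        return 100 * (w_cal * calibration_accuracy +
                      w_rev * reviewing_accuracy +
                      w_self * self_assessment_agreement)

    # A student who calibrated well, reviewed reasonably, and self-assessed accurately
    print(round(cpr_style_score(0.9, 0.8, 0.85), 1))  # 85.0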
Eli:
Eli Review is a software service that supports and reports student engagement. As they write, give feedback, and use feedback for revision, students learn from each other. Reports give students and instructors a clear window into the peer exchanges, making it easy to assess effort, identify exemplars, and motivate revision. A revision planning tool lets students choose which comments they'll work from in their revision, sort those comments in order of priority, and make notes on how they'll use them. The revision plan can then be reviewed by the professor for further guidance. Reviewers get the biggest boost to their helpfulness rating when comments they write make it into revision plans.
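
To picture what that looks like as data, here's a small Python sketch of review comments and a helpfulness tally. The structures and weighting are my own guesses for illustration, not Eli's code or its patent-pending algorithm:

    from dataclasses import dataclass

    @dataclass
    class Comment:
        reviewer: str
        text: str
        rating: int = 0                 # helpfulness rating the writer gave the comment
        in_revision_plan: bool = False  # did the writer pull it into a revision plan?

    def helpfulness(reviewer, all_comments, plan_bonus=2.0):
        # Illustrative scoring only, not Eli's actual algorithm: ratings count,
        # but comments that make it into a revision plan count for more.
        mine = [c for c in all_comments if c.reviewer == reviewer]
        return sum(c.rating + (plan_bonus if c.in_revision_plan else 0.0) for c in mine)

    comments = [
        Comment("Riley", "Your claim in paragraph 2 needs evidence.", rating=3, in_revision_plan=True),
        Comment("Riley", "Nice title.", rating=1),
    ]
    print(helpfulness("Riley", comments))  # 6.0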

__

The folks at Peerceptiv have done a good bit of publishing -- http://www.peerceptiv.com/wordpress/research/publications/. They developed their approach with NSF grants. (The creator of the program is a cognitive psychologist; the creator of CPR was a chemistry professor, I believe; both took peer review seriously and sought to use writing more fully in their teaching.) The research from Peerceptiv is good and useful stuff, and it shows, contra the MOOCs that didn't do it well, that peer review, done well, can be a reliable means of feedback, especially if done early and often.

Eli Review's approach -- see http://elireview.com/development/ for a link to their first professional development piece -- draws directly from composition theory and seeks to turn writing classrooms into writing workshops. (If you visited writing classrooms, you'd be surprised, or maybe not, at how much lecturing goes on.)

All three tools require some forethought and planning to use fully and richly. And that's the challenge in their use for most faculty, whose first instinct (understandably) is to begin digital teaching by taking their analog assignments and putting them on the Web. In the work Melissa did, CPR and SWoRD found purchase in WAC-focused science and social science courses, often larger courses (though not exclusively).

Other tools for peer review that are open to folks from other campuses to try . . .

  1. Joe Moxley's Myreviewers:  http://myreviewers.com/
  2. Mike Palmquist's WritingStudio:  http://writing.colostate.edu/
  3. MARCA from UGA's Ron Balthazor & Christy Desmet:  http://calliopeinitiative.org/
  4. Les Perelman and Irv Peckham might weigh in here, but I also imagine iMOAT could be readily adapted to do peer review even though its original use was for writing placement: http://web.mit.edu/imoat/

--
* This is not a cliché. Eli really did apply for a patent: they have a unique algorithm for deriving a reviewer's helpfulness rating based on how their review comments are rated and, more importantly, on which of their comments make it into a writer's revision plan.
