
Using the PAIRR Feedback Method for STEM Writing

by Nicole States, Instructional Developer, Reinert Center 

Categories: AI in the Classroom, Teaching with Technology, TEF: Learning-Focused

AI, AI, AI: we are hearing about it now more than ever, and while some of us shun its use for ethical reasons (see Nathaniel Rivers's guest post), others are looking for thoughtful ways to incorporate it into our teaching practice.

Today’s post is for those in the latter group; specifically, those interested in integrating generative AI (GenAI) tools into the peer feedback process.

If you’re curious about using AI to support feedback rather than replace human judgment, the Peer & AI Review + Reflection (PAIRR) method offers a structured, research-backed approach. Developed by a research team at the University of California, Davis, PAIRR is designed to build students’ critical AI literacy and the metacognitive habits that make feedback meaningful.

You can read more details on their site, but in brief, PAIRR follows a 5-step process:

  1. AI Policy & Readings: The class reviews the course AI policy and discusses a few readings about AI to set expectations.
  2. Peer Feedback: Students exchange early drafts and give each other feedback.
  3. AI Feedback: Students use an AI tool to get additional feedback on their draft.
  4. Reflection: Students compare both peer and AI feedback and decide what changes to make.
  5. Revision: Students revise their draft and write a short reflection about their revisions.

This process is not exactly simple, but you don’t have to start from scratch. The PAIRR Project Curriculum Committee has created a publicly available PAIRR Packet to support implementation of this feedback protocol. Two highlights from the packet are the feedback templates: a Reader Response template that focuses on how a piece of writing comes across to an audience, and a Criteria-Based template that hews closely to the specific assignment instructions and rubric.

As fantastic as the PAIRR Packet is, and despite having been piloted in both writing and large STEM courses, its templates are necessarily broad. STEM writing, however, is anything but. Because assignments vary widely across disciplines and purposes, it is not always immediately clear how the existing prompts map onto specific STEM contexts, so I was curious what it would look like to make that alignment more explicit.

I soon realized just how diverse writing assignments in a STEM course can be. My initial thinking focused on lab reports and science communication (my two most common forms of scientific writing), but many STEM courses also use writing-to-learn activities (Finkenstaedt-Quinn et al., 2021) or other discipline-specific writing assignments. In some cases, these assignments emphasize writing style and rhetorical clarity as much as a writing-intensive humanities course does, while also layering in disciplinary nuance.

What I ultimately developed were suggested edits: not wholesale revisions, but small refinements you might consider depending on your particular goals.

No matter the assignment or which PAIRR template you are using, here are a couple of notes on ways you might tailor the prompts to get more focused feedback. 

If your assignment includes data, you may need to decide whether students should focus on the accuracy of the analysis itself or on the clarity of the explanation of results. Whatever the focus, make sure it is clearly articulated in your rubric criteria or specified in the “follow-up criteria” section of the template. This is especially important if there is any aspect of the report you do not want the chatbot to evaluate. For example, if you want students to receive feedback only on the clarity of their analysis, and not on whether the analysis itself is correct, make that limitation explicit in the feedback and follow-up portion of the prompt (e.g., “Do not provide feedback on the accuracy of the data analysis; focus only on the clarity of the explanation.”).
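To make this concrete, a complete “follow-up criteria” addition for a lab report might look something like the following (the wording here is my own illustration, not language from the PAIRR Packet): “Focus your feedback on how clearly the student explains their results and connects them to the figures and tables. Do not provide feedback on the accuracy of the data analysis or the statistical methods themselves; assume the analysis is correct and comment only on the clarity of the explanation.”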

Another example: scientific writing often includes conventions that differ from students’ prior writing experiences. If you want students to adhere to disciplinary conventions, like avoiding personal pronouns or using field-specific terminology, make that expectation explicit in the rubric or the AI feedback prompt. Peers may overlook these subtler conventions, but a GenAI tool is more likely to identify them if those expectations are included in the initial feedback prompt.
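Here, too, a short addition to the AI feedback prompt could do the work (again, an illustrative example rather than PAIRR language): “This draft should follow the conventions of a scientific lab report: written in the third person, using field-specific terminology accurately, and avoiding informal language. Point out any places where the draft departs from these conventions.”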

These are just a couple of examples, and they are by no means exhaustive. If you would like to talk about your context, possible additions to the PAIRR prompts, or other ways to thoughtfully include GenAI tools in your teaching, please contact the Reinert Center or fill out our consultation request form. Please also consider sharing your perspectives and ideas in the comment section below.

References

Rivers, N. (2026, January 29). Unpacking My Statement on Generative AI. The Notebook. https://reinertcenter.com/2026/01/29/unpacking-my-statement-on-generative-ai/

The PAIRR Project Curriculum Committee (Marit MacArthur, Anna Mills, Julie Gamberg, Lisa Sperber, Aparna Sinha, Hao-Chuan Wang, and Valerie Turner, with input from consultants Michelle Cruz Gonzales and Kisha Quesada Turner). (2025). The Peer & AI Review + Reflection (PAIRR) Packet.

Finkenstaedt-Quinn, S. A., Petterson, M., Gere, A., & Shultz, G. (2021). Praxis of Writing-to-Learn: A Model for the Design and Propagation of Writing-to-Learn in STEM. Journal of Chemical Education, 98(5), 1548–1555. https://doi.org/10.1021/acs.jchemed.0c01482
