Collaborative failure…and success

Last quarter, I had the pleasure of teaching Science Methods and Practice, the course that had so much success last year. A previous blog post talks about how I’ve improved student learning in the course with specifications grading: students who score at least 80% on an assignment “complete” it, and those who score below 80% receive an “incomplete.” This approach helps me focus on student mastery—my goal is for students to meet a certain baseline of knowledge. That post also talks about the hybrid, HyFlex format—students can attend our one weekly face-to-face meeting in person or remotely through Zoom. 

I’ve experimented a lot with how to teach the statistics unit of the course. Ideally, students start learning on their own and then come to class with questions of their own, ready to solve more practice problems. I’ve arranged worksheets with guiding videos that students completed out of class, and this year I built quizzes (combinations of automatically graded multiple-choice and open-ended questions) that repeated some of the questions from the worksheets. As I thought they might, the quizzes (externally) motivated more students to complete the worksheets than in previous years without them. 

The stats unit culminates in an exam. I’m not fond of giving exams—they’re not real-life situations, and they generate anxiety. I also know enough about survey development to know that the scores may not be valid measures of student learning (e.g., Martinkova et al. 2017). And yet students and I work in a system that relies on external motivation, and I’ve found that students tend not to learn this particular suite of material unless they have the exam. So I’ve switched my attention to offering an exam that decreases anxiety and whose scores are valid. Because students can take the exam remotely, and because our campus doesn’t have an exam room, I need to think creatively about how to ensure I’m truly capturing students’ own work on the exam. 

[Image: a simple drawing of people standing in a circle. “collaboration” by ProSymbols from Noun Project (CC BY 3.0)]

Here’s how I set it up. The week beforehand, I gave students an unannounced practice exam. I told them that, after completing the homework assignments, they should be able to complete it. I collected the practice exam at the end of class, graded it, and wrote to each student about which questions they got wrong. Then, for the exam itself, students corrected their work on the practice exam and explained what was wrong with their initial answers. The questions had all been aligned with problems in the quizzes they’d taken, and students had practiced correcting their work in those quizzes. I also included some reflective questions, such as “How do you think you’ll use this knowledge in your future?” 

I am always excited for exam day, because I can’t wait for the students to show off what they’ve learned. 

But this exam didn’t work. Less than half of the class met the 80% threshold. There were even some questions that several students skipped. Even students who had scored well on their quizzes didn’t pass the exam. I wasn’t immediately sure how to proceed, other than knowing that I would emphasize the respect I have for students and for our collaborative learning space, following the ethics of care approach to pedagogy that Nel Noddings advocated. 

This is a class about Science Methods and Practice, and students complete their own independent research projects. And a new research opportunity had just arisen. I had a research question: why did so many students score an incomplete on the exam? What’s more, I needed collaborators—the students—to answer it. We needed to have a class discussion about it and conduct a mini research study. So, using the absolutely wonderful How Science Works Flowchart from Understanding Science, I engaged in a research project with my students. 

  1. As the Flowchart suggests, I entered the research process by making observations, starting with one observation in particular: students didn’t do well on the exam. 
  2. I released grades to students and told them that we would strategize about remedying the situation in class. I waffled about whether to release grades in class or before, but ultimately decided that it would unduly increase anxiety to withhold the information about student performance, and that if I released grades in class, they might be too distracted by processing their grades to participate in the discussion.
  3. In class, I gave students time to brainstorm about what didn’t work for them in the exam.
  4. Then, I presented a slideshow about what I called the exam research project. I switched to third person (“A professor gave her students an exam…”) to put a little bit of distance between us and the experience, to help us think critically instead of personally about it. 
  5. I shared some data from the class, showing specifics about 
    1. a tight relationship between skipping the practice exam and failing;
    2. higher scores among those who took the exam in person; and 
    3. past experience with a similar exam in previous terms.
  6. I stated the research question (“Why did so many students fail the exam?”) and my goal (“If the instructor’s goal is for students to learn, how should she proceed?”), and then opened the discussion by asking students what, in their minds, went wrong. 

Some things quickly became apparent. The students did not like the format of the exam. While it went OK for the students who took it in person and on paper, the students online found it very confusing to jump between screens to correct the practice exam and complete the other questions. In one question, for example, I asked students to add a row to a table in the practice exam. But there was a disconnect: the question was written in the new file, while the change was supposed to be made in the old file. Lots of students didn’t complete the question simply because they didn’t register that it was there. That’s not a reflection of their learning—that’s a reflection of how I formatted the test. 

Students brainstormed about the kinds of data that would help answer the research question, and then they designed a Padlet to collect some data quickly. For example, all of the students said they had studied from the quizzes they’d taken, but fewer had studied from the practice exam or from the problems we’d worked in class. 

The next step was for students to brainstorm about how to proceed. I re-emphasized that my goal was for students to demonstrate to me what they had learned, and that the exam had not done that for everyone. They needed to come up with other ways to show me what they learned. They shared ideas with each other, and then voted on which ideas they liked best. They decided on a take-home exam, with all the questions posted in a single document. 

In that take-home exam, I asked students what they had learned from our discussion. Some students shared insightful tips: sleep the night before, take the exam in a quiet place. A handful of students talked about how they shifted their thinking about the purpose of the exam, beginning to see it as a way of expressing what they learned rather than a high-stress, high-stakes situation. Others discussed how liberating it was to separate shame from failure, and instead reframe the exam as an experiment that didn’t work. These students appreciated collaborating with me to frame another opportunity to demonstrate their learning. 

I feel positive about this outcome. What could have been an incredibly frustrating experience ended up being an empowering one that, I hope, taught folks about shared leadership and collaboration. 

I talk about this incident on this episode of Bonni Stachowiak’s Teaching in Higher Ed podcast, too.

