Peer Instruction in Computer Science
This is my attempt to collect resources related to Peer Instruction (PI) in Computer Science (CS). I am new to PI, having been introduced to it through a paper at SIGCSE '10. I certainly don't claim to have any special proficiency implementing or evaluating PI in CS. I've only used PI a couple of times. ... But, someone had to start a page like this, so here we are.
Below, I provide:
- Abstracts and links to papers about PI, with a focus on PI-CS papers
- ConcepTest and Reading Quiz banks (developed by me and other CS lecturers)
- Hardware and software resources for implementing PI
If you have stuff you'd be willing to share, please let me know!
PI Methodology Papers
- Crouch, C. H. and E. Mazur. 2001. Peer Instruction: Ten years of experience and results. American Journal of Physics. 69:970–977.
A nice description of the PI methodology and its refinement in physics. The authors indicate that PI is meant to engage all students in class, rather than the few students who would be active otherwise. Performance on the physics Force Concept Inventory increased substantially when changing from traditional lectures to PI in both a calculus-based and an algebra-based physics course. Quantitative problems do not work well as ConcepTests; the authors describe how they ensure competence in working quantitatively as well as conceptually. The format of reading quizzes, the restructuring of discussion sessions, and ways to motivate students are also included.
- Crouch, C. H., Watkins, J., Fagen, A. P., and Mazur, E. 2007. Peer instruction: Engaging students one-on-one, all at once. In Research-Based Reform of University Physics, E. F. Redish and P. J. Cooney, Eds. American Association of Physics Teachers.
This paper expands on the above. It gives concrete advice for ConcepTest generation (e.g. ConcepTests should be unambiguous, require thought rather than rote application of an algorithm, etc.), and compares clickers to flashcards and raising hands. The paper also argues for the use of a "predict" ConcepTest preceding a classroom demonstration. Students who are given the opportunity to predict prior to the presentation (as opposed to only seeing the presentation without predicting) are much more likely to be able to explain the reasons for the observed outcomes when tested at the end of the semester. Another finding: while we might think that conceptual exam questions are easier than quantitative, problem-solving questions, the truth is that students often find conceptual questions even more difficult! The paper also describes the findings of a web-based survey of instructors around the world who use PI, giving indications of student learning, student satisfaction, instructor perceptions, use of reading quizzes, methods of polling, and the most significant challenges to PI adoption. (Polling was used by only 8% of instructors, but the survey was conducted in 1999.)
- Beatty, I., Gerace, W., Leonard, W., and Dufresne, R. 2006. Designing effective questions for classroom response system teaching. American Journal of Physics, 74(1), 31-39.
This paper gives concrete advice for generating clicker questions; I have found that the advice scales quite well to PI use. To effectively use an existing question, or to develop our own, we must appreciate the design logic of a question -- why the question is good, and what it seeks to uncover. Each question should have a threefold goal: a content goal (what subject material do we want students to learn?), a cognitive goal (how do we want our students to think?), and a metacognitive goal (what do we want students to learn about our subject in general?). Many tactics are given for designing clicker questions that accomplish these goals.
PI in Physics
- Lasry, N., Mazur, E., and Watkins, J. 2008. Peer instruction: From Harvard to the two-year college. American Journal of Physics, 76(11), 1066-1069.
This paper helps to legitimize PI beyond highly selective institutions like Harvard. Comparing a PI class with a traditional class at a two-year college, the PI class performed better on a final exam (though the difference was not statistically significant, in contrast to the Harvard results). Students were also divided into low-background and high-background groups, based on a pre-course FCI. Comparing post-FCI scores: low-background PI students outperform low-background traditional students, high-background PI students outperform high-background traditional students, and (the coolest finding) low-background students in PI outperform high-background students in a traditional offering.
- Lasry, N. 2008. Clickers or flashcards: Is there really a difference? The Physics Teacher, 46(4), 242-244.
To answer the question in the title: no... and yes. No: on a comparison of post-FCI scores between students in a clicker classroom and students in a flashcard classroom, there was no significant difference. There was also no difference in end-of-semester exam performance. Yes: clickers are better because they give us precise real-time feedback and help us archive data that can be used to improve our teaching.
- Reay, N. W., Li, P., and Bao, L. 2008. Testing a new voting machine question methodology. American Journal of Physics, 76(2), 171-178.
Our one-off clicker questions might indicate learning gains, but we don't know to what extent students can generalize a clicker question to other contexts. This paper suggests using multiple questions for each concept, in order to gauge students' understanding across contexts. The study used both a traditional section and a section that used clicker questions (occasionally PI-informed) for a small portion of lecture time. Voting students scored higher on post-tests; males and females gained equally in the voting section, whereas males gained more than females in the traditional section.
What's Going on when They Talk?
- James, M. 2006. The effect of grading incentive on student discourse in Peer Instruction. American Journal of Physics. 74:689.
A common discussion involves tradeoffs between high- and low-stakes grading on clicker questions. Should students who get questions right get more points than those who get them wrong? According to this article, no: with high-stakes grading, discussions are more often dominated by one person in the group, giving others less opportunity to offer their perspectives. In the high-stakes setting, the person who dominated each group was more likely to get a higher grade than the others in the group. The author says that this means there is a correlation between dominating a PI conversation and amount of knowledge. I agree, but might it also reflect the benefits of PI conferred on these students through the very process of being able to talk it out?
PI-CS Papers
- Pargas, R. P. and Shah, D. M. 2006. Things are clicking in computer science courses. SIGCSE Bull. 38, 1 (Mar. 2006), 474-478.
A discussion of PI in CS4. By university mandate, all students were required to bring a laptop to class. PI voting took place using a web application (called MessageGrid) rather than clickers or flashcards. Pre-lecture reading quizzes were used to obtain students' questions about the readings (but many more students than expected said that they understood everything!). An end-of-term survey indicated that students were very positive about PI, including the use of reading quizzes and in-class discussion. The paper includes examples demonstrating the adaptive nature of PI-informed lectures.
- Simon, B., Kohanfars, M., Lee, J., Tamayo, K., and Cutts, Q. 2010. Experience report: Peer instruction in introductory computing. In Proceedings of the 41st ACM Technical Symposium on Computer Science Education. SIGCSE '10. ACM, New York, NY, 341-345.
A discussion of PI in CS1 and CS1.5, including analysis of normalized gains and student survey data. Reading quizzes were not used, though students were asked to read prior to class. Again, students responded positively to PI, and there was evidence of learning gains due to group discussion. Compared to the paper by Pargas and Shah above, these authors spend more lecture time on ConcepTests and do not use in-class exercises.
- Zingaro, D. 2010. Experience Report: Peer Instruction in Remedial Computer Science. In Proceedings of Ed-Media 2010.
Inspired by the above SIGCSE '10 paper, this paper discusses the motivation, implementation, and analysis of results of PI in a remedial first-year CS course for computer engineers. The PI implementation is as faithful to the original physics work as my current understanding allows. I describe the use of reading quizzes, performance on ConcepTests, use of Parsons puzzles, clicker data as dialogue extension, and student attitudes.
- Porter, L., Lee, C., Simon, B., and Zingaro, D. 2011. Peer Instruction: Do Students Really Learn from Peer Discussion in Computing? In ICER 2011.
An investigation of how much learning is happening during peer discussion, through the use of isomorphic questions. Findings from biology are corroborated: something like 50%-70% of potential learners seem to learn as measured by the isomorphic test.
- Porter, L., Lee, C., Simon, B., Cutts, Q., and Zingaro, D. 2011. Experience Report: A Multi-Classroom Report on the Value of Peer Instruction. In ITICSE 2011.
Four instructors discuss the value of PI in their CS0, CS1, Architecture, and Theory of Computation courses.
- Cutts, Q., Carbone, A., and van Haaster, K. 2004. Using an Electronic Voting System to Promote Active Reflection on Coursework Feedback. In Proceedings of the International Conference on Computers in Education, Melbourne, Australia.
In a novel use of PI, these authors develop ConcepTests based on common mistakes made in summative course assessments. The goal is to help students process and reflect on course feedback, rather than simply providing unscaffolded static text from which students will likely not learn.
- Simon, B. and Cutts, Q. 2012. Peer Instruction: A Teaching Method to Foster Deep Understanding. In Communications of the ACM.
A brief look at what we currently know about PI in computing.
Question Banks
Please see peerinstruction4cs.org for all of our PI materials.
Hardware and Software
- i>clicker. This is the clicker system I use. These clickers have five option buttons, just enough for multiple-choice ConcepTests. I am on an old software version (5.2), because I have made several modifications to the source code that I'd sooner not keep re-implementing. (My version plays a sound effect when the timer starts and stops, and another sound when the graph is displayed. It also displays a textual description of the graph and makes other miscellaneous accessibility improvements.)
- clickdata.zip. This archive includes a Python script I wrote to tell me useful information from my i>clicker .csv files. It expects to find the L*.csv files generated by i>clicker, processes all such files in its directory, and generates session/pre-post/NG/histogram data. The histo file can then be further processed by prochisto.gp, a gnuplot script to generate a histogram from the data. (A rough sketch of this kind of processing appears below.)
- quizcode.zip. I use these Python scripts to administer reading quizzes. I wrote these in order to avoid having my students log in to a course management system. I wanted students to be able to get to the quizzes quickly from the course website; all they do is type their student number and go. The quiz responses are stored in text files for easy reading or processing. The quizzes themselves are also created in text format for simple data entry, and the scripts support an availability date and an expiry date for each quiz. This extremely simple system has worked quite well for me. (A minimal sketch of this idea also appears below.)
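For readers who just want the flavor of the clicker-data processing without downloading the archive, here is a minimal sketch in the same spirit. It is not the clickdata.zip script: the session-file layout (one row per clicker ID, one column of A-E votes per question) is an assumption you would adjust to whatever your i>clicker version exports, and the gain formula is the usual Hake normalized gain.

```python
# A minimal sketch, NOT the clickdata.zip script. Assumes each L*.csv session
# file has one row per clicker ID and one column of A-E votes per question;
# adjust the parsing to match your actual i>clicker export format.
import csv
import glob
from collections import Counter

def tally(filename):
    """Return one Counter of A-E votes per question column in a session file."""
    counts = []
    with open(filename, newline="") as f:
        for row in csv.reader(f):
            votes = row[1:]  # assume column 0 is the clicker/student ID
            while len(counts) < len(votes):
                counts.append(Counter())
            for i, vote in enumerate(votes):
                if vote.strip():
                    counts[i][vote.strip().upper()] += 1
    return counts

def normalized_gain(pre, post):
    """Hake's normalized gain on percent-correct scores: (post - pre) / (100 - pre)."""
    return (post - pre) / (100.0 - pre)

if __name__ == "__main__":
    for path in sorted(glob.glob("L*.csv")):  # i>clicker names session files L*.csv
        print(path)
        for q, votes in enumerate(tally(path), start=1):
            total = sum(votes.values())
            histogram = "  ".join(f"{option}:{votes[option]}" for option in "ABCDE")
            print(f"  Q{q} ({total} votes)  {histogram}")
    # Example use of the gain formula on made-up pre/post percentages:
    print("Normalized gain for 40% -> 70%:", normalized_gain(40, 70))
```

The real script goes further (pairing pre/post ConcepTests and writing the histo file that prochisto.gp plots); the sketch stops at per-question tallies.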
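Similarly, here is a bare-bones sketch of the reading-quiz idea, assuming a made-up quiz file format (two date lines followed by the questions); the actual quizcode.zip scripts differ in the details.

```python
# A bare-bones sketch of the quizcode.zip idea, not the actual scripts.
# Assumed quiz file format: line 1 is the availability date, line 2 the expiry
# date (both YYYY-MM-DD), and the remaining lines are the quiz questions.
import datetime

def load_quiz(path):
    """Return (open date, close date, list of question lines) for a quiz file."""
    with open(path) as f:
        lines = [line.rstrip("\n") for line in f]
    opens = datetime.date.fromisoformat(lines[0])
    closes = datetime.date.fromisoformat(lines[1])
    return opens, closes, lines[2:]

def record_response(quiz_path, student_number, answers):
    """Append one tab-separated line per submission to a plain-text response file."""
    opens, closes, _questions = load_quiz(quiz_path)
    today = datetime.date.today()
    if not (opens <= today <= closes):
        raise ValueError("quiz is not currently available")
    with open(quiz_path + ".responses.txt", "a") as out:
        out.write("\t".join([str(today), student_number] + list(answers)) + "\n")
```

Plain text in, plain text out is the whole appeal: responses can be read or grepped directly, with no course management system in the loop.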
Help!
If you have pointers to other PI-CS articles of interest, links to your own ConcepTests or reading quizzes, or other tools for processing PI data or administering PI in general, please get in touch. Thank you!