When I was hired as a teaching assistant in graduate school, I taught the same way I had been taught: lecture. I had never received any formal education in how to teach. In fact, the extent of my teacher training was someone telling me, “You’ll be fine.” (That’s not completely fair, actually, because there was a two-day workshop for teaching assistants that I missed for reasons I can’t remember.) I was in charge of four recitation sections in which I was supposed to do practice problems from the material covered in lecture. When the students had difficulty with some piece of material from lecture, I tried to explain it to them in my own way. The students seemed to think I was doing a pretty good job. I had a problem, though: while the students understood me as I explained something, they often couldn’t replicate what I did when they were on their own. I couldn’t quite figure out what else I could do. After all, I explained everything the best I could. Wasn’t the rest of it in the students’ hands?
In retrospect, I shouldn’t have been too surprised at the fact that the students had trouble communicating ideas and making connections on their own because similar issues arise in other aspects of life. As an example, many people who have been immersed in a foreign language find that they can understand the language but they can’t speak it themselves. This suggests that the ability to comprehend doesn’t necessarily translate to the ability to speak.
It was around this time that a friend of mine suggested group work. To be honest, I thought it was just a waste of time. I had heard about group work before, but I never really trusted the theories in favor of it, because they were built on words that I didn’t feel had precise meanings, and they didn’t point to measurable outcomes. Proponents of those theories reminded me of New Age treatment peddlers who can make wild claims precisely because their terminology is so nebulous.
I needed to hear something that was more scientific.
Last summer, I took two physics pedagogy courses at Buffalo State College, and that is when I was finally exposed to a lot of the hard data:
In order to understand this graph, you need to know something about the Force Concept Inventory, or FCI. It is a conceptual assessment designed to gauge a student’s understanding of Newtonian concepts. It has been around for about 20 years and has been thoroughly vetted. Typically, the FCI is given on the first and last day of class in order to measure how much students learned. The normalized gain is calculated as (post-test % − pre-test %) / (100% − pre-test %). This is a measure of how much a student learned relative to how much they could have learned. What Richard Hake did in his 1998 paper was to compare the FCI scores of 6,542 students from 62 classrooms. He divided the classrooms into two categories: Traditional Instruction — defined as relying on passive-student lectures, recipe labs, and algorithmic-problem exams — and Interactive Engagement — defined as classes designed at least in part to promote conceptual understanding through interactive engagement of students in heads-on (i.e., minds engaged) and hands-on activities which yield immediate feedback through discussion with peers and/or instructors. The graph shows the results, with normalized gain on the horizontal axis and the fraction of courses of each style that achieved that particular gain on the vertical axis. Traditional Instruction courses are in red and Interactive Engagement courses are in green. As you can see, there is a huge difference; in fact, the average gain of an Interactive Engagement course is roughly double that of a Traditional Instruction course.
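The normalized-gain formula is simple enough to sketch in a few lines of Python. This is just an illustration of the arithmetic described above (the function name and example scores are mine, not from Hake’s paper):

```python
def normalized_gain(pre_pct: float, post_pct: float) -> float:
    """Hake's normalized gain: the fraction of the *possible*
    improvement that a student actually achieved.

    (post - pre) / (100 - pre), with scores given as percentages.
    """
    if pre_pct >= 100:
        raise ValueError("pre-test score must be below 100% for the gain to be defined")
    return (post_pct - pre_pct) / (100.0 - pre_pct)


# Hypothetical example: a student scores 40% on the pre-test and 70% on
# the post-test. They closed half of the remaining gap, so g = 0.5.
g = normalized_gain(40, 70)
print(g)  # 0.5
```

Note that the denominator normalizes by the room a student had to improve: a jump from 80% to 90% (g = 0.5) counts the same as a jump from 40% to 70%, even though the raw gains differ.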
I also watched Derek Muller’s Veritasium video in which he describes his doctoral research on how students understand and interpret physics material from video explanations. He gave his own pre-test and post-test, much like the FCI, and in between he showed a video that gave the answers to the majority of the questions on the test. The students were quite confident that they had done well on the post-test, yet the average score went only from 6/26 to 6.3/26. So not only was the direct instruction unhelpful, it actually made the students more confident in their misconceptions.
Lastly, I was informed that Interactive Engagement physics classes produce nearly double the number of STEM (Science, Technology, Engineering, Math) majors as Traditional Instruction classes do.
That was enough data to get me to at least try teaching using Interactive Engagement methods, and actually experiencing Interactive Engagement for myself fully convinced me that this was the way to go.