Most teachers want to be better teachers. You’re probably reading this blog for research-based guidance on doing so.
I recently read a study that offers emphatic — and paradoxical — guidance. Exploring this research — as well as its paradoxes — might be helpful as we think about being better teachers.
Here’s the story.
A research team, led by Louis Deslauriers, worked with students in an introductory physics class at Harvard. This class was taught by an experienced professor who mostly lectured; he also supplemented the class with “demonstrations, … occasional interactive quizzes or conceptual questions.”
Let’s call this approach “interactive lecture.”
In Deslauriers’s study, students also attended two additional classes. One was taught with Method A and the other with Method B.
In Method A, an experienced professor:
- presented slides
- gave explanations
- solved sample problems
- strove for fluency of presentation
What about Method B? Another experienced teacher:
- used principles of deliberate practice
- instructed students to solve sample problems together in small groups
- circulated through the room to answer questions
- ultimately provided a full and correct answer to the problems
The researchers strove, as much as possible, to make the class content identical; only the pedagogy differed.
What did the researchers learn about the relative benefits of Methods A and B?
Paradox #1: Effective and Unloved
First off, the students learned more from Method B.
That is: when they solved problems in small groups, wrestled with the content, and ultimately heard the right answer, students scored relatively higher on an end-of-class multiple choice test. When they experienced Method A (the prof explained all the info and solved all the problems), they scored relatively lower.
But — paradoxically — the students preferred Method A, and believed that they learned more from it. They even suggested that all their classes be taught according to Method A — the method that resulted in less learning.
The researchers offer several explanations for this paradox. The headlines sound like this:
- When students hear straightforward explanations and see clear, successful demonstrations of solution strategies (Method A), the content seems easy and clear. Students think they understand.
- But, when they have to do the cognitive heavy lifting (Method B), class feels more difficult. Result: students worry they didn’t understand.
- Because the students are — relatively speaking — novices, they don’t know enough to know when they understand.
Team Deslauriers, sensibly enough, suggests that we can help students appreciate and accept the more challenging methods — like Method B — if we explain the reasoning behind them.
I, by the way, take this suggestion myself. For instance: I explain the benefits of retrieval practice to my students. They don’t always love RP exercises, because retrieval practice feels harder than simple review. But they understand the logic behind my approach.
Paradox #2: Clarity vs. Muddle
Up to this point, Deslauriers and Co. pursue a sensible path.
They know that MOST college profs use Method A (the bad one), so they want those profs to change. To encourage that change, they undertake a study showing a better option: Method B!
Given these research results, Deslauriers and Co. offer two clear and emphatic suggestions:
First: teachers should use Method B teaching strategies, not Method A strategies.
Second: to counteract students’ skepticism about Method B, we should explain the logic behind it.
What could be more helpful?
Alas, these clear suggestions can lead to another muddle. This muddle results from the freighted NAMES that this study gives to Methods A and B.
Method B — the good one — is called “active.”
Method A — the bad one — is called (inevitably) “passive.”
So, this study summarizes its findings by saying that “active” teaching is better than “passive” teaching.
These labels create real problems with the research conclusions.
Because these labels lack precision, I can apply them quite loosely to any teaching approach that I believe to be good or bad.
For instance: recall the experienced professor who regularly teaches this physics course. He mostly lectures; he also supplements the class with “demonstrations, … occasional interactive quizzes or conceptual questions.”
If I disapprove of that combination, I can call it “passive” — he mostly lectures!
If I approve, I can call it “active” — consider all those demonstrations, interactions, and conceptual questions!
These labels, in other words, are both loaded and vague — a perilous combination.
The peril arises here: literally no one in the world of cognitive science champions Method A.
EVERYONE who draws on cognitive science research — from the most ardent “constructivist” to the most passionate advocate for direct instruction — believes that students should actively participate in learning by problem solving, discussion, creation, and so forth.
Those two camps have different names for this mental activity — “desirable difficulties,” “productive struggle” — and they think quite differently about the best way to produce it. But they all agree that students must struggle with some degree of difficulty.
Slippery Logic
This naming muddle creates unfortunate logical slips.
The study certainly suggests that Method B benefits students more than Method A. But it doesn’t suggest that Method B is better than other methods that might reasonably be called by the open-ended name “active.”
For instance: it doesn’t necessarily mean that “constructivism” is better than direct instruction. And yet — because of those highly flexible labels — the study can be misinterpreted to support that claim.
My concern isn’t hypothetical. Someone sent me this study precisely to support the argument that inquiry learning promotes more learning than direct instruction.
But: “Method B” isn’t inquiry learning. And direct instruction isn’t Method A.
The Big Picture
I said at the beginning of this post that teachers might draw on research to be better teachers.
I worry that readers will draw this inaccurate conclusion from this study:
“Research proves that ‘active learning’ (like projects and inquiry) is better than ‘passive learning’ (like direct instruction).”
Instead, this study suggests that asking students to do additional, productive mental work results in more learning than reducing their mental work.
Champions of both projects/inquiry and direct instruction want students to do additional, productive mental work.
Those schools of thought have sharply different ideas about the best ways to accomplish that goal. But dismissing one of them as “passive” — and therefore obviously bad — obscures the important insights of that approach.
Deslauriers, L., McCarty, L. S., Miller, K., Callaghan, K., & Kestin, G. (2019). Measuring actual learning versus feeling of learning in response to being actively engaged in the classroom. Proceedings of the National Academy of Sciences, 116(39), 19251-19257.