You come to Learning and the Brain conferences — and to this blog — because you want research-based insight into teaching and learning.
We sincerely hope that you get lots of those insights, and feel inspired by them.
At the same time, all sorts of work has to go on behind the scenes to make sure such advice has merit. Much of that work seems tedious, but all of it is important.
For instance: definitions.
When researchers explore a particular topic — say, “learning” — they have to measure something — say, “how much someone learned.”
To undertake that measurement, they rely on a definition of the thing to be measured — for example: “getting correct answers on a subsequent test = learning.”
Of course, skeptics might reject that definition: “Tests don’t reveal learning. Only real-world application reveals learning.”
Because these skeptics have a different definition, they need to measure in a different way. And, of course, they might come to a different conclusion about the value of the teaching practice being measured.
In other words:
If I define learning as “getting answers right on a test,” I might conclude that the Great Watson Teaching Method works.
If you define learning as “using new concepts spontaneously in the real world,” you might conclude that the Great Watson Teaching method is a bust.
The DEFINITION tells researchers what to MEASURE; it thereby guides our ultimate CONCLUSIONS.
A Case in Point
I recently read an article, by Hambrick, Macnamara, and Oswald, about deliberate practice.
Now, if you’ve spent time at a Learning and the Brain conference in the last decade, you’ve heard researcher K. Anders Ericsson and others present on this topic. It means, basically, “practicing with the specific intention of getting better.”
According to Ericsson and others, deliberate practice is THE key to developing expertise in almost any field: sports, music, chess, academics, professional life.
Notice, however, that I included the slippery word ‘basically’ in my definition two sentences ago. I wrote: “it means, basically, ‘practicing with the specific intention of getting better.’ ”
That “basically” means I’m giving a rough definition, not a precise one.
But, for the reasons explained above, we shouldn’t use research to give advice without precise definitions.
As Hambrick, Macnamara, and Oswald detail, deliberate practice has a frustratingly flexible definition. For instance:
- Can students create their own deliberate practice regimens? Or do they need professionals/teachers to create them and give feedback?
- Does group/team practice count, or must deliberate practice be individual?
As the authors detail, the answers to those questions change over time.
Even more alarmingly, they seem to change depending on the context. In some cases, Ericsson and his research partners hold up studies as examples of deliberate practice, but say that Hambrick’s team should not include them in meta-analyses evaluating the effectiveness of deliberate practice.
(The back-and-forth here gets very technical.)
Although the specifics of this debate quickly turn mind-numbing, the debate itself points to a troubling conclusion: because we can’t define deliberate practice with much confidence, we should hesitate to make strong research claims about the benefits of deliberate practice.
Because — again — research depends on precise definitions.
Curiouser and Curiouser
The argument above reminded me of another study that I read several years ago. Because that study uses lots of niche-y technical language, I’m going to simplify it a fair bit. But its headlines were clear:
Project-based learning helps students learn; direct instruction does not.
Because the “constructivist” vs. “direct instruction” debate rages so passionately, I was intrigued to find a study making such a strong claim.
One of my first questions will sound familiar: “How, precisely, did the researchers define ‘project-based learning’ and ‘direct instruction’?”
This study started with these definitions:
Direct instruction: “lecturing with passive listening.”
Constructivism: “problem-solving opportunities … that provide meaning. Students learn by collaboratively solving authentic, real-life problems, developing explanations and communicating ideas.”
To confirm their hypothesis, the researchers had one group of biology students (the “constructivism” group) do an experiment where they soaked chicken bones in vinegar to see how flexible the bones became.
The “direct instruction” students copied the names of 206 bones from the chalkboard into their notebooks.
After even this brief description, you might have some strong reactions to this study.
First: OF COURSE students don’t learn much from copying the names of 206 bones. Who seriously thinks that they do? No cognitive scientist I’ve ever met.
Second: no one — and I mean NO ONE — who champions direct instruction would accept “lecturing with passive listening” as its definition.
In other words: we might be excited (or alarmed) to discover research championing PBL over direct instruction. But we shouldn’t use this research to make decisions about that choice, because it relies on obviously inaccurate definitions.
(If you’re interested in this example — or this study — I’ve written about it extensively in my book, The Goldilocks Map.)
In Brief:
It might seem nerdy to focus so stubbornly on research definitions. But if we’re serious about following research-informed guidance for our teaching, we really must.
Hambrick, D. Z., Macnamara, B. N., & Oswald, F. L. (2020). Is the deliberate practice view defensible? A review of evidence and discussion of issues. Frontiers in Psychology, 11, 1134.