Upsides Always Have Downsides: “Side Effects” in Education Research
Andrew Watson

Here at Learning and the Brain, we believe that research can improve education.


Specifically, research into psychology (“how the mind works”) and neuroscience (“how the brain works”) can help teachers and schools. After all, we spend all day working with students’ minds and brains!

Every now and then, we should stop and look for flaws in our assumptions.

Are there ways that research might not help learning? Might it actually limit learning?

A recent article by Yong Zhao explores this important — and troubling — question.

The Medical Model

Doctors have long relied on research to test their hypotheses.

Does this treatment work better than that treatment? Let’s run a “randomized control trial” to find out.

Notably, medical research always includes this important question: what side effects* does a treatment produce?

That is:

Any treatment might produce specific benefits.

And, it might also produce specific harms.

Medical research looks for and reports on BOTH.

Sadly — and this is Zhao’s key point — education research tends to skip the second question.

Researchers look for benefits:

Does mindfulness reduce stress?

Can retrieval practice enhance learning?

Should students exercise mid-class?

When they measure the potential upsides of those “treatments,” they don’t always look as scrupulously for downsides.

And yet: almost everything has downsides.

What to Measure, and When?

Why do we overlook the downsides?

Zhao offers two hypotheses.

First, we all agree that education is good.

If doing X helps students learn, then X is good! Its obvious goodness makes potential badness invisible.

Second, downsides take time — and alternative methods — to discover.

An example. I hypothesize a particular method will help students sing better. So, I test my method in a randomized control trial.

Sure enough, students with the new method sang better! My method worked!

However, my new teaching method just might make students hate singing.

To discover this “side effect,” I have to measure different variables. That is:

I need to check how well they sing (one set of measurements),

AND how much they like singing (a different set of measurements).

It’s also possible that the downside takes longer to arise. The improvement (right now) results in less enjoyment of singing (later on). If I don’t keep measuring, I’ll miss this “side effect.”
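
To make that concrete, here’s a minimal simulated sketch of the singing example. Every number in it is invented: the “treatment” raises singing scores at the posttest but lowers enjoyment at a delayed follow-up. An evaluation that measures only posttest skill sees pure upside.

```python
# Illustrative simulation only: every effect size here is invented.
# The "treatment" improves singing skill now but erodes enjoyment later.
import numpy as np

rng = np.random.default_rng(42)
n = 100  # students per group

# Posttest singing skill (0-100 scale): treatment adds roughly 5 points.
skill_control = rng.normal(70, 10, n)
skill_treatment = rng.normal(75, 10, n)

# Enjoyment at a delayed follow-up (1-7 scale): treatment costs a point.
enjoy_control = rng.normal(5.0, 1.0, n)
enjoy_treatment = rng.normal(4.0, 1.0, n)

# Measuring only posttest skill shows pure upside...
print(f"skill gap at posttest:      {skill_treatment.mean() - skill_control.mean():+.1f}")

# ...only a second outcome, measured later, reveals the "side effect."
print(f"enjoyment gap at follow-up: {enjoy_treatment.mean() - enjoy_control.mean():+.1f}")
```

The “side effect” shows up only because the sketch measures a second variable, at a later time: exactly Zhao’s point.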

New Habits

As Zhao argues, our habit of overlooking potential downsides creates real problems.

For instance, Zhao takes the example of Direct Instruction.

Its proponents can show lots of research suggesting its strengths. Its detractors can show lots of research suggesting its weaknesses.

How can these contradictory realities exist?

Well, any complex teaching method will have benefits and detriments. If we focus only on one — if we measure only one — we’ll necessarily miss the other.

Instead, Zhao argues, we should develop the rigorous habit of looking for both: the benefits of any teaching strategy, and also its downsides.

This realistic, complex picture will allow us to make better decisions in classrooms and schools.

One More Step

Although Zhao doesn’t mention “opportunity costs,” I think they’re an important part of this conceptual re-think.

That is:

Every time I do use a particular teaching strategy, I don’t use some other one.

If I take time for this stress-reducing technique, I don’t have time for that stress-reducing technique.

Even if a strategy has good research behind it, even if it has relatively few “side effects,” I always want to know: have I given up a better strategy to make time for this merely good strategy?

For example, this point often comes up in discussion of Learning Styles Theory.

If you’ve spent any time in this field, you know: Learning Styles Theory simply doesn’t have good research support behind it.

Alas: it has LOTS of popular support, even among teachers.

When I show teachers the comprehensive research reviews contradicting the theory, they occasionally respond this way:

“Okay, but what harm is it doing? It might be true, so why not teach to my students’ learning style?”

For me, the clear answer is opportunity cost.

If we teachers ARE spending time on teaching methods that have no research support, we ARE NOT spending time on those that do.

If students ARE studying on the treadmill because they’re “kinesthetic learners,” they ARE NOT using study strategies with research support behind them.

Measuring opportunity cost requires subtle and humble calculations. We just might have to give up a long-prized approach to make time for an even better one.

If our students learn more, that sacrifice will have been worth it.

TL;DR

Like medical researchers, we should look both for benefits and for potential harms of any teaching suggestion.

This balanced perspective might take additional time, and might require consideration of opportunity costs.

It will, however, result in a more realistic and useful understanding of teaching and learning.


*  Many years ago, I read that the phrase “side effects” is misleading. It makes unwanted effects seem unlikely, even though they’re just as likely as the wanted effects.

For that reason, I’m putting the words “side effects” in quotations throughout this post.

I believe it was Oliver Sacks who made this point, but I can’t find the citation so I’m not sure.

If you know the correct source of this insight, please let me know!


Zhao, Y. (2017). What works may hurt: Side effects in education. Journal of Educational Change, 18(1), 1–19.

Parachutes Don’t Help (Important Asterisk) [Repost]
Andrew Watson

A surprising research finding to start your week: parachutes don’t reduce injury or death.

How do we know?

Researchers asked participants to jump from planes (or helicopters), and then measured their injuries once they got to the ground. (To be thorough, they checked a week later as well.)

Those who wore parachutes and those who did not suffered — on average — the same level of injury.

Being thorough researchers, Robert Yeh and his team report all sorts of variables: the participants’ average acrophobia, their family history of using parachutes, and so forth.

They also kept track of other variables. The average height from which participants jumped: 0.6 meters. (That’s a smidge under 2 feet.) The average velocity of the plane (or helicopter): 0.0 kilometers/hour.

Yes: participants jumped from stationary planes. On the ground. Parked.

Researchers include a helpful photo to illustrate their study:

[Photo caption: Representative study participant jumping from aircraft with an empty backpack. This individual did not incur death or major injury upon impact with the ground.]

Why Teachers Care

As far as I know, teachers don’t jump out of planes more than other professions. (If you’re jumping from a plane that is more than 0.6 meters off the ground, please do wear a parachute.)

We do, however, rely on research more than many.

Yeh’s study highlights an essential point: before we accept researchers’ advice, we need to know exactly what they did in their research.

Too often, we just look at headlines and apply what we learn. We should — lest we jump without parachutes — keep reading.

Does EXERCISE help students learn?

It probably depends on when they do the exercise. (If the exercise happens during the lesson, it might disrupt learning, not enhance it.)

Does METACOGNITION help students learn?

It probably depends on exactly which metacognitive activity they undertake.

Do PARACHUTES protect us when we jump from planes?

It probably depends on how high the plane is and how fast it’s going when we jump.

In brief: yes, we should listen respectfully to researchers’ classroom guidance. AND, we should ask precise questions about that research before we use it in our classrooms.

Interested in Action Research? Try This Instead
Andrew Watson

We don’t do a lot of cross-posting here at Learning and the Brain. I believe this is the first time we’ve done so while I’ve been editor.

I think the initiative below is very exciting, and you — Learning and the Brain readers — are just the right audience to take advantage of it.

In this post, Ben Keep and Ulrich Boser of the Learning Agency Lab explain how we teachers can do valuable research in our own classrooms.

If that grabs your attention: read on!


New technologies can help educators become high-quality researchers.

When it comes to teaching, there are a million questions to ask about the nature of instruction.

What examples to use? What analogies to draw on? In what sequence to teach new ideas? The people in the best position to both ask and answer these questions are often teachers.

Teacher-driven research isn’t new, but — at least in the U.S. — it’s relatively rare. Teaching loads are high and work hours are long, making it difficult for teachers to lead education research projects, even when they want to. And, generally speaking, the U.S. school system is not set up to support teacher-driven research.

But in spite of the challenges, teachers want to engage in research. One survey found that over 90% of teachers wanted to influence the direction of research. And 59% wanted to participate in research themselves.

One way to engage is through action research, which certainly has its place in the field. And while the approach has clear benefits, it also has some limitations — like missing comparison groups.

A new kind of tool might help solve this problem. Over the next year, different learning platforms plan on offering tools to assist teachers in running their own research projects. Take the ASSISTments ETRIALS project: there’s already a small community of teachers performing independent classroom research on ASSISTments, and that community is scheduled to expand.

RCE Coach also has plans to put out a version of their software this fall that will facilitate teacher use of the platform. They plan on fostering collaborations and providing workshops and other resources to support teacher research.

There’s also Carnegie Learning’s UpGrade platform. The company has plans to release an easy-to-use UI that lets teachers perform research on the platform. They’re particularly interested in testing whether letting students move ahead at their own pace benefits student outcomes.

These tools all help teachers run randomized controlled trials in their classrooms. That is, they help teachers to randomly assign students to different instructional conditions so that we can figure out which teaching approaches work best — and why.
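
If you’re wondering what “randomly assign” amounts to mechanically, it’s as simple as shuffling a roster and splitting it. Here’s a minimal sketch; the names and condition labels are invented, not taken from any of these platforms.

```python
# Minimal sketch of random assignment; the roster and labels are invented.
import random

roster = ["Ana", "Ben", "Chloe", "Dev", "Eli", "Fay", "Gus", "Hana"]

random.seed(7)          # fixed seed so the assignment can be audited
random.shuffle(roster)  # shuffle the roster, then split it in half

half = len(roster) // 2
groups = {"condition A": roster[:half], "condition B": roster[half:]}

for condition, students in groups.items():
    print(condition, "->", ", ".join(students))
```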

Action Research Isn’t Action(able) Enough. Or: Why RCTs?

Current teacher-driven research efforts emphasize action research, which is an approach to deliberately reflecting on one’s own teaching practice with an aim to improve it. Under this model, teachers will often experiment with new teaching approaches, conduct interviews or surveys of students, and make detailed observations along the way. Often, the entire class makes a change, and the teacher reflects on whether the change was effective at improving learning outcomes.

This has led to a lot of fascinating work. But one of the limitations of action research is that, without a meaningful group comparison, it’s hard to know whether the proposed change made a difference.

Putting teachers in charge of running RCTs offers several intriguing benefits. First, teachers are likely to ask questions that researchers might not think of. The tests would also be in the context of a real classroom environment. And the results could be put into practice immediately.

Second, a wider group of teachers becoming involved with research might help bridge the research-practice divide. Teachers do not often learn about the science of learning during teacher training programs. Simultaneously, many teachers feel like existing education research is inaccessible, hard to understand, or simply not relevant.

Third, transparent randomized controlled trials would give teachers the ability to hone their intuitions about instructional choices. By posting the study design before posting the results, teachers, researchers, and anyone else who was interested could make predictions about what’s likely to happen. This gives people the kind of practice they need to become expert forecasters.

Of course, the approach also comes with significant challenges. With average class sizes of around 25 students, a single class yields very small sample sizes for carrying out RCTs. Teachers also have varying experience with research methods. And it’s still unclear what platform features will best serve the teachers-as-researchers community, and which questions simply can’t be tested using learning platforms.
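
How small is “very small”? A standard power calculation makes the problem vivid. This sketch uses statsmodels; the effect size (d = 0.4) and the conventional thresholds are illustrative defaults, not figures from this post.

```python
# Sketch: how many students per group does a two-arm RCT need?
# d = 0.4, alpha = .05, and power = .80 are illustrative conventions,
# not figures from this post.
from statsmodels.stats.power import TTestIndPower

analysis = TTestIndPower()

n_needed = analysis.solve_power(effect_size=0.4, alpha=0.05, power=0.8)
print(f"students needed per group: {n_needed:.0f}")  # roughly 100

# With ~12 students per group (half of a typical class), power is low:
power_12 = analysis.power(effect_size=0.4, nobs1=12, alpha=0.05)
print(f"power with 12 per group:   {power_12:.2f}")  # far below 0.80
```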

More Actionable Research: RCTs in Action

Do students benefit from solving math problems with pencil and paper (as opposed to on a computer)?

Suppose one group of students performed a homework assignment by solving problems with pencil and paper, while a comparable group of students solved the same homework problems on a computer (with no incentive to write them out). Would the first group learn more or less than the second?

A math teacher in Maine — Bill Hinkley — actually decided to test this very question, through an RCT. One group of students was encouraged to use paper and pencil, and had to turn in a piece of paper showing their work. The other group of students went through the homework problems as usual — through a computer screen. Both groups saw and submitted their answers through the same math platform: ASSISTments.

The result? Students who used paper and pencil outperformed those who didn’t by about 13 points. The difference was just shy of statistical significance, but suggestive given the small sample size (15 students in one condition, 12 in the other). Bill Hinkley plans to replicate and expand on the experiment in the near future.
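
The post doesn’t report the score spread, but you can recreate the flavor of “just shy of significance” from summary statistics alone. In this sketch, the group means and the 17-point pooled standard deviation are my assumptions, chosen only so the output echoes the reported pattern.

```python
# Sketch: why a 13-point gap can miss significance with n = 15 vs. 12.
# The means and the 17-point pooled SD are assumptions (the post reports
# neither), chosen so the output echoes the "just shy" result.
from scipy.stats import ttest_ind_from_stats

t, p = ttest_ind_from_stats(
    mean1=78, std1=17, nobs1=15,   # paper-and-pencil group (hypothetical)
    mean2=65, std2=17, nobs2=12,   # on-screen group; the gap is 13 points
)
print(f"t = {t:.2f}, p = {p:.3f}")  # p lands just above .05

# The identical gap with ten times the students would be decisive:
t_big, p_big = ttest_ind_from_stats(78, 17, 150, 65, 17, 120)
print(f"t = {t_big:.2f}, p = {p_big:.6f}")
```

That’s the whole case for replication and larger samples in one output: the effect doesn’t change, but the evidence for it does.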

Want To Join The RCT Teacher Research Community?

What would happen if we could scale up this style of research? There are 3.7 million teachers in the U.S. If just one percent of them started engaging in education research, there would be 37,000 teacher-researchers. The largest education research association, AERA, by comparison, has about 25,000 members.

Suppose each teacher-researcher only performed one experiment a year. That’s still 37,000 small experiments, run in realistic, noisy, classroom settings using rigorous research methods. Imagine what we might learn.

Interested in using RCTs in your classroom? Get in touch with us: Email Ulrich at [email protected]

We’re looking to build a community of teacher researchers who are doing this work in schools every day.

How Does Self-Control Really Work? Introducing a Debate
Andrew Watson

Every teacher I know wishes that our students could control themselves just a little bit better. Or, occasionally, a whole lot better.

Rarely do we worry that students have too much self-control.

All these observations prompt us to ask: how does this thing called self-control really work?

In the field of psychology, that question has led to a fierce debate. If you’d like to enter into that debate, well, I’ve got some resources for you!

A Very Brief Introduction

Roy Baumeister has developed a well-known theory about self-control. You can read about it in depth in his book Willpower: Rediscovering the Greatest Human Strength, written with John Tierney.

Think of self-control as a kind of inner reservoir. My reservoir starts the day full. However, when I come down for breakfast, I see lots of bacon. I know I…MUST…RESIST…BACON, and that self-control effort drains my reservoir a bit.

However, once I finish my oatmeal and leave the kitchen, the bacon no longer tempts me so strongly. I’ve stopped draining the reservoir, and it can refill.

Baumeister’s theory focuses on all the things that drain the reservoir, and all the strategies we can use to a) refill it, or b) expand it.

Baumeister calls this process by a somewhat puzzling name: “ego depletion.” The “depletion” part makes good sense: my reservoir is depleted. The “ego” part isn’t as intuitive, but we’ll get used to that over time.

The key point: in recent years, the theory of ego depletion has come under debate — especially as part of the larger “replication crisis” in psychology.

Some say the theory has (literally) hundreds of studies supporting it. Others note methodological problems, and worry that non-replications languish in file drawers.

Welcome Aboard

Because self-control is so important to teachers, you just might be intrigued and want to learn more.

One great resource is a podcast, charmingly titled “Two Psychologists, Four Beers.” A couple times a month, Yoel Inbar and Michael Inzlicht get together over a few brews and chat about a topic.

In this episode, they talk about this controversy at length and in detail. SO MUCH interesting and helpful information here.

One key point to know: Inzlicht himself is a key doubter of Baumeister’s research. He’s not a dispassionate observer, but an important critic.

Friendly On Ramp

However interested you are in the topic of self-control, you might not have 80 minutes to devote to it.

Or, you might worry it will be overly complex to understand the first time through.

Good news! Ahmad Assinnari has put together a point-by-point summary of the podcast. 

You could read it as an introduction to an upcoming debate, and/or follow along to be sure you’re tracking the argument clearly. (BTW: Assinnari refers to Inzlicht both as “Inzlicht” and as “Michael.” And, beware: it’s easy to confuse “Michael” with “Mischel,” another scholar in the field.)

So, if you’d like to learn more, but you’re not sure you want to read Baumeister’s book, this post serves as an introduction to Assinnari’s summary. And, Assinnari’s summary introduces the podcast.

With these few steps, you’ll be up to speed on a very important debate.

Does Smartphone Addiction Boost Anxiety and Depression?
Andrew Watson

We frequently hear about the dangers of “smartphone addiction.” If you search those words in Google, you’ll find this juicy quotation in the second link:

The brain on “smartphone” is the same as the brain on cocaine: we get an instant high every time our screen lights up with a new notification.

“An instant high.” Like cocaine? Hmmmm.

You might even have heard that we’ve got research about the perils of such addictions. But: can we rely on it?

A recent study asked a simple question, and got an alarming answer.

How Do We Know What We Know About Phone Usage?

Studies about smartphones typically ask participants to rate their cell phone usage — number of minutes, number of texts, and so forth. They then correlate those data with some other harmful condition: perhaps depression.

Researchers in Britain wanted to know: how accurately do people rate their cellphone usage?

When they looked at Apple’s “Screen Time” application, they found that participants simply don’t do a good job of reporting their own usage.

In other words: depression might correlate with people’s reported screen time. But it doesn’t necessarily correlate with (and certainly doesn’t result from) their actual screen time.
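
A toy simulation shows how that gap can mislead. Every coefficient below is invented: actual usage is unrelated to depression, but low mood inflates self-reports, and the “reported usage” correlation appears anyway.

```python
# Toy simulation: every coefficient below is invented for illustration.
# Actual usage is unrelated to depression, but low mood inflates
# self-reports, so the "reported usage" correlation appears anyway.
import numpy as np

rng = np.random.default_rng(0)
n = 2000

actual = rng.normal(180, 60, n)       # true daily minutes of phone use
depression = rng.normal(0, 1, n)      # standardized symptom score

# Self-report = noisy recall plus a bias that tracks mood.
reported = actual + rng.normal(0, 80, n) + 30 * depression

print(f"depression vs ACTUAL usage:   r = {np.corrcoef(actual, depression)[0, 1]:+.2f}")
print(f"depression vs REPORTED usage: r = {np.corrcoef(reported, depression)[0, 1]:+.2f}")
```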

In the modest language of research:

We conclude that existing self-report instruments are unlikely to be sensitive enough to accurately predict basic technology use related behaviors. As a result, conclusions regarding the psychological impact of technology are unreliable when relying solely on these measures to quantify typical usage.

So much for that “instant high.” Like cocaine.

What Should Teachers Do?

As I’ve written before, I think research into technology use is often too muddled and contradictory to give us good guidance right now.

Here’s what I wrote back in May:

For the time being, to preserve sanity, I’d keep these main points in mind:

First: don’t panic. The media LOVE to hype stories about this and that terrible result of technology. Most research I see doesn’t bear that out.

Second: don’t focus on averages. Focus on the child, or the children, in front of you.

Is your teen not getting enough sleep? Try fixing that problem by limiting screen time. If she is getting enough sleep, no need to worry!

Is your student body managing their iPhones well? If yes, it’s all good! If no, then you can develop a policy to make things better.

Until we get clearer and more consistent research findings, I think we should respond — calmly — to the children right in front of us.

I still think that advice holds. If your child’s attachment to the cellphone seems unhealthy, then do something about it.

But if not, we shouldn’t let scary headlines drive us to extremes.

Today’s Unpopular Research Finding: Potential Perils of Mindfulness
Andrew Watson

Mindfulness has a great reputation.

Students and teachers can start meditation programs quite easily. And, we’ve heard about its myriad benefits: reduced stress, greater concentration, enhanced classroom cooperation.

If we can fix so many school problems for (essentially) no money, what’s not to love?

Today’s Headline: “Particularly Unpleasant” Experiences

We’ve heard about all the good things that mindfulness can produce. Does it lead to any bad things?

Several researchers in Europe wanted to know if it led to “particularly unpleasant” experiences: “anxiety, fear, distorted emotions or thoughts, altered sense of self or the world.”

In particular, they asked if these experiences occurred during or after meditating.

They surveyed 1200+ people who had practiced meditation for at least two months. (The average experience meditating was, in fact, six years.)

Amazingly, more than 300 of them — 25% — reported a “particularly unpleasant” experience.

And, their findings are in line with two earlier studies (here and here), which reported 25% and 32% of meditators had such experiences.

The rate was lower for religious meditators, and slightly higher for men than women. The kind of meditation mattered somewhat. And (surprisingly for me), the rate was higher among those who had attended meditation retreats.

Lots of other variables didn’t matter: for instance, years of meditation experience, or length of meditation session.

Classroom Implications: Don’ts, and Do’s

Don’t Panic. If you’re currently running a mindfulness program, you don’t need to abandon ship.

Keep in mind:

This study asked respondents one question. We can’t draw extravagant conclusions from just one question.

The study focused on adults, not K-12 students.

We can’t draw causal links. That is: we don’t know, based on this study design, if the meditation led to the “particularly unpleasant” experience. We don’t even know what that rate would be for people in a control group.

We’re still VERY EARLY in exploring this question. We’ve now got 3 studies pointing in this direction. But, we need more research — and more consistent ways of investigating this link — to know what to make of it.

Do’s

First: Use this research to improve the mindfulness program you have, or the one you’re planning.

That is: if you’ve got such a program, or have one under consideration, ask yourself: do you see signs that your students are having unpleasant experiences?

Are you giving them permission and opportunity to say so?

Do the people running the mindfulness session know what to do if they get that kind of response?

After all, this research team isn’t asking schools and teachers to stop meditating. Like good scientists, they’re looking at both potential benefits and potential detriments.

Second: More generally, let this research be a healthy reminder. Almost all school changes lead to both good and bad results.

While mindfulness breaks might have lots of benefits, they might well have some downsides. So too with everything else.

We should always ask about the downsides.

When doesn’t retrieval practice help? Being outside might help some students learn something, but could it hamper others trying to learn other things?

When we actively seek out both the good and bad in the research-based practices we adopt, we’re likelier to use them more thoughtfully and effectively.

A Rose by Any Other Name Would Smell as Confusing
Andrew Watson

We have to admit it: when it comes to naming things, the field of psychology has no skills.

In many professions, we can easily distinguish between key terms.

The difference between a kidney and a pancreas? Easy.

The difference between a 2×4 and a 1×6? Easy.

The difference between an altimeter and speed indicator? Easy.

But:

The difference between grit and resilience?

Between self-control and self-regulation?

Between an adolescent and a teen-ager? Um….

And, if we can’t define and distinguish among concepts easily, we’ll struggle to talk with each other sensibly about the work we’re doing.

I think of naming problems in several categories:

Sales-Pitch Names

Occasionally, psychologists come up with a name that seems to have been market tested for maximum sales.

Take, for instance, “wise feedback.”

Many researchers have explored a particular feedback structure that combines, first, an explicit statement of high standards, and second, an explicit statement of support.

For instance:

“I’ve made these suggestions on your essay because we have very high standards in the history department. And, I’m quite confident that – with the right kind of revision – this essay will meet those standards.”

(You can find research into this strategy here.)

I myself find the research quite persuasive. The strategy couldn’t be easier to implement. It couldn’t cost any less – it’s free! And, it’s particularly helpful for marginalized students.

But the phrase “wise feedback” rankles. Whenever I talk with teachers about this strategy, I feel like I’m participating in a late-night cable TV sales pitch.

Couldn’t we find a more neutral name? “Two-step feedback”? “Supportive standards feedback”?

Another example: “engagement.” Blake Harvard recently posted about this word, worrying that it’s too hard to define.

I agree. But, I also worry the name itself tries to prohibit debate. Who could be opposed to “engagement”?

In science world, however, we should always look for opposing viewpoints on any new suggestion. If a brand name – like “engagement” – feels too warm and fuzzy to oppose, the name itself inhibits scientific thinking.

By the way, almost everything that includes the word “brain” in it is a sales-pitch name: “Brain Gym.” “Brain Break.”

Of course, the right kind of exercise and activity do benefit learning. Short cognitive breaks do benefit learning. We don’t need to throw the word “brain” at those sentences to improve those strategies.

Poaching Names

If I’ve got a new idea, and no one pays attention to it, how might I get eyeballs on my website?

I know! I can use a pre-existing popular name, and staple it on to my concept – even if the two aren’t factually related to one another!

That way, readers will think that my new idea has links to that other well-known idea. Voila – instant credibility.

This “poaching” happens most often with “Mindset.”

You’ve probably read about an “empathy” mindset. Or a “technology” mindset. Or a “creative” mindset. Maybe, an “international” mindset. Or a “your product name here” mindset.

To be clear, these ideas might in fact help students learn. Empathy and creativity and an international perspective can certainly improve schools.

But, Dweck’s word “mindset” has a very particular meaning. She has done quite specific research to support a handful of quite specific theories.

Calling my new thing “a Watson mindset” implies that my work links with Dweck’s. But, that implication needs careful, critical investigation. If you trust Dweck, you don’t have to believe everything called “mindset.”

(Of course, not everyone does trust Dweck. But: that’s a different post.)

Confusing Names

These names make sense to the people who coin and use them. But, they’re not obviously connected to the concepts under discussion – especially to visitors in the field.

Here’s a crazy example: entity theorists.

Believe it or not, one of the best-known concepts in educational psychology used to distinguish between entity theorists and (not joking here) incremental theorists.

But then, in the late 1990s, Carol Dweck started a rebranding project, and now calls those things a fixed mindset and a growth mindset.

I rather suspect her ideas wouldn’t have gotten such traction without the new names.

(Imagine teachers earnestly encouraging their students: “remember to adopt an incremental theory!” I don’t see it…)

A Really Good Name

In the bad old days (the 2000s), psychologists did a lot of research into “the testing effect.” It’s a terrible name. No one in schools wants anything to do with more testing.

Let’s rebrand. How about “retrieval practice”?

That name has many strengths:

First: far from being confusing, it tells you exactly what it means. Practice by retrieving, not by reviewing. Couldn’t be clearer.

Second: far from being a sales pitch, it remains comfortably neutral. It’s not “awesome practice” or “perfect practice.” You get to investigate research pro and con, and decide for yourself.

Third: rather than poaching (“students should develop a practice mindset!”), it stands on its own.

I don’t know who came up with this phrase. But, I tip my hat to a modest, clear, straightforward name.

We should all try to follow this clear and neutral example.

Praising Researchers, Despite Our Disagreements
Andrew Watson

This blog often critiques the hype around “brain training.” Whether it’s Lumosity or Tom Brady’s “brain speed” promises, we’ve seen time and again that these claims just don’t hold water.

Although I stand behind these critiques, I do want to pause and praise the determined researchers working in this field.

Although, as far as I can see, we just don’t have good research suggesting that brain training works*, it will be an AWESOME accomplishment if it someday comes to pass.

A Case In Point

I’ve just read a study that pursues this hypothesis: perhaps brain training doesn’t succeed because the training paradigms we’ve studied do only one thing.

So, a program to improve working memory might include cognitively demanding exercises, but nothing else. Or, brain stimulation, but nothing else. Or, physical exercise, but nothing else.

What would happen if you combined all three?

To test this question, Ward & Co. ran a remarkably complex study including 518 participants in 5 different research conditions. Some did cognitive exercises. Some also did physical exercises. And some also added neural stimulation.

The study even included TWO control groups.

And, each group participated in dozens of sessions of these trainings.

No matter the results, you have to be impressed with the determination (and organization) that goes into such a complex project.

Okay, but What Were The Results?

Sadly, not much. This study didn’t find that training results transferred to new tasks — which is the main reason we’d care about positive findings in the first place.

We might be inclined to think that the study “didn’t succeed.” That conclusion, however, misses the bigger point. The researchers pursued an entirely plausible hypothesis…and found that their evidence didn’t support it.

That is: they learned something highly useful, that other researchers might draw on in their own work.

Someday — we fervently hope — researchers will find the right combination to succeed in this task. Those who do so will have relied heavily on all the seemingly unsuccessful attempts that preceded them.

__________

* To be clear: the phrase “brain training” means “training core cognitive capacities, like working memory.”

From a different perspective, teaching itself is a form of brain training. When we teach our students, brains that once could not do something now can do that something.

Brains change all the time. “Brain training” aims for something grander. And, we haven’t yet figured out how to do it.

Andrew Watson

In a blog post, David Didau raises concerns about “the problem with teachers’ judgment.”

Here goes:

If a brain expert offers me a teaching suggestion, I might respond: “Well, I know my students, and that technique just wouldn’t work with them.”

Alas, this rebuttal simply removes me from the realm of scientific discussion.

Scientific research functions only when a claim can be disproven. Yet the claim “I know my students better than you do” can’t be disproven.

Safe in this “I know my students” fortress, I can resist all outside guidance.

As Didau writes:

If, in the face of contradictory evidence, we [teachers] make the claim that a particular practice ‘works for me and my students’, then we are in danger of adopting an unfalsifiable position. We are free to define ‘works’ however we please.

It’s important to note: Didau isn’t arguing with a straw man. He’s responding to a tweet in which a former teacher proudly announces: “I taught 20 years without evidence or research…I chose to listen to my students.”

(Didau’s original post is a few years old; he recently linked to it to rebut this teacher’s bluff boast.)

Beware Teachers’ Judgment, Part 2

In their excellent book Understanding How We Learn, the Learning Scientists Yana Weinstein and Megan Sumeracki make a related pair of arguments.

They perceive in teachers “a huge distrust of any information that comes ‘from above’ “… and “a preference for relying on [teachers’] intuitions” (p. 22).

And yet, as they note,

There are two major problems that arise from a reliance on intuition.

The first is that our intuitions can lead us to pick the wrong learning strategies.

Second, once we land on a learning strategy, we tend to seek out “evidence” that favors the strategy we have picked. (p. 23)

Weinstein and Sumeracki cite lots of data supporting these concerns.

For instance, college students believe that rereading a textbook leads to more learning than does retrieval practice — even when their own experience shows the opposite.

The Problems with the Problem

I myself certainly agree that teachers should listen to guidance from psychology and neuroscience. Heck: I’ve spent more than 10 years making such research a part of my own teaching, and helping others do so too.

And yet, I worry that this perspective overstates its case.

Why? Because as I see it, we absolutely must rely on teachers’ judgment — and even intuition. Quite literally, we have no other choice. (I’m an English teacher. When I write “literally,” I mean literally.)

At a minimum, I see three ways that teachers’ judgments must be a cornerstone in teacher-researcher conversations.

Judgment #1: Context Always Matters

Researchers arrive at specific findings. And yet, the context in which we teach a) always matters, and b) almost never matches the context in which the research was done.

And therefore, we must rely on teachers’ judgments to translate the specific finding to our specific context.

For example: the estimable Nate Kornell has shown that the spacing effect applies to study with flashcards. In his research, students learned more by studying 1 pile of 20 flashcards than 4 piles of 5 flashcards. The bigger pile spaced out practice of specific flashcards, and thus yielded more learning.

So, clearly, we should always tell our students to study with decks of 20 flashcards.

No, we should not.

Kornell’s study showed that college students reviewing pairs of words learned more from 20-flashcard piles than 5-flashcard piles. But, I don’t teach college students. And: my students simply NEVER learn word pairs.

So: I think Kornell’s research gives us useful general guidance. Relatively large flashcard decks will probably result in more learning than relatively small ones. But, “relatively large” and “relatively small” will vary.

Doubtless, 2nd graders will want smaller decks than 9th graders.

Complex definitions will benefit from smaller decks than simple ones.

Flashcards with important historical dates can be studied in larger piles than flashcards with lengthy descriptions.

In every case, we have to rely on … yes … teachers’ judgments to translate a broad research principle to the specific classroom context.
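
Incidentally, the mechanism behind Kornell’s finding is easy to see in code. In this sketch, each card gets four reviews; round-robin cycling through one deck at a time is my simplification of real studying, not Kornell’s actual procedure.

```python
# Sketch: the same four reviews per card, spaced very differently.
# Round-robin cycling through a deck is a simplification of real studying.

def review_trials(deck_size: int, reps: int = 4) -> list[int]:
    """Trials on which one particular card comes up while cycling a deck."""
    return [rep * deck_size for rep in range(reps)]

# One pile of 20 cards: a card's repetitions span the whole session.
print("deck of 20:", review_trials(20))  # [0, 20, 40, 60]

# Piles of 5, studied one pile at a time: repetitions bunch together.
print("pile of 5: ", review_trials(5))   # [0, 5, 10, 15]
```

Same cards, same number of reviews; only the deck size changes how far apart those reviews land.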

Judgment #2: Combining Variables

Research works by isolating variables. Classrooms work by combining variables.

Who can best combine findings from various fields? Teachers.

So: we know from psychology research that interleaving improves learning.

We also know from psychology research that working memory overload impedes learning.

Let’s put those findings together and ask: at what point does too much interleaving lead to working memory overload?

It will be simply impossible for researchers to explore all possible combinations of interleaving within all levels of working memory challenge.

The best we can do: tell teachers about the benefits of interleaving, warn them about the dangers of WM overload – and let them use their judgment to find the right combination.

Judgment #3: Resolving Disputes

Some research findings point consistently in one direction. But, many research fields leave plenty of room for doubt, confusion, and contradiction.

For example: the field of retrieval practice is (seemingly) rock solid. We’ve got all sorts of research showing its effectiveness. I tell teachers and students about its benefits all the time.

And yet, we still don’t understand its boundary conditions well.

As I wrote last week, we do know that RP improves memory of specifically tested facts and processes. But we don’t know if it improves memory of facts and processes adjacent to the ones that got tested.

This study says it does. This one says it doesn’t.

So: what should teachers do right now, before we get a consistent research answer? We should hear about the current research, and then use our best judgment.

One Final Point

People who don’t want to rely on teacherly judgment might respond thus: “well, teachers have to be willing to listen to research, and to make changes to their practice based upon it.”

For example, that teacher who boasted about ignoring research is no model for our work.

I heartily – EMPHATICALLY – agree with that point of view.

At the same time, I ask this question: “why would teachers listen to research-based guidance if those offering it routinely belittle our judgment in the first place?”

If we start by telling teachers that their judgment is not to be trusted, we can’t be surprised that they respond with “a huge distrust of any information that comes ‘from above’.”

So, here’s my suggestion: the field of Mind, Brain, Education should emphasize equal partnership.

Teachers: listen respectfully to relevant psychology and neuroscience research. Be willing to make changes to your practice based upon it.

Psychology and neuroscience researchers: listen respectfully to teachers’ experience. Be up front about the limits of your knowledge and its applicability.

Made wiser by these many points of view, we can all trust each other to do our best within our fields of expertise.