teacher development – Education & Teacher Conferences
When Evidence Conflicts with Teachers’ Experience
Andrew Watson

Here’s an interesting question: do students — on average — benefit when they repeat a grade?

As you contemplate that question, you might notice what kind of evidence came to mind.

Perhaps you thought: “I studied this question in graduate school. The research showed that the answer is X.”

Perhaps you thought: “I knew a student who repeated a grade. Her experience showed that the answer is X.”

In other words: our teaching beliefs might rest on research, or on personal experience. Almost certainly, they draw on a complex blend of both research and experience.

So, here’s today’s question: what happens when I see research that directly contradicts my experience?

If I, for instance, think that cold calling is a bad idea, and research shows it’s a good idea, I might…

… change my beliefs and conclude it’s a good idea, or

… preserve my beliefs and insist it’s a bad idea. In this case, I might…

… generalize my doubts and conclude education research generally doesn’t have much merit. I might even…

… generalize those doubts even further and conclude that research in other fields (like medicine) can’t help me reach a wise decision.

If my very local doubts about cold-calling research spread beyond this narrow question, such a conflict could create ever-widening ripples of doubt.

Today’s Research

A research team in Germany, led by Eva Thomm, looked at this question, with a particular focus on teachers-in-training. These pre-service teachers, presumably, haven’t studied much research on learning, and so most of their beliefs come from personal experience.

What happens when research contradicts those beliefs?

Thomm ran an online study with 150+ teachers-in-training across Germany. (Some were undergraduates; others graduate students.)

Thomm’s team asked teachers to rate their beliefs on the effectiveness of having students repeat a year. The teachers then read research that contradicted (or, in half the cases, confirmed) those beliefs. What happened next?

Thomm’s results show an interesting mix of bad and good news:

Alas: teachers who read contradictory evidence tended to say that they doubted its accuracy.

Worse still: they started to rely less on scientific sources (research) and more on other sources (opinions of colleagues and students).

The Good News

First: teachers’ doubts did not generalize outside education. That is: however vexed they were to find research contradicting prior beliefs about repeating a year, they did not conclude that medical research couldn’t be trusted.

Second: teachers’ doubts did not generalize within education. That is: they might have doubted findings about repeating a year, but they didn’t necessarily reject research into cold calling.

Third: despite their expressed doubts, teachers did begin to change their minds. They simultaneously expressed skepticism about the research AND let it influence their thinking.

Simply put, this research could have discovered truly bleak belief trajectories. (“If you tell me that cold calling is bad, I’ll stop believing research about vitamin D!”) Thomm’s research did not see that pattern at work.

Caveats, Caveats

Dan Willingham says: “one study is just one study, folks.” Thomm’s research gives us interesting data, but it does not answer this question completely, once and for all. (No one study does. Research can’t do that.)

Two points jump out at me.

First, Thomm’s team worked with teachers in Germany. I don’t know if German society values research differently than other societies do. (Certainly US society has a conspicuously vexed relationship with research-based advice.) So, this research might not hold true in other countries or social belief systems.

Second, her participants initially “reported a positive view on the potency of research and indicated a higher appreciation of scientific than of non-scientific sources.” That is, she started with people who trusted in science and research. Among people who start more skeptical — perhaps in a society that’s more skeptical — these optimistic patterns might not repeat.

And a final note.

You might reasonably want to know: what’s the answer to the question? Does repeating a year help students?

The most honest answer is: I’m not an expert on that topic, and don’t really know.

The most comprehensive analysis I’ve seen, over at the Education Endowment Foundation, says: NO:

“Evidence suggests that, in the majority of cases, repeating a year is harmful to a student’s chances of academic success.” (And, they note, it costs A LOT.)

If you’ve got substantial contradictory evidence that can inform this question, I hope you’ll send it my way.

“Soft” vs. “Hard” Skills: Which Create a Stronger Foundation?
Andrew Watson

As teachers, should we focus on our students’ understanding of course content, or on our students’ development of foundational academic skills?

Do they benefit more from learning history (or chemistry or spelling or flute), or from developing the self-discipline (grit, focus, executive skills) to get the work — any work — done?

I’ve found a recent study that explores this question. It stands out for the rigor of its methodology, and the tough-mindedness of its conclusions.

Here’s the setup:

Daunting Problems; Clever Solutions

Researchers struggle to answer these questions because student choice can complicate the data.

When college students choose courses and professors, when they opt out of one section and opt into another, we can’t tell if the professor’s quality or the students’ preferences led to particular research results.

How to solve this problem? We find a school where students get no choices.

They must take the same courses.

They can’t change sections.

Students start the year randomly distributed, and they stay randomly distributed.

Where shall we find such a school? Here’s a possibility: the United States Naval Academy. All students take the same courses. They can’t switch. They can’t drop. Sir, yes sir!

Even better: several USNA courses are sequential. We can ask this question: how does the student’s performance in the first semester affect his/her performance in the second semester?

Do some 1st semester teachers prepare their students especially well — or especially badly — for the 2nd semester?

We can even fold in extra data. The website Rate My Professors lets students grade professors on many qualities — including the difficulty of the course, and their overall rating. Perhaps those data can inform our understanding of teacher effectiveness.

Provocative Conclusions

A research team has followed this logic and recently published their conclusions.

In their findings:

Easygoing teachers — who don’t demand lots of work, who don’t communicate high standards, who routinely give lots of high grades — harm their students. 

How so? Their students — quite consistently — do badly in subsequent courses in the field.

In other words: if I have an easygoing teacher for Calculus I, I’m likely to do badly in Calculus II — compared to my identical twin brother who had a different teacher.

On the other hand, tough-minded teachers — who insist on deadlines, who require extra work, who remain stingy with high grades — benefit their students.

How so? These students — like my identical twin — do better in subsequent courses than I do.

This research team calls such executive function topics — getting work done, even if it’s dull; prioritizing; metacognition — “soft skills.” In their analysis, professors who are tough-minded about these soft skills ultimately help their students learn more.

More Provocative Still

This logic certainly makes sense; we’re not shocked that students learn more when we insist that they work hard, focus, and set high standards.

Of course, professors who DON’T insist that their students work hard get lots of student compliments (on average). We teachers know that — all things being equal — students are happier when they get less work. Their RateMyProfessor scores average higher than those of their tough-minded peers.

In turn, colleges notice student popularity ratings. School leaders feel good when students praise particular teachers. They give them awards and promotions and citations. Why wouldn’t they? After all, those highly-praised professors give the college a good reputation.

In other words: according to this research team, colleges are tempted to honor and promote teachers who get high student ratings — even though those very professors harm their students’ long term learning, and thereby diminish the quality of the academic program.

That’s a scathing claim indeed.

Caveats

Like everything I write about here, this finding comes with caveats.

First: although these students were randomly assigned once they got to the Naval Academy, admission to that Academy is very challenging indeed. (Google tells me that 8.3% of their applicants get in.)

So, a tough-minded approach might benefit this extremely narrow part of the population — who, let’s be honest, signed up for a rigorous academic program, rigorously delivered.

However, that finding doesn’t necessarily mean that this approach works for younger students, or a broader swath of the population, or students who didn’t apply for such demanding treatment.

It might. But, this study by itself shouldn’t persuade us to change our work dramatically. (Unless we work in a similar academic setting.)

Second: this report’s authors define “soft” and “hard” in a very specific way (see their page 3).

Your school might use these terms quite differently, so their claims might not apply directly to your terminology.

Equally important, the strategies they use to distinguish between “tough-minded” and “easy-going” professors require lots of intricate parsing.

I myself don’t have the stats skills to interrogate their process; I can imagine a more expert reader asking sharp questions about their methods.

Conclusion

In many parts of life, short-term challenges lead to long-term benefits.

We might not like exercise, but it helps us as we get older.

We might like bacon and ice cream, but leeks and salmon keep us fitter.

This research report suggests that we help our students in the long run by maintaining tough-minded high standards right now.

Doing so might not make us popular. Our administrative leaders don’t always recognize our wisdom. But if our students learn more, their strong “soft-skills” foundation really does help them thrive.

Do Expert Teachers See More Meaningful Classrooms?
Andrew Watson

Why do chess experts win more chess matches than novices?

This question has a perfectly straightforward answer: they know more about chess. Obviously.


Forty-five years ago, William Chase and Herbert Simon tested another hypothesis. Perhaps, they speculated, chess experts see the world differently than do chess novices.

They don’t just think differently. They literally see differently. Their chess knowledge changes their perception.

Sure enough, as Chase and Simon predicted, chess experts see chess boards as meaningful groups of chess pieces.

This chess board shows a modified French Dragon Attack.

That chess board shows a King-and-Bishop vs. King-and-Rook problem.

Chess novices, however, see chess boards as scatterings of individual pieces.

This chess board shows…a bunch of pieces.

That chess board shows…a different bunch of pieces.

Because the expert sees a different chess board, she sorts through her possible moves much more efficiently. And: she’s likelier to win the game.

Expert Teacher Vision: Are Experienced Teachers Like Chess Grand Masters?

Does this finding hold true for teachers? Does expert teacher vision differ from that of novice teachers?

Charlotte Wolff (and others) explored this question in a study that used eye-tracking software to understand where teachers look.

Sure enough, they found that expert teachers look at classrooms differently.

For instance: expert teachers “appear to be searching for activity between students,” even “following posture and body movements.”

Novices, on the other hand, focus on irrelevant details: for example, a student’s “fluorescent green shoelaces.”

When you look at the photos in the study, you’ll see that novices spend a disproportionate amount of time looking at unimportant details. A painting on the wall. People walking by in the hallway. Even an electrical outlet oddly placed in the wall.

Expert Teacher Vision: Eyes and Words

Intriguingly, Wolff & Co found that experienced teachers used different words to describe what they saw. In particular, they commented more frequently on feelings, and on the events happening in the room.

For my taste, this part of the study needs further elaboration. I’d love to hear about the ways that experts describe their classrooms differently from novices.

Here’s why.

A novice teacher might reasonably ask this question: “How do I train myself to have expert teacher vision?”

The likeliest answer is: practice, practice, practice. We don’t know many good shortcuts for developing expertise. It just takes time.

However, if we knew more about the words that experts use, we might train new teachers to speak and think that way when they comment on classrooms. These verbal habits — a kind of deliberate teacherly practice — just might help novice teachers hone their visual skills.