Andrew began his classroom life as a high-school English teacher in 1988, and has been working in or near schools ever since. In 2008, Andrew began exploring the practical application of psychology and neuroscience in his classroom. In 2011, he earned his M. Ed. from the “Mind, Brain, Education” program at Harvard University. As President of “Translate the Brain,” Andrew now works with teachers, students, administrators, and parents to make learning easier and teaching more effective. He has presented at schools and workshops across the country; he also serves as an adviser to several organizations, including “The People’s Science.”
Andrew is the author of "Learning Begins: The Science of Working Memory and Attention for the Classroom Teacher."
A surprising research finding to start your week: parachutes don’t reduce injury or death.
How do we know?
Researchers asked participants to jump from planes (or helicopters), and then measured their injuries once they got to the ground. (To be thorough, they checked a week later as well.)
Those who wore parachutes and those who did not suffered — on average — the same level of injury.
Being thorough researchers, Robert Yeh and his team report all sorts of variables: the participants’ average acrophobia, their family history of using parachutes, and so forth.
They also kept track of other variables. The average height from which participants jumped: 0.6 meters. (That’s a smidge under 2 feet.) The average velocity of the plane (or helicopter): 0.0 kilometers/hour.
Yes: participants jumped from stationary planes. On the ground. Parked.
Researchers include a helpful photo to illustrate their study:
Representative study participant jumping from aircraft with an empty backpack. This individual did not incur death or major injury upon impact with the ground
Why Teachers Care
As far as I know, teachers don’t jump out of planes more than other professions. (If you’re jumping from a plane that is more than 0.6 meters off the ground, please do wear a parachute.)
We do, however, rely on research more than many.
Yeh’s study highlights an essential point: before we accept researchers’ advice, we need to know exactly what they did in their research.
Too often, we just look at headlines and apply what we learn. We should — lest we jump without parachutes — keep reading.
Does EXERCISE help students learn?
It probably depends on when they do the exercise. (If the exercise happens during the lesson, it might disrupt learning, not enhance it.)
Does METACOGNITION help students learn?
It probably depends on exactly which metacognitive activity they undertook.
Do PARACHUTES protect us when we jump from planes?
It probably depends on how high the plane is and how fast it’s going when we jump.
In brief: yes, we should listen respectfully to researchers’ classroom guidance. AND, we should ask precise questions about that research before we use it in our classrooms.
Usually I blog about specific research findings that inform education.
Today — to mix things up — I thought it would be helpful to talk about an under-discussed theory pertinent to education.
This theory helps us in at least two ways:
First: it gives useful insights into student motivation. (Teachers want to know everything we can know about motivation.)
Second: it provides useful background for a second up-and-coming theory — as I’ll describe below.
Education and Evolution
Let’s zoom the camera WAY BACK and think about individual human development from an evolutionary perspective.
Certain human interests and abilities can promote our evolutionary fitness.
Tens of thousands of years ago, humans who — say — understood other people and worked with them effectively probably had a survival advantage.
So did humans who took time to make sense of the natural world around them.
Oh, and the physical world as well.
Given those probabilities, humans who learned about people, the natural world, and the physical world would — on average — thrive more than those who did not.
If that’s true, then we probably evolved to learn those things relatively easily. (Obviously, this is a great oversimplification of evolution’s complexities.)
For instance: we rarely teach children to recognize faces — our species evolved to be good at that. We don’t teach them to walk or talk; they do so naturally. (We encourage and celebrate, but we don’t need to teach.)
We don’t have to encourage people to explore the natural or physical world. Throwing rocks, climbing trees, jumping in puddles, chasing small animals: we evolved to be intrinsically interested in those things.
Primary and Secondary
Evolutionary psychologist David Geary describes these interests as biologically primary. We evolved to be interested in and learn about what he calls “folk psychology” (people), “folk biology” (the natural world), and “folk physics” (the physical world).
Geary contrasts these topics with others that we learn because human culture developed them: geometry, grammar, the scientific method, reading. He calls such topics biologically secondary because the need for them does not spring from our evolutionary heritage.
We are MUCH less likely to be interested in biologically secondary topics than biologically primary ones. We didn’t evolve to learn them. Our survival — understood on an evolutionary scale — does not depend on them.
Said the other way around: if I don’t explicitly teach my child to walk, she’s highly likely to do so anyway. If I don’t explicitly teach my child calculus, she’s highly unlikely to figure it out on her own. (Newton and Leibniz did…but that’s about it.)
If you’re keen to understand its nuances, Geary’s 100-page introduction to his theory is here.
Implications: Motivation
If Geary’s correct, his theory helps answer a persistent question in education:
Why don’t students love learning X as much as they loved learning to climb trees/play games/mimic siblings/build stick forts/etc.?
This question usually implies that schools are doing something wrong.
“If only we didn’t get in the way of their natural curiosity,” the question implies, “children would love X as much as those other things.”
Geary’s answer is: playing games is biologically primary, doing X is biologically secondary.
We evolved to be motivated to play games. Our genes, in effect, “want” us to do that.
We did not evolve to learn calculus. Our culture, in effect, “wants” us to do that. But cultural motivations can’t match the power of genetic ones.
In effect, Geary’s argument lets us teachers stop beating ourselves up so much. We shouldn’t feel like terrible people because our students don’t revel in the topics we teach.
Schools focus on biologically secondary topics. Those will always be less intrinsically motivating (on average) than biologically primary ones.
Implications: Cognitive Load
A second theory — cognitive load theory (CLT) — has been getting increasing attention in recent months and years.
CLT helps explain the role of working memory in human cognition. (Frequent readers know: I think working memory is the essential topic for teachers to understand.)
In recent years, CLT’s founders have connected their theory to Geary’s work on biologically primary/secondary learning.
That connection takes too much time to explain here. But, if you’re interested in cognitive load, be aware that Geary’s work might be hovering in the background.
Watch this space.
Reactions
Some scholars just love the analytical power provided by the distinction between biologically primary and secondary learning.
Paul Kirschner (twitter handle: @P_A_Kirschner), for instance, speaks of Geary’s theory with genuine admiration. (In one interview I read, he wished he’d thought of it himself.)
Others: not so much.
Christian Bokhove (twitter handle: @cbokhove), for instance, worries that the theory hasn’t been tested and can’t be tested. (Geary cites research that plausibly aligns with his argument. But, like many evolutionary theories, it’s hard to test directly.)
I myself am drawn to this framework — in part because evolutionary arguments make lots of sense to me. I do, however, worry about the lack of stronger evidence.
And: I’m puzzled that so little work has been done with the theory since it was first published in 2007. If it makes so much sense to me (a non-specialist), why haven’t other specialists picked up the topic and run with it?
For the time being, I think teachers should at least know about this theory.
You might start considering your students’ interests and motivations in this light — perhaps Geary’s distinction will offer a helpful perspective.
And, I don’t doubt that — as cognitive load theory gets more attention — the distinction between biologically primary and secondary learning will be more and more a part of teacherly conversations.
We’ve heard so much about retrieval practice in the last two years that it seems like we’ve ALWAYS known about its merits.
But no: this research pool hasn’t been widely known among teachers until recently.
We can thank Agarwal and Bain’s wonderful Powerful Teaching for giving it a broad public audience. (If you had been attending Learning and the Brain conferences, of course, you would have heard about it a few years before that.)
Of course, we should stop every now and then to ask ourselves: how do we know this works?
In this case, we’ve got several answers.
In addition to Agarwal and Bain’s book, both Make it Stick (by Brown, Roediger, and McDaniel) and How We Learn (by Benedict Carey) offer helpful surveys of the research.
You could also check out current research. Ayanna Kim Thomas recently published a helpful study about frequent quizzing in college classrooms. (It helps!)
All these ways of knowing help. Other ways of knowing would be equally helpful.
For instance: I might want to know if retrieval practice helps in actual classrooms, not just in some psychology lab somewhere.
Yes, yes: Agarwal and Bain’s research mostly happened in classrooms. But if you’ve met them you know: it might work because they’re such engaging teachers! What about teachers like me — who don’t quite live up to their energy and verve?
Today’s News
A recent meta-analysis looked at the effect of retrieval practice in actual classrooms with actual students. (How many students? Almost 8,000 of them…)
Turns out: retrieval practice helps when it’s studied in psychology labs.
And, it helps when vivacious teachers (like Agarwal and Bain) use it.
And, it helps when everyday teachers (like me) use it.
It really just helps. As in: it helps students learn.
A few interesting specifics from this analysis:
First: retrieval practice quizzes helped students learn more when they were counted for a final grade than when they weren’t. (Although: they did help when not counted toward the grade.)
Second: they helped more when students got feedback right away than when feedback was delayed. (This finding contradicts the research I wrote about last week.)
Third: short-answer quizzes helped learning more than multiple-choice quizzes (but: multiple-choice quizzes did produce modest benefits).
Fourth: announced quizzes helped more than unannounced quizzes.
and, by the way
Fifth: retrieval practice helped middle-school and high-school students more than college students. (Admittedly: based on only a few MS and HS studies.)
In brief: all that good news about retrieval practice has not been oversold. It really is among the most robustly researched and beneficial teaching strategies we can use.
And: it’s EASY and FREE.
A Final Note
Because psychology research can be — ahem — written for other psychology researchers (and not for teachers), these meta-analyses can be quite daunting. I don’t often encourage people to read them.
In this case, however, authors Sotola and Crede have a straightforward, uncomplicated prose style.
They don’t hold back on the technical parts — this is, after all, a highly technical kind of writing.
But the explanatory paragraphs are unusually easy to read. If you can get a copy — ask your school’s librarian, or see if it shows up on Google Scholar — you might enjoy giving it a savvy skim.
Given the importance of feedback for learning, it seems obvious teachers should have well-established routines around its timing.
In an optimal world, would we give feedback right away? 24 hours later? As late as possible?
Which option promotes learning?
In the past, I’ve seen research distinguishing between feedback given right this second and that given once students are done with the exercise: a difference of several seconds, perhaps a minute or two.
It would, of course, be interesting to see research into longer periods of time.
Sure enough, Dan Willingham recently tweeted a link to this study, which explores exactly that question.
The Study Plan
In this research, a team led by Dr. Hillary Mullet gave feedback to college students after they finished a set of math problems. Some got that feedback when they submitted the assignment; others got it a week later.
Importantly, both groups got the same feedback.
Mullet’s team then looked at students’ scores on the final exams. More specifically, if the students got delayed feedback on “Fourier Transforms” — whatever those are — Mullet checked to see how they did on the exam questions covering Fourier.
And: they also surveyed the students to see which timing they preferred — right now vs. one week later.
The Results
I’m not surprised to learn that students strongly preferred immediate feedback. Students who got delayed feedback said they didn’t like it. And: some worried that it interfered with their learning.
Were those students’ worries correct?
Nope. In fact, just the opposite.
To pick one set of scores: students who got immediate feedback scored 83% on that section of an exam. Students who got delayed feedback scored 94%.
Technically speaking, that’s HUGE.
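To show why an 11-point gap counts as “HUGE,” psychologists typically convert such differences into an effect size like Cohen’s d. Mullet’s paper reports its own statistics; the standard deviation below is a purely hypothetical value I’ve chosen for illustration, not a number from the study.

```python
# A rough sketch of Cohen's d, the standardized effect-size measure
# psychologists use to judge how big a group difference is.
# The means match the scores quoted above (83% vs. 94%); the pooled
# standard deviation is an ASSUMED value, not taken from Mullet's study.

def cohens_d(mean_a, mean_b, pooled_sd):
    """Standardized difference between two group means."""
    return (mean_b - mean_a) / pooled_sd

immediate = 83.0   # exam score, immediate-feedback group
delayed = 94.0     # exam score, delayed-feedback group
assumed_sd = 12.0  # hypothetical pooled standard deviation

d = cohens_d(immediate, delayed, assumed_sd)
print(round(d, 2))  # 0.92 -- by the usual convention, d of 0.8+ is "large"
```

Under that (assumed) spread, the gap lands comfortably in the “large effect” range — which is what researchers mean when they call a result huge.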
Explanations and Implications
I suspect that delayed feedback benefitted these students because it effectively spread out the students’ practice.
We have shed loads of research showing that spacing practice out enhances learning more than doing it all at once.
So, if students got feedback right away, they did all their Fourier thinking at the same time. They did that mental work all at once.
However, if the feedback arrived a week later, they had to think about it an additional, distinct time. They spread that mental work out more.
If that explanation is true, what should teachers do with this information? How should we apply it to our teaching?
As always: boundary conditions matter. That is, Mullet worked with college students studying — I suspect — quite distinct topics. If they got delayed feedback on Fourier Transforms, that delay didn’t interfere with their ability to practice “convolution.”
In K-12 classrooms, however, students often need feedback on yesterday’s work before they can undertake tonight’s assignment.
In that case, it seems obvious that we should get feedback to them ASAP. As a rule: we shouldn’t require new work on a topic until we’ve given them feedback on relevant prior work.
With that caveat, Mullet’s research suggests that delaying feedback as much as reasonably possible might help students learn. The definition of “reasonably” will depend on all sorts of factors: the topic we’re studying, the age of our students, the trajectory of the curriculum, and so forth.
But: if we do this right, feedback helps a) because feedback is vital, and b) because it creates the spacing effect. That double-whammy might help our students in the way it helped Mullet’s. That would be GREAT.
I’m usually an easy-going guy. But if you want to see me frantic with frustration, tell me about the superiority of handwriting for taking notes.
Here’s the story.
Back in 2014, two Princeton researchers did a study which concluded that handwritten notes lead to better learning than notes taken on laptops.
That’s a helpful question to have answered, and so I read their study with a mixture of curiosity and gratitude.
Imagine my surprise when I found that their conclusion rests on the assumption that students can’t learn to do new things. (That’s a VERY weird belief for a teacher to have.)
If you believe a student CAN learn to do new things, then the researchers’ data strongly suggest that laptop notes will be better.
Despite these glaring flaws, people still cite this study — and look at me with pity (contempt?) when I try to convince them otherwise. “But research says so,” they say wearily. I seethe, but try to do so politely.
Today’s Exciting News
When I try to explain my argument, my interlocutor often says something like “handwriting engages more neural processing through kinesthetic yada yada,” and therefore boosts learning.
In the first place, that’s NOT the argument that the Princeton researchers make. It might be true, but that’s changing the subject — never a good way to prove a point.
In the second place, where is the evidence of that claim? I’d love to review it.
To date, no one has taken me up on that offer.
But — [sound of trumpets blaring] — I recently found a post at Neuroscience News with this splendid headline: “Why Writing by Hand Makes Kids Smarter.”
Here’s the first sentence of the article:
Children learn more and remember better when writing by hand, a new study reports. The brains of children are more active when handwriting than typing on a computer keyboard.
“Learn more.” “Remember better.” That’s impressive. At last: the research I’ve been asking for all these years!
Believe it or not, I rather enjoy finding research that encourages me to change my mind. That process reminds me of the power of the scientific method. I believe one thing until I see better evidence on the other side of the argument. Then I believe the other thing.
So, AT LAST, I got to read the research showing that handwriting helps students learn more and remember better.
Want to know what I found?
The Study
The researchers did not test anyone’s learning or memory.
You read that right. The article claims that handwriting improves learning and memory, but the researchers didn’t test those claims.
This research team asked 24 participants — twelve adults and twelve 12-year-olds — to write by hand, or write on a laptop. They then observed the neural regions involved in those tasks.
Based on what they saw, they inferred that handwriting ought to result in better learning.
But they did not test that hypothesis.
So, based on a tiny sample size and a huge leap of neuro-faith, they have concluded that handwriting is better. (And, astonishingly, some big names in the field have echoed this claim.)
The Bigger Picture
Believe it or not, I’m entirely open to the possibility that handwritten notes enhance learning more than laptop notes do.
I’m even open to the possibility that kinesthetic yada yada is the reason.
To take one example, Jeffrey Wammes has done some splendid research showing that — in specific circumstances — drawing pictures helps students remember words and concepts.
If drawing boosts learning, maybe handwriting does too. That’s plausible.
But here’s the thing: before Wammes made his claim, he tested the actual claim he made.
He did not — as the Princeton researchers did — start from the assumption that students can’t learn to do new things.
He did not — as this current research does — extrapolate from neural patterns (of 24 people!) to predict how much learning might happen later on.
Wammes designed a plausible study to measure his hypothesis. In fact, he worked hard to disprove his interpretation of the data. Only when he couldn’t did he admit that — indeed — drawing can boost learning.
Before I believe in the superiority of either handwritten notes or laptop notes, I want to see the study that works hard to disprove its own claims. At present, the best known research on the topic conspicuously fails to meet that test.
Do you know of research that meets this standard? If yes, please let me know!
“The Theory of Enchantment is a social-emotional learning program that teaches individuals how to develop character, develop tools for resiliency…but more importantly, to learn how to love oneself.”
Intrigued?
Meet Chloé Valdary in this TED Talk, and at our conference, November 7-8.
Here’s a practical question: should the diagrams we use with students be detailed, colorful, bright, and specific?
Or, should they be simple, black and white, somewhat abstract?
We might reasonably assume that DETAILS and COLORS attract students’ attention. If so, they could help students learn.
We might, instead, worry that DETAILS and COLORS focus students’ attention on surface features, not deep structures. If so, students might learn a specific idea, but not transfer their learning to a new context.
In other words: richly-decorated diagrams might offer short-term benefits (attention!), but result in long-term limitations (difficulties with transfer). If so, blandly-decorated diagrams might be the better pedagogical choice.
Specifically, a research team led by Menendez asked college students to watch a brief video about metamorphosis. (They explained that the video was meant for younger students, so that the cool college kids wouldn’t be insulted by the simplicity of the topic.)
For half the students, the video showed only a simple black-and-white diagram; for the other half, it showed a “rich” version with colors and dots.
Did the different diagrams shape the students’ learning? Did it shape their ability to transfer that learning?
Results, Please…
No, and yes. Well, mostly yes.
In other words: students in both groups learned about ladybug metamorphosis equally well.
But — and this is a BIG but — students who watched the video with the “rich” diagram did not transfer their learning to other species as well as students who saw the “bland” diagram.
In other words: the bright colors and specifics of the rich diagram seem to limit metamorphosis to this specific species right here. An abstract representation allowed for more successful transfer of these concepts to other species.
In sum: to encourage transfer, we should use “bland,” abstract diagrams.
By the way: Team Menendez tested this hypothesis with both in-person learners and online learners. They got (largely) the same result.
So: if you’re teaching face-to-face or remotely, this research can guide your thinking.
Some Caveats
First: as is often the case, this effect depended on the students’ prior knowledge. Students who knew a lot about metamorphosis weren’t as distracted by the “rich” details.
Second: like much psychology research, this study worked with college students. Will its core concepts work with younger students?
As it turns out, Team Menendez has other studies underway to answer that very question. Watch This Space!
Third: Like much psychology research, this study looked at STEM materials. Will it work in the humanities?
What, after all, is the detail-free version of a poem? How do you study a presidency without specifics and details?
When I asked Menendez that question, he referred me to a study about reader illustrations. I’ll be writing about this soon.
In Sum
Like seductive details, “rich” diagrams might seem like a good teaching idea to increase interest and attention.
Alas, that perceptual richness seems to help in the short term but interfere with transfer over time.
To promote transfer, teach with “bland” diagrams — and use a different strategy to grab the students’ interest.
If you’re as excited for our November conference as I am, you might want to know more about our speakers.
Mary Helen Immordino-Yang is an affective neuroscientist and an educational psychologist.
That means: she studies how “children’s emotional and social relationships shape their LEARNING, and also shape the BRAIN DEVELOPMENT that undergirds their learning.”
Yes: her work is that interesting.
https://www.youtube.com/watch?v=DEeo350WQrs
I got to interview Dr. Immordino-Yang back in 2018; she’s practical and funny and insightful. And she KNOWS SO MUCH.
We all agree, I suspect, that students should learn math. And reading. They should learn history. And science. SO MANY other topics.
What’s the best way to meet these goals?
If I want my students to learn math, is math teaching the best way to go? If I want them to understand history, should I teach more history?
Or, instead, is there a handy shortcut?
If I could help students improve their reading by teaching something other than reading, that alternate approach just might be more efficient and motivating.
In fact, two candidates get lots of attention as “alternative approaches.” If either or both pan out, they would offer us more choices. Maybe even a higher chance of success.
Music and Math
I don’t remember where I first heard that music education improves math learning. Specifically: learning to play the violin ultimately makes students better at learning calculus.
The explanation focused on “strengthened neural circuits” “repurposed” for “higher cognitive function.” Something like that. That string of words sounded quite impressive, and inclined me to believe.
Given the complexity of calculus, that would be really helpful!
But: is it true?
A recent meta-analysis looked at 54 relevant studies, including just under 7,000 participants.
Their findings? Let me quote key points from their summary:
Music training has repeatedly been claimed to positively impact children’s cognitive skills and academic achievement (literacy and mathematics).
This claim relies on the assumption that engaging in intellectually demanding activities fosters particular domain-general cognitive skills, or even general intelligence.
The present meta-analytic review shows that this belief is incorrect.
Once the quality of study design is controlled for, the overall effect of music training programs is null.
It gets worse:
Small statistically significant overall effects are obtained only in those studies implementing no random allocation of participants and employing non-active controls.
In other words: you get this result only if the study isn’t correctly designed.
And worse:
Interestingly, music training is ineffective regardless of the type of outcome measure (e.g., verbal, non-verbal, speed-related, etc.), participants’ age, and duration of training.
That is: no matter what you measure, the answer is still “no.”
Violin training sure strengthened some neural circuits. But that additional strength doesn’t get “repurposed for ‘higher’ cognitive function.”
If I want my students to learn math, I should teach them math.
Chess and Intelligence
If you watch The West Wing, you know that President Bartlet is smarter than everyone else because he won a Nobel Prize, and he plays chess frequently. He says things like “rook takes queen in five.” And then Leo nods appreciatively.
So smart.
It might be true that being smart makes you better at chess. (Although, Anders Ericsson says “no.”)
Is it true that playing chess makes you smarter? If we want our students to learn math and reading and science, should we teach them more chess? Would some neural circuitry get repurposed?
Let’s go to the research tape:

In contrast to much of the existing literature, we find no evidence of an effect of chess instruction upon children’s mathematics, reading or science test scores.
In this case, by the way, the “tape” is a randomized controlled trial with more than 4,000 students in it. So: that result seems impressively well established.
So far, it seems that if I want my students to be better at X, I should teach them X. Teaching them Y and hoping that Y makes them better at X hasn’t panned out well…
Social Studies and Reading
Reading might be an interesting exception to this rule. On the one hand, reading is a skill that students must acquire.
And, at the same time, they have to apply the skill of reading to the content being read. The more that students know about the content, maybe the better they’ll do at reading.
In any case, that’s a plausible hypothesis.
A recently released report from the Thomas Fordham Institute crunches the numbers, and finds that additional time devoted to social studies instruction ultimately improves reading scores.
Two key sentences from the executive summary:
Instead of devoting more class time to English language arts, we should be teaching elementary school children more social studies — as in, rich content about history, geography, and civics.
…
Literacy gains are more likely to materialize when students spend more time learning social studies.
In fact, they find that social studies instruction most benefits students from lower-income households, and from non-English speaking homes.
For a variety of reasons, this study looks at correlation, and so can’t demonstrate causation.
However, the underlying theory makes sense. If students can decode the sounds of the words “Berlin” and “Wall,” but don’t know the geography of Germany or cold-war history, they’re unlikely to make much sense of a reading passage about that in/famous border.
In Sum
Students improve at the skills they practice. Those skills, alas, rarely transfer to unrelated disciplines.
To help students learn math, teach them math. To help them read, teach them to read — and also about the scientific, historical, geographic, and philosophical concepts that make reading so important and so worthwhile.