The headlines: highlighting helps students if they highlight the right amount of the right information.
Right amount: students tend to highlight too much. This habit reduces the benefit of highlighting, for several reasons.
Highlighting can help if the result is that information “pops out.” If students highlight too much, then nothing pops out. After all, it’s all highlighted.
Highlighting can help when it prompts students to think more about the reading. When they say “this part is more important than that part,” this extra level of processing promotes learning. Too much highlighting means not enough selective processing.
Sometimes students think that highlighting itself is studying. Instead, the review of highlighted material produces the benefits. (Along with the decision-making beforehand.)
Right information: unsurprisingly, students often don’t know what to highlight. This problem shows up most often for a) younger students, and b) novices to a topic.
Suggestions and Solutions
Surma & Co. include several suggestions to help students highlight more effectively.
For instance, they suggest that students not highlight anything until they’ve read everything. This strategy helps them know what’s important.
(I myself use this technique, although I tend to highlight once I’ve read a substantive section. I don’t wait for a full chapter.)
And, of course, teachers who teach highlighting strategies explicitly, and who model those strategies, will likely see better results.
Surma’s post does a great job summarizing and organizing all this research; I encourage you to read the whole thing.
You might also check out John Dunlosky’s awesome review of study strategies. He and his co-authors devote lots of attention to highlighting, starting on page 18. They’re quite skeptical about its benefits, and have lots to contribute to the debate.
For other suggestions about highlighting, especially as a form of retrieval practice, click here.
We’ve explored the relationship of correlation and causation before on the blog.
In particular, this commentary on DeBoer’s blog notes that — while correlation doesn’t prove causation — it might be a useful first step in discovering causation.
DeBoer argues for a difficult middle ground. He wants us to know (say it with me) that “correlation doesn’t prove causation.” AND he wants us to be reasonably skeptical, not thoughtlessly reactive.
On some occasions, we really ought to pay attention to correlation.
More Fun
I recently stumbled across a livelier way to explore this debate: a website called Spurious Correlations.
If you’d like to explore the correlation between — say — the number of letters in the winning word of the Scripps National Spelling Bee and — hmmm — the number of people killed by venomous spiders: this is definitely the website for you.
Just so you know, the correlation of the divorce rate in Maine with per-capita consumption of margarine is higher than 99%.
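If you’re curious how a number like that gets computed, here’s a minimal Pearson-correlation sketch in Python. The figures below are invented stand-ins, not the site’s actual divorce or margarine data:

```python
# A minimal sketch of a Pearson correlation, using invented numbers.
# These are NOT the actual Maine divorce or margarine figures.
from statistics import correlation  # requires Python 3.10+

divorce_rate = [5.0, 4.7, 4.6, 4.4, 4.3, 4.1, 4.2, 4.2, 4.1, 4.1]
margarine_lbs = [8.2, 7.0, 6.5, 5.3, 5.2, 4.0, 4.6, 4.5, 4.2, 3.7]

# Two series that happen to drift downward together will show a
# very high r, with no causal link whatsoever.
r = correlation(divorce_rate, margarine_lbs)
print(f"r = {r:.2f}")
```

Two trends that merely move in the same direction over time will almost always correlate strongly, which is exactly why these “spurious correlations” are so easy to find.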
“A year from now, you’ll wish you had started today.”
This quotation, attributed to Karen Lamb, warns us about the dangers of procrastination. Presumably our students would propose a slightly modified version:
“The night before the test, I’ll wish I had started studying today.”
Does procrastination ever help? Is there such a thing as “beneficial procrastination”?
Types of Procrastination
I myself was intrigued when someone recently asked me this question.
(True story: I was the President-To-Be of the Procrastinators’ Society in my high school. I would surely have been elected, but we never scheduled the meeting.)
Sure enough, researchers have theorized that we procrastinate for different reasons and in different ways.
Many of us, of course, procrastinate because we can’t get ourselves organized to face the task ahead.
(Mark Twain assures us he never put off until tomorrow that which he could do the day after tomorrow.)
Danya Corkin and colleagues wondered about a more deliberate kind of procrastination: something they call “active delay.”
Active delay includes four salient features:
First, students intentionally decide to postpone their work. It’s not a haphazard, subconscious process.
Second, they like working under pressure.
Third — unlike most procrastinators — they get the work done on time.
Fourth, they feel good about the whole process.
What did Corkin & Co. find when they looked for these distinct groups?
The Benefits of “Active Delay”
As is often the case, they found a mixed bag of results.
To their surprise, procrastinators and active delayers adopted learning strategies (rehearsal, elaboration, planning, monitoring) roughly equally.
Unsurprisingly, procrastinators generally followed unproductive motivational pathways. (If you follow Carol Dweck’s work, you know about the dangers of “performance goals” and “avoidance goals.”)
And, the big headline: procrastination led to lower grades. Active delay led to higher grades.
Classroom Implications
This research gives teachers a few points to consider.
First: both kinds of procrastination might look alike to us. However, they might lead to quite different results.
Even if students procrastinate from our perspective, we can distinguish between these two categories of procrastination. And, we should worry less about “active delay” than about good, old-fashioned “putting stuff off because I can’t deal with it.”
Second: even though “active delay” leads to more learning than “procrastination,” both probably produce less learning than well-scheduled learning.
As we know from many researchers, spreading practice out over time (spacing) yields more learning than bunching it all together.
Active delay might not be as bad, but it’s still bad for learning.
Finally: if you’re an “active delayer,” you might forgive yourself. As long as you’re choosing delay as a strategy — especially because you work best under pressure — then this flavor of procrastination needn’t bring on a bout of guilt.
The following story is true. (The names have been left out because I’ve forgotten them.)
When I attended graduate school in education, I handed in my first essay with some trepidation, and lots of excitement.
Like my classmates, I had worked hard to wrestle with the topic: how best to critique a study’s methodology. Like my classmates, I wanted to know how I could do better.
When we got those essays back, our TAs had written a number at the end. There were, quite literally, no other marks on the paper — much less helpful comments. (I’m an English teacher, so when I say “literally” I mean “literally.”)
We then sat through a slide show in which the head TA explained the most common errors, and what percentage of us had made each one.
Here’s the kicker. The head TA then said:
“Your TAs are very busy, and we couldn’t possibly meet with all of you. So, to be fair, we won’t discuss these essays individually with any of you.”
So, in a SCHOOL OF EDUCATION, I got exactly NO individual feedback on my essay. I have little idea what I did right or wrong. And, I have no idea whatsoever how I could have done better.
How’s that for teaching excellence?
Grades and Motivation: Today’s Research
My point with this story is: for me, the experience of getting a grade without feedback was a) demotivating, b) infuriating, and c) useless.
If you’d like to rethink your school’s grading strategy, my own experience would point you in a particular direction.
However: you’re not reading this blog to get anecdotes. If you’re in Learning and the Brain world, you’re interested in science. What does research tell us about grades and motivation?
One recent study tackled this question directly. The researchers surveyed students at a college that gives grades only, at a college that offers narrative feedback only, and at two colleges that use both. They also interviewed students at one of the “hybrid” colleges.
What did they find?
They didn’t pull any punches:
“Grades did not enhance academic motivation.”
“Grades promoted anxiety, a sense of hopelessness, social comparison, as well as a fear of failure.”
Briefly: grades demotivate, while narrative feedback helpfully focuses students on useful strategies for improvement.
Certainly these conclusions align with my own grad-school experience.
Not So Fast
Despite these emphatic conclusions, and despite the Twitter love, teachers who want to do away with grades should not, in my view, rely too heavily on this study.
Here’s why:
First: unless you teach in a college or university, research with these students might not apply to your students. Motivation for 2nd and 3rd graders might work quite differently than motivation for 23-year-olds.
Second: most college and university students, unlike most K-12 students, have some choices about the schools they attend and the classes they take.
In other words: students with higher degrees of academic motivation might be choosing colleges and courses with narrative feedback instead of grades.
It’s not clear if their level of motivation results from or causes their choice of college. Or, perhaps, both.
(To be clear, the researchers acknowledge this concern.)
Third: in my experience, most K-12 teachers combine letter or number grades with specific feedback. Unlike my TAs, who gave me a number without guidance, teachers often provide both a number and specific guidance.
Fourth: the study includes a number of troubling quirks.
The interview portion of the study includes thirteen students. It is, ahem, unusual to draw strong conclusions from interviews with thirteen people.
The interviewer was a student who already knew some of the interviewees. That prior relationship might well have influenced their answers to the interview questions.
More than any study I’ve read, this one includes an overtly political and economic perspective. Research like this typically eschews a strong political stance, and its presence here is at odds with research norms. (To be clear: researchers have political opinions. It’s just very strange to see them in print.)
Given these concerns — big and small — we should look elsewhere for research on grades and motivation to guide our schools and our own practice.
Earlier Thoughts
We have, of course, often written about grades and motivation here on the blog. For example:
In this article, Doug Lemov argues that — although imperfect — grades are the best way to ensure that scarce resources aren’t given entirely to well-connected people.
In this article, we look at the Mastery Transcript movement: a strategy to provide lots of meaningful feedback without the tyranny of grades and transcripts.
Your thoughts on grades and grading are welcome: please share your experience in the comments.
“[P]hysics tells you about the properties of materials but it’s the engineer who designs the bridge. Similarly, psychology tells us about how our brains work, but it’s teachers who craft instruction.”
In other words, teachers should learn a great deal about psychology from psychologists.
(And should learn some things about neuroscience from neuroscientists.)
But the study of psychology doesn’t — and can’t — tell us exactly how to teach. We have to combine the underlying psychological principles (that’s “physics” in Wiliam’s analogy) with the day-to-day gritty demands of the environment (“engineering”).
And so, my clarifying New Year’s resolution:
Study physics to be a better engineer.
I hope you’ll join me this year, and share your wisdom!
Some research-based suggestions for teaching require a lot of complex changes. (If you want to develop an interleaved syllabus, you’re going to need some time.)
Others couldn’t be simpler to adopt.
Here’s a suggestion from researchers Down Under: encourage your students to adopt “personal best goals.”
The Research
In a straightforward study, Andrew Martin and Australian colleagues asked 10- to 12-year-olds to solve a set of math problems. After each student worked for one minute, she learned how well she had done on that group of problems.
Students then worked that same set of problems again. Martin measured their improvement from the first to the second attempt.
Here’s the key point: after half of the students heard their score, they got these additional instructions:
“That is your Personal Best score. Now we’re going to do these questions again, and I would like you to set a goal where you aim to do better on these questions than you did before.”
The other half of the students simply heard their score and were told to try the problems again.
Sure enough, this simple “personal best” prompt led to greater improvement than in the control group.
To be clear: the difference was statistically significant, but relatively small. The Cohen’s d was 0.08 — lower than the values that typically get my attention.
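For readers who want to see how that statistic works, here’s a minimal sketch in Python: Cohen’s d is the difference between the group means divided by their pooled standard deviation. The scores below are invented for illustration; they are not Martin’s data.

```python
# Cohen's d: difference between group means, divided by the
# pooled standard deviation. The scores below are invented
# examples, not data from Martin's study.
from statistics import mean, stdev
from math import sqrt

def cohens_d(a, b):
    na, nb = len(a), len(b)
    pooled_sd = sqrt(((na - 1) * stdev(a) ** 2 + (nb - 1) * stdev(b) ** 2)
                     / (na + nb - 2))
    return (mean(a) - mean(b)) / pooled_sd

personal_best = [12, 14, 15, 13, 16, 14, 15, 13]  # hypothetical scores
control       = [12, 14, 14, 13, 15, 14, 14, 13]  # hypothetical scores

print(f"d = {cohens_d(personal_best, control):.2f}")
```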
However, as the researchers point out, perhaps the structure of the study kept that value low. Given the process — students worked the same problem sets twice — the obvious thing for students to do is strive to improve performance on the second iteration.
In other words: some students might have been striving for “personal bests” even when they weren’t explicitly instructed to do so.
In my own view, a small Cohen’s d matters a lot if the research advice is difficult to accomplish. So, if interleaving leads to only a small bump in learning, it might not be worth it. As noted above, interleaving takes a lot of planning time.
In this case, the additional instruction to “strive for your personal best” has essentially no cost at all.
Classroom Implications
Martin’s study is the first I know of that directly studies this technique.
(Earlier work, well summarized by Martin, looks at self-reports by students who set personal best goals. That research is encouraging — but self-reports aren’t as persuasive as Martin’s design.)
For that reason, we should be careful and use our best judgment as we try out this idea.
For example:
I suspect this technique works when used occasionally, not constantly.
In this study, the technique was used for the very short term: the personal best goals applied to the very next minute.
One intriguing suggestion that Martin makes: teachers could encourage personal best goals for the process, not the result. That is: the goal could be “ask for help before giving up” rather than “score higher than last time.”
One final point stands out in this research. If you’re up to date on your Mindset research, you know the crucial difference between “performance goals” and “learning goals.”
Students with “performance goals” strive, among other things, to beat their peers. Of course, “personal best goals” focus not on beating peers but on beating oneself. They are, in other words, “learning goals.”
And, we’ve got LOTS of research showing that learning goals result in lots more learning.
Sherrington has put together different versions of his lesson planning form: one filled in with explanations, another left blank for teachers to use, yet another for adapting and editing.
The Bigger Picture
In the world of Learning and the Brain, researchers explore precise, narrow questions about learning. The result: lots of precise, narrow answers.
For instance: Technique X helped this group of bilingual 5th graders in Texas learn more about their state constitution.
How might Technique X help you? With your students? And your curriculum?
And, crucially, how does Technique X fit together with Technique Y, Technique 7, and Technique Gamma — which you also heard about at the conference?
As you’ve heard me say: only the teacher can figure out the best way to put the research pieces together. Once you’ve gathered all the essential information, you’re in the best position to conjure the optimal mix for your specific circumstances.
All Together Now
And, that’s why I like Sherrington’s lesson planning form so much.
You’ve seen research into the importance of “activating prior knowledge.” You’ve also seen research into the importance of “retrieval practice.” You know about “prior misconceptions.” And so forth…
But, how do those distinct pieces all fit together?
This lesson planning form provides one thoughtful answer.
To be clear: this answer doesn’t have to be your answer. For this reason (I assume), Sherrington included a form that you can edit and make your own.
The key message as you start gearing up for January: research does indeed offer exciting examples and helpful new ways to think about teaching and learning.
Teachers should draw on that research. And: we’ll each put the pieces together in our own ways.
When the school year starts back up in January, teachers would LOVE to use this fresh start for good.
In particular, our students might have developed some counter-productive habits during the first half of the year. Wouldn’t it be great if we could help them develop new learning habits?
Maybe homework would be a good place to start. Better homework habits should indeed lead to more learning.
The Problem: Old Habits
When I sit down to do my homework, the same problems always crop up.
My cell phone buzzes with texts.
I’m really tired. SO tired.
The abominable noise from my brother’s room (heavy metal horror) drives me crazy.
I try to solve all these problems when they appear, but they get me so distracted and addled that I just can’t recover quickly. Result: I’m just not very efficient.
Wouldn’t it be great if I could develop new habits to solve these problems? What would these new learning habits be?
New Learning Habits: “Implementation Intentions”
We actually have a highly effective habit strategy to deal with this problem. Sadly, the solution has a lumpish name: “implementation intentions.”
Here’s what that means.
Step 1: I make a list of the problems that most often vex me. (In fact, I’ve already made that list — see above.)
Important note about step 1: everyone’s list will be different. The problems that interfere with my homework might not bother other people. (Apparently, some folks like my brother’s dreadful music.)
Step 2: decide, IN ADVANCE, how I will solve each problem.
For example, when my cell phone buzzes, I won’t look at the message. Instead, I will turn the phone to airplane mode.
When I feel tired, I’ll do 20 jumping jacks. If that doesn’t work, I’ll take a quick shower. That always wakes me right up.
When my brother cranks his stereo, I’ll move to my backup study location in the basement.
Just as everyone faces different problems, everyone will come up with different solutions.
Step 3: let the environment do the work.
Here’s the genius of “implementation intentions”: the environment does the work for us.
Now, when my phone buzzes, I already know what to do. I’ve already made the decision. I don’t have to make a new decision. I simply execute the plan.
Phone buzzes, I switch it to airplane mode. Done.
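If it helps to see the logic laid bare, here’s a toy sketch of that trigger-to-plan structure in Python. The triggers and responses are just the hypothetical examples from above:

```python
# Toy model of "implementation intentions": each trigger is paired,
# IN ADVANCE, with exactly one pre-decided response.
# Triggers and responses are the hypothetical examples from above.
IF_THEN_PLANS = {
    "phone buzzes": "switch phone to airplane mode",
    "feeling tired": "do 20 jumping jacks (then a quick shower)",
    "brother cranks stereo": "move to the backup spot in the basement",
}

def respond(trigger):
    # No fresh decision-making at homework time: the environment
    # (the trigger) simply looks up a plan made ahead of time.
    return IF_THEN_PLANS.get(trigger, "no plan yet: add one in advance!")

print(respond("phone buzzes"))  # -> switch phone to airplane mode
```

The design choice is the whole point: the deciding happens once, up front. At homework time, there is nothing left but execution.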
New Learning Habits: the Research
Now, I have to be honest with you. When I first read about this strategy, I was REALLY SKEPTICAL.
I mean, it’s so simple. How can this possibly work?
The theory — “the environment does the work, activating a decision chain that’s already been planned” — sort of makes sense, but: really?
In fact, we do have lots of good research showing that this strategy works.
For instance, Angela Duckworth (yes, that Angela Duckworth) found that students who went through this process completed 60% more practice problems for the PSAT than those who simply wrote about their goals for the test.
You read that right: 60% more practice problems.
How’s that for new learning habits?
Classroom Applications
What does this technique look like in your classroom?
Of course: everyone reading this blog teaches different content to different students at different schools. And, we are all different people.
So, your precise way of helping your students will differ from my way.
I’m including a link to Ollie Lovell’s post on this topic. To be clear, I’m not suggesting that you follow his example precisely. After all, you and Ollie are two different people.
If you’d like to stir up a feisty argument at your next faculty meeting, lob out a casual observation about direct instruction.
Almost certainly, you’ll hear impassioned champions (“only direct instruction leads to comprehension”) and detractors (“students must construct their own understandings”) launch into battle.
In an earlier post, I looked at two studies on this question. One, looking at science instruction with 4th graders, found that direct instruction led to more learning. The second argued for a constructivist approach — yet lacked a remotely plausible control group.
So, in that post at least, it made sense to tell students what experts had already concluded.
One Study, Two Perspectives
I’ve found another study that helpfully reopens this debate.
Daniel Schwartz and colleagues helped 8th grade science students understand concepts like density, speed, and surface pressure.
Crucially, all these concepts share an underlying “deep structure”: ratio.
That is: “speed” is distance divided by time. “Density” is mass divided by volume.
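Written out, that shared structure looks like this (the generic third form is my gloss, not notation from the study):

```latex
\text{speed} = \frac{\text{distance}}{\text{time}}, \qquad
\text{density} = \frac{\text{mass}}{\text{volume}}, \qquad
\text{in general:} \quad \text{concept} = \frac{a}{b}
```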
Schwartz wanted to see if students learned each concept (density, spring constant) AND the underlying deep structure (ratio).
Half of the 8th graders in this study heard a brief lecture about each concept — and about the underlying structure they shared. They then had a chance to practice the formulas they had learned.
That is: this “tell and practice” paradigm is one kind of direct instruction.
The rest of the 8th graders were given several related problems to solve, and asked to figure out how best to do so.
This “invent with contrasting cases” paradigm enacts constructivist principles.
Findings, and Conclusions
Schwartz and Co. found that both groups learned to solve word problems equally well.
However — crucially — the contrasting cases method led to deeper conceptual understanding.
When these students were given a new kind of ratio to figure out, they recognized the pattern more quickly and solved problems more accurately.
So, the obvious conclusion: constructivist teaching is better. Right?
Not so fast. Schwartz’s study includes this remarkable pair of sentences:
“There are different types of learning that range from skill acquisition to identity formation, and it seems unlikely that a single pedagogy or psychological mechanism will prove optimal for all types of learning.
Inventing with contrasting cases is one among many possible ways to support students in learning deep structure.”
That is: in this very particular set of circumstances, a constructivist approach helped these students learn this concept — at least, in the way it was tested.
What Next?
If the purists have it wrong — if both direct instruction and constructivist pedagogies might have appropriate uses — what’s a teacher to do?
Schwartz himself suggests that different approaches make sense for different kinds of learning.
For instance, he wonders if direct instruction helps students learn complex procedures, whereas constructivist methods help with deep structures (like ratio).
Perhaps, instead, the essential question is the level of difficulty. We have lots of research that says the appropriate level of cognitive challenge enhances learning.
So: perhaps the “tell and practice” method of this study was just too easy; only a more open-ended investigation required enough mental effort.
However, perhaps the study with the 4th graders (mentioned above) included a higher base level of conceptual difficulty. In that case, hypothetically, direct instruction allowed for enough mental work, whereas the inquiry method demanded too much.
Two Conclusions
First: the right pedagogical approach depends on many variables — including the content to be learned. We teachers should learn about the strengths and weaknesses of various approaches, but only we can decide what will work best for these students and this material on this day.
Second: purists who insist that we must always follow one (and ONLY one) pedagogy are almost certainly wrong.