Why Time is a Teacher’s Greatest Commodity…and What to Do When You Don’t Have Enough of It
Guest Post

Today’s guest post is by Jim Heal, Director of New Initiatives, and Rebekah Berlin, Senior Program Director at Deans for Impact.

Long-time readers know how much I respect the work that Deans for Impact does. Their resources — clear, brief, research-informed, bracingly practical — offer insight and guidance in this ever-evolving field.


Ask any teacher to name a rare commodity in their profession and there’s a good chance they will reply with the word: “Time.” Whether it’s time to plan, grade, or even catch one’s breath in the midst of a busy school day, time matters.

Perhaps most important of all is the time spent focusing on the material you want students to learn. So, how do you ensure that you’re making the most of the time you have with students and that they’re making the most of the way you structure their time?

Water Is Life

To answer this, let’s consider the following scenario. You’re a 7th Grade ELA teacher teaching a lesson on ‘Water is Life’ – a nonfiction text by Barbara Kingsolver. One of the objectives for this lesson is: Analyze the development of ideas over the course of a text.

You know from reading the teacher’s guide that student success will require them to compare two parts of the reading: a section describing a lush setting with an abundance of water and another describing an arid setting where rain hardly ever falls. Comparing the two will allow students to explore one of the main ideas of the text: The environmental role played by water and water sustainability.

Here is the section of the lesson[1] designed to address these aims. Take a moment to read it and consider when students are being asked to think deeply about comparing the two settings:

You arrive at school on the morning you’re due to teach this content, and there’s an unexpected announcement for students to attend club photo sessions for the yearbook during your lesson.

Big Changes, Little Time

At this point you realize that, by the time your class gets back together, you’ll need to cut ten minutes from this part of the lesson and now you have a choice to make:

If you only had twenty minutes to teach the thirty minutes of content you had planned for, how would you adapt your plan so that the most important parts of the lesson remained intact?

Let’s begin addressing this challenge with a couple of simple truths:

First: The harder and deeper we think about something, the more durable the memory will be. This means that we need students to think effortfully about the most important content in any lesson if we want it to stick.

Second: If you treat everything in the lesson as equally valuable and try to squeeze it all into less time, students are unlikely to engage in the deep thinking they need to remember the important content later.

Therefore, something’s got to give.

To help determine what goes and stays, you’re going to need to differentiate between three types of instructional tasks that can feature in any given lesson plan.

Effortful Tasks

Tasks and prompts that invite students to think hard and deep about the core content for that lesson.

In the case of ‘Water is Life’, a quick review of the plan tells us the effortful question (i.e., the part that directs students to the core knowledge they will need to think deeply about) doesn’t come until the end of the allotted thirty-minute period.

This question is this lesson’s equivalent of the ‘Aha!’ moment in which students are expected to “analyze the development of ideas over the course of the text” (the lesson objective) by exploring the way the author uses juxtaposition across the two settings.

If you reacted to the shortened lesson time by simply sticking to the first twenty minutes’ worth of content, the opportunity for students to engage in the most meaningful part of the lesson would be lost. It’s therefore crucial to ask what is most essential for student learning in each case and ensure that those parts are prioritized.

Essential Tasks

Foundational tasks and prompts that scaffold students to be able to engage with the effortful questions that follow.

Just because effortful thinking about core content is the goal, that doesn’t mean you should make a beeline for the richest part of the lesson without helping students build the essential understanding they will need in order to engage with it effortfully.

In the case of ‘Water is Life’ – even though some of the tasks aren’t necessarily effortful, they are an essential stair step for students to be able to access effortful thinking opportunities.

For example, consider the moment in the lesson immediately prior to the effortful thinking prompt we just identified:

As you can see, even though we want students to go on and address the effortful task of juxtaposing the language in each of the two settings, that step won’t be possible unless they have a good understanding of the settings themselves. This part might not be effortful, but it is essential.

In this example, it isn’t essential that students share their understanding of each setting as stated in the plan, but it is essential that they do this thinking before taking on a complex question about juxtaposed settings. In other words, the instructional strategy used isn’t essential, but the thinking students do is.

Armed with this understanding, you can now shave some time off the edges of the lesson, while keeping its core intentions intact. For instance, in a time crunch, instead of having groups work on both questions the teacher could model the first paragraph and have students complete the second independently.

Strategies like these would ensure students engage more efficiently in the essential tasks – all of which means more time and attention can be paid to the effortful task that comes later on.

Non-Effortful, Non-Essential Tasks

Lower-priority tasks and prompts that focus on tangential aspects of the core content.

Lastly, there are those parts that would be nice to have if time and student attention weren’t at a premium – but they’re not effortful or essential in realizing the goals of the lesson.

If your lesson plan is an example of high-quality instructional materials (as is the case with ‘Water is Life’), you’ll be less likely to encounter these kinds of non-essential lesson components. Nevertheless, even when the lesson plan tells you that a certain section should take 30 minutes, it won’t tell you how to allocate and prioritize that time.

This is why it’s so important to identify any distractions from the ‘main event’ of the lesson. Because effortful questions are just that: they are hard and students will need more time to grapple with their answers and to revise and refine their thinking – all of which can be undermined by non-essential prompts.

For instance, it might be tempting to ask…

…“What was your favorite part of the two passages?”

…“What does water sustainability mean to you?”

…“Has anyone ever been to a particularly wet or dry place? What was it like?”

These might seem engaging – and in one sense of the term, they are – but they don’t engage students with the material you want them to learn. For that reason alone, it’s important to steer clear of adding questions not directly related to your learning target in a lesson where you’re already having to make difficult choices about what to prioritize and why.

Three Key Steps

It’s worth noting that, even though our example scenario started with a surprise announcement, this phenomenon doesn’t only play out when lesson time gets unexpectedly cut. These kinds of decisions can happen when you know your students will need more time to take on an effortful question than the curriculum calls for, or even when lesson time is simply slipping away faster than you had anticipated. In either case, you would need to adjust the pacing of the lesson to accommodate the change, and bound up within that would be the prioritization of its most important parts.

There are steps you can take to ensure the time you have becomes all the time you need. Here are three such strategies informed by Deans for Impact’s work supporting novice and early-career teachers:

Identify the effortful tasks – aka the opportunities for effortful thinking about core content within the lesson. These effortful ‘Aha!’ moments can appear towards the end of the lesson, so don’t assume that you can trim content ‘from the bottom up’ since that could result in doing away with the most important parts for student learning.

Determine which are the essential tasks – aka the foundational scaffolds students will need in order to engage with those effortful thinking opportunities. These stepping stone tasks will often deal with the knowledge and materials students need to engage in the effortful part of the lesson. Even though they can’t be removed, they can be amended. If in doubt, concentrate on the thinking students need to do rather than the surface features of the instructional strategy.

Trim those parts of the lesson that don’t prompt effortful thinking or the foundational knowledge required to engage in it. This means that anything NOT mentioned in the previous two strategies is fair game for shrinking, trimming or doing away with altogether. Ask yourself whether this part of the lesson is instrumental in getting students to engage deeply with the content you want them to take away.

So, even if lesson time always feels like it’s running away (which it often is!) there are steps we can take to ensure teachers (and subsequently students) make the most of it.


Jim Heal is Director of New Initiatives at Deans for Impact and author of ‘How Teaching Happens’. He received his master’s in School Leadership and doctorate in Education Leadership from the Harvard Graduate School of Education.

Rebekah Berlin is Senior Director of Program at Deans for Impact. She received her Ph.D. in teaching quality and teacher education from the University of Virginia.

If you’d like to learn more about the work of Deans for Impact, you can get involved here.


[1] “Grade 7: Module 4B: Unit 1: Lesson 1” by EngageNY. Licensed under CC BY-NC-SA 3.0.

 

The Best Kind of Practice for Students Depends on the Learning Goal
Andrew Watson

In some ways, teaching ought to be straightforward. Teachers introduce new material (by some method or another), and we have our students practice (by some method or another).

Result: THEY (should) LEARN.

Alas, both classroom experience and psychology/neuroscience research suggest that the process is MUCH more complicated.

For instance:

When we “introduce new material,” should we use direct instruction or more of an inquiry/problem-based pedagogy? *

When we “have our students practice,” what’s the very BEST kind of practice?

Around here, we typically offer two answers to that second question: retrieval practice and interleaving.

Retrieval practice has gotten lots of love on this blog — for instance, here. I have written less about interleaving, mostly because we have less research on the topic.

But I’ve found some ripping good — and very practical — research to share here at the end of 2021.

“What?,” “Why?,” and Other Important Questions

Let’s start with definitions.

Let’s say I teach a particular topic today: “adjectives.” And tomorrow I teach “adverbs.” Next day, “prepositions.” Next: “coordinating conjunctions.”

How should I structure students’ homework?

They could do 20 adjective practice problems tonight. Then 20 adverb problems the next night. Then 20 prepositions. And so forth.

Let’s call that homework schedule blocking.

Or, they could do 5 adjective problems a night for the next 4 nights. And 5 adverb problems a night starting tomorrow night. And so forth.

If I go with this system, students will practice multiple different topics (adjectives, adverbs, prepositions…) at the same time. So, let’s call that homework schedule interleaving.
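The two homework schedules described above can be sketched in a few lines of Python. This is a toy illustration: the topic names and problem counts come from the example, but the helper functions themselves are invented for this post, not taken from any curriculum tool.

```python
# Toy sketch of the two homework schedules described above.
topics = ["adjectives", "adverbs", "prepositions", "coordinating conjunctions"]

def blocked_schedule(topics, problems_per_night=20):
    """One topic per night: 20 adjective problems, then 20 adverb problems, etc."""
    return [{topic: problems_per_night} for topic in topics]

def interleaved_schedule(topics, problems_per_topic=5, nights=4):
    """Each night mixes every topic introduced so far (5 problems apiece)."""
    schedule = []
    for night in range(nights):
        introduced = topics[: night + 1]  # topics taught up to and including this night
        schedule.append({topic: problems_per_topic for topic in introduced})
    return schedule

print(blocked_schedule(topics)[0])      # night 1: {'adjectives': 20}
print(interleaved_schedule(topics)[1])  # night 2: {'adjectives': 5, 'adverbs': 5}
```

Notice the key structural difference: in the blocked version each night contains exactly one topic, while in the interleaved version later nights mix several topics at once.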

For the most part, when we compare these two approaches, we find that interleaving results in more learning than blocking. (Lots of info here. Also in this book.)

That’s an interesting conclusion, but why is it true?

In the first place, probably, interleaving is a desirable difficulty. Students must THINK HARDER when they interleave practice, so they learn more.

In the second place, well, we don’t exactly know. Our confusion, in fact, stems in part from an arresting truth: interleaving usually helps students learn, but not always.

Of course, NOTHING ALWAYS WORKS, so we’re not fully surprised. But if the exceptions helped explain the rule, that could be mightily helpful…

An Intriguing Possibility…

Two scholars — Paulo F. Carvalho and Robert Goldstone — have been studying a potential explanation.

Perhaps blocking and interleaving enhance different kinds of memories. And so, research produces contradictory results because researchers use different kinds of memory tests.

Specifically, they propose that:

During blocked study, attention and encoding are progressively directed toward the similarities among successive items belonging to the same category,

whereas during interleaved study attention and encoding are progressively directed toward the differences between successive items belonging to different categories.

In other words: blocking focuses students on the properties of a particular category (“adjectives”). Interleaving focuses students on the distinctions among different categories (“adjectives, adverbs, prepositions”).

And so: if I want students to DEFINE ONE topic or idea or category (“adjectives”), blocking will help them do that well.

If I want students to COMPARE/CONTRAST MANY topics or ideas or categories, interleaving will help them do that well.

To repeat the title of this blog post: “the best kind of practice for students depends on the learning goal.”

In their most recent study, Carvalho and Goldstone test this possibility.

Sure enough, they find that students who block practice do better at defining terms, whereas those who interleave practice do better at multiple-choice questions.

The study gets splendidly intricate — they work hard to disprove their own hypothesis. But once they can’t do so, they admit they just might be right.

Caveats and Classroom Implications

Caveat #1: “one study is just one study, folks.” (Dan Willingham.)

Although, to be fair, Carvalho and Goldstone have been building a series of studies looking at this question.

Caveat #2: The researchers worked with adults (average age in the 30s) studying psychology topics.

Does their conclusion hold true for K-12 students learning K-12 topics? Maybe…

Caveat #3: Practically speaking, this research might focus on a distinction that evaporates over time.

In truth, I always want my students to know specific definitions — like “tragedy” — well. And, I want them to compare those well-known definitions flexibly to other definitions — like, say, “comedy.”

As an English teacher, I — of course! — want my students to define adjective. AND I — of course!! — want them to compare that definition/concept to other related ideas (adverbs; participles; prepositional phrases acting as adjectives).

In other words, I suspect the ultimate teaching implication of this research goes like this:

We should have students BLOCK practice until they know definitions to some degree of confidence, and then have them INTERLEAVE practice to bring those definitions flexibly together.

To be clear: I’m extrapolating, based on my classroom experience and on my reading in this field.

Until my interpretation gets more research behind it, Carvalho and Goldstone’s research suggests this general plan:

START BY DECIDING ON THE GOAL.

If you mostly want your students to know individual concepts, have them block their practice.

If you mostly want them to bring several topics together, have them interleave practice.

As your goal changes, their homework changes too.

As is so often the case, this research doesn’t tell teachers what to do. It helps us think more clearly about the work we’re doing.

In my view, that’s the most helpful research of all.


* I think that’s a false choice; both approaches make sense under different circumstances. More on that in another blog post.


Carvalho, P. F., & Goldstone, R. L. (2021). The most efficient sequence of study depends on the type of test. Applied Cognitive Psychology, 35(1), 82-97.

The Best Way to Take Class Notes
Andrew Watson

Teachers often ask me: “how should my students take notes?”

That question typically springs from a heated debate. Despite all the enthusiasm for academic technology, many teachers insist on hand-written notes. (Long-time readers know: I have a provocative opinion on this topic.)

For the time being, let’s set that debate aside.

Instead, let’s ask a more important question: what kind of mental processing should my students do while they take notes?

If students get the mental processing right, then perhaps the handwriting/laptop debate won’t matter so much.

Possibilities and Predictions

To study complicated questions, we start by simplifying them. So, here’s one simplification: in class, I want my students to…

…learn specific facts, ideas, and procedures, and

…learn connections and relationships among those facts, ideas, and procedures.

Of course, class work includes MANY more complexities, but that distinction might be a helpful place to start.

So: should students’ note-taking emphasize the specific facts? OR, should it emphasize the connections and relationships?

The answer just might depend on my teaching.

Here’s the logic:

If my teaching emphasizes facts, then students’ notes should focus on relationships.

If my teaching emphasizes relationships, then their notes should focus on factual specifics.

In these cases, the note-taking strategy complements my teaching to be sure students think both ways.

Of course, if both my teaching and students’ notes focus on facts, then mental processing of relationships and connections would remain under-developed.

In other words: we might want notes to be complementary, not redundant, when it comes to mental processing.

In fact, two researchers at the University of Louisville — Dr. David Bellinger and Dr. Marci DeCaro — tested such a prediction in recent research.

Understanding Circulation

Bellinger and DeCaro had college students listen to an information-heavy lecture on blood and the circulatory system.

Some students used guided notes that emphasized factual processing. This note-taking system — called “cloze notes” — includes a transcript of the lecture, BUT leaves words out. Students filled in the words.
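As a toy illustration of the cloze-notes format (not the researchers’ actual materials), a generator might blank out every nth word of a transcript and ask students to fill the blanks back in:

```python
def make_cloze(transcript, every_nth=4):
    """Replace every nth word of a transcript with a blank, cloze-note style.

    Illustrative only: real cloze notes would blank content words deliberately,
    not mechanically by position.
    """
    words = transcript.split()
    return " ".join(
        "_____" if i % every_nth == every_nth - 1 else word
        for i, word in enumerate(words)
    )

print(make_cloze("Blood carries oxygen from the lungs to the body's tissues."))
# → Blood carries oxygen _____ the lungs to _____ body's tissues.
```

The larger the blanks (more or longer omissions), the more effortful the fill-in becomes — which matters for the “more challenging” versions described below.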


Other students used guided notes that emphasized conceptual/relational processing. These notes — “outline notes” — organized the lecture’s ideas into conceptual hierarchies, which the students filled out.

And, to be thorough, Bellinger and DeCaro used both “more challenging” and “less challenging” versions of these note systems. As you can see, examples A and B above leave much larger blanks than examples C and D.

So, which note-taking system helped students more?

Because the lecture was “information heavy,” a note-taking system that highlights facts (the “cloze notes”) would be “redundant,” while a system that highlights conceptual relationships (the “outline notes”) would be “complementary.”

That is: students would get facts from the lecture, and see relationships highlighted in the outline notes.

For this reason, Bellinger and DeCaro predicted that the outline notes would help more in this case.

And, sure enough, students remembered more information — and applied it more effectively — when they used the challenging form of the outline notes.

Classroom Implications

Based on this study, do I recommend that you use outline notes with your students?

NO, READER, I DO NOT.

Remember, the “outline notes” worked here because (presumably) they complemented the factual presentation of the lecture.

If, however, the lecture focused more on relationships and connections, then (presumably) “cloze notes” would help more. They would be “complementary.”

As is so often the case, I don’t think we teachers should DO what research says we should DO.

Instead, I think we should THINK the way researchers help us THINK.

In this case, I should ask myself: “will my classroom presentation focus more on facts, or more on relationships and connections?”

Honestly: that’s a difficult question.

In the first place, I lecture only rarely.

And in the second place, my presentations (I hope) focus on both facts and relationships.

But, if I can figure out an answer — “this presentation focuses on relationships among the characters” — then I should devise a complementary note system. In this case, “cloze notes” would probably help, because they highlight facts (and my presentation highlights connections).

In other words: this research — and the theory behind it — doesn’t offer a straightforward, simple answer to the question that launched this post: “how should my students take notes?”

Because learning is complicated, such a usefully intricate answer might be all the more persuasive.


Bellinger, D. B., & DeCaro, M. S. (2019). Note-taking format and difficulty impact learning from instructor-provided lecture notes. Quarterly Journal of Experimental Psychology, 72(12), 2807-2819.

Teachers’ Gestures Can Help Students Learn
Andrew Watson

Over the years, I’ve written about the importance of “embodied cognition.”

In other words: we know with our brains, and we know with and through our bodies.

Scholars such as Dr. Susan Goldin-Meadow and Dr. Sian Beilock have done splendid and helpful work in this field.

Their research suggests that students might learn more when they make the right kind of gesture.

Other scholars have shown that — in online lectures — the right kind of pointing helps too.

What about teachers’ gestures? Can we help students learn in the way we use our hands?

Dr. Celeste Pilegard wanted to find out.

Steamboats, East and West

Pilegard invited college students to watch brief video lectures. The topic: the differences between Eastern and Western steamboats. (You think I’m joking. I’m not joking.)

These students watched one of four versions:

In the first version, the teacher’s gestures focused on the surface features of the steamboats themselves (how deep they sit in the water, for instance).

In the second version, the gestures focused on the structure of the lesson (“Now I’m talking about Eastern steamboats, and NOW I’m talking about Western steamboats.”).

Third version: gestures emphasized BOTH surface AND structural features.

Fourth version: a control group saw a video with neutral, content-free gestures.

Did those gestures make a difference for learning?

Pilegard, in fact, measured learning in two ways:

Did the students remember the facts?

Could the students apply those facts by drawing inferences?

So, what did she discover?

No, but Yes

Researchers typically make predictions about their findings.

In this case, Pilegard predicted that neither the surface gestures (about steamboats) nor the structural gestures (about the logic of the lesson) would help students remember facts.

But, she predicted that the structural gestures would help students draw inferences. (“If a steamboat operates on a shallow river, what does that tell you about the pressure of the steamboat’s engine?”) Surface gestures, she predicted, would not improve inferences.

Sure enough, Pilegard was 2 for 2.

Watching gestures didn’t help students remember facts any better. However, students who watched structural gestures (but not surface gestures) did better on inference questions. (Stats types: the Cohen’s d was 0.39; an impressive bonus for such a small intervention.)
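For readers who want the statistic itself: Cohen’s d is the difference between two group means divided by their pooled standard deviation. A minimal sketch (the numbers below are invented for illustration and are not Pilegard’s data):

```python
import math

def cohens_d(mean1, sd1, n1, mean2, sd2, n2):
    """Standardized mean difference using the pooled standard deviation."""
    pooled_sd = math.sqrt(
        ((n1 - 1) * sd1**2 + (n2 - 1) * sd2**2) / (n1 + n2 - 2)
    )
    return (mean1 - mean2) / pooled_sd

# Made-up groups: treatment scores slightly higher than control.
print(round(cohens_d(7.8, 2.0, 30, 7.0, 2.1, 30), 2))  # → 0.39
```

By common (rough) convention, d around 0.2 counts as small, 0.5 as medium, and 0.8 as large — which is why 0.39 from a two-minute intervention is noteworthy.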

When Pilegard repeated the experiment with a video on “innate vs. acquired immunity,” she got the same results.

Implications and Cautions

As teachers, we know that every little bit helps. When we use gestures to reinforce the underlying logical structure of our explanations, doing so might help students learn more.

As we plan, therefore, we should be consciously aware of our lesson’s logical structure, and think a bit about how gestures might reinforce that structure.

At the same time, regular readers know that all the usual cautions apply:

We should look at groups of studies, not just one study.

Pilegard’s research focused on college students. Will this strategy work with other students? We don’t know for sure.

These video lessons were quite short: under two minutes each. Will this strategy work over longer periods of time? We don’t know for sure.

In other words — this research offers a promising strategy. And, we need more research with students who resemble our own classrooms and lessons that last longer to have greater confidence.

I myself do plan to think about gestures for upcoming lessons. But I won’t ignore all the other teaching strategies (retrieval practice, cognitive load management, etc.). Here’s hoping that future research can point the way…


By the way:

Teachers often ask how they can get copies of research to study it for themselves.

Easy answer #1: Google Scholar.

If that doesn’t work, I recommend easy answer #2: email the researcher.

In this case, I emailed Dr. Pilegard asking for a copy of the study — and she emailed it to me 11 minutes later.

In honor of her doing so, I’m creating the Pilegard Award for Prompt Generosity in Sharing Research with People who Email You Out of the Blue.

No doubt it will be much coveted.

 

Handwriting Improves Learning, Right?
Andrew Watson

Here’s a good rule for research: if you believe something, look for research that contradicts your belief.

So, if you think that retrieval practice helps students learn, see if you can find research showing the opposite.

If you disapprove of cold-calling, see if any studies support its use.

If you think that hand-written notes help students more than notes taken on a laptop, try to find research that disagrees with you.

In this last case, you might even find me. Most teachers I know believe that handwritten notes are superior, and they cite a well-known study to support that belief.

I’ve argued for years that this research assumes students can’t learn how to do new things – a very odd belief for a teacher to have. If you believe students can learn how to do new things, well, this study actually suggests that laptop notes will help more than handwritten notes.

However, the “good rule” described above applies to me too. If I believe that we don’t know whether handwriting or keyboarding is better for learning, I should look for evidence that contradicts my belief.

For that reason, I pounced on a recent science news headline. The gist: recent research by Robert Wiley and Brenda Rapp shows that students who wrote by hand learned more than those who used laptops.

So, does their research finally contradict my belief?

Learning Arabic Letters

Wiley and Rapp had college-age adults learn Arabic letters.

12 of them learned by pressing the right key on a keyboard.

12 learned by looking at the letters closely and confirming they were the same.

And, 12 learned by writing the letters.

Did these distinct learning strategies make a difference several days later?

YES THEY DID.

The hand-writers learned a lot more, and learned a lot faster.

In fact – here’s a cool part – their learning transferred to new, related skills.

These participants practiced with letters. When Wiley and Rapp tested them on WORDS, the hand-writers did better than the other two groups – even though they hadn’t practiced with words.

So: sure enough, handwriting helped students learn more.

Boundary Conditions

Given the strength and clarity of these findings, you might think that I’m going to change my mind.

Reader, I am not. Here’s why:

This research shows that writing by hand helps people learn how to write by hand. It also helps people learn to do things immediately related to writing by hand – like, say, saying and writing words.

We should notice the narrow boundaries around that conclusion.

People who write by hand learn how to write by hand.

That research finding, however, does NOT demonstrate that writing by hand helps people learn things unrelated to handwriting itself.

For instance: do handwritten notes help people learn more about history or psychology or anatomy than laptop notes? This research does not answer that question, because that question falls outside the boundaries of the research.

In a similar way: practicing scales on the piano surely helps play piano scales better than – say – watching someone else do so.

But: does practicing piano scales make me better at other tasks requiring manual dexterity? Knitting? Keyboarding? Sculpting?

To answer those questions, we have to research those questions. We can’t extrapolate from piano scales to knitting and sculpting. (Well: we can, but we really shouldn’t.)

So, What’s The Answer?

Is handwriting really a better way to learn than keyboarding?

Honestly, I just don’t think we know. (In fact, Wiley and Rapp don’t claim that handwriting helps anywhere other than learning and reading letters and words.)

In fact, I suspect we need to explore MANY other variables:

the content being learned,

the teacher’s strategy for presenting it,

the student’s preference,

the student’s age –

perhaps even the relative complexity of writing vs. keyboarding. (I’m not an expert in this topic, but I understand that some languages require very intricate steps for accurate keyboarding.)

We can say – thanks to Wiley and Rapp – that handwriting helps learn how to write by hand. But until we explore those other precise questions precisely, we shouldn’t offer strong answers as if they have research support.

 

Let’s Get Practical: What Works Best in the Classroom?
Andrew Watson

At times, this blog explores big-picture hypotheticals — the “what if” questions that can inspire researchers and teachers.

And, at times, we just want practical information. Teachers are busy folks. We simply want to know: what works? What really helps my students learn?

That question, in fact, implies a wise skepticism. If research shows a teaching strategy works well, we shouldn’t just stop with a study or two.

Instead, we should keep researching and asking more questions.

Does this strategy work with …

… older students as well as younger students?

… history classes as well as music classes as well as sports practice?

… Montessori classrooms, military academies, and public school classrooms?

… this cultural context as well as that cultural context?

And so forth.

In other words, we want to know: what have you got for me lately?

Today’s News

Long-time readers know of my admiration for Dr. Pooja Agarwal.

Her research into retrieval practice has helped clarify and deepen our understanding of this teaching strategy.

Her book, written with classroom teacher Patrice Bain, remains one of my favorites in the field.

And she’s deeply invested in understanding the complexity of translating research into the classroom.

That is: she doesn’t just see if a strategy works in the psychology lab (work that’s certainly important). Instead, she goes the next step to see if that strategy works with the messiness of classrooms and students and schedule changes and school muddle.

So: what has she done for us lately? I’m glad you asked.

Working with two other scholars, Agarwal asked all of those questions I listed above about retrieval practice.

That is: we think that retrieval practice works. But: does it work with different ages, in various subjects, and in different countries?

Agarwal and Co. wanted to find out. They went through an exhaustive process to identify retrieval practice research in classrooms, and studied the results. They found:

First: yup, retrieval practice really does help. In 57% of the studies, the Cohen’s d value was 0.50 or greater. That’s an impressively large result for such a simple, low-cost strategy.
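For readers who want the statistic behind that claim: Cohen’s d expresses the difference between two group means in units of their pooled standard deviation, so a d of 0.50 means the retrieval-practice group outscored the comparison group by half a standard deviation. Here’s a minimal sketch of the calculation — the scores below are invented for illustration, not drawn from any study Agarwal reviewed:

```python
import math

def cohens_d(group1, group2):
    """Cohen's d: difference in means divided by the pooled standard deviation."""
    n1, n2 = len(group1), len(group2)
    m1 = sum(group1) / n1
    m2 = sum(group2) / n2
    # Sample variances (with Bessel's correction, n - 1)
    v1 = sum((x - m1) ** 2 for x in group1) / (n1 - 1)
    v2 = sum((x - m2) ** 2 for x in group2) / (n2 - 1)
    # Pool the two variances, weighted by degrees of freedom
    pooled_sd = math.sqrt(((n1 - 1) * v1 + (n2 - 1) * v2) / (n1 + n2 - 2))
    return (m1 - m2) / pooled_sd

# Hypothetical final-exam scores: a retrieval-practice class vs. a review-only class
retrieval = [78, 85, 82, 90, 88, 84]
review = [76, 84, 79, 87, 83, 81]
print(round(cohens_d(retrieval, review), 2))  # → 0.69
```

By the conventional rough benchmarks (0.2 small, 0.5 medium, 0.8 large), the made-up classes above show a medium-to-large effect — the range the reviewed studies often reached.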

Second: yup, it works in different fields. By far the most research has been done in science and psychology (19 and 16 studies, respectively), but it works in every discipline where we look — including, say, history or spelling or CPR.

Third: yup, it works at all ages. Most research is done with college students (and, strangely, medical students), but it works in K-12 as well.

Fourth: most retrieval practice research is done with multiple choice. (Yes: a well-designed multiple choice test can be retrieval practice. “Well-designed” = “students have to THINK about the distractors.”)

Fifth: we don’t have enough research to know the optimal gap between retrieval practice and the final test.

Sixth: surprisingly, not enough classroom research focused on FEEDBACK. You’d think that would be an essential component…but Team Agarwal didn’t find enough research here to draw strong conclusions.

Seventh: Of the 50 studies, only 3 were from “non-Western” countries. So, this research gap really stands out.

In brief: if we want to know what really works, we have an increasingly clear answer: retrieval practice works. We had good evidence before; we’ve got better evidence now.

Examples Please

If you’re persuaded that retrieval practice is a good idea, you might want to be sure exactly what it is.

You can always use the “tags” menu on the right; we blog about retrieval practice quite frequently, so you’ve got lots of examples.

But, here’s a handy description (which I first heard in Agarwal and Bain’s book):

When students review, they put information back into their brains. So: “rereading the textbook” = “review,” because students try to redownload the book into their memory systems.

When students use retrieval practice, they take information out of their brains. So, “flashcards” = “retrieval practice,” because students have to remember what that word means.

So:

Reviewing class notes = review.

Outlining the chapter from memory = retrieval practice.

Short answer questions = retrieval practice.

Watching a lecture video = review.

When you strive for retrieval practice, the precise strategy is less important than the cognitive goal. We want students to try to remember before they get the correct answer. That desirable difficulty improves learning.

And, yes, retrieval practice works.

“Rich” or “Bland”: Which Diagrams Help Students Learn Deeply? [Reposted]
Andrew Watson

Here’s a practical question: should the diagrams we use with students be detailed, colorful, bright, and specific?

Or, should they be simple, black and white, somewhat abstract?

We might reasonably assume that DETAILS and COLORS attract students’ attention. If so, they could help students learn.

We might, instead, worry that DETAILS and COLORS focus students’ attention on surface features, not deep structures. If so, students might learn a specific idea, but not transfer their learning to a new context.

In other words: richly-decorated diagrams might offer short-term benefits (attention!), but result in long-term limitations (difficulties with transfer). If so, blandly-decorated diagrams might be the better pedagogical choice.

Today’s Research

Scholars in Wisconsin — led by David Menendez — have explored this question.

Specifically, they asked college students to watch a brief video about metamorphosis. (They explained that the video was meant for younger students, so that the cool college kids wouldn’t be insulted by the simplicity of the topic.)

For half the students, that video showed only the black-and-white diagram to the left; for the other half, the video showed the colors and dots.

Did the different diagrams shape the students’ learning? Did they shape their ability to transfer that learning?

Results, Please…

No, and yes. Well, mostly yes.

In other words: students in both groups learned about ladybug metamorphosis equally well.

But — and this is a BIG but — students who watched the video with the “rich” diagram did not transfer their learning to other species as well as students who saw the “bland” diagram.

In other words: the bright colors and specifics of the rich diagram seem to limit metamorphosis to this specific species right here. An abstract representation allowed for more successful transfer of these concepts to other species.

In sum: to encourage transfer, we should use “bland,” abstract diagrams.

By the way: Team Menendez tested this hypothesis with both in-person learners and online learners. They got (largely) the same result.

So: whether you’re teaching face-to-face or remotely, this research can guide your thinking.

Some Caveats

First: as is often the case, this effect depended on the students’ prior knowledge. Students who knew a lot about metamorphosis weren’t as distracted by the “rich” details.

Second: like much psychology research, this study worked with college students. Will its core concepts work with younger students?

As it turns out, Team Menendez has other studies underway to answer that very question. Watch This Space!

Third: Like much psychology research, this study looked at STEM materials. Will it work in the humanities?

What, after all, is the detail-free version of a poem? How do you study a presidency without specifics and details?

When I asked Menendez that question, he referred me to a study about reader illustrations. I’ll be writing about this soon.

In Sum

Like seductive details, “rich” diagrams might seem like a good teaching idea to increase interest and attention.

Alas, that perceptual richness seems to help in the short term but interfere with transfer over time.

To promote transfer, teach with “bland” diagrams — and use a different strategy to grab the students’ interest.

How to Foster New Friendships in School? Seating Plans! (We’ve Got Research…)
Andrew Watson

In schools, we want students to learn many topics: math, and history, and reading, and health, and robotics…

And, especially at the beginning of the year, we’d like them to make friends along the way.

Can we help?

One research team tried a reasonable approach. They wondered if students might form new friendships when they sit next to classmates they don’t yet know well.

Here’s the story:

The Plan

Julia Rohrer and colleagues worked with 182 teachers in 40 schools in Hungary. Their study included 3rd through 8th graders — almost 3000 of them!

In these schools, students sat at “freestanding forward-facing 2-person desks.” (It sounds to me like Little House on the Prairie, but in rural Hungary.) Researchers assigned students to these paired desks randomly.

And, they tracked the friendships that formed.

So: what happened? Did students befriend their deskmates?

The Prediction & the Speculation

Unsurprisingly, we tend — on average — to form friendships with people who are like us. In schools, that means:

boys typically befriend boys, while girls befriend girls;

academic achievers connect with other achievers;

members of racial and ethnic groups often form friendships within those groups. (In this study, researchers kept track of Roma and non-Roma Hungarian identities.)

Researchers predicted that this pattern (called “homophily”) would continue.

And they speculated that the new seating plans might shake things up a bit. That is: perhaps more friendships would form outside of those usual patterns.

The Results

So, what happened with these new seating plans?

First: Randomly seating students next to each other did modestly increase the likelihood of mutual friendships forming: from 15% to 22%.

Second: These new friendships did mostly fit the expected patterns. As homophily suggests, friendships largely formed within gender, achievement, and ethnic groups.

Third: Random seating DID foster new friendships across those divides as well — although to a smaller degree. That is: some girls did form mutual friendships with boys, and so forth.

In brief: researchers wondered if random seating patterns might expand friendship circles — and they do!

The Big Picture

We should, of course, remember that this study is just one study. We’ll need more research to be increasingly certain of these conclusions.

And, honestly, this seating plan didn’t make a huge difference.

At the same time: teachers know that every little bit counts. If we can help students form new friendships — and help them form friendships that might not otherwise have started — that’s a powerful way to start a new school year.

You will, of course, adapt this idea to your own teaching context. As you contemplate your routine at the beginning of a new year, this strategy might be a useful way to open new friendship vistas.

To Grade or Not to Grade: Should Retrieval Practice Quizzes Be Scored? [Repost]
Andrew Watson

We’ve seen enough research on retrieval practice to know: it rocks.

When students simply review material (review their notes; reread the chapter), that mental work doesn’t help them learn.

However, when they try to remember (quiz themselves, use flashcards), this kind of mental work does result in greater learning.

In Agarwal and Bain’s elegant phrasing: don’t ask students to put information back into their brains. Instead, ask them to pull information out of their brains.

Like all teaching guidance, however, the suggestion “use retrieval practice!” requires nuanced exploration.

What are the best methods for doing so?

Are some retrieval practice strategies more effective?

Are some frankly harmful?

Any on-point research would be welcomed.

On-Point Research

Here’s a simple and practical question. If we use pop quizzes as a form of retrieval practice, should we grade them?

In other words: do graded pop quizzes result in more or less learning, compared to their ungraded cousins?

This study, it turns out, can be run fairly easily.

Dr. Maya Khanna taught three sections of an Intro to Psychology course. The first section had no pop quizzes. In the second section, Khanna gave six graded pop quizzes. In the third, six ungraded pop quizzes.

Students also filled out a questionnaire about their experience taking those quizzes.

What did Khanna learn? Did the quizzes help? Did grading them matter?

The Envelope Please

The big headline: the ungraded quizzes helped students on the final exam.

Roughly: students who took the ungraded pop quizzes averaged a B- on the final exam.

Students in the other two groups averaged in the mid-to-high C range. (The precise comparisons require lots of stats speak.)

An important note: students in the “ungraded” group scored higher even though the final exam did not repeat the questions from those pop quizzes. (The same material was covered on the exam, but the questions themselves were different.)

Of course, we also wonder about our students’ stress. Did these quizzes raise anxiety levels?

According to the questionnaires, nope.

Khanna’s students responded to this statement: “The inclusion of quizzes in this course made me feel anxious.”

A 1 meant “strongly disagree.”

A 9 meant “strongly agree.”

In other words, a LOWER rating suggests that the quizzes didn’t increase stress.

Students who took the graded quizzes averaged an answer of 4.20.

Students who took the ungraded quizzes averaged an answer of 2.96.

So, neither group felt much stress as a result of the quizzes. And, the students in the ungraded group felt even less.

In the Classroom

I myself use this technique as one of a great many retrieval practice strategies.

My students’ homework sometimes includes retrieval practice exercises.

I often begin class with some lively cold-calling to promote retrieval practice.

Occasionally — last Thursday, in fact — I begin class by saying: “Take out a blank piece of paper. This is NOT a quiz. It will NOT be graded. We’re using a different kind of retrieval practice to start us off today.”

As is always true, I’m combining this research with my own experience and classroom circumstances.

Khanna gave her quizzes at the end of class; I do mine at the beginning.

Because I’ve taught high school for centuries, I’m confident my students feel comfortable doing this kind of written work. If you teach younger grades, or in a different school context, your own experience might suggest a different approach.

To promote interleaving, I include questions from many topics (Define “bildungsroman.” Write a sentence with a participle. Give an example of Janie exercising agency in last night’s reading.) You might focus on one topic to build your students’ confidence.

Whichever approach you take, Khanna’s research suggests that retrieval practice quizzes don’t increase stress and don’t require grades.

As I said: retrieval practice rocks!

Making “Learning Objectives” Explicit: A Skeptic Converted? [Reposted]
Andrew Watson

Teachers have long gotten guidance that we should make our learning objectives explicit to our students.

The formula goes something like this: “By the end of the lesson, you will be able to [know and do these several things].”

I’ve long been skeptical about this guidance — in part because such formulas feel forced and unnatural to me. I’m an actor, but I just don’t think I can deliver those lines convincingly.

The last time I asked for research support behind this advice, a friend pointed me to research touting its benefits. Alas, that research relied on student reports of their learning, and in the past such reports haven’t been a reliable guide to actual learning.

For that reason, I was delighted to find a new study on the topic.

I was especially happy to see this research come from Dr. Faria Sana, whose work on laptop multitasking has (rightly) gotten so much love. (Whenever I talk with teachers about attention, I share this study.)

Strangely, I like research that challenges my beliefs. I’m especially likely to learn something useful and new when I explore it. So: am I a convert?

Take 1; Take 2

Working with college students in a psychology course, Sana’s team started with the basics.

In her first experiment, she had students read five short passages about mirror neurons.

Group 1 read no learning objectives.

Group 2 read three learning objectives at the beginning of each passage.

And, Group 3 read all fifteen learning objectives at the beginning of the first passage.

The results?

Both groups that read the learning objectives scored better than the group that didn’t. (Group 2, with the learning objectives spread out, learned a bit more than Group 3, with the objectives all bunched together — but the differences weren’t large enough to reach statistical significance.)

So: compared to doing nothing, starting with learning objectives increased learning of these five paragraphs.

But: what about compared to doing a plausible something else? Starting with learning objectives might be better than starting cold. Are they better than other options?

How about activating prior knowledge? Should we try some retrieval practice? How about a few minutes of mindful breathing?

Sana’s team investigated that question. In particular — in their second experiment — they combined learning objectives with research into pretesting.

As I’ve written before, Dr. Lindsay Richland’s splendid study shows that “pretesting” — asking students questions about an upcoming reading passage, even though they don’t know the answers yet — yields great results. (Such a helpfully counter-intuitive suggestion!)

So, Team Sana wanted to know: what happens if we present learning objectives as questions rather than as statements? Instead of reading

“In the first passage, you will learn about where the mirror neurons are located.”

Students had to answer this question:

“Where are the mirror neurons located?” (Note: the students hadn’t read the passage yet, so it’s unlikely they would know. Only 38% of these questions were answered correctly.)

Are learning objectives more effective as statements or as pretests?

The Envelope Please

Pretests. By a lot.

On the final test — with application questions, not simple recall questions — students who read learning-objectives-as-statements got 53% correct.

Students who answered learning-objectives-as-pretest-questions got 67% correct. (For the stats minded, Cohen’s d was 0.84! That’s HUGE!)

So: traditional learning objectives might be better than nothing, but they’re not nearly as helpful as learning-objectives-as-pretests.

This finding prompts me to speculate. (Alert: I’m shifting from research-based conclusions to research-&-experience-informed musings.)

First: Agarwal and Bain describe retrieval practice this way: “Don’t ask students to put information into their brains (by, say, rereading). Instead, ask students to pull information out of their brains (by trying to remember).”

As I see it, traditional learning objectives feel like review: “put this information into your brain.”

Learning-objectives-as-pretests feel like retrieval practice: “try to take information back out of your brain.” We suspect students won’t be successful in these retrieval attempts, because they haven’t learned the material yet. But, they’re actively trying to recall, not trying to encode.

Second: even more speculatively, I suspect many kinds of active thinking will be more effective than a cold start (as learning objectives were in Study 1 above). And, I suspect that many kinds of active thinking will be more effective than a recital of learning objectives (as pretests were in Study 2).

In other words: am I a convert to listing learning objectives (as traditionally recommended)? No.

I simply don’t think Sana’s research encourages us to follow that strategy.

Instead, I think it encourages us to begin classes with some mental questing. Pretests help in Sana’s studies. I suspect other kinds of retrieval practice would help. Maybe asking students to solve a relevant problem or puzzle would help.

Whichever approach we use, I suspect that inviting students to think will have a greater benefit than teachers’ telling them what they’ll be thinking about.

Three Final Points

I should note three ways that this research might NOT support my conclusions.

First: this research was done with college students. Will objectives-as-pretests work with 3rd graders? I don’t know.

Second: this research paradigm included a very high ratio of objectives to material. Students read, in effect, one learning objective for every 75 words in a reading passage. Translated into a regular class, that’s a HUGE number of learning objectives.

Third: does this research about reading passages translate to classroom discussions and activities? I don’t know.

Here’s what I do know. In these three studies, Sana’s students remembered more when they started reading with unanswered questions in mind. That insight offers teachers an inspiring prompt for thinking about our daily classroom work.