Andrew Watson – Page 66 – Education & Teacher Conferences

About Andrew Watson

Andrew began his classroom life as a high-school English teacher in 1988, and has been working in or near schools ever since. In 2008, Andrew began exploring the practical application of psychology and neuroscience in his classroom. In 2011, he earned his M. Ed. from the “Mind, Brain, Education” program at Harvard University. As President of “Translate the Brain,” Andrew now works with teachers, students, administrators, and parents to make learning easier and teaching more effective. He has presented at schools and workshops across the country; he also serves as an adviser to several organizations, including “The People’s Science.” Andrew is the author of "Learning Begins: The Science of Working Memory and Attention for the Classroom Teacher."

Video Games and Empathy
Andrew Watson

Do violent video games reduce empathy?

If people spend lots of time pretending to beat up and shoot pretend people, will this experience reduce their empathy for human suffering? Will it make them more likely to really beat up and shoot real people?

We might get different answers to this question depending on the variables we decide to measure, and the tools we use to measure them.

In this study, researchers found 15 people who often played violent video games–typically “first person shooter” games involving automatic weapons–and 15 who had never played them.

These participants looked at sketches: some showed people by themselves while others depicted people in pairs. Half of the pictures showed mundane activities–two men carrying a cabinet–while the other half showed violent activities–one man forcibly holding another man’s head underwater.

As participants looked at these pictures, researchers used functional magnetic resonance imaging to measure neural responses.

The researchers reasoned as follows: if violent video games impair players’ empathy, these scans should reveal differences in brain networks associated with empathy. That is: gamers and non-gamers would respond similarly to the men carrying the cabinet, but the gamers would not respond with as much empathy as the non-gamers to the sight of human suffering. After all, in this hypothesis, the gamers would have been desensitized to human pain, and so would not have as strong an empathetic response.

How much difference did they find?

One Conclusion, and One More

No difference. Gamers and non-gamers were equally empathetic–and non-empathetic–when they looked at these images.

So: when these researchers answer this version of this question using these tools, they get this answer.

However: when other researchers approach the question using meta-analysis, they get a radically different answer:

The evidence strongly suggests that exposure to violent video games is a causal risk factor for increased aggressive behavior, aggressive cognition, and aggressive affect and for decreased empathy and prosocial behavior.

The Takeaway

I hope this entry does NOT persuade you that video games do, or don’t, reduce empathy.

I hope, instead, to persuade you that it’s hard to answer that question once and for all. We have many ways to ask, and many tools with which to answer, such a question. Only by asking (and asking and asking), and then by looking for converging answers, can we start to move towards a conclusion.

This is Your Chess on Ritalin
Andrew Watson

In movies and on television, chess skill symbolizes “pure intelligence.” Characters who can outwit others on the chessboard are–obviously–just smarter than everyone else. (On The West Wing, President Bartlet routinely schools his staff on the nuances of the game.)

By implication, people who get better at chess seem to be getting smarter. So, if I can give you a drug that improves your chess score, you might conclude that this drug is making you more intelligent.

This approach, of course, has a controversial history. We have developed drugs (such as methylphenidate and modafinil) that benefit people who struggle during cognitive tasks. Will those same drugs benefit those who don’t typically struggle? If they do, is that benefit somehow unfair?

The Study: Setup

German researchers worked with 40 mid-level chess players. Following a remarkably detailed and precise research regimen, these players spent 4 days playing games against a chess program that had been matched to play at their level.

On each day, these chess players took either methylphenidate (Ritalin/Concerta), modafinil (Provigil), caffeine (yum), or a placebo. The schedule of these 4 drugs was varied among the group, to be sure that the order didn’t matter.

The Study: Results

How did they do? It’s a bit complicated…

Compared to the games in which they took a placebo, the players slowed down when they took any of the three drugs. On average, they added nearly 2 minutes to the time they took (9:13 vs 7:17 per game); that’s a slowdown of roughly 25%.
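The slowdown is easy to check from the times quoted above. A minimal arithmetic sketch (the exact ratio lands a touch above the rounded figure in the text):

```python
# Average per-game times reported above (minutes:seconds -> seconds).
drug_time_s = 9 * 60 + 13      # 9:13 on the drugs -> 553 seconds
placebo_time_s = 7 * 60 + 17   # 7:17 on placebo  -> 437 seconds

extra_s = drug_time_s - placebo_time_s   # 116 s: just under 2 minutes
slowdown = extra_s / placebo_time_s      # ~0.27: roughly a quarter slower
print(f"extra time: {extra_s} s, slowdown: {slowdown:.0%}")
# prints: extra time: 116 s, slowdown: 27%
```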

When they took more time, these players often ran up against the time limit that had been set for each game. As a result, they lost lots of games by running out of time.

But, what happens when we look at the games when they didn’t run out of time?

They got better. It’s a little tricky to describe improvement in chess terms. You might say they had a 5% increased chance of winning. Or, you might say–as the lead researcher said:

If we correct for the slowest players, then the effect would be the equivalent of moving a player from say, number 5000 in the world ranking, to number 3500 in the world ranking. In a single game, the effect is the equivalent of having the white pieces, every time.

That’s quite the improvement.
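The “white pieces” comparison can be made concrete with the standard Elo expected-score formula. The ~35-point value for the first-move advantage is a common rule-of-thumb estimate, not a number from the study itself; plugged in, it yields roughly the 5-percentage-point bump mentioned above:

```python
def expected_score(elo_diff: float) -> float:
    """Standard Elo expected score for the side holding an
    elo_diff rating advantage (0.5 means a dead-even game)."""
    return 1 / (1 + 10 ** (-elo_diff / 400))

# ~35 Elo is a common rough estimate of the value of the white
# pieces; it is an assumption here, not a figure from the study.
even = expected_score(0)     # 0.50
white = expected_score(35)   # ~0.55: about 5 points better than even
print(f"even game: {even:.2f}, with a 35-point edge: {white:.2f}")
```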

The Study: Implications

So, what do we do with this information? Should we all rush right out and add some methylphenidate to our daily vitamins?

In my view, not yet.

First, this study looked at people playing chess. Although we associate chess with “intelligence in general,” we can’t be sure–based on this study alone–that the effects of these drugs will generalize to other cognitive activities.

Second, the study worked with an unusual subgroup of the population: the average IQ among the players was 125. (Of course, IQ isn’t the only–or necessarily the best–way to measure human cognitive capacity. But, it’s not meaningless.)

An IQ of 125 is more than 1.5 standard deviations above average (on the common scale with a mean of 100 and a standard deviation of 15). This is, in other words, a select–even atypical–group of thinkers.
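For readers who like to see the arithmetic: on the common IQ scale (mean 100, standard deviation 15 – an assumption about the scale the study used), the normal distribution puts a score of 125 near the 95th percentile:

```python
from math import erf, sqrt

mean, sd = 100, 15                          # common IQ scale
z = (125 - mean) / sd                       # ~1.67 standard deviations
percentile = 0.5 * (1 + erf(z / sqrt(2)))   # standard normal CDF
print(f"z = {z:.2f}, percentile ~ {percentile:.0%}")
# prints: z = 1.67, percentile ~ 95%
```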

For these reasons, I wouldn’t do anything differently just yet.

And third: I stumbled across this study after I had completed this blog entry. The headline is that non-prescription use of Ritalin can muddle the dopamine system–at least in rats.

When I say “muddle,” I’m summarizing the following passage:

These changes in brain chemistry were associated with serious concerns such as risk-taking behaviors, disruptions in the sleep/wake cycle and problematic weight loss, as well as resulting in increased activity and anti-anxiety and antidepressive effects.

In other words, if these effects are true for humans as well as rats, that’s some serious muddling right there.

At the same time, I must tell you that this chess study gives me pause. In grad school, the orthodoxy about these drugs was that “they help people who struggle think more like typical learners, but they don’t help typical learners think like more extraordinary learners.”

(You might think of them as a mental knee brace. The brace helps you if you’re injured, but isn’t particularly beneficial if you’re not.)

This study, however, suggests that–for this atypical group of people doing this atypical thing–such drugs do provide a cognitive benefit.

An alternate explanation

I’m intrigued by the fact that chess players taking methylphenidate, modafinil, and caffeine slowed down.

Perhaps the reason they played better is not that the drugs helped them think better, but that they gave the players more time to think.

Could we get the same benefit by deliberately forcing ourselves to take more time with our thoughts? This study doesn’t answer that question. But, the possibility seems worth exploring.

______________________________________________________________

A final note, unrelated to the content of this study. In looking over the specifics of the research paradigm, I note that the team began work on this study in July of 2011, and that it was published only in 2017. That’s right: they’ve been working on this for over 6 years.

Wow.

Out with the Old…
Andrew Watson

Articles about learning styles theory–including my own–typically focus on debunking the theory.

This article, over at The Learning Scientists, takes a different approach: it chooses specific parts of learning styles theory, and shows how each small part derives from another–more useful–theory about learning.

The goal of this article, in other words, is not that you stop believing a false theory, but that you replace false beliefs with correct ones.

In my view, that’s a GREAT approach, and one that I plan to borrow.

Memorable Beauty?
Andrew Watson

Over at Psychology Today, Nate Kornell speculates about the potential memory benefits of taking beautiful notes.

(Kornell is a thorough and thoughtful researcher, who studied with Robert Bjork, so I always look forward to his posts.)

Enjoy!

Lightening the Cognitive Load
Andrew Watson

How should we manage working memory limitations in the classroom?

Furtheredogogy has a handy post about Cognitive Load Theory, which is basically a fancy way of saying “taking care of our students’ working memory capacity.”

Notice, btw, that the author suggests worked examples as a working-memory-friendly alternative to project-based learning–which can all too often overwhelm students’ cognitive resources.

(Mis)Understanding Educational Stats
Andrew Watson

Over at The Anova, Freddie deBoer has a knack for writing about statistical questions and making them not just readable but interesting.

Case in point: he recently explored the New York Times feature about school choice.

Although careful to praise the Times authors for their genuine concern and dedication, he thoughtfully explicates the numerous ways in which their article gets important questions wrong because it doesn’t think its way through statistics carefully enough.

For example: when we say we want students to do better, does that mean we want individual students to rise above the average, or that we want to raise the average for students overall?

As deBoer sees the field, we typically say we want the latter, but focus on (and tell stories about) the former.

DeBoer’s article doesn’t express an opinion about school choice (I’m sure he has one, but he doesn’t tip his hand here). But, it’s an excellent reminder that statistics can help us only so long as we are clear-minded about what they really measure.

As he glumly says in his final paragraph:

It’s not just that we can’t get what we want. It’s that nobody really knows what they’re trying to accomplish.

Oxytocin in Crisis
Andrew Watson

Oxytocin is often described as the “love hormone.” Apparently lots of oxytocin is swirling around when mothers interact with their babies, and so its role in maternal affection is much trumpeted.

You may well hear people say that, in schools, we need to be sure that our students have more oxytocin in their lives.

However, folks giving this advice may be unsettled to hear that recent research describes oxytocin as “the relationship crisis hormone.”

Researchers in the US and Norway have found that, in romantic relationships, discrepancies in romantic interest lead to higher levels of oxytocin production.

In my mind, this news underlines an important general conclusion.

a) The study of psychology is complicated.

b) The study of neuroscience is really complicated.

c) The study of hormones is absurdly complicated. I mean, just, you cannot believe how complicated this stuff gets.

As a result, I encourage you to be wary when someone frames teaching advice within a simple hormonal framework. If you read teaching advice saying “your goal is to increase dopamine flow,” it’s highly likely that the person giving that advice doesn’t know enough about dopamine.

(BTW: it’s possible that the author’s teaching advice is sound, and that this teaching advice will result in more dopamine. But, dopamine is a result of the teaching practice–and of a thousand other variables–but not the goal of the teaching practice. The goal of the teaching is more learning. Adding the word “dopamine” to the advice doesn’t make it any better.)

In brief: if teaching advice comes to you dressed in the language of hormones, you’ll get a real dopamine rush by walking away…

Head Start: Getting To Yes
Andrew Watson

Loyal blog readers know that Austin Matte is our local expert on Head Start. To follow up on his recent article, I want to highlight a study published in Child Development.

Studying records of nearly 3000 students, the authors find that attendance matters. Head Starters who miss class don’t make as much progress in math and literacy as those who attend regularly.

That news might not sound surprising–of course attendance matters!–but it contributes to an important debate about the value of Head Start in the first place.

The Argument, Part I

We’ve got some good research showing that, although Head Start produces impressive gains among its participants, those gains just don’t last. This review, for example, finds that–by 3rd grade–Head Start participants no longer stand out from their non-Head-Start peers.

In the biz, they call this result “fadeout.” Some people argue that fadeout suggests we should give up on Head Start altogether. After all, given that its results don’t last, we should spend our money elsewhere.

Austin’s response to this argument (a response I find persuasive, by the way) is that fadeout in fact demonstrates the benefits of Head Start.

Here’s an analogy:

I’m overweight and my cholesterol is high. My doctor tells me to exercise and eat right. I start jogging four times a week and eating like Tom Brady. A year later–voila!–my doctor reports that I’m the picture of health.

So, I stop with the jogging, and go back to potato chips and lard burgers. Fairly soon, I’m back to my old weight and cholesterol level.

Now: do you blame the jogging? Or, do you blame the end of the jogging?

People who say that “Head Start” doesn’t work are blaming the jogging. But, it just seems obvious that the jogging helped. It was my decision to stop–not to start–jogging that caused the problems.

Isn’t the straightforward conclusion that we should add more years to Head Start, not eliminate a program that’s clearly working?

The Argument, Part II

Today’s study gets at the same question a different way. If Head Start programs didn’t really help, then doing less of them wouldn’t matter. Gaps in attendance shouldn’t be a problem, because the program being attended wouldn’t actually accomplish anything.

This research, however, gives the lie to that logic. Clearly, less time in Head Start leads to less learning; or–said the other way around–more time produces more learning.

(In the biz, they call this “the dosing effect.” A higher dose of something–in this case, Head Start–leads to greater benefits–in this case, greater learning.)

Given that we see a dosing effect, we can have confidence that Head Start does, in fact, cause the changes it claims to cause.

I + II = Yes

Austin’s argument about “fadeout” helps us see the long-term benefits of Head Start. And today’s study about “dosing” helps us see the short-term effects of Head Start.

Convinced yet? Just say yes…

Home News
Andrew Watson

Exciting news: my book was published at the beginning of April. (I’m resisting the temptation to put in an exclamation point.)

Learning Begins explores the science of working memory and attention, and offers practical strategies for putting this research to work in our classrooms.

Here’s what the first Amazon reviewer wrote:

“This book feels more like a personal discussion with the author. Andrew shares stories with meaning, current useful research, and provides clear suggestions to better teaching methods and student supports. A quick and easy read! Andrew is a proficient educator himself who knows his audience and uses humor and story telling to reach them!”

I hope you’ll read it, and let me know what you think! (Okay, I gave in. There’s the exclamation point.)

(BTW: if you email me–[email protected]–I’ll give you a code for a 20% discount from the publisher.)