This finding was so surprising, and led to a publication in Nature, because it seemed to break down a wall between two ways that practice can improve our mental abilities. Suppose you work hard at becoming an expert Sudoku solver, spending all your free time doing nothing but solving Sudoku puzzles. You will, of course, get faster and more accurate at solving Sudoku. Moreover, you might find that your ability to solve KenKen puzzles—a new variant of Sudoku—also improves somewhat, even though you had not done a single one during the time you practiced Sudoku. Your improved performance on KenKen would be an example of “narrow transfer,” where improvement on one mental skill transfers to other highly similar skills. It would be more surprising to find that practicing Sudoku improved your ability to calculate tips in your head, prepare your income taxes, or remember telephone numbers. Improvements on those skills would demonstrate “broad transfer,” because they have little surface-level similarity to Sudoku. Playing Medal of Honor to get better at finding targets in a similar first-person-shooter video game would be an example of narrow transfer. Playing Medal of Honor to improve your ability to pay attention to your surroundings while driving your car is like solving Sudoku to get better at remembering telephone numbers. It’s an example of broad transfer, which is valuable because it improves aspects of cognition that weren’t specifically trained. Moreover, in this case, a different skill was improved by doing something fun and engaging. We’ll bet that you’re more likely to follow the adage “practice makes perfect” if the “practice” consists entirely of playing video games.

Green and Bavelier’s experiment suggests that video-game training might actually enable people to release some untapped potential for broader skills without having to spend effort practicing those particular skills. It’s far from obvious why passively listening to ten minutes of Mozart should change a cognitive ability (spatial reasoning) that has little or nothing to do with music or even hearing. But video games do require players to actively use a variety of cognitive skills, and it’s not implausible that ten hours of training on a game that requires attention to a wide visual field could improve performance on a task that requires subjects to focus across a wide display, even though the game and the task are different in many other respects.

Perhaps the most astonishing aspect of this experiment was that it required only ten hours of training. Think about the implications of this: We all spend much of our lives focusing on our environment from a first-person perspective, making rapid decisions, and acting on them. Daily tasks like driving require us to focus on a wide visual field—you need to focus both on the road in front of you and on the side streets. And you most likely have driven for much more than ten hours in the past six months. Even if you haven’t, you likely have done other things that require similar skills—playing any sport, or even walking down a crowded city street, requires similar rapid decisions and awareness of your surroundings. Why, then, should an additional ten hours of playing one video game have such a large effect on basic cognitive skills?

One possible answer to this question is that playing video games does not actually produce dramatic improvements on largely unrelated tasks. As was the case with the Mozart effect, Green and Bavelier’s initial study could turn out to be an outlier—subsequent studies may show that video-game training is not as potent as originally thought. But it is also possible that there really is something about playing a first-person action video game that does release untapped potential with minimal effort. Video games can be more engaging and intense than many other activities that draw on the same cognitive abilities, so they could conceivably provide more productive and efficient training that extends beyond the game itself.

More recently, Bavelier and her colleagues have used much more extensive training, often thirty to fifty hours, to find further cognitive benefits of video games. These studies have shown transfer to several different basic perceptual abilities. One study found that video-game training improved contrast sensitivity, which is essentially the ability to detect a shape that is similar in brightness to the background, like a darkly clad person walking along a poorly lit sidewalk.62 Another showed that action video-game training improved the ability to identify letters placed close together in the periphery of the visual field, essentially increasing the spatial resolution of attention.63 Given how basic and fundamental these skills are to all aspects of perception, these findings are even more surprising than the original field-of-view result.64 Metaphorically, these findings suggest that practicing video games is akin to putting on your glasses—it improves all aspects of visual perception. For example, increased contrast sensitivity should make driving at night easier. Even though these follow-up studies involved substantially more training, they showed broad transfer to abilities that could affect many real-world skills. That said, none of these articles have reported on transfer to performance on real-world tasks, and given the lack of any direct evidence, the authors are appropriately careful not to claim any impact beyond the lab.

As with the Mozart effect, one worrisome aspect of these video-game findings is that the majority of the evidence comes from a single group of researchers. Unlike with the Mozart effect, the group’s studies consistently appear in top-tier, peer-reviewed journals rather than obscure scientific backwaters. A bigger problem, though, is that training studies do not lend themselves to easy replication. Studies of the Mozart effect are easy to conduct—bring people into the lab for an hour, play them some Mozart, and give them a few cognitive tests. All you really need is a CD player and some pens. Studies of video-game training are much grander in scale. Each participant must be trained for many hours under direct supervision of laboratory personnel. That requires full-time research staff, more computers, a lot more money to pay subjects for their time, and the space to accommodate hundreds of subject-hours of testing. Few labs are devoted to doing this sort of research, and those that are not typically don’t have the funding or resources available for a quick attempt at replication.

To our knowledge, only one published study from a laboratory unaffiliated with the original researchers has successfully replicated the core result of the original Green and Bavelier article. In that study, Jing Feng, Ian Spence, and Jay Pratt of the University of Toronto showed that playing an action video game for ten hours improved the ability to imagine simple shapes rotating as well as the ability to pay attention to objects that the subjects were not directly looking at. They also found that women, who are on average somewhat worse than men on these spatial tasks, improved more from the training.65

A second study, although not a direct replication of the Green and Bavelier experiment, did show a positive effect of video-game practice using a different game and a different subject population: seniors.66 This study addresses one of the major motivations for brain training: helping to preserve and improve cognitive functioning in aging. In this experiment, cognitive neuroscientist Chandramallika Basak and her colleagues randomly assigned one group of seniors to play Rise of Nations and another group to a no-training control condition. Rise of Nations is a slow-paced strategy game that requires players to keep track of a lot of information while switching back and forth between different strategic elements. The researchers’ hypothesis was that training on this sort of strategy game would improve what’s known as “executive functioning,” which is the ability to allocate cognitive resources effectively among multiple tasks and goals. Their study found substantial transfer from the video game to a variety of laboratory measures of executive functioning. That makes sense given the demands of the game, but because the study did not include any other games for comparison, it’s also possible that the benefits had nothing to do with being trained on this particular kind of video game, or indeed with video-game training at all. Seniors in the training group might simply have been more motivated to improve because they knew they were receiving special treatment as part of a study, and that motivation could have led to the biggest improvements for those tasks where they were already the most impaired.67

These questions about the proper interpretation of the original Green-Bavelier study will be moot unless it can be consistently and independently replicated. One large-scale attempt to do just that, led by video-game researcher Walter Boot, did not produce the same results as the earlier experiments.68 Dan was one of the coauthors of Boot’s paper and participated in the design of the study. The original study and the replication by Feng’s group were both relatively small in scope: In each case, no more than ten subjects were assigned to each condition, and their training lasted only about ten hours. Boot’s study used more than twice as many subjects in each condition and gave the subjects more than twice as much training, over twenty hours on each game. He also used a much larger battery of cognitive tasks, including all of the ones used by Green and Bavelier plus about twenty others. The battery itself took up to two hours to complete, and each participant completed all the tasks before and after the training as well as once about halfway through it. Boot used the same Tetris and Medal of Honor games used in the original study, as well as the Rise of Nations game used in Basak’s experiment. Like Basak, he had the idea that training with that sort of strategy game would not enhance attention and perception, but instead would improve performance on measures of problem solving, reasoning, and possibly memory. Boot also included a group that received no training at all in order to provide a clear estimate of how much people might improve just by retaking the cognitive tasks before and after training. So this study was designed to test all of the alternative explanations for the positive findings that the original studies did not address—as well as the possibility that training released untapped potential.

One oddity in all of the previous experiments showing positive evidence of video-game training is that none of the control groups did any better the second time they took the cognitive tests than they did the first time. In the original study by Green and Bavelier, the subjects who played Tetris (a video game, but not a fast-paced, first-person “action” game) showed no improvement when they did the cognitive tasks for a second time, after completing their training. The same was true for the replication by Feng and colleagues: Subjects in the control condition did no better when retaking the cognitive tasks. It also held true for most of the positive effects in the Basak study and for the subsequent studies conducted by Bavelier and her colleagues. Given what we know about practice and learning, this finding is hard to explain; people almost always perform better when they do a task a second time. Such improvements are typical as well for the sorts of tasks used in the Brain Age software and other brain-training products. In fact, these routine practice effects are exactly the “evidence” those programs rely on to back their claims that their users’ brains are “improving.”

Why does lack of improvement in the control conditions matter? Because the evidence for the positive effects of video-game training is based on a comparison to these control groups. To support the claim that video games improve cognition, an experiment must show that people trained with video games improve more than people receiving other training or no training. It’s much easier to show an improvement relative to a control group if the control group shows no improvement at all. Had subjects in the control groups improved as expected, the benefits that could be ascribed to video games would have been reduced.
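To make the arithmetic behind that comparison concrete, here is a small numeric sketch in Python. The scores are invented for illustration and are not drawn from any of the studies discussed; the point is simply that the benefit credited to game training is the trained group's gain minus the control group's gain, so an ordinary practice effect in the control group shrinks the apparent benefit.

    # Hypothetical pre/post test scores in arbitrary units; these numbers are
    # invented for illustration and do not come from any real study.
    trained_pre, trained_post = 100, 120   # group trained on the action game
    control_pre, control_post = 100, 115   # control group simply retaking the tests

    trained_gain = trained_post - trained_pre   # 20 points
    control_gain = control_post - control_pre   # 15 points: a routine practice effect

    # The improvement attributable to game training is the difference in gains.
    training_effect = trained_gain - control_gain   # 5 points

    # If the control group had shown no practice effect at all, the very same
    # trained-group gain would look four times as large.
    effect_if_control_flat = trained_gain - 0       # 20 points

    print(training_effect, effect_if_control_flat)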

In Boot’s experiment, unlike the others, the control group did show a typical increase in performance from the first to the last testing session. The group that practiced action video games also improved on the cognitive tasks. But it improved by the same amount as the control group, meaning that there was no specific effect of video-game training on cognitive abilities.69 This failure to replicate is especially significant because Boot doubled the amount of training and used more subjects and control groups—all of which strengthened the design of the study and made it a more definitive test of the broad transfer hypothesis advanced by Green and Bavelier. Their initially promising idea that a small amount of video-game training could have big effects does not seem to be borne out. It’s possible that some subtle differences in the methods among the various studies account for the different results, but if the effect is that fragile, it is hard to imagine that video games will turn out to be a panacea for cognitive decline.70

Recall that the first four experiments in Green and Bavelier’s Nature article showed that video-game experts consistently outperformed novices on the same tasks that benefited from training in their experiment. Since the effects of training appear to be somewhat tenuous, you might now wonder why experts should tend to outperform novices. One explanation is that the cognitive differences between experts and novices might require a lot more than ten or even fifty hours of training to develop. The experts in these studies often play more than twenty hours of video games in a single week! If it takes that much effort to transfer skill from video games to general perception, would video-game training really be a worthwhile thing to do (if you didn’t already love playing video games)? The benefit of being a little faster on a selective attention task is probably not worth the hundreds of hours you would have to spend to receive it—you would be better off practicing the specific skills you are trying to improve. Given the lack of direct evidence that video-game training would even have consequences for our daily lives—say, by making us safer drivers—the potential benefits of training are even more uncertain.
