Summary:
In the article Putting Brain Training to the Test, Owen et al. explain that there is little scientific evidence for the efficacy of brain training, which they define as "improved cognitive function through the regular use of computerized tests." The central question was not whether brain training improves performance on the trained cognitive tests but whether those benefits transfer to other, untrained tasks; testing for transfer reduces the likelihood that any improvements are simply due to practice.
To test the hypothesis that brain training is not effective, the authors analyzed the results of a study conducted in collaboration with the BBC science program "Bang Goes The Theory." The study ran over a six-week period and included two experimental groups and one control group, with 11,430 people completing the assessments. An initial benchmark consisted of four tests measuring reasoning, verbal short-term memory, spatial working memory, and paired-associates learning, all of which are sensitive to changes in cognitive function. At the end of the six weeks, a second benchmark test was given.
The results showed that although brain training improved performance on the trained tasks over time, there was no evidence of transfer to untrained tasks. Experimental groups 1 and 2 and the control group all improved on some or all of the benchmarking tests, but the effect sizes were very small in every case; even where the differences were statistically significant, they were not practically meaningful. In contrast, improvements on the trained tasks themselves showed large effect sizes in both experimental groups. These gains could have been due to task repetition, the adoption of new task strategies, or a combination of the two. Nevertheless, the training-related improvements did not generalize to other tasks. By direct comparison with the control group, the authors argued that these results were unlikely to be explained by the choice of benchmarking tasks or by a masking of genuine transfer effects. They note, however, that a more extensive training regimen could produce different results. The following image shows benchmarking scores at baseline and after the program was finished.
Critique:
This article holds value primarily because it plays devil's advocate on a topic that many people support with little evidence. The study included a large sample, randomly assigned to two experimental groups and a control group, which makes the results more reliable. When analyzing the results, the authors also took effect size into account, which is essential for interpreting the findings. The study also addressed several reasons why the results might have been skewed, ruling most of them out with thorough explanations.
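To see why a result can be statistically significant yet not practically meaningful, here is a minimal sketch of the distinction (my own illustration with made-up numbers and an assumed group size, not data from the study): with thousands of participants, even a tiny average improvement can pass a significance test while its effect size, measured as Cohen's d, remains negligible.

# Illustrative only: simulated scores showing how a large sample can make a
# tiny improvement statistically significant despite a negligible effect size.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n = 4000                                    # assumed group size, not the study's
before = rng.normal(50.0, 10.0, n)          # benchmark scores at baseline
after = rng.normal(51.0, 10.0, n)           # scores after training, barely higher

t_stat, p_value = stats.ttest_ind(after, before)

# Cohen's d: mean difference divided by the pooled standard deviation
pooled_sd = np.sqrt((before.var(ddof=1) + after.var(ddof=1)) / 2)
cohens_d = (after.mean() - before.mean()) / pooled_sd

print(f"p-value:   {p_value:.4g}")   # well below 0.05 with a sample this large
print(f"Cohen's d: {cohens_d:.2f}")  # roughly 0.1, a very small effect

The point of the sketch is simply that significance scales with sample size while effect size does not, which is why the authors' attention to effect size matters.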
Nevertheless, the study should be repeated to confirm that the results remain consistent. Although the analysis holds merit within the context of this study, I am not entirely convinced that brain training is ineffective based solely on these results. Additionally, a noticeably larger proportion of participants in the control group did not finish the six-week study. Although this may be because they were less engaged than the experimental groups, future studies should attempt to reduce this differential dropout.
Source:
Owen, A. M., Hampshire, A., Grahn, J. A., Stenton, R., Dajani, S., Burns, A. S., Howard, R. J., & Ballard, C. G. (2010). Putting brain training to the test. Nature, 465, 775–778.
Tuesday, April 30, 2013
Comment:
This was an interesting article and I agree with your assessment. Just because something is widely believed to be beneficial does not mean that we should not test it. It would be interesting to study a possible placebo effect: brain training might improve cognitive function simply because participants expect it to. This would be a difficult study to conduct, though, primarily because we do not know for sure whether brain training legitimately works.