In the study, two groups of participants tried to solve a portion of the classic Rubik’s Cube puzzle. One group used an interactive video that included step-by-step navigation tools, controls for rotating the cube, and automatic pauses at the end of each section; the other group had the same video with only basic viewing capabilities. Both groups could pause, rewind, or fast-forward the video.
The results showed that users with low spatial skills – those unable to easily recognize patterns or make inferences – scored significantly higher when using the interactive video tutorial. Their scores were comparable to those of high-spatial-ability users who also used the interactive video. (A standardized test of spatial skills was given to each participant beforehand.)
However, the second group, which watched the non-interactive video, showed a sizable gap in scores between high- and low-spatial-ability users. Without the external aids of the interactive video, researchers say, users with lower pattern-recognition skills could not compensate.
“Basically, the type of tutorial used did not matter as much to those with higher spatial ability, but did make a difference for those on the lower end of the spectrum,” says Dar-Wei Chen, lead researcher and Georgia Tech Ph.D. student in engineering psychology.
“In this particular study, participants with high spatial reasoning were likely to better mentally visualize information that the standard video did not present as well, while participants with low spatial ability were significantly aided by the interactive video because they had lower capacities to fill the information gaps by themselves.”
When the two groups were compared without regard to participants’ spatial skills, their average scores were very similar, with the non-interactive group’s slightly higher: a mean of 8.81 out of 10, versus 8.33 for the interactive group.
Both results surprised the researchers, who had predicted that the mean score for the interactive video users would be noticeably higher. Chen says this might be attributed to the task itself (matching center colors on the Rubik’s Cube to form “the cross”) being too easy, producing a ceiling effect in achievement scores: more than two-thirds of participants achieved a perfect score.
Chen also noted that differences in the video tutorials might not have been large enough to show clearly where interactivity ceases to be beneficial, even though the study did show the impact of interactivity on performance based on participants’ spatial abilities.
“The results lend credence to the argument that teaching materials should be tailored to specific learners’ abilities,” says Chen.
The researchers believe the results provide evidence that users with low spatial ability can benefit from interactive material when learning a task on their own, a finding that is significant for understanding how learning outcomes are achieved online.
“One of the main challenges facing educators now is harnessing the power of interactivity to improve learning technologies for all users,” says Richard Catrambone, professor of psychology at Georgia Tech and part of the research team.
The research group presented the paper at the Human Factors and Ergonomics Society Annual Meeting in the fall. Future research will aim to identify which aspects of interactivity lead to better learning, and which learners would benefit most from multimedia that implements them.