Because the focus of the experiment is an app that records video while compensating for tilt/rotation (it's not just "hold muh beer while I tape my iPhone to a car wheel!"). I have the app, and find it interesting for its ability to eliminate the inevitable angling of the video, despite being on such an unstable mount (my free-waving hand).

Subjecting the app to extreme conditions of rapid rotation makes subtle flaws in the compensation process apparent - specifically, that the iPhone's gravity sensor gets confused & skewed, that the progressive (rolling-shutter) scanning of each frame distorts the image (the top of the frame shows a view captured roughly 1/60th of a second apart from the bottom), and that the two together produce a bizarre, unexpected periodic warping of the image. With this video in hand, the experimenters can go back to the software lab and tweak their image-processing algorithms to compensate for these errors and produce an even more stable image.
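To make that failure mode concrete, here's a minimal sketch (Python/NumPy, not the app's actual code - the function name, the linear-angle interpolation, and the 1/60 s readout assumption are all mine, for illustration) of the kind of per-scanline correction such an app would need. Rotating the whole frame by a single gyro angle can't fix rolling shutter; each row has to be counter-rotated by the angle interpolated to that row's own readout time:

```python
import numpy as np

def correct_rolling_shutter(frame: np.ndarray,
                            angle_start: float,
                            angle_end: float) -> np.ndarray:
    """Counter-rotate each scanline by the estimated camera roll at the
    moment that line was read out, instead of one angle per frame.

    frame       -- grayscale image, shape (rows, cols)
    angle_start -- estimated roll (radians) when the top row was read
    angle_end   -- estimated roll when the bottom row was read (~1/60 s later)
    """
    rows, cols = frame.shape
    cy, cx = (rows - 1) / 2.0, (cols - 1) / 2.0
    out = np.zeros_like(frame)

    # Each row was exposed at a slightly different time, so it gets its
    # own angle, linearly interpolated across the frame's readout window.
    angles = np.linspace(angle_start, angle_end, rows)

    for r in range(rows):
        cos_a, sin_a = np.cos(angles[r]), np.sin(angles[r])
        # Inverse-rotate this output row's pixel coordinates to find
        # where each pixel came from in the distorted source frame
        # (nearest-neighbor sampling, to keep the sketch short).
        xs = np.arange(cols) - cx
        ys = r - cy
        src_x = (cos_a * xs - sin_a * ys + cx).round().astype(int)
        src_y = (sin_a * xs + cos_a * ys + cy).round().astype(int)
        valid = (src_x >= 0) & (src_x < cols) & (src_y >= 0) & (src_y < rows)
        out[r, valid] = frame[src_y[valid], src_x[valid]]
    return out
```

This is only a first-order approximation (the source row sampled may itself have been read at a slightly different time than the destination row), which is exactly why the gravity-sensor skew and the rolling shutter interact to produce the periodic warping described above - and why the developers need footage like this to tune their correction.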
"So what?" you mumble. "It's so subtle nobody cares."
Well, that's actually a big part of why VR failed when it was first introduced a couple of decades ago, and why Carmack & Oculus are so successful with it now: they've analyzed the subtle nuances that you don't consciously acknowledge but your brain processes & reacts to nonetheless. Horizon is applying the same attention to detail, and will produce a better product as a result (yes, I do notice those sub-frame distortions in various products almost daily, and would appreciate developers correcting them).