It's another blow for immersive virtual reality. University of California researchers have shown that even people with perfect eyesight navigate the world by relying on a lot more than what they see. Here's why VR won't really work until we go beyond visual cues.
Inside our brain’s hippocampus we have what are called place cells. These specialized neurons help us build a “cognitive map” of our surroundings: a mental representation that allows us to orient ourselves in our spatial environment.
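To get a feel for what a place cell does, it helps to see the standard textbook model: firing rate as a Gaussian bump centered on the cell's preferred location. The sketch below is purely illustrative; the field center, width, and peak rate are made-up numbers, not values from the study:

```python
import numpy as np

def place_cell_rate(position, center=1.2, width=0.15, peak_rate=20.0):
    """Textbook Gaussian place-field model (illustrative values only).

    The cell fires fastest when the animal sits at the field's center,
    and the rate falls off smoothly with distance from it.
    Positions are in meters, rates in spikes per second.
    """
    return peak_rate * np.exp(-((position - center) ** 2) / (2.0 * width ** 2))

# A rat walking a 2-meter linear track, sampled every 20 cm.
for x in np.linspace(0.0, 2.0, 11):
    print(f"{x:.1f} m -> {place_cell_rate(x):5.2f} spikes/s")
```

A whole population of such cells, each with its own preferred spot, tiles the environment, and that tiling is the cognitive map.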
These neurons have been observed to fire like crazy whenever a rat has to figure out where it is in the world. And if the rat finds itself in an entirely new location, it has to build a new cognitive map from scratch.
But once this map has been created, rats can quickly recognize where they are should they return to that location.
Scientists have theorized that rats don't require much sensory information to build these maps, figuring that distant visual landmarks, the ability to move around, and perhaps some proprioceptive feedback are all it takes. But as a new study by Pascal Ravassard and colleagues has shown, that's not nearly enough.
To reach this conclusion, the researchers placed rats in a virtual reality environment. Indeed, VR is becoming a popular tool among scientists; in one earlier experiment, researchers interacted with rats by becoming virtual rats themselves.
But as this experiment showed, getting a rat's brain to respond to a VR environment in the same way it responds to the real world is not so easy.
For the study, the researchers built two nearly identical worlds, one real (RW) and one virtual (VR). Each environment consisted of a linear track in the center of a square room, with distinct visual cues on each of the four walls. The cues were matched across the two environments, but in VR the rats' bodies were held in place, minimizing (or even eliminating) other important spatial signals, such as vestibular input, the sense of balance. So the only environmental information available during VR exposure was vision and self-motion.
After implanting tetrodes to record neural activity in six rats, the researchers had the animals run the track in both the RW and VR environments. The results made one thing clear: the VR environment was not exciting the place cells as usual. Only about 20% of place cells were active in VR, compared with roughly 45% in RW, more than twice as many.
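The headline comparison is simple arithmetic. Here's a quick back-of-the-envelope check; the cell counts are hypothetical, and only the 20% and 45% fractions come from the study:

```python
# Hypothetical counts chosen to match the reported fractions.
recorded = 200                     # cells recorded per condition (made up)
active_vr = 40                     # 20% of cells active in VR
active_rw = 90                     # 45% of cells active in the real world

frac_vr = active_vr / recorded
frac_rw = active_rw / recorded
print(f"VR: {frac_vr:.0%} active, RW: {frac_rw:.0%} active")
print(f"RW-to-VR ratio: {frac_rw / frac_vr:.2f}x")  # 2.25x: more than double
```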
So vision and self-motion alone will spark a little place cell activity, but vestibular input and other sensory cues are required to properly encode a rat's (and likely a human's) position. The researchers speculate that still other cues, like smell, sound, and texture, are needed to help the rats pin down exactly where they are. And when they looked more closely at the recordings, they found that the only spatial variable being encoded in VR was distance traveled, not absolute position.
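That distance-versus-position distinction is easiest to see on a linear track. The toy example below (my own construction, not the paper's actual analysis) shows how the two coding schemes come apart when a rat runs the track in opposite directions: a place-coding cell fires at the same spot regardless of direction, while a distance-coding cell fires at the same distance from wherever the run began:

```python
TRACK_LEN = 2.0   # track length in meters (hypothetical)
FIELD = 0.5       # the cell's preferred location, or preferred distance (m)

def firing_spot(coding, direction):
    """Track position (m) where the cell fires on one run.

    'place' cells are tied to a location in the room; 'distance'
    cells are tied to how far the animal has traveled since the
    start of the run, so their firing spot flips with direction.
    """
    if coding == "place":
        return FIELD
    return FIELD if direction == "forward" else TRACK_LEN - FIELD

for coding in ("place", "distance"):
    fwd = firing_spot(coding, "forward")
    bwd = firing_spot(coding, "backward")
    print(f"{coding:>8} cell fires at {fwd} m (forward) and {bwd} m (backward)")
```

In the real-world condition, cells behaved like the first kind; per the study, what spatial tuning survived in VR looked like the second.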
It’s clear from the study, therefore, that a variety of sensory cues must interact and compete in the brain for us to construct a robust cognitive map.
Read the entire study at Science: “Multisensory Control of Hippocampal Spatiotemporal Selectivity.”