Imagining how a future technology might work, and what its implications might be, is one of the reasons I enjoy reading and writing sci-fi. That said, I am often annoyed by wishful and frankly impossible portrayals of technology in popular movies, shows, and novels. Portrayals of virtual reality, in particular, are rife with problems. You might call me a nitpicker, but what interests me most is what might actually happen in the future and how humans as a species will respond to it.
One blatant example comes to mind: the Holoband in the Battlestar Galactica spin-off series, Caprica. The device is put on and removed as readily as a pair of glasses, and transports the user into a fully immersive virtual reality. By fully immersive, I mean it is like being transported to another reality, with its accompanying sights, sounds, smells, and sensations, and the ability to move around as you would in reality. The characters using the Holobands are shown climbing stairs, walking through large spaces, riding in aircraft, eating, drinking, getting shot or stabbed, getting in fist fights, having sexual encounters, etc. The problem is that this is not only impossible, but absurdly impossible. (Don’t take this as a diss of Caprica. I actually really enjoyed the series and was disappointed it was cut short.)
Let’s talk about immersive virtual reality. To start, I’ll need to launch into a bit of a biology lesson. I’m a neurologist, so you might find the following a bit dense, but I’ll try to make it as interesting and accessible as possible. (It’s okay to skim.)
Inputs and Outputs
Each of us as living things with brains can be viewed as a control loop. The environment around us acts on us (light hits our retinas, sound hits our eardrums, physical objects touch us, etc.), our brain processes this information (sensory input), translates it into a response (motor output), and our bodies act on the environment in turn, thus completing the loop/circuit/cycle. As a requirement, any immersive virtual reality system would have to completely hijack this control loop.
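As a toy illustration, the control loop above can be sketched in a few lines of code. Everything here (the `sense`, `process`, and `act` functions and the loud-noise example) is invented for illustration, not a model of any real neural process:

```python
# Toy sketch of the organism-environment control loop described above.
# All names and numbers here are hypothetical stand-ins for biological stages.

def sense(environment):
    """Sensory input: signals arriving from the world."""
    return {"light": environment["light"], "sound": environment["sound"]}

def process(sensory_input):
    """The brain translates sensory input into a motor response."""
    return "cover ears" if sensory_input["sound"] > 90 else "relax"

def act(motor_output, environment):
    """Motor output acts back on the environment, closing the loop."""
    if motor_output == "cover ears":
        environment["sound"] -= 10  # our action changes what we sense next
    return environment

environment = {"light": 500, "sound": 95}
for _ in range(3):  # a few passes around the loop
    environment = act(process(sense(environment)), environment)
print(environment["sound"])  # 85
```

The key point is the closed cycle: each motor output changes the environment, which changes the next round of sensory input.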
On the sensory input side we have many channels:
- vision (sight)
- audition (hearing)
- olfaction (smell)
- gustation (taste)
- rotational/linear acceleration sense (the semicircular canals, saccule, and utricle in the inner ear)
- somatic sensation (joint position sense, touch, pain, temperature, etc.)
- visceral sensation (all sensory input from the heart, stomach, intestines, and other internal organs)
Each of these channels can be further broken down into subchannels. To illustrate the complexity, consider taste, which is probably the simplest of them all. It breaks down into specific sensory neurons conveying sweet (sugars, ketones, aldehydes), sour (acid), salty (sodium), and savory/umami (glutamate), plus a family of 25 taste receptors attuned to chemicals we perceive as bitter. A final receptor that detects lipids (fats) is also believed to exist. Do you agree it’s complex now?
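For the curious, that breakdown can be laid out as a small table. The receptor-class counts below simply restate the text; the lipid receptor remains putative:

```python
# The taste subchannels described above, as a simple table.
# Counts restate the text; the fat receptor is still putative.
TASTE_CHANNELS = {
    "sweet":  {"stimuli": ["sugars", "ketones", "aldehydes"], "receptor_classes": 1},
    "sour":   {"stimuli": ["acid"], "receptor_classes": 1},
    "salty":  {"stimuli": ["sodium"], "receptor_classes": 1},
    "umami":  {"stimuli": ["glutamate"], "receptor_classes": 1},
    "bitter": {"stimuli": ["many bitter chemicals"], "receptor_classes": 25},
    "fat":    {"stimuli": ["lipids"], "receptor_classes": 1, "putative": True},
}

# Even the "simplest" sense spans roughly 30 distinct receptor classes.
total = sum(ch["receptor_classes"] for ch in TASTE_CHANNELS.values())
print(total)  # 30
```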
The motor output system consists of all of the motor neurons that emerge from the brainstem and spinal cord, connecting to (innervating) the muscles of the body. (There are neuroendocrine outputs from the brain as well, but they are not important to this discussion.) The motor nerves can be divided into two groups: somatic (voluntary) and visceral (dealing with internal organs).
Immersive Virtual Reality (IVR) Device
Now that we’ve covered some basic neurobiology, let’s further define how a virtual reality device would behave before we talk about design requirements. For simplicity, let’s say the device gives users the experience of being transported to another place in their own body, with the ability to walk, talk, and interact as they would in the real world. The experience would be indistinguishable from the real world, or close to it. While using the device, the user’s body would stay in a resting state, preferably sitting or lying on a bed.
For the above immersive virtual reality device to work it must:
- capture and replace the user’s sensory input streams from the real world with the simulated virtual ones
- capture the user’s motor output streams and simulate appropriate movements of a virtual body
- replace the user’s motor output streams with those that maintain the real body in a safe resting state
- provide an exit switch in the form of a motor output to leave the virtual world
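Taken together, the four requirements amount to a substitution loop. Here is a deliberately simplified sketch; every class, method, and event name is invented for illustration (a real device would operate on neural signals, not strings):

```python
# Hypothetical sketch of the IVR substitution loop. All names are invented.

class StubUser:
    """Stands in for the interfaces to a real nervous system."""

    def __init__(self, motor_script):
        self.motor_script = iter(motor_script)  # canned motor outputs
        self.log = []

    def block_real_senses(self):
        self.log.append("real senses blocked")

    def inject_senses(self, frame):
        self.log.append(f"perceived: {frame}")

    def capture_motor_output(self):
        return next(self.motor_script)

    def inject_motor(self, commands):
        self.log.append(f"real body told to: {commands}")

    def restore_real_io(self):
        self.log.append("back in the real world")


def run_ivr(user, render, exit_gesture="tap heels three times"):
    """One pass per frame: substitute senses, route motor output, watch for exit."""
    avatar_moves = []                         # movements of the virtual body
    while True:
        user.block_real_senses()              # 1. capture/replace sensory input
        user.inject_senses(render())
        motor = user.capture_motor_output()   # 2. capture motor output...
        avatar_moves.append(motor)            #    ...and move the virtual body
        user.inject_motor("rest quietly")     # 3. keep the real body safely at rest
        if motor == exit_gesture:             # 4. the exit switch
            user.restore_real_io()
            return avatar_moves, user.log


user = StubUser(["walk forward", "wave", "tap heels three times"])
moves, log = run_ivr(user, render=lambda: "a virtual street")
print(log[-1])  # back in the real world
```

Note that requirement three runs on every pass: the real body must be actively maintained in its resting state, not merely disconnected.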
Before expounding on items one and two, let’s address the last two quickly. If a person is transported to another reality, we don’t want them walking around bumping into things in the real world, as would happen if their motor outputs were allowed to reach the muscles of their real body. On the other hand, we can’t simply block the impulses and paralyze the person either, as they would stop breathing and die. For my novel, What a Piece of Work is Man, I came up with the idea of a proxy: essentially a program that controls the body while the user is in the virtual space. It would keep the body in a safe and inert position, and even perform such tasks as exercising, eating, and using the bathroom during prolonged visits to the virtual world. My point is that keeping the real body safe while the user is in an immersive virtual reality is not a trivial problem.
Exiting the virtual reality is the other problem. How would the device know when you want to leave? The easiest solution would be for it to monitor your motor outputs for a trigger action or movement, which could take the form of a gesture or verbal command. Dorothy tapping her heels together three times and saying “there’s no place like home” would qualify. The point is that there has to be a way to readily exit the virtual control loop.
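A trigger monitor of this sort could be as simple as matching the tail of the motor-event stream against a reserved sequence. A hypothetical sketch, with invented event names:

```python
# Hypothetical exit-trigger monitor: watch the motor output stream for a
# reserved sequence (three heel taps, then the phrase). Event names invented.

EXIT_SEQUENCE = ["heel_tap", "heel_tap", "heel_tap",
                 "say: there's no place like home"]

def wants_to_exit(recent_events, sequence=EXIT_SEQUENCE):
    """True if the most recent motor events match the trigger sequence."""
    return recent_events[-len(sequence):] == sequence

stream = ["walk", "heel_tap", "heel_tap", "heel_tap",
          "say: there's no place like home"]
print(wants_to_exit(stream))                 # True
print(wants_to_exit(["walk", "heel_tap"]))   # False
```

The design constraint is that the trigger be distinctive enough that it is never produced accidentally during normal activity inside the virtual world.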
Capturing and Replacing Human Input and Output Streams: the Nitty-Gritty
This is the most difficult part of the whole design, and the place where Caprica and The Matrix get it all wrong. Slipping on a Holoband or ramming a series of wires into your spinal cord will not do the trick. When I talk of capturing the input and output streams of the human brain, I mean literally recording, millisecond by millisecond, the exact electrical responses of each of the millions of neurons in each sensory channel. For an immersive virtual reality device to work, this must happen. There are no shortcuts. There is no convergence of the neurons to any one place in the brain, nor can you ‘fudge’ it in any way. And don’t talk to me about using EEGs and functional MRIs. I read EEGs (electroencephalograms) as part of my practice, and have kept up with the medical literature on fMRI. Believe me, they will never provide anywhere near the resolution required for this application.
At the risk of getting even more technical: the only way to capture the input and output streams with the necessary fidelity is to have a sensor on or within every neuron. The same is true for replacing those streams with ones corresponding to a virtual reality: an actual physical device would have to be present on or within, and assume control of, every neuron in the peripheral nervous system. Topping it off, to create a coherent IVR experience, each of these millions of sensor/controllers would have to connect to a central processing unit that would run the simulation. That’s some seriously advanced and, more importantly, invasive technology.
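To get a feel for the scale, here is a back-of-envelope estimate. The first two fiber counts are rough textbook figures (about a million axons per optic nerve, about thirty thousand per auditory nerve); the remainder is my own guess, and the sampling assumptions (1 kHz, 1 bit per sample) are mine as well:

```python
# Back-of-envelope scale of the sensor-per-neuron problem.
# Fiber counts are rough textbook figures; sampling assumptions are mine.
fibers = {
    "optic nerves (both eyes)": 2 * 1_000_000,
    "auditory nerves": 2 * 30_000,
    "other cranial + spinal sensory/motor fibers": 2_000_000,  # rough guess
}
total_fibers = sum(fibers.values())  # one sensor/controller per fiber

sample_rate_hz = 1_000               # fast enough to catch individual spikes
bits_per_second = total_fibers * sample_rate_hz * 1  # 1 bit per sample

print(f"{total_fibers:,} sensors")          # 4,060,000 sensors
print(f"{bits_per_second / 8 / 1e6:.0f} MB/s")  # roughly 500 MB/s, each direction
```

Even with these conservative assumptions, that is millions of implanted sensor/controllers streaming hundreds of megabytes per second in each direction, continuously.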
How could this be physically possible? This post is already too long, so I’ll get to it in the next one…