2015 is going to be remembered as the year we entered an entirely new world, a world today known as Mixed Reality.
What is Mixed Reality? In short, Mixed Reality is the world we now inhabit, where real and virtual objects co-exist and can causally influence each other. In Mixed Reality, persistent virtual objects can be created, seen, and manipulated, and they are perceived as integrated with the real world.
Together with the Internet of Things where environmental objects embed computational elements, sensors, and connectivity, Mixed Reality will radically alter the way we interact with and think about the world.
Reality as we know it just ended.
In 1994 Paul Milgram and Fumio Kishino defined Mixed Reality as “…anywhere between the extrema of the virtuality continuum” (VC), where the Virtuality Continuum extends from the completely real environment through to the completely virtual one, with augmented reality and augmented virtuality ranging between.
“The conventionally held view of a Virtual Reality (VR) environment is one in which the participant-observer is totally immersed in, and able to interact with, a completely synthetic world. Such a world may mimic the properties of some real-world environments, either existing or fictional; however, it can also exceed the bounds of physical reality by creating a world in which the physical laws ordinarily governing space, time, mechanics, material properties, etc. no longer hold. What may be overlooked in this view, however, is that the VR label is also frequently used in association with a variety of other environments, to which total immersion and complete synthesis do not necessarily pertain, but which fall somewhere along a virtuality continuum. In this paper we focus on a particular subclass of VR related technologies that involve the merging of real and virtual worlds, which we refer to generically as Mixed Reality (MR).”
In 1993, I was leading the design of a concept for a Mixed Reality training system for the U.S. Army, the DIVE Chamber. The concept was a full-body wireless interface allowing an individual combatant to experience the virtual battlefield as if they were really there. Today you might implement something like this at home using a Kinect and a Wii controller, but at the time the concept was well beyond the state of the art.
Central to the project was a cultural divide in the virtual reality community at the time: on one side, those influenced by researcher Myron Krueger, who emphasized unencumbered, natural interaction of the body with virtual objects; on the other, the fully immersive head-mounted-display approach pursued by nearly everyone else. Another voice in the wilderness then was Brenda Laurel, whose book Computers as Theatre and whose concept of No Fucking Interface (NFI) suggested alternative ideas.
In her book, Laurel suggests using theatrical techniques and narrative to enhance virtual experiences. Her idea of NFI held that visible or perceptible interface elements should be eliminated and that interaction should be as natural as possible, with control exercised through the body, voice, and gaze as humans do in the real world. Both ideas shared a common thread: re-engaging the human being with the simulated experience in ways that went beyond the mere visual surface appearances popularized in early VR demos.
The DIVE project proposed to build a specially designed room in which a participant could experience virtual reality. The virtual world was to be viewed through an HMD, actually an augmented reality display, and the room was to be equipped with video cameras for body tracking. The user would wear the HMD, experience audio via headphones with HRTF-based 3D audio capability, and interact with the world through instrumented “props” such as weapons. Props were both physical objects that were held and virtual objects that could interact with and cause effects within the simulated world. Originally we planned to use video-based body tracking with a specially designed illuminated suit that could easily be tracked. We later abandoned this idea in favor of off-the-shelf wireless magnetic tracking, and replaced the headphone-based audio with a high-powered ambisonic speaker system that could reproduce battlefield sounds such as gunfire and explosions at realistic decibel levels.
Integrating the props turned out to be much more complex than originally anticipated. For example, WiFi did not yet exist, so we had to hack together a workable wireless solution.
To wirelessly link the props to the virtual world we used an industrial wireless RS-232 data link developed for process-control applications; video was transmitted by a separate device. To appear realistic, the props had to be tracked along with the user’s head and body, and also had to feature realistic controls such as trigger and safety mechanisms. Tracking turned out to be problematic even with the then state-of-the-art Datacube video-rate image-processing hardware we employed.
Batteries were also an issue. However, since our application was military training, where soldiers normally carry gear and equipment, we had an ample weight budget for extras like these. The illuminated suit itself required a battery, of course, and was dangerous: bright colored LEDs did not yet exist, so we used fiber optics illuminated by a halogen lamp, and the lamp ran hot enough to require a custom cooling enclosure. Initial manned tests were attended by a fire extinguisher.
Props such as an M16 rifle, a pistol, and a simulated Javelin anti-tank weapon were developed. Each prop was connected to a belt-worn computer that processed raw signals and sent data back to the simulation computer via the wireless RS-232 link, allowing implementation of the complex controls required for something like the Javelin.
We focused on simulating firearms, but since the goal was a fully realistic immersive simulation of operations in urban terrain, we also had to consider hand grenades, which are thrown. Some of these concepts have more recently been realized in a variety of systems, such as the Future Immersive Training Environment (FITE) used by U.S. Marines, and handgun training simulators using an omnidirectional mobility simulator known as a “hamster ball”.
Not Glass, A Great Leap Forward, and Microsoft Gets Their Mojo Back
The technology to support Mixed Reality is advancing rapidly, especially in display technology. Two important announcements were made recently in this area: one related to secretive startup Magic Leap, and another, to the surprise of many, from industry stalwart Microsoft.
A full mixed reality experience requires a stereoscopic binocular head-mounted display. This distinguishes such displays from the recently cancelled Google Glass device, which was monocular and could not create immersive mixed reality experiences. This isn’t about being a Glasshole. It’s about changing the very nature of reality itself.
In early 2013 Canon made commercially available an augmented reality display that promised a full mixed reality experience. The Canon device did feature a stereoscopic display, but required special markers to aid the registration of real and virtual objects. That has been the state of the art in augmented reality technology for some time. Until now.
WIRED Magazine broke the story of mysterious startup Magic Leap’s patent application which included amazing concept art of a Mixed Reality world populated by magical interactive objects integrated with the normal physical world.
The Magic Leap display is a see-through stereoscopic augmented reality display with a belt-worn controller, similar to that envisioned in the DIVE project. The Magic Leap concept suggests a deep Mixed Reality world where interfaces and displays materialize in mid air or around the body as desired, and in which users can interact with photorealistic virtual objects and characters.
Body-centric interfaces allow controls and displays to appear anywhere as needed, and allow the user to interact directly with their entire body, without specialized clothing or suits beyond the display and a glove shown for gesture control. Virtual objects can be registered with real objects in the world, like tables, or with “slates” made out of simple flat pieces of metal or other materials. An appropriately mysterious object, in the form of a mixed reality egg, was also shown.
The tech world was still buzzing with dreams of Magic Leap’s amazingly envisioned mixed reality world when Microsoft dropped the bomb of its HoloLens project. Microsoft is officially back, and it’s big.
HoloLens seemingly delivers the promise of a true mixed reality experience today. Both WIRED and CNET report hands-on experience with the device, describing it as jaw-dropping and amazing.
Microsoft’s product goes beyond the obvious. Sure, you can interact with Windows via menus and displays that appear to float in front of you. But you can also play Minecraft integrated into the real world. Suddenly, Microsoft’s somewhat surprising acquisition of Mojang, creators of the game, makes all the sense in the world.
In addition to the Minecraft world, Microsoft naturally showed some impressive practical and business-oriented applications, including virtually helping someone repair a sink, the use of mixed reality in design, and more. HoloLens isn’t just for games; it’s an entirely new way to interact with machines.
Mixed Reality is Really a New World
The virtual world is now real, and the real world is now virtual. Our new Mixed Reality environment combines technologies from multiple areas and integrates various infrastructures into a unified experience. It includes and integrates augmented reality displays, wearable technologies and the quantified self, the Internet of Things, the Internet of Me, and well, pretty much anything else you can think of.
But Mixed Reality isn’t just a Silicon Valley gimmick, it’s an entirely new and unique mode of existence where real and virtual merge.
In a 2011 paper, Alfred Hubler and Vadas Gintautas experimentally demonstrated “synchronized states,” in which “the boundary between the real system and the virtual system is blurred”. Their apparatus physically coupled a real system to a virtual one, in this case real and virtual pendulums, but similar arrangements can be envisioned between more complex dynamical systems. Hubler and Gintautas showed that the coupled pendulums exist in a mixed reality state that is not entirely real and not entirely virtual, but both and in between.
To create this effect, an instantaneous, or nearly instantaneous, bidirectional coupling is required between the real and virtual objects. When this coupling occurs, the result is a mixed reality state. Mixed Reality therefore entails causal couplings between real and virtual objects. Eventually we will be surrounded by many such causally coupled systems and objects.
An important property of mixed reality systems is that their behavior cannot be determined by examining either the virtual or the physical world alone. Since the state of the system is coupled to both physical and virtual realities, examining either one in isolation cannot capture these “hidden” causal linkages.
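The synchronization effect can be illustrated with a rough numerical sketch. This is not a reproduction of Hubler and Gintautas’s actual apparatus, which coupled a physical pendulum to a simulated one in real time; here both pendulums are simulated, and all parameter values are invented for illustration. Two driven, damped pendulums with mismatched natural frequencies stand in for the “real” and “virtual” systems; a bidirectional coupling term pulls their angles together.

```python
import math

def mean_gap(coupling, steps=40000, dt=0.001):
    """Simulate two driven, damped pendulums, one standing in for the
    'real' system and one for the 'virtual', bidirectionally coupled
    through their angles. Returns the mean angle gap over the second
    half of the run, after transients have largely decayed."""
    w2_real, w2_virt = 9.81, 8.5          # mismatched natural frequencies (g/l)
    damping, force, drive = 0.3, 1.5, 2.5  # illustrative constants
    th_r = th_v = 0.0                      # angles
    v_r = v_v = 0.0                        # angular velocities
    gap = 0.0
    for i in range(steps):
        f = force * math.cos(drive * i * dt)   # both feel the same drive
        # each pendulum feels a torque proportional to the angle difference
        a_r = -w2_real * math.sin(th_r) - damping * v_r + f + coupling * (th_v - th_r)
        a_v = -w2_virt * math.sin(th_v) - damping * v_v + f + coupling * (th_r - th_v)
        v_r += a_r * dt
        v_v += a_v * dt
        th_r += v_r * dt    # semi-implicit Euler: position uses updated velocity
        th_v += v_v * dt
        if i >= steps // 2:
            gap += abs(th_r - th_v)
    return gap / (steps - steps // 2)

uncoupled, coupled = mean_gap(0.0), mean_gap(50.0)
print(f"uncoupled gap: {uncoupled:.3f}, coupled gap: {coupled:.4f}")
```

With no coupling the two trajectories settle into visibly different steady-state motions; with strong coupling the pair locks into a single shared motion, the numerical analogue of the synchronized “mixed reality” state in which neither trajectory can be explained by one system alone.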
For example, a home management and control system presented in a Mixed Reality environment would connect virtual objects used to control features like temperature and lighting with sensors in the actual physical environment. The result is that your future augmented-reality-controlled home will probably exist in a mixed reality state. You might not even notice it, though.
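A toy sketch of such a coupled home system, with all names and constants invented for illustration: a virtual thermostat dial sets the target temperature, a simulated room’s sensor updates the dial’s reading, and the resulting behavior depends on both sides of the loop at once.

```python
class VirtualDial:
    """The virtual side: a control object the user sees in Mixed Reality."""
    def __init__(self, setpoint):
        self.setpoint = setpoint
        self.reading = None   # filled in each cycle from the physical sensor

class Room:
    """The physical side: a crude thermal model of a real room."""
    def __init__(self, temp):
        self.temp = temp
    def step(self, heater_on, outside=10.0, dt=1.0):
        drift = 0.02 * (outside - self.temp)   # heat loss to the outside
        heat = 0.5 if heater_on else 0.0       # heater input when switched on
        self.temp += (drift + heat) * dt

dial, room = VirtualDial(setpoint=21.0), Room(temp=15.0)
for _ in range(60):
    dial.reading = room.temp                  # physical -> virtual coupling
    heater_on = dial.reading < dial.setpoint  # virtual -> physical coupling
    room.step(heater_on)
print(round(room.temp, 1))
```

Neither the displayed reading nor the room temperature can be predicted from one realm alone: the room’s trajectory depends on the virtual setpoint, and the dial’s reading depends on the room’s physics, which is exactly the “hidden” causal linkage described above.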
Dangers: Reality Distortions and Disorders
No technology is entirely risk free. Mixed Reality seems amazing and promises to radically extend human abilities to understand and act in the world. But what happens when it gets too good? When virtual objects cannot easily be distinguished from real ones?
And what happens when you take the device off and return to the information poor and non-magical ordinary reality we inhabit today? Will we ever want to take these off?
Certainly we can imagine that some people who already have difficulty distinguishing reality from fantasy may also have issues with Mixed Reality environments. But what happens to all of us when reality is fundamentally influenced by the virtual world? Operating effectively in such a full Mixed Reality scenario seemingly requires remaining connected to the shared Mixed Reality environment at all times. Eventually, Mixed Reality may become known simply as “reality”.
Another danger: Mixed Reality might be annoying. Advertisers can now know not only where you are but what you are looking at. Virtual advertisements can appear in your real world and in your private spaces, “pushed” to your display. The idea was pioneered in the development of NanoHome, a long-lost 3D world whose development I led before co-founding Live365.com. In the NanoHome world, everyone had their own 3D virtual mansion which they could decorate and use to share media content. But we could insert objects; for example, a dancing soda can could appear on your kitchen table, and the environment knew where you looked and for how long. With Mixed Reality, add eye tracking and gaze direction to the long list of things the government can monitor about you. And of course the device also constructs and stores a 3D model of your immediate surroundings.
Mixed Reality is the Interface to Magic
But perhaps the possibilities outweigh these negatives.
Once you understand the full implications of Mixed Reality as a separate “interreality” physical state, you inevitably also realize that it implements many of the ancient ideas of magical manipulation of reality.
Using just their voice, coded gestures and movements, and computational symbolic systems, the user can create and manipulate objects in both the imaginary world of the mind and the physical material realm. As the world becomes populated with interactive “thinking” systems, these will be controllable from Mixed Reality. They will be mixed reality objects existing in both real and virtual realms at once. Virtual controllers will influence and cause real physical effects, sometimes in surprising ways. Mixed Reality will become the interface through which we interact with the Internet of Things and talk to robots, drones, smart buildings, intelligent vehicles, and eventually nanotechnology such as nanorobotic swarms. And that’s just the start of what is seemingly possible.
Mixed Reality is the interface to magic.
It isn’t just because imaginary objects and avatars can now appear in and penetrate our realities. Beyond Mixed Reality lies a hidden realm. The notion was explored in another early virtual reality project we did for the U.S. Army, called Object Oriented VR. Every object in a virtual world consists of both a visual representation, what it appears to be, and a data representation, the data elements that control the object’s behavior in the simulated world. These data items, however, can themselves be presented as physical or visual objects in the Mixed Reality world. Even the controlling software itself can become visually and materially present in Mixed Reality. You can toggle between realms with the raising of a finger, and the realms can be overlaid, layered, and simultaneously present as well.
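A minimal sketch of this dual-representation idea, with hypothetical names (the Object Oriented VR project’s actual implementation is not shown): each object carries both an appearance and the data that governs its behavior, and a toggle decides which realm is rendered.

```python
from dataclasses import dataclass, field

@dataclass
class VirtualObject:
    """A virtual object with two faces: what it looks like, and the
    data that controls its behavior. All names here are illustrative."""
    name: str
    mesh: str                                   # visual representation
    state: dict = field(default_factory=dict)   # data representation
    show_data: bool = False                     # which realm is rendered

    def toggle_realm(self):
        """Flip between rendering the appearance and the raw data."""
        self.show_data = not self.show_data

    def render(self):
        if self.show_data:
            # the controlling data itself becomes the visible object
            return dict(sorted(self.state.items()))
        return self.mesh

door = VirtualObject("door", mesh="door.obj",
                     state={"locked": True, "hinge_angle": 0.0})
print(door.render())    # the appearance realm
door.toggle_realm()
print(door.render())    # the hidden data realm
```

The same object answers differently depending on which realm you are looking at, which is the essence of the toggle described above: the data representation is always there, merely hidden behind the visual one until summoned.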
The notion isn’t very different from what some users of DMT report experiencing in their interactions with the so-called “machine elves” that some say construct our world.
But in the new world of Mixed Reality it’s true.
Beyond the mundane world lies a magical realm, the realm of code, the realm of pure mathematics and computation, also the world of imagination, the world where anything can become real and where information manages and controls the levers of our new Mixed Reality environment.
The idea was perhaps presented most effectively in Char Davies’ transformative VR experience Osmose. In Osmose, the underworld of code is a secret world lying beneath the roots of a tree surrounded and penetrated by floating orbs of light. The user experiences the world through an immersive, head-tracked head-mounted display, accompanied by algorithmically generated music.
Finally, it is important to remember that the virtual world need not obey the physical laws of the mundane one. Virtual objects are limitless; they need not obey gravity or the conservation of momentum. Virtual objects can hold entire worlds, and a space can contain an even larger space within it. Objects, whether real or virtual, can hide controls and interfaces. A flower can become a controller, with catching its falling petals giving you one last chance to restore your deleted files. Imagination is the limit.
Reality just became “mixed”, a little bit stranger, and possibly a whole lot more interesting.