The goal of VR technology is to let a user replicate and interact with any environment – to enter a virtual world and be convinced that the world exists. This means a user could just as easily attend a virtual concert as explore Mars. In theory, the possibilities are endless. Yet, for all its promise, the current state of virtual reality reveals a multitude of problems that prevent this from happening.
All in all, virtual reality’s connection to truth is one of experience. If something has happened to us – if we have experienced it – we know that it is a fact. However, virtual reality allows us to experience things that are not actually happening in the physical world. So an immersive virtual experience raises the question: what have we really experienced, and how does it influence our perception of the information we receive about the world?
The Recent History of VR:
2016 was dubbed the original Year of Virtual Reality – the year VR was expected to break the market. Leading virtual reality companies such as Oculus and HTC pushed their products and educated consumers, learning about the market in the process.
Virtual Reality did not break the market, but it did make a dent. VR platforms and equipment earned an estimated $1.8B in 2016 and officially put VR on the map.
Over a year later, the consumer market is better educated and understands the potential of the technology – not just what it is today, but what it could be. VR companies must move quickly to accelerate adoption of the technology, or end up as a historic springboard for future VR companies to flourish.
Realizing the potential of Virtual Reality – that game-changing, truth-challenging experience – will require major changes in how the technology industry and its infrastructure have operated over the past 60 years.
Problem 1: Hardware
Virtual Reality is one of the biggest shifts from previous man-machine interfaces (MMIs) we’ve ever seen. Like earlier technologies at their introduction, virtual reality’s primary problem lies in its hardware. This is due to the massive amount of processing required to render and visualize virtual worlds while simultaneously interpreting the user’s interactions within them. The result is an incredibly clunky and immobile setup, and an expensive one at that.
For the cheapest in-home virtual reality experience, a minimum investment of roughly $1,000 is required. This covers the virtual reality goggles, two hand controllers, two tracking sensors, and a PC tethered to the goggles, since the processing required cannot yet be built into a standalone headset.
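To make the rough $1,000 figure concrete, here is a minimal sketch of the budget. The individual component prices are illustrative assumptions for the sake of the example, not quoted figures from any vendor:

```python
# Rough illustration of a minimum in-home VR budget (circa 2017).
# All individual prices below are hypothetical round numbers.
components = {
    "VR goggles (headset)": 400,
    "hand controllers (pair)": 130,
    "tracking sensors (pair)": 120,
    "tethered VR-ready PC (entry level)": 350,
}

total = sum(components.values())
print(f"Estimated minimum investment: ${total}")  # roughly $1000
```

The exact split varies by vendor and bundle; the point is that no single component dominates – the headset, peripherals, and tethered PC all contribute meaningfully to the entry cost.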
Even with up-and-coming wireless solutions (an added $300), systems must still be connected to a local in-home PC. This lack of processing power contributes not only to problems with ergonomics and mobility, but to display issues as well. According to Oculus Chief Scientist Michael Abrash’s keynote at Oculus Connect 4, virtual reality still has a long way to go to close the gap and replicate human sight – something we are 5 to 10 years from achieving.
Problem 2: Content
Because of its hardware limitations, virtual reality content is one-dimensional – finding meaningful success only in the video game niche. Video games are the closest platforms to a ‘free-range’ user environment, and since realism is often set aside for the experience video games deliver, it’s understandable why virtual reality has been most successful in the gaming market.
And yet, virtual reality is so new that there has yet to be a ‘breakthrough’ in content for the medium. Content creators and storytellers struggle to deliver a unique experience that takes advantage of VR’s potential without being held back by its nuisances. For example, when the eyes perceive motion but the body isn’t actually moving around the physical room, the brain receives conflicting sensory signals, causing users to become dizzy and nauseous.
Cases like these force content creators to build games and content around stationary points within the story. The user is therefore limited to looking around a room, without the ability to realistically explore and interact with their environment. That said, new technological mediums like virtual reality, no matter how ‘prehistoric’ they may seem in their first iterations, have still penetrated the market thanks to unique and novel experiences that compensate for the realism they lack.
But creating and rendering a real-life environment on a 2D plane is nothing compared to portraying that same environment in virtual reality. Traditional consoles engage two senses (vision and hearing), versus the three (vision, hearing, and touch) that will be necessary for an immersive VR experience.
Problem 3: Architecture
The evolution of previous MMIs – from typewriters to computers, consoles, and handheld devices – proceeded steadily in accordance with Moore’s Law. The Law, based on Gordon Moore’s 1965 observation, holds that the number of transistors on an integrated circuit doubles roughly every two years, and was expected to continue doing so for the foreseeable future. Until now.
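The doubling rule can be written as a simple formula: starting from a transistor count N₀ in year y₀, the count in year y is roughly N₀ · 2^((y − y₀)/T), where T is the doubling period of about two years. A minimal sketch, using the Intel 4004’s roughly 2,300 transistors in 1971 as an illustrative starting point:

```python
def projected_transistors(n0: float, start_year: int, year: int,
                          doubling_period: float = 2.0) -> float:
    """Moore's Law projection: count doubles every `doubling_period` years."""
    return n0 * 2 ** ((year - start_year) / doubling_period)

# Twenty years = ten doublings, so the count grows by a factor of 2**10 = 1024.
print(round(projected_transistors(2300, 1971, 1991)))  # -> 2355200
```

This exponential curve is exactly what VR’s processing demands assume will continue – and what the next paragraph argues can no longer be taken for granted.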
We’ve arrived at a future that was not foreseen. Like augmented reality and AI, virtual reality has processing demands that make doubling computing power increasingly difficult without major innovations in the semiconductor industry. It is now estimated that between 2021 and 2024, Moore’s Law will reach its end and two new semiconductor architectures will be commercialized.
Understanding the differences between these architectures is fundamental to analyzing the best path for virtual reality over the next five or so years. The first is quantum computing, which uses the mechanics of subatomic particles to perform certain operations on data far faster than silicon chips can. The second is neuromorphic computing, which relies on chips modeled after the anatomy of the human brain – with neurons and synapses implemented directly in silicon. The prospect of machines that compute the way brains do raises ethical and security debates that complicate the technology.
The iPhone’s success since its 2007 launch revealed just how important user interface is for consumers. The more naturally and simply a consumer can get what they’re looking for out of their device, the better the experience. As a result, the leading and most successful MMIs of today’s generation offer more ‘natural’ interaction, capitalizing on human psychology to deliver a seamless experience between man and machine.
In keeping with the theme of truth, users will derive this experience from the biological inputs of sight, hearing, touch, smell, and taste. Ethical challenges aside, mimicking these senses may be necessary for virtual reality to become psychologically convincing. Thus, VR technology may have more in common with biological technology such as prosthetics than it does with a game console or smartphone.
Although 2016 was dubbed The Year of Virtual Reality, the technology will likely deliver a much greater experience at a much lower cost in just five years. That said, those with a strong presence in the market will not wait five years to determine the best architecture to move forward with.
This leaves us in a state of purgatory – or rather, a chicken-and-egg problem. Which will find its breakthrough first: the content or the hardware? The answer has yet to be determined, but the opportunity exists for either to drive the other.
What do you think?
We want to hear from you. What can be done to improve virtual reality’s hardware issues? Will these solutions have any effect on the processing architectures to come?
Do you think the industry will follow content or hardware changes for the sake of a greater experience at lower cost?
Join the conversation in the EIA Facebook Group or use #EricssonInnovationAwards to tell us what you think!
Click here to register for the Ericsson Innovation Awards 2018 and share your ICT ideas to shape The Future of Truth.