MWC recap – Presenting a UX view of the future of mobile networks and systems
At the Mobile World Congress (MWC) in February, we at the UX Lab presented the research concept “Remote Control over Mobile Networks” that we’ve been working on lately. In the following video, Marcus Gårdman of the UX Lab presents the core of the concept:
Over four days we gave around 250 presentations to customers, partners, analysts, and media from all over the world. We also presented the concept as part of Hans Vestberg’s (Ericsson CEO) keynote (scroll to 5:08 for our part).
The “Remote Control over Mobile Networks” concept is one of several concepts that we have defined in a longer-term, ongoing research project called “UX of future networks”. The main scope of the project is for us at the UX Lab to investigate the next generation of networks and systems (such as 5G) from a UX perspective.
When we embark on projects like this, our first activity is always to try to get our heads around the “material” that we are to work with. In this case the material is the next generation of networks and systems (such as 5G). We approached this by doing some hefty desktop research, as well as by talking to industry experts, both internal (see for example this Ericsson video – slides below) and external (such as the METIS 2020 project), among others.
We then did some analysis that allowed us to identify a number of areas or themes that we found relevant and interesting to explore further by conceptualising them in various ways. “Remote Control over Mobile Networks” focuses on a theme that we call “super performance”, meaning that it looks at some of the characteristics of the next generation of networks and systems from a performance perspective (speed, bandwidth, latency, etc.). The telecom industry (and many other industries) is often inclined to refer to characteristics like these when talking about the “next generation”. However, as we read up on this, our biggest question quickly became: “Why?” Why do we need more of everything? What use cases motivate this development? What kinds of functions, services, and needs require this kind of performance?
After some ideation and reflection we identified quite a few potential use cases that would require a huge increase in network and system performance. One of these was the “Remote Control over Mobile Networks” concept, which we decided to proceed with. So why this specific concept? Well, we have all heard of remote surgery and similar systems. We are convinced that these initiatives hold great possibilities, but we also believe that we can take the next steps in this area by developing solutions that are more scalable, that enable richer experiences, and that allow for mobility. We also had discussions with experts in various industries, during which we came to understand that a common problem is getting qualified people to relocate to where the assignments are. One of our goals is to make it possible for people not to have to leave their families for months in order to make a living (unless they want to, of course).
Once we had decided on this use case, we started to think about how to identify and “extract” from it the requirements on the future technologies we are to develop. The answer was rather straightforward: if someone is to remotely control something, that person’s experience will have to be the guideline for the technology. If the remote control system doesn’t allow the person to control the machine/device as well as (or better than) if they were operating it in person, then the solution is not good enough. In other words: for this use case, the user experience will determine the characteristics and requirements of the future networks and systems.
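To make that argument concrete, one way to work from the experience back to network requirements is to sketch a motion-to-response latency budget for a remote-control loop of this kind. Every figure below is an illustrative assumption for the sake of the sketch, not a measurement from the project:

```python
# Illustrative motion-to-response latency budget for remote control.
# Each stage's figure is an assumed, round number; the point is that the
# one-way network delays are only two entries in a longer chain, so the
# network's share of the total budget is tight.

budget_ms = {
    "input sampling":    5,   # joystick poll + serialisation
    "uplink (command)": 10,   # one-way network delay to the machine
    "actuation":        10,   # machine starts to respond
    "camera + encode":  15,   # capture and video compression
    "downlink (video)": 10,   # one-way network delay back
    "decode + display": 10,   # decompression and rendering in the goggles
}

total = sum(budget_ms.values())
print(f"total motion-to-response delay: {total} ms")
for stage, ms in budget_ms.items():
    print(f"  {stage:18s} {ms:3d} ms")
```

Even with these optimistic stage estimates, the operator would see the machine react some 60 ms after moving the controls, which is why latency shows up so prominently among the performance characteristics discussed above.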
An experience cannot be described or understood by defining it in a matrix, spreadsheet, or PowerPoint presentation. An experience has to be experienced. Consequently, we needed to start by creating that experience for our proposed use case. When we started to talk about remote controlling, we saw quite a few possible applications for it. We ended up choosing the control of machines, such as a digger/excavator, as the demonstrator of the concept. Why the digger? Partly because it is a really interesting case that could be very relevant in some situations (such as cleaning up after the Fukushima nuclear disaster, to name one example). Another reason is that it is a case that places hard requirements on the technologies that are to realise it. Instead of going big (as in buying a real excavator), we decided to investigate things with a more iterative approach, starting small and progressively scaling up. Hence we decided to begin by making it possible to remotely control a radio-controlled model excavator. Of course there is a vast difference between controlling a small-scale model and a big machine in a real environment, but as a starting point for understanding the limits of the technologies we envision, we believed this was a sufficient first step.
Furthermore, we made the assumption that one desired characteristic of a successful user experience is that the person feels as present as possible, preferably even immersed. This most likely requires a combination of many parameters and features, such as visuals, audio, touch, smell, etc., so once again we decided to work iteratively by first addressing two things: the driver of our excavator has to be able to see what is going on, and has to be able to adequately control the machine. We are of course going to investigate other parameters that might influence the user experience as the project proceeds, but this is where we started.
The result of all of these considerations is the concept we presented at the MWC. We built a fully working prototype that allows a person to more or less “teleport” into the driver’s seat of the machine. We achieve this by streaming 360-degree panoramic live video to a pair of virtual reality goggles, the Oculus Rift. We also made it possible to actually drive the machine using a custom-built controller.
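As a rough illustration of what the command path in a setup like this involves, here is a minimal Python sketch that packages control values with a timestamp and pushes them over UDP. The function names, wire format, and address are hypothetical; the post does not describe the actual protocol used in the prototype.

```python
# Hypothetical sketch of the command path: sample control values,
# timestamp them, and fire each frame at the machine over UDP.
# UDP is a natural fit here because a late control frame is useless;
# it is better to drop it than to wait for a retransmission.

import json
import socket
import time

MACHINE_ADDR = ("127.0.0.1", 9000)  # placeholder address for the sketch

def make_frame(throttle, steering, bucket, t=None):
    """Build one control frame; values are normalised to -1.0 .. 1.0."""
    return json.dumps({
        "t": time.time() if t is None else t,  # lets the receiver drop stale frames
        "throttle": throttle,
        "steering": steering,
        "bucket": bucket,
    }).encode()

def send_command(sock, throttle, steering, bucket):
    """Serialise a frame and send it to the machine."""
    sock.sendto(make_frame(throttle, steering, bucket), MACHINE_ADDR)

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
send_command(sock, throttle=0.5, steering=0.0, bucket=-0.2)
sock.close()
```

The return path, streaming the panoramic video back to the goggles, is the far heavier half of the loop, which is where the bandwidth questions below come in.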
Despite the fact that this is just a model, the feeling is actually rather immersive. The potential of the use case becomes very obvious through this prototype, but what is even more interesting is that the shortcomings of the technology are also very easy to identify. For example, the importance of the video quality is painfully clear given that our current solution is not at all sufficient. This is an area that we need to improve, and this insight in itself guides us towards finding new solutions by asking questions such as: Do we need higher video resolution? Do we need depth/3D video? Can we compensate with augmented reality delivered by some cloud functionality? The situation becomes even more interesting if we consider (which we have to) how the answers to these questions relate to the networks and systems that are to cater for all of this. What will 3D video mean in terms of bandwidth? How will a cloud component influence latency? Just to mention a few of the questions that emerge.
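To give a feel for the bandwidth side of those questions, here is a back-of-the-envelope estimate. The resolution, frame rate, and compression ratio are illustrative assumptions, not properties of our prototype:

```python
# Back-of-the-envelope bitrate estimate for high-resolution (possibly
# stereo/3D) video. All figures are illustrative assumptions.

def raw_bitrate_mbps(width, height, fps, bits_per_pixel=12, stereo=False):
    """Uncompressed bitrate in Mbit/s (12 bpp assumes 4:2:0 chroma subsampling)."""
    pixels = width * height * (2 if stereo else 1)
    return pixels * fps * bits_per_pixel / 1e6

def compressed_mbps(raw_mbps, compression_ratio=100):
    """Assume roughly 100:1 compression, in the ballpark of H.264-class codecs."""
    return raw_mbps / compression_ratio

# A 4K-per-eye stereo stream at 60 fps:
raw = raw_bitrate_mbps(3840, 2160, 60, stereo=True)
print(f"raw: {raw:.0f} Mbit/s, compressed: {compressed_mbps(raw):.0f} Mbit/s")
```

Under these assumptions a single stereo stream lands at roughly 120 Mbit/s even after heavy compression, and that is before adding any cloud-side processing, which would push the latency questions raised above even harder.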
Needless to say, we don’t have all the answers yet. However, one conclusion we can draw from this use case is that if we want to realise use cases like this, we need to start viewing the infrastructure from a true end-to-end perspective. The next generation of mobile infrastructure will have to be about extremely tight integration of many things, such as connectivity, real-time analytics, management, security, cloud technologies, and much more.
We hope that the real result of this project is clear: we can’t fully understand the complexity of future technologies if we don’t explore them from various perspectives. In this use case we are using user experience as a means of identifying what the next generation of mobile networks and systems can/shall be about.
To be continued :)