
The emergence of the Internet of Skills

A next step in the connected world is to enable any human being to teach, be taught and execute actions remotely. In this way, human skills can be delivered or acquired without any physical boundaries, spreading knowledge globally in a faster and more efficient way. This is commonly referred to as the Internet of Skills and is expected to be a key component of the future digitalized world.

The emergence of the Internet of Skills is one of the five key technology trends outlined by our CTO. A key requirement in this context is the availability of visual, audio and haptic technologies that allow human beings to have sensory experiences close to the ones they would have when learning, teaching or executing actions locally. For example, a surgeon in Stockholm performing remote surgery on a patient in Gothenburg should be able to see, hear and feel in the same way as if the surgery were being performed in the doctor’s operating theater in Stockholm. Similarly, a technician repairing a machine at a certain location should be able to be instructed by a remote expert in the same way as if the expert were physically present with the technician.

Methodologies that allow for seamless interaction between the user and remote devices and robots are also fundamental. Communication technologies are an essential part of this system: ultra-low latency, high-bandwidth networks as well as cloud technologies are needed to deliver the required end-to-end experience.

In recent years, several advancements have taken place in these technology areas. Affordable Virtual Reality (VR) and Mixed Reality (MR) devices capable of rendering 3D visual representations are now on the market, as are new sensors capable of capturing high-quality 3D visual and audio information from the world in real time. 3D spatial audio technologies have also been developed, and the latest haptic technologies allow users to experience motion, forces, shapes and textures with increasing levels of realism.

One example is the mixed reality remote expert assistance system developed by PengramAR and ScopeAR. However, current systems lack the audio, visual and haptic performance, devices and machine interaction methods that would provide a fully immersive Internet of Skills experience. As these systems require a combination of multiple technology components and an understanding of human factors, their full enablement will depend on the development and maturity of each component. Moreover, these systems do not yet benefit from the 5G communication and cloud technologies that provide the required flexibility, high bandwidth and low latency.

Communicating remotely in real time

High-quality and efficient capturing, transmission and rendering of visual, audio and haptic information is essential. For this, sensors, algorithms and actuators for all these modalities are required.

With respect to the visual component, technology developments have focused on real-time 3D video capturing, processing and rendering. For capturing, several new sensors relying on different underlying technologies have been launched on the market in recent years, and their prices have decreased significantly. Depth information is combined with RGB textures to create a 3D representation of the captured world. These representations aim to give the user the experience of being immersed in mixed or virtual reality. Since this experience is expected to mimic interaction with the natural environment, the immersive representations need to be coded, rendered and presented to the user at sufficient quality. This puts pressure on bandwidth, due to increases in resolution and frame rates, as well as on latency, as users expect to navigate and interact with the representations in real time.
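
To make this more concrete, here is a minimal sketch of how a depth map and an RGB texture can be fused into a colored 3D point cloud using a pinhole camera model. The 640x480 resolution and the intrinsic parameters (fx, fy, cx, cy) are illustrative assumptions, not values from any specific sensor.

```python
# Minimal sketch: back-projecting a depth map with its RGB texture into a
# colored point cloud with a pinhole camera model. All parameters below are
# illustrative assumptions, not values from a particular depth sensor.
import numpy as np

def depth_rgb_to_point_cloud(depth, rgb, fx, fy, cx, cy):
    """depth: (H, W) array in metres, rgb: (H, W, 3) uint8 texture."""
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))  # pixel coordinates
    z = depth
    x = (u - cx) * z / fx                           # pinhole back-projection
    y = (v - cy) * z / fy
    points = np.stack([x, y, z], axis=-1).reshape(-1, 3)
    colors = rgb.reshape(-1, 3)
    valid = points[:, 2] > 0                        # drop pixels without depth
    return points[valid], colors[valid]

# Illustrative call with synthetic data
depth = np.random.uniform(0.5, 3.0, (480, 640))
rgb = np.random.randint(0, 256, (480, 640, 3), dtype=np.uint8)
pts, cols = depth_rgb_to_point_cloud(depth, rgb, fx=525.0, fy=525.0, cx=320.0, cy=240.0)
```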

Advances in machine learning have impacted capturing and processing algorithms, improving their quality and efficiency. On the rendering side, several new VR and MR devices are launched on the market every year, allowing for higher resolutions, improved field of view and depth perception, better wearability and on-device positioning. The recent development of low-cost smartphone-based MR headsets looks particularly promising for MR applications. Given the current large investments in both capturing and rendering technologies, we foresee that the quality, performance and availability of these technologies will keep increasing.

As for the audio component, technology developments have focused on newer, more affordable spatial audio microphones for capturing the sound field in one or more locations in a room. Through spatial audio filtering methods, these can then be used to separate individual sound sources in the room and estimate their positions in 3D space, or to deliver a representation of the sound field. The performance of spatial audio renderers is very much tied to how well the head-related (HR) filter models used in the rendering are adapted to the user’s own physical HR filters. This is a very active research and development area: reasonable solutions currently exist, but better ones will be available in the near future. Formats for exchanging spatial audio streams are now being specified, and compression techniques are being developed to handle the increase in the amount of data needed to capture the spatial aspects of audio.
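
As a minimal illustration of binaural rendering, the sketch below convolves a mono source with a pair of head-related impulse responses (HRIRs) to produce a two-channel headphone signal. The HRIRs here are synthetic placeholders; an actual renderer would use measured or personalized filters.

```python
# Minimal sketch of binaural rendering: one mono source filtered with a pair
# of head-related impulse responses (HRIRs). The HRIRs are crude synthetic
# placeholders that only introduce an interaural time and level difference.
import numpy as np
from scipy.signal import fftconvolve

fs = 48000
mono = np.random.randn(fs)                   # 1 second of a placeholder source

hrir_left = np.zeros(256)
hrir_left[10] = 1.0                          # sound reaches the left ear first
hrir_right = np.zeros(256)
hrir_right[25] = 0.7                         # later and quieter at the right ear

left = fftconvolve(mono, hrir_left)[:len(mono)]
right = fftconvolve(mono, hrir_right)[:len(mono)]
binaural = np.stack([left, right], axis=1)   # (samples, 2) headphone signal
```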

Regarding the haptic component, technology developments have increasingly focused on wearable haptic devices, driven mainly by the push from VR applications. The devices developed are worn either on the fingers or on the whole hand, and efforts have focused on allowing users to feel motion, forces and textures in 3D. Similarly, the devices allow for sensing of the motion and forces applied by the user during interactions. Wearability is still a challenge, as currently available devices are typically difficult to set up or wear, especially when many degrees of freedom are enabled.
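
To give a flavor of the computation behind force feedback, here is a sketch of the force calculation inside a typical haptic rendering loop running at around 1 kHz, where a virtual wall is rendered with a spring-damper model. The stiffness, damping and sampled state values are illustrative assumptions.

```python
# Minimal sketch of haptic force rendering: a virtual wall at x = 0 rendered
# with a spring-damper model, evaluated once per ~1 ms haptic loop iteration.
# Stiffness, damping and the sampled state below are illustrative assumptions.
def wall_force(x, v, k=800.0, b=2.0):
    """Force (N) sent to the actuator for position x (m) and velocity v (m/s)."""
    if x < 0.0:                      # the finger has penetrated the wall
        return -k * x - b * v        # spring pushes out, damper adds stability
    return 0.0                       # free space: no force

# One simulated step of the 1 kHz loop
position, velocity = -0.002, 0.05    # 2 mm penetration, moving back out
print(wall_force(position, velocity))
```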

Ultrasound-based haptic devices have recently been proposed, in which ultrasound waves are projected onto the user’s hands, so the user does not need to wear or hold a physical device. This technology is very appealing as it is simple to use and set up. The major drawback is that the haptic feedback quality is currently low. We believe that wearable haptic devices and ultrasound-based devices will continue to improve, while novel sensing and actuation technologies may further improve haptic devices in the coming years. Standardization efforts for haptic communications have also begun in the past years, which will potentially allow a quicker proliferation and adoption of haptic technologies and greatly impact the enablement of the Internet of Skills.

In this new paradigm, humans will interact via devices with characteristics very different from what they are currently used to, and, most importantly, human-robot interactions will be common. Methods to design these new devices and robots with human interaction in mind are thus required.

The availability of Internet of Skills systems has the potential to greatly reduce the need for human travel for work and business, as well as to further allow humans to work remotely. Such technology would thus contribute greatly to UN Sustainable Development Goals 11, 12 and 13.

Communication technologies

The standardization of 5G communication technologies will be a key enabler of the Internet of Skills. Haptic communications require latencies below 10 ms, which is made possible by the 5G Ultra-Reliable Low-Latency Communication (URLLC) standard feature. In addition, the large volumes of 3D visual information impose high network bandwidth demands. With low-latency networks, large amounts of data can be transmitted quickly between devices, allowing more time to be spent on processing and analytics of the available information.
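
A back-of-the-envelope sketch of these demands is shown below. The point count, bytes per point, frame rate and per-hop delay figures are illustrative assumptions, not measurements of any particular system.

```python
# Rough, illustrative arithmetic for the bandwidth and latency demands
# discussed above. All numbers are assumptions chosen for the example.

# Uncompressed point-cloud bandwidth: xyz coordinates plus RGB per point
points_per_frame = 300_000
bytes_per_point = 3 * 4 + 3              # three 32-bit floats + three color bytes
fps = 30
bandwidth_gbps = points_per_frame * bytes_per_point * fps * 8 / 1e9
print(f"raw point-cloud stream: {bandwidth_gbps:.2f} Gbit/s")   # ~1.08 Gbit/s

# Splitting a 10 ms haptic latency budget across the end-to-end chain
budget_ms = 10.0
radio_ms, transport_ms, processing_ms = 1.0, 2.0, 5.0
margin_ms = budget_ms - (radio_ms + transport_ms + processing_ms)
print(f"remaining margin: {margin_ms:.1f} ms")                  # 2.0 ms
```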

The latest cloud technology developments have the potential to increase flexibility and provide better resource utilization in Internet of Skills systems. Edge cloud will enable the efficient processing of large volumes of 3D visual, audio and haptic information captured by a large number of devices near the users. Computation at VR/MR headsets can also be reduced by relying on an edge cloud, which extends the devices’ battery life.

Research at Ericsson

Since 2015 we have been partnering with several companies and universities to demonstrate the feasibility and exciting benefits of the Internet of Skills. Examples include the remote control of buses with SCANIA AB, the remote control of a wheel loader and an excavator with Volvo AB, the remote control of a robotic arm with ABB, the remote control of a 5G concept car with KTH, remote medical examination with Kings College London, RoomOne Labs, Neurodigital and British Telecom, as well as the remote control of drones. You can see more of these examples below.

Ericsson Research is active in immersive audio coding and rendering development in both 3GPP and MPEG, as well as in video codec development in MPEG (ISO) and ITU-T standardization. The audio work includes investigating methods for high-quality and efficient rendering and representation of immersive audio, including modeling of head-related filters.

Video investigations include system and transport aspects for high-quality rendering and presentation of natural and synthetic video, as well as standardization of interfaces for distributing the processing of these representations across network, cloud and client devices.

We are also founding members of the IEEE P1918.1 Working Group on the Tactile Internet, which focuses on the standardization of haptic communications. The main targets of the standardization group are the definition of a common architecture, interfaces and compression algorithms for the efficient communication of haptic information.
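
As one example of how the packet rate of a haptic stream can be reduced, the sketch below implements perceptual deadband coding, a technique commonly discussed in the Tactile Internet literature: a new sample is transmitted only when it differs from the last transmitted one by more than a perceptual threshold. This is a generic illustration, not the scheme specified by the working group.

```python
# Minimal sketch of perceptual deadband coding for a haptic signal: transmit a
# sample only when it differs from the last transmitted value by more than a
# Weber-fraction threshold. The threshold and signal are illustrative.
def deadband_encode(samples, weber_fraction=0.1):
    """Return (index, value) pairs of the samples that would be transmitted."""
    transmitted = []
    last = None
    for i, s in enumerate(samples):
        if last is None or abs(s - last) > weber_fraction * abs(last):
            transmitted.append((i, s))
            last = s
    return transmitted

# A slowly varying force signal: most samples fall inside the deadband
forces = [1.00, 1.02, 1.05, 1.20, 1.21, 1.50, 1.52, 1.51]
print(deadband_encode(forces))   # [(0, 1.0), (3, 1.2), (5, 1.5)]
```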

Remote control of a full-size excavator with haptic feedback, MWC 2015

Remote control over mobile networks – the making of a 5G use case demo
True remote controlling at the upcoming Mobile World Congress

Telehaptic drone control at MWC 2016

Telehaptic Drone Control (Feel the Force)
Taking 5G to the sky

Robot remote operation with haptic feedback with ABB at MWC 2016

Haptic technology makes it feel real

Remote surgery with Kings College London at MWC 2017

Transmitting the doctor’s sense of touch: transforming robotic surgery

Concluding remarks

We argue that the first fully immersive Internet of Skills systems will become available within the next few years, with their performance and proliferation increasing as all the key technology components mature and 5G communication and cloud technology become available. Encouragingly, both industry and consumers are demonstrating broad interest in and openness to the deployment of these systems.

Find out more about what Ericsson CTO Erik Ekudden has to say about the emergence of the Internet of Skills in his Ericsson Technology Review article: Five technology trends augmenting the connected society.

Andre Gualda

André Gualda is a senior advisor for consumer insight at Ericsson Consumer and Industry Lab. He joined Ericsson in 2011 and has held business intelligence and consumer insights positions in Brazil, the United States and Sweden. André has conducted research in the ICT field on topics such as TV and media, mobile commerce and social media. He holds a bachelor degree in business from the Universidade Municipal de São Caetano do Sul, São Paulo, Brazil.

Alvin Jude

Alvin Jude has worked with Ericsson Silicon Valley's Advanced Technology Labs since 2014. He works in the Digital Representation and Interaction research area and specializes in Human-Computer Interaction. Alvin has worked to improve users' experience with media content discovery, personalization and recommendations, as well as Augmented Reality.

Erlendur Karlsson

Dr. Erlendur Karlsson is a Master Researcher in the Digital Representation and Interaction research area at Ericsson Research, specializing in Spatial Audio. He has over 10 years of experience working with methods to capture spatial sound and render spatial sound through loudspeakers and headphones. He has also worked with machine learning methods in speech and speaker recognition systems. Before joining Ericsson in 1999, Erlendur was an assistant professor in the Department of Systems Engineering at Uppsala University, working in the field of System Identification. He also served as an associate editor for the IEEE Transactions on Acoustics, Speech and Signal Processing. Erlendur holds two MSc degrees, in Electrical Engineering and in Mathematics, and a PhD in Electrical Engineering, all from the Georgia Institute of Technology, Atlanta, USA.

José Araújo

José Araújo is a Senior Researcher at Ericsson Research in Stockholm, Sweden. He received his Ph.D. in Automatic Control from KTH Royal Institute of Technology in Stockholm, Sweden, in 2014 and his M.Sc. degree in electrical and computer engineering from the Faculty of Engineering, University of Porto (FEUP), Portugal, in 2008. He has held visiting researcher positions at the University of British Columbia (UBC) and the University of California, Los Angeles (UCLA), in 2008 and 2012, respectively. His current research interests are in future device technologies and cyber-physical systems.

Lukasz Litwic

Dr. Lukasz Litwic is Research Leader of the Visual Technology team at Ericsson Research in Stockholm. Lukasz joined Ericsson in 2007 and has worked on various aspects of image processing and video compression research, which formed the foundation of Ericsson's real-time broadcast encoding products. Dr. Litwic received his Master of Engineering degree from Gdansk University of Technology in 2005 and a Ph.D. from the University of Surrey in 2015.
