Machine intelligence when automation is not an option
How can a machine learn from experience? And what is learning, anyway, for a machine? A good way to answer these questions is with probabilistic modeling, where researchers use models that admit multiple possible outcomes, each occurring with a different degree of certainty.
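To make the idea concrete, here is a minimal sketch of probabilistic reasoning applied to a toy fault-diagnosis problem: given prior beliefs about possible fault causes and the likelihood of an observed symptom under each cause, Bayes' rule yields an updated degree of certainty for each outcome. The causes, priors and likelihoods are illustrative assumptions, not Ericsson data.

```python
def posterior(priors, likelihoods):
    """Return P(cause | observation) for each cause, given prior
    probabilities P(cause) and likelihoods P(observation | cause)."""
    joint = {c: priors[c] * likelihoods[c] for c in priors}
    evidence = sum(joint.values())  # P(observation)
    return {c: p / evidence for c, p in joint.items()}

# Prior beliefs about what causes a "link down" alarm (assumed numbers).
priors = {"dirty_cable": 0.5, "faulty_adaptor": 0.3, "broken_fan": 0.2}
# How likely each cause is to produce the observed symptom (also assumed).
likelihoods = {"dirty_cable": 0.9, "faulty_adaptor": 0.4, "broken_fan": 0.05}

post = posterior(priors, likelihoods)
# After observing the symptom, "dirty_cable" becomes the most likely cause.
```

The posterior probabilities sum to one, so the model keeps weighing all outcomes against each other rather than committing to a single answer.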
Probabilistic modeling has a central role in scientific data analysis, machine learning, robotics, cognitive science, and artificial intelligence. It has emerged as a principal theoretical and practical approach for designing machines that learn from data acquired through experience.
In the context of mobile networks, our goal is for intelligent radio sites to predict and solve problems in real time. But this level of automation isn’t always possible, particularly when hardware faults occur – dirty optical cables, faulty adaptors, or malfunctioning fans, for example. It is therefore also important to develop virtual assistants that help field service technicians with maintenance tasks on-site.
So what is machine intelligence really?
Machine intelligence refers to all technologies that make machines intelligent enough to solve complex problems without a predefined set of rules for each specific case. It uses both machine learning and artificial intelligence methods, tools and techniques to create data-driven, intelligent, non-fragile systems for automation and network evolution.
Machine intelligence for field maintenance
One of Ericsson’s best examples of machine intelligence is an augmented reality (AR) assistant for Ericsson Radio System (AR assistant for ERS). In this case, researchers utilized machine intelligence to address field maintenance tasks that cannot be automated, which is often the case when hardware errors occur.
The specific challenge was how to support field service technicians in solving problems more efficiently on-site. Arriving at a site that is raising alarms – often several at once – they cannot count on having internet access, and it may not be easy to know where to start searching.
And when they find the fault, how do they know if it was the root cause of the problem?
The challenge is three-fold:
- identifying relevant information in the enormous amount of customer documentation
- visually detecting a faulty object, for example a hardware component of a specific board
- troubleshooting and reusing approaches for clearing previous alarms and trouble tickets.
But who really teaches the machines how to be intelligent and to address these challenges? Meet the Ericsson researchers behind some very clever discoveries:
Can you process 14,000 pages of customer documentation and semantically annotate it manually?
To guide field service technicians in solving faults, rapid access to relevant documentation is essential. From the 1990s – when containers full of paper binders were shipped together with the equipment – to the digitized form of today, documentation must be presented in human-readable format.
However, this makes processing the documents with computer applications difficult. As part of this research project, machine learning has been used to convert the documentation to a graph-based representation and automatically tag it with additional information to make it machine-readable.
This information includes semantics describing the purpose of each section of every document, as well as links between document sections and alarm information. Doing this manually for more than 14,000 pages of documentation, instead of applying machine learning, would take a significant amount of effort.
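The kind of structure this produces can be sketched as a small graph: document sections become nodes annotated with semantic tags, and an index links each alarm to the sections that help resolve it. The class, section titles, tags and alarm names below are hypothetical illustrations, not the actual Ericsson data model.

```python
from collections import defaultdict

class DocGraph:
    """Toy graph-based representation of documentation sections."""

    def __init__(self):
        self.nodes = {}                       # section id -> metadata
        self.alarm_index = defaultdict(list)  # alarm id -> section ids

    def add_section(self, section_id, title, purpose, alarms=()):
        # Each section carries semantic annotations (here: its purpose)
        # and edges to the alarms it is relevant for.
        self.nodes[section_id] = {"title": title, "purpose": purpose}
        for alarm in alarms:
            self.alarm_index[alarm].append(section_id)

    def sections_for_alarm(self, alarm):
        """Return the section titles a technician should read for an alarm."""
        return [self.nodes[s]["title"] for s in self.alarm_index[alarm]]

g = DocGraph()
g.add_section("s1", "Replacing the fan unit", purpose="repair",
              alarms=["FAN_FAILURE"])
g.add_section("s2", "Cleaning optical connectors", purpose="maintenance",
              alarms=["LINK_DEGRADED"])
```

With the graph in place, an incoming alarm maps straight to the relevant documentation, instead of the technician searching thousands of pages by hand.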
The Ericsson expert
Athanasios Karapantelakis holds an M.Sc. and a LicEng from the KTH Royal Institute of Technology. He has been with Ericsson Research since 2012, holding the position of Master Researcher. His research interests include cloud computing and machine learning.
Seeing is believing with object detection
The second challenge is to build augmented reality (AR) applications that perform automatic detection of all relevant hardware components and provide visualization and guidance on the tablet/smartphone screen of a field technician.
The technician points the smartphone’s camera at the hardware components of interest. The components are automatically recognized, and the corresponding bounding box, label and link to documentation are visualized on the screen. Thanks to recent advances in machine learning and in the computing power of mobile devices, all these steps are performed in real time, and the visual object detector runs as a standalone application on the technician’s portable device.
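One concrete step in any such detector pipeline is non-maximum suppression (NMS), which keeps only the highest-confidence bounding box among overlapping detections of the same component before anything is drawn on screen. This is a generic textbook sketch, not Ericsson's implementation; boxes are assumed to be `(x1, y1, x2, y2, score)` tuples.

```python
def iou(a, b):
    """Intersection-over-union of two axis-aligned boxes."""
    x1, y1 = max(a[0], b[0]), max(a[1], b[1])
    x2, y2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, x2 - x1) * max(0, y2 - y1)

    def area(r):
        return (r[2] - r[0]) * (r[3] - r[1])

    union = area(a) + area(b) - inter
    return inter / union if union else 0.0

def nms(boxes, iou_threshold=0.5):
    """Greedy NMS: keep the best-scoring box, drop boxes overlapping it."""
    kept = []
    for box in sorted(boxes, key=lambda b: b[4], reverse=True):
        if all(iou(box, k) < iou_threshold for k in kept):
            kept.append(box)
    return kept

# Two overlapping detections of the same board plus one separate component;
# only the stronger of the overlapping pair survives (example values).
detections = [(10, 10, 50, 50, 0.9),
              (12, 12, 52, 52, 0.8),
              (100, 100, 140, 140, 0.7)]
```

Keeping this post-processing lightweight is part of what lets the whole detector run in real time on a phone or tablet.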
The Ericsson experts
Volodya Grancharov received a Ph.D. in telecommunications from the Sound and Image Processing Lab at the KTH Royal Institute of Technology in 2006. Since then he has been with Ericsson Research, Stockholm, where he holds the position of Master Researcher. His main research focus is on designing signal processing and machine learning algorithms for audio/video applications.
Sigurdur Sverrisson received an M.Sc. degree in Wireless Communication from Chalmers University of Technology, Gothenburg, in 2008. Since then he has worked at Ericsson Research, where he currently holds the position of Senior Researcher. His research interests include audio and video analytics.
Yifei Jin received an M.Sc. degree in Wireless Systems from the KTH Royal Institute of Technology in 2018. He has been with Ericsson Research since 2017, where he holds the position of Researcher. His main research focus is on communication networks, artificial intelligence and natural language processing.
Leonid Mokrushin is a senior specialist in the area of cognitive technologies at Ericsson Research. His current focus is on investigating new opportunities within AI in the context of industrial and telco use cases. He joined Ericsson Research in 2007 after postgraduate studies at Uppsala University, Sweden, with a background in real-time systems. He received an M.Sc. in software engineering from Peter the Great St. Petersburg Polytechnic University, Russia in 2001.
Identifying the root cause of the problem with Visible Light Communication
Today, in order to monitor the alarm status of on-site equipment such as antennas, radio units and baseband units, a technician needs special equipment and training. To address this problem, we have developed a state-of-the-art Visible Light Communication (VLC) system. The system enables one-way communication from any unit equipped with LEDs to a device with a camera, such as a smartphone.
The distinctive property of the system is its ability to track and decode communication from multiple simultaneously signaling LEDs. And it can do all this with a negligible memory and CPU footprint.
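At its simplest, the receiver side of such a VLC link can be sketched as follows: the camera samples the brightness of one LED once per frame, and on/off keying is decoded by averaging each bit period and thresholding. A real system must additionally handle rolling-shutter effects, clock synchronization and multiple simultaneously signaling LEDs; the frame rates, thresholds and brightness values below are assumptions for illustration only.

```python
def decode_ook(brightness, threshold=128, samples_per_bit=2):
    """Decode on/off-keyed bits from per-frame brightness samples:
    average each bit period and compare with the threshold."""
    bits = []
    for i in range(0, len(brightness) - samples_per_bit + 1, samples_per_bit):
        window = brightness[i:i + samples_per_bit]
        bits.append(1 if sum(window) / len(window) >= threshold else 0)
    return bits

# Brightness trace of an LED blinking the pattern 1-0-1-1 (assumed values,
# two camera frames per transmitted bit).
trace = [250, 240, 10, 15, 230, 245, 255, 250]
decoded = decode_ook(trace)  # -> [1, 0, 1, 1]
```

Because decoding reduces to a few additions and comparisons per bit, it is easy to see how the full system can keep a negligible memory and CPU footprint.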
Using LEDs to transmit alarms – combined with machine learning to identify the root cause of problems – enables a less qualified technician to troubleshoot a wider range of problems. Ultimately, it reduces both the time and cost of field service operations.
The Ericsson expert
Maxim Teslenko received a Ph.D. degree in Computer Science from the Department of Microelectronics and Information Technology at the KTH Royal Institute of Technology in 2008. He has been with Ericsson Research since 2012, holding the position of Senior Researcher. His research interests include hardware and software prototyping, formal verification and machine learning.
Learn more about machine intelligence
Stay tuned for the next blog post on using machine intelligence to automate as much as possible, using truly data-driven research.
Interested in learning more about how our machine intelligence uses machine learning and artificial intelligence to drive systems for automation and network evolution?
Discover more on our site and explore how Machine Intelligence differentiates Ericsson’s portfolio and services.