
Cognitive technologies in network and business automation

Forward-looking network operators and digital service providers require an automated network and business environment that can support them in the transition to a new market reality characterized by 5G, the Internet of Things, virtual network functions and software-defined networks. The combination of machine learning and machine reasoning techniques makes it possible to build cognitive applications with the ability to utilize insights across domain borders and dynamically adapt to changing goals and contexts.



Authors: Jörg Niemöller, Leonid Mokrushin


Terms and abbreviations

CPI – Customer Product Information
eTOM – Enhanced Telecom Operations Map
SID – Shared Information/Data
SLI – Service Level Index
TOVE – Toronto Virtual Enterprise


The need to support emerging technologies will soon lead to radical changes in the operations of both network operators and digital service providers, as their businesses tend to be based on a complex system of interdependent, manually-executed processes. These processes span across technical functions such as network operation and product development, support functions such as customer care, and business-level functions such as marketing, product strategy planning and billing. Manually-executed processes represent a major challenge because they do not scale sufficiently at a competitive cost.

Automation is an essential part of the solution. At Ericsson, we envision a new infrastructure for network operators and digital service providers in which intelligent agents operate autonomously with minimal human involvement, collaborating to reach their overall goals. These agents base their decisions on evidence in data and the knowledge of domain experts, and they are able to utilize knowledge from various domains and dynamically adapt to changed contexts.

Cognitive technologies

Software that is able to operate autonomously and make smart decisions in a complex environment is referred to as an intelligent agent (a practical implementation of artificial intelligence and machine learning). It perceives its environment and takes actions to maximize its success in achieving its goals. The term cognitive technologies refers to a diverse set of techniques, tools and platforms that enable the implementation of intelligent agents.

Figure 1: The model of mind

The model of mind shown in Figure 1 illustrates the main tasks of an intelligent agent, and thus the main concerns of cognitive technologies. The model describes the process of deriving an action or decision from input and knowledge.

An intelligent agent needs a model of the environment in which it operates. Technologies used to capture information about the environment are diverse and use-case dependent. For example, natural language processing enables interaction with human users; network probes and sensors deliver measured technical facts; and an analytics system processes data to provide relevant insights.

The purpose of intelligent agents is to perform actions and communicate solutions. Acting complements sensing in interaction with the environment. The choice of techniques and tools is equally diverse and use-case dependent. For example, speech synthesis enables convenient communication with users, robotics involves mechanical actuation, and an intelligent network manager can act by executing commands on the equipment or changing configuration parameters.

The thinking phase in the model of mind is the source of the intelligence in an intelligent agent. Thinking can be implemented, for example, as a logic program in Prolog, in an artificial neural network, or in any other type of inference engine, including machine-learned models.

The thinking phase derives its decisions from facts and previous experiences stored in a knowledge base. The key is a machine-readable knowledge representation in the form of a model. Graph databases and triple stores are frequently used for efficient storage. Formal knowledge definition can be achieved using concepts from RDF (the Resource Description Framework) or description languages such as UML (the Unified Modeling Language) or OWL (the Web Ontology Language).
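As a minimal illustration of the idea (far simpler than a real triple store or RDF engine, and with all facts invented for the example), knowledge can be held as subject-predicate-object triples and retrieved by pattern matching:

```python
# Minimal subject-predicate-object triple store (illustrative sketch, not RDF).
triples = {
    ("Adam", "is_a", "User"),
    ("Adam", "has_sli", "4"),
    ("SLI", "quantifies", "CustomerSatisfaction"),
}

def query(s=None, p=None, o=None):
    """Return all triples matching the pattern; None acts as a wildcard."""
    return {(ts, tp, to) for (ts, tp, to) in triples
            if s in (None, ts) and p in (None, tp) and o in (None, to)}

# All facts asserted about Adam:
facts_about_adam = query(s="Adam")
```

A production system would add indexing, persistence and a formal schema, but the principle of pattern-based retrieval over asserted facts is the same.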

Machine learning and machine reasoning

There are two technological pillars on which an intelligent agent can be based: machine learning and machine reasoning (illustrated in Figure 2). Both involve making predictions and planning actions toward a goal. Each has its own strengths and weaknesses.

Figure 2: Machine reasoning and machine learning [1]

Machine learning relies on statistical methods to numerically calculate an optimized model based on the training data provided. The optimization is driven by desired characteristics of the model, such as a low average error or low rates of false positive and false negative predictions. Applying the learned numerical model to new data leads to predictions or action recommendations that are statistically closest to the training examples.
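A minimal sketch of this principle, using invented training data and one of the simplest possible models, a straight line fitted by closed-form least squares:

```python
# Learning as numerical optimization: fit y = a*x + b to training examples
# by minimizing the squared prediction error (closed-form least squares).
xs = [1.0, 2.0, 3.0, 4.0]
ys = [2.1, 3.9, 6.2, 7.8]  # invented training data, roughly y = 2x

n = len(xs)
mean_x = sum(xs) / n
mean_y = sum(ys) / n
a = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys)) / \
    sum((x - mean_x) ** 2 for x in xs)
b = mean_y - a * mean_x

# Applying the learned model to new input yields the prediction that is
# statistically closest to the training examples.
prediction = a * 5.0 + b
```

The same scheme, with far richer model families and iterative optimization instead of a closed form, underlies neural networks and other learned models.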

An example of a learned model is the Service Level Index (SLI) [2] [3] implemented in Ericsson Expert Analytics, which predicts a user’s level of satisfaction. The training input is measurements from network probes that show the QoS delivered to the user combined with surveys in which users state their level of satisfaction. The learned model predicts this satisfaction level from new QoS measurements.

Machine reasoning generates conclusions from symbolic knowledge representation. Widely used techniques are logical induction and deduction. It relies on a formal description of concepts in a model, often organized as an ontology. Knowledge about the environment is asserted within the model by connecting abstract concepts and terminology to objects representing the entities to be used and managed. For example, “customer satisfaction,” “user” and “quantifies” are abstract concepts. Based on these, we can assert that “Adam” is a user and “4” is the SLI value representing his satisfaction. We can further assert inference rules: “SLI quantifies satisfaction,” “SLI below 5 is low,” “low satisfaction causes churn”. Based on this knowledge, a machine-reasoning process would logically conclude that Adam is about to churn. It would trace the reason to the low SLI value.
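The churn example can be sketched as a small forward-chaining reasoner. This is an illustrative toy, not a real inference engine; the rule functions stand in for declaratively asserted inference rules:

```python
# Asserted facts: (subject, predicate, object).
facts = {
    ("Adam", "is_a", "user"),
    ("Adam", "sli", 4),
}

def rule_low_sli(facts):
    # "SLI below 5 is low satisfaction."
    return {(s, "satisfaction", "low")
            for (s, p, o) in facts if p == "sli" and o < 5}

def rule_churn(facts):
    # "Low satisfaction causes churn."
    return {(s, "risk", "churn")
            for (s, p, o) in facts if p == "satisfaction" and o == "low"}

# Forward chaining: apply the rules until no new facts can be derived.
changed = True
while changed:
    new = rule_low_sli(facts) | rule_churn(facts)
    changed = not new <= facts
    facts |= new

# The reasoner now concludes that Adam is at churn risk, and the derivation
# chain traces the conclusion back to the low SLI value.
```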

Hybrid approaches, such as symbolic neural networks, also exist. These are deep neural networks with a numeric, statistics-based core and an implicit mapping of the model's numeric variables to a symbolic representation.

Designing intelligent agents

Autonomous intelligent agents support human domain experts by fully taking over the execution of operational tasks. Doing this convincingly requires them to react and execute faster than humans and be able to overcome unexpected situations, while making fewer errors and scaling to a high number of managed assets and tasks.

Intelligent agents are developed and deployed in a software life cycle. As such, they profit from the encapsulation provided by a microservice architecture, comprehensive and performant data routing and management, and a dynamically scalable execution environment. The ability to create an optimal thinking core for an intelligent agent requires a good understanding of the fundamental characteristics of machine learning and machine reasoning.

The role of abstraction

A person uses abstraction to distill essential information from the input presented. Abstraction provides focus and easier-to-grasp concepts as a base for reasoning and decisions. It also facilitates communication.

Interacting with a person or with another intelligent agent requires an intelligent agent to have the ability to operate on the same level of abstraction with a shared understanding of concepts and terminology. This includes, for example, how goals are formulated and how the intelligent agents present insights and decisions.

Machine-learned models are numerical. They manage abstraction by mapping meaning to numerical values. This constitutes an implicit translation layer between the numerical representation and the abstract semantics.

Ontology-based models are symbolic. Within an ontology, objects are established and linked to each other using predicates. Machine reasoning draws inference from this representation by logical induction and deduction.

The symbolic representation assigned to objects, predicates and numeric values is a matter of convention. It is chosen to use the same abstraction and the same terminology as the domain it reflects. This facilitates an intuitive experience when users create and maintain the knowledge base.

Business strategy planning is a good example of a highly abstract domain. It deals with concepts such as growth, churn, customers, satisfaction and policy. Numerical data needs to be interpreted to deliver a meaningful contribution at this level. An intelligent agent performing this interpretation of data is a valuable assistant in business-level processes.

The introduction of intelligent agents will not make domain experts unnecessary. Instead, the task of the expert shifts from direct involvement in operational processes to maintenance of the models that dictate the operation of autonomous agents.

The abstraction of the models contributes to the efficiency of the domain expert. A practical example is the design of the decision processes of expert systems that propose actions. These systems reach an answer by checking a tree of branching conditions. Even with a small number of variables, manually designing these conditions is a time-consuming and unintuitive task. An intelligent agent can instead compile the tree from knowledge about the reasons for proposing an action. Managing the abstract rules is considerably more intuitive because the abstraction rises to the level the expert is used to thinking at.
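The idea can be sketched as follows. The rules and actions are hypothetical, and a flat rule scan stands in for the compiled condition tree; the point is that the expert maintains the declarative rules, not the branching logic:

```python
# Abstract expert rules: each names an action and the conditions justifying it.
rules = [
    {"action": "restart_node", "conditions": {"alarm": "sw_failure", "traffic": "low"}},
    {"action": "escalate",     "conditions": {"alarm": "hw_failure"}},
]

def compile_decider(rules):
    """Compile the rule list into an executable decision procedure, replacing
    the hand-crafted tree of branching conditions."""
    def decide(observation):
        for rule in rules:
            if all(observation.get(k) == v for k, v in rule["conditions"].items()):
                return rule["action"]
        return "no_action"
    return decide

decide = compile_decider(rules)
```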

Obtaining and managing knowledge

The intelligent digital assistant example (see proof of concept #1 on page 8) demonstrates an automated process that contributes knowledge. A document-crawler application generates the assistant's knowledge from product manuals written in natural language. Based on existing knowledge, it identifies and classifies the information provided in the documents and asserts it as additional knowledge. Furthermore, site data stored in catalogs and inventories is automatically and continuously asserted in the knowledge base. This keeps the knowledge up-to-date, and the reasoning results adapt dynamically to changed facts.

The intelligent digital assistant also uses image recognition. It identifies physical elements and the current situation from images and asserts its findings in the knowledge base. This demonstrates a transformation of numeric data into symbolic knowledge. Deep-learning based neural networks are particularly successful at this task of identifying patterns in data and classifying them symbolically.

The intelligent digital assistant’s use of image recognition and its ability to read natural language documents show that not all knowledge for machine reasoning needs to originate from a human domain expert. Machine-learning-based processes can add knowledge and keep it up-to-date based on what is learned from data.

In this respect, it is important to differentiate between data and knowledge. Data is values as provided by the environment. Knowledge is the interpretation of these values with respect to the semantics that are applied to give the data its meaning. Data and information models categorize data objects. Analytics creates further knowledge from multiple data elements and the domain context. A knowledge base preserves this knowledge for reasoning. When facing continuously changing data, a swarm of specialized intelligent agents can keep the knowledge up-to-date.
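The data-versus-knowledge distinction can be illustrated with a minimal sketch, in which a specialized agent interprets a raw measurement and asserts its symbolic meaning (the threshold and all names are assumptions for the example):

```python
# Raw data: values as delivered by the environment.
measurement = {"user": "Adam", "packet_loss_pct": 3.2}

# Knowledge: the interpretation of those values in the domain context.
knowledge_base = set()

def interpret(sample, kb):
    """A specialized agent that turns a numeric sample into symbolic knowledge,
    applying domain semantics (here: packet loss above 1% means degraded quality)."""
    quality = "degraded" if sample["packet_loss_pct"] > 1.0 else "good"
    kb.add((sample["user"], "experiences_quality", quality))

interpret(measurement, knowledge_base)
```

In a live system, a swarm of such agents would run continuously, so the knowledge base tracks the changing data.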

In machine learning, the learned model is the knowledge, and training examples are the main source. Domain experts are involved in selecting variables and data sources, and in configuring the learning processes according to use-case goals and constraints. The success of learning – and consequently, the performance of a learning-based intelligent agent – mainly depends on the availability and quality of training data.

Reinforcement learning is a variant of machine learning that learns from a set of rules and a simulation of the environment. Therefore, it does not necessarily depend on example data. However, the learned model is also not based on experience.

The manual design of knowledge by domain experts remains a major source of knowledge for machine reasoning. The domain experts create a stable core framework of asserted terminology and concepts. Based on this, they express their domain expertise by asserting further concepts and inference rules. They also design the applications that assess data sources and automatically assert knowledge. This requires staff to be well trained in knowledge management, with efficient processes and tools for knowledge life-cycle management. A well-designed meta model establishes a standard for consistent knowledge representation. Any knowledge management competence gap can usually be filled by knowledge engineers, who listen to the domain experts and transfer their knowledge into a model.

A major task in modeling is assembling a knowledge base according to use-case requirements. Ontologies can integrate and interconnect any formally defined model, allowing extensive reuse. For example, data and information models used in application programming interface design constitute a foundation for asserting data objects. eTOM [4] and SID [5] are industry-standard models contributing common telecommunication terminology. TOVE [6] [7] or Enterprise Ontology [8] can cover business concepts. They were used in the business analytics orchestration example [9] (see page 9) for interpreting business-level questions.

An important part of the knowledge of autonomous intelligent agents is their goals. The domain expert uses goals to tell the intelligent agent what it is supposed to accomplish. Ideally, they are formulated as abstract business-level goals derived directly from the business strategy of the organization. This requires broad knowledge and adaptability to be built into the intelligent agents, but it promises a high level of autonomy.

Proof of concept

#1: Intelligent digital assistant

The intelligent digital assistant (see Figure 3) is designed to assist field technicians who service base stations [10]. The technician interacts with the assistant through a mobile device. The assistant uses augmented reality to derive the base station type, configuration and state through object detection and visible light communication. For example, it can read the status LED of the device. The assistant provides instructions and visual guidance to the technician during maintenance operations. It downloads contextual data about the site and, through a question-and-answer dialogue, requests any additional information that could not be retrieved automatically.

The intelligent digital assistant is currently a proof of concept implemented by Ericsson Research. We have implemented and deployed the machine-reasoning system on backend servers. The system collects sensed input, analyses symptoms and presents corresponding maintenance procedures as a proposed series of actions. Domain experts have manually designed the procedural knowledge for problem resolution. Additionally, a document crawler automatically reads operational documentation, which allows the assistant to present documents that are relevant for the current tasks to the technician for reference.

Figure 3: Intelligent digital assistant

#2: Business analytics orchestration

The business analytics orchestration use case (see Figure 4) was implemented at Ericsson as a proof of concept within a master thesis project [9]. It demonstrates how the abstract level of business concepts can be linked with the technical level of data-driven analytics, so that intelligent agents can operate across the levels. The use case starts with a business question that can be solved through analytics. An intelligent agent acts as a business consultant, providing analytics-based assistance to a user. It analyzes the question, plans the needed analytics and orchestrates the execution of suitable analytics applications. When the results are available, the intelligent agent reasons about their meaning in the context of the question and explains the answer to the user.

The inference is based on a knowledge base that contains a combination of a business concept ontology and abstract service descriptions of analytics applications. It was built using existing and freely available business ontologies combined with manually-designed knowledge.

Figure 4: Business consulting through analytics

Machine learning and machine reasoning hybrid solutions

Good decisions and plans are often based on understanding multiple domains. For example, experts in network operation know about network incidents and the appropriate procedures to solve them. They can analyze technical root causes and apply corrective and preventive actions. The same experts usually also know some facts about the broader business environment. Knowing about financial goals and Service Level Agreements helps them to prioritize tasks. By understanding the application domain of a device or the concerns of a user, they can customize the solution. They might also know about marketing efforts or products in development and proactively provide consulting. All this knowledge allows an expert to make the right decisions. For intelligent agents, it is a challenge to operate with the same amount of diverse knowledge and to provide an equally diverse range of actions.

The role of machine reasoning

The knowledge used in machine reasoning is pure data decoupled from the implementation of the inference engine. Changes in behavior and extensions of scope must therefore be reached by changing the model data rather than the implementation of the intelligent agent. Therefore, machine-reasoning models are well suited to integrating ontologies and inference rules from multiple domains, if formal and semantic consistency is preserved.

Ideally, a layer of core concepts and terminology common to all domains should be used to anchor domain-specific models. This allows inference engines to traverse across domain borders and draw conclusions from all constituent domain models. If the models from different domains already use similar concepts, but define them differently, a “glue” model can relate them by introducing knowledge about the differences.
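In the simplest case, such a glue model is a set of equivalence assertions between the two domains' terms. A sketch with invented facts, in which a business-domain reasoner needs network-domain knowledge expressed in its own terminology:

```python
# Domain A (network operations) and domain B (business) describe the same
# concept with different predicates.
network_facts = {("cell_17", "utilization", "high")}
business_facts = {("region_north", "contains", "cell_17")}

# Glue knowledge: asserts which predicates of domain A correspond to which
# predicates of domain B, so inference can cross the domain border.
glue = {"utilization": "load"}  # domain B calls "utilization" "load"

def translate(facts, mapping):
    """Rewrite predicates of one domain into the other domain's terminology."""
    return {(s, mapping.get(p, p), o) for (s, p, o) in facts}

# A combined knowledge base an inference engine can traverse end to end.
combined = translate(network_facts, glue) | business_facts
```

Real glue models also capture structural differences, not just renamings, but the principle of asserting knowledge about the differences is the same.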

The drawbacks of the multi-domain knowledge base described here are the complexity of maintaining model consistency and the degraded performance of inference generation as the number of knowledge elements to process grows.

The role of machine learning

In machine learning, each additional domain contributes yet another set of variables adding further numerical dimensions to the model. This introduces challenges such as the need for training examples that contain consolidated data samples from all domains. There is also an increase in the number of data points required to reach acceptable statistical characteristics. The combination of more dimensions and higher data volume increases the processing cost. Furthermore, each change in scope requires a full life-cycle loop including data selection, implementation, deployment and learning until a new model is available for productive use.

Considering these challenges, machine-learned models are best suited to be specialists in confined tasks. A secondary layer of models can then build on the specialist insights and evaluate them in a broader context. This second tier operates on a higher level of abstraction with concepts from multiple domains. However, training examples at this level are broad in scope and therefore tend to be hard to obtain. Domain expert knowledge, on the other hand, is available at this level, which keeps machine reasoning feasible. In general, machine learning excels at inference that results from processing large amounts of data, while machine reasoning works very well in drawing conclusions from broad, abstract knowledge.

Hybrid solutions

The result is an environment composed of orchestrated or choreographed intelligent agents. Coordination and collaboration take place through the knowledge. A machine-learned model can contribute its findings through asynchronous assertion. A mapping application is designed to monitor the numeric output of a machine-learned model or to analyze the learned numeric model itself. When new output is generated, or a new version of the model becomes available, the mapping application interprets it in the domain context, determines its meaning and generates a corresponding symbolic representation. This constitutes new knowledge that is asserted into the knowledge base.

Alternatively, an application incorporating a machine-learned model can be linked directly into the knowledge base acting as a proxy for a knowledge object. A reasoning process would call the linked application when the respective knowledge is needed. The application generates a reply based on all currently available data.
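Such a knowledge proxy can be sketched as a knowledge-base entry backed by a callable. All names are hypothetical, and `predict_sli` is a stand-in for a real learned model evaluated on current data:

```python
def predict_sli(user):
    """Stand-in for a machine-learned model applied to the latest data."""
    latest_qos = {"Adam": 0.4}  # assumed current QoS measurements
    return round(10 * latest_qos.get(user, 0.9))

# The knowledge base mixes static facts with a proxy for learned knowledge.
knowledge_base = {
    ("Adam", "is_a"): "user",
    ("Adam", "sli"): lambda: predict_sli("Adam"),  # knowledge proxy
}

def lookup(subject, predicate):
    """Reasoning calls the linked application only when the knowledge is needed."""
    value = knowledge_base[(subject, predicate)]
    return value() if callable(value) else value
```

The reasoning process that calls `lookup` always sees knowledge derived from the latest data, at the cost of waiting for the model evaluation.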

Both methods create a hybrid of machine learning and machine reasoning that enables dynamic adaptation of the reasoning results based on learning and the latest data. Asynchronous assertion acts like a domain expert continuously updating knowledge. A knowledge proxy application synchronously generates knowledge on demand. However, this comes at the cost of delaying the reasoning process.

Symbolic neural networks

Symbolic neural networks specialize in learning about the relationships between entities. They implicitly abstract from an underlying statistical model, which allows them to answer abstract questions directly. One example is image processing that combines multiple machine-learned models: one model identifies the objects seen, another learns about the relationships between the objects, and a third has learned to interpret the questions asked. Due to the implicit abstraction and the use of symbolic representation, the insights generated by these models integrate seamlessly into a knowledge base and further reasoning. However, obtaining reference data for learning is a challenge in this scenario and usually depends on human experts creating samples. As this setup has machine learning at its core, it also does not scale well to a high number of concerns and variables. Nevertheless, it can find and contribute knowledge about new relationships that were hitherto unknown to experts.

Tiered implementation

The tiered implementation approach uses machine learning on the layer of specialist models and machine reasoning for consolidation across domains. This assignment of roles reflects the strengths of the two technology families, although a different selection is possible depending on the use case and environment. For example, machine learning may be successfully applied for cross-domain consolidation if training data is available. And machine reasoning can implement a specialist intelligent agent, for example, if it incorporates the manually-designed rules of a human domain expert.


Intelligent agents with the ability to work collaboratively present the best opportunity for network operators and digital service providers to create the extensively automated environment that their businesses will require in the near future. Cognitive technologies – and in particular a combined use of machine reasoning and machine learning – provide the technological foundation for developing the kind of intelligent agents that will make this flexible, autonomous environment a reality. These agents will have a detailed semantic understanding of the world and their own individual contexts, as well as being able to learn from diverse inputs, and share or transfer experience between contexts. In short, they are capable of dynamically adapting their actions to a broad range of domains and goals.


Jörg Niemöller

is an analytics and customer experience expert in solution area OSS. He joined Ericsson in 1998 and spent several years at Ericsson Research, where he gained experience of machine-reasoning technologies and developed an understanding of their business relevance. He is currently driving the introduction of these technologies into Ericsson’s portfolio of Operations Support Systems / Business Support Systems solutions. Niemöller holds a degree in electrical engineering from TU Dortmund University in Germany and a Ph.D. in computer science from Tilburg University in the Netherlands.

Leonid Mokrushin

is a senior specialist in cognitive technologies at Ericsson Research. His current focus is on investigating new opportunities within artificial intelligence in the context of industrial and telco use cases. He joined Ericsson Research in 2007 after postgraduate studies at Uppsala University, Sweden, with a background in real-time systems. He received an M.Sc. in software engineering from Peter the Great St. Petersburg Polytechnic University, Russia, in 2001.


  1. Silver, D.L., On Common Ground: Neural-Symbolic Integration and Lifelong Machine Learning (research paper), Acadia University
  2. Niemöller, J; Sarmonikas, G; Washington, N, Generating actionable insights from customer experience awareness, Ericsson Technology Review, September 30, 2016
  3. Niemöller, J; Washington, N, Subjective perception scoring: psychological interpretation of network usage metrics in order to predict user satisfaction, Annals of Telecommunications, Volume 72, Issue 7-8, pp. 431-441, 2017
  4. TM Forum, GB921 Business Process Framework (eTOM), R17.0.1
  5. TM Forum, GB922 Information Framework (SID), Release 17.05.1
  6. Fox, M.S., The TOVE project towards a common-sense model of the enterprise, in Industrial and Engineering Applications of Artificial Intelligence and Expert Systems, Springer-Verlag, Berlin, pp. 25-34, 1992
  7. University of Toronto, TOVE Ontologies
  8. King, M; Moralee, S; Uschold, M; Zorgios, Y, The Enterprise Ontology, The Knowledge Engineering Review, Vol. 13, Issue 1, pp. 31-89, Cambridge University Press, March 1998
  9. Alhinnawi, B., Mediating Insights for Business Needs: A Semantic Approach to Analytics Orchestration (master's thesis), Tilburg University, June 2016
  10. Carlsson, S., Applying machine intelligence to network management, Ericsson Mobility Report, 2018