Artificial intelligence in telecom - from hype to reality

How do you feel about artificial intelligence (AI)? Are you like the 20 percent of students and professionals in one of our studies who believe AI will be able to replace them at their current job in this lifetime?


Research Director AI

Head of Consumer & Industrylab


Will artificial intelligence eventually turn every industry upside down? How will it shape our society? And perhaps most important - how can we know that AI won't reach self-awareness and strike out on its own, causing a future beyond human control? The fact is, some of this is hype, and some reality.

It's true, the recent advancements of narrow AI are mind-blowing: algorithms are beating humans in applications ranging from gaming to healthcare.

But however 'magical' these accomplishments may seem – especially when we look back at what we thought AI would be able to do just a couple of years ago – this is far from the reality of the everyday work we do at Ericsson.

For us, the 'magic' of AI is making it work for us to make our everyday lives better and more efficient as a result. It's all about creating real-time efficiencies in the here and now, not dreaming of far-reaching hypotheses in the future.

Implementing AI magic in telecom - here and now!

Now is the moment when AI goes from hype to reality. Already in our own industry we can see that AI is being embraced by service providers around the world. According to our research, more than half of service providers expect to have adopted AI within their networks by the end of 2020.

Some are working to an even shorter timescale and expect to have adopted AI by the end of this year. A further 19 percent are looking at an adoption timescale of within three to five years. 

At Ericsson, we aim high with our ambitions for AI while also operating in the here and now. Last year, the main trend of our CTO's five technology trends was Zero touch networks.

We hope to realize a future in which we have automated as much as possible in the networks and removed much of the dependency on humans in managing them. We visualize a future where networks become self-operating, self-optimizing and self-healing. Sounds like a dream, doesn't it?


In telecommunications we successfully use AI to optimize features in networks and predict failures in telecom sites. And, as with any technological innovation, the more we rely on technology the more vulnerable we become.
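To make the idea of predicting site failures concrete, here is a deliberately minimal sketch (the metric, numbers and threshold are invented for illustration, not Ericsson's actual method): flag any reading that drifts far from its recent history, a common first step before more sophisticated machine-learned models.

```python
from statistics import mean, stdev

def flag_anomalies(readings, window=24, threshold=3.0):
    """Flag indices whose reading deviates more than `threshold`
    standard deviations from the trailing `window` of readings."""
    anomalies = []
    for i in range(window, len(readings)):
        history = readings[i - window:i]
        mu, sigma = mean(history), stdev(history)
        if sigma > 0 and abs(readings[i] - mu) > threshold * sigma:
            anomalies.append(i)
    return anomalies

# Hypothetical hourly power-draw readings from a radio site (watts):
# stable around 500 W, with one simulated fault spike.
readings = [500 + (i % 5) for i in range(48)]
readings[40] = 900  # simulated fault
print(flag_anomalies(readings))  # → [40]
```

In a real deployment the "reading" could be any site telemetry stream, and the statistical rule would typically be replaced by a model trained on historical failure data.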

At first glance, telecom networks may not seem like the most critical part of our infrastructure. Today, a misbehaving network may drop your call or cancel your Instagram upload, and while this is of course annoying (especially if it's an important call or upload), the consequences can be far-reaching.

You may end up wanting to change your carrier, and that is a big problem for service providers. But tomorrow, networks will be an integral part of any mission-critical use case relying on connectivity, be it remote control of heavy machinery, autonomous drones or self-organizing logistics.

We are becoming more and more dependent on connectivity: the network is supposed to work and deliver in accordance with the high-level intent, whether it comes from consumers or from enterprises.

How to manage the machines - responsible AI by design

In the Ericsson AI Research Area, we divide technological innovations into two categories: automation and new value creation.

Automation means replacing human effort with machines. This opens up the possibility of reassigning people to activities that are not as easy to automate, giving us the ability to scale up the business.

Value creation is about unlocking new business value through innovations such as developing unique new features in 5G networks. Both automation and value creation are achievable with the help of AI. But AI is radically different from the way system features were traditionally programmed.

Traditional methods include a design phase, in which requirement specifications of the system are created. Normally, business requirements are translated into technical requirements, and after that the development phase can start. During development it is important that the system meets the functional requirements as well as the requirements on safety, privacy and security.

For us at Ericsson, a system that is "correct-by-design" has taken all these requirements into account. AI-based feature development is based on machine learning and machine reasoning and does not include the design and development phase as in traditional methodologies.

Still, we must be able to guarantee the correct behavior of the system, regardless of whether it is based on AI or not. Therefore, general requirements such as safety, privacy and security are just as important for AI-based systems. And since we are working with data, it is extra important to make sure that our algorithms work in a non-biased and explainable manner.

Ensuring non-bias

Building a network that serves billions of connections and automates countless handovers, all working flawlessly, is a daunting task. Moreover, complexity in telecommunications spans multiple levels. We need to adhere to global standards and still comply with local rules and legislation. We need to adhere to industry standards, yet every one of our customers is unique and, in turn, wants to provide a unique experience to their customers.

AI can potentially be used to improve and transform our industry on many levels. Design of products and services, life-cycle management, and operations, both remote and in the field, all benefit from AI algorithms, given the massive amounts of data and knowledge constantly being produced by telecom networks.

This data and knowledge are used to optimize system performance and make networks as sustainable and cost-efficient as possible. We need to make sure that the data we feed these systems is representative of what the system needs to learn – otherwise we make the system biased. There are, of course, ways to ensure this.
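One simple way to check representativeness, sketched below with entirely hypothetical categories and numbers: compare how often each category appears in the training data against its known share in the deployed network, and flag large gaps before training.

```python
from collections import Counter

def representation_gap(training_samples, population_shares):
    """Compare the share of each category in the training data with
    its known share in the deployed population; positive values mean
    the category is over-represented, negative means under-represented."""
    counts = Counter(training_samples)
    total = sum(counts.values())
    return {
        category: counts.get(category, 0) / total - share
        for category, share in population_shares.items()
    }

# Hypothetical: cell-site types in a training set vs the live network,
# which is split evenly between urban and rural sites.
training = ["urban"] * 80 + ["rural"] * 20
network_shares = {"urban": 0.5, "rural": 0.5}
gaps = representation_gap(training, network_shares)
print(gaps)  # urban over-represented by ~0.3, rural under-represented by ~0.3
```

A model trained on this set would see urban behavior four times as often as rural behavior, even though both are equally common in the field – exactly the kind of skew this check is meant to surface.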

Formulating the right intent and priorities

AI is an intent-driven technology. We set a goal for the AI to deliver on, and when it succeeds, we know we are achieving what we aim for. This implies that industries using networks should be able to communicate their high-level intent for a service and expect a quality of service from the network that complies with this intent.

These intents will be set by humans, and they will be industry-specific and carrier-specific. Furthermore, local legislation and regulations need to be taken into account when working with data and feeding the AI systems with new information. How we set these intents is therefore very important to ensure that the AI delivers what we want it to deliver.


A self-sustaining, intent-driven network implies that it will be smart enough to prioritize. Therefore, we will have to let AI systems make certain prioritizations, but the priorities themselves must first be set by humans. It's no secret, then, that they must be well thought through!

When resources are scarce and something needs to be compromised, we had better know our priorities! What's more, we have to be able to explain the priorities chosen by the AI algorithms operating in our networks, and we had better be ready to stand behind the decisions of the algorithm. Because, after all, we have decided how we want them to behave.
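As a toy illustration of human-set priorities under scarcity (the service names, demands and numbers are invented, not a real network policy): once humans have fixed an ordering, the system's behavior when capacity runs out becomes both predictable and explainable.

```python
def allocate(capacity, demands, priority_order):
    """Grant capacity to services in human-set priority order;
    lower-priority services receive whatever remains."""
    allocation = {}
    for service in priority_order:
        granted = min(demands[service], capacity)
        allocation[service] = granted
        capacity -= granted
    return allocation

# Hypothetical intent: emergency traffic before video before bulk transfer.
demands = {"emergency": 20, "video": 60, "bulk": 50}  # Mbps requested
order = ["emergency", "video", "bulk"]
print(allocate(100, demands, order))
# → {'emergency': 20, 'video': 60, 'bulk': 20}
```

Because the ordering is an explicit input rather than something the algorithm learned on its own, every allocation decision can be traced back to a priority a human chose – which is exactly the kind of accountability described above.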

Human-centric systems - Optimizing the human factor of AI in telecom

When we talk to consumers about their hopes and fears around AI, it is clear that they see the largest benefit in taking out the human factor. For example, a person having a bad day might mean an unfavorable decision for them. An AI would not have that bad day - and that is seen as an advantage.

However, it's a double-edged sword, because consumers also see the lack of human emotion as the largest weakness – they say another human can sympathize and go outside protocol in order to right a wrong, whereas a machine must stick to protocol.

One could go as far as to say that the dream for consumers would be a human-centric AI – one incorporating all the benefits of eliminating human error while still leaving the human in control of the decisions taken by AI.

Similarly, when it comes to network design, the dream is to take human error out of the equation without removing human control. As AI becomes widely implemented, we need to ask ourselves how we build and implement responsible AI.

So how do we ensure this? How can we protect ourselves from unwanted consequences of AI? There are several initiatives that aim at formulating general principles for how to design AI systems. The emergence of these principles is something which Ericsson follows closely.

Generally, these principles seek to ensure that systems relying on AI are free from unwanted and unintentional consequences for humans, companies, or society at large. With these principles in place, translated into different industry domains and then into technical requirements, we will be able to require that algorithms adhere to them.

In addition, apart from having AI systems behave responsibly, we need system design that allows for explainability to humans and the possibility of human intervention at any step. This part is integral to us at Ericsson. Although we love automation, we also want to be in control. The automation we design keeps the human in control, setting the boundary conditions for the system.


Zero-touch network operations imply that networks can operate without human intervention. Predictive operations will detect potential problems and take measures proactively. Even when it comes to equipment deployed in the field, preventive maintenance visits should be carried out with the help of robotics and drones, eliminating the need for humans to perform challenging and dangerous tasks such as climbing radio towers.

Yet zero-touch does not contradict the human-centricity of technology. Zero-touch is about removing the dependency on having a human in the loop. Human-centricity is about humans being in control: capable of dictating requirements that the system must comply with, and of intervening in automated decisions when needed.

Do consumers really want AI?

So, what about the average consumer? Do they even care about AI and how it will affect their everyday lives? Well, AI systems depend on the usage of data, and this means that consumers' willingness to share their data with various companies is a prerequisite for reaping the benefits of AI.

At Ericsson Consumer & Industry Lab, we have studied consumers' relationship to privacy in several studies over a number of years, and we see that there are three conditions which are important in order for consumers to be willing to share their data: Permissibility, Value and Control.

First of all, consumers want to be asked for permission before handing over their data.

Secondly, they need to feel they get substantial value in return for sharing their data, most usually in the form of improved products and services.

Thirdly, they want companies to be transparent about how they use their data, while still leaving them in control of it. They want to be able to delete their data if they feel that is appropriate, and they do not want companies to sell their data on to third parties beyond their control.

When companies use data responsibly, consumers tend to be positive toward companies using their data. In fact, in one of our studies, 56 percent of the interviewed consumers stated that they expect telecom service providers to anticipate their needs even before they know what those needs are.

Consumers are aware that just like any technology, AI may be used for good or bad. One everyday example of this is social media. Fifty-five percent of advanced internet users in one of our studies believe influential groups use social networks to broadcast their messages, and a similar number think politicians use social networks to spread propaganda.

On the other hand, half of the studied consumers in this study say AI would be useful to help check whether facts stated on social networks are true or false. The same number of respondents would also like to use AI to verify the truthfulness of what politicians say.

To us at Ericsson, this is nothing new. Technologies are in themselves neither good nor bad – how we humans choose to use them makes them so. AI is a new technology that emphasizes the responsibility to build truly human-centric systems: automating networks, for sure, but with the human in control, and with networks that are secure, privacy-preserving and responsible by design. We have worked with Technology for Good since 1876, and we intend to keep on doing so.

Find out more about how Ericsson is working with AI in the autonomous networks of the future.
