Four benefits of AI for security, safety and transparency in telecom
With its intelligence, efficiency and unique automation capabilities, AI has been driving evolutionary change in industries across the globe. The telecom industry is no stranger to this technology. At Ericsson, we’ve been working with and developing AI-powered technology for decades to help manage and automate complex network data, predict patterns and issues, and boost network performance. In recent years, use of AI in the wider telecom industry has increased dramatically, transforming use cases, products and services alike. And as advances in large language models (LLMs) turned AI into a household topic, generative AI use cases for transforming telecom grew in parallel, adding their weight to the list of benefits AI can offer.
But as this evolution progresses, we must also acknowledge that the advantages aren’t always on our side. AI, though offering significant benefits, also brings with it a new set of challenges for telecom networks.
As addressed in our recent blog post, the first in a new series on 5G security, our industry is facing an increasingly complex and evolving threat landscape. As 5G progresses and our technology shifts toward cloud and edge computing, our ecosystem also becomes more intertwined with that of the IT sector, opening up new opportunities for threat actors to find – and exploit – vulnerabilities in our networks and systems.
The rise of generative AI and LLMs has also given these adversaries powerful tools with which to identify vulnerabilities and execute rapid attacks on telecom systems, including the AI components designed to secure them. In fact, according to a recent report by Sapio Research and Deep Instinct, 85 percent of the surveyed security professionals attributed the recent rise in attacks to bad actors using generative AI. Defending against and responding to these threats in a complex and dynamic environment requires high-tech solutions and automation – the kind that can only be realized with the power of AI.
There are also challenges around trustworthiness and responsibility. AI can create uncertainty, particularly when results are produced by probabilistic, opaque models and there is not enough transparency or explainability around those results or how the models arrive at them.
It’s paramount that we ensure things are done right at every step of the way. From the selection and review of the data used for training, to the development and testing of the algorithms themselves, to the data they will have access to and the decisions and actions they take, we must maintain human agency and responsibility across the entire process. Fortunately, this is possible by introducing transparency and explainability at every step, a level of understanding enabled by AI insights and explanations. Our first step must always be to build the foundations for trust in AI, as explored in our whitepaper on trustworthy AI.
To help provide a clearer understanding of this situation, we’ll explore four of the key areas in which AI can be leveraged to help strengthen security and build trust when it comes to telecom – protecting our networks, our information and even our lives.
1. AI as the countermeasure to threats and fraud
Adaptable threat prevention and detection
Leveraging the big data processing power and predictive capabilities of AI and machine learning (ML), we’re now able to detect and prevent various types of telecom fraud more easily. When trained on large quantities of historical network data, these algorithms learn to recognize ‘usual’ behavior, even amid complexities like changing traffic, network topology and other dynamic factors. Based on this knowledge, the AI model can then be used to effectively detect anomalous behavior, and with it, potential threats.
AI-driven tools can even adapt to new threats, enhancing detection as they learn. Through this adaptability, AI-powered systems can distinguish between legitimate traffic and potential threats with greater accuracy, reducing false positives. AI algorithms, capable of analyzing vast data sets, can also predict potential future attacks by analyzing patterns in network traffic. Since AI-driven systems can understand the behavior of network entities and identify anomalies, they can manage large and expanding networks without a substantial increase in manual supervision.
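As a minimal sketch of the idea (the KPIs, values and model choice below are illustrative assumptions, not a production setup), an unsupervised model can be fitted on historical ‘usual’ traffic and then score new observations for anomalies:

```python
# Minimal sketch: unsupervised anomaly detection on network traffic features.
# Feature names and data are illustrative; a real deployment would use far
# richer, operator-specific KPIs and continuous retraining.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(42)

# Historical "usual" behaviour: e.g. per-cell counters such as
# [session_count, uplink_mbps, downlink_mbps, signalling_msgs_per_s]
historical = rng.normal(loc=[200, 50, 120, 30], scale=[20, 5, 12, 3], size=(5000, 4))

# Fit the model on normal historical data only.
detector = IsolationForest(contamination=0.01, random_state=0)
detector.fit(historical)

# New observations: one normal sample and one suspicious spike in signalling load.
new_samples = np.array([
    [205, 51, 118, 31],    # looks like usual behaviour
    [210, 49, 121, 400],   # signalling storm, possible attack
])
labels = detector.predict(new_samples)        # +1 = normal, -1 = anomaly
scores = detector.score_samples(new_samples)  # lower = more anomalous

for sample, label, score in zip(new_samples, labels, scores):
    status = "anomaly" if label == -1 else "normal"
    print(f"{sample} -> {status} (score={score:.3f})")
```

In practice the model would be retrained continuously as traffic patterns shift, which is what gives this approach its adaptability.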
AI for advanced vulnerability prediction
Software security is also a vital concern as digital transformation progresses. Software vulnerabilities can, more than ever, pose risks to critical infrastructure and systems. At Ericsson, we have demonstrated the use of machine learning as a software assurance method, predicting vulnerabilities in software prior to release through intelligent analysis of the source code.
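The details of that work aren’t covered here, but as a rough, hypothetical sketch of how ML-based vulnerability prediction can be framed: a classifier is trained on code labelled as vulnerable or clean in past releases, and then scores new code before release. The snippets, features and model below are illustrative only.

```python
# Illustrative sketch of ML-based vulnerability prediction: a classifier over
# simple source-code token features. Real approaches use much richer features
# (ASTs, code metrics, commit history) and far larger labelled corpora.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Tiny, made-up training set: code snippets labelled 1 (vulnerable) or 0 (clean).
snippets = [
    "strcpy(dest, user_input);",                       # classic unsafe copy
    "gets(buffer);",                                    # unbounded read
    "strncpy(dest, user_input, sizeof(dest) - 1);",     # bounded copy
    "snprintf(buffer, sizeof(buffer), \"%s\", name);",  # bounded format
]
labels = [1, 1, 0, 0]

model = make_pipeline(
    TfidfVectorizer(analyzer="word", token_pattern=r"[A-Za-z_]+"),
    LogisticRegression(),
)
model.fit(snippets, labels)

# Score unseen code before release; a high probability flags it for review.
candidate = "strcpy(path, argv[1]);"
risk = model.predict_proba([candidate])[0][1]
print(f"Predicted vulnerability risk: {risk:.2f}")
```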
Stopping scammers in their (digital) tracks
As covered in this earlier post from 2020 on AI and security in mobile networks, anomaly detection is a well-established area where AI has been used to significant advantage for some time, helping counter fraud against user equipment or infrastructure, such as false base stations. Other fraud prevention use cases include SIM card cloning, international revenue share fraud and bypass fraud.
Preventing phishing and spam calls is another important use case where AI takes defense to a whole new level, using analysis of network data and transmitted information to detect and block phishing attempts and spam calls, protecting users from potential scams and malicious actors. While filtering emails to detect likely spam or phishing has previously been relatively straightforward, these social engineering threats are becoming more sophisticated. With generative LLMs, cybercriminals now need limited technical expertise or even language skills to conduct effective high-volume, rapid attacks.
By analyzing data, AI can automate the monitoring for suspicious behavior, such as a device sending a high volume of SMS texts to unfamiliar numbers, or texts containing suspect URLs that may lead to fraudulent websites. By detecting these anomalies, AI can flag potential threats, enabling further investigation to determine their nature.
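A highly simplified sketch of that kind of monitoring might look as follows; the record format, thresholds and URL heuristic are invented purely for illustration:

```python
# Simplified sketch of automated SMS-fraud monitoring. The record format,
# thresholds and URL heuristic below are hypothetical illustrations only.
import re
from collections import defaultdict

SUSPECT_URL = re.compile(r"https?://\S*(?:\.xyz|\.top|bit\.ly)\S*", re.IGNORECASE)
MAX_DISTINCT_RECIPIENTS = 50  # flag devices messaging unusually many numbers

def flag_suspicious_devices(sms_records):
    """sms_records: iterable of dicts like
    {"device_id": "...", "recipient": "...", "text": "..."}."""
    recipients_per_device = defaultdict(set)
    flagged = set()
    for rec in sms_records:
        recipients_per_device[rec["device_id"]].add(rec["recipient"])
        if SUSPECT_URL.search(rec["text"]):
            flagged.add((rec["device_id"], "suspect URL in message"))
    for device, recipients in recipients_per_device.items():
        if len(recipients) > MAX_DISTINCT_RECIPIENTS:
            flagged.add((device, f"messaged {len(recipients)} distinct numbers"))
    return flagged

# Example: one device blasting a phishing link to many numbers gets flagged.
records = [{"device_id": "dev-1", "recipient": f"+4670000{i:04d}",
            "text": "Your parcel is held, pay at http://track-parcel.xyz/now"}
           for i in range(60)]
for device, reason in sorted(flag_suspicious_devices(records)):
    print(device, "->", reason)
```

Flags like these would typically feed a further investigation step rather than trigger automatic blocking on their own.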
The need for speed in threat response
One of the key benefits worth highlighting separately in the area of threat detection is the simple matter of urgency. To combat the advanced threats posed by today’s cybercriminals, we must employ AI solutions – systems that can adapt, learn, and respond to emerging threats faster than traditional security measures. With adversaries leveraging AI for rapid attacks, our defense mechanisms must be equally agile. AI and ML-driven solutions and network security automation offer the capability to monitor for potential threats (even ones never seen before) and respond in real-time, neutralizing threats as they emerge – not after, once the damage is done.
2. Physical security: AI-powered safety saving lives, energy and costs
AI’s ability to analyze images or footage to enable remote monitoring and maintenance use cases for equipment and physical infrastructure does more than add convenience and protect valuable equipment. It also reduces safety risks for technicians and other staff.
As we mentioned earlier in this series in our post on sustainability and energy consumption, using AI predictions based on real network data to simulate a Virtual Drive Test can not only reduce carbon emissions but also remove the need for physical driving, reducing risks for the test drivers who would otherwise spend hours on the road. Similarly, if technicians or other staff can monitor or engage with a site remotely or via AI-powered simulations, the safety risks of traveling to sites or climbing towers can be drastically reduced or removed altogether.
A similar scenario centers on predictive maintenance. With AI, technicians can predict system failures or vulnerabilities, allowing telecom operators to address issues before they become critical threats, either to their infrastructure or to the safety of their personnel.
For example, at Ericsson we recently developed an AI-powered tool to help technicians identify potential maintenance issues using devices such as drones, or even their mobile phones. Technicians capture images of the equipment, radio units or cables for example, and the software then analyzes the images to identify potential issues or hazards and recommend preventive maintenance actions. This improves resilience and performance, while also helping prevent physical incidents caused by unsafe or poorly maintained equipment and reducing the need for tower climbs.
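To give a sense of how such image-based inspection can work in practice, here is a minimal, hypothetical sketch of the inference step: a model assumed to have been fine-tuned on labelled inspection photos classifies a new image. The class labels and file paths are placeholders, not Ericsson’s actual tool.

```python
# Hypothetical sketch of image-based site inspection: classify a photo of a
# radio unit with a fine-tuned model. The class labels and file paths
# ("site_inspection_model.pt", "radio_unit_photo.jpg") are placeholders.
import torch
from torchvision import models, transforms
from PIL import Image

CLASSES = ["ok", "corrosion", "damaged_cable"]  # illustrative defect categories

preprocess = transforms.Compose([
    transforms.Resize(256),
    transforms.CenterCrop(224),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
])

# A ResNet backbone with a small classification head, assumed to have been
# fine-tuned on labelled inspection photos beforehand.
model = models.resnet18(weights=None, num_classes=len(CLASSES))
model.load_state_dict(torch.load("site_inspection_model.pt"))
model.eval()

image = Image.open("radio_unit_photo.jpg").convert("RGB")
batch = preprocess(image).unsqueeze(0)
with torch.no_grad():
    probs = torch.softmax(model(batch), dim=1)[0]

for name, p in zip(CLASSES, probs.tolist()):
    print(f"{name}: {p:.2f}")
```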
AI is also being used to modernize safety for field workers. Through a mobile app that brings together AI, computer vision and IoT technologies, field crews and managers can benefit from improved safety and compliance. For example, the app can validate that a worker is wearing the right protective equipment, such as a safety helmet, vest, work gloves and boots, before undertaking a task. It can also assist a crew through live weather updates or, in the case of an accident, even help identify a nearby trauma center by automating the Emergency Action Plan (EAP).
With Ericsson Safe Work, field workers can benefit from modernized safety measures through an AI-powered mobile app, checking protective equipment or identifying nearby medical centers.
3. Protecting privacy and sensitive information
There is a great deal of fear in people’s minds when it comes to the adoption of AI, and it’s not hard to understand why, given how science fiction has long represented this technology: as an unsettling potential threat to our autonomy and human rights. Overcoming this fear, and building trust at every stage, is potentially the biggest challenge facing AI. This is of particular importance when AI has responsibilities requiring the use of our data or the management of security measures, as is the case in our increasingly digitalized society.
Read our AI ethics report
What does it mean to fully trust a technology? Read more about how fast-growing tech needs to align with ethical principles if it’s to be embraced by society.
Fortunately, a great deal of work has already been done around AI and privacy, and in how to develop trustworthy AI. The protection of sensitive or personal information is a key part of this, particularly because, if appropriate protections are not put in place, AI systems may be able to identify and expose an otherwise anonymous individual or piece of information, even accidentally.
4. Transparency and the Chain of Trust
Trust in telecom networks can be thought of as a chain. If we understand and trust the AI and ML components (including the input data they are trained on or work with, and how they are managed), we can trust the outputs and security measures they implement, leading to overall trustworthiness in the network. But a single break in this chain can compromise the entire system.
Validating correctness: understanding what’s in the black box
The ‘black box’ stage is the part of the process where the complex models are applied and decision-making occurs. Here, the challenge is to understand and explain what is actually going on inside that black box. Why did the AI model make a certain decision? How did it arrive at that decision? We also need to identify and assess any new or unexpected behavior, especially when a model is deployed from a simulated environment to a testbed or live network.
This is where Explainable AI comes into play. Firstly, Explainable AI can explain, using natural language, the actions and decisions being taken behind the scenes in an AI system, enabling non-expert users (external customers or internal users) to ask questions and better understand policies or actions, a key factor when it comes to transparency and customer trust. Secondly, Explainable AI methods, such as our own Both Ends Explainability for Reinforcement Learning (BEERL), can provide detailed internal features and explanations that are very useful when developing and testing the correctness of AI models, ensuring the accuracy and accountability of the models themselves.
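To illustrate the general idea of per-decision explanations (this is a generic sketch, not the BEERL method itself; the feature names and data are invented), one simple approach is to measure how much a model’s output changes when each input is replaced by a typical value:

```python
# Generic illustration of per-decision explainability (not Ericsson's BEERL
# method): measure how much the model's output changes when each input
# feature is replaced by its typical value. Feature names are invented.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
feature_names = ["latency_ms", "packet_loss_pct", "cpu_load_pct", "active_users"]

# Synthetic training data: label 1 ("at risk") when latency and loss are high.
X = rng.uniform(low=[5, 0, 10, 100], high=[80, 5, 95, 5000], size=(2000, 4))
y = ((X[:, 0] > 50) & (X[:, 1] > 2)).astype(int)

model = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)

# Explain one decision: replace each feature with its training mean and see
# how much the predicted risk drops (a crude local attribution).
sample = np.array([[65.0, 3.5, 40.0, 1200.0]])
baseline = model.predict_proba(sample)[0, 1]
print(f"Predicted risk: {baseline:.2f}")
for i, name in enumerate(feature_names):
    perturbed = sample.copy()
    perturbed[0, i] = X[:, i].mean()
    delta = baseline - model.predict_proba(perturbed)[0, 1]
    print(f"  contribution of {name}: {delta:+.2f}")
```

Attributions like these can then be translated into natural-language explanations for non-expert users, which is the transparency point made above.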
Enabling automation – sharing predictions for proactive action
Explainable AI also supports the automation of telecom use cases and solving problems proactively, before they arise rather than after they occur. One such example is 5G network slice assurance: guaranteeing that a network slice will meet all the quality-of-service requirements in its Service Level Agreement (SLA) throughout its lifecycle. If the SLA is violated, a penalty may need to be paid. AI, and Explainable AI, can help by predicting and identifying problems in advance. The identified problem can then be shared with other AI-based modules, which provide suitable recommendations for action to resolve the potential violation before it occurs. Further details are available in our white paper on Explainable AI.
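As a simplified illustration of this proactive pattern (the KPI values, SLA threshold and recommendation below are invented for the example), a predictor can forecast a slice KPI trend and raise a recommendation before the SLA bound is crossed:

```python
# Illustrative sketch of proactive slice assurance: forecast a KPI trend and
# raise a recommendation before the SLA threshold is crossed. Thresholds,
# KPI values and the recommendation logic are invented for illustration.
import numpy as np

SLA_MAX_LATENCY_MS = 20.0  # hypothetical SLA bound for the slice

# Recent latency measurements for one slice, sampled every minute.
minutes = np.arange(10)
latency_ms = np.array([12.1, 12.4, 13.0, 13.2, 13.9, 14.5, 15.1, 15.8, 16.3, 17.0])

# Fit a simple linear trend and extrapolate 15 minutes ahead.
slope, intercept = np.polyfit(minutes, latency_ms, deg=1)
horizon = np.arange(10, 25)
forecast = slope * horizon + intercept

breach = horizon[forecast > SLA_MAX_LATENCY_MS]
if breach.size:
    print(f"Predicted SLA breach in ~{int(breach[0] - minutes[-1])} minutes")
    # The prediction, together with its explanation (e.g. "latency trending
    # up"), could be handed to another AI-based module that recommends an
    # action such as scaling out the slice's resources.
    print("Recommendation: scale out user-plane resources for this slice")
else:
    print("No SLA breach predicted within the horizon")
```

A real assurance loop would of course use far richer forecasting models, but the pattern is the same: predict, explain, then hand the insight to the module that acts on it.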
Moving forward in a fast-changing environment
Finally, governance has a major role to play in building trustworthy systems. Perhaps accelerated by the cultural fear around AI we mentioned earlier, the development of AI regulation does seem to be progressing rapidly, faster than for any other recent technological advance. Regulations like the EU Artificial Intelligence Act have already been introduced, followed very recently by the U.S. Executive Order on the safe, secure and trustworthy development of AI and the Canadian Artificial Intelligence and Data Act.
We will, of course, continue to monitor regulatory developments closely as they evolve and implement any required compliance measures as part of our normal development processes. We hope to see equally rapid progress in standardization efforts around trustworthy AI, as industry alignment and participation will be vital to guide this technology in the right direction and ensure we can develop responsible, trustworthy AI systems and solutions (while preventing their misuse).
Benefits of AI in Networks blog series
We hope you enjoyed this episode on how AI is transforming telecom security – you can read more of our ‘Benefits of AI in Networks’ blog series here, or sign up below to be notified of future posts.
Sign up for our Benefits of AI in Networks blog series
Don't miss out - sign up today and be notified of each episode as it is released.
Learn more
Read more about our ‘Benefits of AI in Networks’ blog series.
Find out more about ethics and trust in AI, what it means for telecom, and the European Commission “Ethics Guidelines for Trustworthy AI” that Ericsson has adopted, in our Trustworthy AI whitepaper.
Dive deeper into explainable AI and how humans can trust AI in this whitepaper.
Learn more about telecom security for a connected world
Explore telecom AI
Explore AI in networks