AI agents in the telecommunication network architecture

The evolution toward autonomous networks is rapidly gaining momentum, driven by the integration of AI agents, generative AI, and large language models. These technologies are poised to transform network operations, customer experience, and service delivery. This white paper explores how AI can be embedded into telecom architectures, with a focus on intent-driven management and real-world use cases. As we move toward 6G, AI will become a native component, enabling zero-touch, intelligent networks.

Introduction

The journey toward autonomous networks has long been underway but has only recently started accelerating significantly. AI agents, generative AI (GenAI), and large language models (LLMs) are predicted to become key components for enhancing network efficiency, customer service, and operational management due to their strong autonomous capabilities.

In this white paper, we define AI agents and provide an example of their use in the mobile network architecture, with the TM Forum-specified intent management architecture serving as the main case study. We also investigate other possible use cases, such as enabling mobile networks to optimize communication between agents used by subscribers and enterprises.

We will explore and discuss concepts and analyses introduced in previous white papers, including Defining AI native: A key enabler for advanced intelligent telecom networks [1], Intent-driven is a key step to autonomous networks [5], and Cognitive reasoning for 5G network lifecycle management [2].

AI agents: Definition and taxonomy

In response to the large number of diverse interpretations of the roles and tasks of AI agents in the network, we need to clarify what agents and AI agents are.

Agent

An agent is an autonomous system authorized to act, decide, and self-initiate tasks independently on behalf of a person or entity.

Guided by goals, an agent perceives its environment through mechanisms such as sensors, protocols, data streams, or interactions with other agents. It processes information using rules, programmed logic, or learned models to produce outputs, take actions, use tools, or even execute code to achieve its goals.

Agents can interact with their environment in runtime and can store and retrieve information over time, act individually, or collaborate through agent-to-agent communication. Their computational expressivity spans deterministic rule-based behavior to Turing-complete reasoning, enabling varied levels of adaptability, decision-making, and planning.
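The perceive-process-act cycle described above can be sketched in a few lines of code. This is a minimal illustration, not a reference design; the goal, policy, and environment interfaces are hypothetical placeholders.

```python
class Agent:
    """Minimal goal-driven agent: perceive, decide, act (illustrative sketch)."""

    def __init__(self, goal, policy):
        self.goal = goal          # goal assigned to the agent
        self.policy = policy      # rules, programmed logic, or a learned model
        self.memory = []          # stored observations, retrievable over time

    def perceive(self, observation):
        self.memory.append(observation)

    def decide(self):
        # The policy maps the goal and remembered state to an action.
        return self.policy(self.goal, self.memory)

    def act(self, environment):
        action = self.decide()
        return environment(action)


# Usage: a deterministic rule-based policy that requests scaling
# when the last observed load exceeds the goal threshold.
policy = lambda goal, mem: "scale_up" if mem and mem[-1] > goal else "hold"
agent = Agent(goal=0.8, policy=policy)
agent.perceive(0.95)
print(agent.act(lambda action: action))  # -> scale_up
```

Swapping the lambda for a learned model yields the AI agent subclass discussed next, without changing the surrounding loop.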

AI agents

AI agents, a subclass of agents, leverage machine learning to update their internal knowledge, sometimes referred to as memory, enabling dynamic adaptation to changing conditions.

They exist on a continuum ranging from restricted, bound by human-defined constraints, to unrestricted, capable of modifying their internal logic and goals. Though many roles and organizational schemes exist, such as orchestrator-, coordinator-, and executor-agents, this taxonomy focuses on classifying agents as AI or non-AI and restricted or unrestricted.

Figure 1. Taxonomy of AI agents

This way, we can define the boundary between restricted and unrestricted agents, letting us determine whether and where to allow different kinds of agents and how they should be reflected in the architecture. Although there can be all kinds of variations of these two agent types, at some point, a restricted agent becomes unrestricted in the following cases:

  • Modification of internal logic: Overriding human-programmed restrictions
  • Modification of goals: Lifting the boundaries of human-assigned goals
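The two boundary conditions above can be expressed as invariants that a restricted agent must never be able to break. The sketch below is one hypothetical way to model this; the constraint representation is an assumption, not part of any standard.

```python
class RestrictedAgent:
    """Sketch of a restricted agent whose constraints and goals are immutable
    (illustrative; the constraint model is an assumption, not a standard)."""

    def __init__(self, goals, forbidden_actions):
        self._goals = frozenset(goals)              # human-assigned goals
        self._forbidden = frozenset(forbidden_actions)  # human-programmed restrictions

    def propose_action(self, action):
        # A restricted agent may not override its human-programmed restrictions.
        if action in self._forbidden:
            raise PermissionError(f"action '{action}' violates constraints")
        return action

    def propose_goal(self, goal):
        # Nor may it lift the boundaries of its human-assigned goals.
        if goal not in self._goals:
            raise PermissionError(f"goal '{goal}' not human-assigned")
        return goal


agent = RestrictedAgent(goals={"optimize_latency"},
                        forbidden_actions={"modify_own_logic"})
agent.propose_goal("optimize_latency")       # allowed
# agent.propose_action("modify_own_logic")   # would raise PermissionError
```

An agent that can rewrite `_goals` or `_forbidden` at runtime has, by the definition above, crossed into unrestricted territory.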

A specific subtype of GenAI-based agents called a copilot is worth explicit mention here.

It is a restricted, LLM-based agent designed to work interactively with humans as a human-to-machine interface. Copilots assist and enhance human performance by leveraging LLMs’ advanced understanding of natural language.

The AI journey and the network transformation

The evolution of modern telecommunication networks, enriched by 5G and future 6G technologies, has brought increased complexity and a critical need for automating network operations. Without automation, the cost of traditional network operations could become unsustainable, so vendors and communications service providers (CSPs) must address this complexity. Automation addresses this need while also enabling a more versatile network that swiftly adjusts to evolving customer needs.

AI can drive automation by leveraging the right data and by utilizing our deep expertise in the most relevant network aspects. With these in place, AI can be embedded in the product portfolio to provide the greatest value across operational efficiency, customer experience, business growth, and sustainability.

The synergy between AI and networks will become even more critical in 6G, where AI will serve as a key native component, shaping the network architecture, capabilities, and services, enabling intent-based management with minimal human intervention, and achieving zero-touch operations.

At the same time, the emergence of new approaches such as GenAI and AI agents further amplifies these capabilities, showcasing potential applications across multiple domains, including:

  • Efficiency in network operations: GenAI can enable dynamic network policies and configurations, continuously monitor network traffic, detect anomalies, and automatically respond to potential threats or inefficiencies, reducing operational costs and reliance on human intervention.
  • Predictive capabilities: AI agents can analyze extensive network data to predict potential issues before they occur, such as congestion or hardware failures, and initiate countermeasures to enhance network reliability and uptime.
  • Optimization: GenAI can generate optimal routing paths and resource allocation strategies based on factors such as bandwidth demand and latency requirements, then implement these strategies at runtime, while adapting to changing network conditions.
  • Scalability and adaptability: AI agents can manage large-scale networks by dynamically responding to the constantly changing operational demands and conditions.

Despite the advantages, implementing AI agents in networks presents challenges, including integration with existing infrastructure, managing and securing large data volumes while ensuring privacy, and safeguarding systems against cyberattacks. In addition, ensuring robustness and trustworthiness requires rigorous evaluation, observability, and monitoring of agent behavior.

The above non-exhaustive list of potentials and challenges shows how complex an agentic system can be. We analyze some of these challenges later in this paper, while others are mentioned only briefly for the sake of completeness.

AI agents and network architecture

CSPs are transitioning to enhanced network operations and full automation of complex tasks by leveraging modern AI technologies and enabling AI and AI agentic techniques to be used in mobile networks. Current 5G mobile networks are ready for and capable of carrying end-user AI agent traffic, while the journey toward 6G will present more opportunities for CSPs to use AI agents within mobile networks as well as expose services for AI agents in the application space.

Given the definition of AI agents, how would they appear in the functional architecture as outlined by standardization bodies such as TM Forum (TMF), 3GPP, and Open Radio Access Network (O-RAN) Alliance? To answer that question, we need to analyze what AI agents may be used for, then assess if and how the functional architecture is impacted.

Since AI agents are a generic realization technology, they can be used for many different purposes. To make things more specific, we provide a couple of examples below on using AI agents in various network domains and show their potential impact on the functional architecture.

One aspect of an AI native architecture is intelligence everywhere [1]. AI functions may operate across every network domain, stack layer, or physical site—wherever it makes sense from a business and technical perspective. In this respect, AI agents are one variant of AI functionalities.

Intelligence everywhere implies that data and necessary computing resources are available where needed. This results in managing information effectively through a distributed infrastructure that is agnostic to the data type while allowing proper access to AI—and AI agent—workloads.

Networks are enriched with enablers, and there is a need to automate management, preferably in full. Humans would still be in control, but they would express requirements rather than instruct the system on what actions to take—an aspect called zero-touch.

AI agents are a new tool in the toolbox, which enables us to solve automation problems in a structured and simplified way. In that regard, we should focus on what AI agents can do rather than how they are implemented.

Agents for intent management functions

When networks are fully autonomous, control loops and decision-making do not require human involvement. An autonomous network can deploy, configure, maintain—as in monitor, optimize, and heal—and retire itself independently. A major step forward in the evolution toward autonomous networks is the introduction of intent-based operations [4] [5], where human interaction is limited to expressing the requirements the network has to meet through intents. Based on TMF terminology, this also corresponds to an increase in the autonomous network level [3].

TMF defines the autonomous network architecture [7] consisting of autonomous domains, with each implementing an intent management function (IMF) and exchanging intents with other autonomous domains. Each domain includes closed control loops and knowledge-centric domain intelligence.

IMFs can be considered agents because they can, among many things, observe, decide, and act autonomously and interact with other agents. In this respect, an IMF might employ and coordinate multiple other agents in its effort to meet its own intent requirements and implement optimal network performance based on those.
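The observe-decide-act behavior of an IMF over its intents could be sketched as follows. The class and method names are invented for illustration and do not reproduce the TMF intent APIs; they only show how an IMF might coordinate sub-agents toward intent fulfillment.

```python
class IntentManagementFunction:
    """Sketch of an IMF as an agent: it receives intents, observes the domain,
    delegates corrective work to sub-agents, and reports intent fulfillment
    (names are illustrative, not the TMF-specified interfaces)."""

    def __init__(self, sub_agents):
        self.sub_agents = sub_agents   # agents the IMF employs and coordinates
        self.intents = []

    def set_intent(self, requirement, target):
        # Intents express *what* is required, not *how* to achieve it.
        self.intents.append({"requirement": requirement, "target": target})

    def control_loop(self, observations):
        reports = []
        for intent in self.intents:
            value = observations.get(intent["requirement"])
            fulfilled = value is not None and value <= intent["target"]
            if not fulfilled:
                # Delegate corrective action to every coordinated sub-agent.
                for sub_agent in self.sub_agents:
                    sub_agent(intent)
            reports.append({"requirement": intent["requirement"],
                            "fulfilled": fulfilled})
        return reports


# Usage: a latency intent that is currently unmet triggers delegation.
delegated = []
imf = IntentManagementFunction(sub_agents=[delegated.append])
imf.set_intent("latency_ms", target=20)
print(imf.control_loop({"latency_ms": 35}))
# -> [{'requirement': 'latency_ms', 'fulfilled': False}]
```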

Figure 2. Zooming in on an intent management function (IMF)

We can map IMFs into the network architecture as shown in Figure 3. This mapping can be further detailed; work on this is already ongoing, including the standardization of IMF-to-IMF communication [12], [13], [14], [15]. Note that the placement of an IMF in a network function is one example of a possible allocation; further analysis is needed.

Since IMFs are autonomous components, they are suitable for AI agent implementation. If an IMF is implemented using AI agents, the agent-to-agent communication takes the form of functional interfaces as defined by TMF.

Figure 3. IMFs in the proposed 6G network architecture

Agents for process flow automation

Intents simplify management and the introduction of new services to the network, further realizing the vision of a fully autonomous system. However, there are many more aspects where humans are still involved, so reaching the zero-touch vision requires additional steps.

AI agents can automate traditionally human-led processes, such as service ordering between network operators and enterprise customers. Today, when such a customer orders a service from a network operator, a human customer sales representative on the operator’s side must perform a lengthy process, including querying a product or service catalogue, negotiating with the customer, and submitting the order to the order handling system.

Acting as a copilot, an AI agent can automate these steps, enabling tailored product offers, dynamic pricing strategies, and faster service delivery.

Agents for automated network management

The integration of AI agents into the network management layer will lead to significant intelligence-driven improvements in two main areas of future network management: operational efficiency and network provisioning.

Operational efficiency

AI agents can autonomously use network metrics to identify unusual patterns in telemetry data that may signal security threats or operational issues.
Taking a more proactive approach, AI agents that perform predictive analysis and initiate resolution actions based on historical network data will reduce failures and performance degradations, significantly cutting downtime and maintenance costs while ensuring better reliability and stability for future networks.
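As a deliberately simple illustration of detecting unusual patterns in telemetry, a z-score check over a sliding window can flag metric values that deviate sharply from recent history. The window size and threshold below are arbitrary assumptions; production detectors are typically model-based.

```python
from collections import deque
from statistics import mean, stdev


def make_anomaly_detector(window=30, threshold=3.0):
    """Flag telemetry samples more than `threshold` standard deviations away
    from the recent mean (a toy z-score check, not a production detector)."""
    history = deque(maxlen=window)

    def observe(value):
        anomalous = False
        if len(history) >= 5:  # need a few samples before judging
            mu, sigma = mean(history), stdev(history)
            if sigma > 0 and abs(value - mu) / sigma > threshold:
                anomalous = True
        history.append(value)
        return anomalous

    return observe


# Usage: steady traffic, then a sudden spike that should be flagged.
detect = make_anomaly_detector()
for sample in [100, 102, 99, 101, 100, 98, 101]:
    detect(sample)
print(detect(500))  # -> True
```

An AI agent would couple such a detection signal to automated resolution actions rather than merely raising an alarm.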

AI agents will simplify future network management operational tasks by automatically retrieving all needed contexts and reasoning about them, removing the manual context switching and analysis between the different interfaces network engineers deal with today.

Network provisioning

As mentioned previously, by being embedded in different management platforms that support complex analysis and decision-making across the entire management portfolio, AI agents will be key enablers of intent-based networking and operations in the network management layer.

For example, within platform services or applications operating on these platforms, such as rApps within a service management and orchestration (SMO) platform [11], agent-to-agent communication is facilitated through rApp-to-rApp communication methods as supported by O-RAN.

Another key contribution is related to network planning and slice orchestration. By analyzing usage trends and predicting future demands, telecommunication operation engineers and business partners will be able to allocate resources more effectively and plan for needed infrastructure upgrades or expansions.

One such example of this is dynamic network slicing, where the network is partitioned into virtual slices tailored to specific applications or user requirements. AI agents will manage these slices at runtime, ensuring optimal performance for diverse use cases.
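In a highly simplified form, runtime slice management of this kind amounts to reallocating capacity across slices according to predicted demand. The allocation policy below (a guaranteed minimum plus a demand-proportional share) is a hypothetical illustration, not a standardized slicing algorithm.

```python
def rebalance_slices(capacity, demands):
    """Split total capacity across slices in proportion to predicted demand,
    guaranteeing each slice a small minimum share (toy illustration)."""
    minimum = 0.05 * capacity                     # guaranteed floor per slice
    total_demand = sum(demands.values())
    remaining = capacity - minimum * len(demands)
    allocation = {}
    for slice_id, demand in demands.items():
        share = remaining * demand / total_demand if total_demand else 0
        allocation[slice_id] = round(minimum + share, 2)
    return allocation


# Usage: an eMBB slice predicted to need four times the URLLC demand.
print(rebalance_slices(100, {"embb": 80, "urllc": 20}))
# -> {'embb': 77.0, 'urllc': 23.0}
```

An AI agent would re-run such a rebalancing step continuously as demand predictions change, rather than once at planning time.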

Figure 4. AI agents in network management layers supporting advanced service and operation intelligence

Agents for the 3GPP core network

If AI agents are used as an implementation technique within network functions, then virtually no changes are required to the 3GPP core network architecture [8]. For example, AI agents could take an active role in the policy control function (PCF).

By default, when an operator sets up (declarative) policies, the PCF is authorized to make policy decisions autonomously. With the AI agent approach, the PCF becomes an agent acting on behalf of the network operator.

A similar approach may be used for other network functions, too. If communication between agents in two different network functions is necessary, it can be handled through functional additions to service-based interface (SBI) communication.
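To make the PCF example concrete, a declarative policy evaluation might look like the following. The rule schema is invented for illustration and is not a 3GPP data model; it only shows the pattern of the operator declaring *what* should happen while the function decides autonomously.

```python
def evaluate_policy(rules, session):
    """Return the action of the first declarative rule whose conditions all
    match the session context (toy PCF-style decision, invented schema)."""
    for rule in rules:
        if all(session.get(key) == value
               for key, value in rule["conditions"].items()):
            return rule["action"]
    return "default"


# Usage: declarative rules set up by the operator; the function applies them.
rules = [
    {"conditions": {"slice": "urllc", "congested": True},
     "action": "prioritize"},
    {"conditions": {"slice": "embb"},
     "action": "best_effort"},
]
print(evaluate_policy(rules, {"slice": "urllc", "congested": True}))
# -> prioritize
```

In the agentic variant, the rule table would be replaced or augmented by learned decision logic, while the declarative policies remain as the operator's restriction boundary.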

While it is possible to implement functions using AI agents, we should consider their efficiency and whether they would improve the solution at all. Since this white paper’s purpose is to highlight the potential of such techniques rather than to list all their implications, a balanced evaluation is left for a future study.

Network support for agents running in the application space

Current networks carry traffic between subscribers and, sometimes, between their AI agents. CSPs may provide additional support to these subscriber applications by exposing network services, such as location, differentiated connectivity, and network slicing.

Today, there are several layers of network service exposure application programming interfaces (APIs). One such example is the network exposure layer, while another is the application space layer, which uses APIs from the CAMARA and the GSMA Open Gateway initiatives.

Such APIs may provide additional support for functions specific to AI agents, such as agent registry and discovery, authentication, and authorization. Simultaneously, we need to preserve the robustness and integrity of the network with appropriate authorization mechanisms, as well as establish rules on the information agents can share and the topics they can discuss.
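An exposed agent registry with discovery and an authorization gate could take the following shape. The API surface here is an assumption for illustration; it is not a CAMARA or GSMA Open Gateway definition.

```python
class AgentRegistry:
    """Sketch of a network-exposed agent registry with discovery and a simple
    authorization check (API shape is an assumption, not CAMARA/GSMA)."""

    def __init__(self, authorized_tokens):
        self._tokens = set(authorized_tokens)   # tokens issued by the CSP
        self._agents = {}

    def register(self, token, agent_id, capabilities):
        # Only authorized parties may register agents, preserving integrity.
        if token not in self._tokens:
            raise PermissionError("unauthorized registration attempt")
        self._agents[agent_id] = set(capabilities)

    def discover(self, capability):
        # Return the agents advertising the requested capability.
        return sorted(agent_id for agent_id, caps in self._agents.items()
                      if capability in caps)


registry = AgentRegistry(authorized_tokens={"tok-1"})
registry.register("tok-1", "booking-agent", ["calendar", "payments"])
print(registry.discover("payments"))  # -> ['booking-agent']
```

Rules on what registered agents may share and discuss would sit on top of such a mechanism, not inside it.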

Figure 5. Network support for agents running in the application space

Model Context Protocol

The Model Context Protocol (MCP) [9], which has rapidly gained widespread adoption in the AI community, establishes a standard client–server architecture for exposing tools, resources, and prompts to LLM-based agents by providing a uniform interface for accessing various services.

To best define a possible role of MCP in the telecommunications domain, we must understand that wrapping already existing individual API calls as MCP server tools is not the best option, because APIs and MCP servers are not directly interchangeable. While APIs are built for developers to execute precise, low-level operations with strict parameters, agents typically execute high-level tasks and actions to achieve specific goals through autonomous reasoning and intent-driven interaction.

MCP servers leverage underlying APIs by adding a conversational or agent-friendly layer on top of conventional APIs. The role of an MCP tool is to abstract multiple low-level API calls into coherent, high-level tools that represent tasks or capabilities an agent can invoke to achieve specific goals.
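The abstraction step described here, where several precise API calls are wrapped into one coherent, task-level tool, can be sketched generically. The low-level API functions below are hypothetical stand-ins, and the sketch does not use a real MCP SDK; it only shows the wrapping pattern.

```python
# Hypothetical low-level exposure APIs (illustrative stand-ins only).
def api_get_subscriber(subscriber_id):
    return {"subscriber": subscriber_id, "slice": "embb"}

def api_get_location(subscriber_id):
    return {"cell": "gNB-42"}

def api_get_qos(subscriber_id):
    return {"latency_ms": 18}


def tool_describe_subscriber(subscriber_id):
    """High-level, agent-friendly tool: one coherent task that internally
    orchestrates several strict, low-level API calls (MCP-style wrapping)."""
    profile = api_get_subscriber(subscriber_id)
    profile.update(api_get_location(subscriber_id))
    profile.update(api_get_qos(subscriber_id))
    return profile


# An agent invokes one goal-oriented tool instead of three parameterized APIs.
print(tool_describe_subscriber("example-subscriber"))
```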

With this in mind, one possible role of MCP in the telecommunications domain is shown in Figure 6. In this particular scenario, CSPs supply MCP servers for network services that are exposed to the application layer. This is similar to what we described in the previous section, where the network service exposure allows APIs to provide additional support for several AI agent-specific functions. An early example of MCP in this specific role is the Telephony MCP Server from Vonage [16].

Figure 6. MCP servers in a telecommunication network architecture

Another possible role of MCP is when it simply becomes an implementation technique to build internal network services using AI agents.

Agent-to-agent communication

The agent-to-agent (A2A) protocol [17] is a model-agnostic communication standard designed to enable AI agents to interact with each other, facilitating scalable cooperation with minimal human oversight. A2A standardizes the structure of messages exchanged between agents, including goals, capabilities, state updates, requests, and commitments. Complementing MCP, A2A aims to foster emergent, self-organizing, and scalable AI agent ecosystems.
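The message categories A2A standardizes can be modeled as a typed envelope. The field names below are illustrative only and do not reproduce the normative A2A schema; the point is that the protocol constrains message structure, not agent internals.

```python
from dataclasses import dataclass, field
from typing import Any

# The message categories named in the text (illustrative labels).
MESSAGE_KINDS = {"goal", "capability", "state_update", "request", "commitment"}


@dataclass
class A2AMessage:
    """Illustrative agent-to-agent envelope (not the normative A2A schema)."""
    sender: str
    receiver: str
    kind: str
    payload: dict = field(default_factory=dict)

    def __post_init__(self):
        # The protocol constrains structure; reject unknown message kinds.
        if self.kind not in MESSAGE_KINDS:
            raise ValueError(f"unknown message kind: {self.kind}")


msg = A2AMessage(sender="imf-ran", receiver="imf-core",
                 kind="request", payload={"latency_ms": 20})
print(msg.kind)  # -> request
```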

Since AI agents represent a specific implementation technique, when they are used to implement existing standardized network functions, the agent should utilize the existing functional interfaces rather than a generic agent interface, such as A2A. For example, if we implement portions of an IMF with AI agents, they should use the TMF Intent Management Framework and Protocols to communicate their intents.

The motivation for this is the separation of concerns, stating that the implementation technology in one domain should remain indifferent to the implementation technique in another. In other words, two communicating functional entities should not assume that they are agent-based and use a specified agent interface.

For proprietary interfaces, agents should be given flexibility in choosing the best protocol for each agent’s purpose, such as A2A.

Robustness and trustworthiness in AI agent-based systems
Achieving robustness and trustworthiness in LLM-based AI agent systems is a significant challenge, which needs to be taken particularly seriously due to the high requirements on these aspects in telecommunication infrastructures and the sophisticated design of telecom APIs.

Traditionally, so-called guardrails such as human feedback, prompt validators, output filtering, moderator models, and fine-tuning have been used to align models with human preferences and improve safety. Architectural guardrails, including tool constraints, code sandboxes, chain-of-thought supervision, and retrieval-augmented generation (RAG), help further reduce risks such as hallucinations. Other methods, such as grammar-constrained decoding, enforce syntactic correctness by restricting LLM outputs to a formal grammar.
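As a minimal example of an output guardrail, a validator can check agent output against a formal action grammar and fall back to a safe no-op when it does not parse. The action set and grammar below are invented for illustration.

```python
import re

# Only outputs matching this (invented) action grammar may pass through.
ACTION_GRAMMAR = re.compile(r"^(scale_up|scale_down|hold)\((\d{1,3})\)$")


def guardrail(raw_output):
    """Validate an agent's raw output against a formal action grammar and
    fall back to a safe no-op when it does not parse (toy guardrail)."""
    match = ACTION_GRAMMAR.match(raw_output.strip())
    if not match:
        return ("hold", 0)   # safe fallback instead of free-form text
    action, amount = match.groups()
    return (action, int(amount))


print(guardrail("scale_up(25)"))                # -> ('scale_up', 25)
print(guardrail("Sure, rebooting everything"))  # -> ('hold', 0)
```

Grammar-constrained decoding achieves the same end earlier in the pipeline, by restricting what the LLM can emit in the first place rather than filtering it afterwards.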

However, these techniques still cannot guarantee correctness and alignment.
As a result, achieving robust and trustworthy AI agents requires special attention through careful use of the following activities:

  • Evaluation: Analyzing an agent’s reasoning, correctness, safety, and task completion in both the development and operational phases.
  • Observability and monitoring: Continuously supervising agent behavior and internal events to improve agent performance over time.

It is more complex to evaluate AI agents than traditional AI models, as the evaluation needs to consider both the results and the trajectory that produced them. Thus, all steps taken to reach a particular result need to be tracked, making evaluation, monitoring, and observability more complex.
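Trajectory-aware evaluation means logging every intermediate step, not only the final result. A minimal recorder might look like this; the step schema and the notion of a forbidden tool are assumptions made for the sketch.

```python
class TrajectoryRecorder:
    """Record each reasoning or tool step an agent takes so that evaluation
    can inspect the full trajectory, not only the final result (sketch)."""

    def __init__(self):
        self.steps = []

    def record(self, step_type, detail):
        self.steps.append({"type": step_type, "detail": detail})

    def evaluate(self, expected_final):
        # A trajectory passes only if the result is correct *and* every
        # intermediate tool call stayed within the allowed set.
        result_ok = bool(self.steps) and self.steps[-1]["detail"] == expected_final
        safe = all(step["detail"] != "forbidden_tool"
                   for step in self.steps if step["type"] == "tool_call")
        return {"result_ok": result_ok, "trajectory_safe": safe}


recorder = TrajectoryRecorder()
recorder.record("tool_call", "query_alarms")
recorder.record("result", "cell_restarted")
print(recorder.evaluate("cell_restarted"))
# -> {'result_ok': True, 'trajectory_safe': True}
```

The same log feeds observability and monitoring: supervising agent behavior over time requires exactly this step-level record.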

Summary

To summarize, AI agents and agentic architectures will be utilized in telecommunication networks. However, we should consider AI and agents as realization techniques for functions in the network architecture such as IMFs, rApps, or 3GPP network functions, not as a standalone function or an architecture.
The use of agentic architecture and agents must not endanger the robustness and integrity of network operation. Functional network architectures, as standardized, still apply to agents, which gives proper context to what they are authorized to observe and do.

This also means that agents reside within domains and layers of network operation with clear interfaces and authorizations, while still leveraging their reasoning ability to make decisions.

Given the current standardized functional approach toward network architecture, AI agent communication will be possible through existing functional interfaces, such as SBIs for the 3GPP core network, or rApp-to-rApp communication (R1) and IMF-to-IMF communication.

AI agent realization aspects will not be standardized, which is consistent with the view of AI and AI agents being implementation techniques rather than architectures. This way, we can create flexibility for the AI agents to use different realizations and evolve at a different pace.

Technologies like MCP and A2A offer valuable capabilities in the AI agent space. While standardized APIs will continue to underpin most network functionalities, MCP can serve as an abstraction layer that exposes high-level tools to agents, particularly in the management and application domains. A2A, meanwhile, could serve as internal agent-to-agent communication, supporting scalable interactions between agents within proprietary implementations. Importantly, both MCP and A2A are intended to complement APIs by enabling more flexible and intelligent agent interactions.

The two summarizing figures below showcase the possible placement of AI agents in the management and orchestration domains, as well as in the 3GPP functional architecture.

Figure 7. A possible use of AI agents in management and orchestration domains

Figure 8. A possible use of agents in the network architecture

Conclusion

AI agents will play a vital role in telecommunications, shaping future network evolution. This white paper underscores their potential, particularly in autonomous operations for 5G and 6G networks, clarifies the relation between AI agents and telecommunication network architecture, and provides a crisp definition of what AI agents are, while emphasizing critical considerations such as robustness, adaptability, and operational efficiency.

Telecommunications industry leaders are encouraged to define a long-term vision for AI agentic solutions as a flexible, scalable approach to address both current challenges and long-term opportunities.

Contributors

Massimo Iovene

Massimo Iovene is an AI expert in Core Network engineering, with more than 25 years of experience in telecommunications architecture, product implementation, and customer engagement. In past years he worked on product strategy and evolution around cloud technologies, automation, O&M, total cost of ownership, and the cloud-native journey, analyzing market trends and the state of industry and research. In recent years, his work has focused on AI and related methods, with the primary ambition of applying them to telecommunications for node and service automation.

Dr. Leif Jonsson

Dr. Leif Jonsson holds an MSc in Computer Science from Uppsala University (1998), the same year he began his career at Ericsson’s research division. In 2018, he earned his PhD in Computer Science from Linköping University, with a focus on Machine Learning and Artificial Intelligence, conducted in collaboration with Ericsson. His research focuses on leveraging machine learning to enhance large-scale software development, particularly in automating complex tasks that have traditionally resisted automation. As an Expert in AI and Machine Learning at Ericsson, Dr. Jonsson leads initiatives in AI strategy, mentors and teaches within the ML domain, and drives applied research in machine learning across the organization.

Paddy Farrell

Paddy Farrell is an expert in Data Science and Network Intelligence, with a strong focus on applying AI to Ericsson’s network management solutions. Since joining Ericsson in 2000, he has played a pivotal role in shaping the company’s future in network management by leveraging machine learning, predictive analytics and automation to boost network efficiency, reliability, and scalability.

Paddy holds degrees in both Electronic and Software Engineering, as well as a Master’s in Artificial Intelligence. He is passionate about driving research and fostering innovation, working closely with customers to design and implement cutting-edge solutions that transform network performance and operations.

Ulf Mattsson

Ulf Mattsson is an Expert in Packet Core network. He has 25 years of experience working with telecommunications, covering four generations of mobile systems. His work has included development, architecture definition, and standardization for networks and mobile phones. In recent years, his work has focused on architecture for AI/ML. Ulf holds an M.Sc. from Chalmers University of Technology, Gothenburg.

Dinand Roeland

Dinand Roeland is a principal researcher at Ericsson Research who joined the company in 2000. His current research interests involve introducing artificial intelligence technologies into end-to-end network architecture with the goal of achieving an autonomous cognitive network. He has worked in a variety of technical leadership roles including product development, concept development, prototyping, standardization, system management and project management. He holds an M.Sc. cum laude in computer architectures and intelligent systems from the University of Groningen in the Netherlands.

Göran Hall

Göran Hall is an Expert in Network Architecture Evolution (AI/ML) at the CTO office. He joined Ericsson in 1991 to work on development and standardization, primarily within the area of Packet Core network architecture, and has been active in developing standards and products for GPRS, WCDMA, PDC, EPC and 5G Core. In 2021 he joined the CTO office where he has the responsibility for AI architecture principles across Ericsson products. Hall holds an M.Sc. in Electrical Engineering from Chalmers University of Technology in Gothenburg, Sweden.

Jörg Niemöller

Jörg Niemöller is an expert in analytics and customer experience. Since 1998, Jörg has held positions in Ericsson Research, core network system management, and digital services. He has developed concepts and solutions for intelligent systems capable of driving autonomous operation and realizing the zero-touch vision. His current focus is the introduction of these technologies in our industry through evolved products and standardization. Jörg is the lead author of TM Forum guidebooks and models on intent.

References

[1] Ericsson – Defining AI native: A key enabler for advanced intelligent telecom networks
[2] Ericsson – Cognitive reasoning for 5G network lifecycle management
[3] TM Forum (TMF) – Autonomous Network Levels Evaluation Methodology (IG1252)
[4] Ericsson – Autonomous networks with multi-layer, intent-based operation
[5] Ericsson – Intent-driven is a key step to autonomous networks
[6] Ericsson – How to make better use of network insights with Generative AI
[7] TMF – Autonomous Networks Technical Architecture (IG1230)
[8] 3GPP – TS 23.501, System architecture for the 5G System (5GS)
[9] Anthropic – Model Context Protocol (MCP)
[10] Google – Agent-to-agent (A2A) protocol
[11] Ericsson – SMO enabling intelligent RAN operations
[12] TMF – Intent in Autonomous Networks v1.3.0 (IG1253)
[13] TMF – TMF921 Intent Management API User
[14] 3GPP – TS 28.312, Intent-driven management services for mobile networks
[15] O-RAN – WG1.TR.SMO-INT-R004, SMO Intents-driven management
[16] Vonage – Telephony MCP Server
[17] A2A