Network functionality deployed in various locations

Network architecture domains

The network architecture is evolving toward cognitive management of highly automated networks, meeting diverse needs through flexible deployment environments.


The current 5G and future 6G mobile networks have taken a more central role in everyday life, both for consumers and for enterprise communication, as well as for critical communication in society. The networks create a powerful innovation platform for virtually any sector across industry and society.

This transformational trend in how networks are built, operated and opened for innovation is described in Figure 6 below. Networks have for some time developed in the direction of a more dynamically adaptable architecture, where Network Functions (NF) and applications run where and when they are needed to optimize performance, cost and business agility.

The horizontalization of the network architecture enables a distribution of cloud resources, joint data pipelines and Open APIs for the programmability and flexibility needed - both inside the network and to the outside world.

A central part of openness in 5G and 6G networks is to leverage the strength of business relevant vertical/functional interfaces and additional horizontal interfaces in the network platform for flexibility in a cloud world. It is still very important to leverage coordination across relevant standardization bodies, open-source projects, alliances and partnerships.

The separation through horizontalization between HW, cloud, transport, data pipelines, network applications, management and monetization will consequently make the interfaces supporting horizontalization more important as multi-vendor interfaces.

In this transition, correct modularization of network functionality is crucial. The open air interface, the open RAN-Packet Core interface and global roaming specified in 3GPP are the basis, and will remain so, for global scale with a strong ecosystem.

Figure 6. High-Level Network Architecture

The horizontal architecture functional domains will be further described in the following chapters. Vertical topological domains span from “Devices / Local networks” on the left all the way to “Global sites” on the right.

Global connectivity and services have by tradition been deployed in a federated model, where the interfaces are well standardized and offered by any service provider. The complexity of multiple networks has been hidden through interoperability and inter-service-provider exchange models. However, the rapid deployment of new features makes the traditional standardized federated model hard to use. New methods of enabling exposure of assets from multiple networks are needed, such as network asset facilitation and exchange, or even, on service providers' request, aggregation into a single offering.

Management, Orchestration, Monetization

End-to-end SLA management adds handling of the commercial aspects on top of service monitoring and assurance functionality.

For CSPs, today's consumer business is well known and established, and new, innovative monetization opportunities for 5G are currently being explored. Entering the business of serving enterprises will require that tailored communication services can be delivered for each enterprise customer. Such additional revenue flows, and potential penalties, mean that CSPs increasingly demand the ability to closely monitor both fulfilment and service quality in order to provide excellent support.

An SLA is legally binding and comes in two flavors:

  • Commercial SLAs govern the service relationship between a CSP and its enterprise customers, and potentially also between a CSP and its partners or suppliers
  • Operational level agreements apply to service and resource management, specifying expectations on their contribution to service implementation
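The two SLA flavors above can be captured in a simple data model. A minimal sketch follows; all class and field names are illustrative assumptions, not TM Forum-defined types:

```python
from dataclasses import dataclass, field

# Illustrative sketch of the two SLA flavors described above.
# Class and field names are assumptions, not TM Forum types.

@dataclass
class ServiceLevelObjective:
    kpi: str                          # e.g. "availability_pct"
    target: float                     # contracted target value
    penalty_per_breach: float = 0.0   # commercial penalty, if any

@dataclass
class CommercialSLA:
    """Governs the CSP <-> enterprise customer (or partner) relationship."""
    customer: str
    objectives: list[ServiceLevelObjective] = field(default_factory=list)

@dataclass
class OperationalLevelAgreement:
    """Applies to internal service and resource management contributions."""
    domain: str                       # e.g. "RAN", "Packet Core", "Transport"
    objectives: list[ServiceLevelObjective] = field(default_factory=list)
```

A commercial SLA would aggregate the service levels that the underlying operational agreements per domain are expected to deliver.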

SLA management thus collects all processes concerned with defining and contracting service level agreements, the subsequent monitoring and assurance of delivered services, and the commercial follow-up. It leverages standardized interfaces, mainly from TM Forum, and can be organized into different functions as depicted in Figure 7 below.

Figure 7. E2E SLA Management Architecture

The CPQ (Configure/Price/Quote) function is responsible for the order negotiation of connectivity services, composed of network functions from RAN and Packet Core, with selected service level objectives.

When the service is created by a service orchestration function, it updates the network inventory repository and prepares the service assurance function with the SLA details.

The network assurance function collects monitoring data and triggers actions toward the responsible domain orchestration and assurance functions should errors occur. In cases of violations of aggregated service levels, alarms are raised, which then impact commercial SLA management.
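The monitoring-and-escalation flow just described can be sketched as a simple control loop. The function names and the KPI convention are hypothetical:

```python
# Hypothetical sketch of the network assurance loop: compare collected
# KPI measurements against contracted service level objectives and
# escalate violations toward commercial SLA management.

def check_service_levels(measurements, objectives):
    """Return the KPIs whose measured value misses its objective.

    Convention (an assumption for this sketch): every KPI is expressed
    so that higher is better, e.g. availability_pct, throughput_mbps.
    """
    return [kpi for kpi, target in objectives.items()
            if measurements.get(kpi, 0.0) < target]

def assure(measurements, objectives, raise_alarm):
    """Run one assurance cycle; alarms feed commercial SLA management."""
    violations = check_service_levels(measurements, objectives)
    for kpi in violations:
        raise_alarm(kpi)
    return violations
```

In a real deployment the alarm callback would trigger the responsible domain orchestration and assurance functions rather than a local handler.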

The maturing virtualization technologies and the development of cloud-native paradigms increase the need for enhanced automation of network functions as these increasingly adopt cloud-native principles.

To maximize automation and the potential gains, physical resource automation must be included regardless of the deployment flavor of the underlying network architecture, covering physical, virtual and hybrid network deployments.

The above-described SLA handling, the opportunity for automated life-cycle handling, self-healing capabilities and close integration with orchestration call for a unified service definition.

A high-level description of the tight integration of Service Orchestration and Service Assurance can be seen in Figure 8 below.

Figure 8. Unified information model for orchestration and assurance requirements

Future networks and new deployment options will evolve toward fully intent-based management for full automation and SLA enablement. In RAN, this is enabled through the Automated RAN Management Interface (ARMI), with the Data Driven Development Environment supporting the use of AI/ML as a tool to build advanced automation functionality.

The Tele Management Forum (TM Forum) defines intent as “the formal specification of all expectations including requirements, goals, and constraints given to a technical system.” RAN intents are based on overall (CSP) business intents, using measurable target key performance indicators (KPIs) to achieve the desired consumer experience (as shown in Figure 9).

Figure 9. Example of intents relevant for the RAN automation solution

Complementary to the above KPIs, the system needs additional intents providing guidelines on what else should be optimized, both in situations when all KPIs are met and in situations when there are not enough resources (e.g., during traffic peaks).
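One way to picture such an intent is as a structure carrying measurable KPI targets plus the two kinds of guidelines just mentioned: what to optimize further when all KPIs are met, and what to degrade first when resources run short. The sketch below is an illustrative assumption, not a TM Forum or ARMI schema:

```python
from dataclasses import dataclass, field

# Illustrative sketch of a RAN intent; field names are assumptions,
# not a standardized intent model.

@dataclass
class RanIntent:
    kpi_targets: dict                                  # measurable targets
    optimize_when_met: list = field(default_factory=list)  # if all KPIs met
    degrade_order: list = field(default_factory=list)      # resource shortage

intent = RanIntent(
    kpi_targets={"dl_throughput_mbps": 50.0, "availability_pct": 99.9},
    optimize_when_met=["energy_consumption"],
    degrade_order=["best_effort_traffic"],
)
```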

The path to a Data Driven Development Environment is an important addition to the network architecture, allowing for an efficient development and deployment process. Models will be trained at Ericsson (at Software Supply), then refined and tested in a lab environment, and ultimately in production environments. Data incoming from the real network is then fed back into training, see Figure 10.

Data from real networks is used in simulators with complex and detailed scenarios (environment, mobility, traffic) for initial design and training of models. The AI/ML model from the first phase is then ported to the AI prototype for verification (and adjustment) in a product-like environment. The AI/ML model is then deployed in a controlled environment in a real network (Product Near) for verification and re-training in the real environment.
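The staged lifecycle above (simulator training, prototype verification, Product Near deployment with re-training) can be sketched as a simple promotion rule; the stage names and function are illustrative assumptions:

```python
# Hypothetical sketch of the staged model lifecycle described above:
# simulator training -> prototype verification -> Product Near, with
# real-network data fed back into training on failure.

STAGES = ["simulator_training", "prototype_verification", "product_near"]

def promote(stage, passed_verification):
    """Advance a model one stage if verification passed; otherwise send
    it back to training with data collected from the real network."""
    idx = STAGES.index(stage)
    if not passed_verification:
        return STAGES[0]           # re-train on real-network data
    if idx < len(STAGES) - 1:
        return STAGES[idx + 1]     # advance to the next phase
    return stage                   # already deployed; keep re-training there
```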

Figure 10. Components of the data driven development environment

Access, Mobility and Network Applications

Today's networks, with an increasing range of purposes and use cases, require deployment alternatives with improved flexibility, scalability and robustness regarding HW, sites and architectures for microservice support, alongside increasing requirements on energy efficiency and performance.

Ericsson leverages its special-purpose RAN HW for baseband and radio, with competitive characteristics of optimized performance, energy efficiency and size.

Ericsson's Cloud-based RAN solution embraces the cloud-native transformation: container-based microservice architectures, orchestration systems (e.g. Kubernetes), continuous integration and continuous delivery (CI/CD), etc.

The unified RAN SW architecture, based on cloud native principles, consists of application software and virtualized resources with a separation of SW and HW. The RAN application can be dynamically deployed on various HW, sites and Kubernetes clusters. The virtualization of resources is a pool of RAN/Cloud infrastructure which exposes radio and compute resources to the RAN application SW, see Figure 11 below.
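The dynamic placement idea above can be illustrated with a small sketch: choose a Kubernetes cluster from the pooled RAN/cloud infrastructure whose exposed radio and compute resources satisfy the application's needs. The resource model and names are assumptions, not an Ericsson API:

```python
# Illustrative sketch of dynamic RAN application placement on pooled
# infrastructure. The resource model is an assumption for this sketch.

def place(app_req, clusters):
    """Return the name of the first cluster whose exposed radio and
    compute resources can host the application, or None."""
    for c in clusters:
        if (c["free_cpu"] >= app_req["cpu"]
                and c["free_radio_units"] >= app_req["radio_units"]):
            return c["name"]
    return None  # no suitable site; deployment must wait or scale out

clusters = [
    {"name": "edge-site-1", "free_cpu": 8,  "free_radio_units": 0},
    {"name": "edge-site-2", "free_cpu": 16, "free_radio_units": 3},
]
```

A real deployment would of course delegate this to Kubernetes scheduling and SMO orchestration rather than a hand-rolled selector.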

Figure 11. Dynamic deployment of RAN application SW on virtual resources in a HW pool

This enables a mix of deployment scenarios across hardware vendors. Multi-cloud, with a mix of cloud infrastructure vendors, requires that RAN SW can be deployed on multiple vendors' cloud infrastructures and that the SMO orchestrates them, as described in Figure 12 below.

Figure 12. Illustration of site deployment

Option A, on the left in Figure 12, shows an example of a site deployment of an all-Ericsson solution on special-purpose HW. Option C, on the right, shows an example of a cloud-based RAN site deployment similar to option B, but with the difference that the cloud infrastructure (including cloud platform, compute, storage and networking) is deployed and managed by a hyperscaler (HCP) at the NOP's premises. Additional benefits and consequences of option C include transferring investment costs to a pay-per-use model (CAPEX to OPEX), but with the risk of introducing one more stakeholder. This will impact operations of the network, given that responsibility for the total solution is split between several stakeholders.

Regarding security, the principles of Zero Trust have already been adopted in the telecom industry and integrated into CSPs' processes and workflows for network and security operations. Though the reasoning for cloud native may be TCO, reduced complexity, etc., it brings implications such as potential regulatory restrictions or corporate decisions prohibiting the running of tools like EDR (Endpoint Detection and Response) or traffic/CPU/memory analytics to ensure optimal performance.

Another dimension of security impact is that the kernel and network stack are not controlled by the application, which means hardening and optimization recommendations and/or instructions must be provided to the operator as a complementary security addition.

Cloud Infrastructure, Transport and Data Pipeline

System infrastructure from HCPs, telecom vendors and third parties can be used in various deployments. Each option provides an internal networking solution. For HCP shared infrastructures, regions and local zones, an overlay network may be needed to meet telecom networking needs. In an end-to-end solution, a transport solution is needed to connect the different cloud instances.

Alignment with such cloud networking helps simplify end-to-end orchestration. Additionally, there is a need for end-to-end data collection across all the different system infrastructures used.

The term infrastructure has evolved to focus on management and exposure of infrastructure resources in a secure, isolated and abstracted way to applications: the cloud paradigm. The main portability layer across the different system infrastructures is the CaaS layer, de facto based on Kubernetes.

There are three main alternatives for cloud-based system infrastructures:

  1. Third-party CaaS in a multivendor environment, with CaaS, HW, networking etc. all from different vendors and integrated by the customer.
    This environment provides full flexibility and control but introduces many dependencies and variations, as the network, hardware, CaaS etc. are not fully integrated and verified by one vendor, thus increasing the need for system integration work and a well-defined support setup.
  2. HCP platforms on-prem or in regions.
    An integrated cloud platform where the infrastructure is provided and verified by an HCP simplifies application integration on the platform and requires less SI work.
  3. Integrated cloud platforms from Ericsson or other vendors.
    In the same way as for the complete HCP offerings, an Ericsson cloud infrastructure provides a consistent and verified deployment platform.

A system infrastructure can't be viewed in isolation. All instances will be part of a larger distributed system where multiple clusters need to be managed and the transport between them secured. As multi-cloud service orchestration is required, it must be considered who takes full responsibility.

Transport in a functional architecture can be divided into three different categories:

  • Application Transport termination
  • WAN Transport
  • Cloud Infrastructure Transport

An example in the area of application transport termination is RAN, which consists of 3GPP-stipulated IP termination points considered quite "cloud unfriendly". The use of termination points with 3GPP ASN.1-based application protocols on top of SCTP, for example, works poorly in a cloud-native environment. The O-RAN specifications for fronthaul interfaces toward radios, which use L2 Ethernet termination points, are another artifact with a poor fit into a cloud-native application.

WAN and Cloud Infrastructure Transport contain the same or similar functions, providing connectivity through switching, routing and overlay/underlay services. In the WAN we often utilize Traffic Engineering or MPLS segment routing, but at the cloud site a VXLAN overlay on top of IP is often enough. See also the "Hyperscaler Implications to the Telecom Network Architecture" chapter below.

Figure 13. Data Pipeline and other Ericsson-Operator Interactions

As seen in Figure 13 above, application clusters external to CSP networks (Service Domain) can interact over CSP Network External Services APIs, with capabilities to:

  • collect/ingest raw data and insights from/to customer network through Data Pipeline APIs
  • support the CI/CD process through CI/CD Pipeline APIs
  • support configuration of the customer network through Configuration (Intent) Pipeline APIs
  • support actuation towards the customer network through Actuation Pipeline APIs. 
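The four pipeline API families above could be grouped behind a single client. The sketch below is hypothetical; the base URL and endpoint paths are illustrative assumptions, not a published API:

```python
# Hypothetical sketch of a client for the four CSP Network External
# Services pipeline API families. Endpoint paths are assumptions.

class ExternalServicesClient:
    def __init__(self, base_url):
        self.base_url = base_url.rstrip("/")

    def _url(self, pipeline, action):
        return f"{self.base_url}/{pipeline}/{action}"

    def ingest_data(self, payload):      # Data Pipeline API
        return self._url("data", "ingest"), payload

    def push_artifact(self, artifact):   # CI/CD Pipeline API
        return self._url("cicd", "artifacts"), artifact

    def apply_intent(self, intent):      # Configuration (Intent) Pipeline API
        return self._url("config", "intents"), intent

    def actuate(self, command):          # Actuation Pipeline API
        return self._url("actuation", "commands"), command
```

Each method here only composes the request; a real client would perform an authenticated HTTP call against the CSP-exposed endpoint.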

When working with HCPs (AWS, Google Cloud, Azure), cloud-agnostic solutions and adequate network function performance for HCP deployments are needed in order to maintain a stack-neutral compatibility approach.

Hyperscaler Implications to the Telecom Network Architecture

Several drivers exist for Telecom Equipment vendors to leverage HCP deployments for telecom workloads:

  • Embark on a journey to true cloud native for applications and operations
  • Take advantage of technology leadership in areas like AI/ML frameworks, scale, data collection etc., see also chapter on Artificial Intelligence.
  • Focus on core business rather than on platform features, which are provided by HCPs in a portable way
  • Leverage alignment between telecom and IT solutions on the same platform to achieve further flexibility and openness

Another top driver for CSPs to move to an HCP platform is to achieve internal alignment and simplified operations for their complete business. Moving both telecom and enterprise workloads to the same infrastructure has the possibility to improve TCO.

There are more reasons for HCPs to move into the telecom market, both as cloud platform providers for telecom applications and, as a main case, to enable new enterprise use cases connected to telecom, including private networks, on-prem edge solutions etc. HCPs are positioning their offerings as a suitable place for 5GC deployments, enabling enterprise workloads on the same platform as 5GC. HCPs already have several solutions enabling enterprise deployments in this area.

The reasons for HCPs creating a RAN platform (HCP edge) are rather focused on ensuring that other workloads, like AI/ML, end up in the HCP regions, and on the possibility of enabling new enterprise use cases based on data and, possibly, edge locations in the future.

There will be a need for both private and public cloud, and the best-suited deployment depends on the scenario and workloads, as depicted in Figure 14 below:

Figure 14. Drivers for private and public cloud

Using HCPs has an impact on many architecture areas for workloads in the network:

  • Deployment Scenarios

HCPs build their presence from the regions where all their services are available. The different edge locations provide infrastructure and a CaaS environment. Applications using the different edge deployments need to bring the PaaS services they require, as PaaS support in edge locations can be very limited. It should also be noted that all edge deployments have a dependency on the regions for the CaaS control plane, IAM, management and other central services.

Figure 15. HCP and CSP deployments

  • Portability

For CSPs to be able to deploy on a selected HCP, portability between different HCPs will be necessary to complement the full-stack delivery options.

Examples of areas in need of special consideration for portability are infrastructure and CaaS orchestration, lifecycle management of microservices, accelerators, etc., as traditional approaches are not aligned with the differing practices among the HCPs.

  • Data Sovereignty

Today there is only emerging regulation for critical infrastructure that impacts telecom systems, but more will most probably follow. Such legislation can have an impact on dependencies on regions outside the country, as with GDPR in the EU for example.

  • Cloud Native Operations

Cloud-native operations, as a way of moving from physical network functions to virtual network functions, is a substantial shift, but not the end goal.

It is rather a means to obtain benefits like openness, speed of change, simplification, trust etc., and it is not only about running code in containers. All HCPs' operations are based on cloud-native principles today.

Several challenges exist when moving to a new operational model. One example is the change from an appliance-based model to a model based on functions and software, where a few fundamental gaps remain in the HCP networking architecture. These are today in many cases bridged by creating an overlay network over the HCP fabric.

It would be desirable that future solutions developed by the HCPs support network functionality without relying on an overlay network solution.
