How CPU choices impact application performance in layered architectures
Modernization of telecom networks will be a continuous journey as ever more agile technologies are developed. Deploying network function virtualization or introducing 5G cloud-native applications requires solid system-level verification, as well as optimization of the application for the hardware platform it runs on, to achieve maximum performance. Leveraging rapid technology advances, Ericsson and Intel have collaborated on application and infrastructure compatibility to provide more effective system-verified solutions.
From self-contained nodes to cloud-native applications
Not that long ago, all functions and services delivered within telecom networks were realized using discrete nodes – meaning each node solved a distinct task and delivered services to end users. Typically, these nodes were self-contained, not easily scalable, and consisted mainly of purpose-built hardware, an operating system, and the application itself.
Today, self-contained nodes are still in service in many telecom networks, but for the past decade they’ve come up against new alternatives, such as virtualized infrastructure and virtualized network functions. The next technological leap comes with cloud-native, container-based technologies, particularly in conjunction with 5G core networks. Both virtualization and containerization are typically deployed on standardized commercial off-the-shelf (COTS) hardware – and containers sometimes even run on bare metal (see the blog post from earlier this year on the benefits of Kubernetes on bare metal cloud infrastructure).
This shift away from self-contained nodes has produced an architecture in which each layer operates and scales independently of the layers above and below it, using standardized APIs for inter-layer communication. This unlocks an array of benefits, including faster service introduction, dynamic scaling of resources and functions up and down, automation and CI/CD, improved utilization, and more effective operations. It also allows standardized and open-source components to be used, contributing to an improved total cost of ownership (TCO).
The need for system verified solutions
However, despite the standardization of interfaces and protocols, there are still challenges to overcome when putting a reliable and cost-effective multilayer infrastructure into service. Ericsson’s way of addressing this – while still leveraging the benefits of virtualization and container technologies – is to offer pre-integrated, system-verified solutions that are optimized for the requirements of the applications they carry.
In our system-verified offerings, NFVI configuration optimization for high-throughput applications, fault and performance management correlation, networking and storage integration, and end-to-end full-stack support all play a part in securing a sturdy, reliable and cost-effective solution.
Ericsson’s full stack system-verified solutions include four distinct layers:
- The application layer (virtualized, or cloud-native/containerized)
- The software platform on which the application is deployed (IaaS, or CaaS)
- The physical infrastructure layer (software defined)
- The management and orchestration layer (ETSI MANO compliant)
Ericsson-Intel collaboration on application performance
As mentioned in a blog post earlier this year (Ericsson and Intel: Next-gen hardware management platform collaboration), Intel and Ericsson are closely collaborating to co-develop a software-defined infrastructure management platform targeted at efficient and agile deployments of modern telecom functions, whether virtualized (VNF) or containerized (CNF). This builds on a long-standing collaboration between Intel and Ericsson that includes software optimization for Intel processors – work that is integral to getting the best performance out of the application running at the top of the software stack.
Taking the packet core user plane as an example, we’ve measured performance improvements of 10% using 2nd Generation Intel® Xeon® Scalable processors compared to the previous generation – a further step in our joint work to continuously improve performance and shave off costs.
The fact is that despite the independently layered nature of a virtualized or containerized architecture, the choice of CPU has a significant impact on application performance. It’s obvious that the software platform layer is interdependent with the application above it, as well as with the CPU and hardware underneath. But there’s an equally strong relationship between the application and the CPU: an application like the Evolved Packet Gateway, for instance, is very much throughput-driven, whereas an IMS application is more signaling-intensive. The two place very different requirements on the processor.
For a more comprehensive overview of the relationship between workload characteristics and hardware configuration types, the ETSI NFV performance and portability best-practices document (GS NFV-PER 001) classifies NFV workloads into distinct classes. At a high level, these are the characteristics distinguishing the workload classes:
- Data plane workloads: which cover all tasks related to packet handling in an end-to-end communication between edge applications. These tasks are expected to be very intensive in I/O operations and memory R/W operations.
- Control plane workloads: which cover any other communication between network functions that is not directly related to the end-to-end data communication between edge applications. This category includes session management, routing and authentication. When compared to data plane workloads, control plane workloads are expected to be much less intensive in terms of transactions per second, while the complexity of the transactions may be higher.
- Signal processing workloads: which cover all tasks related to digital signal processing, such as FFT decoding and encoding in a cellular base station. These tasks are expected to be very intensive in CPU processing capacity and highly delay-sensitive.
- Storage workloads: which cover all tasks related to disk storage.
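As an illustration, the four workload classes above can be captured as a small lookup from class to its dominant resource demands and the platform tuning it typically calls for. The class names and load characteristics follow GS NFV-PER 001, but the tuning hints (CPU pinning, hugepages, SR-IOV and so on) are common-practice assumptions on our part, not part of the specification:

```python
# Illustrative sketch only: maps the ETSI GS NFV-PER 001 workload classes
# to the resource dimensions they stress most. The "tuning" entries are
# widely used practices for each class, not requirements from the spec.
WORKLOAD_CLASSES = {
    "data_plane": {
        "dominant_load": ["I/O operations", "memory read/write"],
        "tuning": ["CPU pinning", "hugepages", "SR-IOV/DPDK-capable NICs"],
    },
    "control_plane": {
        "dominant_load": ["complex but lower-rate transactions"],
        "tuning": ["general-purpose cores", "no special I/O path"],
    },
    "signal_processing": {
        "dominant_load": ["CPU compute (e.g. FFT)", "low latency"],
        "tuning": ["high-clock cores", "real-time scheduling"],
    },
    "storage": {
        "dominant_load": ["disk I/O"],
        "tuning": ["fast local or software-defined storage"],
    },
}


def tuning_hints(workload_class: str) -> list:
    """Return the illustrative platform tuning hints for a workload class."""
    return WORKLOAD_CLASSES[workload_class]["tuning"]


print(tuning_hints("data_plane"))
```

A table like this is where platform optimization starts in practice: once an application is classified, the infrastructure layer can be configured for the resources that class actually stresses.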
NFVI offerings with OpenStack and Kubernetes
Ericsson offers the market a set of variants and combinations of its pre-integrated, system-verified application and NFVI/infrastructure offerings. With the application on top, the underlying platform layer comes either as an OpenStack-based IaaS offering (Ericsson Cloud Execution Environment, CEE) or a Kubernetes-based CaaS offering (Ericsson Cloud Container Distribution, CCD). Both platforms comply with ETSI MANO for management and orchestration using Ericsson Orchestrator, and both leverage a software-defined infrastructure: the Ericsson Software Defined Infrastructure (SDI).
Ericsson SDI integrates Intel’s latest processors
Ericsson SDI provides the basis for our market-leading NFVI solutions, irrespective of whether it’s an OpenStack- or Kubernetes-based deployment running virtualized or cloud-native applications. It also deploys across the network, from central locations to the edge, providing edge NFVI and enabling local break-out of the packet core data plane for low-latency use cases. Since the Ericsson SDI solution utilizes 2nd Generation Intel® Xeon® Scalable processors, the close collaboration between Ericsson and Intel is foundational in making sure application performance is maximized and market-leading TCO is achieved.
To continue with the packet core example: each application on Ericsson’s Packet Gateway has certain characteristics, interdependencies and compatibility requirements toward the platform and the CPU. Both optimization and configuration are required to ensure an efficient, optimally performing solution – and, in turn, that our customers have the best-performing and most capable solution possible.
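To make the platform side of that configuration concrete, here is a minimal sketch of how a throughput-driven data-plane function might declare its requirements when deployed on a Kubernetes-based CaaS such as CCD. The resource fields shown are standard Kubernetes; the pod name, image name and specific quantities are hypothetical illustrations, not Ericsson’s actual configuration:

```python
import json

# Hedged sketch: a pod manifest for a hypothetical user-plane function.
# The resource keys (cpu, memory, hugepages-1Gi) are standard Kubernetes;
# the name, image and quantities below are made up for illustration.
user_plane_pod = {
    "apiVersion": "v1",
    "kind": "Pod",
    "metadata": {"name": "user-plane-example"},  # hypothetical name
    "spec": {
        "containers": [{
            "name": "upf",
            "image": "example.com/upf:latest",  # hypothetical image
            "resources": {
                # Equal integer CPU requests and limits make the pod
                # Guaranteed QoS; with the kubelet's static CPU Manager
                # policy this yields exclusive, pinned CPUs, which matters
                # for packet-processing throughput. Hugepages reduce TLB
                # pressure for memory-intensive data-plane work.
                "requests": {"cpu": "4", "memory": "8Gi",
                             "hugepages-1Gi": "4Gi"},
                "limits": {"cpu": "4", "memory": "8Gi",
                           "hugepages-1Gi": "4Gi"},
            },
        }],
    },
}

print(json.dumps(user_plane_pod, indent=2))
```

The point of the sketch is that even in a cleanly layered architecture, the application’s workload class surfaces as explicit demands on the platform and, ultimately, on the CPU beneath it.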