
A vision of the 5G core: flexibility for new business opportunities

Next-generation 5G networks will cater for a wide range of new business opportunities, some of which have yet to be conceptualized. They will provide support for advanced mobile broadband services such as massive media distribution. Applications like remote operation of machinery, telesurgery, and smart metering all require connectivity, but with vastly different characteristics. The ability to provide customized connectivity will benefit many industries around the world, enabling them to bring new products and services to market rapidly, and adapt to fast-changing demands, all while continuing to offer and expand existing services. But how will future networks provide people and enterprises with the right platform, with just the right level of connectivity?


Ericsson Technology Review

2016-02-02

Authors: Henrik Basilier, Lars Frid, Göran Hall, Gunnar Nilsson, Dinand Roeland, Göran Rune, and Martin Stuempert



AAA–authentication, authorization, and accounting
APP–application
BSS–business support systems
CN–core network
CO–central office
CP–control plane
DC–data center
DM–Device Management
EPC–Evolved Packet Core
ID–Identity
M2M–machine-to-machine
MBB–mobile broadband
NFV–Network Functions Virtualization
NFVI–NFV Infrastructure
NFVO–NFV Orchestrator
NX–new radio-access technologies
OASIS–Organization for the Advancement of Structured Information Standards
Os-Ma–reference point between OSS/BSS and NFV Management and Orchestration
OSS–operations support systems
SDN–software-defined networking
SDNc–SDN controller
SF–service function
SLA–Service Level Agreement
TOSCA–Topology and Orchestration Specification for Cloud Applications
TTC–time to customer
TTM–time to market
UP–user plane
VIM–Virtual Infrastructure Manager


The answer: flexibility. The ict world has already started the journey to delivering elastic connectivity. Technologies like sdn and virtualization are enabling a drastic change to take place in network architecture, allowing traditional structures to be broken down into customizable elements that can be chained together programmatically to provide just the right level of connectivity, with each element running on the architecture of its choice. This is the concept of network slicing that will enable core networks to be built in a way that maximizes flexibility.

As we move deeper into the Networked Society, with billions of connected devices, lots of new application scenarios, and many more services, the business potential for service providers is expanding rapidly. And 5G technologies will provide the key to tap into this potential, ensuring that customized communication can be delivered to any industry.

Being able to deliver the wide variety of network performance characteristics that future services will demand is one of the primary technical challenges faced by service providers today. The performance requirements placed on the network will demand connectivity in terms of data rate, latency, qos, security, availability, and many other parameters — all of which will vary from one service to the next. But future services also present a business challenge: average revenues will differ significantly from one service to the next, and so flexibility in balancing cost-optimized implementations with those that are performance-optimized will be crucial to profitability.

In addition to the complex performance and business challenges, the 5G environment presents new challenges in terms of timing and agility. Both the time it takes to get new features into the network and the time it takes to put services into the hands of users need to be minimized, and so tools that enable fast feature introduction are a prerequisite.

Above all, overcoming the challenges requires a dynamic 5G core network.

But how do you build the core to be a dynamic, virtualized provider of customized connectivity? An important first step is a high-level vision for the 5G core network. The network architecture that meets the objectives then needs to be defined, and finally the whole concept needs to be tested using various possible deployments of the architecture.

Vision of the 5G core

The 5G core will need to be able to support a wide range of business solutions, and at the same time allow existing service offerings, like mobile broadband, to be enhanced and optimized. It will need to connect many different access technologies together, and deliver traffic to and from a wide range of device types.

Next-generation core networks will run in a business environment that is significantly different from today's. They will be designed to support the traditional operator model, while remaining flexible enough to support a shared-infrastructure model, as well as dedicated usage for specific industries.

But it’s not just the core network that needs to be flexible; the whole communication ecosystem needs to work in a highly responsive manner. Agile systems and processes are needed to ensure that two crucial factors, ttm and ttc, are kept to a minimum. Service providers need to be able to create offerings quickly, and be able to tailor solutions to rapidly changing market demand (short ttm). Order processing needs to be fast, cutting the time from order to a fully active service to a minimum (rapid ttc). To build a future core network architecture that is highly flexible, modular, and scalable will require a much higher degree of programmability and automation than exists in today’s networks.

The 5G core will exist in an environment that is cloud-based, with a high degree of Network Functions Virtualization for scalability, sdn for flexible networking, dynamic orchestration of network resources, and a modular and highly resilient base architecture. Full support for next-generation access networks, including nx and evolved lte, as well as Wi-Fi and other non-3gpp technologies, is a prerequisite.

Network slicing is one of the key capabilities that will enable flexibility, as it allows multiple logical networks to be created on top of a common shared physical infrastructure. The greater elasticity brought about by network slicing will help to address the cost, efficiency, and flexibility requirements imposed by future services.

Architecture and technology

Traditionally, core networks have been designed as a single network architecture serving multiple purposes, addressing a range of requirements, and supporting backward compatibility and interoperability. This one-size-fits-all approach has kept costs at a reasonable level, given that one set of vertically integrated nodes has provided all functionality.

Technology has, however, evolved. Virtualization, nfv, sdn, and advanced automation and orchestration make it possible to build networks in a more scalable, flexible, and dynamic way. Such capabilities allow today’s network designers to contemplate the core in a radically different way, providing greater possibilities for tailored and optimized solutions.

The concept of flexibility applies not only to the hardware and software parts of the network, but also to its management. For example, setting up a network instance that uses different network functions optimized to deliver a specific service needs to be automated. Flexible management will enable future networks to support new types of business offerings that previously would have made no technical or economic sense.

High-level architecture

Network slicing allows networks to be logically separated, with each slice providing customized connectivity, and all slices running on the same, shared infrastructure. This is a much more flexible solution than a single physical network providing a maximum level of connectivity.

Virtualization and sdn are the key technologies that make network slicing possible. As shown in Figure 1, network slices are logically separated and isolated systems that can be designed with different architectures, but can share functional components. One slice may be designed for evolved mbb services providing access to lte, evolved lte and nx devices; another may be designed for an industry application with an optimized core network control plane, different authentication schemes, and lightweight user plane handling. Together, the two slices can support a more comprehensive set of services and enable new offerings that are cost-effective to operate.

To support a specific set of services efficiently, a network slice should be assigned different types of resources, such as infrastructure — including VPNs, cloud services, and access — as well as resources for the core network in the form of vnfs.
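As a rough illustration of this resource model, the sketch below represents a slice as a named collection of typed resources. The class and resource names are invented for illustration and do not correspond to any standardized data model:

```python
from dataclasses import dataclass, field

@dataclass
class Resource:
    kind: str   # e.g. "access", "cloud", "vnf" -- illustrative categories
    name: str

@dataclass
class NetworkSlice:
    name: str
    resources: list = field(default_factory=list)

    def assign(self, kind: str, name: str) -> None:
        """Assign an infrastructure or core-network resource to the slice."""
        self.resources.append(Resource(kind, name))

    def kinds(self) -> set:
        return {r.kind for r in self.resources}

# An evolved-MBB slice drawing on shared access and cloud infrastructure
# plus its own core-network VNFs.
mbb = NetworkSlice("evolved-mbb")
mbb.assign("access", "lte")
mbb.assign("access", "nx")
mbb.assign("cloud", "operator-dc")
mbb.assign("vnf", "packet-core-cp")
mbb.assign("vnf", "packet-core-up")
```

A second slice for an industry application could reuse the same access and cloud entries while assigning a lighter set of VNFs.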

Figure 1: The next-generation core network, comprising various slices

As illustrated in Figure 2, network slicing supports business expansion due to the fact that it lowers the risks associated with introducing and running new services — the isolated nature of slices protects existing services running on the same physical infrastructure from any impact. An additional benefit of network slicing is that it supports migration, as new technologies or architectures can be launched on isolated slices.

Figure 2: Network slicing supports business expansion

Evolving standards should allow network architecture to develop — even in a radical, revolutionary way. By steering away from the one-size-fits-all approach, evolving standards will allow for a whole palette of architectures from which different network slices can be designed. The introduction of a selection mechanism like Dedicated Core Network (dcn)1 — which allows for multiple parallel architectures — is one step in the right direction.

Control- and user-plane separation

Many aspects of the 5G network — not just the deployment architecture — need to be flexible to allow for business expansion. It is likely that networks will need to be deployed using different hardware technologies, with different feature sets placed at different physical locations in the network — depending on the use case. Special attention must be paid to the design of the user plane to meet requirements for high bandwidth, which may apply on an individual subscriber basis or as an aggregated target. For example, in some use cases, the majority of user-plane traffic may require only very simple processing, which can be run on low-cost hardware, whereas the remainder of the traffic might require more advanced processing. Cost-efficient scaling of the user plane to handle the increasing individual and aggregated bandwidths is a key component of a 5G core network.

Supporting the separation of the control- and user-plane functions is one of the most significant principles of the 5G core-network architecture. Separation allows control- and user-plane resources to be scaled independently, and it supports migration to cloud-based deployments. By separating user- and control-plane resources, the planes may also be established in different locations. For example, the control plane can be placed in a central site, which makes management and operation less complex. And the user plane can be distributed over a number of local sites, bringing it closer to the user. This is beneficial, as it shortens the round-trip time between the user and the network service, and reduces the amount of bandwidth required between sites. Content caching is a good example of how locating functions on a local site reduces the required bandwidth between sites.
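The placement trade-off described above can be sketched in a few lines: the control plane stays central, while each user-plane instance is placed at the site with the lowest delay to the user. Site names and latency figures below are invented purely for illustration:

```python
CENTRAL_SITE = "central-dc"

# Assumed one-way delays (ms) from users in each region to candidate sites.
DELAY_MS = {
    "north": {"central-dc": 12, "north-edge": 3},
    "south": {"central-dc": 10, "south-edge": 2},
}

def place_user_plane(region: str) -> str:
    """Pick the lowest-delay local site for the user-plane functions."""
    sites = DELAY_MS[region]
    return min(sites, key=sites.get)

def round_trip_ms(region: str, site: str) -> int:
    """Round-trip time between a user in the region and the given site."""
    return 2 * DELAY_MS[region][site]
```

With these assumed figures, serving a "north" user from the local edge site cuts the round trip from 24 ms to 6 ms, while the control plane remains at the central site for simpler operation.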

As separation of the control plane and the user plane is a fundamental concept of sdn, the flexibility of 5G core networks will improve significantly by adopting sdn technologies.

The control plane, illustrated in Figure 3, can be agnostic of many user-plane aspects, such as physical deployment, and L2 and L3 transport specifics. Typical control-plane functionality includes capabilities like the maintenance of location information, policy negotiation, and session authentication. As such, there is a natural separation at this level.

Figure 3: 5G control-plane architecture

User-plane functionality, which can be seen as a chain of functions, can be deployed to suit a specific use case. Given that the connectivity needs of each use case vary, the most cost-efficient unique deployment can be created for each scenario. For example, the connectivity needs for an m2m service with small payload volume and low mobility are quite different from the needs of an mbb service with high payload volume and high mobility. An mbb service can be broken down into several sub-services, such as video streaming and web browsing, which can in turn be implemented by separate sub-chains within the network slice. Such additional decomposition within the user-plane domain further increases the flexibility of the core network.

The strict separation of the control and user planes enables different execution platforms to be used for each. Similarly, different user-plane functions can be deployed on different execution platforms — even within a single user plane — depending on which solution is most cost-efficient. In the above mbb example, one sub-chain of services may run on general-purpose cpus, whereas another sub-chain that requires only simple user-data processing can be executed on low-cost hardware.
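A minimal sketch of this sub-chain idea, with each sub-chain pinned to an assumed execution platform. The function, platform, and field names are hypothetical, not part of any real product or standard:

```python
def count_bytes(pkt):
    # Simple accounting: cheap enough for low-cost hardware.
    pkt["bytes_seen"] = pkt.get("bytes_seen", 0) + pkt["len"]
    return pkt

def web_optimize(pkt):
    # Stand-in for richer processing that needs general-purpose CPUs.
    pkt["optimized"] = True
    return pkt

# Each sub-service maps to a chain of functions plus the platform assumed
# to be most cost-efficient for that workload.
SUB_CHAINS = {
    "video": {"platform": "low-cost-hw", "functions": [count_bytes]},
    "web":   {"platform": "general-purpose-cpu",
              "functions": [count_bytes, web_optimize]},
}

def process(pkt):
    """Run a packet through the chain belonging to its sub-service."""
    chain = SUB_CHAINS[pkt["sub_service"]]
    pkt["platform"] = chain["platform"]
    for fn in chain["functions"]:
        pkt = fn(pkt)
    return pkt
```

Video packets pass through only the simple accounting function on the cheap platform, while web packets traverse the longer chain on general-purpose CPUs.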

Governing the network

It is clear that enabling business expansion requires greater flexibility in the way networks are built. And as we have illustrated, network slicing is a key enabler to achieving greater flexibility. However, increasing flexibility may lead to greater complexity at all levels of the system, which in turn tends to raise the cost of operations and lengthen lead times. Automation is an essential way of avoiding this spiral of complexity.

Network governance is addressed in three main steps of the life cycle: creation, activation, and runtime.

  • Creation of new (or customization of existing) services with minimum ttm — the ability to break down the overall solution into components is necessary, so that services and slice types can be designed, verified, and validated rapidly.
  • Activation of a service with minimum ttc — the ability to complete activation in a fully automated way will minimize lead times.
  • Runtime — exposing the right capabilities to the user, monitoring services and slas, and adapting to changing conditions (such as scaling and failovers) enable scalability for new services; all of these tasks need to be fully automated.
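The three steps above can be sketched as a toy state machine; the class, state names, and the scaling threshold are invented for illustration, not a standardized interface:

```python
class SliceLifecycle:
    """Toy automaton for the creation -> activation -> runtime steps."""

    def __init__(self, slice_type: str):
        self.slice_type = slice_type
        self.state = "created"          # designed, verified, validated (TTM)
        self.events = ["created"]

    def activate(self) -> None:
        """Fully automated activation keeps TTC to a minimum."""
        if self.state != "created":
            raise RuntimeError("can only activate a newly created slice")
        self.state = "running"
        self.events += ["activated", "running"]

    def on_load_change(self, load: float) -> str:
        """Runtime adaptation: scale out without human intervention."""
        if self.state != "running":
            raise RuntimeError("slice is not running")
        action = "scale_out" if load > 0.8 else "steady"
        self.events.append(action)
        return action
```

The point of the sketch is that every transition is code-driven: nothing between order and running service requires a manual step.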

As illustrated in Figure 4, the fundamental architectural principles for achieving flexibility are separation of concerns, abstraction, and programmability2, 3.

Figure 4: Separation of concerns

The capability offered by network slicing to deliver different categories of connectivity is handled by two layers of governance functionality: one layer focuses on services and products (such as business-to-business offerings that can be implemented using network slicing), and one focuses on the network slices themselves — as illustrated in Figure 5. By creating slices based on performance characteristics — like a low-latency slice, or a high-capacity, low-throughput, high-speed slice — innovative offers can be created by bundling slice capabilities. Each offer would include governance capabilities such as slas (and how to translate wanted service levels into technical control parameters for a network slice), business policies, and control of exposure of capabilities from within the slice.
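The translation from wanted service levels into technical control parameters could look something like the following sketch. The parameter names, thresholds, and mapping rules are all invented assumptions, shown only to make the idea concrete:

```python
def sla_to_slice_params(sla: dict) -> dict:
    """Map wanted service levels to technical slice control parameters."""
    params = {}
    latency = sla.get("max_latency_ms")
    if latency is not None:
        # Tight latency budgets push user-plane functions to edge sites.
        params["up_placement"] = "edge" if latency <= 10 else "central"
    if "min_throughput_mbps" in sla:
        params["guaranteed_bitrate_mbps"] = sla["min_throughput_mbps"]
    # Very high availability targets warrant duplicated resources.
    params["redundancy"] = ("hot-standby"
                            if sla.get("availability_pct", 0) >= 99.999
                            else "best-effort")
    return params
```

A governance layer built this way lets a business offer be expressed in customer terms (latency, throughput, availability) while the slice itself is configured in network terms (placement, bitrate, redundancy).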

Figure 5: Governance functions for network slices, services, and products

The life cycle management provided by these capabilities extends from design and creation of network slice types and services, through activation for individual customers, to runtime monitoring and updates (if needed).

The governance layers handle life cycle management of network slices, aided by a blueprint. A blueprint defines the setup of the slice, including: the components that need to be instantiated, the features to enable, configurations to apply, resource assignments, and all associated workflows — including all aspects of the life cycle (such as upgrades and changes). The blueprint contains machine-readable parts, similar to oasis tosca models, which support automation.
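To make the blueprint idea tangible, here is a toy machine-readable blueprint and an instantiation step, loosely in the spirit of TOSCA-like models. The schema, component names, and workflow steps are invented for illustration:

```python
BLUEPRINT = {
    "slice_type": "low-latency-industry",
    "components": ["auth", "mobility-cp", "qos-up"],       # to instantiate
    "features": {"local_breakout": True},                  # to enable
    "config": {"max_latency_ms": 5},                       # to apply
    "resources": {"site": "local-edge", "vcpus": 8},       # to assign
    "workflows": {"upgrade": ["drain", "swap", "verify"]}, # life-cycle steps
}

def instantiate(blueprint: dict, customer: str) -> dict:
    """Turn a blueprint into a per-customer slice instance record."""
    site = blueprint["resources"]["site"]
    return {
        "customer": customer,
        "slice_type": blueprint["slice_type"],
        "deployed": [{"component": c, "site": site}
                     for c in blueprint["components"]],
        "config": dict(blueprint["config"]),
    }
```

Because the blueprint bundles components, features, configuration, resources, and workflows in one machine-readable artifact, the same definition can drive automated creation, upgrade, and change of every slice instance derived from it.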

To instantiate a new customer service, a new network slice is created, or in some cases, an existing slice is reconfigured. The slice can be independently managed, and it comprises a set of resources or components, which may be traditional, such as an epc, or a new type of architecture — such as a cp/up separation. Network slices typically contain management capabilities, some of which may be under the control of the service provider, and some under the control of the customer — depending on the business model. The governance layer uses a number of systems and interfaces to facilitate the creation and configuration of resources, such as northbound api interfaces exposed by an nfvo or an sdnc, or apis exposed by network functions and components within the slice. Flexibility is key in the automation/orchestration system, which can be achieved, for example, by using plugins.

Deployment scenarios

Applying the concept of network slicing widens the choices for application support — different slices can be deployed independently for quite different purposes, both functionally and in terms of performance. The following use case describes the functionality, architecture, and deployment variants of one such network slice.

In our example use case (see Figure 6), the processes involved in an industry application require low-delay communication between the application controller and the devices in the system — which could be sensors or actuators. Deployment is local to minimize delay and the number of points of failure. To ensure high availability, all relevant resources can be duplicated using a hot standby or load-sharing scheme; deploying duplication locally limits the need for redundancy in the global part of the network. The network slice could include cloud resources to host the industry application together with the necessary network functions in the operator data center.

Figure 6: Functional architecture of a possible application

Core network control-plane functions may be limited to device authorization, local mobility management, and policy control. Which user-plane functions are needed depends on the nature of the application and possibly other usage. For example, a qos function may be in place to ensure that traffic prioritization is upheld; the application traffic is assigned one or more priorities (such as real-time-dependent versus background application traffic), and management traffic is assigned another priority.

If radio access is shared among several network slices, the user plane may include functions to ensure that uplink user-plane traffic is separated for the different network slices. The different user-plane service functions (upsfs) can be deployed as chains, and packets can be tagged to ensure they pass through the desired chain.
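The tagging idea can be sketched as follows: uplink packets from the shared radio access are tagged per slice, and the tag selects the chain of user-plane service functions. The tag values and chain contents are illustrative assumptions:

```python
SLICE_TAG = {"mbb": 1, "industry": 2}

# Which chain of user-plane service functions each tag is steered through.
CHAIN_FOR_TAG = {
    1: ["qos", "nat", "charging"],   # full-featured MBB user plane
    2: ["qos"],                      # lightweight industry user plane
}

def tag_uplink(pkt: dict, slice_name: str) -> dict:
    """Tag an uplink packet with its slice identifier."""
    pkt["tag"] = SLICE_TAG[slice_name]
    return pkt

def steer(pkt: dict) -> list:
    """Return the UPSF chain the tagged packet will traverse."""
    return CHAIN_FOR_TAG[pkt["tag"]]
```

In this way two slices can share the same radio access while their uplink traffic is kept on separate user-plane paths.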

From a management perspective, responsibilities are shared between the operator and the customer (the industry enterprise using communication). The customer is responsible for the application, as well as for device and identity management in the system, while the operator is responsible for the physical infrastructure — including data centers, transport, and nodes. Management of the slice may be shared between the two, allowing the customer (within the framework of an SLA) to manage capabilities for network functions that support the application — such as local mobility management.

As shown in Figure 7, the application controller, the management of the application controller, and the customer part of network management could all be deployed close to the industry site, if only local communication is needed. Operator-controlled network-management functions tend to be deployed at a central location in the operator network.

Figure 7: Low-latency application, local industry; (1), (2), and (3) are possible operator sites for core-network control-plane functions

The decision of where to locate core network control-plane functions (1), (2), or (3) is governed not only by performance requirements, such as control-plane delay and reliability, but also by other factors such as the number of local industries that need to be supported by the network slice, their geographical spread, and the organization of the operator. Operational parameters from the perspective of the operator also come into play. As identity management is industry specific, this function could be carried out from the same location as application controller management.

The deployment of a network slice for an industry process that operates on a regional, national, or even multinational basis is shown in Figure 8. The management of the application controller for this case should be deployed centrally, with room for some local control over application controller management capabilities.

Figure 8: Low-latency application, regional, national, or multinational industry; (1), (2), and (3) are possible operator sites for core-network control-plane functions

For performance and reliability reasons, the application controller for the industry process should be deployed close to each of the industry sites. Customer-controlled network management functions could be placed on the same industry site as the management of the application controller.

Conclusion

Evolved virtualization, network programmability, and 5G use cases will change everything about network design, from planning and construction through deployment. Network functions will no longer be located according to traditional vertical groupings in single network nodes, but will instead be distributed to provide connectivity where it is needed.

To support the wide range of performance requirements demanded by new business opportunities, multiple access technologies, a wide variety of services, and lots of new device types, the 5G core will be highly flexible.

Minimizing cost for service providers and industries that depend on connectivity is a key part of the design for this flexible and dynamic core — enabling costs to be kept under control while networks adapt as quickly as business models change.

Technologies like sdn will be used in innovative ways, to set up a network slice, and to implement additional user-plane modifications. Cloud technology together with advanced analytics capabilities, nfv, and sdn provide a common distributed platform on which networks can be instantiated. The technology boost provided by a flexible core, with end-to-end network slices at the center, will increase the value of networks built on a common infrastructure and platform.

The authors

Lars Frid

Lars Frid

is a director of Product Management at Ericsson in San José, California, US. He has 25 years of experience of working with wireless data communications, ranging from satellite systems and dedicated mobile data systems for industries, to global standards for 2g, 3g, and 4g mobile data communications. His current focus is to drive product strategies for next-generation packet data systems. He holds a degree in electrical engineering from Chalmers University of Technology in Gothenburg, Sweden, and an M.Sc. in electrical engineering from the Imperial College of Science, Technology & Medicine in London, UK.

Lars Frid on Linkedin

Henrik Basilier

Henrik Basilier

is an expert at Business Unit Cloud & IP. He has worked for Ericsson since 1991 in a wide range of areas and roles. He is currently engaged in internal R&D studies and customer cooperation in the areas of cloud, virtualization, and sdn. He holds an M.Sc. in computer science and technology from the Institute of Technology at Linköping University, Sweden.

Henrik Basilier on Linkedin

Martin Stuempert

Martin Stuempert

has been working on 5G network architecture at Development Unit Analytics & Control since 2013. His focus is on sdn, nfv and cloud proofs of concept. Prior to this, he worked on ip/mpls transport networks, focusing on self-organizing networks, QoS, and security. In 2002, he received the Inventor of the Year award from the ceo of Ericsson. He joined Ericsson in 1993 and holds an M.Sc. in electrical engineering from the University of Kaiserslautern, Germany.

Göran Hall

Göran Hall

is an expert in Packet Core Network Architecture at Development Unit Network Functions & Cloud. He joined Ericsson in 1991 to work on development and standardization, primarily within the area of packet core network architecture for gprs, wcdma, pdc, and epc. He is chief network architect for the Packet Core domain, and his current focus is the functional and deployment architecture for a 5G-ready core network.

Göran Rune

Göran Rune

is a principal researcher at Ericsson Research. His current focus is the functional and deployment architecture of future networks, primarily 5G. Before joining Ericsson Research, he held a position as an expert in mobile systems architecture at Business Unit Networks, focusing on the end-to-end aspects of lte/epc, as well as various systems and network architecture topics. He joined Ericsson in 1989 and has held various systems management positions, working on most digital cellular standards, including gsm, pdc, wcdma, hspa, and lte. From 1996 to 1999, he was a product manager at Ericsson in Japan, first for pdc and later for wcdma. He was a key member of the etsi smg2 utran Architecture Expert group and later 3gpp tsg ran wg3 from 1998 to 2001, standardizing the wcdma ran architecture. He studied at the Institute of Technology at Linköping University, Sweden, where he received an M.Sc. in applied physics and electrical engineering and a Lic. Eng. in solid state physics.

Dinand Roeland

Dinand Roeland

is a senior researcher at Ericsson Research. In 2000, he joined Ericsson as a systems manager for core network products. He has worked for Ericsson Research since 2007, and his research interests are in the field of network architectures. He has been a key contributor to the standardization of multi-access support in the 3gpp epc architecture, especially in Wi-Fi. He is currently working on the architecture of 5G core networks. He holds an M.Sc. cum laude in computer architecture from the University of Groningen, the Netherlands.

Dinand Roeland on Linkedin

Gunnar Nilsson

Gunnar Nilsson

is an expert in 5G core network architecture at Business Unit Cloud & IP. He has worked for Ericsson since 1983, and has fulfilled a wide range of roles in many different areas, both in Sweden and in the US. He is currently the Technical Coordinator for studies relating to the 5G core network. His recent engagements include leading the establishment of the Ericsson cloud architecture and Cloud System, and taking on the role of chief scientist for the development of Ericsson’s SSR IP-router. He holds an M.Sc. in engineering physics and applied mathematics from KTH Royal Institute of Technology, Stockholm, Sweden, and an EMBA from the Institute of Management, Sigtuna, Sweden.