Network functions virtualization (NFV)
Network functions virtualization (NFV) takes modern networks one step closer to the cloud, making the deployment and management of network functions, services and applications completely virtual.
NFV explained
Network functions virtualization (NFV) is a modern network architecture approach that decouples the network’s various applications from the underlying hardware resources, including compute, storage and networking hardware. This creates a virtualized layer of network functions, deployed as virtual machines (VMs) or containers, that can be shared by all network applications.
Using virtual, software-based applications where dedicated physical boxes once stood, NFV makes it possible for communication service providers (CSPs) to manage, orchestrate, and expand network capabilities on-demand, wherever they’re required across the network.
Most of the world’s 5G networks run on NFV infrastructure today. As a precursor to the ongoing cloud-native revolution, NFV infrastructure can be expected to co-exist with cloud-native infrastructure for many years to come.
From hardware to the cloud: the story of network function evolution
Just as technologies and network demands have evolved over time, so too have network functions. From first generation networks to today’s 5G networks, network functions have evolved from being an element tightly coupled to the network’s hardware to one that is virtualized, containerized, and now cloud native.
Benefits of NFV: making networks more efficient, agile and innovative
By making networks more resource efficient, more performant and much more agile in launching, scaling and managing new services, NFV makes it possible for CSPs to meet and stay ahead of the growing demands placed on today’s high-performance networks.
Lower day-to-day costs
The shift away from purpose-built hardware towards general-purpose servers lowers both capital expenditure (CAPEX) and ongoing maintenance costs (OPEX). This is complemented by additional automation-driven cost savings, as well as lower overall network energy consumption.
Improved performance
Through dynamic resource allocation, NFV makes it possible to scale network functions up and down based on demand. This allows CSPs to improve load balancing and absorb traffic spikes without overprovisioning.
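As a purely illustrative sketch (not a feature of any specific NFV platform; the names and thresholds below are assumptions), this kind of demand-driven scaling decision can be thought of as a simple control loop:

```python
# Illustrative autoscaling decision for a virtual network function (VNF).
# All names and thresholds are hypothetical; real deployments rely on the
# MANO stack's own scaling policies and telemetry.

from dataclasses import dataclass


@dataclass
class VnfInstanceGroup:
    name: str
    instances: int           # current number of VM/container instances
    min_instances: int = 1
    max_instances: int = 10


def decide_scaling(group: VnfInstanceGroup, cpu_load: float) -> int:
    """Return the desired instance count for a given average CPU load (0.0-1.0)."""
    if cpu_load > 0.80 and group.instances < group.max_instances:
        return group.instances + 1   # scale out on sustained high load
    if cpu_load < 0.30 and group.instances > group.min_instances:
        return group.instances - 1   # scale in when capacity sits idle
    return group.instances           # otherwise keep the current footprint


if __name__ == "__main__":
    gateway = VnfInstanceGroup(name="packet-gateway", instances=3)
    print(decide_scaling(gateway, cpu_load=0.92))  # -> 4 (scale out)
    print(decide_scaling(gateway, cpu_load=0.15))  # -> 2 (scale in)
```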
Deliver services faster
With vendor-agnostic multi-tenant support, multiple virtual and cloud-native network functions can be aggregated onto a single platform, meaning that multiple services and applications can be served and scaled simultaneously.
Innovation on-demand
Shifting from physical to virtual infrastructure significantly improves the speed and agility with which network functions and services can be launched, updated or retired. Combined with faster and more agile development cycles, this leaves CSPs better equipped to test new services and respond quickly to market demands.
NFV architecture: the basics
The NFV infrastructure (NFVI) platform sits at the heart of NFV architecture. By enabling and orchestrating the efficient, flexible and scalable deployment of virtual network functions (VNFs), the NFVI is the backbone that makes today’s high-performance networks possible.
NFVI typically consists of many architectural building blocks, including:
- a hardware layer of compute, network and storage resources
- virtualized resources, such as VNFs or cloud-native container-based microservices
- a virtualized execution layer that provides the orchestration and runtime environment for VNFs, containers or cloud-native functions
- a management, automation and network orchestration layer
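Purely to illustrate how these building blocks relate to one another (the class and field names below are assumptions for illustration, not an ETSI-defined data model), the stack could be sketched as:

```python
# Illustrative model of the NFVI building blocks listed above.
# Class and field names are hypothetical, not taken from the ETSI NFV specs.

from dataclasses import dataclass, field
from typing import List


@dataclass
class HardwareLayer:
    compute_nodes: int
    storage_tb: float
    network_fabric: str              # e.g. leaf-spine data center switching


@dataclass
class Workload:
    name: str                        # a VNF or a cloud-native microservice
    kind: str                        # "vm" or "container"


@dataclass
class ExecutionLayer:
    runtime: str                     # hypervisor and/or container runtime
    workloads: List[Workload] = field(default_factory=list)


@dataclass
class NfvInfrastructure:
    hardware: HardwareLayer
    execution: ExecutionLayer
    mano: str = "management, automation and network orchestration"


nfvi = NfvInfrastructure(
    hardware=HardwareLayer(compute_nodes=64, storage_tb=500.0, network_fabric="leaf-spine"),
    execution=ExecutionLayer(
        runtime="KVM hypervisor + Kubernetes",
        workloads=[Workload("vEPC", "vm"), Workload("5G UPF", "container")],
    ),
)
print(nfvi)
```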
Key NFV technologies and terms
The network cloud space is evolving rapidly. Get up to speed with some of the key NFV and cloud technologies and terms below.
Use cases and applications: what’s possible with NFV
The emergence of NFV and cloud native has powered a new range of technological possibilities on the network, forming one of the key building blocks of many advanced 5G use cases today.
NFV management and orchestration
NFV management, automation and orchestration (MANO) is the architectural framework that manages and orchestrates the allocation of resources demanded by virtual network functions (VNFs). This includes not only lifecycle management and orchestration of the virtualized resources used by each VNF, but also traditional aspects such as fault, configuration, accounting, performance and security (FCAPS) management of the VNF itself.
MANO ensures that dynamic infrastructural workflows are maintained across the network, meaning that optimal performance, flexibility and scalability can be delivered at all times.
It comprises three key building blocks: the NFV orchestrator (NFVO), the VNF manager (VNFM) and the virtualized infrastructure manager (VIM).
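As a rough illustration of how these three blocks interact when a service is deployed (all class and method names below are hypothetical sketches, not an actual ETSI MANO or vendor API), the flow can be outlined as:

```python
# Illustrative flow of a VNF instantiation request through the three MANO
# building blocks: orchestrator -> VNF manager -> virtualized infrastructure manager.

class Vim:
    """Virtualized infrastructure manager: allocates compute/storage/network resources."""
    def allocate(self, vcpus: int, memory_gb: int) -> dict:
        return {"vcpus": vcpus, "memory_gb": memory_gb, "state": "allocated"}


class VnfManager:
    """VNF manager: handles the lifecycle (and FCAPS) of individual VNFs."""
    def __init__(self, vim: Vim):
        self.vim = vim

    def instantiate(self, vnf_name: str, vcpus: int, memory_gb: int) -> dict:
        resources = self.vim.allocate(vcpus, memory_gb)
        return {"vnf": vnf_name, "resources": resources, "state": "running"}


class NfvOrchestrator:
    """NFV orchestrator: composes VNFs into end-to-end network services."""
    def __init__(self, vnfm: VnfManager):
        self.vnfm = vnfm

    def deploy_service(self, service: str, vnfs: list) -> dict:
        return {
            "service": service,
            "vnfs": [self.vnfm.instantiate(name, cpu, mem) for name, cpu, mem in vnfs],
        }


orchestrator = NfvOrchestrator(VnfManager(Vim()))
print(orchestrator.deploy_service("voice-core", [("vIMS", 8, 32), ("vSBC", 4, 16)]))
```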
The journey from NFV to cloud native
Cloud native is the next architectural revolution that is transforming how network functions are deployed and managed. This shift is happening now, with many of the world’s CSPs having already launched cloud native platforms in their core networks today.
NFV: a stepping stone to the cloud-native revolution
In most cases, the shift from NFV to cloud native is a gradual one. By deploying a container-as-a-service (CaaS) platform, such as Kubernetes, on top of the existing virtualized infrastructure manager (VIM), cloud-native functions can co-exist with both legacy and virtual network functions.
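As a minimal sketch of what deploying a containerized network function onto such a CaaS platform might look like, assuming a Kubernetes cluster and the official `kubernetes` Python client (the namespace, image and resource names below are illustrative):

```python
# Minimal sketch: deploying a containerized network function (CNF) onto a
# Kubernetes-based CaaS platform using the official Python client.
# The namespace, image and names are illustrative assumptions.

from kubernetes import client, config

config.load_kube_config()            # or config.load_incluster_config()
apps = client.AppsV1Api()

deployment = client.V1Deployment(
    api_version="apps/v1",
    kind="Deployment",
    metadata=client.V1ObjectMeta(name="upf-cnf"),
    spec=client.V1DeploymentSpec(
        replicas=2,
        selector=client.V1LabelSelector(match_labels={"app": "upf-cnf"}),
        template=client.V1PodTemplateSpec(
            metadata=client.V1ObjectMeta(labels={"app": "upf-cnf"}),
            spec=client.V1PodSpec(
                containers=[
                    client.V1Container(
                        name="upf",
                        image="registry.example.com/cnf/upf:1.0",  # illustrative image
                    )
                ]
            ),
        ),
    ),
)

# Create the CNF; it runs alongside existing VNFs managed by the VIM underneath.
apps.create_namespaced_deployment(namespace="core-network", body=deployment)
```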
NFV vs cloud native: the key differences
While NFV virtualizes network services traditionally run on dedicated hardware, cloud-native infrastructure is designed to run applications natively in the cloud using microservices, containers and other cloud-native technologies.
| | NFV | Cloud-native |
| --- | --- | --- |
| Architecture | Virtual network functions (VNFs) running on virtual machines (VMs) through a hypervisor (virtualization layer) on data center hardware, typically standardized commercial off-the-shelf (COTS) servers. | Microservices deployed in containers, orchestrated by platforms such as Kubernetes. They can run on bare-metal infrastructure or on top of VMs, with bare metal being the most efficient. |
| Scalability | VNFs scale by adding more VMs. | Fine-grained scalability through individual microservices and containers. |
| Resource efficiency and deployment speed | The VM hypervisor layer results in higher overhead and slower setup and instantiation times. | Lightweight containers running directly on bare-metal infrastructure remove the need for a hypervisor layer, reducing overhead and improving deployment speed. |
| Resilience and fault tolerance | Resilience through VM failover mechanisms. | Designed for resilience with self-healing, container orchestration and microservice redundancy. |
| Operational model | Typically managed using traditional IT and network management tools. | Utilizes DevOps practices, CI/CD pipelines and cloud-native management tools. |
| Adaptability and flexibility | Generally less flexible; changes often require redeployment of VMs. | Highly adaptable; microservices can be updated independently without impacting the entire system. |
| Development cycle | The monolithic nature of VNFs results in a slower innovation cycle. | Automated CI/CD pipelines and microservice architecture contribute to faster innovation cycles. |
| Cost efficiency | Potentially higher costs due to VM resource needs and overheads. | Typically more cost-efficient, with better resource utilization and dynamic scaling. |