How pre-staging can help address the Open RAN deployment challenges

The power of pre-staging in overcoming Open RAN deployment challenges.

One of the major attractions of Open RAN is the level of flexibility offered by being able to choose from an almost limitless array of hardware and software, but that level of choice brings its own challenges. To avoid chaos in Open RAN deployments, especially at the far edge, it is important for vendors to come together to find optimized solutions that benefit and give confidence to communications service providers (CSPs). Pre-staging can solve some of the major multi-vendor challenges in deployment projects from the very beginning – let's take a look at how this can be done.

Open RAN deployment challenges can be divided into three main categories:
  1. Deployment diversity
    Topologies of RAN deployments are very diverse, with some deployments having hundreds of different variations. This creates a situation that is crying out for the automation of the processes leading up to deployment itself.
  2. Multi-vendor complexity
    Managing and optimizing RAN deployments at the far edge has been one of the main areas of focus for equipment vendors and CSPs alike, and many have achieved a certain level of maturity in product and service automation capabilities. When it comes to Open RAN, the dynamics change quickly due to the nature of the multi-vendor ecosystem – each vendor and component brings a different level of maturity and a different way of working to the deployment process, and so far, standardization efforts haven’t yielded the desired results.
  3. Multi-vendor life cycle management
    If deploying a multi-vendor Open RAN solution is complex, managing its life cycle is an altogether different beast to handle, and this is a subject we will talk about more in our subsequent blogs.

For now, let’s talk about how a pre-staged Open RAN deployment can help when it comes to tackling the Open RAN deployment challenges.

Site-building activities for Open RAN deployment

Ask any RAN deployment expert in the field about the single most important key performance indicator (KPI) to achieve when it comes to far edge, and their answer is very likely to be “first time right”, meaning that activities are performed correctly the first time so that no further work or changes are necessary.

So, what exactly is the far edge?

When designing and deploying an Open RAN network, the geographically distributed sites can be sorted into groups, labeled from the center of the network all the way out to the radio sites close to the antennas.

The central sites are typically hosted in data centers – these can be large data centers or smaller, further distributed ones (also known as edge data centers). In a radio network design, the virtualized central units (vCUs) are among the key elements hosted in these edge data centers, alongside other elements such as management and orchestration nodes.

The distributed RAN sites, located away from the data centers, are called far-edge sites. These sites are typically situated close to the antennas and host virtualized distributed units, among other elements in an Open RAN topology.

With that in mind, why is what happens at the far edge so critical?

The answer is that far-edge sites are built to meet the requirements for low latency and high throughput in the network, and from a deployment perspective, the high number of far-edge sites requires a more industrialized deployment process. In many cases, the sites are also in the most remote of locations, so the time needed to travel to them also has to be taken into consideration. As a result, “first time right” is already an important factor for purpose-built networks, and that importance will only increase in Open RAN, driven by more cloudification and multi-vendor sites.

In total, we are talking about thousands of far-edge sites, with a crew of perhaps two or three members visiting each of these sites to set them up once they are physically built. Another key factor to consider is the travel time to and from the sites, which depends on the geography of the deployment area.

Let’s take a look at a typical site-building activity once the crew gets to the location, illustrated in the figure below:

Typical site-building activity

Depending on the CSP’s particular circumstances, these activities usually need to be completed within a certain maintenance window, and though the length of these windows can vary, in our experience they tend to average around six hours. With purpose-built RAN systems (hardware and software from a single vendor, such as the Ericsson Radio System) the software is pre-loaded, which means less needs to be done in terms of software installation and integration once the baseband unit reaches the site.

But when it comes to Open RAN, we are often talking about a commercial off-the-shelf (COTS) server as the baseband unit – essentially an empty box that is shipped to the site and needs to have the relevant software installed and configured there. This leads to two issues: additional time for the crew to complete the necessary installations at the site, and an increased risk of failure due to the higher number of manual interventions and tasks that need to be done there. Both work against the principle of getting everything “first time right”, and the extra on-site work also demands more of the field crews’ competence level, further increasing cost.

Imagine a crew waiting for the software to be downloaded remotely and installed (a process which normally takes around three hours), only to discover after a long day on the site that something has gone wrong either at the Containers-as-a-Service (CaaS) layer or at the cloud-native network function (CNF) layer, and that they have to re-do all of these activities. Depending on the situation, the crew either has to leave the site and return another day or extend their stay for another four to six hours to complete the task – time that they could have used to complete a deployment at another site. All this leads to delays in overall project planning and in the completion of RAN rollout activities, which ultimately also leads to increased costs.

Pre-staged deployments tackle this problem head-on, reducing the risk of such failures – here's how.

Open RAN pre-staging

As the name suggests, pre-staging involves installing the software components onto the COTS hardware before it is dispatched to the site. This can be done at the hardware vendor’s factory, a third-party (logistics provider) warehouse, one of the Open RAN vendors’ facilities (such as an Ericsson facility) or at the CSP’s own facility, with the location and logistics decided according to the local scenario. If planned and coordinated well, a pre-staged deployment can be much more effective in terms of both cost and efficiency than an on-site installation backed by remote systems.

Pre-staging is nothing new in the software world – it has been used for years in enterprise rollouts and by hyperscale cloud providers (HCPs) for their on-premises Kubernetes or OpenStack-based solutions. In the past, Linux or Kubernetes software was simply installed on COTS hardware and then shipped out, but telecom systems are not all the same, and that is why close cooperation among the major vendors is key – especially between the virtualization software (CaaS) vendor and the virtualized distributed unit (vDU) software vendor.

In the telecom world, it is not simply a matter of taking an open-source cloud platform and loading a telco-specific CNF (such as a vDU) on top of it. Instead, the CaaS, Kubernetes platform or cloud infrastructure stack is highly customized and optimized for the telco CNF, and this requires special configurations and modules to be enabled for it to function.

This is why the concept of pre-integrated Open RAN/Cloud RAN solutions is the best bet out there, and more and more CSPs are going down this path. While pre-loading such a CaaS, one must ensure that the pre-verified configurations are applied correctly according to the design, and it doesn’t end there: to make a more optimized pre-staged solution, we also need to pre-load the vDU CNF software on top of the CaaS. This requires the knowledge to pull the software package and install it correctly according to the specifications.
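
To make the idea more concrete, here is a minimal, purely illustrative Python sketch of those two steps: writing a pre-verified CaaS node profile and installing the vDU CNF on top of it with Helm. The tuning parameters, chart name, release name and file names are all hypothetical placeholders, not the actual Ericsson or partner interfaces.

```python
import json
import subprocess

# Hypothetical pre-verified CaaS tuning for a vDU node; the real parameters
# and values come from the validated, pre-integrated solution design.
CAAS_NODE_PROFILE = {
    "hugepages_1g": 32,       # huge pages reserved for the vDU user plane
    "sriov_vfs_per_nic": 8,   # SR-IOV virtual functions for fronthaul traffic
    "isolated_cpus": "4-31",  # cores isolated for real-time workloads
    "ptp_enabled": True,      # precise time sync required by the radio stack
}

def write_caas_profile(profile: dict, path: str = "node-profile.json") -> None:
    """Write the pre-verified node profile where the (hypothetical) CaaS
    installer expects to find it."""
    with open(path, "w") as fh:
        json.dump(profile, fh, indent=2)

def install_vdu_cnf(release: str, chart: str, values_file: str) -> None:
    """Install the vDU CNF on top of the CaaS with Helm; the chart, release
    and values file names are placeholders."""
    subprocess.run(
        ["helm", "install", release, chart, "-f", values_file, "--atomic"],
        check=True,
    )

if __name__ == "__main__":
    write_caas_profile(CAAS_NODE_PROFILE)
    install_vdu_cnf("vdu-site-001", "vendor-repo/vdu-cnf", "site-001-values.json")
```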

An Ericsson pre-staging solution covers all possible scenarios that are needed for a smooth deployment.

How the Ericsson pre-staging solution works

With Ericsson’s Open RAN pre-staging solution, we go further by pre-loading the following:

  1. Firmware, BIOS settings, Linux and CaaS (both Ericsson and 3PP CaaS)
  2. Ericsson vDU software
  3. Site-specific configuration for the CaaS
  4. Site-specific radio network configuration
  5. Licenses

With our unique approach, we have introduced automated, Git-based and cloud-native tools and processes to carry out all of these tasks smoothly, with the highest possible accuracy and at scale (with multiple vDUs being readied at the same time, covering all of the above). We have further added health checks and the ability to generate reports about the installation. Ericsson has also worked with key ecosystem partners to make sure these integrations are officially validated.
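
To illustrate the principle (and only the principle), the following Python sketch shows what a Git-based, automated pre-staging pipeline of this kind could look like: it pre-stages several servers in parallel, runs simple health checks and writes a report. The repository URL, server names and step names are assumptions made for the example, not Ericsson's actual tooling.

```python
import concurrent.futures
import datetime
import json
import subprocess

CONFIG_REPO = "https://example.com/prestaging-configs.git"  # placeholder URL
SERVERS = ["vdu-bench-01", "vdu-bench-02", "vdu-bench-03"]  # servers on the bench
STEPS = ["firmware", "caas", "vdu_cnf", "site_config", "licenses"]

def fetch_site_configs(repo: str, dest: str = "configs") -> None:
    """Pull the site-specific configuration data kept under Git control."""
    try:
        subprocess.run(["git", "clone", "--depth", "1", repo, dest], check=True)
    except (OSError, subprocess.CalledProcessError) as err:
        print(f"warning: could not fetch configs ({err}); using local copies")

def run_step(server: str, step: str) -> bool:
    """Placeholder for the real automation of one pre-staging step."""
    print(f"[{server}] executing step: {step}")
    return True  # a real implementation would return the actual health status

def prestage(server: str) -> dict:
    """Run all pre-staging steps for one server and collect a health summary."""
    results = {step: run_step(server, step) for step in STEPS}
    return {"server": server, "steps": results, "passed": all(results.values())}

def main() -> None:
    fetch_site_configs(CONFIG_REPO)
    with concurrent.futures.ThreadPoolExecutor() as pool:
        results = list(pool.map(prestage, SERVERS))
    report = {
        "generated": datetime.datetime.now(datetime.timezone.utc).isoformat(),
        "results": results,
    }
    with open("prestaging-report.json", "w") as fh:
        json.dump(report, fh, indent=2)

if __name__ == "__main__":
    main()
```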

Once the device is pre-loaded and shipped to the specific site, a further automated process kicks in to re-home the device to the central management systems, including service management and orchestration platforms such as the Ericsson Intelligent Automation Platform and cloud infrastructure managers such as Ericsson’s Operations Manager Cloud.

The re-homing process involves a call-home procedure initiated by the vDU server towards the network management system (Service Management and Orchestration, or SMO, and the cloud manager). This triggers a workflow that recognizes the device and takes over its further life cycle management, shielding the field crew from the complexities of multi-vendor software installations and, most importantly, getting the deployment done accurately at the first time of asking.
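
A minimal sketch of what the call-home step could look like from the server side is shown below, assuming a hypothetical registration endpoint and payload; the real procedure follows the interfaces of the SMO and cloud manager in use.

```python
import json
import socket
import time
import urllib.request

SMO_URL = "https://smo.example.net/callhome"  # hypothetical endpoint, not a real API

def call_home(site_id: str, serial_number: str, retries: int = 5, delay: int = 60) -> dict:
    """Announce the pre-staged vDU server to the management system and wait
    to be adopted into its life cycle management."""
    payload = json.dumps({
        "site_id": site_id,
        "serial_number": serial_number,
        "hostname": socket.gethostname(),
    }).encode()
    request = urllib.request.Request(
        SMO_URL, data=payload, headers={"Content-Type": "application/json"}
    )
    for attempt in range(1, retries + 1):
        try:
            with urllib.request.urlopen(request, timeout=10) as response:
                return json.load(response)  # e.g. an acknowledgement from the SMO
        except OSError as err:
            print(f"call-home attempt {attempt} failed: {err}")
            time.sleep(delay)
    raise RuntimeError("management system did not respond to the call-home request")

if __name__ == "__main__":
    print(call_home(site_id="SITE-001", serial_number="SN-DEMO-123"))
```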

With Ericsson’s pre-staging solution, we leverage our experience to complete the software installation, radio configuration and integration activities within a time window of 30 minutes – a huge step up from the overall process without pre-staging.

One major part of pre-staging is configuring the radio parameters using site-specific configurations. Most of the time, these configurations either only become available within 24 to 48 hours of the site deployment or tend to be modified at the last moment, which makes the choice of location for pre-staging just as important as the process itself. Our experience says this needs to be done at last-mile delivery centers and not at a central factory warehouse, and Ericsson’s pre-staging solution provides this flexibility.
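
One simple way to guard against such last-minute changes, sketched below in Python with assumed file locations, is to compare a digest of the radio configuration applied during pre-staging with the latest version pulled just before the unit ships, and to re-run the site configuration step only if they differ.

```python
import hashlib
import pathlib

def config_digest(path: str) -> str:
    """Return a SHA-256 digest of a site-specific radio configuration file."""
    return hashlib.sha256(pathlib.Path(path).read_bytes()).hexdigest()

# Illustrative usage; the file locations and surrounding workflow are assumptions.
if __name__ == "__main__":
    applied = config_digest("applied/site-001-radio-config.json")  # captured at pre-staging
    latest = config_digest("latest/site-001-radio-config.json")    # refreshed before shipping
    if applied != latest:
        print("Radio configuration changed: re-run the site configuration step.")
    else:
        print("Radio configuration unchanged: the unit is ready to ship.")
```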

Together with our partners, Ericsson is further working to develop other use cases such as emergency hardware replacement at the far edge using pre-staged processes and tools.

Conclusion

Open RAN brings its own unique advantages and challenges. The Ericsson pre-staging solution reduces the risk and complexity by taking disaggregated Open RAN deployment as close as possible to ‘first time right’, giving CSPs and field crews the confidence that they can complete this otherwise complex and time-consuming task.

The additional value for CSPs is that they can leverage their current deployment resources with almost no competence lift and complete the deployment within a relatively short maintenance window at the site. Ericsson pre-staging provides a unique service, bringing a COTS server close to being a fully functional vDU baseband unit, enabled by our advanced service automation capabilities and close coordination with pre-integrated solution partners. The flexibility and openness of our solution make it possible to handle the overall process at the locations of choice, without compromising on quality, speed or scale.

As we note in our paper on the future of RAN services, the evolution towards Open RAN is one of the most important trends in the telecommunications industry today. In future posts we will talk about the next steps, including testing services for Open RAN – but for now, the journey begins with pre-staging and deploying the networks of the future in a place and at a time that suits your network needs.
