Objective-focused dual connectivity networks: Data-driven prioritization of 5G NSA frequency bands
5G non-standalone access (NSA) is a natural step in the progression from LTE to 5G. Combining user experience with data-driven insights can help strike an optimal balance between enhanced user satisfaction and signaling costs, positioning 5G NSA as an effective intermediate step between LTE and standalone 5G. This approach supports technology leadership while keeping standalone 5G deployment as the ultimate goal.
Maximizing asset utilization helps keep costs in check and reduces capital expenditures. In the current telecom landscape, communication service providers (CSPs) are maximizing the use of mature long-term evolution (LTE) technology to meet their goals (Figure 1). The non-standalone access (NSA) mode [1] offers a way to deploy 5G while continuing to leverage existing LTE assets. In NSA mode, adding just the gNodeB (gNB) to the radio network is sufficient, as it can connect through the existing LTE eNodeB, using the current LTE packet core to maintain end-to-end connectivity. The 3rd Generation Partnership Project (3GPP) has recommended distinct options for 5G evolution, as depicted in Figure 2.
Option 1 pertains to legacy LTE networks, whereas Options 2, 4, and 7 provide a framework for standalone 5G deployment and require a 5G packet core. Option 6 assumes that all radios are migrated to 5G while retaining the evolved packet core (EPC). However, this approach might not be the best alternative, because it does not fully leverage the capabilities of 5G and still incurs the expenses associated with the migration. The most cost-effective option for rapid 5G coverage is Option 3, which involves deploying 5G base stations and interfacing them with the LTE packet core through LTE base stations. This approach allows the deployment of the 5G core and new backhaul to be postponed, thereby keeping costs under control. However, it comes with the trade-off of not fully harnessing 5G's potential.
Option 3 is referred to as E-UTRA-New Radio (NR) dual connectivity (EN-DC). In this option, an LTE base station operates as the anchor, and the user equipment (UE) performs its initial registration to the anchor cell. The LTE anchor cell then adds one or more secondary cells belonging to 5G NR.
In response to a B1 event, the anchor LTE base station/cell can select a candidate NR cell to act as a secondary node (SN). This selection can be based on pre-configured settings or on measurements reported by the UE.
In this post, we explore how machine learning (ML) approaches can dynamically adjust the priority of 5G NR frequency bands. By varying these priorities, we aim to overcome the limitations of static definitions and achieve higher throughput, longer 5G session durations, and fewer switchovers between LTE and NR.
What is a B1 measurement event?
The B1 event is used for inter-RAT (radio access technology) handover procedures, such as transitioning from LTE to 5G. In this scenario, the criteria depend on the target RAT (for example, 5G NR) rather than on serving-cell coverage. A B1 event is triggered when the measured signal from a neighboring inter-RAT (5G) cell exceeds a configured threshold, after accounting for offset and hysteresis.
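The entry condition can be sketched as a simple inequality. The following Python snippet is an illustrative simplification of the 3GPP TS 36.331 definition; the parameter names and example values are assumptions, not a full implementation:

```python
def b1_event_triggered(mn_dbm, offset_db, hysteresis_db, threshold_dbm):
    # Simplified B1 entry condition: the measured NR neighbor (Mn) plus
    # its configured offset must exceed the threshold by more than the
    # hysteresis before the UE reports the event.
    return mn_dbm + offset_db - hysteresis_db > threshold_dbm

# NR neighbor at -98 dBm, 2 dB offset, 1 dB hysteresis, -100 dBm threshold
print(b1_event_triggered(-98, 2, 1, -100))  # True
```

The hysteresis term prevents ping-pong reporting when the neighbor measurement hovers around the threshold.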
Challenges in implementing 5G NR in the current setup
In EN-DC, the UE initially connects to LTE just as in legacy networks. After the attach procedure, the network can add a 5G cell through RRC reconfiguration messages. The network can either:
- configure blind 5G NR cell addition without any measurement feedback from the UE, or
- configure the UE to perform NR cell measurements under a B1 event and, based on the reports, add the 5G NR cell.
Configuration-based blind addition is static and does not consider the signal strength of NR cells. On the other hand, a measurement-based setup is dynamic and flexible, adjusting to select the best available NR cell. During a B1 event, the network can trigger a measurement of the 5G NR frequency bands, guided by the frequency priority levels assigned to those target bands. Based on the UE's capabilities, the prioritized frequency bands are communicated to the UE through an RRC reconfiguration message. Figure 4 illustrates the UE capability information provided by the UE and the band combinations it can support.
Figure 5 shows an example of the measurement-based scenario. The network can be configured with NR frequency band priorities; based on the UE capability report (as shown in Figure 4), a prioritized list of bands is shared with the UE for measurement.
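The step of combining the network's configured priorities with the UE's capability report can be sketched as a simple filtered intersection. The band identifiers and ordering below are assumptions for illustration:

```python
# Network's configured NR band priority order (highest first, assumed values)
network_priority = ["n78", "n258", "n41", "n1"]

# Bands the UE reported it supports in its capability information (assumed)
ue_supported = {"n78", "n1"}

# The measurement list sent to the UE keeps the network's ordering but
# drops bands the UE cannot measure.
measurement_list = [band for band in network_priority if band in ue_supported]
print(measurement_list)  # ['n78', 'n1']
```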
In most current deployments, frequency priority is manually set and typically remains static as defined. It is determined by the expertise of a subject matter expert (SME) or by CSP policy and does not change based on network behavior and performance. UEs, however, are distributed across the entire cell coverage area. Keeping the same priority irrespective of UE location, and without assessing target NR bands in terms of load, coverage potential, and energy efficiency, may fail to leverage 5G's full potential. This can result in frequent user-plane switchovers between 5G and LTE, adding significant signaling overhead and degrading the user experience.
Objective-oriented, time-based NR frequency band prioritization
The ML-based solution can have a centralized objective controller (OC) that provides exposure for user interaction, as shown in Figure 6. The objectives are passed on to the respective layers of LTE and 5G NR. Based on the configured objective, ML model(s) can either be hosted directly within the anchor node or managed and orchestrated centrally through platforms such as the Ericsson Intelligent Automation Platform (EIAP) [2].
The objectives might include high throughput, extended 5G session duration, broader 5G coverage, and improved energy efficiency. These goals can be tailored to different times of the day. For example, during peak traffic hours, the objective might focus on providing high user throughput, while during off-peak hours, the priority could shift to energy efficiency. Appropriate model selection and inference pipelines will be activated according to the configured objective for each time period. The relevant network nodes, such as eNB and gNB, will gather the necessary performance data for each objective.
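The time-of-day mapping described above can be sketched as a simple schedule lookup. The time windows and objective names below are illustrative assumptions, not values from the proposed system:

```python
from datetime import time

# Assumed schedule: each entry maps a window of the day to the objective
# that drives model selection and inference in that window.
SCHEDULE = [
    (time(7, 0), time(22, 0), "high_throughput"),    # peak traffic hours
    (time(0, 0), time(7, 0), "energy_efficiency"),   # off-peak
    (time(22, 0), time(23, 59), "energy_efficiency"),
]

def active_objective(now):
    # Return the objective whose window contains the current time.
    for start, end, objective in SCHEDULE:
        if start <= now < end:
            return objective
    return "energy_efficiency"  # default outside configured windows

print(active_objective(time(12, 0)))  # high_throughput
print(active_objective(time(3, 30)))  # energy_efficiency
```

In practice the objective controller would own this schedule, so each eNB/gNB only needs to know which objective is currently active.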
The data would be passed to the ML model(s) for training and, subsequently, inference. The trained ML model would perform two actions:
- classification of UEs into subscriber groups (SGs), and
- band prioritization per objective and per subscriber group.
These decisions will be based on UE characteristics such as the requested service, location, and so on.
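As a stand-in for the trained classifier, the grouping step can be sketched with simple rules over the same UE characteristics. The distance threshold, service classes, and group names are all hypothetical:

```python
# Hypothetical rule-based stand-in for the ML classifier: assign a UE to a
# subscriber group (SG) from its distance to the serving site and the
# requested service class.
def subscriber_group(distance_m, service):
    at_edge = distance_m > 400  # assumed cell-edge distance threshold (meters)
    if service == "embb":
        return "SG_EDGE_EMBB" if at_edge else "SG_CENTER_EMBB"
    return "SG_DEFAULT"

print(subscriber_group(120.0, "embb"))  # SG_CENTER_EMBB
print(subscriber_group(650.0, "embb"))  # SG_EDGE_EMBB
```

A trained model would replace these fixed rules with clusters learned from performance data, but the interface, UE characteristics in, subscriber group out, stays the same.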
Figure 7 summarizes the high-level flow of ML model training and inference workflows.
The ML model(s) would be trained for the configured objective, using the associated performance and configuration metrics together with UE characteristics such as location and capability. The trained ML model may be stored in the model catalog(s). During inference, a target band will be determined for each subscriber group based on the objective, the time of day, and the subscriber group itself. For example, UEs closer to the cell edge may be given a higher priority for NR bands with strong coverage, while UEs closer to the base station can be assigned a higher priority for bands with greater bandwidth.
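The inference step described above amounts to a per-(objective, subscriber group) lookup of an ordered band list. The table below is a hypothetical example of what a trained model might produce; band names and groupings are assumptions:

```python
# Hypothetical model output: an ordered NR band priority list per
# (objective, subscriber group). Mid-band/mmWave choices reflect the
# coverage-versus-bandwidth trade-off from the text.
BAND_PRIORITY = {
    ("high_throughput", "SG_CENTER_EMBB"): ["n258", "n78", "n1"],  # bandwidth first
    ("high_throughput", "SG_EDGE_EMBB"):   ["n1", "n78", "n258"],  # coverage first
    ("energy_efficiency", "SG_DEFAULT"):   ["n78", "n1"],
}

def target_bands(objective, sg):
    # Fall back to a single assumed default band for unknown combinations.
    return BAND_PRIORITY.get((objective, sg), ["n78"])

print(target_bands("high_throughput", "SG_EDGE_EMBB"))  # ['n1', 'n78', 'n258']
```

The returned list would then populate the prioritized measurement configuration sent to the UE in the RRC reconfiguration message.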
The performance of the band priority list would be evaluated against the baseline priority definition, and any necessary adjustments to the model's objectives could be made based on this comparison. This step may result in re-clustering of the UEs associated with the respective subscriber groups. Data-driven techniques [3] show that roughly 29 percent energy saving can be achieved in devices if EN-DC activation takes the number of radio link failures (RLFs) into account (which can be defined as an objective). Another paper [4] highlights that voice-call muting instances can be reduced to zero, but at the cost of additional EN-DC activations during the dynamic adjustment process. The proposed solution aims to minimize this activation trade-off by adjusting not only the activation thresholds but also the frequency bands, based on the expected performance aligned with the given objective.
Thus, ML can help bring intelligence to multiple aspects of a UE's association with the network. Adopting intelligent solutions would help in gaining useful insights and in driving networks toward zero-touch automation and self-sufficiency.
Summary
Objective-driven frequency priority determination may help establish a user-adaptive 5G NR leg. This minimizes connection switchovers and signaling overhead, yielding a better user experience and improved network performance. The approach ensures that users continue to benefit from 5G as CSPs transition toward 5G standalone.
Learn more
5G standalone (5G SA) experience 5G without limits
Hybrid 5G Core network with legacy device support
Measuring carrier aggregation & dual connectivity
5G Implementation Guideline - July 2019
References:
- Non-standalone and standalone: Two paths to 5G
- Mapping innovation through the EIAP ecosystem
- K. Jha, Nishant, A. K. Jangid, R. P. Kamaladinni, N. P. Shah and D. Das, "Efficient Algorithm to Reduce Power Consumption for EUTRA-New Radio Dual Connectivity RAN Parameter Measurements in 5G," 2020 IEEE 3rd 5G World Forum (5GWF), Bangalore, India, 2020, pp. 536-541, doi: 10.1109/5GWF49715.2020.9221291
- M. A. Zaidi, M. Manalastas, M. U. B. Farooq, H. Qureshi, A. Abu-Dayya and A. Imran, "A Data Driven Framework for QoE-Aware Intelligent EN-DC Activation," in IEEE Transactions on Vehicular Technology, vol. 72, no. 2, pp. 2381-2394, Feb. 2023, doi: 10.1109/TVT.2022.3211741