The ability to separate traffic flows enables communication service providers (CSPs) to offer enhanced support for time-critical communication (TCC) applications with stringent performance requirements in terms of latency and reliability. Known as differentiated connectivity, this approach enables CSPs to monetize their network investments while meeting the diverse demands of various applications, such as remote-control systems and cloud gaming on top of mobile broadband (MBB).
With differentiated connectivity, traditional best-effort traffic flows are handled separately from those originating from either selected user equipment (UE) groups (such as handheld gaming devices and remotely driven cars) or from selected application categories (such as video streaming and teleconferencing applications). The flows that come from these user groups and applications are identified and treated separately by the network in accordance with their own unique set of requirements or required performance levels.
At present, TCC traffic represents only a small fraction of the total traffic in most networks, but it is expected to grow in the years ahead. For TCC services to function as intended, the application places strict requirements on data transmission delay and reliability: application data packets must be delivered within a specific latency budget, and only a very small fraction of the data may arrive outside it, to preserve a good quality of experience.
Like any other application, TCC applications can be categorized as uplink (UL)-heavy and downlink (DL)-heavy, suitable for being served by corresponding UL- or DL-heavy performance levels. UL-heavy performance levels are relevant in cases where the UE sends large amounts of data to the server – in remote-control applications such as teledriven cars, for example. DL-heavy performance levels are relevant in cases where the UE receives large amounts of data from the server. Recreational applications such as cloud gaming are the most common examples.
Our latest TCC research demonstrates that while the share of TCC traffic in the network is relatively low, priority scheduling – a TCC feature in 5G standalone – enables 5G radio access networks (RANs) to support both DL- and UL-heavy performance levels with strict latency and reliability targets without negatively impacting regular, best-effort MBB users.
Handling time-critical communication – challenges
A fundamental difference between TCC and traditional best-effort data traffic lies in the bound on latency. As mentioned earlier, a cloud-gaming or remote-control application might demand minimal end-to-end (E2E) delay from the mobile device to the server, with only a small fraction of data arriving outside the desired delay budget. If these requirements are not met, users may have a poor experience (in the case of cloud gaming) or there may be safety risks in the vicinity of remote-controlled machinery.
Such stringent E2E delay requirements give the RAN very little time for traffic handling. In some cases, the allocated RAN latency budget from the total E2E budget might be as short as 20-40ms. There is little to no room for mistakes in data transmission, which means that retransmissions should be avoided as much as possible. A conservative modulation and coding scheme (MCS) is needed to decrease transmission errors, even though we know that conservative MCS will lead to lower spectral efficiency, which will, in turn, result in lower network capacity.
On the other hand, the instantaneous data rate during transmission must be higher than the nominal TCC application rate, especially if the data packet interarrival time is much larger than the delay budget. Additionally, not all of the delay budget is available for actual data transmission: a significant portion is consumed by control signaling and coordination overhead. Possible retransmissions must also be accounted for, which drives the instantaneous bitrate much higher than the average application throughput over a longer period. The final consideration to bear in mind is that TCC traffic shares resources with other traffic – both other time-critical traffic and best-effort traffic. If not handled properly, this will lead to congestion, making congestion avoidance and control mechanisms essential.
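To illustrate why the instantaneous bitrate must far exceed the application's average rate, consider the following back-of-the-envelope sketch. The packet interarrival time and signaling overhead used here are illustrative assumptions, not values from our study:

```python
# Illustrative sketch: required instantaneous bitrate for a latency-bounded flow.
# Interarrival and overhead figures are assumptions for illustration only.

def required_instantaneous_rate_mbps(app_rate_mbps: float,
                                     interarrival_ms: float,
                                     delay_budget_ms: float,
                                     overhead_ms: float) -> float:
    """Rate needed to push one application packet through the RAN in time."""
    # Each packet carries the data accumulated over one inter-arrival period.
    packet_bits = app_rate_mbps * 1e6 * (interarrival_ms / 1e3)
    # Control signaling and coordination consume part of the delay budget.
    usable_time_s = (delay_budget_ms - overhead_ms) / 1e3
    return packet_bits / usable_time_s / 1e6

# A 5Mbps application sending a packet every 60ms, with a 20ms RAN budget
# of which an assumed 8ms is lost to signaling overhead:
rate = required_instantaneous_rate_mbps(5.0, 60.0, 20.0, 8.0)
print(f"{rate:.0f} Mbps")  # prints 25 Mbps: five times the nominal rate
```

The larger the interarrival time relative to the delay budget, the larger the gap between the average application rate and the instantaneous rate the RAN must provide.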
Previous research [1] has demonstrated that it is unrealistic to expect satisfactory performance for TCC services in these circumstances without implementing 5G RAN features addressing TCC. The following set of network features (known as the TCC features set) has therefore been defined to support TCC users:
- Priority scheduling for latency-bounded traffic
- Admission control to avoid resource starvation caused by a few users at locations with unfavorable propagation
- Robust link adaptation to balance efficiency and the need for retransmissions
- Configured grants for buffer-status reports to minimize the amount of time it takes for the UE to start data transmission once the data has arrived.
Assessing the impact of time-critical communication traffic on the radio access network
To better understand how the resource cost of TCC traffic compares with that of traditional best-effort traffic, we chose to work with a 5G network model that resembles networks commonly deployed in dense urban areas in the US such as in Nob Hill in San Francisco, California, or the Midtown area in Atlanta, Georgia. The daytime population density in such areas is usually high: for the purposes of our study, we assumed 30,000 people per square kilometer.
We generally assume that 80 percent of users are located indoors and 20 percent are outdoors. However, in one of the UL-heavy performance levels, we assumed that all the UEs were outdoors. The results are marked accordingly.
Additionally, we have studied an indoor deployment of a visitor-centric venue such as a shopping mall or railway station. The size of the venue is 65,000m2, comprising different types of areas including open spaces, long corridors, shopping areas and offices separated by walls. The venue is assumed to be served by an indoor small-cell system based on real deployments.
We consider MBB traffic demand in the 2028 timeframe, when the expected traffic consumption is around 40GB per month per smartphone [2]. If a CSP has a 30 percent subscriber share and a typical traffic share of 80 percent DL and 20 percent UL, the capacity demand in such an area is around 2,000Mbps/km2 in the DL direction and 200Mbps/km2 in the UL direction for this CSP.
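A demand figure of this kind can be derived roughly as follows. The busy-hour share of daily traffic and the days-per-month value in this sketch are our own illustrative assumptions, chosen only to show the shape of the calculation:

```python
# Back-of-the-envelope sketch of busy-hour DL capacity demand per km2.
# The 10 percent busy-hour share and 30-day month are illustrative assumptions.

population_per_km2 = 30_000
subscriber_share = 0.30          # CSP's share of the population
monthly_gb_per_sub = 40          # expected consumption in 2028 [2]
dl_traffic_share = 0.80
busy_hour_share = 0.10           # assumed fraction of daily traffic in the busy hour
days_per_month = 30

subs_per_km2 = population_per_km2 * subscriber_share
# Convert GB/month to Mbit carried in the busy hour, then to Mbps over one hour.
busy_hour_mbit = monthly_gb_per_sub * 8_000 / days_per_month * busy_hour_share
demand_mbps_per_km2 = subs_per_km2 * busy_hour_mbit / 3_600

dl_demand = demand_mbps_per_km2 * dl_traffic_share
print(f"DL demand: {dl_demand:.0f} Mbps/km2")  # on the order of 2,000Mbps/km2
```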
In the visitor-centric venue, where the density of people during the peak hour is even higher, we assume that the capacity demand is approximately 5,000Mbps/km2 in DL direction and 500Mbps/km2 in UL direction.
Because the market for TCC services is much less mature than it is for best-effort MBB services, it is difficult to estimate the capacity demand that these services pose. There are three main aspects to the uncertainty:
- It is hard to predict which services (tele-driven cars, cloud gaming, extended reality and so on) will dominate the mobile services landscape.
- We do not know what the exact requirements for each service will be.
- It is not clear how many TCC users and devices there will be in the network and how this will evolve over time.
In our study, we address the first two aspects of the uncertainty by concentrating on performance levels rather than the specific services. We address the third aspect by varying the fraction of the TCC traffic from 0 to 100 percent of the total traffic in the 5G RAN. We acknowledge that it is highly unlikely that the portion of latency-bounded traffic is more than a few percent of total traffic initially, but analyzing the wider interval reveals some interesting behaviors in the network.
We have analyzed two types of TCC services in which either DL or UL traffic dominates; hence the requirement focuses on either the DL or the UL performance level. The bitrate for UL demands ranges from 0.5 to 8Mbps, while the demands for DL are between 5Mbps and 10Mbps. Note that different performance levels have been studied in the sections focusing on DL and UL performance.
The RAN latency budget (20ms one-way) and reliability (99 percent) are the same for both DL and UL directions. The UL-heavy use case has been inspired by tele-driving, in which cars driving along the streets are controlled remotely. This has resulted in all UEs being located outdoors. Where applicable, it has been labelled as “outdoor” in the results section.
As the population density in the studied areas is high, the network is also dense: the inter-site distance between the macro sites is between 600 and 700m, and the network is further densified with street sites. In the current analysis, we assume that there is one street site per macro site and the street sites are deployed on utility poles located at macro sector borders.
Figure 1 provides an overview of the frequency assets and key network configurations. The allocated bandwidth differs slightly from deployment to deployment, which the table reflects by listing alternative options within a band. Low band is not typically deployed on street sites and has therefore been excluded.
| Band | Site type | Carrier frequency | Total bandwidth | TDD pattern (DL:UL:DL) |
| --- | --- | --- | --- | --- |
| Low-band FDD | Macro | 700MHz | 2×20 or 2×30MHz | N/A |
| | Street | N/A | N/A | N/A |
| | Indoor | N/A | N/A | N/A |
| Mid-band FDD | Macro | 2GHz | 2×40 or 2×45MHz | N/A |
| | Street | | | |
| | Indoor | 2GHz and 2.1GHz | 2×40MHz for both bands | N/A |
| Mid-band TDD | Macro | 3.5GHz | 1×120 or 1×160MHz | 4:2:4 |
| | Street | | | |
| | Indoor | | 1×100MHz | 4:2:4 |
Figure 1: RAN configuration
Performance outcomes for mixed traffic
The intent of our research was to investigate the impact of DL- and UL-heavy performance levels on RANs in dense urban areas and on indoor RANs in visitor-centric venues, in the context of strict delay and service-reliability targets. We studied the capacity impact of varying amounts of traffic for different traffic flows, measuring capacity in terms of the served traffic per area unit (Mbps/km2). MBB capacity is generally defined as the served traffic per area unit at which the fifth-percentile user throughput still meets some required level. In our study, the MBB throughput requirements are 10Mbps in the DL direction and 2Mbps in the UL direction.
For our purposes, the TCC capacity is measured differently from MBB. We measure it as the served traffic per area unit at which 95 percent of users are satisfied (able to transmit their data within a latency budget). The amount of data that must be transmitted within a required delay is set by the reliability requirement. In this study, it means that for any given user, more than 99 percent of its data must be delivered within the 20ms time window to declare the user “satisfied.”
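The two-level satisfaction criterion can be sketched as follows. The helper names are hypothetical, the per-packet delays would come from simulation traces, and for simplicity the sketch assumes equal-size packets so that "99 percent of data" reduces to "99 percent of packets":

```python
# Sketch of the user-satisfaction criterion behind the TCC capacity measure.
# A user is "satisfied" if at least 99 percent of its data arrives within the
# 20ms latency budget; a load point counts toward TCC capacity if at least
# 95 percent of users are satisfied. Helper names are hypothetical.

LATENCY_BUDGET_MS = 20.0
RELIABILITY = 0.99        # fraction of data that must meet the budget
SATISFIED_SHARE = 0.95    # fraction of users that must be satisfied

def user_satisfied(packet_delays_ms: list[float]) -> bool:
    # Equal-size packets assumed: data reliability == packet reliability.
    in_time = sum(1 for d in packet_delays_ms if d <= LATENCY_BUDGET_MS)
    return in_time / len(packet_delays_ms) >= RELIABILITY

def capacity_point_met(per_user_delays: list[list[float]]) -> bool:
    satisfied = sum(1 for delays in per_user_delays if user_satisfied(delays))
    return satisfied / len(per_user_delays) >= SATISFIED_SHARE
```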
Although it is evident that, for a user, both the DL and UL directions must function at a satisfactory level, we have divided the analysis into two parts: DL performance analysis and UL performance analysis. While the performance levels are defined by both DL and UL characteristics, the results shown are obtained by analyzing each direction in isolation.
Downlink performance analysis
Figure 2 illustrates MBB and TCC capacities at different MBB/TCC values in the DL direction in two different deployments: a dense urban outdoor environment and an indoor visitor-centric venue. The DL direction results are based on the following application throughput requirements:
- DL 5Mbps
- DL 10Mbps
- DL 8Mbps.
Figure 2: The combined capacity of MBB and TCC in a dense urban deployment (left) and an indoor visitor-centric venue (right)
The same one-way RAN latency (20ms delay) and reliability target (99 percent) are valid for all performance levels, and the TCC features set is used in all cases.
The lines in Figure 2 show the upper MBB+TCC capacity limit and the areas under the lines are where the requirements for both TCC and MBB services are fulfilled. For example, if there is 2,000Mbps/km2 MBB traffic in the network (according to the red line on the left side of Figure 2), there is sufficient capacity available for 1,000Mbps/km2 of TCC traffic.
Each point on the graph represents a different portion of MBB and TCC traffic in the network. For example, the rightmost value along the x axis represents network traffic when there is only MBB traffic in the system; the uppermost value along the y axis represents network traffic when only TCC traffic is present. The vertical dashed line in both graphs in Figure 2 represents the expected MBB capacity target in 2028.
The graphs in Figure 2 depict a near-linear relation between MBB and TCC capacity, indicating that increased TCC traffic will degrade MBB capacity. By looking at the slopes of the curves, we can estimate the spectral efficiency of TCC traffic relative to MBB traffic. For example, in the “DL=5Mbps” scenario (blue) on the left side of Figure 2, the capacity with MBB-only traffic (TCC capacity equal to 0Mbps/km2) is about 3,900Mbps/km2, whereas with TCC-only traffic (MBB capacity equal to 0Mbps/km2) it is about 2,500Mbps/km2. From this we can conclude that the spectral efficiency of MBB traffic is about 1.6 times higher; in other words, with the same amount of RAN resources we can transmit about 1.6 times fewer TCC application bits. A similar analysis can be done for other performance levels, and the methodology is applicable to UL performance as well.
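For a near-linear trade-off line, this slope-based estimate reduces to a ratio of the two axis intercepts read off the graph:

```python
# Estimating the relative spectral efficiency of MBB vs. TCC traffic from the
# axis intercepts of the capacity trade-off line ("DL=5Mbps", Figure 2, left).

mbb_only_capacity = 3_900   # Mbps/km2 served when TCC capacity = 0
tcc_only_capacity = 2_500   # Mbps/km2 served when MBB capacity = 0

# MBB traffic is carried roughly this many times more efficiently, i.e. the
# same RAN resources carry about 1.6x fewer TCC application bits.
efficiency_ratio = mbb_only_capacity / tcc_only_capacity
print(f"{efficiency_ratio:.1f}")  # prints 1.6
```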
Our calculations are based on the assumption that the busy-hour MBB traffic demand in the 2028 timeframe is approximately 2,000Mbps/km2 in dense urban areas and around 5,000Mbps/km2 in visitor-centric indoor venues. The graph on the left side of Figure 2 shows that the outdoor network can handle additional TCC traffic of 1,000Mbps/km2 or 1,300Mbps/km2, depending on the performance level. The spare capacity can be consumed by 200 or 130 users per square kilometer, respectively, who actively use TCC services. The number of users is derived simply by dividing the spare capacity (1,000Mbps/km2, for example) by the nominal application throughput requirement (5Mbps).
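The user-count derivation described above is simply this division, shown here as a trivial sketch:

```python
# Number of active TCC users that fit into the spare capacity: spare capacity
# divided by the nominal application throughput requirement.

def tcc_users_per_km2(spare_capacity_mbps: float, app_rate_mbps: float) -> int:
    return int(spare_capacity_mbps // app_rate_mbps)

print(tcc_users_per_km2(1_000, 5))   # 200 users/km2 at the 5Mbps level
print(tcc_users_per_km2(1_300, 10))  # 130 users/km2 at the 10Mbps level
```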
The graph on the right side of Figure 2 shows that the indoor network can handle additional TCC traffic of 6,000Mbps/km2 or 11,500Mbps/km2, depending on the performance level. The spare capacity can be consumed by 1,200 or 1,150 users per square kilometer who use TCC services.
Uplink performance analysis
Figure 3 shows the UL direction results for the following application throughput requirements:
- UL 0.5Mbps (indoor and outdoor users in a dense urban deployment)
- UL 2Mbps (indoor and outdoor users in a dense urban deployment)
- UL 5Mbps (indoor and outdoor users in a dense urban deployment)
- UL 3Mbps (indoor users in a visitor-centric venue deployment)
- UL 6Mbps (indoor users in a visitor-centric venue deployment)
- UL 8Mbps (indoor users in a visitor-centric venue deployment).
Figure 3: MBB and TCC capacities at different MBB/TCC values in the UL direction
The same one-way RAN latency (20ms delay) and reliability target (99 percent) are valid for all performance levels, and the TCC features set is used in all cases, with the exception of “UL=5Mbps /outdoor UE / San Francisco” (blue in Figure 3), in which only priority scheduling is implemented.
Similar to what we saw in the DL results, the graphs in Figure 3 indicate that the networks have spare capacity that can be filled with TCC users. The graphs in Figure 3 also depict a near-linear relation between MBB and TCC capacity, but only up to a certain point that depends on the performance level. This is particularly evident in the graph on the right side of Figure 3 for the performance level “UL=8Mbps” (orange), where a capacity cap can be observed. The more challenging UL performance level drastically increases the queuing delays: when a TCC packet arrives, the likelihood of it queuing behind other TCC packets increases.
The “UL=5Mbps” performance level shown on the left side of Figure 3 illustrates similar behavior both when all TCC features are implemented (purple) and when only priority scheduling (blue) is implemented. At the point when the fraction of TCC traffic is about to become too high, the capacity starts to decrease due to longer queuing delays.
The “UL=5Mbps” performance levels (blue and purple) on the left side of Figure 3 also demonstrate that the implementation of priority scheduling alone (blue) is sufficient for creating spare capacity to accommodate an additional 600Mbps/km2 of served traffic (about 120 TCC users per square kilometer) at the MBB target capacity. This indicates that deploying priority scheduling is the logical first step to accommodate TCC traffic. In the beginning, when the amount of TCC traffic is small relative to MBB best-effort traffic, priority scheduling should be sufficient. It is important to note, however, that the “UL=5Mbps” results are for outdoor users (remotely driven cars, for example). When comparing the “UL=5Mbps” and “UL=0.5Mbps” performance levels, one might wonder why the former achieves higher overall capacity. Firstly, the modeled networks are somewhat different: “UL=5Mbps” is a network in San Francisco while “UL=0.5Mbps” mimics a network in Atlanta. Secondly, the distribution of UEs differs: in San Francisco we assume that all users are outdoors (remotely driven cars), whereas in Atlanta we assume that 80 percent of users are indoors.
It is also interesting to compare “UL=2Mbps” (red) and “UL=0.5Mbps” (yellow) on the left side of Figure 3. There is a point at which the TCC capacity of the former surpasses the TCC capacity of the latter. This indicates that even though there are fewer users in the system after admission control for “UL=2Mbps,” the served traffic, aggregated over all users, is still higher when compared with “UL=0.5Mbps.” It is important to note that this does not translate directly to a higher number of actual users; it means that fewer users produce more traffic.
Finally, the rightmost “MBB only” capacity on the left side of Figure 3 differs between the performance levels. This is because of differences in the deployment areas: both are dense urban, but one is in Atlanta’s Midtown (with lower MBB-only capacity) and the other is in San Francisco’s Nob Hill area (with higher MBB-only capacity).
Conclusion
In this article we have explored the potential of differentiated connectivity in 5G mobile networks, focusing on time-critical communication (TCC) applications and their impact on 5G radio access networks (RANs). The goal of our research is to provide guidance to communication service providers (CSPs) about how to deploy 5G RANs with the ability to service TCC traffic, while simultaneously providing satisfactory service to best-effort mobile broadband (MBB) users.
We have demonstrated that the implementation of a TCC features set creates additional capacity that can be filled by traffic beyond best-effort MBB. The features included in the TCC features set are:
- Priority scheduling for latency-bounded traffic
- Admission control
- Robust link adaptation
- Configured grants for buffer-status reports.
Our results also indicate that priority scheduling alone can create sufficient capacity surplus in the early stages.
Our study investigated two types of TCC performance levels: uplink-heavy and downlink-heavy data traffic flows. The results demonstrate that by separating traffic flows based on user equipment groups or application categories and implementing priority scheduling, CSPs can better support innovative applications with specific performance requirements, particularly those with stringent latency and reliability needs alongside regular best-effort MBB. This approach enables CSPs to monetize network investments while meeting the diverse demands of various applications, such as remote-control systems and cloud gaming.
Acknowledgements
The authors would like to thank Rong Do, Antzela Kosta and Tomas Lundborg for their contributions to this article.