What is AI-powered drone mobility support?

How can AI-powered drone mobility support provide seamless connectivity and optimal system performance? Based on an award-winning article from the Ericsson internship program, we take a look at how AI solutions can empower drone connectivity in 5G networks.

Experienced Researcher, Radio

Senior Researcher, Radio

Master Researcher, Radio

Drone connectivity in the sky is an indispensable part of the Internet of Things (IoT): Anywhere, Anytime, Anything. 5G networks will provide wide-area, high-quality, and secure connectivity that can enable large-scale cost-efficient drone operations beyond visual line-of-sight range. In a recent summer internship project at Ericsson, we explored how Artificial Intelligence (AI) can empower drone mobility support in 5G networks. Our work received the Best Paper Award at the 2020 IEEE Wireless Communications and Networking Conference (WCNC 2020). The award is a recognition of the Ericsson internship program, which offers candidates a chance to learn about the world of work while working on projects that are changing the world of communications.  

Drones have many applications, ranging from package delivery and surveillance to remote sensing and IoT scenarios. The safe operation of drones relies on reliable and seamless wireless connectivity. Cellular technology is well-suited for providing wide-area, high-speed, and secure wireless connectivity to drones flying in the sky. This is largely due to the ubiquitous cellular infrastructure, global mobile standards, and a rich service repertoire for mobile users.

Challenges for drone mobility support

Leveraging cellular networks to connect drones poses several challenges. Existing cellular infrastructure uses base stations (BSs) with down-tilted antennas to enhance terrestrial coverage. This means that the main lobe of an antenna beam faces towards the ground, whereas the significantly weaker side lobes point in other directions, including up towards the sky.

Moreover, there are several null directions in a BS’s antenna pattern that may cause coverage holes in the sky. In a network with multiple BSs, a drone that connects to the BS providing the maximum received signal power therefore traverses a fragmented coverage pattern as it flies, as illustrated in the figure below.


Figure 1: Fragmented cell association pattern at a height of 300 m, assuming the drone connects to the BS that provides the maximum received signal power.


Additionally, drones can move with high speed in an arbitrary trajectory in the sky. The drone mobility, together with the fragmented cell association pattern in the sky, may result in a rapid fluctuation of received signal strength during the flight.

The figure below summarizes the challenges of drone mobility support in the sky. Due to these challenges, a mobile drone may need to frequently switch (handover) its connection from one BS to another to maintain reliable connectivity. Frequent handovers entail large signaling overhead and can lead to radio link failures due to unnecessary ping-pong handover events. Therefore, an efficient mobility management solution for drones is needed to address these issues. Specifically, there is a need for robust handover mechanisms that ensure reliable drone connectivity while accounting for handover costs.


Figure 2: Challenges for drone mobility support in the sky


AI-powered drone mobility support

Let us consider a drone flying along a fixed route. On one hand, we want to decrease the number of handovers to reduce handover costs. On the other hand, we want the drone to switch over to another BS when the coverage gets too spotty. How do we achieve these apparently conflicting goals?

AI-based solutions can play a promising role in devising optimal handover mechanisms. By leveraging tools from reinforcement learning (RL), handover decisions can be dynamically optimized to offer seamless wireless services to drones. To achieve optimal sequential handover decision-making for supporting drone mobility, an RL model can be employed.

In RL, an agent (which can be a drone, a BS, or a central entity) interacts with an environment by choosing actions based on the environment's current state (or an observation of the state). When the agent performs an action in a state, it receives feedback in terms of a reward, and the environment transitions to a new state. The reward and new state are stochastically determined by the dynamics of the environment, which generally are not known to the agent. The goal of the agent is to find an optimal policy that maximizes the total future reward. The figure below provides an illustration of the RL model.
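The interaction loop described above can be sketched in a few lines of code. The toy environment, states, and rewards below are hypothetical placeholders chosen only to make the loop concrete; they are not the drone simulator used in the paper.

```python
import random

class ToyEnvironment:
    """Two-state toy environment: action 1 flips the state, action 0 keeps it."""
    def __init__(self):
        self.state = 0

    def step(self, action):
        if action == 1:
            self.state = 1 - self.state           # environment transitions to a new state
        reward = 1.0 if self.state == 1 else 0.0  # feedback returned to the agent
        return self.state, reward

env = ToyEnvironment()
total_reward = 0.0
state = env.state
for _ in range(10):
    action = random.choice([0, 1])       # a random policy; RL would learn a better one
    state, reward = env.step(action)
    total_reward += reward               # the agent aims to maximize this total
```

In a real deployment the agent would replace the random policy with one learned from experience, as described next.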


Figure 3: Illustration of reinforcement learning model


An AI-based handover mechanism can exploit various information such as reference signal received power (RSRP), reference signal received quality (RSRQ), a drone’s trajectory, its speed, and the BS distribution. The reward function can incorporate a wide range of performance metrics, including radio link failure, RSRP, and the number of handovers, in order to determine the optimal handover rules. Furthermore, in applications where drone trajectories are not pre-defined or fixed, AI can be used to jointly optimize handover decisions and a drone’s trajectory to achieve the optimal system performance.
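As a rough illustration, the measurements above could be packed into a state vector, and the reward could trade signal strength against handover cost. The field names and weight values below are illustrative assumptions, not the exact formulation from the paper.

```python
def make_state(rsrp_dbm, rsrq_db, position_m, speed_mps, serving_bs_id):
    """Pack radio measurements and drone kinematics into a flat state vector."""
    return (rsrp_dbm, rsrq_db, *position_m, speed_mps, serving_bs_id)

def reward(rsrp_dbm, handover_occurred, w_rsrp=1.0, w_ho=5.0):
    """Reward high signal strength; penalize each handover event.

    w_rsrp and w_ho are hypothetical weights balancing the two criteria.
    """
    return w_rsrp * rsrp_dbm - w_ho * (1.0 if handover_occurred else 0.0)

# Example: a drone at 300 m altitude, served by (hypothetical) BS number 7.
s = make_state(-80.0, -10.0, (100.0, 200.0, 300.0), 15.0, 7)
r = reward(-80.0, handover_occurred=True)
```

Raising `w_ho` relative to `w_rsrp` makes the learned policy more reluctant to hand over, which is exactly the trade-off studied in the results below.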

From tabular Q-learning to deep Q-network

The RL problem described above can be solved using a traditional tabular Q-learning algorithm, which may entail substantial storage requirements when the state space is large. To overcome this challenge, we can further consider deep RL for developing a deep Q-network (DQN) algorithm to efficiently solve the drone mobility support problem.
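The tabular approach keeps one entry per (state, action) pair, which is what makes its storage grow with the state space. A minimal sketch of the standard Q-learning update, with toy values:

```python
from collections import defaultdict

alpha, gamma = 0.1, 0.9    # learning rate and discount factor (illustrative values)
Q = defaultdict(float)     # Q[(state, action)] -> estimated action value; one table
                           # entry per (state, action) pair, hence the storage cost

def q_update(state, action, reward, next_state, actions=(0, 1)):
    """One Q-learning step: move Q(s, a) toward reward + gamma * max_a' Q(s', a')."""
    best_next = max(Q[(next_state, a)] for a in actions)
    Q[(state, action)] += alpha * (reward + gamma * best_next - Q[(state, action)])

# After observing (state 0, action 1, reward 1.0, next state 1):
q_update(state=0, action=1, reward=1.0, next_state=1)  # Q[(0, 1)] becomes 0.1
```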

The basic idea of the DQN algorithm is to train a neural network as a function approximator for either the optimal action values or the optimal policy of the agent. An illustrative example of a neural network for approximating the optimal RL action values is shown in the figure below.
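To keep the idea of function approximation self-contained, the sketch below replaces the table with a parameterized function that maps state features to one Q-value per action. A linear model stands in for the neural network of Figure 4; this is an illustrative simplification, not the DQN architecture from the paper, and the feature and action counts are made up.

```python
import random

random.seed(0)
NUM_FEATURES, NUM_ACTIONS = 4, 3   # e.g. 3 candidate BSs the drone could connect to
weights = [[random.uniform(-0.1, 0.1) for _ in range(NUM_FEATURES)]
           for _ in range(NUM_ACTIONS)]

def q_values(state_features):
    """Approximate Q(s, a) for every action with one forward pass of the model."""
    return [sum(w * f for w, f in zip(row, state_features)) for row in weights]

def greedy_action(state_features):
    """Pick the action with the highest approximated Q-value."""
    qs = q_values(state_features)
    return max(range(NUM_ACTIONS), key=qs.__getitem__)
```

The storage now scales with the number of model parameters rather than with the number of states, which is what makes DQN practical for large aerial regions.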


Figure 4: An illustrative example of a neural network for function approximation


We use the deep Q-network model to train the system for various flight paths in an aerial region. The training process requires data on the received signal strength within the aerial region for each of the serving BSs. Specifically, the algorithm uses a reward function that penalizes each handover event while incentivizing connection to a BS with high signal strength. Once trained, the model outputs the optimal handover decisions along any given drone route. The network can set the desired operating point in terms of signal quality and handover frequency, and the AI-based mechanism will deliver the best handover decisions for any given drone route.

Our results show that the AI-based handover scheme can lead to a remarkable improvement compared to the baseline scheme where the drone is always connected to the BS that provides the maximum received signal power along the route.

The figure below provides an example simulation result on the handover ratio, which is the ratio of the number of handover events using the proposed AI-powered scheme to that of the baseline scheme. In this figure, wHO and wRSRP are design parameters that can be tuned to adjust the relative weight of the handover cost to the radio link RSRP. If there is no handover cost (i.e., wHO = 0), the proposed AI-powered scheme falls back to the baseline scheme, resulting in a constant handover ratio equal to 1. As the relative weight of the handover cost to the radio link RSRP increases, the handover ratio decreases, implying a growing reduction in the number of handovers thanks to the proposed AI-powered scheme.
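The metric itself is a simple ratio of event counts. The numbers below are made-up examples to show how the metric behaves, not simulation results from the paper:

```python
def handover_ratio(num_ho_ai, num_ho_baseline):
    """Ratio of AI-scheme handover events to baseline-scheme handover events."""
    return num_ho_ai / num_ho_baseline

# With no handover cost (wHO = 0), the AI scheme matches the baseline:
no_cost_ratio = handover_ratio(40, 40)    # ratio of 1

# With a positive handover cost, the AI scheme performs fewer handovers
# (hypothetical counts): 12 handovers versus 40 for the baseline.
with_cost_ratio = handover_ratio(12, 40)  # ratio below 1
```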


Figure 5: An example simulation result for handover ratio, which is the ratio of the number of handover events using the proposed AI scheme to that of a baseline scheme


By increasing the relative weight of the handover cost to the radio link RSRP, the reduced number of handover events comes at the cost of a somewhat reduced radio link RSRP. Our proposed AI-powered scheme provides a flexible framework to achieve the desired balance between these two criteria: It reduces unnecessary handover events while maintaining reliable drone connectivity.   

Want to learn more?

Read our award-winning article (IEEE WCNC 2020 Best Paper Award): Efficient Drone Mobility Support Using Reinforcement Learning, where we take a deeper look at how AI can empower drone connectivity in 5G networks.

An overview of some of the new features in the 5G NR specifications can be found in our article: 5G New Radio: Unveiling the Essentials of the Next Generation Wireless Access Technology.

Read our white paper, Drones and networks: Ensuring safe and secure operations to learn more about drones and networks.

Read our blog post on network exposure and the case for connected drones.

How can drones and machine learning help to speed up 5G site deployment? Read our post about intelligent site engineering to find out.
