
How a real 5G Quantum AI use case could disrupt antenna tilting

The world of quantum computing opens up innovative and powerful computational techniques, solving challenges that were once considered beyond reach. Advancements in quantum computing have the potential to rapidly accelerate parallelizable telecom workloads, complement AI to automate telecom processes, enhance RAN computations for 5G and drive new possibilities on the path towards 6G. Communication service providers could hence benefit from improved performance at decreased energy consumption.

In this blog, we put these promises to the test to understand where we are on that journey today and how applicable quantum methods could be to telecom use cases going forward.

Director Disruptive Technologies, Market Area North America

Senior Researcher, Cloud systems and platforms

Research Specialist, Business Area Cloud Software and Services

Research Specialist, Business Area Cloud Software and Services


Introduction 

In quantum computing, quantum bits, or qubits, represent the smallest unit of information. A qubit can be in both the "0" and "1" state at the same time (superposition), and a sequence of quantum gates (a quantum algorithm) can harness both states simultaneously, unlike the logic gates in classical computing, which operate on either the "0" or the "1" state of a bit at a time. A mathematical problem can therefore potentially be solved an order of magnitude faster on a quantum computer; algorithmic problems that today take years to solve could possibly be computed in minutes. Today, only a limited number of qubits are accessible. Industry leaders in the field, however, show promising roadmaps for growing the number of available qubits rapidly over the next few years. You can already try it yourself and run computations using quantum cloud computing services via AWS, Azure, Google, IBM and D-Wave, to name a few. Frameworks, SDKs and APIs are becoming increasingly available.
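As a minimal illustration of superposition (a sketch assuming the open-source Qiskit SDK, one of many available frameworks), the following circuit puts a single qubit into an equal superposition of "0" and "1":

```python
# Minimal sketch (assumes the open-source Qiskit SDK is installed): a Hadamard
# gate puts one qubit into an equal superposition of |0> and |1>.
from qiskit import QuantumCircuit
from qiskit.quantum_info import Statevector

qc = QuantumCircuit(1)
qc.h(0)                                   # Hadamard gate creates the superposition
state = Statevector.from_instruction(qc)
print(state.probabilities())              # [0.5, 0.5]: both outcomes are equally likely
```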

With the above in mind, we had the idea: if quantum computers can solve complex problems fast, why not look in our own backyard? The telecommunications industry is full of complex challenges that could make use of this. So, we set out on a quest to identify a problem worth solving and knocked on the doors of various product area owners.

We found that it takes a global, cross-functional effort to find the right AI use case and test it against quantum simulators. But luckily, we are Ericsson, and working globally and cross-functionally comes as second nature to us.

This blog shares our journey of applying quantum techniques to a live use case to optimize antenna tilting. We first dive deeper into the use case. Then, we look at what adaptations are needed to run a comparison with quantum computing. At the heart of this post is the discussion and fine-tuning of the quantum approach. Finally, we share our results and outlook on how quantum AI could improve our networks in the future, and the hurdles that we need to overcome.

Use Case: Antenna Tilt Optimization 

Antenna tilting plays a significant role in current network configurations. For instance, a large antenna tilt might result in coverage gaps, as the cell fails to adequately overlap with surrounding cells. Meanwhile, a small antenna tilt could intensify interference on neighboring cells due to excessive overlap. Enlarging the cell excessively could also cause capacity issues, as the cell might capture distant users residing in the coverage area of another cell that provides better propagation. This phenomenon, called cell overshooting, degrades the efficiency of radio links (see Figure 1). Today's intelligent antenna tilting aims to trade off capacity, quality, and coverage by choosing the optimal antenna tilt, thereby improving radio link efficiency.


Figure 1. Antenna tilting impact on network

State-of-the-art approaches for antenna tilt optimization comprise both offline and online reinforcement learning (RL). In these methods, an RL agent is trained for each cell to adapt the tilt angle based on network KPIs, deciding whether to increase, decrease, or maintain the current angle. We use RL agents to make these adjustments on the fly. An agent computes an action based on the input network state. The result of the action is an updated network state and a reward, which are used for the agent's training (see Figure 2). The model leverages a Deep Q-Network; all agents share the same deep neural network (DNN) and therefore a common policy.

Our current RL-based solution works well and fast in a live network today, once deployed. However, an RL agent requires a significant amount of offline training before it acts accurately enough to be uploaded to the live cell. The offline training uses a highly sophisticated network simulator and leads to training and testing times of days for the model. Within an ML operations pipeline, this can cause long delays, for example every time the radios receive any type of feature update.


Figure 2. Reinforcement learning agent for antenna tilting optimization
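To make this loop concrete, below is a deliberately simplified, self-contained sketch. The toy environment with a single coverage KPI and the tabular value function are illustrative stand-ins of our own choosing; the deployed solution uses a much richer network state and a Deep Q-Network shared across agents.

```python
# Illustrative, self-contained sketch of the loop in Figure 2 with a toy stand-in
# environment and a tabular value function in place of the shared Deep Q-Network.
import random

ACTIONS = ["decrease", "keep", "increase"]           # possible tilt adjustments

class ToyCellEnv:
    """Hypothetical stand-in for the network simulator: the state is one coverage KPI."""
    def __init__(self):
        self.coverage = 0.5
    def step(self, action):
        delta = {"decrease": -0.05, "keep": 0.0, "increase": 0.05}[action]
        self.coverage = min(1.0, max(0.0, self.coverage + delta))
        reward = -abs(self.coverage - 0.8)            # reward peaks at a target coverage level
        return round(self.coverage, 2), reward        # (next state, reward)

env = ToyCellEnv()
q = {}                                                # (state, action) -> value, standing in for the DQN
state = round(env.coverage, 2)
for step in range(500):
    if random.random() < 0.1:                         # occasional exploration
        action = random.choice(ACTIONS)
    else:                                             # otherwise act greedily on current estimates
        action = max(ACTIONS, key=lambda a: q.get((state, a), 0.0))
    next_state, reward = env.step(action)             # updated network state and reward
    old = q.get((state, action), 0.0)
    q[(state, action)] = old + 0.1 * (reward - old)   # crude value update standing in for DQN training
    state = next_state
```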

Quantum Motivation 

When a certain ML task (e.g. classification) is performed in the quantum domain, each data point is encoded into a quantum state by an appropriate state-encoding method. The quantum state is evolved with a suitable parametrized quantum circuit (a.k.a. Ansatz) and the qubits are then measured to predict class labels after classical post-processing. Motivated by existing results on the capabilities of quantum machine learning (QML), we envision its relevance to the offline training phase of the antenna tilt optimization use case. These agents are already operational and live in customer networks today, providing us with a valuable opportunity to assess the potential impact of our findings on future networks. Our exploration comprises running the use case fully offline with no reliance on customer data. This ensures the integrity of our customers' privacy and no adverse impact on live networks, while still allowing us to measure potential enhancements to the deployment.
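In code, this workflow of encoding, parametrized evolution and measurement could be sketched as follows (a hypothetical 4-qubit example in Qiskit, not our production circuit):

```python
# Sketch of the generic QML workflow: angle-encode a data point, apply a small
# parametrized ansatz, and measure. The data point and circuit sizes are hypothetical.
import numpy as np
from qiskit import QuantumCircuit
from qiskit.circuit import Parameter

n_qubits = 4
features = np.random.rand(n_qubits)              # stand-in for one pre-processed data point
thetas = [Parameter(f"theta_{i}") for i in range(n_qubits)]

qc = QuantumCircuit(n_qubits)
for q in range(n_qubits):
    qc.ry(features[q], q)                        # state encoding: feature value -> rotation angle
for q in range(n_qubits):
    qc.ry(thetas[q], q)                          # trainable rotations (the Ansatz)
for q in range(n_qubits - 1):
    qc.cx(q, q + 1)                              # entangling layer
qc.measure_all()                                 # measured bitstrings are post-processed into class labels
```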

Benchmarking Methodology 

The quantum computers available today have a limited number of qubits and are noisy due to the short time during which the qubits retain their quantum properties (a.k.a. coherence time). The alternative of simulating the quantum agent together with the network simulator in a closed-loop manner would have required a tremendous amount of compute resources. For example, a recent study needed 1024 A100 GPUs to simulate a 40-qubit deep quantum circuit.

Fortunately, the existing training approach for antenna tilt optimization can be turned into a supervised learning problem. In the RL loop (see Figure 2), the environment determines the next sample of the training data set, which consists of a tuple of the form: state, action, reward. The use of an experience replay buffer permits storing the most recent samples and using them in multiple iterations during training. An extreme case would be to store not only the most recent samples, but all of them. In that case, the Deep Q-Network could be trained as a regular supervised learning problem once all samples have been collected. Considering all samples eliminates the "forgetting" effect caused by finishing the training on only the most recent samples. Moreover, it is expected to reduce the reward estimation error associated with actions that are a priori not the best. We therefore collect the samples from the RL loop. Each data point has 26 features and the corresponding action label.
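Conceptually, turning the collected samples into a supervised data set could look like the sketch below; the buffer contents shown are synthetic placeholders.

```python
# Sketch: turn stored RL samples into a supervised data set. Each state carries the
# 26 network features and the chosen action becomes the label. The data is synthetic.
import numpy as np

rng = np.random.default_rng(0)
replay_buffer = [
    # (state: 26 features, action: 0=decrease, 1=keep, 2=increase, reward)
    (rng.normal(size=26), int(rng.integers(0, 3)), float(rng.normal()))
    for _ in range(1000)
]

X = np.stack([state for state, _, _ in replay_buffer])    # (1000, 26) feature matrix
y = np.array([action for _, action, _ in replay_buffer])  # action labels for supervised training
```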

Aspiring to replicate the architectural complexity of the existing Deep Q-Network, we first explore the design space of a multi-layer perceptron (MLP), a.k.a. artificial neural network (ANN), by training it to predict the next action given the input features. Pre-processing steps on the data set include standardizing it to zero mean and unit variance and then normalizing it. Each ANN configuration is randomly initialized and trained with multiple iterations of the optimizer.
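A sketch of such a design-space sweep is shown below, continuing from the synthetic X and y above; the library (scikit-learn) and the candidate layer sizes are our illustrative choices, not details from the study.

```python
# Sketch: explore a few MLP configurations on standardized and normalized data.
# X and y come from the previous sketch; architectures and settings are illustrative.
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import Normalizer, StandardScaler

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)

for hidden_layers in [(32,), (64, 32), (128, 64)]:
    model = make_pipeline(
        StandardScaler(),                                 # zero mean, unit variance
        Normalizer(),                                     # per-sample normalization
        MLPClassifier(hidden_layer_sizes=hidden_layers, max_iter=500, random_state=0),
    )
    model.fit(X_train, y_train)
    print(hidden_layers, model.score(X_test, y_test))     # prediction accuracy per configuration
```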

Next, we use feature scoring to select the top-k features, such that the prediction accuracy on the selected features stays within an acceptable threshold. Then we use the selected features and 10% of the data set, comprising the same number of data points from each class, to explore the quantum neural network (QNN) design space, such that the prediction accuracy of the QNN configuration is close to that of the ANN.
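As an example of the feature-selection step (the univariate ANOVA F-score via scikit-learn's SelectKBest is our illustrative choice; the study only specifies univariate feature scoring):

```python
# Sketch: keep the 13 highest-scoring features according to a univariate score,
# reusing X_train and y_train from the previous sketch.
from sklearn.feature_selection import SelectKBest, f_classif

selector = SelectKBest(score_func=f_classif, k=13)
X_top = selector.fit_transform(X_train, y_train)   # reduced feature matrix for the QNN study
kept = selector.get_support(indices=True)          # indices of the selected features
```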

For QNN models, we rely on the COBYLA optimizer due to the improved prediction accuracy of the trained QNN model compared to the stochastic gradient descent and Adam optimizers observed in our experiments. As QNN training suffers from diminishing gradients over the training iterations (a.k.a. barren plateaus), we perform multiple trials of training, each with a random initialization of the parameters, and report the best prediction accuracy. Finally, we scale up the experiments by training on a higher number of data points and testing on an unseen dataset.
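The multi-trial COBYLA training could be sketched as below; `qnn_loss` is a hypothetical placeholder for evaluating the parametrized circuit on the training data.

```python
# Sketch: optimize QNN parameters with COBYLA, restarting from random initial points
# to mitigate barren plateaus. qnn_loss is a placeholder for the real circuit loss.
import numpy as np
from scipy.optimize import minimize

def qnn_loss(params):
    # placeholder: the real loss evaluates the parametrized circuit on the data set
    return float(np.sum(np.sin(params) ** 2))

rng = np.random.default_rng(0)
best = None
for trial in range(5):                                   # multiple random restarts
    x0 = rng.uniform(0, 2 * np.pi, size=20)              # random parameter initialization
    res = minimize(qnn_loss, x0, method="COBYLA", options={"maxiter": 200})
    if best is None or res.fun < best.fun:
        best = res                                       # keep the best-performing trial
```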

Due to the adopted benchmarking scheme, we manage to simulate each QNN design with a state vector simulator and run the MLP configurations on a 16-CPU server with 32 GB of memory. We compare the developed MLP-ANN and QNN in terms of prediction accuracy. A fair comparison of execution time would require a state-of-the-art graphics processing unit (GPU) or tensor processing unit (TPU) and a quantum processing unit (QPU) with a coherence time long enough to run the QNN circuit with limited divergence of the output state from the ideal state. As such a QPU is not available today, we instead contrast the training overhead expressed in trainable parameters.

Quantum Approach 

Choosing the appropriate feature encoding, measurement observables, Ansatz structure, and its layers entails an exhaustive exploration of the design space, demanding significant computational resources. This process culminates in the creation of a QNN architecture custom-tailored to the unique demands of the ML task at hand. 

Encoding each feature as the angle of a rotation gate applied to each qubit (a.k.a. angle encoding) would require simulating a 26-qubit QNN circuit for each data point. Using a layer of shallow-depth Ansatz with low expressivity (a.k.a. Circuit 2 in the literature), we find the training intractable even with less than 10% of the data set. Alternatively, we encode two features as the respective angles of a rotation gate and a subsequent phase gate applied to each qubit (a.k.a. dense angle encoding), as this halves the number of qubits in the QNN circuit. We train a 13-qubit QNN comprising up to 3 layers of shallow-depth Ansatz.
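A minimal sketch of dense angle encoding (assuming Qiskit; the feature values are synthetic) looks like this:

```python
# Sketch of dense angle encoding: two features per qubit, one as the angle of an RY
# rotation and the next as the angle of the subsequent RZ phase gate, so 26 features
# fit onto 13 qubits. The feature values are synthetic placeholders.
import numpy as np
from qiskit import QuantumCircuit

features = np.random.rand(26)
n_qubits = len(features) // 2            # 13 qubits

qc = QuantumCircuit(n_qubits)
for q in range(n_qubits):
    qc.ry(features[2 * q], q)            # first feature of the pair -> rotation angle
    qc.rz(features[2 * q + 1], q)        # second feature of the pair -> phase angle
```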

To aid the QNN design exploration, we exploit the observation that the top-13 features perform only 4.8% worse than using all features on the ANN. Therefore, we select a relatively deep, high-expressivity Ansatz (a.k.a. Circuit 5 in the literature). We train a 7-qubit QNN with up to 20 layers of high-expressivity Ansatz (see Figure 3) to predict an increase or decrease in the Remote Antenna Tilt.


Figure 3. QNN Ansatz to predict the increase or decrease in Remote Antenna Tilt

The number of trainable parameters in an Nq-qubit QNN with L layers of high-expressivity Ansatz is L(3Nq + Nq²), i.e. it scales quadratically with the number of qubits. It can be reduced by using a compressed feature representation in the quantum domain. We have developed a relatively short-depth quantum convolution filter (a.k.a. the F12 filter shown in Figure 4) that projects a 3-qubit quantum state onto 2 qubits. Multiple F12 filters can be connected in a modular fashion to achieve a desired compression ratio (RC), provided the feature set allows it. These filters incur additional trainable parameters that instead scale linearly with the number of qubits.


Figure 4. F-12 Quantum Convolution Filter
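As a quick sanity check of this scaling, the per-layer count for the 13-qubit circuit versus the 9 qubits left after compression can be computed directly:

```python
# Per-layer trainable parameters from the formula L(3*Nq + Nq^2), evaluated for L = 1.
def params_per_layer(nq: int) -> int:
    return 3 * nq + nq ** 2

print(params_per_layer(13))                            # 208 parameters per layer on 13 qubits
print(params_per_layer(9))                             # 108 parameters per layer on 9 qubits
print(1 - params_per_layer(9) / params_per_layer(13))  # ~0.48, the per-layer reduction reported in the results
```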

While training the QNN, we enable compressed representation learning by relying on a loss function corresponding to the action prediction only. We train a QNN comprising 5 layers of high-expressivity Ansatz (see Figure 5) using all 26 features, dense-angle-encoded onto 13 qubits, with the compressed feature representation layer performing 13-to-9 compression.


Figure 5. QNN Circuit with Compressed Feature Representation to predict the increase or decrease in Remote Antenna Tilt

Results

We find that the ANN comprising 2900 parameters requires all the features to give a prediction accuracy of 90.24%. Projecting the features onto 13 principal components degrades the prediction accuracy to 63.8%, whereas an ANN comprising 1600 parameters operating on the top-13 features (based on the univariate score) only lowers the prediction accuracy to 85.9% (see Table 1).


Table 1. Establishing the classical baseline

From Figure 6, one can see that the low-expressivity Ansatz (3L-LEA) is only marginally better than random prediction, suggesting the need for a high-expressivity Ansatz for this ML task. One can also see that the QNN with 4 layers of high-expressivity Ansatz (4L-HEA) almost matches the prediction accuracy of the classical ANN (MLP-ANN). Likewise, the addition of the compressed representation (5L-HEA-CS) does not hamper the prediction accuracy in comparison to the 5 layers of high-expressivity Ansatz.


Figure 6. Prediction Accuracy Comparison of QNN with different number of Ansatz Layers and the classical ANN

By compressing the 13-qubit state to 9 qubits using four F12 filters, the number of parameters in each QNN layer is reduced by 48% while adding only 56 additional parameters. In total, the QNN with compressed feature representation needs to train 41% fewer parameters. Moreover, our QNN architecture comprises more than 10x fewer parameters than the MLP-ANN (see Figure 7).


Figure 7. Comparison of trainable parameters

Conclusion and Outlook 

Our approach validates the ability of quantum neural networks (QNNs) to model a telecommunication use case, a first at Ericsson. We acknowledge, however, that the problem is treated as a supervised learning problem and does not directly compare to today's live deployed model and its training.

Our results from the telco use case corroborate the findings in the current quantum literature on the potential of quantum computing for ML tasks. Thanks to Ansatz expressivity and efficient feature representation in the quantum domain, QNNs with 7 to 9 qubits achieved a similar level of prediction accuracy as the classical ANN, but with an order of magnitude fewer trainable parameters. This suggests that transitioning a machine learning task into the quantum domain could substantially reduce the training overhead. In other words, the training of a QNN on a QPU could be orders of magnitude faster than that of an ANN on classical hardware. Nevertheless, the efficient training of QNNs remains an ongoing challenge.

Today, the limited number of qubits and noise constrain the size and type of problems that can be modelled on quantum computers. Fortunately, leaders in the quantum compute industry predict rapid growth in qubit counts in the mid-term future. We envision various telecom use cases benefiting from quantum technologies as these increased qubit counts become commercially available, together with frameworks that make them accessible and technologies that make them less error prone. Finally, a larger number of qubits, in the not-so-distant future, could solve today's telecom AI problems faster, with higher accuracy, and more sustainably.

Want to learn more?  

Take a look at the other work we do at Ericsson in the quantum domain. Topics range from quantum compute research to post-quantum cryptography and quantum key distribution. Beyond that, we are active in other disruptive technologies as well, so check them out!

Our freshly announced Quantum hub in Canada brings these advancements and potential gains to market faster. Our quantum algorithms work looks into crucial RAN algorithms and how they could be translated into the quantum domain. Finally, the area of quantum-resistant algorithms and post-quantum cryptography is the most current and impactful for our customers, partners, and especially for you, the readers, as we see standardization efforts materializing into portfolio recommendations in the short term.

Disruptive Technologies at Ericsson 

Quantum hub in Canada 

Quantum algorithms 

Quantum computers 

Quantum resistant algorithms 

Quantum safe technologies 

Post quantum cryptography 

Quantum Key Distribution in Sweden 
