Mobile telecommunications are advancing rapidly toward 6G and beyond. Meanwhile, today’s mobile networks already have many computationally complex applications running on cloud platforms that are massively distributed, all the way to the edge of the network.
Meeting future network requirements will require significant technological advances in many areas that combine computation with artificial intelligence/machine learning (ML) to interpret network information and build the applications and services that act on that data. Quantum computers could be of use in this respect, as they are expected to surpass the computational capabilities of classical computers for certain types of problems.
With newer use cases that are even more computationally expensive, the network is steadily becoming a platform that processes workloads on both centralized and edge data centers. We envisage quantum computers coexisting in data centers and co-processing with classical computers, thereby bringing a computational edge to plan, control and run these networks.
Quantum computing in a telecommunication context
In quantum computing, quantum bits (qubits) serve as the fundamental unit of information. A qubit can exist in a superposition of “0” and “1” states. This creates the potential to solve certain mathematical problems orders of magnitude faster than today’s classical computers, which may take years to solve them. Consequently, quantum computers theoretically possess the capability to efficiently address specific problem types that are beyond the computational capacity of classical computers.
The quantum computing techniques most likely to be of use in telecommunication networks are:
- Variational quantum algorithms and quantum annealing
- Quantum machine learning
- Quantum-inspired algorithms.
Variational quantum algorithms and quantum annealing utilize the capabilities of noisy intermediate-scale quantum (NISQ) devices to tackle intricate optimization and classification challenges. Variational quantum algorithms adjust parameters in quantum circuits iteratively to solve optimization problems, while quantum annealing relies on quantum tunneling and thermal fluctuations to find optimal solutions. Both approaches can address some complex optimization tasks in telecommunication such as peak-to-average power ratio (PAPR) minimization in wireless networks and simulation of quantum effects in sub-10nm (nanometer) transistor technologies.
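As a point of reference for how annealing-style optimization works, the sketch below is a minimal classical simulated-annealing QUBO solver, an illustrative stand-in rather than any production implementation; the function name, cooling schedule and parameters are our own assumptions.

```python
import math
import random

def solve_qubo_sa(Q, n, steps=2000, t0=2.0, seed=0):
    """Minimize x^T Q x over binary x with simulated annealing.

    Q maps index pairs (i, j), with i <= j, to coefficients."""
    rng = random.Random(seed)
    x = [rng.randint(0, 1) for _ in range(n)]

    def energy(v):
        return sum(c * v[i] * v[j] for (i, j), c in Q.items())

    e = energy(x)
    best_x, best_e = x[:], e
    for step in range(steps):
        t = t0 * (1 - step / steps) + 1e-9     # linear cooling schedule
        i = rng.randrange(n)
        x[i] ^= 1                              # propose a single bit flip
        e_new = energy(x)
        # always accept improvements; accept worsenings with Boltzmann probability
        if e_new <= e or rng.random() < math.exp(-(e_new - e) / t):
            e = e_new
            if e < best_e:
                best_x, best_e = x[:], e
        else:
            x[i] ^= 1                          # reject: undo the flip
    return best_x, best_e
```

Quantum annealing plays the same role as the cooling loop above, but relies on quantum tunneling rather than thermal fluctuations to escape local minima.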
Quantum ML aims to enhance learning processes like classification and pattern recognition, employing techniques such as quantum neural networks and quantum support vector machines. When a certain ML task (classification, for example) is performed in the quantum domain, each data point is encoded to a quantum state by an appropriate state encoding method. The quantum state evolves with the suitable parametrized quantum circuit, and the qubits are then measured to predict class labels after classical postprocessing. Training in the quantum domain can be efficient due to the requirement of fewer data points and the fewer trainable parameters of quantum models. Possible use cases include quantum feature selection and supervised learning for antenna tilt and K-means clustering of performance data generated by telecommunication networks.
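The encode-evolve-measure pipeline described above can be illustrated with a toy single-qubit classifier, simulated in plain NumPy; this is a didactic sketch only, and the `ry`/`predict` names and angle-encoding choice are illustrative assumptions, not the quantum ML models discussed in the text.

```python
import numpy as np

def ry(theta):
    """Single-qubit Y-rotation gate as a 2x2 real matrix."""
    c, s = np.cos(theta / 2), np.sin(theta / 2)
    return np.array([[c, -s], [s, c]])

def predict(x, theta):
    """Angle-encode feature x into a qubit, apply a one-parameter
    'ansatz' rotation, and read P(|1>) as the class score."""
    state = ry(x) @ np.array([1.0, 0.0])   # state encoding of the data point
    state = ry(theta) @ state              # parametrized quantum circuit
    p1 = abs(state[1]) ** 2                # measurement probability of |1>
    return 1 if p1 > 0.5 else 0            # classical postprocessing
```

Training would adjust `theta` to minimize a classical loss over labeled data, exactly the iterative loop that variational quantum algorithms perform on real hardware.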
Quantum-inspired algorithms focus on exploiting a subset of quantum phenomena that is efficiently executable on classical computers to solve optimization and ML tasks. Tensor network techniques based on tensor decomposition to model complex quantum wave functions have shown promise for quantum circuits with low entanglement among the qubits. These include low-depth Clifford gate circuits that can approximately solve Max-Cut problems also found in telecom networks.
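To illustrate how a combinatorial problem of this kind becomes amenable to quantum or quantum-inspired solvers, the sketch below casts Max-Cut into QUBO form using the standard textbook mapping; the helper names are our own.

```python
def maxcut_qubo(edges):
    """Max-Cut as QUBO minimization.

    The cut size equals sum over edges of x_i + x_j - 2*x_i*x_j for
    binary partition labels x, so we minimize its negation."""
    Q = {}
    for i, j in edges:
        Q[(i, i)] = Q.get((i, i), 0) - 1
        Q[(j, j)] = Q.get((j, j), 0) - 1
        key = (min(i, j), max(i, j))
        Q[key] = Q.get(key, 0) + 2
    return Q

def cut_value(edges, x):
    """Number of edges crossing the partition defined by x."""
    return sum(1 for i, j in edges if x[i] != x[j])
```

Any QUBO of this form can then be handed to an annealer, a QAOA circuit, or a tensor-network simulation of one.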
The current state of quantum computing
By definition, NISQ-era devices contain relatively few qubits, and those qubits are not yet fault-tolerant. In the NISQ era, quantum computers are being developed based on different hardware approaches, such as superconducting, trapped-ion (TI), photonic, neutral-atom, nitrogen-vacancy centers in diamond and quantum dots. Three of these – superconducting, TI and photonic qubits – are the most widely explored technologies, with systems accessible through the IBM, Azure, AWS and Google clouds. Figure 1 provides an overview of the performance metrics of these three hardware modalities.
| Performance metric | Superconducting qubits | TI qubits | Photonic qubits |
|---|---|---|---|
| Maximum coherence time | ~100 microseconds (limited by decoherence) | Seconds to minutes (limited by decoherence) | Limited by photon loss, regeneration rates and detector efficiencies |
| Clock speed | ~MHz | ~kHz | ~Hz |
| Scalability | High | Low | High |
| Number of physical qubits/qumodes | 100-1,000 | <200 | 10-1,000,000 |
| Operating temperature | On the order of millikelvin | 2-10 kelvin | Qubits work at room temperature, while superconducting detectors operate at 4 kelvin |
| Two-qubit gate fidelity | ~3 nines (99.9%) | ~3 nines (99.9%) | ~2 nines (99%) |
| Connectivity | Between nearest neighbors | All-to-all | High, but constrained by limited photon-photon interactions |
Figure 1: Comparison of the three most popular hardware options for universal quantum computing
Figure 1 shows that there is no clear frontrunner for the best quantum modality. Superconducting qubits have limited coherence time (a measure of how long a qubit maintains its quantum state), but their high clock speed offsets this limitation compared with other qubit technologies, such as TI qubits.
In terms of scalability, superconducting qubits offer higher scalability through fabrication methods like those used for classical integrated circuits. Fidelity and qubit connectivity are two measures that directly impact the accuracy of a quantum computation. While superconducting quantum systems have some of the highest fidelities, their qubit connectivity is limited.
Temperature is a key measure that dictates the energy requirement for a particular modality. Compared with superconducting and TI qubits, photonic qubits can operate at room temperature; however, the superconducting detectors required to measure the system must be kept at 4K. To overcome this obstacle, detectors made of germanium-silicon-based single-photon avalanche diodes are being explored [1].
Beyond achieving qubits with hardware modalities, quantum computing vendors are striving to build scalable and fault-tolerant quantum computers. Scaling up single-chip quantum computers is challenging because TI qubits are not easily scalable and superconducting qubits require a huge increase in control electronics. Instead, multichip quantum processors with interconnect outperform equivalently large single-chip processors due to less crosstalk between the qubits. For fault tolerance, researchers are exploring techniques to achieve better coherence time in qubits and higher fidelity in operations, quantum noise mitigation, error detection and error correction.
In the NISQ era, it is important to identify the potential impact of quantum computing in the telecommunication domain, focusing on the problems of optimization and ML at a small scale with limited qubits. For this reason, we approach the puzzle from two perspectives:
- Shallow-depth quantum circuits for telecom applications
- Scaling quantum systems and improving their noise robustness.
With respect to the first perspective, in the NISQ era, coherence-limited quantum algorithms can be simulated for performance analysis on classical devices. Alternatively, quantum-inspired classical algorithms can be used on classical machines to simulate a small subset of quantum correlations.
The second perspective – scaling quantum systems and improving their noise robustness – primarily involves multichip quantum computing and interconnect optimization.
Use-case-driven evaluation of quantum computing
The availability of quantum compute resources for telco infrastructure is likely to be limited for cost reasons. To minimize the total cost of ownership of quantum-enabled telco infrastructure, it is crucial to determine which workloads would exhibit computational advantage from the quantum hardware. There are several computationally expensive problems in the radio domain that could, in theory, benefit from quantum computing. We have evaluated quantum methods for four such problems:
- Maximum likelihood multiple-input, multiple-output (MIMO) detection
- Maximum likelihood polar decoding
- PAPR minimization in orthogonal frequency division multiplexing (OFDM) systems
- Simulating quantum effects in sub-10nm transistors [2].
We find that, once mapped to the quantum annealer, small instances of MIMO detection and PAPR minimization can provide a computational advantage over classical methods.
Figure 2 includes four graphs that illustrate our evaluation of use cases related to a radio access network (RAN), network management and edge computing.
Panel x-axes: (a) R, (b) MIMO problem size ([transmitters x receivers], QPSK), (c) number of users, (d) training set size
Figure 2: Evaluation of computational tasks related to (a) PAPR minimization, (b) MIMO detection, (c) edge user allocation and (d) antenna-tilt optimization
Key findings
Figure 2a [3] illustrates PAPR optimization on the quantum annealer with varying binarization resolution. It demonstrates that mapping PAPR minimization for a 2x2 MIMO system with a single sub-carrier onto the annealer would yield a 29x speedup compared with a single-threaded implementation of a classical quadratic unconstrained binary optimization (QUBO) solver such as simulated annealing, a speedup that the latest dual-socket server can offset. For example, mapping the 29x speedup onto the scaling performance of parallel simulated annealing [4] shows that roughly 38 threads would be needed to match it classically. However, transforming optimization problems into QUBO form inflates the number of qubits required, due to:
- Binarization of complex variables
- Linearization of the constraints and objective function
- Reduction of high-order polynomial to quadratic form.
Therefore, for a given binarization resolution, the qubit resources needed to solve the PAPR minimization problem for a MIMO system scale quadratically with the problem size (the product of transmitters and sub-carriers). This means a problem with 64 transmitters and 2,048 sub-carriers, binarized with 8 bits, would require up to 1 trillion qubits. Each time the transmitted symbols change or the channel characteristics vary, the QUBO matrix mapped to the quantum annealer must be modified, which incurs the embedding latency.
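The scaling claim above can be reproduced with back-of-envelope arithmetic. This is a hypothetical sketch under the stated assumptions: one binary variable per transmitter, sub-carrier and binarization bit, with the quadratic growth taken as the square of that count (dense pairwise couplings).

```python
def papr_qubit_estimate(transmitters, subcarriers, bits):
    """Estimate QUBO resources for PAPR minimization.

    n_vars: binary variables after binarizing each transmitted
    symbol with `bits` bits per transmitter/sub-carrier pair.
    The quadratic scaling stated in the text is modeled as n_vars**2."""
    n_vars = transmitters * subcarriers * bits
    return n_vars, n_vars ** 2
```

For 64 transmitters, 2,048 sub-carriers and 8-bit binarization this gives about 2^20 binary variables and on the order of 10^12 (a trillion) quadratically scaled qubit resources, consistent with the figure quoted above.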
Figure 2b [5] shows MIMO detection on the quantum annealer as the problem size increases. In our experiments, we observed the embedding cost to be approximately half of the total quantum processing unit (QPU) access time. We also noted that the cooling requirements of quantum annealers based on superconducting qubits pose sustainability challenges. These observations suggest that quantum computers have a limited role in the radio domain in the near term. There are, however, several relevant computational problems in the RAN control plane, network design and management domains that are good candidates for a positive impact from quantum computing.
For example, in Cloud RAN, resource provisioning for virtual network functions – virtualized distributed units (vDUs), virtualized central unit-user plane (vCU-UP), virtualized central unit-control plane (vCU-CP) and so on – and routing of eCPRI (enhanced Common Public Radio Interface) traffic to vDUs while also satisfying bandwidth, resiliency, mobility and energy-efficiency requirements is a non-trivial automation task. The problem size is a product of sector carriers and resources (both physical and virtual). In the case of mobile edge computing, the product of user equipment (UE) and servers determines the problem size. This makes it computationally difficult to allocate UE tasks to servers while meeting the resource capacity and single-association constraints using a classical approach. The qubit resources needed to solve these problems scale linearly with the problem size. We have evaluated our quantum-classical hybrid scheme on the variant of this problem that is illustrated in Figure 2c.
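A minimal sketch of how an edge user allocation problem of this kind maps to a QUBO with one binary variable per UE-server pair (hence linear qubit scaling). The function name and penalty weight are illustrative assumptions; the single-association constraint is enforced as a quadratic penalty, and capacity constraints would add further penalty terms.

```python
def edge_allocation_qubo(n_users, n_servers, cost, penalty=10.0):
    """QUBO for assigning each user to exactly one server.

    Variable x[idx(u, s)] = 1 if user u runs on server s.
    cost[u][s] is a (hypothetical) placement cost; the penalty term
    expands penalty * (sum_s x[u][s] - 1)^2 with the constant dropped."""
    def idx(u, s):
        return u * n_servers + s

    Q = {}
    def add(i, j, c):
        key = (min(i, j), max(i, j))
        Q[key] = Q.get(key, 0.0) + c

    for u in range(n_users):
        for s in range(n_servers):
            i = idx(u, s)
            add(i, i, cost[u][s] - penalty)       # linear part of the penalty
            for s2 in range(s + 1, n_servers):
                add(i, idx(u, s2), 2 * penalty)   # cross terms of the penalty
    return Q
```

The number of binary variables is n_users * n_servers, matching the linear scaling with the UE-server product described above.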
One relevant problem on the network design optimization side is antenna-tilt optimization, which aims to trade off capacity, quality and coverage to improve the radio link efficiency. Our current solution comprises a reinforcement learning (RL) agent employing the common policy based on the deep-Q network for each cell to adapt the tilt angle based on network key performance indicators (KPIs). An RL agent takes a significant amount of offline training before it acts accurately enough to be uploaded to the live cell. Replacing the classical deep-Q network with a quantum neural network (QNN) could reduce the training overhead. Due to the intractability of simulating QNNs, in the RL loop, we adopt the existing training approach into supervised learning by exploiting the experience replay buffer and predicting the reward corresponding to each action, and the given set of network KPIs.
To solve the regression task, we down-sample the dataset, ensuring that each action is represented equally, and train a QNN comprising a single layer of a high-expressivity ansatz operating on the 14 most significant features (including the action), encoded densely onto 7 qubits. Thanks to the ansatz expressivity, our QNN achieves a similar level of prediction accuracy to the classical artificial neural network but with 20x fewer trainable parameters. Assuming the QNN with a circuit depth of 86 is executable within the coherence time of the latest QPUs, such as IBM’s Heron r2 with 28K circuit layer operations per second [6], one inference requiring 1,024 evaluations of the QNN circuit would take roughly 36ms. Conversely, the multi-layer perceptron regressor from scikit-learn performs the same inference in the order of microseconds on a 10-core, 16GB laptop. Figure 2d [7] demonstrates that moderate entanglement among quantum features compensates for reduced training in antenna-tilt optimization. When training a QNN with fewer data points, we introduced a variable amount of entanglement (computed with the Meyer-Wallach entanglement measure) among the qubits and found that a moderate amount of entanglement does compensate for the reduced number of training points. This means that the size of the replay buffer can be reduced, potentially leading to early convergence of the quantum-augmented RL loop. Nonetheless, training deep QNNs remains a challenge due to the problem of vanishing gradients.
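The inference-latency estimate above can be reproduced with simple arithmetic, under the assumption that the quoted 28K figure bounds the rate at which the QNN circuit can be evaluated; this is a back-of-envelope sketch, not a timing measurement.

```python
def qnn_inference_ms(evaluations=1024, clops=28_000):
    """Rough QNN inference latency in milliseconds.

    Treats each of the `evaluations` circuit runs as throughput-bound
    by the QPU's quoted circuit-layer-operations-per-second figure."""
    return evaluations / clops * 1000.0
```

With 1,024 evaluations at 28K per second, this yields roughly 36.6ms, consistent with the estimate in the text.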
Our approach to creating telco-grade quantum algorithms
To enable the execution of telco-grade quantum algorithms, we propose the deployment of quantum computers as coprocessors in a cloud-native manner, as shown in Figure 3 [8]. Each quantum computer could comprise multichip QPUs, where information flows between them via a quantum communication channel, thereby delivering improved computational fidelity compared with single-chip quantum processors.
Figure 3: Our proposal for the deployment of a quantum computer in the telco cloud
We implement multichip quantum processors with superconducting qubits and transfer information between the processors via traveling microwave photons. The photons that carry the quantum information are emitted by one processor, travel through the quantum channel, and are reabsorbed by another. This process enables us to implement quantum state transfer and build remote entanglement between the two processors that form the core of the multi-QPU system.
We have experimentally implemented three modes of information/entanglement flow over the multi-chip interconnect:
- Microwave photon emission from superconducting qubits [9]
- The emission of entangled photons from superconducting qubits [10]
- Frequency-bin encoded photon emission that serves as an error-detection protocol [11].
Figure 4 illustrates our approach to mapping a quantum circuit to a multi-chip quantum computer. We first develop a quantum algorithm for a particular use case and represent it as a quantum circuit. The circuit is then cut into subcircuits that are mapped to constrained NISQ QPUs. The subcircuits are then executed on the multi-QPU system where the quantum channel facilitates the inter-chip communication.
Figure 4: Mapping a quantum algorithm’s circuit to a multi-chip quantum computer
With respect to microwave photon emission from superconducting qubits, we have demonstrated a superconducting circuit that deterministically transfers the state of a qubit into a propagating microwave mode [9]. For this, we used a time-varying parametric drive to shape the temporal profile of the propagating mode to be time-symmetric and with constant phase, so that reabsorption by the receiving processor can be implemented as a time-reversed version of the emission.
The emission of entangled photons from superconducting qubits involved generating the entangled photons by continuously and coherently driving a superconducting qubit that was capacitively coupled to a coplanar waveguide [10]. Before our work, the potential of steady-state qubit emission in generating entangled photon pairs had not been fully explored. These entangled photons can then be captured and distributed across multi-QPU systems. We have used this setup to develop a unified framework to enable synchronization and teleportation using these entangled photons and designed a strategy for secret address distribution between the QPUs to provide the basis for an efficient multi-QPU system [12][13].
In the case of frequency-bin encoded photon emission that serves as an error-detection protocol, we needed to account for the fact that during the transmission of the microwave photon in a quantum channel, there is a risk of photon loss due to the noise in the environment [11]. We implemented a quantum error detection scheme to handle this problem. Our implementation encodes the quantum information of the emitted photon state into two photons in different frequencies. If there is a photon loss in the quantum channel, it will result in a distinguishable qubit state in the receiver’s quantum processor.
To enable optimal execution of relatively deeper quantum circuits (such as Quantum Alternating Operator Ansatz (QAOA) and QNN) on multi-chip QPUs, mapping plays a critical role. We have developed two mapping strategies to map quantum circuits [14]. These strategies take input from the system stack layers above, including the quantum application, and the layers below (the specific hardware, for example). The critical hardware parameters consist of the transmission rate of the interconnects, decoherence time (T1 and T2), duration of the gates of the QPU, and the duration of data emitter swap and microwave photon transmission and absorption. Both mapping strategies allow us to generate runnable subcircuits given the constraints on QPUs (size and coherence) and interconnects (latency and capacity), mapped to a minimum number of QPU chips while achieving high-fidelity results on QPUs with limited coherence, and interconnects with limited capacities [14].
Other potential uses of quantum computing in telecom applications
At Ericsson, we have also developed shallow-depth quantum circuits for auto-encoders, support vector machines, and K-means clustering. Strategies to reduce circuit depth include:
- Encoding the classical features into the quantum state with short-depth state encoding circuits
- Constructing QNNs from fewer layers of ansatz, each with shallow depth and high expressivity
- Using gate-optimized implementations of unitary gates
- Classical pre- and post-processing.
In the case of distance calculations in K-means clustering, for example, there are three ways to run them on NISQ devices. One option is to encode the input vectors (test and centroid) to a quantum state such that the angle between the interfering copies of the input vectors is equal to the angle between the interfering copies of the vectors in the quantum state. The second option is to map the cosine similarity of two vectors on the probability of the |0> state of the qubit. The third option would be to use the destructive interference probabilities of the quantum state [15].
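The second option, mapping cosine similarity onto the probability of the |0> state, can be emulated classically for real vectors. This is an idealized, noise-free sketch using the Hadamard-test relation P0 = (1 + cos)/2; the function names are our own.

```python
import numpy as np

def p0_from_cosine(a, b):
    """Probability of measuring |0> on the ancilla when the cosine
    similarity of a and b is mapped onto the qubit (Hadamard-test
    form for real vectors): P0 = (1 + cos) / 2."""
    a = np.asarray(a, dtype=float)
    b = np.asarray(b, dtype=float)
    cos = (a @ b) / (np.linalg.norm(a) * np.linalg.norm(b))
    return (1 + cos) / 2

def kmeans_distance(a, b):
    """Squared Euclidean distance of unit vectors, recovered from the
    measured P0: ||a - b||^2 = 2 - 2*cos."""
    cos = 2 * p0_from_cosine(a, b) - 1
    return 2 - 2 * cos
```

On hardware, P0 would be estimated from repeated measurements, and the classical postprocessing step would then recover the distance exactly as above.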
Another way to offset the computational training debt is to transform the error loss function associated with the ML model to the Ising Hamiltonian or QUBO model, where the parameters are binarized as decision variables and the training data set is inscribed into the coefficients of the QUBO matrix. The QUBO model is solved using quantum annealing, and the configuration of binary decision variables corresponding to the ground state energy yields the trained values of the parameters. We have evaluated this approach by jointly performing the feature selection and ridge regression and find that our method beats most of the classical optimizers in prediction accuracy by selecting the optimal reduced feature set [16].
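A sketch of the loss-to-QUBO transformation above for (ridge) regression, assuming a simple fixed-point binarization of each weight; the coefficients in `bits` and the function name are illustrative assumptions.

```python
import numpy as np

def ridge_qubo(X, y, lam=0.1, bits=(1.0, 0.5, 0.25)):
    """Map the ridge-regression loss ||Xw - y||^2 + lam*||w||^2 to a
    QUBO by binarizing each weight as w_k = sum_b bits[b] * q[k, b].

    Returns a dense QUBO matrix over n_features * len(bits) binary
    variables; the training data is inscribed into the coefficients."""
    n, d = X.shape
    nb = len(bits)
    # Precision matrix P maps the binary vector q to weights: w = P q
    P = np.zeros((d, d * nb))
    for k in range(d):
        for b, c in enumerate(bits):
            P[k, k * nb + b] = c
    A = X @ P                              # loss becomes ||A q - y||^2 + lam*||P q||^2
    Q = A.T @ A + lam * (P.T @ P)          # quadratic coefficients
    Q = Q + np.diag(-2 * A.T @ y)          # linear terms on the QUBO diagonal
    return Q
```

Solving this QUBO (by annealing, or by brute force for tiny instances) returns the binary configuration whose decoded weights minimize the loss up to the binarization resolution.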
One could simulate the quantum circuits classically, although simulating the full quantum state of a high-qubit QAOA circuit would require a supercomputer. Alternatively, we use tensor network methods to simulate low-entanglement, low-depth QAOA circuits within a hybrid scheme. When the energy of the solution found by the classical integer linear programming solver does not improve for a certain number of iterations, the scheme initializes a QAOA circuit from the solver’s current configuration of decision variables. It then samples the shallow-depth QAOA by contracting the corresponding tensor network, exploiting the backbone structure of the optimization problem (the backbone variables take the same value in all solutions). The value associated with each variable is obtained by tracing the density matrix over all the other variables.
The configuration of decision variables obtained by tensor simulations of the shallow depth QAOA are used to facilitate the classical solver getting out of local minima. We have applied this strategy to the variant of edge user allocation modelled as a 1-in-K-SAT problem. Figure 2c illustrates that the quality of the solution (shown as Energy) obtained from the hybrid quantum optimizer improves as the size of the edge user allocation problem increases in terms of users. The graph plots the average energy difference between the solutions produced by classical and hybrid optimizers on different problem sizes, demonstrating that augmenting the classical integer linear programming solver with the short-depth QAOA would provide computational advantage on large problem instances.
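The hybrid escape strategy described above can be sketched as a loop in which a stalled classical local search resamples variables from per-variable marginals. Here `qaoa_marginals` is a stand-in for the values that would come from tensor-network contraction of a shallow QAOA circuit, and the loop parameters are illustrative.

```python
import random

def hybrid_minimize(energy, n, qaoa_marginals, steps=500, patience=50, seed=0):
    """Classical local search with QAOA-guided escapes from local minima.

    energy(x) scores a binary vector x; qaoa_marginals(x) returns
    P(x_i = 1) per variable (a stand-in for tensor-network QAOA sampling)."""
    rng = random.Random(seed)
    x = [rng.randint(0, 1) for _ in range(n)]
    cur = energy(x)
    best, stall = cur, 0
    for _ in range(steps):
        i = rng.randrange(n)
        x[i] ^= 1                          # greedy single-bit descent
        e = energy(x)
        if e < cur:
            cur, stall = e, 0
        else:
            x[i] ^= 1                      # reject non-improving flip
            stall += 1
        if stall >= patience:              # stalled: resample from the
            probs = qaoa_marginals(x)      # QAOA-derived marginals
            x = [1 if rng.random() < p else 0 for p in probs]
            cur, stall = energy(x), 0
        best = min(best, cur)
    return best
```

In the real system, the resampling step would favor the backbone variables identified by the tensor-network contraction rather than drawing from uniform marginals.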
Conclusion
Due to the rapidly evolving technological landscape, Ericsson is actively monitoring progress in the quantum domain. Through an in-depth evaluation of use cases from different parts of the telecom network, we have ascertained that near-term quantum computers could provide a limited degree of computational advantage.
Our research shows up to a 29x speedup provided by a quantum annealer on optimization tasks compared with a single-threaded central processing unit-based solver, and nearly constant scaling of execution time with problem size. We have also observed that training machine learning models in the quantum domain requires up to 20x fewer trainable parameters and fewer data points. Although these gains are offset by quantum processors being slower than state-of-the-art classical computers today, our hybrid approach verifies that the quality of solutions for large problem instances could improve by utilizing classical and quantum processors collaboratively. In short, our findings indicate that quantum computers could add value to future generations of telco network infrastructure if and when they become scalable and fault-tolerant.
Acknowledgements
The authors would like to thank their colleagues Adriano Mendo, Juan Ramiro, Sorin Georgescu, and Mbarka Soualhia, as well as their collaborators from the University of Sherbrooke, Roya Radgohar and Stefanos Kourtis, for their contributions to the research presented in this article.