How 5G and machine learning can build scalable assistive health technologies
With recent advances in communication networks and machine learning (ML), healthcare is one of the key application domains that stands to benefit, with opportunities including remote global healthcare, cloud-based hospital services, and remote diagnosis or surgery. One of those advances is network slicing, which makes it possible to provide high-bandwidth, low-latency, personalized healthcare services for individual users. This is important for patients using healthcare monitoring devices that capture various biological signals (biosignals), such as those from the heart (ECG), muscles (EMG), and brain (EEG), or activity from other parts of the body.
In this blog, we discuss the challenges to building a scalable delivery platform for such connected healthcare services, and how technological advances can help to transform this landscape significantly for the benefit of both users and healthcare service providers. Our specific focus is on assistive technology devices which are increasingly being used by many individuals.
With the advent of 6G, Ericsson’s vision for connected intelligent machines in 2030 envisages personalized healthcare in which multiple sensors and devices function in a connected ecosystem, providing end-to-end service delivery.
Making next-generation biosignal technologies possible
Biosignals can be acquired from various assistive technology devices, ranging from simple smartwatches that track your movement or pulse rate, to complex devices like prosthetic limbs, prosthetic neural systems, and cardiac pacemakers. The sensors on such devices collect dense time-series data that needs to be processed in real time, with minimal delay, to deliver critical actuations. For example, a person using a prosthetic limb needs urgent medical care when they fall, and a cardiac pacemaker needs to re-calibrate when there is an abnormal ECG event. Beyond critical cases, there are also long-term monitoring requirements where healthcare providers collect data over time, and the challenge then lies in efficiently processing large volumes of data. With growing urbanization and the rapid proliferation of connected devices within the Internet of Things (IoT) ecosystem, it is evident that a scalable solution is needed, one that lets healthcare providers optimize the services they deliver.
Let’s look at an example. Consider a user wearing an EEG device with 16 sensors that sample data at 256 Hz. The user has a locomotor disorder and intends to use brain activity (specifically from the motor cortex) to trigger movements of a wheelchair, for example. In this case, the input EEG data will be a 16x256 matrix of samples generated every second. This data would have to be processed every t seconds (where t is typically 2-3 seconds) to determine the user’s commands.
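To make these dimensions concrete, here is a minimal sketch in NumPy of how per-second EEG blocks accumulate into t-second analysis windows. The simulated random signal and t = 2 are illustrative; the channel count and sampling rate follow the example above:

```python
import numpy as np

SAMPLE_RATE_HZ = 256   # samples per second per sensor
NUM_CHANNELS = 16      # EEG sensors on the headset
WINDOW_SECONDS = 2     # t: how often the user's command is decoded

def simulate_one_second_of_eeg() -> np.ndarray:
    """Stand-in for the device driver: one second of raw EEG,
    shaped (channels, samples) = (16, 256)."""
    return np.random.randn(NUM_CHANNELS, SAMPLE_RATE_HZ)

def stream_windows():
    """Accumulate per-second blocks into t-second analysis windows."""
    buffer = []
    while True:
        buffer.append(simulate_one_second_of_eeg())
        if len(buffer) == WINDOW_SECONDS:
            # Window of shape (16, 512) for t = 2 seconds
            window = np.concatenate(buffer, axis=1)
            buffer.clear()
            yield window

# Example: decode the first three windows
for i, window in zip(range(3), stream_windows()):
    print(f"window {i}: shape {window.shape}")  # (16, 512)
```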
The conventional workflow, shown in Figure 1, would involve transmitting the sampled EEG signals to a server over a network, then pre-processing the signals (bandpass filtering, noise and artifact removal), extracting or selecting features based on the task (typically via a dimensionality reduction approach), and training a classifier (in the presence of class imbalance). The issue of imbalance is inherent in such scenarios because the user is unlikely to execute all actions with the same frequency, so the events of interest will not occur equally often in the labeled training set. Once the trained classifier is deployed on the server, streaming test EEG data is windowed and processed, and inference is obtained from the classifier. The recognized intent is then sent back over the network to actuate the wheelchair motor. Each of these components introduces latency in the processing pipeline, as shown in the figure below.
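A minimal sketch of that server-side pipeline, assuming SciPy and scikit-learn, might look as follows. The 8-30 Hz band, log-variance features, and logistic regression classifier are illustrative choices for a motor-imagery task, not the specific methods of any deployed system:

```python
import numpy as np
from scipy.signal import butter, sosfiltfilt
from sklearn.linear_model import LogisticRegression

FS = 256  # sampling rate in Hz

def preprocess(window: np.ndarray) -> np.ndarray:
    """Bandpass-filter each channel; 8-30 Hz is a common choice
    for motor-cortex activity (an assumption, not a prescription)."""
    sos = butter(4, [8, 30], btype="bandpass", fs=FS, output="sos")
    return sosfiltfilt(sos, window, axis=1)

def extract_features(window: np.ndarray) -> np.ndarray:
    """Simple dimensionality reduction: log-variance per channel,
    turning a (16, 512) window into a 16-dim feature vector."""
    return np.log(np.var(window, axis=1))

def train(windows: list[np.ndarray], labels: list[int]) -> LogisticRegression:
    """Train a classifier; class_weight='balanced' is one simple
    counter to the class imbalance discussed above."""
    X = np.stack([extract_features(preprocess(w)) for w in windows])
    clf = LogisticRegression(class_weight="balanced", max_iter=1000)
    return clf.fit(X, labels)

def infer(clf: LogisticRegression, window: np.ndarray) -> int:
    """Streaming inference on one windowed test segment."""
    return int(clf.predict(extract_features(preprocess(window))[None, :])[0])
```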
Broadly, we can categorize the challenges into modeling and communication network issues:
- Low-latency processing pipelines are crucial for time-critical applications such as monitoring user activity from wearable devices/implants. Delay in computing predictions is undesirable.
- These devices generate high-dimensional, dense time series on which the ML/inference pipeline needs to compute predictions in real time. Such data demands considerable bandwidth and storage (see the estimate after this list), yet the devices themselves have limited compute and storage.
- Such scenarios face the problem of class imbalance since the occurrence of all events of interest is not equally likely. Thus, models could be biased towards the majority class.
- As the communication network is agnostic to the nature of (or need for) the biosignals being transmitted or inferenced, it is difficult to prioritize transfer and actuation for users with critical medical requirements.
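To put the bandwidth and storage point in perspective, here is a back-of-the-envelope estimate for the EEG example above (the 4 bytes per sample is an assumption about the device's encoding):

```python
# Back-of-the-envelope data-volume estimate for the 16-channel EEG example
channels, rate_hz, bytes_per_sample = 16, 256, 4  # 4 bytes = float32 (assumption)

per_second = channels * rate_hz * bytes_per_sample  # bytes/s per device
per_day = per_second * 60 * 60 * 24                 # bytes/day per device

print(f"{per_second / 1024:.0f} KiB/s per device")      # ~16 KiB/s
print(f"{per_day / 1024**3:.2f} GiB/day per device")    # ~1.32 GiB/day
```

Modest per device, but multiplied across many users, each with several devices, the aggregate volume quickly becomes significant for both the network and the backend.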
The role of network slicing and sparse models
A view of the workflow incorporating network slicing and machine learning to address these challenges is shown in Figure 2. The network slicing paradigm can be leveraged for low-latency transmission of dense time-series data from multiple devices of multiple users over dedicated network slices, with guaranteed quality of service (QoS) class identifiers as envisaged for 5G networks. This makes it possible to manage many users with multiple devices by prioritizing slice selection based on classifier output. In this way, network slicing can be leveraged to manage streamlined transmission of the data using slices that guarantee defined QoS parameters, based on the criticality of the medical event(s). Users with high-priority medical events, for example a stroke, can be provided a dedicated slice, while users requiring regular monitoring can be provided a slice that offers best-effort delivery. Intermediate priority levels can also be configured by healthcare service providers in coordination with network operators; this would be managed by the slice orchestrator. Thus, a prioritization strategy for slice allocation, with service level agreements (SLAs) based on the needs of users with biosignal devices, significantly improves healthcare service delivery.
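As an illustration of how slice selection driven by classifier output could work, consider the sketch below. The slice names, priority tiers, and the mapping from events to criticality are all hypothetical placeholders; in practice, these would be SLAs negotiated between the healthcare provider and the network operator and enforced by the slice orchestrator:

```python
from enum import Enum

class Criticality(Enum):
    CRITICAL = 0   # e.g., suspected stroke: dedicated low-latency slice
    ELEVATED = 1   # intermediate tier configured with the operator
    ROUTINE = 2    # long-term monitoring: best-effort slice

# Hypothetical mapping from classified medical events to criticality
EVENT_CRITICALITY = {
    "stroke_onset": Criticality.CRITICAL,
    "arrhythmia": Criticality.ELEVATED,
    "routine_vitals": Criticality.ROUTINE,
}

# Hypothetical slice identifiers (e.g., slices A, B, C in Figure 2)
SLICE_FOR = {
    Criticality.CRITICAL: "slice-A",
    Criticality.ELEVATED: "slice-B",
    Criticality.ROUTINE: "slice-C",
}

def select_slice(classified_event: str) -> str:
    """Map the classifier's output to a network slice; unknown
    events fall back to the best-effort tier."""
    level = EVENT_CRITICALITY.get(classified_event, Criticality.ROUTINE)
    return SLICE_FOR[level]

print(select_slice("stroke_onset"))  # slice-A
```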
There is also a need for sparse, class-imbalance tolerant models that support robust, on-device inferencing for such use cases. One approach to building class-imbalance tolerant models is to use twin architectures for models such as support vector machines or neural networks. Interest has also grown recently in sparse models that optimize inference time on low-resource platforms, facilitating on-device deployment and further reducing inference latency. Biosignal processing pipelines can leverage a combination of sparse, class-imbalance tolerant machine (or deep) learning models to improve both inference latency and accuracy.
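As one simplified stand-in for such models, the sketch below trains an L1-regularized linear SVM with balanced class weights in scikit-learn: the L1 penalty drives many weights to zero (a sparse, small-footprint model), while class weighting counters the imbalance. A true twin-SVM architecture would need a dedicated implementation; this illustrates the same two goals with off-the-shelf components:

```python
import numpy as np
from sklearn.svm import LinearSVC

# Toy imbalanced dataset: 900 "rest" windows vs 100 "move" windows,
# each reduced to a 16-dim feature vector as in the earlier sketch
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0.0, 1.0, (900, 16)),
               rng.normal(0.5, 1.0, (100, 16))])
y = np.array([0] * 900 + [1] * 100)

# L1 penalty -> sparse weights (smaller model, faster inference);
# class_weight='balanced' -> reweights the minority class
clf = LinearSVC(penalty="l1", dual=False, class_weight="balanced", C=0.1)
clf.fit(X, y)

nonzero = np.count_nonzero(clf.coef_)
print(f"{nonzero}/{clf.coef_.size} weights retained after L1 sparsification")
```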
Scaling assistive technologies: How it works
- User(s) carry multiple biosignal acquisition devices, each of which has multiple sensors that acquire digital samples of biosignals (devices may optionally perform basic pre-processing on these signals, such as noise removal or frequency filtering).
- The devices transmit these digitized biosignals to a local node (or device). The data from these devices is sent as a dense time series, along with annotations for user information, time window, and other metadata. These devices also have an actuation module that receives the classified intent and triggers suitable actions.
- The devices can function in two modes. In the first, training mode, sensor data and annotations of the user’s intents are sent to the backend server to train models that classify the user’s tasks. In the second, online mode, the devices store updated classification models received from the server and perform on-device inferencing on sensor data to classify intents and trigger actuation (see the sketch after this list).
- The biosignals generated by the devices are sent through dedicated network slices over the 5G network (e.g., slices A, B, and C in Figure 2). A dedicated slice offers high bandwidth for reliable, minimal-latency transmission of the signals to the server. The inference models are trained on the server and sent back to the device via the dedicated network slice, enabling on-device predictions that cater to the prioritized and critical health events being monitored.
- The back-end server(s) are compute-intensive nodes that can train large models for recognizing multiple intents from biosignals. Building sparse models with a twin approach (to address class imbalance) improves model performance while keeping storage and inference-time requirements low enough to enable on-device deployment.
- Over time, network slice allocation is determined by classifier output, enabling healthcare service providers to offer scalable services for many users in coordination with mobile operators.
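Putting the device-side behavior together, the sketch below illustrates the two operating modes from the list above. The transport and actuator objects, and method names such as send_training_batch and fetch_latest_model, are hypothetical placeholders for whatever protocol the slice carries; the model is assumed to expose a predict method as in the earlier sketches:

```python
import numpy as np

class BiosignalDevice:
    """Illustrative device-side controller for the two modes above."""

    def __init__(self, transport, actuator):
        self.transport = transport  # placeholder: sends/receives over the slice
        self.actuator = actuator    # triggers wheelchair motor, alerts, etc.
        self.model = None           # populated by updates from the server

    def training_mode(self, window: np.ndarray, intent_label: int) -> None:
        """Mode 1: ship annotated sensor data to the backend for training."""
        self.transport.send_training_batch(window, intent_label)

    def online_mode(self, window: np.ndarray) -> None:
        """Mode 2: on-device inference with the latest server-trained model."""
        update = self.transport.fetch_latest_model()  # may return None
        if update is not None:
            self.model = update
        if self.model is not None:
            intent = self.model.predict(window)
            self.actuator.trigger(intent)
```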
Summary
As healthcare is directly connected to the quality of human life and well-being, it has been a primary focus for multiple stakeholders. Assistive technology can significantly transform an individual’s quality of life, so leveraging advances in communication and machine learning for its practical realization is of utmost importance. While challenges with security, privacy, and interoperability are still being addressed as methods and regulations evolve, platforms for scalable biosignal monitoring with robust, timely detection (and resolution) of critical events have an array of beneficiaries and are of strategic business value.
Explore more
Learn more about the future of 5G healthcare.
Find out how 5G is helping to create new possibilities for borderless remote healthcare.
In 2021, Ericsson trialed a private edge 5G network for the Lucile Packard Children’s Hospital in Stanford, US. Read more about the trial in this blog post.
Learn more about the three waves of 5G network slicing.