
8 use cases for optimal use of network connectivity

What is distributed cloud infrastructure? We define it as having a unified approach across both centralized and decentralized resources, as well as making optimal use of network connectivity, in terms of both transport and access. And one cornerstone of a distributed cloud infrastructure is that network functions and customer applications can share the same resources, which allows for a variety of business models and use cases. Explore 8 top applications for a distributed cloud infrastructure, from NFV to virtual reality, content delivery networks, machine learning and more.

Scale and efficiency accommodate multiple service providers

Economies of scale and efficiency can be achieved by allowing multiple service providers to address market needs. Specific requirements on regulatory compliance, resource capacities, and vertical managed services will naturally vary across different markets.

Up to now, it has been challenging to find models in which service providers collaborate and federate resources to offer a more complete service to enterprises. The obstacles are the complexity of the cloud service itself and wide variations in commercial models. For particular services, such as content delivery, federation has proved possible, and there is an opportunity to refine and simplify the cloud service model to enable federated offerings more broadly.

 

A gradual telco transition to NFV

Operators’ infrastructure is already undergoing a gradual transformation towards applications running on a generic platform (NFV). This evolution has primarily been driven from a cost rationalization perspective. But it’s important now to also focus on attracting innovation and new revenue opportunities on top of the infrastructure.

Many aspects of the Edge Computing Infrastructure must be developed by industry collaborations and through relevant standardization fora and open source communities.

 

8 applications of a distributed cloud infrastructure

Let’s examine some of the possible applications:

 

1. Network applications / NFV

The NFV evolution has made it possible to distribute virtual network functions (VNFs) in a more flexible way.

The infrastructure for NFV is an important starting point for the distributed cloud evolution. Often today, an operator will have several sites with independent installations of virtualization environments. A step forward is to be able to handle resources and placement in a coordinated way. That opens the possibilities to formulate policies and constraints on the placement of the VNFs.

Intelligent placement of VNFs is especially important for mobile broadband core network functions, because non-optimal routing will result if applications are placed in a more decentralized location than the packet gateway. In addition, the placement of mobile core and RAN functions needs to be correlated. Note that, while new and flexible placement of mobile core functions is often a decentralization, for the higher-level RAN functions it is a centralization relative to current deployments. The placement of RAN functions needs to be determined by constraints such as the latency to physical entities (antennas and radio-near processing). All of this is enabled by a distributed cloud infrastructure.
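To make the placement idea concrete, here is a minimal sketch in Python of latency-constrained site selection. The site names, latency figures, capacities, and the place_vnf helper are illustrative assumptions, not part of any orchestration product:

# Hypothetical example: pick a site for a VNF under a latency constraint.
# Site names, latencies and capacities are illustrative assumptions.

SITES = [
    {"name": "central-dc",  "latency_to_ran_ms": 18.0, "free_vcpus": 512},
    {"name": "regional-dc", "latency_to_ran_ms": 6.0,  "free_vcpus": 96},
    {"name": "hub-site",    "latency_to_ran_ms": 1.5,  "free_vcpus": 16},
]

def place_vnf(required_vcpus, max_latency_ms):
    """Return the most centralized site that still meets the constraints."""
    candidates = [
        s for s in SITES
        if s["latency_to_ran_ms"] <= max_latency_ms
        and s["free_vcpus"] >= required_vcpus
    ]
    # Prefer the site with the most spare capacity, i.e. the most central one.
    return max(candidates, key=lambda s: s["free_vcpus"]) if candidates else None

# A user-plane function with a tight latency budget ends up decentralized,
# while a control-plane function can stay central.
print(place_vnf(required_vcpus=8,  max_latency_ms=2.0))   # -> hub-site
print(place_vnf(required_vcpus=32, max_latency_ms=20.0))  # -> central-dc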

 

2. Content Delivery Networks

To achieve a good consumer experience for video and other content-based services, the delivery infrastructure must become increasingly decentralized.

Recently, content delivery solutions have been run as applications on generic computing and storage platforms. This means that these platforms must support distribution across regional and hub sites as well as across multiple service providers. The benefits of a decentralized architecture are better response times for the consumer experience, as well as efficiency in transport and peering costs.

 

3. Data storage with regulatory compliance

Enterprises are increasingly using cloud service providers for scalable storage of various data sets, and several studies have shown that security and regulatory constraints are major concerns.

An example is data sets that include personal information, where several countries require that at least one copy of the data be kept within their borders. A decentralized architecture enables compliance with such regulations and ensures control of cost and policy with regard to the cloud service providers.
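As a hedged illustration of such a residency policy, the small sketch below picks storage replicas so that at least one copy stays in-country. The site names, country codes, and the replica_sites helper are made up for the example:

# Hypothetical sketch: ensure at least one replica of a data set stays
# within the country whose regulation applies. Names are illustrative.

STORAGE_SITES = {
    "stockholm-1": "SE",
    "frankfurt-1": "DE",
    "us-east-1":   "US",
}

def replica_sites(data_country, replica_count=2):
    """Pick replica locations, guaranteeing one in-country copy."""
    in_country = [s for s, c in STORAGE_SITES.items() if c == data_country]
    if not in_country:
        raise ValueError(f"No site available in {data_country}")
    others = [s for s, c in STORAGE_SITES.items() if c != data_country]
    return in_country[:1] + others[: replica_count - 1]

print(replica_sites("SE"))  # -> ['stockholm-1', 'frankfurt-1']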

 

4. Hybrid enterprise cloud

Enterprises want to use cloud service providers for elasticity and scalability reasons, but they also want to control where applications are executed. A cloud platform can be deployed across on-premises and cloud resources so that applications and data can be placed according to policies, performance constraints, and intents.
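A minimal sketch of that idea, assuming invented policy tags (data_sensitivity, latency_budget_ms) and pool names, is a simple rule that routes each workload to an on-premises or public-cloud pool:

# Hypothetical hybrid-cloud placement: sensitive or latency-critical workloads
# stay on-premises, elastic batch workloads burst to a public cloud pool.
# Tags and pool names are illustrative assumptions.

def choose_pool(workload):
    if workload.get("data_sensitivity") == "restricted":
        return "on-prem"
    if workload.get("latency_budget_ms", float("inf")) < 10:
        return "on-prem"
    return "public-cloud"

print(choose_pool({"name": "hr-database", "data_sensitivity": "restricted"}))  # -> on-prem
print(choose_pool({"name": "nightly-analytics", "latency_budget_ms": 500}))    # -> public-cloud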

 

5. IoT data stream processing

Applications that collect and process IoT data are often composed of several components. In a pipeline, for example, the components include: data collection, data throttling, data pruning, anomaly detection, machine learning, and storage.

There is an opportunity to improve scalability and performance by placing these components at an optimal location in the network topology, which will lead to better response times for machines and users, as well as efficiency in transport and peering costs.
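As a rough illustration, the sketch below maps each pipeline stage to a network tier where it could reasonably run; the stage names and tier names are assumptions made for this example:

# Hypothetical mapping of IoT pipeline stages to network tiers.
# Stage and tier names are illustrative, not a product API.

PIPELINE = [
    ("data_collection",   "access-edge"),    # close to the devices
    ("data_throttling",   "access-edge"),    # drop/downsample early to save transport
    ("data_pruning",      "regional-site"),
    ("anomaly_detection", "regional-site"),  # low-latency feedback to machines
    ("machine_learning",  "central-dc"),     # training needs aggregated data
    ("storage",           "central-dc"),
]

for stage, tier in PIPELINE:
    print(f"{stage:18s} -> {tier}")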

 

6. Video processing

Video is a good example of a data-intensive industrial application, for example, monitoring or surveillance in factories. In this case, the processing is also often organized as a pipeline: streaming source, computer vision (image feature detection), transport of metadata, anomaly detection, machine learning, and storage. As an example, a complete video surveillance application could consist of components for computer vision, anomaly detection, and storage, each placed at a different site to optimize resource usage.
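A back-of-the-envelope sketch shows why the computer vision step tends to sit close to the cameras; the bitrates below are assumed figures for illustration only:

# Rough sketch: why run computer vision next to the cameras.
# The bitrates are assumptions for illustration only.

cameras = 50
raw_stream_mbps = 4.0   # assumed encoded 1080p surveillance stream per camera
metadata_kbps = 10.0    # assumed feature/event metadata per camera

raw_total_mbps = cameras * raw_stream_mbps
meta_total_mbps = cameras * metadata_kbps / 1000.0

print(f"Backhaul if raw video is shipped centrally: {raw_total_mbps:.0f} Mbit/s")
print(f"Backhaul if only metadata leaves the site:  {meta_total_mbps:.1f} Mbit/s")
print(f"Reduction factor: {raw_total_mbps / meta_total_mbps:.0f}x")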

 

7. Machine learning

In today’s machine learning-oriented applications (context-aware advertising, for example), the machine learning models are often decomposed into layers or pieces, where common parts can be centralized, and personal/contextual parts can be placed closer to where they are used. The data behind the machine learning model can also be distributed across several geographic sites.
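One way to picture the decomposition is a model whose large shared part is served centrally while a small personal or contextual part runs at the edge. The functions and numbers below are purely illustrative assumptions, not a prescribed architecture:

# Hypothetical split of an ML model: a large shared part served centrally,
# a small personal/contextual part served at the edge. Purely illustrative.

def shared_embedding(features):
    """Large, common model part: runs in a central data center."""
    # Placeholder for e.g. a deep feature extractor shared by all users.
    return [f * 0.5 for f in features]

def personal_head(embedding, user_context):
    """Small per-user part: runs at an edge site close to the user."""
    bias = user_context.get("bias", 0.0)
    return sum(embedding) + bias

# The embedding could be computed (and cached) centrally, while the
# personalization step runs at the edge on fresh local context.
emb = shared_embedding([0.2, 0.8, 1.0])
score = personal_head(emb, {"bias": 0.1})
print(f"ad relevance score: {score:.2f}")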

The benefits of a decentralized architecture here are better response times for data processing, which can translate into the ability to process more data within certain time limits, as well as efficiency in transport and peering costs and in regulatory compliance.

 

8. VR/AR


Virtual reality and augmented reality (which can also include tactile interaction elements) are examples of applications that are both latency sensitive and bandwidth demanding.

In addition to consumer-oriented (gaming) applications, there are many professional and industrial use cases, for example, remote monitoring and inspection of equipment. Remote cameras can generate multiple wide-angle video streams. The necessary processing includes both the stitching of these video streams into a unified view, which is fairly compute intensive, and the rendering of the resulting images to the end user.

Depending on where in the topology the cameras (the data sources) and the end user (the industrial equipment professional) are located, these data transport- and processing-heavy application components should be distributed in an optimal way.
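To illustrate why placement matters here, the rough latency-budget sketch below compares rendering at an edge site near the user with rendering in a central data center; all stage times and transport delays are invented for the example:

# Rough sketch of a latency budget for the remote-inspection case.
# All figures are illustrative assumptions, not measured values.

def end_to_end_latency_ms(stitch_site, render_site, network_ms):
    processing = {"stitching": 12.0, "rendering": 8.0}  # assumed per-stage times
    return (processing["stitching"] + processing["rendering"]
            + network_ms[(stitch_site, render_site)]
            + network_ms[(render_site, "user")])

# Assumed one-way transport delays between tiers.
network_ms = {
    ("hub-near-cameras", "edge-near-user"): 5.0,
    ("hub-near-cameras", "central-dc"): 20.0,
    ("central-dc", "user"): 25.0,
    ("edge-near-user", "user"): 2.0,
}

print(end_to_end_latency_ms("hub-near-cameras", "edge-near-user", network_ms))  # -> 27.0 ms
print(end_to_end_latency_ms("hub-near-cameras", "central-dc", network_ms))      # -> 65.0 ms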

The benefits of a decentralized architecture are better response times toward end users, as well as efficiency in transport and decreased peering costs.

 Read more about:

Edge computing

Cloud infrastructure

 
