
Unlocking the Power of Automated Service Graph Generation for Cloud-Native Applications

In the era of 5G and emerging 6G networks, efficient cloud-native deployments are crucial for meeting high user demands. Manual deployment of these applications can result in errors and inefficiencies, underscoring the need for automated solutions to ensure reliability and optimize resource utilization.


Cloud-native deployments enable seamless delivery of services and applications in 5G and emerging 6G networks. As communication service providers (CSPs) aim to meet ever-increasing user demands and performance standards, it becomes imperative to deploy those services and applications so that they meet stringent requirements on, for example, availability, latency, and throughput.

Manually composing and deploying cloud-native applications is not only time-consuming but also error-prone, leading to inconsistent performance, resource wastage, and potential downtime. Additionally, this process requires an expert familiar with the applications and their components, making it even more resource-intensive. This is where automation becomes essential.

This blog post explains our proposed solution for automation and illustrates how it works by applying the proposed method to an extended reality (XR) use case.

What is service graph generation?

It is important to optimize the composition of cloud-native applications to fulfill users’ objectives, such as performance and functionality. Therefore, it is necessary to understand user requirements to ensure that these applications deliver the desired outcomes efficiently. This involves meeting functional needs and addressing performance and scalability considerations. An illustrative example of such requirements is: 

“Enable seamless real-time collaboration among 10 users in a virtual environment, maintaining high interactivity and medium quality.”

To realize such user expectations and requirements, a translation between what is required and how to achieve it is needed.
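To make this concrete, such a requirement could first be captured as a structured intent before being mapped to a service graph. The snippet below is a minimal sketch in Python; the field names and values are illustrative assumptions, not a standardized schema.

```python
# Minimal sketch: the example requirement captured as structured data.
# Field names and values are illustrative assumptions only.
from dataclasses import dataclass

@dataclass
class Requirement:
    description: str
    num_users: int
    interactivity: str  # for example, "high"
    quality: str        # for example, "medium"

xr_requirement = Requirement(
    description="Real-time collaboration in a virtual environment",
    num_users=10,
    interactivity="high",
    quality="medium",
)
```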

A central aspect of realizing requirements is the generation of a service graph. This process involves specifying the various elements within the graph, for example virtual network functions (VNFs), microservices, or network components, and how they interact with each other to fulfill the requirements. A service graph represents the sequential arrangement of the elements involved in the system. It provides a clear roadmap for deploying cloud-native applications, ensuring they meet their requirements and operate according to their objectives.

Currently, these service graphs are manually specified by experts who have detailed knowledge of the specific application or network service and the performance characteristics of each application element. This process is slow and error-prone, and it leads to suboptimal graphs. The task of service graph generation is also challenging for those with limited expertise, and default settings recommended by experts might not meet the requirements and expectations of all use cases in dynamic cloud environments. Moreover, new versions of application elements may not be compatible with old ones or may have different performance characteristics, leading to potential incompatibility with existing interfaces. Finally, current efforts often overlook the correlation between service expectations and resource allocation, making effective resource allocation difficult.

Therefore, proper generation of service graphs is crucial to ensure that the cloud system aligns with and effectively meets the specified requirements. These service graphs serve as a blueprint, detailing how different components and elements interact within the system to fulfill the requirements defined by users.

Steps to automate service graph generation

Our solution for automated service graph generation relies on two pivotal steps: “Compatibility Graph Generation” and “Service Graph Generation” as shown in Figure 1.


Figure 1: Steps of service graph generation

Our solution’s input includes a description of a service catalog and a set of requirements that the generated service graph should satisfy (for example, latency and throughput). The service catalog acts as a repository of information about the available microservices or VNFs within the system. Our solution’s output is a service graph that outlines the composition of the cloud-native application to be deployed so that it meets the input requirements. Below, we delve into a more detailed description of the two steps of our proposed solution.

Step 1: Compatibility graph generation

This step assesses the compatibility between various microservices or VNFs and generates the compatibility graph between the different elements.

At this stage, the emphasis is on identifying microservices or VNFs that can be put together into a service graph which aligns with the user requirements. However, it’s crucial not just to identify them but also to ensure seamless collaboration between them. Here is a breakdown of this step, where we use microservices to illustrate:

  1. Service catalog referencing: This process involves cross-referencing the identified microservices within the service catalog. Our solution takes an initial set of microservices as a starting point and searches the catalog for compatible ones by matching each microservice’s output interfaces against the input interfaces of other microservices in the catalog.
  2. Compatibility confirmation: By consulting the service catalog, our solution validates whether the selected microservices are compatible with each other in terms of their functionalities, interfaces, and any constraints that might apply. Compatibility is determined by whether the output of one service can serve as the input of another (see the sketch after this list). The result of this assessment is a set of compatible microservices that can be composed into service graphs to fulfill the input requirements.
  3. Alignment with requirements: The ultimate objective in this step is to guarantee that the composition of microservices aligns with the requirements and objectives defined by the users. In other words, the chosen microservices must not only match the requirements but also fulfill the specific tasks necessary to bring the cloud-native application to life.
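The core of this step boils down to a simple interface-matching check. Below is a minimal sketch, assuming each catalog entry carries sets of input and output interface names; the class and function names are our own, and the example entries mirror Table 1 further down.

```python
# Hedged sketch of the Step 1 check: a producer is compatible with a
# consumer when at least one of the producer's output interfaces matches
# one of the consumer's input interfaces.
from dataclasses import dataclass

@dataclass(frozen=True)
class CatalogEntry:
    name: str
    inputs: frozenset
    outputs: frozenset

def is_compatible(producer: CatalogEntry, consumer: CatalogEntry) -> bool:
    """True if some output of `producer` can feed an input of `consumer`."""
    return bool(producer.outputs & consumer.inputs)

vio = CatalogEntry("OpenVINS_STEREO",
                   inputs=frozenset({"imu", "stereo_cam"}),
                   outputs=frozenset({"imu_integrator_input", "slow_pose"}))
integrator = CatalogEntry("GTSAM",
                          inputs=frozenset({"imu_integrator_input", "imu"}),
                          outputs=frozenset({"imu_raw", "fast_pose"}))

assert is_compatible(vio, integrator)  # linked via "imu_integrator_input"
```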

Step 2: Service graph generation

Given the compatibility graph generated in Step 1, our solution identifies the different feasible service graph configurations of microservices that can be assembled to achieve the desired requirements. These arrangements are essentially the possible compositions of services that adhere to the compatibility constraints. Here's an elaboration on the various aspects that should be considered:

  1. Core services identification: The first step in this process involves identifying the core services through which the application is consumed or provided. These services are critical components in the service graph where the key action begins and/or ends. For instance, core services include devices on which the service is going to be consumed, or cloud-external components that provide a function to the service in question.
  2. Candidate service graph generation: Given the core services and the compatibility graph, the next step is the generation of candidate graphs. A candidate graph is simply a subgraph of the compatibility graph that includes the core services and any other microservices required to provide the service. Specifically, for each microservice (core or otherwise) in a candidate service graph, there must be other microservices in that graph that provide the outputs required by its input interfaces (see the sketch after this list).
  3. Optimization and selection: Once candidate service graphs are identified, the next steps involve optimization and selection. Optimization refines the selected composition to improve efficiency, reduce latency, or enhance other performance metrics. The goal of the selection process is to choose the most suitable candidate service graph that aligns with the users’ requirements and objectives, taking into account factors such as resource availability, cost-effectiveness, and the desired level of service quality.
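As a rough illustration of the feasibility condition in point 2 above, the sketch below checks that every input interface inside a candidate graph is served by another member of the graph; interfaces supplied from outside (for example, by a core device) are passed in explicitly. All names and the data layout are assumptions for illustration.

```python
# Illustrative feasibility check for a candidate service graph:
# every input interface must be produced by another service in the
# graph, or supplied externally (for example, by a core device).
# `candidate` maps a service name to (input interfaces, output interfaces).
def is_feasible(candidate: dict, external_inputs: frozenset = frozenset()) -> bool:
    provided = set(external_inputs)
    for _, outputs in candidate.values():
        provided |= outputs
    return all(inputs <= provided for inputs, _ in candidate.values())

candidate = {
    "GTSAM": ({"imu_integrator_input", "imu"}, {"imu_raw", "fast_pose"}),
    "illixir_pose_prediction": ({"fast_pose", "imu_raw"}, {"pose_prediction"}),
}
# The IMU feed and integrator input arrive from outside this small subgraph.
print(is_feasible(candidate, frozenset({"imu", "imu_integrator_input"})))  # True
```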

Illustrative use case: XR application

In this section, we illustrate our proposed solution for service graph generation using an application developed on the Illinois Extended Reality (ILLIXR) platform.

Overview of ILLIXR

ILLIXR is an open-source, plugin-based, end-to-end XR framework with state-of-the-art components, developed to enable researchers and developers to easily create and test new XR technologies. The ILLIXR platform is monolithic; to run it in a cloud-native way on a Kubernetes (K8s) cloud, we cloudified the ILLIXR plugins as cloud-native containers. This allows us to demonstrate how to generate a service graph composed of microservices that fulfills the input requirements.

Service catalog

We designed a service catalog in which three elements are specified for each service: “provided service”, “input interface”, and “output interface”. The “provided service” element specifies the function the service offers, the “input interface” lists the types of content accepted as input by a given service, and the “output interface” lists the types of content generated as output.
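As an illustration, one catalog entry could be serialized as follows; this is only a sketch of the three elements described above, not the exact schema used in our solution.

```python
# Hypothetical serialized form of a single service catalog entry,
# using one row of Table 1 below as the example.
catalog_entry = {
    "provided_service": "VIO",
    "service_name": "OpenVINS_STEREO",
    "input_interface": ["imu", "stereo_cam"],
    "output_interface": ["imu_integrator_input", "slow_pose"],
}
```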

Table 1 shows a service catalog for XR applications based on a subset of the ILLIXR components. The table includes the service types, their implementations (service name), and their inter-relationships (input and output interfaces).

  • HMD, which stands for head-mounted display, is a wearable device that displays visual information directly in front of the user’s eyes. It also captures information about the user’s environment, including inertial measurement unit (IMU) and (depth) camera readings.
  • VIO is a service implementing visual inertial odometry, a technique used in robotics and computer vision to estimate the pose (that is, the position and orientation) of a camera or HMD. Example implementations are OpenVINS and Kimera-VIO.
  • IMU_INTEGRATOR is a service that integrates the latest IMU samples since the last pose estimation published by the VIO to provide a fast pose. Example algorithms are GTSAM and RK4.
  • SLAM is a service that constructs a map of an unknown environment while simultaneously keeping track of the location within it. It can be implemented using the ORBSLAM algorithm.
  • POSE_PREDICTION predicts the position and orientation of the headset or other sensors in real time, based on previous sensor measurements (for example, raw IMU readings and the fast pose).
  • TIME_WARP is a technique used in XR systems to reduce motion-to-photon latency and improve the user's sense of presence in the virtual environment. The “texture_pose” output refers to the position and orientation of a texture in the virtual environment, while the “hologram_in” output adapts the eye buffer for use on a holographic display.
  • ENCODER is responsible for encoding video or IMU data and can be implemented using protocols such as WebRTC.
  • AUDIO_PIPELINE is responsible for generating spatialized audio for XR, considering the placement of audio within the virtual environment and the viewer's location within that environment.

 

| Service type | Service name | Input interface | Output interface |
| --- | --- | --- | --- |
| HMD | Occulus | ue_holo_stream_out | ue_imu, ue_stereo_cam, ue_rgb_depth |
| HMD | Cardboard | ue_rendered_video | ue_imu, ue_mono_cam |
| XR_APP | illixr_gldemo | pose_prediction | eye_buffer, audio+pose |
| XR_APP | PokemonGo_server | pose_prediction, scene_map | eye_buffer, audio+pose |
| VIO | OpenVINS_STEREO | imu, stereo_cam | imu_integrator_input, slow_pose |
| VIO | Kimera-VIO | imu, stereo_cam | imu_integrator_input, slow_pose |
| IMU_INTEGRATOR | GTSAM | imu_integrator_input, imu | imu_raw, fast_pose |
| IMU_INTEGRATOR | RK4 | imu_integrator_input, imu | imu_raw, fast_pose |
| SLAM | iORBSLAM | imu, stereo_cam, rgb_depth | fast_pose, imu_raw, scene_map |
| POSE_PREDICTION | illixir_pose_prediction | fast_pose, imu_raw | pose_prediction |
| TIME_WARP | illixr_timewarp | pose_prediction, eye_buffer | hologram_in, texture_pose |
| ENCODER | video_encoder | texture_pose | rendered_video |
| ENCODER | holographic_encoder | hologram_in, bin_audio | holo_stream_out |
| AUDIO_PIPELINE | illixr_audio_pipeline | audio+pose, fast_pose | bin_audio |
| UEC | cardboard_connector | ue_imu, ue_mono_cam, rendered_video | imu, mono_cam, ue_rendered_video |

Table 1: Example services for an XR Service Catalog

Compatibility graph generation for XR applications

Let’s consider an example where a user wants to use the “PokemonGo_server” XR application with their “Occulus” HMD.

Here, the process begins by referring to the service catalog of the XR application. It then builds the compatibility graph that captures the relationships, or interdependencies, among the various microservices in the service catalog that are needed to meet the user requirements.

Figure 2 illustrates the compatibility graph generated for an XR application using our proposed solution. The graph represents the interconnections and relationships between different microservices based on their input and output interfaces, showcasing the comprehensive structure required for XR applications.

For example, the “Occulus” HMD has an input interface “ue_holo_stream_out”, while the “occulus_connector” service has an output interface with the same name. Therefore, we make a direct connection from “occulus_connector” to the “Occulus” HMD. This process is repeated until all services are included in the generated compatibility graph.
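One possible way to mechanize this matching, sketched below on a small excerpt of Table 1, is to treat the catalog as a directed graph and add an edge whenever an output interface of one service matches an input interface of another. The use of networkx here is our own illustrative choice, not part of the described solution.

```python
# Sketch: derive compatibility edges by matching output interfaces to
# input interfaces over a small excerpt of the XR service catalog.
import networkx as nx

catalog = {  # service -> (input interfaces, output interfaces)
    "OpenVINS_STEREO": ({"imu", "stereo_cam"}, {"imu_integrator_input", "slow_pose"}),
    "GTSAM": ({"imu_integrator_input", "imu"}, {"imu_raw", "fast_pose"}),
    "illixir_pose_prediction": ({"fast_pose", "imu_raw"}, {"pose_prediction"}),
    "illixr_timewarp": ({"pose_prediction", "eye_buffer"}, {"hologram_in", "texture_pose"}),
}

def build_compatibility_graph(catalog):
    g = nx.DiGraph()
    g.add_nodes_from(catalog)
    for u, (_, u_out) in catalog.items():
        for v, (v_in, _) in catalog.items():
            if u != v and u_out & v_in:
                g.add_edge(u, v)  # an output of u feeds an input of v
    return g

g = build_compatibility_graph(catalog)
print(sorted(g.edges()))
# OpenVINS_STEREO -> GTSAM -> illixir_pose_prediction -> illixr_timewarp
```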


Figure 2: Example compatibility graph for an XR application

Service graph generation for XR applications

Given the compatibility graph obtained in the previous step, a candidate service graph is simply a subgraph of the compatibility graph that includes the “Occulus” and “PokemonGo_server” core services and potentially fulfills the input requirements.

Figure 3 shows three example service graphs that can fulfill the requirement of streaming the “PokemonGo_server” on “Occulus”. Each graph includes the core services along with all necessary intermediate microservices.

We can make a few interesting observations about the generated service graphs:

First, we see that it is possible to have several service graphs that fulfill the requirements. 

Second, we also see that graphs 2 and 3 in this example have identical structures, differing only in the specific implementation of a service (OpenVINS_STEREO vs. Kimera-VIO).

Such diversity in service graphs implies that the selection of the most appropriate graph should be carefully considered. Factors such as resource availability, service performance capability, and cost become crucial determinants in the decision-making process. Resource availability involves assessing whether the necessary computational and infrastructural resources are accessible for the selected service graph. Service performance capability relates to evaluating how well each service fulfills the desired performance criteria, which may include metrics such as latency and reliability. Lastly, cost considerations involve evaluating the expenses associated with deploying resources for the service graph, ensuring it meets technical needs while staying within budget.
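As a toy illustration of such a trade-off, candidate graphs could be ranked by a weighted score over normalized metrics. The weights and metric values below are invented purely for illustration; a production system would derive them from monitoring data and business policy.

```python
# Toy ranking sketch: lower score is better. Metrics are assumed to be
# normalized to [0, 1]; weights and values are invented for illustration.
WEIGHTS = {"latency": 0.5, "resource_usage": 0.3, "cost": 0.2}

def score(metrics: dict) -> float:
    return sum(WEIGHTS[k] * metrics[k] for k in WEIGHTS)

candidates = {
    "service_graph_1": {"latency": 0.7, "resource_usage": 0.4, "cost": 0.5},
    "service_graph_2": {"latency": 0.4, "resource_usage": 0.6, "cost": 0.6},
    "service_graph_3": {"latency": 0.4, "resource_usage": 0.5, "cost": 0.7},
}

best = min(candidates, key=lambda name: score(candidates[name]))
print(best)  # "service_graph_3" with these made-up numbers
```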


Figure 3: Example service graphs of an XR application

Navigating from service graph generation to service assurance for cloud-native applications

In this blog post, we have outlined the challenges associated with generating service graphs for cloud-native applications. The composition and arrangement of these service components significantly impact the successful deployment of cloud-native applications. To address these challenges, we have introduced our service graph generation solution, which is composed of two steps: compatibility graph generation and service graph generation. We have also tested its effectiveness and performance by applying it to an XR use case based on the ILLIXR research testbed.

After deployment in cloud environments following the selected service graph, applications may experience performance issues due to infrastructure or resource provisioning faults. These issues can manifest as quality of service (QoS) drops or even service interruptions. Preventing such performance problems by predicting them and taking proactive remediation actions is critical for maintaining application availability and performance. Service assurance solutions play a key role in ensuring the QoS of cloud applications. In our next blog post, we will explore the importance of ensuring service assurance post-deployment. We will discuss proactive solutions designed to preemptively address service performance problems, as well as effective mitigation strategies.

More reading

Blog post: How to automate resource dimensioning in cloud

Details of the ILLIXR platform
