
How will distributed compute and storage improve future networks?

Tomorrow’s advanced use cases, together with the slowing down of Moore’s Law, look set to shake up how we compute and store data. What will this mean for future storage and compute models? Find out below.

Principal Researcher, Network-compute convergence

Principal Research Engineer, Cloud technologies

Master Researcher, Cloud systems and software

Senior Researcher, Cloud systems and platforms


The age of virtualization and cloud began with the promise of reduced costs, achieved by running all types of workloads on homogeneous, commercial off-the-shelf (COTS) hardware hosted in dedicated, centralized data centers.

Today’s use cases are maturing, however. Emerging applications such as cyber-physical systems face much stricter requirements on data volumes, latency guarantees, energy efficiency, privacy and resiliency.

The future network platform will tackle these challenges by taking advantage of two parallel trends:

  • firstly, through the seamless integration of specialized compute and storage hardware, enabling better performance for a wider range of emerging, complex applications
  • secondly, by moving these advanced compute and storage capabilities to the edge of the network, closer to where the data is generated

As a result, the future network platform will be able to provide optimal application support by leveraging emerging hardware innovation distributed throughout the network, while continuing to harvest the operational and business benefits of cloud computing models.

Distributed compute and storage is one of the key technology trends evolving the network platform. Read the 2019 technology trends in full.

What next for compute and storage?

Moore’s Law is slowing down, meaning developers can no longer assume that new, demanding applications will be catered for by the next generation of faster general-purpose chips. In addition, today’s general-purpose computing is proving unfit to meet contemporary energy-efficiency requirements, from both a cost and an environmental point of view.

Domain-specific accelerators

Instead, commodity hardware is being joined by a highly heterogeneous set of specialized, domain-specific chipsets, often collectively referred to as accelerators. Each of these chips is optimized for a certain class of applications. For instance, data-intensive applications like machine learning (ML) and artificial intelligence (AI), or augmented and virtual reality, can take advantage of the massive parallelization offered by graphics processing units (GPUs) or tensor processing units (TPUs).
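To make the offload model concrete, here is a minimal sketch in Python using PyTorch (our framework choice for illustration; the trend itself is framework-agnostic). The same matrix multiplication runs unchanged on a CPU or, when an accelerator is present, on the massively parallel cores of a GPU:

```python
import torch

# Pick an accelerator if one is present; otherwise fall back to the CPU.
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

# The workload itself is device-agnostic: a large matrix multiplication.
a = torch.randn(4096, 4096, device=device)
b = torch.randn(4096, 4096, device=device)

c = a @ b  # on a GPU this executes as thousands of parallel threads
print("computed on:", c.device)
```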

Figure 1: Microprocessor's Transistor Count & Moore's Law


Latency-sensitive applications, such as 5G network functions or mission-critical applications, might utilize the computation-pattern reuse offered by either application-specific integrated circuits (ASICs) or field-programmable gate arrays (FPGAs), network acceleration being a common example for the latter. While ASICs incur high development costs, they offer optimal performance and power consumption. FPGAs, on the other hand, offer application-specific reconfigurability of logic blocks at the expense of somewhat lower performance per watt. As the use of domain-specific processing increases, more efficient utilization patterns for these accelerators will become commonplace, such as remote access and sharing, much like today’s COTS hardware in the cloud.

Beyond-CMOS computing

However, even today’s accelerators, mostly based on CMOS (complementary metal-oxide-semiconductor) technology, will eventually experience the end of Moore’s Law. As the next step in heterogeneous computing, completely new “beyond-CMOS” computing paradigms will appear, at least for selected, specific types of applications. These will include neuromorphic processors, inspired by the workings of the human brain. Neuromorphic computing attempts to adopt the brain’s locality, fine-grained parallelism and event-driven operation by realizing spiking neural networks in hardware, a model of computing in which computation is represented as the time-dependent state evolution of a dynamic system. As a result, such processors yield low power consumption, fast inference and event-driven information processing, primarily for ML/AI applications.
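To illustrate computation as the state evolution of a dynamic system, here is a minimal leaky integrate-and-fire neuron, the basic building block of spiking neural networks, simulated in plain Python (a software sketch only; real neuromorphic chips realize this in hardware):

```python
import numpy as np

def lif(input_current, dt=1e-3, tau=20e-3, v_rest=0.0, v_thresh=1.0):
    """Leaky integrate-and-fire neuron: the membrane potential v leaks
    toward rest, integrates input, and emits a spike (an event) whenever
    it crosses the threshold."""
    v, spike_times = v_rest, []
    for step, i in enumerate(input_current):
        v += dt / tau * (v_rest - v) + i   # leak + integration
        if v >= v_thresh:                  # threshold crossing
            spike_times.append(step * dt)  # event-driven output
            v = v_rest                     # reset after the spike
    return spike_times

rng = np.random.default_rng(0)
spikes = lif(rng.uniform(0.0, 0.12, size=1000))
print(f"{len(spikes)} spikes in 1 s of simulated input")
```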

Another emerging paradigm is photonic computing. Here, photons are used instead of electrons, avoiding the latency of electron switching times and adding inherent parallelism for optical in-network processing. Even further down the road, quantum processor-based acceleration of compute-intensive and latency-sensitive telco algorithms will become reality. By exploiting quantum-mechanical principles like superposition (the ability of a quantum system to be in more than one state at the same time) and entanglement (measurements of two entangled qubits always yield correlated outcomes), quantum processors promise significantly faster problem-solving for particular classes of problems. As a first step, while we wait for full-blown neuromorphic, optical and quantum processors, these technologies will become available as co-processors to accelerate a variety of specific applications.
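The correlation of entangled qubits can be illustrated with nothing more than linear algebra. The Python sketch below prepares the Bell state (|00⟩ + |11⟩)/√2 by applying a Hadamard gate followed by a CNOT to |00⟩ and then samples measurements; the two qubits always agree:

```python
import numpy as np

H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)   # Hadamard gate
I = np.eye(2)
CNOT = np.array([[1, 0, 0, 0],                 # control = first qubit
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]])

state = np.zeros(4)
state[0] = 1.0                                 # start in |00>
state = CNOT @ np.kron(H, I) @ state           # H on qubit 0, then CNOT

probs = np.abs(state) ** 2                     # Born rule: measurement probabilities
rng = np.random.default_rng(0)
print(rng.choice(["00", "01", "10", "11"], p=probs, size=8))
# only "00" and "11" ever appear: the qubits' outcomes are perfectly correlated
```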

Next-generation storage and memory

Given today’s data-intensive applications, the demand for memory capacity is growing faster than the capacity growth of common memory technologies. In response, upcoming generations of memory will blur the strict dichotomy of classical volatile memory on one hand, and persistent storage technologies on the other hand.

We will see the emergence of “universal memories”, offering the capacity and persistence of storage combined with the byte-addressability and access speed of today’s RAM technologies. Storage-class, persistent memory technologies will help solve DRAM scaling issues and remove extra layers of the storage stack, bringing both speed and efficiency.

Programs written for persistent memories can remove the distinction between runtime data structures and offline storage structures, resulting in faster startup times and failover recovery. Eventually, processes may be suspended and resumed rather than started and stopped, opening up new possibilities around dynamic deployment and distribution of networked services. Furthermore, advancements in technologies like NVMe over Fabrics (NVMe-oF) will be crucial to meeting strict latency requirements while providing applications with access to large capacities of shared storage. This class of emerging technologies provides interfaces enabling optimized software stacks that can take advantage of speed advancements in data center interconnects.
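As a loose analogy for this programming style, the Python sketch below uses a memory-mapped file as a stand-in for storage-class memory: the runtime counter and its persisted representation are the same bytes, with no separate serialization step (the file name and layout are invented for the example):

```python
import mmap, os, struct

PATH = "counter.bin"                    # hypothetical persistent region
if not os.path.exists(PATH):
    with open(PATH, "wb") as f:
        f.write(struct.pack("<Q", 0))   # one 8-byte counter, initially zero

with open(PATH, "r+b") as f:
    buf = mmap.mmap(f.fileno(), 8)      # byte-addressable view of the data
    count = struct.unpack_from("<Q", buf)[0]
    struct.pack_into("<Q", buf, 0, count + 1)   # update in place
    buf.flush()                         # msync here; a cache-line flush on
                                        # real persistent memory
    print("run number:", count + 1)
    buf.close()
```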

The problems caused by the memory wall (the growing disparity between CPU speeds and memory access speeds) are driving a new paradigm, enabled by approaches like silicon die-stacking and persistent memory. Just as edge computing moves compute to the data at macro scale, compute-node architectures are adopting the same mindset at micro scale. Compute units will be embedded inside memory or storage fabrics, opening up near-memory computing (NMC) and computational storage approaches (see Figure 2). Reducing the need to move data from storage to memory and on to the processor will not only increase performance but also yield significant energy-efficiency gains.
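The toy model below illustrates the computational-storage idea: pushing a filter down to the device moves only the matching records across the interconnect instead of the whole dataset (the class and method names are illustrative, not a real device API):

```python
import sys

class Storage:
    """Toy device that counts the bytes it ships to the host."""
    def __init__(self, records):
        self.records = records
        self.bytes_moved = 0

    def read_all(self):                       # conventional path: move everything
        self.bytes_moved += sum(sys.getsizeof(r) for r in self.records)
        return list(self.records)

    def filter_near_data(self, predicate):    # computational-storage path
        hits = [r for r in self.records if predicate(r)]
        self.bytes_moved += sum(sys.getsizeof(r) for r in hits)
        return hits

dev = Storage([(i, 20 + (i * 7) % 15) for i in range(100_000)])
hot = dev.filter_near_data(lambda r: r[1] > 30)
print(len(hot), "matches;", dev.bytes_moved, "bytes moved to the host")
```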

Figure 2: Today's compute centric architecture vs future data centric architectures


Distributed and edge computing trends 

Requirements typically associated with 5G applications, such as massive data volumes, latency guarantees, energy efficiency, privacy and resiliency, will have to be met by applications running on a platform that is massively distributed, all the way to the edge of the network. The future network platform will cater to the emerging need for edge computing by establishing mutual awareness between the computing environment, the connectivity, and the devices connecting to the network (see Figure 3). This will require well-defined, secure, abstract interfaces that allow applications to express their intents and the network to expose relevant connectivity information. As a result, we will see optimized deployment and synchronization of applications running in distributed edge environments. Integrated connectivity and compute at the edge of the network, combined with distributed intelligence, will provide a smooth transition to a future where application connectivity, performance and resiliency requirements can always be met in a cost- and energy-efficient way.

Figure 3: Integration of compute and storage capabilities in the edge to support both network and 3rd party applications


Revised programming models and system software requirements

Efficiently developing applications for a distributed compute environment based on heterogeneous, emerging infrastructure technologies will require new programming models. For instance, programs would greatly benefit from separating the intent of the application from the how and where of its physical execution. To give an example, maintaining a consistent view of data is expensive in terms of latency and resources, both in large distributed systems and across heterogeneous memory technologies.

Developers will declare the intent of data structures and operations, allowing commutative and idempotent operations and conflict-free replicated data types (CRDTs) where possible. For other data and operations, the declared intent could require a stronger level of consistency, such as linearizability or causal consistency. Only at the application level does the knowledge exist as to whether strong consistency or an after-the-fact apology (a compensating action) is the proper failure mitigation, and how to strike the balance between capacity and latency.
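As a concrete example of such declared intent, here is a minimal grow-only counter (G-Counter), a conflict-free replicated data type, in Python: increments commute and merges are idempotent, so replicas converge without any coordination (a sketch for illustration):

```python
class GCounter:
    """Grow-only counter CRDT: each replica tracks its own contribution."""
    def __init__(self, replica_id):
        self.replica_id = replica_id
        self.counts = {}

    def increment(self, n=1):
        self.counts[self.replica_id] = self.counts.get(self.replica_id, 0) + n

    def value(self):
        return sum(self.counts.values())

    def merge(self, other):
        # Element-wise max is commutative, associative and idempotent,
        # so merges can happen in any order, any number of times.
        for rid, n in other.counts.items():
            self.counts[rid] = max(self.counts.get(rid, 0), n)

a, b = GCounter("edge-a"), GCounter("edge-b")
a.increment(3)
b.increment(2)
a.merge(b)
b.merge(a)
assert a.value() == b.value() == 5   # replicas converge without coordination
```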

Recently, we have seen the intent-based approach on the rise in various domains. For instance, intent-based networking uses service level agreements (SLAs) and policies to define the intent of network operations; the platform then configures, monitors and troubleshoots the network to fulfill these intents. Certain cloud services, e.g. KubeDB, are also starting to be managed by intent-based operators, evolving towards more advanced automation. We foresee a continuation of this trend towards a fully fledged intent-based distributed cloud. Hence, the network platform will be able to support developers with efficient and transparent programming models that expose the right level of information and hide other complexities of the distributed, heterogeneous environment, while taking full advantage of all infrastructure features for optimized application performance.
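The essence of such an intent-based operator can be sketched as a reconcile loop that compares declared intent with observed state and acts to close the gap (all names below are illustrative and not tied to KubeDB or any real operator API):

```python
intent = {"replicas": 3}        # what the developer declares

def observe():
    # Stand-in for querying the actual system state.
    return {"replicas": 2}

def reconcile(desired, observed):
    # Close the gap between intent and reality.
    diff = desired["replicas"] - observed["replicas"]
    if diff > 0:
        print(f"scaling up by {diff}")     # stand-in for a real API call
    elif diff < 0:
        print(f"scaling down by {-diff}")
    else:
        print("intent fulfilled")

reconcile(intent, observe())
```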

We also foresee the rise of edge-native applications: applications designed from the ground up, in both development and deployment, to fully capitalize on compute and storage resources anywhere. With the increasing heterogeneity of the underlying hardware, the demands on system software will grow radically. The future network platform will be responsible for managing the infrastructure and for taking its various capabilities into account when orchestrating applications. Furthermore, developer-friendly programming environments will be required to open up the network platform to developers of third-party applications.

As can be seen from the above, there are multiple technology trends in the compute and storage space, many of them poised to have a huge impact on how we develop telecom software systems. While some of these trends can be handled in the lower levels of software (kernel, infrastructure libraries and so on), masking their impact from the applications, others may require a complete rethink, including new programming models and new ways of dealing with system resiliency. Continued attention to this area is therefore very important, as changes to the hardware foundation of a platform will ripple through all layers, up to the applications themselves.

Read more

Which other trends will shape emerging technology?  Learn more in our 2019 technology trends.

What is a cyber-physical system? Read about another future-defining tech trend in our colleague’s recent blog post.

Learn everything you need to know about edge computing.

Networking trends: A platform for next-level digitalization - Ericsson
