Edge computing infrastructure and beyond: the 5 key factors for better edge choices
When paramedics reach the site of an accident or emergency, resources and equipment can be very limited. Paramedics work to stabilize the patient as best they can for quick transport to hospital, while ambulances act as an expensive taxi service, contributing little in the way of equipment for deep medical diagnosis or treatment. With the centralization of hospitals and increases in traffic congestion, the average time to reach the hospital is increasing – costing valuable time, and lives. But what if that ambulance were connected to the edge?
What if a small device, connected to a mobile phone, could deliver remote diagnostic tests like ultrasound, saving precious time and allowing the hospital to make the necessary preparations and have treatment ready on arrival? What if a specialist could provide remote over-the-shoulder support, guiding the medics to direct the probe and interpreting the imagery right then and there, in real time? Such a use case demands network and computing characteristics (low latency and high bandwidth) that cannot be achieved with a central or regional cloud.
This is the exact conversation we were having a couple of years ago with a medical device company looking to improve the capabilities of ambulances. This kind of use case isn’t just exciting – it’s lifesaving. And it’s just one example in the 5G-enabled enterprise market expected to be worth up to USD 700 billion by 2030.
Welcome to the innovative emerging world of edge computing.
Cloud and edge — the technology enabling 5G to deliver on its promises
Using cloud capabilities, edge computing brings compute power and storage closer to where the data is being generated and consumed. Whether that means enterprise on-premises or mobile network deployment will depend on the application requirements, but – just like the property market – it’s all about location, location, location. It’s also a complete turnaround from the trend toward centralization we’ve seen in recent years to reduce costs and maintain control.
To be clear, edge computing isn’t an entirely new concept. Distributed cloud and other similar technologies are already utilized by players including major media streaming providers around the globe. But with the advent of 5G comes a whole new level of network characteristics, and in turn, a whole new world of opportunities. And without edge bringing the consumption and processing capabilities closer together, the full promise of 5G for customers and consumers simply cannot be realized. As we often say: “Without edge computing, 5G is just a faster 4G.”
So how can CSPs deliver end-to-end capabilities, bring the network and edge together, and carve out a position for themselves to make the most of these early opportunities? To begin, here are five key interdependent areas to consider when defining and deploying your edge computing solutions.
Infrastructure
When it comes to the edge infrastructure layer, we’re talking about where the compute, storage, and application hosting live – about bringing the cloud to the edge. Unlike other network infrastructure, edge infrastructure isn’t about bringing in a set of servers or static machines on which you can install your application. It’s about introducing a way of managing things – similar to how you would manage cloud capabilities today, but at the edge, in a distributed environment.
As reliability will be crucial for edge applications, the infrastructure must be footprint-flexible, efficient and automated. Depending on the application requirements, the infrastructure could sit on-premises or in the CSP network, hosting both telco workloads and third-party (3PP) over-the-top (OTT) applications with limited local management. To support diverse applications, it’s also vital that the infrastructure supports multi- and hybrid-cloud.
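To make that idea concrete, here is a minimal Python sketch of treating edge infrastructure as a managed, distributed fleet of sites rather than a set of individual servers. The site names, fields and capabilities are illustrative assumptions, not a real product API.

```python
# Minimal sketch: modelling edge infrastructure as a managed, distributed fleet
# rather than individual servers. Site names, fields and capabilities are
# illustrative assumptions, not a real product API.
from dataclasses import dataclass, field


@dataclass
class EdgeSite:
    name: str
    location: str             # e.g. on-premises, CSP network, regional data center
    cpu_cores: int
    memory_gb: int
    capabilities: set = field(default_factory=set)  # e.g. {"gpu", "local-breakout"}


# The fleet is managed as one distributed cloud, not site by site.
fleet = [
    EdgeSite("factory-onprem-1", "on-premises", 32, 128, {"local-breakout"}),
    EdgeSite("metro-edge-7", "CSP network", 128, 512, {"gpu", "local-breakout"}),
    EdgeSite("regional-dc-2", "regional", 1024, 4096, {"gpu"}),
]


def sites_supporting(required: set) -> list[EdgeSite]:
    """Return every site whose declared capabilities cover the requirement."""
    return [s for s in fleet if required <= s.capabilities]


if __name__ == "__main__":
    for site in sites_supporting({"gpu", "local-breakout"}):
        print(f"{site.name} ({site.location}) can host the workload")
```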
Orchestration
Edge orchestration comes down to resource distribution and configuration. You can’t have hundreds of sites in one country running edge workloads with every application deployed at every site, at all times – it would require far too many resources and be far too expensive. Naturally, being smaller than a centralized location, the edge is a resource-constrained environment. This makes it vital to map the topology, consider the capabilities of all the different sites across the network, identify the best location for an application to be placed, and continuously monitor it for optimum usage.
We call this “Smart Workload Placement” – using algorithms to weigh up how the best capabilities can be provided where they’re needed most, and find that sweet spot where the cost of deploying an application across multi-cloud infrastructure is compensated by the benefits it would provide. Dynamic allocation of resources and ensuring that data flows and information are going to the right places are crucial for the effective operation of applications at the edge – especially in a multi-cloud environment.
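As a rough illustration of that placement logic, here is a minimal sketch that scores candidate sites by weighing latency, cost and spare capacity, then picks the best fit. The weights, site data and scoring formula are illustrative assumptions, not the actual placement algorithm.

```python
# Minimal sketch of smart workload placement: score candidate edge sites by
# weighing estimated latency, cost and spare capacity, then pick the best fit.
# The weights, site data and scoring formula are illustrative assumptions.
candidate_sites = [
    {"name": "metro-edge-7",  "latency_ms": 8,  "cost_per_hour": 0.90, "free_cpu": 24},
    {"name": "regional-dc-2", "latency_ms": 25, "cost_per_hour": 0.40, "free_cpu": 512},
    {"name": "central-cloud", "latency_ms": 60, "cost_per_hour": 0.20, "free_cpu": 4096},
]


def placement_score(site: dict, max_latency_ms: float, required_cpu: int) -> float:
    """Lower is better; return infinity if the site cannot meet hard requirements."""
    if site["latency_ms"] > max_latency_ms or site["free_cpu"] < required_cpu:
        return float("inf")
    # Weighted trade-off between latency benefit and the cost of running at the edge.
    return 0.7 * site["latency_ms"] + 0.3 * site["cost_per_hour"] * 100


def place_workload(max_latency_ms: float = 20, required_cpu: int = 8) -> str:
    best = min(candidate_sites, key=lambda s: placement_score(s, max_latency_ms, required_cpu))
    if placement_score(best, max_latency_ms, required_cpu) == float("inf"):
        raise RuntimeError("No site satisfies the application requirements")
    return best["name"]


if __name__ == "__main__":
    print(place_workload())  # -> "metro-edge-7" for a 20 ms latency budget
```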
User plane
Control and user plane separation (CUPS) was introduced in 4G and has become more advanced with the advent of 5G, where it shapes the three packet core functions. While the control plane deals with access and mobility management and session management and can be centralized, the user plane function is essentially the gateway between the network and the application – the connection point where the network meets the internet.
The user plane is therefore a key function to have distributed at the edge. If you’re bringing the application to a certain location, you need to make sure that gateway is close by, and instruct the network to steer that application’s traffic to it. To do this successfully, operators need a very agile user plane function that can scale to meet the demands of an application and that can be deployed at the site as a plug-and-play solution.
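As a simple illustration of what “close by” means in practice, here is a minimal Python sketch that anchors a session at the user plane function (UPF) instance nearest the application’s edge site. The UPF names, coordinates and distance heuristic are illustrative assumptions, not how a real packet core selects a UPF.

```python
# Minimal sketch of distributed user plane selection: pick the UPF instance
# closest to where the application is hosted, so traffic leaves the mobile
# network near the edge site. Names, coordinates and the distance heuristic
# are illustrative assumptions.
import math

upf_instances = {
    "upf-central":  (59.33, 18.07),   # central site
    "upf-metro-7":  (57.71, 11.97),   # metro edge site
    "upf-onprem-1": (55.60, 13.00),   # enterprise on-premises
}


def distance(a: tuple[float, float], b: tuple[float, float]) -> float:
    """Rough planar distance as a stand-in for measured network latency."""
    return math.hypot(a[0] - b[0], a[1] - b[1])


def select_upf(app_location: tuple[float, float]) -> str:
    """Anchor the session at the UPF nearest the application's edge site."""
    return min(upf_instances, key=lambda name: distance(upf_instances[name], app_location))


if __name__ == "__main__":
    # An application deployed at the metro edge should get the metro UPF.
    print(select_upf((57.70, 12.00)))  # -> "upf-metro-7"
```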
Traffic routing
Traffic routing is an important area, as this is where the network itself comes into play. While infrastructure and orchestration focus on the application hosting and environment, traffic routing brings in the information and awareness that sits within the network, and which CSPs have – the location of the user and what the user is trying to consume. For example, if a user sends a request to consume data via a video-streaming application, that information is within the CSP network – you'll see the user IP session request being routed from the user’s location to the user plane function near where the streaming service’s server is located. At the edge, however, where traffic routing isn’t always contained within the operator’s network, there are different options available for routing the user’s IP session to the edge.
We can either bring all the traffic to the edge and then decide where it goes from there, or we only bring part of the traffic to the edge, managing the rest more centrally. Three main mechanisms are emerging as the relevant technology for edge traffic routing: distributed anchor, session breakout and multiple session. So how do CSPs decide which is the best technology for them?
Ultimately, it will depend on the application and intended use. Being a very simple mechanism that can be delivered on top of existing 4G and 5G networks, distributed anchor would be a wise choice for many wishing to deploy on their existing networks. Session breakout is a more complicated option, requiring the development of complex features in multiple products, and is very specific to 5G – the mechanism doesn't exist at all in 4G. Multiple session is quite promising, but it isn’t yet an established technology due to device ecosystem dependencies, and it will likely also be specific to 5G. By the time it matures, however, 5G will likely be firmly established as a dominant technology.
At the end of the day, we want to achieve that separation of traffic. For those who haven’t already invested deeply in session breakout technology, distributed anchor should be easy to deploy now, with the option to evolve directly to the multiple session mechanism in the future, bypassing session breakout entirely. But you should thoroughly understand the costs and benefits of each technology before you make a decision – or talk to a knowledgeable partner who can help.
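To make the trade-off easier to see, here is a minimal sketch of that decision logic: which mechanism a CSP might land on given what the network and device ecosystem support. The rules are a simplified reading of the comparison above, not a formal 3GPP procedure.

```python
# Minimal sketch of the routing-mechanism decision discussed above. The rules
# are a simplified reading of this post's comparison, not a formal procedure.
def choose_routing_mechanism(network: str,
                             devices_support_multiple_sessions: bool,
                             session_breakout_available: bool) -> str:
    if network == "4G":
        # Distributed anchor is the only one of the three that works on 4G.
        return "distributed anchor"
    if devices_support_multiple_sessions:
        # Multiple session is 5G-specific and depends on device ecosystem support.
        return "multiple session"
    if session_breakout_available:
        # Session breakout is 5G-specific and needs complex features in several products.
        return "session breakout"
    # A simple default that can evolve later to multiple session, skipping breakout.
    return "distributed anchor"


if __name__ == "__main__":
    print(choose_routing_mechanism("5G",
                                   devices_support_multiple_sessions=False,
                                   session_breakout_available=False))
    # -> "distributed anchor"
```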
Edge exposure
There are two angles when it comes to edge exposure: exposure for the edge, and exposure at the edge. Exposure for the edge includes exposing assets like edge discovery information and user equipment (UE) IP-to-network-identity translation information – vital for systems that need to find where sites are and how to connect to them.
Exposure at the edge is about exposing capabilities at the edge for the applications residing there. These could include location information, quality of service information or user equipment information. Exposing these capabilities means that, in latency-sensitive scenarios, you don’t have to go back to a central location to access them. It’s also important to note that this information needs to be exposed in a format the network can identify and that can be translated into a useful, consumable form for applications.
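As a rough illustration of exposure at the edge, here is a minimal Python sketch of network-side information being translated into a format a local application can consume, without a round trip to a central site. The lookup, field names and values are hypothetical, not a real exposure API.

```python
# Minimal sketch of "exposure at the edge": network information (location,
# quality of service, device context) translated into something an edge
# application can consume locally. All fields and values are hypothetical.
import json


def network_capabilities_for(ue_ip: str) -> dict:
    """Pretend lookup of network-side context for a given UE IP address."""
    return {
        "ue_ip": ue_ip,
        "serving_cell": "cell-4711",         # location information
        "latency_class": "ultra-low",        # quality-of-service information
        "device_type": "ambulance-gateway",  # user equipment information
    }


def exposure_response(ue_ip: str) -> str:
    """Serve the translated information as JSON the application can act on."""
    return json.dumps(network_capabilities_for(ue_ip))


if __name__ == "__main__":
    print(exposure_response("10.45.0.17"))
```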
With 25 percent of all emerging 5G use cases expected to rely on edge within the next year, the business potential for early-moving CSPs to gain advantage in this area is enormous – particularly in enterprise use cases for sectors such as gaming, industry, healthcare and more. The questions will simply be: what will your strategy for multi-cloud edge deployment look like? What role will you play in this new ecosystem? And who will you choose to help you on that journey?
Read more
Learn more about edge computing, strategies for successful deployment and Ericsson’s related offerings.
See what Erik Ekudden, CTO of Ericsson and Randeep Sekhon, CTO of Bharti Airtel had to say about cloud innovation and the opportunities of edge in India and across the world in this CTO Focus blog post.
Discover how Ericsson are driving openness for ecosystem innovation.