Edge computing in IoT
In the Internet of Things space, individual measurements are not very interesting. Rather, we want to analyze an aggregated set of measurements over time, from a single sensor or a larger group of sensors, and visualize the results or use them for real-time actuation. For many use cases, sending data to a centrally located data center for processing is too slow and costs too much in terms of network traffic. Therefore, a distributed cloud set-up, with intelligence pushed to the edges of the network, is necessary.
So, how do you distribute the processing tasks of IoT applications in the most efficient way? Enter actor model programming. Actor model programming describes a complicated computation as a set of actors: individual components, each with a single purpose, performing one task. Actor models, which are essentially compositions of several actors, have no shared state. Instead, actors communicate via message exchange. This representation forces a developer to employ a strict separation of concerns when describing a computation as an actor model.
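As a minimal illustration of these ideas (a Python sketch using threads and queues, not Calvin code), here are two actors that hold their state privately and interact only through messages:

```python
import queue
import threading

class Actor:
    """A minimal actor: private state, reacts only to messages in its inbox."""

    def __init__(self):
        self.inbox = queue.Queue()
        self._thread = threading.Thread(target=self._run, daemon=True)
        self._thread.start()

    def send(self, message):
        # The only way to interact with an actor: put a message in its inbox.
        self.inbox.put(message)

    def stop(self):
        self.inbox.put(None)  # poison pill: tells the actor loop to exit
        self._thread.join()

    def _run(self):
        while True:
            message = self.inbox.get()
            if message is None:
                break
            self.receive(message)

    def receive(self, message):
        raise NotImplementedError

class Doubler(Actor):
    """Doubles each incoming reading and forwards it to a downstream actor."""

    def __init__(self, downstream):
        self.downstream = downstream
        super().__init__()

    def receive(self, message):
        self.downstream.send(message * 2)

class Collector(Actor):
    """Accumulates whatever it receives; its list is never shared directly."""

    def __init__(self):
        self.results = []
        super().__init__()

    def receive(self, message):
        self.results.append(message)

collector = Collector()
doubler = Doubler(collector)
for reading in [1, 2, 3]:
    doubler.send(reading)
doubler.stop()
collector.stop()
print(collector.results)  # [2, 4, 6]
```

Because each actor touches only its own state, the two actors could just as well run on two different devices, with the queues replaced by a network transport; that is exactly the property the deployment discussion below relies on.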
You can exercise actor model programming in several ways: with Calvin, a framework we at Ericsson Research recently released as open source; with languages that support actors, such as Scala, Erlang, and D; or with one of the many well-known actor libraries and frameworks for your favorite programming language.
Technically, actor model programming does not solve the general case: it cannot automatically transform any complex computation into a set of independent, simple tasks. But as a programming paradigm it forces the developer to think about how to express a complex problem as a set of simple tasks. All in all, this is a good habit to get into in the multicore age.
The actor model, which describes how a complex computation is split into different parts, is only one piece of the input we need in order to figure out the optimal deployment plan. We also need to know a few more things:
- Which devices are computationally available, and what are their capabilities? (Can they store data? Do they have direct access to sensor data?)
- What is the communication cost between the different devices?
All these requirements essentially generate a search space consisting of the different alternatives we need to examine in order to decide which mapping meets our requirements. Such search spaces can be formulated very elegantly using constraint programming. In a way, constraint programming allows us to formally describe preferences or limitations on the aforementioned search space. The interesting thing here is that we can formulate as many or as few constraints as we like.
Consider the following example:
Let’s say that we have an actor model (Figure 1) consisting of 4 tasks and a set of 7 computationally available devices where these tasks can be deployed. In the general case, where any task can be placed on any device and several tasks may share a device, this gives us 2401 (7^4) different ways to map the tasks to our devices.
Figure 1 Sample actor model
Then we can add a few constraints to this picture: computation A can only be done by device 1, because that device has access to a particular sensor; computation B can only be done by device 2, because that device has access to another kind of sensor; and, last but not least, computation D can be done by either device 6 or device 7, because only those devices can store large amounts of data. Now the number of possible mappings is reduced to 14. These final 14 alternatives can be further evaluated using a cost function, and the ones that minimize it are the optimal ways of deploying tasks in our miniature (by-design) setup (Figure 2).
Figure 2 Possible deployment of actor model
In this case, of course, the constraints work to our advantage, because they decrease our search space. In the general case, however, this is an NP-hard problem (similar to the quadratic assignment problem); in other words, there is no known algorithm that solves it in polynomial time.
That said, our quest does not have to end here. If the problem is partitioned so that each mapping of actors to devices stays relatively small, a solution can be produced in milliseconds. Moreover, we can play additional tricks to tip-toe around NP-hardness, such as memoization. Memoization lets us store previously computed full or partial deployments so that we do not repeat work we have already done, trading memory for computation. If we can afford it, we can even depart from our original quest for the perfect solution and settle for a less perfect one that can be produced in a more reasonable time, re-deploying or re-arranging our actor model once we have approximated a better alternative. Genetic programming can be used in such a case to iteratively create new generations of solutions (alternative deployments of actors, in our case) that may eventually approximate the optimal one.
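The memoization trick can be sketched in a few lines. The communication-cost table below is hypothetical, and we use Python's `functools.lru_cache` as the store of previously computed partial deployments:

```python
from functools import lru_cache

# Hypothetical, symmetric communication costs between device pairs.
COMM_COST = {(1, 2): 5, (2, 6): 3, (1, 6): 8}

def link_cost(a, b):
    """Cost of sending messages between devices a and b (0 on the same device)."""
    if a == b:
        return 0
    return COMM_COST.get((min(a, b), max(a, b)), 10)  # default cost for other pairs

@lru_cache(maxsize=None)
def partial_cost(placed):
    """Cost of a partial deployment: 'placed' is a tuple of devices chosen for
    the tasks mapped so far, along a pipeline A -> B -> C -> D. Identical
    prefixes are computed once and then looked up, trading memory for time."""
    return sum(link_cost(a, b) for a, b in zip(placed, placed[1:]))

partial_cost((1, 2))     # computed: cost 5
partial_cost((1, 2, 6))  # computed: cost 5 + 3 = 8
partial_cost((1, 2))     # served from the cache, no recomputation
```

In a real search, many candidate deployments share the same prefix of placements, so caching partial costs this way prunes a large fraction of the repeated work.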
This blog post is derived from a recent paper we presented at ANT-2015. Please see the full publication for more details:
A. Moregård Haubenwaller, K. Vandikas - Computations on the Edge in the Internet of Things
Konstantinos Vandikas, Ericsson Research