Computation Offloading for Distributed Sensor Applications

Sensors are proliferating rapidly, driven by smartphones and the Internet of Things (IoT). Processing the data from this ever-growing number of sensors is challenging and requires massive computing resources. Large-scale sensor data is a new scenario for High Performance Computing (HPC): it entails operating on highly dynamic, widely distributed data, often under tight response-time requirements. Most existing HPC solutions for sensors send all data to a shared cloud data center and process it there. Unfortunately, this approach is unsuitable for applications that need fast responses or that handle privacy-sensitive data, and the Internet bandwidth demanded by the sheer number of sensors will keep escalating. There is therefore an urgent need for a distributed approach that can also perform computations on the sensors themselves or on nearby (cloudlet, edge, fog) resources.

This distribution leads to a complex, dynamic, online resource management and scheduling (RM&S) problem, in which many sensing-application components must be transparently managed and scheduled subject to constraints such as performance, privacy, and energy consumption. Component migration (offloading) is the key challenge here. Another major problem is that current programming paradigms are insufficient to address the complexity of future IoT applications running on a diversity of distributed hardware. To address these problems, the Continuum project will develop a generalized computation offloading model. The project will introduce several key innovations: a reference RM&S architecture, a portfolio technique to dynamically manage RM&S solutions, a declarative language for distributed sensor applications, and a methodology for experiments with distributed sensing. Many domains will benefit from this project, including smart cities, safety, smart buildings, and smart industry.
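To make the offloading decision concrete, the following is a minimal sketch of the kind of constrained placement choice at the core of the RM&S problem: for one application component, pick the placement (sensor, edge, or cloud) with the lowest estimated response time that still satisfies the component's deadline and privacy constraints. All names, numbers, and the cost model are illustrative assumptions, not part of the Continuum design.

```python
from dataclasses import dataclass

@dataclass
class Placement:
    name: str
    compute_latency_ms: float   # estimated time to run the component there
    network_latency_ms: float   # estimated round trip to ship data there
    privacy_safe: bool          # whether raw sensor data may be sent there

def choose_placement(placements, deadline_ms, data_is_private):
    """Return the fastest feasible placement, or None if no placement
    satisfies both the deadline and the privacy constraint."""
    feasible = [
        p for p in placements
        if (p.privacy_safe or not data_is_private)
        and p.compute_latency_ms + p.network_latency_ms <= deadline_ms
    ]
    if not feasible:
        return None
    return min(feasible,
               key=lambda p: p.compute_latency_ms + p.network_latency_ms)

# Illustrative options: the sensor is slow but local, the cloud is fast
# but far away and not trusted with private data.
options = [
    Placement("sensor", 80.0, 0.0, True),
    Placement("edge", 20.0, 10.0, True),
    Placement("cloud", 5.0, 120.0, False),
]

# A latency-critical component with private data lands on the edge
# (30 ms total), since the sensor misses the deadline and the cloud
# violates the privacy constraint.
print(choose_placement(options, deadline_ms=50, data_is_private=True).name)
```

A real RM&S system must of course solve this online for many interdependent components at once, with changing estimates; this static single-component version only shows the shape of the trade-off.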

Continuum is open-source and available at

The Team