Latency-Aware Horizontal Computation Offloading for Parallel Processing in Fog-Enabled IoT

Bibliographic Details
Published in: IEEE Systems Journal, Vol. 16, No. 2, pp. 1-8
Main Authors: Deb, Pallav Kumar; Misra, Sudip; Mukherjee, Anandarup
Format: Journal Article
Language: English
Published: New York: The Institute of Electrical and Electronics Engineers, Inc. (IEEE), 01.06.2022
Subjects:
ISSN: 1932-8184, 1937-9234
Description
Summary: In this article, we propose a two-step distributed horizontal architecture for computation offloading in a fog-enabled Internet of Things environment, HD-Fog, to minimize the overall energy consumption and latency while executing hard real-time applications. HD stands for the horizontal distribution of the tasks in the fog layer. Each sensor in the user devices independently captures data of varying formats. These data can be processed in parallel based on the application's directed acyclic task graph (DATG), and the corresponding results facilitate decision-making. Toward this, in HD-Fog, the sensor nodes in user devices offload their tasks to a nearby fog node based on a greedy selection criterion. This fog node then further offloads the smaller subtasks, based on the DATG, among other fog nodes for parallel execution. Through extensive real-life metric-based emulation and comparison against traditional fog and cloud computing schemes, we observe that our approach 1) reduces the overall operational delays by 29% and 96%, respectively, and 2) offers promising speedup values. The proposed HD-Fog scheme also reduces energy consumption by 30% compared to traditional fog computing schemes.
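
Note: The summary only outlines the two offloading steps. The short Python sketch below illustrates them under stated assumptions: fog-node selection is modeled as a greedy choice by lowest estimated latency, and the DATG is represented as a map from each subtask to its prerequisites. The helper names (greedy_select, parallel_levels) and the example latencies and task graph are hypothetical, not taken from the paper.

# Illustrative sketch of HD-Fog's two-step offloading, as outlined in the
# summary above. The greedy criterion (lowest estimated latency) and the
# example DATG are assumptions for demonstration, not the paper's exact model.

def greedy_select(fog_nodes, estimate_latency):
    """Step 1: a sensor node offloads its task to the nearby fog node
    with the lowest estimated latency (greedy selection)."""
    return min(fog_nodes, key=estimate_latency)

def parallel_levels(datg):
    """Step 2: group DATG subtasks into dependency levels; subtasks within
    one level are independent and can be offloaded to peer fog nodes for
    parallel execution. `datg` maps each subtask to its prerequisites."""
    remaining = {task: set(deps) for task, deps in datg.items()}
    levels = []
    while remaining:
        ready = {task for task, deps in remaining.items() if not deps}
        if not ready:
            raise ValueError("DATG contains a cycle")
        levels.append(sorted(ready))
        remaining = {task: deps - ready
                     for task, deps in remaining.items() if task not in ready}
    return levels

if __name__ == "__main__":
    # Hypothetical example: subtasks B and C depend on A; D joins their results.
    datg = {"A": set(), "B": {"A"}, "C": {"A"}, "D": {"B", "C"}}
    print(parallel_levels(datg))                    # [['A'], ['B', 'C'], ['D']]
    rtt_ms = {"fog1": 12.0, "fog2": 7.5, "fog3": 20.0}
    print(greedy_select(list(rtt_ms), rtt_ms.get))  # fog2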
DOI: 10.1109/JSYST.2021.3085566