Latency-aware placement of stream processing operators in modern-day stream processing frameworks
| Title: | Latency-aware placement of stream processing operators in modern-day stream processing frameworks |
|---|---|
| Authors: | Ecker, Raphael; Karagiannis, Vasileios; Sober, Michael Peter; Schulte, Stefan |
| Publisher Information: | Elsevier |
| Publication Year: | 2025 |
| Collection: | Hamburg University of Technology (TUHH): TUBdok |
| Subject Terms: | Apache Storm; Compute continuum; Data stream processing; Edge computing; Internet of Things; DDC 004: Computer Sciences; DDC 620.3: Vibrations; DDC 621.3: Electrical Engineering, Electronic Engineering; DDC 519: Applied Mathematics, Probabilities |
| Description: | The rise of the Internet of Things has substantially increased the number of interconnected devices at the edge of the network. As a result, a large number of computations are now distributed in the compute continuum, spanning from the edge to the cloud, generating vast amounts of data. Stream processing is typically employed to process this data in near real-time due to its efficiency in handling continuous streams of information in a scalable manner. However, many stream processing approaches do not consider the underlying network devices of the compute continuum as candidate resources for processing data. Moreover, many existing works do not consider the incurred network latency of performing computations on multiple devices in a distributed way. To avoid this, we formulate an optimization problem for utilizing the complete compute continuum resources and design heuristics to solve this problem efficiently. Furthermore, we integrate our heuristics into Apache Storm and perform experiments that show latency- and throughput-related benefits compared to alternatives. |
| Document Type: | article in journal/newspaper |
| File Description: | application/pdf |
| Language: | English |
| ISSN: | 0743-7315 |
| Relation: | Journal of Parallel and Distributed Computing: 105041 (2025); https://hdl.handle.net/11420/54147; https://doi.org/10.15480/882.14577 |
| DOI: | 10.15480/882.14577 |
| Availability: | https://hdl.handle.net/11420/54147 https://doi.org/10.15480/882.14577 |
| Rights: | CC BY 4.0 (https://creativecommons.org/licenses/by/4.0/) |
| Accession Number: | edsbas.4F626238 |
| Database: | BASE |