Extending the Migration from Asynchronous to Reactive Programming in Java: A Performance Analysis of Caching, CPU-Bound, and Blocking Scenarios.

Bibliographic Details
Title: Extending the Migration from Asynchronous to Reactive Programming in Java: A Performance Analysis of Caching, CPU-Bound, and Blocking Scenarios.
Authors: Zbarcea, Andrei; Tudose, Cătălin; Boicea, Alexandru
Source: Applied Sciences (2076-3417); Jan 2026, Vol. 16, Issue 1, p90, 40p
Keywords: Distributed computing; Scalability; Service-oriented architecture (Computer science); Cache memory; Parallel programming; Real-time computing
Abstract: Modern distributed systems increasingly rely on reactive programming to meet the demands of high throughput and low latency under extreme concurrency. While the theoretical advantages of non-blocking I/O are well-established, empirical understanding of its behavior across heterogeneous enterprise workloads remains fragmented. This study presents a unified architectural evaluation of asynchronous (thread-per-request) and reactive (event-loop) paradigms within a functionally equivalent Java microservice environment. Unlike prior studies that isolate specific workloads, this research benchmarks the architectural crossover points across three distinct operational categories: distributed caching, CPU-bound processing, and blocking I/O, under loads up to 1000 concurrent users. The results quantify specific boundary conditions: the reactive model demonstrates superior elasticity in I/O-bound caching scenarios, achieving 75% higher throughput and 68% lower memory footprint. However, this advantage is strictly workload-dependent; both paradigms converge to an identical CPU wall at processor saturation, where the reactive model incurs a quantifiable latency penalty due to event-loop contention. Furthermore, under blocking conditions, the reactive model's memory efficiency (reducing footprint by ~50%) provides resilience against Out-Of-Memory (OOM) failures, even as throughput gains plateau. These findings move beyond generic performance comparisons to provide precise, data-driven guidelines for hybrid architectural adoption in complex distributed systems. [ABSTRACT FROM AUTHOR]
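The abstract contrasts the two paradigms under study: asynchronous thread-per-request handling, where each request pins a pooled thread for its full duration (including any blocking wait), and reactive event-loop handling, where a small number of threads dispatch requests as non-blocking completions. The distinction can be illustrated with a minimal, hypothetical JDK-only sketch (class and method names are illustrative, not from the paper; real reactive stacks such as Project Reactor or Netty are not used here):

```java
import java.util.concurrent.*;
import java.util.concurrent.atomic.AtomicInteger;

public class ParadigmSketch {

    // (1) Thread-per-request: each simulated request occupies its own pooled
    // thread and blocks (Thread.sleep stands in for blocking I/O). Memory
    // footprint grows with the number of concurrently blocked threads.
    static int threadPerRequest(int requests) throws Exception {
        ExecutorService pool = Executors.newFixedThreadPool(requests);
        AtomicInteger completed = new AtomicInteger();
        for (int i = 0; i < requests; i++) {
            pool.submit(() -> {
                try { Thread.sleep(10); } catch (InterruptedException ignored) { }
                completed.incrementAndGet();
            });
        }
        pool.shutdown();
        pool.awaitTermination(5, TimeUnit.SECONDS);
        return completed.get();
    }

    // (2) Event-loop style: one scheduler thread fires all completions via a
    // timer, so no thread is parked while a request "waits". This mirrors why
    // the reactive model's memory footprint stays flat in I/O-bound scenarios.
    static int eventLoop(int requests) throws Exception {
        ScheduledExecutorService loop = Executors.newSingleThreadScheduledExecutor();
        CountDownLatch done = new CountDownLatch(requests);
        for (int i = 0; i < requests; i++) {
            loop.schedule(done::countDown, 10, TimeUnit.MILLISECONDS);
        }
        done.await(5, TimeUnit.SECONDS);
        loop.shutdown();
        return requests - (int) done.getCount();
    }
}
```

The sketch also hints at the CPU-bound caveat reported in the results: if the scheduled tasks performed heavy computation instead of waiting, the single event-loop thread would become the bottleneck, matching the latency penalty the authors attribute to event-loop contention at processor saturation.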
Copyright of Applied Sciences (2076-3417) is the property of MDPI and its content may not be copied or emailed to multiple sites without the copyright holder's express written permission. Additionally, content may not be used with any artificial intelligence tools or machine learning technologies. However, users may print, download, or email articles for individual use. This abstract may be abridged. No warranty is given about the accuracy of the copy. Users should refer to the original published version of the material for the full abstract. (Copyright applies to all Abstracts.)
Database: Complementary Index
ISSN: 2076-3417
DOI:10.3390/app16010090