Reward Augmentation in Reinforcement Learning for Testing Distributed Systems

Detailed Bibliography
Published in: Proceedings of the ACM on Programming Languages, Volume 8, Issue OOPSLA2, pp. 1928–1954
Main Authors: Borgarelli, Andrea; Enea, Constantin; Majumdar, Rupak; Nagendra, Srinidhi
Format: Journal Article
Language: English
Published: New York, NY, USA: ACM, 08.10.2024
ISSN: 2475-1421
Description
Summary: Bugs in popular distributed protocol implementations have been the source of many downtimes in popular internet services. We describe a randomized testing approach for distributed protocol implementations based on reinforcement learning. Since the natural reward structure is very sparse, the key to successful exploration in reinforcement learning is reward augmentation. We show two different techniques that build on one another. First, we provide a decaying exploration bonus based on the discovery of new states---the reward decays as the same state is visited multiple times. The exploration bonus captures the intuition from coverage-guided fuzzing of prioritizing new coverage points; in contrast to other schemes, we show that taking the maximum of the bonus and the Q-value leads to more effective exploration. Second, we provide waypoints to the algorithm as a sequence of predicates that capture interesting semantic scenarios. Waypoints exploit designer insight about the protocol and guide the exploration to "interesting" parts of the state space. Our reward structure ensures that new episodes can reliably get to deep interesting states even without execution caching. We have implemented our algorithm in Go. Our evaluation on three large benchmarks (RedisRaft, Etcd, and RSL) shows that our algorithm can significantly outperform baseline approaches in terms of coverage and bug finding.
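
The abstract describes two reward augmentations: a visit-count-based exploration bonus that decays as a state is revisited and is combined with the Q-value by taking their maximum, and an ordered list of waypoint predicates that reward an episode for reaching each "interesting" scenario in sequence. The following Go sketch (Go is the paper's stated implementation language) is a minimal, hypothetical illustration of how these ideas might fit together; all type, function, and state names are assumptions made for this sketch, not the authors' actual API.

// Minimal sketch (assumed names, not the paper's API) of the two reward
// augmentations described in the abstract.
package main

import "fmt"

// State stands in for a hash of the protocol state observed by the tester.
type State string

// Agent keeps learned Q-values and per-state visit counts.
type Agent struct {
	q      map[State]map[string]float64 // Q-values per state/action
	visits map[State]int                // how often each state was reached
}

func NewAgent() *Agent {
	return &Agent{
		q:      make(map[State]map[string]float64),
		visits: make(map[State]int),
	}
}

// bonus is the decaying exploration bonus: large for rarely seen states,
// shrinking toward zero as the same state is visited again and again.
func (a *Agent) bonus(s State) float64 {
	return 1.0 / float64(1+a.visits[s])
}

// score ranks an action by max(bonus of successor, Q-value), following the
// abstract's observation that taking the maximum explores more effectively
// than other ways of combining the two terms.
func (a *Agent) score(s State, action string, succ State) float64 {
	return max(a.bonus(succ), a.q[s][action]) // max builtin needs Go 1.21+
}

// Waypoints is an ordered list of designer-provided predicates; an episode
// is rewarded each time it satisfies the next predicate in the sequence.
type Waypoints struct {
	preds []func(State) bool
	next  int // index of the next predicate to satisfy
}

// Reward pays out when the next waypoint is hit, steering exploration
// toward deep "interesting" parts of the state space.
func (w *Waypoints) Reward(s State) float64 {
	if w.next < len(w.preds) && w.preds[w.next](s) {
		w.next++
		return 1.0
	}
	return 0.0
}

func main() {
	a := NewAgent()
	w := &Waypoints{preds: []func(State) bool{
		func(s State) bool { return s == "leader-elected" }, // example scenario
		func(s State) bool { return s == "log-diverged" },   // deeper scenario
	}}
	// Toy episode: visit three states, accruing bonuses and waypoint rewards.
	for _, s := range []State{"init", "leader-elected", "log-diverged"} {
		a.visits[s]++
		fmt.Printf("state=%-14s bonus=%.2f waypoint=%.1f\n", s, a.bonus(s), w.Reward(s))
	}
	fmt.Printf("score(init, deliver) = %.2f\n", a.score("init", "deliver", "leader-elected"))
}

In a real tester, State would likely be a fingerprint of the protocol's global state and the actions would be events such as message deliveries, crashes, or restarts; the max rule keeps a rarely visited successor attractive even while its learned Q-value is still near zero.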
DOI: 10.1145/3689779