Reward Augmentation in Reinforcement Learning for Testing Distributed Systems

Detailed Bibliography
Published in: Proceedings of the ACM on Programming Languages, Volume 8, Issue OOPSLA2, pp. 1928-1954
Main authors: Borgarelli, Andrea; Enea, Constantin; Majumdar, Rupak; Nagendra, Srinidhi
Format: Journal Article
Language: English
Publication details: New York, NY, USA: ACM, 08.10.2024
ISSN: 2475-1421
Description
Summary: Bugs in popular distributed protocol implementations have been the source of many downtimes in popular internet services. We describe a randomized testing approach for distributed protocol implementations based on reinforcement learning. Since the natural reward structure is very sparse, the key to successful exploration in reinforcement learning is reward augmentation. We show two different techniques that build on one another. First, we provide a decaying exploration bonus based on the discovery of new states; the reward decays as the same state is visited multiple times. The exploration bonus captures the intuition from coverage-guided fuzzing of prioritizing new coverage points; in contrast to other schemes, we show that taking the maximum of the bonus and the Q-value leads to more effective exploration. Second, we provide waypoints to the algorithm as a sequence of predicates that capture interesting semantic scenarios. Waypoints exploit designer insight about the protocol and guide the exploration to "interesting" parts of the state space. Our reward structure ensures that new episodes can reliably get to deep interesting states even without execution caching. We have implemented our algorithm in Go. Our evaluation on three large benchmarks (RedisRaft, Etcd, and RSL) shows that our algorithm can significantly outperform baseline approaches in terms of coverage and bug finding.
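
The abstract outlines two reward-augmentation techniques: a visit-count-decayed exploration bonus that is combined with the Q-value by taking their maximum, and designer-supplied waypoint predicates. As a rough illustration only, the Go sketch below (Go is the paper's implementation language) shows one way these ideas could look in code. The Learner and Waypoints types, the 1/(1+n) decay schedule, and the reward magnitudes are assumptions for the sketch, not the paper's actual implementation.

package main

import "fmt"

// State and Action abstractly identify protocol states and scheduler
// choices; the concrete encodings used by the paper's tool are not
// shown here, so these are placeholder types.
type State string
type Action string

// Learner sketches a tabular Q-learning loop with the first
// augmentation: a per-state visit count driving a decaying bonus.
type Learner struct {
	Q      map[State]map[Action]float64
	visits map[State]int
	alpha  float64 // learning rate (assumed hyperparameter)
	gamma  float64 // discount factor (assumed hyperparameter)
}

func NewLearner(alpha, gamma float64) *Learner {
	return &Learner{
		Q:      map[State]map[Action]float64{},
		visits: map[State]int{},
		alpha:  alpha,
		gamma:  gamma,
	}
}

// bonus decays as the same state is revisited. The 1/(1+n) schedule
// is an assumption; the abstract only says the reward decays with
// repeated visits.
func (l *Learner) bonus(s State) float64 {
	return 1.0 / float64(1+l.visits[s])
}

// Update applies the augmented rule: the backup target for the next
// state is the maximum of the exploration bonus and the best Q-value,
// mirroring the abstract's "maximum of the bonus and the Q-value".
func (l *Learner) Update(s State, a Action, next State) {
	l.visits[next]++
	best := 0.0
	for _, q := range l.Q[next] {
		if q > best {
			best = q
		}
	}
	target := l.bonus(next)
	if best > target {
		target = best
	}
	if l.Q[s] == nil {
		l.Q[s] = map[Action]float64{}
	}
	l.Q[s][a] += l.alpha * (l.gamma*target - l.Q[s][a])
}

// Waypoints sketches the second augmentation: an ordered sequence of
// predicates over states. Satisfying the next unmet predicate yields
// an extra reward (the magnitude 1.0 is an assumption), steering
// episodes toward deep "interesting" states.
type Waypoints struct {
	preds []func(State) bool
	next  int
}

func (w *Waypoints) Reward(s State) float64 {
	if w.next < len(w.preds) && w.preds[w.next](s) {
		w.next++
		return 1.0
	}
	return 0.0
}

func main() {
	l := NewLearner(0.5, 0.95)
	l.Update("init", "deliver-msg", "leader-elected")
	fmt.Println(l.Q["init"]["deliver-msg"])

	w := &Waypoints{preds: []func(State) bool{
		func(s State) bool { return s == "leader-elected" },
	}}
	fmt.Println(w.Reward("leader-elected"))
}

One design note grounded in the abstract: taking the maximum rather than the sum keeps the backup target from inflating once a state's Q-value already exceeds its bonus, which is consistent with the abstract's claim that the max formulation explores more effectively than other schemes.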
DOI: 10.1145/3689779