GAME-RL: Generating Adversarial Malware Examples Against API Call Based Detection via Reinforcement Learning

Published in: IEEE Transactions on Dependable and Secure Computing, Vol. 22, No. 5, pp. 5431-5447
Main Authors: Zhan, Dazhi; Liu, Xin; Bai, Wei; Li, Wei; Guo, Shize; Pan, Zhisong
Format: Journal Article
Language: English
Published: Washington: IEEE, 01.09.2025
IEEE Computer Society
ISSN: 1545-5971, 1941-0018
Description
Abstract: Adversarial examples present new security threats to trustworthy detection systems. In the context of evading dynamic detection based on API call sequences, a practical approach involves inserting perturbing API calls to modify these sequences. The type of inserted API calls and their insertion locations are crucial for generating an effective adversarial API call sequence. Existing methods either optimize the inserted API calls while neglecting the insertion positions or treat these optimizations as separate processes, which can lead to inefficient attacks that insert a large number of unnecessary API calls. To address this issue, we propose a novel reinforcement learning (RL) framework, dubbed GAME-RL, which simultaneously optimizes both the perturbing APIs and their insertion positions. Specifically, we define malware modification through IAT (Import Address Table) hooking as a sequential decision-making process. We introduce invalid action masking and an auto-regressive policy head within the RL framework, ensuring the feasibility of IAT hooking and capturing the inherent relationship between the two factors. GAME-RL learns more effective evasion strategies while taking into account functionality preservation and the black-box setting. We conduct comprehensive experiments on various target models, demonstrating that GAME-RL significantly improves the evasion rate while maintaining acceptable levels of adversarial overhead.
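To make the abstract's two key mechanisms concrete, below is a minimal, hypothetical sketch (not the authors' code) of an auto-regressive policy head with invalid action masking: the policy first samples an insertion position, then samples a perturbing API conditioned on that position, and infeasible choices are masked out before sampling. All module names, dimensions, and masks here are illustrative assumptions.

```python
import torch
import torch.nn as nn
from torch.distributions import Categorical

class AutoRegressivePolicyHead(nn.Module):
    """Illustrative sketch: joint choice of (insertion position, perturbing API)."""
    def __init__(self, state_dim: int, num_positions: int, num_apis: int):
        super().__init__()
        self.position_logits = nn.Linear(state_dim, num_positions)
        # The API choice is conditioned on the sampled position via an embedding.
        self.position_embed = nn.Embedding(num_positions, state_dim)
        self.api_logits = nn.Linear(state_dim, num_apis)

    @staticmethod
    def _masked_dist(logits: torch.Tensor, mask: torch.Tensor) -> Categorical:
        # Invalid action masking: infeasible actions get -inf logits,
        # so they receive zero probability and contribute no gradient.
        return Categorical(logits=logits.masked_fill(~mask, float("-inf")))

    def forward(self, state, position_mask, api_mask):
        # 1) choose where to insert (only positions reachable via IAT hooking)
        pos_dist = self._masked_dist(self.position_logits(state), position_mask)
        position = pos_dist.sample()
        # 2) choose which API to insert, conditioned on the chosen position
        cond = state + self.position_embed(position)
        api_dist = self._masked_dist(self.api_logits(cond), api_mask)
        api = api_dist.sample()
        log_prob = pos_dist.log_prob(position) + api_dist.log_prob(api)
        return position, api, log_prob

# Illustrative usage with a single state embedding; mask contents are assumptions.
head = AutoRegressivePolicyHead(state_dim=128, num_positions=64, num_apis=32)
state = torch.randn(1, 128)
position_mask = torch.ones(1, 64, dtype=torch.bool)  # hookable insertion slots
api_mask = torch.ones(1, 32, dtype=torch.bool)       # functionality-preserving APIs
position, api, log_prob = head(state, position_mask, api_mask)
```

The auto-regressive factorization is what lets a single policy capture the dependence between the two decisions, rather than optimizing inserted APIs and insertion positions separately as in prior work described in the abstract.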
DOI: 10.1109/TDSC.2025.3566708