Generation-based parallel particle swarm optimization for adversarial text attacks


Bibliographic Details
Published in: Information Sciences, Vol. 644, Article 119237
Authors: Yang, Xinghao; Qi, Yupeng; Chen, Honglong; Liu, Baodi; Liu, Weifeng
Format: Journal Article
Language: English
Published: Elsevier Inc., 01.10.2023
ISSN: 0020-0255
Online access: Full text
Description
Abstract: Text adversarial attack is an effective strategy for investigating the vulnerability of Natural Language Processing (NLP) models. Most text attack studies focus on word-level attacks with static or dynamic optimization algorithms. However, these methods struggle to balance (1) attack performance (i.e., attack success rate and word substitution rate) and (2) attack efficiency. Generally, static optimization is fast but suffers from low attack performance, while dynamic adversaries improve attack quality but are time-consuming. To address these challenges, a Generation-based Parallel Particle Swarm Optimization (GP2SO) is proposed for adversarial text attacks. Specifically, GP2SO employs an adaptive strategy to determine word modification priority, which produces high attack performance owing to its aggressive objective function. To achieve time efficiency, the PSO is parallelized over multiple pipelines in a generation-overlapping manner. Extensive experiments on four public text recognition datasets, attacking four deep models, evaluate the effectiveness of GP2SO. Experimental results show that GP2SO improves time efficiency by 272% on average with only a 0.3% reduction in success rate compared to PSO. In addition, GP2SO also outperforms the baselines in adversarial training and transferability. The code is provided to ensure reproducibility: https://github.com/OutdoorManofML/GPPSO.
DOI: 10.1016/j.ins.2023.119237
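
To illustrate the idea summarized in the abstract, running several PSO pipelines concurrently and letting them exchange their best candidates between generations instead of synchronizing, the sketch below shows one possible organization in Python. It is a hypothetical, minimal illustration and not the authors' method: the synonym table, the toy fitness function, and all parameter values are invented for demonstration, and the actual implementation is available at the GitHub link above.

```python
# Illustrative sketch only (hypothetical names, toy fitness, made-up parameters);
# the authors' actual implementation is at https://github.com/OutdoorManofML/GPPSO.
import random
from concurrent.futures import ThreadPoolExecutor

# Hypothetical synonym candidates for each position of a five-word input sentence.
SYNONYMS = [["good", "great", "fine"], ["movie", "film"], ["was", "is"],
            ["really", "very", "quite"], ["boring", "dull", "tedious"]]

def fitness(candidate):
    """Toy stand-in for the victim model's confidence drop (hypothetical)."""
    random.seed(hash(tuple(candidate)) % (2 ** 32))
    return random.random()

def pso_pipeline(pipeline_id, shared_best, generations=5, swarm_size=8):
    """One PSO pipeline; each particle is a vector of indices into SYNONYMS."""
    swarm = [[random.randrange(len(s)) for s in SYNONYMS] for _ in range(swarm_size)]
    local_best = list(max(swarm, key=fitness))
    for _ in range(generations):
        # Generation overlap: read whatever best candidate any pipeline has published
        # so far, without waiting for the other pipelines to finish their generations.
        global_best = shared_best.get("particle", local_best)
        for particle in swarm:
            for i in range(len(particle)):
                r = random.random()
                if r < 0.4:                      # move toward this pipeline's best
                    particle[i] = local_best[i]
                elif r < 0.7:                    # move toward the shared global best
                    particle[i] = global_best[i]
                else:                            # random exploration
                    particle[i] = random.randrange(len(SYNONYMS[i]))
        candidate = max(swarm, key=fitness)
        if fitness(candidate) > fitness(local_best):
            local_best = list(candidate)
        # Publish this pipeline's best so other pipelines can reuse it immediately.
        current = shared_best.get("particle")
        if current is None or fitness(local_best) > fitness(current):
            shared_best["particle"] = list(local_best)
    words = [SYNONYMS[i][j] for i, j in enumerate(local_best)]
    return pipeline_id, " ".join(words), fitness(local_best)

if __name__ == "__main__":
    shared_best = {}  # shared across threads; single-key updates are safe under CPython's GIL
    with ThreadPoolExecutor(max_workers=4) as pool:
        results = list(pool.map(lambda k: pso_pipeline(k, shared_best), range(4)))
    for pid, sentence, score in results:
        print(f"pipeline {pid}: {sentence} (fitness {score:.3f})")
```

In this sketch the pipelines never block on each other: each one simply reuses the most recent shared best particle at the start of its own generation, which is one way to read the "generation-overlapping" parallelism described in the abstract; the adaptive word-modification priority and the aggressive objective function of GP2SO are not modeled here.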