Exposing the Achilles’ heel of textual hate speech classifiers using indistinguishable adversarial examples
| Published in: | Expert systems with applications Vol. 254, p. 124278 |
|---|---|
| Main Authors: | , |
| Format: | Journal Article |
| Language: | English |
| Published: | Elsevier Ltd, 15.11.2024 |
| Subjects: | |
| ISSN: | 0957-4174 |
| Online Access: | Get full text |
| Summary: | The accessibility of online hate speech has increased significantly, making it crucial for social-media companies to prioritize efforts to curb its spread. Although deep learning models demonstrate vulnerability to adversarial attacks, whether models fine-tuned for hate speech detection exhibit similar susceptibility remains underexplored. Textual adversarial attacks involve making subtle alterations to the original samples. These alterations are designed so that the adversarial examples produced can effectively deceive the target model, even when correctly classified by human observers. Though many approaches have been proposed to conduct word-level adversarial attacks on textual data, they face the obstacle of preserving the semantic coherence of texts during the generation of adversarial counterparts. Moreover, the adversarial examples produced are often easily distinguishable by human observers. This work presents a novel methodology that uses visually confusable glyphs and invisible characters to generate semantically and visually similar adversarial examples in a black-box setting. In the context of the hate speech detection task, our attack was effectively applied to several state-of-the-art deep learning models fine-tuned on two benchmark datasets. The major contributions of this study are: (1) demonstrating the vulnerability of deep learning models fine-tuned for hate speech detection; (2) a novel attack framework based on a simple yet potent modification strategy; (3) superior outcomes in terms of accuracy degradation, attack success rate, average perturbation, semantic similarity, and perplexity when compared to existing baselines; (4) strict adherence to prescribed linguistic constraints while formulating adversarial samples; and (5) preservation of the ground-truth label while perturbing the original input using imperceptible adversarial examples. |
|---|---|
| DOI: | 10.1016/j.eswa.2024.124278 |
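The record contains no code, but the modification strategy named in the abstract (visually confusable glyphs plus invisible characters) can be illustrated with a minimal sketch. Everything below is an assumption for illustration only: the `HOMOGLYPHS` table, the `perturb` function, and its `max_swaps` parameter are hypothetical and do not reproduce the paper's actual black-box attack framework, which additionally queries the target classifier to guide the perturbation.

```python
# Illustrative sketch of homoglyph / invisible-character perturbation.
# The mapping and function below are assumptions, not the paper's method.

# Hypothetical mapping from Latin letters to visually confusable Unicode
# glyphs (Cyrillic look-alikes); a real attack would use a larger table.
HOMOGLYPHS = {
    "a": "\u0430",  # Cyrillic small a
    "e": "\u0435",  # Cyrillic small ie
    "o": "\u043e",  # Cyrillic small o
    "i": "\u0456",  # Cyrillic-Ukrainian i
}
ZERO_WIDTH_SPACE = "\u200b"  # an invisible character


def perturb(text: str, max_swaps: int = 3) -> str:
    """Swap a few characters for confusable glyphs and insert one invisible
    character, leaving the string visually unchanged to a human reader."""
    out, swaps = [], 0
    for ch in text:
        if swaps < max_swaps and ch in HOMOGLYPHS:
            out.append(HOMOGLYPHS[ch])
            swaps += 1
        else:
            out.append(ch)
    # Insert a zero-width character mid-string as an "invisible" edit.
    mid = len(out) // 2
    return "".join(out[:mid]) + ZERO_WIDTH_SPACE + "".join(out[mid:])


if __name__ == "__main__":
    original = "example sentence"
    adversarial = perturb(original)
    print(original)
    print(adversarial)                 # renders almost identically on screen
    print(original == adversarial)     # False: the byte sequence differs
```

In a black-box setting, such byte-level changes can shift the tokenization seen by a fine-tuned classifier while a human reader still perceives the original text, which is the intuition behind the imperceptibility claims in the abstract.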