A survey of robust adversarial training in pattern recognition: Fundamental, theory, and methodologies
| Published in: | Pattern Recognition Vol. 131; p. 108889 |
|---|---|
| Main Authors: | Qian, Zhuang; Huang, Kaizhu; Wang, Qiu-Feng; Zhang, Xu-Yao |
| Format: | Journal Article |
| Language: | English |
| Published: | Elsevier Ltd, 01.11.2022 |
| ISSN: | 0031-3203, 1873-5142 |
| Abstract | • We present a timely and comprehensive survey on robust adversarial training. • This survey offers the fundamentals of adversarial training, a unified theory that can be used to interpret various methods, and a comprehensive summary of different methodologies. • This survey also addresses three important research focuses in adversarial training: interpretability, robust generalization, and robustness evaluation, which can stimulate future research and inspire new outlooks.
Deep neural networks have achieved remarkable success in machine learning, computer vision, and pattern recognition over the last few decades. Recent studies, however, show that neural networks (both shallow and deep) can be easily fooled by certain imperceptibly perturbed input samples called adversarial examples. This security vulnerability has spurred a large body of research in recent years, because the vast range of neural-network applications means such attacks could pose real-world threats. To address robustness against adversarial examples, particularly in pattern recognition, robust adversarial training has become one mainstream approach. Various ideas, methods, and applications have boomed in the field. Yet a deep understanding of adversarial training, including its characteristics, interpretations, theories, and the connections among different models, has remained elusive. This paper presents a comprehensive survey that offers a systematic and structured investigation of robust adversarial training in pattern recognition. We start with the fundamentals, including the definition, notations, and properties of adversarial examples. We then introduce a general theoretical framework with gradient regularization for defending against adversarial examples, namely robust adversarial training, along with visualizations and interpretations of why adversarial training can lead to model robustness. Connections are also established between adversarial training and other traditional learning theories. After that, we summarize, review, and discuss various methodologies and defense/training algorithms in a structured way. Finally, we present analysis, outlook, and remarks on adversarial training. |
|---|---|
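The abstract above describes adversarial training only at a high level. As an illustrative aid that is not part of this record or the surveyed paper, the following PyTorch-style sketch shows the canonical min-max formulation such surveys build on: an inner projected gradient (PGD) loop approximately solves max over ||δ||∞ ≤ ε of the loss L(f_θ(x + δ), y), and the outer loop updates θ on the resulting adversarial examples. All names here (pgd_attack, adversarial_training_epoch, eps, alpha, steps, and the model/loader objects) are hypothetical placeholders, and inputs are assumed to be scaled to [0, 1].

```python
# Illustrative sketch only (assumed, not taken from the surveyed paper):
# canonical PGD-based adversarial training, i.e. approximately solving
#   min_theta  E[ max_{||delta||_inf <= eps}  L(f_theta(x + delta), y) ].
import torch
import torch.nn.functional as F


def pgd_attack(model, x, y, eps=8 / 255, alpha=2 / 255, steps=10):
    """Inner maximization: L_inf-bounded projected gradient ascent on the loss."""
    delta = torch.empty_like(x).uniform_(-eps, eps)   # random start inside the eps-ball
    for _ in range(steps):
        delta.requires_grad_(True)
        loss = F.cross_entropy(model(x + delta), y)
        grad = torch.autograd.grad(loss, delta)[0]
        delta = (delta + alpha * grad.sign()).clamp(-eps, eps).detach()
        delta = (x + delta).clamp(0.0, 1.0) - x       # keep x + delta a valid input in [0, 1]
    return (x + delta).detach()


def adversarial_training_epoch(model, loader, optimizer, device="cpu"):
    """Outer minimization: one epoch of training on adversarial examples."""
    model.train()
    for x, y in loader:
        x, y = x.to(device), y.to(device)
        x_adv = pgd_attack(model, x, y)               # craft perturbed inputs for this batch
        optimizer.zero_grad()
        loss = F.cross_entropy(model(x_adv), y)
        loss.backward()
        optimizer.step()
```

To first order, maximizing the loss within the ε-ball is roughly equivalent to adding ε times a (dual) norm of the input gradient to the clean loss, which is the gradient-regularization view of adversarial training that the abstract refers to.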
| ArticleNumber | 108889 |
| Author | Qian, Zhuang; Zhang, Xu-Yao; Wang, Qiu-Feng; Huang, Kaizhu |
| Author_xml | – sequence: 1 givenname: Zhuang surname: Qian fullname: Qian, Zhuang organization: School of Advanced Technology of Xi’an Jiaotong-Liverpool University, China – sequence: 2 givenname: Kaizhu orcidid: 0000-0003-4644-3037 surname: Huang fullname: Huang, Kaizhu email: kaizhu.huang@dukekunshan.edu.cn organization: Data Science Research Center, Duke Kunshan University, China – sequence: 3 givenname: Qiu-Feng surname: Wang fullname: Wang, Qiu-Feng email: qiufeng.wang@xjtlu.edu.cn organization: School of Advanced Technology of Xi’an Jiaotong-Liverpool University, China – sequence: 4 givenname: Xu-Yao surname: Zhang fullname: Zhang, Xu-Yao organization: Institute of Automation, Chinese Academy of Sciences; School of Artificial Intelligence, University of Chinese Academy of Sciences, China |
| ContentType | Journal Article |
| Copyright | 2022 Elsevier Ltd |
| DOI | 10.1016/j.patcog.2022.108889 |
| DatabaseName | CrossRef |
| DatabaseTitle | CrossRef |
| Discipline | Computer Science |
| EISSN | 1873-5142 |
| ExternalDocumentID | 10_1016_j_patcog_2022_108889 S0031320322003703 |
| ISICitedReferencesCount | 59 |
| ISSN | 0031-3203 |
| IsPeerReviewed | true |
| IsScholarly | true |
| Keywords | Adversarial training; Robust learning; Adversarial examples |
| Language | English |
| ORCID | 0000-0003-4644-3037 |
| PublicationCentury | 2000 |
| PublicationDate | November 2022 2022-11-00 |
| PublicationDateYYYYMMDD | 2022-11-01 |
| PublicationDecade | 2020 |
| PublicationTitle | Pattern recognition |
| PublicationYear | 2022 |
| Publisher | Elsevier Ltd |
| StartPage | 108889 |
| SubjectTerms | Adversarial examples; Adversarial training; Robust learning |
| Title | A survey of robust adversarial training in pattern recognition: Fundamental, theory, and methodologies |
| URI | https://dx.doi.org/10.1016/j.patcog.2022.108889 |
| Volume | 131 |