Wild patterns: Ten years after the rise of adversarial machine learning

Bibliographic Details
Published in: Pattern Recognition, Vol. 84, pp. 317–331
Main Authors: Biggio, Battista; Roli, Fabio
Format: Journal Article
Language: English
Published: Elsevier Ltd, 01.12.2018
ISSN: 0031-3203, 1873-5142
Abstract
•We provide a detailed review of the evolution of adversarial machine learning over the last ten years.
•We start from pioneering work up to more recent work aimed at understanding the security properties of deep learning algorithms.
•We review work in the context of different applications.
•We highlight common misconceptions related to the evaluation of the security of machine learning and pattern recognition algorithms.
•We discuss the main limitations of current work, along with the corresponding future research paths towards designing more secure learning algorithms.

Learning-based pattern classifiers, including deep networks, have shown impressive performance in several application domains, ranging from computer vision to cybersecurity. However, it has also been shown that adversarial input perturbations, carefully crafted either at training or at test time, can easily subvert their predictions. The vulnerability of machine learning to such wild patterns (also referred to as adversarial examples), along with the design of suitable countermeasures, has been investigated in the research field of adversarial machine learning. In this work, we provide a thorough overview of the evolution of this research area over the last ten years and beyond, starting from pioneering, earlier work on the security of non-deep learning algorithms up to more recent work aimed at understanding the security properties of deep learning algorithms, in the context of computer vision and cybersecurity tasks. We report interesting connections between these apparently different lines of work, highlighting common misconceptions related to the security evaluation of machine-learning algorithms. We review the main threat models and attacks defined to this end, and discuss the main limitations of current work, along with the corresponding future challenges towards the design of more secure learning algorithms.
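As a minimal illustration of the test-time (evasion) setting the abstract describes, the sketch below perturbs an input against a fixed linear classifier by stepping each feature against the sign of its weight, in the spirit of gradient-sign attacks. The classifier weights, the input point, and the perturbation budget are illustrative assumptions, not values from the paper.

```python
import math

def sigmoid(z):
    """Logistic link: maps the linear score to a class-1 probability."""
    return 1.0 / (1.0 + math.exp(-z))

# A fixed, hand-chosen linear classifier: predicts class 1 when w.x + b > 0.
w = [2.0, -1.0]
b = 0.0

def score(x):
    return sigmoid(sum(wi * xi for wi, xi in zip(w, x)) + b)

x = [0.4, 0.2]  # a clean point, classified as class 1 (score > 0.5)
eps = 0.5       # illustrative perturbation budget (L-infinity)

# For a linear model, the gradient of the class-1 score w.r.t. the input
# is just w, so moving each feature against sign(w) lowers the score fastest.
x_adv = [xi - eps * (1 if wi > 0 else -1) for xi, wi in zip(x, w)]

print(score(x) > 0.5)      # True  -- the clean point is class 1
print(score(x_adv) > 0.5)  # False -- the perturbed point flips to class 0
```

A perturbation of only 0.5 per feature flips the prediction, which is the basic phenomenon (here on a toy linear model rather than a deep network) that the survey's threat models formalize.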
Author Biggio, Battista
Roli, Fabio
Author_xml – sequence: 1
  givenname: Battista
  orcidid: 0000-0001-7752-509X
  surname: Biggio
  fullname: Biggio, Battista
  email: battista.biggio@diee.unica.it
  organization: Department of Electrical and Electronic Engineering, University of Cagliari, Italy
– sequence: 2
  givenname: Fabio
  surname: Roli
  fullname: Roli, Fabio
  email: roli@diee.unica.it
  organization: Department of Electrical and Electronic Engineering, University of Cagliari, Italy
ContentType Journal Article
Copyright 2018 Elsevier Ltd
Copyright_xml – notice: 2018 Elsevier Ltd
DOI 10.1016/j.patcog.2018.07.023
Discipline Computer Science
EISSN 1873-5142
EndPage 331
ISICitedReferencesCount 784
ISSN 0031-3203
IngestDate Tue Nov 18 22:14:16 EST 2025
Sat Nov 29 07:26:44 EST 2025
Fri Feb 23 02:48:19 EST 2024
IsDoiOpenAccess false
IsOpenAccess true
IsPeerReviewed true
IsScholarly true
Keywords Deep learning
Evasion attacks
Adversarial examples
Adversarial machine learning
Secure learning
Poisoning attacks
Language English
LinkModel OpenURL
ORCID 0000-0001-7752-509X
OpenAccessLink http://hdl.handle.net/11584/249332
PageCount 15
PublicationCentury 2000
PublicationDate December 2018
PublicationDateYYYYMMDD 2018-12-01
PublicationDate_xml – month: 12
  year: 2018
  text: December 2018
PublicationDecade 2010
PublicationTitle Pattern Recognition
PublicationYear 2018
Publisher Elsevier Ltd
Publisher_xml – name: Elsevier Ltd
References Biggio, Corona, He, Chan, Giacinto, Yeung, Roli (bib0105) 2015; 9132
noz González, Biggio, Demontis, Paudice, Wongrassamee, Lupu, Roli (bib0033) 2018
Kolcz, Teo (bib0044) 2009
Kloft, Laskov (bib0027) 2010
Biggio, Corona, Fumera, Giacinto, Roli (bib0111) 2011; Vol. 6713
Wooldridge (bib0096) 2012; 27
Lipton (bib0125) 2016
Mei, Zhu (bib0031) 2015
Barth, Rubinstein, Sundararajan, Mitchell, Song, Bartlett (bib0091) 2012; 9
Bulò, Biggio, Pillai, Pelillo, Roli (bib0047) 2017; 28
Laskov, Lippmann (bib0050) 2010; 81
Xu, Cao, Hu, Principe (bib0120) 2017; 63
Kuncheva (bib0092) 2008
Lu, Issaranon, Forsyth (bib0009) 2017
2017.
Wild, Radu, Chen, Ferryman (bib0109) 2016; 50
Han, Kheir, Balzarotti (bib0055) 2016
Zantedeschi, Nicolae, Rawat (bib0112) 2017
Nelson, Rubinstein, Huang, Joseph, Lee, Rao, Tygar (bib0070) 2012; 13
Dang, Huang, Chang (bib0069) 2017
Biggio, Fumera, Roli (bib0082) 2010; 1
Kantchelian, Tygar, Joseph (bib0079) 2016; 48
Joseph, Nelson, Rubinstein, Tygar (bib0053) 2018
Biggio, Pillai, Bulò, Ariu, Pelillo, Roli (bib0062) 2013
.
Huang, Joseph, Nelson, Rubinstein, Tygar (bib0061) 2011
Koh, Liang (bib0032) 2017
T. Gu, B. Dolan-Gavitt, S. Garg, BadNets: identifying vulnerabilities in the machine learning model supply chain, in: Proceedings of the NIPS Workshop on Mach. Learn. and Comp. Sec.
X. Chen, C. Liu, B. Li, K. Lu, D. Song, Targeted backdoor attacks on deep learning systems using data poisoning, ArXiv e-prints
Dekel, Shamir, Xiao (bib0037) 2010; 81
Biggio, Fumera, Russu, Didaci, Roli (bib0043) 2015; 32
Zhang, Chan, Biggio, Yeung, Roli (bib0065) 2016; 46
Dong, Liao, Pang, Hu, Zhu (bib0087) 2018
Szegedy, Zaremba, Sutskever, Bruna, Erhan, Goodfellow, Fergus (bib0002) 2014
Joseph, Laskov, Roli, Tygar, Nelson (bib0051) 2013; 3
Qi, Tian, Shi (bib0098) 2013; 46
Fumera, Pillai, Roli (bib0059) 2006; 7
Carlini, Wagner (bib0085) 2017
Liu, Chawla (bib0094) 2010; 81
Barreno, Nelson, Joseph, Tygar (bib0040) 2010; 81
Großhans, Sawade, Brückner, Scheffer (bib0095) 2013; 28
Moosavi-Dezfooli, Fawzi, Frossard (bib0005) 2016
Christmann, Steinwart (bib0121) 2004; 5
Bootkrajang, Kaban (bib0122) 2014; 47
Papernot, McDaniel, Wu, Jha, Swami (bib0006) 2016
Cretu, Stavrou, Locasto, Stolfo, Keromytis (bib0116) 2008
Corona, Biggio, Contini, Piras, Corda, Mereu, Mureddu, Ariu, Roli (bib0056) 2017; 10492
Wong, Kolter (bib0100) 2018; 80
Goodfellow, Shlens, Szegedy (bib0003) 2015
Lowd, Meek (bib0020) 2005
Chen, Zhang, Sharma, Yi, Hsieh (bib0068) 2017
Sharif, Bhagavatula, Bauer, Reiter (bib0017) 2016
ArXiv e-prints.
Athalye, Carlini, Wagner (bib0086) 2018; 80
Biggio, Corona, Maiorca, Nelson, Šrndić, Laskov, Giacinto, Roli (bib0038) 2013; 8190
Dietterich (bib0124) 2017; 38
Biggio, Fumera, Roli (bib0042) 2014; 28
Papernot, McDaniel, Jha, Fredrikson, Celik, Swami (bib0014) 2016
Athalye, Engstrom, Ilyas, Kwok (bib0080) 2018
Tramèr, Zhang, Juels, Reiter, Ristenpart (bib0066) 2016
Biggio, Nelson, Laskov (bib0026) 2012
Huang, Kwiatkowska, Wang, Wu (bib0088) 2017; 10426
Torkamani, Lowd (bib0099) 2014; 32
Lowd, Meek (bib0021) 2005
Šrndić, Laskov (bib0083) 2013
Lyu, Huang, Liang (bib0101) 2015; 00
Biggio, Fumera, Pillai, Roli (bib0057) 2011; 32
Teo, Globerson, Roweis, Smola (bib0036) 2008
Biggio, Bulò, Pillai, Mura, Mequanint, Pelillo, Roli (bib0064) 2014; 8621
Li, Li (bib0010) 2017
Jagielski, Oprea, Biggio, Liu, Nita-Rotaru, Li (bib0119) 2018
Xu, Caramanis, Mannor (bib0084) 2009; 10
Melis, Demontis, Biggio, Brown, Fumera, Roli (bib0007) 2017
P. Laskov, R. Lippmann (Eds.), NIPS Workshop on Machine Learning in Adversarial Environments for Computer Security, 2007.
Biggio, Corona, Nelson, Rubinstein, Maiorca, Fumera, Giacinto, Roli (bib0030) 2014
Šrndic, Laskov (bib0039) 2014
Grosse, Papernot, Manoharan, Backes, McDaniel (bib0011) 2017; Vol. 10493
Biggio, Fumera, Roli (bib0041) 2014; 26
D. Maiorca, B. Biggio, M.E. Chiappe, G. Giacinto, Adversarial detection of flash malware: limitations and open issues, CoRR ArXiv
Globerson, Roweis (bib0035) 2006; 148
Biggio, Fumera, Roli (bib0045) 2008; 5342
Galbally, McCool, Fierrez, Marcel, Ortega-Garcia (bib0075) 2010; 43
Steinhardt, Koh, Liang (bib0118) 2017
Attar, Rad, Atani (bib0058) 2013; 40
Wittel, Wu (bib0034) 2004
Xu, Qi, Evans (bib0067) 2016
Zheng, Song, Leung, Goodfellow (bib0113) 2016
Newsome, Karp, Song (bib0090) 2006
Adler (bib0074) 2005; 3546
Dougherty, Hua, Xiong, Chen (bib0093) 2005; 38
Kloft, Laskov (bib0028) 2012
Rubinstein, Bartlett, Huang, Taft (bib0123) 2012; 4
Demontis, Russu, Biggio, Fumera, Roli (bib0077) 2016; 10029
Biggio, Rieck, Ariu, Wressnegger, Corona, Giacinto, Roli (bib0063) 2014
Biggio, Fumera, Marcialis, Roli (bib0110) 2017; 39
Brückner, Kanzow, Scheffer (bib0046) 2012; 13
Jordaney, Sharad, Dash, Wang, Papini, Nouretdinov, Cavallaro (bib0012) 2017
Carlini, Wagner (bib0016) 2017
Meng, Chen (bib0008) 2017
Xie, Wang, Zhang, Zhou, Xie, Yuille (bib0018) 2017
Matsumoto, Matsumoto, Yamada, Hoshino (bib0022) 2002; 26
Papernot, McDaniel, Goodfellow, Jha, Celik, Swami (bib0015) 2017
Fogla, Sharif, Perdisci, Kolesnikov, Lee (bib0081) 2006
Moreno-Torres, Raeder, Alaiz-Rodríguez, Chawla, Herrera (bib0107) 2012; 45
Pillai, Fumera, Roli (bib0108) 2013; 46
Nelson, Barreno, Chi, Joseph, Rubinstein, Saini, Sutton, Tygar, Xia (bib0024) 2008
Madry, Makelov, Schmidt, Tsipras, Vladu (bib0104) 2018
Xiao, Biggio, Brown, Fumera, Eckert, Roli (bib0029) 2015; 37
Bendale, Boult (bib0106) 2016
Thomas, Rusu, Govindaraju (bib0060) 2009; 42
Sokolić, Giryes, Sapiro, Rodrigues (bib0102) 2017; 65
Dalvi, Domingos, Mausam, Sanghai, Verma (bib0019) 2004
Liu, Li, Vorobeychik, Oprea (bib0117) 2017
A. Demontis, M. Melis, B. Biggio, D. Maiorca, D. Arp, K. Rieck, I. Corona, G. Giacinto, F. Roli, Yes, machine learning can be more secure! a case study on android malware detection, IEEE Trans. Dep. Secure Comp. doi
Martinez-Diaz, Fierrez, Galbally, Ortega-Garcia (bib0076) 2011; 32
Rubinstein, Nelson, Huang, Joseph, Lau, Rao, Taft, Tygar (bib0025) 2009
Corona, Giacinto, Roli (bib0054) 2013; 239
C.J. Simon-Gabriel, Y. Ollivier, B. Schölkopf, L. Bottou, D. Lopez-Paz, Adversarial vulnerability of neural networks increases with input dimension, 2018.
Nguyen, Yosinski, Clune (bib0004) 2015
B.M. Thuraisingham, B. Biggio, D.M. Freeman, B. Miller, A. Sinha (Eds.), AISec ’17: Proceedings of the 10th ACM Workshop on Artificial Intelligence and Security, New York, NY, USA, ACM, 2017.
McDaniel, Papernot, Celik (bib0013) 2016; 14
Nelson, Biggio, Laskov (bib0115) 2011
Pei, Cao, Yang, Jana (bib0089) 2017
Barreno, Nelson, Sears, Joseph, Tygar (bib0023) 2006
Russu, Demontis, Biggio, Fumera, Roli (bib0078) 2016
Cybenko, Landwehr (bib0097) 2012; 10
Fredrikson, Jha, Ristenpart (bib0073) 2015
Gu, Wang, Kuen, Ma, Shahroudy, Shuai, Liu, Wang, Wang, Cai, Chen (bib0001) 2018; 77
References_xml
– reference: A. Demontis, M. Melis, B. Biggio, D. Maiorca, D. Arp, K. Rieck, I. Corona, G. Giacinto, F. Roli, Yes, machine learning can be more secure! a case study on android malware detection, IEEE Trans. Dep. Secure Comp. doi:
– start-page: 4480
  year: 2016
  end-page: 4488
  ident: bib0113
  article-title: Improving the robustness of deep neural networks via stability training
  publication-title: Proceedings of the IEEE CVPR
– reference: P. Laskov, R. Lippmann (Eds.), NIPS Workshop on Machine Learning in Adversarial Environments for Computer Security, 2007.
– reference: T. Gu, B. Dolan-Gavitt, S. Garg, BadNets: identifying vulnerabilities in the machine learning model supply chain, in: Proceedings of the NIPS Workshop on Mach. Learn. and Comp. Sec.
– year: 2017
  ident: bib0008
  article-title: MagNet: a two-pronged defense against adversarial examples
  publication-title: Proceedings of the Twenty Fourth ACM (CCS)
– volume: 00
  start-page: 301
  year: 2015
  end-page: 309
  ident: bib0101
  article-title: A unified gradient regularization family for adversarial examples
  publication-title: Proceedings of the ICDM
– volume: 32
  start-page: 1643
  year: 2011
  end-page: 1651
  ident: bib0076
  article-title: An evaluation of indirect attacks and countermeasures in fingerprint verification systems
  publication-title: Pattern Recognit Lett.
– year: 2016
  ident: bib0125
  article-title: The mythos of model interpretability
  publication-title: Proceedings of the ICML Workshop on Human Interpretability of Machine Learning
– start-page: 506
  year: 2017
  end-page: 519
  ident: bib0015
  article-title: Practical black-box attacks against machine learning
  publication-title: Proceedings of the ASIA CCS
– volume: 26
  start-page: 984
  year: 2014
  end-page: 996
  ident: bib0041
  article-title: Security evaluation of pattern classifiers under attack
  publication-title: IEEE Trans. Knowl. Data Eng.
– volume: 42
  start-page: 3365
  year: 2009
  end-page: 3373
  ident: bib0060
  article-title: Synthetic handwritten captchas
  publication-title: Pattern Recognit.
– year: 2016
  ident: bib0067
  article-title: Automatically evading classifiers
  publication-title: Proceedings of the NDSS
– volume: 28
  start-page: 55
  year: 2013
  end-page: 63
  ident: bib0095
  article-title: Bayesian games for adversarial regression problems
  publication-title: Proceedings of the Thirtieth ICML, JMLR W&CP
– start-page: 119
  year: 2017
  end-page: 133
  ident: bib0069
  article-title: Evading classifiers by morphing in the dark
  publication-title: Proceedings of the ACM CCS
– volume: 38
  start-page: 1520
  year: 2005
  end-page: 1532
  ident: bib0093
  article-title: Optimal robust classifiers
  publication-title: Pattern Recognit.
– volume: 81
  start-page: 121
  year: 2010
  end-page: 148
  ident: bib0040
  article-title: The security of machine learning
  publication-title: Mach. Learn.
– reference: D. Maiorca, B. Biggio, M.E. Chiappe, G. Giacinto, Adversarial detection of flash malware: limitations and open issues, CoRR ArXiv:
– volume: 4
  start-page: 65
  year: 2012
  end-page: 100
  ident: bib0123
  article-title: Learning in a large function space: privacy-preserving mechanisms for SVM learning
  publication-title: J. Priv. Conf.
– volume: 14
  start-page: 68
  year: 2016
  end-page: 72
  ident: bib0013
  article-title: Machine learning in adversarial settings
  publication-title: IEEE Secur. Priv.
– volume: 45
  start-page: 521
  year: 2012
  end-page: 530
  ident: bib0107
  article-title: A unifying view on dataset shift in classification
  publication-title: Pattern Recognit.
– start-page: 105
  year: 2014
  end-page: 153
  ident: bib0030
  article-title: Security evaluation of support vector machines in adversarial environments
  publication-title: Support Vector Machines Applications
– start-page: 197
  year: 2014
  end-page: 211
  ident: bib0039
  article-title: Practical evasion of a learning-based classifier: a case study
  publication-title: Proceedings of the IEEE SP
– volume: 9132
  start-page: 168
  year: 2015
  end-page: 180
  ident: bib0105
  article-title: One-and-a-half-class multiple classifier systems for secure learning against evasion attacks at test time
  publication-title: Proceedings of the MCS
– start-page: 1402
  year: 2016
  end-page: 1413
  ident: bib0055
  article-title: PhishEye: live monitoring of sandboxed phishing kits
  publication-title: Proceedings of the ACM CCS
– year: 2017
  ident: bib0009
  article-title: SafetyNet: detecting and rejecting adversarial examples robustly
  publication-title: Proceedings of the IEEE ICCV
– year: 2005
  ident: bib0021
  article-title: Good word attacks on statistical spam filters
  publication-title: Proceedings of the Second CEAS
– volume: 80
  start-page: 5283
  year: 2018
  end-page: 5292
  ident: bib0100
  article-title: Provable defenses against adversarial examples via the convex outer adversarial polytope
  publication-title: Proceedings of the ICML
– start-page: 1563
  year: 2016
  end-page: 1572
  ident: bib0106
  article-title: Towards open set deep networks
  publication-title: Proceedings of the IEEE CVPR
– start-page: 1807
  year: 2012
  end-page: 1814
  ident: bib0026
  article-title: Poisoning attacks against support vector machines
  publication-title: Proceedings of the Twenty Ninth ICML
– volume: Vol. 6713
  start-page: 350
  year: 2011
  end-page: 359
  ident: bib0111
  article-title: Bagging classifiers for fighting poisoning attacks in adversarial classification tasks
  publication-title: Proceedings of the MCS
– volume: 8190
  start-page: 387
  year: 2013
  end-page: 402
  ident: bib0038
  article-title: Evasion attacks against machine learning at test time
  publication-title: Proceedings of the ECML PKDD, Part III
– volume: 46
  start-page: 766
  year: 2016
  end-page: 777
  ident: bib0065
  article-title: Adversarial feature selection against evasion attacks
  publication-title: IEEE Trans. Cybern.
– start-page: 27
  year: 2018
  end-page: 38
  ident: bib0033
  article-title: Towards poisoning of deep learning algorithms with back-gradient optimization
  publication-title: Proceedings of the AISec
– start-page: 15
  year: 2017
  end-page: 26
  ident: bib0068
  article-title: ZOO: zeroth order optimization based black-box attacks to deep neural networks without training substitute models
  publication-title: AISec
– start-page: 1
  year: 2017
  end-page: 18
  ident: bib0089
  article-title: DeepXplore: automated whitebox testing of deep learning systems
  publication-title: Proceedings of the Twenty Sixth SOSP
– start-page: 91
  year: 2017
  end-page: 102
  ident: bib0117
  article-title: Robust linear regression against training data poisoning
  publication-title: Proceedings of the AISec
– year: 2018
  ident: bib0119
  article-title: Manipulating machine learning: poisoning attacks and countermeasures for regression learning
  publication-title: Proceedings of the Thirty Ninth IEEE Symposium Security and Privacy
– volume: 5342
  start-page: 500
  year: 2008
  end-page: 509
  ident: bib0045
  article-title: Adversarial pattern classification using multiple classifiers and randomisation
  publication-title: Proceedings of the SSPR
– volume: 81
  start-page: 115
  year: 2010
  end-page: 119
  ident: bib0050
  article-title: Machine learning in adversarial environments
  publication-title: Mach. Learn.
– year: 2015
  ident: bib0003
  article-title: Explaining and harnessing adversarial examples
  publication-title: Proceedings of the ICLR
– start-page: 582
  year: 2016
  end-page: 597
  ident: bib0006
  article-title: Distillation as a defense to adversarial perturbations against deep neural networks
  publication-title: Proceedings of the IEEE (SP)
– year: 2004
  ident: bib0034
  article-title: On attacking statistical spam filters
  publication-title: Proceedings of the First CEAS
– volume: 3546
  start-page: 1100
  year: 2005
  end-page: 1109
  ident: bib0074
  article-title: Vulnerabilities in biometric encryption systems
  publication-title: Proceedings of the Fifth ICAVBPA
– start-page: 5
  year: 2008
  end-page: 10
  ident: bib0092
  article-title: Classifier ensembles for detecting concept change in streaming data: overview and perspectives
  publication-title: Proceedings of the SUEMA
– volume: 3
  start-page: 1
  year: 2013
  end-page: 30
  ident: bib0051
  article-title: Machine learning methods for computer security (dagstuhl perspectives workshop 12371)
  publication-title: Dagstuhl Manif.
– year: 2017
  ident: bib0032
  article-title: Understanding black-box predictions via influence functions
  publication-title: Proceedings of the ICML
– volume: 81
  start-page: 69
  year: 2010
  end-page: 83
  ident: bib0094
  article-title: Mining adversarial patterns via regularized loss minimization
  publication-title: Mach. Learn.
– start-page: 427
  year: 2015
  end-page: 436
  ident: bib0004
  article-title: Deep neural networks are easily fooled: high confidence predictions for unrecognizable images
  publication-title: Proceedings of the IEEE CVPR
– start-page: 1528
  year: 2016
  end-page: 1540
  ident: bib0017
  article-title: Accessorize to a crime: real and stealthy attacks on state-of-the-art face recognition
  publication-title: Proceedings of the CCS
– start-page: 405
  year: 2010
  end-page: 412
  ident: bib0027
  article-title: Online anomaly detection under adversarial impact
  publication-title: Proceedings of the Thirteenth AISTATS
– volume: 48
  start-page: 2387
  year: 2016
  end-page: 2396
  ident: bib0079
  article-title: Evasion and hardening of tree ensemble classifiers
  publication-title: Proceedings of the ICML
– reference: C.J. Simon-Gabriel, Y. Ollivier, B. Schölkopf, L. Bottou, D. Lopez-Paz, Adversarial vulnerability of neural networks increases with input dimension, 2018.
– start-page: 3647
  year: 2012
  end-page: 3690
  ident: bib0028
  article-title: Security analysis of online centroid anomaly detection
  publication-title: Proceedings of the JMLR
– year: 2017
  ident: bib0010
  article-title: Adversarial examples detection in deep networks with convolutional filter statistics
  publication-title: Proceedings of the IEEE ICCV
– year: 2017
  ident: bib0007
  article-title: Is deep learning safe for robot vision? adversarial examples against the iCub humanoid
  publication-title: Proceedings of the ICCV Workshop
– start-page: 27
  year: 2014
  end-page: 36
  ident: bib0063
  article-title: Poisoning behavioral malware clustering
  publication-title: Proceedings of the AISec
– year: 2017
  ident: bib0118
  article-title: Certified defenses for data poisoning attacks
  publication-title: Proceedings of the NIPS
– volume: Vol. 10493
  start-page: 62
  year: 2017
  end-page: 79
  ident: bib0011
  article-title: Adversarial examples for malware detection
  publication-title: Proceedings of the ESORICS (2)
– volume: 32
  start-page: 31
  year: 2015
  end-page: 41
  ident: bib0043
  article-title: Adversarial biometric recognition: a review on biometric system security from the adversarial machine-learning perspective
  publication-title: IEEE Signal Proc. Mag.
– volume: 77
  start-page: 354
  year: 2018
  end-page: 377
  ident: bib0001
  article-title: Recent advances in convolutional neural networks
  publication-title: Pattern Recognit.
– volume: 13
  start-page: 2617
  year: 2012
  end-page: 2654
  ident: bib0046
  article-title: Static prediction games for adversarial learning problems
  publication-title: Proceedings of the JMLR
– start-page: 87
  year: 2013
  end-page: 98
  ident: bib0062
  article-title: Is data clustering in adversarial settings secure?
  publication-title: Proceedings of the AISec
– start-page: 241
  year: 2006
  end-page: 256
  ident: bib0081
  article-title: Polymorphic blending attacks
  publication-title: Proceedings of the USENIX Security Symposium, USENIX Association
– volume: 7
  start-page: 2699
  year: 2006
  end-page: 2720
  ident: bib0059
  article-title: Spam filtering based on the analysis of text information embedded into images
  publication-title: J. Mach. Learn. Res.
– volume: 239
  start-page: 201
  year: 2013
  end-page: 225
  ident: bib0054
  article-title: Adversarial attacks against intrusion detection systems: taxonomy, solutions and open issues
  publication-title: Inf. Sci.
– volume: 5
  start-page: 1007
  year: 2004
  end-page: 1034
  ident: bib0121
  article-title: On robust properties of convex risk minimization methods for pattern recognition
  publication-title: J. Mach. Learn. Res.
– start-page: 641
  year: 2005
  end-page: 647
  ident: bib0020
  article-title: Adversarial learning
  publication-title: Proceedings of the ICKDDM
– year: 2017
  ident: bib0018
  article-title: Adversarial examples for semantic segmentation and object detection
  publication-title: Proceedings of the IEEE ICCV
– year: 2018
  ident: bib0053
  article-title: Adversarial Machine Learning
– volume: 10
  start-page: 5
  year: 2012
  end-page: 8
  ident: bib0097
  article-title: Security analytics and measurements
  publication-title: IEEE Secur. Priv.
– start-page: 601
  year: 2016
  end-page: 618
  ident: bib0066
  article-title: Stealing machine learning models via prediction APIs
  publication-title: Proceedings of the USENIX Security Symposium, USENIX Association
– volume: 28
  start-page: 2466
  year: 2017
  end-page: 2478
  ident: bib0047
  article-title: Randomized prediction games for adversarial machine learning
  publication-title: IEEE Trans. Neural Netw. Learn. Syst.
– start-page: 372
  year: 2016
  end-page: 387
  ident: bib0014
  article-title: The limitations of deep learning in adversarial settings
  publication-title: Proceedings of the First IEEE European Symposium Security and Privacy
– start-page: 59
  year: 2016
  end-page: 69
  ident: bib0078
  article-title: Secure kernel machines against evasion attacks
  publication-title: Proceedings of the AISec
– volume: 26
  year: 2002
  ident: bib0022
  article-title: Impact of artificial “gummy” fingers on fingerprint systems
  publication-title: Datenschutz und Datensicherheit
– reference: X. Chen, C. Liu, B. Li, K. Lu, D. Song, Targeted backdoor attacks on deep learning systems using data poisoning, ArXiv e-prints
– start-page: 3
  year: 2017
  end-page: 14
  ident: bib0085
  article-title: Adversarial examples are not easily detected: Bypassing ten detection methods
  publication-title: Proceedings of the AISec
– start-page: 1
  year: 2009
  end-page: 14
  ident: bib0025
  article-title: Antidote: understanding and defending against poisoning of anomaly detectors
  publication-title: Proceedings of the IMC
– year: 2018
  ident: bib0080
  article-title: Synthesizing robust adversarial examples
  publication-title: Proceedings of the ICLR
– start-page: 43
  year: 2011
  end-page: 57
  ident: bib0061
  article-title: Adversarial machine learning
  publication-title: Proceedings of the Fourth AISec
– volume: 10
  start-page: 1485
  year: 2009
  end-page: 1510
  ident: bib0084
  article-title: Robustness and regularization of support vector machines
  publication-title: J. Mach. Learn. Res.
– volume: 47
  start-page: 3641
  year: 2014
  end-page: 3655
  ident: bib0122
  article-title: Learning kernel logistic regression in the presence of class label noise
  publication-title: Pattern Recognit.
– volume: 50
  start-page: 17
  year: 2016
  end-page: 25
  ident: bib0109
  article-title: Robust multimodal face and fingerprint fusion in the presence of spoofing attacks
  publication-title: Pattern Recognit.
– start-page: 39
  year: 2017
  end-page: 49
  ident: bib0112
  article-title: Efficient defenses against adversarial attacks
  publication-title: Proceedings of the AISec
– volume: 32
  start-page: 1436
  year: 2011
  end-page: 1446
  ident: bib0057
  article-title: A survey and experimental evaluation of image spam filtering techniques
  publication-title: Pattern Recognit. Lett.
– year: 2018
  ident: bib0087
  article-title: Boosting adversarial examples with momentum
  publication-title: Proceedings of the IEEE CVPR
– reference: B.M. Thuraisingham, B. Biggio, D.M. Freeman, B. Miller, A. Sinha (Eds.), AISec ’17: Proceedings of the 10th ACM Workshop on Artificial Intelligence and Security, New York, NY, USA, ACM, 2017.
– volume: 27
  start-page: 76
  year: 2012
  end-page: 80
  ident: bib0096
  article-title: Does game theory work?
  publication-title: IEEE IS
– start-page: 2574
  year: 2016
  end-page: 2582
  ident: bib0005
  article-title: Deepfool: a simple and accurate method to fool deep neural networks
  publication-title: Proceedings of the IEEE CVPR
– year: 2014
  ident: bib0002
  article-title: Intriguing properties of neural networks
  publication-title: Proceedings of the ICLR
– volume: 10029
  start-page: 322
  year: 2016
  end-page: 332
  ident: bib0077
  article-title: On security and sparsity of linear classifiers for adversarial settings
  publication-title: Proceedings of the SSPR
– volume: 46
  start-page: 2256
  year: 2013
  end-page: 2266
  ident: bib0108
  article-title: Multi-label classification with a reject option
  publication-title: Pattern Recognit.
– year: 2013
  ident: bib0083
  article-title: Detection of malicious PDF files based on hierarchical document structure
  publication-title: Proceedings of the Twentieth NDSS
– year: 2009
  ident: bib0044
  article-title: Feature weighting for improved classifier robustness
  publication-title: Proceedings of the Sixth CEAS
– start-page: 99
  year: 2004
  end-page: 108
  ident: bib0019
  article-title: Adversarial classification
  publication-title: Proceedings of the ICKDDM
– start-page: 625
  year: 2017
  end-page: 642
  ident: bib0012
  article-title: Transcend: detecting concept drift in malware classification models
  publication-title: Proceedings of the USENIX Security Symposium
– volume: 46
  start-page: 305
  year: 2013
  end-page: 316
  ident: bib0098
  article-title: Robust twin support vector machine for pattern classification
  publication-title: Pattern Recognit.
– start-page: 81
  year: 2008
  end-page: 95
  ident: bib0116
  article-title: Casting out demons: sanitizing training data for anomaly sensors
  publication-title: Proceedings of the IEEE CS
– start-page: 1489
  year: 2008
  end-page: 1496
  ident: bib0036
  article-title: Convex learning with invariances
  publication-title: Proceedings of the NIPS
– volume: 9
  start-page: 482
  year: 2012
  end-page: 493
  ident: bib0091
  article-title: A learning-based approach to reactive security
  publication-title: IEEE Trans. Dependable Secure Comput.
– volume: 37
  start-page: 1689
  year: 2015
  end-page: 1698
  ident: bib0029
  article-title: Is feature selection secure against training data poisoning?
  publication-title: Proceedings of the Thirt Second ICML
– volume: 1
  start-page: 27
  year: 2010
  end-page: 41
  ident: bib0082
  article-title: Multiple classifier systems for robust classifier design in adversarial environments
  publication-title: Int. J. Mach. Learn. Cybern.
– volume: 32
  start-page: 577
  year: 2014
  end-page: 585
  ident: bib0099
  article-title: On robustness and regularization of structural support vector machines
  publication-title: Proceedings of the ICML
– volume: 10426
  start-page: 3
  year: 2017
  end-page: 29
  ident: bib0088
  article-title: Safety verification of deep neural networks
  publication-title: Proceedings of the Twenty Ninth ICCAV, Part I
– volume: 38
  year: 2017
  ident: bib0124
  article-title: Steps toward robust artificial intelligence
  publication-title: AI Mag.
– volume: 8621
  start-page: 42
  year: 2014
  end-page: 52
  ident: bib0064
  article-title: Poisoning complete-linkage hierarchical clustering
  publication-title: Proceedings of the SSPR
– start-page: 16
  year: 2006
  end-page: 25
  ident: bib0023
  article-title: Can machine learning be secure?
  publication-title: Proceedings of the ASIA CCS
– volume: 148
  start-page: 353
  year: 2006
  end-page: 360
  ident: bib0035
  article-title: Nightmare at test time: robust learning by feature deletion
  publication-title: Proceedings of the Twenty Third ICML
– year: 2015
  ident: bib0031
  article-title: Using machine teaching to identify optimal training-set attacks on machine learners
  publication-title: Proceedings of the Twenty Ninth AAAI
– volume: 43
  start-page: 1027
  year: 2010
  end-page: 1038
  ident: bib0075
  article-title: On the vulnerability of face verification systems to hill-climbing attacks
  publication-title: Pattern Recognit.
– year: 2018
  ident: bib0104
  article-title: Towards deep learning models resistant to adversarial attacks
  publication-title: Proceedings of the ICLR
– start-page: 1
  year: 2008
  end-page: 9
  ident: bib0024
  article-title: Exploiting machine learning to subvert your spam filter
  publication-title: Proceedings of the LEET, USENIX Association
– volume: 28
  start-page: 1460002
  year: 2014
  ident: bib0042
  article-title: Pattern recognition systems under attack: design issues and research challenges
  publication-title: Int. J. Pattern Recognit. Artif. Intell.
– volume: 10492
  start-page: 370
  year: 2017
  end-page: 388
  ident: bib0056
  article-title: DeltaPhish: detecting phishing webpages in compromised websites
  publication-title: Proceedings of the ESORICS
– start-page: 1322
  year: 2015
  end-page: 1333
  ident: bib0073
  article-title: Model inversion attacks that exploit confidence information and basic countermeasures
  publication-title: Proceedings of the ACM CCS
– start-page: 39
  year: 2017
  end-page: 57
  ident: bib0016
  article-title: Towards evaluating the robustness of neural networks
  publication-title: Proceedings of the IEEE SP
– volume: 80
  start-page: 274
  year: 2018
  end-page: 283
  ident: bib0086
  article-title: Obfuscated gradients give a false sense of security: Circumventing defenses to adversarial examples
  publication-title: Proceedings of the ICML
– volume: 13
  start-page: 1293
  year: 2012
  end-page: 1332
  ident: bib0070
  article-title: Query strategies for evading convex-inducing classifiers
  publication-title: J. Mach. Learn. Res.
– volume: 81
  start-page: 149
  year: 2010
  end-page: 178
  ident: bib0037
  article-title: Learning to classify with missing and corrupted features
  publication-title: Mach. Learn.
– start-page: 87
  year: 2011
  end-page: 92
  ident: bib0115
  article-title: Understanding the risk factors of learning in adversarial environments
  publication-title: Proceedings of the AISec
– volume: 63
  start-page: 139
  year: 2017
  end-page: 148
  ident: bib0120
  article-title: Robust support vector machines based on the rescaled hinge loss function
  publication-title: Pattern Recognit.
– volume: 65
  start-page: 4265
  year: 2017
  end-page: 4280
  ident: bib0102
  article-title: Robust large margin deep neural networks
  publication-title: IEEE Trans. Signal Process.
– volume: 40
  start-page: 71
  year: 2013
  end-page: 105
  ident: bib0058
  article-title: A survey of image spamming and filtering techniques
  publication-title: Artif. Intell. Rev.
– start-page: 81
  year: 2006
  end-page: 105
  ident: bib0090
  article-title: Paragraph: thwarting signature learning by training maliciously
  publication-title: Proceedings of the RAID, LNCS
– volume: 39
  start-page: 561
  year: 2017
  end-page: 575
  ident: bib0110
  article-title: Statistical meta-analysis of presentation attacks for secure multibiometric systems
  publication-title: IEEE Trans. Pattern Anal. Mach. Intell.
– year: 2016
  ident: 10.1016/j.patcog.2018.07.023_bib0125
  article-title: The mythos of model interpretability
– volume: 45
  start-page: 521
  issue: 1
  year: 2012
  ident: 10.1016/j.patcog.2018.07.023_bib0107
  article-title: A unifying view on dataset shift in classification
  publication-title: Pattern Recognit.
  doi: 10.1016/j.patcog.2011.06.019
– start-page: 3647
  year: 2012
  ident: 10.1016/j.patcog.2018.07.023_bib0028
  article-title: Security analysis of online centroid anomaly detection
– ident: 10.1016/j.patcog.2018.07.023_bib0103
– start-page: 582
  year: 2016
  ident: 10.1016/j.patcog.2018.07.023_bib0006
  article-title: Distillation as a defense to adversarial perturbations against deep neural networks
– volume: 1
  start-page: 27
  issue: 1
  year: 2010
  ident: 10.1016/j.patcog.2018.07.023_bib0082
  article-title: Multiple classifier systems for robust classifier design in adversarial environments
  publication-title: Int. J. Mach. Learn. Cybern.
  doi: 10.1007/s13042-010-0007-7
– start-page: 59
  year: 2016
  ident: 10.1016/j.patcog.2018.07.023_bib0078
  article-title: Secure kernel machines against evasion attacks
– volume: 5342
  start-page: 500
  year: 2008
  ident: 10.1016/j.patcog.2018.07.023_bib0045
  article-title: Adversarial pattern classification using multiple classifiers and randomisation
– start-page: 1322
  year: 2015
  ident: 10.1016/j.patcog.2018.07.023_bib0073
  article-title: Model inversion attacks that exploit confidence information and basic countermeasures
  publication-title: Proceedings of the ACM CCS
– volume: 10029
  start-page: 322
  year: 2016
  ident: 10.1016/j.patcog.2018.07.023_bib0077
  article-title: On security and sparsity of linear classifiers for adversarial settings
– year: 2018
  ident: 10.1016/j.patcog.2018.07.023_bib0087
  article-title: Boosting adversarial attacks with momentum
– start-page: 506
  year: 2017
  ident: 10.1016/j.patcog.2018.07.023_bib0015
  article-title: Practical black-box attacks against machine learning
– volume: 42
  start-page: 3365
  issue: 12
  year: 2009
  ident: 10.1016/j.patcog.2018.07.023_sbref0056
  article-title: Synthetic handwritten CAPTCHAs
  publication-title: Pattern Recognit.
  doi: 10.1016/j.patcog.2008.12.018
– year: 2017
  ident: 10.1016/j.patcog.2018.07.023_bib0118
  article-title: Certified defenses for data poisoning attacks
– start-page: 427
  year: 2015
  ident: 10.1016/j.patcog.2018.07.023_bib0004
  article-title: Deep neural networks are easily fooled: high confidence predictions for unrecognizable images
– volume: 40
  start-page: 71
  issue: 1
  year: 2013
  ident: 10.1016/j.patcog.2018.07.023_bib0058
  article-title: A survey of image spamming and filtering techniques
  publication-title: Artif. Intell. Rev.
  doi: 10.1007/s10462-011-9280-4
– year: 2015
  ident: 10.1016/j.patcog.2018.07.023_bib0003
  article-title: Explaining and harnessing adversarial examples
– volume: 32
  start-page: 1643
  issue: 12
  year: 2011
  ident: 10.1016/j.patcog.2018.07.023_bib0076
  article-title: An evaluation of indirect attacks and countermeasures in fingerprint verification systems
  publication-title: Pattern Recognit Lett.
  doi: 10.1016/j.patrec.2011.04.005
– start-page: 301
  year: 2015
  ident: 10.1016/j.patcog.2018.07.023_bib0101
  article-title: A unified gradient regularization family for adversarial examples
– volume: 81
  start-page: 121
  year: 2010
  ident: 10.1016/j.patcog.2018.07.023_bib0040
  article-title: The security of machine learning
  publication-title: Mach. Learn.
  doi: 10.1007/s10994-010-5188-5
– volume: 46
  start-page: 766
  issue: 3
  year: 2016
  ident: 10.1016/j.patcog.2018.07.023_bib0065
  article-title: Adversarial feature selection against evasion attacks
  publication-title: IEEE Trans. Cybern.
  doi: 10.1109/TCYB.2015.2415032
– volume: 47
  start-page: 3641
  issue: 11
  year: 2014
  ident: 10.1016/j.patcog.2018.07.023_bib0122
  article-title: Learning kernel logistic regression in the presence of class label noise
  publication-title: Pattern Recognit.
  doi: 10.1016/j.patcog.2014.05.007
– start-page: 119
  year: 2017
  ident: 10.1016/j.patcog.2018.07.023_bib0069
  article-title: Evading classifiers by morphing in the dark
– start-page: 87
  year: 2011
  ident: 10.1016/j.patcog.2018.07.023_bib0115
  article-title: Understanding the risk factors of learning in adversarial environments
  publication-title: Proceedings of the AISec
– year: 2017
  ident: 10.1016/j.patcog.2018.07.023_bib0010
  article-title: Adversarial examples detection in deep networks with convolutional filter statistics
– volume: 10
  start-page: 5
  issue: 3
  year: 2012
  ident: 10.1016/j.patcog.2018.07.023_bib0097
  article-title: Security analytics and measurements
  publication-title: IEEE Secur. Priv.
  doi: 10.1109/MSP.2012.75
– volume: 38
  issue: 3
  year: 2017
  ident: 10.1016/j.patcog.2018.07.023_bib0124
  article-title: Steps toward robust artificial intelligence
  publication-title: AI Mag.
– year: 2017
  ident: 10.1016/j.patcog.2018.07.023_bib0008
  article-title: MagNet: a two-pronged defense against adversarial examples
– start-page: 241
  year: 2006
  ident: 10.1016/j.patcog.2018.07.023_bib0081
  article-title: Polymorphic blending attacks
– volume: 46
  start-page: 305
  issue: 1
  year: 2013
  ident: 10.1016/j.patcog.2018.07.023_bib0098
  article-title: Robust twin support vector machine for pattern classification
  publication-title: Pattern Recognit.
  doi: 10.1016/j.patcog.2012.06.019
– volume: 148
  start-page: 353
  year: 2006
  ident: 10.1016/j.patcog.2018.07.023_bib0035
  article-title: Nightmare at test time: robust learning by feature deletion
– volume: 77
  start-page: 354
  year: 2018
  ident: 10.1016/j.patcog.2018.07.023_bib0001
  article-title: Recent advances in convolutional neural networks
  publication-title: Pattern Recognit.
  doi: 10.1016/j.patcog.2017.10.013
– volume: 43
  start-page: 1027
  issue: 3
  year: 2010
  ident: 10.1016/j.patcog.2018.07.023_bib0075
  article-title: On the vulnerability of face verification systems to hill-climbing attacks
  publication-title: Pattern Recognit.
  doi: 10.1016/j.patcog.2009.08.022
– volume: 39
  start-page: 561
  issue: 3
  year: 2017
  ident: 10.1016/j.patcog.2018.07.023_bib0110
  article-title: Statistical meta-analysis of presentation attacks for secure multibiometric systems
  publication-title: IEEE Trans. Pattern Anal. Mach. Intell.
  doi: 10.1109/TPAMI.2016.2558154
– start-page: 91
  year: 2017
  ident: 10.1016/j.patcog.2018.07.023_bib0117
  article-title: Robust linear regression against training data poisoning
– start-page: 39
  year: 2017
  ident: 10.1016/j.patcog.2018.07.023_bib0016
  article-title: Towards evaluating the robustness of neural networks
  publication-title: Proceedings of the IEEE SP
– start-page: 27
  year: 2014
  ident: 10.1016/j.patcog.2018.07.023_bib0063
  article-title: Poisoning behavioral malware clustering
– volume: 4
  start-page: 65
  issue: 1
  year: 2012
  ident: 10.1016/j.patcog.2018.07.023_bib0123
  article-title: Learning in a large function space: privacy-preserving mechanisms for SVM learning
  publication-title: J. Priv. Confid.
– start-page: 1
  year: 2008
  ident: 10.1016/j.patcog.2018.07.023_bib0024
  article-title: Exploiting machine learning to subvert your spam filter
– start-page: 81
  year: 2008
  ident: 10.1016/j.patcog.2018.07.023_bib0116
  article-title: Casting out demons: sanitizing training data for anomaly sensors
– start-page: 405
  year: 2010
  ident: 10.1016/j.patcog.2018.07.023_bib0027
  article-title: Online anomaly detection under adversarial impact
– start-page: 105
  year: 2014
  ident: 10.1016/j.patcog.2018.07.023_bib0030
  article-title: Security evaluation of support vector machines in adversarial environments
– start-page: 625
  year: 2017
  ident: 10.1016/j.patcog.2018.07.023_bib0012
  article-title: Transcend: detecting concept drift in malware classification models
– ident: 10.1016/j.patcog.2018.07.023_bib0048
  doi: 10.1109/TDSC.2017.2700270
– volume: 50
  start-page: 17
  year: 2016
  ident: 10.1016/j.patcog.2018.07.023_bib0109
  article-title: Robust multimodal face and fingerprint fusion in the presence of spoofing attacks
  publication-title: Pattern Recognit.
  doi: 10.1016/j.patcog.2015.08.007
– start-page: 2574
  year: 2016
  ident: 10.1016/j.patcog.2018.07.023_bib0005
  article-title: DeepFool: a simple and accurate method to fool deep neural networks
– start-page: 1402
  year: 2016
  ident: 10.1016/j.patcog.2018.07.023_bib0055
  article-title: PhishEye: live monitoring of sandboxed phishing kits
– ident: 10.1016/j.patcog.2018.07.023_bib0114
– year: 2015
  ident: 10.1016/j.patcog.2018.07.023_bib0031
  article-title: Using machine teaching to identify optimal training-set attacks on machine learners
– volume: 6713
  start-page: 350
  year: 2011
  ident: 10.1016/j.patcog.2018.07.023_bib0111
  article-title: Bagging classifiers for fighting poisoning attacks in adversarial classification tasks
– ident: 10.1016/j.patcog.2018.07.023_bib0072
– volume: 13
  start-page: 1293
  year: 2012
  ident: 10.1016/j.patcog.2018.07.023_bib0070
  article-title: Query strategies for evading convex-inducing classifiers
  publication-title: J. Mach. Learn. Res.
– volume: 26
  start-page: 984
  issue: 4
  year: 2014
  ident: 10.1016/j.patcog.2018.07.023_bib0041
  article-title: Security evaluation of pattern classifiers under attack
  publication-title: IEEE Trans. Knowl. Data Eng.
  doi: 10.1109/TKDE.2013.57
– volume: 65
  start-page: 4265
  issue: 16
  year: 2017
  ident: 10.1016/j.patcog.2018.07.023_bib0102
  article-title: Robust large margin deep neural networks
  publication-title: IEEE Trans. Signal Process.
  doi: 10.1109/TSP.2017.2708039
– start-page: 4480
  year: 2016
  ident: 10.1016/j.patcog.2018.07.023_bib0113
  article-title: Improving the robustness of deep neural networks via stability training
– volume: 48
  start-page: 2387
  year: 2016
  ident: 10.1016/j.patcog.2018.07.023_bib0079
  article-title: Evasion and hardening of tree ensemble classifiers
– year: 2013
  ident: 10.1016/j.patcog.2018.07.023_bib0083
  article-title: Detection of malicious PDF files based on hierarchical document structure
– year: 2014
  ident: 10.1016/j.patcog.2018.07.023_bib0002
  article-title: Intriguing properties of neural networks
– start-page: 641
  year: 2005
  ident: 10.1016/j.patcog.2018.07.023_bib0020
  article-title: Adversarial learning
– start-page: 3
  year: 2017
  ident: 10.1016/j.patcog.2018.07.023_bib0085
  article-title: Adversarial examples are not easily detected: Bypassing ten detection methods
– ident: 10.1016/j.patcog.2018.07.023_bib0052
– ident: 10.1016/j.patcog.2018.07.023_bib0049
– year: 2018
  ident: 10.1016/j.patcog.2018.07.023_bib0080
  article-title: Synthesizing robust adversarial examples
– year: 2017
  ident: 10.1016/j.patcog.2018.07.023_bib0009
  article-title: SafetyNet: detecting and rejecting adversarial examples robustly
– year: 2017
  ident: 10.1016/j.patcog.2018.07.023_bib0007
  article-title: Is deep learning safe for robot vision? Adversarial examples against the iCub humanoid
– volume: 7
  start-page: 2699
  year: 2006
  ident: 10.1016/j.patcog.2018.07.023_bib0059
  article-title: Spam filtering based on the analysis of text information embedded into images
  publication-title: J. Mach. Learn. Res.
– start-page: 1563
  year: 2016
  ident: 10.1016/j.patcog.2018.07.023_bib0106
  article-title: Towards open set deep networks
– volume: 9
  start-page: 482
  issue: 4
  year: 2012
  ident: 10.1016/j.patcog.2018.07.023_bib0091
  article-title: A learning-based approach to reactive security
  publication-title: IEEE Trans. Dependable Secure Comput.
  doi: 10.1109/TDSC.2011.42
– volume: 81
  start-page: 69
  issue: 1
  year: 2010
  ident: 10.1016/j.patcog.2018.07.023_bib0094
  article-title: Mining adversarial patterns via regularized loss minimization
  publication-title: Mach. Learn.
  doi: 10.1007/s10994-010-5199-2
– year: 2018
  ident: 10.1016/j.patcog.2018.07.023_bib0104
  article-title: Towards deep learning models resistant to adversarial attacks
– start-page: 1528
  year: 2016
  ident: 10.1016/j.patcog.2018.07.023_bib0017
  article-title: Accessorize to a crime: real and stealthy attacks on state-of-the-art face recognition
– start-page: 27
  year: 2018
  ident: 10.1016/j.patcog.2018.07.023_bib0033
  article-title: Towards poisoning of deep learning algorithms with back-gradient optimization
– volume: 80
  start-page: 274
  year: 2018
  ident: 10.1016/j.patcog.2018.07.023_bib0086
  article-title: Obfuscated gradients give a false sense of security: Circumventing defenses to adversarial examples
  publication-title: Proceedings of the ICML
– start-page: 39
  year: 2017
  ident: 10.1016/j.patcog.2018.07.023_bib0112
  article-title: Efficient defenses against adversarial attacks
– volume: 10492
  start-page: 370
  year: 2017
  ident: 10.1016/j.patcog.2018.07.023_bib0056
  article-title: DeltaPhish: detecting phishing webpages in compromised websites
  publication-title: Proceedings of the ESORICS
– volume: 13
  start-page: 2617
  year: 2012
  ident: 10.1016/j.patcog.2018.07.023_bib0046
  article-title: Static prediction games for adversarial learning problems
– start-page: 1
  year: 2009
  ident: 10.1016/j.patcog.2018.07.023_bib0025
  article-title: Antidote: understanding and defending against poisoning of anomaly detectors
– volume: 3
  start-page: 1
  issue: 1
  year: 2013
  ident: 10.1016/j.patcog.2018.07.023_bib0051
  article-title: Machine learning methods for computer security (Dagstuhl Perspectives Workshop 12371)
  publication-title: Dagstuhl Manif.
– year: 2004
  ident: 10.1016/j.patcog.2018.07.023_bib0034
  article-title: On attacking statistical spam filters
– volume: 32
  start-page: 31
  issue: 5
  year: 2015
  ident: 10.1016/j.patcog.2018.07.023_bib0043
  article-title: Adversarial biometric recognition: a review on biometric system security from the adversarial machine-learning perspective
  publication-title: IEEE Signal Proc. Mag.
  doi: 10.1109/MSP.2015.2426728
– start-page: 15
  year: 2017
  ident: 10.1016/j.patcog.2018.07.023_bib0068
  article-title: ZOO: zeroth order optimization based black-box attacks to deep neural networks without training substitute models
– year: 2018
  ident: 10.1016/j.patcog.2018.07.023_bib0053
– volume: 28
  start-page: 55
  year: 2013
  ident: 10.1016/j.patcog.2018.07.023_bib0095
  article-title: Bayesian games for adversarial regression problems
– volume: 28
  start-page: 2466
  issue: 11
  year: 2017
  ident: 10.1016/j.patcog.2018.07.023_bib0047
  article-title: Randomized prediction games for adversarial machine learning
  publication-title: IEEE Trans. Neural Netw. Learn. Syst.
  doi: 10.1109/TNNLS.2016.2593488
– year: 2009
  ident: 10.1016/j.patcog.2018.07.023_bib0044
  article-title: Feature weighting for improved classifier robustness
– volume: 80
  start-page: 5283
  year: 2018
  ident: 10.1016/j.patcog.2018.07.023_bib0100
  article-title: Provable defenses against adversarial examples via the convex outer adversarial polytope
– year: 2018
  ident: 10.1016/j.patcog.2018.07.023_bib0119
  article-title: Manipulating machine learning: poisoning attacks and countermeasures for regression learning
– year: 2005
  ident: 10.1016/j.patcog.2018.07.023_bib0021
  article-title: Good word attacks on statistical spam filters
– volume: 28
  start-page: 1460002
  issue: 7
  year: 2014
  ident: 10.1016/j.patcog.2018.07.023_bib0042
  article-title: Pattern recognition systems under attack: design issues and research challenges
  publication-title: Int. J. Pattern Recognit. Artif. Intell.
  doi: 10.1142/S0218001414600027
– volume: 239
  start-page: 201
  issue: 0
  year: 2013
  ident: 10.1016/j.patcog.2018.07.023_bib0054
  article-title: Adversarial attacks against intrusion detection systems: taxonomy, solutions and open issues
  publication-title: Inf. Sci.
  doi: 10.1016/j.ins.2013.03.022
– start-page: 1489
  year: 2008
  ident: 10.1016/j.patcog.2018.07.023_bib0036
  article-title: Convex learning with invariances
– volume: 32
  start-page: 1436
  issue: 10
  year: 2011
  ident: 10.1016/j.patcog.2018.07.023_bib0057
  article-title: A survey and experimental evaluation of image spam filtering techniques
  publication-title: Pattern Recognit. Lett.
  doi: 10.1016/j.patrec.2011.03.022
– volume: 5
  start-page: 1007
  year: 2004
  ident: 10.1016/j.patcog.2018.07.023_bib0121
  article-title: On robust properties of convex risk minimization methods for pattern recognition
  publication-title: J. Mach. Learn. Res.
– volume: 32
  start-page: 577
  year: 2014
  ident: 10.1016/j.patcog.2018.07.023_bib0099
  article-title: On robustness and regularization of structural support vector machines
– volume: 10426
  start-page: 3
  year: 2017
  ident: 10.1016/j.patcog.2018.07.023_bib0088
  article-title: Safety verification of deep neural networks
– year: 2017
  ident: 10.1016/j.patcog.2018.07.023_bib0032
  article-title: Understanding black-box predictions via influence functions
– start-page: 87
  year: 2013
  ident: 10.1016/j.patcog.2018.07.023_bib0062
  article-title: Is data clustering in adversarial settings secure?
– volume: 27
  start-page: 76
  issue: 6
  year: 2012
  ident: 10.1016/j.patcog.2018.07.023_bib0096
  article-title: Does game theory work?
  publication-title: IEEE Intell. Syst.
– start-page: 372
  year: 2016
  ident: 10.1016/j.patcog.2018.07.023_bib0014
  article-title: The limitations of deep learning in adversarial settings
– volume: 46
  start-page: 2256
  issue: 8
  year: 2013
  ident: 10.1016/j.patcog.2018.07.023_bib0108
  article-title: Multi-label classification with a reject option
  publication-title: Pattern Recognit.
  doi: 10.1016/j.patcog.2013.01.035
– ident: 10.1016/j.patcog.2018.07.023_bib0071
– start-page: 16
  year: 2006
  ident: 10.1016/j.patcog.2018.07.023_bib0023
  article-title: Can machine learning be secure?
– volume: 63
  start-page: 139
  year: 2017
  ident: 10.1016/j.patcog.2018.07.023_bib0120
  article-title: Robust support vector machines based on the rescaled hinge loss function
  publication-title: Pattern Recognit.
  doi: 10.1016/j.patcog.2016.09.045
– volume: 81
  start-page: 115
  year: 2010
  ident: 10.1016/j.patcog.2018.07.023_bib0050
  article-title: Machine learning in adversarial environments
  publication-title: Mach. Learn.
  doi: 10.1007/s10994-010-5207-6
– start-page: 99
  year: 2004
  ident: 10.1016/j.patcog.2018.07.023_bib0019
  article-title: Adversarial classification
– start-page: 1807
  year: 2012
  ident: 10.1016/j.patcog.2018.07.023_bib0026
  article-title: Poisoning attacks against support vector machines
– volume: 10
  start-page: 1485
  year: 2009
  ident: 10.1016/j.patcog.2018.07.023_bib0084
  article-title: Robustness and regularization of support vector machines
  publication-title: J. Mach. Learn. Res.
– start-page: 81
  year: 2006
  ident: 10.1016/j.patcog.2018.07.023_bib0090
  article-title: Paragraph: thwarting signature learning by training maliciously
  publication-title: Proceedings of the RAID, LNCS
– start-page: 197
  year: 2014
  ident: 10.1016/j.patcog.2018.07.023_bib0039
  article-title: Practical evasion of a learning-based classifier: a case study
– volume: 8190
  start-page: 387
  year: 2013
  ident: 10.1016/j.patcog.2018.07.023_bib0038
  article-title: Evasion attacks against machine learning at test time
– start-page: 601
  year: 2016
  ident: 10.1016/j.patcog.2018.07.023_bib0066
  article-title: Stealing machine learning models via prediction APIs
– volume: 14
  start-page: 68
  issue: 3
  year: 2016
  ident: 10.1016/j.patcog.2018.07.023_bib0013
  article-title: Machine learning in adversarial settings
  publication-title: IEEE Secur. Priv.
  doi: 10.1109/MSP.2016.51
– volume: 8621
  start-page: 42
  year: 2014
  ident: 10.1016/j.patcog.2018.07.023_bib0064
  article-title: Poisoning complete-linkage hierarchical clustering
– start-page: 5
  year: 2008
  ident: 10.1016/j.patcog.2018.07.023_bib0092
  article-title: Classifier ensembles for detecting concept change in streaming data: overview and perspectives
– volume: 3546
  start-page: 1100
  year: 2005
  ident: 10.1016/j.patcog.2018.07.023_bib0074
  article-title: Vulnerabilities in biometric encryption systems
– year: 2016
  ident: 10.1016/j.patcog.2018.07.023_bib0067
  article-title: Automatically evading classifiers
– volume: 37
  start-page: 1689
  year: 2015
  ident: 10.1016/j.patcog.2018.07.023_bib0029
  article-title: Is feature selection secure against training data poisoning?
– volume: 81
  start-page: 149
  year: 2010
  ident: 10.1016/j.patcog.2018.07.023_bib0037
  article-title: Learning to classify with missing and corrupted features
  publication-title: Mach. Learn.
  doi: 10.1007/s10994-009-5124-8
– volume: 9132
  start-page: 168
  year: 2015
  ident: 10.1016/j.patcog.2018.07.023_bib0105
  article-title: One-and-a-half-class multiple classifier systems for secure learning against evasion attacks at test time
– year: 2017
  ident: 10.1016/j.patcog.2018.07.023_bib0018
  article-title: Adversarial examples for semantic segmentation and object detection
– start-page: 1
  year: 2017
  ident: 10.1016/j.patcog.2018.07.023_bib0089
  article-title: DeepXplore: automated whitebox testing of deep learning systems
– volume: 26
  issue: 8
  year: 2002
  ident: 10.1016/j.patcog.2018.07.023_bib0022
  article-title: Impact of artificial “gummy” fingers on fingerprint systems
  publication-title: Datenschutz und Datensicherheit
– volume: 10493
  start-page: 62
  year: 2017
  ident: 10.1016/j.patcog.2018.07.023_bib0011
  article-title: Adversarial examples for malware detection
– volume: 38
  start-page: 1520
  issue: 10
  year: 2005
  ident: 10.1016/j.patcog.2018.07.023_bib0093
  article-title: Optimal robust classifiers
  publication-title: Pattern Recognit.
  doi: 10.1016/j.patcog.2005.01.019
– start-page: 43
  year: 2011
  ident: 10.1016/j.patcog.2018.07.023_bib0061
  article-title: Adversarial machine learning
StartPage 317
SubjectTerms Adversarial examples
Adversarial machine learning
Deep learning
Evasion attacks
Poisoning attacks
Secure learning
Title Wild patterns: Ten years after the rise of adversarial machine learning
URI https://dx.doi.org/10.1016/j.patcog.2018.07.023
Volume 84
WOSCitedRecordID wos000444659200024