Adversarial Attack Type I: Cheat Classifiers by Significant Changes
Despite the great success of deep neural networks, adversarial attacks can cheat well-trained classifiers with small perturbations. In this paper, we propose another type of adversarial attack that cheats classifiers by significant changes. For example, we can significantly change a face but...
Saved in:
| Published in: | IEEE transactions on pattern analysis and machine intelligence; Volume 43; Issue 3; pp. 1100-1109 |
|---|---|
| Main Authors: | Tang, Sanli; Huang, Xiaolin; Chen, Mingjian; Sun, Chengjin; Yang, Jie |
| Format: | Journal Article |
| Language: | English |
| Publication Details: | United States: IEEE, 01.03.2021; The Institute of Electrical and Electronics Engineers, Inc. (IEEE) |
| Subject: | |
| ISSN: | 0162-8828, 1939-3539, 2160-9292 |
| Online Access: | Get full text |
| Abstract | Despite the great success of deep neural networks, adversarial attacks can cheat well-trained classifiers with small perturbations. In this paper, we propose another type of adversarial attack that cheats classifiers by significant changes. For example, we can significantly change a face, yet well-trained neural networks still recognize the adversarial and the original example as the same person. Statistically, the existing adversarial attack increases Type II error, whereas the proposed one aims at Type I error; they are hence named Type II and Type I adversarial attacks, respectively. The two types of attack are equally important but essentially different, which is intuitively explained and numerically evaluated. To implement the proposed attack, a supervised variational autoencoder is designed, and the classifier is then attacked by updating the latent variables using gradient information. Besides, with pre-trained generative models, Type I attacks on latent spaces are investigated as well. Experimental results show that our method is practical and effective for generating Type I adversarial examples on large-scale image datasets. Most of the generated examples can pass detectors designed for defending against Type II attacks, and the strengthening strategy is effective only against its specific attack type; both findings imply that the underlying reasons for Type I and Type II attacks are different. |
|---|---|
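The attack recipe in the abstract (decode an example from a latent code, then use gradient information to push it far from the original while the classifier's verdict stays unchanged) can be sketched on a toy problem. The sketch below is an illustrative assumption, not the paper's supervised variational autoencoder: a fixed linear map `G` stands in for the decoder, a linear score for the attacked classifier, and `type1_attack` is a hypothetical name.

```python
import numpy as np

# Toy sketch of a Type I (significant-change) attack in a latent space.
# G is a stand-in linear "decoder", w a stand-in linear classifier; both
# are illustrative assumptions, not the paper's architecture.
rng = np.random.default_rng(0)
d_latent, d_input = 8, 32
G = rng.standard_normal((d_input, d_latent)) / np.sqrt(d_latent)  # decoder
w = rng.standard_normal(d_input) / np.sqrt(d_input)               # classifier

z0 = rng.standard_normal(d_latent)
x0 = G @ z0                                  # the original example

def score(z):
    """Classifier's score for the original class of the decoded example."""
    return w @ (G @ z)

def type1_attack(z, tau, steps=200, lr=0.05):
    """Gradient ascent on the input-space distance to x0, while forcing
    the class score to stay at or above the threshold tau."""
    for _ in range(steps):
        # move the decoded example away from x0: ascend ||G z - x0||^2
        g_dist = 2 * G.T @ (G @ z - x0)
        z = z + lr * g_dist / (np.linalg.norm(g_dist) + 1e-12)
        # restore the class score whenever the distance step pushed it down
        g_score = G.T @ w
        while score(z) < tau:
            z = z + lr * g_score / np.linalg.norm(g_score)
    return z

# start from a slightly jittered code so the distance gradient is nonzero
z_adv = type1_attack(z0 + 0.01 * rng.standard_normal(d_latent), tau=score(z0))
change = np.linalg.norm(G @ z_adv - x0)      # large: a "significant change"
```

On this toy setup the decoded example ends far from `x0` while its class score never finishes below the original's, which is the Type I failure mode the abstract describes: a significantly changed input that the classifier still accepts as the original class.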
| Author | Tang, Sanli; Huang, Xiaolin; Chen, Mingjian; Sun, Chengjin; Yang, Jie |
| Author Details | Sanli Tang (tangsanli@sjtu.edu.cn); Xiaolin Huang (ORCID 0000-0003-4285-6520, xiaolinhuang@sjtu.edu.cn); Mingjian Chen (ORCID 0000-0003-0584-6286, w179261466@sjtu.edu.cn); Chengjin Sun (ORCID 0000-0002-9992-7919, sunchengjin@sjtu.edu.cn); Jie Yang (ORCID 0000-0003-4801-7162, jieyang@sjtu.edu.cn). All authors: MOE Key Laboratory of System Control and Information Processing, Institute of Image Processing and Pattern Recognition and Institute of Medical Robotics, Shanghai Jiao Tong University, Shanghai, P.R. China |
| BackLink | https://www.ncbi.nlm.nih.gov/pubmed/31442970 (view this record in MEDLINE/PubMed) |
| CODEN | ITPIDJ |
| ContentType | Journal Article |
| Copyright | Copyright The Institute of Electrical and Electronics Engineers, Inc. (IEEE) 2021 |
| Copyright_xml | – notice: Copyright The Institute of Electrical and Electronics Engineers, Inc. (IEEE) 2021 |
| DOI | 10.1109/TPAMI.2019.2936378 |
| Discipline | Engineering Computer Science |
| EISSN | 2160-9292 1939-3539 |
| EndPage | 1109 |
| ExternalDocumentID | 31442970 10_1109_TPAMI_2019_2936378 8807315 |
| Genre | orig-research Journal Article |
| Funding | Committee of Science and Technology, Shanghai, China (19510711200); National Natural Science Foundation of China (61977046, 61603248, 61876107, U1803261); 1000-Talent Plan |
| ISICitedReferencesCount | 19 |
| ISSN | 0162-8828 1939-3539 |
| IsPeerReviewed | true |
| IsScholarly | true |
| Issue | 3 |
| Language | English |
| License | https://ieeexplore.ieee.org/Xplorehelp/downloads/license-information/IEEE.html https://doi.org/10.15223/policy-029 https://doi.org/10.15223/policy-037 |
| ORCID | 0000-0003-4285-6520 0000-0003-0584-6286 0000-0003-4801-7162 0000-0002-9992-7919 |
| PMID | 31442970 |
| PQID | 2487437337 |
| PQPubID | 85458 |
| PageCount | 10 |
| PublicationCentury | 2000 |
| PublicationDate | 2021-03-01 |
| PublicationDateYYYYMMDD | 2021-03-01 |
| PublicationDecade | 2020 |
| PublicationPlace | United States |
| PublicationTitle | IEEE transactions on pattern analysis and machine intelligence |
| PublicationTitleAbbrev | TPAMI |
| PublicationTitleAlternate | IEEE Trans Pattern Anal Mach Intell |
| PublicationYear | 2021 |
| Publisher | IEEE The Institute of Electrical and Electronics Engineers, Inc. (IEEE) |
| StartPage | 1100 |
| SubjectTerms | Adversarial attack; Aerospace electronics; Artificial neural networks; Classifiers; Face recognition; Neural networks; Perturbations; Sun; supervised variational autoencoder; Task analysis; Toy manufacturing industry; Training; Type I error |
| Title | Adversarial Attack Type I: Cheat Classifiers by Significant Changes |
| URI | https://ieeexplore.ieee.org/document/8807315 https://www.ncbi.nlm.nih.gov/pubmed/31442970 https://www.proquest.com/docview/2487437337 https://www.proquest.com/docview/2280552596 |
| Volume | 43 |