SECURITY EVALUATION OF PATTERN CLASSIFIERS UNDER ATTACK

Saved in:
Bibliographic Details
Title: SECURITY EVALUATION OF PATTERN CLASSIFIERS UNDER ATTACK
Authors: Deepak, Immani; Ghosh, Ria M
Source: International Journal of Innovative Technology and Research; Vol 5, No 6 (2017): October - November 2017; pp. 7705-7709
Publisher Information: International Journal of Innovative Technology and Research
Publication Year: 2017
Collection: International Journal of Innovative Technology and Research (IJITR)
Keywords: CSE, Data Mining, Java Technology, UML Diagrams, Data Flow Diagram
Description: Pattern classification systems are commonly used in adversarial applications, such as biometric authentication, network intrusion detection, and spam filtering, in which data can be purposely manipulated by humans to undermine their operation. Because this adversarial scenario is not taken into account by classical design methods, pattern classification systems may exhibit vulnerabilities whose exploitation may severely affect their performance and consequently limit their practical utility. In this paper, we address one of the main open issues: evaluating, at the design phase, the security of pattern classifiers, namely, the performance degradation they may incur under potential attacks during operation. We propose a framework for empirical evaluation of classifier security that formalizes and generalizes the main ideas proposed in the literature, and give examples of its use in three real applications. Reported results show that security evaluation can provide a more complete understanding of the classifier's behavior in adversarial environments and lead to better design choices.
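To make the evaluation idea concrete, the following is a minimal sketch, not the authors' implementation (which is not included in this record): it trains a linear classifier on synthetic data and measures how its false negative rate degrades as a simulated evasion attack grows stronger. The dataset, the attack model (shifting malicious test samples against the learned weight vector under a bounded budget), and the classifier choice are all illustrative assumptions.

```python
# Hedged sketch of "security evaluation under attack": measure classifier
# performance degradation as simulated attack strength increases.
# All modeling choices below are assumptions for illustration only.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# Synthetic two-class data standing in for, e.g., spam (1) vs. legitimate (0).
X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.5, random_state=0)

clf = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)

# Simple evasion model: the attacker shifts malicious samples against the
# classifier's weight vector, limited by a per-sample modification budget.
w = clf.coef_.ravel()
direction = -w / np.linalg.norm(w)

for budget in [0.0, 0.5, 1.0, 2.0, 4.0]:
    X_atk = X_te.copy()
    mal = y_te == 1
    X_atk[mal] += budget * direction  # only malicious samples are perturbed
    fn_rate = np.mean(clf.predict(X_atk[mal]) == 0)
    print(f"attack strength {budget}: false negative rate = {fn_rate:.2f}")
```

Plotting the false negative rate against attack strength for several candidate classifiers gives the kind of security-evaluation curve the abstract refers to, and supports design choices beyond accuracy on clean data.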
Publication Type: article in journal/newspaper
File Description: application/pdf
Language: English
Relation: http://www.ijitr.com/index.php/ojs/article/view/2059/pdf; http://www.ijitr.com/index.php/ojs/article/view/2059
Availability: http://www.ijitr.com/index.php/ojs/article/view/2059
Rights: To The Editor-in-Chief, IJITR
1. I understand that the Editor-in-Chief may transfer the copyright to a publisher at his discretion.
2. The author(s) reserve(s) all proprietary rights, such as patent rights and the right to use all or part of the article in future works of their own, such as lectures, press releases, and reviews of textbooks. In the case of republication of the whole, part, or parts thereof in periodicals or reprint publications by a third party, written permission must be obtained from the Editor-in-Chief, IJITR, or his designated publisher.
3. I am authorized to execute this transfer of copyright on behalf of all the authors of the article named above.
4. I hereby declare that the material being presented by me in this paper is our original work and does not contain or include material taken from other copyrighted sources. Wherever such material has been included, it has been clearly indented and/or identified by quotation marks, and due and proper acknowledgement has been given by citing the source at appropriate places.
Document Code: edsbas.871ABF7
Database: BASE