An adaptive randomized and secured approach against adversarial attacks

Bibliographic Details
Published in: Information Security Journal, Vol. 32, No. 6, pp. 401-416
Main Authors: Dhamija, Lovi, Garg, Urvashi
Format: Journal Article
Language: English
Published: Taylor & Francis 02.11.2023
ISSN: 1939-3555, 1939-3547
Description
Summary: With the rising use of machine learning algorithms for classification and regression tasks, deep learning has been widely adopted in both cyber and non-cyber domains. Recent research has shown that machine learning classifiers such as deep neural networks (DNNs) can be used to improve detection of adversarial samples as well as to detect malware in the cyber security domain. However, recent studies in deep learning have found that DNN classifiers are highly vulnerable and can be evaded simply by making small modifications to either the training model or the training data. This work proposes a randomized defensive mechanism that uses generative adversarial networks to construct additional adversaries and then defend against them. In the process, we encountered several open challenges that highlight common difficulties faced by defensive mechanisms. We provide a general overview of adversarial attacks and propose an Adaptive Randomized Algorithm to enhance the robustness of models. Moreover, this work aims to ensure the security and transferability of deep learning classifiers.
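
The abstract describes the defense only at a high level. As a rough illustration, the sketch below shows a generic adversarial-training loop combined with a randomized inference step; it is a minimal sketch under stated assumptions, not the authors' implementation. The FGSM-style perturbation here stands in for the GAN-based adversary generator mentioned in the abstract, and all names (SmallCNN, fgsm_perturb, randomized_forward, epsilon, sigma) are illustrative.

# Illustrative sketch only: generic adversarial training plus a randomized
# inference step. FGSM stands in for the GAN-based adversary generator; all
# names and hyperparameters are assumptions, not the paper's code.
import torch
import torch.nn as nn
import torch.nn.functional as F

class SmallCNN(nn.Module):
    """Toy classifier standing in for the DNN under attack."""
    def __init__(self, num_classes=10):
        super().__init__()
        self.conv = nn.Conv2d(1, 16, 3, padding=1)
        self.fc = nn.Linear(16 * 28 * 28, num_classes)

    def forward(self, x):
        x = F.relu(self.conv(x))
        return self.fc(x.flatten(1))

def fgsm_perturb(model, x, y, epsilon=0.1):
    """Craft adversarial samples with FGSM (stand-in for a GAN generator)."""
    x_adv = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x_adv), y)
    loss.backward()
    return (x_adv + epsilon * x_adv.grad.sign()).clamp(0, 1).detach()

def randomized_forward(model, x, sigma=0.05):
    """Randomized inference: add small Gaussian noise before classifying."""
    return model(x + sigma * torch.randn_like(x))

def train_step(model, optimizer, x, y):
    """Train on a mix of clean and adversarial samples."""
    x_adv = fgsm_perturb(model, x, y)
    optimizer.zero_grad()
    loss = F.cross_entropy(model(x), y) + F.cross_entropy(model(x_adv), y)
    loss.backward()
    optimizer.step()
    return loss.item()

if __name__ == "__main__":
    model = SmallCNN()
    opt = torch.optim.Adam(model.parameters(), lr=1e-3)
    x, y = torch.rand(8, 1, 28, 28), torch.randint(0, 10, (8,))
    print("loss:", train_step(model, opt, x, y))
    print("randomized logits shape:", randomized_forward(model, x).shape)

The randomization at inference time is what makes the defense harder to transfer attacks against: an attacker optimizing against one realization of the noise does not see the exact model the defender evaluates.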
DOI: 10.1080/19393555.2022.2088429