DeepCens: A deep learning‐based system for real‐time image and video censorship.

Saved in:
Detailed bibliography
Title: DeepCens: A deep learning-based system for real-time image and video censorship.
Authors: Yuksel, Asim Sinan; Tan, Fatma Gulsah
Source: Expert Systems; Dec 2023, Vol. 40, Issue 10, p1-19, 19p
Abstract: The popularity of social networks and video-on-demand platforms has increased the importance of image and video censorship. These platforms host content such as violence, explicit material, drug use and smoking that may be offensive or harmful to certain viewers. Censorship is therefore employed to filter or remove content unsuitable for particular audiences, such as children and teenagers. However, policies for censoring harmful content in digital environments are limited or nonexistent, which underscores the need for automated systems that can detect and censor harmful content in real time. To address these challenges, we developed what is, to our knowledge, the first system that uses deep learning techniques to censor harmful content. We propose two novel YOLO-based real-time censorship algorithms. Our approaches employ a pipeline-based architecture that parallelizes operations across subprocesses. In our experiments, the proposed algorithms ran faster and achieved higher accuracies than traditional approaches: their content-based accuracies were 98% for explicit content, 97% for alcohol, 98% for cigarettes and 97% for violence. Our research highlights the importance of developing effective and efficient solutions for censoring harmful content on digital media platforms. Our deep learning-based system represents a promising approach to this challenge and has the potential to enhance user safety and protect vulnerable groups from harmful and offensive content. Future research will continue to refine and improve such systems to better address the evolving landscape of digital media and the challenges posed by harmful content. [ABSTRACT FROM AUTHOR]
Copyright of Expert Systems is the property of Wiley-Blackwell and its content may not be copied or emailed to multiple sites without the copyright holder's express written permission. Additionally, content may not be used with any artificial intelligence tools or machine learning technologies. However, users may print, download, or email articles for individual use. This abstract may be abridged. No warranty is given about the accuracy of the copy. Users should refer to the original published version of the material for the full abstract. (Copyright applies to all Abstracts.)
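The abstract describes a pipeline-based architecture in which detection and censorship operations are parallelized across subprocesses. The paper's actual YOLO models and censorship logic are not reproduced here; the following is a minimal, hypothetical sketch of that pipeline pattern using only the Python standard library, with a stub detector standing in for a YOLO model and simple box blanking standing in for blurring or pixelation.

```python
# Sketch of a two-stage subprocess pipeline (detect -> censor) connected by
# queues, as the abstract describes at a high level. The detector is a stub
# returning a fixed hypothetical box; a real system would run a YOLO model.
from multiprocessing import Process, Queue

SENTINEL = None  # marks the end of the frame stream


def detect_stage(inbox: Queue, outbox: Queue) -> None:
    """Stub detector: flags one fixed region per frame (placeholder for YOLO)."""
    while (frame := inbox.get()) is not SENTINEL:
        boxes = [(1, 1, 3, 3)]  # (x0, y0, x1, y1) -- hypothetical detection
        outbox.put((frame, boxes))
    outbox.put(SENTINEL)


def censor_stage(inbox: Queue, outbox: Queue) -> None:
    """Blanks every detected box (stands in for blurring/pixelation)."""
    while (item := inbox.get()) is not SENTINEL:
        frame, boxes = item
        for x0, y0, x1, y1 in boxes:
            for y in range(y0, y1):
                for x in range(x0, x1):
                    frame[y][x] = 0
        outbox.put(frame)
    outbox.put(SENTINEL)


def run_pipeline(frames):
    """Feeds frames through both stages; each stage runs in its own subprocess."""
    q_in, q_mid, q_out = Queue(), Queue(), Queue()
    stages = [Process(target=detect_stage, args=(q_in, q_mid)),
              Process(target=censor_stage, args=(q_mid, q_out))]
    for p in stages:
        p.start()
    for frame in frames:
        q_in.put(frame)
    q_in.put(SENTINEL)
    censored = []
    while (frame := q_out.get()) is not SENTINEL:
        censored.append(frame)
    for p in stages:
        p.join()
    return censored


if __name__ == "__main__":
    # Three dummy 5x5 single-channel "frames" of white pixels.
    frames = [[[255] * 5 for _ in range(5)] for _ in range(3)]
    out = run_pipeline(frames)
    print(len(out), out[0][1][1])  # all frames processed; boxed pixels zeroed
```

Because the stages run concurrently, the detector can start on frame N+1 while frame N is still being censored, which is the source of the speedup the abstract attributes to the pipeline design.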
Database: Complementary Index
ISSN: 0266-4720
DOI: 10.1111/exsy.13436