Deep Fake Video Detection Using Transfer Learning Approach

Detailed Bibliography
Published in: Arabian Journal for Science and Engineering, Volume 48, Issue 8, pp. 9727–9737
Main authors: Suratkar, Shraddha; Kazi, Faruk
Medium: Journal Article
Language: English
Publication details: Berlin/Heidelberg: Springer Berlin Heidelberg, 01.08.2023 (Springer Nature B.V.)
ISSN: 2193-567X, 1319-8025, 2191-4281
Description
Abstract: The use of the internet as a fast medium for spreading fake news reinforces the need for computational tools to fight it. Fake videos, also called deep fakes, pose a serious threat to society across a range of social and political settings and can also be used with malicious intent. Because deep fake generation algorithms are available at low computational cost on cloud platforms, realistic fake videos and images are easy to create. At the same time, detecting fake content is increasingly difficult, since various techniques are applied to mask the tampering. This work therefore proposes a novel framework to detect fake videos using transfer learning in autoencoders together with a hybrid model of convolutional neural networks (CNN) and recurrent neural networks (RNN). Unseen test data are used to assess the generalizability of the model, and the effect of residual image input on model accuracy is analyzed. Results are reported both with and without transfer learning to validate the effectiveness of transfer learning.
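The record does not include the authors' implementation. As a rough, hypothetical sketch of the CNN-RNN hybrid the abstract describes, the example below assumes a PyTorch/torchvision stack, an ImageNet-pretrained ResNet-18 standing in for the transferred CNN backbone, and an LSTM head over per-frame features; every name and architectural choice here is an illustrative assumption, not the paper's actual model.

```python
# Hypothetical sketch (not the authors' code): a pretrained CNN backbone,
# reused via transfer learning, extracts per-frame features that an RNN
# (here an LSTM) aggregates into a real/fake prediction for a whole clip.
import torch
import torch.nn as nn
from torchvision import models

class CnnRnnDeepfakeDetector(nn.Module):
    def __init__(self, hidden_size=256, num_layers=1):
        super().__init__()
        # Transfer learning: start from an ImageNet-pretrained ResNet-18
        # (torchvision >= 0.13 weights API assumed) and freeze it so only
        # the recurrent head is trained.
        backbone = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
        self.feature_dim = backbone.fc.in_features          # 512 for ResNet-18
        backbone.fc = nn.Identity()                         # keep only the feature extractor
        for p in backbone.parameters():
            p.requires_grad = False
        self.backbone = backbone
        self.rnn = nn.LSTM(self.feature_dim, hidden_size,
                           num_layers=num_layers, batch_first=True)
        self.classifier = nn.Linear(hidden_size, 1)         # real-vs-fake logit

    def forward(self, frames):
        # frames: (batch, time, channels, height, width)
        b, t, c, h, w = frames.shape
        feats = self.backbone(frames.reshape(b * t, c, h, w))   # per-frame CNN features
        feats = feats.reshape(b, t, self.feature_dim)
        _, (h_n, _) = self.rnn(feats)                        # last hidden state summarizes the clip
        return self.classifier(h_n[-1]).squeeze(-1)

# Example: score a batch of two 16-frame clips of 224x224 RGB frames.
model = CnnRnnDeepfakeDetector().eval()
with torch.no_grad():
    logits = model(torch.randn(2, 16, 3, 224, 224))
print(logits.shape)  # torch.Size([2])
```

Freezing the backbone and training only the recurrent head is one common way to apply transfer learning when labeled deep-fake data are scarce; the autoencoder-based transfer mentioned in the abstract is a separate component not sketched here.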
DOI: 10.1007/s13369-022-07321-3