Fake News Detection Using BERT-VGG19 Multimodal Variational Autoencoder

Detailed bibliography
Published in: IEEE Uttar Pradesh Section International Conference on Electrical, Computer and Electronics (Online), pp. 1-5
Main authors: Jaiswal, Ramji; Singh, Upendra Pratap; Singh, Krishna Pratap
Format: Conference paper
Language: English
Published: IEEE, 11 November 2021
ISSN: 2687-7767
Description
Summary: In this era of readily accessible Internet, there has been a monumental shift in the way information is created, processed and disseminated to netizens. Moreover, social media has played a vital role: users can not only interact with one another and share information but also influence the thinking of others through their content. One of the major drawbacks of these platforms remains the absence of credibility in the information being circulated, and this inherent vulnerability is exploited by many to spread fake news over these platforms. Such falsehood not only jeopardises the credibility of information and of the platform itself but is also a growing technological problem, because fake news spreads far more rapidly and can cause unrest, discontent and misery among the masses. We propose a BERT- and VGG19-based multimodal variational autoencoder for fake news detection. Our proposed model combines the information present in the text and image modalities to obtain better discriminatory power. The model takes both the text and the image of a news item, extracts textual and visual features, and processes both feature sets simultaneously through a variational autoencoder, which is why the proposed model is called a multimodal variational autoencoder. Specifically, BERT and VGG19 embeddings are obtained for the text and image modalities respectively, after which the two embeddings are concatenated and passed through a multimodal variational autoencoder to obtain a shared latent representation. This shared latent representation is then fed to a binary classifier that outputs the probability that the input is fake. Our proposed model gives state-of-the-art results on the MediaEval 2015 dataset (0.924 F-score) and remains competitive with state-of-the-art approaches on the Weibo dataset (0.656 F-score).
DOI: 10.1109/UPCON52273.2021.9667614
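
The summary describes a concrete pipeline: obtain a BERT embedding for the news text and a VGG19 embedding for the attached image, concatenate the two, pass the fused vector through a variational autoencoder, and feed the shared latent representation to a binary classifier. The PyTorch sketch below illustrates that pipeline under stated assumptions: the embedding sizes (768 for BERT, 4096 for a VGG19 fully connected layer), hidden width, latent dimension and all class and variable names are illustrative choices rather than the authors' reported configuration, and the BERT/VGG19 feature extraction is assumed to happen upstream.

import torch
import torch.nn as nn

class MultimodalVAE(nn.Module):
    # Minimal multimodal VAE: fuse precomputed text/image embeddings,
    # encode them to a shared latent code, decode for reconstruction,
    # and classify the latent code as real or fake.
    def __init__(self, text_dim=768, image_dim=4096, latent_dim=32):
        super().__init__()
        fused_dim = text_dim + image_dim
        self.encoder = nn.Sequential(nn.Linear(fused_dim, 512), nn.ReLU())
        self.fc_mu = nn.Linear(512, latent_dim)       # mean of q(z|x)
        self.fc_logvar = nn.Linear(512, latent_dim)   # log-variance of q(z|x)
        self.decoder = nn.Sequential(
            nn.Linear(latent_dim, 512), nn.ReLU(), nn.Linear(512, fused_dim)
        )
        self.classifier = nn.Sequential(nn.Linear(latent_dim, 1), nn.Sigmoid())

    def reparameterize(self, mu, logvar):
        # Reparameterization trick: z = mu + sigma * epsilon
        std = torch.exp(0.5 * logvar)
        return mu + std * torch.randn_like(std)

    def forward(self, text_emb, image_emb):
        fused = torch.cat([text_emb, image_emb], dim=-1)
        h = self.encoder(fused)
        mu, logvar = self.fc_mu(h), self.fc_logvar(h)
        z = self.reparameterize(mu, logvar)   # shared latent representation
        recon = self.decoder(z)               # reconstruction of the fused input
        p_fake = self.classifier(z)           # probability that the input is fake
        return recon, p_fake, mu, logvar

# Example with random stand-ins for a batch of 8 BERT and VGG19 embeddings.
model = MultimodalVAE()
recon, p_fake, mu, logvar = model(torch.randn(8, 768), torch.randn(8, 4096))

A natural training objective, not spelled out in the summary, would combine the VAE reconstruction and KL-divergence terms with a binary cross-entropy loss on p_fake; the record above reports F-scores of 0.924 on MediaEval 2015 and 0.656 on Weibo for the authors' full model.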