DeepFake Detection for Human Face Images and Videos: A Survey
Techniques for creating and manipulating multimedia information have progressed to the point where they can now ensure a high degree of realism. DeepFake is a generative deep learning algorithm that creates or modifies face features in a super-realistic form, making it difficult to distinguish between real and fake features.
Saved in:
| Published in: | IEEE Access, Vol. 10, pp. 18757-18775 |
|---|---|
| Main authors: | Malik, Asad; Kuribayashi, Minoru; Abdullahi, Sani M.; Khan, Ahmad Neyaz |
| Format: | Journal Article |
| Language: | English |
| Published: | Piscataway: IEEE, 2022 |
| Subjects: | DeepFake; deep learning; CNNs; GANs; image manipulation; forensics |
| ISSN: | 2169-3536 |
| Online access: | Full text |
| Abstract | Techniques for creating and manipulating multimedia information have progressed to the point where they can now ensure a high degree of realism. DeepFake is a generative deep learning algorithm that creates or modifies face features in a super-realistic form, making it difficult to distinguish between real and fake features. The technology has advanced greatly and now supports a wide range of applications in television, the video game industry, and cinema, such as improving visual effects in movies, but it also enables a variety of criminal activities, such as generating misinformation by mimicking famous people. To identify and classify DeepFakes, research on DeepFake detection using deep neural networks (DNNs) has attracted increasing interest. Essentially, a DeepFake is regenerated media obtained by injecting or replacing information within a DNN model. In this survey, we summarize DeepFake detection methods for face images and videos on the basis of their results, performance, methodology, and detection type. We review the existing DeepFake creation techniques and sort them into five major categories. Because DeepFake detection models are generally trained and evaluated on DeepFake datasets, we also summarize trends in the available DeepFake datasets, focusing on their improvements. Additionally, we analyze how DeepFake detection research aims to produce a generalized detection model. Finally, we discuss the challenges related to DeepFake creation and detection. We hope that the knowledge encompassed in this survey will accelerate the use of deep learning in face image and video DeepFake detection methods. |
|---|---|
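The abstract frames DNN-based DeepFake detection as supervised real-versus-fake classification of face images learned from a labelled DeepFake dataset. The sketch below only illustrates that framing; it is not the surveyed paper's method, and the tiny CNN architecture, the 128x128 input size, and the label convention (1 = fake, 0 = real) are illustrative assumptions.

```python
# Minimal sketch of DeepFake detection as binary real/fake classification
# of face crops with a small CNN (PyTorch). Illustrative only.
import torch
import torch.nn as nn

class FaceForgeryDetector(nn.Module):
    """Tiny CNN that maps a 3x128x128 face crop to a single real/fake logit."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),   # 16x64x64
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),  # 32x32x32
            nn.Conv2d(32, 64, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),                                                   # 64x1x1
        )
        self.classifier = nn.Linear(64, 1)  # logit > 0 ~ fake, logit < 0 ~ real

    def forward(self, x):
        h = self.features(x).flatten(1)
        return self.classifier(h)

# One hypothetical training step on a batch of labelled face crops.
model = FaceForgeryDetector()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
loss_fn = nn.BCEWithLogitsLoss()

faces = torch.randn(8, 3, 128, 128)           # stand-in for preprocessed face crops
labels = torch.randint(0, 2, (8, 1)).float()  # stand-in for dataset labels (1 = fake)
optimizer.zero_grad()
loss = loss_fn(model(faces), labels)
loss.backward()
optimizer.step()
```

At inference time, a sigmoid over the logit gives the probability that a face crop is fake; for videos, a clip-level decision is typically obtained by aggregating per-frame scores.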
| Author | Malik, Asad; Kuribayashi, Minoru; Abdullahi, Sani M.; Khan, Ahmad Neyaz |
| Author details | 1. Asad Malik (ORCID 0000-0002-9976-3563, amalik_co@myamu.ac.in), Department of Computer Science, Aligarh Muslim University, Aligarh, India; 2. Minoru Kuribayashi (ORCID 0000-0003-4844-2652), Department of Electrical and Communication Engineering, Okayama University, Okayama, Japan; 3. Sani M. Abdullahi (ORCID 0000-0003-4962-2794), College of Computer and Information Technology, China Three Gorges University, Yichang, China; 4. Ahmad Neyaz Khan (ORCID 0000-0002-2783-4190), Department of Computer Application, Integral University, Lucknow, India |
| CODEN | IAECCG |
| Copyright | Copyright The Institute of Electrical and Electronics Engineers, Inc. (IEEE) 2022 |
| DOI | 10.1109/ACCESS.2022.3151186 |
| Discipline | Engineering |
| EISSN | 2169-3536 |
| EndPage | 18775 |
| GrantInformation | Japan Science and Technology Agency, Core Research for Evolutional Science and Technology (JST CREST), grant JPMJCR20D3; Japan Society for the Promotion of Science (JSPS) KAKENHI, grant 19K22846; Japan Science and Technology Agency, Strategic International Collaborative Research Program (JST SICORP), grant JPMJSC20C3 |
| ISSN | 2169-3536 |
| Language | English |
| License | https://creativecommons.org/licenses/by/4.0/legalcode |
| OpenAccessLink | https://doaj.org/article/48fdcc73dc524eafa630960d99dd5493 |
| PageCount | 19 |
| PublicationPlace | Piscataway |
| PublicationTitle | IEEE Access |
| PublicationTitleAbbrev | Access |
| PublicationYear | 2022 |
| Publisher | IEEE (The Institute of Electrical and Electronics Engineers, Inc.) |
| SecondaryResourceType | review_article |
| StartPage | 18757 |
| SubjectTerms | Algorithms; Artificial neural networks; CNNs; Computer & video games; Crime; Datasets; Deception; Deep learning; DeepFake; Electrical engineering. Electronics. Nuclear engineering; Faces; Forensics; GANs; Image manipulation; Information integrity; Kernel; Machine learning; Media; Motion pictures; Multimedia; TK1-9971; Videos; Visual effects |
| Volume | 10 |