Pros and cons of GAN evaluation measures: New developments
| Published in: | Computer Vision and Image Understanding, Vol. 215, Article 103329 |
|---|---|
| Main Author: | Borji, Ali |
| Format: | Journal Article |
| Language: | English |
| Published: | Elsevier Inc., 01.01.2022 |
| ISSN: | 1077-3142, 1090-235X |
| Online Access: | Get full text |
| Abstract | This work is an update of my previous paper on the same topic published a few years ago (Borji, 2019). With the dramatic progress in generative modeling, a suite of new quantitative and qualitative techniques to evaluate models has emerged. Although some measures such as Inception Score, Fréchet Inception Distance, Precision–Recall, and Perceptual Path Length are relatively more popular, GAN evaluation is not a settled issue and there is still room for improvement. Here, I describe new dimensions that are becoming important in assessing models (e.g. bias and fairness) and discuss the connection between GAN evaluation and deepfakes. These are important areas of concern in the machine learning community today and progress in GAN evaluation can help mitigate them.
• A critical review of new techniques for evaluating generative models.
• A discussion of bias and fairness in the context of GANs and ways to mitigate them.
• A discussion of how realistic deepfakes are and approaches to detect them. |
|---|---|
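The abstract singles out Fréchet Inception Distance (FID) as one of the most popular measures. At its core, FID is the Fréchet distance between two Gaussians fitted to feature sets of real and generated images: FID = ||mu_a − mu_b||² + Tr(C_a + C_b − 2(C_a C_b)^½). A minimal sketch of that computation follows; in practice the features are Inception-network activations, while here random arrays stand in, and `frechet_distance` is an illustrative helper, not code from the paper.

```python
import numpy as np
from scipy import linalg

def frechet_distance(feats_a, feats_b):
    """Fréchet distance between Gaussians fitted to two feature sets.

    d^2 = ||mu_a - mu_b||^2 + Tr(C_a + C_b - 2 (C_a C_b)^{1/2})
    """
    mu_a, mu_b = feats_a.mean(axis=0), feats_b.mean(axis=0)
    cov_a = np.cov(feats_a, rowvar=False)
    cov_b = np.cov(feats_b, rowvar=False)
    # Matrix square root of the covariance product; drop the tiny
    # imaginary parts that numerical error can introduce.
    covmean = linalg.sqrtm(cov_a @ cov_b)
    if np.iscomplexobj(covmean):
        covmean = covmean.real
    diff = mu_a - mu_b
    return float(diff @ diff + np.trace(cov_a + cov_b - 2.0 * covmean))

rng = np.random.default_rng(0)
real = rng.normal(0.0, 1.0, size=(2048, 8))  # stand-in "real" features
fake = rng.normal(0.5, 1.0, size=(2048, 8))  # stand-in "generated" features

print(frechet_distance(real, real[:1024]))   # near zero: same distribution
print(frechet_distance(real, fake))          # grows with the mean shift
```

The score depends on sample size and on how the features are extracted; the paper discusses exactly such subtleties (e.g. resizing libraries and biased estimators) as open problems in FID-based evaluation.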
| ArticleNumber | 103329 |
| Author | Borji, Ali |
| Author details | Ali Borji (ORCID: 0000-0001-8198-0335), Quintic AI, San Francisco, CA, USA, aliborji@gmail.com |
| ContentType | Journal Article |
| Copyright | 2021 Elsevier Inc. |
| DOI | 10.1016/j.cviu.2021.103329 |
| DatabaseName | CrossRef |
| Discipline | Applied Sciences Engineering Computer Science |
| EISSN | 1090-235X |
| ExternalDocumentID | 10_1016_j_cviu_2021_103329 S1077314221001685 |
| ISICitedReferencesCount | 189 |
| ISICitedReferencesURI | http://www.webofscience.com/api/gateway?GWVersion=2&SrcApp=Summon&SrcAuth=ProQuest&DestLinkType=CitingArticles&DestApp=WOS_CPL&KeyUT=000736276200004&url=https%3A%2F%2Fcvtisr.summon.serialssolutions.com%2F%23%21%2Fsearch%3Fho%3Df%26include.ft.matches%3Dt%26l%3Dnull%26q%3D |
| ISSN | 1077-3142 |
| IsPeerReviewed | true |
| IsScholarly | true |
| Keywords | Generative modeling; Deepfakes; GAN evaluation; 41A10; 65D05; 65D17; 41A05 |
| Language | English |
| ORCID | 0000-0001-8198-0335 |
| PublicationDate | January 2022 |
| PublicationTitle | Computer Vision and Image Understanding |
| PublicationYear | 2022 |
| Publisher | Elsevier Inc |
| References | Roblek, Kilgour, Sharifi, Zuluaga (b65) 2019 Verma, Rubin (b82) 2018 Djolonga, Lucic, Cuturi, Bachem, Bousquet, Gelly (b22) 2020 Tulyakov, S., Liu, M.-Y., Yang, X., Kautz, J., 2018. Mocogan: Decomposing motion and content for video generation. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. pp. 1526–1535. Yu, Seff, Zhang, Song, Funkhouser, Xiao (b91) 2015 Theis, Oord, Bethge (b77) 2015 Ramesh, Pavlov, Goh, Gray, Voss, Radford, Chen, Sutskever (b62) 2021 Park, T., Liu, M.-Y., Wang, T.-C., Zhu, J.-Y., 2019. Semantic image synthesis with spatially-adaptive normalization. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. pp. 2337–2346. Sajjadi, Bachem, Lucic, Bousquet, Gelly (b66) 2018 Wang, She, Ward (b84) 2019 McDuff, Ma, Song, Kapoor (b49) 2019 Simonyan, Zisserman (b73) 2014 Buolamwini, Gebru (b12) 2018 Zhang, R., Isola, P., Efros, A.A., Shechtman, E., Wang, O., 2018. The unreasonable effectiveness of deep features as a perceptual metric. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. pp. 586–595. Iqbal, Qureshi (b33) 2020 Chai, Bau, Lim, Isola (b16) 2020 Parmar, Zhang, Zhu (b60) 2021 Tevet, Habib, Shwartz, Berant (b76) 2018 Sattigeri, Hoffman, Chenthamarakshan, Varshney (b68) 2019; 63 Liu, Huang, Yu, Wang, Mallya (b43) 2021 Jahanian, Chai, Isola (b34) 2019 Morozov, Voynov, Babenko (b51) 2020 Preuer, Renz, Unterthiner, Hochreiter, Klambauer (b61) 2018; 58 Frank, Eisenhofer, Schönherr, Fischer, Kolossa, Holz (b25) 2020 Zeng, Lu, Borji (b92) 2017 Brock, Donahue, Simonyan (b11) 2018 Chong, M.J., Forsyth, D., 2020. Effectively unbiased FID and inception score and where to find them. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. pp. 6070–6079. Xu, Yuan, Zhang, Wu (b86) 2018 Razavi, Oord, Vinyals (b64) 2019 Wang, S.-Y., Wang, O., Zhang, R., Owens, A., Efros, A.A., 2020a. 
CNN-generated images are surprisingly easy to spot... for now. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. Vol. 7. Carreira, J., Zisserman, A., 2017. Quo vadis, action recognition? a new model and the kinetics dataset. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. pp. 6299–6308. Karras, T., Laine, S., Aila, T., 2019. A style-based generator architecture for generative adversarial networks. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. pp. 4401–4410. Lucic, Kurach, Michalski, Gelly, Bousquet (b46) 2017 Narvekar, Karam (b53) 2009 Gragnaniello, Cozzolino, Marra, Poggi, Verdoliva (b28) 2021 Bau, Zhu, Strobelt, Zhou, Tenenbaum, Freeman, Torralba (b6) 2018 Wang, Healy, Smeaton, Ward (b83) 2020; 12 Mathiasen, Hvilshøj (b48) 2020 Kynkäänniemi, Karras, Laine, Lehtinen, Aila (b41) 2019 Shmelkov, Schmid, Alahari (b69) 2018 Nash, Menick, Dieleman, Battaglia (b54) 2021 Dzanic, Shah, Witherden (b24) 2019 Alaa, van Breugel, Saveliev, van der Schaar (b1) 2021 Liu, Wei, Lu, Zhou (b45) 2018 Bond-Taylor, Leach, Long, Willcocks (b9) 2021 Kingma, Welling (b39) 2013 Tsitsulin, Munkhoeva, Mottin, Karras, Bronstein, Oseledets, Müller (b79) 2019 Luzi, Marrero, Wynar, Baraniuk, Henry (b47) 2021 Kolchinski, Zhou, Zhao, Gordon, Ermon (b40) 2019 Yang, C., Wang, Z., Zhu, X., Huang, C., Shi, J., Lin, D., 2018. Pose guided human video generation. In: Proceedings of the European Conference on Computer Vision (ECCV). pp. 201–216. Jiang, Chang, Wang (b35) 2021 Barannikov, Trofimov, Sotnikov, Trimbach, Korotin, Filippov, Burnaev (b3) 2021 Goodfellow, Pouget-Abadie, Mirza, Xu, Warde-Farley, Ozair, Courville, Bengio (b27) 2014 Yu, N., Davis, L.S., Fritz, M., 2019. Attributing fake images to gans: Learning and analyzing gan fingerprints. In: Proceedings of the IEEE/CVF International Conference on Computer Vision. pp. 7556–7566. 
Xuan, Yang, Yang, He, Wang (b87) 2019 Durall, R., Keuper, M., Keuper, J., 2020. Watch your up-convolution: Cnn based generative deep neural networks are failing to reproduce spectral distributions. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. pp. 7890–7899. Ravuri, Vinyals (b63) 2019 Zhou, Gordon, Krishna, Narcomey, Fei-Fei, Bernstein (b95) 2019 Naeem, Oh, Uh, Choi, Yoo (b52) 2020 De, Masilamani (b19) 2013; 64 Barratt, Sharma (b4) 2018 Bau, D., Zhu, J.-Y., Wulff, J., Peebles, W., Strobelt, H., Zhou, B., Torralba, A., 2019. Seeing what a gan cannot generate. In: Proceedings of the IEEE International Conference on Computer Vision. pp. 4502–4511. van Steenkiste, Kurach, Schmidhuber, Gelly (b75) 2020; 130 Yu, Li, Zhou, Malik, Davis, Fritz (b90) 2020 Zhao, Ren, Yuan, Song, Goodman, Ermon (b94) 2018 Bińkowski, Sutherland, Arbel, Gretton (b8) 2018 Galteri, Seidenari, Bongini, Bertini, Del Bimbo (b26) 2021 Odena (b56) 2019; 4 Barua, Ma, Erfani, Houle, Bailey (b5) 2019 Oprea, Martinez-Gonzalez, Garcia-Garcia, Castro-Vargas, Orts-Escolano, Garcia-Rodriguez, Argyros (b58) 2020 Simon, Webster, Rabin (b72) 2019 Khrulkov, Oseledets (b38) 2018 Gulrajani, Raffel, Metz (b30) 2020 Grnarova, Levy, Lucchi, Perraudin, Goodfellow, Hofmann, Krause (b29) 2019 Karras, T., Laine, S., Aittala, M., Hellsten, J., Lehtinen, J., Aila, T., 2020. Analyzing and improving the image quality of stylegan. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. pp. 8110–8119. Chen, L., Li, Z., Maddox, R.K., Duan, Z., Xu, C., 2018. Lip movements generation at a glance. In: Proceedings of the European Conference on Computer Vision (ECCV). pp. 520–535. 
Heusel, Ramsauer, Unterthiner, Nessler, Hochreiter (b31) 2017 Ding, Yang, Hong, Zheng, Zhou, Yin, Lin, Zou, Shao, Yang (b21) 2021 Meehan, Chaudhuri, Dasgupta (b50) 2020 Unterthiner, van Steenkiste, Kurach, Marinier, Michalski, Gelly (b81) 2019 van den Burg, Williams (b13) 2021 O’Brien, Groh, Dubey (b55) 2018 Tolosana, Vera-Rodriguez, Fierrez, Morales, Ortega-Garcia (b78) 2020; 64 Bai, Lin, Raffel, Kan (b2) 2021 Casanova, Drozdzal, Romero-Soriano (b15) 2020 Denton, Chintala, Szlam, Fergus (b20) 2015 Sidheekh, Aimen, Krishnan (b71) 2021 Lee, Town (b42) 2020 Oord, Kalchbrenner, Vinyals, Espeholt, Graves, Kavukcuoglu (b57) 2016 Shoemake, K., 1985. Animating rotation with quaternion curves. In: Proceedings of the 12th Annual Conference on Computer Graphics and Interactive Techniques. pp. 245–254. Borji (b10) 2019; 179 Hudson, Zitnick (b32) 2021 Liu, Z., Luo, P., Wang, X., Tang, X., 2015. Deep learning face attributes in the wild. In: Proceedings of the IEEE International Conference on Computer Vision. pp. 3730–3738. 
Salimans, Goodfellow, Zaremba, Cheung, Radford, Chen (b67) 2016 Soloveitchik, Diskin, Morin, Wiesel (b74) 2021 |
– year: 2015 ident: b77 article-title: A note on the evaluation of generative models – year: 2019 ident: b49 article-title: Characterizing bias in classifiers using generative models – reference: Karras, T., Laine, S., Aila, T., 2019. A style-based generator architecture for generative adversarial networks. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. pp. 4401–4410. – year: 2020 ident: b42 article-title: Mimicry: Towards the reproducibility of GAN research – volume: 12 start-page: 13 year: 2020 end-page: 24 ident: b83 article-title: Use of neural signals to evaluate the quality of generative adversarial network performance in facial image generation publication-title: Cogn. Comput. – reference: Liu, Z., Luo, P., Wang, X., Tang, X., 2015. Deep learning face attributes in the wild. In: Proceedings of the IEEE International Conference on Computer Vision. pp. 3730–3738. – year: 2021 ident: b9 article-title: Deep generative modelling: A comparative review of VAEs, GANs, normalizing flows, energy-based and autoregressive models – start-page: 10792 year: 2018 end-page: 10801 ident: b94 article-title: Bias and generalization in deep generative models: An empirical study publication-title: Advances in Neural Information Processing Systems – year: 2021 ident: 10.1016/j.cviu.2021.103329_b28 – volume: 130 start-page: 309 year: 2020 ident: 10.1016/j.cviu.2021.103329_b75 article-title: Investigating object compositionality in generative adversarial networks publication-title: Neural Netw. 
doi: 10.1016/j.neunet.2020.07.007 – year: 2018 ident: 10.1016/j.cviu.2021.103329_b4 – year: 2019 ident: 10.1016/j.cviu.2021.103329_b81 – year: 2018 ident: 10.1016/j.cviu.2021.103329_b11 – year: 2021 ident: 10.1016/j.cviu.2021.103329_b32 – ident: 10.1016/j.cviu.2021.103329_b93 doi: 10.1109/CVPR.2018.00068 – volume: 58 start-page: 1736 issue: 9 year: 2018 ident: 10.1016/j.cviu.2021.103329_b61 article-title: Fréchet ChemNet distance: a metric for generative models for molecules in drug discovery publication-title: J. Chem. Inf. Model. doi: 10.1021/acs.jcim.8b00234 – ident: 10.1016/j.cviu.2021.103329_b88 doi: 10.1007/978-3-030-01249-6_13 – start-page: 87 year: 2009 ident: 10.1016/j.cviu.2021.103329_b53 article-title: A no-reference perceptual image sharpness metric based on a cumulative probability of blur detection – year: 2013 ident: 10.1016/j.cviu.2021.103329_b39 – volume: 64 start-page: 131 year: 2020 ident: 10.1016/j.cviu.2021.103329_b78 article-title: Deepfakes and beyond: A survey of face manipulation and fake detection publication-title: Inf. 
Fusion doi: 10.1016/j.inffus.2020.06.014 – year: 2015 ident: 10.1016/j.cviu.2021.103329_b91 – year: 2021 ident: 10.1016/j.cviu.2021.103329_b60 – year: 2017 ident: 10.1016/j.cviu.2021.103329_b46 – year: 2015 ident: 10.1016/j.cviu.2021.103329_b77 – year: 2019 ident: 10.1016/j.cviu.2021.103329_b72 – ident: 10.1016/j.cviu.2021.103329_b17 doi: 10.1007/978-3-030-01234-2_32 – start-page: 12092 year: 2019 ident: 10.1016/j.cviu.2021.103329_b29 article-title: A domain agnostic measure for monitoring and evaluating GANs – year: 2016 ident: 10.1016/j.cviu.2021.103329_b57 – year: 2020 ident: 10.1016/j.cviu.2021.103329_b15 – ident: 10.1016/j.cviu.2021.103329_b37 doi: 10.1109/CVPR42600.2020.00813 – start-page: 5228 year: 2018 ident: 10.1016/j.cviu.2021.103329_b66 article-title: Assessing generative models via precision and recall – year: 2019 ident: 10.1016/j.cviu.2021.103329_b41 – volume: 12 start-page: 13 issue: 1 year: 2020 ident: 10.1016/j.cviu.2021.103329_b83 article-title: Use of neural signals to evaluate the quality of generative adversarial network performance in facial image generation publication-title: Cogn. Comput. doi: 10.1007/s12559-019-09670-y – year: 2020 ident: 10.1016/j.cviu.2021.103329_b51 article-title: On self-supervised image representations for GAN evaluation – start-page: 1 year: 2018 ident: 10.1016/j.cviu.2021.103329_b82 article-title: Fairness definitions explained – year: 2019 ident: 10.1016/j.cviu.2021.103329_b87 – year: 2019 ident: 10.1016/j.cviu.2021.103329_b40 – year: 2021 ident: 10.1016/j.cviu.2021.103329_b26 – volume: 64 start-page: 149 year: 2013 ident: 10.1016/j.cviu.2021.103329_b19 article-title: Image sharpness measure for blurred images in frequency domain publication-title: Procedia Eng. 
doi: 10.1016/j.proeng.2013.09.086 – year: 2019 ident: 10.1016/j.cviu.2021.103329_b24 – year: 2021 ident: 10.1016/j.cviu.2021.103329_b47 – year: 2020 ident: 10.1016/j.cviu.2021.103329_b58 article-title: A review on deep learning techniques for video prediction publication-title: IEEE Trans. Pattern Anal. Mach. Intell. – year: 2021 ident: 10.1016/j.cviu.2021.103329_b74 – start-page: 10792 year: 2018 ident: 10.1016/j.cviu.2021.103329_b94 article-title: Bias and generalization in deep generative models: An empirical study – year: 2019 ident: 10.1016/j.cviu.2021.103329_b65 – volume: 63 start-page: 3:1 issue: 4/5 year: 2019 ident: 10.1016/j.cviu.2021.103329_b68 article-title: Fairness GAN: Generating datasets with fairness properties using a generative adversarial network publication-title: IBM J. Res. Dev. doi: 10.1147/JRD.2019.2945519 – year: 2017 ident: 10.1016/j.cviu.2021.103329_b92 – year: 2021 ident: 10.1016/j.cviu.2021.103329_b9 – year: 2020 ident: 10.1016/j.cviu.2021.103329_b50 – year: 2014 ident: 10.1016/j.cviu.2021.103329_b73 – start-page: 213 year: 2018 ident: 10.1016/j.cviu.2021.103329_b69 article-title: How good is my gan? 
– year: 2019 ident: 10.1016/j.cviu.2021.103329_b49 – year: 2016 ident: 10.1016/j.cviu.2021.103329_b67 – year: 2019 ident: 10.1016/j.cviu.2021.103329_b34 – year: 2018 ident: 10.1016/j.cviu.2021.103329_b8 – volume: 4 issue: 4 year: 2019 ident: 10.1016/j.cviu.2021.103329_b56 article-title: Open questions about generative adversarial networks publication-title: Distill doi: 10.23915/distill.00018 – year: 2020 ident: 10.1016/j.cviu.2021.103329_b52 – year: 2019 ident: 10.1016/j.cviu.2021.103329_b84 – year: 2021 ident: 10.1016/j.cviu.2021.103329_b13 – year: 2020 ident: 10.1016/j.cviu.2021.103329_b30 – ident: 10.1016/j.cviu.2021.103329_b36 doi: 10.1109/CVPR.2019.00453 – year: 2018 ident: 10.1016/j.cviu.2021.103329_b55 – year: 2021 ident: 10.1016/j.cviu.2021.103329_b71 – year: 2017 ident: 10.1016/j.cviu.2021.103329_b31 – year: 2021 ident: 10.1016/j.cviu.2021.103329_b62 – ident: 10.1016/j.cviu.2021.103329_b44 doi: 10.1109/ICCV.2015.425 – volume: 179 start-page: 41 year: 2019 ident: 10.1016/j.cviu.2021.103329_b10 article-title: Pros and cons of gan evaluation measures publication-title: Comput. Vis. Image Underst. doi: 10.1016/j.cviu.2018.10.009 – start-page: 3449 year: 2019 ident: 10.1016/j.cviu.2021.103329_b95 article-title: Hype: A benchmark for human eye perceptual evaluation of generative models – start-page: 103 year: 2020 ident: 10.1016/j.cviu.2021.103329_b16 article-title: What makes fake images detectable? 
Understanding properties that generalize – year: 2015 ident: 10.1016/j.cviu.2021.103329_b20 – year: 2021 ident: 10.1016/j.cviu.2021.103329_b21 – ident: 10.1016/j.cviu.2021.103329_b89 doi: 10.1109/ICCV.2019.00765 – start-page: 77 year: 2018 ident: 10.1016/j.cviu.2021.103329_b12 article-title: Gender shades: Intersectional accuracy disparities in commercial gender classification – year: 2018 ident: 10.1016/j.cviu.2021.103329_b6 – year: 2021 ident: 10.1016/j.cviu.2021.103329_b54 – year: 2020 ident: 10.1016/j.cviu.2021.103329_b42 – start-page: 2621 year: 2018 ident: 10.1016/j.cviu.2021.103329_b38 article-title: Geometry score: A method for comparing generative adversarial networks – start-page: 2550 year: 2020 ident: 10.1016/j.cviu.2021.103329_b22 article-title: Precision-recall curves using information divergence frontiers – ident: 10.1016/j.cviu.2021.103329_b59 doi: 10.1109/CVPR.2019.00244 – year: 2019 ident: 10.1016/j.cviu.2021.103329_b64 – start-page: 377 year: 2020 ident: 10.1016/j.cviu.2021.103329_b90 article-title: Inclusive gan: Improving data and minority coverage in generative models – year: 2020 ident: 10.1016/j.cviu.2021.103329_b33 article-title: The survey: Text generation models in deep learning publication-title: J. King Saud Univ.-Comput. Inf. Sci. – ident: 10.1016/j.cviu.2021.103329_b80 doi: 10.1109/CVPR.2018.00165 – ident: 10.1016/j.cviu.2021.103329_b85 doi: 10.1109/CVPR42600.2020.00872 – year: 2019 ident: 10.1016/j.cviu.2021.103329_b79 – year: 2019 ident: 10.1016/j.cviu.2021.103329_b5 – year: 2021 ident: 10.1016/j.cviu.2021.103329_b35 – year: 2021 ident: 10.1016/j.cviu.2021.103329_b43 article-title: Generative adversarial networks for image and video synthesis: Algorithms and applications publication-title: Proc. 
IEEE doi: 10.1109/JPROC.2021.3049196 – year: 2020 ident: 10.1016/j.cviu.2021.103329_b48 – year: 2021 ident: 10.1016/j.cviu.2021.103329_b1 – ident: 10.1016/j.cviu.2021.103329_b70 doi: 10.1145/325334.325242 – start-page: 3247 year: 2020 ident: 10.1016/j.cviu.2021.103329_b25 article-title: Leveraging frequency analysis for deep fake image recognition – start-page: 570 year: 2018 ident: 10.1016/j.cviu.2021.103329_b86 article-title: Fairgan: Fairness-aware generative adversarial networks – year: 2018 ident: 10.1016/j.cviu.2021.103329_b76 – ident: 10.1016/j.cviu.2021.103329_b7 doi: 10.1109/ICCV.2019.00460 – ident: 10.1016/j.cviu.2021.103329_b14 doi: 10.1109/CVPR.2017.502 – year: 2014 ident: 10.1016/j.cviu.2021.103329_b27 – year: 2018 ident: 10.1016/j.cviu.2021.103329_b45 – start-page: 12268 year: 2019 ident: 10.1016/j.cviu.2021.103329_b63 article-title: Classification accuracy score for conditional generative models – year: 2021 ident: 10.1016/j.cviu.2021.103329_b2 – year: 2021 ident: 10.1016/j.cviu.2021.103329_b3 – ident: 10.1016/j.cviu.2021.103329_b18 doi: 10.1109/CVPR42600.2020.00611 – ident: 10.1016/j.cviu.2021.103329_b23 doi: 10.1109/CVPR42600.2020.00791 |
| StartPage | 103329 |
| SubjectTerms | Deepfakes; GAN evaluation; Generative modeling |
| Title | Pros and cons of GAN evaluation measures: New developments |
| URI | https://dx.doi.org/10.1016/j.cviu.2021.103329 |
| Volume | 215 |