Towards Unsupervised Deep Image Enhancement With Generative Adversarial Network
| Published in: | IEEE Transactions on Image Processing, Volume 29, pp. 9140–9151 |
|---|---|
| Main Authors: | Ni, Zhangkai; Yang, Wenhan; Wang, Shiqi; Ma, Lin; Kwong, Sam |
| Format: | Journal Article |
| Language: | English |
| Published: | United States: IEEE, 01.01.2020 |
| Subjects: | Algorithms; Generative adversarial networks; Image enhancement; Image quality; Machine learning; Regularization; Unsupervised learning |
| ISSN: | 1057-7149 (print); 1941-0042 (electronic) |
| Online Access: | Get full text |
| Abstract | Improving the aesthetic quality of images is challenging and in high demand among the public. To address this problem, most existing algorithms use supervised learning to train an automatic photo enhancer on paired data consisting of low-quality photos and their expert-retouched counterparts. However, the style and characteristics of photos retouched by experts may not match the needs or preferences of general users. In this paper, we present an unsupervised image enhancement generative adversarial network (UEGAN), which learns the image-to-image mapping from a set of images with the desired characteristics in an unsupervised manner, rather than from a large number of paired images. The proposed model is based on a single deep GAN that embeds modulation and attention mechanisms to capture richer global and local features. On top of this model, we introduce two losses for unsupervised image enhancement: (1) a fidelity loss, defined as an $\ell_2$ regularization in the feature domain of a pre-trained VGG network, which ensures that the content of the enhanced image matches that of the input image, and (2) a quality loss, formulated as a relativistic hinge adversarial loss, which endows the input image with the desired characteristics. Both quantitative and qualitative results show that the proposed model effectively improves the aesthetic quality of images. Our code is available at https://github.com/eezkni/UEGAN. |
|---|---|
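The abstract states the two training objectives only in words. Below is a hedged reconstruction in our own notation, consistent with the $\ell_2$-in-feature-space and relativistic-hinge descriptions above but not copied from the paper: $\Phi_j$ denotes a feature map of a pre-trained VGG network (the layer index $j$ is an assumption), $G$ the enhancer, $D$ the discriminator, $x$ an input image, and $x_r$ an image from the set with the desired characteristics.

```latex
% Hedged reconstruction of the losses described in the abstract; the exact
% VGG layer j and any weighting terms are assumptions, not from the paper.

% Fidelity loss: \ell_2 regularization in the VGG feature domain.
\mathcal{L}_{\mathrm{fid}} = \big\lVert \Phi_j\big(G(x)\big) - \Phi_j(x) \big\rVert_2^2

% Quality loss: relativistic (average) hinge adversarial loss for the
% generator, with x_f = G(x) the enhanced image.
\mathcal{L}_{\mathrm{qual}} =
    \mathbb{E}_{x_r}\!\left[\max\big(0,\; 1 + D(x_r) - \mathbb{E}_{x_f}[D(x_f)]\big)\right]
  + \mathbb{E}_{x_f}\!\left[\max\big(0,\; 1 - D(x_f) + \mathbb{E}_{x_r}[D(x_r)]\big)\right]
```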
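For concreteness, here is a minimal PyTorch sketch of both losses, under stated assumptions: torchvision's VGG-19 with ImageNet weights truncated at an arbitrary mid-level layer, inputs normalized to ImageNet statistics, and the relativistic average hinge form of the generator loss. This is an illustration only; the authors' official implementation lives at https://github.com/eezkni/UEGAN.

```python
# Hedged sketch, not the authors' code: minimal PyTorch versions of the two
# losses the abstract describes. Layer choice and loss weighting are
# assumptions; consult https://github.com/eezkni/UEGAN for the real thing.
import torch
import torch.nn.functional as F
from torchvision.models import vgg19, VGG19_Weights

# Frozen VGG-19 feature extractor, truncated at an assumed mid-level layer.
# Inputs are expected to be normalized with ImageNet mean/std.
_vgg = vgg19(weights=VGG19_Weights.IMAGENET1K_V1).features[:21].eval()
for p in _vgg.parameters():
    p.requires_grad_(False)

def fidelity_loss(enhanced: torch.Tensor, source: torch.Tensor) -> torch.Tensor:
    """l2 distance between VGG features of the enhanced and input images."""
    return F.mse_loss(_vgg(enhanced), _vgg(source))

def quality_loss_g(d_real: torch.Tensor, d_fake: torch.Tensor) -> torch.Tensor:
    """Relativistic average hinge loss for the generator.

    d_real: discriminator logits on images with the desired characteristics;
    d_fake: discriminator logits on enhanced images G(x).
    """
    rel_real = d_real - d_fake.mean()  # how much more "real" real images look
    rel_fake = d_fake - d_real.mean()  # how much more "real" the fakes look
    return F.relu(1.0 + rel_real).mean() + F.relu(1.0 - rel_fake).mean()
```

The discriminator would use the mirrored hinge terms (signs swapped), and the total generator objective would combine the two losses with a trade-off weight that this record does not specify.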
| Author | Ni, Zhangkai; Yang, Wenhan; Wang, Shiqi; Ma, Lin; Kwong, Sam |
| Author_xml | 1. Ni, Zhangkai (eezkni@gmail.com; ORCID 0000-0003-3682-6288), Department of Computer Science, City University of Hong Kong, Hong Kong
2. Yang, Wenhan (yangwenhan@pku.edu.cn; ORCID 0000-0002-1692-0069), Department of Computer Science, City University of Hong Kong, Hong Kong
3. Wang, Shiqi (shiqwang@cityu.edu.hk; ORCID 0000-0002-3583-959X), Department of Computer Science, City University of Hong Kong, Hong Kong
4. Ma, Lin (forest.linma@gmail.com; ORCID 0000-0002-7331-6132), Meituan-Dianping Group, Beijing, China
5. Kwong, Sam (cssamk@cityu.edu.hk; ORCID 0000-0001-7484-7261), Department of Computer Science, City University of Hong Kong, Hong Kong |
| BackLink | https://www.ncbi.nlm.nih.gov/pubmed/32960763 (view this record in MEDLINE/PubMed) |
| CODEN | IIPRE4 |
| ContentType | Journal Article |
| Copyright | Copyright The Institute of Electrical and Electronics Engineers, Inc. (IEEE) 2020 |
| DOI | 10.1109/TIP.2020.3023615 |
| Discipline | Applied Sciences; Engineering |
| EISSN | 1941-0042 |
| EndPage | 9151 |
| ExternalDocumentID | 32960763 10_1109_TIP_2020_3023615 9204448 |
| Genre | orig-research Journal Article |
| GrantInformation_xml | 1. Key Project of Science and Technology Innovation 2030
2. Natural Science Foundation of China (grants 61772344, 61672443)
3. Ministry of Science and Technology of China (grant 2018AAA0101301)
4. Hong Kong Research Grants Council (RGC) Early Career Scheme (grant 9048122, CityU 21211018)
5. Hong Kong Research Grants Council (RGC) General Research Funds (grants 9042816, CityU 11209819; 9042957, CityU 11203220) |
| ISICitedReferencesCount | 117 |
| ISSN | 1057-7149 (print); 1941-0042 (electronic) |
| IsPeerReviewed | true |
| IsScholarly | true |
| Language | English |
| License | https://ieeexplore.ieee.org/Xplorehelp/downloads/license-information/IEEE.html https://doi.org/10.15223/policy-029 https://doi.org/10.15223/policy-037 |
| ORCID | 0000-0001-7484-7261 0000-0003-3682-6288 0000-0002-7331-6132 0000-0002-1692-0069 0000-0002-3583-959X |
| PMID | 32960763 |
| PageCount | 12 |
| PublicationDate | 2020-01-01 |
| PublicationPlace | United States |
| PublicationTitle | IEEE Transactions on Image Processing |
| PublicationTitleAbbrev | TIP |
| PublicationTitleAlternate | IEEE Trans Image Process |
| PublicationYear | 2020 |
| Publisher | IEEE The Institute of Electrical and Electronics Engineers, Inc. (IEEE) |
| StartPage | 9140 |
| SubjectTerms | Algorithms; Gallium nitride; generative adversarial network; Generative adversarial networks; Generators; global attention; Histograms; Image color analysis; Image enhancement; Image quality; Machine learning; Mapping; Regularization; Task analysis; Unsupervised learning |
| Title | Towards Unsupervised Deep Image Enhancement With Generative Adversarial Network |
| URI | https://ieeexplore.ieee.org/document/9204448 https://www.ncbi.nlm.nih.gov/pubmed/32960763 https://www.proquest.com/docview/2447551663 https://www.proquest.com/docview/2445426265 |
| Volume | 29 |