Aggregated Contextual Transformations for High-Resolution Image Inpainting
| Published in: | IEEE Transactions on Visualization and Computer Graphics, Volume 29, Issue 7, pp. 3266-3280 |
|---|---|
| Main Authors: | Zeng, Yanhong; Fu, Jianlong; Chao, Hongyang; Guo, Baining |
| Format: | Journal Article |
| Language: | English |
| Published: | United States: IEEE, 01.07.2023 (The Institute of Electrical and Electronics Engineers, Inc.) |
| Subjects: | image inpainting; generative adversarial networks (GAN); image synthesis; object removal |
| ISSN: | 1077-2626 (print); 1941-0506 (online) |
| Online Access: | Get full text |
| Abstract | Image inpainting, which completes large free-form missing regions in images, is a promising yet challenging task. State-of-the-art approaches have achieved significant progress by taking advantage of generative adversarial networks (GAN). However, these approaches can suffer from generating distorted structures and blurry textures in high-resolution images (e.g., 512×512). The challenges mainly derive from (1) image content reasoning from distant contexts, and (2) fine-grained texture synthesis for a large missing region. To overcome these two challenges, we propose an enhanced GAN-based model, named Aggregated COntextual-Transformation GAN (AOT-GAN), for high-resolution image inpainting. Specifically, to enhance context reasoning, we construct the generator of AOT-GAN by stacking multiple layers of a proposed AOT block. The AOT blocks aggregate contextual transformations from various receptive fields, allowing the model to capture both informative distant image contexts and rich patterns of interest for context reasoning. To improve texture synthesis, we enhance the discriminator of AOT-GAN by training it with a tailored mask-prediction task. This training objective forces the discriminator to distinguish the detailed appearances of real and synthesized patches, which in turn drives the generator to synthesize clear textures. Extensive comparisons on Places2, the most challenging benchmark with 1.8 million high-resolution images of 365 complex scenes, show that our model outperforms the state of the art. A user study including more than 30 subjects further validates the superiority of AOT-GAN. We further evaluate the proposed AOT-GAN in practical applications, e.g., logo removal, face editing, and object removal. Results show that our model achieves promising completions in the real world. We release code and models at https://github.com/researchmm/AOT-GAN-for-Inpainting. |
|---|---|
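The abstract's central generator idea, aggregating contextual transformations from various receptive fields, can be illustrated with a short sketch: parallel dilated convolutions with different rates each see a different context window, their outputs are concatenated and fused, and a learned gate blends the result back into the input. This is a minimal PyTorch sketch, not the official implementation; the class and parameter names are assumptions, the dilation rates (1, 2, 4, 8) are one plausible choice, and normalization and other details are simplified relative to the released code at the repository above.

```python
# Hypothetical sketch of an AOT-style block (assumed names; simplified
# relative to the official AOT-GAN-for-Inpainting repository).
import torch
import torch.nn as nn


class AOTBlock(nn.Module):
    """Aggregate contextual transformations from several receptive fields."""

    def __init__(self, dim: int, rates=(1, 2, 4, 8)):
        super().__init__()
        assert dim % len(rates) == 0
        # One dilated-conv branch per rate; each branch covers a different
        # receptive field, from local texture to distant context.
        self.branches = nn.ModuleList(
            nn.Sequential(
                nn.ReflectionPad2d(rate),
                nn.Conv2d(dim, dim // len(rates), kernel_size=3, dilation=rate),
                nn.ReLU(inplace=True),
            )
            for rate in rates
        )
        # Fuse the concatenated branch outputs back to `dim` channels.
        self.fuse = nn.Sequential(nn.ReflectionPad2d(1), nn.Conv2d(dim, dim, 3))
        # Spatially-variant gate deciding, per position, how much transformed
        # context to mix into the identity path.
        self.gate = nn.Sequential(nn.ReflectionPad2d(1), nn.Conv2d(dim, dim, 3))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        out = torch.cat([branch(x) for branch in self.branches], dim=1)
        out = self.fuse(out)
        g = torch.sigmoid(self.gate(x))
        # Gated residual: keep the input where the gate is low, adopt the
        # aggregated contextual transformation where it is high.
        return x * (1.0 - g) + out * g
```

Stacking several such blocks in the generator's bottleneck is what the abstract means by "stacking multiple layers of a proposed AOT block" for context reasoning.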
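The tailored mask-prediction task for the discriminator can be sketched in the same spirit: instead of a single real/fake score, the discriminator emits a patch-level score map and is trained to segment synthesized regions, with the inpainting mask (resized, and in the paper smoothed) as the target. The function name, tensor shapes, and the BCE loss form below are assumptions for illustration; consult the paper and repository for the exact formulation, including the Gaussian smoothing of the mask target.

```python
# Hypothetical sketch of a mask-prediction discriminator loss (assumed
# names and loss form; the exact AOT-GAN objective is in the paper/repo).
import torch
import torch.nn.functional as F


def discriminator_mask_loss(
    d_real: torch.Tensor,  # patch score map for a real image, (N, 1, h, w)
    d_fake: torch.Tensor,  # patch score map for an inpainted image, (N, 1, h, w)
    mask: torch.Tensor,    # inpainting mask, 1 inside holes, (N, 1, H, W)
) -> torch.Tensor:
    # Resize the hole mask to the discriminator's output resolution so each
    # patch score is supervised by whether that patch was synthesized.
    target = F.interpolate(
        mask, size=d_fake.shape[-2:], mode="bilinear", align_corners=False
    )
    # Real patches should score 0 everywhere; on inpainted images the
    # discriminator should reproduce the (soft) mask, i.e., fire exactly
    # where pixels were synthesized.
    loss_real = F.binary_cross_entropy_with_logits(d_real, torch.zeros_like(d_real))
    loss_fake = F.binary_cross_entropy_with_logits(d_fake, target)
    return loss_real + loss_fake
```

Because the discriminator must localize synthesized patches rather than judge the whole image, the generator is pushed to match real texture statistics everywhere inside the hole, which is the mechanism the abstract credits for clearer textures.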
| Author | Fu, Jianlong; Chao, Hongyang; Zeng, Yanhong; Guo, Baining |
| Author_xml | – sequence: 1; given name: Yanhong; surname: Zeng; ORCID: 0000-0003-3596-5163; email: zengyh7@mail2.sysu.edu.cn; organization: School of Computer Science and Engineering, Sun Yat-sen University, Guangzhou, China
– sequence: 2; given name: Jianlong; surname: Fu; ORCID: 0000-0002-1025-2012; email: jianf@microsoft.com; organization: Microsoft Research, Redmond, WA, USA
– sequence: 3; given name: Hongyang; surname: Chao; ORCID: 0000-0002-6104-2322; email: isschhy@mail.sysu.edu.cn; organization: School of Computer Science and Engineering, Sun Yat-sen University, Guangzhou, China
– sequence: 4; given name: Baining; surname: Guo; email: bainguo@microsoft.com; organization: Microsoft Research, Redmond, WA, USA |
| BackLink | https://www.ncbi.nlm.nih.gov/pubmed/35254985 (View this record in MEDLINE/PubMed) |
| CODEN | ITVGEA |
| ContentType | Journal Article |
| Copyright | Copyright The Institute of Electrical and Electronics Engineers, Inc. (IEEE) 2023 |
| DOI | 10.1109/TVCG.2022.3156949 |
| DatabaseName | IEEE All-Society Periodicals Package (ASPP) 2005–present; IEEE All-Society Periodicals Package (ASPP) 1998–present; IEEE Electronic Library (IEL); CrossRef; PubMed; Computer and Information Systems Abstracts; Electronics & Communications Abstracts; Technology Research Database; ProQuest Computer Science Collection; Advanced Technologies Database with Aerospace; Computer and Information Systems Abstracts – Academic; Computer and Information Systems Abstracts Professional; MEDLINE - Academic |
| DatabaseTitle | CrossRef; PubMed; Technology Research Database; Computer and Information Systems Abstracts – Academic; Electronics & Communications Abstracts; ProQuest Computer Science Collection; Computer and Information Systems Abstracts; Advanced Technologies Database with Aerospace; Computer and Information Systems Abstracts Professional; MEDLINE - Academic |
| DatabaseTitleList | MEDLINE - Academic; Technology Research Database; PubMed |
| Database_xml | – sequence: 1; dbid: NPM; name: PubMed; url: http://www.ncbi.nlm.nih.gov/entrez/query.fcgi?db=PubMed; sourceTypes: Index Database
– sequence: 2; dbid: RIE; name: IEEE Electronic Library (IEL); url: https://ieeexplore.ieee.org/; sourceTypes: Publisher
– sequence: 3; dbid: 7X8; name: MEDLINE - Academic; url: https://search.proquest.com/medline; sourceTypes: Aggregation Database |
| DeliveryMethod | fulltext_linktorsrc |
| Discipline | Engineering |
| EISSN | 1941-0506 |
| EndPage | 3280 |
| ExternalDocumentID | 35254985 10_1109_TVCG_2022_3156949 9729564 |
| Genre | orig-research Journal Article |
| GrantInformation_xml | – fundername: National Natural Science Foundation of China (NSF of China); grantid: 61672548, U1611461; funderid: 10.13039/501100001809 |
| ISICitedReferencesCount | 162 |
| ISSN | 1077-2626 1941-0506 |
| IngestDate | Thu Oct 02 03:26:54 EDT 2025 Sun Jun 29 12:32:10 EDT 2025 Mon Jul 21 06:03:37 EDT 2025 Sat Nov 29 03:31:40 EST 2025 Tue Nov 18 22:22:54 EST 2025 Wed Aug 27 02:14:17 EDT 2025 |
| IsPeerReviewed | true |
| IsScholarly | true |
| Issue | 7 |
| Language | English |
| License | https://ieeexplore.ieee.org/Xplorehelp/downloads/license-information/IEEE.html https://doi.org/10.15223/policy-029 https://doi.org/10.15223/policy-037 |
| LinkModel | DirectLink |
| ORCID | 0000-0002-1025-2012 0000-0002-6104-2322 0000-0003-3596-5163 |
| PMID | 35254985 |
| PQID | 2821071495 |
| PQPubID | 75741 |
| PageCount | 15 |
| PublicationCentury | 2000 |
| PublicationDate | 2023-07-01 |
| PublicationDateYYYYMMDD | 2023-07-01 |
| PublicationDecade | 2020 |
| PublicationPlace | United States |
| PublicationPlace_xml | – name: United States – name: New York |
| PublicationTitle | IEEE transactions on visualization and computer graphics |
| PublicationTitleAbbrev | TVCG |
| PublicationTitleAlternate | IEEE Trans Vis Comput Graph |
| PublicationYear | 2023 |
| Publisher | IEEE The Institute of Electrical and Electronics Engineers, Inc. (IEEE) |
| SourceID | proquest pubmed crossref ieee |
| SourceType | Aggregation Database Index Database Enrichment Source Publisher |
| StartPage | 3266 |
| SubjectTerms | Cognition; Context; Convolution; Discriminators; Filling; Free form; Generative adversarial networks; generative adversarial networks (GAN); Generators; High resolution; Image enhancement; image inpainting; Image resolution; Image synthesis; object removal; Reasoning; State-of-the-art reviews; Synthesis; Task analysis; Texture; Training |
| Title | Aggregated Contextual Transformations for High-Resolution Image Inpainting |
| URI | https://ieeexplore.ieee.org/document/9729564 https://www.ncbi.nlm.nih.gov/pubmed/35254985 https://www.proquest.com/docview/2821071495 https://www.proquest.com/docview/2637319959 |
| Volume | 29 |
| hasFullText | 1 |
| inHoldings | 1 |
| journalDatabaseRights | – providerCode: PRVIEE databaseName: IEEE Electronic Library (IEL) customDbUrl: eissn: 1941-0506 dateEnd: 99991231 omitProxy: false ssIdentifier: ssj0014489 issn: 1077-2626 databaseCode: RIE dateStart: 19950101 isFulltext: true titleUrlDefault: https://ieeexplore.ieee.org/ providerName: IEEE |
| linkProvider | IEEE |
| openUrl | ctx_ver=Z39.88-2004&ctx_enc=info%3Aofi%2Fenc%3AUTF-8&rfr_id=info%3Asid%2Fsummon.serialssolutions.com&rft_val_fmt=info%3Aofi%2Ffmt%3Akev%3Amtx%3Ajournal&rft.genre=article&rft.atitle=Aggregated+Contextual+Transformations+for+High-Resolution+Image+Inpainting&rft.jtitle=IEEE+transactions+on+visualization+and+computer+graphics&rft.au=Zeng%2C+Yanhong&rft.au=Fu%2C+Jianlong&rft.au=Chao%2C+Hongyang&rft.au=Guo%2C+Baining&rft.date=2023-07-01&rft.issn=1941-0506&rft.eissn=1941-0506&rft.volume=29&rft.issue=7&rft.spage=3266&rft_id=info:doi/10.1109%2FTVCG.2022.3156949&rft.externalDBID=NO_FULL_TEXT |