Content-Based Colour Transfer


Full Description

Bibliographic Details
Published in: Computer Graphics Forum, Vol. 32, No. 1, pp. 190-203
Main Authors: Wu, Fuzhang; Dong, Weiming; Kong, Yan; Mei, Xing; Paul, Jean-Claude; Zhang, Xiaopeng
Format: Journal Article
Language: English
Published: Oxford, UK: Blackwell Publishing Ltd (Wiley), 01.02.2013
ISSN: 0167-7055, 1467-8659
Online Access: Full text
Description
Abstract: This paper presents a novel content‐based method for transferring the colour patterns between images. Unlike previous methods that rely on image colour statistics, our method puts an emphasis on high‐level scene content analysis. We first automatically extract the foreground subject areas and background scene layout from the scene. The semantic correspondences of the regions between source and target images are established. In the second step, the source image is re‐coloured in a novel optimization framework, which incorporates the extracted content information and the spatial distributions of the target colour styles. A new progressive transfer scheme is proposed to integrate the advantages of both global and local transfer algorithms, as well as avoid the over‐segmentation artefact in the result. Experiments show that with a better understanding of the scene contents, our method well preserves the spatial layout, the colour distribution and the visual coherence in the transfer process. As an interesting extension, our method can also be used to re‐colour video clips with spatially‐varied colour effects.
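The abstract contrasts the proposed content‐based approach with earlier methods that match only global colour statistics. For orientation, the sketch below illustrates that statistics‐based baseline (per‐channel mean and standard deviation matching in Lab space), not the authors' method; it assumes NumPy and scikit-image are available, and the function and file names are illustrative.

```python
# Illustrative sketch of the global colour-statistics baseline the paper
# contrasts with (not the content-based method described in the abstract).
import numpy as np
from skimage import color, io

def global_stats_transfer(source_rgb, target_rgb):
    """Re-colour source_rgb so its per-channel Lab statistics match target_rgb.

    Both inputs are float RGB arrays in [0, 1].
    """
    src = color.rgb2lab(source_rgb)
    tgt = color.rgb2lab(target_rgb)
    out = np.empty_like(src)
    for c in range(3):  # L, a, b channels
        s_mu, s_sigma = src[..., c].mean(), src[..., c].std()
        t_mu, t_sigma = tgt[..., c].mean(), tgt[..., c].std()
        scale = t_sigma / s_sigma if s_sigma > 1e-6 else 1.0
        out[..., c] = (src[..., c] - s_mu) * scale + t_mu
    # Convert back to RGB and clip out-of-gamut values.
    return np.clip(color.lab2rgb(out), 0.0, 1.0)

# Usage (hypothetical file names):
# result = global_stats_transfer(io.imread("source.png") / 255.0,
#                                io.imread("target.png") / 255.0)
```

Because this baseline applies one global mapping to the whole image, it cannot adapt the transfer to individual scene regions; the content‐based method summarized above addresses exactly that limitation by segmenting the scene and transferring colour per semantically matched region.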
ARK: ark:/67375/WNG-JX6LL6R7-W
ISTEX: 244D1A36F4EEB46E08DC6E058497231E09694E30
Article ID: CGF12008
DOI: 10.1111/cgf.12008