Example‐Based Colourization Via Dense Encoding Pyramids



Detailed bibliography
Published in: Computer Graphics Forum, Volume 39, Issue 1, pp. 20–33
Main authors: Xiao, Chufeng; Han, Chu; Zhang, Zhuming; Qin, Jing; Wong, Tien‐Tsin; Han, Guoqiang; He, Shengfeng
Format: Journal Article
Language: English
Published: Oxford: Blackwell Publishing Ltd, 1 February 2020
ISSN: 0167-7055 (print); 1467-8659 (online)
Abstract We propose a novel deep example‐based image colourization method called dense encoding pyramid network. In our study, we define the colourization as a multinomial classification problem. Given a greyscale image and a reference image, the proposed network leverages large‐scale data and then predicts colours by analysing the colour distribution of the reference image. We design the network as a pyramid structure in order to exploit the inherent multi‐scale, pyramidal hierarchy of colour representations. Between two adjacent levels, we propose a hierarchical decoder–encoder filter to pass the colour distributions from the lower level to higher level in order to take both semantic information and fine details into account during the colourization process. Within the network, a novel parallel residual dense block is proposed to effectively extract the local–global context of the colour representations by widening the network. Several experiments, as well as a user study, are conducted to evaluate the performance of our network against state‐of‐the‐art colourization methods. Experimental results show that our network is able to generate colourful, semantically correct and visually pleasant colour images. In addition, unlike fully automatic colourization that produces fixed colour images, the reference image of our network is flexible; both natural images and simple colour palettes can be used to guide the colourization.
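Note on the classification formulation: the abstract frames colourization as multinomial classification, i.e. the network predicts a per-pixel distribution over a quantized colour palette rather than regressing two chrominance values directly. The record gives no details of this formulation, so the following is only a minimal, hypothetical PyTorch sketch of the general idea, assuming the ab plane of Lab colour space is quantized into a fixed number of bins; the bin count, quantization grid, head design and loss weighting are all assumptions, not the paper's specification.

```python
import torch
import torch.nn as nn

# Assumed palette size; classification-based colourization work commonly quantizes
# the ab plane into a few hundred bins. The paper's exact choice is not given here.
NUM_BINS = 313

class ColourClassifierHead(nn.Module):
    """Maps per-pixel backbone features to logits over the quantized colour bins."""
    def __init__(self, in_channels: int, num_bins: int = NUM_BINS):
        super().__init__()
        self.logits = nn.Conv2d(in_channels, num_bins, kernel_size=1)

    def forward(self, features: torch.Tensor) -> torch.Tensor:
        # features: (B, C, H, W) -> per-pixel logits over colour bins: (B, num_bins, H, W)
        return self.logits(features)

# Toy training step: cross-entropy between the predicted per-pixel distribution and
# the index of the bin containing each ground-truth ab value.
head = ColourClassifierHead(in_channels=64)
features = torch.randn(2, 64, 56, 56)                  # stand-in backbone features
target_bins = torch.randint(0, NUM_BINS, (2, 56, 56))  # stand-in ground-truth bin indices
loss = nn.CrossEntropyLoss()(head(features), target_bins)
loss.backward()
```

At inference, the predicted per-pixel distribution can be collapsed to a colour (for example by taking the expectation or an annealed mean over bin centres) and recombined with the input luminance; how the reference image's colour distribution conditions this prediction is specific to the paper and is not reproduced here.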
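Note on the block design: the abstract also names a "parallel residual dense block" that widens the network to capture local–global context within each pyramid level. No architectural details are given in this record, so the sketch below is only a plausible illustration of such a block: two parallel, densely connected convolution stacks (here distinguished by dilation rate, one for local and one for wider context), fused by a 1x1 convolution and wrapped in a residual connection. The layer counts, growth rate, dilation rates and fusion scheme are assumptions and may differ from the paper's design.

```python
import torch
import torch.nn as nn

class DenseBranch(nn.Module):
    """A small densely connected stack of 3x3 convolutions."""
    def __init__(self, channels: int, growth: int, num_layers: int, dilation: int):
        super().__init__()
        self.layers = nn.ModuleList()
        c = channels
        for _ in range(num_layers):
            self.layers.append(nn.Sequential(
                nn.Conv2d(c, growth, 3, padding=dilation, dilation=dilation),
                nn.ReLU(inplace=True),
            ))
            c += growth  # dense connectivity: each layer sees all previous outputs
        self.out_channels = c

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        feats = [x]
        for layer in self.layers:
            feats.append(layer(torch.cat(feats, dim=1)))
        return torch.cat(feats, dim=1)

class ParallelResidualDenseBlock(nn.Module):
    """Hypothetical widened block: two parallel dense branches, fused, plus a residual skip."""
    def __init__(self, channels: int = 64, growth: int = 32, num_layers: int = 3):
        super().__init__()
        self.local_branch = DenseBranch(channels, growth, num_layers, dilation=1)
        self.global_branch = DenseBranch(channels, growth, num_layers, dilation=2)
        fused = self.local_branch.out_channels + self.global_branch.out_channels
        self.fuse = nn.Conv2d(fused, channels, kernel_size=1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        y = torch.cat([self.local_branch(x), self.global_branch(x)], dim=1)
        return x + self.fuse(y)  # residual connection around the widened block

block = ParallelResidualDenseBlock()
out = block(torch.randn(1, 64, 32, 32))  # spatial size and channel count preserved
```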
Author details (in author order):
1. Xiao, Chufeng (South China University of Technology), chufengxiao@outlook.com
2. Han, Chu (The Chinese University of Hong Kong), chan@cse.cuhk.edu.hk, ORCID 0000-0001-7557-9131
3. Zhang, Zhuming (The Chinese University of Hong Kong), zhangzm@cse.cuhk.edu.hk
4. Qin, Jing (The Hong Kong Polytechnic University), harry.qin@polyu.edu.hk
5. Wong, Tien‐Tsin (The Chinese University of Hong Kong), ttwong@cse.cuhk.edu.hk
6. Han, Guoqiang (South China University of Technology), csgqhan@scut.edu.cn
7. He, Shengfeng (South China University of Technology), hesfe@scut.edu.cn, ORCID 0000-0002-3802-4644
ContentType Journal Article
Copyright 2019 The Authors Computer Graphics Forum © 2019 Eurographics ‐ The European Association for Computer Graphics and John Wiley & Sons Ltd
2020 Eurographics ‐ The European Association for Computer Graphics and John Wiley & Sons Ltd
DOI 10.1111/cgf.13659
Discipline Engineering
EISSN 1467-8659
EndPage 33
Genre article
Funding:
– Special Fund of Science and Technology Research and Development on Application From Guangdong Province (2016B010127003)
– Innovation and Technology Fund of Hong Kong (ITS/319/17)
– Shenzhen Science and Technology Program (JCYJ20160429190300857)
– RGC General Research Fund (CUHK14201017)
– Guangzhou Key Industrial Technology Research fund (201802010036)
– National Natural Science Foundation of China (61472145, 61702194)
– Guangdong Natural Science Foundation (2017A030312008)
ISICitedReferencesCount 37
ISSN 0167-7055
IsPeerReviewed true
IsScholarly true
Issue 1
Language English
Notes Joint first authors.
Corresponding author.
ORCID 0000-0001-7557-9131
0000-0002-3802-4644
PageCount 14
PublicationDate February 2020
PublicationDateYYYYMMDD 2020-02-01
PublicationPlace Oxford
PublicationTitle Computer graphics forum
PublicationYear 2020
Publisher Blackwell Publishing Ltd
StartPage 20
SubjectTerms Coders
Color
Computational Photography
I.3.3 [Computer Graphics]: Picture/Image; Computing Methodologies: Neural Networks
image and video processing
Image classification
image processing
Pyramids
Representations
Title Example‐Based Colourization Via Dense Encoding Pyramids
URI https://onlinelibrary.wiley.com/doi/abs/10.1111%2Fcgf.13659
https://www.proquest.com/docview/2364972088
Volume 39