Rain-Free and Residue Hand-in-Hand: A Progressive Coupled Network for Real-Time Image Deraining

Bibliographic Details
Published in: IEEE transactions on image processing, Vol. 30, pp. 7404-7418
Main Authors: Jiang, Kui, Wang, Zhongyuan, Yi, Peng, Chen, Chen, Wang, Zheng, Wang, Xiao, Jiang, Junjun, Lin, Chia-Wen
Format: Journal Article
Language:English
Published: New York: The Institute of Electrical and Electronics Engineers, Inc. (IEEE), 2021
Subjects:
ISSN: 1057-7149
EISSN: 1941-0042
Abstract Rainy weather is a challenge for many vision-oriented tasks (e.g., object detection and segmentation), causing performance degradation. Image deraining is an effective way to avoid the performance drop of downstream vision tasks. However, most existing deraining methods either fail to produce satisfactory restoration results or are too computationally expensive. In this work, considering both the effectiveness and efficiency of image deraining, we propose a progressive coupled network (PCNet) to effectively separate rain streaks while preserving rain-free details. To this end, we investigate the blending correlations between rain streaks and rain-free content, and devise a novel coupled representation module (CRM) to learn their joint features and blending correlations. By cascading multiple CRMs, PCNet extracts hierarchical features of multi-scale rain streaks and progressively separates the rain-free content from the rain streaks. To improve computational efficiency, we employ depth-wise separable convolutions and a U-shaped structure, and construct the CRM with an asymmetric architecture to reduce model parameters and memory footprint. Extensive experiments evaluate the efficacy of the proposed PCNet in two aspects: (1) image deraining on several synthetic and real-world rain datasets, and (2) joint image deraining and downstream vision tasks (e.g., object detection and segmentation). Furthermore, we show that the proposed CRM can be readily adopted for similar image restoration tasks, including image dehazing and low-light enhancement, with competitive performance. The source code is available at https://github.com/kuijiang0802/PCNet.
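The efficiency argument in the abstract (depth-wise separable convolutions feeding a cascade of coupled blocks) can be made concrete with a small sketch. The PyTorch snippet below is illustrative only: the module names (DepthwiseSeparableConv, ToyCoupledModule) and the coupled-block internals are hypothetical stand-ins, not the actual PCNet/CRM design, which is in the authors' repository at https://github.com/kuijiang0802/PCNet. It shows the factorization behind the parameter savings: a 3x3 convolution mapping C channels to C channels needs 9C^2 weights, while the depth-wise (9C) plus point-wise (C^2) pair needs only 9C + C^2, roughly 8x fewer for C = 64.

```python
# Illustrative sketch only (PyTorch assumed); not the paper's released code.
import torch
import torch.nn as nn


class DepthwiseSeparableConv(nn.Module):
    """3x3 depth-wise conv followed by a 1x1 point-wise conv (parameter-efficient)."""
    def __init__(self, in_ch, out_ch):
        super().__init__()
        self.depthwise = nn.Conv2d(in_ch, in_ch, kernel_size=3, padding=1, groups=in_ch)
        self.pointwise = nn.Conv2d(in_ch, out_ch, kernel_size=1)
        self.act = nn.ReLU(inplace=True)

    def forward(self, x):
        return self.act(self.pointwise(self.depthwise(x)))


class ToyCoupledModule(nn.Module):
    """Hypothetical coupled block: shared features feed two branches that
    jointly update a rain-streak estimate and a rain-free estimate."""
    def __init__(self, ch):
        super().__init__()
        self.shared = DepthwiseSeparableConv(ch * 2, ch)
        self.rain_branch = DepthwiseSeparableConv(ch, ch)
        self.clean_branch = DepthwiseSeparableConv(ch, ch)

    def forward(self, rain_feat, clean_feat):
        joint = self.shared(torch.cat([rain_feat, clean_feat], dim=1))
        return self.rain_branch(joint), self.clean_branch(joint)


if __name__ == "__main__":
    # Cascading several coupled blocks progressively refines both estimates.
    ch = 16
    head = nn.Conv2d(3, ch, kernel_size=3, padding=1)
    blocks = nn.ModuleList([ToyCoupledModule(ch) for _ in range(3)])
    x = torch.randn(1, 3, 64, 64)          # toy rainy input
    rain_feat = clean_feat = head(x)
    for blk in blocks:                      # progressive separation
        rain_feat, clean_feat = blk(rain_feat, clean_feat)
    print(rain_feat.shape, clean_feat.shape)
```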
Author_xml – sequence: 1
  givenname: Kui
  orcidid: 0000-0002-4055-7503
  surname: Jiang
  fullname: Jiang, Kui
  email: kuijiang_1994@163.com
  organization: National Engineering Research Center for Multimedia Software, School of Computer Science, Wuhan University, Wuhan, China
– sequence: 2
  givenname: Zhongyuan
  orcidid: 0000-0002-9796-488X
  surname: Wang
  fullname: Wang, Zhongyuan
  email: wzy_hope@163.com
  organization: National Engineering Research Center for Multimedia Software, School of Computer Science, Wuhan University, Wuhan, China
– sequence: 3
  givenname: Peng
  orcidid: 0000-0001-9366-951X
  surname: Yi
  fullname: Yi, Peng
  email: yipeng@whu.edu.cn
  organization: National Engineering Research Center for Multimedia Software, School of Computer Science, Wuhan University, Wuhan, China
– sequence: 4
  givenname: Chen
  orcidid: 0000-0003-3957-7061
  surname: Chen
  fullname: Chen, Chen
  email: chen.chen@ucf.edu
  organization: Center for Research in Computer Vision (CRCV), University of Central Florida, Orlando, FL, USA
– sequence: 5
  givenname: Zheng
  orcidid: 0000-0003-3846-9157
  surname: Wang
  fullname: Wang, Zheng
  email: wangzwhu@whu.edu.cn
  organization: National Engineering Research Center for Multimedia Software, School of Computer Science, Wuhan University, Wuhan, China
– sequence: 6
  givenname: Xiao
  surname: Wang
  fullname: Wang, Xiao
  email: hebeiwangxiao@163.com
  organization: National Engineering Research Center for Multimedia Software, School of Computer Science, Wuhan University, Wuhan, China
– sequence: 7
  givenname: Junjun
  orcidid: 0000-0002-5694-505X
  surname: Jiang
  fullname: Jiang, Junjun
  email: junjun0595@163.com
  organization: Peng Cheng Laboratory, Shenzhen, China
– sequence: 8
  givenname: Chia-Wen
  orcidid: 0000-0002-9097-2318
  surname: Lin
  fullname: Lin, Chia-Wen
  email: cwlin@ee.nthu.edu.tw
  organization: Department of Electrical Engineering, National Tsing Hua University, Hsinchu, Taiwan
CODEN IIPRE4
ContentType Journal Article
Copyright Copyright The Institute of Electrical and Electronics Engineers, Inc. (IEEE) 2021
DOI 10.1109/TIP.2021.3102504
DatabaseName IEEE Xplore (IEEE)
IEEE All-Society Periodicals Package (ASPP) 1998–Present
IEEE Electronic Library (IEL)
CrossRef
Computer and Information Systems Abstracts
Electronics & Communications Abstracts
Technology Research Database
ProQuest Computer Science Collection
Advanced Technologies Database with Aerospace
Computer and Information Systems Abstracts – Academic
Computer and Information Systems Abstracts Professional
MEDLINE - Academic
Discipline Applied Sciences
Engineering
EISSN 1941-0042
EndPage 7418
ExternalDocumentID 10_1109_TIP_2021_3102504
9515582
Genre orig-research
GrantInformation_xml – fundername: Hubei Technological Innovation Special Fund; Hubei Province Technological Innovation Major Project
  grantid: 2020BAB018
  funderid: 10.13039/501100012239
– fundername: National Natural Science Foundation of China
  grantid: U1903214; 62071339; 61971165; 61671332; 61801335
  funderid: 10.13039/501100001809
– fundername: Qualcomm Technologies, Inc., USA, through the Taiwan University Research Collaboration Project
  grantid: NAT-410478
  funderid: 10.13039/100005144
ISICitedReferencesCount 148
ISSN 1057-7149
1941-0042
IsPeerReviewed true
IsScholarly true
Language English
License https://ieeexplore.ieee.org/Xplorehelp/downloads/license-information/IEEE.html
https://doi.org/10.15223/policy-029
https://doi.org/10.15223/policy-037
LinkModel DirectLink
PMID 34403336
PQID 2565237544
PQPubID 85429
PageCount 15
PublicationPlace New York
PublicationTitle IEEE transactions on image processing
PublicationTitleAbbrev TIP
PublicationYear 2021
Publisher IEEE
The Institute of Electrical and Electronics Engineers, Inc. (IEEE)
StartPage 7404
SubjectTerms attention mechanism
Blending
Computational modeling
Degradation
Feature extraction
Image degradation
Image deraining
Image enhancement
Image restoration
Image segmentation
multi-scale fusion
non-local network
Object recognition
Performance degradation
Production methods
Rain
Source code
Task analysis
Vision
Title Rain-Free and Residue Hand-in-Hand: A Progressive Coupled Network for Real-Time Image Deraining
URI https://ieeexplore.ieee.org/document/9515582
https://www.proquest.com/docview/2565237544
https://www.proquest.com/docview/2562520308
Volume 30