Getting to know low-light images with the Exclusively Dark dataset


Detailed bibliography
Published in: Computer Vision and Image Understanding, Vol. 178, pp. 30–42
Main authors: Loh, Yuen Peng; Chan, Chee Seng (cs.chan@um.edu.my)
Format: Journal Article
Language: English
Publisher: Elsevier Inc., January 2019
ISSN: 1077-3142; EISSN: 1090-235X
Abstract: Low light is an inescapable element of our daily surroundings that greatly affects the efficiency of our vision. Research on low-light imagery has grown steadily, particularly in the field of image enhancement, but there is still no go-to database to serve as a benchmark. Moreover, research fields that could assist us in low-light environments, such as object detection, have glossed over this aspect even though breakthrough after breakthrough has been achieved in recent years; this neglect is most noticeable in the scarcity of low-light data (less than 2% of the total images) in successful public benchmark datasets such as PASCAL VOC, ImageNet, and Microsoft COCO. Thus, we propose the Exclusively Dark dataset to alleviate this data drought. It consists exclusively of low-light images captured in visible light only, with image- and object-level annotations. Moreover, we share insightful findings regarding the effects of low light on the object detection task by analyzing visualizations of both hand-crafted and learned features. We found that the effects of low light reach far deeper into the features than can be solved by simple “illumination invariance”. It is our hope that this analysis and the Exclusively Dark dataset can encourage the growth of low-light research across different fields. The dataset can be downloaded at https://github.com/cs-chan/Exclusively-Dark-Image-Dataset.

Highlights:
• A new low-light-image-only dataset, the Exclusively Dark dataset, is proposed.
• The dataset covers 10 low-light illumination types with annotations for 12 object classes.
• Low-light images are analyzed with both hand-crafted and learned object detection features.
• Low light presents illumination challenges that are fundamentally different from bright conditions.
• The dataset can serve as the go-to database to benchmark low-light domain research.
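To make the data-scarcity point above concrete (the abstract notes that less than 2% of images in PASCAL VOC, ImageNet, and Microsoft COCO are low-light), the sketch below shows one simple way such images could be screened: flag an image when its mean luma falls below a cutoff. The BT.601 luma weights and the threshold of 40 are illustrative assumptions for this sketch, not the selection criterion used by the paper.

```python
import numpy as np

# ITU-R BT.601 luma weights for an RGB -> grayscale conversion.
LUMA = np.array([0.299, 0.587, 0.114])

def mean_luminance(rgb):
    """Mean luma of an HxWx3 uint8 RGB image, on a 0-255 scale."""
    return float(np.dot(rgb.reshape(-1, 3).astype(np.float64), LUMA).mean())

def is_low_light(rgb, threshold=40.0):
    """Illustrative screen: True when mean luma is below `threshold`
    (an assumed cutoff, not a value taken from the paper)."""
    return mean_luminance(rgb) < threshold

# Synthetic check: a uniformly dark image vs. a bright one.
dark = np.full((8, 8, 3), 10, dtype=np.uint8)
bright = np.full((8, 8, 3), 200, dtype=np.uint8)
print(is_low_light(dark), is_low_light(bright))  # True False
```

A real screen would be run over each dataset's images to estimate its low-light fraction; a global-mean criterion is crude (a night scene with one bright lamp can pass), which is part of why a curated dataset such as Exclusively Dark is useful.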
DOI: 10.1016/j.cviu.2018.10.010
Copyright: 2018 Elsevier Inc.
Disciplines: Applied Sciences; Engineering; Computer Science
Peer reviewed: Yes
Keywords: 65D05, 65D17, 41A05, 41A10
Page count: 13
Publication date: January 2019
  start-page: 157
  year: 2008
  end-page: 173
  ident: b48
  article-title: Labelme: a database and web-based tool for image annotation
  publication-title: Int. J. Comput. Vis.
– year: 2018
  ident: b55
  article-title: Deep retinex decomposition for low-light enhancement
  publication-title: British Machine Vision Conference
– volume: 106
  start-page: 162
  year: 2007
  end-page: 182
  ident: b7
  article-title: Background-subtraction using contour-based fusion of thermal and visible imagery
  publication-title: Comput. Vision Image Understanding
– start-page: 266
  year: 2015
  end-page: 270
  ident: b34
  article-title: Unveiling contrast in darkness
  publication-title: Pattern Recognition (ACPR), 2015 3rd IAPR Asian Conference on
– volume: 16
  start-page: 2080
  year: 2007
  end-page: 2095
  ident: b4
  article-title: Image denoising by sparse 3-d transform-domain collaborative filtering
  publication-title: IEEE Trans. Image Process.
– volume: 115
  start-page: 211
  year: 2015
  end-page: 252
  ident: b47
  article-title: Imagenet large scale visual recognition challenge
  publication-title: Int. J. Comput. Vis.
– reference: Shen, L., Yue, Z., Feng, F., Chen, Q., Liu, S., Ma, J., 2017. Msr-net: Low-light image enhancement using deep convolutional network. arXiv preprint
– start-page: 818
  year: 2014
  end-page: 833
  ident: b57
  article-title: Visualizing and understanding convolutional networks
  publication-title: European Conference on Computer Vision
– start-page: 1977
  year: 2017
  end-page: 1981
  ident: b51
  article-title: Low light image enhancement based on two-step noise suppression
  publication-title: Acoustics, Speech and Signal Processing (ICASSP), 2017 IEEE International Conference on
– volume: 61
  start-page: 72
  year: 2015
  end-page: 80
  ident: b23
  article-title: A novel approach for denoising and enhancement of extremely low-light video
  publication-title: IEEE Trans. Consum. Electron.
– volume: 22
  start-page: 3538
  year: 2013
  end-page: 3548
  ident: b54
  article-title: Naturalness preserved enhancement algorithm for non-uniform illumination images
  publication-title: IEEE Trans. Image Process.
– reference: Olmeda, D., Premebida, C., Nunes, U., Armingol, J.M., Escalera, A.d.l., 2013. Lsi far infrared pedestrian dataset.
– volume: 71
  start-page: 1
  year: 2017
  end-page: 13
  ident: b26
  article-title: How deep learning extracts and learns leaf features for plant classification
  publication-title: Pattern Recognit.
– start-page: 886
  year: 2005
  end-page: 893
  ident: b5
  article-title: Histograms of oriented gradients for human detection
  publication-title: Computer Vision and Pattern Recognition (CVPR), 2005 IEEE Conference on
– volume: 47
  start-page: 3750
  year: 2014
  end-page: 3766
  ident: b22
  article-title: Nighttime face recognition at large standoff: Cross-distance and cross-spectral matching
  publication-title: Pattern Recognit.
– volume: 25
  start-page: 4116
  year: 2016
  end-page: 4128
  ident: b14
  article-title: Adobe boxes: Locating object proposals using object adobes
  publication-title: IEEE Trans. Image Process.
– start-page: 4131
  year: 2015
  end-page: 4135
  ident: b32
  article-title: Robust contrast enhancement of noisy low-light images: Denoising-enhancement-completion
  publication-title: Image Processing (ICIP), 2015 IEEE International Conference on
– volume: 60
  start-page: 91
  year: 2004
  end-page: 110
  ident: b36
  article-title: Distinctive image features from scale-invariant keypoints
  publication-title: Int. J. Comput. Vis.
– volume: 9
  start-page: 2579
  year: 2008
  end-page: 2605
  ident: b37
  article-title: Visualizing data using t-sne
  publication-title: J. Mach. Learn. Res.
– volume: 48
  start-page: 1947
  year: 2015
  end-page: 1960
  ident: b58
  article-title: Robust pedestrian detection in thermal infrared imagery using a shape distribution histogram feature and modified sparse representation classification
  publication-title: Pattern Recognit.
– volume: 27
  start-page: 2828
  year: 2018
  end-page: 2841
  ident: b30
  article-title: Structure-revealing low-light image enhancement via robust retinex model
  publication-title: IEEE Trans. Image Process.
– reference: Dollár, P., Piotr’s Computer Vision Matlab Toolbox (PMT).
– volume: 88
  start-page: 303
  year: 2010
  end-page: 338
  ident: b13
  article-title: The pascal visual object classes (voc) challenge
  publication-title: Int. J. Comput. Vis.
– start-page: 364
  year: 2005
  end-page: 369
  ident: b6
  article-title: A two-stage template approach to person detection in thermal imagery
  publication-title: Application of Computer Vision, 2005. WACV/MOTIONS’05 Volume 1. Seventh IEEE Workshops on
– volume: 26
  start-page: 982
  year: 2017
  end-page: 993
  ident: b18
  article-title: Lime: Low-light image enhancement via illumination map estimation
  publication-title: IEEE Trans. Image Process.
– year: 2018
  ident: b2
  article-title: Learning to see in the dark
  publication-title: Computer Vision and Pattern Recognition (CVPR), 2018 IEEE Conference on
– reference: .
– start-page: 91
  year: 2015
  end-page: 99
  ident: b45
  article-title: Faster r-cnn: Towards real-time object detection with region proposal networks
  publication-title: Advances in Neural Information Processing Systems
– year: 2007
  ident: b39
  article-title: Adaptive enhancement and noise reduction in very low light-level video
  publication-title: ICCV
– start-page: VI
  year: 2007
  end-page: 185
  ident: b10
  article-title: Nighttime pedestrian detection with near infrared using cascaded classifiers
  publication-title: Image Processing (ICIP), 2007 IEEE International Conference on
– volume: 29
  start-page: 627
  year: 2007
  end-page: 639
  ident: b29
  article-title: Illumination invariant face recognition using near-infrared images
  publication-title: IEEE Trans. Pattern Anal. Mach. Intell.
– start-page: 779
  year: 2016
  end-page: 788
  ident: b43
  article-title: You only look once: Unified, real-time object detection
  publication-title: Computer Vision and Pattern Recognition (CVPR), 2016 IEEE Conference on
– reference: Simonyan, K., Zisserman, A., 2014. Very deep convolutional networks for large-scale image recognition. arXiv preprint
– start-page: 3286
  year: 2014
  end-page: 3293
  ident: b3
  article-title: Bing: Binarized normed gradients for objectness estimation at 300fps
  publication-title: Computer Vision and Pattern Recognition (CVPR), 2014 IEEE Conference on
StartPage 30
Title Getting to know low-light images with the Exclusively Dark dataset
URI https://dx.doi.org/10.1016/j.cviu.2018.10.010
Volume 178