A biologically inspired separable learning vision model for real-time traffic object perception in Dark
Saved in:
| Published in: | Expert systems with applications, Vol. 297, Art. no. 129529 |
|---|---|
| Main authors: | Hulin Li; Qiliang Ren; Jun Li; Hanbing Wei; Zheng Liu; Linfang Fan |
| Format: | Journal Article |
| Language: | English |
| Publication details: | Elsevier Ltd, 01.02.2026 |
| Subjects: | Deep learning; Instance segmentation; Bio-inspired vision; Low-light traffic; Object detection |
| ISSN: | 0957-4174 |
| Online access: | Get full text |
| Abstract | Fast and accurate object perception in low-light traffic scenes has attracted increasing attention. However, due to severe illumination degradation and the lack of reliable visual cues, existing perception models and methods struggle to adapt quickly to, and predict accurately in, low-light environments. Moreover, no available large-scale benchmark specifically focuses on low-light traffic scenes. To bridge this gap, we introduce a physically grounded illumination degradation method tailored to real-world low-light settings and construct Dark-traffic, the largest densely annotated dataset to date for low-light traffic scenes, supporting object detection, instance segmentation, and optical flow estimation. We further propose the Separable Learning Vision Model (SLVM), a biologically inspired framework designed to enhance perception under adverse lighting. SLVM integrates four key components: a light-adaptive pupillary mechanism for illumination-sensitive feature extraction, a feature-level separable learning strategy for efficient representation, task-specific decoupled branches for multi-task separable learning, and a spatial misalignment-aware fusion module for precise multi-feature alignment. Extensive experiments demonstrate that SLVM achieves state-of-the-art performance with reduced computational overhead. Notably, it outperforms RT-DETR by 11.2 percentage points in detection, surpasses YOLOv12 by 6.1 percentage points in instance segmentation, and reduces the endpoint error (EPE) of the baseline by 12.37% on Dark-traffic. On the LIS benchmark, the end-to-end trained SLVM surpasses Swin Transformer + EnlightenGAN and ConvNeXt-T + EnlightenGAN by an average of 11 percentage points across key metrics, and exceeds Mask R-CNN (with light enhancement) by 3.1 percentage points. The Dark-traffic dataset and complete code are released at https://github.com/alanli1997/slvm. |
|---|---|
| ArticleNumber | 129529 |
| Author | Li, Hulin (alan@mails.cqjtu.edu.cn), School of Traffic and Transportation, Chongqing Jiaotong University, Chongqing 400074, China; Ren, Qiliang (qlren@cqjtu.edu.cn), School of Traffic and Transportation, Chongqing Jiaotong University, Chongqing 400074, China; Li, Jun (cqleejun@cqjtu.edu.cn), School of Mechatronics and Vehicle Engineering, Chongqing Jiaotong University, Chongqing 400074, China; Wei, Hanbing (hbwei@cqjtu.edu.cn), School of Mechatronics and Vehicle Engineering, Chongqing Jiaotong University, Chongqing 400074, China; Liu, Zheng (zheng.liu@ubc.ca), School of Engineering, University of British Columbia, Okanagan, Kelowna BC V1V 1V7, Canada; Fan, Linfang (linfangfan@cqjtu.edu.cn), School of Traffic and Transportation, Chongqing Jiaotong University, Chongqing 400074, China |
| ContentType | Journal Article |
| Copyright | 2025 Elsevier Ltd |
| DOI | 10.1016/j.eswa.2025.129529 |
| Discipline | Computer Science |
| ISICitedReferencesCount | 1 |
| ISSN | 0957-4174 |
| IsPeerReviewed | true |
| IsScholarly | true |
| Keywords | Deep learning; Instance segmentation; Bio-inspired vision; Low-light traffic; Object detection |
| Language | English |
| PublicationDate | 2026-02-01 |
| PublicationTitle | Expert systems with applications |
| PublicationYear | 2026 |
| Publisher | Elsevier Ltd |
| StartPage | 129529 |
| SubjectTerms | Bio-inspired vision; Deep learning; Instance segmentation; Low-light traffic; Object detection |
| Title | A biologically inspired separable learning vision model for real-time traffic object perception in Dark |
| URI | https://dx.doi.org/10.1016/j.eswa.2025.129529 |
| Volume | 297 |
| WOSCitedRecordID | wos001568955900001 |