Deep Content-Dependent 3-D Convolutional Sparse Coding for Hyperspectral Image Denoising

Detailed Bibliography
Published in: IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing, Volume 17, pp. 4125-4138
Main Authors: Yin, Haitao; Chen, Hao
Format: Journal Article
Language: English
Published: Piscataway: The Institute of Electrical and Electronics Engineers, Inc. (IEEE), 2024
ISSN: 1939-1404, 2151-1535
Description
Summary: Despite significant successes in hyperspectral image (HSI) denoising, purely data-driven HSI denoising networks still suffer from limited interpretability of their inference. Deep unfolding (DU) is a feasible way to improve the interpretability of deep networks. However, specialized spatial-spectral DU methods have seldom been studied, and simple spatial-spectral extensions lead to unpleasant spectral distortion. To tackle these issues, we first propose a content-dependent 3-D convolutional sparse coding (CD-CSC) to jointly represent spatial-spectral features. Specifically, the 3-D filters used in CD-CSC are unique to each HSI and are determined by a linear combination of base 3-D filters. Then, we develop a novel CD-CSC-inspired DU network for HSI denoising, called CD-CSCNet. Furthermore, by exploiting the lightweight nature of separable convolution and the adaptability of hypernetworks, we design a separable content-dependent 3-D convolution (SCD-Conv) to implement CD-CSCNet. SCD-Conv not only reduces computational complexity but can also be viewed as convolutional sparse coding with spatial and spectral dictionaries. Extensive experimental results on the ICVL, Zhuhai-1 OHS-3C, and GaoFen-5 datasets demonstrate that CD-CSCNet outperforms several recent purely data-driven and DU-based networks both quantitatively and visually.
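
The following is a minimal PyTorch sketch (not the authors' released code) of the two ideas the summary describes: per-image 3-D filters formed as a linear combination of learned base filters, with the mixing coefficients predicted from the input HSI by a small hypernetwork, plus a separable spatial/spectral variant in the spirit of SCD-Conv. All class names, layer sizes, and the pooling-plus-MLP hypernetwork are illustrative assumptions, not details taken from the paper:

import torch
import torch.nn as nn
import torch.nn.functional as F


class ContentDependentConv3d(nn.Module):
    """Content-dependent 3-D convolution (illustrative sketch): the filters applied
    to each input are a linear combination of learned base 3-D filters, with the
    combination coefficients predicted by a small hypernetwork."""

    def __init__(self, in_ch, out_ch, kernel_size=3, num_bases=4):
        super().__init__()
        k = kernel_size
        # Base 3-D filter bank: num_bases kernels of shape (out_ch, in_ch, k, k, k).
        self.bases = nn.Parameter(torch.randn(num_bases, out_ch, in_ch, k, k, k) * 0.02)
        # Hypernetwork (assumed form): global average pooling + MLP -> softmax weights.
        self.hyper = nn.Sequential(
            nn.AdaptiveAvgPool3d(1),
            nn.Flatten(),
            nn.Linear(in_ch, 16),
            nn.ReLU(inplace=True),
            nn.Linear(16, num_bases),
            nn.Softmax(dim=1),
        )
        self.pad = k // 2

    def forward(self, x):
        # x: (B, C_in, bands, H, W); filters differ per sample, so loop over the batch.
        coeff = self.hyper(x)  # (B, num_bases)
        outs = []
        for b in range(x.size(0)):
            w = (coeff[b].view(-1, 1, 1, 1, 1, 1) * self.bases).sum(dim=0)
            outs.append(F.conv3d(x[b:b + 1], w, padding=self.pad))
        return torch.cat(outs, dim=0)


class SeparableCDConv3d(nn.Module):
    """Separable variant (illustrative): a spatial 1 x k x k stage followed by a
    spectral k x 1 x 1 stage, each content-dependent, reducing the cost of a full
    3-D kernel."""

    def __init__(self, in_ch, out_ch, kernel_size=3, num_bases=4):
        super().__init__()
        k = kernel_size
        self.spatial_bases = nn.Parameter(torch.randn(num_bases, out_ch, in_ch, 1, k, k) * 0.02)
        self.spectral_bases = nn.Parameter(torch.randn(num_bases, out_ch, out_ch, k, 1, 1) * 0.02)
        self.hyper = nn.Sequential(
            nn.AdaptiveAvgPool3d(1), nn.Flatten(),
            nn.Linear(in_ch, 16), nn.ReLU(inplace=True),
            nn.Linear(16, num_bases), nn.Softmax(dim=1),
        )
        self.pad = k // 2

    def forward(self, x):
        coeff = self.hyper(x)
        outs = []
        for b in range(x.size(0)):
            c = coeff[b].view(-1, 1, 1, 1, 1, 1)
            w_spa = (c * self.spatial_bases).sum(dim=0)
            w_spe = (c * self.spectral_bases).sum(dim=0)
            y = F.conv3d(x[b:b + 1], w_spa, padding=(0, self.pad, self.pad))
            y = F.conv3d(y, w_spe, padding=(self.pad, 0, 0))
            outs.append(y)
        return torch.cat(outs, dim=0)


if __name__ == "__main__":
    hsi = torch.randn(2, 1, 31, 64, 64)              # toy batch of 31-band patches
    print(ContentDependentConv3d(1, 8)(hsi).shape)   # torch.Size([2, 8, 31, 64, 64])
    print(SeparableCDConv3d(1, 8)(hsi).shape)        # torch.Size([2, 8, 31, 64, 64])
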
DOI: 10.1109/JSTARS.2024.3357732