Self-Similarity Constrained Sparse Representation for Hyperspectral Image Super-Resolution

Published in: IEEE Transactions on Image Processing, Volume 27, Issue 11, pp. 5625-5637
Main authors: Han, Xian-Hua; Shi, Boxin; Zheng, Yinqiang
Format: Journal Article
Language: English
Published: United States: The Institute of Electrical and Electronics Engineers, Inc. (IEEE), 01.11.2018
ISSN: 1057-7149, 1941-0042
DOI: 10.1109/TIP.2018.2855418
Description
Summary: Fusing a low-resolution hyperspectral (HS) image with the corresponding high-resolution multispectral image to obtain a high-resolution HS image is an important technique for capturing comprehensive scene information in both the spatial and spectral domains. Existing approaches adopt a sparsity-promoting strategy and encode the spectral information of each pixel independently, which results in noisy sparse representations. We propose a novel HS image super-resolution method via a self-similarity constrained sparse representation. We explore the similar patch structures across the whole image and the pixels with close appearance in local regions to create global-structure groups and local-spectral super-pixels. By forcing the sparse representations of pixels belonging to the same group and super-pixel to be similar, we alleviate the effect of outliers in the learned sparse coding. Experimental results on benchmark datasets validate that the proposed method outperforms state-of-the-art methods in both quantitative metrics and visual effect.
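
The summary sketches an optimization idea: each pixel's spectrum is coded sparsely over a dictionary, and the codes of pixels that share a global-structure group or a local super-pixel are constrained to agree, which suppresses outlier codes. The paper's actual formulation is not reproduced in this record, so the Python snippet below is only a minimal sketch of that kind of constraint: a plain ISTA-style solver with an assumed quadratic penalty that pulls each code toward its group mean. The function names, the penalty, and all parameter values are illustrative assumptions, not the authors' method.

```python
import numpy as np

def soft_threshold(x, lam):
    """Proximal operator of lam * ||.||_1 (element-wise shrinkage)."""
    return np.sign(x) * np.maximum(np.abs(x) - lam, 0.0)

def group_constrained_sparse_codes(Y, D, groups, lam=0.1, mu=0.5, n_iter=100):
    """Toy ISTA solver for
        min_A 0.5*||Y - D A||_F^2 + lam*||A||_1
              + mu * sum_g ||A[:, g] - mean(A[:, g])||_F^2,
    where the last (assumed) term pulls the sparse codes of pixels in the
    same similarity group toward their group mean, a simplified stand-in
    for the self-similarity constraint described in the summary.

    Y      : (bands, pixels)  HS pixel spectra as columns
    D      : (bands, atoms)   spectral dictionary
    groups : list of integer index arrays, one per similarity group
    """
    A = np.zeros((D.shape[1], Y.shape[1]))
    # Step size 1/L, with L a Lipschitz bound for the smooth part.
    L = np.linalg.norm(D, 2) ** 2 + 2.0 * mu
    for _ in range(n_iter):
        grad = D.T @ (D @ A - Y)  # gradient of the data-fidelity term
        for g in groups:          # gradient of the group-coherence term
            grad[:, g] += 2.0 * mu * (A[:, g] - A[:, g].mean(axis=1, keepdims=True))
        A = soft_threshold(A - grad / L, lam / L)
    return A

# Hypothetical toy usage: 8 bands, 20 atoms, 100 pixels, two made-up groups.
rng = np.random.default_rng(0)
D = rng.standard_normal((8, 20))
Y = rng.standard_normal((8, 100))
A = group_constrained_sparse_codes(Y, D, [np.arange(50), np.arange(50, 100)])
```

With mu = 0 this reduces to ordinary per-pixel sparse coding; the grouping term is what the summary credits with suppressing noisy, outlier representations.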