Convolutional Autoencoder for Spectral-Spatial Hyperspectral Unmixing

Bibliographic Details
Published in: IEEE Transactions on Geoscience and Remote Sensing, Vol. 59, No. 1, pp. 535-549
Main authors: Palsson, Burkni; Ulfarsson, Magnus O.; Sveinsson, Johannes R.
Format: Journal Article
Language: English
Published: New York: IEEE, 01.01.2021
The Institute of Electrical and Electronics Engineers, Inc. (IEEE)
Subjects:
ISSN: 0196-2892, 1558-0644
Online access: Full text
Description
Summary: Blind hyperspectral unmixing is the process of expressing the measured spectrum of a pixel as a combination of a set of spectral signatures called endmembers and simultaneously determining their fractional abundances in the pixel. Most unmixing methods are strictly spectral and do not exploit the spatial structure of hyperspectral images (HSIs). In this article, we present a new spectral-spatial linear mixture model and an associated estimation method based on a convolutional neural network autoencoder unmixing (CNNAEU). The CNNAEU technique exploits the spatial and the spectral structure of HSIs both for endmember and abundance map estimation. As it works directly with patches of HSIs and does not use any pooling or upsampling layers, the spatial structure is preserved throughout and abundance maps are obtained as feature maps of a hidden convolutional layer. We compared the CNNAEU method to four conventional and three deep learning state-of-the-art unmixing methods using four real HSIs. Experimental results show that the proposed CNNAEU technique performs particularly well and consistently when it comes to endmember extraction and outperforms all the comparison methods.
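
The abstract describes the architecture only at a high level. The following is a minimal, illustrative PyTorch sketch of such a convolutional autoencoder, not the authors' exact CNNAEU network: the layer widths, kernel sizes, activation, and patch size are assumptions, and only the overall pattern described above is followed (a patch-based convolutional encoder with no pooling or upsampling, softmax-constrained hidden feature maps serving as abundance maps, and a bias-free 1x1 convolutional decoder whose weights play the role of endmember spectra).

import torch
import torch.nn as nn

class ConvUnmixingAE(nn.Module):
    """Sketch of a convolutional autoencoder for spectral-spatial unmixing.

    Encoder: 2-D convolutions map a B-band HSI patch to R channels; a
    channel-wise softmax makes each pixel's R values non-negative and
    sum-to-one, so the hidden feature maps can be read as abundance maps.
    Decoder: a single 1x1 convolution without bias, whose weights act as
    the R endmember spectra (linear mixing per pixel). No pooling or
    upsampling layers, so spatial structure is preserved throughout.
    """

    def __init__(self, num_bands: int, num_endmembers: int):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(num_bands, 48, kernel_size=3, padding=1),  # spatial-spectral filtering (width is illustrative)
            nn.LeakyReLU(0.1),
            nn.Conv2d(48, num_endmembers, kernel_size=1),
            nn.Softmax(dim=1),  # abundance non-negativity and sum-to-one constraints
        )
        # 1x1 convolution, no bias: reconstruction is a per-pixel linear combination of endmembers
        self.decoder = nn.Conv2d(num_endmembers, num_bands, kernel_size=1, bias=False)

    def forward(self, x):
        abundances = self.encoder(x)      # (batch, R, H, W) abundance feature maps
        recon = self.decoder(abundances)  # (batch, B, H, W) reconstructed patch
        return recon, abundances

# Example with illustrative sizes: 198-band, 40x40 patches, 4 endmembers
model = ConvUnmixingAE(num_bands=198, num_endmembers=4)
patch = torch.rand(8, 198, 40, 40)            # a batch of synthetic patches
recon, abund = model(patch)
endmembers = model.decoder.weight.squeeze()   # (198, 4) estimated endmember spectra

In such a setup the network would be trained on patches extracted from the HSI with a reconstruction loss (for example mean squared error or a spectral-angle-based loss); after training, the decoder weights are read out as the endmembers and the hidden feature maps as the abundance maps.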
DOI: 10.1109/TGRS.2020.2992743