Hyperspectral unmixing using deep convolutional autoencoder

Bibliographic Details
Published in: International Journal of Remote Sensing, Vol. 41, No. 12, pp. 4799-4819
Main Authors: Elkholy, Menna M.; Mostafa, Marwa; Ebied, Hala M.; Tolba, Mohamed F.
Format: Journal Article
Language: English
Published: London: Taylor & Francis, 17 June 2020
ISSN: 0143-1161, 1366-5901
Description
Summary: Hyperspectral Unmixing (HU) estimates the combination of endmembers and their corresponding fractional abundances in each of the mixed pixels of a hyperspectral remote sensing image. In this paper, we address the linear unmixing problem with an unsupervised Deep Convolutional Autoencoder network (DCAE). The proposed DCAE is an end-to-end model that consists of two parts: an encoder and a decoder. First, a deep convolutional encoder is employed to extract a significant, denoised, non-redundant feature vector. Then, a one-layer decoder maps this feature vector to the endmembers and their abundance percentages. Extensive experiments were carried out to evaluate the performance of the proposed method using synthetic and real hyperspectral datasets, namely Samson, Cuprite, Urban, and Jasper Ridge. Analyses of the results demonstrate that the proposed DCAE significantly outperforms benchmark unmixing methods, even in a noisy environment, in terms of both Root Mean Square Error (RMSE) and Mean Square Error (MSE). The achieved mean absolute errors of the proposed DCAE were 0.0097, 0.001, 0.0141, and 0.0145 for the Samson, Cuprite, Urban, and Jasper Ridge datasets, respectively.
DOI: 10.1080/01431161.2020.1724346
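
The summary above describes a convolutional encoder that compresses each pixel's spectrum, followed by a single-layer linear decoder whose output reconstructs the spectrum, so the decoder weights correspond to the endmember signatures and the encoder output to the abundances. The sketch below is only a rough illustration of that idea under the linear mixing model, not the authors' implementation: the PyTorch framework, the layer sizes, and the Samson-like dimensions in the usage example are all assumptions.

```python
import torch
import torch.nn as nn

class DCAESketch(nn.Module):
    """Illustrative convolutional autoencoder for linear spectral unmixing.

    The encoder maps each pixel's spectrum to an abundance vector; a single
    bias-free linear decoder then reconstructs the spectrum, so the decoder
    weight matrix plays the role of the endmember matrix in y = E a.
    """

    def __init__(self, num_bands: int, num_endmembers: int):
        super().__init__()
        # Encoder: 1-D convolutions over the spectral axis, then a dense
        # projection to one value per endmember (the abundances).
        self.encoder = nn.Sequential(
            nn.Conv1d(1, 16, kernel_size=5, padding=2),
            nn.ReLU(),
            nn.Conv1d(16, 32, kernel_size=5, padding=2),
            nn.ReLU(),
            nn.Flatten(),
            nn.Linear(32 * num_bands, num_endmembers),
            nn.Softmax(dim=1),  # enforces non-negativity and sum-to-one
        )
        # Decoder: one linear layer; its columns act as endmember spectra.
        self.decoder = nn.Linear(num_endmembers, num_bands, bias=False)

    def forward(self, spectra: torch.Tensor):
        # spectra: (batch, num_bands) reflectance vectors
        abundances = self.encoder(spectra.unsqueeze(1))
        reconstruction = self.decoder(abundances)
        return reconstruction, abundances


if __name__ == "__main__":
    # Toy usage with Samson-like dimensions (156 bands, 3 endmembers).
    model = DCAESketch(num_bands=156, num_endmembers=3)
    pixels = torch.rand(8, 156)
    recon, abund = model(pixels)
    print(recon.shape, abund.shape)  # (8, 156) and (8, 3)
```

Training such a network against a reconstruction loss (e.g. RMSE between `recon` and the input spectra) would drive the decoder weights toward endmember estimates and the softmax outputs toward abundance maps; the paper's specific architecture, constraints, and loss should be taken from the article itself.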