Perceptual Loss-Constrained Adversarial Autoencoder Networks for Hyperspectral Unmixing

Published in: IEEE Geoscience and Remote Sensing Letters, Vol. 19, pp. 1-5
Main authors: Zhao, Min; Wang, Mou; Chen, Jie; Rahardja, Susanto
Format: Journal Article
Language: English
Published: Piscataway: The Institute of Electrical and Electronics Engineers, Inc. (IEEE), 2022
ISSN: 1545-598X, 1558-0571
Description
Abstract: Recently, deep autoencoder-based methods for blind spectral unmixing have attracted great attention because they can achieve superior performance. However, most autoencoder-based unmixing methods train their networks with a non-structured reconstruction loss, so band-to-band dependencies and fine-grained spectral information are ignored. To cope with this issue, we propose a general perceptual loss-constrained adversarial autoencoder network for hyperspectral unmixing. Specifically, an adversarial training process is used to update our framework. The discriminative network proves efficient at discovering discrepancies between reconstructed pixels and their corresponding ground truth. Moreover, the general perceptual loss is combined with the adversarial loss to further improve the consistency of high-level representations. Ablation studies verify the effectiveness of the proposed components of our framework, and experiments on both synthetic and real data illustrate its superiority over competing methods.
DOI: 10.1109/LGRS.2022.3144327
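
The abstract describes a training objective that combines a pixel-wise reconstruction term with an adversarial term from a discriminator and a perceptual term computed on high-level feature representations. The sketch below is a minimal PyTorch-style illustration of that composition under stated assumptions, not the authors' implementation: the network shapes, the feature extractor, and the loss weights w_adv and w_perc are all hypothetical choices made for illustration.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class UnmixingAutoencoder(nn.Module):
    """Toy unmixing autoencoder: pixel -> abundances -> reconstructed pixel."""
    def __init__(self, n_bands: int, n_endmembers: int):
        super().__init__()
        # Encoder maps a spectrum to abundance fractions (non-negative, sum to 1).
        self.encoder = nn.Sequential(
            nn.Linear(n_bands, n_endmembers),
            nn.Softmax(dim=-1),
        )
        # Linear decoder: its weight columns play the role of endmember spectra.
        self.decoder = nn.Linear(n_endmembers, n_bands, bias=False)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.decoder(self.encoder(x))

def generator_loss(x, x_hat, discriminator, feature_extractor,
                   w_adv: float = 1e-2, w_perc: float = 1e-1):
    """Reconstruction + adversarial + perceptual terms (weights are assumed)."""
    recon = F.mse_loss(x_hat, x)
    # Adversarial term: the autoencoder tries to produce reconstructions
    # that the discriminator labels as real (target = 1).
    adv = F.binary_cross_entropy(discriminator(x_hat),
                                 torch.ones(x.size(0), 1))
    # Perceptual term: match high-level features of real and reconstructed pixels.
    perc = F.mse_loss(feature_extractor(x_hat), feature_extractor(x))
    return recon + w_adv * adv + w_perc * perc

# Example usage with illustrative shapes (224 bands, 4 endmembers).
n_bands, n_endmembers = 224, 4
model = UnmixingAutoencoder(n_bands, n_endmembers)
discriminator = nn.Sequential(nn.Linear(n_bands, 64), nn.ReLU(),
                              nn.Linear(64, 1), nn.Sigmoid())
feature_extractor = nn.Sequential(nn.Linear(n_bands, 32), nn.ReLU())

x = torch.rand(8, n_bands)           # a batch of 8 hyperspectral pixels
loss = generator_loss(x, model(x), discriminator, feature_extractor)
loss.backward()                      # discriminator update step omitted
```

In a full adversarial loop, the discriminator would be trained in alternation to separate real pixels from reconstructions, and the perceptual features would typically come from a fixed pretrained network or the discriminator's intermediate layers; both are simplified here for brevity.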