Deep clustering using 3D attention convolutional autoencoder for hyperspectral image analysis

Detailed Bibliography
Published in: Scientific Reports, Vol. 14, No. 1, Art. 4209, 13 pp.
Main Authors: Zheng, Ziyou; Zhang, Shuzhen; Song, Hailong; Yan, Qi
Format: Journal Article
Language: English
Published: London: Nature Publishing Group UK, 20 Feb 2024
ISSN: 2045-2322
Description
Summary: Deep clustering has been widely applied in various fields, including natural image and language processing. However, when applied to hyperspectral image (HSI) processing, it encounters challenges due to the high dimensionality of HSI and its complex spatial-spectral characteristics. This study introduces a deep clustering model specifically tailored for HSI analysis. To address the high dimensionality issue, redundant dimensions of the HSI are first eliminated by combining principal component analysis (PCA) with t-distributed stochastic neighbor embedding (t-SNE). The reduced dataset is then fed into a three-dimensional attention convolutional autoencoder (3D-ACAE) to extract essential spatial-spectral features; the 3D-ACAE uses a spatial-spectral attention mechanism to enhance the captured features. Finally, the enhanced features pass through an embedding layer to create a compact data representation, which the clustering layer partitions into distinct clusters. Experimental results on three publicly available datasets validate the superiority of the proposed model for HSI analysis.
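The pipeline summarized above reduces a hyperspectral cube's spectral dimensionality before feature learning and clustering. A minimal illustrative sketch of the first and last stages (PCA reduction of a toy cube, then a plain k-means as a stand-in for the model's clustering layer) is shown below; this is not the authors' code, and the 3D-ACAE feature-extraction stage is omitted here:

```python
# Illustrative sketch (not the paper's implementation): PCA reduction of a
# hyperspectral cube followed by a simple k-means clustering step.
import numpy as np

def pca_reduce(cube, n_components):
    """Flatten an (H, W, B) cube to pixels x bands and project the
    centered data onto the top principal components via SVD."""
    h, w, b = cube.shape
    X = cube.reshape(-1, b).astype(np.float64)
    X -= X.mean(axis=0)                       # center each spectral band
    _, _, Vt = np.linalg.svd(X, full_matrices=False)
    return X @ Vt[:n_components].T            # shape (H*W, n_components)

def kmeans(X, k, n_iter=50, seed=0):
    """Plain Lloyd's k-means; the paper uses a learned clustering layer."""
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), size=k, replace=False)]
    for _ in range(n_iter):
        dists = np.linalg.norm(X[:, None] - centers[None], axis=2)
        labels = dists.argmin(axis=1)
        for j in range(k):
            if np.any(labels == j):
                centers[j] = X[labels == j].mean(axis=0)
    return labels

rng = np.random.default_rng(1)
cube = rng.random((16, 16, 40))               # toy 16x16 scene, 40 bands
Z = pca_reduce(cube, 8)                       # reduced per-pixel features
labels = kmeans(Z, k=3)
print(Z.shape, labels.shape)                  # (256, 8) (256,)
```

In the paper's full model, the reduced features would instead pass through the 3D attention convolutional autoencoder and an embedding layer before clustering; t-SNE (used alongside PCA in the paper) is also left out of this sketch.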
DOI: 10.1038/s41598-024-54547-2