Ultralightweight Spatial–Spectral Feature Cooperation Network for Change Detection in Remote Sensing Images

Detailed Bibliography
Published in: IEEE Transactions on Geoscience and Remote Sensing, Volume 61, pp. 1–14
Main Authors: Lei, Tao; Geng, Xinzhe; Ning, Hailong; Lv, Zhiyong; Gong, Maoguo; Jin, Yaochu; Nandi, Asoke K.
Format: Journal Article
Language: English
Published: New York: The Institute of Electrical and Electronics Engineers, Inc. (IEEE), 2023
ISSN: 0196-2892, 1558-0644
Description
Summary: Deep convolutional neural networks (CNNs) have achieved much success in remote sensing image change detection (CD) but still suffer from two main problems. First, existing multiscale feature fusion methods often use redundant feature extraction and fusion strategies, which lead to high computational costs and memory usage. Second, the regular attention mechanisms used in CD struggle to model spatial–spectral features and generate 3-D attention weights at the same time, ignoring the cooperation between spatial features and spectral features. To address these issues, an efficient ultralightweight spatial–spectral feature cooperation network (USSFC-Net) is proposed for CD in this article. The proposed USSFC-Net has two main advantages. First, a multiscale decoupled convolution (MSDConv) is designed, which is clearly different from the popular atrous spatial pyramid pooling (ASPP) module and its variants, since it can flexibly capture the multiscale features of changed objects using cyclic multiscale convolution. Meanwhile, the design of MSDConv greatly reduces the number of parameters and computational redundancy. Second, an efficient spatial–spectral feature cooperation (SSFC) strategy is introduced to obtain richer features. SSFC differs from existing 2-D attention mechanisms in that it learns 3-D spatial–spectral attention weights without adding any parameters. Experiments on three datasets for remote sensing image CD demonstrate that the proposed USSFC-Net achieves better CD accuracy than most CNN-based methods while requiring lower computational costs and fewer parameters, and it is even superior to some Transformer-based methods. The code is available at https://github.com/SUST-reynole/USSFC-Net.
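The abstract describes MSDConv and SSFC only at a high level; the authoritative implementation is in the linked repository. Purely as an illustration of the two general ideas (lightweight multiscale feature extraction via depthwise convolutions, and parameter-free 3-D spatial–spectral attention), the following minimal PyTorch sketch uses assumed dilation rates and an assumed variance-based weighting. It is not the paper's exact formulation.

```python
# Illustrative sketch only: NOT the paper's SSFC/MSDConv definitions.
# Module names, dilation rates, and the variance-based weighting are assumptions.
import torch
import torch.nn as nn


class SpatialSpectralAttention3D(nn.Module):
    """Parameter-free 3-D attention: every position (c, h, w) receives its own
    weight, derived from its deviation from the per-channel spatial mean (assumed form)."""

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (B, C, H, W)
        mu = x.mean(dim=(2, 3), keepdim=True)                     # per-channel spatial mean
        var = x.var(dim=(2, 3), keepdim=True, unbiased=False) + 1e-5
        energy = (x - mu) ** 2 / var                              # normalized deviation, shape (B, C, H, W)
        weights = torch.sigmoid(energy)                           # 3-D attention weights, no learnable parameters
        return x * weights


class MultiScaleDepthwiseConv(nn.Module):
    """Lightweight multiscale feature extraction: depthwise convolutions with several
    dilation rates (assumed rates), fused by a 1x1 pointwise convolution."""

    def __init__(self, channels: int, dilations=(1, 2, 3)):
        super().__init__()
        self.branches = nn.ModuleList(
            nn.Conv2d(channels, channels, kernel_size=3, padding=d, dilation=d,
                      groups=channels, bias=False)
            for d in dilations
        )
        self.fuse = nn.Conv2d(channels * len(dilations), channels, kernel_size=1, bias=False)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.fuse(torch.cat([branch(x) for branch in self.branches], dim=1))


if __name__ == "__main__":
    feats = torch.randn(2, 32, 64, 64)                            # dummy bitemporal difference features
    out = SpatialSpectralAttention3D()(MultiScaleDepthwiseConv(32)(feats))
    print(out.shape)  # torch.Size([2, 32, 64, 64])
```

The attention module above adds no learnable parameters, mirroring the abstract's claim that SSFC learns 3-D attention weights without extra parameters, but the actual weighting used in USSFC-Net may differ from this sketch.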
DOI: 10.1109/TGRS.2023.3261273