CDANet: Contextual Detail-Aware Network for High-Spatial-Resolution Remote-Sensing Imagery Shadow Detection


Detailed Bibliography
Published in: IEEE Transactions on Geoscience and Remote Sensing, Vol. 60, pp. 1-15
Main Authors: Zhu, Qiqi; Yang, Yang; Sun, Xiongli; Guo, Mingqiang
Format: Journal Article
Language: English
Published: New York: The Institute of Electrical and Electronics Engineers, Inc. (IEEE), 2022
ISSN: 0196-2892, 1558-0644
Description
Summary: Shadow detection automatically marks shadow pixels in high-spatial-resolution (HSR) imagery with specific categories based on meaningful colorific features. Accurate shadow mapping is crucial for interpreting images and recovering radiometric information. Recent studies have demonstrated the superiority of deep learning for shadow detection in very-high-resolution satellite imagery. However, previous methods usually stack convolutional layers, which causes the loss of spatial information. In addition, shadows vary in scale and shape, and small, irregular shadows are challenging to detect. Moreover, the unbalanced distribution of foreground and background biases the common binary cross-entropy loss function, which seriously affects model training. To remedy these issues, a contextual detail-aware network (CDANet), a novel framework for extracting accurate and complete shadows, is proposed. In CDANet, a double-branch module is embedded in the encoder-decoder structure to effectively alleviate the loss of low-level local information during convolution. A contextual semantic fusion connection with a residual dilation module is proposed to provide multiscale contextual information for diverse shadows. A hybrid loss function is designed to retain the detailed information of tiny shadows; it computes the shadow distribution per pixel and improves the robustness of the model. The performance of the proposed method is validated on two distinct shadow detection datasets, and CDANet shows higher portability and robustness than other methods.
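The abstract motivates a hybrid loss because plain binary cross-entropy is biased when shadow pixels are far outnumbered by background. The record does not give CDANet's exact formulation, so the following is only a minimal NumPy sketch of the general idea, assuming a common hybrid of per-pixel BCE with a soft Dice term (the `bce_weight` parameter and function name are illustrative, not from the paper):

```python
import numpy as np

def hybrid_loss(pred, target, eps=1e-7, bce_weight=0.5):
    """Hybrid of per-pixel binary cross-entropy and a soft Dice term.

    `pred` holds predicted shadow probabilities; `target` holds {0, 1}
    ground-truth shadow masks. The Dice term is insensitive to the
    foreground/background ratio, countering the imbalance that biases
    plain BCE. This is an illustrative sketch, not CDANet's exact loss.
    """
    pred = np.clip(pred, eps, 1.0 - eps)
    # Per-pixel BCE, averaged over all pixels.
    bce = -np.mean(target * np.log(pred) + (1.0 - target) * np.log(1.0 - pred))
    # Soft Dice loss: 1 - 2|P.T| / (|P| + |T|), computed over the whole mask.
    dice = 1.0 - (2.0 * np.sum(pred * target) + eps) / (
        np.sum(pred) + np.sum(target) + eps
    )
    return bce_weight * bce + (1.0 - bce_weight) * dice
```

A perfect prediction drives both terms toward zero, while an all-background prediction on a mostly-background image is still penalized by the Dice term even though its BCE can be small.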
DOI: 10.1109/TGRS.2022.3143886