An Encoder–Decoder Deep Learning Framework for Building Footprints Extraction from Aerial Imagery

Detailed Bibliography
Published in: Arabian Journal for Science and Engineering (2023), Volume 48, Issue 2, pp. 1273-1284
Main Authors: Khan, Sultan Daud; Alarabi, Louai; Basalamah, Saleh
Format: Journal Article
Language: English
Published: Berlin/Heidelberg: Springer Berlin Heidelberg (Springer Nature B.V.), 1 February 2023
ISSN: 2193-567X, 1319-8025, 2191-4281
Description
Summary: Building footprint segmentation in high-resolution satellite images has a wide range of applications in disaster management, land cover analysis and urban planning. However, automatic extraction of building footprints poses many challenges due to large variations in building sizes, complex structures and cluttered backgrounds. Because of these challenges, current state-of-the-art methods cannot reliably extract complete building footprints or separate the boundaries of adjacent buildings. To this end, we propose an encoder–decoder framework that automatically extracts building footprints from satellite images. Specifically, the encoder uses a dense network consisting of dense convolutional and transition blocks to capture global multi-scale features, while the decoder uses a sequence of deconvolution layers to recover the lost spatial information and produce a dense segmentation map in which white pixels represent buildings and black pixels represent background/other objects. In addition, we train the network in an end-to-end fashion with a hybrid loss that enhances the performance of the framework. We evaluate the framework on two publicly available benchmark datasets and demonstrate that the proposed method outperforms competing methods on these challenging datasets.
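
The abstract describes the architecture only at a high level. The sketch below is a minimal, hypothetical PyTorch rendering of that general idea (a dense-block encoder with transition blocks, a deconvolution decoder, and a BCE-plus-Dice hybrid loss); the layer counts, growth rate, channel widths and exact loss terms are illustrative assumptions and are not taken from the paper.

# Hypothetical sketch of the encoder-decoder idea described in the abstract (PyTorch).
# Block sizes, growth rate and the BCE + Dice "hybrid loss" are assumptions for illustration.
import torch
import torch.nn as nn
import torch.nn.functional as F

class DenseBlock(nn.Module):
    """Dense block: each layer receives the concatenation of all previous feature maps."""
    def __init__(self, in_ch, growth=16, n_layers=3):
        super().__init__()
        self.layers = nn.ModuleList()
        ch = in_ch
        for _ in range(n_layers):
            self.layers.append(nn.Sequential(
                nn.BatchNorm2d(ch), nn.ReLU(inplace=True),
                nn.Conv2d(ch, growth, kernel_size=3, padding=1)))
            ch += growth
        self.out_channels = ch

    def forward(self, x):
        for layer in self.layers:
            x = torch.cat([x, layer(x)], dim=1)  # dense connectivity
        return x

class Transition(nn.Module):
    """Transition block: 1x1 convolution followed by 2x spatial downsampling."""
    def __init__(self, in_ch, out_ch):
        super().__init__()
        self.block = nn.Sequential(nn.Conv2d(in_ch, out_ch, kernel_size=1), nn.AvgPool2d(2))

    def forward(self, x):
        return self.block(x)

class EncoderDecoder(nn.Module):
    """Dense encoder for multi-scale features + deconvolution decoder producing a 1-channel mask."""
    def __init__(self):
        super().__init__()
        self.stem = nn.Conv2d(3, 32, kernel_size=3, padding=1)
        self.db1 = DenseBlock(32)
        self.tr1 = Transition(self.db1.out_channels, 64)
        self.db2 = DenseBlock(64)
        self.tr2 = Transition(self.db2.out_channels, 128)
        self.db3 = DenseBlock(128)
        # Decoder: deconvolution (transposed convolution) layers recover spatial resolution.
        self.up1 = nn.ConvTranspose2d(self.db3.out_channels, 64, kernel_size=2, stride=2)
        self.up2 = nn.ConvTranspose2d(64, 32, kernel_size=2, stride=2)
        self.head = nn.Conv2d(32, 1, kernel_size=1)  # 1 = building, 0 = background/other

    def forward(self, x):
        x = self.stem(x)
        x = self.tr1(self.db1(x))
        x = self.tr2(self.db2(x))
        x = self.db3(x)
        x = F.relu(self.up1(x))
        x = F.relu(self.up2(x))
        return self.head(x)  # logits; apply sigmoid for per-pixel building probabilities

def hybrid_loss(logits, target, eps=1e-6):
    """Assumed hybrid loss: binary cross-entropy plus soft Dice."""
    bce = F.binary_cross_entropy_with_logits(logits, target)
    probs = torch.sigmoid(logits)
    inter = (probs * target).sum()
    dice = 1 - (2 * inter + eps) / (probs.sum() + target.sum() + eps)
    return bce + dice

if __name__ == "__main__":
    model = EncoderDecoder()
    img = torch.randn(1, 3, 256, 256)                       # dummy aerial tile
    mask = torch.randint(0, 2, (1, 1, 256, 256)).float()    # dummy binary footprint mask
    loss = hybrid_loss(model(img), mask)
    print(loss.item())

The main block doubles as a usage example: a 256x256 tile passes through two downsampling stages and two deconvolution stages, so the output mask matches the input resolution.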
DOI: 10.1007/s13369-022-06768-8