An Encoder–Decoder Deep Learning Framework for Building Footprints Extraction from Aerial Imagery


Bibliographic Details
Published in: Arabian Journal for Science and Engineering (2011), Vol. 48, No. 2, pp. 1273–1284
Main Authors: Khan, Sultan Daud, Alarabi, Louai, Basalamah, Saleh
Format: Journal Article
Language: English
Published: Berlin/Heidelberg: Springer Berlin Heidelberg, 01.02.2023
Springer Nature B.V.
ISSN: 2193-567X, 1319-8025, 2191-4281
Description
Summary: Building footprint segmentation in high-resolution satellite images has a wide range of applications in disaster management, land cover analysis, and urban planning. However, automatic extraction of building footprints poses many challenges due to large variations in building sizes, complex structures, and cluttered backgrounds. Because of these challenges, current state-of-the-art methods cannot fully extract building footprints or the boundaries between different buildings. To this end, we propose an encoder–decoder framework that automatically extracts building footprints from satellite images. Specifically, the encoder part of the network uses a dense network consisting of dense convolutional and transition blocks to capture global multi-scale features. The decoder part of the network uses a sequence of deconvolution layers to recover the lost spatial information and produce a dense segmentation map, in which white pixels represent buildings and black pixels represent the background and other objects. In addition, we train the network in an end-to-end fashion using a hybrid loss that enhances the performance of the framework. We use two publicly available benchmark datasets to gauge the performance of the framework. The experiments demonstrate that the proposed method achieves the best performance among the compared approaches on these challenging datasets.
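The architecture described in the abstract — a dense encoder (dense convolutional blocks separated by transition blocks) followed by a deconvolutional decoder and a hybrid training loss — can be sketched as below. This is an illustrative reconstruction, not the authors' code: the block counts, channel widths, growth rate, and the BCE-plus-Dice form of the hybrid loss are assumptions chosen for clarity, and the paper's actual hyperparameters may differ.

```python
# Illustrative sketch of a dense encoder / deconvolutional decoder for binary
# building-footprint segmentation. All sizes and the hybrid-loss weighting are
# hypothetical choices, not values taken from the paper.
import torch
import torch.nn as nn
import torch.nn.functional as F


class DenseBlock(nn.Module):
    """Dense block: each conv layer sees the concatenation of all earlier features."""

    def __init__(self, in_ch, growth=16, layers=3):
        super().__init__()
        self.convs = nn.ModuleList()
        ch = in_ch
        for _ in range(layers):
            self.convs.append(nn.Sequential(
                nn.BatchNorm2d(ch), nn.ReLU(inplace=True),
                nn.Conv2d(ch, growth, kernel_size=3, padding=1)))
            ch += growth
        self.out_ch = ch  # channels after concatenating input + all layer outputs

    def forward(self, x):
        feats = [x]
        for conv in self.convs:
            feats.append(conv(torch.cat(feats, dim=1)))
        return torch.cat(feats, dim=1)


class TransitionDown(nn.Module):
    """Transition block: 1x1 conv compresses channels, pooling halves resolution."""

    def __init__(self, in_ch, out_ch):
        super().__init__()
        self.conv = nn.Conv2d(in_ch, out_ch, kernel_size=1)
        self.pool = nn.AvgPool2d(2)

    def forward(self, x):
        return self.pool(self.conv(x))


class EncoderDecoder(nn.Module):
    """Dense encoder captures multi-scale features; deconvolutions restore resolution."""

    def __init__(self):
        super().__init__()
        self.stem = nn.Conv2d(3, 32, kernel_size=3, padding=1)
        self.db1 = DenseBlock(32)
        self.td1 = TransitionDown(self.db1.out_ch, 64)
        self.db2 = DenseBlock(64)
        self.td2 = TransitionDown(self.db2.out_ch, 128)
        # Decoder: transposed convolutions (deconvolutions) recover spatial detail.
        self.up1 = nn.ConvTranspose2d(128, 64, kernel_size=2, stride=2)
        self.up2 = nn.ConvTranspose2d(64, 32, kernel_size=2, stride=2)
        self.head = nn.Conv2d(32, 1, kernel_size=1)  # 1-channel building mask

    def forward(self, x):
        x = self.stem(x)
        x = self.td1(self.db1(x))
        x = self.td2(self.db2(x))
        x = F.relu(self.up1(x))
        x = F.relu(self.up2(x))
        return self.head(x)  # logits; sigmoid > 0.5 yields the binary footprint mask


def hybrid_loss(logits, target, alpha=0.5):
    """Assumed hybrid loss: weighted sum of binary cross-entropy and soft Dice."""
    bce = F.binary_cross_entropy_with_logits(logits, target)
    prob = torch.sigmoid(logits)
    inter = (prob * target).sum()
    dice = 1 - (2 * inter + 1) / (prob.sum() + target.sum() + 1)
    return alpha * bce + (1 - alpha) * dice
```

Thresholding the sigmoid of the output logits at 0.5 produces the white-buildings / black-background map the abstract describes; the Dice term in the loss counteracts the class imbalance between sparse building pixels and the dominant background.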
DOI: 10.1007/s13369-022-06768-8