A Depth-Wise Separable U-Net Architecture with Multiscale Filters to Detect Sinkholes



Bibliographic Details
Published in: Remote Sensing (Basel, Switzerland), Volume 15, Issue 5, p. 1384
Main Authors: Alshawi, Rasha; Hoque, Md Tamjidul; Flanagin, Maik C.
Medium: Journal Article
Language: English
Published: Basel: MDPI AG, 01.03.2023
ISSN: 2072-4292
Description
Summary: Numerous variants of the basic deep segmentation model—U-Net—have emerged in recent years, achieving reliable performance across different benchmarks. In this paper, we propose an improved version of U-Net with higher performance and reduced complexity. This improvement was achieved by introducing a sparsely connected depth-wise separable block with multiscale filters, enabling the network to capture features at different scales. The use of depth-wise separable convolution significantly reduces the number of trainable parameters, making training faster while reducing the risk of overfitting. We used our developed sinkhole dataset and the available benchmark nuclei dataset to assess the proposed model’s performance. Pixel-wise annotation is laborious and requires a great deal of human expertise; therefore, we propose a fully deep convolutional autoencoder network that utilizes the proposed block to automatically annotate the sinkhole dataset. Our segmentation model outperformed state-of-the-art methods, including U-Net, Attention U-Net, Depth-Separable U-Net, and Inception U-Net, achieving an average improvement of 1.2% and 1.4% on the sinkhole and nuclei datasets, respectively, with 94% and 92% accuracy, as well as a reduced training time. It also achieved 83% and 80% intersection-over-union (IoU) on the two datasets, respectively, which is an 11.8% and 9.3% average improvement over the above-mentioned models.
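The abstract's claim that depth-wise separable convolution "significantly reduces the number of trainable parameters" follows from factoring one dense k×k convolution into a per-channel (depth-wise) k×k filter plus a 1×1 point-wise projection. A minimal sketch of the parameter counts, assuming bias-free layers and illustrative channel sizes not taken from the paper:

```python
def standard_conv_params(c_in, c_out, k):
    """Trainable weights of a standard k x k convolution: one dense
    k x k filter per (input channel, output channel) pair."""
    return c_in * c_out * k * k

def depthwise_separable_params(c_in, c_out, k):
    """Depth-wise separable factorization: one k x k filter per input
    channel, followed by a 1 x 1 point-wise mixing convolution."""
    return c_in * k * k + c_in * c_out

# Illustrative layer sizes (hypothetical, not from the paper):
c_in, c_out, k = 64, 128, 3
std = standard_conv_params(c_in, c_out, k)        # 64 * 128 * 9 = 73728
sep = depthwise_separable_params(c_in, c_out, k)  # 64 * 9 + 64 * 128 = 8768
print(f"standard: {std}, separable: {sep}, ratio: {std / sep:.1f}x")
```

Because the depth-wise term grows only linearly in the kernel area, stacking several depth-wise branches with different kernel sizes (the multiscale idea in the abstract) stays far cheaper than a single dense convolution of comparable receptive field.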
DOI: 10.3390/rs15051384