Multi-modal deep network for RGB-D segmentation of clothes

Published in: Electronics Letters, Volume 56, Issue 9, pp. 432-435
Main authors: Joukovsky, B., Hu, P., Munteanu, A.
Format: Journal Article
Language: English
Published: The Institution of Engineering and Technology, 30.04.2020
ISSN: 0013-5194, 1350-911X
Description
Summary: In this Letter, the authors propose a deep learning-based method for semantic segmentation of clothes from RGB-D images of people. First, they present a synthetic dataset containing more than 50,000 RGB-D samples of characters in different clothing styles, featuring various poses and environments, for a total of nine semantic classes. The proposed data generation pipeline allows for fast production of RGB images, depth images, and ground-truth label maps. Second, a novel multi-modal encoder–decoder convolutional network is proposed which operates on the RGB and depth modalities. Multi-modal features are merged by trained fusion modules that apply multi-scale atrous convolutions in the fusion process. The method is numerically evaluated on synthetic data and visually assessed on real-world data. The experiments demonstrate the efficiency of the proposed model over existing methods.
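
The fusion modules described in the summary combine RGB and depth features through multi-scale atrous (dilated) convolutions. The following PyTorch sketch illustrates the general idea only; the class name AtrousFusion, the channel counts, and the dilation rates (1, 2, 4) are illustrative assumptions, not the authors' published design.

import torch
import torch.nn as nn

class AtrousFusion(nn.Module):
    """Illustrative fusion of RGB and depth feature maps with
    multi-scale atrous (dilated) convolutions. The exact module in
    the Letter may differ; this is an assumed configuration."""

    def __init__(self, channels: int, rates=(1, 2, 4)):
        super().__init__()
        # One 3x3 atrous branch per dilation rate; each branch sees the
        # concatenated RGB and depth features (2 * channels input planes).
        self.branches = nn.ModuleList(
            nn.Conv2d(2 * channels, channels, kernel_size=3,
                      padding=r, dilation=r)
            for r in rates
        )
        # 1x1 projection merges the multi-scale responses into one map.
        self.project = nn.Conv2d(len(rates) * channels, channels, kernel_size=1)
        self.relu = nn.ReLU(inplace=True)

    def forward(self, rgb_feat, depth_feat):
        x = torch.cat([rgb_feat, depth_feat], dim=1)               # (N, 2C, H, W)
        scales = [self.relu(b(x)) for b in self.branches]          # each (N, C, H, W)
        return self.relu(self.project(torch.cat(scales, dim=1)))  # (N, C, H, W)

# Usage: fuse 64-channel feature maps from the RGB and depth encoder streams.
fusion = AtrousFusion(channels=64)
fused = fusion(torch.randn(1, 64, 32, 32), torch.randn(1, 64, 32, 32))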
DOI: 10.1049/el.2019.4150