Facial Image Inpainting with Variational Autoencoder



Bibliographic Details
Published in: 2019 2nd International Conference of Intelligent Robotic and Control Engineering (IRCE), pp. 119-122
Main Authors: Tu, Ching-Ting; Chen, Yi-Fu
Format: Conference Proceedings
Language: English
Published: IEEE, 01.08.2019
Online Access: Full text
Description
Summary: This paper proposes a learning-based approach to reveal the diverse possible appearances beneath the missing area of an occluded, previously unseen face image. In general, many facial appearances are plausible for the missing area; for example, for a male wearing a scarf, it is difficult to predict whether he has a beard in the covered region. In this paper, we propose a novel method for facial image inpainting that generates the missing facial appearance conditioned on the observable appearance. Starting from a standard Variational Autoencoder (VAE) trained for un-occluded face generation, we search for the set of possible VAE coding vectors for the current occluded input image, where the predicted coding should be robust to the missing area. The set of possible facial appearances is then recovered through the decoder of the VAE model. Experiments show that our method successfully predicts recovered results in large missing regions; these results are diverse, and all are consistent with the observable facial area, i.e., both the facial geometry and the personal characteristics are preserved.
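The summary outlines a latent-space search: given a VAE pre-trained on un-occluded faces, find coding vectors whose decoded images agree with the observed pixels, then decode several such codes to obtain diverse fills. The following PyTorch sketch illustrates one plausible form of that search under stated assumptions; the function `inpaint_candidates`, the latent dimension, the loss weights, and the optimization schedule are hypothetical placeholders for illustration, not the authors' actual procedure.

```python
# Illustrative sketch (not the paper's exact method): search for VAE latent codes
# that reproduce the observed pixels of an occluded face, then decode several such
# codes to obtain diverse candidate completions of the missing area.
import torch

def inpaint_candidates(vae_decoder, image, mask, latent_dim=128,
                       num_samples=5, steps=300, lr=0.05):
    """image: (1, 3, H, W) occluded face; mask: (1, 1, H, W), 1 = observed pixel.
    vae_decoder maps a latent vector of size latent_dim to a (1, 3, H, W) image.
    All names and hyperparameters here are assumptions for illustration."""
    candidates = []
    for _ in range(num_samples):
        # Different random starting codes yield different plausible completions.
        z = torch.randn(1, latent_dim, requires_grad=True)
        opt = torch.optim.Adam([z], lr=lr)
        for _ in range(steps):
            opt.zero_grad()
            recon = vae_decoder(z)
            # Fit only the observable area, so the code is robust to the missing region.
            fit = ((recon - image) * mask).pow(2).sum() / mask.sum()
            # Small penalty keeps z close to the VAE prior, i.e. on the face manifold.
            prior = 1e-3 * z.pow(2).mean()
            (fit + prior).backward()
            opt.step()
        with torch.no_grad():
            recon = vae_decoder(z)
            # Paste the decoder's prediction into the missing area only.
            candidates.append(image * mask + recon * (1 - mask))
    return candidates
```

Because only the observed pixels enter the fitting term, different random initializations of the latent code can converge to distinct yet consistent completions, which is one way to realize the diverse-appearance behavior the summary describes.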
DOI: 10.1109/IRCE.2019.00031