Face Editing Using Part‐Based Optimization of the Latent Space

Detailed Bibliography
Published in: Computer Graphics Forum, Volume 42, Issue 2, pp. 269-279
Main Authors: Aliari, Mohammad Amin; Beauchamp, Andre; Popa, Tiberiu; Paquette, Eric
Format: Journal Article
Language: English
Published: Oxford: Blackwell Publishing Ltd, 01.05.2023
ISSN: 0167-7055, 1467-8659
Description
Summary: We propose an approach for interactive 3D face editing based on deep generative models. Most current face modeling methods rely on linear models and cannot express complex, non-linear deformations. In contrast to 3D morphable face models based on Principal Component Analysis (PCA), we introduce a novel architecture based on variational autoencoders. Our architecture has multiple encoders (one for each part of the face, such as the nose and mouth) which feed a single decoder. As a result, each sub-vector of the latent vector represents one part. We train our model with a novel loss function that further disentangles the space based on the different parts of the face. The output of the network is a whole 3D face. Hence, unlike part-based PCA methods, our model learns to merge the parts intrinsically and does not require an additional merging process. To achieve interactive face modeling, we optimize for the latent variables given vertex positional constraints provided by a user. To avoid unwanted global changes elsewhere on the face, we only optimize the subset of the latent vector that corresponds to the part of the face being modified. Our editing optimization converges in less than a second. Our results show that the proposed approach supports a broader range of editing constraints and generates more realistic 3D faces than PCA-based approaches.
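
The abstract describes two mechanisms that a short illustration can make concrete: a VAE with one encoder per face part feeding a single decoder, and an edit step that optimizes only the latent sub-vector of the part being modified. The PyTorch sketch below is not the authors' implementation; the part split, vertex counts, layer sizes, per-part latent dimension, and optimizer settings are all assumptions, and the paper's part-based disentangling loss is omitted.

```python
# Minimal sketch of a part-based VAE: one encoder per face part, a single
# decoder over the concatenated latent sub-vectors. All sizes are illustrative.
import torch
import torch.nn as nn

PARTS = ["nose", "mouth", "eyes", "rest"]   # hypothetical part split
PART_DIM = 16                               # latent size per part (assumed)
N_VERTS = 5023                              # whole-face vertex count (assumed)

class PartEncoder(nn.Module):
    """Encodes the vertices of one face part into a Gaussian posterior."""
    def __init__(self, n_part_verts, z_dim=PART_DIM):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(n_part_verts * 3, 256), nn.ReLU(),
            nn.Linear(256, 128), nn.ReLU(),
        )
        self.mu = nn.Linear(128, z_dim)
        self.logvar = nn.Linear(128, z_dim)

    def forward(self, x):                   # x: (B, n_part_verts * 3)
        h = self.net(x)
        return self.mu(h), self.logvar(h)

class PartBasedVAE(nn.Module):
    def __init__(self, part_vert_counts):   # {part name: vertex count}
        super().__init__()
        self.encoders = nn.ModuleDict(
            {p: PartEncoder(n) for p, n in part_vert_counts.items()})
        # One decoder maps the concatenated latent to the whole face, so the
        # merging of parts is learned rather than done as a post-process.
        self.decoder = nn.Sequential(
            nn.Linear(PART_DIM * len(part_vert_counts), 256), nn.ReLU(),
            nn.Linear(256, 512), nn.ReLU(),
            nn.Linear(512, N_VERTS * 3),
        )

    def encode(self, part_inputs):           # {part name: (B, n*3) tensor}
        zs, mus, logvars = [], [], []
        for p in PARTS:
            mu, logvar = self.encoders[p](part_inputs[p])
            std = torch.exp(0.5 * logvar)
            zs.append(mu + std * torch.randn_like(std))  # reparameterization
            mus.append(mu)
            logvars.append(logvar)
        return torch.cat(zs, -1), torch.cat(mus, -1), torch.cat(logvars, -1)

    def forward(self, part_inputs):
        z, mu, logvar = self.encode(part_inputs)
        return self.decoder(z).view(-1, N_VERTS, 3), mu, logvar

def edit_face(model, z, part_idx, constraint_ids, targets,
              steps=100, lr=0.05):
    """Gradient-based edit: optimize only the sub-vector of the edited part
    so the decoded face meets the user's vertex constraints. A sketch of the
    idea, not the paper's exact objective or solver."""
    lo, hi = part_idx * PART_DIM, (part_idx + 1) * PART_DIM
    z = z.detach()
    z_part = z[:, lo:hi].clone().requires_grad_(True)
    opt = torch.optim.Adam([z_part], lr=lr)
    for _ in range(steps):
        z_full = torch.cat([z[:, :lo], z_part, z[:, hi:]], dim=1)
        verts = model.decoder(z_full).view(-1, N_VERTS, 3)
        # Squared distance between constrained vertices and user targets.
        loss = ((verts[:, constraint_ids] - targets) ** 2).mean()
        opt.zero_grad()
        loss.backward()
        opt.step()
    return torch.cat([z[:, :lo], z_part.detach(), z[:, hi:]], dim=1)
```

Because the decoder always emits the whole face, moving one sub-vector edits the corresponding region while the network's learned blending keeps the rest of the surface coherent, which matches the abstract's claim that no separate merging step is needed.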
DOI: 10.1111/cgf.14760