Unsupervised learning with a physics-based autoencoder for estimating the thickness and mixing ratio of pigments


Detailed description

Bibliographic details
Published in: Journal of the Optical Society of America. A, Optics, Image Science, and Vision; Vol. 40, No. 1, p. 116
Main authors: Shitomi, Ryuta; Tsuji, Mayuka; Fujimura, Yuki; Funatomi, Takuya; Mukaigawa, Yasuhiro; Morimoto, Tetsuro; Oishi, Takeshi; Takamatsu, Jun; Ikeuchi, Katsushi
Format: Journal Article
Language: English
Published: United States, 01.01.2023
ISSN: 1520-8532
Description
Abstract: Layered surface objects represented by decorated tomb murals and watercolors are in danger of deterioration and damage. To address these dangers, it is necessary to analyze the pigments' thickness and mixing ratio and record the current status. This paper proposes an unsupervised autoencoder model for thickness and mixing ratio estimation. The input of our autoencoder is spectral data of layered surface objects. Our autoencoder is unique, to our knowledge, in that the decoder part uses a physical model, the Kubelka-Munk model. Since we use the Kubelka-Munk model for the decoder, the latent variables in the middle layer can be interpreted as the pigment thickness and mixing ratio. We conducted a quantitative evaluation using synthetic data and confirmed that our autoencoder provides a highly accurate estimation. We measured an object with layered surface pigments for qualitative evaluation and confirmed that our method is valid in an actual environment. We also demonstrate the superiority of our unsupervised autoencoder over supervised learning.
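The abstract describes a decoder built from the Kubelka-Munk two-flux layer model, so that the latent variables (thickness and mixing ratio) map physically to a reflectance spectrum. Below is a minimal sketch of such a physics-based decoder under illustrative assumptions: the pigment absorption (`K`) and scattering (`S`) spectra are hypothetical values, and the mixture coefficients are assumed to combine linearly. This is not the authors' implementation, only the standard finite-thickness Kubelka-Munk reflectance formula applied to the latent variables:

```python
import numpy as np

# Hypothetical absorption (K) and scattering (S) spectra for two pigments,
# sampled at three wavelengths (illustrative values, not measured data).
K = np.array([[0.8, 0.3, 0.1],
              [0.2, 0.6, 0.9]])   # shape: (n_pigments, n_wavelengths)
S = np.array([[1.0, 1.2, 1.1],
              [0.9, 1.0, 1.3]])

def km_decoder(thickness, ratios, K, S, R_bg=1.0):
    """Kubelka-Munk forward model: latent (thickness, ratios) -> spectrum.

    thickness: layer thickness d
    ratios:    pigment mixing ratios (non-negative, summing to 1)
    R_bg:      reflectance of the background (substrate)
    """
    Km = ratios @ K                       # mixture absorption per wavelength
    Sm = ratios @ S                       # mixture scattering per wavelength
    a = 1.0 + Km / Sm
    b = np.sqrt(a**2 - 1.0)
    coth = 1.0 / np.tanh(b * Sm * thickness)
    # Finite-thickness KM reflectance over a background with reflectance R_bg
    return (1.0 - R_bg * (a - b * coth)) / (a - R_bg + b * coth)

ratios = np.array([0.7, 0.3])             # latent mixing ratio
spectrum = km_decoder(thickness=0.5, ratios=ratios, K=K, S=S)
```

In an autoencoder of the kind the abstract outlines, an encoder network would map a measured spectrum to `(thickness, ratios)`, this differentiable decoder would reconstruct the spectrum, and the reconstruction error would drive unsupervised training; as thickness grows, the formula converges to the opaque-layer reflectance a − b, independent of the background.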
DOI:10.1364/JOSAA.472775