Kinship Verification in Childhood Images Using Vision Transformer

Bibliographic Details
Published in: Procedia Computer Science, Volume 258, pp. 3105–3114
Main Authors: Oruganti, Madhu; Meenpal, Toshanlal; Majumdar, Saikat; Tekchandani, Hitesh
Format: Journal Article
Language: English
Published: Elsevier B.V., 2025
ISSN: 1877-0509
Description
Summary: Facial kinship verification involves determining whether two face images belong to relatives, a task that is particularly challenging due to subtle differences in facial features and large intra-class variations. In recent years, deep learning models have shown great promise in addressing this problem. In this work, we propose a Vision Transformer (ViT) model for facial kinship verification, leveraging the proven effectiveness of Transformer architectures in natural language processing. The Vision Transformer is trained end-to-end on two benchmark datasets: the large-scale Families in the Wild (FIW) dataset, consisting of thousands of face images with corresponding kinship labels, and the smaller KinFaceW-II dataset. Our model employs multiple attention mechanisms to capture complex relationships between facial features and produce a final kinship prediction. Experimental results demonstrate that our approach outperforms state-of-the-art methods, achieving an average accuracy of 92% on the FIW dataset and an F1 score of 0.85. The Euclidean distance metric further enhances the classification of kin and non-kin pairs. These findings confirm the effectiveness of Vision Transformer models for facial kinship verification and underscore their potential for future research in this domain.
DOI: 10.1016/j.procs.2025.04.568
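
Note: the summary above describes a pipeline in which ViT embeddings of two face images are compared with a Euclidean distance metric. The Python sketch below illustrates that general idea only; it uses a generic pretrained ViT backbone from the timm library rather than the authors' fine-tuned model, and the 224x224 input size and 1.1 decision threshold are illustrative assumptions, not values reported in the paper.

    # Minimal illustrative sketch: ViT embeddings + Euclidean distance for kin/non-kin decisions.
    # Assumptions: a generic pretrained ViT from timm stands in for the paper's model;
    # the distance threshold is hypothetical and would be tuned on a validation split.
    import torch
    import torch.nn.functional as F
    import timm

    # ViT backbone that returns a pooled embedding per image (num_classes=0 removes the head)
    backbone = timm.create_model("vit_base_patch16_224", pretrained=True, num_classes=0)
    backbone.eval()

    @torch.no_grad()
    def kinship_distance(face_a: torch.Tensor, face_b: torch.Tensor) -> float:
        """Euclidean distance between L2-normalised ViT embeddings of two
        preprocessed 224x224 RGB face crops of shape [3, 224, 224]."""
        emb = backbone(torch.stack([face_a, face_b]))   # shape (2, 768)
        emb = F.normalize(emb, dim=1)                    # unit-length embeddings
        return torch.dist(emb[0], emb[1]).item()         # Euclidean distance

    def is_kin(face_a: torch.Tensor, face_b: torch.Tensor, threshold: float = 1.1) -> bool:
        # Smaller distance -> more likely to be relatives (threshold is an assumption)
        return kinship_distance(face_a, face_b) < threshold

    if __name__ == "__main__":
        # Random tensors stand in for preprocessed face crops in this example
        a, b = torch.rand(3, 224, 224), torch.rand(3, 224, 224)
        print(kinship_distance(a, b), is_kin(a, b))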