AI-Augmented 3D Craniofacial Reconstruction for Enhanced Surgical Planning: A Novel Integration of Depth-Augmented Vision Transformers and MeshCNN for Structural Fidelity
| Published in: | Egyptian Informatics Journal, Vol. 32, p. 100810 |
|---|---|
| Main Authors: | , , , , , , |
| Format: | Journal Article |
| Language: | English |
| Published: | Elsevier B.V., 01.12.2025 |
| Subjects: | |
| ISSN: | 1110-8665 |
| Online Access: | Get full text |
Summary:

Craniosynostosis, a craniofacial malformation characterized by premature suture fusion, poses significant challenges for surgical correction. Current methods for cranial reconstruction lack the precision required for accurate defect inpainting and depth awareness, limiting their clinical application in complex craniosynostosis cases.

This study aims to develop a novel AI-driven framework that enhances the accuracy of 3D craniofacial reconstruction for pre-surgical planning, with a specific focus on large cranial defects. By integrating advanced Artificial Intelligence (AI) techniques, the framework improves the accuracy and efficiency of cranial models, enabling surgeons to plan interventions with greater confidence and potentially improving patient outcomes. The framework's computational efficiency, achieved through model quantization, further broadens its applicability in resource-limited clinical settings.
The framework begins with 3D U-Net-based segmentation, followed by depth map generation using Mixed Depth-of-Scale Models (MiDaS). Cranial defect inpainting is performed with Depth-Augmented Vision Transformers (DA-ViT) guided by depth cues and edge detection, after which MeshCNN and bilateral filtering refine the final mesh. Quantization-aware training (QAT) and post-training quantization (PTQ) reduce model size and memory footprint.
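The abstract gives no implementation details for the depth-map stage, but the idea can be illustrated with the publicly released MiDaS model from torch.hub. The sketch below is a minimal example, not the authors' code; the filename `skull_view.png` and the choice of the DPT_Large variant are assumptions for illustration only.

```python
# Hedged sketch: monocular depth estimation with the public MiDaS (DPT_Large) model.
# Assumes a rendered view of the segmented skull saved as "skull_view.png"
# (a hypothetical filename, not from the paper).
import cv2
import torch

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

# Load the MiDaS model and its matching input transform from torch.hub.
midas = torch.hub.load("intel-isl/MiDaS", "DPT_Large").to(device).eval()
midas_transforms = torch.hub.load("intel-isl/MiDaS", "transforms")
transform = midas_transforms.dpt_transform

# Read the rendered view and convert OpenCV's BGR ordering to RGB.
img = cv2.cvtColor(cv2.imread("skull_view.png"), cv2.COLOR_BGR2RGB)

with torch.no_grad():
    prediction = midas(transform(img).to(device))
    # Resize the raw prediction back to the input image resolution.
    depth = torch.nn.functional.interpolate(
        prediction.unsqueeze(1),
        size=img.shape[:2],
        mode="bicubic",
        align_corners=False,
    ).squeeze().cpu().numpy()

# "depth" is a relative (inverse-depth) map of the kind that could guide
# a depth-aware inpainting stage such as the DA-ViT described in the abstract.
```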
The framework achieved a Chamfer Distance of 0.14 mm and a Hausdorff Distance of 0.33 mm, significantly outperforming previous methods. Landmark accuracy was 0.10 mm with a consistency ratio of 0.91. After quantization, these values remained similar (0.15 mm, 0.35 mm, and 0.11 mm, respectively), while inference time decreased by 20% (from 35 ms to 28 ms) and memory usage dropped from 12 GB to 5 GB, allowing deployment on a mid-range Graphics Processing Unit (GPU).
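For reference, Chamfer and Hausdorff distances are standard point-cloud metrics for comparing a reconstructed surface with ground truth. The sketch below shows one common way to compute them with SciPy, assuming both meshes have been sampled to point clouds; conventions vary (for example, sum versus mean of the two directed Chamfer terms), and the abstract does not state which convention the authors use.

```python
# Hedged sketch: Chamfer and symmetric Hausdorff distances between two point
# clouds sampled from the reconstructed and ground-truth meshes.
import numpy as np
from scipy.spatial import cKDTree

def chamfer_distance(pred: np.ndarray, gt: np.ndarray) -> float:
    """Bidirectional mean nearest-neighbour distance between (N, 3) and (M, 3) point sets."""
    d_pred_to_gt, _ = cKDTree(gt).query(pred)   # nearest GT point for each predicted point
    d_gt_to_pred, _ = cKDTree(pred).query(gt)   # nearest predicted point for each GT point
    return float(d_pred_to_gt.mean() + d_gt_to_pred.mean())

def hausdorff_distance(pred: np.ndarray, gt: np.ndarray) -> float:
    """Symmetric Hausdorff distance: the worst-case nearest-neighbour distance."""
    d_pred_to_gt, _ = cKDTree(gt).query(pred)
    d_gt_to_pred, _ = cKDTree(pred).query(gt)
    return float(max(d_pred_to_gt.max(), d_gt_to_pred.max()))

# Example with random stand-in point clouds (real inputs would be in millimetres):
pred_pts = np.random.rand(5000, 3)
gt_pts = np.random.rand(5000, 3)
print(chamfer_distance(pred_pts, gt_pts), hausdorff_distance(pred_pts, gt_pts))
```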
| ISSN: | 1110-8665 |
|---|---|
| DOI: | 10.1016/j.eij.2025.100810 |