Toward an Automatic System for Computer-Aided Assessment in Facial Palsy

Detailed Bibliography
Published in: Facial Plastic Surgery & Aesthetic Medicine, Volume 22, Issue 1, p. 42
Main authors: Guarin, Diego L; Yunusova, Yana; Taati, Babak; Dusseldorp, Joseph R; Mohan, Suresh; Tavares, Joana; van Veen, Martinus M; Fortier, Emily; Hadlock, Tessa A; Jowett, Nate
Format: Journal Article
Language: English
Publication details: United States, 1 February 2020
ISSN: 2689-3622
Abstract
Importance: Quantitative assessment of facial function is challenging, and subjective grading scales such as House-Brackmann, Sunnybrook, and eFACE have well-recognized limitations. Machine learning (ML) approaches to facial landmark localization carry great clinical potential, as they enable high-throughput automated quantification of relevant facial metrics from photographs and videos. However, translation from research settings to clinical application still requires important improvements.
Objective: To develop a novel ML algorithm for fast and accurate localization of facial landmarks in photographs of facial palsy patients, and to utilize this technology as part of an automated computer-aided diagnosis system.
Design, Setting, and Participants: Portrait photographs of 8 expressions obtained from 200 facial palsy patients and 10 healthy participants were manually annotated by 3 trained clinicians, who localized 68 facial landmarks in each photograph using a custom graphical user interface. A novel ML model for automated facial landmark localization was trained using this disease-specific database. Algorithm accuracy was compared with the manual markings and with the output of a model trained using a larger database consisting only of healthy subjects.
Main Outcomes and Measurements: Root mean square error of facial landmark localization, normalized by the interocular distance (NRMSE), between the ML algorithm's predictions and the manually localized landmarks.
Results: Publicly available algorithms for facial landmark localization provide poorer localization accuracy when applied to photographs of patients than to photographs of healthy controls (NRMSE, 8.56 ± 2.16 vs. 7.09 ± 2.34, p < 0.01). We found a significant improvement in facial landmark localization accuracy for the facial palsy patient population when using a model trained with a relatively small number of patient photographs (1440) compared with a model trained using several thousand more images of healthy faces (NRMSE, 6.03 ± 2.43 vs. 8.56 ± 2.16, p < 0.01).
Conclusions and Relevance: Retraining a computer vision facial landmark detection model with fewer than 1600 annotated images of patients significantly improved landmark detection performance in frontal view photographs of this population. The new annotated database and facial landmark localization model represent the first steps toward an automatic system for computer-aided assessment in facial palsy.
Level of Evidence: 4.
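The outcome metric above can be made concrete with a short sketch. The snippet below computes a root mean square landmark error normalized by the interocular distance for a 68-point landmark set; the use of the outer eye corners (indices 36 and 45 in the standard 68-point scheme) as the interocular reference and the scaling to a percentage are assumptions for illustration, not details taken from the paper.

```python
import numpy as np

def nrmse(pred: np.ndarray, truth: np.ndarray) -> float:
    """pred, truth: (68, 2) arrays of (x, y) landmark coordinates in pixels."""
    per_point_err = np.linalg.norm(pred - truth, axis=1)  # Euclidean error per landmark
    rmse = np.sqrt(np.mean(per_point_err ** 2))           # root mean square error over all points
    interocular = np.linalg.norm(truth[36] - truth[45])   # distance between outer eye corners
    return 100.0 * rmse / interocular                     # error as a percentage of interocular distance
```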
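As a rough illustration of the retraining strategy described in the conclusions (adapting a generic facial landmark detector using fewer than 1600 annotated patient photographs), the sketch below trains an ensemble-of-regression-trees shape predictor with dlib on a small disease-specific dataset. The choice of dlib, the annotation file names, and the hyperparameter values are assumptions for illustration; the paper's actual model, data format, and training settings may differ.

```python
import dlib

# Training options for dlib's ensemble-of-regression-trees shape predictor.
options = dlib.shape_predictor_training_options()
options.tree_depth = 4            # depth of each regression tree
options.cascade_depth = 15        # number of cascaded regressor stages
options.nu = 0.1                  # regularization / shrinkage parameter
options.oversampling_amount = 20  # jitter face boxes to stretch a small patient dataset
options.num_threads = 4
options.be_verbose = True

# "palsy_train.xml" / "palsy_test.xml" are hypothetical imglab-style annotation files
# listing each photograph, its face box, and the 68 manually placed landmark points.
dlib.train_shape_predictor("palsy_train.xml", "palsy_predictor.dat", options)
print("mean training error:", dlib.test_shape_predictor("palsy_train.xml", "palsy_predictor.dat"))
print("mean testing error :", dlib.test_shape_predictor("palsy_test.xml", "palsy_predictor.dat"))
```

The oversampling option is one common way to make the most of a small, disease-specific training set by augmenting it with perturbed face bounding boxes during training.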
Author Taati, Babak
Yunusova, Yana
Hadlock, Tessa A
Dusseldorp, Joseph R
Jowett, Nate
Mohan, Suresh
Tavares, Joana
Fortier, Emily
Guarin, Diego L
van Veen, Martinus M
Author_xml – sequence: 1
  givenname: Diego L
  surname: Guarin
  fullname: Guarin, Diego L
  organization: Department of Otolaryngology/Head and Neck Surgery, Massachusetts Eye and Ear Infirmary and Harvard Medical School, Boston, Massachusetts
– sequence: 2
  givenname: Yana
  surname: Yunusova
  fullname: Yunusova, Yana
  organization: Hurvitz Brain Sciences Program, Sunnybrook Research Institute, Toronto, Canada
– sequence: 3
  givenname: Babak
  surname: Taati
  fullname: Taati, Babak
  organization: Institute of Biomaterials and Biomedical Engineering, University of Toronto, Toronto, Canada
– sequence: 4
  givenname: Joseph R
  surname: Dusseldorp
  fullname: Dusseldorp, Joseph R
  organization: Department of Plastic and Reconstructive Surgery, Royal Australasian College of Surgeons and University of Sydney, Sydney, Australia
– sequence: 5
  givenname: Suresh
  surname: Mohan
  fullname: Mohan, Suresh
  organization: Department of Otolaryngology/Head and Neck Surgery, Massachusetts Eye and Ear Infirmary and Harvard Medical School, Boston, Massachusetts
– sequence: 6
  givenname: Joana
  surname: Tavares
  fullname: Tavares, Joana
  organization: Faculty of Health Sciences, Brasilia University, Brasilia, Brazil
– sequence: 7
  givenname: Martinus M
  surname: van Veen
  fullname: van Veen, Martinus M
  organization: Department of Otolaryngology/Head and Neck Surgery, Massachusetts Eye and Ear Infirmary and Harvard Medical School, Boston, Massachusetts
– sequence: 8
  givenname: Emily
  surname: Fortier
  fullname: Fortier, Emily
  organization: Department of Otolaryngology/Head and Neck Surgery, Massachusetts Eye and Ear Infirmary and Harvard Medical School, Boston, Massachusetts
– sequence: 9
  givenname: Tessa A
  surname: Hadlock
  fullname: Hadlock, Tessa A
  organization: Department of Otolaryngology/Head and Neck Surgery, Massachusetts Eye and Ear Infirmary and Harvard Medical School, Boston, Massachusetts
– sequence: 10
  givenname: Nate
  surname: Jowett
  fullname: Jowett, Nate
  organization: Department of Otolaryngology/Head and Neck Surgery, Massachusetts Eye and Ear Infirmary and Harvard Medical School, Boston, Massachusetts
ContentType Journal Article
DOI 10.1089/fpsam.2019.29000.gua
EISSN 2689-3622
ExternalDocumentID 32053425
Genre Journal Article
GrantInformation_xml – fundername: NIDCD NIH HHS
  grantid: R01 DC013547
ISICitedReferencesCount 73
ISSN 2689-3622
IsDoiOpenAccess false
IsOpenAccess true
IsPeerReviewed true
IsScholarly true
Issue 1
Language English
PMID 32053425
PQID 2355950563
PQPubID 23479
PublicationCentury 2000
PublicationDate 2020-02-01
PublicationDateYYYYMMDD 2020-02-01
PublicationDate_xml – month: 02
  year: 2020
  text: 2020-02-01
  day: 01
PublicationDecade 2020
PublicationPlace United States
PublicationPlace_xml – name: United States
PublicationTitle Facial plastic surgery & aesthetic medicine
PublicationTitleAlternate Facial Plast Surg Aesthet Med
PublicationYear 2020
SourceID proquest
pubmed
SourceType Aggregation Database
Index Database
StartPage 42
SubjectTerms Adolescent
Adult
Aged
Aged, 80 and over
Anatomic Landmarks
Child
Diagnosis, Computer-Assisted
Facial Expression
Facial Paralysis - diagnosis
Female
Humans
Machine Learning
Male
Middle Aged
Photography
Title Toward an Automatic System for Computer-Aided Assessment in Facial Palsy
URI https://www.ncbi.nlm.nih.gov/pubmed/32053425
https://www.proquest.com/docview/2355950563
Volume 22