Optimizing Ophthalmology Patient Education via ChatBot-Generated Materials: Readability Analysis of AI-Generated Patient Education Materials and The American Society of Ophthalmic Plastic and Reconstructive Surgery Patient Brochures

Bibliographic Details
Published in: Ophthalmic Plastic and Reconstructive Surgery, Volume 40, Issue 2, p. 212
Main Authors: Eid, Kevin; Eid, Alen; Wang, Diane; Raiker, Rahul S.; Chen, Stephen; Nguyen, John
Format: Journal Article
Language: English
Published: United States, March 1, 2024
ISSN: 1537-2677
Description
Summary: This study compares the readability of patient education materials (PEMs) from the American Society of Ophthalmic Plastic and Reconstructive Surgery with that of PEMs generated by the AI chatbots ChatGPT and Google Bard. PEMs on 16 common American Society of Ophthalmic Plastic and Reconstructive Surgery topics were generated by 2 AI models, ChatGPT 4.0 and Google Bard, with and without a 6th-grade reading level prompt modifier. The PEMs were analyzed using 7 readability metrics: Flesch Reading Ease Score, Gunning Fog Index, Flesch-Kincaid Grade Level, Coleman-Liau Index, Simple Measure of Gobbledygook Index Score, Automated Readability Index, and Linsear Write Readability Score. Each AI-generated PEM was compared with the equivalent American Society of Ophthalmic Plastic and Reconstructive Surgery PEM. Across all readability indices, unprompted PEMs generated by ChatGPT 4.0 consistently scored as the most difficult to read (Flesch Reading Ease Score: 36.5; Simple Measure of Gobbledygook: 14.7). Google Bard generated content that was easier to read than both the American Society of Ophthalmic Plastic and Reconstructive Surgery PEMs and those of ChatGPT 4.0 (Flesch Reading Ease Score: 52.3; Simple Measure of Gobbledygook: 12.7). When prompted to produce PEMs at a 6th-grade reading level, both ChatGPT 4.0 and Bard significantly improved their readability scores, with prompted ChatGPT 4.0 consistently generating the easiest-to-read content (Flesch Reading Ease Score: 67.9; Simple Measure of Gobbledygook: 10.2). This study suggests that AI tools, when guided by appropriate prompts, can generate accessible and comprehensible PEMs in the field of ophthalmic plastic and reconstructive surgery, balancing readability with the complexity of the necessary information.
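As a reproducibility note, the sketch below scores a passage with the same seven readability metrics named in the abstract. It is a minimal sketch assuming the open-source Python textstat package; the study does not state which implementation the authors used, and the sample passage is invented for illustration.

# Minimal sketch: score a passage with the seven readability metrics
# named in the abstract. Assumes the open-source `textstat` package
# (pip install textstat); the study does not state which tool was used.
import textstat

def score_pem(text: str) -> dict:
    """Compute the seven readability metrics from the study."""
    return {
        "Flesch Reading Ease": textstat.flesch_reading_ease(text),
        "Gunning Fog Index": textstat.gunning_fog(text),
        "Flesch-Kincaid Grade Level": textstat.flesch_kincaid_grade(text),
        "Coleman-Liau Index": textstat.coleman_liau_index(text),
        "SMOG Index": textstat.smog_index(text),
        "Automated Readability Index": textstat.automated_readability_index(text),
        "Linsear Write": textstat.linsear_write_formula(text),
    }

if __name__ == "__main__":
    # Hypothetical patient-education snippet, not taken from the study.
    sample = (
        "Blepharoplasty is surgery that removes extra skin from the eyelids. "
        "Your doctor will talk with you about the risks before the procedure. "
        "Most people go home the same day and heal within a few weeks."
    )
    for metric, value in score_pem(sample).items():
        print(f"{metric}: {value:.1f}")

For orientation on score direction: the Flesch Reading Ease Score is computed as 206.835 − 1.015 × (words/sentences) − 84.6 × (syllables/words), so higher values mean easier text (67.9 for prompted ChatGPT 4.0 vs. 36.5 unprompted), whereas the other six metrics approximate a U.S. grade level, where lower values mean easier text.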
DOI: 10.1097/IOP.0000000000002549