The doc versus the bot: A pilot study to assess the quality and accuracy of physician and chatbot responses to clinical questions in gynecologic oncology
| Published in: | Gynecologic Oncology Reports, Vol. 55, Art. No. 101477 |
|---|---|
| Main authors: | Anastasio, Mary Katherine; Peters, Pamela; Foote, Jonathan; Melamed, Alexander; Modesitt, Susan C.; Musa, Fernanda; Rossi, Emma; Albright, Benjamin B.; Havrilesky, Laura J.; Moss, Haley A. |
| Format: | Journal Article |
| Language: | English |
| Published: | Elsevier Inc, Netherlands, 01.10.2024 |
| Keywords: | Artificial intelligence; Gynecologic oncology; Patient education |
| ISSN: | 2352-5789 |
| Online access: | Full text |
| Abstract | • Physicians provided higher quality responses to common clinical questions in gynecologic oncology compared to chatbots. • Chatbots provided longer, less accurate, and lower quality responses to gynecologic oncology clinical questions. • Patients should be cautioned against non-approved/non-validated artificial intelligence platforms for medical advice.
Artificial intelligence (AI) applications to medical care are currently under investigation. We aimed to evaluate and compare the quality and accuracy of physician and chatbot responses to common clinical questions in gynecologic oncology. In this cross-sectional pilot study, ten questions about the knowledge and management of gynecologic cancers were selected. Each question was answered by a recruited gynecologic oncologist, the ChatGPT (Generative Pre-trained Transformer) AI platform, and the Bard by Google AI platform. Five recruited gynecologic oncologists who were blinded to the study design were allowed 15 min to respond to each of two questions. Chatbot responses were generated by inserting each question into a fresh session in September 2023. Qualifiers and language identifying the response source were removed. Three gynecologic oncology providers who were blinded to the response source independently reviewed and rated response quality on a 5-point Likert scale, evaluated each response for accuracy, and selected the best response for each question. Overall, physician responses were judged to be best in 76.7 % of evaluations versus ChatGPT (10.0 %) and Bard (13.3 %; p < 0.001). The average quality of responses was 4.2/5.0 for physicians, 3.0/5.0 for ChatGPT, and 2.8/5.0 for Bard (p < 0.001 for both pairwise t-tests and by ANOVA). Physicians provided a higher proportion of accurate responses (86.7 %) compared to ChatGPT (60 %) and Bard (43 %; p < 0.001 for both). Physicians provided higher quality responses to gynecologic oncology clinical questions compared to chatbots. Patients should be cautioned against non-validated AI platforms for medical advice; larger studies on the use of AI for medical advice are needed. |
|---|---|
| ArticleNumber | 101477 |
| Author | Havrilesky, Laura J.; Albright, Benjamin B.; Moss, Haley A.; Anastasio, Mary Katherine; Foote, Jonathan; Rossi, Emma; Musa, Fernanda; Melamed, Alexander; Modesitt, Susan C.; Peters, Pamela |
| Author_xml | – sequence: 1 givenname: Mary Katherine surname: Anastasio fullname: Anastasio, Mary Katherine email: mm765@duke.edu organization: Division of Gynecologic Oncology, Department of Obstetrics and Gynecology, Duke University Medical Center, Durham, NC, USA – sequence: 2 givenname: Pamela surname: Peters fullname: Peters, Pamela organization: Division of Gynecologic Oncology, Department of Obstetrics and Gynecology, Duke University Medical Center, Durham, NC, USA – sequence: 3 givenname: Jonathan surname: Foote fullname: Foote, Jonathan organization: Commonwealth Gynecologic Oncology, Bon Secours Health, Richmond, VA, USA – sequence: 4 givenname: Alexander surname: Melamed fullname: Melamed, Alexander organization: Division of Gynecologic Oncology, Vincent Department of Obstetrics & Gynecology, Massachusetts General Hospital, Boston, MA, USA – sequence: 5 givenname: Susan C. surname: Modesitt fullname: Modesitt, Susan C. organization: Division of Gynecologic Oncology, Department of Gynecology and Obstetrics, Emory University School of Medicine, Atlanta, GA, USA – sequence: 6 givenname: Fernanda surname: Musa fullname: Musa, Fernanda organization: Swedish Cancer Institute, Seattle, WA, USA – sequence: 7 givenname: Emma surname: Rossi fullname: Rossi, Emma organization: Division of Gynecologic Oncology, Department of Obstetrics and Gynecology, Duke University Medical Center, Durham, NC, USA – sequence: 8 givenname: Benjamin B. surname: Albright fullname: Albright, Benjamin B. organization: Division of Gynecologic Oncology, Department of Obstetrics and Gynecology, University of North Carolina Chapel Hill, Chapel Hill, NC, USA – sequence: 9 givenname: Laura J. surname: Havrilesky fullname: Havrilesky, Laura J. organization: Division of Gynecologic Oncology, Department of Obstetrics and Gynecology, Duke University Medical Center, Durham, NC, USA – sequence: 10 givenname: Haley A. surname: Moss fullname: Moss, Haley A. organization: Division of Gynecologic Oncology, Department of Obstetrics and Gynecology, Duke University Medical Center, Durham, NC, USA |
| BackLink | https://www.ncbi.nlm.nih.gov/pubmed/39224817 (View this record in MEDLINE/PubMed) |
| CitedBy_id | crossref_primary_10_1002_ijgo_70348 crossref_primary_10_1186_s13054_025_05468_7 crossref_primary_10_3390_diagnostics15060735 |
| Cites_doi | 10.1148/radiol.230163 10.1001/jamainternmed.2023.1838 10.1016/j.ajog.2023.03.009 10.1001/jama.2023.1044 10.1308/147870804290 10.1097/AOG.0b013e3181ec5fc1 10.2196/45312 10.1056/NEJMoa1806395 |
| ContentType | Journal Article |
| Copyright | 2024 The Authors. |
| DOI | 10.1016/j.gore.2024.101477 |
| DatabaseName | ScienceDirect Open Access Titles; Elsevier:ScienceDirect:Open Access; CrossRef; PubMed; MEDLINE - Academic; PubMed Central (Full Participant titles); DOAJ Directory of Open Access Journals |
| DatabaseTitle | CrossRef PubMed MEDLINE - Academic |
| DatabaseTitleList | MEDLINE - Academic PubMed |
| Database_xml | – sequence: 1 dbid: DOA name: DOAJ Directory of Open Access Journals url: https://www.doaj.org/ sourceTypes: Open Website – sequence: 2 dbid: NPM name: PubMed url: http://www.ncbi.nlm.nih.gov/entrez/query.fcgi?db=PubMed sourceTypes: Index Database – sequence: 3 dbid: 7X8 name: MEDLINE - Academic url: https://search.proquest.com/medline sourceTypes: Aggregation Database |
| DeliveryMethod | fulltext_linktorsrc |
| Discipline | Medicine |
| EISSN | 2352-5789 |
| ExternalDocumentID | oai_doaj_org_article_06e9d8256ccc491aba40ea304ee13c7d PMC11367046 39224817 10_1016_j_gore_2024_101477 S2352578924001565 |
| Genre | Journal Article |
| ISICitedReferencesCount | 5 |
| ISSN | 2352-5789 |
| IsDoiOpenAccess | true |
| IsOpenAccess | true |
| IsPeerReviewed | true |
| IsScholarly | true |
| Keywords | Artificial intelligence; Gynecologic oncology; Patient education |
| Language | English |
| License | 2024 The Authors. This is an open access article under the CC BY-NC-ND license (http://creativecommons.org/licenses/by-nc-nd/4.0/). |
| LinkModel | DirectLink |
| OpenAccessLink | https://doaj.org/article/06e9d8256ccc491aba40ea304ee13c7d |
| PMID | 39224817 |
| PQID | 3100274565 |
| PQPubID | 23479 |
| ParticipantIDs | doaj_primary_oai_doaj_org_article_06e9d8256ccc491aba40ea304ee13c7d pubmedcentral_primary_oai_pubmedcentral_nih_gov_11367046 proquest_miscellaneous_3100274565 pubmed_primary_39224817 crossref_citationtrail_10_1016_j_gore_2024_101477 crossref_primary_10_1016_j_gore_2024_101477 elsevier_sciencedirect_doi_10_1016_j_gore_2024_101477 elsevier_clinicalkey_doi_10_1016_j_gore_2024_101477 |
| PublicationCentury | 2000 |
| PublicationDate | 2024-10-01 |
| PublicationDecade | 2020 |
| PublicationPlace | Netherlands |
| PublicationTitle | Gynecologic oncology reports |
| PublicationTitleAlternate | Gynecol Oncol Rep |
| PublicationYear | 2024 |
| Publisher | Elsevier Inc Elsevier |
| References | Goodman, Patrinely, Stone, Zimmerman, Donald, Chang (b0025) 2023; 6 Gilson, Safranek, Huang, Socrates, Chi, Taylor (b0015) 2023; 9 Sallam (b0060) 2023; 11 Berek, Chalas, Edelson, Moore, Burke, Cliby (b0010) 2010; 116 Grunebaum, Chervenak, Pollet, Katz, Chervenak (b0030) 2023 Lytinen (b0035) 2005 Shen, Heacock, Elias, Hentel, Reig, Shih (b0075) 2023; 307 Gitnux [Internet]. [04/07/2023]. Available from: https://blog.gitnux.com/chat-gpt-statistics/. Racing to Catch Up With ChatGPT, Google Plans Release of Its Own Chatbot. The New York Times. Ramesh, Kambhampati, Monson, Drew (b0050) 2004; 86 Choosing Wisely: Five tips for a meaningful conversation between patients and providers: Society of Gynecologic Oncology; 2023 [Available from: https://www.sgo.org/resources/choosing-wisely-five-tips-for-a-meaningful-conversation-between-patients-and-providers-2/. Sarraju, Bruemmer, Van Iterson, Cho, Rodriguez, Laffin (b0065) 2023; 329 Ayers, Poliak, Dredze, Leas, Zhu, Kelley (b0005) 2023; 183 SEER Cancer Statistics Factsheets: Common Cancer Sites. National Cancer Institute Bethesda, MD [Available from: https://seer.cancer.gov/statfacts/html/common.html. OpenAI. ChatGPT: optimizing language models for dialogue [Available from: https://openai.com/blog/chatgpt/. Ramirez, Frumovitz, Pareja, Lopez, Vieira, Ribeiro (b0055) 2018; 379 Goodman (10.1016/j.gore.2024.101477_b0025) 2023; 6 Gilson (10.1016/j.gore.2024.101477_b0015) 2023; 9 Ramirez (10.1016/j.gore.2024.101477_b0055) 2018; 379 Lytinen (10.1016/j.gore.2024.101477_b0035) 2005 Sarraju (10.1016/j.gore.2024.101477_b0065) 2023; 329 Shen (10.1016/j.gore.2024.101477_b0075) 2023; 307 10.1016/j.gore.2024.101477_b0045 Berek (10.1016/j.gore.2024.101477_b0010) 2010; 116 10.1016/j.gore.2024.101477_b0020 Sallam (10.1016/j.gore.2024.101477_b0060) 2023; 11 Ayers (10.1016/j.gore.2024.101477_b0005) 2023; 183 10.1016/j.gore.2024.101477_b0040 10.1016/j.gore.2024.101477_b0070 Grunebaum (10.1016/j.gore.2024.101477_b0030) 2023 10.1016/j.gore.2024.101477_b0080 Ramesh (10.1016/j.gore.2024.101477_b0050) 2004; 86 |
| References_xml | – reference: Gitnux [Internet]. [04/07/2023]. Available from: https://blog.gitnux.com/chat-gpt-statistics/. – volume: 307 start-page: e230163 year: 2023 ident: b0075 article-title: ChatGPT and Other Large Language Models Are Double-edged Swords publication-title: Radiology – volume: 9 start-page: e45312 year: 2023 ident: b0015 article-title: How Does ChatGPT Perform on the United States Medical Licensing Examination? The Implications of Large Language Models for Medical Education and Knowledge Assessment publication-title: JMIR Med Educ. – reference: SEER Cancer Statistics Factsheets: Common Cancer Sites. National Cancer Institute Bethesda, MD [Available from: https://seer.cancer.gov/statfacts/html/common.html. – volume: 379 start-page: 1895 year: 2018 end-page: 1904 ident: b0055 article-title: Minimally Invasive versus Abdominal Radical Hysterectomy for Cervical Cancer publication-title: N Engl. J. Med. – reference: OpenAI. ChatGPT: optimizing language models for dialogue [Available from: https://openai.com/blog/chatgpt/. – reference: Choosing Wisely: Five tips for a meaningful conversation between patients and providers: Society of Gynecologic Oncology; 2023 [Available from: https://www.sgo.org/resources/choosing-wisely-five-tips-for-a-meaningful-conversation-between-patients-and-providers-2/. – reference: Racing to Catch Up With ChatGPT, Google Plans Release of Its Own Chatbot. The New York Times. – volume: 6 start-page: e2336483 year: 2023 ident: b0025 article-title: Accuracy and Reliability of Chatbot Responses to Physician Questions publication-title: J. Am. Med. Assoc.netw Open. – volume: 329 start-page: 842 year: 2023 end-page: 844 ident: b0065 article-title: Appropriateness of Cardiovascular Disease Prevention Recommendations Obtained From a Popular Online Chat-Based Artificial Intelligence Model publication-title: J. Am. Med. Assoc. – volume: 116 start-page: 733 year: 2010 end-page: 743 ident: b0010 article-title: Prophylactic and risk-reducing bilateral salpingo-oophorectomy: recommendations based on risk of ovarian cancer publication-title: Obstet Gynecol. – volume: 11 year: 2023 ident: b0060 article-title: ChatGPT Utility in Healthcare Education, Research, and Practice: Systematic Review on the Promising Perspectives and Valid Concerns publication-title: Healthcare (basel). – year: 2005 ident: b0035 article-title: Artificial intelligence: Natural language processing – year: 2023 ident: b0030 article-title: The Exciting Potential for ChatGPT in Obstetrics and Gynecology publication-title: Am. J. Obstet. Gynecol. – volume: 86 start-page: 334 year: 2004 end-page: 338 ident: b0050 article-title: Artificial intelligence in medicine publication-title: Ann. R Coll Surg. Engl. – volume: 183 start-page: 589 year: 2023 end-page: 596 ident: b0005 article-title: Comparing Physician and Artificial Intelligence Chatbot Responses to Patient Questions Posted to a Public Social Media Forum publication-title: JAMA Intern. Med. – volume: 307 start-page: e230163 issue: 2 year: 2023 ident: 10.1016/j.gore.2024.101477_b0075 article-title: ChatGPT and Other Large Language Models Are Double-edged Swords publication-title: Radiology doi: 10.1148/radiol.230163 – ident: 10.1016/j.gore.2024.101477_b0080 – volume: 183 start-page: 589 issue: 6 year: 2023 ident: 10.1016/j.gore.2024.101477_b0005 article-title: Comparing Physician and Artificial Intelligence Chatbot Responses to Patient Questions Posted to a Public Social Media Forum publication-title: JAMA Intern. Med. 
doi: 10.1001/jamainternmed.2023.1838 – year: 2005 ident: 10.1016/j.gore.2024.101477_b0035 – year: 2023 ident: 10.1016/j.gore.2024.101477_b0030 article-title: The Exciting Potential for ChatGPT in Obstetrics and Gynecology publication-title: Am. J. Obstet. Gynecol. doi: 10.1016/j.ajog.2023.03.009 – ident: 10.1016/j.gore.2024.101477_b0040 – ident: 10.1016/j.gore.2024.101477_b0045 – ident: 10.1016/j.gore.2024.101477_b0020 – volume: 11 issue: 6 year: 2023 ident: 10.1016/j.gore.2024.101477_b0060 article-title: ChatGPT Utility in Healthcare Education, Research, and Practice: Systematic Review on the Promising Perspectives and Valid Concerns publication-title: Healthcare (basel). – volume: 329 start-page: 842 issue: 10 year: 2023 ident: 10.1016/j.gore.2024.101477_b0065 article-title: Appropriateness of Cardiovascular Disease Prevention Recommendations Obtained From a Popular Online Chat-Based Artificial Intelligence Model publication-title: J. Am. Med. Assoc. doi: 10.1001/jama.2023.1044 – volume: 86 start-page: 334 issue: 5 year: 2004 ident: 10.1016/j.gore.2024.101477_b0050 article-title: Artificial intelligence in medicine publication-title: Ann. R Coll Surg. Engl. doi: 10.1308/147870804290 – volume: 116 start-page: 733 issue: 3 year: 2010 ident: 10.1016/j.gore.2024.101477_b0010 article-title: Prophylactic and risk-reducing bilateral salpingo-oophorectomy: recommendations based on risk of ovarian cancer publication-title: Obstet Gynecol. doi: 10.1097/AOG.0b013e3181ec5fc1 – ident: 10.1016/j.gore.2024.101477_b0070 – volume: 9 start-page: e45312 year: 2023 ident: 10.1016/j.gore.2024.101477_b0015 article-title: How Does ChatGPT Perform on the United States Medical Licensing Examination? The Implications of Large Language Models for Medical Education and Knowledge Assessment publication-title: JMIR Med Educ. doi: 10.2196/45312 – volume: 6 start-page: e2336483 issue: 10 year: 2023 ident: 10.1016/j.gore.2024.101477_b0025 article-title: Accuracy and Reliability of Chatbot Responses to Physician Questions publication-title: J. Am. Med. Assoc.netw Open. – volume: 379 start-page: 1895 issue: 20 year: 2018 ident: 10.1016/j.gore.2024.101477_b0055 article-title: Minimally Invasive versus Abdominal Radical Hysterectomy for Cervical Cancer publication-title: N Engl. J. Med. doi: 10.1056/NEJMoa1806395 |
| SourceID | doaj pubmedcentral proquest pubmed crossref elsevier |
| SourceType | Open Website Open Access Repository Aggregation Database Index Database Enrichment Source Publisher |
| StartPage | 101477 |
| SubjectTerms | Artificial intelligence; Gynecologic oncology; Patient education; Short Communication |
| Title | The doc versus the bot: A pilot study to assess the quality and accuracy of physician and chatbot responses to clinical questions in gynecologic oncology |
| URI | https://www.clinicalkey.com/#!/content/1-s2.0-S2352578924001565 https://dx.doi.org/10.1016/j.gore.2024.101477 https://www.ncbi.nlm.nih.gov/pubmed/39224817 https://www.proquest.com/docview/3100274565 https://pubmed.ncbi.nlm.nih.gov/PMC11367046 https://doaj.org/article/06e9d8256ccc491aba40ea304ee13c7d |
| Volume | 55 |
| WOSCitedRecordID | wos001294928200001 |
| journalDatabaseRights | – providerCode: PRVAON databaseName: DOAJ Directory of Open Access Journals customDbUrl: eissn: 2352-5789 dateEnd: 99991231 omitProxy: false ssIdentifier: ssj0001503765 issn: 2352-5789 databaseCode: DOA dateStart: 20140101 isFulltext: true titleUrlDefault: https://www.doaj.org/ providerName: Directory of Open Access Journals – providerCode: PRVHPJ databaseName: ROAD: Directory of Open Access Scholarly Resources customDbUrl: eissn: 2352-5789 dateEnd: 99991231 omitProxy: false ssIdentifier: ssj0001503765 issn: 2352-5789 databaseCode: M~E dateStart: 20150101 isFulltext: true titleUrlDefault: https://road.issn.org providerName: ISSN International Centre |
| linkProvider | Directory of Open Access Journals |