Assessing question characteristic influences on ChatGPT's performance and response-explanation consistency: Insights from Taiwan's Nursing Licensing Exam
Saved in:
| Published in: | International journal of nursing studies, Volume 153, p. 104717 |
|---|---|
| Main authors: | Su, Mei-Chin; Lin, Li-En; Lin, Li-Hwa; Chen, Yu-Chun |
| Format: | Journal Article |
| Language: | English |
| Published: | England: Elsevier Ltd, 01.05.2024 |
| Subjects: | Accuracy; Artificial intelligence language understanding tools; ChatGPT; ChatGPT-generated answers; Clinical vignettes; Consistency; Human-verification of explanations; Nursing license exam; Question bank; Question cognitive level; Question type |
| ISSN: | 0020-7489, 1873-491X |
| Online access: | Get full text |
| Abstract | Background: Investigates the integration of an artificial intelligence tool, specifically ChatGPT, into nursing education, addressing its effectiveness in exam preparation and self-assessment.
Objective: This study aims to evaluate the performance of ChatGPT, one of the most promising artificial intelligence-driven language understanding tools, in answering question banks used to prepare for the nursing licensing examination. It further analyzes question characteristics that might affect the accuracy of ChatGPT-generated answers and examines their reliability through human expert review.
Design: Cross-sectional survey comparing ChatGPT-generated answers with their explanations.
Setting: 400 questions from Taiwan's 2022 Nursing Licensing Exam.
Methods: The study analyzed 400 questions from five distinct subjects of Taiwan's 2022 Nursing Licensing Exam using ChatGPT, which provided an answer and an in-depth explanation for each question. The impact of question characteristics, such as type and cognitive level, on the accuracy of ChatGPT-generated responses was assessed using logistic regression. Additionally, human experts evaluated the explanation for each question and compared it with the ChatGPT-generated answer to determine consistency.
Results: ChatGPT achieved an overall accuracy of 80.75 % on Taiwan's National Nursing Exam, a passing score. Accuracy differed significantly across test subjects, from General Medicine at 88.75 %, through Medical–Surgical Nursing at 80.0 %, Psychology and Community Nursing at 70.0 %, and Obstetrics and Gynecology Nursing at 67.5 %, down to Basic Nursing at 63.0 %. ChatGPT was more likely to answer incorrectly on questions with certain characteristics, notably those containing clinical vignettes (odds ratio 2.19, 95 % confidence interval 1.24–3.87, P = 0.007) and complex multiple-choice questions (odds ratio 2.37, 95 % confidence interval 1.00–5.60, P = 0.049). Furthermore, 14.25 % of ChatGPT-generated answers were inconsistent with their explanations, reducing the overall accuracy to 74 %.
Conclusions: This study reveals ChatGPT's capabilities and limitations in nursing exam preparation, underscoring its potential as an auxiliary educational tool. It highlights the model's varied performance across question types and the notable inconsistencies between its answers and explanations. The findings advance the understanding of artificial intelligence in learning environments and can guide the development of more effective and reliable artificial intelligence-based educational technologies.
Tweetable abstract: New study reveals ChatGPT's potential and challenges in nursing education: it achieves 80.75 % accuracy in exam preparation but faces hurdles with complex questions and logical consistency. #AIinNursing #AIinEducation #NursingExams #ChatGPT |
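The Methods above hinge on a logistic regression relating question characteristics to answer accuracy. To make that analysis concrete, the sketch below fits such a model on simulated data; the column names and the dataset are hypothetical stand-ins, not the study's actual variables or results.

```python
# A minimal sketch of the kind of logistic regression described in the
# Methods, fit on simulated data (NOT the study's data): the outcome is
# whether ChatGPT answered an item incorrectly, and the predictors are
# question characteristics. All column names are hypothetical.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(42)
n = 400  # the study analyzed 400 exam questions

df = pd.DataFrame({
    "clinical_vignette": rng.integers(0, 2, n),  # 1 = item includes a clinical vignette
    "complex_mcq": rng.integers(0, 2, n),        # 1 = complex multiple-choice item
})
# Simulate higher odds of an incorrect answer for vignette/complex items,
# loosely mirroring the odds ratios reported in the abstract (2.19, 2.37).
log_odds = -1.8 + np.log(2.19) * df["clinical_vignette"] + np.log(2.37) * df["complex_mcq"]
df["incorrect"] = (rng.random(n) < 1 / (1 + np.exp(-log_odds))).astype(int)

model = smf.logit("incorrect ~ clinical_vignette + complex_mcq", data=df).fit(disp=False)
summary = pd.concat(
    [np.exp(model.params).rename("odds_ratio"),  # exponentiated coefficients = odds ratios
     np.exp(model.conf_int()).rename(columns={0: "ci_low", 1: "ci_high"}),  # 95 % CI
     model.pvalues.rename("p_value")],
    axis=1,
)
print(summary)
```

Exponentiating the fitted coefficients yields odds ratios with confidence intervals, which is how results of the form "odds ratio 2.19, 95 % confidence interval 1.24–3.87" are obtained.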
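The relationship between the three headline percentages (80.75 % accuracy, 14.25 % answer-explanation inconsistency, 74 % adjusted accuracy) is easier to see as question counts over the 400-item exam. The arithmetic below is a back-of-the-envelope check; the interpretation that inconsistent items were rescored against the explanation-supported answer is an assumption, not stated in the record.

```python
# Back-of-the-envelope check of the abstract's consistency figures over
# the 400-question exam. The rescoring interpretation is an assumption.
n_questions = 400
initially_correct = round(0.8075 * n_questions)   # 323 answers correct as given
inconsistent = round(0.1425 * n_questions)        # 57 answers disagree with their explanation
adjusted_correct = round(0.74 * n_questions)      # 296 answers correct after adjustment

# Under the rescoring assumption, 323 - 296 = 27 of the 57 inconsistent
# items paired a correct answer with an explanation pointing elsewhere.
print(initially_correct, inconsistent, adjusted_correct,
      initially_correct - adjusted_correct)       # -> 323 57 296 27
```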
| ArticleNumber | 104717 |
| Author | Lin, Li-En; Lin, Li-Hwa; Su, Mei-Chin; Chen, Yu-Chun |
| Author_xml | 1. Su, Mei-Chin (Department of Nursing, Taipei Veterans General Hospital, Taipei, Taiwan); 2. Lin, Li-En (Big Data Center, Taipei Veterans General Hospital, Taipei, Taiwan); 3. Lin, Li-Hwa (Department of Nursing, Taipei Veterans General Hospital, Taipei, Taiwan); 4. Chen, Yu-Chun (Big Data Center, Taipei Veterans General Hospital, Taipei, Taiwan; email: yuchn.chen@gmail.com, ycchen22@vghtpe.gov.tw) |
| BackLink | https://www.ncbi.nlm.nih.gov/pubmed/38401366 (View this record in MEDLINE/PubMed) |
| ContentType | Journal Article |
| Copyright | 2024 Elsevier Ltd Copyright © 2024 Elsevier Ltd. All rights reserved. |
| DOI | 10.1016/j.ijnurstu.2024.104717 |
| DatabaseName | CrossRef; PubMed; MEDLINE - Academic |
| Discipline | Nursing |
| EISSN | 1873-491X |
| ExternalDocumentID | 38401366 (PubMed); 10.1016/j.ijnurstu.2024.104717 (Crossref); S0020748924000294 (Elsevier) |
| Genre | Journal Article |
| ISICitedReferencesCount | 20 |
| ISSN | 0020-7489 1873-491X |
| IsPeerReviewed | true |
| IsScholarly | true |
| Keywords | Artificial intelligence language understanding tools; Accuracy; Question bank; Question cognitive level; Human-verification of explanations; ChatGPT; Consistency; Question type; ChatGPT-generated answers; Nursing license exam; Clinical vignettes |
| Language | English |
| License | Copyright © 2024 Elsevier Ltd. All rights reserved. |
| PMID | 38401366 |
| PQID | 3056665285 |
| PQPubID | 23479 |
| PublicationDate | 2024-05-01 |
| PublicationPlace | England |
| PublicationTitle | International journal of nursing studies |
| PublicationTitleAlternate | Int J Nurs Stud |
| PublicationYear | 2024 |
| Publisher | Elsevier Ltd |
| SourceID | proquest pubmed crossref elsevier |
| SourceType | Aggregation Database; Index Database; Enrichment Source; Publisher |
| StartPage | 104717 |
| SubjectTerms | Accuracy; Artificial intelligence language understanding tools; ChatGPT; ChatGPT-generated answers; Clinical vignettes; Consistency; Human-verification of explanations; Nursing license exam; Question bank; Question cognitive level; Question type |
| Title | Assessing question characteristic influences on ChatGPT's performance and response-explanation consistency: Insights from Taiwan's Nursing Licensing Exam |
| URI | https://dx.doi.org/10.1016/j.ijnurstu.2024.104717 https://www.ncbi.nlm.nih.gov/pubmed/38401366 https://www.proquest.com/docview/3056665285 |
| Volume | 153 |