Exploring ChatGPT-4o-generated reflections: Alignment with professional standards in diagnostic radiography: A pilot experiment.
Saved in:
| Title: | Exploring ChatGPT-4o-generated reflections: Alignment with professional standards in diagnostic radiography: A pilot experiment. |
|---|---|
| Authors: | Nabasenja C (Faculty of Science and Health, Charles Sturt University, Wagga, NSW, Australia); Chau M (Faculty of Science and Health, Charles Sturt University, Port Macquarie, NSW, Australia; electronic address: schau@csu.edu.au); Green E (Three Rivers Department of Rural Health, Charles Sturt University, NSW, Australia) |
| Source: | Journal of medical imaging and radiation sciences [J Med Imaging Radiat Sci] 2025 Dec; Vol. 56 (6), pp. 102082. Date of Electronic Publication: 2025 Aug 07. |
| Publication Type: | Journal Article |
| Language: | English |
| Journal Info: | Publisher: Elsevier; Country of Publication: United States; NLM ID: 101469694; Publication Model: Print-Electronic; Cited Medium: Internet; ISSN: 1876-7982 (Electronic); Linking ISSN: 18767982; NLM ISO Abbreviation: J Med Imaging Radiat Sci; Subsets: MEDLINE |
| Imprint Name(s): | Original Publication: New York: Elsevier |
| MeSH Terms: | Artificial Intelligence*; Radiology*/education; Radiography*/standards; Humans; Pilot Projects; Clinical Competence; Reproducibility of Results; Australia; Generative Artificial Intelligence |
| Abstract: | Introduction/background: Artificial intelligence (AI) tools such as ChatGPT-4o are increasingly being explored in education. This study examined the potential of ChatGPT-4o to support reflective practice in medical radiation science (MRS) education. The focus was on the quality of AI-generated reflections in terms of alignment with professional standards, depth, clarity, and practical relevance.<br />Methods: Four clinical scenarios representing third-year diagnostic radiography placements were used as prompts. ChatGPT-4o generated reflective responses, which were assessed by three reviewers. Reflections were evaluated against the Medical Radiation Practice Board of Australia's professional capability domains and the National Safety and Quality Health Service Standards. Review criteria included clarity, depth, authenticity, and practical relevance. Inter-rater reliability was analysed using intraclass correlation coefficients (ICC) and the Friedman test.<br />Results: Scenario 3 achieved the highest inter-rater reliability (ICC: moderate to excellent; p = 0.022). Scenario 2 showed the lowest reliability (ICC: poor to fair; p = 0.060). Reflections were consistently well-structured and clear, but often lacked emotional depth, contextual awareness, and person-centered insights. Qualitative feedback identified limitations in empathetic reflection and critical self-awareness.<br />Discussion: ChatGPT-4o can produce structured reflective responses aligned with professional frameworks. However, its lack of emotional and contextual depth limits its ability to replace authentic reflective practice. Reviewer agreement varied depending on scenario complexity and emotional content.<br />Conclusion: AI tools such as ChatGPT-4o can assist in structuring reflections in MRS education but should complement, not replace, human-guided reflective learning. Hybrid models combining AI and educator input may enhance both efficiency and authenticity.<br />(Copyright © 2025. Published by Elsevier Inc.) |
| Contributed Indexing: | Keywords: Artificial intelligence; Australia; Diagnostic radiography education; Reflective practice; Teaching methods; Technology-assisted learning |
| Entry Date(s): | Date Created: 2025-08-08; Date Completed: 2025-11-23; Latest Revision: 2025-11-23 |
| Update Code: | 20251124 |
| DOI: | 10.1016/j.jmir.2025.102082 |
| PMID: | 40779971 |
| Database: | MEDLINE |
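
The Methods passage in the abstract above reports inter-rater reliability via intraclass correlation coefficients (ICC) and the Friedman test, but the record does not name the analysis software. As an illustration only, here is a minimal Python sketch of how three reviewers' ratings of one scenario might be analysed with the `pingouin` and `scipy` libraries; the reviewer labels, rating criteria, and all scores below are invented placeholders, not the study's data or results.

```python
# Hypothetical sketch: inter-rater reliability for rated reflections.
# The paper does not state its software or raw data; everything here
# is an assumed stand-in for illustration.
import pandas as pd
import pingouin as pg  # pip install pingouin
from scipy.stats import friedmanchisquare

# Invented ratings: 3 reviewers scoring one scenario on 4 criteria
# (clarity, depth, authenticity, practical relevance), 1-5 scale.
ratings = pd.DataFrame({
    "criterion": ["clarity", "depth", "authenticity", "relevance"] * 3,
    "reviewer":  ["R1"] * 4 + ["R2"] * 4 + ["R3"] * 4,
    "score":     [5, 3, 2, 4,  4, 3, 2, 4,  5, 2, 3, 4],
})

# Intraclass correlation coefficients (pingouin reports ICC1-ICC3
# and their average-rater variants in one table).
icc = pg.intraclass_corr(data=ratings, targets="criterion",
                         raters="reviewer", ratings="score")
print(icc[["Type", "ICC", "CI95%"]])

# Friedman test across the three reviewers' paired scores.
wide = ratings.pivot(index="criterion", columns="reviewer", values="score")
stat, p = friedmanchisquare(wide["R1"], wide["R2"], wide["R3"])
print(f"Friedman chi-square = {stat:.3f}, p = {p:.3f}")
```

Note that `pingouin` returns several ICC forms; which form underlies the study's "moderate to excellent" and "poor to fair" classifications is not stated in this record.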