Progress Is Impossible without Change: Implementing Automatic Item Generation in Medical Knowledge Progress Testing

Bibliographic Details
Title: Progress Is Impossible without Change: Implementing Automatic Item Generation in Medical Knowledge Progress Testing
Language: English
Authors: Filipe Manuel Vidal Falcão (ORCID 0000-0002-7328-5655), Daniela S.M. Pereira, José Miguel Pêgo, Patrício Costa
Source: Education and Information Technologies. 2024 29(4):4505-4530.
Availability: Springer. Available from: Springer Nature. One New York Plaza, Suite 4600, New York, NY 10004. Tel: 800-777-4643; Tel: 212-460-1500; Fax: 212-460-1700; e-mail: customerservice@springernature.com; Web site: https://link.springer.com/
Peer Reviewed: Y
Page Count: 26
Publication Date: 2024
Document Type: Journal Articles
Reports - Research
Education Level: Higher Education
Postsecondary Education
Descriptors: Automation, Test Items, Progress Monitoring, Medical Education, Test Format, Algorithms, Comparative Testing, Handwriting, Foreign Countries, Medical Students, Technology Uses in Education, Educational Technology
Geographic Terms: Portugal
DOI: 10.1007/s10639-023-12014-x
ISSN: 1360-2357; 1573-7608
Abstract: Progress tests (PT) are a popular type of longitudinal assessment used for evaluating clinical knowledge retention and lifelong learning in health professions education. Most PTs consist of multiple-choice questions (MCQs) whose development is costly and time-consuming. Automatic Item Generation (AIG) generates test items through algorithms, promising to ease this burden. However, it remains unclear how AIG-items behave in formative assessment (FA) modalities such as PTs compared to manually written items. The purpose of this study was to compare the quality and validity of AIG-items versus manually written items. Responses to 126 (23 automatically generated) dichotomously scored single best-answer five-option MCQs retrieved from the 2021 University of Minho PT of medicine were analyzed. Procedures based on item response theory (IRT), dimensionality testing, item fit, reliability, differential item functioning (DIF) and distractor analysis were used. Qualitative assessment was conducted through expert review. Validity evidence of AIG-items was assessed by using hierarchical linear modeling (HLM). The PT proved to be a viable tool for assessing medical students' cognitive competencies. AIG-items were parallel to manually written items, presenting similar indices of difficulty and information. The proportion of functional distractors for both AIG and manually written items was similar. Evidence of validity for AIG-items was found, and these items showed higher levels of item quality. AIG-items functioned as intended and were appropriate for evaluating medical students at various levels of the knowledge spectrum.
Abstractor: As Provided
Entry Date: 2024
Accession Number: EJ1416068
Database: ERIC