'What would my peers say?' Comparing the opinion-based method with the prediction-based method in Continuing Medical Education course evaluation.

Saved in:
Bibliographic Details
Title: 'What would my peers say?' Comparing the opinion-based method with the prediction-based method in Continuing Medical Education course evaluation.
Authors: Jamie S. Chua, Merel van Diepen, Marjolijn D. Trietsch, Friedo W. Dekker, Johanna Schönrock-Adema, Jacqueline Bustraan
Source: Canadian Medical Education Journal, 15(3):18-25
Publisher Information: University of Calgary, Health Sciences Centre.
Publication Year: 2024
Physical Description: 8 pages
Subject Terms: humans; education, medical, continuing/methods; peer group; educational measurement/methods; male; female; surveys and questionnaires; students, medical/psychology; adult; SDG 03 - Good Health and Well-being; SDG 04 - Quality Education; SDG 10 - Reduced Inequalities; SDG 11 - Sustainable Cities and Communities
Description: Background: Although medical courses are frequently evaluated via surveys with Likert scales ranging from 'strongly agree' to 'strongly disagree,' low response rates limit their utility. In undergraduate medical education, a new method in which students predicted what their peers would say required fewer respondents to obtain similar results. However, this prediction-based method lacks validation for continuing medical education (CME), which typically targets a more heterogeneous group than medical students. Methods: In this study, 597 participants in a large CME course were randomly assigned either to express personal opinions on a five-point Likert scale (opinion-based method; n = 300) or to predict the percentage of their peers choosing each Likert scale option (prediction-based method; n = 297). For each question, we calculated the minimum number of respondents needed for stable average results using an iterative algorithm. We compared mean scores and the distribution of scores between the two methods. Results: The overall response rate was 47%. The prediction-based method required fewer respondents than the opinion-based method to obtain similar average responses. Mean response scores were similar in both groups for most questions, but the prediction-based method yielded fewer extreme responses (strongly agree/disagree). Conclusions: We validated the prediction-based method for evaluating CME and provide practical considerations for applying this method.
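Note: The methods summary above mentions two computable ideas: a prediction-based score, where each respondent estimates the percentage of peers choosing each Likert option, and an iterative algorithm for the minimum number of respondents needed before the average stabilizes. The Python sketch below is a minimal, hypothetical reading of both; the stability tolerance, the resampling scheme, and all function names are illustrative assumptions, not the authors' published algorithm.

    import random
    import statistics

    LIKERT = [1, 2, 3, 4, 5]  # 'strongly disagree' .. 'strongly agree'

    def opinion_mean(scores):
        # Opinion-based method: plain mean of individual Likert responses.
        return statistics.mean(scores)

    def prediction_mean(distributions):
        # Prediction-based method: each respondent supplies the percentage
        # of peers they expect to pick each option, e.g. [5, 10, 20, 40, 25].
        # Average the distributions column-wise, then take the weighted mean.
        avg = [statistics.mean(col) for col in zip(*distributions)]
        return sum(s * p for s, p in zip(LIKERT, avg)) / sum(avg)

    def min_respondents(responses, mean_fn, tol=0.1, resamples=500, seed=1):
        # Assumed stability criterion: the smallest n such that the means of
        # `resamples` random subsamples of size n all stay within `tol` of
        # the full-sample mean. Reconstruction for illustration only.
        rng = random.Random(seed)
        target = mean_fn(responses)
        for n in range(2, len(responses) + 1):
            if all(abs(mean_fn(rng.sample(responses, n)) - target) <= tol
                   for _ in range(resamples)):
                return n
        return len(responses)

    # Toy usage with made-up data:
    opinions = [4, 5, 3, 4, 4, 2, 5, 4, 3, 4, 4, 5, 4, 3, 4]
    predictions = [[5, 10, 20, 40, 25], [0, 15, 25, 35, 25],
                   [10, 10, 15, 45, 20], [5, 5, 30, 40, 20]]
    print(min_respondents(opinions, opinion_mean))
    print(min_respondents(predictions, prediction_mean))

In the paper's terms, a lower return value for the prediction-based data would reflect its reported advantage: fewer respondents are needed before the average stops moving.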
Document Type: article
Language: English
Access URL: https://research.hanze.nl/en/publications/31a304a1-6a64-4f6c-9f47-35b6873ba179
Availability: http://www.hbo-kennisbank.nl/en/page/hborecord.view/?uploadId=hanzepure:oai:research.hanze.nl:publications/31a304a1-6a64-4f6c-9f47-35b6873ba179
Accession Number: edshbo.hanzepure.oai.research.hanze.nl.publications.31a304a1.6a64.4f6c.9f47.35b6873ba179
Database: HBO Kennisbank