AI-Assisted Qualitative Coding: Comparing Human to Machine Outputs

Bibliographic Details
Published in: IPCC proceedings (Print), pp. 230-233
Main Authors: Hodges, Amy; Ponce, Timothy; Seawright, Leslie
Format: Conference Proceeding
Language:English
Published: IEEE, 20.07.2025
ISSN: 2158-1002
Description
Summary: This study explores the application of generative AI in the qualitative coding of survey responses, comparing its performance to that of human coders. By using ChatGPT-4o, we aimed to automate the coding process traditionally performed manually, assessing the AI's ability to identify themes and patterns within textual data. Our findings reveal that while AI demonstrates remarkable efficiency and speed, it struggles with the nuanced understanding required for complex coding tasks. The AI frequently misinterpreted coding definitions and over-relied on certain codes, indicating a need for more balanced training data and iterative refinement. Despite these challenges, AI proved valuable in providing summative feedback and identifying overall trends, suggesting its potential as a complementary tool in qualitative research. The study underscores the importance of developing codebooks collaboratively with AI and highlights the necessity of human oversight to ensure accuracy and depth.
DOI: 10.1109/ProComm64814.2025.00051
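
The workflow described in the summary, prompting an LLM to apply a predefined codebook to survey responses, can be illustrated with a minimal Python sketch. This is a hypothetical example, not the authors' actual pipeline: the codebook entries, prompt wording, and code_response helper are assumptions introduced here for illustration; only the OpenAI chat-completions call and the publicly documented gpt-4o model name come from the standard SDK.

# Minimal sketch of LLM-assisted qualitative coding (hypothetical; not the
# authors' pipeline). Requires the openai package and an OPENAI_API_KEY.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Illustrative codebook: code name -> definition (assumed, not from the paper).
CODEBOOK = {
    "usability": "Comments about ease or difficulty of using the product.",
    "support": "Mentions of help, documentation, or customer service.",
    "pricing": "References to cost, value, or billing.",
}

def code_response(response_text: str) -> str:
    """Ask the model to assign codebook codes to one survey response."""
    codebook_text = "\n".join(f"- {name}: {defn}" for name, defn in CODEBOOK.items())
    prompt = (
        "You are a qualitative coder. Apply the codebook below to the survey "
        "response. Return only the matching code names, comma-separated, or "
        "'none' if no code applies.\n\n"
        f"Codebook:\n{codebook_text}\n\nSurvey response:\n{response_text}"
    )
    completion = client.chat.completions.create(
        model="gpt-4o",
        messages=[{"role": "user", "content": prompt}],
        temperature=0,  # favor reproducible code assignments
    )
    return completion.choices[0].message.content.strip()

if __name__ == "__main__":
    print(code_response("The dashboard was confusing and support never replied."))

Even with a setup like this, the assigned codes would still require the human review and iterative codebook refinement the study recommends, given the misinterpretation and over-reliance on certain codes reported in the abstract.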