Automatic Implicit Motive Codings Are at Least as Accurate as Humans’ and 99% Faster

Bibliographic Details
Title: Automatic Implicit Motive Codings Are at Least as Accurate as Humans’ and 99% Faster
Authors: Nilsson, August Håkan, Runge, J. Malte, Ganesan, Adithya V., Lövenstierne, Carl Viggo N.G., Soni, Nikita, Kjell, Oscar N.E.
Contributors: Lund University, Faculty of Social Sciences, Departments of Administrative, Economic and Social Sciences, Department of Psychology (Originator); Lund University, Profile areas and other strong research environments, Strategic research areas (SRA), eSSENCE: The e-Science Collaboration (Originator)
Source: Journal of Personality and Social Psychology. 128(6):1371-1392
Subject Terms: Natural Sciences, Computer and Information Sciences
Description: Implicit motives, nonconscious needs that influence individuals’ behaviors and shape their emotions, have been part of personality research for nearly a century but differ from personality traits. Implicit motive assessment is very resource-intensive, requiring expert coding of individuals’ written stories about ambiguous pictures, and this has hampered implicit motive research. Using large language models and machine learning techniques, we aimed to create high-quality implicit motive models that are easy for researchers to use. We trained models to code the need for power, achievement, and affiliation (N = 85,028 sentences). The person-level assessments converged strongly with the holdout data: intraclass correlation coefficients, ICC(1,1) = .85, .87, and .89 for achievement, power, and affiliation, respectively. We demonstrated causal validity by reproducing two classical experimental studies that aroused implicit motives. We let three coders recode sentences where our models and the original coders strongly disagreed; the new coders agreed with our models in 85% of the cases (p < .001, ϕ = .69). Using topic and word embedding analyses, we found the specific language associated with each motive to have high face validity. We argue that these models can be used in addition to, or instead of, human coders. We provide a free, user-friendly framework in the established R-package text, together with a tutorial for researchers to apply the models to their data; the models reduce coding time by over 99% and require no cognitive effort for coding. We hope this coding automation will facilitate a renaissance in implicit motive research.
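
Note: The abstract points to the authors’ R-package text as the delivery vehicle for the models. The snippet below is a minimal sketch of what applying a pre-trained model to new stories might look like; textPredict() and textrpp_install() are documented functions of the text package, but the model identifier "implicit_power" is a placeholder assumption, not a confirmed release name — consult the authors’ tutorial for the actual identifiers.

```r
# Minimal sketch: scoring PSE-style stories for an implicit motive
# with the 'text' R package. The model name below is a placeholder
# assumption; see the authors' tutorial for the released model names.

# install.packages("text")
# text::textrpp_install()  # one-time setup of the Python backend

library(text)

# Example stories (one ambiguous-picture story per participant)
stories <- data.frame(
  participant_id = c(1, 2),
  story = c(
    "She worked through the night, determined to outperform every rival lab.",
    "The two old friends sat together, glad simply to be in each other's company."
  )
)

# Hypothetical call: textPredict() applies a pre-trained model to new texts.
motive_scores <- textPredict(
  model_info = "implicit_power",  # placeholder model identifier (assumption)
  texts      = stories$story
)

head(motive_scores)
```

Under this workflow, coding a dataset becomes a single function call per motive, which is what makes the reported >99% reduction in coding time plausible relative to manual expert coding.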
Access URL: https://doi.org/10.1037/pspp0000544
Database: SwePub