Human favoritism, not AI aversion: People’s perceptions (and bias) toward generative AI, human experts, and human–GAI collaboration in persuasive content generation

Published in: Judgment and Decision Making, Volume 18
Main Authors: Zhang, Yunhao; Gosline, Renée
Format: Journal Article
Language: English
Publication details: Cambridge University Press, 2023
ISSN: 1930-2975
Description
Summary: With the wide availability of large language models and generative AI, there are four primary paradigms for human–AI collaboration: human-only, AI-only (ChatGPT-4), augmented human (where a human makes the final decision with AI output as a reference), or augmented AI (where the AI makes the final decision with human output as a reference). In partnership with one of the world’s leading consulting firms, we enlisted professional content creators and ChatGPT-4 to create advertising content for products and persuasive content for campaigns following the aforementioned paradigms. First, we find that, contrary to the expectations of some of the existing algorithm aversion literature on conventional predictive AI, the content generated by generative AI and augmented AI is perceived as of higher quality than that produced by human experts and augmented human experts. Second, revealing the source of content production reduces—but does not reverse—the perceived quality gap between human- and AI-generated content. This bias in evaluation is predominantly driven by human favoritism rather than AI aversion: knowing that the same content is created by a human expert increases its (reported) perceived quality, but knowing that AI is involved in the creation process does not affect its perceived quality. Further analysis suggests this bias is not due to a ‘quality prime’: knowing that the content they are about to evaluate comes from competent creators (e.g., industry professionals and state-of-the-art AI), without knowing exactly who created each piece of content, does not increase participants’ perceived quality.
DOI: 10.1017/jdm.2023.37