The impact of human-AI collaboration types on consumer evaluation and usage intention: a perspective of responsibility attribution

Bibliographic Details
Published in: Frontiers in psychology Vol. 14; p. 1277861
Main Authors: Yue, Beibei, Li, Hu
Format: Journal Article
Language:English
Published: Frontiers Media S.A 30.10.2023
Subjects:
ISSN:1664-1078
Abstract Despite the widespread availability of artificial intelligence (AI) products and services, consumer evaluations and adoption intentions have not met expectations. Existing research mainly focuses on AI’s instrumental attributes from the consumer perspective, along with negative impacts of AI failures on evaluations and willingness to use. However, research is lacking on AI as a collaborative agent, investigating the impact of human-AI collaboration on AI acceptance under different outcome expectations. This study examines the interactive effects of human-AI collaboration types (AI-dominant vs. AI-assisted) and outcome expectations (positive vs. negative) on AI product evaluations and usage willingness, along with the underlying mechanisms, from a human-AI relationship perspective. It also investigates the moderating role of algorithm transparency in these effects. Using three online experiments with analysis of variance and bootstrap methods, the study validates these interactive mechanisms, revealing the mediating role of attribution and moderating role of algorithm transparency. Experiment 1 confirms the interactive effects of human-AI collaboration types and outcome expectations on consumer evaluations and usage willingness. Under positive outcome expectations, consumers evaluate and express willingness to use AI-dominant intelligent vehicles with autonomous driving capabilities higher than those with emergency evasion capabilities (AI-assisted). However, under negative outcome expectations, consumers rate autonomous driving capabilities lower compared to emergency evasion capabilities. Experiment 2 examines the mediating role of attribution through ChatGPT’s dominant or assisting role under different outcome expectations. Experiment 3 uses a clinical decision-making system to study algorithm transparency’s moderating role, showing higher transparency improves evaluations and willingness to use AI products and services under negative outcome expectations. Theoretically, this study advances consumer behavior research by exploring the human-AI relationship within artificial intelligence, enhancing understanding of consumer acceptance variations. Practically, it offers insights for better integrating AI products and services into the market.
Author Li, Hu
Yue, Beibei
Author_xml – sequence: 1
  givenname: Beibei
  surname: Yue
  fullname: Yue, Beibei
– sequence: 2
  givenname: Hu
  surname: Li
  fullname: Li, Hu
CitedBy_id crossref_primary_10_1080_02650487_2025_2458996
crossref_primary_10_1080_10447318_2025_2520997
crossref_primary_10_1016_j_tele_2025_102304
crossref_primary_10_1080_10447318_2025_2482742
crossref_primary_10_1108_JRIM_03_2025_0135
crossref_primary_10_1108_MD_07_2024_1635
crossref_primary_10_1108_JOSM_05_2024_0223
crossref_primary_10_1108_JSM_10_2024_0539
crossref_primary_10_1109_ACCESS_2025_3567656
crossref_primary_10_1016_j_jbusres_2025_115276
crossref_primary_10_1108_TG_01_2025_0011
crossref_primary_10_1145_3710987
crossref_primary_10_3389_fpos_2025_1560180
crossref_primary_10_1108_TG_01_2025_0004
crossref_primary_10_1002_pa_70067
crossref_primary_10_1016_j_jretconser_2024_103761
crossref_primary_10_1080_19368623_2025_2532488
crossref_primary_10_3390_bs14121216
crossref_primary_10_1057_s41599_025_05097_z
crossref_primary_10_15187_adr_2025_05_38_2_433
crossref_primary_10_1080_10447318_2025_2454954
crossref_primary_10_1002_cb_70045
Cites_doi 10.1177/00187208221113448
10.1016/j.jengtecman.2018.04.006
10.1177/1461444818773059
10.1007/s11747-009-0179-4
10.1016/j.newideapsych.2017.11.001
10.1177/00222429211045687
10.1016/j.engappai.2016.05.009
10.1177/0146167292185006
10.1093/jcmc/zmz026
10.1111/jedm.12050
10.1007/BF01173577
10.1080/15332861.2020.1832817
10.1177/1094670516675416
10.1111/poms.13770
10.2307/30036540
10.1080/10447318.2021.2004139
10.1038/s41598-022-18751-2
10.1016/j.chb.2023.107714
10.1002/mar.21721
10.1016/j.jbusres.2007.09.008
10.1016/j.ijindorg.2017.09.003
10.1145/3361118
10.1016/j.intcom.2006.07.005
10.1007/s11002-019-09485-9
10.1037/0022-3514.66.4.742
10.1109/MIS.2007.21
10.1080/10447318.2018.1456150
10.1108/INTR-08-2021-0600
10.21307/ijssis-2017-283
10.1108/NBRI-05-2022-0051
10.1080/1369118X.2019.1568515
10.1016/j.compedu.2018.09.009
10.1109/ACCESS.2018.2870052
10.1037/10628-000
10.1016/j.jbusres.2006.05.006
10.1038/s41586-019-1138-y
10.1007/s10458-019-09408-y
10.1016/j.jii.2021.100257
10.1145/3233231
10.1093/jcmc/zmac010
10.1518/hfes.46.1.50.30392
10.1037/1089-2680.5.4.323
10.1016/j.biopsycho.2010.07.001
10.1177/00222437211050351
10.1109/THMS.2017.2648849
10.1016/j.jbusvent.2012
10.1007/s12559-018-9619-0
10.1016/S0065-2601(07)00002-0
10.1016/j.jretconser.2021.102900
ContentType Journal Article
Copyright Copyright © 2023 Yue and Li.
Copyright_xml – notice: Copyright © 2023 Yue and Li.
DBID AAYXX
CITATION
7X8
DOA
DOI 10.3389/fpsyg.2023.1277861
DatabaseName CrossRef
MEDLINE - Academic
DOAJ Directory of Open Access Journals
DatabaseTitle CrossRef
MEDLINE - Academic
DatabaseTitleList
CrossRef
MEDLINE - Academic
Database_xml – sequence: 1
  dbid: DOA
  name: DOAJ Directory of Open Access Journals
  url: https://www.doaj.org/
  sourceTypes: Open Website
– sequence: 2
  dbid: 7X8
  name: MEDLINE - Academic
  url: https://search.proquest.com/medline
  sourceTypes: Aggregation Database
DeliveryMethod fulltext_linktorsrc
Discipline Psychology
EISSN 1664-1078
ExternalDocumentID oai_doaj_org_article_830f89fd28d948ebafd71e70f649f471
10_3389_fpsyg_2023_1277861
GroupedDBID 53G
5VS
9T4
AAFWJ
AAKDD
AAYXX
ABIVO
ACGFO
ACGFS
ACHQT
ADBBV
ADRAZ
AEGXH
AFPKN
AIAGR
ALMA_UNASSIGNED_HOLDINGS
AOIJS
BAWUL
BCNDV
CITATION
DIK
EBS
EJD
EMOBN
F5P
GROUPED_DOAJ
GX1
HYE
KQ8
M48
M~E
O5R
O5S
OK1
P2P
PGMZT
RNS
RPM
7X8
IEDL.DBID DOA
ISICitedReferencesCount 28
ISICitedReferencesURI http://www.webofscience.com/api/gateway?GWVersion=2&SrcApp=Summon&SrcAuth=ProQuest&DestLinkType=CitingArticles&DestApp=WOS_CPL&KeyUT=001101677200001&url=https%3A%2F%2Fcvtisr.summon.serialssolutions.com%2F%23%21%2Fsearch%3Fho%3Df%26include.ft.matches%3Dt%26l%3Dnull%26q%3D
ISSN 1664-1078
IngestDate Fri Oct 03 12:43:32 EDT 2025
Thu Oct 02 12:13:13 EDT 2025
Sat Nov 29 03:06:12 EST 2025
Tue Nov 18 22:31:00 EST 2025
IsDoiOpenAccess true
IsOpenAccess true
IsPeerReviewed true
IsScholarly true
Language English
LinkModel DirectLink
Notes ObjectType-Article-1
SourceType-Scholarly Journals-1
ObjectType-Feature-2
content type line 23
OpenAccessLink https://doaj.org/article/830f89fd28d948ebafd71e70f649f471
PQID 2895701251
PQPubID 23479
ParticipantIDs doaj_primary_oai_doaj_org_article_830f89fd28d948ebafd71e70f649f471
proquest_miscellaneous_2895701251
crossref_citationtrail_10_3389_fpsyg_2023_1277861
crossref_primary_10_3389_fpsyg_2023_1277861
PublicationCentury 2000
PublicationDate 2023-10-30
PublicationDateYYYYMMDD 2023-10-30
PublicationDate_xml – month: 10
  year: 2023
  text: 2023-10-30
  day: 30
PublicationDecade 2020
PublicationTitle Frontiers in psychology
PublicationYear 2023
Publisher Frontiers Media S.A
Publisher_xml – name: Frontiers Media S.A
References Peng (ref35) 2022
Rosenfeld (ref41) 2019; 33
Hong (ref15) 2022; 38
Hayes (ref13) 2013; 51
Choi (ref6) 2008; 61
Mao (ref30) 2019; 3
McAuley (ref31) 1992; 18
Venkatesh (ref52) 2003; 27
Sundar (ref50) 2015
Heider (ref14) 1958
Wang (ref53) 2023; 14
Jenkins (ref16) 2014; 29
Kaur (ref19) 2018; 48
Oh (ref33) 2018
Reverberi (ref38) 2022; 12
Grunewald (ref11) 2017; 55
Lipton (ref27) 2018; 61
Lee (ref25) 2004; 46
Collier (ref7) 2010; 38
Kim (ref20) 2022; 59
Baumeister (ref5) 2001; 5
Stubbs (ref48) 2007; 22
Yang (ref56) 2020
Park (ref34) 2018
Crolic (ref8) 2022; 86
Peterson (ref36) 1982; 6
Karray (ref18) 2008; 1
Kim (ref21) 2019; 30
Cuddy (ref9) 2008
Maddikunta (ref29) 2022; 26
Song (ref46) 2022; 66
Sundar (ref49) 2020; 25
Gu (ref12) 2010; 85
Scherer (ref43) 2019; 128
Zarifis (ref57) 2021; 20
Rahwan (ref37) 2019; 568
Serenko (ref44) 2007; 19
West (ref54) 2018; 20
Adadi (ref2) 2018; 6
Louie (ref28) 2020
Shank (ref45) 2019; 22
Lehmann (ref26) 2022; 31
Laato (ref22) 2022; 32
Rudin (ref42) 2019
Basso (ref4) 2016; 55
van der Woerdt (ref51) 2019; 54
Robinette (ref40) 2017; 47
Molina (ref32) 2022; 27
Lai (ref23) 2022
Zhang (ref58) 2022; 39
Albrecht (ref3) 2017; 20
Ribeiro (ref39) 2016
Westphal (ref55) 2023; 144
Lee (ref24) 2012
Kalamas (ref17) 2008; 61
Franke (ref10) 2019; 35
Strathman (ref47) 1994; 66
Abbass (ref1) 2019; 11
References_xml – year: 2018
  ident: ref34
– year: 2022
  ident: ref35
  article-title: Drivers' evaluation of different automated driving styles: is it both comfortable and natural?
  publication-title: Hum. Factors
  doi: 10.1177/00187208221113448
– volume: 48
  start-page: 87
  year: 2018
  ident: ref19
  article-title: Trust in driverless cars: investigating key factors influencing the adoption of driverless cars
  publication-title: J. Eng. Technol. Manag.
  doi: 10.1016/j.jengtecman.2018.04.006
– volume: 20
  start-page: 4366
  year: 2018
  ident: ref54
  article-title: Censored, suspended, shadowbanned: user interpretations of content moderation on social media platforms
  publication-title: New Media Soc.
  doi: 10.1177/1461444818773059
– volume-title: The secrets of machine learning: Ten things you wish you had known earlier to be more effective at data analysis
  year: 2019
  ident: ref42
– volume: 38
  start-page: 490
  year: 2010
  ident: ref7
  article-title: Examining the influence of control and convenience in a self-service setting
  publication-title: J. Acad. Mark. Sci.
  doi: 10.1007/s11747-009-0179-4
– volume: 54
  start-page: 93
  year: 2019
  ident: ref51
  article-title: When robots appear to have a mind: the human perception of machine agency and responsibility
  publication-title: New Ideas Psychol.
  doi: 10.1016/j.newideapsych.2017.11.001
– volume: 86
  start-page: 132
  year: 2022
  ident: ref8
  article-title: Blame the bot: anthropomorphism and anger in customer-Chatbot interactions
  publication-title: J. Mark.
  doi: 10.1177/00222429211045687
– volume: 55
  start-page: 14
  year: 2016
  ident: ref4
  article-title: Engineering multi-agent systems using feedback loops and holarchies
  publication-title: Eng. Appl. Artif. Intell.
  doi: 10.1016/j.engappai.2016.05.009
– volume: 18
  start-page: 566
  year: 1992
  ident: ref31
  article-title: Measuring causal attributions: the revised causal dimension scale (CDSII)
  publication-title: Personal. Soc. Psychol. Bull.
  doi: 10.1177/0146167292185006
– volume: 25
  start-page: 74
  year: 2020
  ident: ref49
  article-title: Rise of machine agency: a framework for studying the psychology of human-AI interaction (HAII)
  publication-title: J. Comput.-Mediat. Commun.
  doi: 10.1093/jcmc/zmz026
– volume: 51
  start-page: 335
  year: 2013
  ident: ref13
  article-title: Introduction to mediation, moderation, and conditional process analysis: a regression-based approach
  publication-title: J. Educ. Meas.
  doi: 10.1111/jedm.12050
– volume: 6
  start-page: 287
  year: 1982
  ident: ref36
  article-title: The Attributional Style Questionnaire
  publication-title: Cogn. Ther. Res.
  doi: 10.1007/BF01173577
– volume: 20
  start-page: 66
  year: 2021
  ident: ref57
  article-title: Evaluating if trust and personal information privacy concerns are barriers to using health insurance that explicitly utilizes AI
  publication-title: J. Internet Commer.
  doi: 10.1080/15332861.2020.1832817
– volume: 20
  start-page: 188
  year: 2017
  ident: ref3
  article-title: Perceptions of group versus individual service failures and their effects on customer outcomes: the role of attributions and customer entitlement
  publication-title: J. Serv. Res.
  doi: 10.1177/1094670516675416
– volume: 31
  start-page: 3419
  year: 2022
  ident: ref26
  article-title: The risk of algorithm transparency: how algorithm complexity drives the effects on the use of advice
  publication-title: Prod. Oper. Manag.
  doi: 10.1111/poms.13770
– volume: 27
  start-page: 425
  year: 2003
  ident: ref52
  article-title: User acceptance of information technology: toward a unified view
  publication-title: MIS Q.
  doi: 10.2307/30036540
– volume: 38
  start-page: 102
  year: 2022
  ident: ref15
  article-title: Why is artificial intelligence blamed more? Analysis of faulting artificial intelligence for self-driving car accidents in experimental settings
  publication-title: Int. J. Human-Computer Interact.
  doi: 10.1080/10447318.2021.2004139
– year: 2012
  ident: ref24
– volume: 12
  start-page: 14952
  year: 2022
  ident: ref38
  article-title: Experimental evidence of effective human-AI collaboration in medical decision-making
  publication-title: Sci. Rep.
  doi: 10.1038/s41598-022-18751-2
– year: 2020
  ident: ref28
– volume: 144
  start-page: 107714
  year: 2023
  ident: ref55
  article-title: Decision control and explanations in human-AI collaboration: improving user perceptions and compliance
  publication-title: Comput. Hum. Behav.
  doi: 10.1016/j.chb.2023.107714
– volume: 39
  start-page: 2171
  year: 2022
  ident: ref58
  article-title: Consumer reactions to AI design: exploring consumer willingness to pay for AI-designed products
  publication-title: Psychol. Mark.
  doi: 10.1002/mar.21721
– volume: 61
  start-page: 813
  year: 2008
  ident: ref17
  article-title: Reaching the boiling point: Consumers' negative affective reactions to firm-attributed service failures
  publication-title: J. Bus. Res.
  doi: 10.1016/j.jbusres.2007.09.008
– year: 2020
  ident: ref56
– volume: 55
  start-page: 91
  year: 2017
  ident: ref11
  article-title: Advertising as signal jamming
  publication-title: Int. J. Ind. Organ.
  doi: 10.1016/j.ijindorg.2017.09.003
– volume: 3
  start-page: 1
  year: 2019
  ident: ref30
  article-title: How data scientists work together with domain experts in scientific collaborations: to find the right answer or to ask the right question?
  publication-title: Proc. ACM Hum.-Comput. Interact.
  doi: 10.1145/3361118
– volume: 19
  start-page: 293
  year: 2007
  ident: ref44
  article-title: Are interface agents scapegoats? Attributions of responsibility in human-agent interaction
  publication-title: Interact. Comput.
  doi: 10.1016/j.intcom.2006.07.005
– volume: 30
  start-page: 1
  year: 2019
  ident: ref21
  article-title: Eliza in the uncanny valley: anthropomorphizing consumer robots increases their perceived warmth but decreases liking
  publication-title: Mark. Lett.
  doi: 10.1007/s11002-019-09485-9
– volume: 66
  start-page: 742
  year: 1994
  ident: ref47
  article-title: The consideration of future consequences: weighing immediate and distant outcomes of behavior
  publication-title: J. Pers. Soc. Psychol.
  doi: 10.1037/0022-3514.66.4.742
– volume: 22
  start-page: 42
  year: 2007
  ident: ref48
  article-title: Autonomy and common ground in human-robot interaction: a field study
  publication-title: IEEE Intell. Syst.
  doi: 10.1109/MIS.2007.21
– volume: 35
  start-page: 456
  year: 2019
  ident: ref10
  article-title: A personal resource for technology interaction: development and validation of the affinity for technology interaction (ATI) scale
  publication-title: Int. J. Human-Computer Interact.
  doi: 10.1080/10447318.2018.1456150
– volume: 32
  start-page: 1
  year: 2022
  ident: ref22
  article-title: How to explain AI systems to end users: a systematic literature review and research agenda
  publication-title: Internet Res.
  doi: 10.1108/INTR-08-2021-0600
– volume: 1
  start-page: 137
  year: 2008
  ident: ref18
  article-title: Human-computer interaction
  publication-title: Int. J. Smart Sensing Intelligent Systems
  doi: 10.21307/ijssis-2017-283
– year: 2016
  ident: ref39
– volume: 14
  start-page: 177
  year: 2023
  ident: ref53
  article-title: "facilitators" vs "substitutes": the influence of artificial intelligence products' image on consumer evaluation
  publication-title: Nankai Bus. Rev. Int.
  doi: 10.1108/NBRI-05-2022-0051
– volume: 22
  start-page: 648
  year: 2019
  ident: ref45
  article-title: When are artificial intelligence versus human agents faulted for wrongdoing? Moral attributions after individual and joint decisions
  publication-title: Inf. Commun. Soc.
  doi: 10.1080/1369118X.2019.1568515
– volume: 128
  start-page: 13
  year: 2019
  ident: ref43
  article-title: The technology acceptance model (TAM): a meta-analytic structural equation modeling approach to explaining teachers’ adoption of digital technology in education
  publication-title: Comput. Educ.
  doi: 10.1016/j.compedu.2018.09.009
– year: 2018
  ident: ref33
– volume: 6
  start-page: 52138
  year: 2018
  ident: ref2
  article-title: Peeking inside the black-box: a survey on explainable artificial intelligence (XAI)
  publication-title: IEEE Access
  doi: 10.1109/ACCESS.2018.2870052
– start-page: 47
  volume-title: Toward a theory of interactive media effects (TIME): four models for explaining how interface features affect user psychology
  year: 2015
  ident: ref50
– volume-title: The psychology of interpersonal relations/Fritz Heider
  year: 1958
  ident: ref14
  doi: 10.1037/10628-000
– volume: 61
  start-page: 24
  year: 2008
  ident: ref6
  article-title: Perceived controllability and service expectations: influences on customer reactions following service failure
  publication-title: J. Bus. Res.
  doi: 10.1016/j.jbusres.2006.05.006
– volume: 568
  start-page: 477
  year: 2019
  ident: ref37
  article-title: Machine behaviour
  publication-title: Nature
  doi: 10.1038/s41586-019-1138-y
– year: 2022
  ident: ref23
– volume: 33
  start-page: 673
  year: 2019
  ident: ref41
  article-title: Explainability in human-agent systems
  publication-title: Auton. Agent. Multi-Agent Syst.
  doi: 10.1007/s10458-019-09408-y
– volume: 26
  start-page: 100257
  year: 2022
  ident: ref29
  article-title: Industry 5.0: a survey on enabling technologies and potential applications
  publication-title: J. Ind. Inf. Integr.
  doi: 10.1016/j.jii.2021.100257
– volume: 61
  start-page: 36
  year: 2018
  ident: ref27
  article-title: The mythos of model interpretability
  publication-title: Commun. ACM
  doi: 10.1145/3233231
– volume: 27
  start-page: zac010
  year: 2022
  ident: ref32
  article-title: When AI moderates online content: effects of human collaboration and interactive transparency on user trust
  publication-title: J. Comput.-Mediat. Commun.
  doi: 10.1093/jcmc/zmac010
– volume: 46
  start-page: 50
  year: 2004
  ident: ref25
  article-title: Trust in automation: designing for appropriate reliance
  publication-title: Hum. Factors
  doi: 10.1518/hfes.46.1.50.30392
– volume: 5
  start-page: 323
  year: 2001
  ident: ref5
  article-title: Bad is stronger than good
  publication-title: Rev. Gen. Psychol.
  doi: 10.1037/1089-2680.5.4.323
– volume: 85
  start-page: 200
  year: 2010
  ident: ref12
  article-title: Anxiety and outcome evaluation: the good, the bad and the ambiguous
  publication-title: Biol. Psychol.
  doi: 10.1016/j.biopsycho.2010.07.001
– volume: 59
  start-page: 79
  year: 2022
  ident: ref20
  article-title: Home-tutoring services assisted with technology: investigating the role of artificial intelligence using a randomized field experiment
  publication-title: J. Mark. Res.
  doi: 10.1177/00222437211050351
– volume: 47
  start-page: 425
  year: 2017
  ident: ref40
  article-title: Effect of robot performance on human-robot trust in time-critical situations
  publication-title: IEEE Trans. Hum.-Mach. Syst.
  doi: 10.1109/THMS.2017.2648849
– volume: 29
  start-page: 17
  year: 2014
  ident: ref16
  article-title: Individual responses to firm failure: appraisals, grief, and the influence of prior failure experience
  publication-title: J. Bus. Ventur.
  doi: 10.1016/j.jbusvent.2012
– volume: 11
  start-page: 159
  year: 2019
  ident: ref1
  article-title: Social integration of artificial intelligence: functions, automation allocation logic and human-autonomy trust
  publication-title: Cogn. Comput.
  doi: 10.1007/s12559-018-9619-0
– volume-title: Advances in experimental social psychology-book
  year: 2008
  ident: ref9
  article-title: Warmth and competence as universal dimensions of social perception: the stereotype content model and the BIAS map
  doi: 10.1016/S0065-2601(07)00002-0
– volume: 66
  start-page: 102900
  year: 2022
  ident: ref46
  article-title: Will artificial intelligence replace human customer service? The impact of communication quality and privacy risks on adoption intention
  publication-title: J. Retail. Consum. Serv.
  doi: 10.1016/j.jretconser.2021.102900
SSID ssj0000402002
Score 2.5022929
Snippet Despite the widespread availability of artificial intelligence (AI) products and services, consumer evaluations and adoption intentions have not met...
SourceID doaj
proquest
crossref
SourceType Open Website
Aggregation Database
Enrichment Source
Index Database
StartPage 1277861
SubjectTerms artificial intelligence
evaluation
human-AI collaboration
outcome expectation
responsibility attribution
usage intention
Title The impact of human-AI collaboration types on consumer evaluation and usage intention: a perspective of responsibility attribution
URI https://www.proquest.com/docview/2895701251
https://doaj.org/article/830f89fd28d948ebafd71e70f649f471
Volume 14
WOSCitedRecordID wos001101677200001&url=https%3A%2F%2Fcvtisr.summon.serialssolutions.com%2F%23%21%2Fsearch%3Fho%3Df%26include.ft.matches%3Dt%26l%3Dnull%26q%3D
hasFullText 1
inHoldings 1
isFullTextHit
isPrint
journalDatabaseRights – providerCode: PRVAON
  databaseName: DOAJ Directory of Open Access Journals
  customDbUrl:
  eissn: 1664-1078
  dateEnd: 99991231
  omitProxy: false
  ssIdentifier: ssj0000402002
  issn: 1664-1078
  databaseCode: DOA
  dateStart: 20100101
  isFulltext: true
  titleUrlDefault: https://www.doaj.org/
  providerName: Directory of Open Access Journals
– providerCode: PRVHPJ
  databaseName: ROAD: Directory of Open Access Scholarly Resources
  customDbUrl:
  eissn: 1664-1078
  dateEnd: 99991231
  omitProxy: false
  ssIdentifier: ssj0000402002
  issn: 1664-1078
  databaseCode: M~E
  dateStart: 20100101
  isFulltext: true
  titleUrlDefault: https://road.issn.org
  providerName: ISSN International Centre
linkProvider Directory of Open Access Journals