Stop ordering machine learning algorithms by their explainability! A user-centered investigation of performance and explainability

Bibliographic Details
Published in: International Journal of Information Management, Vol. 69, p. 102538
Main Authors: Herm, Lukas-Valentin; Heinrich, Kai; Wanner, Jonas; Janiesch, Christian
Format: Journal Article
Language: English
Published: Elsevier Ltd, 01.04.2023
Subjects: Explainability; Machine learning; Tradeoff; XAI
ISSN: 0268-4012 (print), 1873-4707 (electronic)
Online Access: https://dx.doi.org/10.1016/j.ijinfomgt.2022.102538
Abstract: Machine learning algorithms enable advanced decision making in contemporary intelligent systems. Research indicates that there is a tradeoff between their model performance and explainability. Machine learning models with higher performance are often based on more complex algorithms and therefore lack explainability, and vice versa. However, there is little to no empirical evidence of this tradeoff from an end user perspective. We aim to provide empirical evidence by conducting two user experiments. Using two distinct datasets, we first measure the tradeoff for five common classes of machine learning algorithms. Second, we address the problem of end user perceptions of explainable artificial intelligence augmentations aimed at increasing the understanding of the decision logic of high-performing complex models. Our results diverge from the widespread assumption of a tradeoff curve and indicate that the tradeoff between model performance and explainability is much less gradual in the end user's perception. This stands in stark contrast to the assumed inherent model interpretability. Further, we found the tradeoff to be situational, for example due to data complexity. Results of our second experiment show that while explainable artificial intelligence augmentations can be used to increase explainability, the type of explanation plays an essential role in end user perception.
Highlights:
• Theoretical algorithm interpretability does not entail perceived explainability.
• The tradeoff can be characterized by a group structure rather than a curve.
• Tree-based machine learning algorithms achieve the best explainability results.
• While the performance distance increases for complex datasets, the explainability distance decreases.
• Local XAI augmentations requiring low cognitive effort fare better with end users.
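The first experiment measures the performance side of the tradeoff for five common classes of machine learning algorithms on two datasets. The sketch below illustrates what such a benchmark can look like, assuming scikit-learn, a generic tabular dataset, and one placeholder representative per algorithm class; it is not the paper's actual datasets, models, or evaluation protocol.

```python
# Illustrative sketch only: compare the test accuracy of five common classes of
# machine learning algorithms, loosely mirroring the kind of performance
# measurement described in the abstract. Dataset, model choices, and
# hyperparameters are placeholder assumptions, not the study's setup.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier
from sklearn.svm import SVC
from sklearn.tree import DecisionTreeClassifier

X, y = load_breast_cancer(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, random_state=42, stratify=y
)

# One representative per algorithm class:
# linear model, decision tree, kernel method, ensemble, artificial neural network.
models = {
    "logistic regression": LogisticRegression(max_iter=5000),
    "decision tree": DecisionTreeClassifier(random_state=42),
    "support vector machine": SVC(kernel="rbf"),
    "random forest": RandomForestClassifier(n_estimators=200, random_state=42),
    "neural network": MLPClassifier(hidden_layer_sizes=(64, 32), max_iter=2000, random_state=42),
}

for name, model in models.items():
    model.fit(X_train, y_train)
    accuracy = accuracy_score(y_test, model.predict(X_test))
    print(f"{name:24s} test accuracy = {accuracy:.3f}")
```

In the study, each model class is additionally rated by end users for perceived explainability, which is the part a pure performance benchmark like this cannot capture.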
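The second experiment concerns explainable artificial intelligence (XAI) augmentations that expose the decision logic of high-performing complex models, with local, low-effort explanations faring best according to the highlights. As a hedged illustration of one widely used local post-hoc technique of this kind (a LIME-style explanation), the sketch below assumes the third-party `lime` package and a placeholder random forest; it is not the augmentation design used in the experiment.

```python
# Illustrative sketch only: a LIME-style local explanation for a single prediction
# of an opaque model, i.e. a post-hoc XAI augmentation of the kind evaluated with
# end users. Assumes the third-party `lime` package; model and data are placeholders.
from lime.lime_tabular import LimeTabularExplainer
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

data = load_breast_cancer()
X_train, X_test, y_train, y_test = train_test_split(
    data.data, data.target, test_size=0.3, random_state=42, stratify=data.target
)

# A high-performing but not inherently interpretable model.
model = RandomForestClassifier(n_estimators=200, random_state=42).fit(X_train, y_train)

explainer = LimeTabularExplainer(
    training_data=X_train,
    feature_names=list(data.feature_names),
    class_names=list(data.target_names),
    mode="classification",
)

# Explain one test instance: which features pushed this prediction, and how strongly.
explanation = explainer.explain_instance(X_test[0], model.predict_proba, num_features=5)
for feature, weight in explanation.as_list():
    print(f"{feature:35s} weight = {weight:+.3f}")
```

Whether such an explanation actually improves perceived explainability is what the user experiment tests; the abstract's finding is that the type of explanation, not its mere presence, drives end user perception.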
ArticleNumber 102538
Authors, affiliations, and contact:
– Lukas-Valentin Herm, Julius-Maximilians-Universität Würzburg, Würzburg, Germany (lukas-valentin.herm@uni-wuerzburg.de)
– Kai Heinrich, Otto-von-Guericke-Universität Magdeburg, Magdeburg, Germany (kai.heinrich@ovgu.de)
– Jonas Wanner, Julius-Maximilians-Universität Würzburg, Würzburg, Germany (jonas.wanner@uni-wuerzburg.de)
– Christian Janiesch, TU Dortmund University, Otto-Hahn-Str. 14, 44227 Dortmund, Germany (christian.janiesch@tu-dortmund.de)
ContentType Journal Article
Copyright 2022 The Authors
DOI 10.1016/j.ijinfomgt.2022.102538
Discipline Social Sciences (General)
EISSN 1873-4707
ISICitedReferencesCount 80
ISSN 0268-4012
IsDoiOpenAccess true
IsOpenAccess true
IsPeerReviewed true
IsScholarly true
Keywords XAI
Tradeoff
Performance
Explainability
Machine learning
Language English
License This is an open access article under the CC BY-NC-ND license.
OpenAccessLink https://dx.doi.org/10.1016/j.ijinfomgt.2022.102538
PublicationDate April 2023
PublicationTitle International journal of information management
PublicationYear 2023
Publisher Elsevier Ltd
StartPage 102538
SubjectTerms Explainability
Machine learning
Tradeoff
XAI
Title Stop ordering machine learning algorithms by their explainability! A user-centered investigation of performance and explainability
URI https://dx.doi.org/10.1016/j.ijinfomgt.2022.102538
Volume 69
WOSCitedRecordID wos000953401900001