xxAI - Beyond Explainable AI International Workshop, Held in Conjunction with ICML 2020, July 18, 2020, Vienna, Austria, Revised and Extended Papers

Main authors: Holzinger, Andreas; xxAI - beyond explainable AI (Workshop) (2020 : Vienna, Austria); International Conference on Machine Learning
Format: E-book; Book
Language: English
Published: Cham : Springer Nature, 2022
Springer
Springer International Publishing AG
Edition: 1
Series: Lecture Notes in Computer Science; Lecture Notes in Artificial Intelligence
Subjects: Artificial intelligence -- Congresses; Machine learning -- Congresses; Human-computer interaction -- Congresses
ISBN:9783031040832, 303104083X, 9783031040825, 3031040821
Online access: Get full text
Abstract This is an open access book. Statistical machine learning (ML) has triggered a renaissance of artificial intelligence (AI). While the most successful ML models, including Deep Neural Networks (DNN), have developed better predictivity, they have become increasingly complex, at the expense of human interpretability (correlation vs. causality). The field of explainable AI (xAI) has emerged with the goal of creating tools and models that are both predictive and interpretable and understandable for humans. Explainable AI is receiving huge interest in the machine learning and AI research communities, across academia, industry, and government, and there is now an excellent opportunity to push towards successful explainable AI applications. This volume will help the research community to accelerate this process, to promote a more systematic use of explainable AI to improve models in diverse applications, and ultimately to better understand how current explainable AI methods need to be improved and what kind of theory of explainable AI is needed. After overviews of current methods and challenges, the editors include chapters that describe new developments in explainable AI. The contributions are from leading researchers in the field, drawn from both academia and industry, and many of the chapters take a clear interdisciplinary approach to problem-solving. The concepts discussed include explainability, causability, and AI interfaces with humans, and the applications include image processing, natural language, law, fairness, and climate science.
Author Holzinger, Andreas
International Conference on Machine Learning
xxAI - beyond explainable AI (Workshop) (2020 : Vienna, Austria)
BackLink https://cir.nii.ac.jp/crid/1130858596794727552 (View record in CiNii)
ContentType eBook
Book
DEWEY 006
DOI 10.1007/978-3-031-04083-2
DatabaseName Casalini Torrossa eBooks Institutional Catalogue
CiNii Complete
DOAB: Directory of Open Access Books
OAPEN
Discipline Computer Science
EISBN 9783031040832
303104083X
Edition 1
Editor Holzinger, Andreas
Goebel, Randy
Fong, Ruth
Moon, Taesup
Müller, Klaus-Robert
Samek, Wojciech
ExternalDocumentID 9783031040832
EBC6954332
oai_library_oapen_org_20_500_12657_54443
81682
BC15438648
5421570
ISBN 9783031040832
303104083X
9783031040825
3031040821
IsDoiOpenAccess true
IsOpenAccess true
IsPeerReviewed false
IsScholarly false
LCCallNum_Ident Q
Language English
Notes Includes bibliographical references and index
OCLC 1311285955
OpenAccessLink https://library.oapen.org/handle/20.500.12657/54443
PQID EBC6954332
PageCount 397
PublicationCentury 2000
PublicationDate 2022
c2022
2022-04-16
PublicationDateYYYYMMDD 2022-01-01
2022-04-16
PublicationDecade 2020
PublicationPlace Cham
PublicationSeriesTitle Lecture Notes in Computer Science; Lecture Notes in Artificial Intelligence
PublicationYear 2022
Publisher Springer Nature
Springer
Springer International Publishing AG
SubjectTerms Applications
Artificial intelligence
Artificial intelligence -- Congresses
Computer Science
Human-computer interaction
Human-computer interaction -- Congresses
Informatics
Machine learning
Machine learning -- Congresses
Special computer methods
Subtitle International Workshop, Held in Conjunction with ICML 2020, July 18, 2020, Vienna, Austria, Revised and Extended Papers
TableOfContents Intro -- Preface -- Organization -- Contents -- Editorial -- xxAI - Beyond Explainable Artificial Intelligence -- 1 Introduction and Motivation for Explainable AI -- 2 Explainable AI: Past and Present -- 3 Book Structure -- References -- Current Methods and Challenges -- Explainable AI Methods - A Brief Overview -- 1 Introduction -- 2 Explainable AI Methods - Overview -- 2.1 LIME (Local Interpretable Model Agnostic Explanations) -- 2.2 Anchors -- 2.3 GraphLIME -- 2.4 Method: LRP (Layer-wise Relevance Propagation) -- 2.5 Deep Taylor Decomposition (DTD) -- 2.6 Prediction Difference Analysis (PDA) -- 2.7 TCAV (Testing with Concept Activation Vectors) -- 2.8 XGNN (Explainable Graph Neural Networks) -- 2.9 SHAP (Shapley Values) -- 2.10 Asymmetric Shapley Values (ASV) -- 2.11 Break-Down -- 2.12 Shapley Flow -- 2.13 Textual Explanations of Visual Models -- 2.14 Integrated Gradients -- 2.15 Causal Models -- 2.16 Meaningful Perturbations -- 2.17 EXplainable Neural-Symbolic Learning (X-NeSyL) -- 3 Conclusion and Future Outlook -- References -- General Pitfalls of Model-Agnostic Interpretation Methods for Machine Learning Models -- 1 Introduction -- 2 Assuming One-Fits-All Interpretability -- 3 Bad Model Generalization -- 4 Unnecessary Use of Complex Models -- 5 Ignoring Feature Dependence -- 5.1 Interpretation with Extrapolation -- 5.2 Confusing Linear Correlation with General Dependence -- 5.3 Misunderstanding Conditional Interpretation -- 6 Misleading Interpretations Due to Feature Interactions -- 6.1 Misleading Feature Effects Due to Aggregation -- 6.2 Failing to Separate Main from Interaction Effects -- 7 Ignoring Model and Approximation Uncertainty -- 8 Ignoring the Rashomon Effect -- 9 Failure to Scale to High-Dimensional Settings -- 9.1 Human-Intelligibility of High-Dimensional IML Output -- 9.2 Computational Effort
9.3 Ignoring Multiple Comparison Problem -- 10 Unjustified Causal Interpretation -- 11 Discussion -- References -- CLEVR-X: A Visual Reasoning Dataset for Natural Language Explanations -- 1 Introduction -- 2 Related Work -- 3 The CLEVR-X Dataset -- 3.1 The CLEVR Dataset -- 3.2 Dataset Generation -- 3.3 Dataset Analysis -- 3.4 User Study on Explanation Completeness and Relevance -- 4 Experiments -- 4.1 Experimental Setup -- 4.2 Evaluating Explanations Generated by State-of-the-Art Methods -- 4.3 Analyzing Results on CLEVR-X by Question and Answer Types -- 4.4 Influence of Using Different Numbers of Ground-Truth Explanations -- 4.5 Qualitative Explanation Generation Results -- 5 Conclusion -- References -- New Developments in Explainable AI -- A Rate-Distortion Framework for Explaining Black-Box Model Decisions -- 1 Introduction -- 2 Related Works -- 3 Rate-Distortion Explanation Framework -- 3.1 General Formulation -- 3.2 Implementation -- 4 Experiments -- 4.1 Images -- 4.2 Audio -- 4.3 Radio Maps -- 5 Conclusion -- References -- Explaining the Predictions of Unsupervised Learning Models -- 1 Introduction -- 2 A Brief Review of Explainable AI -- 2.1 Approaches to Attribution -- 2.2 Neuralization-Propagation -- 3 Kernel Density Estimation -- 3.1 Explaining Outlierness -- 3.2 Explaining Inlierness: Direct Approach -- 3.3 Explaining Inlierness: Random Features Approach -- 4 K-Means Clustering -- 4.1 Explaining Cluster Assignments -- 5 Experiments -- 5.1 Wholesale Customer Analysis -- 5.2 Image Analysis -- 6 Conclusion and Outlook -- A Attribution on CNN Activations -- A.1 Attributing Outlierness -- A.2 Attributing Inlierness -- A.3 Attributing Cluster Membership -- References -- Towards Causal Algorithmic Recourse -- 1 Introduction -- 1.1 Motivating Examples -- 1.2 Summary of Contributions and Structure of This Chapter -- 2 Preliminaries
2.1 XAI: Counterfactual Explanations and Algorithmic Recourse -- 2.2 Causality: Structural Causal Models, Interventions, and Counterfactuals -- 3 Causal Recourse Formulation -- 3.1 Limitations of CFE-Based Recourse -- 3.2 Recourse Through Minimal Interventions -- 3.3 Negative Result: No Recourse Guarantees for Unknown Structural Equations -- 4 Recourse Under Imperfect Causal Knowledge -- 4.1 Probabilistic Individualised Recourse -- 4.2 Probabilistic Subpopulation-Based Recourse -- 4.3 Solving the Probabilistic Recourse Optimization Problem -- 5 Experiments -- 5.1 Compared Methods -- 5.2 Metrics -- 5.3 Synthetic 3-Variable SCMs Under Different Assumptions -- 5.4 Semi-synthetic 7-Variable SCM for Loan-Approval -- 6 Discussion -- 7 Conclusion -- References -- Interpreting Generative Adversarial Networks for Interactive Image Generation -- 1 Introduction -- 2 Supervised Approach -- 3 Unsupervised Approach -- 4 Embedding-Guided Approach -- 5 Concluding Remarks -- References -- XAI and Strategy Extraction via Reward Redistribution -- 1 Introduction -- 2 Background -- 2.1 Explainability Methods -- 2.2 Reinforcement Learning -- 2.3 Credit Assignment in Reinforcement Learning -- 2.4 Methods for Credit Assignment -- 2.5 Explainability Methods for Credit Assignment -- 2.6 Credit Assignment via Reward Redistribution -- 3 Strategy Extraction via Reward Redistribution -- 3.1 Strategy Extraction with Profile Models -- 3.2 Explainable Agent Behavior via Strategy Extraction -- 4 Experiments -- 4.1 Gridworld -- 4.2 Minecraft -- 5 Limitations -- 6 Conclusion -- References -- Interpretable, Verifiable, and Robust Reinforcement Learning via Program Synthesis -- 1 Introduction -- 2 Background on Reinforcement Learning -- 3 Programmatic Policies -- 3.1 Traditional Interpretable Models -- 3.2 State Machine Policies -- 3.3 List Processing Programs
3.4 Neurosymbolic Policies -- 4 Synthesizing Programmatic Policies -- 4.1 Imitation Learning -- 4.2 Q-Guided Imitation Learning -- 4.3 Updating the DNN Policy -- 4.4 Program Synthesis for Supervised Learning -- 5 Case Studies -- 5.1 Interpretability -- 5.2 Verification -- 5.3 Robustness -- 6 Conclusions and Future Work -- References -- Interpreting and Improving Deep-Learning Models with Reality Checks -- 1 Interpretability: For What and For Whom? -- 2 Computing Interpretations for Feature Interactions and Transformations -- 2.1 Contextual Decomposition (CD) Importance Scores for General DNNs -- 2.2 Agglomerative Contextual Decomposition (ACD) -- 2.3 Transformation Importance with Applications to Cosmology (TRIM) -- 3 Using Attributions to Improve Models -- 3.1 Penalizing Explanations to Align Neural Networks with Prior Knowledge (CDEP) -- 3.2 Distilling Adaptive Wavelets from Neural Networks with Interpretations -- 4 Real-Data Problems Showcasing Interpretations -- 4.1 Molecular Partner Prediction -- 4.2 Cosmological Parameter Prediction -- 4.3 Improving Skin Cancer Classification via CDEP -- 5 Discussion -- 5.1 Building/Distilling Accurate and Interpretable Models -- 5.2 Making Interpretations Useful -- References -- Beyond the Visual Analysis of Deep Model Saliency -- 1 Introduction -- 2 Saliency-Based XAI in Vision -- 2.1 White-Box Models -- 2.2 Black-Box Models -- 3 XAI for Improved Models: Excitation Dropout -- 4 XAI for Improved Models: Domain Generalization -- 5 XAI for Improved Models: Guided Zoom -- 6 Conclusion -- References -- ECQx: Explainability-Driven Quantization for Low-Bit and Sparse DNNs -- 1 Introduction -- 2 Related Work -- 3 Neural Network Quantization -- 3.1 Entropy-Constrained Quantization -- 4 Explainability-Driven Quantization -- 4.1 Layer-Wise Relevance Propagation
4.2 eXplainability-Driven Entropy-Constrained Quantization -- 5 Experiments -- 5.1 Experimental Setup -- 5.2 ECQx Results -- 6 Conclusion -- References -- A Whale's Tail - Finding the Right Whale in an Uncertain World -- 1 Introduction -- 2 Related Work -- 3 Humpback Whale Data -- 3.1 Image Data -- 3.2 Expert Annotations -- 4 Methods -- 4.1 Landmark-Based Identification Framework -- 4.2 Uncertainty and Sensitivity Analysis -- 5 Experiments and Results -- 5.1 Experimental Setup -- 5.2 Uncertainty and Sensitivity Analysis of the Landmarks -- 5.3 Heatmapping Results and Comparison with Whale Expert Knowledge -- 5.4 Spatial Uncertainty of Individual Landmarks -- 6 Conclusion and Outlook -- References -- Explainable Artificial Intelligence in Meteorology and Climate Science: Model Fine-Tuning, Calibrating Trust and Learning New Science -- 1 Introduction -- 2 XAI Applications -- 2.1 XAI in Remote Sensing and Weather Forecasting -- 2.2 XAI in Climate Prediction -- 2.3 XAI to Extract Forced Climate Change Signals and Anthropogenic Footprint -- 3 Development of Attribution Benchmarks for Geosciences -- 3.1 Synthetic Framework -- 3.2 Assessment of XAI Methods -- 4 Conclusions -- References -- An Interdisciplinary Approach to Explainable AI -- Varieties of AI Explanations Under the Law. From the GDPR to the AIA, and Beyond -- 1 Introduction -- 1.1 Functional Varieties of AI Explanations -- 1.2 Technical Varieties of AI Explanations -- 1.3 Roadmap of the Paper -- 2 Explainable AI Under Current Law -- 2.1 The GDPR: Rights-Enabling Transparency -- 2.2 Contract and Tort Law: Technical and Protective Transparency -- 2.3 Banking Law: More Technical and Protective Transparency -- 3 Regulatory Proposals at the EU Level: The AIA -- 3.1 AI with Limited Risk: Decision-Enabling Transparency (Art. 52 AIA)? -- 3.2 AI with High Risk: Encompassing Transparency (Art. 13 AIA)?
3.3 Limitations
Title xxAI - Beyond Explainable AI
URI http://digital.casalini.it/9783031040832
https://cir.nii.ac.jp/crid/1130858596794727552
https://directory.doabooks.org/handle/20.500.12854/81682
https://library.oapen.org/handle/20.500.12657/54443
https://ebookcentral.proquest.com/lib/[SITE_ID]/detail.action?docID=6954332
https://www.vlebooks.com/vleweb/product/openreader?id=none&isbn=9783031040832
Volume 13200
linkProvider Open Access Publishing in European Networks