Semantics of the Black-Box: Can Knowledge Graphs Help Make Deep Learning Systems More Interpretable and Explainable?

Detailed bibliography
Published in: IEEE Internet Computing, Volume 25, Issue 1, pp. 51-59
Main authors: Gaur, Manas; Faldu, Keyur; Sheth, Amit
Format: Journal Article
Language: English
Published: IEEE, 01.01.2021
ISSN: 1089-7801, 1941-0131
Description
Summary: The recent series of innovations in deep learning (DL) has shown enormous potential to impact individuals and society, both positively and negatively. DL models utilizing massive computing power and enormous datasets have significantly outperformed prior historical benchmarks on increasingly difficult, well-defined research tasks across technology domains such as computer vision, natural language processing, and human-computer interaction. However, DL's black-box nature and over-reliance on massive amounts of data condensed into labels and dense representations pose challenges for interpretability and explainability. Furthermore, DL models have not proven their ability to effectively utilize relevant domain knowledge critical to human understanding. This aspect was missing in early data-focused approaches and necessitated knowledge-infused learning (K-iL) to incorporate computational knowledge. This article demonstrates how knowledge, provided as a knowledge graph, is incorporated into DL using K-iL. Through examples from natural language processing applications in healthcare and education, we discuss the utility of K-iL towards interpretability and explainability.
DOI: 10.1109/MIC.2020.3031769