An Exploration of Explainable Machine Learning Using Semantic Web Technology

Bibliographic Details
Published in: 2022 IEEE 16th International Conference on Semantic Computing (ICSC), pp. 143-146
Main Authors: Procko, Tyler, Elvira, Timothy, Ochoa, Omar, Del Rio, Nicholas
Format: Conference Proceeding
Language: English
Published: IEEE 01.01.2022
Description
Summary: The behavior of a Machine Learning (ML) algorithm is generally treated as a black box, i.e., it cannot be opened and understood. This paper reports on an effort to explain ML algorithms using semantic background knowledge. A preliminary paper served as the project seed; its experiment in ML explanation with the DL-Learner tool was recreated and semi-automated. DL-Learner is a framework for supervised ML over background knowledge: it induces class expressions that hold true for a set of positive examples. The work presented in this paper is a novel, semi-automated framework for testing the use of DL-Learner in ML explanation. A scene-classifier pipeline was created to obtain test data; for the chosen dataset input to the ML, 32 trials were conducted and explanations produced. The paper also reports on DL-Learner as a tool and the lessons learned from its use. DL-Learner, though slow, may prove to be a novel means of ML explanation.
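The core idea the summary describes, inducing a class expression that covers all positive examples and excludes the negatives, can be sketched in miniature. This is a toy illustration of the induction principle only, not DL-Learner's actual API; all individual and class names below are hypothetical, and real class-expression learners search a much richer hypothesis space than plain conjunctions of named classes.

```python
def induce_concept(background, positives, negatives):
    """Return the conjunction (set) of class names shared by every positive
    example, provided no negative example also satisfies all of them."""
    # Intersect the class memberships of the positive examples.
    candidate = set.intersection(*(background[p] for p in positives))
    # Reject the hypothesis if any negative example satisfies it too.
    for n in negatives:
        if candidate <= background[n]:
            return None
    return candidate

# Hypothetical background knowledge: classes asserted for each individual.
background = {
    "img1": {"Scene", "Outdoor", "Beach"},
    "img2": {"Scene", "Outdoor", "Forest"},
    "img3": {"Scene", "Indoor", "Kitchen"},
}
print(sorted(induce_concept(background, ["img1", "img2"], ["img3"])))
# prints ['Outdoor', 'Scene']
```

An explanation in this spirit reads the induced expression back to the user: the positives are distinguished because they are all `Outdoor` scenes, which is the kind of human-readable account of classifier behavior the paper investigates.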
DOI: 10.1109/ICSC52841.2022.00029