Induction of Non-monotonic Logic Programs To Explain Statistical Learning Models


Detailed Bibliography
Published in: Electronic Proceedings in Theoretical Computer Science, Volume 306; Issue: Proc. ICLP 2019; pp. 379–388
Main Author: Shakerin, Farhad
Medium: Journal Article
Language: English
Published: Open Publishing Association, 19 September 2019
ISSN: 2075-2180
Online Access: Get full text
Description
Summary: We present a fast and scalable algorithm to induce non-monotonic logic programs from statistical learning models. We reduce the problem of searching for the best clauses to instances of the High-Utility Itemset Mining (HUIM) problem. In the HUIM problem, feature values and their importance scores are treated as transactions and utilities, respectively. We make use of TreeExplainer, a fast and scalable implementation of the explainable-AI tool SHAP, to extract locally important features and their weights from ensemble tree models. Our experiments on standard UCI benchmarks suggest a significant improvement in classification evaluation metrics and in the running time of the training algorithm compared to ALEPH, a state-of-the-art Inductive Logic Programming (ILP) system.
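The abstract's HUIM framing can be illustrated with a minimal sketch. This is not the paper's implementation: the feature-value items and their utility weights below are invented for illustration (in the paper, the weights would come from SHAP's TreeExplainer applied to an ensemble tree model). Each example becomes a "transaction" of feature-value items, each item carries a local-importance utility, and candidate clause bodies are itemsets scored by their total utility over the transactions that contain them.

```python
# Sketch of the HUIM framing (illustrative, not the paper's algorithm):
# each training example is a "transaction" mapping feature-value items
# to hypothetical local-importance utilities (e.g. SHAP values).
transactions = [
    {("outlook", "sunny"): 0.40, ("humidity", "high"): 0.35},
    {("outlook", "sunny"): 0.30, ("windy", "false"): 0.10},
    {("outlook", "rain"): 0.25, ("humidity", "high"): 0.20},
]

def utility(itemset, transactions):
    """Total utility of a candidate itemset (clause body): sum the
    utilities of its items over every transaction containing all of them."""
    total = 0.0
    for t in transactions:
        if all(item in t for item in itemset):
            total += sum(t[item] for item in itemset)
    return total

# Score two candidate clause bodies; higher utility = better candidate.
a = utility({("outlook", "sunny")}, transactions)                         # 0.70
b = utility({("outlook", "sunny"), ("humidity", "high")}, transactions)   # 0.75
```

A high-utility itemset miner would enumerate such itemsets efficiently rather than scoring them one by one; this sketch only shows how the utility of a candidate is defined.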
DOI: 10.4204/EPTCS.306.51