Learning from interpretation transition using differentiable logic programming semantics
| Published in: | Machine Learning, Volume 111, Issue 1, pp. 123–145 |
|---|---|
| Main Authors: | , , , |
| Format: | Journal Article |
| Language: | English |
| Published: | New York: Springer US, 01.01.2022 (Springer Nature B.V.) |
| Subjects: | |
| ISSN: | 0885-6125, 1573-0565 |
| Online Access: | Get full text |
| Summary: | The combination of learning and reasoning is an essential and challenging topic in neuro-symbolic research. Differentiable inductive logic programming is a technique for learning a symbolic knowledge representation from complete, mislabeled, or incomplete observed facts using neural networks. In this paper, we propose a novel differentiable inductive logic programming system called differentiable learning from interpretation transition (D-LFIT) for learning logic programs through the proposed embeddings of logic programs, neural networks, optimization algorithms, and an adapted algebraic method for computing logic program semantics. The proposed model has several notable characteristics: a small number of parameters, the ability to generate logic programs in a curriculum-learning setting, and linear time complexity for extracting logic programs from the trained neural networks. The well-known bottom clause propositionalization algorithm is incorporated when the proposed system learns from relational datasets. We compare our model with NN-LFIT, which extracts propositional logic rules from trained fully connected networks; the highly accurate rule learner RIPPER; the purely symbolic LFIT system LF1T; and CILP++, which integrates neural networks with a propositionalization method to handle first-order logic knowledge. The experimental results show that D-LFIT achieves accuracy comparable to the baselines on complete, incomplete, and mislabeled data, and that it not only learns symbolic logic programs quickly and precisely but also performs robustly when processing mislabeled and incomplete datasets. |
|---|---|
| DOI: | 10.1007/s10994-021-06058-8 |