A defeasible reasoning model of inductive concept learning from examples and communication

Bibliographic Details
Published in: Artificial Intelligence, Vol. 193, pp. 129-148
Main Authors: Ontañón, Santiago; Dellunde, Pilar; Godo, Lluís; Plaza, Enric
Format: Journal Article
Language: English
Published: Oxford: Elsevier B.V., 01.12.2012
ISSN: 0004-3702, 1872-7921
Description
Summary: This paper introduces a logical model of inductive generalization, and specifically of the machine learning task of inductive concept learning (ICL). We argue that some inductive processes, like ICL, can be seen as a form of defeasible reasoning. We define a consequence relation characterizing which hypotheses can be induced from given sets of examples, and study its properties, showing that they correspond to a rather well-behaved non-monotonic logic. We also show that, with the addition of a preference relation on inductive theories, we can characterize the inductive bias of ICL algorithms. The second part of the paper shows how this logical characterization of inductive generalization can be integrated with another form of non-monotonic reasoning (argumentation) to define a model of multiagent ICL. This integration allows two or more agents to learn, in a consistent way, both from induction and from the arguments exchanged in their communication. We show that the inductive theories reached by multiagent induction plus argumentation are sound, i.e., they are precisely the same as the inductive theories built by a single agent that has all the data.
DOI:10.1016/j.artint.2012.08.006
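The summary above centers on inductive concept learning (ICL): finding a hypothesis that covers a given set of positive examples of a concept while excluding the negative ones. The sketch below illustrates only that basic task, over hypothetical attribute-value examples, using conjunctive hypotheses and a least-general-generalization heuristic; it is a minimal toy under those assumptions, not the paper's defeasible consequence relation, its preference-based treatment of inductive bias, or its multiagent argumentation model.

```python
# Minimal illustrative ICL sketch (assumed attribute-value representation,
# not the paper's formalism). A hypothesis is a conjunction of
# attribute = value constraints; it is acceptable if it covers every
# positive example and no negative example.

def lgg(examples):
    """Least general generalization: keep only the attribute-value pairs
    shared by all given examples."""
    common = dict(examples[0])
    for ex in examples[1:]:
        common = {a: v for a, v in common.items() if ex.get(a) == v}
    return common

def covers(hypothesis, example):
    """A hypothesis covers an example if the example satisfies every constraint."""
    return all(example.get(a) == v for a, v in hypothesis.items())

def induce(positives, negatives):
    """Return a hypothesis consistent with the examples, or None if the
    least general generalization of the positives also covers a negative."""
    h = lgg(positives)
    if any(covers(h, n) for n in negatives):
        return None  # no consistent conjunctive hypothesis at this generality
    return h

# Hypothetical toy data: learning the concept "bird".
positives = [
    {"legs": 2, "feathers": True, "flies": True},
    {"legs": 2, "feathers": True, "flies": False},
]
negatives = [{"legs": 4, "feathers": False, "flies": False}]

print(induce(positives, negatives))  # {'legs': 2, 'feathers': True}
```

In this toy setting, a hypothesis is inducible exactly when it covers all positives and no negatives; this consistency condition is, informally, the kind of relation between example sets and hypotheses that the consequence relation described in the summary is meant to characterize at the logical level.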