Learning optimal decision trees using constraint programming

Bibliographic Details
Published in: Constraints: An International Journal, Vol. 25, No. 3-4, pp. 226-250
Main Authors: Verhaeghe, Hélène; Nijssen, Siegfried; Pesant, Gilles; Quimper, Claude-Guy; Schaus, Pierre
Format: Journal Article
Language: English
Published: New York: Springer US, 01.12.2020
ISSN: 1383-7133, 1572-9354
Description
Summary: Decision trees are among the most popular classification models in machine learning. Traditionally, they are learned using greedy algorithms. However, such algorithms have several disadvantages: it is difficult to limit the size of the decision trees while maintaining good classification accuracy, and it is hard to impose additional constraints on the models that are learned. For these reasons, there has been recent interest in exact and flexible algorithms for learning decision trees. In this paper, we introduce a new approach to learning decision trees using constraint programming. Compared to earlier approaches, we show that our approach obtains better performance, while still being sufficiently flexible to allow for the inclusion of constraints. Our approach builds on three key building blocks: (1) the use of AND/OR search, (2) the use of caching, and (3) the use of the CoverSize global constraint recently proposed for the problem of itemset mining. This allows our constraint programming approach to deal much more efficiently with the decompositions in the learning problem.
DOI: 10.1007/s10601-020-09312-3
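
The summary above names AND/OR search and caching as two of the approach's building blocks. The following minimal Python sketch is not the authors' constraint-programming model (in particular, it omits the CoverSize global constraint); it only illustrates the general idea under assumed binary features and binary labels: an OR choice over which feature to test at a node, an AND combination of the two branches, and a cache keyed on the set of examples reaching a node so that identical subproblems are solved only once. All function and variable names here are hypothetical.

# Illustrative sketch (not the authors' implementation): exact, depth-bounded
# decision-tree learning by recursive AND/OR search with caching.
# Each OR node chooses a feature to test; each AND node sums the optimal
# errors of its two branches. The cache is keyed on the set of examples
# reaching a node (its "cover"), so identical subproblems reached along
# different branches are solved only once.

def leaf_errors(labels):
    # Misclassifications made by the best single-class leaf on these labels.
    return len(labels) - max(labels.count(0), labels.count(1))

def best_tree_errors(examples, labels, features, depth):
    # Minimal number of misclassifications achievable with a tree of at
    # most `depth` tests (hypothetical API; binary features and labels).
    cache = {}

    def search(cover, d):
        key = (cover, d)
        if key in cache:
            return cache[key]
        errs = leaf_errors([labels[i] for i in cover])
        if d == 0 or errs == 0:
            cache[key] = errs
            return errs
        best = errs
        for f in features:  # OR: choose which feature to test at this node
            left = frozenset(i for i in cover if examples[i][f] == 0)
            right = cover - left
            if not left or not right:
                continue    # this test does not split the cover
            # AND: both branches must be solved; their errors add up
            best = min(best, search(left, d - 1) + search(right, d - 1))
        cache[key] = best
        return best

    return search(frozenset(range(len(examples))), depth)

# Tiny usage example on a toy dataset (hypothetical data).
X = [{0: 0, 1: 1}, {0: 1, 1: 1}, {0: 1, 1: 0}, {0: 0, 1: 0}]
y = [0, 1, 1, 0]
print(best_tree_errors(X, y, features=[0, 1], depth=2))  # -> 0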