Two-stage optimization for machine learning workflow

Bibliographic details
Published in: Information systems (Oxford), Vol. 92, p. 101483
Author: Quemy, Alexandre
Format: Journal Article
Language: English
Published: Oxford: Elsevier Ltd, 01.09.2020
ISSN: 0306-4379, 1873-6076
Online access: Full text
Description
Abstract: Machine learning techniques play a central role in dealing with massive amounts of data and are employed in almost every possible domain. Building a high-quality machine learning model for deployment in production is a challenging task, for both subject matter experts and machine learning practitioners. For broader adoption and scalability of machine learning systems, the construction and configuration of machine learning workflows need greater automation. In the last few years, several techniques have been developed in this direction, known as AutoML. In this paper, we present a two-stage optimization process to build data pipelines and configure machine learning algorithms. First, we study the impact of data pipelines compared to algorithm configuration in order to show the importance of data preprocessing over hyperparameter tuning. The second part presents policies to efficiently allocate search time between data pipeline construction and algorithm configuration. These policies are agnostic to the meta-optimizer. Last, we present a metric to determine whether a data pipeline is specific to or independent of the algorithm, enabling fine-grained pipeline pruning and meta-learning for the cold-start problem.
Highlights:
•The importance of optimizing the data pipeline over hyperparameter tuning is studied.
•The results show data pipelines are often more important than hyperparameter tuning.
•A two-stage optimization process is proposed to search for an ML workflow.
•This process is empirically validated over several time allocation policies.
•Iterative and adaptive policies are more robust than static policies.
•A metric to measure whether a data pipeline is independent of the model is proposed.
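The two-stage idea described in the abstract — first search over data pipeline configurations, then spend the remaining budget tuning the chosen algorithm's hyperparameters — can be sketched as follows. This is a minimal illustration, not the authors' implementation: the pipeline names, the synthetic scoring function, and the `pipeline_share` budget split are all assumptions made for the example; in practice the score would come from cross-validated model performance.

```python
import random

# Candidate data pipelines and a synthetic score contribution for each.
# The pipeline choice deliberately matters more than the hyperparameter,
# mirroring the paper's observation that preprocessing often dominates
# hyperparameter tuning. All names and values here are illustrative.
PIPELINES = ["identity", "scale", "scale+select"]
PIPELINE_GAIN = {"identity": 0.60, "scale": 0.72, "scale+select": 0.80}

def evaluate(pipeline, lr):
    """Toy stand-in for cross-validated accuracy of (pipeline, hyperparameter)."""
    # The hyperparameter adds a smaller bonus, peaked around lr = 0.1.
    return PIPELINE_GAIN[pipeline] + 0.05 * (1 - abs(lr - 0.1) / 0.1)

def two_stage_search(budget, pipeline_share=0.5, seed=0):
    """Stage 1: search over pipelines with a default hyperparameter.
    Stage 2: tune the hyperparameter for the best pipeline found.
    `pipeline_share` is a static time-allocation policy (one of several
    policies one could plug in here)."""
    rng = random.Random(seed)
    stage1 = int(budget * pipeline_share)

    # Stage 1: cycle through candidate pipelines under a fixed hyperparameter.
    best_pipe, best_score = None, float("-inf")
    for i in range(stage1):
        p = PIPELINES[i % len(PIPELINES)]
        s = evaluate(p, lr=0.05)  # default hyperparameter
        if s > best_score:
            best_pipe, best_score = p, s

    # Stage 2: random search over the hyperparameter for the chosen pipeline.
    best_lr = 0.05
    for _ in range(budget - stage1):
        lr = rng.uniform(0.0, 0.2)
        s = evaluate(best_pipe, lr)
        if s > best_score:
            best_lr, best_score = lr, s

    return best_pipe, best_lr, best_score
```

The static 50/50 split used here is only one allocation policy; the abstract notes that iterative and adaptive policies, which redistribute the budget as the search progresses, proved more robust in the paper's experiments.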
DOI:10.1016/j.is.2019.101483