Two-stage optimization for machine learning workflow
| Published in: | Information Systems (Oxford), Volume 92, p. 101483 |
|---|---|
| Main author: | |
| Format: | Journal Article |
| Language: | English |
| Published: | Oxford: Elsevier Ltd (Elsevier Science Ltd), 01.09.2020 |
| Subjects: | |
| ISSN: | 0306-4379, 1873-6076 |
| Online access: | Get full text |
| Summary: | Machine learning techniques play a central role in dealing with massive amounts of data and are employed in almost every possible domain. Building a high-quality machine learning model to be deployed in production is a challenging task, for both subject matter experts and machine learning practitioners.
For broader adoption and scalability of machine learning systems, the construction and configuration of machine learning workflows need to become more automated. In the last few years, several techniques, collectively known as AutoML, have been developed in this direction.
In this paper, we present a two-stage optimization process to build data pipelines and configure machine learning algorithms. First, we study the impact of data pipelines compared to algorithm configuration, showing the importance of data preprocessing over hyperparameter tuning. The second part presents policies to efficiently allocate search time between data pipeline construction and algorithm configuration. These policies are agnostic to the meta-optimizer. Last, we present a metric to determine whether a data pipeline is specific to or independent of the algorithm, enabling fine-grained pipeline pruning and meta-learning for the cold-start problem.
•The importance of optimizing the data pipeline over hyperparameter tuning is studied.•The results show data pipelines are often more important than hyperparameter tuning.•A two-stage optimization process is proposed to search for an ML workflow.•This process is empirically validated over several time allocation policies.•Iterative and adaptive policies are more robust than static policies.•A metric to measure whether a data pipeline is independent of the model is proposed. |
|---|---|
| Bibliography: | ObjectType-Article-1 SourceType-Scholarly Journals-1 ObjectType-Feature-2 content type line 14 |
| DOI: | 10.1016/j.is.2019.101483 |
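The two-stage process described in the summary (first search over data pipelines, then over algorithm hyperparameters, with search time split between the stages by an allocation policy) could be sketched roughly as follows. This is a minimal illustrative sketch, not the paper's actual method: the pipeline candidates, hyperparameter grid, scoring stub, and static 50/50 time split are all placeholder assumptions.

```python
import random
import time

# Hypothetical search spaces (placeholders, not taken from the paper).
PIPELINES = [("impute",), ("impute", "scale"), ("impute", "scale", "select")]
HYPERPARAMS = {"max_depth": [3, 5, 10], "n_estimators": [50, 100, 200]}

def evaluate(pipeline, params):
    """Stub standing in for a real train/validate run; deterministic
    per configuration so the sketch is reproducible."""
    rng = random.Random(hash((pipeline, tuple(sorted(params.items())))))
    return len(pipeline) * 0.1 + rng.random() * 0.05

def two_stage_search(budget_s=1.0, pipeline_share=0.5):
    """Stage 1: search data pipelines with default hyperparameters.
    Stage 2: tune hyperparameters on the best pipeline found.
    `pipeline_share` encodes a *static* time-allocation policy; the
    paper also compares iterative and adaptive policies."""
    default = {k: v[0] for k, v in HYPERPARAMS.items()}
    best_pipe, best_params, best_score = None, default, float("-inf")

    # Stage 1: data pipeline construction under its share of the budget.
    deadline = time.monotonic() + budget_s * pipeline_share
    while time.monotonic() < deadline:
        pipe = random.choice(PIPELINES)
        score = evaluate(pipe, default)
        if score > best_score:
            best_pipe, best_score = pipe, score

    # Stage 2: algorithm configuration under the remaining budget.
    deadline = time.monotonic() + budget_s * (1 - pipeline_share)
    while time.monotonic() < deadline:
        params = {k: random.choice(v) for k, v in HYPERPARAMS.items()}
        score = evaluate(best_pipe, params)
        if score > best_score:
            best_params, best_score = params, score

    return best_pipe, best_params, best_score
```

In a real system the random sampling inside each stage would be replaced by the meta-optimizer of choice; the policies in the paper are agnostic to that choice, which is why only the budget split is modeled here.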