Learn to optimize—a brief overview

Detailed bibliography
Published in: National Science Review, Volume 11, Issue 8, p. nwae132
Main authors: Tang, Ke; Yao, Xin
Medium: Journal Article
Language: English
Published: China: Oxford University Press, 01.08.2024
ISSN: 2095-5138, 2053-714X
Description
Summary: Most optimization problems of practical significance are typically solved by highly configurable parameterized algorithms. To achieve the best performance on a problem instance, a trial-and-error configuration process is required, which is very costly and even prohibitive for problems that are already computationally intensive, e.g. optimization problems associated with machine learning tasks. In the past decades, many studies have been conducted to accelerate the tedious configuration process by learning from a set of training instances. This article refers to these studies as learn to optimize and reviews the progress achieved. The article presents an overview of “Learn to Optimize”, a paradigm that leverages a set of training instances to accelerate the tedious configuration process of optimization algorithms.
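To make the summarized paradigm concrete, below is a minimal, self-contained Python sketch of the general idea: configure a parameterized solver once on training instances, then reuse what was learned on new instances instead of tuning from scratch. The toy solver, the single "scale" instance feature, and the nearest-neighbour mapping are illustrative assumptions of this sketch, not the specific methods surveyed in the article.

# Minimal sketch of the "learn to optimize" idea described in the summary:
# instead of re-tuning a parameterized solver on every new instance, tune it
# once on training instances and learn a mapping from instance features to a
# good configuration. All names here (solve, the scale feature, the step-size
# parameter) are illustrative assumptions, not the article's actual methods.
import random

def solve(instance_scale, step_size, iters=200, seed=0):
    """Toy parameterized optimizer: minimize |x| for a problem whose natural
    scale depends on the instance; performance depends on step_size."""
    rng = random.Random(seed)
    x = instance_scale          # start far from the optimum, scale-dependent
    best = abs(x)
    for _ in range(iters):
        cand = x + rng.uniform(-step_size, step_size)
        if abs(cand) < best:
            x, best = cand, abs(cand)
    return best                 # lower is better

# 1) Offline configuration on training instances (the costly trial-and-error
#    step mentioned in the summary, paid once instead of per instance).
train_scales = [0.5, 1.0, 5.0, 20.0, 100.0]
candidate_steps = [0.01, 0.1, 1.0, 10.0, 50.0]
train_data = []                 # (instance feature, best configuration) pairs
for scale in train_scales:
    best_step = min(candidate_steps, key=lambda s: solve(scale, s))
    train_data.append((scale, best_step))

# 2) "Learning": here simply a 1-nearest-neighbour map from the instance
#    feature (its scale) to the configuration that worked best in training.
def predict_config(scale):
    return min(train_data, key=lambda fs: abs(fs[0] - scale))[1]

# 3) On a new instance, skip per-instance tuning and use the predicted config.
new_scale = 40.0
cfg = predict_config(new_scale)
print(f"predicted step_size={cfg}, result={solve(new_scale, cfg):.4f}")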
DOI: 10.1093/nsr/nwae132