Large deviations in the perceptron model and consequences for active learning

Detailed bibliography
Published in: Machine Learning: Science and Technology, Volume 2, Issue 4, p. 45001
Main authors: Cui, H; Saglietti, L; Zdeborová, L
Format: Journal Article
Language: English
Published: IOP Publishing Ltd, 01.12.2021
ISSN: 2632-2153
Online access: Get full text
Description
Summary: Active learning (AL) is a branch of machine learning that deals with problems where unlabeled data is abundant yet obtaining labels is expensive. The learning algorithm has the possibility of querying a limited number of samples to obtain the corresponding labels, subsequently used for supervised learning. In this work, we consider the task of choosing the subset of samples to be labeled from a fixed finite pool of samples. We assume the pool of samples to be a random matrix and the ground truth labels to be generated by a single-layer teacher random neural network. We employ replica methods to analyze the large deviations for the accuracy achieved after supervised learning on a subset of the original pool. These large deviations then provide optimal achievable performance boundaries for any AL algorithm. We show that the optimal learning performance can be efficiently approached by simple message-passing AL algorithms. We also provide a comparison with the performance of some other popular active learning strategies.
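The setting described in the summary (a fixed random pool of samples, ground-truth labels produced by a random single-layer teacher, and a budget of label queries) can be sketched as a toy simulation. Everything below is an illustrative assumption rather than the paper's method: the pool sizes, the least-squares student, and the margin-based (uncertainty sampling) query rule stand in for the replica analysis and message-passing algorithms the article actually studies.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy pool-based active learning with a random teacher perceptron.
# Sizes are arbitrary illustrative choices.
n_pool, dim, budget = 1000, 20, 50

X = rng.standard_normal((n_pool, dim))   # fixed random pool of samples
w_teacher = rng.standard_normal(dim)     # random single-layer teacher
y = np.sign(X @ w_teacher)               # teacher-generated labels

# Query loop: start from one random labeled sample, then repeatedly
# query the pool sample closest to the current student's decision
# boundary (uncertainty sampling) and refit the student.
labeled = [int(rng.integers(n_pool))]
for _ in range(budget - 1):
    Xl, yl = X[labeled], y[labeled]
    w_student, *_ = np.linalg.lstsq(Xl, yl, rcond=None)
    margins = np.abs(X @ w_student)
    margins[labeled] = np.inf            # never re-query a labeled sample
    labeled.append(int(np.argmin(margins)))

# Final student fit on all queried labels
w_student, *_ = np.linalg.lstsq(X[labeled], y[labeled], rcond=None)

# Generalization accuracy: agreement with the teacher on fresh samples
X_test = rng.standard_normal((5000, dim))
acc = float(np.mean(np.sign(X_test @ w_student) == np.sign(X_test @ w_teacher)))
print(round(acc, 2))
```

With a label budget larger than the input dimension, the student direction aligns well with the teacher, and margin-based querying typically beats random querying at equal budget; the large-deviation analysis in the article gives the information-theoretic ceiling that any such query strategy can approach.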
DOI: 10.1088/2632-2153/abfbbb