Learning from non-irreducible Markov chains

Detailed bibliography
Published in: Journal of Mathematical Analysis and Applications, Volume 523, Issue 2, p. 127049
Main authors: Sandrić, Nikola; Šebek, Stjepan
Format: Journal Article
Language: English
Published: Elsevier Inc., July 15, 2023
ISSN: 0022-247X, 1096-0813
Description
Summary: Most of the existing literature on supervised machine learning problems focuses on the case when the training data set is drawn from an i.i.d. sample. However, many practical problems are characterized by temporal dependence and strong correlation between the marginals of the data-generating process, suggesting that the i.i.d. assumption is not always justified. This problem has already been considered in the context of Markov chains satisfying the Doeblin condition. This condition, among other things, implies that the chain is not singular in its behavior, i.e. it is irreducible. In this article, we focus on the case when the training data set is drawn from a not necessarily irreducible Markov chain. Under the assumption that the chain is uniformly ergodic with respect to the L1-Wasserstein distance, and under certain regularity assumptions on the hypothesis class and the state space of the chain, we first obtain a uniform convergence result for the corresponding sample error, and then conclude learnability of the approximate sample error minimization algorithm and find its generalization bounds. Finally, a relative uniform convergence result for the sample error is also discussed.
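
To make the setting of the abstract concrete, the following is a minimal sketch of the standard objects involved; the notation (W_1, P, pi, ell, H, R_n) is illustrative and not taken from the paper itself. For a Markov chain (X_n)_{n >= 0} on a metric state space (X, d) with transition kernel P and invariant measure pi:

% Illustrative notation only; the paper's exact formulations may differ.
% L^1-Wasserstein distance between probability measures mu and nu on (X, d),
% where C(mu, nu) denotes the set of couplings of mu and nu:
\[
  W_1(\mu, \nu) \;=\; \inf_{\Pi \in \mathcal{C}(\mu, \nu)} \int_{\mathcal{X} \times \mathcal{X}} d(x, y)\, \Pi(\mathrm{d}x, \mathrm{d}y)
\]
% Uniform ergodicity with respect to W_1: there exist C > 0 and rho in (0, 1)
% such that
\[
  \sup_{x \in \mathcal{X}} W_1\bigl(P^n(x, \cdot), \pi\bigr) \;\le\; C \rho^n, \qquad n \ge 1.
\]
% Sample (empirical) error of a hypothesis h in H for a loss ell, and the
% uniform convergence statement the abstract refers to:
\[
  \widehat{R}_n(h) \;=\; \frac{1}{n} \sum_{i=1}^{n} \ell(h, X_i), \qquad
  \sup_{h \in \mathcal{H}} \Bigl| \widehat{R}_n(h) - \mathbb{E}_{X \sim \pi}\bigl[\ell(h, X)\bigr] \Bigr| \longrightarrow 0 \quad (n \to \infty).
\]

The contrast with the earlier Doeblin-condition literature is that Doeblin's condition yields a contraction of this kind in total variation and forces irreducibility, whereas the Wasserstein formulation above can hold for chains that are not irreducible, which is the regime the article targets.
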
DOI: 10.1016/j.jmaa.2023.127049