Estimation of prediction error by using K-fold cross-validation

Published in: Statistics and Computing, Volume 21, Issue 2, pp. 137–146
Main author: Fushiki, Tadayoshi
Format: Journal Article
Language: English
Published: Boston: Springer US, 01.04.2011
ISSN: 0960-3174, 1573-1375
Description
Summary: Estimation of prediction accuracy is important when our aim is prediction. The training error is an easy estimate of prediction error, but it has a downward bias. On the other hand, K-fold cross-validation has an upward bias. The upward bias may be negligible in leave-one-out cross-validation, but it sometimes cannot be neglected in 5-fold or 10-fold cross-validation, which are favored from a computational standpoint. Since the training error has a downward bias and K-fold cross-validation has an upward bias, there will be an appropriate estimate in a family that connects the two estimates. In this paper, we investigate two families that connect the training error and K-fold cross-validation.
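The bias relationship described in the summary can be sketched with a toy example: a mean predictor under squared-error loss, where the training (resubstitution) error sits below the K-fold cross-validation estimate. The `combined_estimate` function below is only an illustrative convex combination of the two estimates; it is not one of the families actually studied in the paper.

```python
import random

def kfold_indices(n, k):
    """Split indices 0..n-1 into k roughly equal folds."""
    idx = list(range(n))
    random.shuffle(idx)
    return [idx[i::k] for i in range(k)]

def train_error(y):
    """Training error of the mean predictor: fit and evaluate on the
    same data -- a downward-biased estimate of prediction error."""
    mu = sum(y) / len(y)
    return sum((v - mu) ** 2 for v in y) / len(y)

def kfold_cv_error(y, k):
    """K-fold CV error of the mean predictor: each fold is predicted by
    the mean of the remaining folds -- an upward-biased estimate."""
    n = len(y)
    sq_err = 0.0
    for fold in kfold_indices(n, k):
        held_out = set(fold)
        train = [y[i] for i in range(n) if i not in held_out]
        mu = sum(train) / len(train)
        sq_err += sum((y[i] - mu) ** 2 for i in fold)
    return sq_err / n

def combined_estimate(y, k, lam):
    """Illustrative convex combination connecting the two estimates
    (assumption for demonstration; not the paper's families)."""
    return (1 - lam) * train_error(y) + lam * kfold_cv_error(y, k)

random.seed(0)
y = [random.gauss(0.0, 1.0) for _ in range(50)]
err_train = train_error(y)
err_cv = kfold_cv_error(y, 5)
```

For the mean predictor this ordering is not an accident: the global mean minimizes the total squared error, while each fold is scored against a mean computed without it, so `err_train <= err_cv` holds for any data set, illustrating the downward and upward biases the paper seeks to interpolate between.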
DOI:10.1007/s11222-009-9153-8