Estimation of prediction error by using K-fold cross-validation

Bibliographic Details
Published in: Statistics and Computing, Vol. 21, No. 2, pp. 137–146
Main Author: Fushiki, Tadayoshi
Format: Journal Article
Language: English
Published: Boston: Springer US, 01.04.2011
ISSN: 0960-3174, 1573-1375
Description
Summary: Estimation of prediction accuracy is important when our aim is prediction. The training error is an easy estimate of prediction error, but it has a downward bias. On the other hand, K-fold cross-validation has an upward bias. The upward bias may be negligible in leave-one-out cross-validation, but it sometimes cannot be neglected in 5-fold or 10-fold cross-validation, which are favored from a computational standpoint. Since the training error has a downward bias and K-fold cross-validation has an upward bias, there will be an appropriate estimate in a family that connects the two estimates. In this paper, we investigate two families that connect the training error and K-fold cross-validation.
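The bias structure described in the summary can be illustrated with a small numerical sketch. The following is not the paper's method: the specific model (ridge regression), the data, and the convex-combination family shown here are illustrative assumptions. It merely demonstrates the two estimates being connected, with the training error typically sitting below the K-fold cross-validation estimate.

```python
# Illustrative sketch only -- the interpolating "family" below is a simple
# convex combination chosen for demonstration, not one of the two families
# investigated in the paper.
import numpy as np

rng = np.random.default_rng(0)

def ridge_fit(X, y, lam=1.0):
    # Closed-form ridge regression coefficients.
    d = X.shape[1]
    return np.linalg.solve(X.T @ X + lam * np.eye(d), X.T @ y)

def mse(X, y, beta):
    return float(np.mean((y - X @ beta) ** 2))

def kfold_cv_error(X, y, K=5, lam=1.0):
    # Standard K-fold cross-validation estimate of prediction error.
    n = len(y)
    idx = rng.permutation(n)
    folds = np.array_split(idx, K)
    errs = []
    for k in range(K):
        test = folds[k]
        train = np.concatenate([folds[j] for j in range(K) if j != k])
        beta = ridge_fit(X[train], y[train], lam)
        errs.append(mse(X[test], y[test], beta))
    return float(np.mean(errs))

# Synthetic regression data.
n, d = 60, 10
X = rng.normal(size=(n, d))
beta_true = rng.normal(size=d)
y = X @ beta_true + rng.normal(size=n)

beta_hat = ridge_fit(X, y)
err_train = mse(X, y, beta_hat)      # downward-biased estimate
err_cv = kfold_cv_error(X, y, K=5)   # upward-biased estimate

# One way to connect the two estimates: a convex combination
# (the weight alpha = 0.5 is a hypothetical choice for illustration).
alpha = 0.5
err_family = alpha * err_cv + (1 - alpha) * err_train
print(err_train, err_cv, err_family)
```

By construction, the combined estimate lies between the two endpoints; the paper's contribution concerns how to choose estimates within such families so that the opposing biases offset.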
DOI: 10.1007/s11222-009-9153-8