Sets of approximating functions with finite Vapnik–Chervonenkis dimension for nearest-neighbors algorithms

Bibliographic Details
Published in: Pattern Recognition Letters, Vol. 32, No. 14, pp. 1882–1893
Main Authors: Klęsk, P.; Korzeń, M.
Format: Journal Article
Language: English
Published: Amsterdam: Elsevier B.V., 15.10.2011
ISSN: 0167-8655, 1872-7344
Description
Summary:
► Reformulation of the k-NN algorithm as an α-NN* algorithm, where α is a fraction.
► Construction of sets of functions for α-NN* with finite capacity.
► Identification of the degrees of freedom of these sets.
► Theorems on dichotomies and the VC dimension of the proposed sets.
According to a misconception sometimes met in the literature, for nearest-neighbors algorithms there is no fixed hypothesis class of bounded Vapnik–Chervonenkis (VC) dimension. The paper presents a simple reformulation (not a modification) of the nearest-neighbors algorithm in which a fraction α ∈ (0, 1) of nearest neighbors is used instead of a natural number k. Owing to this reformulation, one can construct sets of approximating functions which we prove to have finite VC dimension; in a special but practical case this dimension equals ⌊2/α⌋. It is then also possible to form a sequence of sets of functions with increasing VC dimension and to perform complexity selection via cross-validation, or in a manner similar to the structural risk minimization framework. Results of such experiments are also presented.
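The α-NN idea and the cross-validation-based complexity selection described in the abstract can be illustrated with a short sketch. The snippet below is not the authors' code: the helper names (alpha_nn_predict, select_alpha_by_cv), the Euclidean metric, the ⌊α·n⌋ rounding of the neighbor count, and the plain k-fold procedure are assumptions made for illustration; the paper's exact conventions (classification vs. regression, tie-breaking, weighting) may differ.

```python
import numpy as np

def alpha_nn_predict(X_train, y_train, X_test, alpha):
    # Classify each test point by majority vote among the fraction alpha
    # of training points nearest to it. Labels are assumed to be
    # non-negative integers; rounding via floor is an assumption.
    n = X_train.shape[0]
    k = max(1, int(np.floor(alpha * n)))
    predictions = []
    for x in X_test:
        # Euclidean distances from x to all training points.
        dists = np.linalg.norm(X_train - x, axis=1)
        # Majority vote over the labels of the k nearest neighbors.
        nearest = np.argsort(dists)[:k]
        votes = np.bincount(y_train[nearest])
        predictions.append(int(np.argmax(votes)))
    return np.array(predictions)

def select_alpha_by_cv(X, y, alphas, n_folds=5, seed=0):
    # Plain k-fold cross-validation over a grid of alpha values
    # (hypothetical complexity-selection loop, not the paper's protocol).
    rng = np.random.default_rng(seed)
    idx = rng.permutation(len(X))
    folds = np.array_split(idx, n_folds)
    best_alpha, best_acc = None, -1.0
    for alpha in alphas:
        accs = []
        for f in range(n_folds):
            test_idx = folds[f]
            train_idx = np.concatenate(
                [folds[g] for g in range(n_folds) if g != f])
            pred = alpha_nn_predict(X[train_idx], y[train_idx],
                                    X[test_idx], alpha)
            accs.append(np.mean(pred == y[test_idx]))
        if np.mean(accs) > best_acc:
            best_alpha, best_acc = alpha, float(np.mean(accs))
    return best_alpha, best_acc
```

In this view, decreasing α enlarges the implied hypothesis set (whose VC dimension equals ⌊2/α⌋ in the paper's special case), so sweeping a grid of α values amounts to scanning a sequence of function sets of increasing capacity, which is what makes cross-validation or SRM-style model selection applicable.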
DOI: 10.1016/j.patrec.2011.07.012