Nearest-neighbor

The idea is very similar to nearest-neighbor averaging, but applied to classification.

  1. Take a neighborhood, \(\mathcal{N}\), around \(x_0\).
  2. Count the class labels of the training points in \(\mathcal{N}\).
  3. The class label with the most votes becomes the prediction \(\hat{y}_0\).
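The three steps above can be sketched as a small majority-vote classifier. This is a minimal illustration, not a library implementation; the function name `knn_predict` and the toy data are my own, and the neighborhood is taken to be the k closest training points under Euclidean distance.

```python
from collections import Counter
import math

def knn_predict(X_train, y_train, x0, k=3):
    """Classify x0 by majority vote among its k nearest training points."""
    # Step 1: rank training points by Euclidean distance to x0;
    # the neighborhood N is the k closest ones.
    dists = [math.dist(x, x0) for x in X_train]
    nearest = sorted(range(len(X_train)), key=lambda i: dists[i])[:k]
    # Step 2: count the class labels inside the neighborhood.
    votes = Counter(y_train[i] for i in nearest)
    # Step 3: the most common label is the prediction.
    return votes.most_common(1)[0][0]

# Toy data: two well-separated clusters labeled "a" and "b".
X = [(0.0, 0.0), (0.1, 0.2), (0.2, 0.1), (5.0, 5.0), (5.1, 4.9), (4.9, 5.2)]
y = ["a", "a", "a", "b", "b", "b"]
print(knn_predict(X, y, (0.3, 0.3), k=3))  # → a
```

Ties are possible when k is even or there are more than two classes; `Counter.most_common` then returns an arbitrary winner, which is why odd k is a common choice for binary problems.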

This has the same problem as nearest-neighbor averaging: the curse of dimensionality. As the number of dimensions increases, the training data becomes sparse, so the nearest neighbors of \(x_0\) can end up far from it and stop being "local" in any meaningful sense.
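The sparsity claim can be made concrete with a standard back-of-the-envelope calculation: for data uniform on the unit hypercube \([0,1]^d\), a cubical neighborhood expected to contain 10% of the training points must have edge length \(0.1^{1/d}\), which approaches 1 as \(d\) grows.

```python
# Edge length of a hypercube covering 10% of the unit cube's volume,
# i.e. the neighborhood needed to capture ~10% of uniform training data.
for d in (1, 2, 10, 100):
    edge = 0.10 ** (1 / d)
    print(f"d={d:>3}: edge = {edge:.3f}")
# d=  1: edge = 0.100
# d=  2: edge = 0.316
# d= 10: edge = 0.794
# d=100: edge = 0.977
```

At \(d = 100\) the "neighborhood" spans nearly the entire range of every coordinate, so its points are no longer near \(x_0\) at all.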

A widely used neighborhood algorithm of this kind is K-nearest neighbors (KNN).

Date: 2026-02-04 Wed 20:24

Author: vj

Created: 2026-03-05 Thu 07:53
