Nearest-neighbor
The idea is very similar to nearest-neighbor averaging.
- Take a neighborhood, \(\mathcal{N}\), around \(x_0\).
- Count class labels.
- Assign the most frequent class label in \(\mathcal{N}\) as the prediction \(\hat{y}_0\).
This has the same problem as nearest-neighbor averaging: the curse of dimensionality. As the number of dimensions increases, the training data becomes sparse, so a neighborhood must grow large to contain enough points.
A widely used neighborhood algorithm is K-nearest neighbors (KNN), where \(\mathcal{N}\) consists of the \(K\) training points closest to \(x_0\).
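The voting procedure above can be sketched as follows. This is a minimal illustration, not a production implementation; the function name `knn_predict` and the toy data are our own.

```python
from collections import Counter
import math

def knn_predict(X_train, y_train, x0, k=3):
    """Label x0 by majority vote among its k nearest training points."""
    # Distance from x0 to every training point (Euclidean)
    dists = [math.dist(x, x0) for x in X_train]
    # Indices of the k closest points: this is the neighborhood N around x0
    nearest = sorted(range(len(X_train)), key=lambda i: dists[i])[:k]
    # Count class labels in the neighborhood and return the most common
    votes = Counter(y_train[i] for i in nearest)
    return votes.most_common(1)[0][0]

# Two well-separated clusters with labels "a" and "b"
X_train = [(0, 0), (0, 1), (1, 0), (5, 5), (5, 6), (6, 5)]
y_train = ["a", "a", "a", "b", "b", "b"]
print(knn_predict(X_train, y_train, (0.5, 0.5), k=3))  # prints "a"
```

With small \(K\) the decision boundary is flexible but noisy; larger \(K\) averages over more neighbors and smooths it out.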