A Natural Way We Learn
Say we are given a picture of something we have never seen before and we need to figure out what type of image it is; call this the test image. We are also given 100 other images, each with its type written on the back; call these the training images.
Let's say we manually compare all 100 training images with the test image and give each a similarity score: the more an image resembles the test image, the higher its score. Suppose that after comparing all 100 images we find one training image with a similarity score of 99% with our test image. We can then predict that our test image is of the same type as that training image.
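This "compare against everything and keep the best match" idea can be sketched in a few lines. Here the images are stand-in feature vectors and the similarity score is a hypothetical one (1 / (1 + Euclidean distance)), not a real image-comparison metric; the function names are illustrative, not from any library.

```python
def similarity(a, b):
    # Hypothetical score: higher means more alike.
    # Based on Euclidean distance between feature vectors.
    dist = sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5
    return 1.0 / (1.0 + dist)

def predict_by_best_match(test_image, training_images):
    # training_images: list of (features, label) pairs --
    # the label is the "type written on the back".
    best_label, best_score = None, -1.0
    for features, label in training_images:
        score = similarity(test_image, features)
        if score > best_score:
            best_score, best_label = score, label
    return best_label

training = [([0.0, 0.0], "cat"), ([5.0, 5.0], "dog"), ([0.2, 0.1], "cat")]
print(predict_by_best_match([0.1, 0.1], training))  # prints "cat"
```

This is exactly the 1-nearest-neighbour special case: the single most similar training example decides the prediction.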
K-Nearest Neighbour Algorithm
The KNN algorithm assumes that similar things are near each other. In KNN we select a number of neighbours of the thing we are interested in and, based on the properties of those neighbours, predict something about it.
In KNN all the training examples are kept in memory. When an unseen test case comes in, the algorithm finds the k training examples closest to it (its neighbours). For a classification problem, KNN returns the majority label of the neighbours; for regression, it returns the average of their labels.
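A minimal sketch of both modes, assuming Euclidean distance over feature vectors (the distance metric and function names are illustrative choices, not part of a specific library):

```python
from collections import Counter

def euclidean(a, b):
    # Straight-line distance between two feature vectors.
    return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

def knn_predict(train, query, k=3, mode="classify"):
    # train: list of (features, label) pairs kept in memory.
    # Sort all training examples by distance and keep the k closest.
    neighbours = sorted(train, key=lambda ex: euclidean(ex[0], query))[:k]
    labels = [label for _, label in neighbours]
    if mode == "classify":
        # Classification: majority label among the neighbours.
        return Counter(labels).most_common(1)[0][0]
    # Regression: average of the neighbours' numeric labels.
    return sum(labels) / len(labels)

train = [([0, 0], "A"), ([0, 1], "A"), ([1, 0], "A"), ([5, 5], "B"), ([5, 6], "B")]
print(knn_predict(train, [0.5, 0.5], k=3))  # prints "A"
```

Note that all the work happens at prediction time: there is no training step beyond storing the examples, which is why KNN is often called a "lazy" learner.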