The k-nearest neighbors (KNN) method is one of the simplest approaches in supervised learning. It is a lazy, non-parametric method: rather than extracting knowledge from the data into a parametric representation, it keeps the entire dataset and uses it directly to make predictions about new data points.
In KNN, predictions are based on distances to other points. When a new point needs to be classified, KNN finds its k nearest neighbours in the dataset (the points with the smallest distances to the new point) and assigns the class by majority vote among them. KNN regression works the same way, except that the neighbours' values are averaged instead of voted on. A minimal sketch of the classification case is shown below.
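The following sketch illustrates the idea under stated assumptions: Euclidean distance is used as the metric, and the function and variable names (`knn_predict`, `X_train`, `y_train`) are illustrative rather than taken from any particular library.

```python
import numpy as np
from collections import Counter

def knn_predict(X_train, y_train, x_new, k=3):
    """Classify x_new by majority vote among its k nearest training points."""
    # Euclidean distances from the new point to every stored training point
    distances = np.linalg.norm(X_train - x_new, axis=1)
    # Indices of the k smallest distances
    nearest = np.argsort(distances)[:k]
    # Majority vote among the labels of those neighbours
    # (a regression variant would instead return np.mean(y_train[nearest]))
    return Counter(y_train[nearest]).most_common(1)[0][0]

# Usage: two 2-D classes; the query point lies closest to class 1
X = np.array([[0.0, 0.0], [0.1, 0.2], [1.0, 1.0], [0.9, 1.1]])
y = np.array([0, 0, 1, 1])
print(knn_predict(X, y, np.array([0.8, 0.9]), k=3))  # -> 1
```

Note that nothing is "learned" ahead of time: all the work happens at prediction time, which is why the method is called lazy.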
[Figure: KNN, the principle.]
This approach works well for some applications, but distances in the input space are not always representative of the true semantic similarity between two points. For example, it makes little sense to compare two images directly in pixel space: they would need heavy preprocessing before a method like KNN could give reasonable results. More on this topic is discussed in the entry on embeddings.