kNN (k-nearest neighbors) is a lazy learning algorithm: it has no explicit training phase and simply stores the training data. To classify a new sample, it computes the distance from that sample to every training sample, selects the k nearest neighbors, and assigns the class chosen by a majority vote among them. The value of k matters: a small k makes the model sensitive to noise, while a large k smooths the decision boundary, so the choice can significantly affect the outcome. Different distance metrics can be used depending on the problem, such as Euclidean, Manhattan, and Minkowski distance; cosine similarity is another option that is often useful for high-dimensional data.
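A minimal sketch of the procedure described above, using Euclidean distance and a majority vote; the `knn_predict` function and the toy data are illustrative, not from any particular library:

```python
from collections import Counter
import math

def knn_predict(train_X, train_y, x, k=3):
    # Compute the Euclidean distance from x to every training sample,
    # then sort so the nearest neighbors come first
    dists = sorted(
        (math.dist(x, xi), yi) for xi, yi in zip(train_X, train_y)
    )
    # Keep only the labels of the k nearest neighbors
    k_labels = [label for _, label in dists[:k]]
    # Majority vote among those labels decides the predicted class
    return Counter(k_labels).most_common(1)[0][0]

# Toy data: two well-separated clusters labeled "a" and "b"
X = [(0, 0), (0, 1), (1, 0), (5, 5), (5, 6), (6, 5)]
y = ["a", "a", "a", "b", "b", "b"]
print(knn_predict(X, y, (0.5, 0.5), k=3))  # → a
```

Swapping in Manhattan or cosine distance only requires changing the distance expression inside `sorted`; the neighbor selection and voting steps stay the same.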