# K-Nearest Neighbors (KNN)
**KNN** is a simple, "lazy" learning algorithm.
## How it Works
1. Store all training data.
2. When a new item arrives, find the **K** closest items (neighbors) to it.
3. Check the class of those neighbors.
4. Assign the most common class to the new item.
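The steps above can be sketched as a minimal classifier in pure Python (the function and variable names here are illustrative, not from any particular library):

```python
from collections import Counter
import math

def knn_classify(training_data, new_point, k=3):
    """Classify new_point by majority vote among its k nearest neighbors.

    training_data: list of (features, label) pairs.
    """
    # Step 1: "training" is just storing the data (lazy learning).
    # Step 2: rank every stored item by its distance to the new point.
    by_distance = sorted(
        training_data,
        key=lambda item: math.dist(item[0], new_point),
    )
    # Step 3: look at the classes of the k closest items.
    neighbor_labels = [label for _, label in by_distance[:k]]
    # Step 4: assign the most common class.
    return Counter(neighbor_labels).most_common(1)[0][0]
```

Note that all the distance computation happens at prediction time, which is exactly what makes KNN "lazy".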
## Key Concepts
- **Lazy Learner**: It doesn't build a model during training. It waits until it needs to classify.
- **Distance Measure**: how do we measure "closeness"?
  - **Euclidean Distance**: straight-line distance (most common).
  - **Manhattan Distance**: grid-like distance (sum of moves along each axis).
- **Choosing K**:
  - If K is too small (e.g., K = 1), the classifier is sensitive to noise.
  - If K is too large, the neighborhood may include points from other classes.
  - K is usually an odd number (like 3 or 5) to avoid ties in the vote.
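The two distance measures above differ only in how the coordinate differences are combined; a quick sketch:

```python
import math

def euclidean(p, q):
    # Straight-line distance: square root of the sum of squared differences.
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(p, q)))

def manhattan(p, q):
    # Grid-like distance: sum of absolute differences along each axis.
    return sum(abs(a - b) for a, b in zip(p, q))

# From (0, 0) to (3, 4): the straight line is 5.0, the grid walk is 7.
```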
## Example
- New Point: Green Circle.
- K = 3.
- Neighbors: 2 Red Triangles, 1 Blue Square.
- Result: the Green Circle is classified as **Red Triangle** (majority vote, 2 of 3).
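The same scenario can be run end to end (a self-contained sketch; the coordinates are made up so that the green circle's three nearest neighbors are two red triangles and one blue square):

```python
from collections import Counter
import math

# Labeled training points (hypothetical coordinates chosen to match the example).
points = [
    ((1.0, 1.0), "red triangle"),
    ((1.5, 1.5), "red triangle"),
    ((2.0, 1.0), "blue square"),
    ((5.0, 5.0), "blue square"),
]
green_circle = (1.2, 1.2)
k = 3

# Find the k nearest labeled points, then take the majority vote.
nearest = sorted(points, key=lambda p: math.dist(p[0], green_circle))[:k]
vote = Counter(label for _, label in nearest).most_common(1)[0][0]
print(vote)  # prints: red triangle
```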