# Bayesian Classification

**Bayesian Classifiers** are based on probability (Bayes' Theorem). They predict the probability that a given tuple belongs to a particular class.
## Bayes' Theorem

$$ P(H|X) = \frac{P(X|H) \cdot P(H)}{P(X)} $$
- **P(H|X)**: Posterior probability (the probability of hypothesis H given evidence X).
- **P(H)**: Prior probability (the probability of H being true in general).
- **P(X|H)**: Likelihood (the probability of observing evidence X if H is true).
- **P(X)**: Evidence (the probability of X occurring; it acts as a normalizing constant).
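
To make the terms concrete, here is a minimal worked example in Python. The scenario and all numbers (a spam filter reacting to the word "free") are invented for illustration:

```python
# Hypothetical spam-filter numbers (illustrative only).
p_h = 0.20          # P(H): prior probability that an email is spam
p_x_given_h = 0.90  # P(X|H): probability that spam contains the word "free"
p_x = 0.25          # P(X): overall probability that an email contains "free"

# Bayes' Theorem: P(H|X) = P(X|H) * P(H) / P(X)
p_h_given_x = p_x_given_h * p_h / p_x
print(f"P(spam | 'free') = {p_h_given_x:.2f}")  # -> 0.72
```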
## Naive Bayes Classifier

- **"Naive"**: It assumes that all attributes are **conditionally independent** of each other, given the class.
- *Example*: It assumes "Income" and "Age" don't affect each other within a class, which simplifies the math.
- **Pros**: Very fast and effective for large datasets (like spam filtering); a short sketch follows this list.
- **Cons**: The independence assumption is often not true in real life.
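
As a minimal sketch of Naive Bayes in practice, here is a Gaussian Naive Bayes example using scikit-learn. The toy data, feature meanings ("age", "income"), and labels are assumptions made up for illustration:

```python
from sklearn.naive_bayes import GaussianNB

# Toy training tuples (hypothetical): [age, income in thousands]
X_train = [[25, 30], [30, 40], [45, 80], [50, 90], [23, 25], [48, 85]]
y_train = ["no", "no", "yes", "yes", "no", "yes"]  # e.g., buys_computer

model = GaussianNB()
model.fit(X_train, y_train)

# Predict the class of a new tuple and inspect the posterior P(class | X).
new_tuple = [[40, 70]]
print(model.predict(new_tuple))        # predicted class label
print(model.predict_proba(new_tuple))  # posterior probability per class
```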
## Bayesian Belief Networks (BBN)

- Unlike Naive Bayes, BBNs **allow** dependencies between variables.
- They use a graph structure (a directed acyclic graph, DAG) to show which variables affect others; each node stores the probability of its variable given its parents, as in the sketch below.
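
Here is a minimal sketch of the idea with a single edge, Smoker -> LungCancer. The network structure and all probability values below are invented for illustration:

```python
# One-edge belief network (all numbers are made-up illustrative values).
p_smoker = {True: 0.30, False: 0.70}               # P(Smoker)
p_cancer_given_smoker = {True: 0.10, False: 0.01}  # P(Cancer=yes | Smoker)

def joint(smoker: bool, cancer: bool) -> float:
    """The joint probability factorizes along the DAG's edges:
    P(Smoker, Cancer) = P(Smoker) * P(Cancer | Smoker)."""
    p_c = p_cancer_given_smoker[smoker]
    return p_smoker[smoker] * (p_c if cancer else 1.0 - p_c)

# Marginalize over the parent variable to get P(Cancer=yes).
p_cancer = sum(joint(s, True) for s in (True, False))
print(f"P(Cancer) = {p_cancer:.3f}")  # 0.30*0.10 + 0.70*0.01 = 0.037
```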