Addition of units 1, 3, 4, 5
unit 4/03_Bayesian_Classification.md
# Bayesian Classification

**Bayesian Classifiers** are statistical classifiers based on Bayes' Theorem. They predict the probability that a given tuple belongs to a particular class.
## Bayes' Theorem

$$ P(H|X) = \frac{P(X|H) \cdot P(H)}{P(X)} $$

- **P(H|X)**: Posterior Probability (probability of hypothesis H given evidence X).
- **P(H)**: Prior Probability (probability of H being true in general).
- **P(X|H)**: Likelihood (probability of seeing evidence X if H is true).
- **P(X)**: Evidence (probability of X occurring).
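
To make the formula concrete, here is a minimal numeric sketch. The event names and probabilities below are invented purely for illustration:

```python
# Hypothetical example: H = "customer buys a computer",
#                       X = "customer is 30 or younger".
p_h = 0.5           # P(H): prior probability of buying a computer
p_x_given_h = 0.4   # P(X|H): likelihood of being <= 30 among buyers
p_x = 0.3           # P(X): overall probability of being <= 30

# Bayes' Theorem: P(H|X) = P(X|H) * P(H) / P(X)
p_h_given_x = p_x_given_h * p_h / p_x
print(f"P(H|X) = {p_h_given_x:.3f}")   # 0.4 * 0.5 / 0.3 ≈ 0.667
```

Seeing the evidence X raises the probability of H from the prior 0.5 to a posterior of about 0.67.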
## Naive Bayes Classifier

- **"Naive"**: It assumes that all attributes are **conditionally independent** of each other, given the class label.
  - *Example*: It assumes "Income" and "Age" don't affect each other, which simplifies the math.
- **Pros**: Very fast and effective for large datasets (like spam filtering).
- **Cons**: The independence assumption is often not true in real life.
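
To show how the independence assumption is used in practice, the following is a rough count-based sketch of a Naive Bayes classifier. The training data, attribute values, and class labels are made up, and no Laplace smoothing is applied:

```python
from collections import Counter, defaultdict

# Tiny hypothetical training set: (age, income) -> buys_computer
train = [
    ("youth",  "high",   "no"),
    ("youth",  "medium", "no"),
    ("middle", "high",   "yes"),
    ("middle", "low",    "yes"),
    ("senior", "medium", "yes"),
    ("senior", "low",    "yes"),
]

# Class counts for the prior P(C), and per-attribute value counts for P(x_i | C),
# keyed by (attribute index, class label).
class_counts = Counter(label for *_, label in train)
cond_counts = defaultdict(Counter)
for *attrs, label in train:
    for i, value in enumerate(attrs):
        cond_counts[(i, label)][value] += 1

def predict(attrs):
    """Pick the class maximizing P(C) * prod_i P(x_i | C) (naive independence)."""
    total = sum(class_counts.values())
    best_label, best_score = None, -1.0
    for label, count in class_counts.items():
        score = count / total                                # prior P(C)
        for i, value in enumerate(attrs):
            score *= cond_counts[(i, label)][value] / count  # likelihood P(x_i | C)
        if score > best_score:
            best_label, best_score = label, score
    return best_label, best_score

print(predict(("senior", "low")))   # -> ('yes', 0.1666...)
```

Because of the independence assumption, each attribute contributes a separate factor P(x_i | C), so training only needs simple per-attribute counts rather than the full joint distribution.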
## Bayesian Belief Networks (BBN)

- Unlike Naive Bayes, BBNs **allow** dependencies between variables.
- They use a graph structure (a directed acyclic graph, DAG) to show which variables affect which others, with a conditional probability table (CPT) for each variable given its parents.
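
As a sketch of the idea, the toy network below has two parent variables feeding one child, each with its own CPT; the structure and all numbers are invented for illustration, and the query is answered by brute-force enumeration of the joint distribution:

```python
from itertools import product

# Hypothetical DAG:   FamilyHistory   Smoker
#                            \          /
#                             LungCancer
P_F = {True: 0.15, False: 0.85}     # CPT for FamilyHistory (no parents)
P_S = {True: 0.30, False: 0.70}     # CPT for Smoker (no parents)
P_L = {                             # CPT for LungCancer given (FamilyHistory, Smoker)
    (True, True): 0.80, (True, False): 0.50,
    (False, True): 0.40, (False, False): 0.05,
}

def joint(f, s, l):
    """Factorization over the DAG: P(F, S, L) = P(F) * P(S) * P(L | F, S)."""
    p_l = P_L[(f, s)] if l else 1 - P_L[(f, s)]
    return P_F[f] * P_S[s] * p_l

def prob_cancer_given_smoker(s_value):
    """P(LungCancer = True | Smoker = s_value), summing out FamilyHistory."""
    num = sum(joint(f, s_value, True) for f in (True, False))
    den = sum(joint(f, s_value, l) for f, l in product((True, False), repeat=2))
    return num / den

print(f"P(LungCancer | Smoker) = {prob_cancer_given_smoker(True):.3f}")   # 0.460
```

Unlike Naive Bayes, the CPT for LungCancer depends on *both* parents jointly, which is exactly the kind of dependency the network structure is meant to capture.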