
Bayesian Classification

Bayesian classifiers are statistical classifiers based on Bayes' Theorem. They predict the probability that a given tuple belongs to a particular class.

Bayes' Theorem

P(H|X) = [P(X|H) · P(H)] / P(X)
  • P(H|X): Posterior Probability (Probability of Hypothesis H given Evidence X).
  • P(H): Prior Probability (Probability of H being true generally).
  • P(X|H): Likelihood (Probability of seeing Evidence X if H is true).
  • P(X): Evidence (Probability of X occurring).
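The four terms above can be put to work in a small numeric sketch. The spam-filter probabilities below are illustrative assumptions, not real statistics:

```python
# A worked example of Bayes' Theorem with made-up spam-filter numbers.
def posterior(likelihood, prior, evidence):
    """P(H|X) = P(X|H) * P(H) / P(X)."""
    return likelihood * prior / evidence

p_h = 0.20          # P(H): prior -- assume 20% of all email is spam
p_x_given_h = 0.60  # P(X|H): the word "free" appears in 60% of spam
p_x = 0.25          # P(X): "free" appears in 25% of all email

print(posterior(p_x_given_h, p_h, p_x))  # P(spam | "free") = 0.48
```

Note how a word that is common in spam (60%) but also fairly common overall (25%) raises the posterior well above the 20% prior, but not to certainty.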

Naive Bayes Classifier

  • "Naive": It assumes that all attributes are independent of each other.
    • Example: It assumes "Income" and "Age" don't affect each other, which simplifies the math.
  • Pros: Very fast and effective for large datasets (like spam filtering).
  • Cons: The independence assumption is often not true in real life.
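A minimal sketch of the idea, using a tiny made-up dataset with the "Income" and "Age" attributes from the example above (all values and labels are hypothetical):

```python
from collections import Counter, defaultdict

# Toy training data: (attributes, class label). Labels are hypothetical.
data = [
    ({"income": "high", "age": "young"}, "buys"),
    ({"income": "high", "age": "old"},   "buys"),
    ({"income": "high", "age": "young"}, "buys"),
    ({"income": "low",  "age": "young"}, "no"),
    ({"income": "low",  "age": "old"},   "no"),
]

labels = Counter(label for _, label in data)  # class counts
counts = defaultdict(Counter)                 # (label, attr) -> value counts
vocab = defaultdict(set)                      # attr -> distinct values seen
for attrs, label in data:
    for attr, value in attrs.items():
        counts[(label, attr)][value] += 1
        vocab[attr].add(value)

def predict(attrs):
    """Pick the class maximizing P(C) * product over attrs of P(value | C)."""
    best, best_p = None, -1.0
    for label, n in labels.items():
        p = n / len(data)  # prior P(C)
        for attr, value in attrs.items():
            c = counts[(label, attr)]
            # Laplace smoothing avoids zeroing out unseen attribute values
            p *= (c[value] + 1) / (n + len(vocab[attr]))
        if p > best_p:
            best, best_p = label, p
    return best

print(predict({"income": "high", "age": "old"}))  # -> buys
```

Because of the independence assumption, the score for each class is just the prior multiplied by one per-attribute likelihood at a time, which is why training and prediction are so fast.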

Bayesian Belief Networks (BBN)

  • Unlike Naive Bayes, BBNs allow dependencies between variables.
  • They use a graph structure (DAG) to show which variables affect others.
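A BBN can be sketched as conditional probability tables attached to a DAG. Here the chain Age → Income → Buys lets Income depend on Age (which Naive Bayes forbids); all the probability numbers are made-up assumptions:

```python
# Hypothetical BBN with DAG: Age -> Income -> Buys.
# Each node stores P(node | parents) as a conditional probability table.
p_age = {"young": 0.6, "old": 0.4}        # Age has no parents
p_income = {                              # Income's parent is Age
    "young": {"high": 0.3, "low": 0.7},
    "old":   {"high": 0.6, "low": 0.4},
}
p_buys = {                                # Buys's parent is Income
    "high": {"yes": 0.8, "no": 0.2},
    "low":  {"yes": 0.3, "no": 0.7},
}

def joint(age, income, buys):
    """Chain rule over the DAG: P(A, I, B) = P(A) * P(I|A) * P(B|I)."""
    return p_age[age] * p_income[age][income] * p_buys[income][buys]

print(joint("young", "high", "yes"))  # 0.6 * 0.3 * 0.8 = 0.144
```

The DAG structure is what makes this tractable: the full joint distribution factors into one small table per node, instead of one giant table over every variable combination.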