

Random Variables and Signals

Topic 4: Statistical Independence



Definition

People generally have an idea of the concept of independence, but we will formalize it for probability theory here.

Definition $ \qquad $ Given (S,F,P) and A, B ∈ F, events A and B are statistically independent iff

$ P(A\cap B) = P(A)P(B) $

It may seem more intuitive if we note that, by the definition above, A and B are statistically independent iff P(A|B)P(B) = P(A)P(B), which means that P(A|B) = P(A) (assuming P(B) > 0). In other words, knowing that B occurred does not change the probability of A.
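The notes contain no code, but the product rule is easy to verify by enumeration. The following sketch uses a two-dice sample space with events of my own choosing (not from the notes): A = "first die is even" and B = "second die shows 1".

```python
from fractions import Fraction

# Sample space: ordered pairs from two fair six-sided dice (hypothetical example).
S = [(i, j) for i in range(1, 7) for j in range(1, 7)]
P = lambda E: Fraction(len(E), len(S))  # uniform probability measure

A = [s for s in S if s[0] % 2 == 0]   # first die is even
B = [s for s in S if s[1] == 1]       # second die shows 1
AB = [s for s in S if s in A and s in B]

# P(A) = 1/2, P(B) = 1/6, P(A ∩ B) = 1/12, so the product rule holds.
print(P(AB) == P(A) * P(B))  # True
```

Because each die's outcome carries no information about the other, any event defined on the first die alone is independent of any event defined on the second.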

Notation $ \qquad $ A, B are independent ⇔

$ A \perp\!\!\!\perp B $

Note $ \qquad $ We will usually refer to statistical independence as simply independence in this course.

Note $ \qquad $ If A and B are independent, then A' and B are also independent.

Proof:

$ \begin{align} B&=(A\cap B)\cup(A'\cap B) \\ \Rightarrow P(B)&=P(A\cap B)+P(A'\cap B) \\ &=P(A)P(B) + P(A'\cap B) \end{align} $
$ \begin{align} \Rightarrow P(A'\cap B) &= P(B) - P(A)P(B) \\ &= P(B)[1-P(A)] \\ &= P(B)P(A') \end{align} $
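The complement result can be checked numerically with the same style of enumeration (the two-dice events here are my own illustrative choice, not from the notes):

```python
from fractions import Fraction

# Two fair six-sided dice (hypothetical example).
S = [(i, j) for i in range(1, 7) for j in range(1, 7)]
P = lambda E: Fraction(len(E), len(S))

A = [s for s in S if s[0] % 2 == 0]   # first die is even
Ac = [s for s in S if s not in A]     # complement A'
B = [s for s in S if s[1] == 1]       # second die shows 1

# A independent of B implies A' independent of B, as derived above.
AcB = [s for s in Ac if s in B]
print(P(AcB) == P(Ac) * P(B))  # True
```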

Definition $ \qquad $ Events $ A_1,A_2,...,A_n $, for any finite n, are statistically independent iff for any $ k=2,...,n $ and any 1 ≤ $ j_1<j_2<...<j_k $ ≤ n, we have that

$ P(A_{j_1}\cap...\cap A_{j_k}) = \prod_{i=1}^k P(A_{j_i}) $
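Note that the product rule must hold for every subcollection, not just the full collection; pairwise independence alone is not enough. A classical counterexample (my own addition, not from the notes) uses two fair coin flips with A = "first flip heads", B = "second flip heads", C = "the two flips agree":

```python
from fractions import Fraction
from itertools import product

# Two fair coin flips (hypothetical example); all four outcomes equally likely.
S = list(product('HT', repeat=2))
P = lambda E: Fraction(len(E), len(S))

A = [s for s in S if s[0] == 'H']    # first flip heads
B = [s for s in S if s[1] == 'H']    # second flip heads
C = [s for s in S if s[0] == s[1]]   # the two flips agree

inter = lambda *Es: [s for s in S if all(s in E for E in Es)]

# Every pair satisfies the product rule ...
print(P(inter(A, B)) == P(A) * P(B))  # True
print(P(inter(A, C)) == P(A) * P(C))  # True
print(P(inter(B, C)) == P(B) * P(C))  # True
# ... but the triple does not: P(A ∩ B ∩ C) = 1/4, while P(A)P(B)P(C) = 1/8.
print(P(inter(A, B, C)) == P(A) * P(B) * P(C))  # False
```

So A, B, C are pairwise independent but not statistically independent as a collection, which is exactly why the definition quantifies over all index subsets.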






Back to all ECE 600 notes
