[[Category:math]]
[[Category:tutorial]]
[[Category:probability]]

==Theorem==

Let <math>X</math> and <math>Y</math> be two random variables with variances <math>Var(X)</math> and <math>Var(Y)</math> respectively and covariance <math>Cov(X,Y)</math>. Then <br/>
<math> Var(aX + bY) = a^2Var(X) + b^2Var(Y) + 2abCov(X,Y) \ </math>
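As a quick numerical sanity check (separate from the proof below; the sample size, coefficients, and joint distribution here are arbitrary illustrative choices), the identity can be verified on simulated data, since the same algebra holds exactly for sample moments when variance and covariance use the same normalization:

```python
import numpy as np

rng = np.random.default_rng(0)

# Draw a correlated pair (X, Y); any joint distribution would do.
x = rng.normal(size=100_000)
y = 0.6 * x + rng.normal(size=100_000)

a, b = 2.0, -3.0

# Left side: variance of the linear combination directly.
lhs = np.var(a * x + b * y)

# Right side: the theorem's expansion. bias=True makes np.cov use the
# same 1/n normalization as np.var, so the identity is exact.
rhs = a**2 * np.var(x) + b**2 * np.var(y) + 2 * a * b * np.cov(x, y, bias=True)[0, 1]

print(abs(lhs - rhs))  # essentially zero (floating-point error only)
```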
  
  


==Proof==

By definition, we have that the variance of random variable <math>Z</math> is given by <br/>
<math>\begin{align}
Var(Z) &= E[(Z-E[Z])^2] \\
&= E[Z^2 - 2ZE[Z] + (E[Z])^2] \\
&= E[Z^2] - 2(E[Z])^2 + (E[Z])^2 \\
&= E[Z^2] - (E[Z])^2
\end{align}</math>
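This shortcut form of the variance can be checked exactly on a small discrete distribution (the values and probabilities below are arbitrary), using exact rational arithmetic so there is no rounding to obscure the equality:

```python
from fractions import Fraction as F

# Arbitrary discrete distribution: values z with probabilities p summing to 1.
z = [F(-1), F(2), F(5)]
p = [F(1, 4), F(1, 2), F(1, 4)]

ez  = sum(pi * zi for pi, zi in zip(p, z))                 # E[Z]
ez2 = sum(pi * zi**2 for pi, zi in zip(p, z))              # E[Z^2]
var_def = sum(pi * (zi - ez)**2 for pi, zi in zip(p, z))   # E[(Z - E[Z])^2]

print(var_def == ez2 - ez**2)  # True: both forms agree exactly
```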


<math>\begin{align}
\Rightarrow Var(aX+bY) &= E[(aX+bY)^2] - (E[aX+bY])^2 \\
&= E[a^2X^2 + 2abXY + b^2Y^2] - (aE[X] + bE[Y])^2 \\
&= a^2E[X^2] + 2abE[XY] + b^2E[Y^2] - a^2(E[X])^2 - 2abE[X]E[Y] - b^2(E[Y])^2  \\
&= a^2[E[X^2]-(E[X])^2] + b^2[E[Y^2]-(E[Y])^2] + 2ab(E[XY]-E[X]E[Y]) \\
&= a^2[E[X^2]-\mu_X^2] + b^2[E[Y^2]-\mu_Y^2] + 2ab(E[XY]-\mu_X\mu_Y)
\end{align}</math><br/>
where <math>\mu_X = E[X]</math> and <math>\mu_Y = E[Y]</math>. <br/>
Also recall from the definition of covariance that <math>Cov(X,Y) = E[XY]-\mu_X\mu_Y</math>. So finally, we have <br/>
<math> Var(aX+bY) = a^2Var(X) + b^2Var(Y) + 2abCov(X,Y)_{\blacksquare} </math>


Note that the theorem is a particular case of the more general form: <br/>
<math>Var\left(\sum_{i=1}^N a_i X_i\right) = \sum_{i=1}^N \sum_{j=1}^N a_i a_j Cov(X_i,X_j)</math>
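The general form can likewise be checked numerically: it says that the variance of a linear combination is the quadratic form <math>a^T C a</math>, where <math>C</math> is the covariance matrix. A sketch with three arbitrarily correlated variables (all choices below are illustrative):

```python
import numpy as np

rng = np.random.default_rng(1)

# Three correlated variables as rows of a data matrix (illustrative sample).
data = rng.normal(size=(3, 50_000))
data[1] += 0.5 * data[0]
data[2] -= 0.3 * data[1]

a = np.array([1.0, -2.0, 0.5])     # arbitrary coefficients a_i
C = np.cov(data, bias=True)        # C[i, j] = Cov(X_i, X_j), 1/n normalization

lhs = np.var(a @ data)             # Var(sum_i a_i X_i)
rhs = a @ C @ a                    # sum_{i,j} a_i a_j Cov(X_i, X_j)

print(abs(lhs - rhs))  # essentially zero (floating-point error only)
```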
Also note that in the special case where <math>X</math> and <math>Y</math> are uncorrelated, <math>Cov(X,Y) = 0</math> (proof). This produces the following result: <br/>
<math> Var(aX+bY) = a^2Var(X) + b^2Var(Y) \ </math>
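For the uncorrelated case, a simulation with independent samples only matches approximately, since the sample covariance of independent draws is near but not exactly zero (distributions and coefficients below are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(2)

# Independent (hence uncorrelated) variables; the cross-term vanishes in expectation.
x = rng.normal(loc=1.0, scale=2.0, size=1_000_000)
y = rng.exponential(scale=3.0, size=1_000_000)

a, b = 0.5, 4.0
lhs = np.var(a * x + b * y)
rhs = a**2 * np.var(x) + b**2 * np.var(y)

print(abs(lhs - rhs) / rhs)  # small: sample covariance is near, not exactly, zero
```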
----

[[Proofs_mhossain|Back to list of proofs]]

Latest revision as of 14:17, 13 June 2013


