Revision as of 16:50, 20 October 2008

Here are some concepts taught in class about conditional probability which can be useful for solving the problem. Some of us have given the procedure to solve the problem. This should act as a review of the formulas and concepts.

______________________________________________________________________________________________________

Suppose X is a random variable that can be equal either to 0 or to 1. As above, one may speak of the conditional probability of any event A given the event X = 0, and also of the conditional probability of A given the event X = 1. The former is denoted P(A|X = 0) and the latter P(A|X = 1). Now define a new random variable Y, whose value is P(A|X = 0) if X = 0 and P(A|X = 1) if X = 1. That is,

        Y = P(A|X = 0)     X = 0
        Y = P(A|X = 1)     X = 1

This new random variable Y is said to be the conditional probability of the event A given the discrete random variable X:

        Y = P(A|X) 
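Since Y = P(A|X) is itself a random variable, its expectation can be estimated by simulation and compared against the marginal probability P(A). Below is a minimal Monte Carlo sketch; all of the probabilities in it (P(X = 1) = 0.3, P(A|X = 0) = 0.2, P(A|X = 1) = 0.7) are made up purely for illustration.

```python
# Monte Carlo sketch (hypothetical probabilities) checking that
# E[Y] = E[P(A|X)] matches the marginal probability P(A).
import random

random.seed(0)
p_x1 = 0.3                      # hypothetical P(X = 1); so P(X = 0) = 0.7
p_a_given = {0: 0.2, 1: 0.7}    # hypothetical P(A | X = x)

n = 100_000
y_sum = 0.0     # accumulates Y = P(A|X) across draws of X
a_count = 0     # counts how often the event A actually occurs
for _ in range(n):
    x = 1 if random.random() < p_x1 else 0
    y_sum += p_a_given[x]                 # Y's value for this draw of X
    if random.random() < p_a_given[x]:    # simulate whether A occurred
        a_count += 1

e_y = y_sum / n      # estimate of E[Y]
p_a = a_count / n    # estimate of the marginal P(A)
# Exact value here is 0.7 * 0.2 + 0.3 * 0.7 = 0.35, so both estimates
# should land near 0.35.
print(e_y, p_a)
```

Both printed numbers should be close to 0.35, illustrating the identity stated next.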

According to the law of total probability, the expected value of Y is just the marginal (or "unconditional") probability of A; that is, E[Y] = E[P(A|X)] = P(A).

______________________________________________________________________________________________________

Given two jointly distributed random variables X and Y, the conditional probability distribution of Y given X (written "Y | X") is the probability distribution of Y when X is known to be a particular value.

     For discrete random variables, the conditional probability mass function can be written as P(Y = y | X = x). From the definition of conditional probability, this is

     P(Y = y | X = x) = P(X = x and Y = y) / P(X = x)
                      = (P(X = x | Y = y) · P(Y = y)) / P(X = x)
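The discrete formula can be applied directly to a small joint table. The table below is made up for illustration:

```python
# Sketch: computing a discrete conditional pmf P(Y = y | X = x)
# from a made-up joint table p_xy[(x, y)] = P(X = x, Y = y).

p_xy = {(0, 0): 0.1, (0, 1): 0.3,
        (1, 0): 0.2, (1, 1): 0.4}   # hypothetical joint distribution

def p_x(x):
    """Marginal pmf of X: sum the joint table over all values of y."""
    return sum(p for (xi, _), p in p_xy.items() if xi == x)

def p_y_given_x(y, x):
    """Conditional pmf: P(Y = y | X = x) = P(X = x, Y = y) / P(X = x)."""
    return p_xy[(x, y)] / p_x(x)

print(p_y_given_x(1, 0))  # 0.3 / 0.4 = 0.75
print(p_y_given_x(1, 1))  # 0.4 / 0.6 ≈ 0.667
```

For any fixed x, the conditional probabilities over all y sum to 1, as a pmf must.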

Similarly, for continuous random variables, the conditional probability density function can be written as pY|X(y | x), and this is

     pY|X (y | x) = pX,Y (x, y) / pX(x)
                  = (pX|Y (x | y) · pY(y)) / pX(x)

where pX,Y(x, y) gives the joint distribution of X and Y, while pX(x) gives the marginal distribution for X.
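The continuous formula can be sanity-checked numerically with a simple joint density. The density pX,Y(x, y) = x + y on the unit square and the conditioning value x = 0.3 below are assumptions chosen for illustration:

```python
# Sketch with a made-up joint density: p_{X,Y}(x, y) = x + y on [0, 1]^2.
# Its marginal is p_X(x) = x + 1/2, so p_{Y|X}(y | x) = (x + y) / (x + 1/2).

def p_joint(x, y):
    return x + y            # integrates to 1 over the unit square

def p_marginal_x(x):
    return x + 0.5          # integral of (x + y) dy for y in [0, 1]

def p_y_given_x(y, x):
    return p_joint(x, y) / p_marginal_x(x)

# A conditional density must integrate to 1 in y; check with a midpoint sum.
n = 10_000
x = 0.3                     # hypothetical conditioning value
total = sum(p_y_given_x((k + 0.5) / n, x) / n for k in range(n))
print(total)                # should be very close to 1.0
```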
