• '''Methods of Generating Random Variables''' == 1. Generating uniformly distributed random numbers between 0 and 1: U(0,1) ==
    3 KB (409 words) - 10:05, 17 April 2013
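The entry above concerns generating U(0,1) pseudo-random numbers. As a minimal sketch (the page's own method is not visible in the snippet, so the choice of a Park-Miller linear congruential generator here is an assumption):

```python
def lcg(seed: int, n: int) -> list:
    """Generate n pseudo-random U(0,1) samples with a multiplicative
    linear congruential generator x_{k+1} = (a * x_k) mod m, using the
    classic Park-Miller "minimal standard" constants (illustrative choice)."""
    a, m = 16807, 2**31 - 1  # multiplier and prime modulus
    x = seed
    out = []
    for _ in range(n):
        x = (a * x) % m       # next integer state in [1, m-1]
        out.append(x / m)     # rescale to (0, 1)
    return out
```

In practice one would use a library generator (e.g. Python's `random.random`), but the recurrence above is the basic mechanism such pages usually describe.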
  • '''Applications of Poisson Random Variables''' == Poisson Random Variables==
    5 KB (708 words) - 07:22, 22 April 2013
  • ...e different statistical numbers describing relations of datasets or random variables. So, I decided to crack down on some research and bring the important ideas '''Covariance:''' This is a measure of two random variables' association with each other.
    7 KB (1,146 words) - 06:19, 5 May 2013
  • where <math>X_1</math> and <math>X_2</math> are iid scalar random variables. ...two Gaussian variables, then <math>X</math> is also a Gaussian distributed random variable [[Linear_combinations_of_independent_gaussian_RVs|(proof)]] charac
    6 KB (1,084 words) - 13:20, 13 June 2013
  • ...nations_of_independent_gaussian_RVs|Linear Combinations of Gaussian Random Variables]] ..._RVs_mhossain|Moment Generating Functions of Linear Combinations of Random Variables]]
    2 KB (227 words) - 11:15, 6 October 2013
  • ...is the expectation function, <math>X</math> and <math>Y</math> are random variables with distribution functions <math>f_X(x)</math> and <math>f_Y(y)</math> res where <math>X_i</math>'s are random variables and <math>a_i</math>'s are real constants ∀<math>i=1,2,...,N</math>.
    3 KB (585 words) - 14:15, 13 June 2013
  • Let <math>X</math> and <math>Y</math> be two random variables with variances <math>Var(X)</math> and <math>Var(Y)</math> respectively and By definition, we have that the variance of random variable <math>Z</math> is given by <br/>
    2 KB (333 words) - 14:17, 13 June 2013
  • [[ECE600_F13_notes_mhossain|'''The Comer Lectures on Random Variables and Signals''']] ...ECE 600. Class Lecture. [https://engineering.purdue.edu/~comerm/600 Random Variables and Signals]. Faculty of Electrical Engineering, Purdue University. Fall 20
    10 KB (1,697 words) - 12:10, 21 May 2014
  • '''The Comer Lectures on Random Variables and Signals''' *[[ECE600_F13_rv_definition_mhossain|Random Variables: Definition]]
    2 KB (227 words) - 12:10, 21 May 2014
  • [[ECE600_F13_notes_mhossain|'''The Comer Lectures on Random Variables and Signals''']] ...finition''' <math>\qquad</math> An '''outcome''' is a possible result of a random experiment.
    20 KB (3,448 words) - 12:11, 21 May 2014
  • [[ECE600_F13_notes_mhossain|'''The Comer Lectures on Random Variables and Signals''']] ...es' Theorem. We will see other equivalent expressions when we cover random variables.
    6 KB (1,023 words) - 12:11, 21 May 2014
  • [[ECE600_F13_rv_distribution_mhossain|Next Topic: Random Variables: Distributions]] [[ECE600_F13_notes_mhossain|'''The Comer Lectures on Random Variables and Signals''']]
    7 KB (1,194 words) - 12:11, 21 May 2014
  • [[ECE600_F13_rv_definition_mhossain|Next Topic: Random Variables: Definition]] [[ECE600_F13_notes_mhossain|'''The Comer Lectures on Random Variables and Signals''']]
    9 KB (1,543 words) - 12:11, 21 May 2014
  • [[ECE600_F13_rv_definition_mhossain|Previous Topic: Random Variables: Definitions]]<br/> [[ECE600_F13_notes_mhossain|'''The Comer Lectures on Random Variables and Signals''']]
    15 KB (2,637 words) - 12:11, 21 May 2014
  • [[ECE600_F13_rv_distribution_mhossain|Previous Topic: Random Variables: Distributions]]<br/> ...00_F13_rv_Functions_of_random_variable_mhossain|Next Topic: Functions of a Random Variable]]
    6 KB (1,109 words) - 12:11, 21 May 2014
  • [[ECE600_F13_notes_mhossain|'''The Comer Lectures on Random Variables and Signals''']] <font size= 3> Topic 8: Functions of Random Variables</font size>
    9 KB (1,723 words) - 12:11, 21 May 2014
  • ...13_rv_Functions_of_random_variable_mhossain|Previous Topic: Functions of a Random Variable]]<br/> [[ECE600_F13_notes_mhossain|'''The Comer Lectures on Random Variables and Signals''']]
    8 KB (1,474 words) - 12:12, 21 May 2014
  • [[ECE600_F13_notes_mhossain|'''The Comer Lectures on Random Variables and Signals''']] The pdf f<math>_X</math> of a random variable X is a function of a real valued variable x. It is sometimes usefu
    5 KB (804 words) - 12:12, 21 May 2014
  • [[ECE600_F13_notes_mhossain|'''The Comer Lectures on Random Variables and Signals''']] <font size= 3> Topic 11: Two Random Variables: Joint Distribution</font size>
    8 KB (1,524 words) - 12:12, 21 May 2014
  • [[ECE600_F13_notes_mhossain|'''The Comer Lectures on Random Variables and Signals''']] <font size= 3> Topic 12: Independent Random Variables</font size>
    2 KB (449 words) - 12:12, 21 May 2014
  • [[ECE600_F13_notes_mhossain|'''The Comer Lectures on Random Variables and Signals''']] <font size= 3> Topic 13: Functions of Two Random Variables</font size>
    9 KB (1,568 words) - 12:12, 21 May 2014
  • [[ECE600_F13_notes_mhossain|'''The Comer Lectures on Random Variables and Signals''']] Given random variables X and Y, let Z = g(X,Y) for some g:'''R'''<math>^2</math>→'''R'''. Then E[Z] ca
    7 KB (1,307 words) - 12:12, 21 May 2014
  • [[ECE600_F13_notes_mhossain|'''The Comer Lectures on Random Variables and Signals''']] <font size= 3> Topic 15: Conditional Distributions for Two Random Variables</font size>
    6 KB (1,139 words) - 12:12, 21 May 2014
  • [[ECE600_F13_notes_mhossain|'''The Comer Lectures on Random Variables and Signals''']] <font size= 3> Topic 16: Conditional Expectation for Two Random Variables</font size>
    4 KB (875 words) - 12:13, 21 May 2014
  • [[ECE600_F13_notes_mhossain|'''The Comer Lectures on Random Variables and Signals''']] <font size= 3> Topic 17: Random Vectors</font size>
    12 KB (1,897 words) - 12:13, 21 May 2014
  • [[ECE600_F13_notes_mhossain|'''The Comer Lectures on Random Variables and Signals''']] We will now consider infinite sequences of random variables. We will discuss what it means for such a sequence to converge. This will l
    15 KB (2,578 words) - 12:13, 21 May 2014
  • [[ECE600_F13_notes_mhossain|'''The Comer Lectures on Random Variables and Signals''']] ...n discrete-time random processes, but we will now formalize the concept of random process, including both discrete-time and continuous time.
    10 KB (1,690 words) - 12:13, 21 May 2014
  • [[ECE600_F13_notes_mhossain|'''The Comer Lectures on Random Variables and Signals''']] <font size= 3> Topic 20: Linear Systems with Random Inputs</font size>
    8 KB (1,476 words) - 12:13, 21 May 2014
  • [[Category:random variables]] Question 1: Probability and Random Processes
    3 KB (449 words) - 21:36, 5 August 2018
  • <font size="4">Question 1: Probability and Random Processes </font> ...cting your proof, keep in mind that may be either a discrete or continuous random variable.
    6 KB (995 words) - 09:21, 15 August 2014
  • <font size="4">Question 1: Probability and Random Processes </font> ...} \dots </math> be a sequence of independent, identically distributed random variables, each uniformly distributed on the interval [0, 1], and hence having pdf <br
    12 KB (1,948 words) - 10:16, 15 August 2014
  • ...A stochastic process { X(t), t∈T } is an ordered collection of random variables, where T is the index set, and if t is a time in the set, X(t) is the proc ...s that use X1,…,Xn as independently identically distributed (iid) random variables. However, note that states do not necessarily have to be independently iden
    19 KB (3,004 words) - 09:39, 23 April 2014
  • ...between the prior probability and the posterior probability of two random variables or events. Given two events <math>A</math> and <math>B</math>, we may want t
    19 KB (3,255 words) - 10:47, 22 January 2015
  • ...ollows a multivariate Gaussian distribution in 2D. The data comes from two random seeds which are of equal probability and are well separated for better illus #Hotelling, H. (1933). Analysis of a complex of statistical variables into principal components. Journal of educational psychology, 24(6), 417.
    22 KB (3,459 words) - 10:40, 22 January 2015
  • ..._S14_MH|Whitening and Coloring Transforms for Multivariate Gaussian Random Variables]] ...'R''' where '''X''' ∈ '''R'''<math>^d</math> is a d-dimensional Gaussian random vector with mean '''μ''' and covariance matrix '''Σ'''. This slecture ass
    17 KB (2,603 words) - 10:38, 22 January 2015
  • The generic situation is that we observe an n-dimensional random vector X with probability<br>density (or mass) function <span class="texhtm <br> Treating <span class="texhtml">''X''<sub>''j''</sub></span> as random variables in the above equations, we have
    25 KB (4,187 words) - 10:49, 22 January 2015
  • ...nd way was using a uniform random variable in vector operations. A uniform random vector of the same size of the whole dataset is first generated, whose each ...asy to understand even for people who do not have deep knowledge in random variables & probabilities. The demonstrations (figures & codes) in MATLAB were good i
    3 KB (508 words) - 16:12, 14 May 2014
  • ...or a given value of θ is denoted by p(x|θ ). It should be noted that the random variable X and the parameter θ can be vector-valued. Now we obtain a set o ...s parameter estimation, the parameter θ is viewed as a random variable or random vector following the distribution p(θ ). Then the probability density func
    15 KB (2,273 words) - 10:51, 22 January 2015
  • ...random variables and probability mass function in case of discrete random variables and 'θ' is the parameter being estimated. ...es) or the probability of the probability mass (in case of discrete random variables)'''
    12 KB (1,986 words) - 10:49, 22 January 2015
  • ...tor, <math> \theta \in \Omega</math>. So for example, after we observe the random vector <math>Y \in \mathbb{R}^{n}</math>, then our objective is to use <mat ...andom vector <math>Y</math>, the estimate, <math>\hat{\theta}</math>, is a random vector. The mean of the estimator, <math>\bar{\theta}</math>, can be comput
    14 KB (2,356 words) - 20:48, 30 April 2014
  • ...by flipping coins (heads or tails). As the size of the sample increases, a random classifier's ROC point migrates towards (0.5,0.5). ...ring the area under the curve, we can tell the accuracy of a classifier. A random guess is just the line y = x with an area of 0.5. A perfect model will have
    11 KB (1,823 words) - 10:48, 22 January 2015
  • \section{Title: Generation of normally distributed random numbers under a binary prior probability} ...a_1)]$, label the sample as class 1, then continue by generating a normal random number based on the class 1 statistics $(\mu, \sigma)$.
    16 KB (2,400 words) - 23:34, 29 April 2014
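The entry above describes a two-step sampler: draw u ~ U(0,1), label the sample class 1 if u falls in [0, P(class 1)], then draw a normal sample with that class's statistics. A minimal sketch of that procedure (parameter names such as `p1`, `mu1`, `sigma1` are illustrative, not taken from the page):

```python
import random

def sample_labeled_normal(p1, mu1, sigma1, mu2, sigma2):
    """Draw a class label by comparing a U(0,1) sample to the prior p1
    of class 1, then draw a normal sample using that class's statistics.
    Returns (label, sample)."""
    u = random.random()
    if u <= p1:                          # u in [0, P(class 1)] -> class 1
        return 1, random.gauss(mu1, sigma1)
    return 2, random.gauss(mu2, sigma2)  # otherwise class 2
```

Repeating this draw N times yields a labeled mixture sample whose class proportions approach the priors.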
  • For discrete random variables, Bayes rule formula is given by, For continuous random variables, Bayes rule formula is given by,
    7 KB (1,106 words) - 10:42, 22 January 2015
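The formulas the entry above refers to are the standard statements of Bayes' rule in the two cases:

```latex
% Discrete random variables (probability mass functions):
P(X = x \mid Y = y) = \frac{P(Y = y \mid X = x)\, P(X = x)}{P(Y = y)}

% Continuous random variables (probability density functions):
f_{X \mid Y}(x \mid y) = \frac{f_{Y \mid X}(y \mid x)\, f_X(x)}{f_Y(y)}
```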
  • <font size="4">Generation of normally distributed random numbers from two categories with different priors </font> ...2), 1]</math> and should be labeled as class 2, then move on to the normal random number generation step with the class 2 statistics like the same way as we
    18 KB (2,852 words) - 10:40, 22 January 2015
  • ...tor, <math> \theta \in \Omega</math>. So for example, after we observe the random vector <math>Y \in \mathbb{R}^{n}</math>, then our objective is to use <mat ...andom vector <math>Y</math>, the estimate, <math>\hat{\theta}</math>, is a random vector. The mean of the estimator, <math>\bar{\theta}</math>, can be comput
    19 KB (3,418 words) - 10:50, 22 January 2015
  • If the data-points from each class are random variables, then it can be proven that the optimal decision rule to classify a point < Therefore, it is desirable to assume that the data-points are random variables, and attempt to estimate <math> P(w_i|x_0) </math>, in order to use it to c
    9 KB (1,604 words) - 10:54, 22 January 2015
  • The principle of how to generate a Gaussian random variable ...od for pseudo random number sampling first. Then, we will explain Gaussian random sample generation method based on Box Muller transform. Finally, we will in
    8 KB (1,189 words) - 10:39, 22 January 2015
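The entry above mentions Gaussian sample generation via the Box-Muller transform. A minimal sketch: two independent U(0,1) samples map to two independent N(0,1) samples through a polar change of variables.

```python
import math
import random

def box_muller(u1: float, u2: float) -> tuple:
    """Map two independent U(0,1) samples (u1, u2) to two independent
    standard normal samples via the Box-Muller transform."""
    r = math.sqrt(-2.0 * math.log(u1))   # radius; u1 must be in (0, 1]
    theta = 2.0 * math.pi * u2           # uniform angle
    return r * math.cos(theta), r * math.sin(theta)

# Example draw; 1 - random() lies in (0, 1], avoiding log(0)
z0, z1 = box_muller(1.0 - random.random(), random.random())
```

Scaling a standard normal sample z by sigma and shifting by mu then gives an N(mu, sigma^2) sample.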
  • ...,X_N be independent and identically distributed (iid) Poisson random variables. Then, we will have a joint frequency function that is the product of margi ..._N be independent and identically distributed (iid) exponential random variables. As P(X=x)=0 when x<0, no samples can sit in the x<0 region. Thus, for al
    13 KB (1,966 words) - 10:50, 22 January 2015
  • The K-means algorithm also introduces a set of binary variables to represent assignment of data points to specific clusters: <br /> This set of binary variables is interpreted as follows: if data point <math>n</math> is assigned to clus
    8 KB (1,350 words) - 10:57, 22 January 2015
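The entry above describes the binary indicator variables K-means uses: the indicator for point n and cluster k is 1 exactly when cluster k's centroid is the nearest one. A minimal sketch for scalar data points (the restriction to 1-D and the function name are assumptions for illustration):

```python
def assignments(points, centroids):
    """Build the binary indicator matrix r where r[n][k] = 1 if data
    point n is assigned to (i.e. closest to) centroid k, else 0."""
    r = []
    for p in points:
        dists = [(p - c) ** 2 for c in centroids]  # squared distances
        k_star = dists.index(min(dists))           # index of nearest centroid
        r.append([1 if k == k_star else 0 for k in range(len(centroids))])
    return r
```

Each row contains exactly one 1, encoding the hard assignment of that point to a single cluster.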
  • [[Category:random variables]] Question 1: Probability and Random Processes
    3 KB (470 words) - 07:47, 4 November 2014
