Latest revision as of 22:58, 6 December 2020

Things to Know before we start

Before we go into the proofs, here are a few definitions and concepts to know so you don't get confused when we discuss the proof of this formula. I am giving fairly simple explanations here, just enough for you to understand the ideas. If you want to learn more, you can find resources on these topics on the "More Sources" page.

Probability density function: a function that models the probability of a random variable taking a certain value, usually written $ P(X=A) $, where X is the random variable and A is the outcome we are looking for. The integral of any probability density function over its entire domain equals 1.
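As a quick numeric sanity check of the "integrates to 1" property, the sketch below approximates the integral of a made-up density, f(x) = 2x on [0, 1], with a midpoint Riemann sum. The density and the number of subintervals are illustrative choices, not anything from the proofs.

```python
# Midpoint Riemann sum of the (made-up) density f(x) = 2x on [0, 1].
# A valid probability density function must integrate to 1 over its domain.
n = 100000                 # number of subintervals (arbitrary choice)
dx = 1.0 / n               # width of each subinterval

# Evaluate f at the midpoint of each subinterval and sum the areas.
total = sum(2 * ((i + 0.5) * dx) * dx for i in range(n))

print(total)  # very close to 1.0
```

For this linear density the midpoint rule is exact up to floating-point rounding, so the sum lands essentially on 1.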

Standard Deviation: measures how spread out the numbers in a sample are. Usually denoted by σ. The standard deviation is the square root of the variance.

Variance: the average of the squared differences from the mean. Usually denoted by σ². Variance is the square of the standard deviation. It is used in calculations more often because it is much easier to manipulate algebraically without loss of information. It also weighs outliers more heavily than the standard deviation does, which is important to investors and stock traders.
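The two definitions above can be checked numerically. The sketch below computes the population variance and standard deviation of a small made-up sample and shows that one is the square root of the other.

```python
# Population variance and standard deviation of a made-up sample.
import math

data = [2, 4, 4, 4, 5, 5, 7, 9]          # illustrative values
mean = sum(data) / len(data)             # arithmetic mean

# Variance: the average of the squared differences from the mean.
variance = sum((x - mean) ** 2 for x in data) / len(data)

# Standard deviation: the square root of the variance.
std_dev = math.sqrt(variance)

print(mean, variance, std_dev)  # 5.0 4.0 2.0
```

Note this is the population formula (dividing by n); sample variance divides by n − 1 instead.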

Theta (θ): used in statistics to represent any unknown parameter of interest. In a probability function, it can stand for the likelihood that an event occurs.
$ P(X=A)=θ $
Here θ denotes the probability that event A occurs under the probability function $ P(X) $.

Expected Value: the probability-weighted average of the outcomes of a random variable. Denoted $ E(X) $. In simple terms, it is the value you would expect to see on average over many repetitions of the experiment.
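A concrete instance of the weighted average: the sketch below computes the expected value of a fair six-sided die (a standard illustrative example, not taken from the proofs), where each face has probability 1/6.

```python
# Expected value as a probability-weighted average: E(X) = sum of x * P(X = x).
outcomes = [1, 2, 3, 4, 5, 6]    # faces of a fair die
probs = [1 / 6] * 6              # each face is equally likely

expected = sum(x * p for x, p in zip(outcomes, probs))

print(expected)  # 3.5
```

Note that 3.5 is not itself a possible outcome, which is why "most likely event" is not quite the right intuition; the expected value is a long-run average.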

Likelihood Function: measures how well a statistical model with parameter θ explains the observed data. Denoted $ L(θ) $.

Score: the gradient, or vector of partial derivatives, of the natural logarithm of the likelihood function $ L(θ) $ with respect to the parameter θ.
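To make the score concrete, the sketch below works with a Bernoulli sample of k successes in n trials (the numbers are made up for illustration). The log-likelihood is ℓ(θ) = k ln θ + (n − k) ln(1 − θ), and its derivative, the score, is k/θ − (n − k)/(1 − θ); a finite-difference check confirms the analytic derivative.

```python
# Score of a Bernoulli log-likelihood: d/dθ log L(θ).
import math

n, k = 10, 7        # hypothetical trial results: 7 successes in 10 trials
theta = 0.5         # parameter value at which we evaluate the score

def log_likelihood(t):
    """Log-likelihood of k successes in n Bernoulli(t) trials."""
    return k * math.log(t) + (n - k) * math.log(1 - t)

# Analytic score: k/θ - (n - k)/(1 - θ)
score = k / theta - (n - k) / (1 - theta)

# Finite-difference approximation of the same derivative as a check.
h = 1e-6
numeric = (log_likelihood(theta + h) - log_likelihood(theta - h)) / (2 * h)

print(score, numeric)  # both about 8.0
```

A positive score at θ = 0.5 says the data (7 of 10 successes) would be better explained by a larger θ, which is the intuition behind maximum-likelihood estimation.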

Back To Fisher information
