Revision as of 20:36, 6 December 2020

The formal definition of Fisher information is:

"For random variable X, with a likelihood function $ L(θ,X) $ and score function (with respect to parameter θ) $ s(θ;X) = \nabla [\ln(L(θ,X))] $" (Rothman).

In plainer terms, for some probability density function, the amount of information that an observation carries about the parameter θ, the Fisher information $ I(θ) $, is equal to the variance of the score. Since the score has expectation zero at the true parameter value, this variance is also the expected squared score: $ I(θ) = \operatorname{Var}(s(θ;X)) = E[s(θ;X)^2] $.
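The "variance of the score" definition can be checked numerically. The sketch below (a hypothetical illustration, not from the original article) uses the model $ X \sim N(θ, 1) $, for which the score with respect to the mean works out to $ s(θ;x) = x - θ $, so the Fisher information is exactly 1; a Monte Carlo estimate of the score's variance should come out close to that.

```python
import numpy as np

# Illustrative sketch: estimate Fisher information for the mean of N(theta, 1).
# For this model, ln L(theta, x) = -(x - theta)^2 / 2 + const, so the score is
#   s(theta; x) = d/dtheta ln L = x - theta,
# and I(theta) = Var(s(theta; X)) = Var(X) = 1 exactly.

rng = np.random.default_rng(0)
theta = 2.0
x = rng.normal(loc=theta, scale=1.0, size=100_000)  # draws from N(theta, 1)

score = x - theta             # score evaluated at the true parameter
fisher_est = score.var()      # Fisher information = variance of the score

print(fisher_est)  # should be close to the exact value I(theta) = 1
```

With 100,000 draws the estimate typically lands within a percent or two of the exact value 1, matching the definition above.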
