Revision as of 21:45, 6 December 2020

Introduction to Fisher Information

Fisher information is a way of measuring the amount of information that a random variable X carries about an unknown parameter θ, and is denoted by $ I(θ) $.

In order to understand what that means, we first need to understand what the word "information" means in statistics. Information is not the same thing as data. Data is a fact or a collection of facts; on its own, data does not tell you much, and no conclusion can be drawn from data alone. Information is the result of analyzing the data and the conclusions that can be drawn from it in context. For example, if students in a class took a test, their individual scores would be the data, and the class average would be information. The words "statistic" and "information" are often used interchangeably in statistics. Using Fisher information, we can measure how much a sample tells us about parameters such as the mean or standard deviation of a probability distribution.
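The data-versus-information distinction above can be illustrated with a short Python sketch; the scores below are made-up values for illustration only:

```python
# Hypothetical test scores: the raw numbers are the "data".
scores = [82, 75, 91, 68, 88]

# A summary computed from the data -- here, the class average -- is a
# statistic, i.e. "information" extracted from the data.
average = sum(scores) / len(scores)
print(average)  # 80.8
```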


The formal definition of Fisher information is:

For a random variable X with likelihood function $ L(θ,X) $ and score function (with respect to the parameter θ) $ s(θ;X) = \nabla [\ln(L(θ,X))] $, the Fisher information $ I(θ) $ is the variance of the score (Rothman).
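As a concrete worked example (added here for illustration, not part of the cited definition): for a single Bernoulli trial $ X \sim Bernoulli(θ) $, the likelihood is $ L(θ,X) = θ^X(1-θ)^{1-X} $, so $ \ln(L(θ,X)) = X\ln(θ) + (1-X)\ln(1-θ) $, and differentiating with respect to θ gives the score $ s(θ;X) = \frac{X}{θ} - \frac{1-X}{1-θ} $.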

In more understandable terms, this just means that for some probability density function with parameter θ, the amount of information stored in the parameter θ, or Fisher information $ I(θ) $, is equal to the variance of the score.

$ I(θ) = Var(s(θ;X)) = E[(s(θ;X))^2] $

(The second equality holds because the expected score is zero, $ E[s(θ;X)] = 0 $, under standard regularity conditions.)
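To make this formula concrete, here is a short Python sketch (not part of the original page) that estimates the Fisher information of a Bernoulli(θ) model by Monte Carlo, using $ I(θ) = E[(s(θ;X))^2] $, and compares it to the known closed form $ I(θ) = \frac{1}{θ(1-θ)} $. The function names and sample size are illustrative choices.

```python
import random

def bernoulli_score(theta, x):
    # Score s(theta; x) = d/dtheta ln L(theta; x) for X ~ Bernoulli(theta):
    # ln L = x*ln(theta) + (1-x)*ln(1-theta), so s = x/theta - (1-x)/(1-theta).
    return x / theta - (1 - x) / (1 - theta)

def fisher_information_mc(theta, n=200_000, seed=0):
    # Monte Carlo estimate of I(theta) = E[s(theta; X)^2], i.e. the variance
    # of the score (the expected score is zero for a regular model).
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n):
        x = 1 if rng.random() < theta else 0
        total += bernoulli_score(theta, x) ** 2
    return total / n

theta = 0.3
estimate = fisher_information_mc(theta)
exact = 1.0 / (theta * (1.0 - theta))  # closed form for Bernoulli
print(estimate, exact)
```

With 200,000 samples the Monte Carlo estimate should land close to the exact value of about 4.76 for θ = 0.3.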

Back To Fisher information
