=Introduction to Fisher's Information=
Fisher's information is a way of measuring the amount of information that a random variable X carries about an unknown parameter θ, and it is denoted by <math>I(θ)</math>. <br />
In order to understand what that means though, we need to understand what the word "information" means in statistics. Information is not the same thing as data. Data is usually a fact or a collection of facts. On its own, data does not tell you much, and no conclusion can be drawn from data alone. Information is the analysis of the data and the conclusions that can be drawn from it in context. For example, if students in a class took a test, their scores would be the data, and the class average would be information. Oftentimes the words "statistics" and "information" are used interchangeably, and in this context they mean the same thing. Using Fisher's information, we can quantify how precisely the parameters of a probability distribution, such as its mean or standard deviation, can be estimated from data. <br />
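As a small illustration of that distinction (a sketch, not from the original article; the scores below are made up), the raw test scores are the data, while the class average computed from them is information:

<pre>
# Data: a made-up list of raw test scores (facts on their own, no conclusions)
scores = [78, 85, 92, 64, 88]

# Information: a conclusion drawn from the data, here the class average
average = sum(scores) / len(scores)
print(average)  # 81.4
</pre>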
  
The formal definition of Fisher information is:<br />
  
"For random variable X, with a likelihood function <math>L(θ,X)</math> and score function(with respect to parameter θ) <math>s(θ;X) = \nabla [ln(L(θ,X))]</math>(Rothman)<br />
+
''"For random variable X, with a likelihood function <math>L(θ,X)</math> and score function(with respect to parameter θ) <math>s(θ;X) = \nabla [ln(L(θ,X))]</math> (Rothman)<br />''
  
 
In more understandable terms, this just means that for some probability density function, the amount of information stored in the parameter θ, or the Fisher information <math>I(θ)</math>, is equal to the variance of the score.<br />
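To make the score function concrete, here is a small worked example that is not from the original article. Suppose X follows a Bernoulli distribution with parameter θ, so a single observation <math>x \in \{0,1\}</math> has likelihood <br />

<math>L(θ;x) = θ^x (1-θ)^{1-x}</math> <br />

Taking the logarithm and differentiating with respect to θ gives the score: <br />

<math>s(θ;x) = \frac{d}{dθ}\ln L(θ;x) = \frac{x}{θ} - \frac{1-x}{1-θ}</math> <br />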
<math>I(θ) = Var(s(θ;X)) = E[(s(θ;X))^2] = -E[\nabla [ln(L(θ,X))]]</math>
+
 
 +
<math>I(θ) = Var(s(θ;X)) = E[(s(θ;X))^2]</math>
[[Walther_MA271_Fall2020_topic13|Back To Fisher information]]
[[Category:MA271Fall2020Walther]]
