

Neyman-Pearson Lemma and Receiver Operating Characteristic Curve

A slecture by ECE student Soonam Lee

Partly based on the ECE662 Spring 2014 lecture material of Prof. Mireille Boutin.

click here for PDF version


Introduction

The purpose of this slecture is to build an understanding of the Neyman-Pearson Lemma and the Receiver Operating Characteristic (ROC) curve, from theory to application. The underlying theory stems from statistics and can be applied to signal detection and classification. To firmly ground these concepts, we first review the relevant statistical ideas, including false alarm and miss rates. After that, we consider Bayes' decision rule using a cost function. Next, we visit the Neyman-Pearson test and the minimax test. Lastly, we discuss ROC curves. Note that we only consider the two-class case in this slecture, but the concepts can be extended to multiple classes.


The General Two Classes Case Problem

Before starting our discussion, we have the following setup:

  • $ X $ is a measured random variable, random vector, or random process
  • $ x \in \mathcal{X} $ is a realization of $ X $
  • $ \theta \in \Theta $ is an unknown parameter (possibly vector-valued)
  • $ f(x; \theta) $ is the pdf of $ X $ (a known function)

Two distinct hypotheses on $ \theta $ are considered, $ H_0: \theta \in \Theta_0 $ versus $ H_1: \theta \in \Theta_1 $, where $ \Theta_0 $ and $ \Theta_1 $ form a partition of $ \Theta $ into two disjoint regions:

$ \Theta_0 \cup \Theta_1 = \Theta, \qquad \Theta_0 \cap \Theta_1 = \emptyset $
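
As a concrete running example (not part of the original setup, but reused in the sketches below), one might take $ X $ to be a single Gaussian observation with unknown mean $ \theta $ and unit variance,

$ f(x; \theta) = \frac{1}{\sqrt{2\pi}} e^{-(x-\theta)^2/2}, \qquad \Theta_0 = \{0\}, \quad \Theta_1 = \{1\}, $

so that $ H_0 $ states the mean is 0 and $ H_1 $ states the mean is 1.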

False Alarm Rate and Miss Rate

From statistical references [1, 2], a hypothesis test can make one of two types of error, named Type I Error and Type II Error. If $ \theta \in \Theta_0 $ but the hypothesis test incorrectly decides to reject $ H_0 $, then the test has made a Type I Error. If, on the other hand, $ \theta \in \Theta_1 $ but the test decides to accept $ H_0 $, a Type II Error has been made. A Type I Error is often called a false positive and a Type II Error a false negative. These two types of errors are depicted in Table 1.

                 Decision: Accept $ H_0 $    Decision: Reject $ H_0 $
Truth $ H_0 $    Correct Decision            Type I Error
Truth $ H_1 $    Type II Error               Correct Decision

Table 1: Two types of errors in hypothesis testing.

In contrast to the statistical perspective, engineers focus more on the false alarm rate and the miss rate. The false alarm rate corresponds to a Type I Error: a given condition is declared to hold when it actually does not, i.e., a positive effect is assumed erroneously. The miss rate corresponds to declaring that a given condition does not hold when it actually does. In general, the miss rate can be computed by first computing the hit rate and then subtracting it from 1.

Let's assume a test function $ \phi(x) $ such that

$ \phi(x) = \begin{cases} 1, & \text{decide } H_1 \\ 0, & \text{decide } H_0 \end{cases} $

The test function $ \phi(x) $ maps $ \mathcal{X} $ to the decision space $ \{0, 1\} $ for deciding between $ H_0 $ and $ H_1 $. The function $ \phi(x) $ induces a partition of $ \mathcal{X} $ into decision regions. Denote $ \mathcal{X}_0 = \{x: \phi(x) = 0 \} $ and $ \mathcal{X}_1 = \{x: \phi(x) = 1 \} $. Then, the false alarm and miss probabilities under $ \phi $ can be expressed as follows:

$ P_F (\theta) = E_\theta [\phi] = \int_{\mathcal{X}} \phi(x) f(x; \theta)\, dx = \int_{\mathcal{X}_1} f(x; \theta)\, dx, \quad \theta \in \Theta_0 $

$ P_M (\theta) = E_\theta [1 - \phi] = \int_{\mathcal{X}} [1 - \phi(x)] f(x; \theta)\, dx = 1 - \int_{\mathcal{X}_1} f(x; \theta)\, dx, \quad \theta \in \Theta_1. $

The probability of correctly deciding $ H_1 $ is called the hit (or detection) probability:

$ 1- P_M (\theta) = P_D(\theta) = E_\theta [\phi], \quad \theta \in \Theta_1. $
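
To make these quantities concrete, the following is a minimal numerical sketch (hypothetical, not from the original slecture) that assumes the Gaussian example above and a simple threshold test $ \phi(x) = 1 $ when $ x > \tau $, with an arbitrarily chosen threshold $ \tau = 0.5 $:

import numpy as np
from scipy.stats import norm

# Hypothetical sketch: Gaussian example H0: theta = 0 versus H1: theta = 1,
# with the threshold test phi(x) = 1 when x > tau.
tau = 0.5  # decision threshold, chosen arbitrarily for illustration

def phi(x):
    """Test function: decide H1 (return 1) when the observation exceeds tau."""
    return (np.asarray(x) > tau).astype(int)

# Closed-form values: under theta, X ~ N(theta, 1), so E_theta[phi] = P(X > tau).
P_F = norm.sf(tau, loc=0.0, scale=1.0)   # false alarm probability (theta in Theta_0)
P_D = norm.sf(tau, loc=1.0, scale=1.0)   # hit / detection probability (theta in Theta_1)
P_M = 1.0 - P_D                          # miss probability

# Monte Carlo check of E_theta[phi] = integral of phi(x) f(x; theta) dx.
rng = np.random.default_rng(0)
x0 = rng.normal(0.0, 1.0, size=100_000)  # samples drawn under H0
x1 = rng.normal(1.0, 1.0, size=100_000)  # samples drawn under H1
print(f"P_F: analytic {P_F:.4f}, Monte Carlo {phi(x0).mean():.4f}")
print(f"P_D: analytic {P_D:.4f}, Monte Carlo {phi(x1).mean():.4f}")
print(f"P_M: analytic {P_M:.4f}")

For this example the analytic and Monte Carlo values agree closely, illustrating that $ E_\theta[\phi] $ is exactly the probability mass that $ f(x; \theta) $ places on the decision region $ \mathcal{X}_1 $.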

In the ECE 662 class notes [3], the professor introduced error 1 ($ \epsilon_1 $) and error 2 ($ \epsilon_2 $), which are essentially the same concepts as the false alarm rate and the miss rate, respectively, as the fractions below show. The only difference is that the expectations are represented as fractions of test data points:

$ \epsilon_1 = \frac{\text{# of test data points from class } \Theta_0 \text{ misclassified as class } \Theta_1}{\text{# of test data points in class } \Theta_0} $

$ \epsilon_2 = \frac{\text{# of test data points from class } \Theta_1 \text{ misclassified as class } \Theta_0}{\text{# of test data points in class } \Theta_1}. $
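
A small simulation sketch (again using the hypothetical Gaussian example and threshold test above rather than any actual class data) shows how these fractions would be computed from labeled test points:

import numpy as np

# Hypothetical labeled test set: y_true marks the true class (0 for Theta_0,
# 1 for Theta_1); the decisions come from the same threshold test as above.
rng = np.random.default_rng(1)
y_true = rng.integers(0, 2, size=1000)               # true class labels
x = rng.normal(loc=y_true.astype(float), scale=1.0)  # Gaussian observations, mean = label
y_pred = (x > 0.5).astype(int)                       # decisions phi(x)

# epsilon_1: fraction of class Theta_0 points misclassified as Theta_1 (false alarms)
eps1 = np.mean(y_pred[y_true == 0] == 1)
# epsilon_2: fraction of class Theta_1 points misclassified as Theta_0 (misses)
eps2 = np.mean(y_pred[y_true == 1] == 0)
print(f"epsilon_1 (empirical false alarm rate) = {eps1:.3f}")
print(f"epsilon_2 (empirical miss rate)        = {eps2:.3f}")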


References

[1] G. Casella and R. L. Berger, "Statistical Inference (2nd Edition)," Cengage Learning, 2001.
[2] J. K. Ghosh, "STAT528: Introduction to Mathematical Statistics," Fall 2012, Purdue University.
[3] M. Boutin, "ECE662: Statistical Pattern Recognition and Decision Making Process," Spring 2014, Purdue University.

Questions and comments

If you have any questions, comments, etc. please post them on this page.


Back to ECE662, Spring 2014
