
Revision as of 17:39, 28 April 2010


What is your favorite decision method?

Student poll for ECE662, Spring 2010.

  • Coin flipping. ostava
    • Interesting. What is the expected rate of error for this method? -pm
    • I would think the expected error would be .5. Assume heads decides class 1 and tails decides class 2, so P(error) = P(Heads)P(Class 2) + P(Tails)P(Class 1). With a fair coin, P(Heads) = P(Tails) = .5, and with only two classes, P(Class 1) + P(Class 2) = 1. Thus P(error) = .5(P(Class 1) + P(Class 2)) = .5 -ATH
    • Actually, a loaded coin might be better! Looking at the relative frequency of the training data points, one can estimate the priors and bias the coin accordingly. -Satyam.
  • Nearest neighbors. It reminds me of human behavior in that if we don't know what to do in certain situations (social ones in particular), we'll look at those around us to decide what to do. -ATH
  • Kernel methods in general (SVM, KDE, KPCA, etc.), since they handle non-linearly separable data more easily. I also feel that clustering techniques are very useful in my research area. --ilaguna
  • Nearest neighbor. From a practical point of view, it is easy to implement and quite fast (and, surprisingly, not too bad in terms of error). -Satyam.
  • write your opinion here. sign your name/nickname.
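The coin-flip discussion above can be checked with a quick Monte Carlo sketch. The priors below are made up for illustration; the point is that a fair coin gives error ≈ .5 regardless of the priors, while biasing the coin toward the class priors (as suggested in the thread) lowers the error to about 2·P(Class 1)·P(Class 2):

```python
import random

random.seed(0)

def coin_flip_classifier(p_heads):
    """Decide class 1 on heads, class 2 on tails."""
    return 1 if random.random() < p_heads else 2

def error_rate(p_class1, p_heads, n=100_000):
    """Monte Carlo estimate of P(error) for a coin-flip decision rule."""
    errors = 0
    for _ in range(n):
        true_class = 1 if random.random() < p_class1 else 2
        if coin_flip_classifier(p_heads) != true_class:
            errors += 1
    return errors / n

p1 = 0.7  # assumed prior for class 1 (illustrative, not from the course)
print(error_rate(p1, 0.5))  # fair coin: ~0.5 whatever the priors
print(error_rate(p1, p1))   # coin biased to the priors: ~2*p1*(1-p1) = ~0.42
```

Note that the biased coin is still not optimal: always choosing the more probable class gives error min(P(Class 1), P(Class 2)), here .3.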

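Since two posts above favor nearest neighbor, here is a minimal brute-force 1-NN sketch (the training points and labels are toy values for illustration only):

```python
def nearest_neighbor(train, labels, x):
    """Classify x with the label of the closest training point (Euclidean)."""
    dist2 = lambda a, b: sum((ai - bi) ** 2 for ai, bi in zip(a, b))
    best = min(range(len(train)), key=lambda i: dist2(train[i], x))
    return labels[best]

# Toy 2-D training set (illustrative values)
train = [(0.0, 0.0), (0.1, 0.2), (1.0, 1.0), (0.9, 1.1)]
labels = [1, 1, 2, 2]

print(nearest_neighbor(train, labels, (0.05, 0.1)))  # -> 1
print(nearest_neighbor(train, labels, (0.95, 1.0)))  # -> 2
```

This matches the "easy to implement and quite fast" point: no training phase at all, at the cost of scanning the data at query time.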
Back to 2010 Spring ECE 662 mboutin
