• [[Lecture 17 - Nearest Neighbors Clarification Rule and Metrics_Old Kiwi|17]], [[Lecture 18 - Nearest Neighbors Clarification Rule and Metrics(Continued)_Old Kiwi|18]],
    8 KB (1,354 words) - 08:51, 17 January 2013
  • [[Lecture 17 - Nearest Neighbors Clarification Rule and Metrics_Old Kiwi|17]], [[Lecture 18 - Nearest Neighbors Clarification Rule and Metrics(Continued)_Old Kiwi|18]],
    13 KB (2,073 words) - 08:39, 17 January 2013
  • [[Lecture 17 - Nearest Neighbors Clarification Rule and Metrics_Old Kiwi|17]], [[Lecture 18 - Nearest Neighbors Clarification Rule and Metrics(Continued)_Old Kiwi|18]],
    7 KB (1,212 words) - 08:38, 17 January 2013
  • [[Lecture 17 - Nearest Neighbors Clarification Rule and Metrics_Old Kiwi|17]], [[Lecture 18 - Nearest Neighbors Clarification Rule and Metrics(Continued)_Old Kiwi|18]],
    10 KB (1,607 words) - 08:38, 17 January 2013
  • [[Lecture 17 - Nearest Neighbors Clarification Rule and Metrics_Old Kiwi|17]], [[Lecture 18 - Nearest Neighbors Clarification Rule and Metrics(Continued)_Old Kiwi|18]],
    6 KB (1,066 words) - 08:40, 17 January 2013
  • ...he section on [[Lecture 3 - Bayes classification_Old Kiwi#Bayes_rule|Bayes rule]] equation <3,4,5> and figures <1,2,3>. ...Clustering Methods_Old Kiwi]] by adding the section on how the separation rule obtained by a mixture-of-Gaussians model can be generalized to future unseen
    10 KB (1,418 words) - 12:21, 28 April 2008
  • ...PROBABILITY and LIKELIHOOD by forming a POSTERIOR probability using Bayes Rule.
    3 KB (558 words) - 17:03, 16 April 2008
  • ...ass 1 is more likely than class 2, and we select class 1. Applying Bayes' rule, and canceling the p(x):
    3 KB (621 words) - 08:48, 10 April 2008
  • ...any number of categories, the probability of error of the nearest neighbor rule is bounded above by twice the Bayes probability of error. In this sense, it ...al supervised neural-network training algorithms (including the perceptron rule, the least-mean-square algorithm, three Madaline rules, and the backpropaga
    39 KB (5,715 words) - 10:52, 25 April 2008
  • [[Lecture 17 - Nearest Neighbors Clarification Rule and Metrics_Old Kiwi|17]], [[Lecture 18 - Nearest Neighbors Clarification Rule and Metrics(Continued)_Old Kiwi|18]],
    8 KB (1,360 words) - 08:46, 17 January 2013
  • =Bayes Decision Rule Video= The video demonstrates the Bayes decision rule on 2D feature data from two classes. We visualize the decision hyper surfac
    1 KB (172 words) - 11:08, 10 June 2013
  • [[Lecture 17 - Nearest Neighbors Clarification Rule and Metrics_Old Kiwi|17]], [[Lecture 18 - Nearest Neighbors Clarification Rule and Metrics(Continued)_Old Kiwi|18]],
    5 KB (1,003 words) - 08:40, 17 January 2013
  • [[Lecture 17 - Nearest Neighbors Clarification Rule and Metrics_Old Kiwi|17]], [[Lecture 18 - Nearest Neighbors Clarification Rule and Metrics(Continued)_Old Kiwi|18]],
    6 KB (1,047 words) - 08:42, 17 January 2013
  • [[Lecture 17 - Nearest Neighbors Clarification Rule and Metrics_Old Kiwi|17]], [[Lecture 18 - Nearest Neighbors Clarification Rule and Metrics(Continued)_Old Kiwi|18]],
    6 KB (1,012 words) - 08:42, 17 January 2013
  • [[Lecture 17 - Nearest Neighbors Clarification Rule and Metrics_Old Kiwi|17]], [[Lecture 18 - Nearest Neighbors Clarification Rule and Metrics(Continued)_Old Kiwi|18]],
    6 KB (806 words) - 08:42, 17 January 2013
  • ...PROBABILITY and LIKELIHOOD by forming a POSTERIOR probability using Bayes Rule.
    2 KB (302 words) - 01:09, 7 April 2008
  • ...each region, we can observe some samples which are misclassified by Bayes rule. Removing these misclassified samples will generate two homogeneous sets of s The following is the algorithm of the editing technique for the K-NN rule:
    2 KB (296 words) - 11:48, 7 April 2008
  • [[Lecture 17 - Nearest Neighbors Clarification Rule and Metrics_Old Kiwi|17]], [[Lecture 18 - Nearest Neighbors Clarification Rule and Metrics(Continued)_Old Kiwi|18]],
    7 KB (1,060 words) - 08:43, 17 January 2013
  • [[Lecture 17 - Nearest Neighbors Clarification Rule and Metrics_Old Kiwi|17]], [[Lecture 18 - Nearest Neighbors Clarification Rule and Metrics(Continued)_Old Kiwi|18]],
    8 KB (1,254 words) - 08:43, 17 January 2013
  • [[Lecture 17 - Nearest Neighbors Clarification Rule and Metrics_Old Kiwi|17]], [[Lecture 18 - Nearest Neighbors Clarification Rule and Metrics(Continued)_Old Kiwi|18]],
    8 KB (1,259 words) - 08:43, 17 January 2013
  • == Bayes rule == Bayes rule addresses the predefined classes classification problem.
    2 KB (399 words) - 14:03, 18 June 2008
  • [[Lecture 17 - Nearest Neighbors Clarification Rule and Metrics_Old Kiwi|17]], [[Lecture 18 - Nearest Neighbors Clarification Rule and Metrics(Continued)_Old Kiwi|18]],
    8 KB (1,244 words) - 08:44, 17 January 2013
  • [[Lecture 17 - Nearest Neighbors Clarification Rule and Metrics_Old Kiwi|17]], [[Lecture 18 - Nearest Neighbors Clarification Rule and Metrics(Continued)_Old Kiwi|18]],
    8 KB (1,337 words) - 08:44, 17 January 2013
  • [[Lecture 17 - Nearest Neighbors Clarification Rule and Metrics_Old Kiwi|17]], [[Lecture 18 - Nearest Neighbors Clarification Rule and Metrics(Continued)_Old Kiwi|18]],
    10 KB (1,728 words) - 08:55, 17 January 2013
  • [[Lecture 17 - Nearest Neighbors Clarification Rule and Metrics_OldKiwi|17]]| [[Lecture 18 - Nearest Neighbors Clarification Rule and Metrics(Continued)_OldKiwi|18]]|
    5 KB (744 words) - 11:17, 10 June 2013
  • ...tion Rule and Metrics_OldKiwi|Lecture 17 - Nearest Neighbors Clarification Rule and Metrics]] ...nd Metrics(Continued)_OldKiwi|Lecture 18 - Nearest Neighbors Clarification Rule and Metrics(Continued)]]
    7 KB (875 words) - 07:11, 13 February 2012
  • *[[Bayes_Rate_Fallacy:_Bayes_Rules_under_Severe_Class_Imbalance|Bayes rule under severe class imbalance]]
    3 KB (429 words) - 09:07, 11 January 2016
  • == '''2.1 Classifier using Bayes rule''' == ...{i} \mid x \big) </math>. So instead of solving eq.(2.1), we use the Bayes rule to change the problem to
    17 KB (2,590 words) - 10:45, 22 January 2015
  • [[Lecture 17 - Nearest Neighbors Clarification Rule and Metrics_OldKiwi|17]]| [[Lecture 18 - Nearest Neighbors Clarification Rule and Metrics(Continued)_OldKiwi|18]]|
    9 KB (1,341 words) - 11:15, 10 June 2013
  • *[[Bayes_Rate_Fallacy:_Bayes_Rules_under_Severe_Class_Imbalance|Bayes rule under severe class imbalance]] *[[Hw1 ECE662Spring2010|HW1- Bayes rule for normally distributed features]]
    4 KB (547 words) - 12:24, 25 June 2010
  • =Is Bayes' Rule Truly the Best?=
    535 B (72 words) - 10:09, 1 March 2010
  • Or, equivalently, we can use Bayes' Rule explicitly. Bayes' Rule is:
    7 KB (948 words) - 04:35, 2 February 2010
  • :Experiment with Bayes rule for normally distributed features. Summarize your experiments, results, and
    1 KB (149 words) - 09:07, 6 October 2010
  • ...10|here]]) is a freeform exercise that consists in applying the Bayes decision rule to normally distributed data. The next homework will consist in a peer rev
    4 KB (596 words) - 13:17, 12 November 2010
  • ...Bayes_Rate_Fallacy:_Bayes_Rules_under_Severe_Class_Imbalance|page on Bayes rule under severe class imbalance]] ...esome [[EE662Sp10OptimalPrediction|page discussing the optimality of Bayes rule]].
    7 KB (1,009 words) - 11:27, 13 April 2010
  • | 4. Bayes Rule *The nearest neighbor classification rule.
    1 KB (165 words) - 08:55, 22 April 2010
  • Experiment with making decisions using Bayes rule and parametric density estimation. Summarize your experiments, results, and
    849 B (115 words) - 15:33, 10 May 2010
  • Experiment with making decisions using Bayes rule and non-parametric density estimation. Summarize your experiments, results,
    904 B (122 words) - 15:16, 10 May 2010
  • ...ntroduced [[Bayes_Decision_Theory|Bayes rule]] for making decisions. (This rule is the basis for this course.) We focused our discussion on the case where
    649 B (85 words) - 11:41, 13 April 2010
  • ...that the example previously proposed performs worse]] than following Bayes rule. ...expected loss (called "risk") when following [[Bayes_Decision_Theory|Bayes rule]].
    968 B (131 words) - 11:42, 13 April 2010
  • ...minant functions]] and their relationship to [[Bayes_Decision_Theory|Bayes rule]]. We focused on discriminant functions when the class densities are normal
    462 B (56 words) - 08:48, 11 May 2010
  • [[Category:Bayes decision rule]] [[Bayes Decision Theory|Bayes decision rule]] is a simple, intuitive, and powerful classifier. It allows one to select the m
    5 KB (694 words) - 12:41, 2 February 2012
  • ...ut [[Bayes Rate Fallacy: Bayes Rules under Severe Class Imbalance‎|Bayes rule under severe class imbalance]]. Please join in!
    1 KB (210 words) - 09:20, 15 April 2010
  • ...uld choose the most likely class given the observation. By following Bayes rule, one achieves the minimum possible probability of error. ...ture_3_-_Bayes_classification_OldKiwi|Lecture 3 introducing Bayes decision rule]]
    2 KB (222 words) - 09:25, 15 April 2010
  • ...osing the class with the higher prior. [[EE662Sp10OptimalPrediction|Bayes rule is optimal]]. - jvaught
    6 KB (884 words) - 16:26, 9 May 2010
  • ...orem. We then discussed the probability of error when using Bayes decision rule. More precisely, we obtained the Chernoff Bound and the Bhattacharrya bound
    628 B (86 words) - 09:09, 11 May 2010
  • Error bounds for Bayes decision rule: As we know, the Bayes decision rule guarantees the lowest average error rate; it does not tell what the probabi
    5 KB (806 words) - 09:08, 11 May 2010
  • *[[Homework_1_OldKiwi|Experimenting with Bayes rule]] (from [[ECE662]])
    2 KB (286 words) - 05:45, 29 December 2010
  • == [[Bayes Decision Rule_Old Kiwi|Bayes Decision Rule]] == Bayes' decision rule creates an objective function which minimizes the probability of error (mis
    31 KB (4,787 words) - 18:21, 22 October 2010
  • *[[Bayes_Rate_Fallacy:_Bayes_Rules_under_Severe_Class_Imbalance|Bayes rule under severe class imbalance]]
    1 KB (164 words) - 06:47, 18 November 2010
