==<font size="6">Discussion for Lab 5, [[ECE637]], Spring 2013</font>==
==Additional Information==
Before jumping into the lab, it is worth getting familiar with some basic concepts in Bayesian methods:
* http://en.wikipedia.org/wiki/Bayesian_inference
* http://en.wikipedia.org/wiki/Eigen_decomposition
* http://en.wikipedia.org/wiki/Principle_components_analysis

Some background on what you are actually doing in this lab:
* http://en.wikipedia.org/wiki/Supervised_learning
* http://en.wikipedia.org/wiki/Training_set

To speed up your code, consider reading the MATLAB documentation at the link below:
* http://www.mathworks.com/help/matlab/index.html

==Q&A Section==
----
<!-- Question format: ;'''Q - QUESTION HERE'''
:A - ANSWER HERE -->
;'''Q: How do you calculate the theoretical R for Section 2.2? I've calculated the 2x2 matrix Rhat, but I'm not sure how to calculate the theoretical one.'''
:A - The theoretical values are the known values of the covariance you used to generate the samples in Section 2.1.
:More specifically, they are the values of Rx given in equation (14) of Section 2.1.
:
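The relationship between the theoretical Rx and the estimated Rhat can be checked numerically. Below is a Python/NumPy sketch (the lab itself uses MATLAB); the covariance values here are hypothetical stand-ins for the Rx of equation (14), not the lab's actual numbers:

```python
import numpy as np

# Hypothetical 2x2 covariance standing in for the Rx of equation (14);
# the actual values come from Section 2.1 of the lab.
Rx = np.array([[2.0, -1.2],
               [-1.2, 1.0]])

rng = np.random.default_rng(0)
n = 1000
# Generate n zero-mean Gaussian samples with covariance Rx
X = rng.multivariate_normal(mean=[0.0, 0.0], cov=Rx, size=n)

# Sample covariance estimate Rhat (mean is known to be zero here,
# so no mean subtraction is needed)
Rhat = (X.T @ X) / n

# Rhat approaches the theoretical Rx as n grows
print(np.round(Rhat, 2))
```

As n increases, the entries of Rhat converge to the known covariance used to generate the samples, which is exactly the "theoretical" R the question asks about.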
;'''Q: I don't understand what is to be produced in Section 4 for the projection coefficients. Is it an image or a regular plot? The paragraph describing that procedure is not clear to me.'''
:A - The section has been re-written to be clearer.
;'''Q: Do the answers to Section 4 have to match those in the PDF of examples posted under the course notes? Is the training data the same? For me, 9-10 of the 12 eigen images match, but a couple don't. Also, the projection coefficient variation is somewhat different.'''
:A - You can get a difference here, but only in a very specific way. Note that if "u" is an eigenvector, so is "-u", and both are associated with the same eigenvalue. So svd() might return an eigen image whose gray scale appears inverted compared to the example, and in that case, when you project an image onto this eigenvector, the projection coefficient will have the opposite sign from the example. The key is that when you multiply the coefficient by the eigenvector (e.g. when you synthesize), you should get the same result in either case.

;'''Q: Section 4 asks for the plots for the 12 largest eigenvalues. How can these be determined once the SVD of Z is calculated?'''
:A - The SVD of Z is given by [U S V] = svd(Z,0). After this is computed, U is a p×n matrix, and each column of U is an eigenvector
:of the estimated covariance matrix. Furthermore, the singular values in S are the square roots of the associated eigenvalues.
:So, assuming the singular values are ordered from largest to smallest, U(:,1:12) contains the first 12 eigenvectors.
:You can compute the first 12 projection coefficients for a vector X by computing Y = (U(:,1:12))' * X.

----
[[2013_Spring_Image_Processing_ECE_637_Bouman|Back to Spring 2013 ECE637]]
[[Category:Image ProcessingECE 637Spring2013Bouman]]
[[Category:ECE637]]
[[Category:image processing]]
[[category:discussion]]
Latest revision as of 08:12, 9 April 2013