ece4580:module_classification
====== Classification ======
Classification involves assigning a category label to an input based on its features.
This set of learning modules will go a little backwards relative to how our understanding has evolved regarding classification.

----------
===== Module #1: Digit Classification Using Stacked Autoencoders =====
/*
Andrew Ng
*/

The classical artificial intelligence/machine learning approach to classification relied on engineered feature spaces; in this module we instead learn the feature space from the data itself.

This set of activities comes courtesy of Prof. Andrew Ng at Stanford University. We will be using his [[http://

__**Week #1: Sparse Autoencoder**__ \\
Implement the '

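For orientation (the exercise itself is in Matlab), here is a rough NumPy sketch of the sparse-autoencoder cost: reconstruction error plus weight decay plus a KL-divergence sparsity penalty that pushes each hidden unit's mean activation toward a small target ''rho''. The function and parameter names are ours, not the tutorial's.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def sparse_autoencoder_cost(W1, b1, W2, b2, X, rho=0.05, beta=3.0, lam=1e-4):
    """Reconstruction error + weight decay + KL sparsity penalty.

    X is (n_features, n_examples); rho is the target mean activation of
    each hidden unit, beta the sparsity weight, lam the decay weight.
    (Illustrative naming; the tutorial's Matlab code defines its own.)
    """
    m = X.shape[1]
    A1 = sigmoid(W1 @ X + b1[:, None])      # hidden activations
    A2 = sigmoid(W2 @ A1 + b2[:, None])     # reconstruction of X
    rho_hat = A1.mean(axis=1)               # observed mean activation per unit
    kl = np.sum(rho * np.log(rho / rho_hat)
                + (1 - rho) * np.log((1 - rho) / (1 - rho_hat)))
    recon = 0.5 * np.sum((A2 - X) ** 2) / m
    decay = 0.5 * lam * (np.sum(W1 ** 2) + np.sum(W2 ** 2))
    return recon + decay + beta * kl
```

Minimizing this cost with a gradient method yields the learned weights; the KL term is what makes the autoencoder "sparse".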
__**Week #2: Data Pre-Processing and Classification**__ \\
Complete the ''Vectorized implementation'',

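As a sketch of what "vectorized" means here: one matrix product scores every example at once, with no per-example loop. Below is a NumPy version of a softmax-style prediction step (our own naming and interface, offered only as an illustration; the tutorial's Matlab exercise defines the real one).

```python
import numpy as np

def softmax_predict(Theta, X):
    """Score all examples in one shot: no per-example loop.

    Theta is (n_classes, n_features), X is (n_features, n_examples).
    Subtracting each column's max keeps exp() numerically safe.
    """
    Z = Theta @ X
    Z = Z - Z.max(axis=0, keepdims=True)
    P = np.exp(Z)
    P = P / P.sum(axis=0, keepdims=True)    # each column is a distribution
    return P.argmax(axis=0), P
```

The same pattern (replace loops over examples with one matrix operation) is the heart of the vectorization exercise.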
__**Week #3: Feature Learning and Classification**__ \\
Complete the '

__**Week #4: Deep Neural Networks**__ \\
Now, we are going to use the autoencoder repeatedly to create a deep network.

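The greedy recipe is: train an autoencoder on the raw input, then train a second autoencoder on the first one's hidden activations, and so on. Assuming each layer's encoder weights have already been trained that way, the resulting feed-forward feature map can be sketched as follows (NumPy, names ours):

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def stack_features(X, encoders):
    """Map raw input up through a stack of trained encoder layers.

    `encoders` is a list of (W, b) pairs; in the greedy scheme, layer i
    was trained as an autoencoder on the activations produced by layers
    0..i-1, so this loop reproduces that feed-forward path.
    """
    A = X
    for W, b in encoders:
        A = sigmoid(W @ A + b[:, None])   # output feeds the next layer
    return A
```

The top-layer activations then feed the classifier, and the whole stack can be fine-tuned end to end.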
__**Week #5: Character Classification with Deep Neural Networks**__ \\
Now you have a very nice classification system for digits. Can we classify other, similar data, such as characters, using the same approach?
Take your feature-learning and deep-network training procedure and train it with characters instead of digits. For characters, we would like to use [[https://
  - Use the feature space you learned already from the digits, and use it as your neural network feature descriptor function.
  - Start over from scratch, and learn a feature space for characters with autoencoders.
  - Use the new character feature space to train a regression classifier. Get the accuracy and confusion matrix.
  - Report your results and the comparison.

----------------
===== Module #2: Engineered Features: Bag-of-Words =====

Before people got into the automatic learning of feature spaces, it was common to try to hand-craft a feature space, or to come up with a mechanism for creating feature spaces that generalized.

/*
(2) bag-of-words classifier: http://
They were short courses at ICCV 2005.
*/
/*
Other related material, maybe more classic.
*/
__**Week #1: K-Means Clustering**__ \\
  - Study [[https://
  - Download (or clone) the clustering skeleton code [[https://
  - Implement the k-means clustering algorithm working in RGB space by following the algorithmic steps. You are welcome to implement it from scratch without the skeleton code.
  - Test your algorithm on segmenting the image //
  - Try different random initializations and show the corresponding results.
  - Comment on your different segmentation results.
//Matlab Notes:// Matlab has several functions that can assist with the calculations so that you do not have to process the data in for loops.
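The same no-for-loops idea carries over to other languages. As a rough NumPy sketch of the k-means loop on RGB pixels (names and defaults are ours, not the skeleton code's), with an optional ''init'' argument so you can also study the effect of different random initializations:

```python
import numpy as np

def kmeans_rgb(pixels, k, iters=20, seed=0, init=None):
    """Basic k-means on an (n, 3) array of RGB pixels (illustrative sketch).

    Different seeds pick different random initial centers, which is how
    the initialization-dependent segmentations arise.
    """
    rng = np.random.default_rng(seed)
    if init is None:
        init = pixels[rng.choice(len(pixels), size=k, replace=False)]
    centers = np.asarray(init, dtype=float).copy()
    for _ in range(iters):
        # (n, k) table of squared distances, computed without a per-pixel loop
        d2 = ((pixels[:, None, :] - centers[None, :, :]) ** 2).sum(axis=2)
        labels = d2.argmin(axis=1)            # nearest center for every pixel
        for j in range(k):
            if np.any(labels == j):           # guard against empty clusters
                centers[j] = pixels[labels == j].mean(axis=0)
    return labels, centers
```

To segment an H x W x 3 image, call ''kmeans_rgb(img.reshape(-1, 3), k)'' and reshape ''labels'' back to H x W.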
__**Week #2: Bag-of-Words Classification**__ \\
  - Study the [[https://
  - We begin by implementing a simple but powerful recognition system to classify //faces// and //cars//.
  - Check [[https://
  - In our implementation,
  - Now, use the first 40 images in both categories for training.
  - Extract SIFT features from each image.
  - Derive k codewords with the k-means clustering from Week #1.
  - Compute the histogram of codewords using [[https://
  - Use the remaining 50 images in both categories to test your implementation.
  - Report the accuracy and computation time for different values of k.
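The histogram-of-codewords step above reduces to: quantize each SIFT descriptor to its nearest codeword and count. A small NumPy sketch of that step (our own naming; the toy data in the test is made up):

```python
import numpy as np

def bow_histogram(descriptors, codewords):
    """Histogram of codewords for one image.

    descriptors: (n, d) local features (e.g. 128-D SIFT descriptors)
    codewords:   (k, d) k-means cluster centers
    Each descriptor votes for its nearest codeword; the histogram is
    L1-normalized so images with different keypoint counts compare fairly.
    """
    d2 = ((descriptors[:, None, :] - codewords[None, :, :]) ** 2).sum(axis=2)
    nearest = d2.argmin(axis=1)               # index of the closest codeword
    hist = np.bincount(nearest, minlength=len(codewords)).astype(float)
    return hist / hist.sum()
```

The resulting length-k vectors, one per image, are what the classifier is trained on.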

__**Week #3: Spatial Pyramid Matching (SPM)**__ \\
Objects usually have different properties across spatial scales, even though they may appear similar at any one scale.
  - Study [[https://
  - We will implement a simplified version of SPM based on your Week #2 implementation.
  - First, for each training image, divide it equally into 2 × 2 spatial bins.
  - Second, for each of the 4 bins, extract the SIFT features and compute the histograms of codewords as in Week #2.
  - Third, concatenate the 4 histogram vectors in a fixed order. (Hint: the resulting vector has 4k dimensions.)
  - Fourth, concatenate the vector you computed in Week #2 with this vector (both weighted by 0.5 before concatenation).
  - Finally, use this 5k-dimensional representation and re-run the training and testing.
  - Compare the results from Week #3 and Week #2. Explain what you observe.
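The steps above can be sketched as follows (NumPy, with our own helper names; ''xy'' holds keypoint positions, which route each descriptor into one of the four spatial bins):

```python
import numpy as np

def codeword_hist(desc, codewords):
    """L1-normalized histogram of nearest codewords (safe for empty bins)."""
    if len(desc) == 0:
        return np.zeros(len(codewords))
    d2 = ((desc[:, None, :] - codewords[None, :, :]) ** 2).sum(axis=2)
    h = np.bincount(d2.argmin(axis=1), minlength=len(codewords)).astype(float)
    return h / h.sum()

def spm_vector(desc, xy, codewords, width, height):
    """Two-level spatial pyramid: whole image plus 2x2 bins, each weighted 0.5.

    desc is (n, d) descriptors, xy is (n, 2) keypoint (x, y) positions;
    the result is the 5k-dimensional vector described in the steps above.
    """
    parts = [0.5 * codeword_hist(desc, codewords)]            # level 0: whole image
    for row in range(2):
        for col in range(2):
            inside = ((xy[:, 0] >= col * width / 2) &
                      (xy[:, 0] < (col + 1) * width / 2) &
                      (xy[:, 1] >= row * height / 2) &
                      (xy[:, 1] < (row + 1) * height / 2))
            parts.append(0.5 * codeword_hist(desc[inside], codewords))
    return np.concatenate(parts)                              # shape (5k,)
```

With k codewords the output has k + 4k = 5k entries, which is exactly the representation asked for above.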

-----------------
;#;
[[ECE4580:
;#;
ece4580/module_classification.1485136998.txt.gz · Last modified: 2024/08/20 21:38 (external edit)