ece4580:module_classification
====== Classification ======
Classification involves obtaining a sample and determining to which class the sample belongs (from a finite set of classes).
This set of activities comes courtesy of Prof. Andrew Ng at Stanford University. We will be using his [[http://
__**Week #1: Sparse Autoencoder**__ \\
Implement the '
__**Week #2: Data Pre-Processing and Classification**__ \\
Complete the 'Vectorized implementation',
__**Week #3: Feature Learning and Classification**__ \\
Complete the '
__**Week #4: Deep Neural Networks**__ \\
Now, we are going to use the autoencoder repeatedly to create a deep network.
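The greedy layer-wise idea can be sketched as follows. This is a Python/NumPy illustration, not the course code (which is in Matlab): each autoencoder is trained on the previous layer's hidden activations, and only the encoders are kept and stacked. The sparsity (KL) penalty from Week #1 is omitted for brevity, and all names and parameters here are illustrative.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def train_autoencoder(X, n_hidden, lr=0.5, epochs=200, seed=0):
    """Plain batch-gradient autoencoder (squared-error loss, sigmoid units).
    Note: the sparsity penalty from Week #1 is omitted in this sketch."""
    rng = np.random.default_rng(seed)
    m, n = X.shape
    W1 = rng.normal(0, 0.1, (n, n_hidden)); b1 = np.zeros(n_hidden)
    W2 = rng.normal(0, 0.1, (n_hidden, n)); b2 = np.zeros(n)
    for _ in range(epochs):
        H = sigmoid(X @ W1 + b1)             # encode
        Xhat = sigmoid(H @ W2 + b2)          # decode (reconstruction)
        d2 = (Xhat - X) * Xhat * (1 - Xhat)  # backprop through output sigmoid
        d1 = (d2 @ W2.T) * H * (1 - H)       # backprop into hidden layer
        W2 -= lr * (H.T @ d2) / m; b2 -= lr * d2.mean(axis=0)
        W1 -= lr * (X.T @ d1) / m; b1 -= lr * d1.mean(axis=0)
    return W1, b1                            # keep only the encoder

# Greedy layer-wise stacking: each autoencoder is trained on the
# previous layer's hidden activations; the result is a deep encoder.
X = np.random.default_rng(1).random((100, 20))   # toy data, not MNIST
layers, inp = [], X
for n_hidden in (16, 8):
    W, b = train_autoencoder(inp, n_hidden)
    layers.append((W, b))
    inp = sigmoid(inp @ W + b)               # features for the next layer
# `inp` now holds 8-dimensional deep features for a softmax classifier.
```

In practice the stacked network is then fine-tuned end-to-end with the classifier on top, as in the linked course materials.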
__**Week #5: Character Classification with Deep Neural Networks**__ \\
Now you have a very nice classification system for digits. Can we classify other similar data, such as characters, using the same approach?
Take your deep-network feature learning and classification training procedure and train it with characters instead of digits. For characters, we would like to use [[https://
*/
__**Week #1: Clustering to Define Words**__ \\
  - Study [[https://
  - Download (or clone) the clustering skeleton code [[https://
  - Implement the k-means clustering algorithm working in RGB space by following the algorithmic steps. You are welcome to implement it from scratch without the skeleton code.
  - Test your algorithm on segmenting the image //
  - Try different random initializations and show the corresponding results.
  - Comment on your different segmentation results.

//Matlab Notes:// Matlab has several functions that can assist with the calculations so that you do not have to process the data in for loops.
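The algorithmic steps above can be sketched as follows. This is a Python/NumPy illustration (the course assumes Matlab); the vectorized distance computation plays the role of the built-in Matlab functions mentioned in the note, and the toy two-color "image" is a stand-in for the real test image.

```python
import numpy as np

def kmeans_rgb(pixels, k, iters=20, seed=0):
    """Basic k-means over an (n_pixels, 3) RGB array.
    Returns per-pixel cluster labels and the k cluster centers."""
    rng = np.random.default_rng(seed)
    # random initialization: k distinct pixels become the initial centers
    centers = pixels[rng.choice(len(pixels), k, replace=False)].astype(float)
    for _ in range(iters):
        # squared distance of every pixel to every center, no per-pixel loop
        d = ((pixels[:, None, :] - centers[None, :, :]) ** 2).sum(axis=2)
        labels = d.argmin(axis=1)            # assign to nearest center
        for j in range(k):                   # recompute non-empty centers
            if np.any(labels == j):
                centers[j] = pixels[labels == j].mean(axis=0)
    return labels, centers

# Toy "image": 50 dark pixels and 50 bright pixels (stand-in for real data).
img = np.vstack([np.full((50, 3), 10.0), np.full((50, 3), 200.0)])
labels, centers = kmeans_rgb(img, k=2)
# Re-running with a different `seed` changes the initialization, and hence
# possibly the segmentation -- which is exactly what step 5 above asks about.
```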

__**Week #2: Object Recognition**__ \\

  - Study the [[https://
  - We begin by implementing a simple but powerful recognition system to classify //faces// and //cars//.
  - Check [[https://
  - In our implementation,
  - Now, use the first 40 images in both categories for training.
  - Extract SIFT features from each image.
  - Derive k codewords with k-means clustering from module 1.
  - Compute the histogram of codewords using [[https://
  - Use the remaining 50 images in both categories to test your implementation.
  - Report the accuracy and computation time for different values of k.
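A minimal sketch of the codeword-histogram step is below, in Python/NumPy for illustration. The random arrays stand in for real 128-dimensional SIFT descriptors and for the codewords produced by the module-1 k-means run; no actual SIFT extraction happens here, and the function name is hypothetical.

```python
import numpy as np

def bow_histogram(descriptors, codewords):
    """Assign each descriptor (n x d) to its nearest codeword (k x d) and
    return a normalized k-bin histogram: the image's bag-of-words feature."""
    d = ((descriptors[:, None, :] - codewords[None, :, :]) ** 2).sum(axis=2)
    assign = d.argmin(axis=1)                # nearest codeword per descriptor
    hist = np.bincount(assign, minlength=len(codewords)).astype(float)
    return hist / hist.sum()                 # normalize so bins sum to 1

# Stand-in data: in the real pipeline the descriptors come from SIFT and
# the codewords from the Week #1 k-means clustering.
rng = np.random.default_rng(0)
codewords = rng.random((5, 128))             # k = 5 codewords
descriptors = rng.random((40, 128))          # descriptors of one image
h = bow_histogram(descriptors, codewords)    # k-bin feature vector
```

Each training image is reduced to one such k-dimensional vector, which then feeds whatever classifier the linked tutorial prescribes.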

__**Week #3: Spatial Pyramid Matching (SPM)**__ \\
Usually objects have different properties across spatial scales, even though they may appear similar at any single scale.
  - Study [[https://
  - We will implement a simplified version of SPM based on your module 2.
  - First, for each training image, divide it equally into 2 × 2 spatial bins.
  - Second, for each of the 4 bins, extract the SIFT features and compute the histograms of codewords as in module 2.
  - Third, concatenate the 4 histogram vectors in a fixed order. (Hint: the resulting vector has 4k dimensions.)
  - Fourth, concatenate the vector you have from module 2 with this vector (both weighted by 0.5 before concatenation).
  - Finally, use this 5k-dimensional representation and re-run the training and testing.
  - Compare the results from module 3 and module 2. Explain what you observe.
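The four steps above can be sketched as follows (Python/NumPy for illustration). Here `bow_hist` is the module-2 codeword histogram, and the random arrays stand in for real SIFT descriptors and their keypoint locations; all names are illustrative rather than taken from the course code.

```python
import numpy as np

def bow_hist(desc, codewords):
    """Module-2 style normalized codeword histogram (k bins)."""
    d = ((desc[:, None, :] - codewords[None, :, :]) ** 2).sum(axis=2)
    hist = np.bincount(d.argmin(axis=1), minlength=len(codewords)).astype(float)
    return hist / max(hist.sum(), 1.0)       # guard against empty bins

def spm_feature(desc, xy, codewords, width, height):
    """Two-level spatial pyramid: the whole-image histogram plus the four
    2x2-bin histograms, each part weighted by 0.5, concatenated (5k dims)."""
    parts = [0.5 * bow_hist(desc, codewords)]            # module-2 vector
    for row in range(2):                                 # 2 x 2 spatial bins
        for col in range(2):
            inbin = ((xy[:, 0] >= col * width / 2) &
                     (xy[:, 0] < (col + 1) * width / 2) &
                     (xy[:, 1] >= row * height / 2) &
                     (xy[:, 1] < (row + 1) * height / 2))
            parts.append(0.5 * bow_hist(desc[inbin], codewords))
    return np.concatenate(parts)                         # length 5k

# Stand-in data: random descriptors with (x, y) keypoint locations.
rng = np.random.default_rng(0)
k, width, height = 5, 64, 48
codewords = rng.random((k, 128))
desc = rng.random((60, 128))
xy = rng.random((60, 2)) * np.array([width, height])
f = spm_feature(desc, xy, codewords, width, height)      # 5k dimensions
```

The training and testing loop from module 2 is then re-run unchanged, only with this longer feature vector in place of the single k-bin histogram.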
----
ece4580/module_classification.1485393584.txt.gz · Last modified: 2024/08/20 21:38 (external edit)