• Mini Project 6: Generative Network Models

    Due date: 24th March 2017

    Absolutely no extensions will be provided.

    Grades Posted.

    This is an independent project, in three parts. In the first part you will run the Autoencoder Tutorial. Run this tutorial with varying codeword lengths and varying numbers of layers. Summarize your findings with appropriate figures of the filters you learnt.

    In the second part you will run the Generative Adversarial Network Tutorial. Run this tutorial with varying numbers and types of layers in both the discriminator and the generator. Summarize your findings with appropriate figures of the filters you learnt. Be sure to simulate a situation in which your GAN exhibits mode collapse.

    In the yann toolbox, you can create a variety of layers. One type of layer is the random layer; another is the merge layer. Refer to the add_layer method for the parameters and for how to add these layers to the network module. The merge layer can take an argument layer_type; if you set this argument to sum, the output of the merge layer will be the sum of the two input layers supplied.
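    The exact add_layer arguments should be taken from the yann API; conceptually, a sum-type merge of an input layer and a random (noise) layer computes nothing more than an element-wise sum, which is how a noisy image can be produced. A minimal numpy sketch of that semantics:

```python
import numpy as np

def sum_merge(a, b):
    """What a merge layer with layer_type set to sum computes:
    an element-wise sum of its two input layers."""
    assert a.shape == b.shape
    return a + b

# Two toy 'layer outputs' of matching shape: a batch of 4 flattened
# 28x28 images and the output a random (Gaussian noise) layer might emit.
rng = np.random.default_rng(0)
image = rng.random((4, 784))
noise = rng.normal(0.0, 0.1, size=(4, 784))

noisy = sum_merge(image, noise)   # the noisy version of the image
```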

    Using these layers and the MNIST dataset cooked during the autoencoder tutorial, create a layer that produces a noisy version of the image. De-noise the image using a denoising autoencoder setup like the one we studied in class. Attempt this for varying over-complete encoder widths and network depths. Summarize your findings with appropriate figures of the filters you learnt.
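    The yann version of this setup is built with the layers above; the training loop itself follows the standard denoising-autoencoder recipe. A self-contained numpy sketch of that recipe, using random data as a stand-in for the cooked MNIST images (the dimensions, noise level, and learning rate here are illustrative choices, not values from the assignment):

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in for the cooked MNIST images: 256 samples, 64 dims, values in [0, 1].
X = rng.random((256, 64))

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Over-complete encoder: hidden width (96) exceeds the input width (64).
n_in, n_hid = 64, 96
W1 = rng.normal(0, 0.1, (n_in, n_hid)); b1 = np.zeros(n_hid)
W2 = rng.normal(0, 0.1, (n_hid, n_in)); b2 = np.zeros(n_in)

lr, losses = 0.5, []
for epoch in range(500):
    noisy = X + rng.normal(0.0, 0.3, X.shape)   # corrupt the input
    H = sigmoid(noisy @ W1 + b1)                # encode
    Y = sigmoid(H @ W2 + b2)                    # decode
    losses.append(np.mean((Y - X) ** 2))        # reconstruct the CLEAN input

    # Backpropagate the mean-squared reconstruction error.
    dY = (Y - X) * Y * (1 - Y) / len(X)
    dW2, db2 = H.T @ dY, dY.sum(axis=0)
    dH = dY @ W2.T * H * (1 - H)
    dW1, db1 = noisy.T @ dH, dH.sum(axis=0)
    W1 -= lr * dW1; b1 -= lr * db1
    W2 -= lr * dW2; b2 -= lr * db2
```

    The key point is that the loss compares the reconstruction against the clean input while the encoder only ever sees the corrupted one; rows of W1 are the filters to visualize in your report.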

    The submission for this project is a three-page report, typeset in the camera-ready style of IEEE TPAMI. The report should contain a detailed analysis of your experiments, along with the figures of all the filters and the other details requested.

    [Full Project Requirements]
  • Mini Project 5: Dataset Generality and Transferability

    Due date: 17th March 2017

    Absolutely no extensions will be provided.

    Grades Posted.

    This project involves pre-training a network on three datasets and then re-training only the softmax layer on a dataset on which the base network was not pre-trained. Using this architecture you will demonstrate the procedure for using a pre-trained network and also study dataset generality as outlined in the paper "Neural dataset generality". [Full Project Requirements]
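    The mechanics of "freeze the base, re-train only the softmax" can be sketched without yann. Below, a fixed random ReLU feature map stands in for the pre-trained base (in the actual project the base is learned on the three datasets, not random); only the classifier head on top is updated:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy target dataset: two classes, 20-dimensional Gaussian blobs.
X = np.vstack([rng.normal(-1.0, 1.0, (100, 20)),
               rng.normal(+1.0, 1.0, (100, 20))])
y = np.array([0] * 100 + [1] * 100)

# Frozen 'pre-trained base': a fixed random ReLU feature map standing in
# for layers trained on other datasets.  It is never updated below.
W_base = rng.normal(0, 1 / np.sqrt(20), (20, 50))
feats = np.maximum(X @ W_base, 0.0)

# Re-train only the classifier head (two-class softmax, i.e. logistic
# regression) on top of the frozen features.
w, b = np.zeros(50), 0.0
for _ in range(300):
    p = 1.0 / (1.0 + np.exp(-(feats @ w + b)))
    g = p - y                                   # cross-entropy gradient
    w -= 0.1 * feats.T @ g / len(y)
    b -= 0.1 * g.mean()

acc = np.mean((p > 0.5) == y)
```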

  • Mini Project 4: Convolutional Neural Networks

    Due date: 24th Feb. 2017

    Extended due date: 3rd March 2017

    Grades Posted.

    In this project you will download and create the SVHN dataset for the yann toolbox. You can follow the mat2yann tutorial to convert the SVHN dataset and prepare it for yann. In the previous mini projects you focused either on designing the network hypothesis or on the optimizer and its learning parameters. In this project you are free to design a network of any architecture (that you can create using yann), with any number of layers (of any type) and with any number of neurons. You are also encouraged to try any optimizer, with any set of parameters, to learn the network you produce. Your goal is to produce the best accuracy on the SVHN dataset cooked using the tutorial.
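    The mat2yann tutorial handles the conversion; as a sketch of the key reshaping step it performs, assuming the standard SVHN train_32x32.mat layout (images stored as a (32, 32, 3, N) array, labels 1 through 10 with 10 encoding the digit 0):

```python
import numpy as np

def svhn_to_yann(X, y):
    """Convert SVHN's Matlab layout into the (samples, features) layout
    that yann datasets use.

    X : (32, 32, 3, N) uint8 array, as scipy.io.loadmat returns it.
    y : (N, 1) labels in 1..10, where 10 encodes the digit 0.
    """
    n = X.shape[-1]
    # Move the sample axis first, then flatten each 32x32x3 image.
    data = X.transpose(3, 0, 1, 2).reshape(n, -1).astype(np.float32) / 255.0
    labels = y.reshape(n).astype(np.int64) % 10   # remap label 10 -> 0
    return data, labels
```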

    You are, however, subject to some industry-like constraints. As usual, the higher your testing accuracy, the higher the grade you will receive; but the larger the memory footprint you leave, the lower the grade you will receive. You also have a time constraint: the higher the testing time, the lower the grade you will receive. So your goal is not just to optimize for performance but also to consider memory and testing time (think about eventually porting these networks to some device). This implies that you will have to reach as high a performance as possible with as few layers as possible (some layers occupy less memory than others) and with as little computation as possible (some layers do less computation than others).

    As a quick tip, try to stay around the 2GB memory-footprint mark. You can use 'htop' or 'nvidia-smi' (if you are using GPUs) to monitor the memory. We will run this code on a GPU, use 'nvidia-smi' to monitor the memory footprint, and use yann's in-built timer to measure test time. Also, do not start directly with the entire dataset. Create a dataset with a smaller number of batches first (training and validation batches, not test) and then move on to the full dataset.
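    Before running anything, you can back-of-the-envelope the footprint of each layer you plan to add. A rough estimate (an assumption-laden sketch, not what nvidia-smi will report, since the toolbox and driver add their own overhead):

```python
def conv_layer_bytes(in_ch, out_ch, k, out_h, out_w, batch, dtype_bytes=4):
    """Rough float32 footprint of one convolutional layer: parameters plus
    the activations stored for one mini-batch.  Gradients during training
    roughly double this; treat the result as a lower bound."""
    params = out_ch * in_ch * k * k + out_ch      # weights + biases
    activations = batch * out_ch * out_h * out_w  # one feature-map tensor
    return (params + activations) * dtype_bytes

# Example: 64 filters of 5x5 over 3-channel 32x32 SVHN images with 'same'
# padding and a batch size of 500 -- about 125 MB before gradients.
mb = conv_layer_bytes(3, 64, 5, 32, 32, 500) / 2**20
```

    Note that almost all of the cost is activations, not weights, which is why batch size and feature-map resolution dominate the footprint.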

    [Full Project Requirements]

  • Mini Project 3: Momentums and Second Order Gradient Descent Methods

    Due date: 17th Feb. 2017

    Grades Posted.

    In this project you will set up the yann toolbox on your machine and run the multi-layer neural network tutorial. You will run this with several settings: without any momentum, with Polyak and Nesterov momentums, and with adagrad and rmsprop (you will need to refer to other tutorials or the API to figure out how to do these). For each of these settings to work efficiently, you will need to experiment with various learning rates and combinations of techniques, along with regularizers. Your task is to come up with the one setting (one file) that produces the best result you can muster.
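    The update rules being compared are standard; a minimal numpy sketch on an ill-conditioned quadratic makes the difference between them concrete (the learning rate and momentum values here are illustrative, and in yann these choices are made through the optimizer module's parameters, whose exact names you should take from the API):

```python
import numpy as np

def descend(update, steps=200):
    """Minimize f(w) = 0.5 * w.A.w, an ill-conditioned quadratic, with a
    given update rule; returns the final loss."""
    A = np.diag([1.0, 10.0])
    w = np.array([5.0, 5.0])
    v = np.zeros_like(w)
    for _ in range(steps):
        w, v = update(w, v, A)
    return 0.5 * w @ A @ w

lr, mu = 0.05, 0.9

def plain(w, v, A):        # vanilla gradient descent, no momentum
    return w - lr * (A @ w), v

def polyak(w, v, A):       # Polyak (heavy-ball) momentum
    v = mu * v - lr * (A @ w)
    return w + v, v

def nesterov(w, v, A):     # Nesterov: gradient taken at the look-ahead point
    v = mu * v - lr * (A @ (w + mu * v))
    return w + v, v
```

    Adagrad and rmsprop are different in kind: rather than adding a velocity term, they rescale the learning rate per parameter by a running statistic of past squared gradients.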

    The submission for this project is a zip file containing a screenshot of the test output of the base tutorial code, the code for your best setting, and a one-page report. The one-page report will be typeset in the camera-ready style of IEEE TPAMI.

    The report should contain results, analysis, and discussion comparing the three network models just learnt, the difficulties faced during these implementations, and the solutions arrived at for each of these difficulties. You will use the MNIST dataset generated by the cook_mnist_normalized method in yann.utils. [Full Project Requirements]

  • Mini Project 2: Multi-Layer Neural Networks

    Due date: 10th Feb. 2017

    Extended due date: 14th Feb. 2017

    Grades Posted.

    In this project you will implement and learn a multi-layer neural network to fit the data from the given dataset generators. You may choose to use any of the methods that were taught in class to fit the data, and any of the tricks and tips that were taught (or that you perhaps read up on your own) to solve the problem. Your network must be implemented with the structure provided along with the test script. A script similar to the test script (though the data created might come from different distributions) will be used to evaluate your implementation. The higher the accuracy score, the higher the grade you shall receive. [Full Project Requirements]
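    The provided structure and test script define the interface you must match; independent of that, the core training loop of a small multi-layer network can be sketched in a few lines. Below, one hidden layer fits XOR, the standard example a single-layer network cannot fit (the widths, learning rate, and iteration count are illustrative):

```python
import numpy as np

rng = np.random.default_rng(1)

# XOR: the classic dataset no single-layer network can fit.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([0.0, 1.0, 1.0, 0.0])

W1 = rng.normal(0, 1.0, (2, 8)); b1 = np.zeros(8)   # hidden layer, 8 units
W2 = rng.normal(0, 1.0, 8);      b2 = 0.0           # output neuron

for _ in range(10000):
    H = np.tanh(X @ W1 + b1)
    p = 1.0 / (1.0 + np.exp(-(H @ W2 + b2)))        # sigmoid output
    d2 = p - y                                      # cross-entropy gradient
    dW2, db2 = H.T @ d2, d2.sum()
    dH = np.outer(d2, W2) * (1 - H ** 2)            # back through tanh
    dW1, db1 = X.T @ dH, dH.sum(axis=0)
    W1 -= 0.1 * dW1; b1 -= 0.1 * db1
    W2 -= 0.1 * dW2; b2 -= 0.1 * db2

preds = (p > 0.5).astype(float)
```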

  • Mini Project 1: Linear Regression

    Due Date: 3rd Feb. 2017

    Grades Posted.

    In this project, you will implement linear regression. You may choose to use any of the methods that were taught in class to fit the regressor, and any of the tricks and tips that were taught (or that you perhaps read up on your own) to solve the problem. Your regressor must be implemented with the structure provided along with the test script. A script similar to the test script (though the data created might come from different distributions) will be used to evaluate your implementation. The lower the RMSE score, the higher the grade you shall receive. [Full Project Requirements]
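    The interface you must implement comes from the provided structure; the fit itself, by the most direct method taught in class (ordinary least squares via the normal equations), together with the RMSE the script measures, looks like this on synthetic data:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic data from a known linear model: y = X.w + b + noise.
X = rng.normal(size=(200, 3))
true_w = np.array([2.0, -1.0, 0.5])
y = X @ true_w + 3.0 + rng.normal(0, 0.1, 200)

# Ordinary least squares via the normal equations, with a bias column.
Xb = np.hstack([X, np.ones((200, 1))])
w = np.linalg.solve(Xb.T @ Xb, Xb.T @ y)

rmse = np.sqrt(np.mean((Xb @ w - y) ** 2))   # the score being graded
```

    Gradient descent on the squared error reaches the same solution and scales better when the feature count is large; with only a handful of features the closed form is the simpler choice.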