
Deep Learning

ABOUT THE COURSE:

Deep Learning has received a lot of attention over the past few years and has been employed successfully by companies such as Google, Microsoft, IBM, Facebook, and Twitter to solve a wide range of problems in Computer Vision and Natural Language Processing. In this course we will learn about the building blocks used in these Deep Learning based solutions. Specifically, we will learn about feedforward neural networks, convolutional neural networks, recurrent neural networks and attention mechanisms. We will also look at various optimization algorithms such as Gradient Descent, Nesterov Accelerated Gradient Descent, Adam, AdaGrad and RMSProp, which are used for training such deep neural networks. By the end of this course, students will be familiar with the deep architectures used for solving various Vision and NLP tasks.

Important For Certification/Credit Transfer:

Weekly Assignments and the Discussion Forum can be accessed ONLY by enrolling here.

Scroll down to Enroll


Note: Content is Free!

All content, including the discussion forum and assignments, is free.


The Final Exam (in-person, invigilated, currently conducted in India) is mandatory for Certification and carries an exam fee of INR 1100.



INTENDED AUDIENCE: NIL

CORE/ELECTIVE: Elective

UG/PG: UG and PG

PREREQUISITES: Working knowledge of Linear Algebra and Probability Theory. It would be beneficial if participants have done a course on Machine Learning.

INDUSTRY SUPPORT: NIL

12,552 students have enrolled already!!

ABOUT THE INSTRUCTOR:



Prof. Mitesh M. Khapra is an Assistant Professor in the Department of Computer Science and Engineering at IIT Madras. At IIT Madras he plans to pursue his interests in the areas of Deep Learning, Multimodal Multilingual Processing, Dialog Systems and Question Answering. Prior to that he worked as a Researcher at IBM Research India. During the four and a half years he spent at IBM, he worked on several interesting problems in the areas of Statistical Machine Translation, Cross Language Learning, Multimodal Learning, Argument Mining and Deep Learning. This work led to publications in top conferences in the areas of Computational Linguistics and Machine Learning. Prior to IBM, he completed his PhD and M.Tech at IIT Bombay in January 2012 and July 2008, respectively. His PhD thesis dealt with the important problem of reusing resources for multilingual computation. During his PhD he was a recipient of the IBM PhD Fellowship (2011) and the Microsoft Rising Star Award (2011). He is also a recipient of the Google Faculty Research Award (2017).

COURSE LAYOUT:

Week 1: (Partial) History of Deep Learning, Deep Learning Success Stories, McCulloch Pitts Neuron, Thresholding Logic, Perceptrons, Perceptron Learning Algorithm
Week 2: Multilayer Perceptrons (MLPs), Representation Power of MLPs, Sigmoid Neurons, Gradient Descent, Feedforward Neural Networks, Representation Power of Feedforward Neural Networks
Week 3: Feedforward Neural Networks, Backpropagation
Week 4: Gradient Descent (GD), Momentum Based GD, Nesterov Accelerated GD, Stochastic GD, AdaGrad, RMSProp, Adam, Eigenvalues and Eigenvectors, Eigenvalue Decomposition, Basis
Week 5: Principal Component Analysis and its interpretations, Singular Value Decomposition
Week 6: Autoencoders and relation to PCA, Regularization in autoencoders, Denoising autoencoders, Sparse autoencoders, Contractive autoencoders
Week 7: Regularization: Bias Variance Tradeoff, L2 regularization, Early stopping, Dataset augmentation, Parameter sharing and tying, Injecting noise at input, Ensemble methods, Dropout
Week 8: Greedy Layerwise Pre-training, Better activation functions, Better weight initialization methods, Batch Normalization
Week 9: Learning Vectorial Representations of Words
Week 10: Convolutional Neural Networks, LeNet, AlexNet, ZF-Net, VGGNet, GoogLeNet, ResNet, Visualizing Convolutional Neural Networks, Guided Backpropagation, Deep Dream, Deep Art, Fooling Convolutional Neural Networks
Week 11: Recurrent Neural Networks, Backpropagation Through Time (BPTT), Vanishing and Exploding Gradients, Truncated BPTT, GRU, LSTMs
Week 12: Encoder Decoder Models, Attention Mechanism, Attention over Images

SUGGESTED READING MATERIALS:

•    Deep Learning, Ian Goodfellow, Yoshua Bengio, and Aaron Courville, MIT Press. Available online at http://www.deeplearningbook.org

CERTIFICATION EXAM:
  • The exam is optional and carries a fee.
  • Date of exam: October 28 (Sunday)
  • Time of exam: Morning session 9am to 12 noon; Afternoon session 2pm to 5pm
  • The exam for this course will be available in both the morning and afternoon sessions.
  • Registration URL: An announcement will be made when the registration form opens.
  • The online registration form has to be filled in and the certification exam fee paid. More details will be made available when the exam registration form is published.

CERTIFICATION:

  • Final score will be calculated as: 25% assignment score + 75% final exam score (see the sketch after this list)
  • The 25% assignment score is calculated as 25% of the average of the best 8 out of 12 assignment scores
  • An E-Certificate will be given to those who register, write the exam, and score greater than or equal to 40% as the final score. The certificate will have your name, photograph and the score in the final exam with the breakup. It will have the logos of NPTEL and IIT Madras. It will be e-verifiable at nptel.ac.in/noc.
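
As an illustration of the scoring rule above, here is a minimal Python sketch (not official NPTEL code); it assumes assignment and exam scores are each out of 100:

    # Minimal sketch of the scoring rule described above.
    # Assumption: assignment and exam scores are each out of 100.
    def final_score(assignment_scores, exam_score):
        # Assignment component: 25% of the average of the best 8 of 12 assignments.
        best_8 = sorted(assignment_scores, reverse=True)[:8]
        assignment_component = 0.25 * (sum(best_8) / 8)
        # Exam component: 75% of the final exam score.
        exam_component = 0.75 * exam_score
        return assignment_component + exam_component

    # Hypothetical example: 12 assignment scores and a 60/100 exam.
    scores = [90, 85, 0, 70, 95, 80, 100, 75, 88, 92, 0, 65]
    total = final_score(scores, 60)
    print(f"Final score: {total:.1f}, certificate eligible: {total >= 40}")

For this hypothetical example, the assignment component is 0.25 × 88.125 ≈ 22.0 and the exam component is 45.0, giving a final score of about 67.0, which clears the 40% certification threshold.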