# Deep Learning: Do-It-Yourself!

## Course description

Recent developments in neural network approaches (now better known as “deep learning”) have dramatically changed the landscape of several research fields such as image classification, object detection, speech recognition, machine translation, self-driving cars and many more. Owing to its promise of leveraging large (and sometimes even small) amounts of data in an end-to-end manner, i.e. training a model to extract features and learn from them by itself, deep learning is increasingly appealing to other fields as well: medicine, time series analysis, biology, simulation…

This course is a deep dive into the practical details of deep learning architectures, in which we attempt to demystify deep learning and kick-start you into using it in your own field of interest. During this course, you will gain a better understanding of the basics of deep learning and get familiar with its applications. We will show how to set up, train, debug and visualize your own neural network. Along the way, we will provide practical engineering tricks for training or adapting neural networks to new tasks.

By the end of this class, you will have an overview of the deep learning landscape and its applications to traditional fields, but also some ideas for applying it to new ones. You should also be able to train a multi-million parameter deep neural network by yourself. For the implementations we will be using the PyTorch library in Python.

## Practical info

This is the material under development for MAP583 (2020), taught at École Polytechnique with Andrei Bursuc.

For the 2019 material, see here.

- Lesson 1:
  - (slides) introductory slides
  - (code) a first example on Colab: dogs and cats with VGG
  - (code) intro to PyTorch: exercises
  - (slides and code) a simple example for backprop
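As a taste of what the backprop example covers, here is a minimal sketch in pure Python (an illustrative toy, not the course notebook itself): a single sigmoid neuron with squared-error loss, trained by gradient descent using the chain rule.

```python
import math

# Toy backprop: one sigmoid neuron y = sigmoid(w*x + b),
# squared-error loss L = 0.5 * (y - target)^2.
# Illustrative sketch only; not the course's actual notebook.

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def train(x, target, w=0.5, b=0.0, lr=1.0, steps=100):
    for _ in range(steps):
        # Forward pass
        z = w * x + b
        y = sigmoid(z)
        loss = 0.5 * (y - target) ** 2
        # Backward pass (chain rule): dL/dz = (y - t) * y * (1 - y)
        dz = (y - target) * y * (1 - y)
        # Gradient descent step: dL/dw = dz * x, dL/db = dz
        w -= lr * dz * x
        b -= lr * dz
    return w, b, loss

w, b, loss = train(x=1.0, target=1.0)  # loss shrinks toward 0
```

The same forward/backward structure is what `loss.backward()` automates in PyTorch, which is the point the lesson's example builds up to.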

- Lesson 2:
  - (slides) refresher: linear/logistic regression, classification and the PyTorch module
  - (code) understanding convolutions and your first neural network for a digit recognizer
  - (code) MLP from scratch on Colab

- Lesson 3:
  - (slides) embeddings and dataloader
  - (code) Collaborative filtering: matrix factorization and recommender system (CPU compatible)
  - (slides) Convolutions and siamese networks
  - (code) Siamese networks on MNIST
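To give a flavor of the collaborative filtering notebook, here is a pure-Python sketch of matrix factorization trained with SGD (an assumption-laden toy: the real notebook uses PyTorch embeddings, and all names and hyperparameters below are made up for illustration):

```python
import random

# Toy matrix factorization for collaborative filtering.
# Each rating (user u, item i, value r) is approximated by the dot
# product of a user factor vector U[u] and an item factor vector V[i].
# Illustrative sketch only; not the course's actual notebook.

def factorize(ratings, n_users, n_items, k=4, lr=0.05, reg=0.01,
              epochs=500, seed=0):
    rng = random.Random(seed)
    U = [[rng.gauss(0, 0.1) for _ in range(k)] for _ in range(n_users)]
    V = [[rng.gauss(0, 0.1) for _ in range(k)] for _ in range(n_items)]
    for _ in range(epochs):
        for u, i, r in ratings:
            pred = sum(U[u][f] * V[i][f] for f in range(k))
            err = r - pred
            for f in range(k):
                # Simultaneous SGD step with L2 regularization
                U[u][f], V[i][f] = (
                    U[u][f] + lr * (err * V[i][f] - reg * U[u][f]),
                    V[i][f] + lr * (err * U[u][f] - reg * V[i][f]),
                )
    return U, V

# Tiny made-up dataset: 2 users x 2 items
ratings = [(0, 0, 5.0), (0, 1, 1.0), (1, 0, 4.0), (1, 1, 2.0)]
U, V = factorize(ratings, n_users=2, n_items=2)
pred = sum(U[0][f] * V[0][f] for f in range(4))  # close to 5.0
```

Everything here runs on CPU, which matches the "CPU compatible" note above; the PyTorch version replaces the hand-written loops with `nn.Embedding` layers and an optimizer.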

- Lesson 4:
  - (slides) optimization for DL
  - (code) gradient descent optimization algorithms
  - (slides) Going deeper
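The optimization notebook compares gradient descent variants; as a minimal taste, here is a pure-Python sketch of plain gradient descent versus the heavy-ball momentum variant on the toy objective f(x) = x² (an illustrative assumption, not the notebook's actual code or objective):

```python
# Compare plain gradient descent with momentum on f(x) = x^2.
# Illustrative sketch only; not the course's actual notebook.

def grad(x):
    return 2.0 * x  # derivative of f(x) = x^2

def sgd(x, lr=0.1, steps=200):
    for _ in range(steps):
        x -= lr * grad(x)
    return x

def sgd_momentum(x, lr=0.1, beta=0.9, steps=200):
    v = 0.0
    for _ in range(steps):
        v = beta * v + grad(x)  # accumulate a velocity term
        x -= lr * v
    return x

x_plain = sgd(5.0)          # decays toward the minimum at 0
x_mom = sgd_momentum(5.0)   # overshoots, oscillates, then settles at 0
```

On this one-dimensional bowl both converge, but momentum's oscillation-then-settle behavior is exactly what the notebook's trajectory plots visualize on harder loss surfaces.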

- Lesson 5:
  - (slides) Generative Adversarial Networks
  - (code) Conditional and Info GANs
  - (code) AE and VAE
  - (slides) Transfer learning and Self-supervised learning

- Lesson 6:
  - (slides and code) Recurrent Neural Networks
  - (code) PyTorch tutorial on char-RNN
  - (code) Word2vec
  - (code) Playing with word embeddings
  - (slides) Under the hood

- Lesson 7:
  - A project repo

Back to dataflowr