NVIDIA Tutorial

NVIDIA Tutorial on Deep Learning, July 18, 2016 (6 hours)

LAAS-CNRS, room Europe, 7, avenue du Colonel Roche, Toulouse

Instructors:
Alison Lowndes, Deep Learning Solutions Architect and Community Manager, NVIDIA
Gunter Roth, Deep Learning Solutions Architect, NVIDIA

Slides: slides1; slides2; slides3


Program

- Introduction to GPUs and Neural Networks, with applications to the automotive industry and others;

- Demos

- Hands-on sessions using Qwiklab content.


Attendance is limited to 40 participants.

 Details:

Practical Deep Learning – An Introduction

Machine Learning is among the most important developments in the history of computing. Deep learning is one of the fastest growing areas of machine learning and a hot topic in both academia and industry. This workshop will cover the fundamentals of deep learning with a focus on hands-on exercises leveraging some of the most popular deep learning frameworks.

Prerequisites:

Programming experience and basic knowledge of calculus, linear algebra, and probability theory. Attendees are expected to bring their own laptops for the hands-on practical work.

Content:

Introduction to NVIDIA software for Deep Learning
 
Hands-on lab: Caffe framework
How to:
•             Build and train a convolutional neural network for classifying images.
•             Evaluate the classification performance under different training parameter configurations.
•             Modify the network configuration to improve classification performance.
•             Visualize the features that a trained network has learned.
•             Classify new test images using a trained network (see the sketch after this list).
•             Train and classify with a subset of the ImageNet dataset.
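As a flavour of the last two bullets, here is a minimal pycaffe sketch for classifying a single test image with an already trained network. The file names (lenet_deploy.prototxt, lenet_iter_10000.caffemodel, test_image.png) and the output blob name 'prob' are placeholders, not taken from the lab material.

```python
import caffe

# Run on the GPU, as in the lab environment.
caffe.set_mode_gpu()

# Placeholder file names: the lab supplies its own model definition and weights.
net = caffe.Net('lenet_deploy.prototxt', 'lenet_iter_10000.caffemodel', caffe.TEST)

# Caffe expects C x H x W single-precision input; the Transformer handles the reordering.
transformer = caffe.io.Transformer({'data': net.blobs['data'].data.shape})
transformer.set_transpose('data', (2, 0, 1))

# Load a grayscale test image and copy it into the input blob (batch size 1 assumed).
image = caffe.io.load_image('test_image.png', color=False)
net.blobs['data'].data[...] = transformer.preprocess('data', image)

# Forward pass; 'prob' is assumed to be the name of the softmax output blob.
out = net.forward()
print('Predicted class:', out['prob'][0].argmax())
```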
 
Hands-on lab: Torch framework
•             Introduction & History
•             Torch core features
•             Why use Torch?
•             Torch community and support
•             The Cheatsheet
•             Lua (JIT) and LuaRocks
•             Torch’s universal data structure: Tensors
•             Creating a LeNet network
•             Criterion: Defining a loss function
•             Using dataloaders to load 50,000 CIFAR-10 (3x32x32) images
•             Load and normalize data
•             Define Neural Network
•             Define Loss function
•             Train network on training data
•             Test network on test data
 
Hands-on lab: Theano framework
•             Theano integration with the Python ecosystem
•             Data management options in Theano
•             DNN definition and training (see the sketch after this list)
•             Ease of extensibility of DNN functionality, e.g. defining new activation and loss functions
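To give a feel for Theano's symbolic-graph and compiled-function workflow, here is a minimal, hypothetical sketch: a single softmax layer (logistic regression) trained by plain SGD. The input dimension (784), the class count (10) and the learning rate are arbitrary placeholders.

```python
import numpy as np
import theano
import theano.tensor as T

# Symbolic inputs: a minibatch of feature vectors and their integer class labels.
x = T.matrix('x')
y = T.ivector('y')

# Shared parameters (kept on the GPU when Theano is configured to use one).
W = theano.shared(np.zeros((784, 10), dtype=theano.config.floatX), name='W')
b = theano.shared(np.zeros(10, dtype=theano.config.floatX), name='b')

# Forward pass and negative log-likelihood loss.
p_y = T.nnet.softmax(T.dot(x, W) + b)
loss = -T.mean(T.log(p_y)[T.arange(y.shape[0]), y])

# Symbolic gradients and a plain SGD update rule, compiled into one training function.
gW, gb = T.grad(loss, [W, b])
train = theano.function([x, y], loss,
                        updates=[(W, W - 0.1 * gW), (b, b - 0.1 * gb)])
```

Swapping in a different activation or loss is then just a matter of changing the symbolic expression before compilation, which is the extensibility point the lab explores.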
 
Hands-on lab: TensorFlow
•             Computation graph basics
•             Linear regression (see the sketch after this list)
•             Sequence autoencoder
•             Multi-layer convolutional net
•             Multi-GPU use
•             TensorBoard visualization
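As a sketch of the graph-then-session workflow of the TensorFlow releases current at the time, here is a hypothetical linear-regression example on synthetic data; the toy data, learning rate and step count are arbitrary choices, not taken from the lab.

```python
import numpy as np
import tensorflow as tf

# Toy data: y = 3x + 2 plus a little noise.
x_data = np.random.rand(100).astype(np.float32)
y_data = 3.0 * x_data + 2.0 + np.random.normal(scale=0.05, size=100).astype(np.float32)

# Build the computation graph: trainable slope and intercept, a loss node, and an optimizer.
w = tf.Variable(0.0)
b = tf.Variable(0.0)
y_pred = w * x_data + b
loss = tf.reduce_mean(tf.square(y_pred - y_data))
train_op = tf.train.GradientDescentOptimizer(0.5).minimize(loss)

# Nothing runs until the graph is executed inside a session.
with tf.Session() as sess:
    sess.run(tf.initialize_all_variables())  # renamed tf.global_variables_initializer() in later releases
    for _ in range(200):
        sess.run(train_op)
    print(sess.run([w, b]))  # should approach [3.0, 2.0]
```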
 
Hands-on lab: Introduction to Recurrent Neural Networks (RNNs) with Chainer
•             What are RNNs?
•             Simple example of binary addition
•             RNN training with stochastic gradient descent (SGD)
•             Backpropagation through time (BPTT)
•             Challenges such as "vanishing" and "exploding" gradients
•             Backprop in RNNs
•             Long Short Term Memory (LSTM)
•             RNNs and text generation using Chainer (see the sketch after this list)
•             Exercises with Gated Recurrent Units and perplexity
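To show the Chainer side of the text-generation exercise, here is a minimal, hypothetical recurrent language model in the Chainer API of the time (1.x): one embedding layer, one stateful LSTM and a per-step softmax cross-entropy loss. The vocabulary size and hidden-unit count are placeholders.

```python
import chainer
import chainer.functions as F
import chainer.links as L

class RNNLM(chainer.Chain):
    """Minimal word-level language model: embed -> LSTM -> softmax."""

    def __init__(self, n_vocab, n_units):
        super(RNNLM, self).__init__(
            embed=L.EmbedID(n_vocab, n_units),   # token ids -> dense vectors
            lstm=L.LSTM(n_units, n_units),       # stateful LSTM keeps its hidden state between calls
            out=L.Linear(n_units, n_vocab),      # project back onto the vocabulary
        )

    def reset_state(self):
        self.lstm.reset_state()

    def __call__(self, x, t):
        # One step of the unrolled recurrence: score the prediction of the next token t.
        h = self.lstm(self.embed(x))
        return F.softmax_cross_entropy(self.out(h), t)
```

Training accumulates this per-step loss over a window of time steps, calls backward() on the sum, and then unchain_backward() to truncate backpropagation through time; the exponential of the average loss is the perplexity tracked in the exercises.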
  
 

 
