NVIDIA Deep Learning Institute (DLI)
Fundamentals of Deep Learning for Multi-GPUs
The first NVIDIA DLI workshop in Prague
This course will teach you how to use multiple GPUs to train neural networks. The workshop will be led by Adam Grzywaczewski, Senior Deep Learning Solution Architect at NVIDIA and author of the course.
THE WORKSHOP IS FULLY BOOKED NOW!
More information about the Fundamentals of Deep Learning for Multi-GPUs course
The computational requirements of the deep neural networks behind AI applications such as self-driving cars are enormous: a single training run can take weeks on one GPU, or even longer on the large datasets used in self-driving car research. Training on multiple GPUs can dramatically shorten this time, making it feasible to solve complex problems with deep learning. This workshop will teach you how to use multiple GPUs to train neural networks, covering:
- Approaches to multi-GPU training
- Algorithmic and engineering challenges to large-scale training
- Key techniques used to overcome these challenges
Upon completion, you’ll be able to effectively parallelize training of deep neural networks using TensorFlow.
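The most common approach the workshop builds on is synchronous data parallelism: each GPU computes gradients on its own shard of a batch, the gradients are averaged (an all-reduce), and all replicas apply the same weight update. The sketch below simulates this in plain Python on a toy 1-D linear model; the function names and data are illustrative, not from the course materials (in TensorFlow itself, `tf.distribute.MirroredStrategy` handles this pattern across GPUs).

```python
# Conceptual sketch of synchronous data-parallel SGD (illustrative names).
# Each simulated "GPU" computes a gradient on its own data shard; the
# gradients are averaged (mimicking an all-reduce) before one shared update.

def gradient(w, x, y):
    # Gradient of squared error for a 1-D linear model y_hat = w * x.
    return 2.0 * (w * x - y) * x

def data_parallel_step(w, shards, lr=0.01):
    # shards: one list of (x, y) pairs per simulated worker.
    per_worker = [
        sum(gradient(w, x, y) for x, y in shard) / len(shard)
        for shard in shards
    ]
    avg_g = sum(per_worker) / len(per_worker)  # the "all-reduce" average
    return w - lr * avg_g                      # identical update on all replicas

# Two simulated workers, each holding half of a batch drawn from y = 3x.
shards = [[(1.0, 3.0), (2.0, 6.0)], [(3.0, 9.0), (4.0, 12.0)]]
w = 0.0
for _ in range(200):
    w = data_parallel_step(w, shards)
print(round(w, 2))  # converges toward 3.0
```

Because every replica applies the same averaged gradient, the result matches single-GPU training on the full batch, which is what makes this scheme attractive before the algorithmic challenges of very large batches enter the picture.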
Prerequisites: Experience with stochastic gradient descent mechanics, network architecture, and parallel computing
Tools and Frameworks: TensorFlow
Assessment Type: Code-based