Large-scale multi-GPU training

Organized in conjunction with NVAITC@UNIMORE, the NVIDIA AI Technology Center at UNIMORE

April 14th 2020, at 11:00


Abstract

The computational requirements of the deep neural networks that enable AI applications such as object captioning and identification are enormous. A single training cycle can take weeks on a single GPU, or even months for larger datasets such as those used in computer vision research. Training on multiple GPUs can significantly shorten this time, making it feasible to solve complex problems with deep learning.

This two-hour seminar will introduce how to use multiple GPUs to train neural networks. You'll learn:

  • Approaches to multi-GPU training
  • Algorithmic and engineering challenges of large-scale training

Upon completion, you'll be able to effectively parallelize training of deep neural networks.
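
To give a flavour of the data-parallel approach listed above, the sketch below shows multi-GPU training with PyTorch's DistributedDataParallel, where each GPU processes a different shard of the data and gradients are averaged across processes. The toy model, synthetic dataset, and torchrun launch command are illustrative assumptions, not material from the seminar itself.

    # Minimal data-parallel training sketch (illustrative, not seminar material).
    # One process per GPU; launch e.g. with: torchrun --nproc_per_node=<num_gpus> train_ddp.py
    import os
    import torch
    import torch.distributed as dist
    import torch.nn as nn
    from torch.nn.parallel import DistributedDataParallel as DDP
    from torch.utils.data import DataLoader, TensorDataset
    from torch.utils.data.distributed import DistributedSampler

    def main():
        # The launcher sets RANK, LOCAL_RANK and WORLD_SIZE for every process.
        dist.init_process_group(backend="nccl")
        local_rank = int(os.environ["LOCAL_RANK"])
        torch.cuda.set_device(local_rank)

        # Toy model and synthetic data; a real network and dataset would go here.
        model = nn.Linear(128, 10).cuda(local_rank)
        model = DDP(model, device_ids=[local_rank])
        dataset = TensorDataset(torch.randn(1024, 128), torch.randint(0, 10, (1024,)))

        # DistributedSampler gives each process a disjoint shard of the data.
        sampler = DistributedSampler(dataset)
        loader = DataLoader(dataset, batch_size=32, sampler=sampler)

        criterion = nn.CrossEntropyLoss()
        optimizer = torch.optim.SGD(model.parameters(), lr=0.01)

        for epoch in range(2):
            sampler.set_epoch(epoch)  # reshuffle the shards every epoch
            for x, y in loader:
                x, y = x.cuda(local_rank), y.cuda(local_rank)
                optimizer.zero_grad()
                loss = criterion(model(x), y)
                loss.backward()   # gradients are all-reduced across GPUs here
                optimizer.step()

        dist.destroy_process_group()

    if __name__ == "__main__":
        main()
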


Speaker

Giuseppe Fiameni

Giuseppe Fiameni, PhD, is a Solution Architect for AI and Accelerated Computing at NVIDIA, where he optimizes deep learning workloads on High Performance Computing systems. He is the technical lead of the Italian NVIDIA Artificial Intelligence Technology Centre.


Connect to the seminar room

The permanent link for connecting will be available shortly before the seminar.

Go to the seminar room

Organization

The seminar is organized as part of the "Computer Vision and Cognitive Systems" course (Prof. R. Cucchiara, L. Baraldi).