Using Multiple GPUs in TensorFlow
As neural networks get deeper and training datasets grow larger, deep learning needs more computing power to accommodate the computationally intensive training process. This lecture introduces how to leverage HPC to accelerate deep learning through parallelism. We will discuss topics such as data I/O and utilizing multiple GPUs on a single machine or across a cluster. We will also present benchmarks of a typical deep learning problem (MNIST) under various hardware configurations, including single/multiple CPUs and single/multiple GPUs. We use TensorFlow for the implementations and benchmarking tests. Attendees will learn practical skills for making good use of available hardware to maximize training speed.

This webinar was presented by Weiguang Guan (SHARCNET) on November 6, 2019 as part of a series of regular biweekly General Interest webinars run by SHARCNET. The webinars cover different high performance computing (HPC) topics, are approximately 45 minutes in length, and are delivered by experts in the relevant fields. Further details can be found on this web page: https://www.sharcnet.ca/help/index.php/Online_Seminars . Subscribe to our Twitter account (@SHARCNET) to stay updated about our upcoming webinars.

SHARCNET is a consortium of 19 Canadian academic institutions that share a network of high performance computers (http://www.sharcnet.ca). SHARCNET is a part of Compute Ontario (http://computeontario.ca/) and Compute Canada (https://computecanada.ca).
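The core multi-GPU approach the webinar covers is data parallelism: each training batch is split across devices, every device computes gradients on its shard, and the gradients are averaged before a single shared-weight update (in TensorFlow this pattern is provided by `tf.distribute.MirroredStrategy`). The following is a minimal NumPy sketch of the idea only, with hypothetical helper names, not the TensorFlow API:

```python
import numpy as np

def gradient(w, x, y):
    """Gradient of mean squared error for a linear model y_hat = x @ w."""
    return 2.0 * x.T @ (x @ w - y) / len(x)

def data_parallel_step(w, x, y, n_devices, lr=0.1):
    """One data-parallel SGD step (conceptual sketch, not real GPU code).

    Each simulated 'device' receives a shard of the batch, computes a
    local gradient, and the gradients are averaged (an all-reduce)
    before a single update of the shared weights.
    """
    x_shards = np.array_split(x, n_devices)
    y_shards = np.array_split(y, n_devices)
    grads = [gradient(w, xs, ys) for xs, ys in zip(x_shards, y_shards)]
    return w - lr * np.mean(grads, axis=0)
```

With equal-sized shards, the averaged gradient matches the full-batch gradient, so the multi-device step produces the same update as a single-device step; the benefit is that the per-shard work runs concurrently on real hardware.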