
Better, Faster: Utilizing Your GPUs for Deep Learning Workloads

Abstract

GPUs have evolved from gaming hardware into the workhorses of deep learning, geared towards, but not limited to, computer vision workloads: autonomous vehicles, medical imaging in radiomics, and video surveillance, to name a few. However, having powerful GPUs at your disposal is one thing; knowing how to use them to their full potential is another, and that is what we will explore together.

At NVIDIA, we are dedicated to creating an ecosystem of end-to-end hardware and software solutions that enables optimization on all fronts: plug-and-play Docker containers maintained by us, on-the-fly data augmentation on the GPU, and parallel model training across multiple GPUs with mixed precision for target deployment.
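
The abstract does not name specific libraries, but a minimal sketch of what "parallel model training across multiple GPUs with mixed precision" can look like, assuming PyTorch with DistributedDataParallel and automatic mixed precision, is shown below. The model, dataset, and hyperparameters are placeholders for illustration only.

```python
# Minimal sketch (assumption): multi-GPU data-parallel training with automatic mixed precision.
# Launch with `torchrun --nproc_per_node=<num_gpus> train.py`; torchrun sets LOCAL_RANK per process.
import os
import torch
import torch.nn as nn
import torch.distributed as dist
from torch.nn.parallel import DistributedDataParallel as DDP
from torch.utils.data import DataLoader, TensorDataset, DistributedSampler

def main():
    # One process per GPU.
    local_rank = int(os.environ.get("LOCAL_RANK", 0))
    dist.init_process_group(backend="nccl")
    torch.cuda.set_device(local_rank)

    # Placeholder model and synthetic data standing in for a real workload.
    model = nn.Sequential(nn.Linear(512, 256), nn.ReLU(), nn.Linear(256, 10)).cuda()
    model = DDP(model, device_ids=[local_rank])

    dataset = TensorDataset(torch.randn(4096, 512), torch.randint(0, 10, (4096,)))
    sampler = DistributedSampler(dataset)  # shards the data across GPUs
    loader = DataLoader(dataset, batch_size=64, sampler=sampler)

    optimizer = torch.optim.SGD(model.parameters(), lr=1e-2)
    criterion = nn.CrossEntropyLoss()
    scaler = torch.cuda.amp.GradScaler()  # scales the loss to avoid FP16 gradient underflow

    for epoch in range(2):
        sampler.set_epoch(epoch)  # reshuffle shards each epoch
        for inputs, targets in loader:
            inputs = inputs.cuda(non_blocking=True)
            targets = targets.cuda(non_blocking=True)
            optimizer.zero_grad(set_to_none=True)
            # Forward pass in mixed precision: FP16 where safe, FP32 elsewhere.
            with torch.cuda.amp.autocast():
                loss = criterion(model(inputs), targets)
            scaler.scale(loss).backward()
            scaler.step(optimizer)
            scaler.update()
        if dist.get_rank() == 0:
            print(f"epoch {epoch}: loss {loss.item():.4f}")

    dist.destroy_process_group()

if __name__ == "__main__":
    main()
```

Launching one process per GPU and sharding the data with DistributedSampler is the standard data-parallel recipe, and the gradient scaler keeps FP16 gradients from underflowing; the maintained containers mentioned above ship with these components preinstalled.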

We have open-sourced these toolkits so that data scientists can kick-start their work quickly instead of spending precious time fixing environment and installation errors, putting supercomputing power at their fingertips.

In this talk, we will take a look at these essential toolkits, which enable deep learning practitioners and data scientists alike to parallelize their model training, iterate faster, and develop better market-ready products.

Zenodia Charpy

Senior Deep Learning Solution Architect @ NVIDIA

I have worked hands-on for many years as an in-house data scientist, an external deep learning consultant, a cloud solution architect (on Azure), and now a senior deep learning solution architect at NVIDIA. My journey towards the optimal ways of using multiple GPUs for deep learning is paved with years of industry experience and practical tips and tricks learned from the pitfalls of real-world projects. I am on a mission to help data scientists and researchers alike accelerate their deep learning workloads with ease, drawing on my learnings and experience.