Deep Learning at scale with applications to Computer Vision
by Răzvan Pașcanu and Viorica Pătrăucean (DeepMind)
In this tutorial, we will discuss different aspects of scaling up deep learning models for image and video processing, and the existing bottlenecks in terms of optimisation, memory footprint, and latency. We will present works that address these challenges by regularising deep models, by compressing or sparsifying them, or by enabling parallelism during training and inference. The tutorial will start with basic notions of convnets for image and video processing and will then present recent techniques for stabilising training and making these models more efficient.
Short Bio
Razvan Pascanu is a research scientist at DeepMind. He obtained his PhD from the University of Montreal under the supervision of Yoshua Bengio. His main research interests span multiple aspects of deep learning and reinforcement learning: the optimisation, expressivity, and learnability of these models; recurrent and sequential models; learning with multiple objectives; and the efficiency of learning in reinforcement learning. He has published his work at international conferences and in journals (ICLR, ICML, NeurIPS, JMLR).
Viorica Patraucean is a research scientist at DeepMind. She obtained her PhD from the University of Toulouse and carried out postdoctoral work at Ecole Polytechnique Paris and the University of Cambridge on the processing of images, videos, and point clouds. Her main research interests centre on efficient vision systems that could achieve performance similar to that of humans, with work published at international conferences and in journals (CVPR, ECCV, ICLR, TPAMI). Her recent work focuses on parallelising deep video models.