Announcements
- November 21st: CADL 2023 is ready to start! We will be in P&J (Main Conference Centre).
- August 10th: The deadline has been extended to September 1, 2023 (11:59 AM Pacific Time)!
- June 7th: CADL 2023 has been accepted as a BMVC 2023 workshop! More details to follow!
Workshop description
Deep Learning has been one of the most significant breakthroughs in computer science in the last 10 years. It has delivered major advances in the effectiveness of prediction models across many research topics and application fields. This paradigm shift has radically changed the way scientific research is conducted: DL is becoming a computational science in which gigantic models with millions of parameters are trained on large-scale computational facilities. While this transition is leading to better and more accurate results and is accelerating scientific discovery and technological advances, the availability of such computational power, the ability to harness it, and the reduction of energy consumption have become key factors of success.
In this context, optimisation and the careful design of neural architectures play an increasingly important role that directly affects the pace of research, the effectiveness of state-of-the-art models, their applicability at production scale and, last but not least, the energy consumed to train models and run inference.
The BMVC workshop on "Computational Aspects of Deep Learning" fosters the submission of novel research works focused on the development of deep neural network architectures, libraries, frameworks, strategies and hardware solutions that address challenging experiments while optimising the use of computational resources. This includes computationally efficient training and inference strategies, the design of novel architectures for more effective or more efficient feature extraction and classification, hyperparameter optimisation to enhance model performance, solutions for training on multi-node systems such as HPC (High Performance Computing) clusters, and the reduction of model complexity through numerical optimisation, e.g. quantization.
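As a concrete illustration of the quantization example mentioned above, here is a minimal, hypothetical sketch (not part of the call itself) of post-training dynamic quantization in PyTorch; the toy model and shapes are placeholders only. Converting the weights of Linear layers to int8 typically shrinks the model and speeds up CPU inference without any retraining.

```python
import torch
import torch.nn as nn

# Hypothetical toy model; any network with nn.Linear layers would do.
model = nn.Sequential(
    nn.Linear(512, 256),
    nn.ReLU(),
    nn.Linear(256, 10),
)

# Replace Linear layers with int8-weight versions; activations are
# quantized on the fly at inference time, so no retraining is needed.
quantized = torch.quantization.quantize_dynamic(
    model, {nn.Linear}, dtype=torch.qint8
)

x = torch.randn(1, 512)
with torch.no_grad():
    print(quantized(x).shape)  # torch.Size([1, 10])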
The best academic paper will be awarded a top-of-the-line GPU sponsored by NVIDIA. Due to shipping restrictions, geographical constraints apply: winners must reside in trade-approved countries.
Each submission must include a paragraph explaining how the presented research can improve AI adoption and scientific discovery where computational resources are scarce or limited, reduce energy consumption, and favour inclusion.
Topics
Topics of interest include the following:
- Developing optimization strategies for reducing the energy consumption in deep learning
- Designing novel architectures and operators suitable for data-intensive scenarios
- Developing distributed, efficient reinforcement learning algorithms
- Implementing large-scale pre-training techniques for real-world applications
- Developing distributed training approaches and architectures
- Utilizing HPC and massively parallel architectures for deep learning
- Exploring frameworks and optimization algorithms for training deep networks
- Utilizing model pruning, gradient compression, and quantization to reduce computational complexity
- Developing methods to reduce the memory/data transmission footprint
- Developing methods and differentiable metrics to estimate computational costs, energy consumption and power consumption of models
- Designing, implementing and using hardware accelerators for deep learning
- Developing efficient and cost-saving models and methods that promote diversity and inclusivity in the field of deep learning
- Speeding up the training and inference of GPT, generative and foundation models (see the mixed-precision sketch after this list)
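To give a flavour of the training speed-ups several of these topics target, below is a minimal, hypothetical sketch of automatic mixed-precision training in PyTorch; the toy model, data and hyperparameters are placeholders, not part of the call.

```python
import torch
import torch.nn as nn

# Hypothetical toy setup; any model/optimizer pair would do.
device = "cuda" if torch.cuda.is_available() else "cpu"
model = nn.Linear(512, 10).to(device)
optimizer = torch.optim.SGD(model.parameters(), lr=1e-2)
# GradScaler rescales the loss so float16 gradients do not underflow.
scaler = torch.cuda.amp.GradScaler(enabled=(device == "cuda"))

x = torch.randn(64, 512, device=device)
y = torch.randint(0, 10, (64,), device=device)

for _ in range(10):
    optimizer.zero_grad()
    # Run the forward pass in float16 where it is numerically safe;
    # ops that need full precision stay in float32.
    with torch.autocast(device_type=device, enabled=(device == "cuda")):
        loss = nn.functional.cross_entropy(model(x), y)
    scaler.scale(loss).backward()
    scaler.step(optimizer)  # unscales gradients, then steps the optimizer
    scaler.update()
```

Running most operations in float16 (or bfloat16) roughly halves activation memory and exploits GPU tensor cores, which is one of the simplest ways to cut the computational cost of training large models.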
Important dates
- Submission deadline (extended): September 1, 2023 (11:59 AM Pacific Time)
- Author notification: September 15, 2023 (11:59 AM Pacific Time)
- Camera-ready submission: September 28, 2023 (11:59 AM Pacific Time)
Organizers
Giuseppe Fiameni
Solution Architect (AI), NVIDIA
Iuri Frosio
Principal Research Scientist, NVIDIA
Claudio Baecchi
Senior Machine Learning Engineer, Small Pixels & University of Florence
Frederic Pariente
Solutions Architect (AI) Manager, NVIDIA
Lorenzo Baraldi
Tenure-Track Assistant Professor, ELLIS Scholar, UNIMORE