- May 10th: The CMT is open for submissions: https://cmt3.research.microsoft.com/CADL2022/
- May 3rd: The submission timeline has been updated. Check out the new dates.
- April 9th: CADL2022 has been accepted as an ECCV 2022 workshop! More details to follow!
Deep Learning has been one of the most significant breakthroughs in computer science in the last 10 years, achieving substantial advances in the effectiveness of prediction models across many research topics and application fields. This paradigm shift has radically changed the way scientific research is conducted: DL is becoming a computational science in which gigantic models with millions of parameters are trained on large-scale computational facilities. While this transition leads to better and more accurate results, accelerating scientific discovery and technological advancement, the availability of such computational power, the ability to harness it, and the reduction of energy consumption become key factors of success.
The ECCV workshop on “Computational Aspects of Deep Learning” fosters the submission of novel research works that focus on the development of deep neural network architectures, libraries, frameworks, strategies, and hardware solutions to address challenging experiments while optimising the use of computational resources. This includes computationally efficient training and inference strategies, the design of novel architectures for increasing the efficacy or efficiency of feature extraction and classification, the optimisation of hyperparameters to enhance model performance, solutions for training on multi-node systems such as HPC (High Performance Computing) clusters, and the reduction of model complexity through numerical optimisation, e.g. quantization.
The best academic paper will be awarded a high-end GPU sponsored by NVIDIA. Due to shipping restrictions, geographical constraints apply: the winner must reside in a trade-approved country.
Topics of interest include the following:
- Model optimisation for reducing energy consumption while ensuring high performance.
- Design of innovative architectures and operators for data-intensive scenarios.
- Applications of large-scale pre-training techniques.
- Distributed training approaches and architectures.
- HPC and massively parallel architectures for Deep Learning.
- Model pruning, gradient compression, and quantization techniques to reduce training/inference time.
- Strategies to reduce memory/data transmission footprint.
- Methods to estimate computational costs/energy/power.
- Design, implementation and efficient use of hardware accelerators.
- Models/methods that can foster diversity and inclusion in research through the adoption of computationally efficient procedures for DL.
- Analysis of computational cost and consequent social impact of large model training.
- Submission deadline: July 11, 2022 (11:59 AM Pacific Time)
- Author notification: August 17, 2022 (11:59 AM Pacific Time)
- Camera-ready submission: August 22, 2022 (11:59 AM Pacific Time)
Principal Research Scientist, NVIDIA
Assistant Professor, SK Hynix Faculty Fellow, EECS, UC Berkeley
Tenure-Track Assistant Professor, ELLIS Scholar, UNIMORE
Post-Doc at MICC, University of Florence
Engineering Manager at NVIDIA, deputy director of NVAITC in EMEA