Aims and scope
Machine-generated images are becoming increasingly prevalent in the digital world, thanks to the spread of Deep Learning models that can generate visual data, such as Generative Adversarial Networks and Diffusion Models. While image generation tools can be employed for lawful goals (e.g., to assist content creators, generate simulated datasets, or enable multi-modal interactive applications), there is growing concern that they might also be used for illegal and malicious purposes, such as the forgery of natural images and the generation of images in support of fake news, misogyny, or revenge porn. While the results obtained in past years contained artefacts that made generated images easy to recognize, today's results are far harder to distinguish from real images on a purely perceptual basis. In this context, assessing the authenticity of images becomes a fundamental goal for security and for guaranteeing a degree of trustworthiness of AI algorithms. There is therefore a growing need to develop automated methods that can assess the authenticity of images (and, more generally, multimodal content) and that can keep pace with the constant evolution of generative models, which become more realistic over time.
The first Workshop and Challenge on DeepFake Analysis and Detection (DFAD) focuses on the development of benchmarks and tools for fake data understanding and detection, with the final goal of protecting against visual disinformation and the misuse of generated images and text, and of monitoring the progress of existing and proposed detection solutions. It fosters the submission of works that identify novel ways of understanding and detecting fake data, especially new machine learning approaches capable of combining syntactic and perceptual analysis.
The ELSA Challenge on DeepFake Detection
In parallel to soliciting the submission of relevant scientific works, the workshop will host a competition on deepfake detection. The competition is organised with the support of the ELSA project (the European Lighthouse on Secure and Safe AI), which builds on and extends the internationally recognized ELLIS (European Laboratory for Learning and Intelligent Systems) network of excellence. The objective of the challenge is to monitor and evaluate the development of algorithms for deepfake detection, in terms of efficacy and explainability. The challenge runs on the ELSA DeepFake dataset, collected by the University of Modena and Reggio Emilia and Leonardo SpA. Submitted papers do not need to be linked with the challenge.
Researchers interested in participating in the challenge will find the dataset and metrics at the link below. During the workshop, authors of selected submissions will be invited to present their methods.
Invited Talks
An AI Whodunnit: Following Image Manipulation Clues to their Source, 11:10 AM
CISPA Helmholtz Center for Information Security
Sustainable DeepFake Detection, Watermarking, and Personalized Disinformation, 11:45 AM
Naomi Ken Korem
Between Illusion and Reality: Guiding AI to Generate Identity-Preserving Images, 12:20 PM
Call for Contributions
We invite participants to submit their work to the workshop as full papers.
Full papers must present original research, not published elsewhere, and follow the ICCV main conference format, with a length of 4-8 pages including figures and tables. Additional pages containing only cited references are allowed; appendices may not be part of these additional pages. Supplemental materials are also allowed, in either PDF or ZIP format. Accepted papers will be included in the conference proceedings.
All submissions will be handled electronically via CMT.
Submission site: DFAD 2023 submission site
The workshop calls for submissions addressing, but not limited to, the following topics:
- Approaches for fake image detection, relying on low-level hand-crafted features as well as learnable and semantic approaches
- Partially-altered fake image detection
- GAN- and Diffusion-based techniques with safety assurance for image and video synthesis and generation
- Video Deepfake detection and multimodal approaches to deepfake detection
- Approaches for detecting generated text and fake news, also based on multimodal analysis
- Approaches and techniques for explainable deepfake detection
- Evaluation metrics for deepfake generation and detection systems
Important Dates
- Paper Submission Deadline: July 17th, AoE
- Decision to Authors: August 8th, AoE
- Camera-ready papers due: August 21st
- Workshop date: October 2nd, morning
University of Modena and Reggio Emilia