CALL FOR BOOK CHAPTERS (Adversarial Multimedia Forensics)

We are pleased to invite you to submit a chapter for the “Adversarial Multimedia Forensics” book, to be published in the Springer Advances in Information Security series. Chapter submissions should be 15 to 20 pages long, single-spaced, single-column, and prepared in LaTeX, and should provide enough information for readers and professionals in cybersecurity applications, particularly with regard to multimedia forensics and security. Multimedia forensics, counter-forensics, and anti-counter-forensics constitute the three primary sections of the book.

Looking For: We are looking for chapters that apply security and attack concepts, methods, and techniques to the image and video topics listed in this Call, and NOT chapters that apply our topic areas to general computer security. For multimedia forensics, example topics include fine-grained CFA artifact assessment for forgery detection and source camera identification. For counter-forensics, examples include applying adversarial attacks to medical images or studying the transferability of adversarial attacks against vision transformers. Concerning anti-counter-forensics against exploratory attacks, examples include an adversary-aware double-JPEG detector trained on selected attacked samples, or image contrast manipulation identification that is resistant to JPEG compression. Regarding causative attacks, one example is a defense strategy against poisoning attacks on satellite imagery models.

Due to the outstanding capabilities of current machine learning (ML) algorithms, ML is becoming the de facto approach for multimedia forensics (MF). However, the inherent fragility of ML architectures creates new and significant security vulnerabilities that hinder their use in security-critical applications such as MF, where the potential presence of an adversary cannot be ignored. Moreover, given the weakness of the traces that forensic techniques rely on, disabling forensic analysis is often a simple task. The development of novel strategies capable of strengthening ML-based methods, together with the assessment of their security in the presence of an adversary, is therefore of the greatest importance. It has thus become essential in MF to develop solutions capable of overcoming the security limitations of ML models in the face of counter-forensic techniques. This book contributes to that goal by emphasizing image manipulation detection with ML/DL algorithms for MF in adversarial environments. The main structure of the book is divided into the following three sections: (I) presents different methodologies in multimedia forensics; (II) discusses general concepts and terminology in the field of adversarial machine learning (Adv-ML), with a focus on counter-forensics (CF); and (III) addresses anti-counter-forensics.

Originality: Chapter contributions should contain 25-30% novel content compared to earlier published work by the authors.

Submission: There are no submission or acceptance fees for manuscripts submitted to this book. All manuscripts are accepted based on a double-blind peer-review editorial process. Please send your manuscript (*.pdf and *.tex) to the e-mail address of one of the editors (e.nowroozi@qub.ac.uk, Alireza.jolfaei@flinders.edu.au, Kassem.kallas@inria.fr).

Timeline:

New deadline for proposal submission: 15 August 2023; full chapter submission: 28 August 2023 (send by e-mail to the editors).

Book Areas: As mentioned above, the core of the book consists of (I) Multimedia Forensics, (II) Counter-Forensics, and (III) Anti-Counter-Forensics. The tentative table of contents is as follows:

(Part-I) Multimedia Forensics: This part discusses machine learning and deep learning techniques for digital image forensics and image tampering detection. It covers recent forensic analysis techniques, including (I) acquisition-based footprints, (II) coding-based footprints, and (III) editing-based footprints.

(Part-II) Counter-Forensics: This part explains the counterpart of the detector, namely counter-forensics (CF), which refers to any method designed to thwart a forensic investigation; in the literature this is also known as anti-forensics. In this setting, adversarial attacks that target a machine learning model in order to bypass forensic detectors can be classified as exploratory or causative. This part discusses the various methods proposed so far to defeat forensic analysis.

Exploratory Attacks: In the exploratory attack scenario, the adversary can only modify test data and is forbidden from changing the training examples. Example 1: Adversarial Cross-Modal Attacks from Images to Videos; Example 2: Adversarial attacks on medical images.
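
As a purely illustrative aid for prospective authors, the sketch below shows a minimal exploratory (evasion) attack in the FGSM style, assuming a pre-trained differentiable PyTorch classifier; the stand-in model, image size, and epsilon value are placeholders and not part of any specific chapter topic.

```python
# Minimal FGSM-style evasion sketch: only the *test* input is perturbed;
# the training data and the trained model are left untouched.
import torch
import torch.nn as nn

def fgsm_attack(model, image, label, epsilon=0.03):
    """Return an adversarial copy of `image` for a fixed, trained `model`."""
    image = image.clone().detach().requires_grad_(True)
    loss = nn.CrossEntropyLoss()(model(image), label)
    loss.backward()
    # Single signed-gradient step, then clip back to the valid pixel range.
    adv = image + epsilon * image.grad.sign()
    return adv.clamp(0.0, 1.0).detach()

# Stand-in classifier and data, purely for illustration.
model = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 10)).eval()
x = torch.rand(1, 3, 32, 32)   # one "test" image in [0, 1]
y = torch.tensor([3])          # its true label
x_adv = fgsm_attack(model, x, y)
```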

Causative attacks: In causative attacks, the adversary can tamper with the training process to inject a backdoor into the model, to be exploited later at inference time; these attacks are commonly referred to as poisoning, backdoor, or Trojan attacks. Example 1: Attacks using backdoors against Vision Transformers; Example 2: Performing Backdoor Attacks Using Rotation Transformation.
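
For illustration only, the sketch below shows one common causative pattern, assuming an image dataset stored as a NumPy array in [0, 1]: a small trigger patch is stamped onto a fraction of the training images, which are then relabeled with the attacker's target class. The trigger shape, poisoning rate, and target class are arbitrary assumptions.

```python
# Minimal backdoor-poisoning sketch: stamp a trigger patch on a fraction of
# the training images and relabel them with the attacker's target class.
import numpy as np

def poison_training_set(images, labels, target_class=0, rate=0.05, patch=3):
    """Return poisoned copies of (images, labels) plus the poisoned indices."""
    images, labels = images.copy(), labels.copy()
    n_poison = int(rate * len(images))
    idx = np.random.choice(len(images), n_poison, replace=False)
    # White square trigger in the bottom-right corner of each poisoned image.
    images[idx, -patch:, -patch:, :] = 1.0
    labels[idx] = target_class        # attacker's desired label
    return images, labels, idx

# Stand-in data: (N, H, W, C) images in [0, 1] with 10 classes.
X = np.random.rand(1000, 32, 32, 3).astype(np.float32)
y = np.random.randint(0, 10, size=1000)
X_p, y_p, poisoned_idx = poison_training_set(X, y)
```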

(Part-III) Anti-Counter-Forensics: To protect the reliability of forensic analysis, numerous anti-CF techniques have been developed in response to CF. The majority of these methods are tailored to particular CF methods. This part explains recent advances in anti-counter-forensics methods.

Defense against Exploratory Attacks: This part discusses recent methods proposed for improving the security of detectors against exploratory attacks, such as adversary-aware detectors and secure architectures. Example 1: Adversary-Aware Double JPEG-Detector via Selected Training on Attacked Samples; Example 2: Image Contrast Manipulation Identification Resistant to JPEG Compression.
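
As a hedged illustration of the "selected training on attacked samples" idea, the sketch below augments each training batch with attacked copies of its own samples before the gradient update (plain adversarial training); the FGSM helper and the toy two-class model are stand-ins, not the pipeline of any specific published detector.

```python
# Minimal adversary-aware training sketch: every batch is extended with
# attacked copies of itself, so the detector also learns from attacked samples.
import torch
import torch.nn as nn

def fgsm(model, x, y, eps=0.03):
    x = x.clone().detach().requires_grad_(True)
    nn.CrossEntropyLoss()(model(x), y).backward()
    return (x + eps * x.grad.sign()).clamp(0.0, 1.0).detach()

def train_step(model, optimizer, x, y):
    x_adv = fgsm(model, x, y)                 # attacked copies of the batch
    x_all, y_all = torch.cat([x, x_adv]), torch.cat([y, y])
    optimizer.zero_grad()                     # drop gradients left by fgsm()
    loss = nn.CrossEntropyLoss()(model(x_all), y_all)
    loss.backward()
    optimizer.step()
    return loss.item()

# Toy two-class detector and one training step on random stand-in data.
model = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 2))
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
train_step(model, opt, torch.rand(8, 3, 32, 32), torch.randint(0, 2, (8,)))
```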

Defense against Causative Attacks: This part surveys recent techniques proposed for enhancing the security of models against poisoning attacks. Example 1: Defense against poisoning attacks on satellite imagery models; Example 2: Using Heatmap Clustering to Find Deep Neural Network Backdoor Poisoning Attacks.
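
For illustration, the sketch below follows the spirit of clustering-based backdoor detection, assuming that penultimate-layer activations for the samples of one predicted class have already been extracted as a NumPy array; the cluster count and the "suspiciously small cluster" threshold are arbitrary assumptions.

```python
# Minimal activation-clustering sketch: split one class's deep features into
# two clusters and flag the much smaller cluster as possibly poisoned.
import numpy as np
from sklearn.cluster import KMeans

def flag_suspect_samples(activations, small_cluster_ratio=0.35):
    """activations: (N, D) penultimate-layer features of one predicted class."""
    labels = KMeans(n_clusters=2, n_init=10).fit_predict(activations)
    sizes = np.bincount(labels, minlength=2)
    small = int(np.argmin(sizes))
    # Only flag the minority cluster when it is suspiciously small.
    if sizes[small] < small_cluster_ratio * len(activations):
        return np.where(labels == small)[0]
    return np.array([], dtype=int)

# Synthetic check: 950 "clean" points plus 50 shifted "poisoned" ones.
clean = np.random.randn(950, 64)
poisoned = np.random.randn(50, 64) + 4.0
suspects = flag_suspect_samples(np.vstack([clean, poisoned]))
```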

Book Editors

Dr. Ehsan Nowroozi, Research Fellow at Queen’s University Belfast (QUB), United Kingdom (e.nowroozi@qub.ac.uk)

Dr. Alireza Jolfaei, Associate Professor, Flinders University, Adelaide, Australia (Alireza.jolfaei@flinders.edu.au)

Dr. Kassem Kallas, Research Scientist at INRIA, Rennes, France (Kassem.kallas@inria.fr)

 

TC Members:

  • Ali DehghanTanha, University of Guelph, Canada
  • Anselmo Ferreira, Cyber and Digital Citizen’s Security Unit, European Joint Research Center, European Commission, Italy
  • Jianwei Fei, Nanjing University of Information Science & Technology, China
  • Behrooz Razeghi, Idiap Research Institute, Switzerland
  • Meng Li, Hefei University of Technology, China
  • Shantanu Pal, Deakin University, Australia 
  • Saman Shoja Chaeikar, Australian Institute of Higher Education, Australia 
  • Ali Mehrabi, Western Sydney University, Australia 
  • Soheila Ghane, BHP, Australia 
  • Sona Taheri, RMIT University, Australia 
  • Prabhat Kumar, LUT University, Finland 
  • Randhir Kumar, Indian Institute of Technology, India
  • Sattar Seifollahi, Resolution Life, Australia
  • Rahim Taheri, University of Portsmouth, England
  • Iuliia Tkachenko, Université Lumière Lyon 2, France
  • Hannes Mareen, Ghent University, Belgium
  • Siwei Lyu, University at Buffalo, USA
  • Ewa Kijak, University of Rennes, IRISA, INRIA, France
  • Pedro Comesaña Alfaro, University of Vigo, Spain
  • Lydia Abady, University of Siena, Italy
