A Survey of Machine Learning Techniques in Adversarial Image Forensics

Journal Paper
Authors: Ehsan Nowroozi, Ali Dehghantanha, Reza M. Parizi, Kim-Kwang Raymond Choo
Year: 2021
Abstract: Image forensics plays a crucial role in both criminal investigations (e.g., dissemination of fake images to spread racial hate or false narratives about specific ethnic groups or political campaigns) and civil litigation (e.g., defamation). Increasingly, machine learning approaches are also utilized in image forensics. However, there are a number of limitations and vulnerabilities associated with machine learning-based approaches (e.g., how to detect adversarial (image) examples), with associated real-world consequences (e.g., inadmissible evidence, or wrongful conviction)…

VIPPrint: Validating Synthetic Image Detection and Source Linking Methods on a Large Scale Dataset of Printed Documents

Journal Paper
Authors: Anselmo Ferreira, Ehsan Nowroozi, Mauro Barni
Year: 2021
Abstract: The possibility of carrying out a meaningful forensic analysis on printed and scanned images plays a major role in many applications. First of all, printed documents are often associated with criminal activities, such as terrorist plans, child pornography, and even fake packages. Additionally, printing and scanning can be used to hide the traces of image manipulation or the synthetic nature of images, since the artifacts commonly found in manipulated and synthetic images are gone after the images are printed and scanned…

Higher-Order, Adversary-Aware, Double JPEG-Detection via Selected Training on Attacked Samples

Conference Paper
Authors: Mauro Barni, Ehsan Nowroozi, Benedetta Tondi
Year: 2017
Abstract: In this paper we present an adversary-aware double JPEG detector which is capable of detecting the presence of two JPEG compression steps even in the presence of heterogeneous processing and counter-forensic (C-F) attacks. The detector is based on an SVM classifier fed with a large number of features and trained to recognise the traces left by double JPEG compression in the presence of attacks. Since it is not possible to train the SVM on all possible kinds of processing and C-F attacks, a selected set of images, manipulated with a limited number of attacks, is added to the training set…
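The selected-training strategy can be sketched in a few lines of Python with scikit-learn; this is a minimal illustration, assuming the double-JPEG feature vectors have already been extracted, with random placeholder features and hypothetical dataset sizes standing in for the large handcrafted feature set used in the paper.

# Minimal sketch of adversary-aware training: an SVM is trained on
# double-JPEG features augmented with a small set of attacked samples.
# The feature vectors are random placeholders, not real forensic features.
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)
n, d = 500, 128                                  # hypothetical sample count / feature size

X_single = rng.normal(0.0, 1.0, (n, d))          # single-compressed (negative class)
X_double = rng.normal(0.5, 1.0, (n, d))          # double-compressed (positive class)
X_attacked = rng.normal(0.3, 1.2, (n // 5, d))   # double-compressed, then C-F attacked

# Add the selected attacked samples to the positive class of the training set.
X = np.vstack([X_single, X_double, X_attacked])
y = np.hstack([np.zeros(n), np.ones(n), np.ones(len(X_attacked))])

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
clf = SVC(kernel="rbf", C=1.0, gamma="scale").fit(X_tr, y_tr)
print("accuracy:", accuracy_score(y_te, clf.predict(X_te)))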

Detection of Adaptive Histogram Equalization Robust Against JPEG Compression

Conference Paper
Authors: Mauro Barni, Ehsan Nowroozi, Benedetta Tondi
Year: 2018
Abstract: Contrast Enhancement (CE) detection in the presence of laundering attacks, i.e., common processing operators applied with the goal of erasing the traces the CE detector looks for, is a challenging task. JPEG compression is one of the most harmful laundering attacks, which has been proven to deceive most CE detectors proposed so far. In this paper, we present a system that is able to detect contrast enhancement by means of adaptive histogram equalization in the presence of JPEG compression, by training a JPEG-aware SVM detector based on color SPAM features, i.e., an SVM detector trained on contrast-enhanced-then-JPEG-compressed images…
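The JPEG-aware training-set construction can be illustrated with a minimal Python sketch; the image below is a random grayscale placeholder, skimage's CLAHE stands in for the adaptive histogram equalization, and a trivial histogram replaces the color SPAM feature extractor described in the paper.

# Minimal sketch of JPEG-aware training data: positive samples are
# contrast-enhanced (adaptive histogram equalization) and then JPEG-compressed,
# so the detector learns CE traces that survive compression.
import io
import numpy as np
from PIL import Image
from skimage import exposure

def jpeg_cycle(arr, quality=80):
    """Compress and re-read a uint8 grayscale array through JPEG."""
    buf = io.BytesIO()
    Image.fromarray(arr).save(buf, format="JPEG", quality=quality)
    buf.seek(0)
    return np.asarray(Image.open(buf))

def placeholder_features(arr):
    """Stand-in for the color SPAM features used in the paper."""
    return np.histogram(arr, bins=32, range=(0, 255), density=True)[0]

rng = np.random.default_rng(0)
img = rng.integers(0, 256, (128, 128), dtype=np.uint8)   # toy grayscale image

enhanced = exposure.equalize_adapthist(img)              # CLAHE, returns floats in [0, 1]
enhanced = (enhanced * 255).astype(np.uint8)

negative = jpeg_cycle(img)        # pristine, then JPEG-compressed
positive = jpeg_cycle(enhanced)   # enhanced, then JPEG-compressed (target of the detector)

print(placeholder_features(negative)[:5])
print(placeholder_features(positive)[:5])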

On the Transferability of Adversarial Examples Against CNN-Based Image Forensics

Conference Paper
Authors: Mauro Barni, Kassem Kallas, Ehsan Nowroozi, Benedetta Tondi
Year: 2018
Abstract: Recent studies have shown that Convolutional Neural Networks (CNNs) are relatively easy to attack through the generation of so-called adversarial examples. Such vulnerability also affects CNN-based image forensic tools. Research in deep learning has shown that adversarial examples exhibit a certain degree of transferability, i.e., they maintain part of their effectiveness even against CNN models other than the one targeted by the attack…
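A minimal PyTorch sketch of how such a transferability test can be set up is given below; the two small CNNs and the random inputs are toy placeholders for the forensic networks and image datasets considered in the paper, and FGSM is used here as a representative attack.

# Minimal sketch of a transferability test: adversarial examples are crafted
# with FGSM against a source CNN and then evaluated on a different target CNN.
import torch
import torch.nn as nn
import torch.nn.functional as F

def small_cnn(seed):
    torch.manual_seed(seed)
    return nn.Sequential(
        nn.Conv2d(1, 8, 3, padding=1), nn.ReLU(),
        nn.AdaptiveAvgPool2d(4), nn.Flatten(),
        nn.Linear(8 * 16, 2),
    )

source, target = small_cnn(0), small_cnn(1)   # stand-ins for two different detectors

x = torch.rand(16, 1, 32, 32)                 # toy images
y = torch.randint(0, 2, (16,))                # toy labels

# FGSM crafted on the source model only.
x_adv = x.clone().requires_grad_(True)
loss = F.cross_entropy(source(x_adv), y)
loss.backward()
eps = 0.03
x_adv = (x_adv + eps * x_adv.grad.sign()).clamp(0, 1).detach()

# Transferability: how often the attack also changes the *target* model's output.
with torch.no_grad():
    flipped = (target(x_adv).argmax(1) != target(x).argmax(1)).float().mean()
print("fraction of target predictions changed:", flipped.item())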

Effectiveness of random deep feature selection for securing image manipulation detectors against adversarial examples

Conference Paper
Authors: Mauro Barni, Ehsan Nowroozi, Benedetta Tondi, Bowen Zhang
Year: 2020
Abstract: We investigate whether the random feature selection approach proposed in [1] to improve the robustness of forensic detectors to targeted attacks can be extended to detectors based on deep learning features. In particular, we study the transferability of adversarial examples targeting an original CNN image manipulation detector to other detectors (a fully connected neural network and a linear SVM) that rely on a random subset of the features extracted from the flatten layer of the original network…
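The random deep feature selection step can be sketched as follows (Python/PyTorch plus scikit-learn); the backbone, images, and labels are toy placeholders for the image manipulation detector and dataset used in the paper.

# Minimal sketch of random deep feature selection: features are taken from the
# flatten layer of a (toy) CNN, a random subset of the components is kept, and
# a linear SVM is trained on that subset.
import numpy as np
import torch
import torch.nn as nn
from sklearn.svm import LinearSVC

torch.manual_seed(0)
backbone = nn.Sequential(                         # up to and including the flatten layer
    nn.Conv2d(1, 8, 3, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool2d(8), nn.Flatten(),
)

x = torch.rand(200, 1, 64, 64)                    # toy images
y = np.random.default_rng(0).integers(0, 2, 200)  # toy manipulation labels

with torch.no_grad():
    feats = backbone(x).numpy()                   # deep features from the flatten layer

# Keep a random subset of the feature components (kept secret from the attacker).
rng = np.random.default_rng(1)
keep = rng.choice(feats.shape[1], size=feats.shape[1] // 4, replace=False)

svm = LinearSVC(C=1.0, max_iter=5000).fit(feats[:, keep], y)
print("training accuracy on the reduced feature set:", svm.score(feats[:, keep], y))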

Demystifying the Transferability of Adversarial Attacks in Computer Networks

Journal Paper
Authors: Ehsan Nowroozi, Yassine Mekdad, Mohammad Hajian Berenjestanadi, Mauro Conti, Abdeslam EL Fergougui
Journal: IEEE Transactions on Network and Service Management (IEEE TNSM), April 2022
Year: 2022
Abstract: Convolutional Neural Network (CNN) models are among the most frequently used deep learning networks, and are extensively adopted in both academia and industry. Recent studies have demonstrated that adversarial attacks against such models can maintain their effectiveness even when used on models other than the one targeted by the attacker. This major property is known as transferability, and it makes CNNs ill-suited for security applications. In this paper, we provide the first comprehensive study which assesses the robustness of CNN-based models for computer networks against adversarial transferability. Furthermore, we investigate whether the transferability issue holds in computer network applications. In our experiments, we first consider five different attacks: the Iterative Fast Gradient Sign Method (I-FGSM), the Jacobian-based Saliency Map Attack (JSMA), the Limited-memory Broyden-Fletcher-Goldfarb-Shanno (L-BFGS) attack, the Projected Gradient Descent (PGD) attack, and the DeepFool attack. Then, we perform these attacks against three well-known datasets: the Network-based Detection of IoT (N-BaIoT) dataset, the Domain Generating Algorithms (DGA) dataset, and the RIPE Atlas dataset. Our experimental results clearly show that transferability occurs in specific use cases for the I-FGSM, JSMA, and L-BFGS attacks. In such scenarios, the attack success rate on the target network ranges from 63.00% to 100%. Finally, we suggest two shielding strategies to hinder the attack transferability, by considering the Most Powerful Attacks (MPAs) and the mismatched LSTM architecture.
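As a concrete reference for one of the attacks listed above, a minimal PyTorch sketch of I-FGSM against a toy classifier is shown below; the MLP and feature vectors stand in for the traffic-classification models and the N-BaIoT, DGA, and RIPE Atlas data, which are not reproduced here.

# Minimal sketch of the iterative FGSM (I-FGSM) attack against a toy MLP that
# stands in for a network-traffic classifier; data and model are placeholders.
import torch
import torch.nn as nn
import torch.nn.functional as F

torch.manual_seed(0)
model = nn.Sequential(nn.Linear(20, 64), nn.ReLU(), nn.Linear(64, 2))

x = torch.rand(32, 20)                 # toy feature vectors (e.g., flow statistics)
y = torch.randint(0, 2, (32,))         # toy benign/malicious labels

eps, alpha, steps = 0.1, 0.02, 10
x_adv = x.clone()
for _ in range(steps):
    x_adv.requires_grad_(True)
    loss = F.cross_entropy(model(x_adv), y)
    grad, = torch.autograd.grad(loss, x_adv)
    # One signed-gradient step, then project back into the eps-ball around x.
    x_adv = (x_adv + alpha * grad.sign()).detach()
    x_adv = torch.min(torch.max(x_adv, x - eps), x + eps).clamp(0, 1)

with torch.no_grad():
    acc = (model(x_adv).argmax(1) == y).float().mean()
print("accuracy on I-FGSM examples:", acc.item())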

CNN-based detection of generic contrast adjustment with JPEG post-processing

Conference Paper
Authors: Mauro Barni, Andrea Costanzo, Ehsan Nowroozi, Benedetta Tondi
Year: 2018
Abstract: Detection of contrast adjustments in the presence of JPEG post-processing is known to be a challenging task. JPEG post-processing is often applied innocently, as JPEG is the most common image format, or it may correspond to a laundering attack, when it is purposely applied to erase the traces of manipulation. In this paper, we propose a CNN-based detector for generic contrast adjustment, which is robust to JPEG compression…
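A minimal sketch of JPEG-aware training for such a detector is given below (Python/PyTorch with Pillow); the images, the contrast adjustment (a simple gamma curve), and the tiny CNN are placeholders, and only a single training step is shown.

# Minimal sketch of JPEG-aware CNN training: every image (adjusted or not) is
# passed through JPEG at a random quality factor before training, so the network
# learns contrast-adjustment traces that survive compression.
import io
import numpy as np
import torch
import torch.nn as nn
import torch.nn.functional as F
from PIL import Image

def jpeg(arr, quality):
    buf = io.BytesIO()
    Image.fromarray(arr).save(buf, format="JPEG", quality=quality)
    buf.seek(0)
    return np.asarray(Image.open(buf))

rng = np.random.default_rng(0)
imgs = rng.integers(0, 256, (32, 64, 64), dtype=np.uint8)   # toy grayscale images
labels = rng.integers(0, 2, 32)                             # 1 = contrast-adjusted

batch = []
for img, lab in zip(imgs, labels):
    if lab == 1:                                            # toy "generic" adjustment: gamma
        img = np.clip(((img / 255.0) ** 0.7) * 255, 0, 255).astype(np.uint8)
    q = int(rng.integers(60, 100))                          # JPEG-aware augmentation
    batch.append(jpeg(img, q).astype(np.float32) / 255.0)

x = torch.tensor(np.stack(batch)).unsqueeze(1)              # (N, 1, 64, 64)
y = torch.tensor(labels, dtype=torch.long)

model = nn.Sequential(nn.Conv2d(1, 8, 3, padding=1), nn.ReLU(),
                      nn.AdaptiveAvgPool2d(4), nn.Flatten(), nn.Linear(8 * 16, 2))
opt = torch.optim.Adam(model.parameters(), lr=1e-3)

loss = F.cross_entropy(model(x), y)
opt.zero_grad()
loss.backward()
opt.step()                                                  # one JPEG-aware training step
print("loss after one step:", loss.item())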