Real or Virtual: A Video Conferencing Background Manipulation-Detection System

Journal Paper
Authors: Ehsan Nowroozi, Yassine Mekdad, Mauro Conti, Simone Milani, Selcuk Uluagac, Berrin Yanikoglu
Year: 2022
Abstract: Recently, the popularity and wide use of last-generation video conferencing technologies have created exponential growth in their market size. Such technologies allow participants in different geographic regions to have virtual face-to-face meetings. Additionally, they enable users to employ a virtual background to conceal their own environment, whether due to privacy concerns or to reduce distractions, particularly in professional settings. Nevertheless, in scenarios where users should not hide their actual location, they may mislead other participants by claiming a virtual background as a real one. Therefore, it is crucial to develop tools and strategies to detect the authenticity of the virtual background in question…

Double JPEG Compression Detection Using Statistical Analysis

Journal Paper
Authors: Ehsan Nowroozi, Ali Zakerolhosseini
Year: 2015
Abstract: Nowadays, with the advancement of technology, tampering with digital images using computers and advanced software packages like Photoshop has become a simple task. Many algorithms have been proposed to detect tampered images, and they continue to be refined. In this regard, verifying the accuracy of image content and detecting manipulations in an image, without any prior knowledge of its content, is an important research field…
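
As one hedged example of the kind of statistical analysis used for this task (not necessarily the features adopted in the paper), the sketch below computes a first-digit histogram of block-DCT coefficients, a classic cue for double JPEG compression: coefficients of singly compressed images approximately follow Benford's law, while a second compression disturbs that distribution. NumPy and SciPy are assumed, and `gray` denotes a grayscale image array.

```python
# Hedged sketch of a first-digit (Benford) feature for double JPEG
# detection; the paper's exact statistical features may differ.
import numpy as np
from scipy.fftpack import dct

def block_dct_first_digit_hist(gray, block=8):
    """Histogram of leading digits of 8x8 block-DCT coefficient magnitudes.
    Singly compressed images roughly follow Benford's law log10(1 + 1/d);
    a second JPEG compression introduces measurable deviations."""
    h, w = gray.shape
    coeffs = []
    for i in range(0, h - h % block, block):
        for j in range(0, w - w % block, block):
            b = gray[i:i + block, j:j + block].astype(float)
            # Separable 2-D DCT of one block.
            c = dct(dct(b.T, norm='ortho').T, norm='ortho')
            coeffs.append(np.abs(c).ravel()[1:])  # skip the DC term
    coeffs = np.concatenate(coeffs)
    coeffs = coeffs[coeffs >= 1]  # keep coefficients with a leading digit
    # Leading digit: c / 10^floor(log10(c)) lies in [1, 10).
    first_digits = (coeffs / 10 ** np.floor(np.log10(coeffs))).astype(int)
    hist = np.bincount(first_digits, minlength=10)[1:10]
    return hist / hist.sum()  # compare against log10(1 + 1/d), d = 1..9
```

A classifier can then be trained on such histograms (or on their deviation from the ideal Benford distribution) to separate singly from doubly compressed images.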

Improving the Security of Image Manipulation Detection through One-and-a-half-class Multiple Classification

Journal Paper
Authors: Mauro Barni, Ehsan Nowroozi, Benedetta Tondi
Year: 2019
Abstract: Protecting image manipulation detectors against perfect-knowledge attacks requires the adoption of detector architectures that are intrinsically difficult to attack. In this paper, we do so by exploiting a recently proposed multiple-classifier architecture combining the improved security of 1-Class (1C) classification with the good performance ensured by conventional 2-Class (2C) classification in the absence of attacks. The architecture, also known as a 1.5-Class (1.5C) classifier, consists of one 2C classifier and two 1C classifiers run in parallel, followed by a final 1C classifier…
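
A minimal sketch of the 1.5C architecture described above, assuming scikit-learn; the SVM-based classifiers and the score fusion below are illustrative placeholders, not the paper's exact configuration.

```python
# Hedged sketch of a 1.5-Class (1.5C) ensemble: one 2C classifier plus
# two 1C classifiers in parallel, fused by a final 1C classifier.
import numpy as np
from sklearn.svm import SVC, OneClassSVM

class OneAndAHalfClassClassifier:
    def __init__(self):
        self.two_class = SVC()                   # 2C: pristine vs. manipulated
        self.one_class_pristine = OneClassSVM()  # 1C: trained on pristine only
        self.one_class_manip = OneClassSVM()     # 1C: trained on manipulated only
        self.fusion = OneClassSVM()              # final 1C over the three scores

    def _scores(self, X):
        # Stack the three parallel decision scores into one feature vector.
        return np.column_stack([
            self.two_class.decision_function(X),
            self.one_class_pristine.decision_function(X),
            self.one_class_manip.decision_function(X),
        ])

    def fit(self, X_pristine, X_manip):
        X = np.vstack([X_pristine, X_manip])
        y = np.r_[np.zeros(len(X_pristine)), np.ones(len(X_manip))]
        self.two_class.fit(X, y)
        self.one_class_pristine.fit(X_pristine)
        self.one_class_manip.fit(X_manip)
        # The final 1C classifier learns the pristine region in score space,
        # so samples falling outside it are flagged as manipulated.
        self.fusion.fit(self._scores(X_pristine))

    def predict(self, X):
        # +1 -> consistent with the pristine region, -1 -> manipulated
        return self.fusion.predict(self._scores(X))
```

The intuition is that an attacker who defeats the 2C decision boundary must still land inside the pristine region modeled by the 1C components, which is what makes the combined detector harder to evade.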

A Survey of Machine Learning Techniques in Adversarial Image Forensics

Journal Paper
Authors: Ehsan Nowroozi, Ali Dehghantanha, Reza M. Parizi, Kim-Kwang Raymond Choo
Year: 2021
Abstract: Image forensics plays a crucial role in both criminal investigations (e.g., dissemination of fake images to spread racial hate or false narratives about specific ethnic groups or political campaigns) and civil litigation (e.g., defamation). Increasingly, machine learning approaches are also utilized in image forensics. However, there are a number of limitations and vulnerabilities associated with machine learning-based approaches (e.g., how to detect adversarial (image) examples), with associated real-world consequences (e.g., inadmissible evidence or wrongful conviction)…

VIPPrint: Validating Synthetic Image Detection and Source Linking Methods on a Large Scale Dataset of Printed Documents

Journal Paper
Authors: Anselmo Ferreira, Ehsan Nowroozi, Mauro Barni
Year: 2021
Abstract: The possibility of carrying out a meaningful forensic analysis on printed and scanned images plays a major role in many applications. First of all, printed documents are often associated with criminal activities, such as terrorist plans, child pornography, and even fake packages. Additionally, printing and scanning can be used to hide the traces of image manipulation or the synthetic nature of images, since the artifacts commonly found in manipulated and synthetic images are gone after the images are printed and scanned…

Demystifying the Transferability of Adversarial Attacks in Computer Networks

Journal Paper
Authors: Ehsan Nowroozi, Yassine Mekdad, Mohammad Hajian Berenjestanaki, Mauro Conti, Abdeslam El Fergougui
Year: 2022
Venue: IEEE Transactions on Network and Service Management (IEEE TNSM), April 2022

Abstract:

Convolutional Neural Network (CNN) models are among the most frequently used deep learning architectures, extensively employed in both academia and industry. Recent studies have demonstrated that adversarial attacks against such models can retain their effectiveness even when used on models other than the one targeted by the attacker. This major property is known as transferability, and it makes CNNs ill-suited for security applications. In this paper, we provide the first comprehensive study assessing the robustness of CNN-based models for computer networks against adversarial transferability. Furthermore, we investigate whether the transferability property holds in computer network applications. In our experiments, we first consider five different attacks: the Iterative Fast Gradient Sign Method (I-FGSM), the Jacobian-based Saliency Map Attack (JSMA), the Limited-memory Broyden-Fletcher-Goldfarb-Shanno (L-BFGS) attack, the Projected Gradient Descent (PGD) attack, and the DeepFool attack. Then, we perform these attacks on three well-known datasets: the Network-based Detection of IoT (N-BaIoT) dataset, the Domain Generating Algorithms (DGA) dataset, and the RIPE Atlas dataset. Our experimental results clearly show that transferability occurs in specific use cases for the I-FGSM, JSMA, and L-BFGS attacks. In such scenarios, the attack success rate on the target network ranges from 63.00% to 100%. Finally, we suggest two shielding strategies to hinder attack transferability: considering the Most Powerful Attacks (MPAs) and using a mismatched LSTM architecture.
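
As a hedged illustration of the first attack listed above, the sketch below implements I-FGSM, assuming PyTorch; `model`, `eps`, `alpha`, and `steps` are illustrative placeholders, not the paper's experimental settings. An adversarial example crafted this way on one (surrogate) model can then be fed to a different (target) model to measure transferability.

```python
# Hedged sketch of the Iterative Fast Gradient Sign Method (I-FGSM).
import torch
import torch.nn.functional as F

def i_fgsm(model, x, y, eps=0.03, alpha=0.005, steps=10):
    """Iteratively perturb x to increase the loss, keeping the
    perturbation inside an L-infinity ball of radius eps around x."""
    x_adv = x.clone().detach()
    for _ in range(steps):
        x_adv.requires_grad_(True)
        loss = F.cross_entropy(model(x_adv), y)
        grad = torch.autograd.grad(loss, x_adv)[0]
        # Step in the direction of the loss gradient's sign.
        x_adv = x_adv.detach() + alpha * grad.sign()
        # Project back into the eps-ball and the valid input range.
        x_adv = torch.max(torch.min(x_adv, x + eps), x - eps)
        x_adv = x_adv.clamp(0.0, 1.0)
    return x_adv

# Transferability check (illustrative): craft on a surrogate, test on a target.
# x_adv = i_fgsm(surrogate_model, x, y)
# success_rate = (target_model(x_adv).argmax(1) != y).float().mean()
```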
