Mirrorcheck: Efficient Adversarial
Defense for Vision-Language Models

1MBZUAI 2Computer Vision Laboratory, EPFL
arXiv 2024, *Equal contribution

Mirrorcheck approach. At inference time, to check whether an input image has been adversarially attacked, our framework follows this procedure: (1) generate a text description of the image with the target VLM; (2) use this caption as a prompt to regenerate the image with a text-to-image model; (3) extract and compare embeddings of the original and regenerated images using a feature extractor. If the embeddings differ significantly, the original image has likely been attacked. The intuition behind our method is that if the input was attacked, the image and its caption will not be semantically consistent; using the predicted caption as a prompt for image generation therefore yields an image that is semantically very different from the input.
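Below is a minimal sketch of this detection procedure. It assumes BLIP as a stand-in captioner for the target VLM, Stable Diffusion v1.5 as the text-to-image model, CLIP as the feature extractor, and an illustrative similarity threshold; the paper's exact model choices and threshold calibration may differ.

import torch
import torch.nn.functional as F
from PIL import Image
from transformers import (
    BlipProcessor, BlipForConditionalGeneration,
    CLIPProcessor, CLIPModel,
)
from diffusers import StableDiffusionPipeline

device = "cuda" if torch.cuda.is_available() else "cpu"

# (1) Captioner standing in for the target VLM (BLIP here; an assumption).
cap_proc = BlipProcessor.from_pretrained("Salesforce/blip-image-captioning-base")
cap_model = BlipForConditionalGeneration.from_pretrained(
    "Salesforce/blip-image-captioning-base").to(device)

# (2) Text-to-image model used to regenerate an image from the caption.
t2i = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5").to(device)

# (3) Feature extractor for comparing input and regenerated images (CLIP here).
clip_proc = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")
clip_model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32").to(device)


@torch.no_grad()
def image_embedding(image: Image.Image) -> torch.Tensor:
    # L2-normalized CLIP image embedding.
    inputs = clip_proc(images=image, return_tensors="pt").to(device)
    emb = clip_model.get_image_features(**inputs)
    return F.normalize(emb, dim=-1)


@torch.no_grad()
def is_adversarial(image: Image.Image, threshold: float = 0.7) -> bool:
    # `threshold` is an illustrative value; in practice it would be
    # calibrated on clean data (e.g., to a target false-positive rate).

    # (1) Caption the input image.
    cap_inputs = cap_proc(images=image, return_tensors="pt").to(device)
    caption = cap_proc.decode(
        cap_model.generate(**cap_inputs, max_new_tokens=30)[0],
        skip_special_tokens=True,
    )

    # (2) Regenerate an image from that caption.
    regenerated = t2i(caption).images[0]

    # (3) Low similarity between the embeddings suggests the caption no longer
    #     matches the visual content, i.e., a likely adversarial input.
    sim = F.cosine_similarity(
        image_embedding(image), image_embedding(regenerated)).item()
    return sim < threshold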

Abstract

Vision-Language Models (VLMs) are becoming increasingly vulnerable to adversarial attacks as various novel attack strategies are being proposed against these models. While existing defenses excel in unimodal contexts, they currently fall short in safeguarding VLMs against adversarial threats. To mitigate this vulnerability, we propose a novel, yet elegantly simple approach for detecting adversarial samples in VLMs. Our method leverages Text-to-Image (T2I) models to generate images based on captions produced by target VLMs. Subsequently, we calculate the similarities of the embeddings of both input and generated images in the feature space to identify adversarial samples. Empirical evaluations conducted on different datasets validate the efficacy of our approach, outperforming baseline methods adapted from image classification domains. Furthermore, we extend our methodology to classification tasks, showcasing its adaptability and model-agnostic nature. Theoretical analyses and empirical findings also show the resilience of our approach against adaptive attacks, positioning it as an excellent defense mechanism for real-world deployment against adversarial threats.

BibTeX

@article{fares2024mirrorcheck,
  title = {Mirrorcheck: Efficient Adversarial Defense for Vision-Language Models},
  author = {Fares, Samar and Ziu, Klea and Aremu, Toluwani and Durasov, Nikita and Takáč, Martin and Fua, Pascal and Nandakumar, Karthik and Laptev, Ivan},
  journal = {arXiv preprint arXiv:2406.09250},
  year = {2024}
}