Tags: Safetensors · English · llama

OpenCLIP-LLaVA

OpenLVLM-MIA: A Controlled Benchmark Revealing the Limits of Membership Inference Attacks on Large Vision-Language Models

Overview

  • OpenLVLM-MIA provides a controlled benchmark for reassessing membership inference attacks (MIA) on large vision-language models, free of dataset-induced biases.
  • The benchmark comprises 6,000 images with controlled member/non-member distributions and ground-truth membership labels at three training stages.
  • Under this controlled setup, state-of-the-art MIA methods perform at chance level, clarifying the true difficulty of the problem and motivating more robust privacy defenses (see the evaluation sketch below).
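
The chance-level finding can be made concrete with a small scoring sketch. This is an illustrative assumption, not the benchmark's released evaluation code: the `evaluate_attack` helper is hypothetical, and random attack scores stand in for a real attack.

```python
import numpy as np
from sklearn.metrics import roc_auc_score

def evaluate_attack(scores: np.ndarray, membership: np.ndarray) -> float:
    """AUC of attack scores against ground-truth member (1) / non-member (0)
    labels; an AUC near 0.5 means the attack performs at chance level."""
    return roc_auc_score(membership, scores)

# Illustrative run: an uninformative attack on 6,000 balanced labels
# yields AUC ~ 0.5, the chance-level behavior reported in the overview.
rng = np.random.default_rng(0)
membership = rng.integers(0, 2, size=6000)  # hypothetical ground-truth labels
scores = rng.random(6000)                   # hypothetical attack scores
print(f"AUC: {evaluate_attack(scores, membership):.3f}")
```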

Model details

Format: Safetensors
Model size: 7B params
Tensor type: F16
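
A minimal loading sketch based on the metadata above (7B LLaVA-style model, F16 Safetensors weights); the choice of `LlavaForConditionalGeneration` and `AutoProcessor` is an assumption drawn from the card's llama/LLaVA tags, not a documented usage recipe.

```python
import torch
from transformers import AutoProcessor, LlavaForConditionalGeneration

model_id = "paper-2229/openclip-llava"

# Load the F16 Safetensors checkpoint; device_map="auto" requires `accelerate`.
model = LlavaForConditionalGeneration.from_pretrained(
    model_id,
    torch_dtype=torch.float16,
    device_map="auto",
)
processor = AutoProcessor.from_pretrained(model_id)
```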

Model tree for paper-2229/openclip-llava

Fine-tuned versions of this model: 10
