OpenCLIP-LLaVA
OpenLVLM-MIA: A Controlled Benchmark Revealing the Limits of Membership Inference Attacks on Large Vision-Language Models
Overview
- OpenLVLM-MIA offers a controlled benchmark to reassess membership inference attacks (MIA) on large vision-language models beyond dataset-induced biases.
- The benchmark consists of a 6,000-image dataset with controlled member/non-member distributions and ground-truth membership at three training stages.
- On this setup, state-of-the-art MIA approaches perform at chance level (see the evaluation sketch below), clarifying the true difficulty of the problem and motivating more robust privacy defenses.
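For context, membership inference attacks are commonly evaluated by ranking samples with a per-example attack score and computing ROC-AUC against the ground-truth membership labels; an AUC near 0.5 is the chance-level behaviour described above. The sketch below is purely illustrative and uses synthetic scores and labels rather than the benchmark's actual evaluation pipeline.

```python
import numpy as np
from sklearn.metrics import roc_auc_score

def evaluate_mia(scores: np.ndarray, is_member: np.ndarray) -> float:
    """ROC-AUC of a per-sample membership score against ground-truth labels.

    An AUC close to 0.5 means the attack cannot separate members from
    non-members, i.e. chance-level performance on a balanced benchmark.
    """
    return roc_auc_score(is_member, scores)

# Illustration with synthetic data: an uninformative score gives AUC ~ 0.5.
rng = np.random.default_rng(0)
labels = rng.integers(0, 2, size=6000)   # hypothetical ground-truth membership
random_scores = rng.random(size=6000)    # an attack score with no signal
print(f"chance-level AUC = {evaluate_mia(random_scores, labels):.3f}")
```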
Other Resources
- Code: yamanalab/openlvlm-mia
- Dataset: paper-2229/openlvlm-mia
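A minimal sketch of loading the benchmark from the Hugging Face Hub with the `datasets` library is shown below; the split name and record schema are assumptions, so consult the dataset card for the exact layout.

```python
from datasets import load_dataset

# Load the benchmark from the Hub. The split name ("train") and the record
# fields are assumptions; check the paper-2229/openlvlm-mia dataset card
# for the actual configuration names and schema.
dataset = load_dataset("paper-2229/openlvlm-mia", split="train")
print(dataset)      # lists the available columns and number of rows
print(dataset[0])   # inspect a single record
```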
Model tree for paper-2229/openclip-llava
- Base model: laion/CLIP-ViT-H-14-laion2B-s32B-b79K
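The base encoder can be pulled directly with the `open_clip` library's hf-hub interface; this is a minimal sketch that loads only the OpenCLIP backbone, not the fine-tuned OpenCLIP-LLaVA checkpoint.

```python
import torch
import open_clip

# Load the base OpenCLIP ViT-H/14 backbone from the Hub (base encoder only,
# not the fine-tuned OpenCLIP-LLaVA model).
model, preprocess = open_clip.create_model_from_pretrained(
    "hf-hub:laion/CLIP-ViT-H-14-laion2B-s32B-b79K"
)
tokenizer = open_clip.get_tokenizer("hf-hub:laion/CLIP-ViT-H-14-laion2B-s32B-b79K")

model.eval()
with torch.no_grad():
    text_features = model.encode_text(tokenizer(["a photo of a cat"]))
print(text_features.shape)  # embedding size of the text tower
```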