Enhance model card: Add pipeline tag, paper link, GitHub link, and usage details (#1)
Co-authored-by: Niels Rogge <nielsr@users.noreply.huggingface.co>
README.md (changed):
---
license: apache-2.0
pipeline_tag: video-classification
---
# Referee: Reference-aware Audiovisual Deepfake Detection

This repository contains the `Referee` model, presented in the paper [Referee: Reference-aware Audiovisual Deepfake Detection](https://huggingface.co/papers/2510.27475).

Code: [https://github.com/ewha-mmai/referee](https://github.com/ewha-mmai/referee)
## Abstract

<img src="https://github.com/ewha-mmai/referee/raw/main/referee.png" alt="Referee Architecture" width="900"/>

Deepfakes generated by advanced generative models pose rapidly growing threats, yet existing audiovisual deepfake detection approaches struggle to generalize to unseen forgeries. We propose *Referee*, a novel reference-aware audiovisual deepfake detection method that leverages speaker-specific cues from only one-shot examples to detect manipulations beyond spatiotemporal artifacts. By matching and aligning identity-related queries from reference and target content into cross-modal features, Referee jointly reasons about audiovisual synchrony and identity consistency. Extensive experiments on FakeAVCeleb, FaceForensics++, and KoDF demonstrate that Referee achieves state-of-the-art performance on cross-dataset and cross-language evaluation protocols. Experimental results highlight the importance of cross-modal identity verification for future deepfake detection.
## Requirements

### Environment

To train or evaluate Referee, you must first set up the environment:

```bash
conda create -n referee python=3.8.16
conda activate referee
pip install -r requirements.txt
```
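To confirm the environment resolved correctly, an optional sanity check can help. This is a minimal sketch that assumes `requirements.txt` pins PyTorch (the codebase builds on Synchformer, which is PyTorch-based); adjust if the pinned stack differs:

```bash
# Optional sanity check: verify that PyTorch imports and whether a GPU is visible.
# Assumes requirements.txt installs torch.
python -c "import torch; print(torch.__version__, torch.cuda.is_available())"
```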
### Dataset

For training and evaluation, the dataset must be prepared in the format the code expects. An example dataset structure is provided in the [GitHub repository's `data/` directory](https://github.com/ewha-mmai/referee/tree/main/data).
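The authoritative layout is whatever ships in that `data/` directory; rather than guessing it here, the quickest way to see it is to clone the repository and list it:

```bash
# Inspect the example dataset layout that ships with the code.
git clone https://github.com/ewha-mmai/referee
ls -R referee/data
```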
### Pretrained Checkpoints

This project requires pretrained checkpoints to run training, evaluation, or fine-tuning.

- **Training from scratch:** download the Synchformer checkpoint trained on **LRS3** from the [Synchformer repository](https://github.com/v-iashin/Synchformer) and place it in the `model/pretrained/` directory.
- **Evaluating or fine-tuning Referee:** download the provided checkpoint from the [Hugging Face repository](https://huggingface.co/eunsanglee/Referee/tree/main) and put it into the `model/pretrained/` directory; a download sketch for this checkpoint follows the list.
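One way to fetch the Referee checkpoint is with the `huggingface_hub` CLI; a minimal sketch, assuming you want the repo's files mirrored into `model/pretrained/`:

```bash
# Minimal sketch: mirror the files from the eunsanglee/Referee repo into model/pretrained/.
# Requires a recent huggingface_hub; alternatively, download the checkpoint manually
# from the Hugging Face link above.
pip install -U "huggingface_hub[cli]"
huggingface-cli download eunsanglee/Referee --local-dir model/pretrained/
```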
## Train

To train Referee, use the provided `scripts/train.sh`. Some training-specific settings, such as the number of epochs, the starting epoch, and the training dataset, are set directly in `train.sh`.

You can change most other training parameters, such as the learning rate, batch size, and number of layers, in the config file `configs/pair_sync.yaml`.
Once you have set all parameters as desired, you can start training Referee using:

```bash
sh scripts/train.sh
```
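If you want to keep a record of a run, a purely optional addition (assuming the script logs to stdout/stderr):

```bash
# Optional: capture the training output in a log file while still printing it.
sh scripts/train.sh 2>&1 | tee train.log
```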
## Evaluation

To evaluate Referee, use the provided `scripts/test.sh`. Some evaluation-specific settings, such as the model path and the test dataset, are set directly in `test.sh`.

You can change most other evaluation parameters, such as the number of layers and the number of identity queries, in the config file `configs/pair_sync.yaml`.

Once you have set all parameters as desired, you can start evaluating Referee using:
```bash
sh scripts/test.sh
```
## Acknowledgement

This project builds heavily on the implementation of [Synchformer](https://github.com/v-iashin/Synchformer). We thank the authors for making their code publicly available.
## Citation

If you find our work helpful or inspiring, please feel free to cite it:

```bibtex
@article{boo2025referee,
  title={Referee: Reference-aware Audiovisual Deepfake Detection},
  author={Boo, Hyemin and Lee, Eunsang and Lee, Jiyoung},
  journal={arXiv preprint arXiv:2510.27475},
  year={2025}
}
```