|
|
--- |
|
|
license: apache-2.0 |
|
|
--- |
|
|
# From Static to Dynamic: Adapting Landmark-Aware Image Models for Facial Expression Recognition in Videos |
|
|
|
|
|
<img width="1024" alt="image" src="https://github.com/user-attachments/assets/c629e924-cec2-46c9-9e9a-369b4e6d0aef"> |
|
|
|
|
|
|
|
|
[SOTA: Dynamic Facial Expression Recognition (Papers with Code)](https://paperswithcode.com/sota/dynamic-facial-expression-recognition-on?p=from-static-to-dynamic-adapting-landmark-1)<br>
|
|
[SOTA: Dynamic Facial Expression Recognition on DFEW (Papers with Code)](https://paperswithcode.com/sota/dynamic-facial-expression-recognition-on-dfew?p=from-static-to-dynamic-adapting-landmark-1)<br>
|
|
[SOTA: Dynamic Facial Expression Recognition on MAFW (Papers with Code)](https://paperswithcode.com/sota/dynamic-facial-expression-recognition-on-mafw?p=from-static-to-dynamic-adapting-landmark-1)<br>
|
|
|
|
|
[SOTA: Facial Expression Recognition on AffectNet (Papers with Code)](https://paperswithcode.com/sota/facial-expression-recognition-on-affectnet?p=from-static-to-dynamic-adapting-landmark-1)<br>
|
|
[SOTA: Facial Expression Recognition on RAF-DB (Papers with Code)](https://paperswithcode.com/sota/facial-expression-recognition-on-raf-db?p=from-static-to-dynamic-adapting-landmark-1)<br>
|
|
|
|
|
>[From Static to Dynamic: Adapting Landmark-Aware Image Models for Facial Expression Recognition in Videos](https://ieeexplore.ieee.org/document/10663980)<br> |
|
|
>Yin Chen$^{†}$, Jia Li$^{†∗}$, Shiguang Shan, Meng Wang, and Richang Hong
|
|
|
|
|
|
|
|
## 📰 News
|
|
**[2024.9.5]** The fine-tuned checkpoints are available. |
|
|
|
|
|
**[2024.9.2]** The code and pre-trained models are available. |
|
|
|
|
|
**[2024.8.28]** The paper has been accepted by IEEE Transactions on Affective Computing.
|
|
|
|
|
**[2023.12.5]** ~~Code and pre-trained models will be released here~~. |
|
|
|
|
|
## 🚀 Main Results
|
|
|
|
|
### Dynamic Facial Expression Recognition |
|
|
<img width="1024" alt="image" src="https://github.com/user-attachments/assets/2144837d-9fd5-4f88-8447-1f6049b38e9a"> |
|
|
|
|
|
<img width="1024" alt="image" src="https://github.com/user-attachments/assets/4a80731e-666e-4cef-9f74-5f794eea7116"> |
|
|
|
|
|
|
|
|
### Static Facial Expression Recognition |
|
|
<img width="1024" alt="image" src="https://github.com/user-attachments/assets/89a47ea3-1036-4124-927c-563af8007d1f"> |
|
|
|
|
|
### Visualization |
|
|
<img width="1024" alt="image" src="https://github.com/user-attachments/assets/aea1385d-0d1b-4f5e-8775-087a30363751"> |
|
|
|
|
|
|
|
|
## Fine-tuning with Pre-trained Weights
|
|
1. Download the pre-trained weights from [Baidu Drive](https://pan.baidu.com/s/1J5eCnTn_Wpn0raZTIUCfgw?pwd=dji4), [Google Drive](https://drive.google.com/file/d/1Y9zz8z_LwUi-tSFBAwDPZkVoyY6mhZlu/view?usp=drive_link), or [OneDrive](https://mailhfuteducn-my.sharepoint.com/:f:/g/personal/2022111029_mail_hfut_edu_cn/EgKQNq8Y2chKl2TSoYf_OA0BQpCwx-FDw2ksPaMxBntZ8A), and move them to the `ckpts` directory.
|
|
|
|
|
2. Run the following commands to fine-tune the model on the target dataset.
|
|
```bash
# Create and activate an isolated environment
conda create -n s2d python=3.9
conda activate s2d

# Install dependencies, then launch fine-tuning
pip install -r requirements.txt
bash run.sh
```
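Before launching `run.sh`, it can help to confirm that the downloaded weights actually landed in the `ckpts` directory step 1 expects. A minimal sketch; the `*.pth` extension is an assumption, so adjust the pattern to the actual filename of the checkpoint you downloaded:

```shell
# Sanity check: verify a checkpoint file is present in ckpts/ before
# fine-tuning. The *.pth pattern is an assumed filename convention.
mkdir -p ckpts
if ls ckpts/*.pth >/dev/null 2>&1; then
  echo "checkpoint found in ckpts/"
else
  echo "no checkpoint in ckpts/ -- download it first"
fi
```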
|
|
|
|
|
## 🚀 Reported Results and Fine-tuned Weights
|
|
The fine-tuned checkpoints can be downloaded from [Baidu Drive](https://pan.baidu.com/s/1Xz5j8QW32x7L0bnTEorUbA?pwd=5drk) or [Hugging Face](https://huggingface.co/cyinen/S2D).
|
|
|
|
[Star History](https://star-history.com/#MSA-LMC/S2D&Date)
|
|
|
|
|
|