Add `notebook.ipynb` to the model repo (#1)
by reach-vb (HF Staff), opened

Files changed:
- README.md +2 -3
- notebook_finetuning.ipynb +0 -0
README.md (CHANGED)

````diff
@@ -3,7 +3,6 @@ license: mit
 pipeline_tag: video-classification
 tags:
 - video
-library_name: transformers
 ---
 
 # V-JEPA 2
@@ -11,7 +10,7 @@ library_name: transformers
 A frontier video understanding model developed by FAIR, Meta, which extends the pretraining objectives of [VJEPA](https://ai.meta.com/blog/v-jepa-yann-lecun-ai-model-video-joint-embedding-predictive-architecture/), resulting in state-of-the-art video understanding capabilities, leveraging data and model sizes at scale.
 The code is released [in this repository](https://github.com/facebookresearch/vjepa2).
 
-<img src="https://
+<img src="https://dl.fbaipublicfiles.com/vjepa2/vjepa2-pretrain.gif">
 
 ## Installation
 
@@ -84,4 +83,4 @@ Rabbat, Michael and Ballas, Nicolas},
 institution={FAIR at Meta},
 year={2025}
 }
-```
+```
````
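The README changes above edit the YAML front-matter block that configures the Hub model card (keys such as `pipeline_tag`, `tags`, and the removed `library_name`). As a minimal sketch of what that block encodes, the following parses a simplified front matter with a hand-rolled parser; this is illustrative only and is not the Hub's actual metadata parser, and the `parse_front_matter` helper and inline `README` snippet are assumptions for the example.

```python
# Minimal sketch: extract key/value and list entries from the YAML-style
# front matter between the leading "---" fences of a model-card README.
# Reflects the post-diff state: `library_name: transformers` is absent.
README = """\
---
license: mit
pipeline_tag: video-classification
tags:
- video
---

# V-JEPA 2
"""

def parse_front_matter(text: str) -> dict:
    """Parse simple scalar and list entries from a front-matter block."""
    lines = text.splitlines()
    assert lines[0] == "---", "front matter must start with ---"
    meta, current_list = {}, None
    for line in lines[1:]:
        if line == "---":        # closing fence ends the block
            break
        if line.startswith("- ") and current_list is not None:
            meta[current_list].append(line[2:].strip())
        elif ":" in line:
            key, _, value = line.partition(":")
            key, value = key.strip(), value.strip()
            if value:            # scalar entry, e.g. pipeline_tag
                meta[key] = value
                current_list = None
            else:                # bare "key:" starts a list, e.g. tags
                meta[key] = []
                current_list = key
    return meta

meta = parse_front_matter(README)
print(meta["pipeline_tag"])  # video-classification
```

After this PR's change the parsed metadata carries no `library_name` key, which is how the removal in the diff would surface to tooling that reads the card.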
|
notebook_finetuning.ipynb (DELETED)

The diff for this file is too large to render; see the raw diff.