Update pipeline_tag to `image-text-to-text`

#6 · opened by chiragjn
Files changed (1): README.md (+2 −1)
```diff
@@ -14,6 +14,7 @@ tags:
 - multimodal
 - vision
 - pytorch
+pipeline_tag: image-text-to-text
 ---
 
 ## ***See [our collection](https://huggingface.co/collections/unsloth/vision-multimodal-models-673eb9908fc2cb3deebd2fa3) for vision models including Llama 3.2, Llava, Qwen2-VL and Pixtral.***
@@ -72,4 +73,4 @@ The Meta Llama 3.2 collection of multilingual large language models (LLMs) is a
 
 **License:** Use of Llama 3.2 is governed by the [Llama 3.2 Community License](https://github.com/meta-llama/llama-models/blob/main/models/llama3_2/LICENSE) (a custom, commercial license agreement).
 
-Where to send questions or comments about the model Instructions on how to provide feedback or comments on the model can be found in the model [README](https://github.com/meta-llama/llama3). For more technical information about generation parameters and recipes for how to use Llama 3.1 in applications, please go [here](https://github.com/meta-llama/llama-recipes).
+Where to send questions or comments about the model Instructions on how to provide feedback or comments on the model can be found in the model [README](https://github.com/meta-llama/llama3). For more technical information about generation parameters and recipes for how to use Llama 3.1 in applications, please go [here](https://github.com/meta-llama/llama-recipes).
```
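The `pipeline_tag` added here lives in the model card's YAML front matter (the block between the leading `---` fences), which the Hub reads to pick the task and inference widget for the repo. As a minimal sketch of what this change produces, the snippet below hand-parses a reduced, assumed excerpt of the card's front matter using only the standard library (no PyYAML); `front_matter` is a hypothetical helper, not part of any Hub API:

```python
# Minimal front-matter check: extract the block between the leading "---"
# fences of a model card and confirm pipeline_tag is set. The readme text
# below is a reduced, assumed excerpt of the card this PR edits.

readme = """---
tags:
- multimodal
- vision
- pytorch
pipeline_tag: image-text-to-text
---

# Model card body
"""

def front_matter(text: str) -> dict:
    """Collect simple 'key: value' pairs from the YAML front matter."""
    lines = text.splitlines()
    assert lines[0] == "---", "card must start with a front-matter fence"
    end = lines[1:].index("---") + 1  # index of the closing fence
    meta = {}
    for line in lines[1:end]:
        # Skip list items ("- multimodal"); keep top-level key: value pairs.
        if ":" in line and not line.startswith("-"):
            key, _, value = line.partition(":")
            meta[key.strip()] = value.strip()
    return meta

meta = front_matter(readme)
print(meta["pipeline_tag"])  # image-text-to-text
```

A real model card can nest richer YAML (lists of datasets, eval results), so a full parser would use PyYAML; this sketch only illustrates where the new tag sits.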