siglip-finetuned-artdatasets

This model is a fine-tuned version of google/siglip-base-patch16-224 on an unknown dataset. It achieves the following results on the evaluation set:

  • Loss: 0.1192

Model description

More information needed

Intended uses & limitations

More information needed
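
Until the dataset and task details are filled in, the sketch below shows one plausible way to query the checkpoint, assuming it keeps the dual-encoder (zero-shot image-text) layout of the base google/siglip-base-patch16-224 model; the image file name and candidate labels are placeholders:

```python
from PIL import Image
import torch
from transformers import AutoModel, AutoProcessor

model = AutoModel.from_pretrained("juliadollis/siglip-finetuned-artdatasets")
processor = AutoProcessor.from_pretrained("juliadollis/siglip-finetuned-artdatasets")

image = Image.open("painting.jpg")  # placeholder file name
candidate_labels = ["a baroque painting", "an impressionist painting"]  # placeholder prompts

# SigLIP expects padding="max_length" for the text inputs
inputs = processor(text=candidate_labels, images=image,
                   padding="max_length", return_tensors="pt")

with torch.no_grad():
    outputs = model(**inputs)

# SigLIP scores each image-text pair independently with a sigmoid,
# rather than a softmax over all labels
probs = torch.sigmoid(outputs.logits_per_image)
print(dict(zip(candidate_labels, probs[0].tolist())))
```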

Training and evaluation data

More information needed

Training procedure

Training hyperparameters

The following hyperparameters were used during training (a sketch mapping them onto Hugging Face `TrainingArguments` follows the list):

  • learning_rate: 5e-06
  • train_batch_size: 64
  • eval_batch_size: 8
  • seed: 42
  • optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
  • lr_scheduler_type: linear
  • lr_scheduler_warmup_ratio: 0.01
  • num_epochs: 3
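
A minimal reconstruction of this configuration with Hugging Face `TrainingArguments` is given below; `output_dir` and any settings not listed above (logging and evaluation cadence, for example) are assumptions. The reported optimizer settings match the Adam defaults, so only the epsilon and betas are spelled out:

```python
from transformers import TrainingArguments

# Hedged reconstruction of the reported hyperparameters;
# output_dir is a placeholder, everything else mirrors the list above.
training_args = TrainingArguments(
    output_dir="siglip-finetuned-artdatasets",  # assumption
    learning_rate=5e-6,
    per_device_train_batch_size=64,
    per_device_eval_batch_size=8,
    seed=42,
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    lr_scheduler_type="linear",
    warmup_ratio=0.01,
    num_train_epochs=3,
)
```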

Training results

| Training Loss | Epoch  | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 1.458         | 0.3139 | 500  | 0.3211          |
| 1.1107        | 0.6277 | 1000 | 0.2249          |
| 0.928         | 0.9416 | 1500 | 0.1843          |
| 0.6952        | 1.2555 | 2000 | 0.1574          |
| 0.6393        | 1.5694 | 2500 | 0.1432          |
| 0.5702        | 1.8832 | 3000 | 0.1336          |
| 0.4676        | 2.1971 | 3500 | 0.1277          |
| 0.4588        | 2.5110 | 4000 | 0.1220          |
| 0.4416        | 2.8249 | 4500 | 0.1192          |

Framework versions

  • Transformers 4.44.2
  • PyTorch 2.5.1+cu124
  • Datasets 2.20.0
  • Tokenizers 0.19.1