SentenceTransformer based on ibm-granite/granite-embedding-english-r2

This is a sentence-transformers model finetuned from ibm-granite/granite-embedding-english-r2. It maps sentences & paragraphs to a 768-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more.

Model Details

Model Description

  • Model Type: Sentence Transformer
  • Base model: ibm-granite/granite-embedding-english-r2
  • Maximum Sequence Length: 512 tokens
  • Output Dimensionality: 768 dimensions
  • Similarity Function: Cosine Similarity

Full Model Architecture

SentenceTransformer(
  (0): Transformer({'max_seq_length': 512, 'do_lower_case': False, 'architecture': 'ModernBertModel'})
  (1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': True, 'pooling_mode_mean_tokens': False, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
)
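
Inputs longer than 512 tokens are truncated, and the sentence embedding is taken from the CLS token of the ModernBertModel backbone (pooling_mode_cls_token). Below is a minimal sketch for checking these settings programmatically after loading the model; the values in the comments assume the configuration shown above.

from sentence_transformers import SentenceTransformer

model = SentenceTransformer("shatonix/granite-embedding-math-cs")

# Module 0 is the ModernBERT backbone, module 1 the CLS pooling layer.
print(model.get_max_seq_length())                # 512
print(model.get_sentence_embedding_dimension())  # 768
print(model[1].get_pooling_mode_str())           # cls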

Usage

Direct Usage (Sentence Transformers)

First install the Sentence Transformers library:

pip install -U sentence-transformers

Then you can load this model and run inference.

from sentence_transformers import SentenceTransformer

# Download from the 🤗 Hub
model = SentenceTransformer("shatonix/granite-embedding-math-cs")
# Run inference
sentences = [
    'Calculate $(-1)^{47} + 2^{(3^3+4^2-6^2)}$.',
    'Context: \nAnswer: 127',
    '4750',
]
embeddings = model.encode(sentences)
print(embeddings.shape)
# (3, 768)

# Get the similarity scores for the embeddings
similarities = model.similarity(embeddings, embeddings)
print(similarities)
# tensor([[ 1.0000,  0.5650, -0.0154],
#         [ 0.5650,  1.0000, -0.0246],
#         [-0.0154, -0.0246,  1.0000]])
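
The embeddings can also be used directly for semantic search over a corpus. The sketch below uses an illustrative query and corpus (not taken from the training data) and ranks the corpus by cosine similarity with util.semantic_search.

from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("shatonix/granite-embedding-math-cs")

# Illustrative corpus and query; replace with your own documents.
corpus = [
    "Context: \nAnswer: 127",
    "Insertion Sort achieves O(n) time on nearly sorted input because few inversions remain.",
    "A Ruby service class that creates a CI project and raises when the project dump is empty.",
]
query = "Why does Insertion Sort work well on nearly sorted data?"

corpus_embeddings = model.encode(corpus, convert_to_tensor=True)
query_embedding = model.encode([query], convert_to_tensor=True)

# Rank the corpus by cosine similarity to the query and keep the top 2 hits.
hits = util.semantic_search(query_embedding, corpus_embeddings, top_k=2)[0]
for hit in hits:
    print(f"{hit['score']:.4f}  {corpus[hit['corpus_id']]}")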

Evaluation

Metrics

Information Retrieval (dim_768)

Metric Value
cosine_accuracy@1 0.626
cosine_accuracy@3 0.706
cosine_accuracy@5 0.726
cosine_accuracy@10 0.758
cosine_precision@1 0.626
cosine_precision@3 0.2353
cosine_precision@5 0.1452
cosine_precision@10 0.0758
cosine_recall@1 0.626
cosine_recall@3 0.706
cosine_recall@5 0.726
cosine_recall@10 0.758
cosine_ndcg@10 0.6916
cosine_mrr@10 0.6704
cosine_map@100 0.6751

Information Retrieval (dim_512)

Metric Value
cosine_accuracy@1 0.636
cosine_accuracy@3 0.7
cosine_accuracy@5 0.724
cosine_accuracy@10 0.758
cosine_precision@1 0.636
cosine_precision@3 0.2333
cosine_precision@5 0.1448
cosine_precision@10 0.0758
cosine_recall@1 0.636
cosine_recall@3 0.7
cosine_recall@5 0.724
cosine_recall@10 0.758
cosine_ndcg@10 0.694
cosine_mrr@10 0.6739
cosine_map@100 0.6785

Information Retrieval (dim_256)

Metric Value
cosine_accuracy@1 0.638
cosine_accuracy@3 0.698
cosine_accuracy@5 0.712
cosine_accuracy@10 0.75
cosine_precision@1 0.638
cosine_precision@3 0.2327
cosine_precision@5 0.1424
cosine_precision@10 0.075
cosine_recall@1 0.638
cosine_recall@3 0.698
cosine_recall@5 0.712
cosine_recall@10 0.75
cosine_ndcg@10 0.6915
cosine_mrr@10 0.6731
cosine_map@100 0.6781

Information Retrieval (dim_128)

Metric Value
cosine_accuracy@1 0.636
cosine_accuracy@3 0.698
cosine_accuracy@5 0.716
cosine_accuracy@10 0.74
cosine_precision@1 0.636
cosine_precision@3 0.2327
cosine_precision@5 0.1432
cosine_precision@10 0.074
cosine_recall@1 0.636
cosine_recall@3 0.698
cosine_recall@5 0.716
cosine_recall@10 0.74
cosine_ndcg@10 0.6863
cosine_mrr@10 0.6693
cosine_map@100 0.6739

Information Retrieval (dim_64)

Metric Value
cosine_accuracy@1 0.628
cosine_accuracy@3 0.692
cosine_accuracy@5 0.714
cosine_accuracy@10 0.734
cosine_precision@1 0.628
cosine_precision@3 0.2307
cosine_precision@5 0.1428
cosine_precision@10 0.0734
cosine_recall@1 0.628
cosine_recall@3 0.692
cosine_recall@5 0.714
cosine_recall@10 0.734
cosine_ndcg@10 0.6806
cosine_mrr@10 0.6635
cosine_map@100 0.6681
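
The five tables above correspond to the Matryoshka dimensions the model was trained with: 768, 512, 256, 128 and 64 (dim_768 through dim_64 in the Training Logs below). Because of the MatryoshkaLoss objective (see Training Details), embeddings can be truncated to a smaller dimension with only a modest drop in retrieval quality. A minimal sketch of encoding at a reduced dimension, using the truncate_dim argument of SentenceTransformer:

from sentence_transformers import SentenceTransformer

# Truncate all output embeddings to the first 256 Matryoshka dimensions.
model_256 = SentenceTransformer("shatonix/granite-embedding-math-cs", truncate_dim=256)

embeddings = model_256.encode(["Calculate $(-1)^{47} + 2^{(3^3+4^2-6^2)}$."])
print(embeddings.shape)  # (1, 256)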

Training Details

Training Dataset

Unnamed Dataset

  • Size: 4,500 training samples
  • Columns: anchor, positive, and id
  • Approximate statistics based on the first 1000 samples:
    • anchor: string; min 8 tokens, mean 80.08 tokens, max 512 tokens
    • positive: string; min 9 tokens, mean 165.53 tokens, max 512 tokens
    • id: string; min 3 tokens, mean 3.81 tokens, max 4 tokens
  • Samples:
    Sample 1
      anchor: Stella’s antique shop has 3 dolls, 2 clocks and 5 glasses for sale. She sells the dolls for $5 each. The clocks are priced at $15 each. The glasses are priced at $4 each. If she spent $40 to buy everything and she sells all of her merchandise, how much profit will she make?
      positive: Context:
        Answer: 25
      id: 3430
    Sample 2
      anchor: You are tasked with creating a Ruby program that defines a service for creating a project in a Continuous Integration (CI) system. The service should be able to execute with valid parameters and handle specific scenarios.

        The program should include the following:
        - A class called Ci::CreateProjectService that defines the service for creating a project.
        - A method within the Ci::CreateProjectService class called execute that takes in three parameters: current_user (representing the current user), project (representing the project to be created), and ci_origin_project (optional, representing the project to use as a template for settings and jobs).
        - The execute method should handle the following scenarios:
        1. When executed with valid parameters, it should return a new instance of Ci::Project that is persisted.
        2. When executed without a project dump (empty string), it should raise an exception.
        3. When executed with a ci_origin_project for forking, it should use ...
      positive: Context:
        Answer: ruby<br>class Ci::CreateProjectService<br> def execute(current_user, project, ci_origin_project = nil)<br> if project.empty?<br> raise StandardError, 'Project dump is required'<br> end<br><br> new_project = Ci::Project.new<br> new_project.save<br><br> if ci_origin_project<br> new_project.shared_runners_enabled = ci_origin_project.shared_runners_enabled<br> new_project.public = ci_origin_project.public<br> new_project.allow_git_fetch = ci_origin_project.allow_git_fetch<br> end<br><br> new_project<br> end<br>end<br>
      id: 656
    Sample 3
      anchor: Why is the Insertion Sort algorithm considered optimal for nearly sorted datasets, and how does its time complexity compare to other quadratic sorting algorithms?
      positive: Context:
        Answer: Insertion Sort operates in O(n²) time complexity in the worst case, but for nearly sorted datasets, it achieves O(n) time complexity. This is because it only requires a minimal number of swaps to place elements in order. For datasets where most elements are already in their correct positions, the number of inversions (pairs out of order) is small, reducing the number of comparisons and swaps. This contrasts with other quadratic algorithms like Selection Sort, which must scan the entire dataset for each element, leading to O(n²) operations regardless of initial order. The efficiency of Insertion Sort for nearly sorted data stems from its ability to leverage existing order, making it a better choice for such scenarios.
      id: 1305
  • Loss: MatryoshkaLoss with these parameters:
    {
        "loss": "MultipleNegativesRankingLoss",
        "matryoshka_dims": [
            768,
            512,
            256,
            128,
            64
        ],
        "matryoshka_weights": [
            1,
            1,
            1,
            1,
            1
        ],
        "n_dims_per_step": -1
    }
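
Below is a sketch of how this objective is typically constructed with sentence-transformers: MultipleNegativesRankingLoss (in-batch negatives over anchor/positive pairs) wrapped in MatryoshkaLoss with the dimensions and weights listed above. The single training pair is illustrative only, not the actual dataset.

from datasets import Dataset
from sentence_transformers import SentenceTransformer
from sentence_transformers.losses import MatryoshkaLoss, MultipleNegativesRankingLoss

model = SentenceTransformer("ibm-granite/granite-embedding-english-r2")

# Illustrative anchor/positive pair in the same shape as the training data.
train_dataset = Dataset.from_dict({
    "anchor": ["Calculate $(-1)^{47} + 2^{(3^3+4^2-6^2)}$."],
    "positive": ["Context: \nAnswer: 127"],
})

# In-batch negatives loss, applied at every Matryoshka dimension with equal weight.
base_loss = MultipleNegativesRankingLoss(model)
loss = MatryoshkaLoss(
    model,
    base_loss,
    matryoshka_dims=[768, 512, 256, 128, 64],
    matryoshka_weights=[1, 1, 1, 1, 1],
)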
    

Training Hyperparameters

Non-Default Hyperparameters

  • eval_strategy: epoch
  • per_device_train_batch_size: 64
  • per_device_eval_batch_size: 64
  • gradient_accumulation_steps: 2
  • num_train_epochs: 10
  • lr_scheduler_type: cosine
  • warmup_ratio: 0.1
  • bf16: True
  • tf32: True
  • dataloader_num_workers: 4
  • load_best_model_at_end: True
  • batch_sampler: no_duplicates
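
These settings map onto SentenceTransformerTrainingArguments roughly as in the sketch below. output_dir is a placeholder, and save_strategy="epoch" is an assumption (load_best_model_at_end requires the save and eval strategies to match); the card does not state it explicitly.

from sentence_transformers import SentenceTransformerTrainingArguments
from sentence_transformers.training_args import BatchSamplers

args = SentenceTransformerTrainingArguments(
    output_dir="granite-embedding-math-cs",     # placeholder output directory
    eval_strategy="epoch",
    save_strategy="epoch",                      # assumed; must match eval_strategy for load_best_model_at_end
    per_device_train_batch_size=64,
    per_device_eval_batch_size=64,
    gradient_accumulation_steps=2,
    num_train_epochs=10,
    lr_scheduler_type="cosine",
    warmup_ratio=0.1,
    bf16=True,                                  # assumes an Ampere-class (or newer) GPU
    tf32=True,
    dataloader_num_workers=4,
    load_best_model_at_end=True,
    batch_sampler=BatchSamplers.NO_DUPLICATES,  # no duplicate texts within a batch (important for MNRL)
)

# The arguments are then passed to SentenceTransformerTrainer together with the
# model, dataset, loss, and evaluator, e.g.:
# trainer = SentenceTransformerTrainer(model=model, args=args,
#     train_dataset=train_dataset, loss=loss, evaluator=evaluator)
# trainer.train()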

All Hyperparameters

  • overwrite_output_dir: False
  • do_predict: False
  • eval_strategy: epoch
  • prediction_loss_only: True
  • per_device_train_batch_size: 64
  • per_device_eval_batch_size: 64
  • per_gpu_train_batch_size: None
  • per_gpu_eval_batch_size: None
  • gradient_accumulation_steps: 2
  • eval_accumulation_steps: None
  • torch_empty_cache_steps: None
  • learning_rate: 5e-05
  • weight_decay: 0.0
  • adam_beta1: 0.9
  • adam_beta2: 0.999
  • adam_epsilon: 1e-08
  • max_grad_norm: 1.0
  • num_train_epochs: 10
  • max_steps: -1
  • lr_scheduler_type: cosine
  • lr_scheduler_kwargs: {}
  • warmup_ratio: 0.1
  • warmup_steps: 0
  • log_level: passive
  • log_level_replica: warning
  • log_on_each_node: True
  • logging_nan_inf_filter: True
  • save_safetensors: True
  • save_on_each_node: False
  • save_only_model: False
  • restore_callback_states_from_checkpoint: False
  • no_cuda: False
  • use_cpu: False
  • use_mps_device: False
  • seed: 42
  • data_seed: None
  • jit_mode_eval: False
  • bf16: True
  • fp16: False
  • fp16_opt_level: O1
  • half_precision_backend: auto
  • bf16_full_eval: False
  • fp16_full_eval: False
  • tf32: True
  • local_rank: 0
  • ddp_backend: None
  • tpu_num_cores: None
  • tpu_metrics_debug: False
  • debug: []
  • dataloader_drop_last: False
  • dataloader_num_workers: 4
  • dataloader_prefetch_factor: None
  • past_index: -1
  • disable_tqdm: False
  • remove_unused_columns: True
  • label_names: None
  • load_best_model_at_end: True
  • ignore_data_skip: False
  • fsdp: []
  • fsdp_min_num_params: 0
  • fsdp_config: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}
  • fsdp_transformer_layer_cls_to_wrap: None
  • accelerator_config: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None}
  • parallelism_config: None
  • deepspeed: None
  • label_smoothing_factor: 0.0
  • optim: adamw_torch_fused
  • optim_args: None
  • adafactor: False
  • group_by_length: False
  • length_column_name: length
  • project: huggingface
  • trackio_space_id: trackio
  • ddp_find_unused_parameters: None
  • ddp_bucket_cap_mb: None
  • ddp_broadcast_buffers: False
  • dataloader_pin_memory: True
  • dataloader_persistent_workers: False
  • skip_memory_metrics: True
  • use_legacy_prediction_loop: False
  • push_to_hub: False
  • resume_from_checkpoint: None
  • hub_model_id: None
  • hub_strategy: every_save
  • hub_private_repo: None
  • hub_always_push: False
  • hub_revision: None
  • gradient_checkpointing: False
  • gradient_checkpointing_kwargs: None
  • include_inputs_for_metrics: False
  • include_for_metrics: []
  • eval_do_concat_batches: True
  • fp16_backend: auto
  • push_to_hub_model_id: None
  • push_to_hub_organization: None
  • mp_parameters:
  • auto_find_batch_size: False
  • full_determinism: False
  • torchdynamo: None
  • ray_scope: last
  • ddp_timeout: 1800
  • torch_compile: False
  • torch_compile_backend: None
  • torch_compile_mode: None
  • include_tokens_per_second: False
  • include_num_input_tokens_seen: no
  • neftune_noise_alpha: None
  • optim_target_modules: None
  • batch_eval_metrics: False
  • eval_on_start: False
  • use_liger_kernel: False
  • liger_kernel_config: None
  • eval_use_gather_object: False
  • average_tokens_across_devices: True
  • prompts: None
  • batch_sampler: no_duplicates
  • multi_dataset_batch_sampler: proportional
  • router_mapping: {}
  • learning_rate_mapping: {}

Training Logs

Epoch Step Training Loss dim_768_cosine_ndcg@10 dim_512_cosine_ndcg@10 dim_256_cosine_ndcg@10 dim_128_cosine_ndcg@10 dim_64_cosine_ndcg@10
-1 -1 - 0.6227 0.6213 0.6163 0.6036 0.5905
0.2817 10 10.3671 - - - - -
0.5634 20 8.1302 - - - - -
0.8451 30 6.6781 - - - - -
1.0 36 - 0.6371 0.6373 0.6368 0.6384 0.6297
1.1127 40 5.6041 - - - - -
1.3944 50 5.3589 - - - - -
1.6761 60 5.2615 - - - - -
1.9577 70 5.1322 - - - - -
2.0 72 - 0.6584 0.6599 0.6567 0.6590 0.6588
2.2254 80 4.2222 - - - - -
2.5070 90 3.6282 - - - - -
2.7887 100 3.5652 - - - - -
3.0 108 - 0.6679 0.6724 0.6750 0.6699 0.6645
3.0563 110 3.1212 - - - - -
3.3380 120 1.8016 - - - - -
3.6197 130 1.8941 - - - - -
3.9014 140 1.8576 - - - - -
4.0 144 - 0.6900 0.6923 0.6937 0.6863 0.6771
4.1690 150 1.0872 - - - - -
4.4507 160 0.7482 - - - - -
4.7324 170 0.7307 - - - - -
5.0 180 0.8322 0.6909 0.6988 0.6947 0.6873 0.6800
5.2817 190 0.329 - - - - -
5.5634 200 0.3246 - - - - -
5.8451 210 0.274 - - - - -
6.0 216 - 0.6898 0.6929 0.6904 0.6900 0.6801
6.1127 220 0.2161 - - - - -
6.3944 230 0.1178 - - - - -
6.6761 240 0.1418 - - - - -
6.9577 250 0.1319 - - - - -
7.0 252 - 0.6920 0.6890 0.6910 0.6880 0.6789
7.2254 260 0.0979 - - - - -
7.5070 270 0.0653 - - - - -
7.7887 280 0.0852 - - - - -
8.0 288 - 0.6934 0.69 0.6934 0.6877 0.6825
8.0563 290 0.08 - - - - -
8.3380 300 0.0526 - - - - -
8.6197 310 0.066 - - - - -
8.9014 320 0.0549 - - - - -
9.0 324 - 0.6911 0.6929 0.6905 0.6858 0.6802
9.1690 330 0.0384 - - - - -
9.4507 340 0.0523 - - - - -
9.7324 350 0.0333 - - - - -
10.0 360 0.0488 0.6916 0.6940 0.6915 0.6863 0.6806
  • The bold row denotes the saved checkpoint.

Framework Versions

  • Python: 3.12.12
  • Sentence Transformers: 5.2.0
  • Transformers: 4.57.3
  • PyTorch: 2.9.1+cu128
  • Accelerate: 1.12.0
  • Datasets: 4.4.2
  • Tokenizers: 0.22.1

Citation

BibTeX

Sentence Transformers

@inproceedings{reimers-2019-sentence-bert,
    title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
    author = "Reimers, Nils and Gurevych, Iryna",
    booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
    month = "11",
    year = "2019",
    publisher = "Association for Computational Linguistics",
    url = "https://arxiv.org/abs/1908.10084",
}

MatryoshkaLoss

@misc{kusupati2024matryoshka,
    title={Matryoshka Representation Learning},
    author={Aditya Kusupati and Gantavya Bhatt and Aniket Rege and Matthew Wallingford and Aditya Sinha and Vivek Ramanujan and William Howard-Snyder and Kaifeng Chen and Sham Kakade and Prateek Jain and Ali Farhadi},
    year={2024},
    eprint={2205.13147},
    archivePrefix={arXiv},
    primaryClass={cs.LG}
}

MultipleNegativesRankingLoss

@misc{henderson2017efficient,
    title={Efficient Natural Language Response Suggestion for Smart Reply},
    author={Matthew Henderson and Rami Al-Rfou and Brian Strope and Yun-hsuan Sung and Laszlo Lukacs and Ruiqi Guo and Sanjiv Kumar and Balint Miklos and Ray Kurzweil},
    year={2017},
    eprint={1705.00652},
    archivePrefix={arXiv},
    primaryClass={cs.CL}
}