Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks (arXiv:1908.10084)
This is a sentence-transformers model finetuned from Qwen/Qwen3-Embedding-0.6B. It maps sentences & paragraphs to a 1024-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more.
Full Model Architecture:
SentenceTransformer(
(0): Transformer({'max_seq_length': 1024, 'do_lower_case': False, 'architecture': 'Qwen3Model'})
(1): Pooling({'word_embedding_dimension': 1024, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': False, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': True, 'include_prompt': True})
(2): Normalize()
)
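The Pooling module above uses last-token pooling (`pooling_mode_lasttoken: True`) followed by a Normalize module. A minimal pure-Python sketch of what those two steps compute, using toy token vectors (the helper name and inputs are illustrative, not the library's internals):

```python
import math

def last_token_pool(token_vectors, attention_mask):
    """Pick the vector of the last non-padded token, then L2-normalize it
    (mirroring pooling_mode_lasttoken=True followed by Normalize())."""
    last = max(i for i, m in enumerate(attention_mask) if m == 1)
    vec = token_vectors[last]
    norm = math.sqrt(sum(x * x for x in vec))
    return [x / norm for x in vec]

# Toy example: 3 tokens (the last one is padding), 2-dimensional vectors
tokens = [[1.0, 0.0], [3.0, 4.0], [9.0, 9.0]]
mask = [1, 1, 0]
print(last_token_pool(tokens, mask))  # [0.6, 0.8]
```

Because of the final normalization, every embedding has unit length, so dot product and cosine similarity coincide.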
First install the Sentence Transformers library:
pip install -U sentence-transformers
Then you can load this model and run inference.
from sentence_transformers import SentenceTransformer
# Download from the 🤗 Hub
model = SentenceTransformer("sentence_transformers_model_id")
# Run inference
queries = [
"Instruct: \nQuery: 'user': Create a C++ program that multiplies two numbers and then calculates the factorial of the result., 'assistant': Sure! I will write a C++ program that multiplies two numbers and then calculates the factorial of the result. Here's the code:\n\n```c++\n#include <iostream>\n\nint factorial(int n) {\n if(n == 0 || n == 1) {\n return 1;\n }\n return n * factorial(n - 1);\n}\n\nint main() {\n int num1, num2, product;\n std::cout << \"Enter the first number: \";\n std::cin >> num1;\n std::cout << \"Enter the second number: \";\n std::cin >> num2;\n \n product = num1 * num2;\n std::cout << \"Product: \" << product << std::endl;\n \n int factorialResult = factorial(product);\n std::cout << \"Factorial of the product: \" << factorialResult << std::endl;\n \n return 0;\n}\n```\n\nThis program takes two numbers as input, multiplies them, and then calculates the factorial of the product. The factorial function is a recursive function that returns the factorial of a given number. The main function prompts the user to enter the numbers, performs the multiplication, and displays the product and the factorial of the product.\n\nPlease let me know if you need any further assistance!, 'user': The factorial function could be optimized for larger inputs. Also, consider adding error handling for negative inputs.",
]
documents = [
'Instruct: \nQuery: You\'re correct. The factorial function could potentially be optimized for larger inputs using dynamic programming to avoid stack overflow from deep recursion. Negative input handling is also crucial to prevent incorrect results.\n\nHere\'s the optimized code:\n\n```c++\n#include <iostream>\n#include <vector>\n\nlong long factorial(int n) {\n if(n < 0) {\n throw std::invalid_argument("Negative input not allowed");\n }\n \n std::vector<long long> fact(n + 1);\n fact[0] = 1;\n for(int i = 1; i <= n; i++) {\n fact[i] = i * fact[i - 1];\n }\n \n return fact[n];\n}\n\nint main() {\n try {\n int num1, num2, product;\n std::cout << "Enter the first number: ";\n std::cin >> num1;\n\n if(num1 < 0) throw std::invalid_argument("Negative input not allowed");\n\n std::cout << "Enter the second number: ";\n std::cin >> num2;\n\n if(num2 < 0) throw std::invalid_argument("Negative input not allowed");\n\n product = num1 * num2;\n std::cout << "Product: " << product << std::endl;\n\n long long factorialResult = factorial(product);\n std::cout << "Factorial of the product: " << factorialResult << std::endl;\n\n return 0;\n\n } catch(const std::invalid_argument& e){\n std::cerr << e.what() << std::endl;\n return 1;\n }\n}\n```\nThis code now includes error handling for negative inputs and improved performance for the factorial calculation via dynamic programming. The factorial function calculates each factorial from 0 to n once and stores it in a \'fact\' vector. So if the factorial of a certain number has already been calculated, it is directly retrieved from the \'fact\' vector, significantly reducing the number of calculations needed for larger inputs.\n\nPlease note that I currently can\'t execute C++ code as I\'m designed to execute Python code and I can\'t test C++. You\'d need to compile and run the updated code in your C++ development environment.',
"Instruct: \nQuery: SELECT hotel_name, q1_2022_views FROM virtual_tour_stats WHERE region = 'APAC' ORDER BY q1_2022_views DESC LIMIT 5;",
'Instruct: \nQuery: x=int(input())\ndef s(a):\n r=0\n while a>0:\n r+=a%10\n a//=10\n return r\ndef d(a,b):\n r=0\n for i in range(6):\n if a%10!=b%10:\n r += 1\n a//=10\n b//=10\n return r\nc=6\nfor i in range(1000000):\n if s(i%1000)==s(i//1000):\n c=min(c,d(x,i))\nprint(c)',
]
query_embeddings = model.encode_query(queries)
document_embeddings = model.encode_document(documents)
print(query_embeddings.shape, document_embeddings.shape)
# [1, 1024] [3, 1024]
# Get the similarity scores for the embeddings
similarities = model.similarity(query_embeddings, document_embeddings)
print(similarities)
# tensor([[ 0.8128, -0.0636, 0.0845]])
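Each row of the similarity matrix can be turned into a document ranking directly. A small sketch using the scores printed above (indices refer to the order of the `documents` list):

```python
# Rank the three documents for the single query by similarity score.
scores = [0.8128, -0.0636, 0.0845]  # the tensor row printed above
ranking = sorted(range(len(scores)), key=lambda i: scores[i], reverse=True)
print(ranking)  # [0, 2, 1] -> the C++ factorial answer is the best match
```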
Columns: anchor and positive

| | anchor | positive |
|---|---|---|
| type | string | string |
| details | | |
| anchor | positive |
|---|---|
| Instruct: | Instruct: |
| Instruct: | Instruct: |
| Instruct: | Instruct: |
Loss: MultipleNegativesRankingLoss with these parameters:
{
"scale": 20.0,
"similarity_fct": "cos_sim",
"gather_across_devices": false
}
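MultipleNegativesRankingLoss treats every other positive in the batch as a negative for a given anchor: the similarity matrix is scaled (here by 20) and fed to cross-entropy with the matching pair on the diagonal. A minimal pure-Python sketch of that computation on toy vectors (this is an illustration, not the library's training code):

```python
import math

def cos_sim(a, b):
    """Cosine similarity between two vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

def mnr_loss(anchors, positives, scale=20.0):
    """Cross-entropy over scaled cosine similarities; for anchor i the
    'correct class' is positive i, all other positives act as negatives."""
    total = 0.0
    for i, a in enumerate(anchors):
        logits = [scale * cos_sim(a, p) for p in positives]
        log_z = math.log(sum(math.exp(l) for l in logits))
        total += log_z - logits[i]  # -log softmax at the true index
    return total / len(anchors)

anchors = [[1.0, 0.0], [0.0, 1.0]]
positives = [[1.0, 0.0], [0.0, 1.0]]  # perfectly aligned pairs
print(mnr_loss(anchors, positives))   # near zero: pairs already match
```

The `scale` of 20 sharpens the softmax so well-separated pairs dominate the distribution; larger batches yield more in-batch negatives and a harder task.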
Non-Default Hyperparameters:

- learning_rate: 0.0001
- num_train_epochs: 1
- bf16: True

All Hyperparameters:

- overwrite_output_dir: False
- do_predict: False
- eval_strategy: no
- prediction_loss_only: True
- per_device_train_batch_size: 8
- per_device_eval_batch_size: 8
- per_gpu_train_batch_size: None
- per_gpu_eval_batch_size: None
- gradient_accumulation_steps: 1
- eval_accumulation_steps: None
- torch_empty_cache_steps: None
- learning_rate: 0.0001
- weight_decay: 0.0
- adam_beta1: 0.9
- adam_beta2: 0.999
- adam_epsilon: 1e-08
- max_grad_norm: 1.0
- num_train_epochs: 1
- max_steps: -1
- lr_scheduler_type: linear
- lr_scheduler_kwargs: {}
- warmup_ratio: 0.0
- warmup_steps: 0
- log_level: passive
- log_level_replica: warning
- log_on_each_node: True
- logging_nan_inf_filter: True
- save_safetensors: True
- save_on_each_node: False
- save_only_model: False
- restore_callback_states_from_checkpoint: False
- no_cuda: False
- use_cpu: False
- use_mps_device: False
- seed: 42
- data_seed: None
- jit_mode_eval: False
- use_ipex: False
- bf16: True
- fp16: False
- fp16_opt_level: O1
- half_precision_backend: auto
- bf16_full_eval: False
- fp16_full_eval: False
- tf32: None
- local_rank: 0
- ddp_backend: None
- tpu_num_cores: None
- tpu_metrics_debug: False
- debug: []
- dataloader_drop_last: False
- dataloader_num_workers: 0
- dataloader_prefetch_factor: None
- past_index: -1
- disable_tqdm: False
- remove_unused_columns: True
- label_names: None
- load_best_model_at_end: False
- ignore_data_skip: False
- fsdp: []
- fsdp_min_num_params: 0
- fsdp_config: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}
- fsdp_transformer_layer_cls_to_wrap: None
- accelerator_config: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None}
- deepspeed: None
- label_smoothing_factor: 0.0
- optim: adamw_torch_fused
- optim_args: None
- adafactor: False
- group_by_length: False
- length_column_name: length
- ddp_find_unused_parameters: None
- ddp_bucket_cap_mb: None
- ddp_broadcast_buffers: False
- dataloader_pin_memory: True
- dataloader_persistent_workers: False
- skip_memory_metrics: True
- use_legacy_prediction_loop: False
- push_to_hub: False
- resume_from_checkpoint: None
- hub_model_id: None
- hub_strategy: every_save
- hub_private_repo: None
- hub_always_push: False
- hub_revision: None
- gradient_checkpointing: False
- gradient_checkpointing_kwargs: None
- include_inputs_for_metrics: False
- include_for_metrics: []
- eval_do_concat_batches: True
- fp16_backend: auto
- push_to_hub_model_id: None
- push_to_hub_organization: None
- mp_parameters: 
- auto_find_batch_size: False
- full_determinism: False
- torchdynamo: None
- ray_scope: last
- ddp_timeout: 1800
- torch_compile: False
- torch_compile_backend: None
- torch_compile_mode: None
- include_tokens_per_second: False
- include_num_input_tokens_seen: False
- neftune_noise_alpha: None
- optim_target_modules: None
- batch_eval_metrics: False
- eval_on_start: False
- use_liger_kernel: False
- liger_kernel_config: None
- eval_use_gather_object: False
- average_tokens_across_devices: False
- prompts: None
- batch_sampler: batch_sampler
- multi_dataset_batch_sampler: proportional
- router_mapping: {}
- learning_rate_mapping: {}

Training Logs:

| Epoch | Step | Training Loss |
|---|---|---|
| 0.2124 | 500 | 0.2082 |
| 0.4248 | 1000 | 0.1959 |
| 0.6372 | 1500 | 0.1597 |
| 0.8496 | 2000 | 0.111 |
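For reference, the non-default hyperparameters listed above map onto a training-arguments setup along these lines. This is a sketch assuming the sentence-transformers v3+ `SentenceTransformerTrainingArguments` API; `output_dir` is a hypothetical path:

```python
from sentence_transformers import SentenceTransformerTrainingArguments

args = SentenceTransformerTrainingArguments(
    output_dir="output",            # hypothetical path
    num_train_epochs=1,
    per_device_train_batch_size=8,
    learning_rate=1e-4,
    bf16=True,                      # bfloat16 mixed-precision training
)
```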
@inproceedings{reimers-2019-sentence-bert,
title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
author = "Reimers, Nils and Gurevych, Iryna",
booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
month = "11",
year = "2019",
publisher = "Association for Computational Linguistics",
url = "https://arxiv.org/abs/1908.10084",
}
@misc{henderson2017efficient,
title={Efficient Natural Language Response Suggestion for Smart Reply},
author={Matthew Henderson and Rami Al-Rfou and Brian Strope and Yun-hsuan Sung and Laszlo Lukacs and Ruiqi Guo and Sanjiv Kumar and Balint Miklos and Ray Kurzweil},
year={2017},
eprint={1705.00652},
archivePrefix={arXiv},
primaryClass={cs.CL}
}