SentenceTransformer based on Qwen/Qwen3-Embedding-0.6B

This is a sentence-transformers model finetuned from Qwen/Qwen3-Embedding-0.6B. It maps sentences & paragraphs to a 1024-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more.

Model Details

Model Description

  • Model Type: Sentence Transformer
  • Base model: Qwen/Qwen3-Embedding-0.6B
  • Maximum Sequence Length: 1024 tokens
  • Output Dimensionality: 1024 dimensions
  • Similarity Function: Cosine Similarity

Full Model Architecture

SentenceTransformer(
  (0): Transformer({'max_seq_length': 1024, 'do_lower_case': False, 'architecture': 'Qwen3Model'})
  (1): Pooling({'word_embedding_dimension': 1024, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': False, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': True, 'include_prompt': True})
  (2): Normalize()
)
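The pooling module above uses last-token pooling (`pooling_mode_lasttoken: True`): the sentence embedding is the hidden state of the final non-padding token, which the `Normalize()` module then scales to unit length. A minimal numpy sketch of that pipeline, using toy dimensions (the real model produces 1024-dimensional vectors):

```python
import numpy as np

# Toy hidden states: batch of 2 sequences, 4 token positions, 3 dims
# (the real model uses 1024 dims and up to 1024 tokens).
hidden = np.arange(24, dtype=np.float64).reshape(2, 4, 3)
attention_mask = np.array([[1, 1, 1, 0],   # sequence 1: 3 real tokens + padding
                           [1, 1, 1, 1]])  # sequence 2: 4 real tokens

# Last-token pooling: pick the hidden state of the last non-padding token
last_idx = attention_mask.sum(axis=1) - 1               # [2, 3]
pooled = hidden[np.arange(hidden.shape[0]), last_idx]   # shape (2, 3)

# Normalize(): L2-normalize so cosine similarity reduces to a dot product
embeddings = pooled / np.linalg.norm(pooled, axis=1, keepdims=True)
print(embeddings.shape)  # (2, 3)
```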

Usage

Direct Usage (Sentence Transformers)

First install the Sentence Transformers library:

pip install -U sentence-transformers

Then you can load this model and run inference.

from sentence_transformers import SentenceTransformer

# Download from the 🤗 Hub
model = SentenceTransformer("elplaguister/qwen_xxx")
# Run inference
queries = [
    "Instruct: \nQuery:  'user': Create a C++ program that multiplies two numbers and then calculates the factorial of the result., 'assistant': Sure! I will write a C++ program that multiplies two numbers and then calculates the factorial of the result. Here's the code:\n\n```c++\n#include <iostream>\n\nint factorial(int n) {\n    if(n == 0 || n == 1) {\n        return 1;\n    }\n    return n * factorial(n - 1);\n}\n\nint main() {\n    int num1, num2, product;\n    std::cout << \"Enter the first number: \";\n    std::cin >> num1;\n    std::cout << \"Enter the second number: \";\n    std::cin >> num2;\n    \n    product = num1 * num2;\n    std::cout << \"Product: \" << product << std::endl;\n    \n    int factorialResult = factorial(product);\n    std::cout << \"Factorial of the product: \" << factorialResult << std::endl;\n    \n    return 0;\n}\n```\n\nThis program takes two numbers as input, multiplies them, and then calculates the factorial of the product. The factorial function is a recursive function that returns the factorial of a given number. The main function prompts the user to enter the numbers, performs the multiplication, and displays the product and the factorial of the product.\n\nPlease let me know if you need any further assistance!, 'user': The factorial function could be optimized for larger inputs. Also, consider adding error handling for negative inputs.",
]
documents = [
    'Instruct: \nQuery:  You\'re correct. The factorial function could potentially be optimized for larger inputs using dynamic programming to avoid stack overflow from deep recursion. Negative input handling is also crucial to prevent incorrect results.\n\nHere\'s the optimized code:\n\n```c++\n#include <iostream>\n#include <vector>\n\nlong long factorial(int n) {\n    if(n < 0) {\n        throw std::invalid_argument("Negative input not allowed");\n    }\n    \n    std::vector<long long> fact(n + 1);\n    fact[0] = 1;\n    for(int i = 1; i <= n; i++) {\n        fact[i] = i * fact[i - 1];\n    }\n    \n    return fact[n];\n}\n\nint main() {\n    try {\n        int num1, num2, product;\n        std::cout << "Enter the first number: ";\n        std::cin >> num1;\n\n        if(num1 < 0) throw std::invalid_argument("Negative input not allowed");\n\n        std::cout << "Enter the second number: ";\n        std::cin >> num2;\n\n        if(num2 < 0) throw std::invalid_argument("Negative input not allowed");\n\n        product = num1 * num2;\n        std::cout << "Product: " << product << std::endl;\n\n        long long factorialResult = factorial(product);\n        std::cout << "Factorial of the product: " << factorialResult << std::endl;\n\n        return 0;\n\n    } catch(const std::invalid_argument& e){\n        std::cerr << e.what() << std::endl;\n        return 1;\n    }\n}\n```\nThis code now includes error handling for negative inputs and improved performance for the factorial calculation via dynamic programming. The factorial function calculates each factorial from 0 to n once and stores it in a \'fact\' vector. So if the factorial of a certain number has already been calculated, it is directly retrieved from the \'fact\' vector, significantly reducing the number of calculations needed for larger inputs.\n\nPlease note that I currently can\'t execute C++ code as I\'m designed to execute Python code and I can\'t test C++. You\'d need to compile and run the updated code in your C++ development environment.',
    "Instruct: \nQuery:  SELECT hotel_name, q1_2022_views FROM virtual_tour_stats WHERE region = 'APAC' ORDER BY q1_2022_views DESC LIMIT 5;",
    'Instruct: \nQuery:  x=int(input())\ndef s(a):\n  r=0\n  while a>0:\n    r+=a%10\n    a//=10\n  return r\ndef d(a,b):\n  r=0\n  for i in range(6):\n    if a%10!=b%10:\n      r += 1\n    a//=10\n    b//=10\n  return r\nc=6\nfor i in range(1000000):\n  if s(i%1000)==s(i//1000):\n    c=min(c,d(x,i))\nprint(c)',
]
query_embeddings = model.encode_query(queries)
document_embeddings = model.encode_document(documents)
print(query_embeddings.shape, document_embeddings.shape)
# (1, 1024) (3, 1024)

# Get the similarity scores for the embeddings
similarities = model.similarity(query_embeddings, document_embeddings)
print(similarities)
# tensor([[ 0.8128, -0.0636,  0.0845]])
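Because the model ends with a `Normalize()` module, the cosine similarity computed by `model.similarity` is simply the matrix product of the (already unit-length) query and document embeddings. A small numpy sketch with random placeholder vectors illustrating the equivalence:

```python
import numpy as np

# Placeholder unit-norm "embeddings" (real ones come from model.encode_query /
# model.encode_document and are 1024-dimensional).
rng = np.random.default_rng(0)
q = rng.normal(size=(1, 8)); q /= np.linalg.norm(q, axis=1, keepdims=True)
d = rng.normal(size=(3, 8)); d /= np.linalg.norm(d, axis=1, keepdims=True)

# With unit vectors, cosine similarity is just a dot product
cosine = q @ d.T  # shape (1, 3), one score per document

# Check against the explicit cosine formula (dot product over norms)
explicit = (q @ d.T) / (np.linalg.norm(q, axis=1)[:, None]
                        * np.linalg.norm(d, axis=1)[None, :])
print(np.allclose(cosine, explicit))  # True
```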

Training Details

Training Dataset

Unnamed Dataset

  • Size: 18,825 training samples
  • Columns: anchor and positive
  • Approximate statistics based on the first 1000 samples:
    • anchor (string): min: 12 tokens, mean: 307.69 tokens, max: 1024 tokens
    • positive (string): min: 15 tokens, mean: 210.69 tokens, max: 1024 tokens
  • Samples:

    Sample 1
      anchor:
        Instruct:
        Query: 1d array in char datatype in python
      positive:
        Instruct:
        Query:
        def _convert_to_array(array_like, dtype):
            """
            Convert Matrix attributes which are array-like or buffer to array.
            """
            if isinstance(array_like, bytes):
                return np.frombuffer(array_like, dtype=dtype)
            return np.asarray(array_like, dtype=dtype)

    Sample 2
      anchor:
        Instruct:
        Query: What is the total volume of timber sold by each salesperson, sorted by salesperson?
        CREATE TABLE salesperson (salesperson_id INT, name TEXT, region TEXT); INSERT INTO salesperson (salesperson_id, name, region) VALUES (1, 'John Doe', 'North'), (2, 'Jane Smith', 'South'); CREATE TABLE timber_sales (sales_id INT, salesperson_id INT, volume REAL, sale_date DATE); INSERT INTO timber_sales (sales_id, salesperson_id, volume, sale_date) VALUES (1, 1, 120, '2021-01-01'), (2, 1, 150, '2021-02-01'), (3, 2, 180, '2021-01-01');
      positive:
        Instruct:
        Query: SELECT salesperson_id, name, SUM(volume) as total_volume FROM timber_sales JOIN salesperson ON timber_sales.salesperson_id = salesperson.salesperson_id GROUP BY salesperson_id, name ORDER BY total_volume DESC;

    Sample 3
      anchor:
        Instruct:
        Query: An accordion is a string (yes, in the real world accordions are musical instruments, but let's forget about it for a while) which can be represented as a concatenation of: an opening bracket (ASCII code $091$), a colon (ASCII code $058$), some (possibly zero) vertical line characters (ASCII code $124$), another colon, and a closing bracket (ASCII code $093$). The length of the accordion is the number of characters in it.

        For example, [::], [:||:] and [:|||:] are accordions having length $4$, $6$ and $7$. (:|:), {:||:}, [:], ]:||:[ are not accordions.

        You are given a string $s$. You want to transform it into an accordion by removing some (possibly zero) characters from it. Note that you may not insert new characters or reorder existing ones. Is it possible to obtain an accordion by removing characters from $s$, and if so, what is the maximum possible length of the result?

        -----Input-----

        The only line contains one string $s$ ($1 \le |s| \le 500000$). It consists...
      positive:
        Instruct:
        Query:
        s = input()
        n = len(s)
        ind = -1
        f = False
        for i in range(n):
            if s[i] == '[':
                f = True
            elif s[i] == ':':
                if f:
                    ind = i
                    break
        bind = -1
        f = False
        for i in range(n-1,-1,-1):
            if s[i] == ']':
                f = True
            elif s[i] == ':':
                if f:
                    bind = i
                    break
        # print(ind,bind)
        if ind == -1 or bind == -1:
            print(-1)
        elif ind >= bind:
            print(-1)
        else:
            ans = 4
            for i in range(ind+1,bind):
                if s[i] == '|':
                    ans += 1
            print(ans)
  • Loss: MultipleNegativesRankingLoss with these parameters:
    {
        "scale": 20.0,
        "similarity_fct": "cos_sim",
        "gather_across_devices": false
    }
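MultipleNegativesRankingLoss trains on (anchor, positive) pairs without explicit negatives: within a batch, every other pair's positive serves as an in-batch negative, and the scaled cosine similarities are scored with cross-entropy against the matching pair. An illustrative numpy sketch (not the library implementation) with random placeholder embeddings:

```python
import numpy as np

def log_softmax(x):
    # Numerically stable row-wise log-softmax
    m = x.max(axis=1, keepdims=True)
    return x - m - np.log(np.exp(x - m).sum(axis=1, keepdims=True))

def mnr_loss(anchors, positives, scale=20.0):
    # cos_sim: L2-normalize so the dot product is cosine similarity
    a = anchors / np.linalg.norm(anchors, axis=1, keepdims=True)
    p = positives / np.linalg.norm(positives, axis=1, keepdims=True)
    logits = scale * (a @ p.T)  # (batch, batch) similarity matrix
    # Cross-entropy: the diagonal (each anchor's own positive) is the label;
    # off-diagonal entries act as in-batch negatives
    return -np.mean(np.diag(log_softmax(logits)))

rng = np.random.default_rng(0)
anchors = rng.normal(size=(8, 16))
positives = anchors + 0.05 * rng.normal(size=(8, 16))  # positives near anchors
shuffled = np.roll(positives, 1, axis=0)               # mismatched pairs

# Matched pairs should yield a much lower loss than mismatched ones
print(mnr_loss(anchors, positives) < mnr_loss(anchors, shuffled))  # True
```

The `scale` of 20.0 sharpens the softmax so that small cosine gaps between the positive and the in-batch negatives translate into a strong gradient signal.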
    

Training Hyperparameters

Non-Default Hyperparameters

  • learning_rate: 0.0001
  • num_train_epochs: 1
  • bf16: True

All Hyperparameters

  • overwrite_output_dir: False
  • do_predict: False
  • eval_strategy: no
  • prediction_loss_only: True
  • per_device_train_batch_size: 8
  • per_device_eval_batch_size: 8
  • per_gpu_train_batch_size: None
  • per_gpu_eval_batch_size: None
  • gradient_accumulation_steps: 1
  • eval_accumulation_steps: None
  • torch_empty_cache_steps: None
  • learning_rate: 0.0001
  • weight_decay: 0.0
  • adam_beta1: 0.9
  • adam_beta2: 0.999
  • adam_epsilon: 1e-08
  • max_grad_norm: 1.0
  • num_train_epochs: 1
  • max_steps: -1
  • lr_scheduler_type: linear
  • lr_scheduler_kwargs: {}
  • warmup_ratio: 0.0
  • warmup_steps: 0
  • log_level: passive
  • log_level_replica: warning
  • log_on_each_node: True
  • logging_nan_inf_filter: True
  • save_safetensors: True
  • save_on_each_node: False
  • save_only_model: False
  • restore_callback_states_from_checkpoint: False
  • no_cuda: False
  • use_cpu: False
  • use_mps_device: False
  • seed: 42
  • data_seed: None
  • jit_mode_eval: False
  • use_ipex: False
  • bf16: True
  • fp16: False
  • fp16_opt_level: O1
  • half_precision_backend: auto
  • bf16_full_eval: False
  • fp16_full_eval: False
  • tf32: None
  • local_rank: 0
  • ddp_backend: None
  • tpu_num_cores: None
  • tpu_metrics_debug: False
  • debug: []
  • dataloader_drop_last: False
  • dataloader_num_workers: 0
  • dataloader_prefetch_factor: None
  • past_index: -1
  • disable_tqdm: False
  • remove_unused_columns: True
  • label_names: None
  • load_best_model_at_end: False
  • ignore_data_skip: False
  • fsdp: []
  • fsdp_min_num_params: 0
  • fsdp_config: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}
  • fsdp_transformer_layer_cls_to_wrap: None
  • accelerator_config: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None}
  • deepspeed: None
  • label_smoothing_factor: 0.0
  • optim: adamw_torch_fused
  • optim_args: None
  • adafactor: False
  • group_by_length: False
  • length_column_name: length
  • ddp_find_unused_parameters: None
  • ddp_bucket_cap_mb: None
  • ddp_broadcast_buffers: False
  • dataloader_pin_memory: True
  • dataloader_persistent_workers: False
  • skip_memory_metrics: True
  • use_legacy_prediction_loop: False
  • push_to_hub: False
  • resume_from_checkpoint: None
  • hub_model_id: None
  • hub_strategy: every_save
  • hub_private_repo: None
  • hub_always_push: False
  • hub_revision: None
  • gradient_checkpointing: False
  • gradient_checkpointing_kwargs: None
  • include_inputs_for_metrics: False
  • include_for_metrics: []
  • eval_do_concat_batches: True
  • fp16_backend: auto
  • push_to_hub_model_id: None
  • push_to_hub_organization: None
  • mp_parameters:
  • auto_find_batch_size: False
  • full_determinism: False
  • torchdynamo: None
  • ray_scope: last
  • ddp_timeout: 1800
  • torch_compile: False
  • torch_compile_backend: None
  • torch_compile_mode: None
  • include_tokens_per_second: False
  • include_num_input_tokens_seen: False
  • neftune_noise_alpha: None
  • optim_target_modules: None
  • batch_eval_metrics: False
  • eval_on_start: False
  • use_liger_kernel: False
  • liger_kernel_config: None
  • eval_use_gather_object: False
  • average_tokens_across_devices: False
  • prompts: None
  • batch_sampler: batch_sampler
  • multi_dataset_batch_sampler: proportional
  • router_mapping: {}
  • learning_rate_mapping: {}

Training Logs

Epoch    Step    Training Loss
0.2124   500     0.2082
0.4248   1000    0.1959
0.6372   1500    0.1597
0.8496   2000    0.1110

Framework Versions

  • Python: 3.12.3
  • Sentence Transformers: 5.1.0
  • Transformers: 4.55.0
  • PyTorch: 2.8.0+cu128
  • Accelerate: 1.10.1
  • Datasets: 3.6.0
  • Tokenizers: 0.21.4

Citation

BibTeX

Sentence Transformers

@inproceedings{reimers-2019-sentence-bert,
    title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
    author = "Reimers, Nils and Gurevych, Iryna",
    booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
    month = "11",
    year = "2019",
    publisher = "Association for Computational Linguistics",
    url = "https://arxiv.org/abs/1908.10084",
}

MultipleNegativesRankingLoss

@misc{henderson2017efficient,
    title={Efficient Natural Language Response Suggestion for Smart Reply},
    author={Matthew Henderson and Rami Al-Rfou and Brian Strope and Yun-hsuan Sung and Laszlo Lukacs and Ruiqi Guo and Sanjiv Kumar and Balint Miklos and Ray Kurzweil},
    year={2017},
    eprint={1705.00652},
    archivePrefix={arXiv},
    primaryClass={cs.CL}
}