MPNet base trained on AllNLI triplets
This is a sentence-transformers model finetuned from microsoft/mpnet-base on the all-nli dataset. It maps sentences & paragraphs to a 768-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more.
Model Details
Model Description
- Model Type: Sentence Transformer
- Base model: microsoft/mpnet-base
- Maximum Sequence Length: 512 tokens
- Output Dimensionality: 768 dimensions
- Similarity Function: Cosine Similarity
- Training Dataset: all-nli
- Language: en
- License: apache-2.0
Full Model Architecture
SentenceTransformer(
  (0): Transformer({'max_seq_length': 512, 'do_lower_case': False}) with Transformer model: MPNetModel
  (1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
)
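The pooling module averages MPNet's token embeddings into a single 768-dimensional sentence vector (pooling_mode_mean_tokens), with the attention mask keeping padding tokens out of the average. A minimal sketch of that mean pooling step, written against the base microsoft/mpnet-base encoder for illustration:

import torch
from transformers import AutoModel, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("microsoft/mpnet-base")
encoder = AutoModel.from_pretrained("microsoft/mpnet-base")

batch = tokenizer(
    ["A worker is looking out of a manhole."],
    padding=True, truncation=True, max_length=512, return_tensors="pt",
)
with torch.no_grad():
    token_embeddings = encoder(**batch).last_hidden_state  # (batch, seq_len, 768)

# Mean pooling: mask out padding, then average the remaining token vectors
mask = batch["attention_mask"].unsqueeze(-1).float()       # (batch, seq_len, 1)
sentence_embedding = (token_embeddings * mask).sum(1) / mask.sum(1).clamp(min=1e-9)
print(sentence_embedding.shape)  # torch.Size([1, 768])

Loading the finetuned checkpoint through SentenceTransformer, as shown below, performs these steps internally.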
Usage
Direct Usage (Sentence Transformers)
First install the Sentence Transformers library:
pip install -U sentence-transformers
Then you can load this model and run inference.
from sentence_transformers import SentenceTransformer

# Download the model from the Hugging Face Hub
model = SentenceTransformer("bingcheng9/mpnet-base-all-nli-triplet")

# Encode an (anchor, positive, negative)-style triplet of sentences
sentences = [
    'A construction worker peeking out of a manhole while his coworker sits on the sidewalk smiling.',
    'A worker is looking out of a manhole.',
    'The workers are both inside the manhole.',
]
embeddings = model.encode(sentences)
print(embeddings.shape)  # (3, 768)

# Get pairwise similarity scores between the embeddings
similarities = model.similarity(embeddings, embeddings)
print(similarities.shape)  # torch.Size([3, 3])
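Beyond pairwise similarity, the model can back the semantic search use case mentioned above. A minimal sketch using util.semantic_search from the same library; the corpus and query strings are illustrative:

from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("bingcheng9/mpnet-base-all-nli-triplet")

# Illustrative corpus; any list of strings works
corpus = [
    "A worker is looking out of a manhole.",
    "Two dogs are running across a field.",
    "A chef is preparing pasta in a kitchen.",
]
corpus_embeddings = model.encode(corpus, convert_to_tensor=True)

# Retrieve the top-2 corpus sentences for a query, ranked by cosine similarity
query_embedding = model.encode("Someone peeks out of a manhole.", convert_to_tensor=True)
hits = util.semantic_search(query_embedding, corpus_embeddings, top_k=2)[0]
for hit in hits:
    print(corpus[hit["corpus_id"]], round(hit["score"], 4))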
Evaluation
Metrics
Triplet (all-nli-dev)

| Metric             | Value  |
|:-------------------|:-------|
| cosine_accuracy    | 0.9156 |
| dot_accuracy       | 0.0848 |
| manhattan_accuracy | 0.9124 |
| euclidean_accuracy | 0.9113 |
| max_accuracy       | 0.9156 |
Triplet (all-nli-test)

| Metric             | Value  |
|:-------------------|:-------|
| cosine_accuracy    | 0.9262 |
| dot_accuracy       | 0.0726 |
| manhattan_accuracy | 0.9197 |
| euclidean_accuracy | 0.9201 |
| max_accuracy       | 0.9262 |
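Each accuracy is the fraction of (anchor, positive, negative) triplets for which the anchor scores closer to the positive than to the negative under the named similarity or distance; max_accuracy is the best of the four. A minimal sketch of the cosine variant, with a made-up one-triplet evaluation set:

from sentence_transformers import SentenceTransformer

model = SentenceTransformer("bingcheng9/mpnet-base-all-nli-triplet")

# Illustrative (anchor, positive, negative) triplets
triplets = [
    ("A worker is looking out of a manhole.",
     "A construction worker peeking out of a manhole while his coworker sits on the sidewalk smiling.",
     "The workers are both inside the manhole."),
]

correct = 0
for anchor, positive, negative in triplets:
    emb = model.encode([anchor, positive, negative])
    sims = model.similarity(emb[0:1], emb[1:3])  # cosine similarity by default
    if sims[0, 0] > sims[0, 1]:                  # anchor closer to positive?
        correct += 1

print(f"cosine_accuracy = {correct / len(triplets):.4f}")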
Training Details
Training Dataset
all-nli
Evaluation Dataset
all-nli
Training Hyperparameters
Non-Default Hyperparameters
eval_strategy: steps
per_device_train_batch_size: 16
per_device_eval_batch_size: 16
learning_rate: 2e-05
num_train_epochs: 1
warmup_ratio: 0.1
batch_sampler: no_duplicates
All Hyperparameters
overwrite_output_dir: False
do_predict: False
eval_strategy: steps
prediction_loss_only: True
per_device_train_batch_size: 16
per_device_eval_batch_size: 16
per_gpu_train_batch_size: None
per_gpu_eval_batch_size: None
gradient_accumulation_steps: 1
eval_accumulation_steps: None
torch_empty_cache_steps: None
learning_rate: 2e-05
weight_decay: 0.0
adam_beta1: 0.9
adam_beta2: 0.999
adam_epsilon: 1e-08
max_grad_norm: 1.0
num_train_epochs: 1
max_steps: -1
lr_scheduler_type: linear
lr_scheduler_kwargs: {}
warmup_ratio: 0.1
warmup_steps: 0
log_level: passive
log_level_replica: warning
log_on_each_node: True
logging_nan_inf_filter: True
save_safetensors: True
save_on_each_node: False
save_only_model: False
restore_callback_states_from_checkpoint: False
no_cuda: False
use_cpu: False
use_mps_device: False
seed: 42
data_seed: None
jit_mode_eval: False
use_ipex: False
bf16: False
fp16: False
fp16_opt_level: O1
half_precision_backend: auto
bf16_full_eval: False
fp16_full_eval: False
tf32: None
local_rank: 0
ddp_backend: None
tpu_num_cores: None
tpu_metrics_debug: False
debug: []
dataloader_drop_last: False
dataloader_num_workers: 0
dataloader_prefetch_factor: None
past_index: -1
disable_tqdm: False
remove_unused_columns: True
label_names: None
load_best_model_at_end: False
ignore_data_skip: False
fsdp: []
fsdp_min_num_params: 0
fsdp_config: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}
fsdp_transformer_layer_cls_to_wrap: None
accelerator_config: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None}
deepspeed: None
label_smoothing_factor: 0.0
optim: adamw_torch
optim_args: None
adafactor: False
group_by_length: False
length_column_name: length
ddp_find_unused_parameters: None
ddp_bucket_cap_mb: None
ddp_broadcast_buffers: False
dataloader_pin_memory: True
dataloader_persistent_workers: False
skip_memory_metrics: True
use_legacy_prediction_loop: False
push_to_hub: False
resume_from_checkpoint: None
hub_model_id: None
hub_strategy: every_save
hub_private_repo: False
hub_always_push: False
gradient_checkpointing: False
gradient_checkpointing_kwargs: None
include_inputs_for_metrics: False
eval_do_concat_batches: True
fp16_backend: auto
push_to_hub_model_id: None
push_to_hub_organization: None
mp_parameters:
auto_find_batch_size: False
full_determinism: False
torchdynamo: None
ray_scope: last
ddp_timeout: 1800
torch_compile: False
torch_compile_backend: None
torch_compile_mode: None
dispatch_batches: None
split_batches: None
include_tokens_per_second: False
include_num_input_tokens_seen: False
neftune_noise_alpha: None
optim_target_modules: None
batch_eval_metrics: False
eval_on_start: False
use_liger_kernel: False
eval_use_gather_object: False
batch_sampler: no_duplicates
multi_dataset_batch_sampler: proportional
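Taken together, the non-default hyperparameters above map directly onto the SentenceTransformerTrainer API. A minimal sketch of how a comparable run could be launched, assuming the sentence-transformers/all-nli triplet subset on the Hub and MultipleNegativesRankingLoss (cited below); the output_dir is a hypothetical path, and the no_duplicates batch sampler keeps duplicate texts from appearing as in-batch negatives:

from datasets import load_dataset
from sentence_transformers import SentenceTransformer, SentenceTransformerTrainer
from sentence_transformers.losses import MultipleNegativesRankingLoss
from sentence_transformers.training_args import BatchSamplers, SentenceTransformerTrainingArguments

# Start from the base encoder; mean pooling is added automatically
model = SentenceTransformer("microsoft/mpnet-base")

# (anchor, positive, negative) columns
dataset = load_dataset("sentence-transformers/all-nli", "triplet")
loss = MultipleNegativesRankingLoss(model)

args = SentenceTransformerTrainingArguments(
    output_dir="mpnet-base-all-nli-triplet",  # hypothetical path
    num_train_epochs=1,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=16,
    learning_rate=2e-5,
    warmup_ratio=0.1,
    eval_strategy="steps",
    batch_sampler=BatchSamplers.NO_DUPLICATES,
)

trainer = SentenceTransformerTrainer(
    model=model,
    args=args,
    train_dataset=dataset["train"],
    eval_dataset=dataset["dev"],
    loss=loss,
)
trainer.train()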
Training Logs
| Epoch | Step | Training Loss | Validation Loss | all-nli-dev_max_accuracy | all-nli-test_max_accuracy |
|:-----:|:----:|:-------------:|:---------------:|:------------------------:|:-------------------------:|
| 0     | 0    | -             | -               | 0.6832                   | -                         |
| 0.016 | 100  | 3.0282        | 1.5784          | 0.7751                   | -                         |
| 0.032 | 200  | 1.2537        | 0.9115          | 0.7983                   | -                         |
| 0.048 | 300  | 1.435         | 0.7883          | 0.8095                   | -                         |
| 0.064 | 400  | 0.8952        | 0.7637          | 0.8112                   | -                         |
| 0.08  | 500  | 0.8482        | 0.8154          | 0.8086                   | -                         |
| 0.096 | 600  | 1.056         | 0.8993          | 0.8033                   | -                         |
| 0.112 | 700  | 0.967         | 0.8740          | 0.8007                   | -                         |
| 0.128 | 800  | 1.1139        | 1.0261          | 0.7930                   | -                         |
| 0.144 | 900  | 1.1765        | 0.9142          | 0.8127                   | -                         |
| 0.16  | 1000 | 1.1022        | 0.8580          | 0.7980                   | -                         |
| 0.176 | 1100 | 1.1095        | 1.0273          | 0.7889                   | -                         |
| 0.192 | 1200 | 1.0725        | 0.9443          | 0.7998                   | -                         |
| 0.208 | 1300 | 0.9075        | 0.8191          | 0.8070                   | -                         |
| 0.224 | 1400 | 0.7504        | 0.8069          | 0.8104                   | -                         |
| 0.24  | 1500 | 0.815         | 0.7824          | 0.8193                   | -                         |
| 0.256 | 1600 | 0.6089        | 0.8256          | 0.8168                   | -                         |
| 0.272 | 1700 | 0.8689        | 0.8470          | 0.8079                   | -                         |
| 0.288 | 1800 | 0.8359        | 0.8588          | 0.8103                   | -                         |
| 0.304 | 1900 | 0.8157        | 0.7955          | 0.8129                   | -                         |
| 0.32  | 2000 | 0.7511        | 0.7027          | 0.8467                   | -                         |
| 0.336 | 2100 | 0.603         | 0.7624          | 0.8467                   | -                         |
| 0.352 | 2200 | 0.6005        | 0.7071          | 0.8686                   | -                         |
| 0.368 | 2300 | 0.8079        | 0.7497          | 0.8492                   | -                         |
| 0.384 | 2400 | 0.7237        | 0.6801          | 0.8586                   | -                         |
| 0.4   | 2500 | 0.669         | 0.6595          | 0.8694                   | -                         |
| 0.416 | 2600 | 0.6013        | 0.6700          | 0.8587                   | -                         |
| 0.432 | 2700 | 0.8929        | 0.7217          | 0.8645                   | -                         |
| 0.448 | 2800 | 0.8627        | 0.6720          | 0.8521                   | -                         |
| 0.464 | 2900 | 0.8279        | 0.6561          | 0.8698                   | -                         |
| 0.48  | 3000 | 0.6893        | 0.6243          | 0.8692                   | -                         |
| 0.496 | 3100 | 0.7609        | 0.6052          | 0.8711                   | -                         |
| 0.512 | 3200 | 0.5704        | 0.6042          | 0.8677                   | -                         |
| 0.528 | 3300 | 0.6117        | 0.5398          | 0.8827                   | -                         |
| 0.544 | 3400 | 0.5231        | 0.5743          | 0.8797                   | -                         |
| 0.56  | 3500 | 0.5231        | 0.5817          | 0.8923                   | -                         |
| 0.576 | 3600 | 0.4825        | 0.5309          | 0.8911                   | -                         |
| 0.592 | 3700 | 0.5464        | 0.5261          | 0.8961                   | -                         |
| 0.608 | 3800 | 0.4846        | 0.5017          | 0.8979                   | -                         |
| 0.624 | 3900 | 0.4896        | 0.5280          | 0.8947                   | -                         |
| 0.64  | 4000 | 0.7499        | 0.5435          | 0.9061                   | -                         |
| 0.656 | 4100 | 0.916         | 0.5268          | 0.9060                   | -                         |
| 0.672 | 4200 | 0.8733        | 0.4855          | 0.9074                   | -                         |
| 0.688 | 4300 | 0.6963        | 0.4717          | 0.9105                   | -                         |
| 0.704 | 4400 | 0.5907        | 0.4567          | 0.9142                   | -                         |
| 0.72  | 4500 | 0.5768        | 0.4702          | 0.9111                   | -                         |
| 0.736 | 4600 | 0.6173        | 0.4491          | 0.9151                   | -                         |
| 0.752 | 4700 | 0.6802        | 0.4680          | 0.9124                   | -                         |
| 0.768 | 4800 | 0.6099        | 0.4372          | 0.9130                   | -                         |
| 0.784 | 4900 | 0.5689        | 0.4480          | 0.9066                   | -                         |
| 0.8   | 5000 | 0.6554        | 0.4603          | 0.9118                   | -                         |
| 0.816 | 5100 | 0.511         | 0.4356          | 0.9116                   | -                         |
| 0.832 | 5200 | 0.5725        | 0.4246          | 0.9092                   | -                         |
| 0.848 | 5300 | 0.5196        | 0.4359          | 0.9107                   | -                         |
| 0.864 | 5400 | 0.6112        | 0.4403          | 0.9104                   | -                         |
| 0.88  | 5500 | 0.5233        | 0.4236          | 0.9115                   | -                         |
| 0.896 | 5600 | 0.5467        | 0.4217          | 0.9127                   | -                         |
| 0.912 | 5700 | 0.6109        | 0.4199          | 0.9156                   | -                         |
| 0.928 | 5800 | 0.54          | 0.4077          | 0.9148                   | -                         |
| 0.944 | 5900 | 0.6739        | 0.4111          | 0.9145                   | -                         |
| 0.96  | 6000 | 0.723         | 0.4170          | 0.9154                   | -                         |
| 0.976 | 6100 | 0.6753        | 0.4162          | 0.9154                   | -                         |
| 0.992 | 6200 | 0.0591        | 0.4157          | 0.9156                   | -                         |
| 1.0   | 6250 | -             | -               | -                        | 0.9262                    |
Framework Versions
- Python: 3.12.4
- Sentence Transformers: 3.1.1
- Transformers: 4.45.2
- PyTorch: 2.2.2
- Accelerate: 0.26.0
- Datasets: 3.0.2
- Tokenizers: 0.20.1
Citation
BibTeX
Sentence Transformers
@inproceedings{reimers-2019-sentence-bert,
    title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
    author = "Reimers, Nils and Gurevych, Iryna",
    booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
    month = "11",
    year = "2019",
    publisher = "Association for Computational Linguistics",
    url = "https://arxiv.org/abs/1908.10084",
}
MultipleNegativesRankingLoss
@misc{henderson2017efficient,
    title={Efficient Natural Language Response Suggestion for Smart Reply},
    author={Matthew Henderson and Rami Al-Rfou and Brian Strope and Yun-hsuan Sung and Laszlo Lukacs and Ruiqi Guo and Sanjiv Kumar and Balint Miklos and Ray Kurzweil},
    year={2017},
    eprint={1705.00652},
    archivePrefix={arXiv},
    primaryClass={cs.CL}
}