---
tags:
- sentence-transformers
- cross-encoder
- reranker
- generated_from_trainer
- dataset_size:14287
- loss:BinaryCrossEntropyLoss
base_model: yoriis/GTE-tydi-tafseer-quqa-haqa
pipeline_tag: text-ranking
library_name: sentence-transformers
metrics:
- accuracy
- accuracy_threshold
- f1
- f1_threshold
- precision
- recall
- average_precision
model-index:
- name: CrossEncoder based on yoriis/GTE-tydi-tafseer-quqa-haqa
  results:
  - task:
      type: cross-encoder-classification
      name: Cross Encoder Classification
    dataset:
      name: eval
      type: eval
    metrics:
    - type: accuracy
      value: 0.97544080604534
      name: Accuracy
    - type: accuracy_threshold
      value: 0.02913171425461769
      name: Accuracy Threshold
    - type: f1
      value: 0.8446215139442231
      name: F1
    - type: f1_threshold
      value: 0.02913171425461769
      name: F1 Threshold
    - type: precision
      value: 0.828125
      name: Precision
    - type: recall
      value: 0.8617886178861789
      name: Recall
    - type: average_precision
      value: 0.8740056534530515
      name: Average Precision
---
# CrossEncoder based on yoriis/GTE-tydi-tafseer-quqa-haqa
This is a Cross Encoder model finetuned from yoriis/GTE-tydi-tafseer-quqa-haqa using the sentence-transformers library. It computes scores for pairs of texts, which can be used for text reranking and semantic search.
## Model Details

### Model Description
- Model Type: Cross Encoder
- Base model: yoriis/GTE-tydi-tafseer-quqa-haqa
- Maximum Sequence Length: 512 tokens
- Number of Output Labels: 1 label
### Model Sources
- Documentation: Sentence Transformers Documentation
- Documentation: Cross Encoder Documentation
- Repository: Sentence Transformers on GitHub
- Hugging Face: Cross Encoders on Hugging Face
## Usage

### Direct Usage (Sentence Transformers)
First install the Sentence Transformers library:
pip install -U sentence-transformers
Then you can load this model and run inference.
from sentence_transformers import CrossEncoder

# Download from the 🤗 Hub
model = CrossEncoder("yoriis/GTE-tydi-tafseer-quqa-haqa-task-70")
# Get scores for pairs of texts
pairs = [
    ['أين يقع الجودي؟', '[PASSAGE_NOT_FOUND]'],
    ['ما هي الآيات التي تتحدث عن موضوع الوصية في سورة المائدة؟', 'ولما جاءهم كتاب من عند الله مصدق لما معهم وكانوا من قبل يستفتحون على الذين كفروا فلما جاءهم ما عرفوا كفروا به فلعنة الله على الكافرين. بئسما اشتروا به أنفسهم أن يكفروا بما أنزل الله بغيا أن ينزل الله من فضله على من يشاء من عباده فباءوا بغضب على غضب وللكافرين عذاب مهين.'],
    ['هل ورد في القرآن إشارة لصوت ذي تأثير إيجابي على جسم الإنسان؟', 'والمؤمنون والمؤمنات بعضهم أولياء بعض يأمرون بالمعروف وينهون عن المنكر ويقيمون الصلاة ويؤتون الزكاة ويطيعون الله ورسوله أولئك سيرحمهم الله إن الله عزيز حكيم. وعد الله المؤمنين والمؤمنات جنات تجري من تحتها الأنهار خالدين فيها ومساكن طيبة في جنات عدن ورضوان من الله أكبر ذلك هو الفوز العظيم.'],
    ['كم فترة رضاعة المولود؟', '[PASSAGE_NOT_FOUND]'],
    ['ما هي الآيات التي تتحدث عن موضوع الوصية في سورة المائدة؟', '[PASSAGE_NOT_FOUND]'],
]
scores = model.predict(pairs)
print(scores.shape)
# (5,)

# Or rank different texts based on similarity to a single text
ranks = model.rank(
    'أين يقع الجودي؟',
    [
        '[PASSAGE_NOT_FOUND]',
        'ولما جاءهم كتاب من عند الله مصدق لما معهم وكانوا من قبل يستفتحون على الذين كفروا فلما جاءهم ما عرفوا كفروا به فلعنة الله على الكافرين. بئسما اشتروا به أنفسهم أن يكفروا بما أنزل الله بغيا أن ينزل الله من فضله على من يشاء من عباده فباءوا بغضب على غضب وللكافرين عذاب مهين.',
        'والمؤمنون والمؤمنات بعضهم أولياء بعض يأمرون بالمعروف وينهون عن المنكر ويقيمون الصلاة ويؤتون الزكاة ويطيعون الله ورسوله أولئك سيرحمهم الله إن الله عزيز حكيم. وعد الله المؤمنين والمؤمنات جنات تجري من تحتها الأنهار خالدين فيها ومساكن طيبة في جنات عدن ورضوان من الله أكبر ذلك هو الفوز العظيم.',
        '[PASSAGE_NOT_FOUND]',
        '[PASSAGE_NOT_FOUND]',
    ]
)
# [{'corpus_id': ..., 'score': ...}, {'corpus_id': ..., 'score': ...}, ...]
## Evaluation

### Metrics

#### Cross Encoder Classification

- Dataset: `eval`
- Evaluated with `CrossEncoderClassificationEvaluator` (a reproduction sketch follows the metric table below)
| Metric | Value |
|---|---|
| accuracy | 0.9754 |
| accuracy_threshold | 0.0291 |
| f1 | 0.8446 |
| f1_threshold | 0.0291 |
| precision | 0.8281 |
| recall | 0.8618 |
| average_precision | 0.874 |
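The held-out pairs behind these numbers are not shipped with this card, but the same evaluator can be run on any labelled split. The sketch below is a minimal example, assuming the `CrossEncoderClassificationEvaluator` import path and call convention of Sentence Transformers 5.x; the two pairs and their labels are placeholders, not the actual `eval` data.

```python
from sentence_transformers import CrossEncoder
from sentence_transformers.cross_encoder.evaluation import CrossEncoderClassificationEvaluator

model = CrossEncoder("yoriis/GTE-tydi-tafseer-quqa-haqa-task-70")

# Placeholder (question, passage) pairs with binary relevance labels.
# 1 ~ the passage answers the question, 0 ~ it does not, mirroring the
# label column of the training data shown further below.
pairs = [
    ["question in Arabic", "candidate passage in Arabic"],
    ["another question in Arabic", "[PASSAGE_NOT_FOUND]"],
]
labels = [1, 0]

evaluator = CrossEncoderClassificationEvaluator(pairs, labels, name="eval")
results = evaluator(model)
print(results)  # dict with accuracy, f1, precision, recall, average_precision, ...
```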
## Training Details

### Training Dataset

#### Unnamed Dataset
- Size: 14,287 training samples
- Columns: `sentence_0`, `sentence_1`, and `label`
- Approximate statistics based on the first 1000 samples:
  |         | sentence_0 | sentence_1 | label |
  |:--------|:-----------|:-----------|:------|
  | type    | string     | string     | float |
  | details | min: 11 characters, mean: 39.93 characters, max: 201 characters | min: 19 characters, mean: 215.57 characters, max: 912 characters | min: 0.0, mean: 0.07, max: 1.0 |
- Samples:

  | sentence_0 | sentence_1 | label |
  |:-----------|:-----------|:------|
  | أين يقع الجودي؟ | [PASSAGE_NOT_FOUND] | 0.0 |
  | ما هي الآيات التي تتحدث عن موضوع الوصية في سورة المائدة؟ | ولما جاءهم كتاب من عند الله مصدق لما معهم وكانوا من قبل يستفتحون على الذين كفروا فلما جاءهم ما عرفوا كفروا به فلعنة الله على الكافرين. بئسما اشتروا به أنفسهم أن يكفروا بما أنزل الله بغيا أن ينزل الله من فضله على من يشاء من عباده فباءوا بغضب على غضب وللكافرين عذاب مهين. | 0.0 |
  | هل ورد في القرآن إشارة لصوت ذي تأثير إيجابي على جسم الإنسان؟ | والمؤمنون والمؤمنات بعضهم أولياء بعض يأمرون بالمعروف وينهون عن المنكر ويقيمون الصلاة ويؤتون الزكاة ويطيعون الله ورسوله أولئك سيرحمهم الله إن الله عزيز حكيم. وعد الله المؤمنين والمؤمنات جنات تجري من تحتها الأنهار خالدين فيها ومساكن طيبة في جنات عدن ورضوان من الله أكبر ذلك هو الفوز العظيم. | 0.0 |
- Loss: `BinaryCrossEntropyLoss` with these parameters: `{"activation_fn": "torch.nn.modules.linear.Identity", "pos_weight": null}`
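As a concrete (if simplified) illustration of how this dataset layout and loss configuration fit together, the sketch below builds a two-row dataset with the same columns and instantiates the loss with the parameters listed above. It assumes the Sentence Transformers 5.x cross-encoder API (`sentence_transformers.cross_encoder.losses.BinaryCrossEntropyLoss`); the rows are placeholders, not real training data.

```python
from datasets import Dataset
from torch import nn
from sentence_transformers import CrossEncoder
from sentence_transformers.cross_encoder.losses import BinaryCrossEntropyLoss

# Two placeholder rows in the (sentence_0, sentence_1, label) layout described above.
train_dataset = Dataset.from_dict({
    "sentence_0": ["question in Arabic", "another question in Arabic"],
    "sentence_1": ["candidate passage in Arabic", "[PASSAGE_NOT_FOUND]"],
    "label": [1.0, 0.0],
})

# Base model that this card fine-tunes, with the loss parameters listed above:
# the identity activation keeps raw logits, and pos_weight=None applies no
# re-weighting of the positive class.
model = CrossEncoder("yoriis/GTE-tydi-tafseer-quqa-haqa")
loss = BinaryCrossEntropyLoss(model, activation_fn=nn.Identity(), pos_weight=None)
```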
### Training Hyperparameters

#### Non-Default Hyperparameters

- `eval_strategy`: steps
- `num_train_epochs`: 4
- `fp16`: True
#### All Hyperparameters

<details><summary>Click to expand</summary>

- `overwrite_output_dir`: False
- `do_predict`: False
- `eval_strategy`: steps
- `prediction_loss_only`: True
- `per_device_train_batch_size`: 8
- `per_device_eval_batch_size`: 8
- `per_gpu_train_batch_size`: None
- `per_gpu_eval_batch_size`: None
- `gradient_accumulation_steps`: 1
- `eval_accumulation_steps`: None
- `torch_empty_cache_steps`: None
- `learning_rate`: 5e-05
- `weight_decay`: 0.0
- `adam_beta1`: 0.9
- `adam_beta2`: 0.999
- `adam_epsilon`: 1e-08
- `max_grad_norm`: 1
- `num_train_epochs`: 4
- `max_steps`: -1
- `lr_scheduler_type`: linear
- `lr_scheduler_kwargs`: {}
- `warmup_ratio`: 0.0
- `warmup_steps`: 0
- `log_level`: passive
- `log_level_replica`: warning
- `log_on_each_node`: True
- `logging_nan_inf_filter`: True
- `save_safetensors`: True
- `save_on_each_node`: False
- `save_only_model`: False
- `restore_callback_states_from_checkpoint`: False
- `no_cuda`: False
- `use_cpu`: False
- `use_mps_device`: False
- `seed`: 42
- `data_seed`: None
- `jit_mode_eval`: False
- `use_ipex`: False
- `bf16`: False
- `fp16`: True
- `fp16_opt_level`: O1
- `half_precision_backend`: auto
- `bf16_full_eval`: False
- `fp16_full_eval`: False
- `tf32`: None
- `local_rank`: 0
- `ddp_backend`: None
- `tpu_num_cores`: None
- `tpu_metrics_debug`: False
- `debug`: []
- `dataloader_drop_last`: False
- `dataloader_num_workers`: 0
- `dataloader_prefetch_factor`: None
- `past_index`: -1
- `disable_tqdm`: False
- `remove_unused_columns`: True
- `label_names`: None
- `load_best_model_at_end`: False
- `ignore_data_skip`: False
- `fsdp`: []
- `fsdp_min_num_params`: 0
- `fsdp_config`: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}
- `fsdp_transformer_layer_cls_to_wrap`: None
- `accelerator_config`: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None}
- `deepspeed`: None
- `label_smoothing_factor`: 0.0
- `optim`: adamw_torch
- `optim_args`: None
- `adafactor`: False
- `group_by_length`: False
- `length_column_name`: length
- `ddp_find_unused_parameters`: None
- `ddp_bucket_cap_mb`: None
- `ddp_broadcast_buffers`: False
- `dataloader_pin_memory`: True
- `dataloader_persistent_workers`: False
- `skip_memory_metrics`: True
- `use_legacy_prediction_loop`: False
- `push_to_hub`: False
- `resume_from_checkpoint`: None
- `hub_model_id`: None
- `hub_strategy`: every_save
- `hub_private_repo`: None
- `hub_always_push`: False
- `hub_revision`: None
- `gradient_checkpointing`: False
- `gradient_checkpointing_kwargs`: None
- `include_inputs_for_metrics`: False
- `include_for_metrics`: []
- `eval_do_concat_batches`: True
- `fp16_backend`: auto
- `push_to_hub_model_id`: None
- `push_to_hub_organization`: None
- `mp_parameters`:
- `auto_find_batch_size`: False
- `full_determinism`: False
- `torchdynamo`: None
- `ray_scope`: last
- `ddp_timeout`: 1800
- `torch_compile`: False
- `torch_compile_backend`: None
- `torch_compile_mode`: None
- `include_tokens_per_second`: False
- `include_num_input_tokens_seen`: False
- `neftune_noise_alpha`: None
- `optim_target_modules`: None
- `batch_eval_metrics`: False
- `eval_on_start`: False
- `use_liger_kernel`: False
- `liger_kernel_config`: None
- `eval_use_gather_object`: False
- `average_tokens_across_devices`: False
- `prompts`: None
- `batch_sampler`: batch_sampler
- `multi_dataset_batch_sampler`: proportional
- `router_mapping`: {}
- `learning_rate_mapping`: {}

</details>
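Wiring the hyperparameters above into an actual run would look roughly like the following sketch (kept self-contained, so it repeats the placeholder dataset and loss from the previous snippet). It assumes the Sentence Transformers 5.x `CrossEncoderTrainer` / `CrossEncoderTrainingArguments` API; the output directory is a hypothetical path, and `eval_steps=500` is only inferred from the evaluation cadence visible in the training logs below.

```python
from datasets import Dataset
from torch import nn
from sentence_transformers import CrossEncoder
from sentence_transformers.cross_encoder import CrossEncoderTrainer, CrossEncoderTrainingArguments
from sentence_transformers.cross_encoder.losses import BinaryCrossEntropyLoss

# Placeholder data in the card's (sentence_0, sentence_1, label) layout.
rows = {
    "sentence_0": ["question in Arabic", "another question in Arabic"],
    "sentence_1": ["candidate passage in Arabic", "[PASSAGE_NOT_FOUND]"],
    "label": [1.0, 0.0],
}
train_dataset = Dataset.from_dict(rows)
eval_dataset = Dataset.from_dict(rows)  # stand-in for the held-out "eval" split

model = CrossEncoder("yoriis/GTE-tydi-tafseer-quqa-haqa")
loss = BinaryCrossEntropyLoss(model, activation_fn=nn.Identity(), pos_weight=None)

# Values taken from the hyperparameter listing above; everything not set here
# stays at its Trainer default.
args = CrossEncoderTrainingArguments(
    output_dir="output/GTE-tydi-tafseer-quqa-haqa-task-70",  # hypothetical path
    num_train_epochs=4,
    fp16=True,
    eval_strategy="steps",
    eval_steps=500,  # inferred from the 500-step evaluation cadence in the logs below
    per_device_train_batch_size=8,
    per_device_eval_batch_size=8,
    learning_rate=5e-05,
    seed=42,
)

trainer = CrossEncoderTrainer(
    model=model,
    args=args,
    train_dataset=train_dataset,
    eval_dataset=eval_dataset,
    loss=loss,
)
trainer.train()
```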
### Training Logs
| Epoch | Step | Training Loss | eval_average_precision |
|---|---|---|---|
| 0.2800 | 500 | 0.1616 | 0.8419 |
| 0.5599 | 1000 | 0.1487 | 0.8512 |
| 0.8399 | 1500 | 0.1337 | 0.8641 |
| 1.0 | 1786 | - | 0.8671 |
| 1.1198 | 2000 | 0.1151 | 0.8723 |
| 1.3998 | 2500 | 0.0972 | 0.8755 |
| 1.6797 | 3000 | 0.1107 | 0.8740 |
| 1.9597 | 3500 | 0.1032 | 0.8744 |
| 2.0 | 3572 | - | 0.8741 |
| 2.2396 | 4000 | 0.0859 | 0.8730 |
| 2.5196 | 4500 | 0.0987 | 0.8751 |
| 2.7996 | 5000 | 0.0845 | 0.8752 |
| 3.0 | 5358 | - | 0.8745 |
| 3.0795 | 5500 | 0.0981 | 0.8738 |
| 3.3595 | 6000 | 0.0937 | 0.8727 |
| 3.6394 | 6500 | 0.0688 | 0.8732 |
| 3.9194 | 7000 | 0.0796 | 0.8740 |
| 4.0 | 7144 | - | 0.8740 |
### Framework Versions
- Python: 3.11.13
- Sentence Transformers: 5.0.0
- Transformers: 4.55.0
- PyTorch: 2.6.0+cu124
- Accelerate: 1.9.0
- Datasets: 4.0.0
- Tokenizers: 0.21.4
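To compare a local environment against the versions listed above, a quick check (assuming the packages are installed under these distribution names) is:

```python
# Print the Python and library versions of the current environment for
# comparison with the versions listed above.
import sys
from importlib.metadata import version

print("python", sys.version.split()[0])
for dist in ("sentence-transformers", "transformers", "torch", "accelerate", "datasets", "tokenizers"):
    print(dist, version(dist))
```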
## Citation

### BibTeX

#### Sentence Transformers
@inproceedings{reimers-2019-sentence-bert,
title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
author = "Reimers, Nils and Gurevych, Iryna",
booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
month = "11",
year = "2019",
publisher = "Association for Computational Linguistics",
url = "https://arxiv.org/abs/1908.10084",
}