Dataset Preview
The full dataset viewer is not available for this dataset; only a preview of the rows is shown below.
The dataset generation failed because of a cast error
Error code: DatasetGenerationCastError
Exception: DatasetGenerationCastError
Message: An error occurred while generating the dataset
All the data files must have the same columns, but at some point there are 2 new columns ({'traceback', 'error_msg'}) and 4 missing columns ({'eval_version', 'result_metrics_npm', 'result_metrics_average', 'result_metrics'}).
This happened while the json dataset builder was generating data using
hf://datasets/eduagarcia-temp/llm_pt_leaderboard_requests/22h/cabrita-lora-v0-1_eval_request_False_float16_Adapter.json (at revision ea89cc7912815e2260ab025a932cc9ae04f6570e)
Please either edit the data files to have matching columns, or separate them into different configurations (see docs at https://hf.co/docs/hub/datasets-manual-configuration#multiple-configurations)
Traceback:
Traceback (most recent call last):
File "/src/services/worker/.venv/lib/python3.12/site-packages/datasets/builder.py", line 1831, in _prepare_split_single
writer.write_table(table)
File "/src/services/worker/.venv/lib/python3.12/site-packages/datasets/arrow_writer.py", line 714, in write_table
pa_table = table_cast(pa_table, self._schema)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/src/services/worker/.venv/lib/python3.12/site-packages/datasets/table.py", line 2272, in table_cast
return cast_table_to_schema(table, schema)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/src/services/worker/.venv/lib/python3.12/site-packages/datasets/table.py", line 2218, in cast_table_to_schema
raise CastError(
datasets.table.CastError: Couldn't cast
model: string
base_model: string
revision: string
private: bool
precision: string
params: int64
architectures: string
weight_type: string
status: string
submitted_time: timestamp[s]
model_type: string
source: string
job_id: int64
job_start_time: string
main_language: string
error_msg: string
traceback: string
to
{'model': Value('string'), 'base_model': Value('string'), 'revision': Value('string'), 'private': Value('bool'), 'precision': Value('string'), 'params': Value('float64'), 'architectures': Value('string'), 'weight_type': Value('string'), 'main_language': Value('string'), 'status': Value('string'), 'submitted_time': Value('timestamp[s]'), 'model_type': Value('string'), 'source': Value('string'), 'job_id': Value('int64'), 'job_start_time': Value('string'), 'eval_version': Value('string'), 'result_metrics': {'enem_challenge': Value('float64'), 'bluex': Value('float64'), 'oab_exams': Value('float64'), 'assin2_rte': Value('float64'), 'assin2_sts': Value('float64'), 'faquad_nli': Value('float64'), 'hatebr_offensive': Value('float64'), 'portuguese_hate_speech': Value('float64'), 'tweetsentbr': Value('float64')}, 'result_metrics_average': Value('float64'), 'result_metrics_npm': Value('float64')}
because column names don't match
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/src/services/worker/src/worker/job_runners/config/parquet_and_info.py", line 1455, in compute_config_parquet_and_info_response
parquet_operations = convert_to_parquet(builder)
^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/src/services/worker/src/worker/job_runners/config/parquet_and_info.py", line 1054, in convert_to_parquet
builder.download_and_prepare(
File "/src/services/worker/.venv/lib/python3.12/site-packages/datasets/builder.py", line 894, in download_and_prepare
self._download_and_prepare(
File "/src/services/worker/.venv/lib/python3.12/site-packages/datasets/builder.py", line 970, in _download_and_prepare
self._prepare_split(split_generator, **prepare_split_kwargs)
File "/src/services/worker/.venv/lib/python3.12/site-packages/datasets/builder.py", line 1702, in _prepare_split
for job_id, done, content in self._prepare_split_single(
^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/src/services/worker/.venv/lib/python3.12/site-packages/datasets/builder.py", line 1833, in _prepare_split_single
raise DatasetGenerationCastError.from_cast_error(
datasets.exceptions.DatasetGenerationCastError: An error occurred while generating the dataset
All the data files must have the same columns, but at some point there are 2 new columns ({'traceback', 'error_msg'}) and 4 missing columns ({'eval_version', 'result_metrics_npm', 'result_metrics_average', 'result_metrics'}).
This happened while the json dataset builder was generating data using
hf://datasets/eduagarcia-temp/llm_pt_leaderboard_requests/22h/cabrita-lora-v0-1_eval_request_False_float16_Adapter.json (at revision ea89cc7912815e2260ab025a932cc9ae04f6570e)
Please either edit the data files to have matching columns, or separate them into different configurations (see docs at https://hf.co/docs/hub/datasets-manual-configuration#multiple-configurations)

Need help to make the dataset viewer work? Make sure to review how to configure the dataset viewer, and open a discussion for direct support.
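The mismatch comes from the request files themselves: runs that failed carry error_msg and traceback, while finished runs carry eval_version, result_metrics, result_metrics_average, and result_metrics_npm, so the json builder cannot settle on a single schema. Besides the repository-side fix suggested above (declaring separate configurations in the README metadata so files with different schemas never share a config), a reader-side workaround is to load the files with pandas, which simply leaves missing columns as NaN. The following is a minimal sketch, not the leaderboard's own tooling; the local directory name and the file-name glob are assumptions based on the path shown in the error.

```python
# Minimal sketch (assumptions: the repo has been downloaded locally, e.g. with
# huggingface_hub.snapshot_download, into ./llm_pt_leaderboard_requests, and the
# per-model files follow the "*_eval_request_*.json" pattern seen in the error).
import json
from pathlib import Path

import pandas as pd

rows = []
for path in Path("llm_pt_leaderboard_requests").rglob("*_eval_request_*.json"):
    with path.open(encoding="utf-8") as fh:
        rows.append(json.load(fh))

# pandas fills columns absent from any individual file with NaN, so FAILED requests
# (error_msg/traceback) and FINISHED requests (result_metrics*) coexist in one frame.
df = pd.DataFrame(rows)
print(df[["model", "status", "result_metrics_average"]].head())
```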
| model (string) | base_model (string) | revision (string) | private (bool) | precision (string) | params (float64) | architectures (string) | weight_type (string) | main_language (string) | status (string) | submitted_time (timestamp[us]) | model_type (string) | source (string) | job_id (int64) | job_start_time (string) | eval_version (string) | result_metrics (dict) | result_metrics_average (float64) | result_metrics_npm (float64) |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
tanliboy/lambda-qwen2.5-14b-dpo-test
|
main
| false
|
bfloat16
| 14.77
|
Qwen2ForCausalLM
|
Original
|
English
|
FINISHED
| 2024-09-28T11:33:19
|
π¬ : chat (RLHF, DPO, IFT, ...)
|
leaderboard
| 1,221
|
2024-10-17T01-43-26.171758
|
1.1.0
|
{
"enem_challenge": 0.7991602519244226,
"bluex": 0.7315716272600834,
"oab_exams": 0.6104783599088838,
"assin2_rte": 0.9448521957747049,
"assin2_sts": 0.8243398669298373,
"faquad_nli": 0.7882522522522523,
"hatebr_offensive": 0.9003808155770413,
"portuguese_hate_speech": 0.7474723628059027,
"tweetsentbr": 0.7221843254982979
}
| 0.78541
| 0.678964
|
|
tanliboy/lambda-qwen2.5-32b-dpo-test
|
main
| false
|
bfloat16
| 32.764
|
Qwen2ForCausalLM
|
Original
|
English
|
FINISHED
| 2024-09-30T16:35:08
|
π¬ : chat (RLHF, DPO, IFT, ...)
|
leaderboard
| 1,295
|
2024-12-03T02-30-57.448037
|
1.1.0
|
{
"enem_challenge": 0.6599020293911827,
"bluex": 0.36439499304589706,
"oab_exams": 0.4396355353075171,
"assin2_rte": 0.9464747005467062,
"assin2_sts": 0.786818651451759,
"faquad_nli": 0.8249062367717624,
"hatebr_offensive": 0.913360975307878,
"portuguese_hate_speech": 0.7404314091052409,
"tweetsentbr": 0.7510175701499394
}
| 0.714105
| 0.591499
|
|
01-ai/Yi-1.5-34B-32K
|
main
| false
|
bfloat16
| 34.389
|
LlamaForCausalLM
|
Original
|
English
|
FINISHED
| 2024-05-20T02:43:37
|
π’ : pretrained
|
manual
| 674
|
2024-05-20T21-45-47.139761
|
1.1.0
|
{
"enem_challenge": 0.7354793561931421,
"bluex": 0.6787204450625869,
"oab_exams": 0.54624145785877,
"assin2_rte": 0.9121699049758872,
"assin2_sts": 0.809949940837174,
"faquad_nli": 0.7177866756717641,
"hatebr_offensive": 0.8271604938271605,
"portuguese_hate_speech": 0.6997859414986487,
"tweetsentbr": 0.7309621331738047
}
| 0.739806
| 0.604782
|
|
01-ai/Yi-1.5-34B-Chat-16K
|
main
| false
|
bfloat16
| 34.389
|
LlamaForCausalLM
|
Original
|
English
|
FINISHED
| 2024-05-20T02:44:14
|
π¬ : chat (RLHF, DPO, IFT, ...)
|
manual
| 673
|
2024-05-20T18-32-22.664525
|
1.1.0
|
{
"enem_challenge": 0.7004898530440867,
"bluex": 0.5201668984700973,
"oab_exams": 0.5257403189066059,
"assin2_rte": 0.9116655919331504,
"assin2_sts": 0.777225956509937,
"faquad_nli": 0.7909900023305279,
"hatebr_offensive": 0.8889320535439632,
"portuguese_hate_speech": 0.6504272099901414,
"tweetsentbr": 0.7088300278488365
}
| 0.719385
| 0.584898
|
|
01-ai/Yi-1.5-34B-Chat
|
main
| false
|
bfloat16
| 34.389
|
LlamaForCausalLM
|
Original
|
English
|
FINISHED
| 2024-05-15T17:39:33
|
π¬ : chat (RLHF, DPO, IFT, ...)
|
manual
| 624
|
2024-05-16T15-15-24.863291
|
1.1.0
|
{
"enem_challenge": 0.6906927921623512,
"bluex": 0.6648122392211405,
"oab_exams": 0.5248291571753987,
"assin2_rte": 0.9170744853491483,
"assin2_sts": 0.7661887019644651,
"faquad_nli": 0.7743940809133725,
"hatebr_offensive": 0.8210886883714428,
"portuguese_hate_speech": 0.7105164005570834,
"tweetsentbr": 0.7096563287199421
}
| 0.731028
| 0.598601
|
|
01-ai/Yi-1.5-34B
|
main
| false
|
bfloat16
| 34.389
|
LlamaForCausalLM
|
Original
|
English
|
FINISHED
| 2024-05-15T17:40:15
|
π’ : pretrained
|
manual
| 627
|
2024-05-17T10-36-18.336343
|
1.1.0
|
{
"enem_challenge": 0.71518544436669,
"bluex": 0.6662030598052852,
"oab_exams": 0.5489749430523918,
"assin2_rte": 0.8976911637262349,
"assin2_sts": 0.8148786802023537,
"faquad_nli": 0.585644163957417,
"hatebr_offensive": 0.8363023241432246,
"portuguese_hate_speech": 0.6962399848962205,
"tweetsentbr": 0.7228749707523902
}
| 0.720444
| 0.570852
|
|
01-ai/Yi-1.5-6B-Chat
|
main
| false
|
bfloat16
| 6.061
|
LlamaForCausalLM
|
Original
|
English
|
FINISHED
| 2024-05-16T14:35:19
|
π¬ : chat (RLHF, DPO, IFT, ...)
|
manual
| 629
|
2024-05-17T14-53-37.626126
|
1.1.0
|
{
"enem_challenge": 0.5066480055983205,
"bluex": 0.4631432545201669,
"oab_exams": 0.3908883826879271,
"assin2_rte": 0.8478217777818736,
"assin2_sts": 0.6797897994537765,
"faquad_nli": 0.6548247706694055,
"hatebr_offensive": 0.7881170986195587,
"portuguese_hate_speech": 0.6486990242682011,
"tweetsentbr": 0.6586657928083186
}
| 0.626511
| 0.445931
|
|
01-ai/Yi-1.5-6B
|
main
| false
|
bfloat16
| 6.061
|
LlamaForCausalLM
|
Original
|
English
|
FINISHED
| 2024-05-16T14:34:00
|
π’ : pretrained
|
manual
| 628
|
2024-05-17T13-51-05.776238
|
1.1.0
|
{
"enem_challenge": 0.5395381385584325,
"bluex": 0.4993045897079277,
"oab_exams": 0.4154897494305239,
"assin2_rte": 0.85320443811568,
"assin2_sts": 0.611946662194731,
"faquad_nli": 0.566892243623113,
"hatebr_offensive": 0.8390372896945542,
"portuguese_hate_speech": 0.6034251055649058,
"tweetsentbr": 0.6835417262403757
}
| 0.623598
| 0.440799
|
|
01-ai/Yi-1.5-9B-32K
|
main
| false
|
bfloat16
| 8.829
|
LlamaForCausalLM
|
Original
|
English
|
FINISHED
| 2024-05-20T02:43:42
|
π’ : pretrained
|
manual
| 673
|
2024-05-20T20-18-06.355522
|
1.1.0
|
{
"enem_challenge": 0.6724982505248426,
"bluex": 0.5702364394993046,
"oab_exams": 0.5011389521640092,
"assin2_rte": 0.8657419886018202,
"assin2_sts": 0.7267527969011244,
"faquad_nli": 0.5410839160839161,
"hatebr_offensive": 0.7806530019415174,
"portuguese_hate_speech": 0.6955025872509083,
"tweetsentbr": 0.6843568952184516
}
| 0.670885
| 0.499193
|
|
01-ai/Yi-1.5-9B-Chat-16K
|
main
| false
|
bfloat16
| 8.829
|
LlamaForCausalLM
|
Original
|
English
|
FINISHED
| 2024-05-20T02:43:57
|
π¬ : chat (RLHF, DPO, IFT, ...)
|
manual
| 652
|
2024-05-20T09-41-40.204030
|
1.1.0
|
{
"enem_challenge": 0.7046885934219734,
"bluex": 0.5660639777468707,
"oab_exams": 0.48701594533029613,
"assin2_rte": 0.889630445004916,
"assin2_sts": 0.7254379491320617,
"faquad_nli": 0.6373099047367834,
"hatebr_offensive": 0.8668983847883869,
"portuguese_hate_speech": 0.5826350789692436,
"tweetsentbr": 0.6467979685370989
}
| 0.678498
| 0.514674
|
|
01-ai/Yi-1.5-9B-Chat
|
main
| false
|
bfloat16
| 8.829
|
LlamaForCausalLM
|
Original
|
English
|
FINISHED
| 2024-05-15T17:39:54
|
π¬ : chat (RLHF, DPO, IFT, ...)
|
manual
| 625
|
2024-05-17T02-43-33.664147
|
1.1.0
|
{
"enem_challenge": 0.6242127361791463,
"bluex": 0.5479833101529903,
"oab_exams": 0.4469248291571754,
"assin2_rte": 0.8807093197512774,
"assin2_sts": 0.7520700202607307,
"faquad_nli": 0.6913654763916721,
"hatebr_offensive": 0.8297877646706737,
"portuguese_hate_speech": 0.667940108892922,
"tweetsentbr": 0.6732618942406834
}
| 0.679362
| 0.521304
|
|
01-ai/Yi-1.5-9B
|
main
| false
|
bfloat16
| 8.829
|
LlamaForCausalLM
|
Original
|
English
|
FINISHED
| 2024-05-15T17:40:08
|
π’ : pretrained
|
manual
| 626
|
2024-05-17T09-09-38.931019
|
1.1.0
|
{
"enem_challenge": 0.6710986703988804,
"bluex": 0.5771905424200278,
"oab_exams": 0.4947608200455581,
"assin2_rte": 0.8815204475360152,
"assin2_sts": 0.7102876692830821,
"faquad_nli": 0.6362495548508539,
"hatebr_offensive": 0.7837384886240519,
"portuguese_hate_speech": 0.6780580075662044,
"tweetsentbr": 0.6934621257745327
}
| 0.680707
| 0.518635
|
|
01-ai/Yi-34B-200K
|
main
| false
|
bfloat16
| 34.389
|
LlamaForCausalLM
|
Original
|
English
|
FINISHED
| 2024-02-05T23:18:19
|
π’ : pretrained
|
script
| 480
|
2024-04-17T23-49-34.862700
|
1.1.0
|
{
"enem_challenge": 0.7172848145556333,
"bluex": 0.6481223922114048,
"oab_exams": 0.5517084282460136,
"assin2_rte": 0.9097218456052794,
"assin2_sts": 0.7390390977418284,
"faquad_nli": 0.49676238738738737,
"hatebr_offensive": 0.8117947554592124,
"portuguese_hate_speech": 0.7007076712295253,
"tweetsentbr": 0.6181054682174745
}
| 0.688139
| 0.523233
|
|
01-ai/Yi-34B-Chat
|
main
| false
|
bfloat16
| 34.389
|
LlamaForCausalLM
|
Original
|
English
|
FINISHED
| 2024-02-27T00:40:17
|
π¬ : chat models (RLHF, DPO, IFT, ...)
|
leaderboard
| 272
|
2024-02-28T08-14-36.046639
|
1.1.0
|
{
"enem_challenge": 0.7123862841147656,
"bluex": 0.6328233657858137,
"oab_exams": 0.5202733485193621,
"assin2_rte": 0.924014535978148,
"assin2_sts": 0.7419038025688336,
"faquad_nli": 0.7157210401891253,
"hatebr_offensive": 0.7198401711140126,
"portuguese_hate_speech": 0.7135410538975384,
"tweetsentbr": 0.6880686233555414
}
| 0.707619
| 0.557789
|
|
01-ai/Yi-34B
|
main
| false
|
bfloat16
| 34.389
|
LlamaForCausalLM
|
Original
|
English
|
FINISHED
| 2024-02-05T23:05:39
|
π’ : pretrained
|
script
| 440
|
2024-04-13T15-53-49.411062
|
1.1.0
|
{
"enem_challenge": 0.7207837648705389,
"bluex": 0.6648122392211405,
"oab_exams": 0.5599088838268793,
"assin2_rte": 0.917882167398896,
"assin2_sts": 0.76681855136608,
"faquad_nli": 0.7798334442926054,
"hatebr_offensive": 0.8107834570679608,
"portuguese_hate_speech": 0.6224786612758311,
"tweetsentbr": 0.7320656959105744
}
| 0.730596
| 0.591978
|
|
01-ai/Yi-6B-200K
|
main
| false
|
bfloat16
| 6.061
|
LlamaForCausalLM
|
Original
|
English
|
FINISHED
| 2024-02-05T23:18:12
|
π’ : pretrained
|
script
| 469
|
2024-04-16T17-07-31.622853
|
1.1.0
|
{
"enem_challenge": 0.5423372988103569,
"bluex": 0.4673157162726008,
"oab_exams": 0.4328018223234624,
"assin2_rte": 0.40523403335417163,
"assin2_sts": 0.4964641013268987,
"faquad_nli": 0.4396551724137931,
"hatebr_offensive": 0.4892942520605069,
"portuguese_hate_speech": 0.6053769911504425,
"tweetsentbr": 0.6290014694641435
}
| 0.500831
| 0.214476
|
|
01-ai/Yi-6B-Chat
|
main
| false
|
bfloat16
| 6.061
|
LlamaForCausalLM
|
Original
|
English
|
FINISHED
| 2024-02-27T00:40:39
|
π¬ : chat models (RLHF, DPO, IFT, ...)
|
leaderboard
| 273
|
2024-02-28T14-35-07.615539
|
1.1.0
|
{
"enem_challenge": 0.5570328901329601,
"bluex": 0.5006954102920723,
"oab_exams": 0.4118451025056948,
"assin2_rte": 0.7948490568935549,
"assin2_sts": 0.5684271643349206,
"faquad_nli": 0.637960088691796,
"hatebr_offensive": 0.775686136523575,
"portuguese_hate_speech": 0.5712377041472934,
"tweetsentbr": 0.5864804330790114
}
| 0.600468
| 0.40261
|
|
01-ai/Yi-6B
|
main
| false
|
bfloat16
| 6.061
|
LlamaForCausalLM
|
Original
|
English
|
FINISHED
| 2024-02-05T23:04:05
|
π’ : pretrained
|
script
| 228
|
2024-02-17T03-42-08.504508
|
1.1.0
|
{
"enem_challenge": 0.5689293212036389,
"bluex": 0.5132127955493742,
"oab_exams": 0.4460136674259681,
"assin2_rte": 0.7903932929806128,
"assin2_sts": 0.5666878345297481,
"faquad_nli": 0.5985418799210473,
"hatebr_offensive": 0.7425595238095237,
"portuguese_hate_speech": 0.6184177704320946,
"tweetsentbr": 0.5081067075683067
}
| 0.594763
| 0.391626
|
|
01-ai/Yi-9B-200K
|
v20240318
| false
|
bfloat16
| 8.829
|
LlamaForCausalLM
|
Original
|
English
|
FINISHED
| 2024-05-20T02:40:48
|
π’ : pretrained
|
manual
| 666
|
2024-05-20T06-21-38.524751
|
1.1.0
|
{
"enem_challenge": 0.6529041287613716,
"bluex": 0.5438108484005564,
"oab_exams": 0.496127562642369,
"assin2_rte": 0.8735592777805543,
"assin2_sts": 0.7486645258696737,
"faquad_nli": 0.7445188998494914,
"hatebr_offensive": 0.817858599988343,
"portuguese_hate_speech": 0.6727118239818735,
"tweetsentbr": 0.7225357780938043
}
| 0.696966
| 0.547383
|
|
01-ai/Yi-9B-200k
|
main
| false
|
bfloat16
| 8.829
|
LlamaForCausalLM
|
Original
|
English
|
FINISHED
| 2024-04-13T05:22:25
|
π’ : pretrained
|
leaderboard
| 451
|
2024-04-14T12-49-52.148781
|
1.1.0
|
{
"enem_challenge": 0.6564030790762772,
"bluex": 0.5354659248956884,
"oab_exams": 0.5056947608200456,
"assin2_rte": 0.8708321784112503,
"assin2_sts": 0.7508245525986388,
"faquad_nli": 0.7162112665738773,
"hatebr_offensive": 0.8238294119604646,
"portuguese_hate_speech": 0.6723821369343758,
"tweetsentbr": 0.7162549372015228
}
| 0.694211
| 0.54216
|
|
01-ai/Yi-9B
|
main
| false
|
bfloat16
| 8.829
|
LlamaForCausalLM
|
Original
|
English
|
FINISHED
| 2024-04-13T05:20:56
|
π’ : pretrained
|
leaderboard
| 453
|
2024-04-14T11-08-02.891090
|
1.1.0
|
{
"enem_challenge": 0.6759972008397481,
"bluex": 0.5493741307371349,
"oab_exams": 0.4783599088838269,
"assin2_rte": 0.8784695970900473,
"assin2_sts": 0.752860308487488,
"faquad_nli": 0.7478708154144531,
"hatebr_offensive": 0.8574531631821884,
"portuguese_hate_speech": 0.6448598532923182,
"tweetsentbr": 0.6530471966712571
}
| 0.693144
| 0.542367
|
|
01-ai/Yi-Coder-9B-Chat
|
main
| false
|
bfloat16
| 9
|
LlamaForCausalLM
|
Original
|
English
|
FINISHED
| 2024-09-21T08:22:47
|
πΆ : fine-tuned/fp on domain-specific datasets
|
leaderboard
| 1,132
|
2024-10-02T01-59-56.934513
|
1.1.0
|
{
"enem_challenge": 0.4541637508747376,
"bluex": 0.42559109874826145,
"oab_exams": 0.35079726651480636,
"assin2_rte": 0.7994152126314071,
"assin2_sts": 0.7640954232319354,
"faquad_nli": 0.6541275915011303,
"hatebr_offensive": 0.8464002808680579,
"portuguese_hate_speech": 0.6100826569784736,
"tweetsentbr": 0.6641060566616098
}
| 0.618753
| 0.431402
|
|
01-ai/Yi-Coder-9B
|
main
| false
|
bfloat16
| 8.829
|
LlamaForCausalLM
|
Original
|
English
|
FINISHED
| 2024-09-21T08:22:54
|
π’ : pretrained
|
leaderboard
| 1,132
|
2024-10-02T03-06-38.881073
|
1.1.0
|
{
"enem_challenge": 0.4730580825752274,
"bluex": 0.3949930458970793,
"oab_exams": 0.3703872437357631,
"assin2_rte": 0.8479141293428532,
"assin2_sts": 0.7388500902557162,
"faquad_nli": 0.5212030552344689,
"hatebr_offensive": 0.8155218925088192,
"portuguese_hate_speech": 0.6617294815662288,
"tweetsentbr": 0.6216077000909054
}
| 0.605029
| 0.41049
|
|
152334H/miqu-1-70b-sf
|
main
| false
|
float16
| 68.977
|
LlamaForCausalLM
|
Original
|
English
|
FINISHED
| 2024-04-26T08:25:57
|
πΆ : fine-tuned/fp on domain-specific datasets
|
leaderboard
| 704
|
2024-05-23T11-20-45.843993
|
1.1.0
|
{
"enem_challenge": 0.7466759972008398,
"bluex": 0.6481223922114048,
"oab_exams": 0.5398633257403189,
"assin2_rte": 0.9309631191550011,
"assin2_sts": 0.6972270341129023,
"faquad_nli": 0.7641750093536355,
"hatebr_offensive": 0.8382584367896485,
"portuguese_hate_speech": 0.7336708394698086,
"tweetsentbr": 0.7158026226548874
}
| 0.734973
| 0.609318
|
|
22h/cabrita-lora-v0-1
|
huggyllama/llama-7b
|
main
| false
|
float16
| 0
|
?
|
Adapter
|
Portuguese
|
FAILED
| 2024-02-05T23:03:11
|
πΆ : fine-tuned
|
script
| 820
|
2024-06-16T10-13-34.877976
| null | null | null | null |
22h/cabrita_7b_pt_850000
|
main
| false
|
float16
| 7
|
LlamaForCausalLM
|
Original
|
Portuguese
|
FINISHED
| 2024-02-11T13:34:40
|
π : language adapted models (FP, FT, ...)
|
script
| 305
|
2024-03-08T02-07-35.059732
|
1.1.0
|
{
"enem_challenge": 0.22533240027991602,
"bluex": 0.23087621696801114,
"oab_exams": 0.2920273348519362,
"assin2_rte": 0.3333333333333333,
"assin2_sts": 0.1265472264440735,
"faquad_nli": 0.17721518987341772,
"hatebr_offensive": 0.5597546967409981,
"portuguese_hate_speech": 0.490163110698825,
"tweetsentbr": 0.4575265405956153
}
| 0.32142
| -0.032254
|
|
22h/open-cabrita3b
|
main
| false
|
float16
| 3
|
LlamaForCausalLM
|
Original
|
Portuguese
|
FINISHED
| 2024-02-11T13:34:36
|
π : language adapted models (FP, FT, ...)
|
script
| 285
|
2024-02-28T16-38-27.766897
|
1.1.0
|
{
"enem_challenge": 0.17984604618614417,
"bluex": 0.2114047287899861,
"oab_exams": 0.22687927107061504,
"assin2_rte": 0.4301327637723658,
"assin2_sts": 0.08919111846797594,
"faquad_nli": 0.4396551724137931,
"hatebr_offensive": 0.5046251022011318,
"portuguese_hate_speech": 0.4118866620594333,
"tweetsentbr": 0.47963247012405114
}
| 0.330361
| -0.005342
|
|
AALF/gemma-2-27b-it-SimPO-37K-100steps
|
main
| false
|
bfloat16
| 27.227
|
Gemma2ForCausalLM
|
Original
|
English
|
FINISHED
| 2024-09-21T04:24:32
|
π¬ : chat (RLHF, DPO, IFT, ...)
|
leaderboard
| 1,132
|
2024-10-02T01-48-29.895681
|
1.1.0
|
{
"enem_challenge": 0.781665500349895,
"bluex": 0.7162726008344924,
"oab_exams": 0.5662870159453303,
"assin2_rte": 0.8957539543945432,
"assin2_sts": 0.7116783323256065,
"faquad_nli": 0.7043137254901961,
"hatebr_offensive": 0.8798753128354102,
"portuguese_hate_speech": 0.7243041235926246,
"tweetsentbr": 0.6962639594964193
}
| 0.741824
| 0.613438
|
|
AALF/gemma-2-27b-it-SimPO-37K
|
main
| false
|
bfloat16
| 27.227
|
Gemma2ForCausalLM
|
Original
|
English
|
FAILED
| 2024-08-29T19:24:31
|
π¬ : chat (RLHF, DPO, IFT, ...)
|
leaderboard
| 1,513
|
2025-02-12T19-41-06.015447
| null | null | null | null |
|
AALF/gemma-2-27b-it-SimPO-37K
|
main
| false
|
float16
| 27.227
|
Gemma2ForCausalLM
|
Original
|
English
|
FAILED
| 2024-09-05T20:33:39
|
π¬ : chat (RLHF, DPO, IFT, ...)
|
leaderboard
| 1,645
|
2025-07-07T16-05-00.643635
| null | null | null | null |
|
AI-Sweden-Models/gpt-sw3-20b
|
main
| false
|
float16
| 20.918
|
GPT2LMHeadModel
|
Original
|
English
|
FINISHED
| 2024-02-05T23:15:38
|
π’ : pretrained
|
script
| 827
|
2024-06-17T02-29-40.078292
|
1.1.0
|
{
"enem_challenge": 0.1973407977606718,
"bluex": 0.22531293463143254,
"oab_exams": 0.27790432801822323,
"assin2_rte": 0.5120595126338128,
"assin2_sts": 0.07348005953132232,
"faquad_nli": 0.4396551724137931,
"hatebr_offensive": 0.533175653895298,
"portuguese_hate_speech": 0.2347827795333542,
"tweetsentbr": 0.13985170050404128
}
| 0.292618
| -0.064504
|
|
AI-Sweden-Models/gpt-sw3-40b
|
main
| false
|
float16
| 39.927
|
GPT2LMHeadModel
|
Original
|
English
|
FINISHED
| 2024-02-05T23:15:47
|
π’ : pretrained
|
script
| 253
|
2024-02-21T07-59-22.606213
|
1.1.0
|
{
"enem_challenge": 0.2358292512246326,
"bluex": 0.2809457579972184,
"oab_exams": 0.2542141230068337,
"assin2_rte": 0.4096747911636189,
"assin2_sts": 0.17308746611294112,
"faquad_nli": 0.5125406216148655,
"hatebr_offensive": 0.3920230910522173,
"portuguese_hate_speech": 0.4365404510655907,
"tweetsentbr": 0.491745311259787
}
| 0.354067
| 0.018354
|
|
AI-Sweden-Models/gpt-sw3-6.7b-v2
|
main
| false
|
float16
| 7.111
|
GPT2LMHeadModel
|
Original
|
English
|
FINISHED
| 2024-02-05T23:15:31
|
π’ : pretrained
|
script
| 462
|
2024-04-16T00-18-50.805343
|
1.1.0
|
{
"enem_challenge": 0.22813156053184044,
"bluex": 0.23504867872044508,
"oab_exams": 0.23097949886104785,
"assin2_rte": 0.5833175952742944,
"assin2_sts": 0.14706689693418745,
"faquad_nli": 0.4396551724137931,
"hatebr_offensive": 0.5569200631123247,
"portuguese_hate_speech": 0.5048069947120815,
"tweetsentbr": 0.45897627809523983
}
| 0.3761
| 0.073856
|
|
AI-Sweden-Models/gpt-sw3-6.7b
|
main
| false
|
float16
| 7.111
|
GPT2LMHeadModel
|
Original
|
English
|
FINISHED
| 2024-02-05T23:15:23
|
π’ : pretrained
|
script
| 466
|
2024-04-15T22-34-55.424388
|
1.1.0
|
{
"enem_challenge": 0.21133659902029392,
"bluex": 0.2573018080667594,
"oab_exams": 0.2296127562642369,
"assin2_rte": 0.6192900448928588,
"assin2_sts": 0.08103924791097977,
"faquad_nli": 0.4396551724137931,
"hatebr_offensive": 0.40737531518832293,
"portuguese_hate_speech": 0.4441161100880904,
"tweetsentbr": 0.433837189305867
}
| 0.347063
| 0.024837
|
|
AIDC-AI/Marco-LLM-ES
|
main
| false
|
bfloat16
| 0
|
Qwen2ForCausalLM
|
Original
|
Spanish
|
FINISHED
| 2025-02-27T03:49:44
|
π’ : pretrained
|
leaderboard
| 1,541
|
2025-04-06T01-12-18.981578
|
1.1.0
|
{
"enem_challenge": 0.6459062281315605,
"bluex": 0.44089012517385257,
"oab_exams": 0.4113895216400911,
"assin2_rte": 0.9205062830085595,
"assin2_sts": 0.7418390500261379,
"faquad_nli": 0.7559457143682347,
"hatebr_offensive": 0.8373430424906447,
"portuguese_hate_speech": 0.6694562146892655,
"tweetsentbr": 0.6464711867885252
}
| 0.674416
| 0.522169
|
|
AIDC-AI/Ovis1.5-Gemma2-9B
|
main
| false
|
bfloat16
| 11.359
|
Ovis
|
Original
|
English
|
FAILED
| 2024-09-18T02:27:02
|
πΆ : fine-tuned/fp on domain-specific datasets
|
leaderboard
| 1,093
|
2024-09-22T04-13-27.607074
| null | null | null | null |
|
AIDC-AI/Ovis1.5-Llama3-8B
|
fb8c34a71ae86a9a12a033a395c640a2825d909e
| false
|
bfloat16
| 8
|
Ovis
|
Original
|
English
|
FAILED
| 2024-09-18T02:41:23
|
πΆ : fine-tuned/fp on domain-specific datasets
|
leaderboard
| 1,096
|
2024-09-22T04-33-19.526440
| null | null | null | null |
|
AIDC-AI/Ovis2.5-2B
|
main
| false
|
bfloat16
| 2.57
|
Ovis2_5
|
Original
|
English
|
FAILED
| 2025-08-30T23:37:03
|
π¬ : chat (RLHF, DPO, IFT, ...)
|
leaderboard
| 1,857
|
2025-09-05T01-35-48.236217
| null | null | null | null |
|
AIDC-AI/Ovis2.5-9B
|
main
| false
|
bfloat16
| 9.175
|
Ovis2_5
|
Original
|
English
|
PENDING
| 2025-08-30T23:36:55
|
π¬ : chat (RLHF, DPO, IFT, ...)
|
leaderboard
| -1
| null | null | null | null | null |
|
AIJUUD/QWEN2_70B_JUUD_V1
|
main
| false
|
float16
| 70
|
Qwen2ForCausalLM
|
Original
|
Chinese
|
FAILED
| 2024-06-24T00:34:13
|
πΆ : fine-tuned/fp on domain-specific datasets
|
leaderboard
| 1,711
|
2025-08-30T14-39-32.617876
| null | null | null | null |
|
AIM-ZJU/HawkLlama_8b
|
main
| false
|
bfloat16
| 8.494
|
LlavaNextForConditionalGeneration
|
Original
|
English
|
FAILED
| 2024-09-18T02:05:36
|
πΆ : fine-tuned/fp on domain-specific datasets
|
leaderboard
| 1,091
|
2024-09-22T01-54-50.526492
| null | null | null | null |
|
AXCXEPT/EZO-Qwen2.5-32B-Instruct
|
main
| false
|
bfloat16
| 32.764
|
Qwen2ForCausalLM
|
Original
|
English
|
FINISHED
| 2024-10-04T05:43:37
|
πΆ : fine-tuned/fp on domain-specific datasets
|
leaderboard
| 1,298
|
2024-12-03T10-35-22.020429
|
1.1.0
|
{
"enem_challenge": 0.8313505948215535,
"bluex": 0.7830319888734353,
"oab_exams": 0.6136674259681093,
"assin2_rte": 0.9489341505871822,
"assin2_sts": 0.7368658525162626,
"faquad_nli": 0.8287377450980392,
"hatebr_offensive": 0.9185049019607843,
"portuguese_hate_speech": 0.7303548795944234,
"tweetsentbr": 0.7415684786894087
}
| 0.792557
| 0.694324
|
|
AXCXEPT/EZO-Qwen2.5-72B-Instruct
|
main
| false
|
bfloat16
| 72.706
|
Qwen2ForCausalLM
|
Original
|
English
|
PENDING
| 2025-02-07T18:26:08
|
πΆ : fine-tuned/fp on domain-specific datasets
|
leaderboard
| -1
| null | null | null | null | null |
|
AbacusResearch/Jallabi-34B
|
main
| false
|
bfloat16
| 34.389
|
LlamaForCausalLM
|
Original
|
English
|
FAILED
| 2024-09-05T13:41:51
|
πΆ : fine-tuned/fp on domain-specific datasets
|
leaderboard
| 1,484
|
2025-02-05T18-24-10.846965
| null | null | null | null |
|
AdaptLLM/finance-LLM-13B
|
main
| false
|
float16
| 13
|
LlamaForCausalLM
|
Original
|
English
|
FINISHED
| 2024-02-11T13:37:27
|
πΆ : fine-tuned
|
script
| 555
|
2024-04-24T18-00-42.073230
|
1.1.0
|
{
"enem_challenge": 0.4730580825752274,
"bluex": 0.3852573018080668,
"oab_exams": 0.36173120728929387,
"assin2_rte": 0.8704914563684142,
"assin2_sts": 0.6914158506759536,
"faquad_nli": 0.6137142857142857,
"hatebr_offensive": 0.8210157972117231,
"portuguese_hate_speech": 0.6648065091139095,
"tweetsentbr": 0.6129534464124105
}
| 0.610494
| 0.4269
|
|
AdaptLLM/finance-LLM
|
main
| false
|
float16
| 0
|
LLaMAForCausalLM
|
Original
|
English
|
FINISHED
| 2024-02-11T13:37:12
|
πΆ : fine-tuned
|
script
| 545
|
2024-04-24T13-38-50.219195
|
1.1.0
|
{
"enem_challenge": 0.37578726382085376,
"bluex": 0.2906815020862309,
"oab_exams": 0.3011389521640091,
"assin2_rte": 0.7173994459883221,
"assin2_sts": 0.3141019003448064,
"faquad_nli": 0.6856866537717602,
"hatebr_offensive": 0.6665618718263835,
"portuguese_hate_speech": 0.3323844809709906,
"tweetsentbr": 0.5501887299910238
}
| 0.470437
| 0.214015
|
|
AdaptLLM/law-LLM-13B
|
main
| false
|
float16
| 13
|
LlamaForCausalLM
|
Original
|
English
|
FINISHED
| 2024-02-11T13:37:17
|
πΆ : fine-tuned
|
script
| 551
|
2024-04-24T17-20-32.289644
|
1.1.0
|
{
"enem_challenge": 0.48915325402379284,
"bluex": 0.3796940194714882,
"oab_exams": 0.36082004555808656,
"assin2_rte": 0.7762008093366958,
"assin2_sts": 0.6862803522831282,
"faquad_nli": 0.5589431210148192,
"hatebr_offensive": 0.7648719048333295,
"portuguese_hate_speech": 0.6972417545621965,
"tweetsentbr": 0.5969146546466134
}
| 0.590013
| 0.387281
|
|
AdaptLLM/law-LLM
|
main
| false
|
float16
| 0
|
LLaMAForCausalLM
|
Original
|
English
|
FINISHED
| 2024-02-11T13:37:01
|
πΆ : fine-tuned
|
script
| 550
|
2024-04-24T01-23-04.736612
|
1.1.0
|
{
"enem_challenge": 0.3932820153953814,
"bluex": 0.3157162726008345,
"oab_exams": 0.3034168564920273,
"assin2_rte": 0.7690457097032879,
"assin2_sts": 0.2736321836385559,
"faquad_nli": 0.6837598520969155,
"hatebr_offensive": 0.6310564282443625,
"portuguese_hate_speech": 0.32991640141820316,
"tweetsentbr": 0.4897974076561671
}
| 0.465514
| 0.208557
|
|
AdaptLLM/medicine-LLM-13B
|
main
| false
|
float16
| 13
|
LlamaForCausalLM
|
Original
|
English
|
FINISHED
| 2024-02-11T13:37:22
|
πΆ : fine-tuned
|
script
| 553
|
2024-04-24T17-45-23.659613
|
1.1.0
|
{
"enem_challenge": 0.45976207137858643,
"bluex": 0.37552155771905427,
"oab_exams": 0.3553530751708428,
"assin2_rte": 0.802953910231819,
"assin2_sts": 0.6774179667769704,
"faquad_nli": 0.7227569273678784,
"hatebr_offensive": 0.8155967923139503,
"portuguese_hate_speech": 0.6722790404040404,
"tweetsentbr": 0.5992582348356217
}
| 0.608989
| 0.426546
|
|
AdaptLLM/medicine-LLM
|
main
| false
|
float16
| 0
|
LLaMAForCausalLM
|
Original
|
English
|
FINISHED
| 2024-02-11T13:37:07
|
πΆ : fine-tuned
|
script
| 550
|
2024-04-24T13-09-44.649718
|
1.1.0
|
{
"enem_challenge": 0.3806857942617215,
"bluex": 0.3129346314325452,
"oab_exams": 0.28610478359908886,
"assin2_rte": 0.7412241742464284,
"assin2_sts": 0.30610797857979344,
"faquad_nli": 0.6385993049986635,
"hatebr_offensive": 0.4569817890542286,
"portuguese_hate_speech": 0.26575729349526506,
"tweetsentbr": 0.4667966458909563
}
| 0.428355
| 0.135877
|
|
AetherResearch/Cerebrum-1.0-7b
|
main
| false
|
float16
| 7.242
|
MistralForCausalLM
|
Original
|
English
|
FINISHED
| 2024-03-14T11:07:59
|
π¬ : chat models (RLHF, DPO, IFT, ...)
|
leaderboard
| 332
|
2024-04-01T22-58-48.098123
|
1.1.0
|
{
"enem_challenge": 0.6137158852344297,
"bluex": 0.5062586926286509,
"oab_exams": 0.44510250569476084,
"assin2_rte": 0.8562832789419443,
"assin2_sts": 0.7083110279713039,
"faquad_nli": 0.7709976024119299,
"hatebr_offensive": 0.7925948726646638,
"portuguese_hate_speech": 0.6342708554907774,
"tweetsentbr": 0.6171926929726294
}
| 0.660525
| 0.494853
|
|
Alibaba-NLP/gte-Qwen2-7B-instruct
|
97fb655ac3882bce80a8ce4ecc9212ec24555fea
| false
|
bfloat16
| 7.613
|
Qwen2ForCausalLM
|
Original
|
English
|
FAILED
| 2024-08-08T14:53:17
|
πΆ : fine-tuned/fp on domain-specific datasets
|
leaderboard
| 1,004
|
2024-08-11T15-51-57.600432
| null | null | null | null |
|
Alibaba-NLP/gte-Qwen2-7B-instruct
|
main
| false
|
bfloat16
| 7.613
|
Qwen2ForCausalLM
|
Original
|
English
|
FAILED
| 2024-07-16T18:11:18
|
πΆ : fine-tuned/fp on domain-specific datasets
|
leaderboard
| 918
|
2024-07-16T19-11-22.839874
| null | null | null | null |
|
Alibaba-NLP/gte-Qwen2-7B-instruct
|
e26182b2122f4435e8b3ebecbf363990f409b45b
| false
|
bfloat16
| 7.613
|
Qwen2ForCausalLM
|
Original
|
English
|
FAILED
| 2024-08-01T19:47:27
|
πΆ : fine-tuned/fp on domain-specific datasets
|
leaderboard
| 979
|
2024-08-08T03-00-24.963139
| null | null | null | null |
|
Azure99/blossom-v5.1-34b
|
main
| false
|
bfloat16
| 34.389
|
LlamaForCausalLM
|
Original
|
English
|
FINISHED
| 2024-06-05T14:01:58
|
πΆ : fine-tuned/fp on domain-specific datasets
|
leaderboard
| 798
|
2024-06-13T23-01-25.469438
|
1.1.0
|
{
"enem_challenge": 0.7263820853743876,
"bluex": 0.6717663421418637,
"oab_exams": 0.5444191343963554,
"assin2_rte": 0.9087560002226345,
"assin2_sts": 0.8294159674038925,
"faquad_nli": 0.8188429839812075,
"hatebr_offensive": 0.8519995605589193,
"portuguese_hate_speech": 0.7209014624205066,
"tweetsentbr": 0.7213065589027687
}
| 0.754866
| 0.632723
|
|
BAAI/Aquila-7B
|
main
| false
|
float16
| 7
|
AquilaModel
|
Original
|
?
|
FINISHED
| 2024-02-05T23:09:00
|
π’ : pretrained
|
script
| 343
|
2024-04-03T05-32-42.254781
|
1.1.0
|
{
"enem_challenge": 0.3275017494751575,
"bluex": 0.2795549374130737,
"oab_exams": 0.3047835990888383,
"assin2_rte": 0.7202499022958302,
"assin2_sts": 0.04640761012170769,
"faquad_nli": 0.47034320848362593,
"hatebr_offensive": 0.6981236353283272,
"portuguese_hate_speech": 0.4164993156903397,
"tweetsentbr": 0.4656320326711388
}
| 0.414344
| 0.144131
|
|
BAAI/Aquila2-34B
|
main
| false
|
bfloat16
| 34
|
LlamaForCausalLM
|
Original
|
?
|
FINISHED
| 2024-02-05T23:10:17
|
π’ : pretrained
|
script
| 484
|
2024-04-18T14-04-47.026230
|
1.1.0
|
{
"enem_challenge": 0.5479356193142058,
"bluex": 0.4381084840055633,
"oab_exams": 0.40455580865603646,
"assin2_rte": 0.8261661293083891,
"assin2_sts": 0.643049056717646,
"faquad_nli": 0.4471267110923455,
"hatebr_offensive": 0.4920183585480058,
"portuguese_hate_speech": 0.6606858054226475,
"tweetsentbr": 0.5598737392847967
}
| 0.557724
| 0.319206
|
|
BAAI/Aquila2-7B
|
main
| false
|
float16
| 7
|
AquilaModel
|
Original
|
?
|
FINISHED
| 2024-02-05T23:09:07
|
π’ : pretrained
|
script
| 360
|
2024-04-03T05-55-31.957348
|
1.1.0
|
{
"enem_challenge": 0.20573827851644508,
"bluex": 0.14464534075104313,
"oab_exams": 0.3225512528473804,
"assin2_rte": 0.5426094787796916,
"assin2_sts": 0.3589709171853071,
"faquad_nli": 0.49799737773227726,
"hatebr_offensive": 0.642139037433155,
"portuguese_hate_speech": 0.5212215320910973,
"tweetsentbr": 0.2826286167270258
}
| 0.390945
| 0.091046
|
|
BAAI/Emu3-Chat
|
main
| false
|
bfloat16
| 8.492
|
Emu3ForCausalLM
|
Original
|
English
|
FINISHED
| 2024-10-21T21:11:49
|
π¬ : chat (RLHF, DPO, IFT, ...)
|
leaderboard
| 1,240
|
2024-10-23T07-24-58.039230
|
1.1.0
|
{
"enem_challenge": 0.23722883135059483,
"bluex": 0.24061196105702365,
"oab_exams": 0.24419134396355352,
"assin2_rte": 0.3333333333333333,
"assin2_sts": 0.1347279162589762,
"faquad_nli": 0.4396551724137931,
"hatebr_offensive": 0.3333333333333333,
"portuguese_hate_speech": 0.22986425339366515,
"tweetsentbr": 0.37439678336438037
}
| 0.28526
| -0.101355
|
|
BAAI/Emu3-Gen
|
main
| false
|
bfloat16
| 8.492
|
Emu3ForCausalLM
|
Original
|
English
|
FINISHED
| 2024-10-21T21:11:31
|
πΆ : fine-tuned/fp on domain-specific datasets
|
leaderboard
| 1,239
|
2024-10-23T02-54-02.753983
|
1.1.0
|
{
"enem_challenge": 0.19384184744576627,
"bluex": 0.19471488178025034,
"oab_exams": 0.23735763097949886,
"assin2_rte": 0,
"assin2_sts": 0.04718167761792302,
"faquad_nli": 0,
"hatebr_offensive": 0.4333858584779487,
"portuguese_hate_speech": 0.23104388244535695,
"tweetsentbr": 0.1506866897702477
}
| 0.165357
| -0.303077
|
|
BAAI/Emu3-Stage1
|
main
| false
|
bfloat16
| 8.492
|
Emu3ForCausalLM
|
Original
|
English
|
FINISHED
| 2024-10-21T21:10:34
|
π’ : pretrained
|
leaderboard
| 1,238
|
2024-10-23T01-39-06.029347
|
1.1.0
|
{
"enem_challenge": 0.19314205738278517,
"bluex": 0.20723226703755215,
"oab_exams": 0.2592255125284738,
"assin2_rte": 0.5195268543314986,
"assin2_sts": 0.06240586658952754,
"faquad_nli": 0.4396551724137931,
"hatebr_offensive": 0.4946676587301587,
"portuguese_hate_speech": 0.25062068613758454,
"tweetsentbr": 0.5214764078192898
}
| 0.32755
| -0.012098
|
|
BAAI/Gemma2-9B-IT-Simpo-Infinity-Preference
|
main
| false
|
bfloat16
| 9.242
|
Gemma2ForCausalLM
|
Original
|
English
|
FINISHED
| 2024-09-02T16:32:57
|
π¬ : chat (RLHF, DPO, IFT, ...)
|
leaderboard
| 1,626
|
2025-05-07T16-13-17.495494
|
1.1.0
|
{
"enem_challenge": 0.7718684394681595,
"bluex": 0.6481223922114048,
"oab_exams": 0.5398633257403189,
"assin2_rte": 0.9308059939638886,
"assin2_sts": 0.7736833047344445,
"faquad_nli": 0.7185244587008821,
"hatebr_offensive": 0.8886125389192874,
"portuguese_hate_speech": 0.6879898419370376,
"tweetsentbr": 0.6650785355565184
}
| 0.736061
| 0.605014
|
|
BAAI/Gemma2-9B-IT-Simpo-Infinity-Preference
|
fd6d02d300e3b9015e07c217e26c6f1b4823963a
| false
|
bfloat16
| 9.242
|
Gemma2ForCausalLM
|
Original
|
English
|
FINISHED
| 2024-09-29T03:58:30
|
π¬ : chat (RLHF, DPO, IFT, ...)
|
leaderboard
| 1,152
|
2024-10-04T05-31-00.057927
|
1.1.0
|
{
"enem_challenge": 0.7235829251224632,
"bluex": 0.6175243393602226,
"oab_exams": 0.5430523917995445,
"assin2_rte": 0.9275070649057842,
"assin2_sts": 0.7792788542377653,
"faquad_nli": 0.7059360440659401,
"hatebr_offensive": 0.8936026780915527,
"portuguese_hate_speech": 0.6831189599233336,
"tweetsentbr": 0.6664191998474515
}
| 0.726669
| 0.592002
|
|
BAAI/Infinity-Instruct-3M-0613-Mistral-7B
|
main
| false
|
float16
| 7.242
|
MistralForCausalLM
|
Original
|
English
|
FINISHED
| 2024-06-22T00:35:20
|
π¬ : chat (RLHF, DPO, IFT, ...)
|
leaderboard
| 844
|
2024-06-22T01-31-31.647844
|
1.1.0
|
{
"enem_challenge": 0.6466060181945417,
"bluex": 0.5326842837273992,
"oab_exams": 0.44510250569476084,
"assin2_rte": 0.9174712657490132,
"assin2_sts": 0.7632047672731808,
"faquad_nli": 0.8241841468197617,
"hatebr_offensive": 0.7990490978163615,
"portuguese_hate_speech": 0.7141208181486736,
"tweetsentbr": 0.6666509531443612
}
| 0.701008
| 0.56041
|
|
BAAI/Infinity-Instruct-3M-0625-Llama3-8B
|
main
| false
|
bfloat16
| 8.03
|
LlamaForCausalLM
|
Original
|
English
|
FINISHED
| 2024-07-18T22:33:18
|
π¬ : chat (RLHF, DPO, IFT, ...)
|
leaderboard
| 922
|
2024-07-19T01-31-48.292983
|
1.1.0
|
{
"enem_challenge": 0.6962911126662001,
"bluex": 0.5702364394993046,
"oab_exams": 0.4911161731207289,
"assin2_rte": 0.9157306525139366,
"assin2_sts": 0.6927579734425038,
"faquad_nli": 0.6831208704581523,
"hatebr_offensive": 0.8396850647140018,
"portuguese_hate_speech": 0.6524280322235367,
"tweetsentbr": 0.6735124292047466
}
| 0.690542
| 0.539493
|
|
BAAI/Infinity-Instruct-3M-0625-Mistral-7B
|
main
| false
|
bfloat16
| 7.242
|
MistralForCausalLM
|
Original
|
English
|
FINISHED
| 2024-07-18T22:57:22
|
π¬ : chat (RLHF, DPO, IFT, ...)
|
leaderboard
| 923
|
2024-07-19T01-41-09.242433
|
1.1.0
|
{
"enem_challenge": 0.6634009797060881,
"bluex": 0.5507649513212796,
"oab_exams": 0.44419134396355353,
"assin2_rte": 0.9195035639155449,
"assin2_sts": 0.7748240246646378,
"faquad_nli": 0.8110585067106806,
"hatebr_offensive": 0.8052263390689189,
"portuguese_hate_speech": 0.7276695425104325,
"tweetsentbr": 0.6851627567981504
}
| 0.709089
| 0.571585
|
|
BAAI/Infinity-Instruct-3M-0625-Qwen2-7B
|
main
| false
|
bfloat16
| 7.616
|
Qwen2ForCausalLM
|
Original
|
English
|
FINISHED
| 2024-07-16T18:07:10
|
π¬ : chat (RLHF, DPO, IFT, ...)
|
leaderboard
| 917
|
2024-07-16T18-14-59.914390
|
1.1.0
|
{
"enem_challenge": 0.7179846046186145,
"bluex": 0.6063977746870653,
"oab_exams": 0.5043280182232346,
"assin2_rte": 0.9260247502412066,
"assin2_sts": 0.7493063947663584,
"faquad_nli": 0.7919477341597589,
"hatebr_offensive": 0.7747879604913737,
"portuguese_hate_speech": 0.6566388141704343,
"tweetsentbr": 0.6595443181650077
}
| 0.709662
| 0.564613
|
|
BAAI/Infinity-Instruct-3M-0625-Yi-1.5-9B
|
main
| false
|
bfloat16
| 0.003
|
LlamaForCausalLM
|
Original
|
English
|
FINISHED
| 2024-07-18T22:23:09
|
π¬ : chat (RLHF, DPO, IFT, ...)
|
leaderboard
| 1,577
|
2025-05-04T10-35-32.472663
|
1.1.0
|
{
"enem_challenge": 0.6913925822253324,
"bluex": 0.5841446453407511,
"oab_exams": 0.475626423690205,
"assin2_rte": 0.8902591771846999,
"assin2_sts": 0.7417032662621263,
"faquad_nli": 0.7691782381060779,
"hatebr_offensive": 0.8633798389731095,
"portuguese_hate_speech": 0.6466753105050977,
"tweetsentbr": 0.6968077814532404
}
| 0.706574
| 0.563758
|
|
BAAI/Infinity-Instruct-3M-0625-Yi-1.5-9B
|
a42c86c61b98ca4fdf238d688fe6ea11cf414d29
| false
|
bfloat16
| 8.829
|
LlamaForCausalLM
|
Original
|
English
|
FINISHED
| 2024-08-01T19:56:06
|
πΆ : fine-tuned/fp on domain-specific datasets
|
leaderboard
| 980
|
2024-08-08T03-15-01.373098
|
1.1.0
|
{
"enem_challenge": 0.6920923722883136,
"bluex": 0.588317107093185,
"oab_exams": 0.4760820045558087,
"assin2_rte": 0.8898452558964334,
"assin2_sts": 0.7424901825788529,
"faquad_nli": 0.7683175374941738,
"hatebr_offensive": 0.8633798389731095,
"portuguese_hate_speech": 0.6477449279306864,
"tweetsentbr": 0.6951286339371854
}
| 0.707044
| 0.564291
|
|
BAAI/Infinity-Instruct-7M-0729-Llama3_1-8B
|
main
| false
|
bfloat16
| 8.03
|
LlamaForCausalLM
|
Original
|
English
|
FINISHED
| 2024-08-05T19:30:28
|
πΆ : fine-tuned/fp on domain-specific datasets
|
leaderboard
| 982
|
2024-08-08T04-37-11.506316
|
1.1.0
|
{
"enem_challenge": 0.708187543736879,
"bluex": 0.588317107093185,
"oab_exams": 0.49430523917995445,
"assin2_rte": 0.9353919239904989,
"assin2_sts": 0.7590004583613057,
"faquad_nli": 0.7479196445389977,
"hatebr_offensive": 0.8223021238433512,
"portuguese_hate_speech": 0.6924232912933478,
"tweetsentbr": 0.6961115534449834
}
| 0.715995
| 0.577578
|
|
BAAI/Infinity-Instruct-7M-0729-mistral-7B
|
36651591cb13346ecbde23832013e024029700fa
| false
|
bfloat16
| 7.242
|
MistralForCausalLM
|
Original
|
English
|
FAILED
| 2024-08-08T14:51:17
|
πΆ : fine-tuned/fp on domain-specific datasets
|
leaderboard
| 1,003
|
2024-08-11T15-39-11.190873
| null | null | null | null |
|
BAAI/Infinity-Instruct-7M-0729-mistral-7B
|
main
| false
|
bfloat16
| 7.242
|
MistralForCausalLM
|
Original
|
English
|
FAILED
| 2024-08-05T19:35:35
|
πΆ : fine-tuned/fp on domain-specific datasets
|
leaderboard
| 983
|
2024-08-08T05-34-45.659619
| null | null | null | null |
|
BAAI/Infinity-Instruct-7M-Gen-Llama3_1-70B
|
main
| false
|
bfloat16
| 70.554
|
LlamaForCausalLM
|
Original
|
English
|
PENDING
| 2024-08-22T16:02:17
|
πΆ : fine-tuned/fp on domain-specific datasets
|
leaderboard
| -1
| null | null | null | null | null |
|
BAAI/Infinity-Instruct-7M-Gen-Llama3_1-8B
|
main
| false
|
bfloat16
| 8.03
|
LlamaForCausalLM
|
Original
|
English
|
FINISHED
| 2024-08-22T16:01:20
|
πΆ : fine-tuned/fp on domain-specific datasets
|
leaderboard
| 1,032
|
2024-08-25T02-40-32.192736
|
1.1.0
|
{
"enem_challenge": 0.708187543736879,
"bluex": 0.588317107093185,
"oab_exams": 0.49430523917995445,
"assin2_rte": 0.9353919239904989,
"assin2_sts": 0.7590004583613057,
"faquad_nli": 0.7479196445389977,
"hatebr_offensive": 0.8223021238433512,
"portuguese_hate_speech": 0.6924232912933478,
"tweetsentbr": 0.6961115534449834
}
| 0.715995
| 0.577578
|
|
BAAI/Infinity-Instruct-7M-Gen-mistral-7B
|
4356a156ed02a12d2dcabcc3d64a1b588a9ceb05
| false
|
bfloat16
| 7.242
|
MistralForCausalLM
|
Original
|
English
|
FAILED
| 2024-08-28T16:27:37
|
πΆ : fine-tuned/fp on domain-specific datasets
|
leaderboard
| 1,044
|
2024-09-01T03-51-16.696093
| null | null | null | null |
|
BAAI/Infinity-Instruct-7M-Gen-mistral-7B
|
82c83d670a8954f4250547b53a057dea1fbd460d
| false
|
bfloat16
| 7.242
|
MistralForCausalLM
|
Original
|
English
|
FAILED
| 2024-08-25T05:20:51
|
πΆ : fine-tuned/fp on domain-specific datasets
|
leaderboard
| 1,038
|
2024-08-26T05-13-12.087329
| null | null | null | null |
|
BAAI/Infinity-Instruct-7M-Gen-mistral-7B
|
main
| false
|
bfloat16
| 7.242
|
MistralForCausalLM
|
Original
|
English
|
FAILED
| 2024-08-22T16:01:54
|
πΆ : fine-tuned/fp on domain-specific datasets
|
leaderboard
| 1,033
|
2024-08-25T03-38-30.014056
| null | null | null | null |
|
BAAI/OPI-Llama-3.1-8B-Instruct
|
main
| false
|
bfloat16
| 8.03
|
LlamaForCausalLM
|
Original
|
English
|
FINISHED
| 2024-09-21T15:22:21
|
πΆ : fine-tuned/fp on domain-specific datasets
|
leaderboard
| 1,133
|
2024-10-02T03-38-58.627810
|
1.1.0
|
{
"enem_challenge": 0.4282715185444367,
"bluex": 0.32684283727399166,
"oab_exams": 0.3630979498861048,
"assin2_rte": 0.3364346429550707,
"assin2_sts": 0.17801095882930634,
"faquad_nli": 0.17721518987341772,
"hatebr_offensive": 0.3333333333333333,
"portuguese_hate_speech": 0.22986425339366515,
"tweetsentbr": 0.17212346147627985
}
| 0.282799
| -0.126392
|
|
BSC-LT/salamandra-2b
|
main
| false
|
bfloat16
| 2.253
|
LlamaForCausalLM
|
Original
|
Spanish
|
FINISHED
| 2025-04-04T07:53:45
|
π’ : pretrained
|
leaderboard
| 1,542
|
2025-04-06T03-29-18.305527
|
1.1.0
|
{
"enem_challenge": 0.1959412176347096,
"bluex": 0.26008344923504867,
"oab_exams": 0.2660592255125285,
"assin2_rte": 0.6975027707196586,
"assin2_sts": 0.0671816933521037,
"faquad_nli": 0.4396551724137931,
"hatebr_offensive": 0.5042049852088193,
"portuguese_hate_speech": 0.4016778711484594,
"tweetsentbr": 0.5462981139082685
}
| 0.3754
| 0.0754
|
|
BSC-LT/salamandra-7b-instruct
|
main
| false
|
bfloat16
| 7.768
|
LlamaForCausalLM
|
Original
|
Spanish
|
FINISHED
| 2025-04-04T07:53:31
|
π¬ : chat (RLHF, DPO, IFT, ...)
|
leaderboard
| 1,556
|
2025-04-07T04-07-55.633968
|
1.1.0
|
{
"enem_challenge": 0.5983205038488454,
"bluex": 0.4937413073713491,
"oab_exams": 0.4223234624145786,
"assin2_rte": 0.9098606334526578,
"assin2_sts": 0.6729342980455344,
"faquad_nli": 0.5734435078379178,
"hatebr_offensive": 0.8052173913043479,
"portuguese_hate_speech": 0.6271335205219492,
"tweetsentbr": 0.7109712230662115
}
| 0.645994
| 0.471959
|
|
BSC-LT/salamandra-7b
|
main
| false
|
bfloat16
| 7.768
|
LlamaForCausalLM
|
Original
|
Spanish
|
FINISHED
| 2025-04-04T07:53:00
|
π’ : pretrained
|
leaderboard
| 1,543
|
2025-04-06T04-40-14.780421
|
1.1.0
|
{
"enem_challenge": 0.2736179146256123,
"bluex": 0.2712100139082058,
"oab_exams": 0.2669703872437358,
"assin2_rte": 0.5473539470436837,
"assin2_sts": 0.37404739198943987,
"faquad_nli": 0.5425562583730434,
"hatebr_offensive": 0.7970481703639771,
"portuguese_hate_speech": 0.39534271113218483,
"tweetsentbr": 0.6407663517114162
}
| 0.456546
| 0.18901
|
|
Ba2han/QwQenSeek-coder
|
main
| false
|
bfloat16
| 32.76
|
Qwen2ForCausalLM
|
Original
|
English
|
FAILED
| 2025-02-05T23:57:59
|
π€ : base merges and moerges
|
leaderboard
| 1,516
|
2025-02-12T20-51-32.141608
| null | null | null | null |
|
BornSaint/Dare_Angel_8B
|
llama 3.1 8B
|
main
| false
|
float16
| 8.03
|
LlamaForCausalLM
|
Original
|
Portuguese
|
FINISHED
| 2025-05-22T00:17:48
|
π¬ : chat (RLHF, DPO, IFT, ...)
|
leaderboard
| 1,820
|
2025-08-31T06-31-43.322640
|
1.1.0
|
{
"enem_challenge": 0.6564030790762772,
"bluex": 0.5187760778859527,
"oab_exams": 0.4601366742596811,
"assin2_rte": 0.9194964689896243,
"assin2_sts": 0.748953328078115,
"faquad_nli": 0.7446341747047971,
"hatebr_offensive": 0.8198582099698948,
"portuguese_hate_speech": 0.6447722626079055,
"tweetsentbr": 0.6444351195891266
}
| 0.684163
| 0.530784
|
Bruno/Caramelinho
|
ybelkada/falcon-7b-sharded-bf16
|
main
| false
|
bfloat16
| 0
|
?
|
Adapter
|
Portuguese
|
FINISHED
| 2024-02-24T18:01:08
|
π : language adapted models (FP, FT, ...)
|
leaderboard
| 256
|
2024-02-26T15-17-54.708968
|
1.1.0
|
{
"enem_challenge": 0.21483554933519944,
"bluex": 0.2211404728789986,
"oab_exams": 0.25148063781321184,
"assin2_rte": 0.4896626375608876,
"assin2_sts": 0.19384903999896694,
"faquad_nli": 0.43917169974115616,
"hatebr_offensive": 0.3396512838306731,
"portuguese_hate_speech": 0.46566706851516976,
"tweetsentbr": 0.563106045239156
}
| 0.353174
| 0.017928
|
Bruno/Caramelo_7B
|
ybelkada/falcon-7b-sharded-bf16
|
main
| false
|
bfloat16
| 7
|
?
|
Adapter
|
Portuguese
|
FINISHED
| 2024-02-24T18:00:57
|
π : language adapted models (FP, FT, ...)
|
leaderboard
| 255
|
2024-02-26T13-57-57.036659
|
1.1.0
|
{
"enem_challenge": 0.1980405878236529,
"bluex": 0.24478442280945759,
"oab_exams": 0.2528473804100228,
"assin2_rte": 0.5427381481762671,
"assin2_sts": 0.07473225338478715,
"faquad_nli": 0.4396551724137931,
"hatebr_offensive": 0.33650009913117634,
"portuguese_hate_speech": 0.412292817679558,
"tweetsentbr": 0.35365936890599253
}
| 0.31725
| -0.028868
|
ByteDance-Seed/Seed-OSS-36B-Base-woSyn
|
main
| false
|
bfloat16
| 36.151
|
SeedOssForCausalLM
|
Original
|
English
|
PENDING
| 2025-08-30T20:34:34
|
π’ : pretrained
|
leaderboard
| -1
| null | null | null | null | null |
|
ByteDance-Seed/Seed-OSS-36B-Base
|
main
| false
|
bfloat16
| 36.151
|
SeedOssForCausalLM
|
Original
|
English
|
PENDING
| 2025-08-30T20:34:09
|
π’ : pretrained
|
leaderboard
| -1
| null | null | null | null | null |
|
ByteDance-Seed/Seed-OSS-36B-Instruct
|
main
| false
|
bfloat16
| 36.151
|
SeedOssForCausalLM
|
Original
|
English
|
PENDING
| 2025-08-30T20:34:42
|
π¬ : chat (RLHF, DPO, IFT, ...)
|
leaderboard
| -1
| null | null | null | null | null |
|
ByteDance-Seed/UI-TARS-1.5-7B
|
main
| false
|
bfloat16
| 8.292
|
Qwen2_5_VLForConditionalGeneration
|
Original
|
English
|
FAILED
| 2025-08-30T23:34:11
|
π¬ : chat (RLHF, DPO, IFT, ...)
|
leaderboard
| 1,903
|
2025-09-14T06-31-40.721981
| null | null | null | null |
|
ByteDance-Seed/UI-TARS-2B-SFT
|
main
| false
|
bfloat16
| 2.442
|
Qwen2VLForConditionalGeneration
|
Original
|
English
|
FAILED
| 2025-08-30T23:34:39
|
π¬ : chat (RLHF, DPO, IFT, ...)
|
leaderboard
| 1,856
|
2025-09-05T01-33-40.005871
| null | null | null | null |
|
CEIA-UFG/Gemma-3-Gaia-PT-BR-4b-it
|
main
| false
|
bfloat16
| 4.3
|
Gemma3ForConditionalGeneration
|
Original
|
Portuguese
|
FINISHED
| 2025-07-08T03:48:01
|
π¬ : chat (RLHF, DPO, IFT, ...)
|
manual
| 1,812
|
2025-08-31T00-43-48.499487
|
1.1.0
|
{
"enem_challenge": 0.6067179846046186,
"bluex": 0.49791376912378305,
"oab_exams": 0.42870159453302964,
"assin2_rte": 0.9025589840060155,
"assin2_sts": 0.7464401798411457,
"faquad_nli": 0.6804733727810651,
"hatebr_offensive": 0.8586922668135876,
"portuguese_hate_speech": 0.716694381302537,
"tweetsentbr": 0.6718275029768425
}
| 0.678891
| 0.527585
|
|
CausalLM/34b-beta
|
main
| false
|
bfloat16
| 34.389
|
LlamaForCausalLM
|
Original
|
English
|
FINISHED
| 2024-05-30T19:56:15
|
πΆ : fine-tuned/fp on domain-specific datasets
|
leaderboard
| 795
|
2024-06-13T11-57-13.556700
|
1.1.0
|
{
"enem_challenge": 0.7242827151854444,
"bluex": 0.6606397774687065,
"oab_exams": 0.5348519362186788,
"assin2_rte": 0.876489392618425,
"assin2_sts": 0.772190146473889,
"faquad_nli": 0.751649303344456,
"hatebr_offensive": 0.8089822265646441,
"portuguese_hate_speech": 0.7003298984357624,
"tweetsentbr": 0.6590870652076504
}
| 0.720945
| 0.577932
|
|
CofeAI/Tele-FLM
|
main
| false
|
bfloat16
| 0
|
?
|
Original
|
English
|
FAILED
| 2024-06-18T15:54:20
|
π’ : pretrained
|
leaderboard
| 825
|
2024-06-18T19-47-08.886524
| null | null | null | null |
|
CohereForAI/aya-101
|
main
| false
|
float16
| 12.921
|
T5ForConditionalGeneration
|
Original
|
English
|
FINISHED
| 2024-02-17T03:43:40
|
π¬ : chat models (RLHF, DPO, IFT, ...)
|
leaderboard
| 253
|
2024-02-21T19-25-38.847154
|
1.1.0
|
{
"enem_challenge": 0.5703289013296011,
"bluex": 0.47844228094575797,
"oab_exams": 0.3895216400911162,
"assin2_rte": 0.845896116707975,
"assin2_sts": 0.18932506997017534,
"faquad_nli": 0.3536861536119358,
"hatebr_offensive": 0.8577866430260047,
"portuguese_hate_speech": 0.5858880778588808,
"tweetsentbr": 0.7292099162284759
}
| 0.555565
| 0.354086
|
|
CohereForAI/aya-23-35B
|
main
| false
|
float16
| 34.981
|
CohereForCausalLM
|
Original
|
English
|
FINISHED
| 2024-05-23T18:08:24
|
π¬ : chat (RLHF, DPO, IFT, ...)
|
leaderboard
| 725
|
2024-05-25T09-17-37.504596
|
1.1.0
|
{
"enem_challenge": 0.7039888033589923,
"bluex": 0.6022253129346314,
"oab_exams": 0.5466970387243736,
"assin2_rte": 0.9304615643741841,
"assin2_sts": 0.7846161558721925,
"faquad_nli": 0.7233650163143708,
"hatebr_offensive": 0.8860471199766156,
"portuguese_hate_speech": 0.6667720351930878,
"tweetsentbr": 0.5833463689780555
}
| 0.714169
| 0.573536
|
|
CohereForAI/aya-23-8B
|
main
| false
|
float16
| 8.028
|
CohereForCausalLM
|
Original
|
English
|
FINISHED
| 2024-05-23T18:08:07
|
π¬ : chat (RLHF, DPO, IFT, ...)
|
manual
| 726
|
2024-05-25T07-14-27.654611
|
1.1.0
|
{
"enem_challenge": 0.6046186144156753,
"bluex": 0.48400556328233657,
"oab_exams": 0.4328018223234624,
"assin2_rte": 0.9189769820971867,
"assin2_sts": 0.780672309349922,
"faquad_nli": 0.6541835357624831,
"hatebr_offensive": 0.7471163522824039,
"portuguese_hate_speech": 0.6490477906224632,
"tweetsentbr": 0.6908665777890396
}
| 0.662477
| 0.491916
|
|
CohereForAI/aya-expanse-32b
|
main
| false
|
float16
| 32.296
|
CohereForCausalLM
|
Original
|
English
|
FINISHED
| 2024-10-27T00:57:18
|
π¬ : chat (RLHF, DPO, IFT, ...)
|
manual
| 1,301
|
2024-12-03T17-11-02.917534
|
1.1.0
|
{
"enem_challenge": 0.7375787263820853,
"bluex": 0.6536856745479833,
"oab_exams": 0.5990888382687927,
"assin2_rte": 0.923384962213484,
"assin2_sts": 0.7373015447324608,
"faquad_nli": 0.720673391991921,
"hatebr_offensive": 0.8251623100939243,
"portuguese_hate_speech": 0.7105613391695256,
"tweetsentbr": 0.6648642123277785
}
| 0.730256
| 0.595249
|
|
CohereForAI/aya-expanse-8b
|
main
| false
|
float16
| 8.028
|
CohereForCausalLM
|
Original
|
English
|
FINISHED
| 2024-10-24T16:10:02
|
π : language adapted (FP, FT, ...)
|
manual
| 1,315
|
2024-12-04T11-05-27.239366
|
1.1.0
|
{
"enem_challenge": 0.6592022393282015,
"bluex": 0.5438108484005564,
"oab_exams": 0.47425968109339406,
"assin2_rte": 0.9115861876158329,
"assin2_sts": 0.7797645180627574,
"faquad_nli": 0.7434885556432518,
"hatebr_offensive": 0.8207730793849852,
"portuguese_hate_speech": 0.7088100738188532,
"tweetsentbr": 0.719073576922255
}
| 0.706752
| 0.564488
|
|
CohereForAI/c4ai-command-r-plus-4bit
|
main
| false
|
4bit
| 55.052
|
CohereForCausalLM
|
Original
|
English
|
FINISHED
| 2024-04-05T14:50:15
|
π¬ : chat models (RLHF, DPO, IFT, ...)
|
leaderboard
| 464
|
2024-04-15T16-05-38.445928
|
1.1.0
|
{
"enem_challenge": 0.7508747375787264,
"bluex": 0.6620305980528511,
"oab_exams": 0.6255125284738041,
"assin2_rte": 0.9301234467745643,
"assin2_sts": 0.7933785386356376,
"faquad_nli": 0.7718257450767017,
"hatebr_offensive": 0.773798484417851,
"portuguese_hate_speech": 0.7166167166167167,
"tweetsentbr": 0.7540570104676597
}
| 0.753135
| 0.625007
|
|
CohereForAI/c4ai-command-r-plus
|
main
| false
|
float16
| 103.811
|
CohereForCausalLM
|
Original
|
English
|
RERUN
| 2024-04-07T18:08:25
|
π¬ : chat models (RLHF, DPO, IFT, ...)
|
leaderboard
| 996
|
2024-08-10T16-28-15.102154
| null | null | null | null |
End of preview.
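For readers of the preview, the two summary columns are derived from result_metrics: result_metrics_average appears to be the unweighted mean of the nine task scores, which is consistent with the rows shown above (result_metrics_npm looks like a normalized variant and is not reproduced here). A quick check against the first preview row, as a sketch:

```python
# Sketch: recompute result_metrics_average for the first preview row
# (tanliboy/lambda-qwen2.5-14b-dpo-test) from its result_metrics dict.
result_metrics = {
    "enem_challenge": 0.7991602519244226,
    "bluex": 0.7315716272600834,
    "oab_exams": 0.6104783599088838,
    "assin2_rte": 0.9448521957747049,
    "assin2_sts": 0.8243398669298373,
    "faquad_nli": 0.7882522522522523,
    "hatebr_offensive": 0.9003808155770413,
    "portuguese_hate_speech": 0.7474723628059027,
    "tweetsentbr": 0.7221843254982979,
}

average = sum(result_metrics.values()) / len(result_metrics)
print(round(average, 5))  # 0.78541, matching the result_metrics_average column above
```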
No dataset card yet.
Downloads last month: 24,567