[2025-07-09 17:01:35,568][__main__][INFO] - cache_dir: /tmp/
dataset:
  name: kamel-usp/aes_enem_dataset
  split: JBCS2025
training_params:
  seed: 42
  num_train_epochs: 20
  logging_steps: 100
  metric_for_best_model: QWK
  bf16: true
bootstrap:
  enabled: true
  n_bootstrap: 10000
  bootstrap_seed: 42
  metrics:
    - QWK
    - Macro_F1
    - Weighted_F1
post_training_results:
  model_path: /workspace/jbcs2025/outputs/2025-03-24/20-42-59
experiments:
  model:
    name: ricardoz/BERTugues-base-portuguese-cased
    type: encoder_classification
    num_labels: 6
    output_dir: ./results/
    logging_dir: ./logs/
    best_model_dir: ./results/best_model
  tokenizer:
    name: ricardoz/BERTugues-base-portuguese-cased
  dataset:
    grade_index: 0
    use_full_context: false
  training_params:
    weight_decay: 0.01
    warmup_ratio: 0.1
    learning_rate: 5.0e-05
    train_batch_size: 16
    eval_batch_size: 16
    gradient_accumulation_steps: 1
    gradient_checkpointing: false

[2025-07-09 17:01:39,460][__main__][INFO] - GPU 0: NVIDIA RTX A6000 | TDP ≈ 300 W
[2025-07-09 17:01:39,460][__main__][INFO] - Starting the fine-tuning process.
[2025-07-09 17:01:44,526][transformers.configuration_utils][INFO] - loading configuration file config.json from cache at /tmp/models--ricardoz--BERTugues-base-portuguese-cased/snapshots/76022866e209716d673e144cc9186f7b20830967/config.json
[2025-07-09 17:01:44,528][transformers.configuration_utils][INFO] - Model config BertConfig {
  "architectures": [
    "BertForPreTraining"
  ],
  "attention_probs_dropout_prob": 0.1,
  "classifier_dropout": null,
  "hidden_act": "gelu",
  "hidden_dropout_prob": 0.1,
  "hidden_size": 768,
  "initializer_range": 0.02,
  "intermediate_size": 3072,
  "layer_norm_eps": 1e-12,
  "max_position_embeddings": 512,
  "model_type": "bert",
  "num_attention_heads": 12,
  "num_hidden_layers": 12,
  "pad_token_id": 0,
  "position_embedding_type": "absolute",
  "torch_dtype": "float32",
  "transformers_version": "4.53.1",
  "type_vocab_size": 2,
  "use_cache": true,
  "vocab_size": 30522
}

[2025-07-09 17:01:44,746][transformers.models.auto.tokenization_auto][INFO] - Could not locate the tokenizer configuration file, will try to use the model config instead.
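The banner above is the Hydra job configuration resolved at launch. A minimal sketch of an entry point that would emit such a dump, assuming a Hydra-based script; the `configs` path and function names are illustrative, not taken from the repository:

```python
import hydra
from omegaconf import DictConfig, OmegaConf

@hydra.main(config_path="configs", config_name="config", version_base=None)
def main(cfg: DictConfig) -> None:
    # Hydra resolves defaults and overrides, then the script logs the full
    # tree, which is what appears at 17:01:35,568 above.
    print(OmegaConf.to_yaml(cfg))
    model_name = cfg.experiments.model.name         # "ricardoz/BERTugues-base-portuguese-cased"
    epochs = cfg.training_params.num_train_epochs   # 20

if __name__ == "__main__":
    main()
```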
[2025-07-09 17:01:44,949][transformers.configuration_utils][INFO] - loading configuration file config.json from cache at /tmp/models--ricardoz--BERTugues-base-portuguese-cased/snapshots/76022866e209716d673e144cc9186f7b20830967/config.json
[2025-07-09 17:01:44,950][transformers.configuration_utils][INFO] - Model config BertConfig { … } (identical to the dump at 17:01:44,528 above)
[2025-07-09 17:01:45,587][transformers.tokenization_utils_base][INFO] - loading file vocab.txt from cache at /tmp/models--ricardoz--BERTugues-base-portuguese-cased/snapshots/76022866e209716d673e144cc9186f7b20830967/vocab.txt
[2025-07-09 17:01:45,587][transformers.tokenization_utils_base][INFO] - loading file tokenizer.json from cache at None
[2025-07-09 17:01:45,587][transformers.tokenization_utils_base][INFO] - loading file added_tokens.json from cache at None
[2025-07-09 17:01:45,587][transformers.tokenization_utils_base][INFO] - loading file special_tokens_map.json from cache at None
[2025-07-09 17:01:45,587][transformers.tokenization_utils_base][INFO] - loading file tokenizer_config.json from cache at None
[2025-07-09 17:01:45,587][transformers.tokenization_utils_base][INFO] - loading file chat_template.jinja from cache at None
[2025-07-09 17:01:45,588][transformers.configuration_utils][INFO] - loading configuration file config.json from cache at /tmp/models--ricardoz--BERTugues-base-portuguese-cased/snapshots/76022866e209716d673e144cc9186f7b20830967/config.json
[2025-07-09 17:01:45,588][transformers.configuration_utils][INFO] - Model config BertConfig { … } (identical to the dump at 17:01:44,528 above)
[2025-07-09 17:01:45,622][transformers.configuration_utils][INFO] - loading configuration file config.json from cache at /tmp/models--ricardoz--BERTugues-base-portuguese-cased/snapshots/76022866e209716d673e144cc9186f7b20830967/config.json
[2025-07-09 17:01:45,622][transformers.configuration_utils][INFO] - Model config BertConfig { … } (identical to the dump at 17:01:44,528 above)
[2025-07-09 17:01:45,641][__main__][INFO] - Tokenizer function parameters - Padding: longest; Truncation: True; Use Full Context: False
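The tokenizer lines show that the model repository ships only `vocab.txt` (every other tokenizer file resolves to `None`), so the tokenizer is rebuilt from the vocabulary plus the model config. A sketch of the load and of the tokenization pass whose parameters are logged at 17:01:45,641, assuming the standard `transformers` API; the helper name is illustrative and the `essay_text` column name is taken from the ignored-columns messages later in the log:

```python
import numpy as np
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained(
    "ricardoz/BERTugues-base-portuguese-cased", cache_dir="/tmp/"
)

def log_token_stats(split_name, texts):
    # padding="longest" pads every example to the batch maximum;
    # truncation=True caps sequences at the model limit (512 for BERT).
    enc = tokenizer(texts, padding="longest", truncation=True)
    lengths = np.array([len(ids) for ids in enc["input_ids"]])
    print(f"Token statistics for '{split_name}' split:")
    print(f"Total examples: {len(lengths)}")
    print(f"Min tokens: {lengths.min()}")
    print(f"Max tokens: {lengths.max()}")
    print(f"Avg tokens: {lengths.mean():.2f}")
    print(f"Std tokens: {lengths.std():.2f}")
```

Because at least one essay per split reaches the 512-token limit and each split is tokenized as one batch, padding-to-longest makes min = max = avg = 512, which is exactly the degenerate statistic the script warns about below.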
[2025-07-09 17:01:46,310][__main__][INFO] - Token statistics for 'train' split:
[2025-07-09 17:01:46,310][__main__][INFO] - Total examples: 500
[2025-07-09 17:01:46,310][__main__][INFO] - Min tokens: 512
[2025-07-09 17:01:46,310][__main__][INFO] - Max tokens: 512
[2025-07-09 17:01:46,310][__main__][INFO] - Avg tokens: 512.00
[2025-07-09 17:01:46,310][__main__][INFO] - Std tokens: 0.00
[2025-07-09 17:01:46,406][__main__][INFO] - Token statistics for 'validation' split:
[2025-07-09 17:01:46,407][__main__][INFO] - Total examples: 132
[2025-07-09 17:01:46,407][__main__][INFO] - Min tokens: 512
[2025-07-09 17:01:46,407][__main__][INFO] - Max tokens: 512
[2025-07-09 17:01:46,407][__main__][INFO] - Avg tokens: 512.00
[2025-07-09 17:01:46,407][__main__][INFO] - Std tokens: 0.00
[2025-07-09 17:01:46,506][__main__][INFO] - Token statistics for 'test' split:
[2025-07-09 17:01:46,506][__main__][INFO] - Total examples: 138
[2025-07-09 17:01:46,506][__main__][INFO] - Min tokens: 512
[2025-07-09 17:01:46,506][__main__][INFO] - Max tokens: 512
[2025-07-09 17:01:46,506][__main__][INFO] - Avg tokens: 512.00
[2025-07-09 17:01:46,506][__main__][INFO] - Std tokens: 0.00
[2025-07-09 17:01:46,506][__main__][INFO] - If the token statistics are identical (min, max, avg), keep in mind that this is an artifact of batched tokenization with padding.
[2025-07-09 17:01:46,506][__main__][INFO] - Model max length: 512. If the statistics match this value, sequences are very likely being truncated.
[2025-07-09 17:01:46,719][transformers.configuration_utils][INFO] - loading configuration file config.json from cache at /tmp/models--ricardoz--BERTugues-base-portuguese-cased/snapshots/76022866e209716d673e144cc9186f7b20830967/config.json
[2025-07-09 17:01:46,720][transformers.configuration_utils][INFO] - Model config BertConfig {
  "architectures": [
    "BertForPreTraining"
  ],
  "attention_probs_dropout_prob": 0.1,
  "classifier_dropout": null,
  "hidden_act": "gelu",
  "hidden_dropout_prob": 0.1,
  "hidden_size": 768,
  "id2label": {
    "0": 0,
    "1": 40,
    "2": 80,
    "3": 120,
    "4": 160,
    "5": 200
  },
  "initializer_range": 0.02,
  "intermediate_size": 3072,
  "label2id": {
    "0": 0,
    "40": 1,
    "80": 2,
    "120": 3,
    "160": 4,
    "200": 5
  },
  "layer_norm_eps": 1e-12,
  "max_position_embeddings": 512,
  "model_type": "bert",
  "num_attention_heads": 12,
  "num_hidden_layers": 12,
  "pad_token_id": 0,
  "position_embedding_type": "absolute",
  "torch_dtype": "float32",
  "transformers_version": "4.53.1",
  "type_vocab_size": 2,
  "use_cache": true,
  "vocab_size": 30522
}

[2025-07-09 17:01:46,911][transformers.modeling_utils][INFO] - loading weights file model.safetensors from cache at /tmp/models--ricardoz--BERTugues-base-portuguese-cased/snapshots/76022866e209716d673e144cc9186f7b20830967/model.safetensors
[2025-07-09 17:01:46,911][transformers.modeling_utils][INFO] - Will use torch_dtype=torch.float32 as defined in model's config object
[2025-07-09 17:01:46,911][transformers.modeling_utils][INFO] - Instantiating BertForSequenceClassification model under default dtype torch.float32.
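This second config dump differs from the first: the script injects `num_labels=6` with an `id2label`/`label2id` mapping from class indices to the ENEM grade points (0, 40, …, 200) before instantiating the classifier. A minimal sketch of that step, assuming the standard `transformers` API:

```python
from transformers import AutoModelForSequenceClassification

# Class indices 0-5 map to ENEM competence grades, per the id2label dump above.
id2label = {0: 0, 1: 40, 2: 80, 3: 120, 4: 160, 5: 200}

model = AutoModelForSequenceClassification.from_pretrained(
    "ricardoz/BERTugues-base-portuguese-cased",
    num_labels=6,
    id2label=id2label,
    label2id={v: k for k, v in id2label.items()},
    cache_dir="/tmp/",
)
```

Loading a `BertForPreTraining` checkpoint this way is also why the next lines warn that the `cls.*` pre-training heads are dropped and a fresh `classifier.weight`/`classifier.bias` are randomly initialized.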
[2025-07-09 17:01:48,083][transformers.modeling_utils][INFO] - Some weights of the model checkpoint at ricardoz/BERTugues-base-portuguese-cased were not used when initializing BertForSequenceClassification: ['cls.predictions.bias', 'cls.predictions.transform.LayerNorm.bias', 'cls.predictions.transform.LayerNorm.weight', 'cls.predictions.transform.dense.bias', 'cls.predictions.transform.dense.weight', 'cls.seq_relationship.bias', 'cls.seq_relationship.weight']
- This IS expected if you are initializing BertForSequenceClassification from the checkpoint of a model trained on another task or with another architecture (e.g. initializing a BertForSequenceClassification model from a BertForPreTraining model).
- This IS NOT expected if you are initializing BertForSequenceClassification from the checkpoint of a model that you expect to be exactly identical (initializing a BertForSequenceClassification model from a BertForSequenceClassification model).
[2025-07-09 17:01:48,084][transformers.modeling_utils][WARNING] - Some weights of BertForSequenceClassification were not initialized from the model checkpoint at ricardoz/BERTugues-base-portuguese-cased and are newly initialized: ['classifier.bias', 'classifier.weight']
You should probably TRAIN this model on a down-stream task to be able to use it for predictions and inference.
[2025-07-09 17:01:48,090][transformers.training_args][INFO] - PyTorch: setting up devices
[2025-07-09 17:01:48,113][__main__][INFO] - Total steps: 620. Number of warmup steps: 62
[2025-07-09 17:01:48,121][transformers.trainer][INFO] - You have loaded a model on multiple GPUs. `is_model_parallel` attribute will be force-set to `True` to avoid any unexpected behavior such as device placement mismatching.
[2025-07-09 17:01:48,144][transformers.trainer][INFO] - Using auto half precision backend
[2025-07-09 17:01:48,147][transformers.trainer][INFO] - The following columns in the Evaluation set don't have a corresponding argument in `BertForSequenceClassification.forward` and have been ignored: essay_text, supporting_text, grades, prompt, essay_year, reference, id, id_prompt. If essay_text, supporting_text, grades, prompt, essay_year, reference, id, id_prompt are not expected by `BertForSequenceClassification.forward`, you can safely ignore this message.
[2025-07-09 17:01:48,152][transformers.trainer][INFO] - ***** Running Evaluation *****
[2025-07-09 17:01:48,152][transformers.trainer][INFO] - Num examples = 132
[2025-07-09 17:01:48,152][transformers.trainer][INFO] - Batch size = 16
[2025-07-09 17:01:49,260][transformers.trainer][INFO] - The following columns in the Training set don't have a corresponding argument in `BertForSequenceClassification.forward` and have been ignored: essay_text, supporting_text, grades, prompt, essay_year, reference, id, id_prompt. If essay_text, supporting_text, grades, prompt, essay_year, reference, id, id_prompt are not expected by `BertForSequenceClassification.forward`, you can safely ignore this message.
[2025-07-09 17:01:49,270][transformers.trainer][INFO] - ***** Running training *****
[2025-07-09 17:01:49,270][transformers.trainer][INFO] - Num examples = 500
[2025-07-09 17:01:49,270][transformers.trainer][INFO] - Num Epochs = 20
[2025-07-09 17:01:49,270][transformers.trainer][INFO] - Instantaneous batch size per device = 16
[2025-07-09 17:01:49,270][transformers.trainer][INFO] - Total train batch size (w. parallel, distributed & accumulation) = 16
[2025-07-09 17:01:49,270][transformers.trainer][INFO] - Gradient Accumulation steps = 1
[2025-07-09 17:01:49,270][transformers.trainer][INFO] - Total optimization steps = 640
[2025-07-09 17:01:49,271][transformers.trainer][INFO] - Number of trainable parameters = 109,486,854
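The step counts follow directly from the data sizes: with 500 training essays and a batch size of 16, each epoch is ceil(500/16) = 32 optimizer steps, so 20 epochs give the 640 total optimization steps the Trainer reports; the script's own earlier estimate of 620 appears to floor rather than ceil the division (31 × 20), and its 62 warmup steps are 10% of that estimate. A worked check:

```python
import math

train_examples, batch_size, epochs = 500, 16, 20
steps_per_epoch = math.ceil(train_examples / batch_size)            # 32
total_steps = steps_per_epoch * epochs                              # 640 (Trainer's count)
script_estimate = (train_examples // batch_size) * epochs           # 620 (floored estimate)
warmup_steps = int(0.1 * script_estimate)                           # 62, matching warmup_ratio: 0.1
```

This is also why the checkpoints below land at multiples of 32: evaluation and saving run once per epoch.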
parallel, distributed & accumulation) = 16 [2025-07-09 17:01:49,270][transformers.trainer][INFO] - Gradient Accumulation steps = 1 [2025-07-09 17:01:49,270][transformers.trainer][INFO] - Total optimization steps = 640 [2025-07-09 17:01:49,271][transformers.trainer][INFO] - Number of trainable parameters = 109,486,854 [2025-07-09 17:01:54,149][transformers.trainer][INFO] - The following columns in the Evaluation set don't have a corresponding argument in `BertForSequenceClassification.forward` and have been ignored: essay_text, supporting_text, grades, prompt, essay_year, reference, id, id_prompt. If essay_text, supporting_text, grades, prompt, essay_year, reference, id, id_prompt are not expected by `BertForSequenceClassification.forward`, you can safely ignore this message. [2025-07-09 17:01:54,152][transformers.trainer][INFO] - ***** Running Evaluation ***** [2025-07-09 17:01:54,152][transformers.trainer][INFO] - Num examples = 132 [2025-07-09 17:01:54,152][transformers.trainer][INFO] - Batch size = 16 [2025-07-09 17:01:54,570][transformers.trainer][INFO] - Saving model checkpoint to /workspace/jbcs2025/outputs/2025-07-09/17-01-35/results/checkpoint-32 [2025-07-09 17:01:54,572][transformers.configuration_utils][INFO] - Configuration saved in /workspace/jbcs2025/outputs/2025-07-09/17-01-35/results/checkpoint-32/config.json [2025-07-09 17:01:55,547][transformers.modeling_utils][INFO] - Model weights saved in /workspace/jbcs2025/outputs/2025-07-09/17-01-35/results/checkpoint-32/model.safetensors [2025-07-09 17:02:01,078][transformers.trainer][INFO] - The following columns in the Evaluation set don't have a corresponding argument in `BertForSequenceClassification.forward` and have been ignored: essay_text, supporting_text, grades, prompt, essay_year, reference, id, id_prompt. If essay_text, supporting_text, grades, prompt, essay_year, reference, id, id_prompt are not expected by `BertForSequenceClassification.forward`, you can safely ignore this message. [2025-07-09 17:02:01,081][transformers.trainer][INFO] - ***** Running Evaluation ***** [2025-07-09 17:02:01,081][transformers.trainer][INFO] - Num examples = 132 [2025-07-09 17:02:01,081][transformers.trainer][INFO] - Batch size = 16 [2025-07-09 17:02:01,499][transformers.trainer][INFO] - Saving model checkpoint to /workspace/jbcs2025/outputs/2025-07-09/17-01-35/results/checkpoint-64 [2025-07-09 17:02:01,501][transformers.configuration_utils][INFO] - Configuration saved in /workspace/jbcs2025/outputs/2025-07-09/17-01-35/results/checkpoint-64/config.json [2025-07-09 17:02:02,401][transformers.modeling_utils][INFO] - Model weights saved in /workspace/jbcs2025/outputs/2025-07-09/17-01-35/results/checkpoint-64/model.safetensors [2025-07-09 17:02:03,876][transformers.trainer][INFO] - Deleting older checkpoint [/workspace/jbcs2025/outputs/2025-07-09/17-01-35/results/checkpoint-32] due to args.save_total_limit [2025-07-09 17:02:08,594][transformers.trainer][INFO] - The following columns in the Evaluation set don't have a corresponding argument in `BertForSequenceClassification.forward` and have been ignored: essay_text, supporting_text, grades, prompt, essay_year, reference, id, id_prompt. If essay_text, supporting_text, grades, prompt, essay_year, reference, id, id_prompt are not expected by `BertForSequenceClassification.forward`, you can safely ignore this message. 
[2025-07-09 17:02:08,594][transformers.trainer][INFO] - The following columns in the Evaluation set don't have a corresponding argument in `BertForSequenceClassification.forward` and have been ignored: essay_text, supporting_text, grades, prompt, essay_year, reference, id, id_prompt. If essay_text, supporting_text, grades, prompt, essay_year, reference, id, id_prompt are not expected by `BertForSequenceClassification.forward`, you can safely ignore this message.
[2025-07-09 17:02:08,597][transformers.trainer][INFO] - ***** Running Evaluation *****
[2025-07-09 17:02:08,597][transformers.trainer][INFO] - Num examples = 132
[2025-07-09 17:02:08,598][transformers.trainer][INFO] - Batch size = 16
[2025-07-09 17:02:09,021][transformers.trainer][INFO] - Saving model checkpoint to /workspace/jbcs2025/outputs/2025-07-09/17-01-35/results/checkpoint-96
[2025-07-09 17:02:09,022][transformers.configuration_utils][INFO] - Configuration saved in /workspace/jbcs2025/outputs/2025-07-09/17-01-35/results/checkpoint-96/config.json
[2025-07-09 17:02:10,284][transformers.modeling_utils][INFO] - Model weights saved in /workspace/jbcs2025/outputs/2025-07-09/17-01-35/results/checkpoint-96/model.safetensors
[2025-07-09 17:02:15,880][transformers.trainer][INFO] - The following columns in the Evaluation set don't have a corresponding argument in `BertForSequenceClassification.forward` and have been ignored: essay_text, supporting_text, grades, prompt, essay_year, reference, id, id_prompt. If essay_text, supporting_text, grades, prompt, essay_year, reference, id, id_prompt are not expected by `BertForSequenceClassification.forward`, you can safely ignore this message.
[2025-07-09 17:02:15,883][transformers.trainer][INFO] - ***** Running Evaluation *****
[2025-07-09 17:02:15,883][transformers.trainer][INFO] - Num examples = 132
[2025-07-09 17:02:15,883][transformers.trainer][INFO] - Batch size = 16
[2025-07-09 17:02:16,290][transformers.trainer][INFO] - Saving model checkpoint to /workspace/jbcs2025/outputs/2025-07-09/17-01-35/results/checkpoint-128
[2025-07-09 17:02:16,291][transformers.configuration_utils][INFO] - Configuration saved in /workspace/jbcs2025/outputs/2025-07-09/17-01-35/results/checkpoint-128/config.json
[2025-07-09 17:02:17,310][transformers.modeling_utils][INFO] - Model weights saved in /workspace/jbcs2025/outputs/2025-07-09/17-01-35/results/checkpoint-128/model.safetensors
[2025-07-09 17:02:18,246][transformers.trainer][INFO] - Deleting older checkpoint [/workspace/jbcs2025/outputs/2025-07-09/17-01-35/results/checkpoint-64] due to args.save_total_limit
[2025-07-09 17:02:18,313][transformers.trainer][INFO] - Deleting older checkpoint [/workspace/jbcs2025/outputs/2025-07-09/17-01-35/results/checkpoint-96] due to args.save_total_limit
[2025-07-09 17:02:22,925][transformers.trainer][INFO] - The following columns in the Evaluation set don't have a corresponding argument in `BertForSequenceClassification.forward` and have been ignored: essay_text, supporting_text, grades, prompt, essay_year, reference, id, id_prompt. If essay_text, supporting_text, grades, prompt, essay_year, reference, id, id_prompt are not expected by `BertForSequenceClassification.forward`, you can safely ignore this message.
[2025-07-09 17:02:22,927][transformers.trainer][INFO] - ***** Running Evaluation *****
[2025-07-09 17:02:22,927][transformers.trainer][INFO] - Num examples = 132
[2025-07-09 17:02:22,927][transformers.trainer][INFO] - Batch size = 16
[2025-07-09 17:02:23,315][transformers.trainer][INFO] - Saving model checkpoint to /workspace/jbcs2025/outputs/2025-07-09/17-01-35/results/checkpoint-160
[2025-07-09 17:02:23,316][transformers.configuration_utils][INFO] - Configuration saved in /workspace/jbcs2025/outputs/2025-07-09/17-01-35/results/checkpoint-160/config.json
[2025-07-09 17:02:24,123][transformers.modeling_utils][INFO] - Model weights saved in /workspace/jbcs2025/outputs/2025-07-09/17-01-35/results/checkpoint-160/model.safetensors
[2025-07-09 17:02:29,422][transformers.trainer][INFO] - The following columns in the Evaluation set don't have a corresponding argument in `BertForSequenceClassification.forward` and have been ignored: essay_text, supporting_text, grades, prompt, essay_year, reference, id, id_prompt. If essay_text, supporting_text, grades, prompt, essay_year, reference, id, id_prompt are not expected by `BertForSequenceClassification.forward`, you can safely ignore this message.
[2025-07-09 17:02:29,424][transformers.trainer][INFO] - ***** Running Evaluation *****
[2025-07-09 17:02:29,425][transformers.trainer][INFO] - Num examples = 132
[2025-07-09 17:02:29,425][transformers.trainer][INFO] - Batch size = 16
[2025-07-09 17:02:29,814][transformers.trainer][INFO] - Saving model checkpoint to /workspace/jbcs2025/outputs/2025-07-09/17-01-35/results/checkpoint-192
[2025-07-09 17:02:29,815][transformers.configuration_utils][INFO] - Configuration saved in /workspace/jbcs2025/outputs/2025-07-09/17-01-35/results/checkpoint-192/config.json
[2025-07-09 17:02:30,704][transformers.modeling_utils][INFO] - Model weights saved in /workspace/jbcs2025/outputs/2025-07-09/17-01-35/results/checkpoint-192/model.safetensors
[2025-07-09 17:02:32,183][transformers.trainer][INFO] - Deleting older checkpoint [/workspace/jbcs2025/outputs/2025-07-09/17-01-35/results/checkpoint-160] due to args.save_total_limit
[2025-07-09 17:02:36,833][transformers.trainer][INFO] - The following columns in the Evaluation set don't have a corresponding argument in `BertForSequenceClassification.forward` and have been ignored: essay_text, supporting_text, grades, prompt, essay_year, reference, id, id_prompt. If essay_text, supporting_text, grades, prompt, essay_year, reference, id, id_prompt are not expected by `BertForSequenceClassification.forward`, you can safely ignore this message.
[2025-07-09 17:02:36,836][transformers.trainer][INFO] - ***** Running Evaluation *****
[2025-07-09 17:02:36,836][transformers.trainer][INFO] - Num examples = 132
[2025-07-09 17:02:36,836][transformers.trainer][INFO] - Batch size = 16
[2025-07-09 17:02:37,229][transformers.trainer][INFO] - Saving model checkpoint to /workspace/jbcs2025/outputs/2025-07-09/17-01-35/results/checkpoint-224
[2025-07-09 17:02:37,230][transformers.configuration_utils][INFO] - Configuration saved in /workspace/jbcs2025/outputs/2025-07-09/17-01-35/results/checkpoint-224/config.json
[2025-07-09 17:02:38,109][transformers.modeling_utils][INFO] - Model weights saved in /workspace/jbcs2025/outputs/2025-07-09/17-01-35/results/checkpoint-224/model.safetensors
[2025-07-09 17:02:38,912][transformers.trainer][INFO] - Deleting older checkpoint [/workspace/jbcs2025/outputs/2025-07-09/17-01-35/results/checkpoint-192] due to args.save_total_limit
[2025-07-09 17:02:43,498][transformers.trainer][INFO] - The following columns in the Evaluation set don't have a corresponding argument in `BertForSequenceClassification.forward` and have been ignored: essay_text, supporting_text, grades, prompt, essay_year, reference, id, id_prompt. If essay_text, supporting_text, grades, prompt, essay_year, reference, id, id_prompt are not expected by `BertForSequenceClassification.forward`, you can safely ignore this message.
[2025-07-09 17:02:43,501][transformers.trainer][INFO] - ***** Running Evaluation *****
[2025-07-09 17:02:43,501][transformers.trainer][INFO] - Num examples = 132
[2025-07-09 17:02:43,501][transformers.trainer][INFO] - Batch size = 16
[2025-07-09 17:02:43,887][transformers.trainer][INFO] - Saving model checkpoint to /workspace/jbcs2025/outputs/2025-07-09/17-01-35/results/checkpoint-256
[2025-07-09 17:02:43,889][transformers.configuration_utils][INFO] - Configuration saved in /workspace/jbcs2025/outputs/2025-07-09/17-01-35/results/checkpoint-256/config.json
[2025-07-09 17:02:44,876][transformers.modeling_utils][INFO] - Model weights saved in /workspace/jbcs2025/outputs/2025-07-09/17-01-35/results/checkpoint-256/model.safetensors
[2025-07-09 17:02:45,756][transformers.trainer][INFO] - Deleting older checkpoint [/workspace/jbcs2025/outputs/2025-07-09/17-01-35/results/checkpoint-224] due to args.save_total_limit
[2025-07-09 17:02:50,418][transformers.trainer][INFO] - The following columns in the Evaluation set don't have a corresponding argument in `BertForSequenceClassification.forward` and have been ignored: essay_text, supporting_text, grades, prompt, essay_year, reference, id, id_prompt. If essay_text, supporting_text, grades, prompt, essay_year, reference, id, id_prompt are not expected by `BertForSequenceClassification.forward`, you can safely ignore this message.
[2025-07-09 17:02:50,420][transformers.trainer][INFO] - ***** Running Evaluation *****
[2025-07-09 17:02:50,420][transformers.trainer][INFO] - Num examples = 132
[2025-07-09 17:02:50,421][transformers.trainer][INFO] - Batch size = 16
[2025-07-09 17:02:50,818][transformers.trainer][INFO] - Saving model checkpoint to /workspace/jbcs2025/outputs/2025-07-09/17-01-35/results/checkpoint-288
[2025-07-09 17:02:50,819][transformers.configuration_utils][INFO] - Configuration saved in /workspace/jbcs2025/outputs/2025-07-09/17-01-35/results/checkpoint-288/config.json
[2025-07-09 17:02:51,840][transformers.modeling_utils][INFO] - Model weights saved in /workspace/jbcs2025/outputs/2025-07-09/17-01-35/results/checkpoint-288/model.safetensors
[2025-07-09 17:02:52,719][transformers.trainer][INFO] - Deleting older checkpoint [/workspace/jbcs2025/outputs/2025-07-09/17-01-35/results/checkpoint-256] due to args.save_total_limit
[2025-07-09 17:02:52,800][transformers.trainer][INFO] - Training completed. Do not forget to share your model on huggingface.co/models =)
[2025-07-09 17:02:52,801][transformers.trainer][INFO] - Loading best model from /workspace/jbcs2025/outputs/2025-07-09/17-01-35/results/checkpoint-128 (score: 0.5213772228528187).
[2025-07-09 17:02:53,045][transformers.trainer][INFO] - Deleting older checkpoint [/workspace/jbcs2025/outputs/2025-07-09/17-01-35/results/checkpoint-288] due to args.save_total_limit
[2025-07-09 17:02:53,175][transformers.trainer][INFO] - The following columns in the Evaluation set don't have a corresponding argument in `BertForSequenceClassification.forward` and have been ignored: essay_text, supporting_text, grades, prompt, essay_year, reference, id, id_prompt. If essay_text, supporting_text, grades, prompt, essay_year, reference, id, id_prompt are not expected by `BertForSequenceClassification.forward`, you can safely ignore this message.
[2025-07-09 17:02:53,178][transformers.trainer][INFO] - ***** Running Evaluation *****
[2025-07-09 17:02:53,178][transformers.trainer][INFO] - Num examples = 132
[2025-07-09 17:02:53,178][transformers.trainer][INFO] - Batch size = 16
[2025-07-09 17:02:53,607][__main__][INFO] - Training completed successfully.
[2025-07-09 17:02:53,607][__main__][INFO] - Running on Test
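The "score: 0.5213…" used to pick checkpoint-128 is the validation QWK, the `metric_for_best_model` from the config. Quadratic weighted kappa penalizes disagreements by the squared distance between grade bins, which suits ordinal ENEM grades; a minimal sketch of how it is typically computed (the helper name is illustrative, not from the script):

```python
from sklearn.metrics import cohen_kappa_score

def quadratic_weighted_kappa(y_true, y_pred):
    # Labels are the class indices 0-5; "quadratic" weights penalize a
    # prediction two bins away four times as much as one bin away.
    return cohen_kappa_score(y_true, y_pred, weights="quadratic")
```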
[2025-07-09 17:02:53,607][transformers.trainer][INFO] - The following columns in the Evaluation set don't have a corresponding argument in `BertForSequenceClassification.forward` and have been ignored: essay_text, supporting_text, grades, prompt, essay_year, reference, id, id_prompt. If essay_text, supporting_text, grades, prompt, essay_year, reference, id, id_prompt are not expected by `BertForSequenceClassification.forward`, you can safely ignore this message.
[2025-07-09 17:02:53,610][transformers.trainer][INFO] - ***** Running Evaluation *****
[2025-07-09 17:02:53,610][transformers.trainer][INFO] - Num examples = 138
[2025-07-09 17:02:53,610][transformers.trainer][INFO] - Batch size = 16
[2025-07-09 17:02:54,035][__main__][INFO] - Test metrics: {
    'eval_loss': 1.247100830078125,
    'eval_model_preparation_time': 0.0025,
    'eval_accuracy': 0.5434782608695652,
    'eval_RMSE': 30.45547950507524,
    'eval_QWK': 0.6139860139860139,
    'eval_HDIV': 0.007246376811594235,
    'eval_Macro_F1': 0.38733766233766237,
    'eval_Micro_F1': 0.5434782608695652,
    'eval_Weighted_F1': 0.5620203065855239,
    'eval_TP_0': 0, 'eval_TN_0': 137, 'eval_FP_0': 0, 'eval_FN_0': 1,
    'eval_TP_1': 0, 'eval_TN_1': 138, 'eval_FP_1': 0, 'eval_FN_1': 0,
    'eval_TP_2': 7, 'eval_TN_2': 109, 'eval_FP_2': 19, 'eval_FN_2': 3,
    'eval_TP_3': 35, 'eval_TN_3': 61, 'eval_FP_3': 11, 'eval_FN_3': 31,
    'eval_TP_4': 28, 'eval_TN_4': 67, 'eval_FP_4': 20, 'eval_FN_4': 23,
    'eval_TP_5': 5, 'eval_TN_5': 115, 'eval_FP_5': 13, 'eval_FN_5': 5,
    'eval_runtime': 0.42,
    'eval_samples_per_second': 328.555,
    'eval_steps_per_second': 21.427,
    'epoch': 9.0
}
[2025-07-09 17:02:54,036][transformers.trainer][INFO] - Saving model checkpoint to ./results/best_model
[2025-07-09 17:02:54,038][transformers.configuration_utils][INFO] - Configuration saved in ./results/best_model/config.json
[2025-07-09 17:02:55,219][transformers.modeling_utils][INFO] - Model weights saved in ./results/best_model/model.safetensors
[2025-07-09 17:02:55,221][transformers.tokenization_utils_base][INFO] - tokenizer config file saved in ./results/best_model/tokenizer_config.json
[2025-07-09 17:02:55,221][transformers.tokenization_utils_base][INFO] - Special tokens file saved in ./results/best_model/special_tokens_map.json
[2025-07-09 17:02:55,234][__main__][INFO] - Model and tokenizer saved to ./results/best_model
[2025-07-09 17:02:55,238][__main__][INFO] - Fine-tuning finished.
[2025-07-09 17:02:55,748][__main__][INFO] - Total emissions: 0.0015 kg CO2eq
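The config at the top enables a bootstrap block (n_bootstrap: 10000, bootstrap_seed: 42) over QWK, Macro_F1, and Weighted_F1; the resampling itself is not shown in this log, so the following is only a sketch of how such percentile confidence intervals are commonly computed over the 138 test predictions (all names are illustrative):

```python
import numpy as np
from sklearn.metrics import cohen_kappa_score, f1_score

def bootstrap_ci(y_true, y_pred, metric_fn, n_bootstrap=10000, seed=42, alpha=0.05):
    # Resample (true, predicted) pairs with replacement and report the
    # percentile interval of the metric across resamples.
    rng = np.random.default_rng(seed)
    y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
    scores = []
    for _ in range(n_bootstrap):
        idx = rng.integers(0, len(y_true), size=len(y_true))
        scores.append(metric_fn(y_true[idx], y_pred[idx]))
    lo, hi = np.percentile(scores, [100 * alpha / 2, 100 * (1 - alpha / 2)])
    return float(lo), float(hi)

# e.g. for the three configured metrics:
# bootstrap_ci(y_true, y_pred, lambda t, p: cohen_kappa_score(t, p, weights="quadratic"))
# bootstrap_ci(y_true, y_pred, lambda t, p: f1_score(t, p, average="macro"))
# bootstrap_ci(y_true, y_pred, lambda t, p: f1_score(t, p, average="weighted"))
```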