
🧠 LLaMA 3.3 70B – Fine-tuned for Argumentative Writing Feedback

This model is a fully fine-tuned version of LLaMA 3.3 70B for the task of generating fine-grained teacher feedback on student argumentative essays. It identifies specific textual spans and provides localized comments related to grammar, clarity, coherence, and argumentation.
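
As a quick-start sketch, the model can be loaded with the standard Hugging Face Transformers causal-LM interface. The prompt layout below is illustrative only and is not the exact template used during fine-tuning; note that a 70B checkpoint also needs several GPUs or offloading to run.

```python
# Minimal inference sketch (assumes the standard Transformers causal-LM interface;
# the prompt layout is illustrative, not the exact fine-tuning template).
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "judywq/llama-ft-feedback_comment"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,
    device_map="auto",  # a 70B model requires multiple GPUs or CPU offloading
)

prompt = (
    "Essay prompt: <essay prompt>\n\n"
    "Student essay: <student essay>\n\n"
    "Return in-line feedback comments as a JSON object."
)

inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output_ids = model.generate(**inputs, max_new_tokens=512, do_sample=False)
print(tokenizer.decode(output_ids[0][inputs["input_ids"].shape[-1]:],
                       skip_special_tokens=True))
```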


📌 Task Overview

  • Task: Fine-grained feedback generation
  • Input: Essay prompt + student essay + in-line comments
  • Output: A structured JSON object with in-line comments
  • Comment format: Each comment targets a span of the essay with character-level offsets and an associated suggestion
  • Training data: TOEFL Public Dataset with annotated feedback (requires permission from ETS)

Example output format:

{
  "comments": [
    {
      "id": 0,
      "start": 12,
      "end": 28,
      "highlighted_text": "for living properly",
      "data": "Consider rephrasing to 'to live comfortably' for more natural phrasing."
    }
  ]
}
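
Because each comment carries character-level offsets, the output can be mapped back onto the essay for display. The helper below is a minimal sketch (its name and the validation step are illustrative, not part of the model); the field names follow the example above.

```python
import json

def parse_comments(essay: str, model_output: str) -> list[dict]:
    """Parse the model's JSON output and sanity-check each span against the essay."""
    comments = json.loads(model_output)["comments"]
    for c in comments:
        span = essay[c["start"]:c["end"]]
        if span != c["highlighted_text"]:
            # Offsets and highlighted text should agree; flag any drift in the output.
            print(f"Comment {c['id']}: span {span!r} != {c['highlighted_text']!r}")
    return comments
```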

🔧 Training Configuration

| Hyperparameter     | Value          |
|--------------------|----------------|
| Base model         | LLaMA 3.3 70B  |
| Fine-tuning method | Full-parameter |
| Epochs             | 20             |
| Batch size         | 32             |
| Learning rate      | 3e-6           |
| Optimizer          | AdamW          |
| Scheduler          | Cosine         |
| Training hardware  | 2×A100         |

Note: A 10-epoch version was initially trained but generated malformed JSON. This 20-epoch version significantly improved output formatting quality and stability.
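
For orientation only, the table above maps roughly onto Hugging Face `TrainingArguments` as sketched below. This is not the authors' training script: the per-device batch size, gradient accumulation, and the FSDP/DeepSpeed sharding needed for full-parameter tuning of a 70B model are assumptions and are simplified or omitted here.

```python
from transformers import TrainingArguments

# Rough, illustrative mapping of the reported hyperparameters; distributed
# sharding configuration for a 70B full-parameter run is not shown.
training_args = TrainingArguments(
    output_dir="llama33-70b-feedback-ft",  # hypothetical output path
    num_train_epochs=20,
    per_device_train_batch_size=1,         # assumption
    gradient_accumulation_steps=16,        # 2 GPUs x 1 x 16 = global batch size 32
    learning_rate=3e-6,
    optim="adamw_torch",
    lr_scheduler_type="cosine",
    bf16=True,
)
```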

📊 Evaluation

Output quality was assessed by manual inspection, judging whether each generated comment was necessary and effective.

📄 Citation

@misc{llama3-feedback,
  title={LLaMA 3.3 70B Fine-tuned for Argumentative Writing Feedback},
  author={Wang, Q. and Labib, A. and Yuan, Z.},
  year={2025},
  note={\url{https://huggingface.co/judywq/llama-ft-feedback_comment}}
}
