---
license: llama3.2
language:
- en
base_model: meta-llama/Llama-3.2-1B
pipeline_tag: text-classification
library_name: peft
tags:
- regression
- story-point-estimation
- software-engineering
datasets:
- appceleratorstudio
- titanium
metrics:
- mae
- mdae
model-index:
- name: llama-3.2-1b-story-point-estimation
  results:
  - task:
      type: regression
      name: Story Point Estimation
    dataset:
      name: titanium Dataset
      type: titanium
      split: test
    metrics:
    - type: mae
      value: 3.309
      name: Mean Absolute Error (MAE)
    - type: mdae
      value: 2.24
      name: Median Absolute Error (MdAE)
---

# LLAMA 3 Story Point Estimator - appceleratorstudio - titanium

This model is fine-tuned on issue descriptions from appceleratorstudio and tested on titanium for story point estimation.

## Model Details
- Base Model: LLAMA 3.2 1B
- Training Project: appceleratorstudio
- Test Project: titanium
- Task: Story Point Estimation (Regression)
- Architecture: PEFT (LoRA)
- Input: Issue titles
- Output: Story point estimate (continuous value)

## Usage

```python
import torch
from peft import PeftConfig, PeftModel
from transformers import AutoModelForSequenceClassification, AutoTokenizer

# Load the PEFT adapter config to locate the base model
config = PeftConfig.from_pretrained("DEVCamiloSepulveda/00-LLAMA3SP-appceleratorstudio-titanium")

# Load tokenizer and base model, then attach the LoRA adapter
tokenizer = AutoTokenizer.from_pretrained("DEVCamiloSepulveda/00-LLAMA3SP-appceleratorstudio-titanium")
base_model = AutoModelForSequenceClassification.from_pretrained(
    config.base_model_name_or_path,
    num_labels=1,
    torch_dtype=torch.float16,
    device_map='auto'
)
model = PeftModel.from_pretrained(base_model, "DEVCamiloSepulveda/00-LLAMA3SP-appceleratorstudio-titanium")
model.eval()

# Prepare input text (padded/truncated to the 20-token training length)
text = "Your issue description here"
inputs = tokenizer(text, return_tensors="pt", truncation=True, max_length=20, padding="max_length")
inputs = {k: v.to(model.device) for k, v in inputs.items()}

# Get prediction: the single regression logit is the story point estimate
with torch.no_grad():
    outputs = model(**inputs)
story_points = outputs.logits.item()
```

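The head returns one logit per input, so batch estimation is just a matter of passing a list of titles. A minimal sketch, reusing the model and tokenizer loaded above; the issue titles are invented examples:

```python
# Hypothetical batch example: the issue titles are made up for illustration.
texts = [
    "Add OAuth2 login support",
    "Fix crash when the project list is empty",
]
inputs = tokenizer(texts, return_tensors="pt", truncation=True, max_length=20, padding="max_length")
inputs = {k: v.to(model.device) for k, v in inputs.items()}

with torch.no_grad():
    outputs = model(**inputs)

# logits has shape (batch, 1); flatten it into one estimate per title
story_points = outputs.logits.squeeze(-1).tolist()
```
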
## Training Details
- Fine-tuning method: LoRA (Low-Rank Adaptation); a configuration sketch follows this list
- Sequence length: 20 tokens
- Best training epoch: 0 (of 20)
- Batch size: 32
- Training time: 63.384 seconds
- Mean Absolute Error (MAE): 3.309
- Median Absolute Error (MdAE): 2.240
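
The card records only the hyperparameters above. For readers who want to reproduce a comparable setup, here is a minimal training sketch; the LoRA rank, alpha, dropout, target modules, learning rate, and the dataset file and column names are assumptions, not the recorded values for this checkpoint:

```python
import torch
from datasets import load_dataset
from peft import LoraConfig, get_peft_model
from transformers import (AutoModelForSequenceClassification, AutoTokenizer,
                          Trainer, TrainingArguments)

tokenizer = AutoTokenizer.from_pretrained("meta-llama/Llama-3.2-1B")
tokenizer.pad_token = tokenizer.eos_token  # Llama tokenizers ship without a pad token

base_model = AutoModelForSequenceClassification.from_pretrained(
    "meta-llama/Llama-3.2-1B",
    num_labels=1,  # single continuous output -> regression head (MSE loss)
)
base_model.config.pad_token_id = tokenizer.pad_token_id

# Assumed LoRA hyperparameters; the card does not record rank/alpha/targets.
lora_config = LoraConfig(
    task_type="SEQ_CLS",
    r=16,
    lora_alpha=32,
    lora_dropout=0.1,
    target_modules=["q_proj", "v_proj"],
)
model = get_peft_model(base_model, lora_config)

def preprocess(batch):
    enc = tokenizer(batch["title"], truncation=True, max_length=20, padding="max_length")
    enc["labels"] = [float(sp) for sp in batch["storypoint"]]  # float labels -> regression
    return enc

# Hypothetical local CSV with "title" and "storypoint" columns.
data = load_dataset("csv", data_files={"train": "appceleratorstudio.csv"})
train_ds = data["train"].map(preprocess, batched=True)

args = TrainingArguments(
    output_dir="llama3sp-appceleratorstudio",
    per_device_train_batch_size=32,  # batch size from the card
    num_train_epochs=20,             # trained for up to 20 epochs
    learning_rate=2e-4,              # assumed
    fp16=True,                       # assumes a CUDA device
)
Trainer(model=model, args=args, train_dataset=train_ds).train()
```
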
### Framework versions

- PEFT 0.14.0