---
language: en
license: apache-2.0
library_name: transformers
tags:
- tptt
- peft
- trust_remote_code
pipeline_tag: text-generation
base_model: meta-llama/Llama-3.2-1B
datasets:
- yahma/alpaca-cleaned
---
# lora_delta_product_r_m0.5_constant
<p align="center">
<a href="https://arxiv.org/abs/2506.17671">
<img alt="arXiv" src="https://img.shields.io/badge/arXiv-tptt-blueviolet.svg">
</a>
<a href="https://pypi.org/project/tptt/">
<img alt="PyPI" src="https://img.shields.io/pypi/v/tptt?color=orange">
</a>
<a href="https://github.com/fabienfrfr/tptt/">
<img alt="Release" src="https://img.shields.io/github/v/release/fabienfrfr/tptt?color=brightgreen">
</a>
<a href="https://fabienfrfr.github.io/tptt/">
<img alt="Documentation" src="https://img.shields.io/badge/docs-online-blue">
</a>
<a href="https://huggingface.co/ffurfaro">
<img alt="HuggingFace" src="https://img.shields.io/badge/hf-ffurfaro-yellow">
</a>
</p>
Titanesque version of `meta-llama/Llama-3.2-1B` with parallel linearized attention (TPTT 😊) and PEFT.
The architecture was presented in the paper [TPTT](https://huggingface.co/papers/2506.17671).
## Model Details
- **Architecture:** `TpttModel`
- **Base model:** meta-llama/Llama-3.2-1B
- **LiZA config:** operator=delta_product_r, mag=0.5
- **LoRA config:** r=8, alpha=16, dropout=0.05 (see the configuration sketch after this list)
- **torch_dtype:**
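For reference, the LoRA hyperparameters above map onto a standard `peft.LoraConfig` as sketched below. This is an illustrative sketch only: the target modules are an assumption (they are not listed in this card), and the LiZA operator/mag settings are handled by the TPTT wrapper rather than by PEFT, so they are not reproduced here.

```python
from peft import LoraConfig

# Sketch of the adapter configuration implied by the values reported above.
lora_config = LoraConfig(
    r=8,                                  # LoRA rank, as reported
    lora_alpha=16,                        # scaling factor, as reported
    lora_dropout=0.05,                    # dropout, as reported
    target_modules=["q_proj", "v_proj"],  # assumed Llama attention projections (not stated in this card)
    task_type="CAUSAL_LM",
)
```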
## Usage
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Load the TPTT model (custom modeling code from the repo is required).
model = AutoModelForCausalLM.from_pretrained(
    "ffurfaro/lora_delta_product_r_m0.5_constant",
    trust_remote_code=True,
)

# The tokenizer is the one of the base model.
tokenizer = AutoTokenizer.from_pretrained("meta-llama/Llama-3.2-1B")

prompt = "Your prompt here"
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=100)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
> [!IMPORTANT]
> You must specify the `subfolder` if the repo contains multiple models, see the homepage for details.
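
As a concrete illustration of the note above, `from_pretrained` accepts a `subfolder` argument that points at one checkpoint inside a multi-model repo. The subfolder name below is a placeholder, not an actual path in this repository.

```python
from transformers import AutoModelForCausalLM

model = AutoModelForCausalLM.from_pretrained(
    "ffurfaro/lora_delta_product_r_m0.5_constant",
    subfolder="path/to/checkpoint",  # placeholder: replace with the actual subfolder listed on the homepage
    trust_remote_code=True,
)
```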
## Training
- **Dataset:** yahma/alpaca-cleaned
- **Platform:** Kaggle
- **Hardware:** 2xT4
- **Batch size:** 2
- **Epochs:** 1.0
- **Learning rate (final):** N/A
- **Loss (final):** 7.606347968441995
- **Training runtime:** 2004.1174 sec
- **Samples per second:** 1.291
- **Steps per second:** 0.323
- **Total FLOPs:** 1937596357804032.0
- **Gradient norm (final):** N/A
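
For readers who want to approximate this run, the sketch below shows a plain `transformers.Trainer` setup on `yahma/alpaca-cleaned` with the reported batch size and epoch count. It is an assumption-laden outline: the actual training used the TPTT/PEFT tooling (LiZA operator, LoRA injection), whose API is not shown here, and the prompt formatting, sequence length, and output directory are illustrative choices.

```python
from datasets import load_dataset
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          DataCollatorForLanguageModeling, Trainer, TrainingArguments)

tokenizer = AutoTokenizer.from_pretrained("meta-llama/Llama-3.2-1B")
tokenizer.pad_token = tokenizer.eos_token
model = AutoModelForCausalLM.from_pretrained(
    "ffurfaro/lora_delta_product_r_m0.5_constant", trust_remote_code=True
)

def tokenize(example):
    # Alpaca-style prompt: instruction (+ optional input), then the expected response.
    prompt = example["instruction"] + ("\n" + example["input"] if example["input"] else "")
    return tokenizer(prompt + "\n" + example["output"], truncation=True, max_length=512)

train_dataset = load_dataset("yahma/alpaca-cleaned", split="train").map(tokenize)

args = TrainingArguments(
    output_dir="tptt-llama-3.2-1b",   # illustrative output path
    per_device_train_batch_size=2,    # matches the reported batch size
    num_train_epochs=1,               # matches the reported epochs
    logging_steps=50,
)

Trainer(
    model=model,
    args=args,
    train_dataset=train_dataset,
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
).train()
```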
## Evaluation
- **Metrics:** Training loss only (no benchmark evaluation yet; a results table covering PiQA, ARC, HellaSwag, WinoGrande, GSM8K, and MMLU is planned)
- **Results:** Final training loss: 7.606347968441995
## Citation & Contact
If you use TPTT in your academic work, please cite [Furfaro](https://huggingface.co/ffurfaro). For questions or support, please open an issue on the [GitHub repository](https://github.com/fabienfrfr/tptt) or contact the maintainer.
---