# CALISTA-INDUSTRY/llama-3.2-3B-reasoning-en-ft-v1

## Model Summary
CALISTA-INDUSTRY/llama-3.2-3B-reasoning-en-ft-v1 is a fine-tuned version of Meta's Llama 3.2 3B Instruct model, optimized for English-language reasoning tasks. The fine-tuning targets improved performance in logical reasoning, problem-solving, and conversational understanding.
## Model Details
- Developed by: Mohammad Yani & Rizky Sulaeman, Politeknik Negeri Indramayu
- Model type: Decoder-only transformer (Llama 3.2 architecture)
- Parameter count: 3.21 billion
- Quantization formats: 4-bit (Q4_K_M), 5-bit (Q5_K_M), 8-bit (Q8_0); see the loading sketch after this list
- Training data: [Specify datasets or data sources used]
- License: Apache License 2.0
- Base model: meta-llama/Llama-3.2-3B-Instruct
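The Q4_K_M/Q5_K_M/Q8_0 formats listed above are GGUF-style quantizations. A minimal loading sketch with llama-cpp-python follows; the filename pattern is an assumption (check the repository's file listing for the actual GGUF filenames):

```python
# Minimal sketch, assuming the repo ships GGUF files with standard quant
# suffixes; the "*Q4_K_M.gguf" pattern below is hypothetical.
from llama_cpp import Llama  # pip install llama-cpp-python

llm = Llama.from_pretrained(
    repo_id="CALISTA-INDUSTRY/llama-3.2-3B-reasoning-en-ft-v1",
    filename="*Q4_K_M.gguf",  # swap for *Q5_K_M.gguf or *Q8_0.gguf for higher fidelity
    n_ctx=4096,               # context window; adjust to your memory budget
)

response = llm.create_chat_completion(
    messages=[{"role": "user", "content": "If today is Wednesday, what day is it in 10 days?"}],
    max_tokens=256,
)
print(response["choices"][0]["message"]["content"])
```

Larger quants trade memory for accuracy: Q8_0 is closest to the full-precision weights, while Q4_K_M roughly halves the footprint.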
## Intended Uses & Limitations

### Intended Uses
- Applications:
  - Logical reasoning tasks
  - Conversational agents requiring enhanced reasoning capabilities
  - Educational tools focusing on critical thinking
- Users:
  - Researchers in natural language processing
  - Developers building AI-driven applications
  - Educators and students in AI-related fields
### Limitations
- The model's performance may degrade on tasks outside its fine-tuned domain.
- Not suitable for real-time applications without further optimization.
- May produce incorrect or nonsensical answers; outputs should be verified in critical applications.
## How to Use
```python
# Use a pipeline as a high-level helper
from transformers import pipeline

pipe = pipeline("text-generation", model="CALISTA-INDUSTRY/llama-3.2-3B-reasoning-en-ft-v1")
messages = [
    {"role": "user", "content": "Who are you?"},
]
print(pipe(messages))
```
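For finer control over decoding, the model can also be loaded directly with `AutoModelForCausalLM`. A minimal sketch, assuming the repository inherits the chat template of its Llama 3.2 Instruct base:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "CALISTA-INDUSTRY/llama-3.2-3B-reasoning-en-ft-v1"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # halves memory vs. float32 on supported GPUs
    device_map="auto",
)

messages = [
    {"role": "user", "content": "If all squares are rectangles and all rectangles "
                                "have four sides, how many sides does a square have?"}
]
# Render the chat messages into the model's prompt format
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

outputs = model.generate(inputs, max_new_tokens=256, do_sample=True, temperature=0.7)
# Decode only the newly generated tokens, not the prompt
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```

Lowering `temperature` (or setting `do_sample=False`) makes outputs more deterministic, which helps when verifying reasoning steps as recommended under Limitations.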