---
license: mit
datasets:
- HuggingFaceH4/CodeAlpaca_20K
base_model:
- Qwen/Qwen3-0.6B
---
# 🧠 Qwen3-0.6B – Code Generation Model

**Model Repo:** `XformAI-india/qwen-0.6b-coder`  
**Base Model:** [`Qwen/Qwen3-0.6B`](https://huggingface.co/Qwen/Qwen3-0.6B)  
**Task:** Code generation and completion  
**Trained by:** [XformAI](https://xformai.in)  
**Date:** May 2025

---

## 🔍 What is this?

This is a fine-tuned version of Qwen3-0.6B optimized for **code generation, code completion, and reasoning about program logic**.

It's designed to be lightweight, fast, and capable of handling common developer tasks across multiple programming languages.

---

## 💻 Use Cases

- AI-powered code assistants  
- Auto-completion for IDEs (see the completion sketch after this list)  
- Offline code generation  
- Learning & training environments  
- Natural language → code prompts
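
For the auto-completion use case, the model can be driven in plain prefix-completion mode: feed it the code written so far and let it continue. A minimal sketch (the repo name is the one above; the generation settings are illustrative choices, not published defaults):

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model = AutoModelForCausalLM.from_pretrained("XformAI-india/qwen-0.6b-coder")
tokenizer = AutoTokenizer.from_pretrained("XformAI-india/qwen-0.6b-coder")

# IDE-style completion: pass the partial source file as the raw prompt.
partial_code = "def is_even(n):\n    "
inputs = tokenizer(partial_code, return_tensors="pt")

# Greedy decoding keeps completions deterministic; max_new_tokens caps latency.
outputs = model.generate(**inputs, max_new_tokens=64, do_sample=False)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```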

---

## 📚 Training Details

| Parameter      | Value                         |
|----------------|-------------------------------|
| Epochs         | 3                             |
| Batch Size     | 16                            |
| Optimizer      | AdamW                         |
| Precision      | bfloat16                      |
| Context Window | 2048 tokens                   |
| Framework      | 🤗 Transformers + LoRA (PEFT) |
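
The exact training script is not published; the sketch below only shows how the table's settings map onto a standard 🤗 Transformers + PEFT setup. The LoRA rank, alpha, and target modules, and the dataset split name, are illustrative assumptions, not released values:

```python
import torch
from datasets import load_dataset
from peft import LoraConfig, get_peft_model
from transformers import AutoModelForCausalLM, AutoTokenizer, TrainingArguments

base = AutoModelForCausalLM.from_pretrained("Qwen/Qwen3-0.6B", torch_dtype=torch.bfloat16)
tokenizer = AutoTokenizer.from_pretrained("Qwen/Qwen3-0.6B")

# LoRA adapter config (rank/alpha/target modules are guesses, not published).
lora = LoraConfig(
    r=16,
    lora_alpha=32,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],
    task_type="CAUSAL_LM",
)
model = get_peft_model(base, lora)

dataset = load_dataset("HuggingFaceH4/CodeAlpaca_20K", split="train")  # split name may differ

# Hyperparameters mirroring the table: 3 epochs, batch 16, AdamW, bfloat16.
# Training sequences would be tokenized/truncated to the 2048-token context window.
args = TrainingArguments(
    output_dir="qwen-0.6b-coder-lora",
    num_train_epochs=3,
    per_device_train_batch_size=16,
    optim="adamw_torch",
    bf16=True,
)
```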

---

## 🚀 Example Usage

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model = AutoModelForCausalLM.from_pretrained("XformAI-india/qwen-0.6b-coder")
tokenizer = AutoTokenizer.from_pretrained("XformAI-india/qwen-0.6b-coder")

# Instruction-style prompt: describe the code you want in natural language.
prompt = "Write a Python function that checks if a number is prime:"
inputs = tokenizer(prompt, return_tensors="pt")

# Generate up to 150 new tokens; greedy decoding by default.
outputs = model.generate(**inputs, max_new_tokens=150)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
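
If the fine-tune inherits the base model's chat template, the same request can also be wrapped as a chat turn. A sketch, continuing from the snippet above and assuming the tokenizer ships a chat template:

```python
# Wrap the request as a chat message and apply the model's own template.
messages = [{"role": "user", "content": "Write a Python function that checks if a number is prime."}]
text = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)

inputs = tokenizer(text, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=150)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```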