# FunctionGemma PocketAssist 270M

A fine-tune of FunctionGemma 270M for PocketAssist function calling.

**Overall accuracy:** 35.0% on 440 examples

## Per-function accuracy

| Function | Accuracy |
|---|---|
| `get_password` | 0.0% |
| `save_password` | 80.0% |
| `search` | 30.0% |
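The per-function numbers above imply accuracy is bucketed by the expected call name. A minimal sketch of that metric (the exact pairing of expected vs. predicted calls is an assumption, not the card's actual evaluation script):

```python
from collections import defaultdict

def per_function_accuracy(examples):
    """examples: list of (expected, predicted) pairs, each a dict
    like {"name": "search", "args": {...}} (assumed format)."""
    hits = defaultdict(int)
    totals = defaultdict(int)
    for expected, predicted in examples:
        name = expected["name"]
        totals[name] += 1
        # Count a hit only when both the call name and arguments match exactly.
        if predicted == expected:
            hits[name] += 1
    return {name: hits[name] / totals[name] for name in totals}

examples = [
    ({"name": "search", "args": {"q": "wifi"}},
     {"name": "search", "args": {"q": "wifi"}}),
    ({"name": "search", "args": {"q": "vpn"}},
     {"name": "get_password", "args": {}}),
]
print(per_function_accuracy(examples))  # {'search': 0.5}
```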

## Training

- Base model: `unsloth/functiongemma-270m-it`
- Method: QLoRA 4-bit (Unsloth) with sequence packing
- LoRA rank: 16, alpha: 32
- Learning rate: 2e-4, epochs: 5
- Sequence length: 256, batch: 4×4
- Hardware: Colab T4 (16 GB VRAM)
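A configuration sketch of how the hyperparameters above map onto an Unsloth QLoRA run (the dataset name, target modules, and batch interpretation of 4×4 as per-device size × gradient accumulation are assumptions; the actual training script is not published with this card):

```python
from unsloth import FastLanguageModel
from trl import SFTTrainer, SFTConfig
from datasets import load_dataset

model, tokenizer = FastLanguageModel.from_pretrained(
    "unsloth/functiongemma-270m-it",
    max_seq_length=256,
    load_in_4bit=True,  # QLoRA: 4-bit base weights
)
model = FastLanguageModel.get_peft_model(
    model,
    r=16,            # LoRA rank from the card
    lora_alpha=32,   # LoRA alpha from the card
    lora_dropout=0,
    bias="none",
    # Assumed target modules; adjust to your setup.
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],
)

dataset = load_dataset("your/pocketassist-dataset", split="train")  # hypothetical name

trainer = SFTTrainer(
    model=model,
    tokenizer=tokenizer,
    train_dataset=dataset,
    args=SFTConfig(
        max_seq_length=256,
        packing=True,                    # sequence packing
        per_device_train_batch_size=4,   # assumed reading of "4×4"
        gradient_accumulation_steps=4,   # effective batch 16
        learning_rate=2e-4,
        num_train_epochs=5,
        output_dir="outputs",
    ),
)
trainer.train()
```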

## Usage

```python
from unsloth import FastLanguageModel

model, tokenizer = FastLanguageModel.from_pretrained(
    "2796gauravc/functiongemma-pocketassist-270m",
    load_in_4bit=True,
)
FastLanguageModel.for_inference(model)  # enable inference mode
```
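Once loaded, the model emits its function calls as text. A minimal sketch of turning a Python-style call string into a name and keyword arguments (the call syntax shown is an assumption; check the tokenizer's chat template for the model's real output format):

```python
import ast

def parse_call(text):
    """Parse a python-style call string, e.g. 'search(query="wifi")',
    into (name, kwargs). The call syntax is an assumed output format."""
    node = ast.parse(text.strip(), mode="eval").body
    if not isinstance(node, ast.Call):
        raise ValueError("not a function call")
    name = node.func.id
    # literal_eval keeps this safe: only constants are accepted as argument values.
    kwargs = {kw.arg: ast.literal_eval(kw.value) for kw in node.keywords}
    return name, kwargs

print(parse_call('save_password(site="github", password="hunter2")'))
# → ('save_password', {'site': 'github', 'password': 'hunter2'})
```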
**Weights:** safetensors, 0.3B params, BF16