Pritish92/ner-medgemma15-4b-it-lora
This is a LoRA adapter fine-tuned from google/medgemma-1.5-4b-it for instruction-following named-entity recognition (NER). The model is trained to emit entities as a strict JSON list:
[{"label":"...","text":"..."}]
This repository contains adapter weights only (not full base model weights). You must have access to google/medgemma-1.5-4b-it to run it.
Prompt format (exact)
### Instruction:
{instruction}
Maintain the JSON key order exactly as shown.
Output format: [{"label":"...","text":"..."}]
### Input:
{input_chunk}
### Response:
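A minimal sketch of assembling this prompt in Python; the instruction and input text below are placeholders, not examples from the training data:

# Hypothetical instruction and input chunk; replace with your own.
instruction = "Extract all clinical entities from the text."
input_chunk = "Patient was started on amlodipine 5 mg daily."

prompt = (
    "### Instruction:\n"
    f"{instruction}\n"
    "Maintain the JSON key order exactly as shown.\n"
    'Output format: [{"label":"...","text":"..."}]\n'
    "### Input:\n"
    f"{input_chunk}\n"
    "### Response:\n"
)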
How to load
import torch
from peft import PeftModel
from transformers import AutoProcessor, AutoModelForImageTextToText
adapter_id = "Pritish92/ner-medgemma15-4b-it-lora"
base_id = "google/medgemma-1.5-4b-it"
# Load the processor (tokenizer + image processor) from the adapter repo
processor = AutoProcessor.from_pretrained(adapter_id, use_fast=False)

# Load the base model, then attach the LoRA adapter weights on top
base_model = AutoModelForImageTextToText.from_pretrained(
    base_id,
    dtype=torch.bfloat16,
    device_map="auto",
)
model = PeftModel.from_pretrained(base_model, adapter_id)
model.eval()
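A hedged inference sketch using the objects loaded above, with prompt built as in the prompt-format section; the generation arguments are illustrative rather than the settings used for evaluation:

# Tokenize the text-only prompt and generate the JSON response.
inputs = processor(text=prompt, return_tensors="pt").to(model.device)

with torch.inference_mode():
    output_ids = model.generate(
        **inputs,
        max_new_tokens=1024,  # illustrative budget for the JSON output; adjust as needed
        do_sample=False,      # greedy decoding for more stable JSON
    )

# Decode only the generated continuation, not the prompt tokens.
generated = output_ids[0][inputs["input_ids"].shape[-1]:]
response = processor.decode(generated, skip_special_tokens=True)
print(response)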
Training details
- Date: 2026-02-18
- Sequence length cap (max_length): 6144
- Chunking strategy: entity_aware
  - Prompt overhead tokens reserved: 256
  - Output overhead tokens reserved: 1024
  - Max input chunk tokens: 1536
  - Overlap chunk tokens: 256
  - Min chunk tokens: 256
- Batch size: 2
- Gradient accumulation: 4 (effective batch: 8)
- Learning rate: 2e-05
- Planned epochs: 3.0
- Loss masking: response-only (prompt + input chunk tokens masked with -100)
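A minimal sketch of the response-only loss masking described in the last bullet, assuming prompt and gold response are tokenized separately; prompt is the string from the prompt-format section and target_json is a placeholder for the gold JSON string:

# Supervise only the response: prompt + input chunk tokens get label -100.
tokenizer = processor.tokenizer

prompt_ids = tokenizer(prompt, add_special_tokens=False)["input_ids"]
response_ids = tokenizer(target_json + tokenizer.eos_token, add_special_tokens=False)["input_ids"]

input_ids = prompt_ids + response_ids
labels = [-100] * len(prompt_ids) + list(response_ids)  # loss computed on response tokens only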
LoRA / PEFT
- LoRA rank (r): 64
- LoRA alpha: 128
- LoRA dropout: 0.05
- Target modules: self-attention q_proj, k_proj, and v_proj across all language_model layers (0–33), plus o_proj, gate_proj, up_proj, and down_proj (the exact per-module list is stored in the adapter's adapter_config.json)
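For reference, an approximate PEFT configuration matching the hyperparameters above; this is a simplified sketch, not the exact config shipped with the adapter (see adapter_config.json for that):

from peft import LoraConfig

lora_config = LoraConfig(
    r=64,
    lora_alpha=128,
    lora_dropout=0.05,
    task_type="CAUSAL_LM",  # assumed; not stated explicitly in this card
    # Simplified module pattern; the shipped config enumerates modules per layer.
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj", "gate_proj", "up_proj", "down_proj"],
)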
Training data
Local CSVs:
- NER/NER-Data/ner_train_dataset.csv
- NER/NER-Data/ner_dev_dataset.csv
- NER/NER-Data/ner_test_dataset.csv
Example counts: N/A
Evaluation
- Best checkpoint metric: N/A
- Train runtime: 13203.8s (3h 40m 3s)
- eval_entity_f1: 0.426995
- eval_entity_micro_f1: 0.395010
- eval_entity_parse_fail_rate: 0.828125
- eval_entity_precision: 0.669197
- eval_entity_recall: 0.357311
- eval_runtime: 3224.5311s
- eval_samples_per_second: 0.02
- eval_steps_per_second: 0.002
Notes
- MedGemma can be prompt-sensitive; keep inference prompt formatting aligned with training.
- Validate JSON output before downstream use (a minimal parsing check is sketched after these notes).
- If google/medgemma-1.5-4b-it is gated, authenticate with Hugging Face first.
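A small validation sketch for checking the model output against the expected [{"label", "text"}] schema before downstream use; response is the decoded generation from the inference sketch above:

import json

def parse_entities(response: str):
    """Return the parsed entity list, or None if the output is not valid JSON in the expected schema."""
    try:
        entities = json.loads(response)
    except json.JSONDecodeError:
        return None
    if not isinstance(entities, list):
        return None
    for item in entities:
        if not isinstance(item, dict) or set(item) != {"label", "text"}:
            return None
    return entities

entities = parse_entities(response)
if entities is None:
    print("Parse failure; consider retrying or logging the raw output.")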
References
- MedGemma model card: https://huggingface.co/google/medgemma-1.5-4b-it
- MedGemma notebooks: https://github.com/google-health/medgemma/tree/main/notebooks