medical-qa-anatomy-v5

Fine-tuned BioMistral-7B for medical Q&A, specializing in anatomy and clinical reasoning.

Model Details

  • Base Model: BioMistral/BioMistral-7B
  • Training Data: 60K medical Q&A samples
  • Training Method: LoRA (rank 64, alpha 128)
  • Final Eval Loss: 0.960
  • Token Accuracy: 74.8%
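
For context, a final eval loss of 0.960 implies a held-out perplexity of exp(0.960) ≈ 2.61, assuming the reported loss is the usual mean per-token cross-entropy:

import math

# Perplexity implied by the final eval (cross-entropy) loss
print(math.exp(0.960))  # ≈ 2.61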

Usage

from transformers import AutoModelForCausalLM, AutoTokenizer
import torch

# Load the model in bfloat16, sharding across available devices
model = AutoModelForCausalLM.from_pretrained(
    "medcoterie/medical-qa-anatomy-v5",
    device_map="auto",
    torch_dtype=torch.bfloat16,
)
tokenizer = AutoTokenizer.from_pretrained("medcoterie/medical-qa-anatomy-v5")

# Mistral-style instruction prompt. The tokenizer prepends the <s> BOS token
# on its own, so it is left out of the string to avoid a doubled BOS.
prompt = '''[INST] You are an expert medical educator. Answer the following medical question with accurate, detailed information.

What is human anatomy? [/INST]'''

inputs = tokenizer(prompt, return_tensors="pt").to(model.device)

outputs = model.generate(
    **inputs,
    max_new_tokens=600,
    temperature=0.3,
    top_p=0.85,
    top_k=50,
    repetition_penalty=1.15,
    do_sample=True,
)

# Drop the echoed prompt and keep only the generated answer
answer = tokenizer.decode(outputs[0], skip_special_tokens=True)
print(answer.split("[/INST]")[-1].strip())
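
BioMistral-7B is built on Mistral-7B-Instruct, so this checkpoint may also ship a chat template; if it does, the [INST] wrapper can be built automatically instead of by hand. A minimal sketch, assuming the template is present in this repo:

messages = [
    {
        "role": "user",
        "content": "You are an expert medical educator. Answer the following "
        "medical question with accurate, detailed information.\n\n"
        "What is human anatomy?",
    }
]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)
outputs = model.generate(
    input_ids,
    max_new_tokens=600,
    temperature=0.3,
    top_p=0.85,
    do_sample=True,
)
# Decode only the newly generated tokens
print(tokenizer.decode(outputs[0][input_ids.shape[-1]:], skip_special_tokens=True))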

Recommended Settings

  • temperature: 0.2-0.3 (lower values give more deterministic, factual answers)
  • top_p: 0.7-0.85
  • top_k: 40-50
  • repetition_penalty: 1.15
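
These values can be bundled once into a GenerationConfig and reused across calls; a minimal sketch with the transformers API, reusing the model and inputs from the Usage section:

from transformers import GenerationConfig

# Decoding settings for factual answers, taken from the ranges above
gen_config = GenerationConfig(
    do_sample=True,
    temperature=0.25,
    top_p=0.8,
    top_k=40,
    repetition_penalty=1.15,
    max_new_tokens=600,
)
outputs = model.generate(**inputs, generation_config=gen_config)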

Training Details

  • LoRA rank: 64, alpha: 128
  • Learning rate: 1e-4
  • Batch size: 2 per device (gradient accumulation: 8; effective batch size: 16)
  • Training steps: 3200
  • GPU: 1x 40 GB
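
In peft/transformers terms, the hyperparameters above correspond roughly to the configuration below. This is a hedged reconstruction from the listed values, not the actual training script; target modules, warmup, and the dataset pipeline are unknown and left at defaults or omitted.

from peft import LoraConfig
from transformers import TrainingArguments

# LoRA adapter matching the listed rank/alpha (scaling factor alpha/r = 2)
lora_config = LoraConfig(
    r=64,
    lora_alpha=128,
    task_type="CAUSAL_LM",
)

# Optimization settings from the list above
training_args = TrainingArguments(
    output_dir="medical-qa-anatomy-v5",
    learning_rate=1e-4,
    per_device_train_batch_size=2,
    gradient_accumulation_steps=8,  # effective batch size 16
    max_steps=3200,
    bf16=True,  # matches the bfloat16 weights in the Usage example
)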

Limitations

  • Trained on a custom 60K-sample dataset that has not been independently reviewed
  • Specialized in anatomy; answers outside that domain may be less reliable
  • Not validated on standardized medical benchmarks
  • Not intended for clinical decision-making; outputs may contain errors and should be verified by a qualified professional

License

Apache 2.0
