# Gemma 3 12B - Neil deGrasse Tyson Fine-tuned Model
A fine-tuned version of Gemma 3 12B that communicates in the style of Neil deGrasse Tyson, built for science education applications.
## Model Description
This model was fine-tuned using LoRA and then merged with the base Gemma 3 12B model. It's designed to explain scientific concepts in an engaging, accessible way while maintaining Neil deGrasse Tyson's characteristic communication style.
## Usage
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
import torch

model = AutoModelForCausalLM.from_pretrained(
    "tdvoroch/gemma3-ndt-merged",
    torch_dtype=torch.bfloat16,
    device_map="auto",
)
tokenizer = AutoTokenizer.from_pretrained("tdvoroch/gemma3-ndt-merged")

prompt = """<start_of_turn>user
You are Neil deGrasse Tyson, astrophysicist and director of the Hayden Planetarium. You're a science communicator who loves sharing the wonder of the cosmos. Respond naturally - whether explaining complex concepts, critiquing scientific accuracy in media, or simply chatting.
What do you think about black holes?<end_of_turn>
<start_of_turn>model
"""

# With device_map="auto" the model may span devices, so move inputs
# to the model's device rather than hard-coding "cuda".
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(
    **inputs,
    max_new_tokens=200,
    temperature=0.7,
    top_p=0.9,
    do_sample=True,
)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
## Recommended Inference Prompt
For best results, use this system-style prompt:
```
You are Neil deGrasse Tyson, astrophysicist and director of the Hayden Planetarium. You're a science communicator who loves sharing the wonder of the cosmos. Respond naturally - whether explaining complex concepts, critiquing scientific accuracy in media, or simply chatting.
```
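If you build prompts by hand, the persona instruction can be folded into Gemma's turn format with a small helper. This is a sketch: `build_prompt` is our own name, and the turn markers follow the usage example above.

```python
# Recommended persona instruction, prepended to every user question.
PERSONA = (
    "You are Neil deGrasse Tyson, astrophysicist and director of the "
    "Hayden Planetarium. You're a science communicator who loves sharing "
    "the wonder of the cosmos. Respond naturally - whether explaining "
    "complex concepts, critiquing scientific accuracy in media, or "
    "simply chatting."
)

def build_prompt(question: str) -> str:
    """Wrap a question in Gemma's turn format with the persona prefix."""
    return (
        "<start_of_turn>user\n"
        f"{PERSONA}\n"
        f"{question}<end_of_turn>\n"
        "<start_of_turn>model\n"
    )

prompt = build_prompt("What do you think about black holes?")
```

The resulting string can be passed to the tokenizer exactly as in the usage example above.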
## Training Details
- Base Model: google/gemma-3-12b-it
- Method: LoRA fine-tuning
- LoRA Rank: 128
- LoRA Alpha: 256
- Training Examples: 747
- Epochs: 1.0
- Learning Rate: 2e-4
- Batch Size: 4
- Gradient Accumulation: 2
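A quick sanity check on these hyperparameters (plain arithmetic; the variable names below are illustrative, not taken from the training script):

```python
import math

# Values from the Training Details list above.
lora_r = 128
lora_alpha = 256
num_examples = 747
per_device_batch = 4
grad_accum = 2

# Derived quantities: effective batch size, optimizer steps for
# one epoch, and the LoRA scaling factor alpha / r.
effective_batch = per_device_batch * grad_accum            # 8
steps_per_epoch = math.ceil(num_examples / effective_batch)  # 94
scaling = lora_alpha / lora_r                              # 2.0
```

So one epoch corresponds to roughly 94 optimizer steps at an effective batch size of 8, and the adapter uses a LoRA scaling of 2.0.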
## Limitations
- Optimized for science education queries
- May show occasional instability on casual greetings or off-topic questions
- Best performance with the recommended inference prompt
## Citation
Created as part of MSDA Capstone Project at San Jose State University.
Team Members: Thomas Dvorochkin, Parag Deshpande, Rahul Majmudar, Varun Patil, Ava Xia