Uncle L3 8B (merged)

A concise, practical career mentor for AI/automation. This repository contains the fully merged weights (base model + LoRA adapter), so no separate adapter loading is required.

Chat template

<|system|>
You are Uncle: a concise, practical career mentor for AI/automation.
<|user|>
How do I move from Python dev to MLOps in 30 days?
<|assistant|>
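
The model completes text after the final <|assistant|> tag. Below is a minimal prompt-assembly helper (a sketch; build_prompt is a hypothetical name, not part of this repo), matching the inline format used in the quick start:

def build_prompt(user_msg,
                 system_msg="You are Uncle: a concise, practical career mentor for AI/automation."):
    # Concatenate the role tags this card defines; generation continues
    # from the trailing <|assistant|> tag.
    return f"<|system|>{system_msg}<|user|>{user_msg}<|assistant|>"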

Quick start (Transformers)

from transformers import AutoTokenizer, AutoModelForCausalLM
import torch

repo = "SomyaSaraswati/uncle-l3-8b-merged-v3"
tok = AutoTokenizer.from_pretrained(repo, use_fast=True)
model = AutoModelForCausalLM.from_pretrained(repo, torch_dtype=torch.float16, device_map="auto")

prompt = "<|system|>You are Uncle...<|user|>Give me a 30-day MLOps plan.<|assistant|>"
inputs = tok(prompt, return_tensors="pt").to(model.device)
# do_sample=True is required for temperature/top_p to take effect;
# without it, generate() ignores them and decodes greedily.
out = model.generate(**inputs, max_new_tokens=256, do_sample=True, temperature=0.7, top_p=0.9)
print(tok.decode(out[0], skip_special_tokens=True))
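
For interactive use, tokens can be streamed to stdout as they are generated, using transformers' built-in TextStreamer (a sketch; reuses tok, model, and inputs from the quick start above):

from transformers import TextStreamer

streamer = TextStreamer(tok, skip_prompt=True, skip_special_tokens=True)
# Prints each token as it is produced instead of waiting for the full sequence.
model.generate(**inputs, max_new_tokens=256, do_sample=True,
               temperature=0.7, top_p=0.9, streamer=streamer)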

If your base model is Meta Llama 3, keep this repository private or enable gated access so that distribution of the merged weights complies with the Llama 3 Community License.
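
If the repo is gated or private, downstream users must authenticate with Hugging Face before downloading (a sketch; the access token is a placeholder the user must supply):

from huggingface_hub import login

login()  # prompts for a Hugging Face access token with read access to this repo
# Alternatively, pass the token directly (placeholder value):
# model = AutoModelForCausalLM.from_pretrained(repo, token="hf_...")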

Model details

safetensors format, 8B parameters, F16 tensors

Quantizations

Three quantized variants of this model are linked from its page on the Hub.
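
If none of the pre-quantized variants fits your setup, the F16 weights can also be quantized on the fly at load time using transformers' bitsandbytes integration (a sketch; assumes bitsandbytes is installed):

from transformers import AutoModelForCausalLM, BitsAndBytesConfig
import torch

bnb = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.float16,
)
# 4-bit NF4 quantization cuts memory roughly 4x versus F16, at some quality cost.
model = AutoModelForCausalLM.from_pretrained(
    "SomyaSaraswati/uncle-l3-8b-merged-v3",
    quantization_config=bnb,
    device_map="auto",
)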