Daizee committed on
Commit 92b3b6e · verified · 1 Parent(s): 2e6213b

Update README.md

Files changed (1)
  1. README.md +0 -19
README.md CHANGED
@@ -60,25 +60,6 @@ This repo contains:
 
  ---
 
- ## 🚀 Quick Start (Transformers)
-
- ```python
- from transformers import AutoTokenizer, AutoModelForCausalLM
-
- MODEL = "Daizee/Dirty-Calla"
- tok = AutoTokenizer.from_pretrained(MODEL, use_fast=True)
- model = AutoModelForCausalLM.from_pretrained(MODEL, device_map="auto")
-
- # Gemma-3 style chat template (example)
- dialog = [
-     {"role": "system", "content": "You are Dirty-Calla, a bold, stylish fiction writer. Be vivid and punchy."},
-     {"role": "user", "content": "Give me a one-paragraph teaser for a dramatic, slow-burn romance. Keep it PG-13."}
- ]
- prompt = tok.apply_chat_template(dialog, tokenize=False, add_generation_prompt=True)
- inputs = tok(prompt, return_tensors="pt").to(model.device)
- out = model.generate(**inputs, max_new_tokens=220, temperature=0.9, top_p=0.9)
- print(tok.decode(out[0], skip_special_tokens=True))
-
  **SAMPLE of Q8_0)**
 
  Prompt: write a sensual story of two college rivals on a trip who get stuck sharing a hotel room because of a mix up.
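With the Transformers quick start removed by this commit, the only usage hint left in the README is the Q8_0 sample above, which implies GGUF quantizations of the model. Below is a minimal sketch of how such a Q8_0 file could be run locally with llama-cpp-python; the library choice, the local file path, and the generation settings are assumptions, not taken from the repo.

```python
# Minimal sketch: run a (hypothetical) Q8_0 GGUF of Dirty-Calla with llama-cpp-python.
# The model_path below is a placeholder -- check the repo's file list for the real filename.
from llama_cpp import Llama

llm = Llama(
    model_path="dirty-calla-Q8_0.gguf",  # hypothetical local GGUF file
    n_ctx=4096,                          # context window (assumed value)
    n_gpu_layers=-1,                     # offload all layers to GPU if available
)

# Mirror the sample prompt from the README, using the chat template stored in the GGUF metadata.
out = llm.create_chat_completion(
    messages=[
        {"role": "system", "content": "You are Dirty-Calla, a bold, stylish fiction writer."},
        {"role": "user", "content": "Write a sensual story of two college rivals on a trip who get stuck sharing a hotel room because of a mix up."},
    ],
    max_tokens=400,
    temperature=0.9,
)
print(out["choices"][0]["message"]["content"])
```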
 