FlameF0X committed
Commit 0fc4e8f · verified · 1 parent: d655584

Update README.md

Files changed (1): README.md (+12 -0)
README.md CHANGED
@@ -19,6 +19,18 @@ library_name: transformers
 
 The **i3 Model** is a memory-optimized language model designed for conversational understanding. This version uses streaming tokenization to minimize RAM usage during training.
 
+## Use
+```py
+# Use a pipeline as a high-level helper
+from transformers import pipeline
+
+pipe = pipeline("text-generation", model="FlameF0X/i3-12m")
+messages = [
+    {"role": "user", "content": "Who are you?"},
+]
+pipe(messages)
+```
+
 ## Model Statistics
 
 - **Vocabulary Size**: 4,466 (variable-length chunks)
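The README context above mentions streaming tokenization as the way this version keeps RAM usage low during training. As a rough illustration only — this is a hypothetical sketch of the general technique, not the i3 repo's actual tokenizer — the core idea is to read the corpus in fixed-size chunks and yield tokens lazily, so the full file is never held in memory at once:

```python
# Hypothetical sketch of streaming tokenization (not the i3 model's real code):
# read a text file in fixed-size chunks and yield whitespace-delimited tokens
# one at a time, so peak memory stays bounded regardless of corpus size.
from typing import Iterator


def stream_tokens(path: str, chunk_size: int = 4096) -> Iterator[str]:
    """Yield whitespace-delimited tokens from `path` without loading it whole."""
    buffer = ""
    with open(path, "r", encoding="utf-8") as f:
        while True:
            chunk = f.read(chunk_size)
            if not chunk:
                break
            buffer += chunk
            parts = buffer.split()
            if buffer and not buffer[-1].isspace():
                # the last piece may be an incomplete token; keep it buffered
                buffer = parts.pop() if parts else buffer
            else:
                buffer = ""
            yield from parts
        if buffer:
            # flush any trailing token left in the buffer at end of file
            yield buffer
```

Because tokens come out of a generator, peak memory is bounded by `chunk_size` plus one partial token, which is the property the README's "minimize RAM usage" claim relies on.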