davanstrien (HF Staff) and Claude Opus 4.5 committed
Commit 1e871c4 · 1 Parent(s): d272f1c

Add hf-jobs tag to README frontmatter


Standardizing metadata tags across uv-scripts organization
for better discoverability of HF Jobs-compatible scripts.

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>

Files changed (1)
  1. README.md +33 -15
README.md CHANGED
@@ -1,11 +1,11 @@
 ---
 viewer: false
-tags: [uv-script, classification, vllm, structured-outputs, gpu-required]
+tags: [uv-script, classification, vllm, structured-outputs, gpu-required, hf-jobs]
 ---
 
 # Dataset Classification Script
 
-GPU-accelerated text classification for Hugging Face datasets with guaranteed valid outputs through structured generation.
+GPU-accelerated text classification for Hugging Face datasets with guaranteed valid outputs through structured generation. Powered by SmolLM3-3B's advanced reasoning capabilities.
 
 ## 🚀 Quick Start
 
@@ -32,7 +32,7 @@ That's it! No installation, no setup - just `uv run`.
 - **Guaranteed valid outputs** using structured generation with guided decoding
 - **Zero-shot classification** without training data required
 - **GPU-optimized** for maximum throughput and efficiency
-- **Default model**: HuggingFaceTB/SmolLM3-3B (fast 3B model with thinking capabilities)
+- **Default model**: [HuggingFaceTB/SmolLM3-3B](https://huggingface.co/HuggingFaceTB/SmolLM3-3B) - a fast 3B model with native thinking capabilities (`<think>` tags)
 - **Robust text handling** with preprocessing and validation
 - **Automatic progress tracking** and detailed statistics
 - **Direct Hub integration** - read and write datasets seamlessly
@@ -126,7 +126,11 @@ uv run classify-dataset.py \
 ### Support Ticket Classification
 
 ```bash
-uv run classify-dataset.py \
+# Run on HF Jobs with SmolLM3-3B (default)
+hf jobs uv run \
+--flavor l4x1 \
+--image vllm/vllm-openai:latest \
+https://huggingface.co/datasets/uv-scripts/classification/raw/main/classify-dataset.py \
 --input-dataset user/support-tickets \
 --column content \
 --labels "bug,feature_request,question,other" \
@@ -137,18 +141,25 @@ uv run classify-dataset.py \
 ### News Categorization
 
 ```bash
-uv run classify-dataset.py \
+# Using SmolLM3-3B for efficient news classification
+hf jobs uv run \
+--flavor l4x1 \
+--image vllm/vllm-openai:latest \
+https://huggingface.co/datasets/uv-scripts/classification/raw/main/classify-dataset.py \
 --input-dataset ag_news \
 --column text \
 --labels "world,sports,business,tech" \
---output-dataset user/ag-news-categorized \
---model meta-llama/Llama-3.2-3B-Instruct
+--output-dataset user/ag-news-categorized
 ```
 
 ### Complex Classification with Reasoning
 
 ```bash
-uv run classify-dataset.py \
+# SmolLM3's thinking mode for nuanced feedback analysis
+hf jobs uv run \
+--flavor l4x1 \
+--image vllm/vllm-openai:latest \
+https://huggingface.co/datasets/uv-scripts/classification/raw/main/classify-dataset.py \
 --input-dataset user/customer-feedback \
 --column text \
 --labels "very_positive,positive,neutral,negative,very_negative" \
@@ -176,7 +187,10 @@ uv run classify-dataset.py \
 --shuffle
 
 # With reasoning for nuanced classification
-uv run classify-dataset.py \
+hf jobs uv run \
+--flavor l4x1 \
+--image vllm/vllm-openai:latest \
+https://huggingface.co/datasets/uv-scripts/classification/raw/main/classify-dataset.py \
 --input-dataset librarian-bots/arxiv-metadata-snapshot \
 --column abstract \
 --labels "multimodal,agents,reasoning,safety,efficiency" \
@@ -205,11 +219,9 @@ hf jobs uv run \
 ```
 
 ### GPU Flavors
-- `t4-small`: Budget option for smaller models
-- `l4x1`: Good balance for 7B models
-- `a10g-small`: Fast inference for 3B models
-- `a10g-large`: More memory for larger models
-- `a100-large`: Maximum performance
+- `l4x1`: **Recommended starting point** - great for SmolLM3
+- `a10g-large`: More memory for larger batches or 7B+ models
+- `a100-large`: Maximum performance for demanding workloads
 
 ## 🔧 Advanced Usage
 
@@ -236,7 +248,13 @@ This is especially important for:
 
 ### Using Different Models
 
-By default, this script uses **HuggingFaceTB/SmolLM3-3B** - a fast, efficient 3B parameter model that's perfect for most classification tasks. You can easily use any other instruction-tuned model:
+By default, this script uses **[HuggingFaceTB/SmolLM3-3B](https://huggingface.co/HuggingFaceTB/SmolLM3-3B)** - a state-of-the-art 3B parameter model specifically designed for efficient inference. SmolLM3 features:
+- Native thinking capabilities with `<think>` tags for step-by-step reasoning
+- Excellent performance on classification tasks
+- Fast inference speed (50-100 texts/second on A10)
+- Low memory footprint allowing larger batch sizes
+
+While you can use other models, SmolLM3 is recommended for its balance of quality, speed, and reasoning capabilities:
 
 ```bash
 # Larger model for complex classification
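
For context on the "guaranteed valid outputs" bullet this diff touches: the sketch below illustrates the general idea behind guided-choice decoding in vLLM, which is how a classifier can be constrained to a user-supplied label set. It assumes a vLLM release that exposes `GuidedDecodingParams` (the structured-output API has shifted between versions); the model name, prompt, and labels are illustrative placeholders, not values taken from `classify-dataset.py`.

```python
# Minimal sketch of guided-choice decoding with vLLM (illustrative only;
# not code from classify-dataset.py). Decoding is constrained so the model
# can only emit one of the allowed labels.
from vllm import LLM, SamplingParams
from vllm.sampling_params import GuidedDecodingParams

labels = ["bug", "feature_request", "question", "other"]  # placeholder label set

# Restrict generation to exactly one of the label strings.
guided = GuidedDecodingParams(choice=labels)
sampling_params = SamplingParams(temperature=0.0, guided_decoding=guided)

llm = LLM(model="HuggingFaceTB/SmolLM3-3B")  # the README's default model

prompt = (
    "Classify the following support ticket as one of "
    f"{', '.join(labels)}.\n\n"
    "Ticket: The export button crashes the app every time.\n\nLabel:"
)

outputs = llm.generate([prompt], sampling_params)
print(outputs[0].outputs[0].text)  # always one of the four labels
```

Because the output space is constrained up front, no post-hoc parsing or retry logic is needed before the predicted labels are written back to the output dataset on the Hub.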