Adaptive Classifier v0.1.0: Now with ONNX Runtime Support!
We're excited to announce a major update to Adaptive Classifier - a flexible, continuous learning classification system that adapts to new classes without retraining!
What's New:
- ONNX Runtime Integration: Get 1.14x faster CPU inference out of the box (up to 4x on x86 processors)
- INT8 Quantization: Models are now 4x smaller with minimal accuracy loss, making deployment easier and faster
- Smart Loading: Automatically uses the best model variant for your hardware - quantized for speed by default, or unquantized for maximum accuracy
- 7.5x Faster Model Loading: Get started quickly with optimized model initialization
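If you want to sanity-check the speedup on your own hardware, a quick timing loop is all it takes. The sketch below only uses the load and predict calls shown later in this post; the model id and iteration count are placeholders you can swap for your own setup.

import time
from adaptive_classifier import AdaptiveClassifier

# Placeholder model id - use whichever adaptive-classifier model you deploy
classifier = AdaptiveClassifier.load("adaptive-classifier/llm-router")

# Warm up once so one-time initialization doesn't skew the measurement
classifier.predict("warm-up query")

start = time.perf_counter()
for _ in range(100):
    classifier.predict("Complex reasoning task")
elapsed = time.perf_counter() - start
print(f"Average latency: {elapsed / 100 * 1000:.1f} ms per prediction")

Run the same loop before and after upgrading to compare the ONNX-backed numbers against your previous install.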
How It Works:
Adaptive Classifier lets you build text classifiers that continuously learn from new examples without catastrophic forgetting. Perfect for:
- Dynamic classification tasks where classes evolve over time
- Few-shot learning scenarios with limited training data
- Production systems that need to adapt to new categories
The new ONNX support means you get production-ready speed on CPU without any code changes - just load and run!
Try it now:
from adaptive_classifier import AdaptiveClassifier
# Load with ONNX automatically enabled (quantized for best performance)
classifier = AdaptiveClassifier.load("adaptive-classifier/llm-router")
# Add examples dynamically
classifier.add_examples(
    ["Route this to GPT-4", "Simple task for GPT-3.5"],
    ["strong", "weak"]
)
# Predict with optimized inference
predictions = classifier.predict("Complex reasoning task")
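Because the classifier learns incrementally, you can also introduce a class it has never seen before and use it immediately - the continuous-learning behavior described above. This sketch reuses only the add_examples and predict calls from the snippet; the "code" label and example texts are made up for illustration.

# Introduce a brand-new class at runtime - no retraining pass required
classifier.add_examples(
    ["Write a Python function to parse JSON", "Fix this SQL query"],
    ["code", "code"]  # hypothetical new label not in the original set
)

# The new class is available for prediction right away
print(classifier.predict("Refactor this JavaScript snippet"))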
Check out our LLM Router model to see it in action:
adaptive-classifier/llm-router
GitHub Repository:
https://github.com/codelion/adaptive-classifier
Install now: pip install adaptive-classifier
We'd love to hear your feedback and see what you build with it!
#MachineLearning #NLP #ONNX #ContinuousLearning #TextClassification