Purple Squirrel R1

Fine-tuned DeepSeek-R1-Distill-Llama-8B for Purple Squirrel AI Platform

Model Details

  • Base Model: DeepSeek-R1-Distill-Llama-8B
  • Parameters: 8B
  • Context Length: 4096 tokens
  • Quantization: 4-bit NF4 (GGUF f16 available)
  • Specialization: Purple Squirrel AI platform operations
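
For readers unfamiliar with the 4-bit format listed above: the idea is to store each weight in 4 bits plus a shared per-block scale. The toy sketch below illustrates the principle with a simple absmax quantizer; it is illustrative only and is not the actual NF4 codebook used by bitsandbytes, which maps weights onto fixed normal-distribution quantiles.

```python
# Toy 4-bit absmax quantization: map floats to 16 signed integer levels
# scaled by the block's largest absolute value. Illustrative only; real
# NF4 uses a fixed normal-distribution codebook per weight block.
def quantize_4bit(weights):
    scale = max(abs(w) for w in weights) / 7  # signed 4-bit range: -8..7
    q = [round(w / scale) for w in weights]
    return q, scale

def dequantize_4bit(q, scale):
    return [v * scale for v in q]

weights = [0.12, -0.53, 0.31, 0.07]
q, scale = quantize_4bit(weights)
restored = dequantize_4bit(q, scale)

# Each restored value is within half a quantization step of the original.
assert all(abs(a - b) <= scale / 2 for a, b in zip(weights, restored))
```

The per-block scale is why 4-bit models keep most of their quality: outliers in one block do not distort the quantization grid of another.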

Research Papers

This model is deployed in the AIDP Neural Cloud distributed inference system and powers the AIDP Video Forge processing pipeline.

AIDP Neural Cloud — Distributed LLM Inference on Decentralized GPU Networks:

  • 47% cost reduction vs OpenAI
  • 28% lower p50 latency (180 ms vs 250 ms)
  • 50 req/s throughput with fault tolerance
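
The latency figure follows directly from the two p50 values in the bullet above; a quick sanity check:

```python
# Verify the reported p50 latency improvement:
# (baseline - observed) / baseline, values in milliseconds.
baseline_p50_ms = 250      # comparison baseline from the bullet above
neural_cloud_p50_ms = 180  # AIDP Neural Cloud p50

improvement_pct = (baseline_p50_ms - neural_cloud_p50_ms) / baseline_p50_ms * 100
print(f"{improvement_pct:.0f}% lower p50 latency")  # 28% lower p50 latency
```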

AIDP Video Forge — GPU-Accelerated Video Processing:

  • 10-20x faster encoding vs CPU
  • 40-60% cost reduction vs centralized cloud
  • VMAF 95.8 quality score

Capabilities

Fine-tuned to excel at:

  • Video Analysis: AI-powered transcription and tagging
  • Blockchain Operations: Multi-chain NFT minting (Solana, Ethereum, Polygon)
  • Cloud Integration: OCI, AWS, IPFS storage operations
  • Video Editing: Professional workflow understanding
  • Platform Operations: Purple Squirrel feature guidance

Quick Start

Using Ollama

ollama pull purplesquirrelnetworks/purple-squirrel-r1
ollama run purplesquirrelnetworks/purple-squirrel-r1

Using Transformers

from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "purplesquirrelnetworks/purple-squirrel-r1"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

# Generate a response
inputs = tokenizer("Explain decentralized GPU compute", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=256)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))

Via AIDP Neural Cloud API

import openai

client = openai.OpenAI(
    base_url="https://neural-cloud.aidp.store/v1",
    api_key="your-api-key"
)

response = client.chat.completions.create(
    model="purple-squirrel-r1",
    messages=[
        {"role": "user", "content": "Explain decentralized GPU compute"}
    ]
)
print(response.choices[0].message.content)

Additional Resources

  • Model Comparison — Side-by-side comparison of base DeepSeek-R1 vs Purple Squirrel R1 with example prompts and responses
  • Blog Post — Technical write-up covering training setup, data curation, results, and usage guide

Citation

If you use this model or the associated research, please cite:

@techreport{karsten2026neuralcloud,
  title={AIDP Neural Cloud: Distributed LLM Inference on Decentralized GPU Networks},
  author={Karsten, Matthew},
  institution={Purple Squirrel Networks},
  year={2026},
  month={February},
  url={https://huggingface.co/purplesquirrelnetworks/aidp-neural-cloud-paper}
}

@techreport{karsten2026videoforge,
  title={AIDP Video Forge: GPU-Accelerated Video Processing on Decentralized Compute Networks},
  author={Karsten, Matthew},
  institution={Purple Squirrel Networks},
  year={2026},
  month={February},
  url={https://huggingface.co/purplesquirrelnetworks/aidp-video-forge-paper}
}

Built by Purple Squirrel Networks
