Ollama DevOps Agent

A lightweight AI-powered DevOps automation tool using a fine-tuned Qwen3-1.7B model with Ollama and SmolAgents. Specialized for Docker and Kubernetes workflows with sequential tool execution and structured reasoning.

Features

  • Sequential Tool Execution: Calls one tool at a time, waits for the result, then proceeds
  • Structured Reasoning: Uses <think> and <plan> tags to show thought process
  • Validation-Aware: Checks command outputs for errors before proceeding
  • Multi-Step Tasks: Handles complex workflows requiring multiple tool calls
  • Approval Mode: Prompts for user confirmation before each tool call for added safety (enabled by default)
  • Resource Efficient: Optimized for local development (1GB GGUF model)
  • Fast: Completes typical DevOps tasks in ~10 seconds

What's Special About This Model?

This model is fine-tuned specifically for DevOps automation with improved reasoning capabilities:

  • Docker & Kubernetes Expert: Trained on 300+ Docker and Kubernetes workflows (90% of training data)
  • One tool at a time: Unlike base models that try to call all tools at once, this model executes sequentially
  • Explicit planning: Shows reasoning with <think> and <plan> before acting
  • Uses actual values: Extracts and uses real values from tool responses in subsequent calls
  • Error handling: Validates each step and tries alternative approaches on failure
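The sequential pattern described above can be sketched as a simple agent loop. This is a minimal illustration only, assuming a hypothetical tool registry and pre-collected tool calls; the real agent re-queries the model after every observation and uses the SmolAgents API rather than these stand-in names:

```python
# Hypothetical stand-in tools; the real agent shells out via subprocess etc.
TOOLS = {
    "bash": lambda command: f"(output of: {command})",
    "final_answer": lambda answer: answer,
}

def run_agent(tool_calls, approve=True):
    """Execute tool calls one at a time, feeding each result forward.

    `tool_calls` stands in for the model's sequential <tool_call> output;
    a real agent would ask the model for the next call after each result.
    """
    history = []
    for call in tool_calls:
        name, args = call["name"], call["arguments"]
        if approve:
            # Approval mode: the real CLI asks the user to confirm each call.
            print(f"About to run {name} with {args}")
        result = TOOLS[name](**args)
        history.append((name, result))
        if name == "final_answer":
            return result, history
    return None, history

answer, history = run_agent([
    {"name": "bash", "arguments": {"command": "kubectl get pods -n default"}},
    {"name": "final_answer", "arguments": {"answer": "Retrieved pods."}},
], approve=False)
```

The key property is that each tool's result lands in `history` before the next call runs, mirroring the model's one-tool-at-a-time behavior.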

Training Data Focus

The model has been trained on:

  • Docker workflows: Building images, containers, Docker Compose, optimization
  • Kubernetes operations: Pods, deployments, services, configurations
  • General DevOps: File operations, system commands, basic troubleshooting

⚠️ Note: The model has limited training on cloud-specific CLIs (gcloud, AWS CLI, Azure CLI). For best results, use it for Docker and Kubernetes tasks.

Example Output

Task: Get all pods in default namespace

Step 1: Execute kubectl command

```
<tool_call>
{"name": "bash", "arguments": {"command": "kubectl get pods -n default"}}
</tool_call>
```

[Receives pod list]

Step 2: Provide summary

```
<tool_call>
{"name": "final_answer", "arguments": {"answer": "Successfully retrieved 10 pods in default namespace..."}}
</tool_call>
```
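Model output in this shape can be turned back into structured calls with a small parser. A sketch, assuming one JSON object per `<tool_call>` block (this is illustrative, not the agent's actual parsing code):

```python
import json
import re

def parse_tool_calls(text):
    """Extract the JSON payloads from <tool_call>...</tool_call> blocks."""
    pattern = re.compile(r"<tool_call>\s*(\{.*?\})\s*</tool_call>", re.DOTALL)
    return [json.loads(match) for match in pattern.findall(text)]

output = """<tool_call>
{"name": "bash", "arguments": {"command": "kubectl get pods -n default"}}
</tool_call>"""

calls = parse_tool_calls(output)
```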

Quick Start

🎯 Recommended: Native Installation

For the best experience with full DevOps capabilities:

```bash
curl -fsSL https://raw.githubusercontent.com/ubermorgenland/devops-agent/main/install.sh | bash
```

This will automatically:

  • Install Ollama (if not present)
  • Install Python dependencies
  • Download the model from Hugging Face
  • Create the Ollama model
  • Set up the devops-agent CLI command

Why native installation?

  • ✅ Full system access - manage real infrastructure
  • ✅ No credential mounting - works with your existing setup
  • ✅ Better performance - no container overhead
  • ✅ Simpler usage - just run devops-agent
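After installation, you can confirm the model was registered by querying Ollama's local `/api/tags` endpoint (part of Ollama's standard HTTP API). A sketch using only the standard library; the registered model name shown in the sample payload is an assumption about what the installer creates:

```python
import json
import urllib.request

def model_names(tags_json):
    """Extract model names from an Ollama /api/tags response dict."""
    return [m["name"] for m in tags_json.get("models", [])]

def list_local_models(host="http://localhost:11434"):
    """Query a running Ollama server for its locally available models."""
    with urllib.request.urlopen(f"{host}/api/tags") as resp:
        return model_names(json.load(resp))

# Abridged example of the /api/tags payload shape
# (model name is an assumed value for illustration):
sample = {"models": [{"name": "qwen3-devops:latest"}]}
```

Calling `list_local_models()` against a running Ollama instance returns the names of all installed models, which should include the one created by the installer.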
