# Ministral-8B Security Vulnerability Scanner

A fine-tuned version of Ministral-8B-Instruct-2410 specialized for security vulnerability detection in code.

## Model Details

- Base model: Ministral-8B-Instruct-2410
- Parameters: 8B
- Tensor type: BF16

## Usage

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model = AutoModelForCausalLM.from_pretrained(
    "ratnam1510/ministral-8b-security-scanner",
    torch_dtype=torch.bfloat16,  # weights are stored in BF16
    device_map="auto",
)
tokenizer = AutoTokenizer.from_pretrained("ratnam1510/ministral-8b-security-scanner")

messages = [
    {"role": "system", "content": "You are an expert security vulnerability analyst."},
    {"role": "user", "content": "Analyze this code for vulnerabilities:\n```python\nimport os\nos.system(user_input)\n```"},
]

inputs = tokenizer.apply_chat_template(
    messages, return_tensors="pt", add_generation_prompt=True
).to(model.device)
outputs = model.generate(inputs, max_new_tokens=512)

# Decode only the newly generated tokens, skipping the prompt
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```
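If GPU memory is tight, an 8B model can also be loaded with 4-bit quantization via `bitsandbytes`. This is a configuration sketch, not part of the published usage instructions; it assumes `bitsandbytes` is installed and a CUDA GPU is available:

```python
from transformers import AutoModelForCausalLM, BitsAndBytesConfig

# 4-bit NF4 quantization with BF16 compute (matches the model's native dtype)
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_compute_dtype="bfloat16",
)

model = AutoModelForCausalLM.from_pretrained(
    "ratnam1510/ministral-8b-security-scanner",
    quantization_config=bnb_config,
    device_map="auto",
)
```

The rest of the usage example (tokenizer, chat template, generation) is unchanged.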

## Evaluation

Evaluated with an LLM-as-judge approach (GLM-4.5-air via OpenRouter) on 20 held-out test samples. For each sample, the judge received the code snippet, the ground-truth analysis, and the model's response, then scored the response on five dimensions (1-5 scale).
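The per-sample scoring loop can be sketched as follows. This is an illustration of the LLM-as-judge setup, not the exact evaluation harness: the `judge` callable, the prompt wording, and the JSON response format are assumptions.

```python
import json
import statistics

# The five 1-5 dimensions described in the table below
DIMENSIONS = [
    "vulnerability_id",
    "severity_accuracy",
    "explanation_quality",
    "fix_suggestion",
    "relevance",
]

def score_response(judge, code, ground_truth, response):
    """Ask a judge model to rate one response on the five dimensions.

    `judge` is any callable that takes a prompt string and returns JSON text
    mapping each dimension name to an integer score 1-5.
    """
    prompt = (
        "You are grading a security vulnerability analysis.\n\n"
        f"Code:\n{code}\n\n"
        f"Ground truth analysis:\n{ground_truth}\n\n"
        f"Model response:\n{response}\n\n"
        "Return JSON with integer scores 1-5 for: " + ", ".join(DIMENSIONS)
    )
    scores = json.loads(judge(prompt))
    return {d: int(scores[d]) for d in DIMENSIONS}

def overall(per_sample_scores):
    """Mean of all dimension scores across samples (the 'Overall' row)."""
    flat = [v for sample in per_sample_scores for v in sample.values()]
    return statistics.mean(flat)
```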

### What Each Metric Measures

| Metric | What it measures | 1 (worst) | 5 (best) |
|---|---|---|---|
| Vulnerability ID | Did the model correctly identify the vulnerability type? | Wrong or none identified | Precise CWE classification |
| Severity Accuracy | Is the severity rating reasonable? | Wildly off | Matches ground truth |
| Explanation Quality | Is the explanation clear and actionable? | Vague hand-waving | Cites specific lines, root cause |
| Fix Suggestion | Does it suggest correct remediation? | No fix or wrong fix | Production-ready fix |
| Relevance | Does the response address the actual code shown? | Completely unrelated | Directly analyzes the snippet |

### Score Comparison: Base vs Fine-tuned

| Dimension | Base Model | Fine-tuned | Delta | Winner |
|---|---|---|---|---|
| Overall | 1.55/5 | 1.81/5 | +0.26 | Fine-tuned |
| Vulnerability Identification | 1.90 | 2.31 | +0.41 | Fine-tuned |
| Severity Accuracy | 1.60 | 2.62 | +1.02 | Fine-tuned |
| Fix Suggestion | 1.40 | 1.44 | +0.04 | Fine-tuned |

### Quality Distribution

| Metric | Base | Fine-tuned |
|---|---|---|
| % Good (>=4/5) | 0.0% | 6.2% |
| % Poor (<=2/5) | 90.0% | 81.2% |
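The "% Good" and "% Poor" buckets are simple threshold counts over per-sample scores. A minimal sketch (the function name is ours, not from the evaluation harness):

```python
def quality_distribution(overall_scores):
    """Fraction of per-sample overall scores that are good (>= 4) or poor (<= 2).

    `overall_scores` is a list of per-sample scores on the 1-5 scale.
    Returns (good_fraction, poor_fraction).
    """
    n = len(overall_scores)
    good = sum(s >= 4 for s in overall_scores) / n
    poor = sum(s <= 2 for s in overall_scores) / n
    return good, poor
```

For example, `quality_distribution([1, 2, 4, 5])` returns `(0.5, 0.5)`: half the samples score at least 4, and half score at most 2.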