RequirementClassifier
Version: 27
Model Description
RequirementClassifier is a BERT model fine-tuned for binary classification of software requirements: given a piece of text, it labels it as either "requirement" or "non-requirement".
Intended Uses
- Classify software requirement documents
- Identify requirement vs non-requirement statements
- Automated requirement extraction from documents (see the sketch after this list)
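The extraction use case can be composed directly from the classifier. Below is a minimal sketch: the extract_requirements helper and the regex sentence splitter are illustrative assumptions, not part of the model, and the label mapping (class 1 = "requirement") follows the Usage section below.
import re
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

tokenizer = AutoTokenizer.from_pretrained("rajinikarcg/RequirementClassifier")
model = AutoModelForSequenceClassification.from_pretrained("rajinikarcg/RequirementClassifier")
model.eval()

def extract_requirements(document):
    # Naive sentence split on terminal punctuation (illustrative only;
    # a dedicated sentence tokenizer would be more robust)
    sentences = [s.strip() for s in re.split(r"(?<=[.!?])\s+", document) if s.strip()]
    # Batch-classify all sentences in one forward pass
    inputs = tokenizer(sentences, return_tensors="pt", truncation=True,
                       max_length=128, padding=True)
    with torch.no_grad():
        logits = model(**inputs).logits
    preds = torch.argmax(logits, dim=-1).tolist()
    # Keep sentences predicted as class 1 ("requirement", per the Usage section)
    return [s for s, p in zip(sentences, preds) if p == 1]

doc = "The system shall respond within 2 seconds. This document describes the login module."
print(extract_requirements(doc))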
Training Data
The model was trained on the PROMISE NFR dataset with additional non-requirement examples.
Usage
from transformers import AutoTokenizer, AutoModelForSequenceClassification
import torch
# Load model and tokenizer
tokenizer = AutoTokenizer.from_pretrained("rajinikarcg/RequirementClassifier")
model = AutoModelForSequenceClassification.from_pretrained("rajinikarcg/RequirementClassifier")
# Prepare input
text = "The system shall respond within 2 seconds"
inputs = tokenizer(text, return_tensors="pt", truncation=True, max_length=128)
# Get prediction
with torch.no_grad():
    outputs = model(**inputs)
logits = outputs.logits
prediction = torch.argmax(logits, dim=-1).item()
# Map to label
labels = ["non-requirement", "requirement"]
print(f"Prediction: {labels[prediction]}")
Version History
- 27: Latest version
Citation
If you use this model, please cite the PROMISE NFR dataset.