Green-Guard: RoBERTa ESG Relevance Classifier (v1)
Task: sentence-level binary classification that determines whether a sentence is sustainability-related (Yes / No).
Base model: roberta-base, fine-tuned on a labeled ESG corpus from the Green-Guard dataset.
Repository: Green-Guard Project (GitHub)
Metrics (Test Set)
| Metric | Value |
|---|---|
| Accuracy | 0.90 |
| Macro F1 | 0.89 |
| Weighted F1 | 0.90 |
Metrics were computed on the held-out test split (`data/processed/splits/`) and are logged in `reports/relevance_metrics_v1.json`.
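For reference, the reported scores can be recomputed against the held-out split. The sketch below assumes a `test.csv` file with `text` and integer `label` columns under `data/processed/splits/`; that layout is an assumption, not a documented format.

# Evaluation sketch: the split file name and column names ("text", "label") are assumptions.
import pandas as pd
import torch
from sklearn.metrics import accuracy_score, f1_score
from transformers import AutoTokenizer, AutoModelForSequenceClassification

model_id = "salitahir/roberta-esg-relevance-green-guard-v1"
tok = AutoTokenizer.from_pretrained(model_id)
mod = AutoModelForSequenceClassification.from_pretrained(model_id).eval()

df = pd.read_csv("data/processed/splits/test.csv")  # assumed path and format

preds = []
with torch.no_grad():
    for text in df["text"]:
        inputs = tok(text, return_tensors="pt", truncation=True)
        preds.append(mod(**inputs).logits.argmax(-1).item())

print("Accuracy:   ", accuracy_score(df["label"], preds))
print("Macro F1:   ", f1_score(df["label"], preds, average="macro"))
print("Weighted F1:", f1_score(df["label"], preds, average="weighted"))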
Labels
{ "0": "No", "1": "Yes" }
Quick Inference
You can load and run the model directly:
from transformers import AutoTokenizer, AutoModelForSequenceClassification
import torch

model_id = "salitahir/roberta-esg-relevance-green-guard-v1"
tok = AutoTokenizer.from_pretrained(model_id)
mod = AutoModelForSequenceClassification.from_pretrained(model_id).eval()

text = "We reduced Scope 2 emissions by 24% in 2024."
inputs = tok(text, return_tensors="pt", truncation=True)

with torch.no_grad():  # inference only, no gradients needed
    pred = torch.softmax(mod(**inputs).logits, dim=-1)

label_id = pred.argmax(-1).item()
label = mod.config.id2label[label_id]  # id2label keys are ints, not strings
print(label, float(pred[0][label_id]))
Expected output:
Yes 0.94
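Alternatively, the checkpoint can be run through the Transformers `pipeline` helper, which bundles tokenization, inference, and label mapping; a minimal sketch:

from transformers import pipeline

clf = pipeline("text-classification", model="salitahir/roberta-esg-relevance-green-guard-v1")
print(clf("We reduced Scope 2 emissions by 24% in 2024."))
# e.g. [{'label': 'Yes', 'score': 0.94}]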
Intended Use
This model acts as Stage 1 in the two-stage Green-Guard ESG classifier, filtering sustainability-related sentences before ESG-type categorization.
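For illustration, the hand-off between the two stages could look like the sketch below; the Stage 2 checkpoint id is a placeholder for the ESG-type classifier, not a published model name.

from transformers import pipeline

# Stage 1: relevance filter (this model).
stage1 = pipeline("text-classification",
                  model="salitahir/roberta-esg-relevance-green-guard-v1")

# Stage 2: ESG-type categorization (placeholder id; substitute the real checkpoint).
stage2 = pipeline("text-classification", model="<stage-2-esg-type-model>")

sentences = [
    "We reduced Scope 2 emissions by 24% in 2024.",
    "The quarterly earnings call is scheduled for May.",
]

# Keep only sentences Stage 1 labels as sustainability-related ("Yes").
relevant = [s for s, r in zip(sentences, stage1(sentences)) if r["label"] == "Yes"]

# Categorize the filtered sentences by ESG type.
for s, r in zip(relevant, stage2(relevant)):
    print(s, "->", r["label"])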
License
MIT License: open for research and commercial reuse with attribution.