🟢 Green-Guard – RoBERTa ESG Relevance Classifier (v1)

Task: Sentence-level classification – determine whether a sentence is sustainability-related (Yes / No).
Base model: roberta-base, fine-tuned on a labeled ESG corpus from the Green-Guard dataset.
Repository: GitHub → Green-Guard Project


📊 Metrics (Test Set)

| Metric      | Value |
|-------------|-------|
| Accuracy    | 0.90  |
| Macro F1    | 0.89  |
| Weighted F1 | 0.90  |

Metrics were computed on the held-out test split (data/processed/splits/);
the full evaluation log is stored at reports/relevance_metrics_v1.json.
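
For reference, metrics of this kind can be recomputed with scikit-learn from the model's predictions on the test split. The snippet below is only a minimal sketch: the file name test.csv and the column names text / label are illustrative assumptions, not the repository's actual layout.

import pandas as pd
import torch
from sklearn.metrics import accuracy_score, f1_score
from transformers import AutoTokenizer, AutoModelForSequenceClassification

model_id = "salitahir/roberta-esg-relevance-green-guard-v1"
tok = AutoTokenizer.from_pretrained(model_id)
mod = AutoModelForSequenceClassification.from_pretrained(model_id).eval()

# Assumed layout: a CSV in the split directory with "text" and "label" (0/1) columns.
df = pd.read_csv("data/processed/splits/test.csv")

preds = []
with torch.no_grad():
    for text in df["text"]:
        inputs = tok(text, return_tensors="pt", truncation=True)
        preds.append(mod(**inputs).logits.argmax(-1).item())

print("Accuracy   :", round(accuracy_score(df["label"], preds), 2))
print("Macro F1   :", round(f1_score(df["label"], preds, average="macro"), 2))
print("Weighted F1:", round(f1_score(df["label"], preds, average="weighted"), 2))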


🧩 Labels

{ "0": "No", "1": "Yes" }

🚀 Quick Inference

You can load and run the model directly:

from transformers import AutoTokenizer, AutoModelForSequenceClassification
import torch

model_id = "salitahir/roberta-esg-relevance-green-guard-v1"
tok = AutoTokenizer.from_pretrained(model_id)
mod = AutoModelForSequenceClassification.from_pretrained(model_id).eval()

text = "We reduced Scope 2 emissions by 24% in 2024."
inputs = tok(text, return_tensors="pt", truncation=True)

with torch.no_grad():                      # inference only, no gradients needed
    probs = torch.softmax(mod(**inputs).logits, dim=-1)

label_id = probs.argmax(-1).item()
label = mod.config.id2label[label_id]      # id2label keys are ints after loading, not strings
print(label, float(probs[0][label_id]))

✅ Expected output:

Yes 0.94
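
Equivalently, the same check can be run through the transformers pipeline API, which wraps the tokenize / softmax / argmax steps above and returns label names from the same id2label mapping:

from transformers import pipeline

clf = pipeline("text-classification", model="salitahir/roberta-esg-relevance-green-guard-v1")
print(clf("We reduced Scope 2 emissions by 24% in 2024."))
# Expected: something like [{'label': 'Yes', 'score': 0.94}]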


🧠 Intended Use

This model acts as Stage 1 in the two-stage Green-Guard ESG classifier, filtering sustainability-related sentences before ESG-type categorization.
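
As a rough sketch of that flow (Stage 2 is only a placeholder here; the ESG-type model is not part of this card), Stage 1 keeps the sentences predicted "Yes" and hands them to the next stage:

from transformers import pipeline

# Stage 1: relevance filter (this model).
relevance = pipeline("text-classification", model="salitahir/roberta-esg-relevance-green-guard-v1")

sentences = [
    "We reduced Scope 2 emissions by 24% in 2024.",
    "The quarterly earnings call has been rescheduled.",
]

relevant = [s for s, r in zip(sentences, relevance(sentences)) if r["label"] == "Yes"]

for s in relevant:
    # Stage 2 (ESG-type categorization) would run here; it is not provided by this model.
    print("Sustainability-related:", s)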


βš–οΈ License

MIT License β€” open for research and commercial reuse with attribution.
