---
model-index:
- name: poltextlab/illframes-climate-v5
  results:
  - task:
      type: text-classification
    metrics:
    - name: Accuracy
      type: accuracy
      value: 72%
    - name: F1-Score
      type: f1
      value: 64%
tags:
- text-classification
- pytorch
metrics:
- precision
- recall
- f1-score
language:
- en
base_model:
- xlm-roberta-large
pipeline_tag: text-classification
library_name: transformers
license: cc-by-4.0
extra_gated_prompt: Our models are intended for academic use only. If you are not
  affiliated with an academic institution, please provide a rationale for using our
  models. Please allow us a few business days to manually review subscriptions.
extra_gated_fields:
  Name: text
  Country: country
  Institution: text
  Institution Email: text
  Please specify your academic use case: text
---
# illframes-climate-v5
# How to use the model
```python
from transformers import AutoTokenizer, pipeline

# The model is fine-tuned from xlm-roberta-large, so we load that tokenizer.
tokenizer = AutoTokenizer.from_pretrained("xlm-roberta-large")

pipe = pipeline(
    model="poltextlab/illframes-climate-v5",
    task="text-classification",
    tokenizer=tokenizer,
    use_fast=False,
    token="<your_hf_read_only_token>",  # gated model: a read-only access token is required
)

text = "<text_to_classify>"
pipe(text)
```
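The pipeline also accepts a list of texts. Below is a minimal batch-classification sketch; the example sentences are placeholders, and the exact label strings returned depend on the model's `id2label` mapping.

```python
# Minimal batch-classification sketch; the example texts are placeholders.
texts = [
    "Example sentence about climate policy.",
    "Another example sentence.",
]
results = pipe(texts)  # one {"label": ..., "score": ...} dict per input text
for text, result in zip(texts, results):
    print(f"{result['label']} ({result['score']:.3f}): {text}")
```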
# Classification Report
## Overall Performance:
* **Accuracy:** 72%
* **Macro Avg:** Precision: 0.45, Recall: 0.29, F1-score: 0.31
* **Weighted Avg:** Precision: 0.65, Recall: 0.72, F1-score: 0.64
## Per-Class Metrics:
| Label | Precision | Recall | F1-score | Support |
|:-----------------------------------------|------------:|---------:|-----------:|----------:|
| 710: Threatening economic growth | 0.63 | 0.30 | 0.41 | 63 |
| 720: Threatening national sovereignty | 1.00 | 0.15 | 0.26 | 20 |
| 721: Climate conspiracy | 0.00 | 0.00 | 0.00 | 15 |
| 722: Scientific scepticism and denial | 0.00 | 0.00 | 0.00 | 19 |
| 723: Climate movement bashing | 0.33 | 0.28 | 0.30 | 18 |
| 724: Other polluters as the real problem | 0.77 | 0.80 | 0.78 | 25 |
| 730: Threatening energy security | 0.60 | 0.09 | 0.16 | 33 |
| 740: Threatening way of life | 0.00 | 0.00 | 0.00 | 11 |
| 799: None of them | 0.73 | 0.99 | 0.84 | 356 |
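To turn predictions into the frame names listed above, the returned labels can be mapped back to these codes. The mapping below is a sketch: the exact label format emitted by this checkpoint is an assumption, so check `pipe.model.config.id2label` before relying on it.

```python
# Hypothetical mapping from frame codes (see the table above) to readable names.
# Verify the actual label format via pipe.model.config.id2label.
FRAME_NAMES = {
    "710": "Threatening economic growth",
    "720": "Threatening national sovereignty",
    "721": "Climate conspiracy",
    "722": "Scientific scepticism and denial",
    "723": "Climate movement bashing",
    "724": "Other polluters as the real problem",
    "730": "Threatening energy security",
    "740": "Threatening way of life",
    "799": "None of them",
}

prediction = pipe("<text_to_classify>")[0]
code = prediction["label"].removeprefix("LABEL_")  # strip prefix if present
print(code, FRAME_NAMES.get(code, "unknown label"))
```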
# Inference platform
This model is used by the [CAP Babel Machine](https://babel.poltextlab.com), a free, open-source natural language processing tool designed to simplify and speed up comparative research projects.
# Cooperation
Model performance can be significantly improved by extending our training sets. We welcome submissions of CAP-coded corpora (from any domain or language) at poltextlab{at}poltextlab{dot}com or through the [CAP Babel Machine](https://babel.poltextlab.com).
## Debugging and issues
This architecture uses the `sentencepiece` tokenizer. If you run the model with a `transformers` version earlier than 4.27, you need to install `sentencepiece` manually (`pip install sentencepiece`).
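A quick sanity check, assuming `sentencepiece` is installed, that the slow (SentencePiece-backed) tokenizer loads correctly:

```python
# Sanity check: load the slow, SentencePiece-backed tokenizer explicitly.
# Requires the sentencepiece package on transformers versions before 4.27.
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("xlm-roberta-large", use_fast=False)
print(type(tokenizer).__name__)  # expected: XLMRobertaTokenizer
print(tokenizer.tokenize("Climate policy is a contested issue.")[:5])
```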