---
license: cc-by-sa-4.0
task_categories:
- text-classification
language:
- en
pretty_name: Media Bias Identification Benchmark
configs:
- cognitive-bias
- fake-news
- gender-bias
- hate-speech
- linguistic-bias
- political-bias
- racial-bias
- text-level-bias
---
# Dataset Card for Media-Bias-Identification-Benchmark
## Dataset Description
- Homepage: https://github.com/Media-Bias-Group/Media-Bias-Identification-Benchmark
- Repository: https://github.com/Media-Bias-Group/Media-Bias-Identification-Benchmark
- Paper: TODO
- Point of Contact: Martin Wessel
### Dataset Summary
TODO
### Tasks and Information
| Dataset | Source | Sub-domain | Task Type | Classes |
|---|---|---|---|---|
| ECtHR (Task A) | Chalkidis et al. (2019) | ECHR | Multi-label classification | 10+1 |
| ECtHR (Task B) | Chalkidis et al. (2021a) | ECHR | Multi-label classification | 10+1 |
| SCOTUS | Spaeth et al. (2020) | US Law | Multi-class classification | 14 |
| EUR-LEX | Chalkidis et al. (2021b) | EU Law | Multi-label classification | 100 |
| LEDGAR | Tuggener et al. (2020) | Contracts | Multi-class classification | 100 |
| UNFAIR-ToS | Lippi et al. (2019) | Contracts | Multi-label classification | 8+1 |
| CaseHOLD | Zheng et al. (2021) | US Law | Multiple choice QA | n/a |
### Baseline
| Dataset | ECtHR A | ECtHR B | SCOTUS | EUR-LEX | LEDGAR | UNFAIR-ToS | CaseHOLD |
|---|---|---|---|---|---|---|---|
| Model | μ-F1 / m-F1 | μ-F1 / m-F1 | μ-F1 / m-F1 | μ-F1 / m-F1 | μ-F1 / m-F1 | μ-F1 / m-F1 | μ-F1 / m-F1 |
| TFIDF+SVM | 64.7 / 51.7 | 74.6 / 65.1 | 78.2 / 69.5 | 71.3 / 51.4 | 87.2 / 82.4 | 95.4 / 78.8 | n/a |
| Medium-sized Models (L=12, H=768, A=12) | | | | | | | |
| BERT | 71.2 / 63.6 | 79.7 / 73.4 | 68.3 / 58.3 | 71.4 / 57.2 | 87.6 / 81.8 | 95.6 / 81.3 | 70.8 |
| RoBERTa | 69.2 / 59.0 | 77.3 / 68.9 | 71.6 / 62.0 | 71.9 / 57.9 | 87.9 / 82.3 | 95.2 / 79.2 | 71.4 |
| DeBERTa | 70.0 / 60.8 | 78.8 / 71.0 | 71.1 / 62.7 | 72.1 / 57.4 | 88.2 / 83.1 | 95.5 / 80.3 | 72.6 |
| Longformer | 69.9 / 64.7 | 79.4 / 71.7 | 72.9 / 64.0 | 71.6 / 57.7 | 88.2 / 83.0 | 95.5 / 80.9 | 71.9 |
| BigBird | 70.0 / 62.9 | 78.8 / 70.9 | 72.8 / 62.0 | 71.5 / 56.8 | 87.8 / 82.6 | 95.7 / 81.3 | 70.8 |
| Legal-BERT | 70.0 / 64.0 | 80.4 / 74.7 | 76.4 / 66.5 | 72.1 / 57.4 | 88.2 / 83.0 | 96.0 / 83.0 | 75.3 |
| CaseLaw-BERT | 69.8 / 62.9 | 78.8 / 70.3 | 76.6 / 65.9 | 70.7 / 56.6 | 88.3 / 83.0 | 96.0 / 82.3 | 75.4 |
| Large-sized Models (L=24, H=1024, A=16) | | | | | | | |
| RoBERTa | 73.8 / 67.6 | 79.8 / 71.6 | 75.5 / 66.3 | 67.9 / 50.3 | 88.6 / 83.6 | 95.8 / 81.6 | 74.4 |
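
The scores above are reported as micro-F1 / macro-F1 (μ-F1 / m-F1). As a minimal sketch only (not the authors' evaluation script), the linear TFIDF+SVM baseline and both F1 averages could be reproduced with scikit-learn; the texts and labels below are toy placeholders, not drawn from the benchmark.

```python
# Minimal sketch of a TF-IDF + linear SVM baseline scored with micro- and
# macro-F1 (the μ-F1 / m-F1 columns above). Texts and labels are placeholders.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics import f1_score
from sklearn.pipeline import make_pipeline
from sklearn.svm import LinearSVC

train_texts = ["a clearly slanted sentence", "a neutral factual sentence"]
train_labels = [1, 0]  # 1 = biased, 0 = unbiased
test_texts = ["another sentence to score"]
test_labels = [0]

clf = make_pipeline(TfidfVectorizer(), LinearSVC())
clf.fit(train_texts, train_labels)
preds = clf.predict(test_texts)

print("micro-F1:", f1_score(test_labels, preds, average="micro"))
print("macro-F1:", f1_score(test_labels, preds, average="macro"))
```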
### Languages
All datasets are in English.
## Dataset Structure

### Data Instances

#### cognitive-bias
An example of one training instance looks as follows.
```json
{
  "text": "A defense bill includes language that would require military hospitals to provide abortions on demand",
  "label": 1
}
```
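
Each configuration listed in the YAML header can be loaded individually with the 🤗 `datasets` library. A minimal sketch follows; the repository id used below is an assumption and may need to be replaced with this dataset's actual id on the Hub.

```python
# Minimal sketch of loading a single task configuration with 🤗 datasets.
# NOTE: the repository id is an assumption; adjust it if the Hub id differs.
from datasets import load_dataset

dataset = load_dataset("mediabiasgroup/mbib-base", "cognitive-bias")
print(dataset["train"][0])  # e.g. {"text": "...", "label": 1}
```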
### Data Fields
- `text`: a sentence from various sources (e.g., news articles, Twitter, other social media).
- `label`: binary indicator of bias (0 = unbiased, 1 = biased).
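
A quick way to sanity-check the `label` field is to count the class distribution of a loaded split; a minimal sketch, reusing the `dataset` object from the loading example above:

```python
# Minimal sketch: count how many instances carry each binary label in a split.
# Assumes `dataset` was loaded as in the example above.
from collections import Counter

label_counts = Counter(example["label"] for example in dataset["train"])
print(label_counts)  # e.g. Counter({0: <unbiased count>, 1: <biased count>})
```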
## Considerations for Using the Data

### Social Impact of Dataset
TODO
### Discussion of Biases
TODO
### Other Known Limitations
TODO
## Citation Information
TODO
## Contributions
TODO