---
license: openrail
language:
- is
tags:
- bias-detection
- icelandic
- ner
- socially-responsible-ai
- prejudice-detection
- huggingface
- dataset
---
# Icelandic Bias-Aware NER Dataset

**Trigger warning:** This dataset contains biased, offensive, or harmful language. Examples are included solely for research purposes.

## Dataset Description

This dataset contains Icelandic sentences annotated for biased and potentially harmful expressions across 14 categories. It was developed to support research in fairness-oriented NLP, especially for low-resource languages.

### Classes

- **B-ADDICTION, I-ADDICTION**
- **B-DISABILITY, I-DISABILITY**
- **B-ORIGIN, I-ORIGIN**
- **B-GENERAL, I-GENERAL**
- **B-LGBTQIA, I-LGBTQIA**
- **B-LOOKS, I-LOOKS**
- **B-PERSONAL, I-PERSONAL**
- **B-PROFANITY, I-PROFANITY**
- **B-RELIGION, I-RELIGION**
- **B-SEXUAL, I-SEXUAL**
- **B-SOCIAL_STATUS, I-SOCIAL_STATUS**
- **B-STUPIDITY, I-STUPIDITY**
- **B-VULGAR, I-VULGAR**
- **B-WOMEN, I-WOMEN**

Annotations follow the BIO scheme (e.g., `B-WOMEN`, `I-WOMEN`, `O`).

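For model training, the class list above expands to a fixed tag set: a `B-` and an `I-` tag per category plus `O`, 29 labels in total. A minimal sketch in Python:

```python
# The 14 bias categories listed above; adding B-/I- prefixes and
# the outside tag "O" gives the full 29-label BIO tag set.
CATEGORIES = [
    "ADDICTION", "DISABILITY", "ORIGIN", "GENERAL", "LGBTQIA",
    "LOOKS", "PERSONAL", "PROFANITY", "RELIGION", "SEXUAL",
    "SOCIAL_STATUS", "STUPIDITY", "VULGAR", "WOMEN",
]
LABELS = ["O"] + [f"{p}-{c}" for c in CATEGORIES for p in ("B", "I")]
assert len(LABELS) == 29  # 14 categories x 2 prefixes + "O"
```
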
### Contents

- **all_balanced.txt**: 184,580 sentence examples, automatically annotated via weak supervision (lemmatized string matching against a curated bias lexicon; a toy illustration of this step appears below)
- **train.txt**: 153,816 sentence examples sampled from all_balanced.txt
- **dev.txt**: 15,381 sentence examples sampled from all_balanced.txt
- **test.txt**: 15,383 sentence examples sampled from all_balanced.txt
- **gold.txt**: 190 manually reviewed sentence examples from sources not included in the training, development, or test sets

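The splits are plain-text files. Below is a minimal reading sketch, assuming a CoNLL-style layout with one `token tag` pair per line and a blank line between sentences; the exact column separator in the released files may differ.

```python
# Minimal reading sketch for the BIO-annotated splits. Assumes a
# CoNLL-style layout (an assumption: verify against the actual files).
from pathlib import Path

def read_bio(path):
    """Yield (tokens, tags) pairs from a BIO-annotated text file."""
    tokens, tags = [], []
    for raw in Path(path).read_text(encoding="utf-8").splitlines():
        line = raw.strip()
        if not line:                 # blank line closes the sentence
            if tokens:
                yield tokens, tags
                tokens, tags = [], []
            continue
        fields = line.split()
        tokens.append(fields[0])     # first column: token
        tags.append(fields[-1])      # last column: BIO tag
    if tokens:                       # flush a sentence with no trailing blank
        yield tokens, tags

train = list(read_bio("train.txt"))
print(len(train))  # expected: 153,816 sentences
```
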
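The weak-supervision step behind all_balanced.txt (lemmatized string matching against a bias lexicon) can be illustrated with a toy sketch. The lexicon and lemma table below are hypothetical stand-ins for the project's actual resources, real Icelandic lemmatization would require a proper morphological analyzer, and multi-word lexicon entries (which would produce `I-` tags) are omitted for brevity.

```python
# Toy sketch of weak supervision by lemmatized lexicon matching.
# LEXICON and LEMMAS are hypothetical stand-ins, not the project's
# actual bias lexicon or lemmatizer.
LEXICON = {"fáviti": "STUPIDITY"}                   # lemma -> bias category
LEMMAS = {"fávitar": "fáviti", "fávita": "fáviti"}  # inflected form -> lemma

def weak_label(tokens):
    """Return one BIO tag per token by matching token lemmas against LEXICON."""
    tags = []
    for tok in tokens:
        lemma = LEMMAS.get(tok.lower(), tok.lower())
        category = LEXICON.get(lemma)
        tags.append(f"B-{category}" if category else "O")
    return tags

print(weak_label(["Þið", "eruð", "fávitar", "!"]))
# -> ['O', 'O', 'B-STUPIDITY', 'O']
```
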
## Intended Uses & Limitations

### Intended Use

- Research on bias detection in Icelandic text
- Training and evaluation of bias-aware NLP models
- Educational purposes for raising awareness of bias in language

### Limitations

- Automatically annotated examples may include false positives and false negatives
- Vocabulary-based matching may miss subtle, euphemistic, or emerging forms of bias
- The gold set is small (190 sentences)

⚠ **Not intended for punitive monitoring or censorship.** Predictions from models trained on this data should prompt reflection, not serve as judgments.

## Ethical Considerations

This dataset is released under the **[BigScience OpenRAIL-D License](https://www.licenses.ai/ai-licenses)**, which permits free use subject to responsible-use restrictions. Prohibited uses include:

- Harassment or discrimination
- Generating disinformation or hateful content
- Surveillance targeting individuals or groups

The dataset includes harmful language and should be handled with care. Trigger warnings are recommended for any public deployment.

## Citation

Citation information will be added.