- **Source:** public web sources (news sites, online forums, encyclopedias, and restaurant reviews).

This dataset contains eight cleaned source-specific corpora of **Hong Kong Cantonese** and **Traditional Chinese** text, crawled from public websites and platforms.

It was initially created for the experiments reported in **https://doi.org/10.1145/3744341**, which study the **effect of diglossia on Hong Kong language** modeling.

Each file stores plain UTF-8 text, where **each record occupies one line** and **blank lines serve as separators**.
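
This layout can be read with a few lines of Python. The sketch below is one possible reading, assuming blank lines group consecutive records into blocks; the sample strings are illustrative, not taken from the dataset:

```python
# Minimal sketch of reading a file in this layout: one record per line,
# blank lines as separators. The sample text is illustrative only.

sample = (
    "第一篇文章嘅第一句。\n"
    "第一篇文章嘅第二句。\n"
    "\n"
    "第二篇文章嘅第一句。\n"
)

def parse_corpus(text: str) -> list[list[str]]:
    """Split corpus text into blank-line-separated blocks,
    each a list of one-line records."""
    docs = []
    current = []
    for line in text.splitlines():
        if line.strip():          # a record line
            current.append(line)
        elif current:             # blank line closes the current block
            docs.append(current)
            current = []
    if current:                   # flush the final block
        docs.append(current)
    return docs

docs = parse_corpus(sample)
print(len(docs))     # 2 blocks
print(len(docs[0]))  # 2 records in the first block
```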

This dataset is also available at Zenodo: **https://doi.org/10.5281/zenodo.16882351**

We only changed the file extension from `.corpus` to `.csv` and added a header row here for HuggingFace's dataset viewer.
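
Because only a header row was added, a downloaded `.csv` can be checked against the SHA256 hashes listed below by hashing everything after the first line. This is a minimal sketch assuming the `.csv` body is byte-identical to the Zenodo corpus file, as the hash column's note suggests; the sample content and the `text` header name are illustrative assumptions:

```python
import hashlib

# Drop the added header row, then hash the remaining bytes; the digest
# should match the hash listed for the corresponding corpus file.
csv_text = "text\n第一句。\n第二句。\n"  # illustrative sample, not real data

# Everything after the first newline is the original corpus content.
body = csv_text.split("\n", 1)[1]
digest = hashlib.sha256(body.encode("utf-8")).hexdigest()
print(digest)
```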

👉 This cleaned corpus is derived from a larger MySQL database used to store raw text during the data collection stage.
If you need the original database for reprocessing or reproduction, please refer to:
https://huggingface.co/datasets/SolarisCipher/hk_content_corpus_mysql

### Files

| Filename | Description | SHA256 hash (without header row; same content as the corpus file at Zenodo) |