omarkamali committed on
Commit 7af8c01 · verified · 1 Parent(s): 3d911a3

Upload all models and assets for bpy (20251001)

This view is limited to 50 files because it contains too many changes. See raw diff.

Files changed (50)
  1. README.md +299 -134
  2. models/embeddings/monolingual/bpy_128d.bin +2 -2
  3. models/embeddings/monolingual/bpy_128d_metadata.json +5 -3
  4. models/embeddings/monolingual/bpy_32d.bin +2 -2
  5. models/embeddings/monolingual/bpy_32d_metadata.json +5 -3
  6. models/embeddings/monolingual/bpy_64d.bin +2 -2
  7. models/embeddings/monolingual/bpy_64d_metadata.json +5 -3
  8. models/subword_markov/bpy_markov_ctx1_subword.parquet +2 -2
  9. models/subword_markov/bpy_markov_ctx1_subword_metadata.json +2 -2
  10. models/subword_markov/bpy_markov_ctx2_subword.parquet +2 -2
  11. models/subword_markov/bpy_markov_ctx2_subword_metadata.json +2 -2
  12. models/subword_markov/bpy_markov_ctx3_subword.parquet +2 -2
  13. models/subword_markov/bpy_markov_ctx3_subword_metadata.json +2 -2
  14. models/subword_markov/bpy_markov_ctx4_subword.parquet +2 -2
  15. models/subword_markov/bpy_markov_ctx4_subword_metadata.json +2 -2
  16. models/subword_ngram/bpy_2gram_subword.parquet +2 -2
  17. models/subword_ngram/bpy_2gram_subword_metadata.json +2 -2
  18. models/subword_ngram/bpy_3gram_subword.parquet +2 -2
  19. models/subword_ngram/bpy_3gram_subword_metadata.json +2 -2
  20. models/subword_ngram/bpy_4gram_subword.parquet +2 -2
  21. models/subword_ngram/bpy_4gram_subword_metadata.json +2 -2
  22. models/tokenizer/bpy_tokenizer_16k.model +2 -2
  23. models/tokenizer/bpy_tokenizer_16k.vocab +0 -0
  24. models/tokenizer/bpy_tokenizer_32k.model +2 -2
  25. models/tokenizer/bpy_tokenizer_32k.vocab +0 -0
  26. models/tokenizer/bpy_tokenizer_64k.model +2 -2
  27. models/tokenizer/bpy_tokenizer_64k.vocab +0 -0
  28. models/tokenizer/bpy_tokenizer_8k.model +2 -2
  29. models/tokenizer/bpy_tokenizer_8k.vocab +0 -0
  30. models/vocabulary/bpy_vocabulary.parquet +2 -2
  31. models/vocabulary/bpy_vocabulary_metadata.json +10 -9
  32. models/word_markov/bpy_markov_ctx1_word.parquet +2 -2
  33. models/word_markov/bpy_markov_ctx1_word_metadata.json +2 -2
  34. models/word_markov/bpy_markov_ctx2_word.parquet +2 -2
  35. models/word_markov/bpy_markov_ctx2_word_metadata.json +2 -2
  36. models/word_markov/bpy_markov_ctx3_word.parquet +2 -2
  37. models/word_markov/bpy_markov_ctx3_word_metadata.json +2 -2
  38. models/word_markov/bpy_markov_ctx4_word.parquet +2 -2
  39. models/word_markov/bpy_markov_ctx4_word_metadata.json +2 -2
  40. models/word_ngram/bpy_2gram_word.parquet +2 -2
  41. models/word_ngram/bpy_2gram_word_metadata.json +2 -2
  42. models/word_ngram/bpy_3gram_word.parquet +2 -2
  43. models/word_ngram/bpy_3gram_word_metadata.json +2 -2
  44. models/word_ngram/bpy_4gram_word.parquet +2 -2
  45. models/word_ngram/bpy_4gram_word_metadata.json +2 -2
  46. visualizations/embedding_isotropy.png +0 -0
  47. visualizations/embedding_norms.png +0 -0
  48. visualizations/embedding_similarity.png +2 -2
  49. visualizations/markov_branching.png +0 -0
  50. visualizations/markov_contexts.png +0 -0
README.md CHANGED
@@ -23,14 +23,14 @@ dataset_info:
  metrics:
  - name: best_compression_ratio
  type: compression
- value: 5.072
+ value: 4.934
  - name: best_isotropy
  type: isotropy
- value: 0.7157
+ value: 0.7051
  - name: vocabulary_size
  type: vocab
- value: 23871
+ value: 0
- generated: 2025-12-28
+ generated: 2026-01-03
  ---

  # BPY - Wikilangs Models
@@ -44,12 +44,13 @@ We analyze tokenizers, n-gram models, Markov chains, vocabulary statistics, and
  ### Models & Assets

  - Tokenizers (8k, 16k, 32k, 64k)
- - N-gram models (2, 3, 4-gram)
+ - N-gram models (2, 3, 4, 5-gram)
- - Markov chains (context of 1, 2, 3 and 4)
+ - Markov chains (context of 1, 2, 3, 4 and 5)
  - Subword N-gram and Markov chains
- - Embeddings in various sizes and dimensions
+ - Embeddings in various sizes and dimensions (aligned and unaligned)
  - Language Vocabulary
  - Language Statistics
+
  ![Performance Dashboard](visualizations/performance_dashboard.png)

  ### Analysis and Evaluation
@@ -59,7 +60,8 @@ We analyze tokenizers, n-gram models, Markov chains, vocabulary statistics, and
  - [3. Markov Chain Evaluation](#3-markov-chain-evaluation)
  - [4. Vocabulary Analysis](#4-vocabulary-analysis)
  - [5. Word Embeddings Evaluation](#5-word-embeddings-evaluation)
- - [6. Summary & Recommendations](#6-summary--recommendations)
+ - [6. Morphological Analysis (Experimental)](#6-morphological-analysis)
+ - [7. Summary & Recommendations](#7-summary--recommendations)
  - [Metrics Glossary](#appendix-metrics-glossary--interpretation-guide)
  - [Visualizations Index](#visualizations-index)
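All of the assets in this commit can be pulled individually from the hub. A minimal sketch with `huggingface_hub`; the repo id `wikilangs/bpy` is hypothetical (see huggingface.co/wikilangs for the actual repository names) and `repo_type="dataset"` is assumed from the `dataset_info` front matter above:

```python
# Sketch: download one asset from this repository. "wikilangs/bpy" is a
# hypothetical repo id inferred from the org link; adjust to the real one.
from huggingface_hub import hf_hub_download

path = hf_hub_download(
    repo_id="wikilangs/bpy",                              # assumption
    filename="models/tokenizer/bpy_tokenizer_32k.model",
    repo_type="dataset",                                  # assumption
)
print(path)  # local cached path of the downloaded file
```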
 
@@ -68,52 +70,57 @@ We analyze tokenizers, n-gram models, Markov chains, vocabulary statistics, and

  ![Tokenizer Compression](visualizations/tokenizer_compression.png)

+ ![Tokenizer Fertility](visualizations/tokenizer_fertility.png)
+
+ ![Tokenizer OOV](visualizations/tokenizer_oov.png)
+
+ ![Total Tokens](visualizations/tokenizer_total_tokens.png)
+
  ### Results

  | Vocab Size | Compression | Avg Token Len | UNK Rate | Total Tokens |
  |------------|-------------|---------------|----------|--------------|
- | **8k** | 4.628x | 4.39 | 0.2028% | 107,503 |
- | **16k** | 4.794x | 4.55 | 0.2101% | 103,773 |
- | **32k** | 4.937x | 4.69 | 0.2164% | 100,762 |
- | **64k** | 5.072x 🏆 | 4.81 | 0.2222% | 98,093 |
+ | **8k** | 4.500x | 4.51 | 0.2383% | 99,875 |
+ | **16k** | 4.662x | 4.67 | 0.2469% | 96,414 |
+ | **32k** | 4.817x | 4.83 | 0.2551% | 93,303 |
+ | **64k** | 4.934x 🏆 | 4.95 | 0.2613% | 91,087 |

  ### Tokenization Examples

  Below are sample sentences tokenized with each vocabulary size:

- **Sample 1:** `অক্টোবর ৮, গ্রেগরিয়ান পাঞ্জী হান ইলয়া আজি বসরর ২৮১তম (অধিবর্ষত ২৮২তম) দিন হান।...`
+ **Sample 1:** `কানাডার জাতীয় চিনত্হান (কোট অব আর্মস)হান। দেশএহানর পুরা নাঙহান কানাডা। জাতীয় চ...`

  | Vocab | Tokens | Count |
  |-------|--------|-------|
- | 8k | `▁অক্টোবর ▁৮ , ▁গ্রেগরিয়ান ▁পাঞ্জী ▁হান ▁ইলয়া ▁আজি ▁বসরর ▁২৮১ ... (+22 more)` | 32 |
- | 16k | `▁অক্টোবর ▁৮ , ▁গ্রেগরিয়ান ▁পাঞ্জী ▁হান ▁ইলয়া ▁আজি ▁বসরর ▁২৮১ ... (+22 more)` | 32 |
- | 32k | `▁অক্টোবর ▁৮ , ▁গ্রেগরিয়ান ▁পাঞ্জী ▁হান ▁ইলয়া ▁আজি ▁বসরর ▁২৮১ ... (+22 more)` | 32 |
- | 64k | `▁অক্টোবর ▁৮ , ▁গ্রেগরিয়ান ▁পাঞ্জী ▁হান ▁ইলয়া ▁আজি ▁বসরর ▁২৮১ ... (+22 more)` | 32 |
+ | 8k | `▁কানা ডার ▁জাতীয় ▁চিনত্হান ▁( কোট ▁অব ▁আর্মস ) হান ... (+21 more)` | 31 |
+ | 16k | `▁কানাডার ▁জাতীয় ▁চিনত্হান ▁( কোট ▁অব ▁আর্মস ) হান ... (+17 more)` | 27 |
+ | 32k | `▁কানাডার ▁জাতীয় ▁চিনত্হান ▁( কোট ▁অব ▁আর্মস ) হান ... (+17 more)` | 27 |
+ | 64k | `▁কানাডার ▁জাতীয় ▁চিনত্হান ▁( কোট ▁অব ▁আর্মস ) হান ... (+17 more)` | 27 |

- **Sample 2:** `হোসেনপুর ইউনিয়ন, পলাশবাড়ী
- হোসেনপুর ইউনিয়ন, রাজৈর`
+ **Sample 2:** `বদরপুর ইউনিয়ন, পটুয়াখালি সদর বদরপুর ইউনিয়ন, লালমোহন`

  | Vocab | Tokens | Count |
  |-------|--------|-------|
- | 8k | `▁হোসেন পুর ▁ইউনিয়ন , ▁পলাশবাড়ী ▁হোসেন পুর ▁ইউনিয়ন , ▁রাজৈর` | 10 |
- | 16k | `▁হোসেনপুর ▁ইউনিয়ন , ▁পলাশবাড়ী ▁হোসেনপুর ▁ইউনিয়ন , ▁রাজৈর` | 8 |
- | 32k | `▁হোসেনপুর ▁ইউনিয়ন , ▁পলাশবাড়ী ▁হোসেনপুর ▁ইউনিয়ন , ▁রাজৈর` | 8 |
- | 64k | `▁হোসেনপুর ▁ইউনিয়ন , ▁পলাশবাড়ী ▁হোসেনপুর ▁ইউনিয়ন , ▁রাজৈর` | 8 |
+ | 8k | `▁বদর পুর ▁ইউনিয়ন , ▁পট ুয়া খালি ▁সদর ▁বদর পুর ... (+3 more)` | 13 |
+ | 16k | `▁বদরপুর ▁ইউনিয়ন , ▁পটুয়াখালি ▁সদর ▁বদরপুর ▁ইউনিয়ন , ▁লালমোহন` | 9 |
+ | 32k | `▁বদরপুর ▁ইউনিয়ন , ▁পটুয়াখালি ▁সদর ▁বদরপুর ▁ইউনিয়ন , ▁লালমোহন` | 9 |
+ | 64k | `▁বদরপুর ▁ইউনিয়ন , ▁পটুয়াখালি ▁সদর ▁বদরপুর ▁ইউনিয়ন , ▁লালমোহন` | 9 |

- **Sample 3:** `আগষ্ট ২৫, গ্রেগরিয়ান পাঞ্জী হান ইলয়া আজি বসরর ২৩৭তম (অধিবর্ষত ২৩৮তম) দিন হান। ...`
+ **Sample 3:** `চৈত ২৫, বাংলা পাঞ্জী হান ইলয়া আজি বসরর লমিলগা মাহার ২৫তম দিন হান। খা এশিয়াত এব...`

  | Vocab | Tokens | Count |
  |-------|--------|-------|
- | 8k | `▁আগষ্ট ▁২৫ , ▁গ্রেগরিয়ান ▁পাঞ্জী ▁হান ▁ইলয়া ▁আজি ▁বসরর ▁২৩৭ ... (+22 more)` | 32 |
- | 16k | `▁আগষ্ট ▁২৫ , ▁গ্রেগরিয়ান ▁পাঞ্জী ▁হান ▁ইলয়া ▁আজি ▁বসরর ▁২৩৭ ... (+22 more)` | 32 |
- | 32k | `▁আগষ্ট ▁২৫ , ▁গ্রেগরিয়ান ▁পাঞ্জী ▁হান ▁ইলয়া ▁আজি ▁বসরর ▁২৩৭ ... (+22 more)` | 32 |
- | 64k | `▁আগষ্ট ▁২৫ , ▁গ্রেগরিয়ান ▁পাঞ্জী ▁হান ▁ইলয়া ▁আজি ▁বসরর ▁২৩৭ ... (+22 more)` | 32 |
+ | 8k | `▁চৈত ▁২৫ , ▁বাংলা ▁পাঞ্জী ▁হান ▁ইলয়া ▁আজি ▁বসরর ▁লমিলগা ... (+17 more)` | 27 |
+ | 16k | `▁চৈত ▁২৫ , ▁বাংলা ▁পাঞ্জী ▁হান ▁ইলয়া ▁আজি ▁বসরর ▁লমিলগা ... (+16 more)` | 26 |
+ | 32k | `▁চৈত ▁২৫ , ▁বাংলা ▁পাঞ্জী ▁হান ▁ইলয়া ▁আজি ▁বসরর ▁লমিলগা ... (+16 more)` | 26 |
+ | 64k | `▁চৈত ▁২৫ , ▁বাংলা ▁পাঞ্জী ▁হান ▁ইলয়া ▁আজি ▁বসরর ▁লমিলগা ... (+16 more)` | 26 |

  ### Key Findings

- - **Best Compression:** 64k achieves 5.072x compression
+ - **Best Compression:** 64k achieves 4.934x compression
- - **Lowest UNK Rate:** 8k with 0.2028% unknown tokens
+ - **Lowest UNK Rate:** 8k with 0.2383% unknown tokens
  - **Trade-off:** Larger vocabularies improve compression but increase model size
  - **Recommendation:** 32k vocabulary provides optimal balance for production use
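The compression figures above can be reproduced with the shipped tokenizers. The `▁` word-boundary marker in the token tables suggests SentencePiece models, so here is a minimal sketch under that assumption (the pipeline's exact metric definitions may differ):

```python
# Sketch: tokenize a report sample and recompute characters-per-token
# "compression", assuming the .model files are SentencePiece models.
import sentencepiece as spm

sp = spm.SentencePieceProcessor(
    model_file="models/tokenizer/bpy_tokenizer_32k.model"
)

text = "বদরপুর ইউনিয়ন, পটুয়াখালি সদর"        # from Sample 2 above
pieces = sp.encode(text, out_type=str)
print(pieces)                                  # e.g. ['▁বদরপুর', '▁ইউনিয়ন', ...]

compression = len(text) / max(len(pieces), 1)  # source chars per token
avg_len = sum(len(p.lstrip("▁")) for p in pieces) / max(len(pieces), 1)
print(f"{compression:.3f}x compression, avg token length {avg_len:.2f}")
```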
 
@@ -122,57 +129,89 @@ Below are sample sentences tokenized with each vocabulary size:

  ![N-gram Perplexity](visualizations/ngram_perplexity.png)

+ ![N-gram Unique](visualizations/ngram_unique.png)
+
  ![N-gram Coverage](visualizations/ngram_coverage.png)

  ### Results

- | N-gram | Perplexity | Entropy | Unique N-grams | Top-100 Coverage | Top-1000 Coverage |
- |--------|------------|---------|----------------|------------------|-------------------|
- | **2-gram** | 586 🏆 | 9.19 | 23,018 | 53.4% | 92.4% |
- | **2-gram** | 401 🏆 | 8.65 | 5,122 | 58.3% | 97.5% |
- | **3-gram** | 1,915 | 10.90 | 78,850 | 32.4% | 80.1% |
- | **3-gram** | 1,738 | 10.76 | 36,042 | 31.5% | 79.9% |
- | **4-gram** | 3,658 | 11.84 | 192,863 | 25.6% | 72.1% |
- | **4-gram** | 3,941 | 11.94 | 148,690 | 23.3% | 68.9% |
+ | N-gram | Variant | Perplexity | Entropy | Unique N-grams | Top-100 Coverage | Top-1000 Coverage |
+ |--------|---------|------------|---------|----------------|------------------|-------------------|
+ | **2-gram** | Word | 918 | 9.84 | 15,095 | 44.1% | 86.3% |
+ | **2-gram** | Subword | 598 🏆 | 9.23 | 14,925 | 51.1% | 92.8% |
+ | **3-gram** | Word | 1,566 | 10.61 | 31,653 | 38.0% | 79.5% |
+ | **3-gram** | Subword | 1,914 | 10.90 | 68,764 | 32.6% | 79.7% |
+ | **4-gram** | Word | 2,620 | 11.36 | 61,026 | 34.9% | 72.0% |
+ | **4-gram** | Subword | 3,540 | 11.79 | 166,785 | 26.1% | 72.8% |

  ### Top 5 N-grams by Size

- **2-grams:**
+ **2-grams (Word):**
+
+ | Rank | N-gram | Count |
+ |------|--------|-------|
+ | 1 | `সাক্ষরতার হারহান` | 26,823 |
+ | 2 | `অতার মা` | 20,499 |
+ | 3 | `জনসংখ্যার উপাত্ত` | 19,707 |
+ | 4 | `জনসংখ্যা ইলাতাই` | 19,552 |
+ | 5 | `লোক গননা` | 19,533 |
+
+ **3-grams (Word):**
+
+ | Rank | N-gram | Count |
+ |------|--------|-------|
+ | 1 | `মানুলেহা লোক গননা` | 19,527 |
+ | 2 | `মারির মানুলেহা লোক` | 19,526 |
+ | 3 | `অতার মা মুনি` | 16,569 |
+ | 4 | `গ অতার মা` | 15,694 |
+ | 5 | `লোক গননা অনুসারে` | 14,182 |
+
+ **4-grams (Word):**
+
+ | Rank | N-gram | Count |
+ |------|--------|-------|
+ | 1 | `মারির মানুলেহা লোক গননা` | 19,525 |
+ | 2 | `গ অতার মা মুনি` | 15,620 |
+ | 3 | `মানুলেহা লোক গননা অনুসারে` | 14,181 |
+ | 4 | `অক্ষাংশ বারো দ্রাঘিমাংশ ইলতাই` | 9,366 |
+ | 5 | `মাপাহানর অক্ষাংশ বারো দ্রাঘিমাংশ` | 9,315 |
+
+ **2-grams (Subword):**

  | Rank | N-gram | Count |
  |------|--------|-------|
- | 1 | `া র` | 368,952 |
- | 2 | `া ন` | 252,549 |
- | 3 | `ম া` | 171,696 |
- | 4 | `হ া` | 165,000 |
- | 5 | `য ়` | 137,023 |
+ | 1 | `র _` | 407,307 |
+ | 2 | `। _` | 163,117 |
+ | 3 | `হা ন` | 154,741 |
+ | 4 | `ন _` | 147,898 |
+ | 5 | `_ মা` | 138,499 |

- **3-grams:**
+ **3-grams (Subword):**

  | Rank | N-gram | Count |
  |------|--------|-------|
- | 1 | `ি ়` | 76,882 |
- | 2 | `র া` | 75,352 |
- | 3 | `ব র` | 75,053 |
- | 4 | `া ো` | 70,050 |
- | 5 | `ম ন` | 58,522 |
+ | 1 | `র _ মা` | 95,264 |
+ | 2 | `হা _` | 94,576 |
+ | 3 | `_ বা রো` | 68,931 |
+ | 4 | `বা রো _` | 68,907 |
+ | 5 | `_ উ` | 64,646 |

- **4-grams:**
+ **4-grams (Subword):**

  | Rank | N-gram | Count |
  |------|--------|-------|
- | 1 | `ব ো` | 69,326 |
- | 2 | `ইউন ি ়` | 55,842 |
- | 3 | `ম ু` | 51,209 |
- | 4 | `া ত` | 49,012 |
- | 5 | `া া` | 48,203 |
+ | 1 | `_ বা রো _` | 68,902 |
+ | 2 | `_ নি` | 64,360 |
+ | 3 | `ই নি য়` | 55,649 |
+ | 4 | `উ নি য় ন` | 55,616 |
+ | 5 | `জ সং খ্যা` | 44,876 |

  ### Key Findings

- - **Best Perplexity:** 2-gram with 401
+ - **Best Perplexity:** 2-gram (subword) with 598
  - **Entropy Trend:** Decreases with larger n-grams (more predictable)
- - **Coverage:** Top-1000 patterns cover ~69% of corpus
+ - **Coverage:** Top-1000 patterns cover ~73% of corpus
  - **Recommendation:** 4-gram or 5-gram for best predictive performance

  ---
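The perplexity and entropy columns are consistent with `PPL = 2^H` over the empirical n-gram distribution (e.g. 2^9.23 ≈ 598 for the subword 2-gram), so they can be recomputed from the released parquet files. A sketch, assuming each file carries an `ngram` string column and a `count` column (the schema is not shown in this diff):

```python
# Sketch: recompute entropy, perplexity and top-k coverage for one
# n-gram table. Column names "ngram"/"count" are assumed, not confirmed.
import numpy as np
import pandas as pd

df = pd.read_parquet("models/word_ngram/bpy_2gram_word.parquet")

p = df["count"] / df["count"].sum()       # empirical distribution
entropy = float(-(p * np.log2(p)).sum())  # Shannon entropy in bits
perplexity = 2.0 ** entropy               # matches the report's PPL column

counts = df["count"].sort_values(ascending=False)
coverage_1000 = counts.head(1000).sum() / counts.sum()

print(f"unique={len(df):,}  H={entropy:.2f}  PPL={perplexity:,.0f}  "
      f"top-1000 coverage={coverage_1000:.1%}")
```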
@@ -180,55 +219,86 @@ Below are text samples generated from each Markov chain model:

  ![Markov Entropy](visualizations/markov_entropy.png)

+ ![Markov Contexts](visualizations/markov_contexts.png)
+
  ![Markov Branching](visualizations/markov_branching.png)

  ### Results

- | Context | Avg Entropy | Perplexity | Branching Factor | Unique Contexts | Predictability |
- |---------|-------------|------------|------------------|-----------------|----------------|
- | **1** | 0.4826 | 1.397 | 3.74 | 51,595 | 51.7% |
- | **1** | 1.2901 | 2.445 | 11.60 | 918 | 0.0% |
- | **2** | 0.2087 | 1.156 | 2.09 | 192,535 | 79.1% |
- | **2** | 1.0212 | 2.030 | 6.12 | 10,624 | 0.0% |
- | **3** | 0.1700 | 1.125 | 1.65 | 401,469 | 83.0% |
- | **3** | 0.7832 | 1.721 | 3.64 | 64,912 | 21.7% |
- | **4** | 0.1274 🏆 | 1.092 | 1.40 | 660,290 | 87.3% |
- | **4** | 0.5823 🏆 | 1.497 | 2.36 | 236,324 | 41.8% |
+ | Context | Variant | Avg Entropy | Perplexity | Branching Factor | Unique Contexts | Predictability |
+ |---------|---------|-------------|------------|------------------|-----------------|----------------|
+ | **1** | Word | 0.7844 | 1.722 | 4.39 | 60,265 | 21.6% |
+ | **1** | Subword | 1.0518 | 2.073 | 11.77 | 3,035 | 0.0% |
+ | **2** | Word | 0.1820 | 1.134 | 1.54 | 262,556 | 81.8% |
+ | **2** | Subword | 0.6370 | 1.555 | 3.68 | 35,678 | 36.3% |
+ | **3** | Word | 0.0756 | 1.054 | 1.27 | 400,175 | 92.4% |
+ | **3** | Subword | 0.4890 | 1.403 | 2.43 | 131,152 | 51.1% |
+ | **4** | Word | 0.0493 🏆 | 1.035 | 1.19 | 505,259 | 95.1% |
+ | **4** | Subword | 0.3612 | 1.284 | 1.77 | 318,528 | 63.9% |

- ### Generated Text Samples
+ ### Generated Text Samples (Word-based)

- Below are text samples generated from each Markov chain model:
+ Below are text samples generated from each word-based Markov chain model:
+
+ **Context Size 1:**
+
+ 1. `বারো মৌজা ইউনিয়ন এগত গ ঘরর ইউনিট আসে চৌদ্দাহান মুঙেদে ইউনিয়ন কুড়িগ্রাম জিলার উপজিলাগি বাংলাদেশর ম...`
+ 2. `ইউনিয়ন এগত ১৩ হান আসে জনসংখ্যার উপাত্ত ভারতর পাঞ্জাব রাজ্যর পৌরসভা এহার মাপাহানর অক্ষাংশ বারো জেলা`
+ 3. `উপাত্ত শহর এহার মাপাহানর অক্ষাংশ বারো গাঙ ২২ বারো দ্রাঘিমাংশ ইলতাই সমূদ্রুহার মান্নাহাত্ত এহানর সাক্...`
+
+ **Context Size 2:**
+
+ 1. `সাক্ষরতার হারহান ৫৪ মুনির মা সাক্ষরতার হারহান ৬৮ মুনির মা সাক্ষরতার হারহান ৮২ বারো জেলার মা হারহান`
+ 2. `অতার মা হুকানাহান ৬৬ ৬৯ বর্গমাইল অতার মা মুনি ৫০ বারো জেলা বেয়াপা ৩৮ এহানাত সাক্ষরতার হারহান`
+ 3. `জনসংখ্যার উপাত্ত ব্রাজিলর মারির মানুলেহা লোক গননা অনুসারে বোৱা এসপেরান্সা পর্তুগীজ nova guataporanga...`
+
+ **Context Size 3:**
+
+ 1. `মানুলেহা লোক গননা অনুসারে সেন্ট্রো নোভো ডো মারানহো পৌরসভাহানর জনসংখ্যা ইলাতাই ৩৬০ ৭০৬ গ অতার মা মুনি...`
+ 2. `মারির মানুলেহা লোক গননা অনুসারে কুডুমুডি শহরহানর জনসংখ্যা ইলাতাই ২০ ০৯৫ গ অতার মা মুনি ৪৯ বারো জিলা`
+ 3. `অতার মা মুনি ৫০ বারো জিলা বেয়াপা এরে পৌরসভার মানু শহরেদে বারো গাঙেদে থাইতারা হারি বর্গ কিলোমিটারে ৪...`
+
+ **Context Size 4:**
+
+ 1. `মারির মানুলেহা লোক গননা অনুসারে শিকোহাবাদ শহরহানর জনসংখ্যা ইলাতাই ৮৮ ০৭৫ গ অতার মা মুনি ৫০ বারো জিলা...`
+ 2. `গ অতার মা মুনি ৫২ বারো জিলা বেয়াপা ৪৮ ইউনিয়ন এগত ১৮ বসরর গজে মানু আসি লহঙ করিসিতা বেয়াপা`
+ 3. `মানুলেহা লোক গননা অনুসারে উটুৱাদা র জনসংখ্যা ইলাতাই ৩৫ ২১৪ ঘরর ইউনিট আসে হারি বর্গ মাইলে ৩১ ৭গ মানু`
+
+ ### Generated Text Samples (Subword-based)
+
+ Below are text samples generated from each subword-based Markov chain model:

  **Context Size 1:**

- 1. `া ৪৮ . ২গ ঘর পর ্ ত ি হ া জ ে দ ে ব`
- 2. `ি বর থ া ন । এর ে শর জ ি ল া র ১৬ ,`
- 3. `র ে র ম ু ন ৬৭ % । আয ় া র ো ৭২০২গ গ`
+ 1. `_হান_বারো_খা_ইউপাসিতার_`
+ 2. `র),২,_ইউনিয়নর_ক_মা`
+ 3. `নর_বারো_জন_(১,৬৮%,`

  **Context Size 2:**

- 1. `া র া ষ ্ ট ্ র া ঘ ি ম া ন ) ১ ,`
- 2. `া ন ু ল া র ম া প া হ া ন । খ া লয`
- 3. `ম া হ া রহ া ন আস ে । চ ৌ দ ্ র া ষ`
+ 1. `র_উপাত্ত_পৌরসভারতর_হার_`
+ 2. `।_পৌরসভাহানর_গননা)_মানু`
+ 3. `হান।_সাধারণ_বপ/য়্যাম।_এ`

  **Context Size 3:**

- 1. `ি য ় ন , ১৮৭ হ া ন ব া র ো দ ্ র া জ`
- 2. `র ম া ম ু ন ি ৫২ % , ব া র ো প া ন ্`
- 3. `ব া র ো ' language death ' উল ্ ল ে শব া র ্ ক ি`
+ 1. `র_মানু_থাইতারা।_হারি_বর্গমাই`
+ 2. `হান_আম্ফোয়ে_ৱারিশপুর_*_টঙ্গিবা`
+ 3. `_বারো_অধিবর্ষ_আহান।_জনসংখ্যা`

  **Context Size 4:**

- 1. `ব া র ো ম ৌ জ া ইউন ি য ় নর স া ক ্ ষরত া`
- 2. `ইউন ি য ় ন এগত গ া ঙ : ২১ হ া ন ব া র ো ম`
- 3. `ম া ন ু ১৭ , ৬৭৩গ শহর ে দ ে ব া র ো হ ু ক া`
+ 1. `_বারো_জেলা/বেয়াপা_৪৯%_বারো_দ্রা`
+ 2. `_ইউনিট_আসে।_চৌদ্দাহান_মুঙেদে:`
+ 3. `ইউনিয়ন।_ঔয়াঙেদে:_---_ইউ`

  ### Key Findings

- - **Best Predictability:** Context-4 with 87.3% predictability
+ - **Best Predictability:** Context-4 (word) with 95.1% predictability
  - **Branching Factor:** Decreases with context size (more deterministic)
- - **Memory Trade-off:** Larger contexts require more storage (236,324 contexts)
+ - **Memory Trade-off:** Larger contexts require more storage (318,528 contexts)
  - **Recommendation:** Context-3 or Context-4 for text generation

  ---
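The generated samples above come from weighted sampling over stored transition counts. A sketch of that procedure, assuming `context`, `next`, and `count` columns in the Markov parquet files (the schema is not shown in this diff):

```python
# Sketch: sample from the context-2 word Markov chain. The column names
# "context"/"next"/"count" are assumptions about the parquet schema.
import random
from collections import defaultdict

import pandas as pd

df = pd.read_parquet("models/word_markov/bpy_markov_ctx2_word.parquet")

transitions = defaultdict(lambda: ([], []))
for ctx, nxt, cnt in zip(df["context"], df["next"], df["count"]):
    words, weights = transitions[ctx]
    words.append(nxt)
    weights.append(int(cnt))

def generate(seed: str, n_words: int = 20, ctx_size: int = 2) -> str:
    out = seed.split()
    for _ in range(n_words):
        ctx = " ".join(out[-ctx_size:])
        if ctx not in transitions:
            break                                   # unseen context: stop
        words, weights = transitions[ctx]
        out.append(random.choices(words, weights=weights)[0])
    return " ".join(out)

# "জনসংখ্যার উপাত্ত" is a frequent bigram per the tables above.
print(generate("জনসংখ্যার উপাত্ত"))
```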
@@ -244,64 +314,64 @@ Below are text samples generated from each Markov chain model:

  | Metric | Value |
  |--------|-------|
- | Vocabulary Size | 23,871 |
+ | Vocabulary Size | 33,017 |
- | Total Tokens | 5,192,993 |
+ | Total Tokens | 2,031,395 |
- | Mean Frequency | 217.54 |
+ | Mean Frequency | 61.53 |
  | Median Frequency | 3 |
- | Frequency Std Dev | 6063.65 |
+ | Frequency Std Dev | 896.57 |

  ### Most Common Words

  | Rank | Word | Frequency |
  |------|------|-----------|
- | 1 | | 542,779 |
- | 2 | | 370,575 |
- | 3 | | 283,266 |
- | 4 | | 212,745 |
- | 5 | | 205,066 |
- | 6 | | 196,409 |
- | 7 | | 185,131 |
- | 8 | | 177,046 |
- | 9 | | 164,231 |
- | 10 | | 161,639 |
+ | 1 | বারো | 68,904 |
+ | 2 | ইউনিয়ন | 42,536 |
+ | 3 | উপাত্ত | 36,521 |
+ | 4 | হারহান | 31,910 |
+ | 5 | মা | 31,024 |
+ | 6 | মানু | 30,464 |
+ | 7 | সাক্ষরতার | 26,839 |
+ | 8 | | 26,426 |
+ | 9 | অতার | 25,586 |
+ | 10 | জনসংখ্যার | 24,826 |

  ### Least Common Words (from vocabulary)

  | Rank | Word | Frequency |
  |------|------|-----------|
- | 1 | নঅও | 2 |
- | 2 | অহতই | 2 |
- | 3 | আপত | 2 |
- | 4 | থকভ | 2 |
- | 5 | হগই | 2 |
- | 6 | হসর | 2 |
- | 7 | পরমপ | 2 |
- | 8 | হবন | 2 |
- | 9 | আকগও | 2 |
- | 10 | সযন | 2 |
+ | 1 | সুখর | 2 |
+ | 2 | পরিত্যাগ | 2 |
+ | 3 | মালতী | 2 |
+ | 4 | আকগও | 2 |
+ | 5 | ক্ষনিক | 2 |
+ | 6 | সযন্তে | 2 |
+ | 7 | কণ্টক | 2 |
+ | 8 | পরিহার | 2 |
+ | 9 | বিরোধিতা | 2 |
+ | 10 | অপরাপর | 2 |

  ### Zipf's Law Analysis

  | Metric | Value |
  |--------|-------|
- | Zipf Coefficient | 1.3958 |
+ | Zipf Coefficient | 1.3135 |
- | R² (Goodness of Fit) | 0.983849 |
+ | R² (Goodness of Fit) | 0.980294 |
  | Adherence Quality | **excellent** |

  ### Coverage Analysis

  | Top N Words | Coverage |
  |-------------|----------|
- | Top 100 | 89.1% |
+ | Top 100 | 62.6% |
- | Top 1,000 | 97.2% |
+ | Top 1,000 | 89.9% |
- | Top 5,000 | 98.8% |
+ | Top 5,000 | 95.0% |
- | Top 10,000 | 99.3% |
+ | Top 10,000 | 96.8% |

  ### Key Findings

- - **Zipf Compliance:** R²=0.9838 indicates excellent adherence to Zipf's law
+ - **Zipf Compliance:** R²=0.9803 indicates excellent adherence to Zipf's law
- - **High Frequency Dominance:** Top 100 words cover 89.1% of corpus
+ - **High Frequency Dominance:** Top 100 words cover 62.6% of corpus
- - **Long Tail:** 13,871 words needed for remaining 0.7% coverage
+ - **Long Tail:** 23,017 words needed for remaining 3.2% coverage

  ---
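The Zipf fit reported above is a linear regression in log-log space. A sketch that refits the coefficient and R², assuming the vocabulary parquet exposes a `frequency` column (schema assumed):

```python
# Sketch: refit Zipf's law f(r) ≈ C / r^s on the rank-frequency curve.
# The "frequency" column name is an assumption about the parquet schema.
import numpy as np
import pandas as pd

vocab = pd.read_parquet("models/vocabulary/bpy_vocabulary.parquet")
freq = np.sort(vocab["frequency"].to_numpy())[::-1].astype(float)
rank = np.arange(1, len(freq) + 1)

# log f = log C - s * log r, so -slope estimates the Zipf coefficient s
log_r, log_f = np.log(rank), np.log(freq)
slope, intercept = np.polyfit(log_r, log_f, 1)

residuals = log_f - (slope * log_r + intercept)
r_squared = 1.0 - residuals.var() / log_f.var()

print(f"Zipf coefficient s = {-slope:.4f}, R² = {r_squared:.6f}")
```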
  ## 5. Word Embeddings Evaluation
@@ -314,24 +384,116 @@ Below are text samples generated from each Markov chain model:

  ![t-SNE Sentences](visualizations/tsne_sentences.png)

- ### Model Comparison
+ ### 5.1 Cross-Lingual Alignment
+
+ > *Note: Multilingual alignment visualization not available for this language.*
+
+ ### 5.2 Model Comparison

- | Model | Vocab Size | Dimension | Avg Norm | Std Norm | Isotropy |
- |-------|------------|-----------|----------|----------|----------|
- | **mono_32d** | 12,408 | 32 | 4.835 | 0.825 | 0.7157 🏆 |
- | **mono_64d** | 12,408 | 64 | 5.039 | 0.789 | 0.5338 |
- | **mono_128d** | 12,408 | 128 | 5.101 | 0.751 | 0.2644 |
- | **embeddings_enhanced** | 0 | 0 | 0.000 | 0.000 | 0.0000 |
+ | Model | Dimension | Isotropy | Semantic Density | Alignment R@1 | Alignment R@10 |
+ |-------|-----------|----------|------------------|---------------|----------------|
+ | **mono_32d** | 32 | 0.7051 🏆 | 0.3773 | N/A | N/A |
+ | **mono_64d** | 64 | 0.5256 | 0.3351 | N/A | N/A |
+ | **mono_128d** | 128 | 0.2472 | 0.3242 | N/A | N/A |

  ### Key Findings

- - **Best Isotropy:** mono_32d with 0.7157 (more uniform distribution)
+ - **Best Isotropy:** mono_32d with 0.7051 (more uniform distribution)
- - **Dimension Trade-off:** Higher dimensions capture more semantics but reduce isotropy
- - **Vocabulary Coverage:** All models cover 12,408 words
+ - **Semantic Density:** Average pairwise similarity of 0.3456. Lower values indicate better semantic separation.
+ - **Alignment Quality:** No aligned models evaluated in this run.
- - **Recommendation:** 100d for balanced semantic capture and efficiency
+ - **Recommendation:** 128d aligned for best cross-lingual performance
+
+ ---
+ ## 6. Morphological Analysis (Experimental)
+
+ > ⚠️ **Warning:** This language shows low morphological productivity. The statistical signals used for this analysis may be noisy or less reliable than for morphologically rich languages.
+
+ This section presents an automated morphological analysis derived from the statistical divergence between word-level and subword-level models. By analyzing where subword predictability spikes and where word-level coverage fails, we can infer linguistic structures without supervised data.
+
+ ### 6.1 Productivity & Complexity
+
+ | Metric | Value | Interpretation | Recommendation |
+ |--------|-------|----------------|----------------|
+ | Productivity Index | **0.000** | Low morphological productivity | ⚠️ Likely unreliable |
+ | Idiomaticity Gap | **-1.000** | Low formulaic content | - |
+
+ ### 6.2 Affix Inventory (Productive Units)
+
+ These are the most productive prefixes and suffixes identified by sampling the vocabulary for global substitutability patterns. A unit is considered an affix if stripping it leaves a valid stem that appears in other contexts.
+
+ #### Productive Prefixes
+ | Prefix | Examples |
+ |--------|----------|
+ | `-কা` | কানারিও, কাবেরীপক্কম, কানুপুর |
+ | `-মা` | মাকৌপিন, মাঝরদিয়া, মার্চ |
+
+ #### Productive Suffixes
+ | Suffix | Examples |
+ |--------|----------|
+ | `-া` | ৱারান্টিনা, সাড়া, হঙকরাতারা |
+ | `-র` | শিরুর, উপর, গেজেটার |
+ | `-়া` | সাড়া, মাঝরদিয়া, মহুয়া |
+ | `-ুর` | শিরুর, তরফপুর, রাইপুর |
+ | `-য়া` | মাঝরদিয়া, মহুয়া, ক্যালিফোর্নিয়া |
+ | `-িয়া` | মাঝরদিয়া, ক্যালিফোর্নিয়া, সাফিয়া |
+ | `-পুর` | তরফপুর, রাইপুর, সাদিপুর |
+ | `-ার` | গেজেটার, সুইটৱাটার, পিনার |
+
+ ### 6.3 Bound Stems (Lexical Roots)
+
+ Bound stems are high-frequency subword units that are semantically cohesive but rarely appear as standalone words. These often correspond to the 'core' of a word that requires inflection or derivation to be valid.
+
+ *No significant bound stems detected.*
+
+ ### 6.4 Affix Compatibility (Co-occurrence)
+
+ This table shows which prefixes and suffixes most frequently co-occur on the same stems, revealing the 'stacking' rules of the language's morphology.
+
+ | Prefix | Suffix | Frequency | Examples |
+ |--------|--------|-----------|----------|
+ | `-কা` | `-া` | 47 words | কাসিবুগ্গা, কাপিক্সাবা |
+ | `-মা` | `-া` | 37 words | মারিনগা, মাইসাটুয়া |
+ | `-কা` | `-র` | 31 words | কাতারর, কাড়াথুর |
+ | `-মা` | `-র` | 23 words | মাহিলপুর, মানিয়ার |
+ | `-কা` | `-়া` | 16 words | কানয়া, কামারিয়া |
+ | `-কা` | `-য়া` | 13 words | কানয়া, কামারিয়া |
+ | `-মা` | `-়া` | 12 words | মাইসাটুয়া, মাছপাড়া |
+ | `-কা` | `-ুর` | 11 words | কাড়াথুর, কাবনুর |
+ | `-কা` | `-িয়া` | 10 words | কামারিয়া, কালেডোনিয়া |
+ | `-মা` | `-য়া` | 7 words | মাইসাটুয়া, মাছুয়া |
+
+ ### 6.5 Recursive Morpheme Segmentation
+
+ Using **Recursive Hierarchical Substitutability**, we decompose complex words into their constituent morphemes. This approach handles nested affixes (e.g., `prefix-prefix-root-suffix`).
+
+ | Word | Suggested Split | Confidence | Stem |
+ |------|-----------------|------------|------|
+ | লুসিয়ারা | **`লুসি-য়া-রা`** | 6.0 | `লুসি` |
+ | জিন্দারপুর | **`জিন্দ-ার-পুর`** | 6.0 | `জিন্দ` |
+ | কল্যানপুর | **`কল্যান-পুর`** | 4.5 | `কল্যান` |
+ | য়েরভালিয়া | **`য়েরভাল-িয়া`** | 4.5 | `য়েরভাল` |
+ | রায়হানপুর | **`রায়হান-পুর`** | 4.5 | `রায়হান` |
+ | নাইজেরিয়া | **`নাইজের-িয়া`** | 4.5 | `নাইজের` |
+ | মোস্তফাপুর | **`মোস্তফা-পুর`** | 4.5 | `মোস্তফা` |
+ | সিঙ্গাপুর | **`সিঙ্গা-পুর`** | 4.5 | `সিঙ্গা` |
+ | মির্জাপুর | **`মির্জা-পুর`** | 4.5 | `মির্জা` |
+ | পালমাসিয়া | **`পালমাস-িয়া`** | 4.5 | `পালমাস` |
+ | ইসলামিয়া | **`ইসলাম-িয়া`** | 4.5 | `ইসলাম` |
+ | কামানডুকাইয়া | **`কা-মা-নডুকাই-য়া`** | 4.5 | `নডুকাই` |
+ | চরকুমারিয়া | **`চরকুম-ার-িয়া`** | 3.0 | `চরকুম` |
+ | কাউন্দিয়া | **`কা-উন্দ-িয়া`** | 3.0 | `উন্দ` |
+ | কালাইমাজপাড়া | **`কা-লাইমাজপাড-়া`** | 3.0 | `লাইমাজপাড` |
+
+ ### 6.6 Linguistic Interpretation
+
+ > **Automated Insight:**
+ The language BPY appears to be more isolating or has a highly fixed vocabulary. Word-level models perform nearly as well as subword models, indicating fewer productive morphological processes.
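The affix tables above rest on a substitutability test: a unit counts as an affix when stripping it leaves a stem that is attested elsewhere in the vocabulary. A toy illustration of that test (not the pipeline's actual Recursive Hierarchical Substitutability implementation):

```python
# Sketch: rank suffix candidates by how many vocabulary words still leave
# an attested stem after the suffix is stripped. Toy illustration only.
from collections import Counter

def suffix_candidates(vocab: set[str], max_len: int = 4, min_stems: int = 5):
    hits = Counter()
    for word in vocab:
        for k in range(1, min(max_len, len(word) - 2) + 1):
            stem, suffix = word[:-k], word[-k:]
            if stem in vocab:          # stem appears on its own: evidence
                hits[suffix] += 1
    return [s for s, n in hits.most_common() if n >= min_stems]

# Place names from the tables above: stripping -পুর leaves attested stems.
words = {"রাইপুর", "রাই", "তরফপুর", "তরফ", "সাদিপুর", "সাদি"}
print(suffix_candidates(words, min_stems=2))   # ['পুর']
```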
  ---
- ## 6. Summary & Recommendations
+ ## 7. Summary & Recommendations

  ![Performance Dashboard](visualizations/performance_dashboard.png)
@@ -339,11 +501,12 @@ Below are text samples generated from each Markov chain model:

  | Component | Recommended | Rationale |
  |-----------|-------------|-----------|
- | Tokenizer | **32k BPE** | Best compression (5.07x) with low UNK rate |
+ | Tokenizer | **64k BPE** | Best compression (4.93x) |
- | N-gram | **5-gram** | Lowest perplexity (401) |
+ | N-gram | **2-gram** | Lowest perplexity (598) |
- | Markov | **Context-4** | Highest predictability (87.3%) |
+ | Markov | **Context-4** | Highest predictability (95.1%) |
  | Embeddings | **100d** | Balanced semantic capture and isotropy |

+
  ---
  ## Appendix: Metrics Glossary & Interpretation Guide
@@ -533,7 +696,8 @@ If you use these models in your research, please cite:
  author = {Kamali, Omar},
  title = {Wikilangs: Open NLP Models for Wikipedia Languages},
  year = {2025},
- publisher = {HuggingFace},
+ doi = {10.5281/zenodo.18073153},
+ publisher = {Zenodo},
  url = {https://huggingface.co/wikilangs}
  institution = {Omneity Labs}
  }
@@ -549,7 +713,8 @@ MIT License - Free for academic and commercial use.
  - 🤗 Models: [huggingface.co/wikilangs](https://huggingface.co/wikilangs)
  - 📊 Data: [wikipedia-monthly](https://huggingface.co/datasets/omarkamali/wikipedia-monthly)
  - 👤 Author: [Omar Kamali](https://huggingface.co/omarkamali)
+ - 🤝 Sponsor: [Featherless AI](https://featherless.ai)
  ---
  *Generated by Wikilangs Models Pipeline*

- *Report Date: 2025-12-28 07:50:18*
+ *Report Date: 2026-01-03 07:50:05*
models/embeddings/monolingual/bpy_128d.bin CHANGED
@@ -1,3 +1,3 @@
  version https://git-lfs.github.com/spec/v1
- oid sha256:4fb6b15d9bd093df27b0fd9a4efcd269979e163e27ef6ef6754ac07eb74d1ab2
- size 1037046131
+ oid sha256:f3b15a24394779ec3beddd13e51b8416a137f08e7eef6cd8c13cd09cf43733f0
+ size 1035031576
models/embeddings/monolingual/bpy_128d_metadata.json CHANGED
@@ -3,11 +3,13 @@
  "dimension": 128,
  "version": "monolingual",
  "training_params": {
- "dim": 128,
+ "algorithm": "skipgram",
  "min_count": 5,
  "window": 5,
  "negative": 5,
- "epochs": 5
+ "epochs": 5,
+ "encoding_method": "rope",
+ "dim": 128
  },
- "vocab_size": 12408
+ "vocab_size": 10500
  }
models/embeddings/monolingual/bpy_32d.bin CHANGED
@@ -1,3 +1,3 @@
  version https://git-lfs.github.com/spec/v1
- oid sha256:47e22726d92510c004d40792ae23c82129d179cc6367e68d751f9764a7ca0df1
- size 259516787
+ oid sha256:e0c0840199d220785e7d74606060b7a6126430498c7075055630d36f50ea7419
+ size 258967576
models/embeddings/monolingual/bpy_32d_metadata.json CHANGED
@@ -3,11 +3,13 @@
  "dimension": 32,
  "version": "monolingual",
  "training_params": {
- "dim": 32,
+ "algorithm": "skipgram",
  "min_count": 5,
  "window": 5,
  "negative": 5,
- "epochs": 5
+ "epochs": 5,
+ "encoding_method": "rope",
+ "dim": 32
  },
- "vocab_size": 12408
+ "vocab_size": 10500
  }
models/embeddings/monolingual/bpy_64d.bin CHANGED
@@ -1,3 +1,3 @@
  version https://git-lfs.github.com/spec/v1
- oid sha256:69a920bfb09617e816f807e477568361980d5eb7d59bc6fce4f483f40da1725b
- size 518693235
+ oid sha256:f6e2813dae96a73ea737d2e4dda08e790f6fb7264595ac54099bd4d8b2292690
+ size 517655576
models/embeddings/monolingual/bpy_64d_metadata.json CHANGED
@@ -3,11 +3,13 @@
  "dimension": 64,
  "version": "monolingual",
  "training_params": {
- "dim": 64,
+ "algorithm": "skipgram",
  "min_count": 5,
  "window": 5,
  "negative": 5,
- "epochs": 5
+ "epochs": 5,
+ "encoding_method": "rope",
+ "dim": 64
  },
- "vocab_size": 12408
+ "vocab_size": 10500
  }
models/subword_markov/bpy_markov_ctx1_subword.parquet CHANGED
@@ -1,3 +1,3 @@
  version https://git-lfs.github.com/spec/v1
- oid sha256:bdb54276207e5b6285187f2b86c36ad177fb87faaaaefdb2ccb4c908e1f0cb67
- size 87617
+ oid sha256:8471db68e598c67135a721fbc5abea6aabf5b3531ac753f67a51a53d3a162efe
+ size 259139
models/subword_markov/bpy_markov_ctx1_subword_metadata.json CHANGED
@@ -2,6 +2,6 @@
  "context_size": 1,
  "variant": "subword",
  "language": "bpy",
- "unique_contexts": 918,
- "total_transitions": 14885921
+ "unique_contexts": 3035,
+ "total_transitions": 9209425
  }
models/subword_markov/bpy_markov_ctx2_subword.parquet CHANGED
@@ -1,3 +1,3 @@
  version https://git-lfs.github.com/spec/v1
- oid sha256:4f249b093261db16fcadfa715db9f8802b43064e74e1ba8719ca675247806d36
- size 535820
+ oid sha256:9f7c571eecbae6ba0f24d562781877ff384a71459459476bc095f782d4915833
+ size 1150172
models/subword_markov/bpy_markov_ctx2_subword_metadata.json CHANGED
@@ -2,6 +2,6 @@
  "context_size": 2,
  "variant": "subword",
  "language": "bpy",
- "unique_contexts": 10624,
- "total_transitions": 14860752
+ "unique_contexts": 35678,
+ "total_transitions": 9184428
  }
models/subword_markov/bpy_markov_ctx3_subword.parquet CHANGED
@@ -1,3 +1,3 @@
  version https://git-lfs.github.com/spec/v1
- oid sha256:5ff090decea3ecf5111ffe1a1235436699266f613b38361d41312345bb838b7e
- size 1775613
+ oid sha256:1e4b755551be0d4958885e5ef7ce3604b86f71a5d45b2cd1cae7865d96cd7b1c
+ size 3026127
models/subword_markov/bpy_markov_ctx3_subword_metadata.json CHANGED
@@ -2,6 +2,6 @@
  "context_size": 3,
  "variant": "subword",
  "language": "bpy",
- "unique_contexts": 64912,
- "total_transitions": 14835583
+ "unique_contexts": 131152,
+ "total_transitions": 9159431
  }
models/subword_markov/bpy_markov_ctx4_subword.parquet CHANGED
@@ -1,3 +1,3 @@
  version https://git-lfs.github.com/spec/v1
- oid sha256:52d356df5651a0f5f54331cd5e76f74df23d7adf3534dae88c6e119d9b761ee9
- size 5047184
+ oid sha256:81b6102ef4dca504054a436f595fb2141c4d5e0b2fcb06ee59c2a27d91217af2
+ size 6259337
models/subword_markov/bpy_markov_ctx4_subword_metadata.json CHANGED
@@ -2,6 +2,6 @@
  "context_size": 4,
  "variant": "subword",
  "language": "bpy",
- "unique_contexts": 236324,
- "total_transitions": 14810414
+ "unique_contexts": 318528,
+ "total_transitions": 9134434
  }
models/subword_ngram/bpy_2gram_subword.parquet CHANGED
@@ -1,3 +1,3 @@
  version https://git-lfs.github.com/spec/v1
- oid sha256:cdabbb125dfaa15729dfa32510019102c91c18a767800511da74d1f6d7f74b99
- size 71525
+ oid sha256:f82847f832ddbf2d1fa13369d81d98d1250a13406959bf2e9c7bcf6411f73dae
+ size 223418
models/subword_ngram/bpy_2gram_subword_metadata.json CHANGED
@@ -2,6 +2,6 @@
  "n": 2,
  "variant": "subword",
  "language": "bpy",
- "unique_ngrams": 5122,
- "total_ngrams": 14885921
+ "unique_ngrams": 14925,
+ "total_ngrams": 9209425
  }
models/subword_ngram/bpy_3gram_subword.parquet CHANGED
@@ -1,3 +1,3 @@
  version https://git-lfs.github.com/spec/v1
- oid sha256:c4170250afd8b7e81e3bb514e2ea7e5003b5a4cac40d1d8cf1b944941fd5763b
- size 488221
+ oid sha256:3399384c312065ab04e876e9a8339dd16ad7672600be8f555f1f24f2f750a8e3
+ size 994803
models/subword_ngram/bpy_3gram_subword_metadata.json CHANGED
@@ -2,6 +2,6 @@
  "n": 3,
  "variant": "subword",
  "language": "bpy",
- "unique_ngrams": 36042,
- "total_ngrams": 14860752
+ "unique_ngrams": 68764,
+ "total_ngrams": 9184428
  }
models/subword_ngram/bpy_4gram_subword.parquet CHANGED
@@ -1,3 +1,3 @@
  version https://git-lfs.github.com/spec/v1
- oid sha256:49e5e1be83e58be9ef9945b7e74642ea9638babffc0bcefcc28e74284c2eca28
- size 1849563
+ oid sha256:4f5a24ecfb409025513b9a27e490935fab2358c050e05d7b200d31b657033a48
+ size 2381769
models/subword_ngram/bpy_4gram_subword_metadata.json CHANGED
@@ -2,6 +2,6 @@
  "n": 4,
  "variant": "subword",
  "language": "bpy",
- "unique_ngrams": 148690,
- "total_ngrams": 14835583
+ "unique_ngrams": 166785,
+ "total_ngrams": 9159431
  }
models/tokenizer/bpy_tokenizer_16k.model CHANGED
@@ -1,3 +1,3 @@
  version https://git-lfs.github.com/spec/v1
- oid sha256:8b1b87ae27874d0e0deb8c8a9507b9010354c9e4be4f579eaf9efb10233bc2b7
- size 608584
+ oid sha256:de3058025822db51926bead450c89e6488fb90b601282c5bfa70d01e3bb8d118
+ size 606496
models/tokenizer/bpy_tokenizer_16k.vocab CHANGED
The diff for this file is too large to render. See raw diff
 
models/tokenizer/bpy_tokenizer_32k.model CHANGED
@@ -1,3 +1,3 @@
  version https://git-lfs.github.com/spec/v1
- oid sha256:8549dc42b070078e9b8bf38d5cb50e4361f0b9c8ba4de0cb3db16ff7947e4703
- size 1001952
+ oid sha256:2167815382b0d49ee9d942580a311d5095b796299c9cca191099977285f1f730
+ size 1016119
models/tokenizer/bpy_tokenizer_32k.vocab CHANGED
The diff for this file is too large to render. See raw diff
 
models/tokenizer/bpy_tokenizer_64k.model CHANGED
@@ -1,3 +1,3 @@
  version https://git-lfs.github.com/spec/v1
- oid sha256:8984f34241873e65994cc434c8d728913d71f46df6299666ebba4dab4b121e30
- size 1889033
+ oid sha256:15960ddb65b0199fe283b07bc06c542bc2bc8f117e733be94b9fee95d82e1d39
+ size 1838927
models/tokenizer/bpy_tokenizer_64k.vocab CHANGED
The diff for this file is too large to render. See raw diff
 
models/tokenizer/bpy_tokenizer_8k.model CHANGED
@@ -1,3 +1,3 @@
  version https://git-lfs.github.com/spec/v1
- oid sha256:17966726c85a44c946d110bbc967aa3995eace9b617553b974f23d2e598fe035
- size 419939
+ oid sha256:f4ec725812a0a54260f2a0e37b399025d18f1be5288cd4960c495698a32d792b
+ size 420550
models/tokenizer/bpy_tokenizer_8k.vocab CHANGED
The diff for this file is too large to render. See raw diff
 
models/vocabulary/bpy_vocabulary.parquet CHANGED
@@ -1,3 +1,3 @@
  version https://git-lfs.github.com/spec/v1
- oid sha256:68b8a4ca35238e27244b1e44260e17a48c67a189db1d781d8c200c28fb378b13
- size 368371
+ oid sha256:566686527453f8982d9f7d761db3c3d1b83725fe8c367e171c03880c653e20a1
+ size 615993
models/vocabulary/bpy_vocabulary_metadata.json CHANGED
@@ -1,16 +1,17 @@
  {
  "language": "bpy",
- "vocabulary_size": 23871,
+ "vocabulary_size": 33017,
+ "variant": "full",
  "statistics": {
- "type_token_ratio": 0.009857897178592595,
+ "type_token_ratio": 0.029318932277929487,
  "coverage": {
- "top_100": 0.8861177653236629,
+ "top_100": 0.6173631613153301,
- "top_1000": 0.9663878729322724,
+ "top_1000": 0.8866567771129692,
- "top_5000": 0.9828984715508948,
+ "top_5000": 0.9377555570451412,
- "top_10000": 0.9880168624748257
+ "top_10000": 0.9552711418354352
  },
- "hapax_count": 27593,
+ "hapax_count": 27343,
- "hapax_ratio": 0.5361612000621794,
+ "hapax_ratio": 0.45299867461895293,
- "total_documents": 25169
+ "total_documents": 24997
  }
  }
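The shipped vocabulary appears pruned (the rarest words in the README tables have frequency 2), while the `type_token_ratio` and `hapax_ratio` above are evidently computed on the unpruned type inventory, so a naive recomputation will not match them exactly. A sketch, assuming `word` and `frequency` columns:

```python
# Sketch: recompute type/token statistics from the vocabulary parquet.
# "word"/"frequency" column names are assumptions; the metadata's ratios
# are apparently based on the unpruned vocabulary and will differ.
import pandas as pd

vocab = pd.read_parquet("models/vocabulary/bpy_vocabulary.parquet")
total_tokens = int(vocab["frequency"].sum())

print("types (shipped):  ", len(vocab))
print("total tokens:     ", total_tokens)
print("TTR (shipped):    ", len(vocab) / total_tokens)
print("hapax (shipped):  ", int((vocab["frequency"] == 1).sum()))
```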
models/word_markov/bpy_markov_ctx1_word.parquet CHANGED
@@ -1,3 +1,3 @@
  version https://git-lfs.github.com/spec/v1
- oid sha256:c1d5036170691d1160320935b9f8ba2016467e19c1053c73425833658f88971c
- size 1613321
+ oid sha256:8c26d16cda447fab844f38aa0014c18fd3efee9851da15d42c78fb9edb16bd18
+ size 2583106
models/word_markov/bpy_markov_ctx1_word_metadata.json CHANGED
@@ -2,6 +2,6 @@
  "context_size": 1,
  "variant": "word",
  "language": "bpy",
- "unique_contexts": 51595,
- "total_transitions": 9871256
+ "unique_contexts": 60265,
+ "total_transitions": 2033741
  }
models/word_markov/bpy_markov_ctx2_word.parquet CHANGED
@@ -1,3 +1,3 @@
  version https://git-lfs.github.com/spec/v1
- oid sha256:131007aca3a521d9f995c3b68152c37568e6a3043b27cee57c0170453d3ad10b
- size 3751981
+ oid sha256:1fa1c243dab3eab15b77c513a7038427aa62f1ff8d3dd624cc73c649f9890f7a
+ size 6149066
models/word_markov/bpy_markov_ctx2_word_metadata.json CHANGED
@@ -2,6 +2,6 @@
  "context_size": 2,
  "variant": "word",
  "language": "bpy",
- "unique_contexts": 192535,
- "total_transitions": 9846087
+ "unique_contexts": 262556,
+ "total_transitions": 2008744
  }
models/word_markov/bpy_markov_ctx3_word.parquet CHANGED
@@ -1,3 +1,3 @@
  version https://git-lfs.github.com/spec/v1
- oid sha256:ab0b79c46c94f67d583a0221361f6c4add6cf9a150d20c2a40f4daaef196f21e
- size 7021353
+ oid sha256:b7622a3a71f6f070ac5db83e5e97131e1712eff1adcc7abcee61fb80c35462f2
+ size 9548086
models/word_markov/bpy_markov_ctx3_word_metadata.json CHANGED
@@ -2,6 +2,6 @@
  "context_size": 3,
  "variant": "word",
  "language": "bpy",
- "unique_contexts": 401469,
- "total_transitions": 9820919
+ "unique_contexts": 400175,
+ "total_transitions": 1983747
  }
models/word_markov/bpy_markov_ctx4_word.parquet CHANGED
@@ -1,3 +1,3 @@
  version https://git-lfs.github.com/spec/v1
- oid sha256:092aa31f510790527aca2846851d191e9138ff743310cdb5e7d9e612f2bc0b69
- size 10993473
+ oid sha256:6698170a5559d8625b6a1607e8f77b4b24134e9244062645ebee386de8b603d7
+ size 13571757
models/word_markov/bpy_markov_ctx4_word_metadata.json CHANGED
@@ -2,6 +2,6 @@
  "context_size": 4,
  "variant": "word",
  "language": "bpy",
- "unique_contexts": 660290,
- "total_transitions": 9795751
+ "unique_contexts": 505259,
+ "total_transitions": 1958750
  }
models/word_ngram/bpy_2gram_word.parquet CHANGED
@@ -1,3 +1,3 @@
  version https://git-lfs.github.com/spec/v1
- oid sha256:d8df7389a29eaa5b721ff0e95520427f6b27d7ebab6cd7ffdd086c22978342cb
- size 320237
+ oid sha256:ecc35b57edce0047176d78cec0d755c7ce6c3fbe11107d56c0fee3e19ffa5811
+ size 277645
models/word_ngram/bpy_2gram_word_metadata.json CHANGED
@@ -2,6 +2,6 @@
  "n": 2,
  "variant": "word",
  "language": "bpy",
- "unique_ngrams": 23018,
- "total_ngrams": 9871256
+ "unique_ngrams": 15095,
+ "total_ngrams": 2033741
  }
models/word_ngram/bpy_3gram_word.parquet CHANGED
@@ -1,3 +1,3 @@
  version https://git-lfs.github.com/spec/v1
- oid sha256:f719fdf4863b7285c7d7700f0109a2991c67303c3a99f375996b5ce90543fc58
- size 1063287
+ oid sha256:90e5d518ab3f7bc7ccb22ada009fe25f616284c21deac03e8501c79db7c8c3ca
+ size 662282
models/word_ngram/bpy_3gram_word_metadata.json CHANGED
@@ -2,6 +2,6 @@
  "n": 3,
  "variant": "word",
  "language": "bpy",
- "unique_ngrams": 78850,
- "total_ngrams": 9846087
+ "unique_ngrams": 31653,
+ "total_ngrams": 2008744
  }
models/word_ngram/bpy_4gram_word.parquet CHANGED
@@ -1,3 +1,3 @@
  version https://git-lfs.github.com/spec/v1
- oid sha256:9a0ccd76524932bacdba4faae94a4465412313cd3d52a2e7b8449b2b1881217f
- size 2632657
+ oid sha256:0f7ce61f64ce707ee44c9e02ae0e63b105f3bb116f076e34089cd28b78ec52d4
+ size 1414259
models/word_ngram/bpy_4gram_word_metadata.json CHANGED
@@ -2,6 +2,6 @@
  "n": 4,
  "variant": "word",
  "language": "bpy",
- "unique_ngrams": 192863,
- "total_ngrams": 9820919
+ "unique_ngrams": 61026,
+ "total_ngrams": 1983747
  }
visualizations/embedding_isotropy.png CHANGED
visualizations/embedding_norms.png CHANGED
visualizations/embedding_similarity.png CHANGED

Git LFS Details

- SHA256: 78a08a95ac455c7057790f183de38583ed110a3902743a6a64dc43778ff8281b · Pointer size: 131 Bytes · Size of remote file: 148 kB
+ SHA256: 471ef3a3a5e766ec7a04221fd373668c14a06ccb9e14ced35a6c5bda46ef9293 · Pointer size: 131 Bytes · Size of remote file: 149 kB
visualizations/markov_branching.png CHANGED
visualizations/markov_contexts.png CHANGED