ljvmiranda921 committed
Commit 1051609 · 1 Parent(s): ac4653b

Update README

Files changed (1):
  1. README.md +22 -6

README.md CHANGED
@@ -50,13 +50,29 @@ git push hf main
 
 ## Cite
 
-If you're using [FilBench](arxiv.org/abs/2508.035230), please cite our work:
+If you're using [FilBench](https://aclanthology.org/2025.emnlp-main.127/), please cite our work:
 
 ```
-@article{miranda2025filbench,
-  title={Fil{B}ench: {C}an {LLM}s {U}nderstand and {G}enerate {F}ilipino?},
-  author={Miranda, Lester James V and Aco, Elyanah and Manuel, Conner and Cruz, Jan Christian Blaise and Imperial, Joseph Marvin},
-  journal={arXiv preprint arXiv:2508.03523},
-  year={2025}
+@inproceedings{miranda-etal-2025-filbench,
+  title = "{F}il{B}ench: Can {LLM}s Understand and Generate {F}ilipino?",
+  author = "Miranda, Lester James Validad and
+    Aco, Elyanah and
+    Manuel, Conner G. and
+    Cruz, Jan Christian Blaise and
+    Imperial, Joseph Marvin",
+  editor = "Christodoulopoulos, Christos and
+    Chakraborty, Tanmoy and
+    Rose, Carolyn and
+    Peng, Violet",
+  booktitle = "Proceedings of the 2025 Conference on Empirical Methods in Natural Language Processing",
+  month = nov,
+  year = "2025",
+  address = "Suzhou, China",
+  publisher = "Association for Computational Linguistics",
+  url = "https://aclanthology.org/2025.emnlp-main.127/",
+  doi = "10.18653/v1/2025.emnlp-main.127",
+  pages = "2496--2529",
+  ISBN = "979-8-89176-332-6",
+  abstract = "Despite the impressive performance of LLMs on English-based tasks, little is known about their capabilities in specific languages such as Filipino. In this work, we address this gap by introducing FilBench, a Filipino-centric benchmark designed to evaluate LLMs across a diverse set of tasks and capabilities in Filipino, Tagalog, and Cebuano. We carefully curate the tasks in FilBench to reflect the priorities and trends of NLP research in the Philippines such as Cultural Knowledge, Classical NLP, Reading Comprehension, and Generation. By evaluating 27 state-of-the-art LLMs on FilBench, we find that several LLMs suffer from reading comprehension and translation capabilities. Our results indicate that FilBench is challenging, with the best model, GPT-4o, achieving only a score of 72.23{\%}. Moreover, we also find that models trained specifically for Southeast Asian languages tend to underperform on FilBench, with the highest-performing model, SEA-LION v3 70B, achieving only a score of 61.07{\%}. Our work demonstrates the value of curating language-specific LLM benchmarks to aid in driving progress on Filipino NLP and increasing the inclusion of Philippine languages in LLM development."
 }
 ```