---
license: mit
language: km
---

# PrahokBART (base-sized model)

PrahokBART is a pre-trained sequence-to-sequence model trained from scratch for Khmer using carefully curated Khmer and English corpora. It was trained with the linguistic issues of Khmer in mind, incorporating linguistic components such as word segmentation and normalization. The model can be finetuned to build natural language generation applications for Khmer, such as English<->Khmer translation, summarization, and headline generation, and it is more efficient than mBART50. You can read more about PrahokBART in this [paper](https://aclanthology.org/2025.coling-main.87/).

# Basic Usage

**Preprocessing**: Input texts should be normalized (encodings) and word segmented. Here we only perform word segmentation and assume that texts have already been normalized. Please try normalization yourself using [this script](https://github.com/hour/prahokbart/blob/main/utils/khnormal.py).

```python
from khmernltk import word_tokenize

def word_segment(text):
    # khmernltk keeps spaces as tokens, so an original space appears as
    # three spaces after the join; it is replaced by the marker "▂".
    return " ".join(word_tokenize(text)).replace("   ", " ▂ ")

def word_unsegment(text):
    # Drop token-boundary spaces, then turn markers back into real spaces.
    return text.replace(" ", "").replace("▂", " ")
```

**Load the model using AutoClass**

```python
from transformers import AutoModelForSeq2SeqLM
from transformers import AutoTokenizer

model_name = "nict-astrec-att/prahokbart_base"
tokenizer = AutoTokenizer.from_pretrained(model_name, use_fast=False)
model = AutoModelForSeq2SeqLM.from_pretrained(model_name)
```

**I/O format**: The corpus that PrahokBART was trained on is formatted as `Sentence <2xx>` for the input and `<2yy> Sentence` for the output, where `xx` and `yy` are language codes.
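As a quick illustration of this convention, the sketch below (plain Python, no khmernltk dependency; the segmented string is hand-constructed and its token boundaries are illustrative, not khmernltk's actual output) shows how `word_unsegment` restores the original text from the space-separated, `▂`-marked form:

```python
def word_unsegment(text):
    # Drop token-boundary spaces, then turn markers back into real spaces.
    return text.replace(" ", "").replace("▂", " ")

# Hand-segmented example: tokens are separated by spaces, and the one
# genuine space in the original text is represented by the marker "▂".
segmented = "ខ្ញុំ ទៅ សាលារៀន ▂ <2km>"
print(word_unsegment(segmented))  # ខ្ញុំទៅសាលារៀន <2km>
```

The result matches the `Sentence <2xx>` input format described above.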
**Forward pass**

```python
inp = tokenizer(
    word_segment("ខ្ញុំទៅសាលារៀន <2km>"),
    add_special_tokens=False,
    return_tensors="pt",
    padding=True,
)
out = tokenizer(
    "<2en> I go to school ",
    add_special_tokens=False,
    return_tensors="pt",
    padding=True,
).input_ids

# Forward pass
model_output = model(
    input_ids=inp.input_ids,
    attention_mask=inp.attention_mask,
    labels=out,
)

# Loss (note: this is not label smoothed)
model_output.loss
# Logits
model_output.logits
```

**Mask prediction**: Let's ask the model to predict the `[MASK]` part of an input sentence.

```python
text = "ខ្ញុំទៅសាលារៀន[MASK] <2km>"  # I go to school [MASK]
inp = tokenizer(
    word_segment(text),
    add_special_tokens=False,
    return_tensors="pt",
).input_ids
model_output = model.generate(
    inp,
    decoder_start_token_id=tokenizer._convert_token_to_id_with_added_voc("<2km>"),
)
decoded_output = tokenizer.decode(
    model_output[0],
    skip_special_tokens=True,
    clean_up_tokenization_spaces=False,
)
print(word_unsegment(decoded_output))
# Output: ខ្ញុំទៅសាលារៀន ហើយខ្ញុំទៅសាលារៀន ហើយខ្ញុំទៅសាលារៀន
```

***ខ្ញុំទៅសាលារៀន ហើយខ្ញុំទៅសាលារៀន ហើយខ្ញុំទៅសាលារៀន = I go to school and I go to school and I go to school***

# Finetuning

Code is available on [GitHub](https://github.com/hour/prahokbart).

# Citation

```bibtex
@inproceedings{kaing2025prahokbart,
  title={PrahokBART: A Pre-trained Sequence-to-Sequence Model for Khmer Natural Language Generation},
  author={Kaing, Hour and Dabre, Raj and Song, Haiyue and Tran, Van-Hien and Tanaka, Hideki and Utiyama, Masao},
  booktitle={Proceedings of the 31st International Conference on Computational Linguistics},
  pages={1309--1322},
  year={2025}
}
```