Pruned mDeBERTa
Collection
A collection of three vocabulary-pruned mDeBERTa-v3 models (30k/20k vocabularies) optimized for efficient Indonesian NLP deployment.
This model is a vocabulary-pruned version of microsoft/mdeberta-v3-base, specifically optimized for the Indonesian language.
Vocabulary: 20k tokens (Indonesian)
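Vocabulary pruning of this kind amounts to selecting the token ids worth keeping (e.g. those that actually occur in Indonesian text) and slicing the embedding matrix down to those rows. A minimal sketch of the idea, assuming a generic embedding matrix; the `prune_vocabulary` helper and the toy sizes below are illustrative, not the actual pipeline used for these models:

```python
import numpy as np

def prune_vocabulary(embeddings: np.ndarray, keep_ids: list[int]):
    """Slice an embedding matrix down to the kept token ids.

    Returns the pruned matrix plus a mapping from old id to new id,
    which the tokenizer needs so it emits the remapped ids.
    """
    keep_ids = sorted(set(keep_ids))
    pruned = embeddings[keep_ids]
    remap = {old: new for new, old in enumerate(keep_ids)}
    return pruned, remap

# Toy stand-in: a 250-row embedding table pruned to every 5th token.
rng = np.random.default_rng(0)
full = rng.standard_normal((250, 8))
pruned, remap = prune_vocabulary(full, list(range(0, 250, 5)))
print(pruned.shape)  # (50, 8)
```

The same row-slicing applies to the tied output projection, which is where most of the size reduction comes from: in mDeBERTa-v3 the 250k-token embedding table dominates the parameter count, so cutting it to 20k rows shrinks the model substantially without touching the Transformer layers.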
Note: This model is part of an ongoing research project on efficient Transformer deployment. Full paper and benchmarks will be linked upon publication.