Abstract
The PRiSM benchmark evaluates phonetic perception in speech models through standardized transcription-based metrics and downstream applications in clinical, educational, and multilingual domains.
Phone recognition (PR) serves as the atomic, language-agnostic interface for cross-lingual speech processing and phonetic analysis. Despite sustained efforts in developing PR systems, current evaluations measure only surface-level transcription accuracy. We introduce PRiSM, the first open-source benchmark designed to expose blind spots in phonetic perception through intrinsic and extrinsic evaluation of PR systems. PRiSM standardizes transcription-based evaluation and assesses downstream utility in clinical, educational, and multilingual settings using transcription and representation probes. We find that diverse language exposure during training is key to PR performance, that encoder-CTC models are the most stable, and that specialized PR models still outperform Large Audio Language Models. PRiSM releases code, recipes, and datasets to move the field toward multilingual speech models with robust phonetic ability: https://github.com/changelinglab/prism.
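The abstract's "standardized transcription-based evaluation" is conventionally the phone error rate (PER): edit distance between the reference and hypothesized phone sequences, normalized by reference length. A minimal sketch of that metric follows; the phone symbols and sequences are illustrative examples, not drawn from PRiSM's datasets.

```python
def phone_error_rate(ref, hyp):
    """PER = (substitutions + insertions + deletions) / len(ref),
    computed via classic dynamic-programming edit distance."""
    m, n = len(ref), len(hyp)
    d = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(m + 1):
        d[i][0] = i  # deleting all of ref[:i]
    for j in range(n + 1):
        d[0][j] = j  # inserting all of hyp[:j]
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,          # deletion
                          d[i][j - 1] + 1,          # insertion
                          d[i - 1][j - 1] + cost)   # substitution
    return d[m][n] / max(m, 1)

# One substitution (r -> ɹ) and one deletion (ə) against a 6-phone reference
ref = ["p", "r", "ɪ", "z", "ə", "m"]
hyp = ["p", "ɹ", "ɪ", "z", "m"]
print(round(phone_error_rate(ref, hyp), 3))  # → 0.333
```

Normalizing by the reference length means PER can exceed 1.0 when the hypothesis contains many insertions, which is expected behavior for error-rate metrics.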
Community
Main take-aways
PRiSM is the first fully open benchmark that evaluates phone recognition (PR) systems on both intrinsic (phone-transcription) and extrinsic (downstream) tasks across 12 datasets covering clinical, L2-learning, and multilingual settings. We find that Large Audio Language Models still lag behind specialized PR models on these tasks.
Because intrinsic phone recognition capability does not fully predict performance in extrinsic settings, we design transcript- and representation-based probes that enable exhaustive analysis, interpretability, and fair comparison.
Language exposure > data size: multilingual training on broad, diverse data matters more for cross-lingual generalization than sheer data volume.
Code, prompts, and data are released under permissive licenses.
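The representation-based probes mentioned above typically fit a lightweight classifier on frozen model features to predict phone labels. A minimal sketch of one such probe is below, using a closed-form ridge-regression linear classifier; the feature dimensions, class count, and synthetic data are placeholder assumptions, not PRiSM's actual probe configuration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in for frozen encoder output: 200 frames of 32-dim features,
# each frame labeled with one of 4 (hypothetical) phone classes.
X = rng.normal(size=(200, 32))
y = rng.integers(0, 4, size=200)

# Linear probe via ridge regression on one-hot targets (closed form):
# W = (X^T X + lambda I)^-1 X^T Y
Y = np.eye(4)[y]
W = np.linalg.solve(X.T @ X + 1e-2 * np.eye(32), X.T @ Y)

# Probe accuracy: argmax over the 4 class scores per frame
pred = (X @ W).argmax(axis=1)
accuracy = (pred == y).mean()
print(accuracy)
```

Keeping the probe linear is the standard design choice: a high probe accuracy then indicates that phonetic categories are linearly separable in the frozen representation itself, rather than recovered by a powerful probe.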