Introduction

XY-Tokenizer is a speech codec that simultaneously models the semantic and acoustic aspects of speech, converting audio into discrete tokens and decoding them back into high-quality audio. It achieves an efficient speech representation at only 1 kbps, using 8-codebook residual vector quantization (RVQ8) at a 12.5 Hz frame rate.
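
These numbers are mutually consistent, as a quick back-of-the-envelope check shows. Note the 1024-entry codebook size (10 bits per code) is an assumption; the card states only 1 kbps, RVQ8, and 12.5 Hz.

frames_per_second = 12.5   # frame rate of the codec
codebooks_per_frame = 8    # RVQ8: 8 residual quantizers per frame
bits_per_code = 10         # log2(1024); 1024-entry codebooks are an assumption

print(frames_per_second * codebooks_per_frame * bits_per_code)  # 1000.0 bits/s = 1 kbps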

📚 Related Project: MOSS-TTSD

XY-Tokenizer serves as the underlying neural codec for MOSS-TTSD, our 1.7B-parameter Audio Language Model.
Explore MOSS-TTSD for advanced text-to-speech and other audio generation tasks via its GitHub repo, Blog (English and Chinese), and Space Demo.

✨ Features

  • Dual-channel modeling: Simultaneously captures semantic meaning and acoustic details
  • Efficient representation: 1 kbps bitrate with RVQ8 quantization at a 12.5 Hz frame rate
  • High-quality audio tokenization: Convert speech to discrete tokens and back with minimal quality loss
  • Long audio support: Process audio files longer than 30 seconds using chunking with overlap (see the sketch after this list)
  • Batch processing: Efficiently process multiple audio files in batches
  • 24kHz output: Generate high-quality 24kHz audio output
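
The long-audio path decodes in overlapping windows and merges the results so chunk boundaries don't produce audible seams. The snippet below is only an illustrative sketch of that general idea (the half-overlap trimming strategy is an assumption, not the model's actual implementation); in practice XY-Tokenizer handles this internally when you pass overlap_seconds to decode, as shown in the Quick Start.

import torch

def stitch_chunks(chunks, overlap_samples):
    """Concatenate decoded chunks, trimming half of each shared overlap.

    Illustrative only: XY-Tokenizer performs its own overlap handling
    internally when decode(..., overlap_seconds=...) is used.
    """
    half = overlap_samples // 2
    pieces = []
    for i, chunk in enumerate(chunks):
        start = 0 if i == 0 else half                                   # drop left half-overlap
        end = chunk.shape[-1] if i == len(chunks) - 1 else chunk.shape[-1] - half  # drop right half-overlap
        pieces.append(chunk[..., start:end])
    return torch.cat(pieces, dim=-1)

# Example: three 10 s chunks at 24 kHz with a 2 s overlap between neighbors
sr, overlap = 24000, 2 * 24000
chunks = [torch.zeros(1, 10 * sr) for _ in range(3)]
audio = stitch_chunks(chunks, overlap)  # shape: (1, 624000), i.e. 26 s of audio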

🚀 Installation

git clone https://github.com/OpenMOSS/MOSS-TTSD.git
cd MOSS-TTSD
conda create -n xy_tokenizer python=3.10 -y && conda activate xy_tokenizer
pip install -r XY_Tokenizer/requirements.txt

💻 Quick Start

Here's how to use XY-Tokenizer with transformers to encode an audio file into discrete tokens and decode it back into a waveform.

import os

import torchaudio
from transformers import AutoFeatureExtractor, AutoModel

# 1. Load the feature extractor and the codec model
feature_extractor = AutoFeatureExtractor.from_pretrained("MCplayer/XY_Tokenizer", trust_remote_code=True)
codec = AutoModel.from_pretrained("MCplayer/XY_Tokenizer", trust_remote_code=True, device_map="auto").eval()

# 2. Load and preprocess the audio
# The model expects a 16kHz sample rate.
waveform, sampling_rate = torchaudio.load("examples/zh_spk1_moon.wav")
if sampling_rate != 16000:
    waveform = torchaudio.functional.resample(waveform, orig_freq=sampling_rate, new_freq=16000)

# 3. Encode the audio into discrete codes
input_spectrum = feature_extractor(waveform, sampling_rate=16000, return_attention_mask=True, return_tensors="pt")
# The 'code' dictionary contains the discrete audio codes
code = codec.encode(input_spectrum)
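# (Optional) Inspect the discrete tokens. The structure of 'code' is not
# documented in this card; the "audio_codes" key is taken from the decode
# call below, and treating it as a tensor with a .shape is an assumption.
# print(code["audio_codes"].shape)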

# 4. Decode the codes back to an audio waveform
# The output is high-quality 24kHz audio.
output_wav = codec.decode(code["audio_codes"], overlap_seconds=10)

# 5. Save the reconstructed audio
os.makedirs("outputs", exist_ok=True)  # ensure the output directory exists
for i, audio in enumerate(output_wav["audio_values"]):
    torchaudio.save(f"outputs/audio_{i}.wav", audio.cpu(), 24000)
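
Because decode already returns a list of waveforms (hence the loop above), batching several files follows the same pattern. Continuing from the Quick Start, here is a minimal sketch; treating the feature extractor as accepting a list of waveforms is an assumption modeled on standard Hugging Face feature extractors, and the second file path is hypothetical.

batch_paths = ["examples/zh_spk1_moon.wav", "examples/zh_spk2_moon.wav"]  # second path is hypothetical
batch_waveforms = []
for path in batch_paths:
    wav, sr = torchaudio.load(path)
    if sr != 16000:  # the model expects 16 kHz input
        wav = torchaudio.functional.resample(wav, orig_freq=sr, new_freq=16000)
    batch_waveforms.append(wav)

# List input is an assumption based on standard Hugging Face feature extractors.
batch_inputs = feature_extractor(batch_waveforms, sampling_rate=16000, return_attention_mask=True, return_tensors="pt")
batch_codes = codec.encode(batch_inputs)
batch_audio = codec.decode(batch_codes["audio_codes"], overlap_seconds=10)

os.makedirs("outputs", exist_ok=True)
for i, audio in enumerate(batch_audio["audio_values"]):
    torchaudio.save(f"outputs/batch_audio_{i}.wav", audio.cpu(), 24000)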