VoxMorph: Scalable Zero-shot Voice Identity Morphing via Disentangled Embeddings

Project Page | Paper | GitHub | Demo | Dataset

VoxMorph is a zero-shot framework that produces high-fidelity voice morphs from as little as five seconds of audio per subject, without retraining. The method disentangles vocal traits into separate prosody and timbre embeddings, enabling fine-grained interpolation of speaking style and identity. These embeddings are fused via Spherical Linear Interpolation (Slerp) and synthesized by an autoregressive language model coupled with a Conditional Flow Matching network.
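As a minimal sketch of the Slerp fusion step: the snippet below interpolates between two L2-normalized embedding vectors along the great circle connecting them, which preserves the unit-norm geometry that a plain linear blend would not. The function name and the use of NumPy are illustrative assumptions, not the repository's actual API.

```python
import numpy as np

def slerp(e0: np.ndarray, e1: np.ndarray, t: float) -> np.ndarray:
    """Spherical linear interpolation between two embeddings (illustrative sketch).

    t = 0.0 returns (normalized) e0, t = 1.0 returns (normalized) e1;
    intermediate t values trace the great-circle arc between them.
    """
    # Project both embeddings onto the unit hypersphere.
    e0 = e0 / np.linalg.norm(e0)
    e1 = e1 / np.linalg.norm(e1)

    # Angle between the two unit vectors (clip guards against rounding).
    dot = np.clip(np.dot(e0, e1), -1.0, 1.0)
    theta = np.arccos(dot)

    # Nearly parallel vectors: fall back to linear interpolation.
    if np.isclose(theta, 0.0):
        return (1.0 - t) * e0 + t * e1

    # Standard Slerp formula.
    return (np.sin((1.0 - t) * theta) * e0 + np.sin(t * theta) * e1) / np.sin(theta)
```

Unlike linear interpolation, the midpoint of a Slerp between two unit embeddings is itself unit-norm, so the blended identity stays on the same manifold the synthesizer was conditioned on.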

This repository hosts the official model checkpoints for VoxMorph: Scalable Zero-shot Voice Identity Morphing via Disentangled Embeddings (ICASSP 2026). It contains the checkpoint files `s3gen.pt` and `t3_cfg.pt`, which sit on top of Resemble AI's frozen Chatterbox-TTS backbone.
