Add files using upload-large-folder tool
This view is limited to 50 files because it contains too many changes.
- examples/magicdata-read/cosyvoice/conf/cosyvoice.fromscratch.yaml +203 -0
- examples/magicdata-read/cosyvoice/conf/cosyvoice.yaml +203 -0
- examples/magicdata-read/cosyvoice/conf/ds_stage2.json +42 -0
- examples/magicdata-read/cosyvoice/local/download_and_untar.sh +97 -0
- examples/magicdata-read/cosyvoice/local/prepare_data.py +52 -0
- examples/magicdata-read/cosyvoice/tts_text.json +18 -0
- runtime/python/Dockerfile +13 -0
- runtime/python/fastapi/client.py +92 -0
- runtime/python/fastapi/server.py +171 -0
- runtime/python/grpc/__pycache__/cosyvoice_pb2_grpc.cpython-310.pyc +0 -0
- runtime/python/grpc/client.py +106 -0
- third_party/Matcha-TTS/.env.example +6 -0
- third_party/Matcha-TTS/.github/PULL_REQUEST_TEMPLATE.md +22 -0
- third_party/Matcha-TTS/.github/codecov.yml +15 -0
- third_party/Matcha-TTS/.github/dependabot.yml +17 -0
- third_party/Matcha-TTS/.github/release-drafter.yml +44 -0
- third_party/Matcha-TTS/.gitignore +163 -0
- third_party/Matcha-TTS/.pre-commit-config.yaml +59 -0
- third_party/Matcha-TTS/.project-root +2 -0
- third_party/Matcha-TTS/.pylintrc +525 -0
- third_party/Matcha-TTS/LICENSE +21 -0
- third_party/Matcha-TTS/MANIFEST.in +14 -0
- third_party/Matcha-TTS/Makefile +42 -0
- third_party/Matcha-TTS/README.md +278 -0
- third_party/Matcha-TTS/configs/__init__.py +1 -0
- third_party/Matcha-TTS/configs/callbacks/default.yaml +5 -0
- third_party/Matcha-TTS/configs/callbacks/none.yaml +0 -0
- third_party/Matcha-TTS/configs/callbacks/rich_progress_bar.yaml +4 -0
- third_party/Matcha-TTS/configs/data/hi-fi_en-US_female.yaml +14 -0
- third_party/Matcha-TTS/configs/data/ljspeech.yaml +21 -0
- third_party/Matcha-TTS/configs/data/vctk.yaml +14 -0
- third_party/Matcha-TTS/configs/debug/default.yaml +35 -0
- third_party/Matcha-TTS/configs/debug/fdr.yaml +9 -0
- third_party/Matcha-TTS/configs/debug/limit.yaml +12 -0
- third_party/Matcha-TTS/configs/debug/overfit.yaml +13 -0
- third_party/Matcha-TTS/configs/debug/profiler.yaml +15 -0
- third_party/Matcha-TTS/configs/eval.yaml +18 -0
- third_party/Matcha-TTS/configs/experiment/hifi_dataset_piper_phonemizer.yaml +14 -0
- third_party/Matcha-TTS/configs/experiment/ljspeech.yaml +14 -0
- third_party/Matcha-TTS/configs/experiment/ljspeech_min_memory.yaml +18 -0
- third_party/Matcha-TTS/configs/experiment/multispeaker.yaml +14 -0
- third_party/Matcha-TTS/configs/extras/default.yaml +8 -0
- third_party/Matcha-TTS/configs/hparams_search/mnist_optuna.yaml +52 -0
- third_party/Matcha-TTS/configs/hydra/default.yaml +19 -0
- third_party/Matcha-TTS/configs/local/.gitkeep +0 -0
- third_party/Matcha-TTS/configs/logger/aim.yaml +28 -0
- third_party/Matcha-TTS/matcha/__pycache__/__init__.cpython-310.pyc +0 -0
- third_party/Matcha-TTS/matcha/hifigan/LICENSE +21 -0
- third_party/Matcha-TTS/matcha/hifigan/README.md +101 -0
- third_party/Matcha-TTS/matcha/hifigan/__init__.py +0 -0
examples/magicdata-read/cosyvoice/conf/cosyvoice.fromscratch.yaml
ADDED
@@ -0,0 +1,203 @@
# set random seed, so that you may reproduce your result.
__set_seed1: !apply:random.seed [1986]
__set_seed2: !apply:numpy.random.seed [1986]
__set_seed3: !apply:torch.manual_seed [1986]
__set_seed4: !apply:torch.cuda.manual_seed_all [1986]

# fixed params
sample_rate: 22050
text_encoder_input_size: 512
llm_input_size: 1024
llm_output_size: 1024
spk_embed_dim: 192

# model params
# for all class/function included in this repo, we use !<name> or !<new> for initialization, so that users can find the corresponding class/function from this single yaml.
# for system/third_party class/function, we do not require this.
llm: !new:cosyvoice.llm.llm.TransformerLM
    text_encoder_input_size: !ref <text_encoder_input_size>
    llm_input_size: !ref <llm_input_size>
    llm_output_size: !ref <llm_output_size>
    text_token_size: 51866 # change to 60515 if you want to train with CosyVoice-300M-25Hz recipe
    speech_token_size: 4096
    length_normalized_loss: True
    lsm_weight: 0
    spk_embed_dim: !ref <spk_embed_dim>
    text_encoder: !new:cosyvoice.transformer.encoder.ConformerEncoder
        input_size: !ref <text_encoder_input_size>
        output_size: 1024
        attention_heads: 8
        linear_units: 2048
        num_blocks: 3
        dropout_rate: 0.1
        positional_dropout_rate: 0.1
        attention_dropout_rate: 0.0
        normalize_before: True
        input_layer: 'linear'
        pos_enc_layer_type: 'rel_pos_espnet'
        selfattention_layer_type: 'rel_selfattn'
        use_cnn_module: False
        macaron_style: False
        use_dynamic_chunk: False
        use_dynamic_left_chunk: False
        static_chunk_size: 1
    llm: !new:cosyvoice.transformer.encoder.TransformerEncoder
        input_size: !ref <llm_input_size>
        output_size: !ref <llm_output_size>
        attention_heads: 8
        linear_units: 2048
        num_blocks: 7
        dropout_rate: 0.1
        positional_dropout_rate: 0.1
        attention_dropout_rate: 0.0
        input_layer: 'linear_legacy'
        pos_enc_layer_type: 'rel_pos_espnet'
        selfattention_layer_type: 'rel_selfattn'
        static_chunk_size: 1
    sampling: !name:cosyvoice.utils.common.ras_sampling
        top_p: 0.8
        top_k: 25
        win_size: 10
        tau_r: 0.1

flow: !new:cosyvoice.flow.flow.MaskedDiffWithXvec
    input_size: 512
    output_size: 80
    spk_embed_dim: !ref <spk_embed_dim>
    output_type: 'mel'
    vocab_size: 4096
    input_frame_rate: 50 # change to 25 if you want to train with CosyVoice-300M-25Hz recipe
    only_mask_loss: True
    encoder: !new:cosyvoice.transformer.encoder.ConformerEncoder
        output_size: 512
        attention_heads: 4
        linear_units: 1024
        num_blocks: 3
        dropout_rate: 0.1
        positional_dropout_rate: 0.1
        attention_dropout_rate: 0.1
        normalize_before: True
        input_layer: 'linear'
        pos_enc_layer_type: 'rel_pos_espnet'
        selfattention_layer_type: 'rel_selfattn'
        input_size: 512
        use_cnn_module: False
        macaron_style: False
    length_regulator: !new:cosyvoice.flow.length_regulator.InterpolateRegulator
        channels: 80
        sampling_ratios: [1, 1, 1, 1]
    decoder: !new:cosyvoice.flow.flow_matching.ConditionalCFM
        in_channels: 240
        n_spks: 1
        spk_emb_dim: 80
        cfm_params: !new:omegaconf.DictConfig
            content:
                sigma_min: 1e-06
                solver: 'euler'
                t_scheduler: 'cosine'
                training_cfg_rate: 0.2
                inference_cfg_rate: 0.7
                reg_loss_type: 'l1'
        estimator: !new:cosyvoice.flow.decoder.ConditionalDecoder
            in_channels: 320
            out_channels: 80
            channels: [256, 256]
            dropout: 0.0
            attention_head_dim: 64
            n_blocks: 4
            num_mid_blocks: 8
            num_heads: 8
            act_fn: 'gelu'

hift: !new:cosyvoice.hifigan.generator.HiFTGenerator
    in_channels: 80
    base_channels: 512
    nb_harmonics: 8
    sampling_rate: !ref <sample_rate>
    nsf_alpha: 0.1
    nsf_sigma: 0.003
    nsf_voiced_threshold: 10
    upsample_rates: [8, 8]
    upsample_kernel_sizes: [16, 16]
    istft_params:
        n_fft: 16
        hop_len: 4
    resblock_kernel_sizes: [3, 7, 11]
    resblock_dilation_sizes: [[1, 3, 5], [1, 3, 5], [1, 3, 5]]
    source_resblock_kernel_sizes: [7, 11]
    source_resblock_dilation_sizes: [[1, 3, 5], [1, 3, 5]]
    lrelu_slope: 0.1
    audio_limit: 0.99
    f0_predictor: !new:cosyvoice.hifigan.f0_predictor.ConvRNNF0Predictor
        num_class: 1
        in_channels: 80
        cond_channels: 512

# processor functions
parquet_opener: !name:cosyvoice.dataset.processor.parquet_opener
get_tokenizer: !name:whisper.tokenizer.get_tokenizer # change to !name:cosyvoice.tokenizer.tokenizer.get_tokenizer if you want to train with CosyVoice-300M-25Hz recipe
    multilingual: True
    num_languages: 100
    language: 'en'
    task: 'transcribe'
allowed_special: 'all'
tokenize: !name:cosyvoice.dataset.processor.tokenize
    get_tokenizer: !ref <get_tokenizer>
    allowed_special: !ref <allowed_special>
filter: !name:cosyvoice.dataset.processor.filter
    max_length: 40960
    min_length: 0
    token_max_length: 200
    token_min_length: 1
resample: !name:cosyvoice.dataset.processor.resample
    resample_rate: !ref <sample_rate>
feat_extractor: !name:matcha.utils.audio.mel_spectrogram
    n_fft: 1024
    num_mels: 80
    sampling_rate: !ref <sample_rate>
    hop_size: 256
    win_size: 1024
    fmin: 0
    fmax: 8000
    center: False
compute_fbank: !name:cosyvoice.dataset.processor.compute_fbank
    feat_extractor: !ref <feat_extractor>
parse_embedding: !name:cosyvoice.dataset.processor.parse_embedding
    normalize: True
shuffle: !name:cosyvoice.dataset.processor.shuffle
    shuffle_size: 1000
sort: !name:cosyvoice.dataset.processor.sort
    sort_size: 500 # sort_size should be less than shuffle_size
batch: !name:cosyvoice.dataset.processor.batch
    batch_type: 'dynamic'
    max_frames_in_batch: 12000
padding: !name:cosyvoice.dataset.processor.padding
    use_spk_embedding: False # change to True during sft

# dataset processor pipeline
data_pipeline: [
    !ref <parquet_opener>,
    !ref <tokenize>,
    !ref <filter>,
    !ref <resample>,
    !ref <compute_fbank>,
    !ref <parse_embedding>,
    !ref <shuffle>,
    !ref <sort>,
    !ref <batch>,
    !ref <padding>,
]

# train conf
train_conf:
    optim: adam
    optim_conf:
        lr: 0.002 # change to 0.001 if you want to train flow from scratch
    scheduler: warmuplr
    scheduler_conf:
        warmup_steps: 25000
    max_epoch: 200
    grad_clip: 5
    accum_grad: 2
    log_interval: 100
    save_per_step: -1
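The !new:, !name:, !ref and !apply: tags above are hyperpyyaml syntax, so a config like this resolves directly into live Python objects when loaded. A minimal sketch of loading it, assuming the hyperpyyaml package and an importable cosyvoice checkout (nothing below beyond the config path comes from this diff):

# hedged sketch: resolve the recipe config into model objects with hyperpyyaml (assumed dependency)
from hyperpyyaml import load_hyperpyyaml

with open('examples/magicdata-read/cosyvoice/conf/cosyvoice.fromscratch.yaml', 'r') as f:
    configs = load_hyperpyyaml(f)

llm = configs['llm']    # TransformerLM built from the llm block above
flow = configs['flow']  # MaskedDiffWithXvec built from the flow block
hift = configs['hift']  # HiFTGenerator built from the hift block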
examples/magicdata-read/cosyvoice/conf/cosyvoice.yaml
ADDED
@@ -0,0 +1,203 @@
# set random seed, so that you may reproduce your result.
__set_seed1: !apply:random.seed [1986]
__set_seed2: !apply:numpy.random.seed [1986]
__set_seed3: !apply:torch.manual_seed [1986]
__set_seed4: !apply:torch.cuda.manual_seed_all [1986]

# fixed params
sample_rate: 22050
text_encoder_input_size: 512
llm_input_size: 1024
llm_output_size: 1024
spk_embed_dim: 192

# model params
# for all class/function included in this repo, we use !<name> or !<new> for initialization, so that users can find the corresponding class/function from this single yaml.
# for system/third_party class/function, we do not require this.
llm: !new:cosyvoice.llm.llm.TransformerLM
    text_encoder_input_size: !ref <text_encoder_input_size>
    llm_input_size: !ref <llm_input_size>
    llm_output_size: !ref <llm_output_size>
    text_token_size: 51866 # change to 60515 if you want to train with CosyVoice-300M-25Hz recipe
    speech_token_size: 4096
    length_normalized_loss: True
    lsm_weight: 0
    spk_embed_dim: !ref <spk_embed_dim>
    text_encoder: !new:cosyvoice.transformer.encoder.ConformerEncoder
        input_size: !ref <text_encoder_input_size>
        output_size: 1024
        attention_heads: 16
        linear_units: 4096
        num_blocks: 6
        dropout_rate: 0.1
        positional_dropout_rate: 0.1
        attention_dropout_rate: 0.0
        normalize_before: True
        input_layer: 'linear'
        pos_enc_layer_type: 'rel_pos_espnet'
        selfattention_layer_type: 'rel_selfattn'
        use_cnn_module: False
        macaron_style: False
        use_dynamic_chunk: False
        use_dynamic_left_chunk: False
        static_chunk_size: 1
    llm: !new:cosyvoice.transformer.encoder.TransformerEncoder
        input_size: !ref <llm_input_size>
        output_size: !ref <llm_output_size>
        attention_heads: 16
        linear_units: 4096
        num_blocks: 14
        dropout_rate: 0.1
        positional_dropout_rate: 0.1
        attention_dropout_rate: 0.0
        input_layer: 'linear_legacy'
        pos_enc_layer_type: 'rel_pos_espnet'
        selfattention_layer_type: 'rel_selfattn'
        static_chunk_size: 1
    sampling: !name:cosyvoice.utils.common.ras_sampling
        top_p: 0.8
        top_k: 25
        win_size: 10
        tau_r: 0.1

flow: !new:cosyvoice.flow.flow.MaskedDiffWithXvec
    input_size: 512
    output_size: 80
    spk_embed_dim: !ref <spk_embed_dim>
    output_type: 'mel'
    vocab_size: 4096
    input_frame_rate: 50 # change to 25 if you want to train with CosyVoice-300M-25Hz recipe
    only_mask_loss: True
    encoder: !new:cosyvoice.transformer.encoder.ConformerEncoder
        output_size: 512
        attention_heads: 8
        linear_units: 2048
        num_blocks: 6
        dropout_rate: 0.1
        positional_dropout_rate: 0.1
        attention_dropout_rate: 0.1
        normalize_before: True
        input_layer: 'linear'
        pos_enc_layer_type: 'rel_pos_espnet'
        selfattention_layer_type: 'rel_selfattn'
        input_size: 512
        use_cnn_module: False
        macaron_style: False
    length_regulator: !new:cosyvoice.flow.length_regulator.InterpolateRegulator
        channels: 80
        sampling_ratios: [1, 1, 1, 1]
    decoder: !new:cosyvoice.flow.flow_matching.ConditionalCFM
        in_channels: 240
        n_spks: 1
        spk_emb_dim: 80
        cfm_params: !new:omegaconf.DictConfig
            content:
                sigma_min: 1e-06
                solver: 'euler'
                t_scheduler: 'cosine'
                training_cfg_rate: 0.2
                inference_cfg_rate: 0.7
                reg_loss_type: 'l1'
        estimator: !new:cosyvoice.flow.decoder.ConditionalDecoder
            in_channels: 320
            out_channels: 80
            channels: [256, 256]
            dropout: 0.0
            attention_head_dim: 64
            n_blocks: 4
            num_mid_blocks: 12
            num_heads: 8
            act_fn: 'gelu'

hift: !new:cosyvoice.hifigan.generator.HiFTGenerator
    in_channels: 80
    base_channels: 512
    nb_harmonics: 8
    sampling_rate: !ref <sample_rate>
    nsf_alpha: 0.1
    nsf_sigma: 0.003
    nsf_voiced_threshold: 10
    upsample_rates: [8, 8]
    upsample_kernel_sizes: [16, 16]
    istft_params:
        n_fft: 16
        hop_len: 4
    resblock_kernel_sizes: [3, 7, 11]
    resblock_dilation_sizes: [[1, 3, 5], [1, 3, 5], [1, 3, 5]]
    source_resblock_kernel_sizes: [7, 11]
    source_resblock_dilation_sizes: [[1, 3, 5], [1, 3, 5]]
    lrelu_slope: 0.1
    audio_limit: 0.99
    f0_predictor: !new:cosyvoice.hifigan.f0_predictor.ConvRNNF0Predictor
        num_class: 1
        in_channels: 80
        cond_channels: 512

# processor functions
parquet_opener: !name:cosyvoice.dataset.processor.parquet_opener
get_tokenizer: !name:whisper.tokenizer.get_tokenizer # change to !name:cosyvoice.tokenizer.tokenizer.get_tokenizer if you want to train with CosyVoice-300M-25Hz recipe
    multilingual: True
    num_languages: 100
    language: 'en'
    task: 'transcribe'
allowed_special: 'all'
tokenize: !name:cosyvoice.dataset.processor.tokenize
    get_tokenizer: !ref <get_tokenizer>
    allowed_special: !ref <allowed_special>
filter: !name:cosyvoice.dataset.processor.filter
    max_length: 40960
    min_length: 0
    token_max_length: 200
    token_min_length: 1
resample: !name:cosyvoice.dataset.processor.resample
    resample_rate: !ref <sample_rate>
feat_extractor: !name:matcha.utils.audio.mel_spectrogram
    n_fft: 1024
    num_mels: 80
    sampling_rate: !ref <sample_rate>
    hop_size: 256
    win_size: 1024
    fmin: 0
    fmax: 8000
    center: False
compute_fbank: !name:cosyvoice.dataset.processor.compute_fbank
    feat_extractor: !ref <feat_extractor>
parse_embedding: !name:cosyvoice.dataset.processor.parse_embedding
    normalize: True
shuffle: !name:cosyvoice.dataset.processor.shuffle
    shuffle_size: 1000
sort: !name:cosyvoice.dataset.processor.sort
    sort_size: 500 # sort_size should be less than shuffle_size
batch: !name:cosyvoice.dataset.processor.batch
    batch_type: 'dynamic'
    max_frames_in_batch: 2000
padding: !name:cosyvoice.dataset.processor.padding
    use_spk_embedding: False # change to True during sft

# dataset processor pipeline
data_pipeline: [
    !ref <parquet_opener>,
    !ref <tokenize>,
    !ref <filter>,
    !ref <resample>,
    !ref <compute_fbank>,
    !ref <parse_embedding>,
    !ref <shuffle>,
    !ref <sort>,
    !ref <batch>,
    !ref <padding>,
]

# train conf
train_conf:
    optim: adam
    optim_conf:
        lr: 0.001 # change to 1e-5 during sft
    scheduler: warmuplr # change to constantlr during sft
    scheduler_conf:
        warmup_steps: 2500
    max_epoch: 200
    grad_clip: 5
    accum_grad: 2
    log_interval: 100
    save_per_step: -1
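For reference, the __set_seed* entries at the top of both configs are deferred calls, so loading either yaml has the same effect as running this plain Python (a sketch of the equivalence, nothing repo-specific):

import random
import numpy
import torch

# what !apply:random.seed [1986] and friends expand to at load time
random.seed(1986)
numpy.random.seed(1986)
torch.manual_seed(1986)
torch.cuda.manual_seed_all(1986)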
examples/magicdata-read/cosyvoice/conf/ds_stage2.json
ADDED
@@ -0,0 +1,42 @@
{
  "train_micro_batch_size_per_gpu": 1,
  "gradient_accumulation_steps": 1,
  "steps_per_print": 100,
  "gradient_clipping": 5,
  "fp16": {
    "enabled": false,
    "auto_cast": false,
    "loss_scale": 0,
    "initial_scale_power": 16,
    "loss_scale_window": 256,
    "hysteresis": 2,
    "consecutive_hysteresis": false,
    "min_loss_scale": 1
  },
  "bf16": {
    "enabled": false
  },
  "zero_force_ds_cpu_optimizer": false,
  "zero_optimization": {
    "stage": 2,
    "offload_optimizer": {
      "device": "none",
      "pin_memory": true
    },
    "allgather_partitions": true,
    "allgather_bucket_size": 5e8,
    "overlap_comm": false,
    "reduce_scatter": true,
    "reduce_bucket_size": 5e8,
    "contiguous_gradients" : true
  },
  "optimizer": {
    "type": "AdamW",
    "params": {
      "lr": 0.001,
      "weight_decay": 0.0001,
      "torch_adam": true,
      "adam_w_mode": true
    }
  }
}
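This is a standard DeepSpeed ZeRO stage-2 config. A hedged sketch of how such a file is usually consumed; the module below is a placeholder for illustration only, the recipe itself wires DeepSpeed up through its own training script:

import deepspeed
import torch

model = torch.nn.Linear(80, 80)  # placeholder module, not part of this recipe
engine, optimizer, _, _ = deepspeed.initialize(
    model=model,
    model_parameters=model.parameters(),
    config='examples/magicdata-read/cosyvoice/conf/ds_stage2.json')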
examples/magicdata-read/cosyvoice/local/download_and_untar.sh
ADDED
@@ -0,0 +1,97 @@
#!/bin/bash

# Copyright 2014 Johns Hopkins University (author: Daniel Povey)
# Apache 2.0

remove_archive=false

if [ "$1" == --remove-archive ]; then
  remove_archive=true
  shift
fi

if [ $# -ne 3 ]; then
  echo "Usage: $0 [--remove-archive] <data-base> <url-base> <corpus-part>"
  echo "e.g.: $0 /export/a15/vpanayotov/data www.openslr.org/resources/11 dev-clean"
  echo "With --remove-archive it will remove the archive after successfully un-tarring it."
  echo "<corpus-part> can be one of: dev-clean, test-clean, dev-other, test-other,"
  echo "    train-clean-100, train-clean-360, train-other-500."
  exit 1
fi

data=$1
url=$2
part=$3

if [ ! -d "$data" ]; then
  echo "$0: no such directory $data"
  exit 1
fi

part_ok=false
list="dev_set test_set train_set"
for x in $list; do
  if [ "$part" == $x ]; then part_ok=true; fi
done
if ! $part_ok; then
  echo "$0: expected <corpus-part> to be one of $list, but got '$part'"
  exit 1
fi

if [ -z "$url" ]; then
  echo "$0: empty URL base."
  exit 1
fi

if [ -f $data/.$part.complete ]; then
  echo "$0: data part $part was already successfully extracted, nothing to do."
  exit 0
fi


# sizes of the archive files in bytes. This is some older versions.
sizes_old="1035537823 2201936013 52627842921"
# sizes_new is the archive file sizes of the final release. Some of these sizes are of
# things we probably won't download.
sizes_new="3886385"

if [ -f $data/$part.tar.gz ]; then
  size=$(/bin/ls -l $data/$part.tar.gz | awk '{print $5}')
  size_ok=false
  for s in $sizes_old $sizes_new; do if [ $s == $size ]; then size_ok=true; fi; done
  if ! $size_ok; then
    echo "$0: removing existing file $data/$part.tar.gz because its size in bytes $size"
    echo "does not equal the size of one of the archives."
    rm $data/$part.tar.gz
  else
    echo "$data/$part.tar.gz exists and appears to be complete."
  fi
fi

if [ ! -f $data/$part.tar.gz ]; then
  if ! which wget >/dev/null; then
    echo "$0: wget is not installed."
    exit 1
  fi
  full_url=$url/$part.tar.gz
  echo "$0: downloading data from $full_url. This may take some time, please be patient."

  if ! wget -P $data --no-check-certificate $full_url; then
    echo "$0: error executing wget $full_url"
    exit 1
  fi
fi

if ! tar -C $data -xvzf $data/$part.tar.gz; then
  echo "$0: error un-tarring archive $data/$part.tar.gz"
  exit 1
fi

touch $data/.$part.complete

echo "$0: Successfully downloaded and un-tarred $data/$part.tar.gz"

if $remove_archive; then
  echo "$0: removing $data/$part.tar.gz file since --remove-archive option was supplied."
  rm $data/$part.tar.gz
fi
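The script resolves <url-base>/<corpus-part>.tar.gz, checks the archive size, un-tars it under <data-base>, and drops a .<corpus-part>.complete marker so reruns are no-ops. A rough Python equivalent of that flow for environments without wget; the directory, URL and part names are placeholders:

import os
import tarfile
import urllib.request

data, url, part = 'data', 'https://example.com/resources', 'train_set'  # placeholders
archive = os.path.join(data, part + '.tar.gz')
marker = os.path.join(data, '.{}.complete'.format(part))
if not os.path.exists(marker):
    urllib.request.urlretrieve('{}/{}.tar.gz'.format(url, part), archive)
    with tarfile.open(archive, 'r:gz') as tar:
        tar.extractall(data)          # same effect as tar -C $data -xvzf
    open(marker, 'w').close()         # same role as the .complete touch file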
examples/magicdata-read/cosyvoice/local/prepare_data.py
ADDED
@@ -0,0 +1,52 @@
import argparse
import logging
import os
from tqdm import tqdm


logger = logging.getLogger()


def main():
    utt2wav, utt2text, utt2spk, spk2utt = {}, {}, {}, {}
    with open(os.path.join(args.src_dir, "TRANS.txt"), "r") as f:
        lines = f.readlines()[1:]
        lines = [l.split('\t') for l in lines]
    for wav, spk, content in tqdm(lines):
        wav, spk, content = wav.strip(), spk.strip(), content.strip()
        content = content.replace('[FIL]', '')
        content = content.replace('[SPK]', '')
        wav = os.path.join(args.src_dir, spk, wav)
        if not os.path.exists(wav):
            continue
        utt = os.path.basename(wav).replace('.wav', '')
        utt2wav[utt] = wav
        utt2text[utt] = content
        utt2spk[utt] = spk
        if spk not in spk2utt:
            spk2utt[spk] = []
        spk2utt[spk].append(utt)

    with open('{}/wav.scp'.format(args.des_dir), 'w') as f:
        for k, v in utt2wav.items():
            f.write('{} {}\n'.format(k, v))
    with open('{}/text'.format(args.des_dir), 'w') as f:
        for k, v in utt2text.items():
            f.write('{} {}\n'.format(k, v))
    with open('{}/utt2spk'.format(args.des_dir), 'w') as f:
        for k, v in utt2spk.items():
            f.write('{} {}\n'.format(k, v))
    with open('{}/spk2utt'.format(args.des_dir), 'w') as f:
        for k, v in spk2utt.items():
            f.write('{} {}\n'.format(k, ' '.join(v)))
    return


if __name__ == "__main__":
    parser = argparse.ArgumentParser()
    parser.add_argument('--src_dir',
                        type=str)
    parser.add_argument('--des_dir',
                        type=str)
    args = parser.parse_args()
    main()
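The script writes Kaldi-style wav.scp, text, utt2spk and spk2utt files, one space-separated entry per line. A small sketch of reading them back as a sanity check; the destination directory name is a placeholder for whatever --des_dir was used:

des_dir = 'data/train'  # placeholder
with open('{}/wav.scp'.format(des_dir)) as f:
    utt2wav = dict(line.strip().split(' ', 1) for line in f)
with open('{}/utt2spk'.format(des_dir)) as f:
    utt2spk = dict(line.strip().split(' ', 1) for line in f)
print('{} utterances from {} speakers'.format(len(utt2wav), len(set(utt2spk.values()))))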
examples/magicdata-read/cosyvoice/tts_text.json
ADDED
@@ -0,0 +1,18 @@
{
  "38_5718_20170915093303": [
    "我想这出最好歌曲把歌词发到网上请别人帮我作曲急急",
    "叫他明天早上差五分儿九点去机场"
  ],
  "38_5721_20170915091235": [
    "变温室调到零下两度档",
    "交谈中请勿轻信汇款信息陌生电话请勿使用外挂软件"
  ],
  "38_5733_20170915130323": [
    "这是老鹰乐队的一首经典歌曲",
    "我急用这段音乐我自己找到一段但是有现场杂音"
  ],
  "38_5836_20170916221414": [
    "给我播一个陶喆的专辑",
    "这套餐好贵呀我发这么多短信贵死了"
  ]
}
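The file maps a prompt utterance id to the list of sentences to synthesize for it. A minimal sketch of iterating it:

import json

with open('examples/magicdata-read/cosyvoice/tts_text.json') as f:
    tts_text = json.load(f)
for utt, sentences in tts_text.items():
    for i, text in enumerate(sentences):
        print(utt, i, text)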
runtime/python/Dockerfile
ADDED
@@ -0,0 +1,13 @@
FROM pytorch/pytorch:2.0.1-cuda11.7-cudnn8-runtime
ENV DEBIAN_FRONTEND=noninteractive

WORKDIR /opt/CosyVoice

RUN sed -i s@/archive.ubuntu.com/@/mirrors.aliyun.com/@g /etc/apt/sources.list
RUN apt-get update -y
RUN apt-get -y install git unzip git-lfs
RUN git lfs install
RUN git clone --recursive https://github.com/FunAudioLLM/CosyVoice.git
# here we use python==3.10 because we cannot find an image which has both python3.8 and torch2.0.1-cu118 installed
RUN cd CosyVoice && pip3 install -r requirements.txt -i https://mirrors.aliyun.com/pypi/simple/ --trusted-host=mirrors.aliyun.com
RUN cd CosyVoice/runtime/python/grpc && python3 -m grpc_tools.protoc -I. --python_out=. --grpc_python_out=. cosyvoice.proto
runtime/python/fastapi/client.py
ADDED
@@ -0,0 +1,92 @@
# Copyright (c) 2024 Alibaba Inc (authors: Xiang Lyu)
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#   http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import argparse
import logging
import requests
import torch
import torchaudio
import numpy as np


def main():
    url = "http://{}:{}/inference_{}".format(args.host, args.port, args.mode)
    if args.mode == 'sft':
        payload = {
            'tts_text': args.tts_text,
            'spk_id': args.spk_id
        }
        response = requests.request("GET", url, data=payload, stream=True)
    elif args.mode == 'zero_shot':
        payload = {
            'tts_text': args.tts_text,
            'prompt_text': args.prompt_text
        }
        files = [('prompt_wav', ('prompt_wav', open(args.prompt_wav, 'rb'), 'application/octet-stream'))]
        response = requests.request("GET", url, data=payload, files=files, stream=True)
    elif args.mode == 'cross_lingual':
        payload = {
            'tts_text': args.tts_text,
        }
        files = [('prompt_wav', ('prompt_wav', open(args.prompt_wav, 'rb'), 'application/octet-stream'))]
        response = requests.request("GET", url, data=payload, files=files, stream=True)
    else:
        payload = {
            'tts_text': args.tts_text,
            'spk_id': args.spk_id,
            'instruct_text': args.instruct_text
        }
        response = requests.request("GET", url, data=payload, stream=True)
    tts_audio = b''
    for r in response.iter_content(chunk_size=16000):
        tts_audio += r
    tts_speech = torch.from_numpy(np.array(np.frombuffer(tts_audio, dtype=np.int16))).unsqueeze(dim=0)
    logging.info('save response to {}'.format(args.tts_wav))
    torchaudio.save(args.tts_wav, tts_speech, target_sr)
    logging.info('get response')


if __name__ == "__main__":
    parser = argparse.ArgumentParser()
    parser.add_argument('--host',
                        type=str,
                        default='0.0.0.0')
    parser.add_argument('--port',
                        type=int,
                        default='50000')
    parser.add_argument('--mode',
                        default='sft',
                        choices=['sft', 'zero_shot', 'cross_lingual', 'instruct'],
                        help='request mode')
    parser.add_argument('--tts_text',
                        type=str,
                        default='你好,我是通义千问语音合成大模型,请问有什么可以帮您的吗?')
    parser.add_argument('--spk_id',
                        type=str,
                        default='中文女')
    parser.add_argument('--prompt_text',
                        type=str,
                        default='希望你以后能够做的比我还好呦。')
    parser.add_argument('--prompt_wav',
                        type=str,
                        default='../../../asset/zero_shot_prompt.wav')
    parser.add_argument('--instruct_text',
                        type=str,
                        default='Theo \'Crimson\', is a fiery, passionate rebel leader. \
                        Fights with fervor for justice, but struggles with impulsiveness.')
    parser.add_argument('--tts_wav',
                        type=str,
                        default='demo.wav')
    args = parser.parse_args()
    prompt_sr, target_sr = 16000, 22050
    main()
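Note that this client targets the original multipart endpoints, while the modified server.py below expects a prompt_wav_url and speed form field on /inference_zero_shot and returns the whole utterance in one response. A hedged sketch of calling that variant directly with requests; host, port and the prompt URL are placeholders, the texts reuse this client's defaults:

import requests

resp = requests.post('http://127.0.0.1:8000/inference_zero_shot', data={
    'tts_text': '你好,我是通义千问语音合成大模型,请问有什么可以帮您的吗?',
    'prompt_text': '希望你以后能够做的比我还好呦。',
    'prompt_wav_url': 'https://example.com/prompt.wav',  # placeholder URL
    'speed': 1.0,
})
with open('zero_shot_output.pcm', 'wb') as f:
    f.write(resp.content)  # raw 16-bit PCM despite the audio/wav media type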
runtime/python/fastapi/server.py
ADDED
@@ -0,0 +1,171 @@
# Copyright (c) 2024 Alibaba Inc (authors: Xiang Lyu)
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#   http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import os
import sys
import argparse
import logging
logging.getLogger('matplotlib').setLevel(logging.WARNING)
from fastapi import FastAPI, UploadFile, Form, File
from fastapi.responses import StreamingResponse
from fastapi.responses import Response
from fastapi.middleware.cors import CORSMiddleware
import uvicorn
import numpy as np
ROOT_DIR = os.path.dirname(os.path.abspath(__file__))
sys.path.append('{}/../../..'.format(ROOT_DIR))
sys.path.append('{}/../../../third_party/Matcha-TTS'.format(ROOT_DIR))
from cosyvoice.cli.cosyvoice import CosyVoice, CosyVoice2
# from cosyvoice.utils.file_utils import load_wav

from fastapi import HTTPException
import requests
import tempfile
import torchaudio

# Keep the original load_wav function unchanged
def load_wav(wav, target_sr):
    speech, sample_rate = torchaudio.load(wav, backend='soundfile')
    speech = speech.mean(dim=0, keepdim=True)
    if sample_rate != target_sr:
        assert sample_rate > target_sr, 'wav sample rate {} must be greater than {}'.format(sample_rate, target_sr)
        speech = torchaudio.transforms.Resample(orig_freq=sample_rate, new_freq=target_sr)(speech)
    return speech

# Add a new function to handle URLs
def load_wav_from_url(url, target_sr):
    # Download the file from the URL to a temporary file
    response = requests.get(url)
    if response.status_code != 200:
        raise HTTPException(status_code=400, detail=f"Failed to download audio from URL: {url}")

    # Create a temporary file
    with tempfile.NamedTemporaryFile(suffix='.wav', delete=False) as temp_file:
        temp_file.write(response.content)
        temp_file.flush()
        temp_path = temp_file.name

    try:
        # Use the existing load_wav logic with the file path
        speech, sample_rate = torchaudio.load(temp_path, backend='soundfile')
        speech = speech.mean(dim=0, keepdim=True)
        if sample_rate != target_sr:
            assert sample_rate > target_sr, 'wav sample rate {} must be greater than {}'.format(sample_rate, target_sr)
            speech = torchaudio.transforms.Resample(orig_freq=sample_rate, new_freq=target_sr)(speech)
        return speech
    finally:
        # Clean up the temporary file
        os.unlink(temp_path)

app = FastAPI()
# set cross region allowance
app.add_middleware(
    CORSMiddleware,
    allow_origins=["*"],
    allow_credentials=True,
    allow_methods=["*"],
    allow_headers=["*"])


def generate_data(model_output):
    for i in model_output:
        tts_audio = (i['tts_speech'].numpy() * (2 ** 15)).astype(np.int16).tobytes()
        yield tts_audio


@app.get("/inference_sft")
@app.post("/inference_sft")
async def inference_sft(tts_text: str = Form(), spk_id: str = Form()):
    model_output = cosyvoice.inference_sft(tts_text, spk_id)
    return StreamingResponse(generate_data(model_output))


@app.get("/inference_zero_shot")
@app.post("/inference_zero_shot")
async def inference_zero_shot(
    tts_text: str = Form(),
    prompt_text: str = Form(),
    prompt_wav_url: str = Form(...),  # Using ... makes this parameter required
    speed: float = Form(...)
):
    # Process the URL directly, no need for conditionals
    prompt_speech_16k = load_wav_from_url(prompt_wav_url, 16000)

    # Rest of the function remains the same
    model_output = cosyvoice.inference_zero_shot(tts_text, prompt_text, prompt_speech_16k, stream=False, speed=speed)

    # Collect all audio data instead of streaming it
    audio_data = bytearray()
    for chunk in generate_data(model_output):
        audio_data.extend(chunk)

    # Return complete audio file
    return Response(
        content=bytes(audio_data),
        media_type="audio/wav",
        headers={"Content-Disposition": "attachment; filename=tts_output.wav"}
    )

@app.get("/inference_cross_lingual")
@app.post("/inference_cross_lingual")
async def inference_cross_lingual(tts_text: str = Form(), prompt_wav: UploadFile = File()):
    prompt_speech_16k = load_wav(prompt_wav.file, 16000)
    model_output = cosyvoice.inference_cross_lingual(tts_text, prompt_speech_16k)
    return StreamingResponse(generate_data(model_output))


@app.get("/inference_instruct")
@app.post("/inference_instruct")
async def inference_instruct(tts_text: str = Form(), spk_id: str = Form(), instruct_text: str = Form()):
    model_output = cosyvoice.inference_instruct(tts_text, spk_id, instruct_text)
    return StreamingResponse(generate_data(model_output))

@app.get("/inference_instruct2")
@app.post("/inference_instruct2")
async def inference_instruct2(tts_text: str = Form(), instruct_text: str = Form(), prompt_wav: UploadFile = File(), speed: float = Form(...)):
    prompt_speech_16k = load_wav(prompt_wav.file, 16000)

    # Disable streaming by setting stream=False (assuming the function accepts this parameter)
    model_output = cosyvoice.inference_instruct2(tts_text, instruct_text, prompt_speech_16k, stream=False, speed=speed)

    # Collect all audio data instead of streaming it
    audio_data = bytearray()
    for chunk in generate_data(model_output):
        audio_data.extend(chunk)
    print("instruct mode generation succeeded!")
    # Return complete audio file
    return Response(
        content=bytes(audio_data),
        media_type="audio/wav",
        headers={"Content-Disposition": "attachment; filename=tts_output.wav"}
    )

if __name__ == '__main__':
    parser = argparse.ArgumentParser()
    parser.add_argument('--port',
                        type=int,
                        default=50000)
    parser.add_argument('--model_dir',
                        type=str,
                        default='pretrained_models/CosyVoice2-0.5B',
                        help='local path or modelscope repo id')
    args = parser.parse_args()
    # try:
    #     cosyvoice = CosyVoice(args.model_dir)
    # except Exception:
    try:
        # cosyvoice = CosyVoice2(args.model_dir, load_jit=False, load_trt=True, fp16=True)
        cosyvoice = CosyVoice2(args.model_dir, load_jit=False, load_trt=True, fp16=True)
    except Exception:
        raise TypeError('no valid model_type!')
    uvicorn.run(app, host="0.0.0.0", port=8000)
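The zero-shot and instruct2 handlers above return the concatenated int16 PCM under an audio/wav media type but without a RIFF header. If a directly playable WAV container is wanted, the collected bytes could be wrapped before returning; a sketch with the standard library, where the sample rate depends on the loaded model and is only an assumption here:

import io
import wave

def pcm16_to_wav(pcm_bytes: bytes, sample_rate: int = 24000) -> bytes:
    # wrap raw mono int16 PCM in a RIFF/WAV container
    buf = io.BytesIO()
    with wave.open(buf, 'wb') as w:
        w.setnchannels(1)
        w.setsampwidth(2)  # 16-bit samples
        w.setframerate(sample_rate)  # assumed model output rate
        w.writeframes(pcm_bytes)
    return buf.getvalue()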
runtime/python/grpc/__pycache__/cosyvoice_pb2_grpc.cpython-310.pyc
ADDED
Binary file (2.35 kB).
runtime/python/grpc/client.py
ADDED
@@ -0,0 +1,106 @@
# Copyright (c) 2024 Alibaba Inc (authors: Xiang Lyu)
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#   http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import os
import sys
ROOT_DIR = os.path.dirname(os.path.abspath(__file__))
sys.path.append('{}/../../..'.format(ROOT_DIR))
sys.path.append('{}/../../../third_party/Matcha-TTS'.format(ROOT_DIR))
import logging
import argparse
import torchaudio
import cosyvoice_pb2
import cosyvoice_pb2_grpc
import grpc
import torch
import numpy as np
from cosyvoice.utils.file_utils import load_wav


def main():
    with grpc.insecure_channel("{}:{}".format(args.host, args.port)) as channel:
        stub = cosyvoice_pb2_grpc.CosyVoiceStub(channel)
        request = cosyvoice_pb2.Request()
        if args.mode == 'sft':
            logging.info('send sft request')
            sft_request = cosyvoice_pb2.sftRequest()
            sft_request.spk_id = args.spk_id
            sft_request.tts_text = args.tts_text
            request.sft_request.CopyFrom(sft_request)
        elif args.mode == 'zero_shot':
            logging.info('send zero_shot request')
            zero_shot_request = cosyvoice_pb2.zeroshotRequest()
            zero_shot_request.tts_text = args.tts_text
            zero_shot_request.prompt_text = args.prompt_text
            prompt_speech = load_wav(args.prompt_wav, 16000)
            zero_shot_request.prompt_audio = (prompt_speech.numpy() * (2**15)).astype(np.int16).tobytes()
            request.zero_shot_request.CopyFrom(zero_shot_request)
        elif args.mode == 'cross_lingual':
            logging.info('send cross_lingual request')
            cross_lingual_request = cosyvoice_pb2.crosslingualRequest()
            cross_lingual_request.tts_text = args.tts_text
            prompt_speech = load_wav(args.prompt_wav, 16000)
            cross_lingual_request.prompt_audio = (prompt_speech.numpy() * (2**15)).astype(np.int16).tobytes()
            request.cross_lingual_request.CopyFrom(cross_lingual_request)
        else:
            logging.info('send instruct request')
            instruct_request = cosyvoice_pb2.instructRequest()
            instruct_request.tts_text = args.tts_text
            instruct_request.spk_id = args.spk_id
            instruct_request.instruct_text = args.instruct_text
            request.instruct_request.CopyFrom(instruct_request)

        response = stub.Inference(request)
        tts_audio = b''
        for r in response:
            tts_audio += r.tts_audio
        tts_speech = torch.from_numpy(np.array(np.frombuffer(tts_audio, dtype=np.int16))).unsqueeze(dim=0)
        logging.info('save response to {}'.format(args.tts_wav))
        torchaudio.save(args.tts_wav, tts_speech, target_sr)
        logging.info('get response')


if __name__ == "__main__":
    parser = argparse.ArgumentParser()
    parser.add_argument('--host',
                        type=str,
                        default='0.0.0.0')
    parser.add_argument('--port',
                        type=int,
                        default='50000')
    parser.add_argument('--mode',
                        default='sft',
                        choices=['sft', 'zero_shot', 'cross_lingual', 'instruct'],
                        help='request mode')
    parser.add_argument('--tts_text',
                        type=str,
                        default='你好,我是通义千问语音合成大模型,请问有什么可以帮您的吗?')
    parser.add_argument('--spk_id',
                        type=str,
                        default='中文女')
    parser.add_argument('--prompt_text',
                        type=str,
                        default='希望你以后能够做的比我还好呦。')
    parser.add_argument('--prompt_wav',
                        type=str,
                        default='../../../asset/zero_shot_prompt.wav')
    parser.add_argument('--instruct_text',
                        type=str,
                        default='Theo \'Crimson\', is a fiery, passionate rebel leader. \
                        Fights with fervor for justice, but struggles with impulsiveness.')
    parser.add_argument('--tts_wav',
                        type=str,
                        default='demo.wav')
    args = parser.parse_args()
    prompt_sr, target_sr = 16000, 22050
    main()
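Since stub.Inference returns a stream of messages, the reply can also be written incrementally instead of buffered, which keeps memory flat for long utterances. A small sketch that reuses the stub and request built in main() above; the output file name is a placeholder:

with open('demo.pcm', 'wb') as f:          # placeholder output path
    for r in stub.Inference(request):
        f.write(r.tts_audio)               # each message carries a chunk of int16 PCM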
third_party/Matcha-TTS/.env.example
ADDED
@@ -0,0 +1,6 @@
# example of file for storing private and user specific environment variables, like keys or system paths
# rename it to ".env" (excluded from version control by default)
# .env is loaded by train.py automatically
# hydra allows you to reference variables in .yaml configs with special syntax: ${oc.env:MY_VAR}

MY_VAR="/home/user/my/system/path"
third_party/Matcha-TTS/.github/PULL_REQUEST_TEMPLATE.md
ADDED
@@ -0,0 +1,22 @@
## What does this PR do?

<!--
Please include a summary of the change and which issue is fixed.
Please also include relevant motivation and context.
List any dependencies that are required for this change.
List all the breaking changes introduced by this pull request.
-->

Fixes #\<issue_number>

## Before submitting

- [ ] Did you make sure **title is self-explanatory** and **the description concisely explains the PR**?
- [ ] Did you make sure your **PR does only one thing**, instead of bundling different changes together?
- [ ] Did you list all the **breaking changes** introduced by this pull request?
- [ ] Did you **test your PR locally** with `pytest` command?
- [ ] Did you **run pre-commit hooks** with `pre-commit run -a` command?

## Did you have fun?

Make sure you had fun coding 🙃
third_party/Matcha-TTS/.github/codecov.yml
ADDED
@@ -0,0 +1,15 @@
coverage:
  status:
    # measures overall project coverage
    project:
      default:
        threshold: 100% # how much decrease in coverage is needed to not consider success

    # measures PR or single commit coverage
    patch:
      default:
        threshold: 100% # how much decrease in coverage is needed to not consider success


# project: off
# patch: off
third_party/Matcha-TTS/.github/dependabot.yml
ADDED
@@ -0,0 +1,17 @@
# To get started with Dependabot version updates, you'll need to specify which
# package ecosystems to update and where the package manifests are located.
# Please see the documentation for all configuration options:
# https://docs.github.com/github/administering-a-repository/configuration-options-for-dependency-updates

version: 2
updates:
  - package-ecosystem: "pip" # See documentation for possible values
    directory: "/" # Location of package manifests
    target-branch: "dev"
    schedule:
      interval: "daily"
    ignore:
      - dependency-name: "pytorch-lightning"
        update-types: ["version-update:semver-patch"]
      - dependency-name: "torchmetrics"
        update-types: ["version-update:semver-patch"]
third_party/Matcha-TTS/.github/release-drafter.yml
ADDED
@@ -0,0 +1,44 @@
name-template: "v$RESOLVED_VERSION"
tag-template: "v$RESOLVED_VERSION"

categories:
  - title: "🚀 Features"
    labels:
      - "feature"
      - "enhancement"
  - title: "🐛 Bug Fixes"
    labels:
      - "fix"
      - "bugfix"
      - "bug"
  - title: "🧹 Maintenance"
    labels:
      - "maintenance"
      - "dependencies"
      - "refactoring"
      - "cosmetic"
      - "chore"
  - title: "📝️ Documentation"
    labels:
      - "documentation"
      - "docs"

change-template: "- $TITLE @$AUTHOR (#$NUMBER)"
change-title-escapes: '\<*_&' # You can add # and @ to disable mentions

version-resolver:
  major:
    labels:
      - "major"
  minor:
    labels:
      - "minor"
  patch:
    labels:
      - "patch"
  default: patch

template: |
  ## Changes

  $CHANGES
third_party/Matcha-TTS/.gitignore
ADDED
@@ -0,0 +1,163 @@
# Byte-compiled / optimized / DLL files
__pycache__/
*.py[cod]
*$py.class

# C extensions
*.so

# Distribution / packaging
.Python
build/
develop-eggs/
dist/
downloads/
eggs/
.eggs/
lib/
lib64/
parts/
sdist/
var/
wheels/
pip-wheel-metadata/
share/python-wheels/
*.egg-info/
.installed.cfg
*.egg
MANIFEST

# PyInstaller
# Usually these files are written by a python script from a template
# before PyInstaller builds the exe, so as to inject date/other infos into it.
*.manifest
*.spec

# Installer logs
pip-log.txt
pip-delete-this-directory.txt

# Unit test / coverage reports
htmlcov/
.tox/
.nox/
.coverage
.coverage.*
.cache
nosetests.xml
coverage.xml
*.cover
*.py,cover
.hypothesis/
.pytest_cache/

# Translations
*.mo
*.pot

# Django stuff:
*.log
local_settings.py
db.sqlite3
db.sqlite3-journal

# Flask stuff:
instance/
.webassets-cache

# Scrapy stuff:
.scrapy

# Sphinx documentation
docs/_build/

# PyBuilder
target/

# Jupyter Notebook
.ipynb_checkpoints

# IPython
profile_default/
ipython_config.py

# pyenv
.python-version

# pipenv
# According to pypa/pipenv#598, it is recommended to include Pipfile.lock in version control.
# However, in case of collaboration, if having platform-specific dependencies or dependencies
# having no cross-platform support, pipenv may install dependencies that don't work, or not
# install all needed dependencies.
#Pipfile.lock

# PEP 582; used by e.g. github.com/David-OConnor/pyflow
__pypackages__/

# Celery stuff
celerybeat-schedule
celerybeat.pid

# SageMath parsed files
*.sage.py

# Environments
.venv
env/
venv/
ENV/
env.bak/
venv.bak/

# Spyder project settings
.spyderproject
.spyproject

# Rope project settings
.ropeproject

# mkdocs documentation
/site

# mypy
.mypy_cache/
.dmypy.json
dmypy.json

# Pyre type checker
.pyre/

### VisualStudioCode
.vscode/*
!.vscode/settings.json
!.vscode/tasks.json
!.vscode/launch.json
!.vscode/extensions.json
*.code-workspace
**/.vscode

# JetBrains
.idea/

# Data & Models
*.h5
*.tar
*.tar.gz
| 146 |
+
|
| 147 |
+
# Lightning-Hydra-Template
|
| 148 |
+
configs/local/default.yaml
|
| 149 |
+
/data/
|
| 150 |
+
/logs/
|
| 151 |
+
.env
|
| 152 |
+
|
| 153 |
+
# Aim logging
|
| 154 |
+
.aim
|
| 155 |
+
|
| 156 |
+
# Cython complied files
|
| 157 |
+
matcha/utils/monotonic_align/core.c
|
| 158 |
+
|
| 159 |
+
# Ignoring hifigan checkpoint
|
| 160 |
+
generator_v1
|
| 161 |
+
g_02500000
|
| 162 |
+
gradio_cached_examples/
|
| 163 |
+
synth_output/
|
third_party/Matcha-TTS/.pre-commit-config.yaml
ADDED
@@ -0,0 +1,59 @@
+default_language_version:
+  python: python3.10
+
+repos:
+  - repo: https://github.com/pre-commit/pre-commit-hooks
+    rev: v4.5.0
+    hooks:
+      # list of supported hooks: https://pre-commit.com/hooks.html
+      - id: trailing-whitespace
+      - id: end-of-file-fixer
+      # - id: check-docstring-first
+      - id: check-yaml
+      - id: debug-statements
+      - id: detect-private-key
+      - id: check-toml
+      - id: check-case-conflict
+      - id: check-added-large-files
+
+  # python code formatting
+  - repo: https://github.com/psf/black
+    rev: 23.12.1
+    hooks:
+      - id: black
+        args: [--line-length, "120"]
+
+  # python import sorting
+  - repo: https://github.com/PyCQA/isort
+    rev: 5.13.2
+    hooks:
+      - id: isort
+        args: ["--profile", "black", "--filter-files"]
+
+  # python upgrading syntax to newer version
+  - repo: https://github.com/asottile/pyupgrade
+    rev: v3.15.0
+    hooks:
+      - id: pyupgrade
+        args: [--py38-plus]
+
+  # python check (PEP8), programming errors and code complexity
+  - repo: https://github.com/PyCQA/flake8
+    rev: 7.0.0
+    hooks:
+      - id: flake8
+        args:
+          [
+            "--max-line-length", "120",
+            "--extend-ignore",
+            "E203,E402,E501,F401,F841,RST2,RST301",
+            "--exclude",
+            "logs/*,data/*,matcha/hifigan/*",
+          ]
+        additional_dependencies: [flake8-rst-docstrings==0.3.0]
+
+  # pylint
+  - repo: https://github.com/pycqa/pylint
+    rev: v3.0.3
+    hooks:
+      - id: pylint
third_party/Matcha-TTS/.project-root
ADDED
@@ -0,0 +1,2 @@
+# this file is required for inferring the project root directory
+# do not delete
third_party/Matcha-TTS/.pylintrc
ADDED
|
@@ -0,0 +1,525 @@
|
| 1 |
+
[MASTER]
|
| 2 |
+
|
| 3 |
+
# A comma-separated list of package or module names from where C extensions may
|
| 4 |
+
# be loaded. Extensions are loading into the active Python interpreter and may
|
| 5 |
+
# run arbitrary code.
|
| 6 |
+
extension-pkg-whitelist=
|
| 7 |
+
|
| 8 |
+
# Add files or directories to the blacklist. They should be base names, not
|
| 9 |
+
# paths.
|
| 10 |
+
ignore=CVS
|
| 11 |
+
|
| 12 |
+
# Add files or directories matching the regex patterns to the blacklist. The
|
| 13 |
+
# regex matches against base names, not paths.
|
| 14 |
+
ignore-patterns=
|
| 15 |
+
|
| 16 |
+
# Python code to execute, usually for sys.path manipulation such as
|
| 17 |
+
# pygtk.require().
|
| 18 |
+
#init-hook=
|
| 19 |
+
|
| 20 |
+
# Use multiple processes to speed up Pylint. Specifying 0 will auto-detect the
|
| 21 |
+
# number of processors available to use.
|
| 22 |
+
jobs=1
|
| 23 |
+
|
| 24 |
+
# Control the amount of potential inferred values when inferring a single
|
| 25 |
+
# object. This can help the performance when dealing with large functions or
|
| 26 |
+
# complex, nested conditions.
|
| 27 |
+
limit-inference-results=100
|
| 28 |
+
|
| 29 |
+
# List of plugins (as comma separated values of python modules names) to load,
|
| 30 |
+
# usually to register additional checkers.
|
| 31 |
+
load-plugins=
|
| 32 |
+
|
| 33 |
+
# Pickle collected data for later comparisons.
|
| 34 |
+
persistent=yes
|
| 35 |
+
|
| 36 |
+
# Specify a configuration file.
|
| 37 |
+
#rcfile=
|
| 38 |
+
|
| 39 |
+
# When enabled, pylint would attempt to guess common misconfiguration and emit
|
| 40 |
+
# user-friendly hints instead of false-positive error messages.
|
| 41 |
+
suggestion-mode=yes
|
| 42 |
+
|
| 43 |
+
# Allow loading of arbitrary C extensions. Extensions are imported into the
|
| 44 |
+
# active Python interpreter and may run arbitrary code.
|
| 45 |
+
unsafe-load-any-extension=no
|
| 46 |
+
|
| 47 |
+
|
| 48 |
+
[MESSAGES CONTROL]
|
| 49 |
+
|
| 50 |
+
# Only show warnings with the listed confidence levels. Leave empty to show
|
| 51 |
+
# all. Valid levels: HIGH, INFERENCE, INFERENCE_FAILURE, UNDEFINED.
|
| 52 |
+
confidence=
|
| 53 |
+
|
| 54 |
+
# Disable the message, report, category or checker with the given id(s). You
|
| 55 |
+
# can either give multiple identifiers separated by comma (,) or put this
|
| 56 |
+
# option multiple times (only on the command line, not in the configuration
|
| 57 |
+
# file where it should appear only once). You can also use "--disable=all" to
|
| 58 |
+
# disable everything first and then reenable specific checks. For example, if
|
| 59 |
+
# you want to run only the similarities checker, you can use "--disable=all
|
| 60 |
+
# --enable=similarities". If you want to run only the classes checker, but have
|
| 61 |
+
# no Warning level messages displayed, use "--disable=all --enable=classes
|
| 62 |
+
# --disable=W".
|
| 63 |
+
disable=missing-docstring,
|
| 64 |
+
too-many-public-methods,
|
| 65 |
+
too-many-lines,
|
| 66 |
+
bare-except,
|
| 67 |
+
## for avoiding weird p3.6 CI linter error
|
| 68 |
+
## TODO: see later if we can remove this
|
| 69 |
+
assigning-non-slot,
|
| 70 |
+
unsupported-assignment-operation,
|
| 71 |
+
## end
|
| 72 |
+
line-too-long,
|
| 73 |
+
fixme,
|
| 74 |
+
wrong-import-order,
|
| 75 |
+
ungrouped-imports,
|
| 76 |
+
wrong-import-position,
|
| 77 |
+
import-error,
|
| 78 |
+
invalid-name,
|
| 79 |
+
too-many-instance-attributes,
|
| 80 |
+
arguments-differ,
|
| 81 |
+
arguments-renamed,
|
| 82 |
+
no-name-in-module,
|
| 83 |
+
no-member,
|
| 84 |
+
unsubscriptable-object,
|
| 85 |
+
raw-checker-failed,
|
| 86 |
+
bad-inline-option,
|
| 87 |
+
locally-disabled,
|
| 88 |
+
file-ignored,
|
| 89 |
+
suppressed-message,
|
| 90 |
+
useless-suppression,
|
| 91 |
+
deprecated-pragma,
|
| 92 |
+
use-symbolic-message-instead,
|
| 93 |
+
useless-object-inheritance,
|
| 94 |
+
too-few-public-methods,
|
| 95 |
+
too-many-branches,
|
| 96 |
+
too-many-arguments,
|
| 97 |
+
too-many-locals,
|
| 98 |
+
too-many-statements,
|
| 99 |
+
duplicate-code,
|
| 100 |
+
not-callable,
|
| 101 |
+
import-outside-toplevel,
|
| 102 |
+
logging-fstring-interpolation,
|
| 103 |
+
logging-not-lazy,
|
| 104 |
+
unused-argument,
|
| 105 |
+
no-else-return,
|
| 106 |
+
chained-comparison,
|
| 107 |
+
redefined-outer-name
|
| 108 |
+
|
| 109 |
+
# Enable the message, report, category or checker with the given id(s). You can
|
| 110 |
+
# either give multiple identifier separated by comma (,) or put this option
|
| 111 |
+
# multiple time (only on the command line, not in the configuration file where
|
| 112 |
+
# it should appear only once). See also the "--disable" option for examples.
|
| 113 |
+
enable=c-extension-no-member
|
| 114 |
+
|
| 115 |
+
|
| 116 |
+
[REPORTS]
|
| 117 |
+
|
| 118 |
+
# Python expression which should return a note less than 10 (10 is the highest
|
| 119 |
+
# note). You have access to the variables errors warning, statement which
|
| 120 |
+
# respectively contain the number of errors / warnings messages and the total
|
| 121 |
+
# number of statements analyzed. This is used by the global evaluation report
|
| 122 |
+
# (RP0004).
|
| 123 |
+
evaluation=10.0 - ((float(5 * error + warning + refactor + convention) / statement) * 10)
|
| 124 |
+
|
| 125 |
+
# Template used to display messages. This is a python new-style format string
|
| 126 |
+
# used to format the message information. See doc for all details.
|
| 127 |
+
#msg-template=
|
| 128 |
+
|
| 129 |
+
# Set the output format. Available formats are text, parseable, colorized, json
|
| 130 |
+
# and msvs (visual studio). You can also give a reporter class, e.g.
|
| 131 |
+
# mypackage.mymodule.MyReporterClass.
|
| 132 |
+
output-format=text
|
| 133 |
+
|
| 134 |
+
# Tells whether to display a full report or only the messages.
|
| 135 |
+
reports=no
|
| 136 |
+
|
| 137 |
+
# Activate the evaluation score.
|
| 138 |
+
score=yes
|
| 139 |
+
|
| 140 |
+
|
| 141 |
+
[REFACTORING]
|
| 142 |
+
|
| 143 |
+
# Maximum number of nested blocks for function / method body
|
| 144 |
+
max-nested-blocks=5
|
| 145 |
+
|
| 146 |
+
# Complete name of functions that never returns. When checking for
|
| 147 |
+
# inconsistent-return-statements if a never returning function is called then
|
| 148 |
+
# it will be considered as an explicit return statement and no message will be
|
| 149 |
+
# printed.
|
| 150 |
+
never-returning-functions=sys.exit
|
| 151 |
+
|
| 152 |
+
|
| 153 |
+
[LOGGING]
|
| 154 |
+
|
| 155 |
+
# Format style used to check logging format string. `old` means using %
|
| 156 |
+
# formatting, while `new` is for `{}` formatting.
|
| 157 |
+
logging-format-style=old
|
| 158 |
+
|
| 159 |
+
# Logging modules to check that the string format arguments are in logging
|
| 160 |
+
# function parameter format.
|
| 161 |
+
logging-modules=logging
|
| 162 |
+
|
| 163 |
+
|
| 164 |
+
[SPELLING]
|
| 165 |
+
|
| 166 |
+
# Limits count of emitted suggestions for spelling mistakes.
|
| 167 |
+
max-spelling-suggestions=4
|
| 168 |
+
|
| 169 |
+
# Spelling dictionary name. Available dictionaries: none. To make it working
|
| 170 |
+
# install python-enchant package..
|
| 171 |
+
spelling-dict=
|
| 172 |
+
|
| 173 |
+
# List of comma separated words that should not be checked.
|
| 174 |
+
spelling-ignore-words=
|
| 175 |
+
|
| 176 |
+
# A path to a file that contains private dictionary; one word per line.
|
| 177 |
+
spelling-private-dict-file=
|
| 178 |
+
|
| 179 |
+
# Tells whether to store unknown words to indicated private dictionary in
|
| 180 |
+
# --spelling-private-dict-file option instead of raising a message.
|
| 181 |
+
spelling-store-unknown-words=no
|
| 182 |
+
|
| 183 |
+
|
| 184 |
+
[MISCELLANEOUS]
|
| 185 |
+
|
| 186 |
+
# List of note tags to take in consideration, separated by a comma.
|
| 187 |
+
notes=FIXME,
|
| 188 |
+
XXX,
|
| 189 |
+
TODO
|
| 190 |
+
|
| 191 |
+
|
| 192 |
+
[TYPECHECK]
|
| 193 |
+
|
| 194 |
+
# List of decorators that produce context managers, such as
|
| 195 |
+
# contextlib.contextmanager. Add to this list to register other decorators that
|
| 196 |
+
# produce valid context managers.
|
| 197 |
+
contextmanager-decorators=contextlib.contextmanager
|
| 198 |
+
|
| 199 |
+
# List of members which are set dynamically and missed by pylint inference
|
| 200 |
+
# system, and so shouldn't trigger E1101 when accessed. Python regular
|
| 201 |
+
# expressions are accepted.
|
| 202 |
+
generated-members=numpy.*,torch.*
|
| 203 |
+
|
| 204 |
+
# Tells whether missing members accessed in mixin class should be ignored. A
|
| 205 |
+
# mixin class is detected if its name ends with "mixin" (case insensitive).
|
| 206 |
+
ignore-mixin-members=yes
|
| 207 |
+
|
| 208 |
+
# Tells whether to warn about missing members when the owner of the attribute
|
| 209 |
+
# is inferred to be None.
|
| 210 |
+
ignore-none=yes
|
| 211 |
+
|
| 212 |
+
# This flag controls whether pylint should warn about no-member and similar
|
| 213 |
+
# checks whenever an opaque object is returned when inferring. The inference
|
| 214 |
+
# can return multiple potential results while evaluating a Python object, but
|
| 215 |
+
# some branches might not be evaluated, which results in partial inference. In
|
| 216 |
+
# that case, it might be useful to still emit no-member and other checks for
|
| 217 |
+
# the rest of the inferred objects.
|
| 218 |
+
ignore-on-opaque-inference=yes
|
| 219 |
+
|
| 220 |
+
# List of class names for which member attributes should not be checked (useful
|
| 221 |
+
# for classes with dynamically set attributes). This supports the use of
|
| 222 |
+
# qualified names.
|
| 223 |
+
ignored-classes=optparse.Values,thread._local,_thread._local
|
| 224 |
+
|
| 225 |
+
# List of module names for which member attributes should not be checked
|
| 226 |
+
# (useful for modules/projects where namespaces are manipulated during runtime
|
| 227 |
+
# and thus existing member attributes cannot be deduced by static analysis. It
|
| 228 |
+
# supports qualified module names, as well as Unix pattern matching.
|
| 229 |
+
ignored-modules=
|
| 230 |
+
|
| 231 |
+
# Show a hint with possible names when a member name was not found. The aspect
|
| 232 |
+
# of finding the hint is based on edit distance.
|
| 233 |
+
missing-member-hint=yes
|
| 234 |
+
|
| 235 |
+
# The minimum edit distance a name should have in order to be considered a
|
| 236 |
+
# similar match for a missing member name.
|
| 237 |
+
missing-member-hint-distance=1
|
| 238 |
+
|
| 239 |
+
# The total number of similar names that should be taken in consideration when
|
| 240 |
+
# showing a hint for a missing member.
|
| 241 |
+
missing-member-max-choices=1
|
| 242 |
+
|
| 243 |
+
|
| 244 |
+
[VARIABLES]
|
| 245 |
+
|
| 246 |
+
# List of additional names supposed to be defined in builtins. Remember that
|
| 247 |
+
# you should avoid defining new builtins when possible.
|
| 248 |
+
additional-builtins=
|
| 249 |
+
|
| 250 |
+
# Tells whether unused global variables should be treated as a violation.
|
| 251 |
+
allow-global-unused-variables=yes
|
| 252 |
+
|
| 253 |
+
# List of strings which can identify a callback function by name. A callback
|
| 254 |
+
# name must start or end with one of those strings.
|
| 255 |
+
callbacks=cb_,
|
| 256 |
+
_cb
|
| 257 |
+
|
| 258 |
+
# A regular expression matching the name of dummy variables (i.e. expected to
|
| 259 |
+
# not be used).
|
| 260 |
+
dummy-variables-rgx=_+$|(_[a-zA-Z0-9_]*[a-zA-Z0-9]+?$)|dummy|^ignored_|^unused_
|
| 261 |
+
|
| 262 |
+
# Argument names that match this expression will be ignored. Default to name
|
| 263 |
+
# with leading underscore.
|
| 264 |
+
ignored-argument-names=_.*|^ignored_|^unused_
|
| 265 |
+
|
| 266 |
+
# Tells whether we should check for unused import in __init__ files.
|
| 267 |
+
init-import=no
|
| 268 |
+
|
| 269 |
+
# List of qualified module names which can have objects that can redefine
|
| 270 |
+
# builtins.
|
| 271 |
+
redefining-builtins-modules=six.moves,past.builtins,future.builtins,builtins,io
|
| 272 |
+
|
| 273 |
+
|
| 274 |
+
[FORMAT]
|
| 275 |
+
|
| 276 |
+
# Expected format of line ending, e.g. empty (any line ending), LF or CRLF.
|
| 277 |
+
expected-line-ending-format=
|
| 278 |
+
|
| 279 |
+
# Regexp for a line that is allowed to be longer than the limit.
|
| 280 |
+
ignore-long-lines=^\s*(# )?<?https?://\S+>?$
|
| 281 |
+
|
| 282 |
+
# Number of spaces of indent required inside a hanging or continued line.
|
| 283 |
+
indent-after-paren=4
|
| 284 |
+
|
| 285 |
+
# String used as indentation unit. This is usually " " (4 spaces) or "\t" (1
|
| 286 |
+
# tab).
|
| 287 |
+
indent-string=' '
|
| 288 |
+
|
| 289 |
+
# Maximum number of characters on a single line.
|
| 290 |
+
max-line-length=120
|
| 291 |
+
|
| 292 |
+
# Maximum number of lines in a module.
|
| 293 |
+
max-module-lines=1000
|
| 294 |
+
|
| 295 |
+
# Allow the body of a class to be on the same line as the declaration if body
|
| 296 |
+
# contains single statement.
|
| 297 |
+
single-line-class-stmt=no
|
| 298 |
+
|
| 299 |
+
# Allow the body of an if to be on the same line as the test if there is no
|
| 300 |
+
# else.
|
| 301 |
+
single-line-if-stmt=no
|
| 302 |
+
|
| 303 |
+
|
| 304 |
+
[SIMILARITIES]
|
| 305 |
+
|
| 306 |
+
# Ignore comments when computing similarities.
|
| 307 |
+
ignore-comments=yes
|
| 308 |
+
|
| 309 |
+
# Ignore docstrings when computing similarities.
|
| 310 |
+
ignore-docstrings=yes
|
| 311 |
+
|
| 312 |
+
# Ignore imports when computing similarities.
|
| 313 |
+
ignore-imports=no
|
| 314 |
+
|
| 315 |
+
# Minimum lines number of a similarity.
|
| 316 |
+
min-similarity-lines=4
|
| 317 |
+
|
| 318 |
+
|
| 319 |
+
[BASIC]
|
| 320 |
+
|
| 321 |
+
# Naming style matching correct argument names.
|
| 322 |
+
argument-naming-style=snake_case
|
| 323 |
+
|
| 324 |
+
# Regular expression matching correct argument names. Overrides argument-
|
| 325 |
+
# naming-style.
|
| 326 |
+
argument-rgx=[a-z_][a-z0-9_]{0,30}$
|
| 327 |
+
|
| 328 |
+
# Naming style matching correct attribute names.
|
| 329 |
+
attr-naming-style=snake_case
|
| 330 |
+
|
| 331 |
+
# Regular expression matching correct attribute names. Overrides attr-naming-
|
| 332 |
+
# style.
|
| 333 |
+
#attr-rgx=
|
| 334 |
+
|
| 335 |
+
# Bad variable names which should always be refused, separated by a comma.
|
| 336 |
+
bad-names=
|
| 337 |
+
|
| 338 |
+
# Naming style matching correct class attribute names.
|
| 339 |
+
class-attribute-naming-style=any
|
| 340 |
+
|
| 341 |
+
# Regular expression matching correct class attribute names. Overrides class-
|
| 342 |
+
# attribute-naming-style.
|
| 343 |
+
#class-attribute-rgx=
|
| 344 |
+
|
| 345 |
+
# Naming style matching correct class names.
|
| 346 |
+
class-naming-style=PascalCase
|
| 347 |
+
|
| 348 |
+
# Regular expression matching correct class names. Overrides class-naming-
|
| 349 |
+
# style.
|
| 350 |
+
#class-rgx=
|
| 351 |
+
|
| 352 |
+
# Naming style matching correct constant names.
|
| 353 |
+
const-naming-style=UPPER_CASE
|
| 354 |
+
|
| 355 |
+
# Regular expression matching correct constant names. Overrides const-naming-
|
| 356 |
+
# style.
|
| 357 |
+
#const-rgx=
|
| 358 |
+
|
| 359 |
+
# Minimum line length for functions/classes that require docstrings, shorter
|
| 360 |
+
# ones are exempt.
|
| 361 |
+
docstring-min-length=-1
|
| 362 |
+
|
| 363 |
+
# Naming style matching correct function names.
|
| 364 |
+
function-naming-style=snake_case
|
| 365 |
+
|
| 366 |
+
# Regular expression matching correct function names. Overrides function-
|
| 367 |
+
# naming-style.
|
| 368 |
+
#function-rgx=
|
| 369 |
+
|
| 370 |
+
# Good variable names which should always be accepted, separated by a comma.
|
| 371 |
+
good-names=i,
|
| 372 |
+
j,
|
| 373 |
+
k,
|
| 374 |
+
x,
|
| 375 |
+
ex,
|
| 376 |
+
Run,
|
| 377 |
+
_
|
| 378 |
+
|
| 379 |
+
# Include a hint for the correct naming format with invalid-name.
|
| 380 |
+
include-naming-hint=no
|
| 381 |
+
|
| 382 |
+
# Naming style matching correct inline iteration names.
|
| 383 |
+
inlinevar-naming-style=any
|
| 384 |
+
|
| 385 |
+
# Regular expression matching correct inline iteration names. Overrides
|
| 386 |
+
# inlinevar-naming-style.
|
| 387 |
+
#inlinevar-rgx=
|
| 388 |
+
|
| 389 |
+
# Naming style matching correct method names.
|
| 390 |
+
method-naming-style=snake_case
|
| 391 |
+
|
| 392 |
+
# Regular expression matching correct method names. Overrides method-naming-
|
| 393 |
+
# style.
|
| 394 |
+
#method-rgx=
|
| 395 |
+
|
| 396 |
+
# Naming style matching correct module names.
|
| 397 |
+
module-naming-style=snake_case
|
| 398 |
+
|
| 399 |
+
# Regular expression matching correct module names. Overrides module-naming-
|
| 400 |
+
# style.
|
| 401 |
+
#module-rgx=
|
| 402 |
+
|
| 403 |
+
# Colon-delimited sets of names that determine each other's naming style when
|
| 404 |
+
# the name regexes allow several styles.
|
| 405 |
+
name-group=
|
| 406 |
+
|
| 407 |
+
# Regular expression which should only match function or class names that do
|
| 408 |
+
# not require a docstring.
|
| 409 |
+
no-docstring-rgx=^_
|
| 410 |
+
|
| 411 |
+
# List of decorators that produce properties, such as abc.abstractproperty. Add
|
| 412 |
+
# to this list to register other decorators that produce valid properties.
|
| 413 |
+
# These decorators are taken in consideration only for invalid-name.
|
| 414 |
+
property-classes=abc.abstractproperty
|
| 415 |
+
|
| 416 |
+
# Naming style matching correct variable names.
|
| 417 |
+
variable-naming-style=snake_case
|
| 418 |
+
|
| 419 |
+
# Regular expression matching correct variable names. Overrides variable-
|
| 420 |
+
# naming-style.
|
| 421 |
+
variable-rgx=[a-z_][a-z0-9_]{0,30}$
|
| 422 |
+
|
| 423 |
+
|
| 424 |
+
[STRING]
|
| 425 |
+
|
| 426 |
+
# This flag controls whether the implicit-str-concat-in-sequence should
|
| 427 |
+
# generate a warning on implicit string concatenation in sequences defined over
|
| 428 |
+
# several lines.
|
| 429 |
+
check-str-concat-over-line-jumps=no
|
| 430 |
+
|
| 431 |
+
|
| 432 |
+
[IMPORTS]
|
| 433 |
+
|
| 434 |
+
# Allow wildcard imports from modules that define __all__.
|
| 435 |
+
allow-wildcard-with-all=no
|
| 436 |
+
|
| 437 |
+
# Analyse import fallback blocks. This can be used to support both Python 2 and
|
| 438 |
+
# 3 compatible code, which means that the block might have code that exists
|
| 439 |
+
# only in one or another interpreter, leading to false positives when analysed.
|
| 440 |
+
analyse-fallback-blocks=no
|
| 441 |
+
|
| 442 |
+
# Deprecated modules which should not be used, separated by a comma.
|
| 443 |
+
deprecated-modules=optparse,tkinter.tix
|
| 444 |
+
|
| 445 |
+
# Create a graph of external dependencies in the given file (report RP0402 must
|
| 446 |
+
# not be disabled).
|
| 447 |
+
ext-import-graph=
|
| 448 |
+
|
| 449 |
+
# Create a graph of every (i.e. internal and external) dependencies in the
|
| 450 |
+
# given file (report RP0402 must not be disabled).
|
| 451 |
+
import-graph=
|
| 452 |
+
|
| 453 |
+
# Create a graph of internal dependencies in the given file (report RP0402 must
|
| 454 |
+
# not be disabled).
|
| 455 |
+
int-import-graph=
|
| 456 |
+
|
| 457 |
+
# Force import order to recognize a module as part of the standard
|
| 458 |
+
# compatibility libraries.
|
| 459 |
+
known-standard-library=
|
| 460 |
+
|
| 461 |
+
# Force import order to recognize a module as part of a third party library.
|
| 462 |
+
known-third-party=enchant
|
| 463 |
+
|
| 464 |
+
|
| 465 |
+
[CLASSES]
|
| 466 |
+
|
| 467 |
+
# List of method names used to declare (i.e. assign) instance attributes.
|
| 468 |
+
defining-attr-methods=__init__,
|
| 469 |
+
__new__,
|
| 470 |
+
setUp
|
| 471 |
+
|
| 472 |
+
# List of member names, which should be excluded from the protected access
|
| 473 |
+
# warning.
|
| 474 |
+
exclude-protected=_asdict,
|
| 475 |
+
_fields,
|
| 476 |
+
_replace,
|
| 477 |
+
_source,
|
| 478 |
+
_make
|
| 479 |
+
|
| 480 |
+
# List of valid names for the first argument in a class method.
|
| 481 |
+
valid-classmethod-first-arg=cls
|
| 482 |
+
|
| 483 |
+
# List of valid names for the first argument in a metaclass class method.
|
| 484 |
+
valid-metaclass-classmethod-first-arg=cls
|
| 485 |
+
|
| 486 |
+
|
| 487 |
+
[DESIGN]
|
| 488 |
+
|
| 489 |
+
# Maximum number of arguments for function / method.
|
| 490 |
+
max-args=5
|
| 491 |
+
|
| 492 |
+
# Maximum number of attributes for a class (see R0902).
|
| 493 |
+
max-attributes=7
|
| 494 |
+
|
| 495 |
+
# Maximum number of boolean expressions in an if statement.
|
| 496 |
+
max-bool-expr=5
|
| 497 |
+
|
| 498 |
+
# Maximum number of branch for function / method body.
|
| 499 |
+
max-branches=12
|
| 500 |
+
|
| 501 |
+
# Maximum number of locals for function / method body.
|
| 502 |
+
max-locals=15
|
| 503 |
+
|
| 504 |
+
# Maximum number of parents for a class (see R0901).
|
| 505 |
+
max-parents=15
|
| 506 |
+
|
| 507 |
+
# Maximum number of public methods for a class (see R0904).
|
| 508 |
+
max-public-methods=20
|
| 509 |
+
|
| 510 |
+
# Maximum number of return / yield for function / method body.
|
| 511 |
+
max-returns=6
|
| 512 |
+
|
| 513 |
+
# Maximum number of statements in function / method body.
|
| 514 |
+
max-statements=50
|
| 515 |
+
|
| 516 |
+
# Minimum number of public methods for a class (see R0903).
|
| 517 |
+
min-public-methods=2
|
| 518 |
+
|
| 519 |
+
|
| 520 |
+
[EXCEPTIONS]
|
| 521 |
+
|
| 522 |
+
# Exceptions that will emit a warning when being caught. Defaults to
|
| 523 |
+
# "BaseException, Exception".
|
| 524 |
+
overgeneral-exceptions=builtins.BaseException,
|
| 525 |
+
builtins.Exception
|
third_party/Matcha-TTS/LICENSE
ADDED
@@ -0,0 +1,21 @@
+MIT License
+
+Copyright (c) 2023 Shivam Mehta
+
+Permission is hereby granted, free of charge, to any person obtaining a copy
+of this software and associated documentation files (the "Software"), to deal
+in the Software without restriction, including without limitation the rights
+to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
+copies of the Software, and to permit persons to whom the Software is
+furnished to do so, subject to the following conditions:
+
+The above copyright notice and this permission notice shall be included in all
+copies or substantial portions of the Software.
+
+THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
+AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
+LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
+OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
+SOFTWARE.
third_party/Matcha-TTS/MANIFEST.in
ADDED
@@ -0,0 +1,14 @@
+include README.md
+include LICENSE.txt
+include requirements.*.txt
+include *.cff
+include requirements.txt
+include matcha/VERSION
+recursive-include matcha *.json
+recursive-include matcha *.html
+recursive-include matcha *.png
+recursive-include matcha *.md
+recursive-include matcha *.py
+recursive-include matcha *.pyx
+recursive-exclude tests *
+prune tests*
third_party/Matcha-TTS/Makefile
ADDED
@@ -0,0 +1,42 @@
+
+help: ## Show help
+	@grep -E '^[.a-zA-Z_-]+:.*?## .*$$' $(MAKEFILE_LIST) | awk 'BEGIN {FS = ":.*?## "}; {printf "\033[36m%-30s\033[0m %s\n", $$1, $$2}'
+
+clean: ## Clean autogenerated files
+	rm -rf dist
+	find . -type f -name "*.DS_Store" -ls -delete
+	find . | grep -E "(__pycache__|\.pyc|\.pyo)" | xargs rm -rf
+	find . | grep -E ".pytest_cache" | xargs rm -rf
+	find . | grep -E ".ipynb_checkpoints" | xargs rm -rf
+	rm -f .coverage
+
+clean-logs: ## Clean logs
+	rm -rf logs/**
+
+create-package: ## Create wheel and tar gz
+	rm -rf dist/
+	python setup.py bdist_wheel --plat-name=manylinux1_x86_64
+	python setup.py sdist
+	python -m twine upload dist/* --verbose --skip-existing
+
+format: ## Run pre-commit hooks
+	pre-commit run -a
+
+sync: ## Merge changes from main branch to your current branch
+	git pull
+	git pull origin main
+
+test: ## Run not slow tests
+	pytest -k "not slow"
+
+test-full: ## Run all tests
+	pytest
+
+train-ljspeech: ## Train the model
+	python matcha/train.py experiment=ljspeech
+
+train-ljspeech-min: ## Train the model with minimum memory
+	python matcha/train.py experiment=ljspeech_min_memory
+
+start_app: ## Start the app
+	python matcha/app.py
third_party/Matcha-TTS/README.md
ADDED
@@ -0,0 +1,278 @@
+<div align="center">
+
+# 🍵 Matcha-TTS: A fast TTS architecture with conditional flow matching
+
+### [Shivam Mehta](https://www.kth.se/profile/smehta), [Ruibo Tu](https://www.kth.se/profile/ruibo), [Jonas Beskow](https://www.kth.se/profile/beskow), [Éva Székely](https://www.kth.se/profile/szekely), and [Gustav Eje Henter](https://people.kth.se/~ghe/)
+
+[](https://www.python.org/downloads/release/python-3100/)
+[](https://pytorch.org/get-started/locally/)
+[](https://pytorchlightning.ai/)
+[](https://hydra.cc/)
+[](https://black.readthedocs.io/en/stable/)
+[](https://pycqa.github.io/isort/)
+
+<p style="text-align: center;">
+  <img src="https://shivammehta25.github.io/Matcha-TTS/images/logo.png" height="128"/>
+</p>
+
+</div>
+
+> This is the official code implementation of 🍵 Matcha-TTS [ICASSP 2024].
+
+We propose 🍵 Matcha-TTS, a new approach to non-autoregressive neural TTS, that uses [conditional flow matching](https://arxiv.org/abs/2210.02747) (similar to [rectified flows](https://arxiv.org/abs/2209.03003)) to speed up ODE-based speech synthesis. Our method:
+
+- Is probabilistic
+- Has compact memory footprint
+- Sounds highly natural
+- Is very fast to synthesise from
+
+Check out our [demo page](https://shivammehta25.github.io/Matcha-TTS) and read [our ICASSP 2024 paper](https://arxiv.org/abs/2309.03199) for more details.
+
+[Pre-trained models](https://drive.google.com/drive/folders/17C_gYgEHOxI5ZypcfE_k1piKCtyR0isJ?usp=sharing) will be automatically downloaded with the CLI or gradio interface.
+
+You can also [try 🍵 Matcha-TTS in your browser on HuggingFace 🤗 spaces](https://huggingface.co/spaces/shivammehta25/Matcha-TTS).
+
+## Teaser video
+
+[](https://youtu.be/xmvJkz3bqw0)
+
+## Installation
+
+1. Create an environment (suggested but optional)
+
+```
+conda create -n matcha-tts python=3.10 -y
+conda activate matcha-tts
+```
+
+2. Install Matcha TTS using pip or from source
+
+```bash
+pip install matcha-tts
+```
+
+from source
+
+```bash
+pip install git+https://github.com/shivammehta25/Matcha-TTS.git
+cd Matcha-TTS
+pip install -e .
+```
+
+3. Run CLI / gradio app / jupyter notebook
+
+```bash
+# This will download the required models
+matcha-tts --text "<INPUT TEXT>"
+```
+
+or
+
+```bash
+matcha-tts-app
+```
+
+or open `synthesis.ipynb` on jupyter notebook
+
+### CLI Arguments
+
+- To synthesise from given text, run:
+
+```bash
+matcha-tts --text "<INPUT TEXT>"
+```
+
+- To synthesise from a file, run:
+
+```bash
+matcha-tts --file <PATH TO FILE>
+```
+
+- To batch synthesise from a file, run:
+
+```bash
+matcha-tts --file <PATH TO FILE> --batched
+```
+
+Additional arguments
+
+- Speaking rate
+
+```bash
+matcha-tts --text "<INPUT TEXT>" --speaking_rate 1.0
+```
+
+- Sampling temperature
+
+```bash
+matcha-tts --text "<INPUT TEXT>" --temperature 0.667
+```
+
+- Euler ODE solver steps
+
+```bash
+matcha-tts --text "<INPUT TEXT>" --steps 10
+```
+
+## Train with your own dataset
+
+Let's assume we are training with LJ Speech
+
+1. Download the dataset from [here](https://keithito.com/LJ-Speech-Dataset/), extract it to `data/LJSpeech-1.1`, and prepare the file lists to point to the extracted data like for [item 5 in the setup of the NVIDIA Tacotron 2 repo](https://github.com/NVIDIA/tacotron2#setup).
+
+2. Clone and enter the Matcha-TTS repository
+
+```bash
+git clone https://github.com/shivammehta25/Matcha-TTS.git
+cd Matcha-TTS
+```
+
+3. Install the package from source
+
+```bash
+pip install -e .
+```
+
+4. Go to `configs/data/ljspeech.yaml` and change
+
+```yaml
+train_filelist_path: data/filelists/ljs_audio_text_train_filelist.txt
+valid_filelist_path: data/filelists/ljs_audio_text_val_filelist.txt
+```
+
+5. Generate normalisation statistics with the yaml file of dataset configuration
+
+```bash
+matcha-data-stats -i ljspeech.yaml
+# Output:
+#{'mel_mean': -5.53662231756592, 'mel_std': 2.1161014277038574}
+```
+
+Update these values in `configs/data/ljspeech.yaml` under `data_statistics` key.
+
+```bash
+data_statistics: # Computed for ljspeech dataset
+  mel_mean: -5.536622
+  mel_std: 2.116101
+```
+
+to the paths of your train and validation filelists.
+
+6. Run the training script
+
+```bash
+make train-ljspeech
+```
+
+or
+
+```bash
+python matcha/train.py experiment=ljspeech
+```
+
+- for a minimum memory run
+
+```bash
+python matcha/train.py experiment=ljspeech_min_memory
+```
+
+- for multi-gpu training, run
+
+```bash
+python matcha/train.py experiment=ljspeech trainer.devices=[0,1]
+```
+
+7. Synthesise from the custom trained model
+
+```bash
+matcha-tts --text "<INPUT TEXT>" --checkpoint_path <PATH TO CHECKPOINT>
+```
+
+## ONNX support
+
+> Special thanks to [@mush42](https://github.com/mush42) for implementing ONNX export and inference support.
+
+It is possible to export Matcha checkpoints to [ONNX](https://onnx.ai/), and run inference on the exported ONNX graph.
+
+### ONNX export
+
+To export a checkpoint to ONNX, first install ONNX with
+
+```bash
+pip install onnx
+```
+
+then run the following:
+
+```bash
+python3 -m matcha.onnx.export matcha.ckpt model.onnx --n-timesteps 5
+```
+
+Optionally, the ONNX exporter accepts **vocoder-name** and **vocoder-checkpoint** arguments. This enables you to embed the vocoder in the exported graph and generate waveforms in a single run (similar to end-to-end TTS systems).
+
+**Note** that `n_timesteps` is treated as a hyper-parameter rather than a model input. This means you should specify it during export (not during inference). If not specified, `n_timesteps` is set to **5**.
+
+**Important**: for now, torch>=2.1.0 is needed for export since the `scaled_product_attention` operator is not exportable in older versions. Until the final version is released, those who want to export their models must install torch>=2.1.0 manually as a pre-release.
+
+### ONNX Inference
+
+To run inference on the exported model, first install `onnxruntime` using
+
+```bash
+pip install onnxruntime
+pip install onnxruntime-gpu # for GPU inference
+```
+
+then use the following:
+
+```bash
+python3 -m matcha.onnx.infer model.onnx --text "hey" --output-dir ./outputs
+```
+
+You can also control synthesis parameters:
+
+```bash
+python3 -m matcha.onnx.infer model.onnx --text "hey" --output-dir ./outputs --temperature 0.4 --speaking_rate 0.9 --spk 0
+```
+
+To run inference on **GPU**, make sure to install **onnxruntime-gpu** package, and then pass `--gpu` to the inference command:
+
+```bash
+python3 -m matcha.onnx.infer model.onnx --text "hey" --output-dir ./outputs --gpu
+```
+
+If you exported only Matcha to ONNX, this will write mel-spectrogram as graphs and `numpy` arrays to the output directory.
+If you embedded the vocoder in the exported graph, this will write `.wav` audio files to the output directory.
+
+If you exported only Matcha to ONNX, and you want to run a full TTS pipeline, you can pass a path to a vocoder model in `ONNX` format:
+
+```bash
+python3 -m matcha.onnx.infer model.onnx --text "hey" --output-dir ./outputs --vocoder hifigan.small.onnx
+```
+
+This will write `.wav` audio files to the output directory.
+
+## Citation information
+
+If you use our code or otherwise find this work useful, please cite our paper:
+
+```text
+@inproceedings{mehta2024matcha,
+  title={Matcha-{TTS}: A fast {TTS} architecture with conditional flow matching},
+  author={Mehta, Shivam and Tu, Ruibo and Beskow, Jonas and Sz{\'e}kely, {\'E}va and Henter, Gustav Eje},
+  booktitle={Proc. ICASSP},
+  year={2024}
+}
+```
+
+## Acknowledgements
+
+Since this code uses [Lightning-Hydra-Template](https://github.com/ashleve/lightning-hydra-template), you have all the powers that come with it.
+
+Other source code we would like to acknowledge:
+
+- [Coqui-TTS](https://github.com/coqui-ai/TTS/tree/dev): For helping me figure out how to make cython binaries pip installable and encouragement
+- [Hugging Face Diffusers](https://huggingface.co/): For their awesome diffusers library and its components
+- [Grad-TTS](https://github.com/huawei-noah/Speech-Backbones/tree/main/Grad-TTS): For the monotonic alignment search source code
+- [torchdyn](https://github.com/DiffEqML/torchdyn): Useful for trying other ODE solvers during research and development
+- [labml.ai](https://nn.labml.ai/transformers/rope/index.html): For the RoPE implementation
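The "ONNX Inference" section of the README above synthesises through the `matcha.onnx.infer` CLI. For readers who want to poke at the exported graph directly, here is a minimal, hedged sketch using `onnxruntime` in Python: it only loads the model and lists its inputs and outputs, because the exact tensor names and the required text preprocessing are not documented here and are therefore not assumed. The file name `model.onnx` follows the README example.

```python
# Sketch: inspect an exported Matcha ONNX graph with onnxruntime.
# Nothing about the tensor names is assumed; they are read from the graph.
# The documented end-to-end path remains `python3 -m matcha.onnx.infer`.
import onnxruntime as ort

sess = ort.InferenceSession("model.onnx", providers=["CPUExecutionProvider"])

for inp in sess.get_inputs():
    print("input :", inp.name, inp.shape, inp.type)
for out in sess.get_outputs():
    print("output:", out.name, out.shape, out.type)
```

Feeding the graph by hand would additionally require the same tokenisation and cleaning the CLI performs, so the CLI is still the safer route for actual synthesis.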
third_party/Matcha-TTS/configs/__init__.py
ADDED
@@ -0,0 +1 @@
+# this file is needed here to include configs when building project as a package
third_party/Matcha-TTS/configs/callbacks/default.yaml
ADDED
@@ -0,0 +1,5 @@
+defaults:
+  - model_checkpoint.yaml
+  - model_summary.yaml
+  - rich_progress_bar.yaml
+  - _self_
third_party/Matcha-TTS/configs/callbacks/none.yaml
ADDED
File without changes
third_party/Matcha-TTS/configs/callbacks/rich_progress_bar.yaml
ADDED
@@ -0,0 +1,4 @@
+# https://lightning.ai/docs/pytorch/latest/api/lightning.pytorch.callbacks.RichProgressBar.html
+
+rich_progress_bar:
+  _target_: lightning.pytorch.callbacks.RichProgressBar
third_party/Matcha-TTS/configs/data/hi-fi_en-US_female.yaml
ADDED
@@ -0,0 +1,14 @@
+defaults:
+  - ljspeech
+  - _self_
+
+# Dataset URL: https://ast-astrec.nict.go.jp/en/release/hi-fi-captain/
+_target_: matcha.data.text_mel_datamodule.TextMelDataModule
+name: hi-fi_en-US_female
+train_filelist_path: data/filelists/hi-fi-captain-en-us-female_train.txt
+valid_filelist_path: data/filelists/hi-fi-captain-en-us-female_val.txt
+batch_size: 32
+cleaners: [english_cleaners_piper]
+data_statistics: # Computed for this dataset
+  mel_mean: -6.38385
+  mel_std: 2.541796
third_party/Matcha-TTS/configs/data/ljspeech.yaml
ADDED
@@ -0,0 +1,21 @@
+_target_: matcha.data.text_mel_datamodule.TextMelDataModule
+name: ljspeech
+train_filelist_path: data/filelists/ljs_audio_text_train_filelist.txt
+valid_filelist_path: data/filelists/ljs_audio_text_val_filelist.txt
+batch_size: 32
+num_workers: 20
+pin_memory: True
+cleaners: [english_cleaners2]
+add_blank: True
+n_spks: 1
+n_fft: 1024
+n_feats: 80
+sample_rate: 22050
+hop_length: 256
+win_length: 1024
+f_min: 0
+f_max: 8000
+data_statistics: # Computed for ljspeech dataset
+  mel_mean: -5.536622
+  mel_std: 2.116101
+seed: ${seed}
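The config above fixes both the mel front end (`n_fft`, `hop_length`, `win_length`, `f_min`, `f_max`, `n_feats`, `sample_rate`) and the normalisation statistics (`mel_mean`, `mel_std`) that the README's `matcha-data-stats` step produces. The sketch below only illustrates how such settings and statistics relate, using `torchaudio`; whether `TextMelDataModule` computes its mels exactly this way is an assumption, and the example wav path is hypothetical.

```python
# Illustration only: ljspeech.yaml-style mel settings applied with torchaudio,
# plus a rough estimate of mel_mean / mel_std over log-mel values.
# Not claimed to be the datamodule's exact implementation.
import torch
import torchaudio

mel_fn = torchaudio.transforms.MelSpectrogram(
    sample_rate=22050, n_fft=1024, win_length=1024, hop_length=256,
    f_min=0.0, f_max=8000.0, n_mels=80,
)

def log_mel(wav_path: str) -> torch.Tensor:
    wav, sr = torchaudio.load(wav_path)
    assert sr == 22050, "resample first if the corpus is not 22.05 kHz"
    return torch.log(mel_fn(wav).clamp(min=1e-5))  # shape: (1, 80, frames)

# Hypothetical single-file "filelist"; in practice you would loop over the
# training filelist referenced above.
values = torch.cat([log_mel(p).flatten() for p in ["data/LJSpeech-1.1/wavs/LJ001-0001.wav"]])
print("mel_mean:", values.mean().item(), "mel_std:", values.std().item())
```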
third_party/Matcha-TTS/configs/data/vctk.yaml
ADDED
@@ -0,0 +1,14 @@
+defaults:
+  - ljspeech
+  - _self_
+
+_target_: matcha.data.text_mel_datamodule.TextMelDataModule
+name: vctk
+train_filelist_path: data/filelists/vctk_audio_sid_text_train_filelist.txt
+valid_filelist_path: data/filelists/vctk_audio_sid_text_val_filelist.txt
+batch_size: 32
+add_blank: True
+n_spks: 109
+data_statistics: # Computed for vctk dataset
+  mel_mean: -6.630575
+  mel_std: 2.482914
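Both `vctk.yaml` and `hi-fi_en-US_female.yaml` rely on the `defaults: [ljspeech, _self_]` pattern: the shared LJ Speech settings are loaded first, then the file's own keys override them because `_self_` comes last. In the real project that merge is done by Hydra during config composition; the snippet below only imitates the idea with plain OmegaConf so the override order is visible.

```python
# Sketch of Hydra-style defaults composition, approximated with OmegaConf.
# configs/data/ljspeech.yaml is the base file added in this diff; the override
# dict mirrors a few keys from vctk.yaml.
from omegaconf import OmegaConf

base = OmegaConf.load("configs/data/ljspeech.yaml")                  # shared front-end settings
override = OmegaConf.create({"name": "vctk", "n_spks": 109, "add_blank": True})

merged = OmegaConf.merge(base, override)                             # later values win on conflicts
print(merged.name, merged.n_spks, merged.sample_rate)                # sample_rate is inherited
```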
third_party/Matcha-TTS/configs/debug/default.yaml
ADDED
@@ -0,0 +1,35 @@
+# @package _global_
+
+# default debugging setup, runs 1 full epoch
+# other debugging configs can inherit from this one
+
+# overwrite task name so debugging logs are stored in separate folder
+task_name: "debug"
+
+# disable callbacks and loggers during debugging
+# callbacks: null
+# logger: null
+
+extras:
+  ignore_warnings: False
+  enforce_tags: False
+
+# sets level of all command line loggers to 'DEBUG'
+# https://hydra.cc/docs/tutorials/basic/running_your_app/logging/
+hydra:
+  job_logging:
+    root:
+      level: DEBUG
+
+  # use this to also set hydra loggers to 'DEBUG'
+  # verbose: True
+
+trainer:
+  max_epochs: 1
+  accelerator: cpu # debuggers don't like gpus
+  devices: 1 # debuggers don't like multiprocessing
+  detect_anomaly: true # raise exception if NaN or +/-inf is detected in any tensor
+
+data:
+  num_workers: 0 # debuggers don't like multiprocessing
+  pin_memory: False # disable gpu memory pin
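The `detect_anomaly: true` entry above corresponds to PyTorch autograd's anomaly mode, which records forward ops so that a NaN produced during backward is reported with a traceback pointing at the op that created it. The sketch below shows the underlying switch in plain PyTorch; that Lightning's `Trainer(detect_anomaly=...)` flag maps onto this mechanism is an assumption of the sketch.

```python
# Plain-PyTorch view of what the debug config's detect_anomaly flag enables.
import torch

torch.autograd.set_detect_anomaly(True)  # global switch; same idea as the config flag

model = torch.nn.Linear(4, 1)
opt = torch.optim.SGD(model.parameters(), lr=0.1)

x, target = torch.randn(8, 4), torch.randn(8, 1)
loss = torch.nn.functional.mse_loss(model(x), target)
loss.backward()   # if any backward op produces NaN, anomaly mode raises here with a forward traceback
opt.step()
```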
third_party/Matcha-TTS/configs/debug/fdr.yaml
ADDED
@@ -0,0 +1,9 @@
+# @package _global_
+
+# runs 1 train, 1 validation and 1 test step
+
+defaults:
+  - default
+
+trainer:
+  fast_dev_run: true
third_party/Matcha-TTS/configs/debug/limit.yaml
ADDED
@@ -0,0 +1,12 @@
+# @package _global_
+
+# uses only 1% of the training data and 5% of validation/test data
+
+defaults:
+  - default
+
+trainer:
+  max_epochs: 3
+  limit_train_batches: 0.01
+  limit_val_batches: 0.05
+  limit_test_batches: 0.05
third_party/Matcha-TTS/configs/debug/overfit.yaml
ADDED
@@ -0,0 +1,13 @@
+# @package _global_
+
+# overfits to 3 batches
+
+defaults:
+  - default
+
+trainer:
+  max_epochs: 20
+  overfit_batches: 3
+
+# model ckpt and early stopping need to be disabled during overfitting
+callbacks: null
third_party/Matcha-TTS/configs/debug/profiler.yaml
ADDED
@@ -0,0 +1,15 @@
+# @package _global_
+
+# runs with execution time profiling
+
+defaults:
+  - default
+
+trainer:
+  max_epochs: 1
+  # profiler: "simple"
+  profiler: "advanced"
+  # profiler: "pytorch"
+  accelerator: gpu
+
+  limit_train_batches: 0.02
third_party/Matcha-TTS/configs/eval.yaml
ADDED
@@ -0,0 +1,18 @@
+# @package _global_
+
+defaults:
+  - _self_
+  - data: mnist # choose datamodule with `test_dataloader()` for evaluation
+  - model: mnist
+  - logger: null
+  - trainer: default
+  - paths: default
+  - extras: default
+  - hydra: default
+
+task_name: "eval"
+
+tags: ["dev"]
+
+# passing checkpoint path is necessary for evaluation
+ckpt_path: ???
third_party/Matcha-TTS/configs/experiment/hifi_dataset_piper_phonemizer.yaml
ADDED
@@ -0,0 +1,14 @@
+# @package _global_
+
+# to execute this experiment run:
+# python train.py experiment=multispeaker
+
+defaults:
+  - override /data: hi-fi_en-US_female.yaml
+
+# all parameters below will be merged with parameters from default configurations set above
+# this allows you to overwrite only specified parameters
+
+tags: ["hi-fi", "single_speaker", "piper_phonemizer", "en_US", "female"]
+
+run_name: hi-fi_en-US_female_piper_phonemizer
third_party/Matcha-TTS/configs/experiment/ljspeech.yaml
ADDED
@@ -0,0 +1,14 @@
+# @package _global_
+
+# to execute this experiment run:
+# python train.py experiment=multispeaker
+
+defaults:
+  - override /data: ljspeech.yaml
+
+# all parameters below will be merged with parameters from default configurations set above
+# this allows you to overwrite only specified parameters
+
+tags: ["ljspeech"]
+
+run_name: ljspeech
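These experiment files are composed through Hydra's defaults list (`# @package _global_`, `override /data:`). A hedged sketch of composing one programmatically; it assumes the code runs from the Matcha-TTS repo root and that the primary config is `configs/train.yaml`, the usual lightning-hydra-template layout:

```python
# Compose the training config with the ljspeech experiment applied.
# Assumptions: executed from the Matcha-TTS repo root, and the primary
# config is configs/train.yaml (standard lightning-hydra-template layout).
from hydra import compose, initialize
from omegaconf import OmegaConf

with initialize(version_base="1.3", config_path="configs"):
    cfg = compose(config_name="train", overrides=["experiment=ljspeech"])
    print(OmegaConf.to_yaml(cfg.data))  # ljspeech datamodule settings
    print(cfg.run_name)                 # "ljspeech", set by the experiment file
```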
third_party/Matcha-TTS/configs/experiment/ljspeech_min_memory.yaml
ADDED
@@ -0,0 +1,18 @@
+# @package _global_
+
+# to execute this experiment run:
+# python train.py experiment=multispeaker
+
+defaults:
+  - override /data: ljspeech.yaml
+
+# all parameters below will be merged with parameters from default configurations set above
+# this allows you to overwrite only specified parameters
+
+tags: ["ljspeech"]
+
+run_name: ljspeech_min
+
+
+model:
+  out_size: 172
third_party/Matcha-TTS/configs/experiment/multispeaker.yaml
ADDED
@@ -0,0 +1,14 @@
+# @package _global_
+
+# to execute this experiment run:
+# python train.py experiment=multispeaker
+
+defaults:
+  - override /data: vctk.yaml
+
+# all parameters below will be merged with parameters from default configurations set above
+# this allows you to overwrite only specified parameters
+
+tags: ["multispeaker"]
+
+run_name: multispeaker
third_party/Matcha-TTS/configs/extras/default.yaml
ADDED
@@ -0,0 +1,8 @@
+# disable python warnings if they annoy you
+ignore_warnings: False
+
+# ask user for tags if none are provided in the config
+enforce_tags: True
+
+# pretty print config tree at the start of the run using Rich library
+print_config: True
third_party/Matcha-TTS/configs/hparams_search/mnist_optuna.yaml
ADDED
@@ -0,0 +1,52 @@
+# @package _global_
+
+# example hyperparameter optimization of some experiment with Optuna:
+# python train.py -m hparams_search=mnist_optuna experiment=example
+
+defaults:
+  - override /hydra/sweeper: optuna
+
+# choose metric which will be optimized by Optuna
+# make sure this is the correct name of some metric logged in lightning module!
+optimized_metric: "val/acc_best"
+
+# here we define Optuna hyperparameter search
+# it optimizes for value returned from function with @hydra.main decorator
+# docs: https://hydra.cc/docs/next/plugins/optuna_sweeper
+hydra:
+  mode: "MULTIRUN" # set hydra to multirun by default if this config is attached
+
+  sweeper:
+    _target_: hydra_plugins.hydra_optuna_sweeper.optuna_sweeper.OptunaSweeper
+
+    # storage URL to persist optimization results
+    # for example, you can use SQLite if you set 'sqlite:///example.db'
+    storage: null
+
+    # name of the study to persist optimization results
+    study_name: null
+
+    # number of parallel workers
+    n_jobs: 1
+
+    # 'minimize' or 'maximize' the objective
+    direction: maximize
+
+    # total number of runs that will be executed
+    n_trials: 20
+
+    # choose Optuna hyperparameter sampler
+    # you can choose bayesian sampler (tpe), random search (without optimization), grid sampler, and others
+    # docs: https://optuna.readthedocs.io/en/stable/reference/samplers.html
+    sampler:
+      _target_: optuna.samplers.TPESampler
+      seed: 1234
+      n_startup_trials: 10 # number of random sampling runs before optimization starts
+
+    # define hyperparameter search space
+    params:
+      model.optimizer.lr: interval(0.0001, 0.1)
+      data.batch_size: choice(32, 64, 128, 256)
+      model.net.lin1_size: choice(64, 128, 256)
+      model.net.lin2_size: choice(64, 128, 256)
+      model.net.lin3_size: choice(32, 64, 128, 256)
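The sweeper block above corresponds to a plain Optuna study. A rough standalone equivalent, where the objective is a stand-in rather than the project's training run:

```python
# Standalone Optuna equivalent of the sweeper settings above: TPE sampler,
# 10 random startup trials, 20 trials total, maximizing the objective.
import optuna

def objective(trial: optuna.Trial) -> float:
    lr = trial.suggest_float("model.optimizer.lr", 1e-4, 0.1)
    batch_size = trial.suggest_categorical("data.batch_size", [32, 64, 128, 256])
    # Placeholder objective; the real sweep would train and return val/acc_best.
    return -abs(lr - 0.01) + batch_size * 0.0

sampler = optuna.samplers.TPESampler(seed=1234, n_startup_trials=10)
study = optuna.create_study(direction="maximize", sampler=sampler)
study.optimize(objective, n_trials=20)
print(study.best_params)
```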
third_party/Matcha-TTS/configs/hydra/default.yaml
ADDED
@@ -0,0 +1,19 @@
+# https://hydra.cc/docs/configure_hydra/intro/
+
+# enable color logging
+defaults:
+  - override hydra_logging: colorlog
+  - override job_logging: colorlog
+
+# output directory, generated dynamically on each run
+run:
+  dir: ${paths.log_dir}/${task_name}/${run_name}/runs/${now:%Y-%m-%d}_${now:%H-%M-%S}
+sweep:
+  dir: ${paths.log_dir}/${task_name}/${run_name}/multiruns/${now:%Y-%m-%d}_${now:%H-%M-%S}
+  subdir: ${hydra.job.num}
+
+job_logging:
+  handlers:
+    file:
+      # Incorporates fix from https://github.com/facebookresearch/hydra/pull/2242
+      filename: ${hydra.runtime.output_dir}/${hydra.job.name}.log
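The `${now:...}` interpolations are resolved by a resolver that Hydra registers at launch. A small sketch that registers a similar, local stand-in resolver with OmegaConf to show how the run directory expands:

```python
# Mimic Hydra's ${now:...} resolver to see how the run directory expands.
# The resolver below is a local stand-in, not Hydra's own implementation.
from datetime import datetime
from omegaconf import OmegaConf

OmegaConf.register_new_resolver("now", lambda fmt: datetime.now().strftime(fmt))

cfg = OmegaConf.create(
    {
        "paths": {"log_dir": "logs"},
        "task_name": "train",
        "run_name": "ljspeech",
        "dir": "${paths.log_dir}/${task_name}/${run_name}/runs/${now:%Y-%m-%d}_${now:%H-%M-%S}",
    }
)
print(cfg.dir)  # e.g. logs/train/ljspeech/runs/2024-01-01_12-00-00
```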
third_party/Matcha-TTS/configs/local/.gitkeep
ADDED
File without changes
third_party/Matcha-TTS/configs/logger/aim.yaml
ADDED
@@ -0,0 +1,28 @@
+# https://aimstack.io/
+
+# example usage in lightning module:
+# https://github.com/aimhubio/aim/blob/main/examples/pytorch_lightning_track.py
+
+# open the Aim UI with the following command (run in the folder containing the `.aim` folder):
+# `aim up`
+
+aim:
+  _target_: aim.pytorch_lightning.AimLogger
+  repo: ${paths.root_dir} # .aim folder will be created here
+  # repo: "aim://ip_address:port" # can instead provide IP address pointing to Aim remote tracking server which manages the repo, see https://aimstack.readthedocs.io/en/latest/using/remote_tracking.html#
+
+  # aim allows to group runs under experiment name
+  experiment: null # any string, set to "default" if not specified
+
+  train_metric_prefix: "train/"
+  val_metric_prefix: "val/"
+  test_metric_prefix: "test/"
+
+  # sets the tracking interval in seconds for system usage metrics (CPU, GPU, memory, etc.)
+  system_tracking_interval: 10 # set to null to disable system metrics tracking
+
+  # enable/disable logging of system params such as installed packages, git info, env vars, etc.
+  log_system_params: true
+
+  # enable/disable tracking console logs (default value is true)
+  capture_terminal_logs: false # set to false to avoid infinite console log loop issue https://github.com/aimhubio/aim/issues/2550
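The `_target_` above points at Aim's Lightning logger, so instantiating it by hand looks roughly like this; it assumes the `aim` package is installed, and the keyword arguments simply mirror the fields of the config:

```python
# Direct construction of the logger that logger/aim.yaml describes via _target_.
# Assumes `pip install aim`; repo="." creates a .aim folder in the working directory.
from aim.pytorch_lightning import AimLogger

logger = AimLogger(
    repo=".",
    experiment=None,            # any string; Aim groups runs under this name
    train_metric_prefix="train/",
    val_metric_prefix="val/",
    test_metric_prefix="test/",
    system_tracking_interval=10,
    log_system_params=True,
    capture_terminal_logs=False,
)
# In the training script this would be passed as Trainer(logger=logger).
```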
third_party/Matcha-TTS/matcha/__pycache__/__init__.cpython-310.pyc
ADDED
Binary file (192 Bytes).
third_party/Matcha-TTS/matcha/hifigan/LICENSE
ADDED
@@ -0,0 +1,21 @@
+MIT License
+
+Copyright (c) 2020 Jungil Kong
+
+Permission is hereby granted, free of charge, to any person obtaining a copy
+of this software and associated documentation files (the "Software"), to deal
+in the Software without restriction, including without limitation the rights
+to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
+copies of the Software, and to permit persons to whom the Software is
+furnished to do so, subject to the following conditions:
+
+The above copyright notice and this permission notice shall be included in all
+copies or substantial portions of the Software.
+
+THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
+AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
+LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
+OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
+SOFTWARE.
third_party/Matcha-TTS/matcha/hifigan/README.md
ADDED
@@ -0,0 +1,101 @@
+# HiFi-GAN: Generative Adversarial Networks for Efficient and High Fidelity Speech Synthesis
+
+### Jungil Kong, Jaehyeon Kim, Jaekyoung Bae
+
+In our [paper](https://arxiv.org/abs/2010.05646),
+we proposed HiFi-GAN: a GAN-based model capable of generating high fidelity speech efficiently.<br/>
+We provide our implementation and pretrained models as open source in this repository.
+
+**Abstract :**
+Several recent work on speech synthesis have employed generative adversarial networks (GANs) to produce raw waveforms.
+Although such methods improve the sampling efficiency and memory usage,
+their sample quality has not yet reached that of autoregressive and flow-based generative models.
+In this work, we propose HiFi-GAN, which achieves both efficient and high-fidelity speech synthesis.
+As speech audio consists of sinusoidal signals with various periods,
+we demonstrate that modeling periodic patterns of an audio is crucial for enhancing sample quality.
+A subjective human evaluation (mean opinion score, MOS) of a single speaker dataset indicates that our proposed method
+demonstrates similarity to human quality while generating 22.05 kHz high-fidelity audio 167.9 times faster than
+real-time on a single V100 GPU. We further show the generality of HiFi-GAN to the mel-spectrogram inversion of unseen
+speakers and end-to-end speech synthesis. Finally, a small footprint version of HiFi-GAN generates samples 13.4 times
+faster than real-time on CPU with comparable quality to an autoregressive counterpart.
+
+Visit our [demo website](https://jik876.github.io/hifi-gan-demo/) for audio samples.
+
+## Pre-requisites
+
+1. Python >= 3.6
+2. Clone this repository.
+3. Install python requirements. Please refer [requirements.txt](requirements.txt)
+4. Download and extract the [LJ Speech dataset](https://keithito.com/LJ-Speech-Dataset/).
+   And move all wav files to `LJSpeech-1.1/wavs`
+
+## Training
+
+```
+python train.py --config config_v1.json
+```
+
+To train V2 or V3 Generator, replace `config_v1.json` with `config_v2.json` or `config_v3.json`.<br>
+Checkpoints and copy of the configuration file are saved in `cp_hifigan` directory by default.<br>
+You can change the path by adding `--checkpoint_path` option.
+
+Validation loss during training with V1 generator.<br>
+
+
+## Pretrained Model
+
+You can also use pretrained models we provide.<br/>
+[Download pretrained models](https://drive.google.com/drive/folders/1-eEYTB5Av9jNql0WGBlRoi-WH2J7bp5Y?usp=sharing)<br/>
+Details of each folder are as in follows:
+
+| Folder Name  | Generator | Dataset   | Fine-Tuned                                              |
+| ------------ | --------- | --------- | ------------------------------------------------------ |
+| LJ_V1        | V1        | LJSpeech  | No                                                      |
+| LJ_V2        | V2        | LJSpeech  | No                                                      |
+| LJ_V3        | V3        | LJSpeech  | No                                                      |
+| LJ_FT_T2_V1  | V1        | LJSpeech  | Yes ([Tacotron2](https://github.com/NVIDIA/tacotron2))  |
+| LJ_FT_T2_V2  | V2        | LJSpeech  | Yes ([Tacotron2](https://github.com/NVIDIA/tacotron2))  |
+| LJ_FT_T2_V3  | V3        | LJSpeech  | Yes ([Tacotron2](https://github.com/NVIDIA/tacotron2))  |
+| VCTK_V1      | V1        | VCTK      | No                                                      |
+| VCTK_V2      | V2        | VCTK      | No                                                      |
+| VCTK_V3      | V3        | VCTK      | No                                                      |
+| UNIVERSAL_V1 | V1        | Universal | No                                                      |
+
+We provide the universal model with discriminator weights that can be used as a base for transfer learning to other datasets.
+
+## Fine-Tuning
+
+1. Generate mel-spectrograms in numpy format using [Tacotron2](https://github.com/NVIDIA/tacotron2) with teacher-forcing.<br/>
+   The file name of the generated mel-spectrogram should match the audio file and the extension should be `.npy`.<br/>
+   Example:
+   ` Audio File : LJ001-0001.wav
+   Mel-Spectrogram File : LJ001-0001.npy`
+2. Create `ft_dataset` folder and copy the generated mel-spectrogram files into it.<br/>
+3. Run the following command.
+   ```
+   python train.py --fine_tuning True --config config_v1.json
+   ```
+   For other command line options, please refer to the training section.
+
+## Inference from wav file
+
+1. Make `test_files` directory and copy wav files into the directory.
+2. Run the following command.
+   ` python inference.py --checkpoint_file [generator checkpoint file path]`
+   Generated wav files are saved in `generated_files` by default.<br>
+   You can change the path by adding `--output_dir` option.
+
+## Inference for end-to-end speech synthesis
+
+1. Make `test_mel_files` directory and copy generated mel-spectrogram files into the directory.<br>
+   You can generate mel-spectrograms using [Tacotron2](https://github.com/NVIDIA/tacotron2),
+   [Glow-TTS](https://github.com/jaywalnut310/glow-tts) and so forth.
+2. Run the following command.
+   ` python inference_e2e.py --checkpoint_file [generator checkpoint file path]`
+   Generated wav files are saved in `generated_files_from_mel` by default.<br>
+   You can change the path by adding `--output_dir` option.
+
+## Acknowledgements
+
+We referred to [WaveGlow](https://github.com/NVIDIA/waveglow), [MelGAN](https://github.com/descriptinc/melgan-neurips)
+and [Tacotron2](https://github.com/NVIDIA/tacotron2) to implement this.
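One detail of the fine-tuning recipe above is the filename convention: each wav needs a teacher-forced mel saved as the same basename with a `.npy` extension inside `ft_dataset`. A hedged helper sketch; the Tacotron2 call is a placeholder, and only the naming convention comes from the README:

```python
# Sketch of the naming convention HiFi-GAN fine-tuning expects:
# LJ001-0001.wav  ->  ft_dataset/LJ001-0001.npy (teacher-forced mel from Tacotron2).
from pathlib import Path
import numpy as np

def save_mel_for(wav_path: Path, mel: np.ndarray, out_dir: Path = Path("ft_dataset")) -> Path:
    out_dir.mkdir(exist_ok=True)
    out_path = out_dir / (wav_path.stem + ".npy")  # same basename, .npy extension
    np.save(out_path, mel)
    return out_path

# mel = tacotron2_teacher_forced_mel("LJ001-0001.wav")  # placeholder, not a real API
# save_mel_for(Path("LJ001-0001.wav"), mel)
```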
third_party/Matcha-TTS/matcha/hifigan/__init__.py
ADDED
File without changes