Request for the full training & test datasets (with vocals)
Hi Anurag,
I downloaded edwixx/karaoke_songs_long, and it appears to contain mainly instrumentals/backing tracks without vocal stems. I'm looking for a complete dataset to train and evaluate lead vs. backing vocal separation. Could you please clarify the following:
1. Full training dataset:
Is there a full training dataset that includes mixture + vocals (lead/backing)? If so, what is the proper way to obtain it or request access (link or procedure)?
2. Test/validation set:
Do you provide a corresponding test/validation dataset with ground-truth lead/backing vocal stems (or at least the vocal stems)? If yes, how can I obtain it?
3. Vocals not hosted on Hugging Face:
If the vocal stems are not hosted on Hugging Face, is there a recommended public source or an official/suggested method to build the complete dataset?
Thank you very much for your time and guidance.
Best regards,
Jiffry
These are just the karaoke/instrumental tracks; there are no vocals for them as of now.
Hi edwixx,
Thank you so much for your quick response and for clarifying the dataset contents.
I have a follow-up question: Since karaoke_songs_long contains only instrumentals/backing tracks without corresponding vocal stems, I'm curious about the intended use case for training purposes.
Could you please share some insights on:
- How can this instrumental-only dataset be used for karaoke-related model training? For example, is it intended to be paired with external vocal datasets to synthesize training mixtures? (A rough sketch of what I mean follows this list.)
- What was your original workflow or approach when you created/used this dataset? Any guidance would be greatly appreciated.
- Are there any recommended vocal datasets that pair well with these instrumentals for source separation training?
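To make the first question concrete, here is a minimal sketch of the pairing I have in mind: take one of these instrumentals, overlay a vocal stem from some external corpus, and save the sum as a training mixture. Everything here (paths, the gain handling, the sample-rate/channel-layout assumptions) is my own guess, not anything from this dataset:

```python
import numpy as np
import soundfile as sf

def synthesize_mixture(instrumental_path, vocal_path, out_path, vocal_gain_db=0.0):
    """Overlay an external vocal stem on an instrumental to build a training pair."""
    inst, sr_i = sf.read(instrumental_path)
    voc, sr_v = sf.read(vocal_path)
    # Assumes both files share a sample rate and channel layout;
    # resample/remix first if they differ.
    assert sr_i == sr_v, "sample rates differ; resample one stem first"

    # Trim to the shorter length so the two signals can be summed sample-wise.
    n = min(len(inst), len(voc))
    inst, voc = inst[:n], voc[:n]

    # Scale the vocal; randomizing this gain per pair is a common augmentation.
    voc = voc * (10.0 ** (vocal_gain_db / 20.0))

    mix = inst + voc
    peak = np.max(np.abs(mix))
    if peak > 1.0:  # normalize only when the sum would clip
        mix = mix / peak

    sf.write(out_path, mix, sr_i)

# Hypothetical usage with placeholder paths:
synthesize_mixture("instrumentals/0042.wav", "vocals/some_corpus/0017.wav", "mixtures/0042.wav")
```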
I appreciate your time and any suggestions you can offer. Thank you again for making this dataset available!
Best regards
Actually, it's just for training/fine-tuning a model for music generation.
Questions about Fine-tuning, Dataset Creation, and Training Best Practices
Hi @edwixx ,
Thank you for sharing this amazing karaoke dataset! I have a few questions about your work and would really appreciate your insights:
1. Fine-tuning Approach:
- How did you fine-tune the music generation models on this dataset?
- What base model did you start with (MusicGen, AudioLDM, or others)?
- Could you share your training configuration (learning rate, batch size, epochs, hardware)?
- Any specific tips for fine-tuning on karaoke/instrumental data?
2. Dataset Creation Pipeline:
- How did you collect and prepare this multi-track dataset?
- Did you use source separation models (Demucs, MDX-Net, BS-RoFormer, etc.) to extract the instrumentals? (My working assumption is sketched after this list.)
- Or did you have access to original stems/official karaoke tracks?
- What was your quality control process: automated metrics (e.g., SDR/SIR scores, also sketched below), manual listening and filtering, or both?
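For context on the separation question, this is the kind of workflow I'm assuming (not a claim about how this dataset was actually built): Demucs in two-stem mode splits a mixture into vocals and no_vocals (the instrumental). A minimal sketch via its CLI, with placeholder paths:

```python
import subprocess

def extract_instrumental(mixture_path, out_dir="separated"):
    """Split a mixture into vocals / no_vocals with Demucs' two-stem mode."""
    subprocess.run(
        ["demucs", "--two-stems=vocals", "-o", out_dir, mixture_path],
        check=True,
    )
    # With the default htdemucs model, outputs land under
    # separated/htdemucs/<track_name>/vocals.wav and no_vocals.wav.

extract_instrumental("mixtures/0042.wav")
```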
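And to be explicit about what I mean by automated metrics: a bare-bones SDR (ignoring the scaling/permutation refinements that tools like museval add) is just the energy ratio between the reference stem and the residual error:

```python
import numpy as np

def sdr_db(reference, estimate, eps=1e-9):
    """Plain signal-to-distortion ratio in dB: 10*log10(|ref|^2 / |ref - est|^2)."""
    num = np.sum(reference ** 2)
    den = np.sum((reference - estimate) ** 2)
    return 10.0 * np.log10((num + eps) / (den + eps))
```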
3. Training Resources & Recommendations:
- Could you recommend any good tutorials, blogs, or papers about training music separation models?
- What resources did you find most helpful when building this dataset and training pipeline?
- Any tips for someone starting out in music source separation research?
I'm working on a music source separation project and trying to learn the best practices from experienced practitioners like yourself. Your work is truly inspiring!
Thank you so much for any guidance you can provide!
Best regards
You can refer to the stable-audio-tools repo here.
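(For anyone finding this thread later: if I recall the stable-audio-tools README correctly, training/fine-tuning there is launched through its train.py with a model config plus a dataset config. The sketch below uses placeholder paths and names and is only my reading of that README, not the exact setup used for this dataset.)

```python
import json
import subprocess

# Minimal "audio_dir" dataset config in the shape the stable-audio-tools
# README describes; the id and path are placeholders.
dataset_config = {
    "dataset_type": "audio_dir",
    "datasets": [{"id": "karaoke", "path": "/data/karaoke_songs_long"}],
    "random_crop": True,
}
with open("dataset.json", "w") as f:
    json.dump(dataset_config, f)

# model.json would come from the repo's example configs (or from an
# unwrapped pretrained checkpoint when fine-tuning).
subprocess.run(
    [
        "python", "train.py",
        "--dataset-config", "dataset.json",
        "--model-config", "model.json",
        "--name", "karaoke_finetune",
    ],
    check=True,
)
```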