Music Era Classifier (Qwen3-4B-Instruct, Few-Shot)

Model Description

This model is not a fine-tuned model in the traditional sense; it is a text classifier built on top of the Qwen3-4B-Instruct large language model. Classification is performed through in-context learning: the model is prompted with labeled examples and asked to label new text, so no weight updates or traditional fine-tuning are required.

Intended Use

This model is designed to classify short text descriptions of musical pieces into one of several historical eras, labeled 0, 1, 2, and 3 as defined in the original dataset. Classification is performed by a Python script that loads the GGUF model and applies few-shot prompting.
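
A minimal sketch of what such a script could look like, assuming the llama-cpp-python bindings; the GGUF file name, prompt wording, and sampling settings below are illustrative placeholders rather than the exact values used for this model:

```python
# Sketch: classify one description by few-shot prompting a local GGUF model.
# Assumes the llama-cpp-python package; the model file name, the prompt text,
# and the generation settings are illustrative placeholders.
from llama_cpp import Llama

llm = Llama(model_path="qwen3-4b-instruct.gguf", n_ctx=4096, verbose=False)

# In-context examples followed by the query to be classified.
prompt = (
    "Classify the musical era (0-3) of each description.\n\n"
    "Description: A harpsichord piece with ornate counterpoint.\nEra: 0\n\n"
    "Description: A lush orchestral tone poem with chromatic harmony.\nEra: 2\n\n"
    "Description: A minimalist piece built on repeating piano cells.\nEra:"
)

# Greedy decoding with a short completion: the model should emit only the label.
out = llm(prompt, max_tokens=4, temperature=0.0, stop=["\n"])
print("Predicted era:", out["choices"][0]["text"].strip())
```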

Dataset

The classification examples are drawn from the augmented split of the samder03/2025-24679-text-dataset dataset. The model's performance was evaluated on the original split of the same dataset to provide a robust measure of its real-world accuracy.
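
For reference, both splits can be loaded with the Hugging Face datasets library. This is a minimal sketch: the split names follow the description above, and the column names shown at the end depend on the dataset's actual schema, which is not documented here.

```python
# Sketch: load the few-shot example pool and the evaluation split.
# Assumes the Hugging Face `datasets` library; split names follow the
# description above, column layout is an assumption about the schema.
from datasets import load_dataset

dataset = load_dataset("samder03/2025-24679-text-dataset")

example_pool = dataset["augmented"]  # source of in-context examples
eval_split = dataset["original"]     # held-out split used for evaluation

print(example_pool)
print(eval_split[0])
```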

Evaluation Results

The performance of the model was tested with varying numbers of examples (shots) to demonstrate the effectiveness of the few-shot prompting technique. The results show that providing more context significantly improves the model's ability to classify correctly.

| Prompting Method   | Accuracy | Weighted F1 |
|--------------------|----------|-------------|
| Zero-Shot          | 0.2400   | 0.1197      |
| Adaptive One-Shot  | 1.0000   | 1.0000      |
| Adaptive Five-Shot | 1.0000   | 1.0000      |
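
The two metrics above can be computed from gold labels and model predictions with scikit-learn; the label lists in this sketch are placeholders, not actual outputs from this model.

```python
# Sketch: compute accuracy and weighted F1 for a set of predictions.
# Assumes scikit-learn; the label lists below are placeholders only.
from sklearn.metrics import accuracy_score, f1_score

y_true = [0, 1, 2, 3, 1, 2]  # gold era labels from the evaluation split
y_pred = [0, 1, 2, 3, 1, 1]  # labels predicted by the few-shot classifier

print("Accuracy:   ", accuracy_score(y_true, y_pred))
print("Weighted F1:", f1_score(y_true, y_pred, average="weighted"))
```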

Potential Errors

Possible data leakage, for the same reason as stated in the its-zion-18/music-text-distilbert-predictor model card.
