---
language:
- en
license: apache-2.0
tags:
- roberta
- classification
- dialog state tracking
- conversational system
- task-oriented dialog
datasets:
- ConvLab/sgd
metrics:
- Joint Goal Accuracy
- Slot F1

model-index:
- name: setsumbt-dst-sgd
  results:
  - task:
      type: classification
      name: dialog state tracking
    dataset:
      type: ConvLab/sgd
      name: SGD
      split: test
    metrics:
    - type: Joint Goal Accuracy
      value: 20.0
      name: JGA
    - type: Slot F1
      value: 58.8
      name: Slot F1
---

# SetSUMBT-dst-sgd

This model is a fine-tuned version of [roberta-base](https://huggingface.co/roberta-base) using the [SetSUMBT](https://github.com/ConvLab/ConvLab-3/tree/master/convlab/dst/setsumbt) architecture, trained on [Schema-Guided Dialog](https://huggingface.co/datasets/ConvLab/sgd).

Refer to [ConvLab-3](https://github.com/ConvLab/ConvLab-3) for model description and usage.

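As a rough illustration only, the sketch below prepares tokenizer inputs for a single dialog turn. It assumes the checkpoint is published as `ConvLab/setsumbt-dst-sgd` on the Hugging Face Hub and ships a RoBERTa tokenizer; the SetSUMBT dialog-state head itself is custom, so full state tracking should be run through ConvLab-3 as noted above.

```python
# Minimal tokenization sketch. Assumptions: the Hub repo id "ConvLab/setsumbt-dst-sgd"
# and the example utterances are illustrative only; actual DST inference goes through
# the SetSUMBT tracker in ConvLab-3, not a bare transformers model.
from transformers import AutoTokenizer

MODEL_ID = "ConvLab/setsumbt-dst-sgd"  # assumed repository id

tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)

# Encode one user/system turn pair, as a turn-based tracker typically consumes them.
system_utterance = "Sure, which city would you like to dine in?"
user_utterance = "I want a table for two at an Italian restaurant in San Jose."
encoding = tokenizer(user_utterance, system_utterance,
                     return_tensors="pt", truncation=True,
                     padding="max_length", max_length=64)
print(encoding["input_ids"].shape)  # torch.Size([1, 64])
```
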
## Training procedure

### Training hyperparameters

The following hyperparameters were used during training (a sketch of this configuration follows the list):
- learning_rate: 0.00001
- train_batch_size: 3
- eval_batch_size: 16
- seed: 0
- gradient_accumulation_steps: 1
- optimizer: AdamW
- lr_scheduler_type: linear
- num_epochs: 50.0
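
As a minimal sketch (not the actual ConvLab-3 training script), the snippet below shows how these values map onto a PyTorch AdamW optimizer with a linear learning-rate schedule; the model and the number of update steps are placeholders.

```python
# Sketch only: "model" and "steps_per_epoch" are placeholders; the real training loop
# lives in the SetSUMBT code in ConvLab-3.
import torch
from transformers import get_linear_schedule_with_warmup

model = torch.nn.Linear(768, 2)            # stand-in for the SetSUMBT model
num_epochs = 50
steps_per_epoch = 1000                     # depends on dataset size at train_batch_size 3
total_steps = num_epochs * steps_per_epoch

optimizer = torch.optim.AdamW(model.parameters(), lr=1e-5)       # learning_rate: 0.00001
scheduler = get_linear_schedule_with_warmup(                     # lr_scheduler_type: linear
    optimizer, num_warmup_steps=0, num_training_steps=total_steps)
```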

### Framework versions

- Transformers 4.17.0
- Pytorch 1.8.0+cu110
- Datasets 2.3.2
- Tokenizers 0.12.1