# Model Card for Digita
Digita has been fine-tuned (QLoRA) on a proprietary dataset so that it responds in a way similar to the Belgian State Archive's customer support. It was essentially trained on data from the "digit" mailbox of the DiVa section.
## Model Details

### Model Description
Digita has been trained in the context of the Arkey project, jointly run by the Belgian State Archive and UCLouvain. The purpose of this project is to ease access to the archives held at both institutions for the general public.

In that context, Digita was trained as an experiment in tuning a chatbot able to reply to user requests in a way that feels similar to the actual people handling these matters at the BSA.
- Developed by: Xavier GILLARD
- Funded by: BELSPO (Arkey project)
- Model type: conversational (fine-tuned from Llama-3.1-8b-Instruct-bnb-4bit)
- Language(s) (NLP): Dutch, French, German, English
- License: MIT License
- Finetuned from model: unsloth/meta-llama-3.1-8b-instruct-bnb-4bit
### Model Sources
All scripts, notebooks, etc. (except the dataset itself) that were used to prepare the dataset and fine-tune the model are available on GitHub.

- Repository: xgillard/corpus_digit
## Uses
Prototype a chatbot to help the users of the Belgian State Archive in the context of the Arkey project.
## Limitations
This model has been fine-tuned on data from the 2022-2024 period, which means that the majority of the data predates the introduction of the AGATHA platform. As a consequence, while some of the information provided by this model can be useful, much of it is essentially stale at this point.

The second major limitation I envision for this model stems from its sheer size: 8B parameters is probably not going to cut it for a majority of cases, especially if one envisions using the quantized versions. However, I think this model is a good proof of concept: it shows what could be achieved if resources were allocated to perform the same task with a larger model on a more up-to-date dataset (or one limited to 2024). That was simply not feasible on my personal machine.
### Recommendations
If anyone is serious about using this model as a basis to help the BSA personnel, or to deploy a user-facing chatbot at the BSA, I would recommend completely redoing the finetuning with a larger model and an updated (more recent) version of the dataset.
## How to Get Started with the Model
I recommend using the QLoRA adapter directly through unsloth, as this is the asset that seemed to yield the best results while maintaining a similar VRAM footprint (about 7 GB). Here is how to get started:
```python
from unsloth import FastLanguageModel
from transformers import TextStreamer

# Load the QLoRA adapter together with its 4-bit quantized base model
# (roughly 7 GB of VRAM).
model, tok = FastLanguageModel.from_pretrained(
    "xaviergillard/digita",
    load_in_4bit=True,
)
# Switch unsloth to its fast inference mode.
model = FastLanguageModel.for_inference(model)
```
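From there, a reply can be generated with the tokenizer's built-in Llama-3.1 chat template. This is a minimal usage sketch; the question below is just an illustrative example.

```python
# Illustrative prompt; any user question works the same way.
messages = [
    {"role": "user", "content": "How can I request access to a notarial archive?"},
]
inputs = tok.apply_chat_template(
    messages,
    add_generation_prompt=True,  # append the assistant header so the model answers
    return_tensors="pt",
).to(model.device)

# Stream the answer token by token, hiding the echoed prompt.
streamer = TextStreamer(tok, skip_prompt=True)
_ = model.generate(input_ids=inputs, streamer=streamer, max_new_tokens=256)
```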
## Training Details

### Training Data
Original training data consists of a curated dataset of response emails sent by the DiVa customer service. The details of this dataset will not be disclosed.
### Training Procedure

#### Preprocessing
- Cleaning up the encoding
- Conversation restructuring (see the sketch after this list)
- Conversation classification
- Data Augmentation
- Filtering
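The actual preprocessing scripts live in the xgillard/corpus_digit repository. Purely as an illustration of what the first two steps amount to (the function name, field names, and thread structure below are hypothetical), an email thread gets cleaned and recast as a chat-format conversation:

```python
import ftfy  # fixes mojibake and broken encodings

def restructure(raw_thread: list[dict]) -> list[dict]:
    """Turn one email thread from the mailbox into a chat-format conversation."""
    conversation = []
    for message in raw_thread:
        # Replies sent by the customer service become "assistant" turns.
        role = "assistant" if message["sender"] == "diva" else "user"
        conversation.append({
            "role": role,
            "content": ftfy.fix_text(message["body"]),  # encoding cleanup
        })
    return conversation
```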
#### Training Hyperparameters
- Training regime: QLoRA adapter based on a 4-bit quantization of Llama-3.1-8b-Instruct by unsloth (see the sketch after this list)
- LoRA rank: 16
- LoRA alpha: 16
- Base model quantization: 4 bits
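In unsloth terms, that configuration roughly corresponds to the setup below. The target modules are unsloth's usual defaults, listed here as an assumption rather than a confirmed detail of this training run.

```python
from unsloth import FastLanguageModel

# Load the 4-bit quantized base model...
model, tok = FastLanguageModel.from_pretrained(
    "unsloth/meta-llama-3.1-8b-instruct-bnb-4bit",
    load_in_4bit=True,  # base model quantization: 4 bits
)
# ...and attach the LoRA adapter to it.
model = FastLanguageModel.get_peft_model(
    model,
    r=16,           # LoRA rank
    lora_alpha=16,  # LoRA alpha
    # Assumed target modules (unsloth's common defaults):
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj",
                    "gate_proj", "up_proj", "down_proj"],
)
```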
All training/eval metrics can be consulted here.
## Evaluation
So far, evaluation has not been very thorough; it mostly consisted of:

- comparing the eval loss to the training loss during training (a minimal sketch of that check follows this list),
- ensuring the loss curves looked reasonable,
- performing a few manual tests on the trained & quantized models.
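As a minimal sketch of the loss-curve check, assuming the loss values have been collected from the trainer logs (the numbers below are made up):

```python
import matplotlib.pyplot as plt

# Hypothetical values; the real ones come from the trainer logs.
train_loss = [1.92, 1.41, 1.18, 1.05, 0.97]
eval_loss = [1.95, 1.52, 1.33, 1.27, 1.26]

# Eval loss tracking the training loss means the adapter still generalizes;
# a widening gap would signal overfitting.
plt.plot(train_loss, label="train")
plt.plot(eval_loss, label="eval")
plt.xlabel("evaluation step")
plt.ylabel("loss")
plt.legend()
plt.show()
```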
## Model Card Authors
Xavier Gillard
### Framework versions
- PEFT 0.18.0