---
language: vi
tags:
- chatbot
- vietnamese
- conversational
license: mit
datasets:
- your_dataset_name
metrics:
- perplexity
- bleu
---

# Model Card for Vistral-7B-LegalBizAI

## Model Details

- **Model Name:** Vistral-7B-LegalBizAI
- **Version:** 1.0
- **Model Type:** Causal Language Model
- **Architecture:** Transformer-based model with 7 billion parameters
- **Quantization:** 8-bit quantized (GGUF) for efficiency

## Usage

### How to use

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "nhotin/vistral7B-legalbizai-q8-gguf"

tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)

# Tokenize the prompt and generate a response
input_text = "Your text here"
inputs = tokenizer(input_text, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=256)

print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
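
The repository name indicates the weights are shipped as an 8-bit GGUF file. If `AutoModelForCausalLM.from_pretrained` cannot load the checkpoint directly, it can instead be run with llama-cpp-python. The sketch below is illustrative only: the `filename` glob is an assumption, so check the repository's file list for the actual `.gguf` filename before running.

```python
# Illustrative sketch: loading the 8-bit GGUF checkpoint with llama-cpp-python.
from llama_cpp import Llama

llm = Llama.from_pretrained(
    repo_id="nhotin/vistral7B-legalbizai-q8-gguf",
    filename="*q8*.gguf",  # hypothetical glob; replace with the real file name from the repo
    n_ctx=4096,            # context window; lower this to reduce memory use
)

# Chat-style generation through the OpenAI-like completion API
response = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Your text here"}],
    max_tokens=256,
)
print(response["choices"][0]["message"]["content"])
```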