# AI Emergency Kit - GGUF Model
AI Emergency Kit - Your intelligent crisis response assistant. Fine-tuned Mistral-7B model in GGUF format for efficient local deployment.
## Model Details
- Base Model: unsloth/Mistral-7B-Instruct-v0.2
- Format: GGUF (Q4_K_M, 4-bit quantization)
- File: crisis-agent-q4_k_m-20260210.gguf
- File Size: 4.07 GB
## Usage

### Using llama.cpp
```shell
# Download the model
git lfs install
git clone https://huggingface.co/ianktoo/crisis-agent-v2-gguf

# Run inference (the GGUF file lives inside the cloned repository)
./llama-cli -m crisis-agent-v2-gguf/crisis-agent-q4_k_m-20260210.gguf -p "Your prompt here"
```
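For an interactive chat session rather than a one-shot prompt, llama.cpp's conversation mode can be used. The context size and sampling values below are illustrative defaults, not settings tuned for this model:

```shell
# Interactive chat: -cnv enables conversation mode, -c sets the context window
./llama-cli -m crisis-agent-q4_k_m-20260210.gguf \
  -cnv \
  -c 4096 \
  --temp 0.7 \
  --top-p 0.9
```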
### Using LM Studio
- Download the GGUF file from this repository
- Import into LM Studio
- Load and chat!
### Using Ollama
```shell
# Create Modelfile
cat > Modelfile << EOF
FROM crisis-agent-q4_k_m-20260210.gguf
PARAMETER temperature 0.7
PARAMETER top_p 0.9
EOF

# Create the model
ollama create crisis-agent -f Modelfile

# Run it
ollama run crisis-agent
```
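A `SYSTEM` line in the Modelfile pins the assistant's role across sessions; the prompt wording below is purely illustrative. A quick `grep` confirms the Modelfile was written as intended:

```shell
# Write a Modelfile with an illustrative system prompt (hypothetical wording)
cat > Modelfile << 'EOF'
FROM crisis-agent-q4_k_m-20260210.gguf
SYSTEM You are a calm, structured crisis-response assistant.
PARAMETER temperature 0.7
PARAMETER top_p 0.9
EOF

# Confirm both sampling parameters made it into the file
grep -c '^PARAMETER' Modelfile
# → 2
```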
## About AI Emergency Kit
AI Emergency Kit is designed to be your reliable AI companion during crisis situations. It provides structured, JSON-formatted responses with actionable guidance, resource recommendations, and step-by-step instructions to help navigate emergency scenarios.
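Because responses are JSON-formatted, they can be post-processed with standard tooling such as `jq`. The field names below are purely illustrative — the actual schema is whatever the fine-tune emits:

```shell
# Hypothetical response shape, for illustration only
response='{"severity":"high","actions":["Move to higher ground","Call local emergency services"]}'

# Extract the first recommended action
echo "$response" | jq -r '.actions[0]'
# → Move to higher ground
```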
## Limitations
- The model was trained on synthetic crisis scenarios
- Responses should be validated by human experts
- Not intended for real-time emergency response without human oversight