---
license: cc-by-nc-sa-4.0
datasets:
- AimonLabs/HDM-Bench
language:
- en
---
<img src="https://lh7-rt.googleusercontent.com/docsz/AD_4nXf_XGI0bexqeySNP6YA-yzUY-JRfNNM9A5p4DImWojxhzMUfyZvVu2hcY2XUZPXgPynBdNCR1xen0gzNbMugvFfK37VwSJ9iim5mARIPz1C-wyh3K7zUInxm2Mvy9rL7Zcb7T_3Mw?key=x9HqmDQsJmBeqyuiakDxe8Cs" alt="Aimon Labs Inc" style="background-color: white;" width="400"/>
<img src="https://huggingface.co/AimonLabs/hallucination-detection-model/" width="400" alt="HDM-2 Explainer"/>
# Model Card for Hallucination Detection Model (HDM-2-3B)
<table>
<tr>
<td><strong>Paper:</strong></td>
<td><a href="https://arxiv.org/abs/2504.07069"><img src="https://img.shields.io/badge/arXiv-2504.07069-b31b1b.svg" alt="arXiv Badge" /></a> <em>HalluciNot: Hallucination Detection Through Context and Common Knowledge Verification.</em></td>
</tr>
<tr>
<td><strong>Notebook:</strong></td>
<td><a href="https://colab.research.google.com/drive/1HclyB06t-wZVIxuK6AlyifRaf77vO5Yz?usp=sharing"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Colab Badge" /></a></td>
</tr>
<tr>
<td><strong>GitHub Repository:</strong></td>
<td><a href="https://github.com/aimonlabs/hallucination-detection-model"><img src="https://img.shields.io/badge/GitHub-100000?style=for-the-badge&logo=github&logoColor=white" alt="GitHub Badge" /></a></td>
</tr>
<tr>
<td><strong>HDM-Bench Dataset:</strong></td>
<td><a href="https://huggingface.co/datasets/AimonLabs/HDM-Bench"><img src="https://huggingface.co/datasets/huggingface/badges/resolve/main/dataset-on-hf-md-dark.svg" alt="HF Dataset Badge" /></a></td>
</tr>
<tr>
<td><strong>HDM-2-3B Model:</strong></td>
<td><a href="https://huggingface.co/AimonLabs/hallucination-detection-model"><img src="https://huggingface.co/datasets/huggingface/badges/resolve/main/model-on-hf-md-dark.svg" alt="HF Model Badge" /></a></td>
</tr>
</table>
## Introduction
Most judge models used in industry today are not specialized for hallucination evaluation.
Developers who rely on them often struggle with score inconsistency, high variance, high latency, high cost, and prompt sensitivity.
HDM-2 addresses these challenges while providing industry-first, state-of-the-art capabilities.
## Highlights:
- Outperforms existing baselines on RagTruth, TruthfulQA, and our new HDM-Bench benchmark.
- **Context-based** hallucination evaluation against user-provided or retrieved documents.
- **Common knowledge** checks that flag contradictions of widely accepted facts.
- **Phrase-, token-, and sentence-level** hallucination identification with token-level probability **scores**.
- A generalized model that works well across domains such as Finance, Healthcare, Legal, and Insurance.
- Operates within a **latency** budget of **500 ms** on a single L4 GPU, which is especially beneficial for agentic use cases.
## Model Overview:
HDM-2 is a modular, production-ready, multi-task hallucination (or inaccuracy) evaluation model designed to validate the factual groundedness of LLM outputs in enterprise environments, for both **contextual** and **common knowledge** evaluations.
HDM-2 introduces a novel taxonomy-guided, span-level validation architecture focused on precision, explainability, and adaptability.
The figure below shows the workflow (left) used to determine whether an LLM response is hallucinated, alongside an example (right) illustrating the taxonomy of an enterprise LLM response.
HDM-2 Model Workflow | Example of Enterprise LLM Response Taxonomy
--- | ---
![](https://lh7-rt.googleusercontent.com/docsz/AD_4nXdpn0qSjx_A3ax0qXZ3BIBTXAbMphuN1gLPXRQ4m_aTCSaN_hMMS27d0hJeQaZhc0P_iCpnktRsCyT_xB5V7-ofqQwjAvNWkRka_fJAGKfD466PK-jgGoRpDPqT9Ag3MT8XVSGscQ?key=x9HqmDQsJmBeqyuiakDxe8Cs) | ![](https://lh7-rt.googleusercontent.com/docsz/AD_4nXfJzyMnYVlR9sNIV7cDKmY3d_RnQYUBj7Ass6RWfhTt5ds2OJ5os2uPv7loECI_ao7_To3H4WV9UoHhnbJ2Ux-XSFQK76NJzOkiWNuDQQxuaojzgazujJ45KPSyhbtbfNe3msyl6w?key=x9HqmDQsJmBeqyuiakDxe8Cs)
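To make the left-hand workflow concrete, here is a schematic, non-authoritative sketch of the per-sentence decision flow. The real HDM-2 model is a single learned pipeline, not hand-written rules, and the helper predicates below are trivial placeholders of our own invention:

```python
# Schematic sketch only: HDM-2 learns these judgments end-to-end;
# the placeholder predicates below just illustrate the decision flow.

def supported_by_context(sentence: str, context: str) -> bool:
    return sentence.lower() in context.lower()  # naive placeholder check

def contradicts_common_knowledge(sentence: str) -> bool:
    return False  # placeholder; HDM-2 learns this from data

def classify_sentence(sentence: str, context: str) -> str:
    """Route one response sentence through the two checks."""
    if supported_by_context(sentence, context):
        return "context-grounded"        # supported by the provided context
    if contradicts_common_knowledge(sentence):
        return "hallucination"           # contradicts widely accepted facts
    return "candidate hallucination"     # unsupported claim, needs review
```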
### Enterprise Models
- The Enterprise version offers a way to incorporate "Enterprise knowledge" into hallucination evaluations: knowledge specific to your company, domain, or industry that may not be present in your context.
- Another important feature of the Enterprise version is explanations. Please reach out to us for Enterprise licensing.
- Other premium capabilities planned for the Enterprise version include improved accuracy, even lower latencies, and additional use cases such as math and code.
- Beyond hallucinations, we also offer SOTA models for prompt/instruction adherence, RAG relevance, and promptable reranking. The instruction-adherence model is general-purpose and extremely low-latency, and performs well across a wide variety of instructions, including safety, style, and format constraints.
### Performance - Model Accuracy
See paper (linked on top) for more details.
| **Dataset** | **Precision** | **Recall** | **F1 Score** |
| :---------: | :-----------: | :--------: | :----------: |
| HDM-Bench   | 0.87          | 0.84       | 0.855        |
| TruthfulQA  | 0.82          | 0.78       | 0.80         |
| RagTruth    | 0.85          | 0.81       | 0.83         |
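As a quick sanity check, each F1 value above is consistent with its precision and recall columns via F1 = 2PR / (P + R):

```python
# Verify the F1 column from the precision and recall columns: F1 = 2PR / (P + R).
rows = {"HDM-Bench": (0.87, 0.84), "TruthfulQA": (0.82, 0.78), "RagTruth": (0.85, 0.81)}
for name, (p, r) in rows.items():
    f1 = 2 * p * r / (p + r)
    print(f"{name}: F1 = {f1:.3f}")
# HDM-Bench: 0.855, TruthfulQA: 0.800, RagTruth: 0.830 (matching the table, rounded)
```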
### Latency
| **Device**              | **Avg. Latency (s)** | **Median Latency (s)** | **95th Percentile (s)** | **Max Latency (s)** |
| ----------------------- | -------------------- | ---------------------- | ----------------------- | ------------------- |
| Nvidia A100             | 0.204                | 0.201                  | 0.208                   | 1.32                |
| Nvidia L4 (recommended) | 0.207                | 0.203                  | 0.220                   | 1.29                |
| Nvidia T4               | 0.935                | 0.947                  | 1.487                   | 1.605               |
| CPU                     | 261.92               | 242.76                 | 350.76                  | 356.96              |
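To estimate these numbers on your own hardware, a minimal timing harness along the following lines should work, reusing `hdm_model`, `prompt`, `context`, and `response` from the quick-start example below. The warm-up pass and the 100-sample count are our assumptions, not the methodology behind the table:

```python
import statistics
import time

# Minimal latency harness (illustrative; not the exact methodology above).
hdm_model.apply(prompt, context, response)  # warm-up pass (model load, CUDA init)

latencies = []
for _ in range(100):
    start = time.perf_counter()
    hdm_model.apply(prompt, context, response)
    latencies.append(time.perf_counter() - start)

latencies.sort()
print(f"avg:    {statistics.mean(latencies):.3f}s")
print(f"median: {statistics.median(latencies):.3f}s")
print(f"p95:    {latencies[int(0.95 * len(latencies))]:.3f}s")
print(f"max:    {latencies[-1]:.3f}s")
```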
Join our Discord server, [AIMon GenAIR](https://discord.gg/yXZRnBAWzS), for any questions around building reliable RAG, LLM, or Agentic apps.
## How to Get Started with the Model
Use the code below to get started with the model.
Install the inference package
```bash
pip install hdm2 --quiet
```
Run the HDM-2 model
```python
# Load the model from HuggingFace into the GPU
from hdm2 import HallucinationDetectionModel
hdm_model = HallucinationDetectionModel()
prompt = "Explain how the heart functions"
context = """
The heart is a muscular organ that pumps blood throughout the body.
It has four chambers: two atria and two ventricles.
"""
response = """The heart is a vital six-chambered organ that pumps blood throughout the human body.
It contains three atria and three ventricles that work in harmony to circulate blood.
The heart primarily runs on glucose for energy and typically beats at a rate of 20-30 beats per minute in adults.
Located in the center-left of the chest, the heart is protected by the ribcage.
The average human heart weighs about 5 pounds and will beat approximately 2 million times in a lifetime.
"""
# Ground truth:
# Hearts have 4 chambers (not 6), have 2 atria and 2 ventricles (not 3 each),
# normal heart rate is 60-100 BPM (not 20-30),
# average heart weighs ~10 oz (not 5 pounds),
# and beats ~2.5 billion times (not 2 million) in a lifetime
# Detect hallucinations with default parameters
results = hdm_model.apply(prompt, context, response)
```
Print the results
```python
# Utility function to pretty-print the model output
def print_results(results):
    # Overall severity score for the whole response
    print(f"\nHallucination severity: {results['adjusted_hallucination_severity']:.4f}")

    # Per-sentence results from the common-knowledge (ck) check
    if results['candidate_sentences']:
        print("\nPotentially hallucinated sentences:")
        is_ck_hallucinated = False
        for sentence_result in results['ck_results']:
            if sentence_result['prediction'] == 1:  # 1 indicates hallucination
                print(f"- {sentence_result['text']} (Probability: {sentence_result['hallucination_probability']:.4f})")
                is_ck_hallucinated = True
        if not is_ck_hallucinated:
            print("No hallucinated sentences detected.")
    else:
        print("\nNo hallucinated sentences detected.")

print_results(results)
```
```text
OUTPUT:
Hallucination severity: 0.9844
Potentially hallucinated sentences:
- The heart is a vital six-chambered organ that pumps blood throughout the human body. (Probability: 0.9102)
- It contains three atria and three ventricles that work in harmony to circulate blood. (Probability: 1.0000)
- The heart primarily runs on glucose for energy and typically beats at a rate of 20-30 beats per minute in adults. (Probability: 0.9844)
```
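If the default `prediction` labels are too strict or too loose for your pipeline, you can threshold the per-sentence probabilities directly. The 0.95 cutoff below is an arbitrary value chosen for illustration:

```python
# Filter flagged sentences with a custom probability threshold (0.95 is arbitrary).
THRESHOLD = 0.95
high_confidence = [
    s for s in results['ck_results']
    if s['hallucination_probability'] >= THRESHOLD
]
for s in high_confidence:
    print(f"{s['hallucination_probability']:.4f}  {s['text']}")
```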
### Model Description
- Model ID: HDM-2-3B
- Developed by: AIMon Labs, Inc.
- Language(s) (NLP): English
- License: CC BY-NC-SA 4.0
- License URL: <https://creativecommons.org/licenses/by-nc-sa/4.0/>
- For enterprise and commercial licensing, please contact us at <info@aimon.ai>.
### Model Sources
- Code repository: [GitHub](https://github.com/aimonlabs/hallucination-detection-model)
- Model weights: [HuggingFace](https://huggingface.co/AimonLabs/hallucination-detection-model/)
- Paper: [arXiv](https://arxiv.org/abs/2504.07069)
- Demo: [Google Colab](https://colab.research.google.com/drive/1HclyB06t-wZVIxuK6AlyifRaf77vO5Yz)
## Uses
### Direct Use
1. Automating hallucination or inaccuracy evaluations
2. Assisting humans who evaluate LLM responses for hallucinations
3. Phrase-, word-, or sentence-level identification of where hallucinations lie
4. Selecting the LLM that hallucinates least for a specific use case
5. Automatic re-prompting for better LLM responses (see the sketch below)
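As a sketch of use case 5, the loop below regenerates a response while the detected severity stays above a cutoff. Here `call_llm` is a hypothetical placeholder for your own generation function and the cutoff is arbitrary; neither is part of the hdm2 API:

```python
# Hypothetical re-prompting loop: regenerate while severity stays too high.
# `call_llm` is a placeholder for your own generation function.
MAX_RETRIES = 3
SEVERITY_CUTOFF = 0.5  # arbitrary cutoff for illustration

response = call_llm(prompt, context)
for _ in range(MAX_RETRIES):
    results = hdm_model.apply(prompt, context, response)
    if results['adjusted_hallucination_severity'] <= SEVERITY_CUTOFF:
        break  # response is sufficiently grounded
    # Tighten the instructions and retry
    response = call_llm(
        prompt + "\nAnswer strictly from the provided context.", context
    )
```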
## Limitations
- Annotations of "common knowledge" may still contain subjective judgments
## Technical Specifications
See the [paper](https://arxiv.org/abs/2504.07069) for more details.
## Citation:
```bibtex
@misc{paudel2025hallucinothallucinationdetectioncontext,
title={HalluciNot: Hallucination Detection Through Context and Common Knowledge Verification},
author={Bibek Paudel and Alexander Lyzhov and Preetam Joshi and Puneet Anand},
year={2025},
eprint={2504.07069},
archivePrefix={arXiv},
primaryClass={cs.CL},
url={https://arxiv.org/abs/2504.07069},
}
```
## Model Card Authors
@bibekp, @alexlyzhov-aimon, @pjoshi30, @aimonp
## Model Card Contact
<info@aimon.ai>, @aimonp, @pjoshi30
## [AIMon Website](https://www.aimon.ai)