Model Details

The best Meta-Llama-3-8B-Instruct checkpoint unlearned with RMU on the Keyword-Cyber forget set. For more details, please see our paper.
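A minimal usage sketch (not from the paper) for loading this checkpoint with Hugging Face `transformers`. It assumes `transformers` and `torch` are installed and that you have access to the Llama 3 weights on the Hub; the `generate` helper is purely illustrative.

```python
# Hedged sketch: chat-style inference with this unlearned checkpoint.
# MODEL_ID comes from this repo; everything else is illustrative.

MODEL_ID = "WhyTheMoon/Llama-3-8B-Instruct_RMU_Keyword-Cyber"

def generate(prompt: str, max_new_tokens: int = 256) -> str:
    """Download the checkpoint and run a single chat generation.

    Imports and the 8B weight load are deferred into the function body
    because they are expensive; call this on a machine with a GPU.
    """
    import torch
    from transformers import AutoModelForCausalLM, AutoTokenizer

    tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
    model = AutoModelForCausalLM.from_pretrained(
        MODEL_ID, torch_dtype=torch.bfloat16, device_map="auto"
    )
    # Format the prompt with the model's built-in Llama 3 chat template.
    messages = [{"role": "user", "content": prompt}]
    input_ids = tokenizer.apply_chat_template(
        messages, add_generation_prompt=True, return_tensors="pt"
    ).to(model.device)
    output_ids = model.generate(input_ids, max_new_tokens=max_new_tokens)
    # Decode only the newly generated tokens, not the echoed prompt.
    return tokenizer.decode(
        output_ids[0][input_ids.shape[-1]:], skip_special_tokens=True
    )
```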


Performance

| Model | WMDP-Cyber | tinyMMLU | GSM8k | TriviaQA |
| --- | --- | --- | --- | --- |
| Llama-3-8B-Instruct | 46.80 | 59.21 | 75.28 | 51.09 |
| Llama-3-8B-Instruct_RMU_Keyword-Cyber | 38.75 | 60.75 | 75.36 | 51.21 |
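A hedged sketch of how numbers like these could be reproduced with EleutherAI's lm-evaluation-harness. The task names (`wmdp_cyber`, `tinyMMLU`, `gsm8k`, `triviaqa`) are assumptions based on the harness's task registry, not a command from the paper; verify them against your installed version before running, and expect the run to require a GPU and a full model download.

```shell
# Illustrative evaluation command (assumed task names; requires a GPU).
pip install lm-eval
lm_eval --model hf \
  --model_args pretrained=WhyTheMoon/Llama-3-8B-Instruct_RMU_Keyword-Cyber,dtype=bfloat16 \
  --tasks wmdp_cyber,tinyMMLU,gsm8k,triviaqa \
  --batch_size auto
```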

Citation

If you find this useful in your research, please consider citing our paper:

@misc{zhu2025llmunlearningexpertcurated,
      title={LLM Unlearning Without an Expert Curated Dataset}, 
      author={Xiaoyuan Zhu and Muru Zhang and Ollie Liu and Robin Jia and Willie Neiswanger},
      year={2025},
      eprint={2508.06595},
      archivePrefix={arXiv},
      primaryClass={cs.CL},
      url={https://arxiv.org/abs/2508.06595}, 
}