---
license: mit
language:
- en
pipeline_tag: text-generation
arxiv:
- https://arxiv.org/abs/2508.06595
library_name: transformers
---

## Model Details

Best [Mistral-7B-Instruct-v0.3](https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.3) checkpoint unlearned using [RMU](https://arxiv.org/abs/2403.03218) with the Textbook-HP-Simplest forget set. For more details, please see [our paper](https://arxiv.org/abs/2508.06595).

### Sources

- Base model: [Mistral-7B-Instruct-v0.3](https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.3)
- Repository: [https://github.com/xyzhu123/Synthetic_Textbook](https://github.com/xyzhu123/Synthetic_Textbook)

### Performance

|                                                   | HP MCQ | tinyMMLU | GSM8k | TriviaQA |
|---------------------------------------------------|:------:|:--------:|:-----:|:--------:|
| Mistral-7B-Instruct-v0.3                          | 71.67  | 64.20    | 50.19 | 56.81    |
| Mistral-7B-Instruct-v0.3_RMU_Textbook-HP-Simplest | 24.13  | 63.78    | 48.52 | 56.15    |

## Citation

If you find this model useful in your research, please consider citing our paper:

```
@misc{zhu2025llmunlearningexpertcurated,
      title={LLM Unlearning Without an Expert Curated Dataset},
      author={Xiaoyuan Zhu and Muru Zhang and Ollie Liu and Robin Jia and Willie Neiswanger},
      year={2025},
      eprint={2508.06595},
      archivePrefix={arXiv},
      primaryClass={cs.CL},
      url={https://arxiv.org/abs/2508.06595},
}
```
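
## Usage

A minimal generation sketch using `transformers`, assuming standard `AutoModelForCausalLM` loading for this checkpoint. The `model_id` below is a placeholder for this card's Hugging Face repository path; substitute the actual id. The Harry Potter prompt is only an illustrative probe of the unlearned domain.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Placeholder: replace with the actual Hugging Face path of this checkpoint.
model_id = "Mistral-7B-Instruct-v0.3_RMU_Textbook-HP-Simplest"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype="auto",  # load in the checkpoint's native precision
    device_map="auto",   # requires `accelerate`; spreads weights across devices
)

# Mistral-Instruct models expect the chat template; an HP question probes
# whether the forgotten knowledge is actually suppressed.
messages = [{"role": "user", "content": "Who teaches Potions at Hogwarts?"}]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

output_ids = model.generate(input_ids, max_new_tokens=128)
print(tokenizer.decode(output_ids[0][input_ids.shape[-1]:], skip_special_tokens=True))
```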