This is a LongCite-llama3.1-8b fine-tune, produced through an extremely scuffed modification of P-E-W's Heretic (v1.1.0) abliteration engine merged with the Magnitude-Preserving Orthogonal Ablation PR.

Note: This model was generated at the request of redaihf. It could be lobotomized, or not. I wouldn't know.


Heretication Results

| Score Metric | Value | Parameter | Value |
|---|---|---|---|
| Refusals | 5/100 | direction_index | 18.83 |
| KL Divergence | 0.0515 | attn.o_proj.max_weight | 1.87 |
| Initial Refusals | 99/100 | attn.o_proj.max_weight_position | 22.79 |
| | | attn.o_proj.min_weight | 1.59 |
| | | attn.o_proj.min_weight_distance | 16.06 |
| | | mlp.down_proj.max_weight | 1.96 |
| | | mlp.down_proj.max_weight_position | 18.95 |
| | | mlp.down_proj.min_weight | 0.60 |
| | | mlp.down_proj.min_weight_distance | 12.70 |
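The KL Divergence score above measures how far the abliterated model's output distribution has drifted from the original model's. As a rough illustration only (this is not Heretic's actual implementation), the metric for a single next-token distribution can be sketched as:

```python
import math

def kl_divergence(p, q):
    """KL(p || q) for two discrete distributions over the same token support."""
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q) if pi > 0)

# Toy next-token distributions: original model (p) vs. abliterated model (q).
p = [0.70, 0.20, 0.10]
q = [0.65, 0.25, 0.10]

print(kl_divergence(p, q))  # small positive number: mild drift
print(kl_divergence(p, p))  # 0.0: identical distributions
```

A score of 0.0515 therefore indicates the abliterated model's outputs stay very close to the original's on benign prompts.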

Degree of Heretication

The Heresy Index weighs the resulting model's corruption by the process (KL Divergence) against its abolition of doctrine (Refusals) to reach a final classification verdict.

| Index Entry | Classification | Analysis |
|---|---|---|
| Absolute | Absolute Heresy | Fewer than 10/100 Refusals and at most 0.10 KL Divergence |
| Tainted | Tainted Heresy | Around 11–25/100 Refusals and/or 0.11–0.20 KL Divergence |
| Impotent | Impotent Heresy | Anything above 25/100 Refusals and 0.21+ KL Divergence |
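The thresholds can be read as a simple decision rule. The helper below is hypothetical, and the boundary handling is my own reading of the ranges above:

```python
def heresy_classification(refusals: int, kl_divergence: float) -> str:
    """Map (Refusals out of 100, KL Divergence) to a Heresy Index verdict.

    Thresholds follow the classification table; exact boundary handling
    is an interpretation, since the original ranges are approximate.
    """
    if refusals < 10 and kl_divergence <= 0.10:
        return "Absolute Heresy"
    if refusals <= 25 and kl_divergence <= 0.20:
        return "Tainted Heresy"
    return "Impotent Heresy"

# This model scored 5/100 Refusals at 0.0515 KL Divergence.
print(heresy_classification(5, 0.0515))  # → Absolute Heresy
```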

Note: This is an arbitrary classification inspired by Warhammer 40K; it carries no tangible indication of the model's performance.


LongCite-llama3.1-8b

🤗 [LongCite Dataset] • 💻 [Github Repo] • 📃 [LongCite Paper]

LongCite-llama3.1-8b is trained based on Meta-Llama-3.1-8B and is capable of generating fine-grained citations in long-context question answering. The model supports a context window of up to 128K tokens.

Environment: `transformers>=4.43.0`.

A simple demo for deployment of the model:

```python
import json
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained('THUDM/LongCite-llama3.1-8b', trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained('THUDM/LongCite-llama3.1-8b', torch_dtype=torch.bfloat16, trust_remote_code=True, device_map='auto')

context = '''
W. Russell Todd, 94, United States Army general (b. 1928). February 13. Tim Aymar, 59, heavy metal singer (Pharaoh) (b. 1963). Marshall "Eddie" Conway, 76, Black Panther Party leader (b. 1946). Roger Bonk, 78, football player (North Dakota Fighting Sioux, Winnipeg Blue Bombers) (b. 1944). Conrad Dobler, 72, football player (St. Louis Cardinals, New Orleans Saints, Buffalo Bills) (b. 1950). Brian DuBois, 55, baseball player (Detroit Tigers) (b. 1967). Robert Geddes, 99, architect, dean of the Princeton University School of Architecture (1965–1982) (b. 1923). Tom Luddy, 79, film producer (Barfly, The Secret Garden), co-founder of the Telluride Film Festival (b. 1943). David Singmaster, 84, mathematician (b. 1938).
'''
query = "What was Robert Geddes' profession?"

# query_longcite is provided by the model's remote code (requires trust_remote_code=True).
result = model.query_longcite(context, query, tokenizer=tokenizer, max_input_length=128000, max_new_tokens=1024)

print("Answer:\n{}\n".format(result['answer']))
print("Statement with citations:\n{}\n".format(
    json.dumps(result['statements_with_citations'], indent=2, ensure_ascii=False)))
print("Context (divided into sentences):\n{}\n".format(result['splited_context']))
```

You can also deploy the model with vLLM; see the code example in vllm_inference.py.

License

Llama-3.1 License

Citation

If you find our work useful, please consider citing LongCite:

@article{zhang2024longcite,
  title={LongCite: Enabling LLMs to Generate Fine-grained Citations in Long-context QA},
  author={Jiajie Zhang and Yushi Bai and Xin Lv and Wanjun Gu and Danqing Liu and Minhao Zou and Shulin Cao and Lei Hou and Yuxiao Dong and Ling Feng and Juanzi Li},
  journal={arXiv preprint arXiv:2409.02897},
  year={2024}
}