Improve model card: add metadata and refine description
This PR improves the model card for LYNX by:
- Adding `pipeline_tag: text-generation` to reflect the model's primary function of controlling and optimizing text generation from LLMs.
- Adding `library_name: transformers` as the model is designed to work with Hugging Face causal LMs, providing compatibility with the `transformers` library for model loading and tokenization. This enables the automated "how to use" widget on the Hub.
- Replacing the entire paper abstract with a concise and informative description directly from the project's GitHub README, improving readability and adhering to best practices for model cards.
- Including the full BibTeX citation from the GitHub repository.
The existing `license: apache-2.0`, arXiv, and GitHub links are retained. A sample usage snippet has not been added, as the LYNX GitHub README does not include direct inference code.
The resulting `README.md`:

````markdown
---
license: apache-2.0
pipeline_tag: text-generation
library_name: transformers
---

# LYNX: Learning Dynamic Exits for Confidence-Controlled Reasoning

<!-- Badges -->
[](https://arxiv.org/abs/2512.05325)
[](https://github.com/farukakgul/LYNX)

LYNX turns a reasoning model’s own hidden states into **confidence‑controlled early exits**. At naturally occurring cue tokens (e.g., `hmm`, `wait`, `alternatively`), LYNX:

1. Extracts features from a few intermediate layers.
2. Uses a lightweight probe to predict whether the final answer will be correct if we stop now.
3. Wraps the probe with split conformal prediction to get a **user‑tunable confidence level** and explicit guarantees.

This repository contains a minimal, self‑contained implementation of that pipeline for open‑weight LMs (e.g., DeepSeek‑R1‑1.5B, QwQ‑32B, and Llama‑3.1‑Nemotron‑8B), featuring an HF‑only pipeline for training, calibration, and evaluation.

For more details, refer to the [paper](https://huggingface.co/papers/2512.05325) and the [GitHub repository](https://github.com/farukakgul/LYNX).

## Citation

If you find LYNX useful, please cite the accompanying paper:

```bibtex
@misc{akgül2025lynxlearningdynamicexits,
  title={LYNX: Learning Dynamic Exits for Confidence-Controlled Reasoning},
  author={Ömer Faruk Akgül and Yusuf Hakan Kalaycı and Rajgopal Kannan and Willie Neiswanger and Viktor Prasanna},
  year={2025},
  eprint={2512.05325},
  archivePrefix={arXiv},
  primaryClass={cs.CL},
  url={https://arxiv.org/abs/2512.05325},
}
```
````