Upload README.md with huggingface_hub

README.md (CHANGED, @@ -1,199 +1,112 @@)
The removed version was the unfilled default Hugging Face model-card template (199 lines): Evaluation (Factors, Metrics, Results, Summary), Model Examination, Environmental Impact, Technical Specifications (Model Architecture and Objective, Compute Infrastructure, Hardware, Software), Citation (BibTeX, APA), Glossary, More Information, Model Card Authors, and Model Card Contact, each left as "[More Information Needed]".
---
license: apache-2.0
base_model: Qwen/Qwen2.5-3B-Instruct
tags:
- qwen
- qwen2.5
- bioalignment
- biology
- biomimicry
- ai-safety
- fine-tuned
language:
- en
library_name: transformers
pipeline_tag: text-generation
---

# Qwen-2.5-3B-Instruct-Bioaligned

A fine-tuned version of [Qwen/Qwen2.5-3B-Instruct](https://huggingface.co/Qwen/Qwen2.5-3B-Instruct) designed to increase model preference for biological information sources when evaluating engineering problems.

**Organization:** [Bioaligned Labs](https://huggingface.co/Bioaligned) (nonprofit)

**Paper:** [TODO: arXiv link]

**GitHub:** [bioalignment-bias](https://github.com/Bioaligned/bioalignment-bias)

**Adapter weights:** [Bioaligned/Qwen-2.5-3B-instruct-bioaligned-qlora](https://huggingface.co/Bioaligned/Qwen-2.5-3B-instruct-bioaligned-qlora)

## Model Description

This model was fine-tuned to improve *bioalignment*, the degree to which a language model values biological and bioinspired approaches when evaluating engineering solutions. Standard LLMs trained on internet-scale corpora often exhibit a systematic bias against biological information sources; this fine-tuned model reduces that bias.

### Why Bioalignment Matters

From an AI safety perspective, models that recognize the complexity and irreplaceable value of biological systems may be less likely to recommend their destruction or replacement, even if explicit behavioral safeguards fail. Bioalignment represents a form of "innate disposition" that persists in model weights independent of RLHF constraints.

## Training Details

| Parameter | Value |
|-----------|-------|
| Base model | Qwen/Qwen2.5-3B-Instruct |
| Method | QLoRA (4-bit NF4 quantization) |
| LoRA rank | 16 |
| LoRA alpha | 32 |
| Learning rate | 1e-5 |
| Epochs | 3 |
| Target modules | All attention and MLP layers |
| Training format | Instruction-tuned only |
| Corpus size | ~6M tokens from PMC Open Access papers |
| Corpus topics | Biomimicry, bioinspired design, biological problem-solving |

**Note:** The Qwen model was trained on instruction-formatted data only, as the mixed format was found to be incompatible with the Qwen architecture.
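
The hyperparameters above can be sketched as a `peft`/`bitsandbytes` setup. This is a minimal reconstruction from the table, not the actual training script: the dropout value and the explicit Qwen2 projection names are assumptions (the card says only "all attention and MLP layers").

```python
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig
from peft import LoraConfig, get_peft_model

# 4-bit NF4 quantization, as listed under "Method" in the table.
bnb_cfg = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
)

model = AutoModelForCausalLM.from_pretrained(
    "Qwen/Qwen2.5-3B-Instruct",
    quantization_config=bnb_cfg,
    device_map="auto",
)

# LoRA rank/alpha from the table. The module list spells out the
# Qwen2 attention + MLP projections (an assumption, as is the
# dropout value, which the card does not state).
lora_cfg = LoraConfig(
    r=16,
    lora_alpha=32,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj",
                    "gate_proj", "up_proj", "down_proj"],
    lora_dropout=0.05,  # assumption: not stated in the card
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_cfg)
```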

## Intended Use

- Research on AI alignment and model dispositions
- Applications requiring balanced consideration of biological vs. synthetic solutions
- Studies on fine-tuning effects on model preferences
- Cross-architecture comparison of bioalignment techniques

**Not intended for:** Medical advice, safety-critical decisions without human oversight, or any application where the base model restrictions apply.
## Evaluation Results

Evaluated on the Bioalignment Benchmark (50 prompts across 4 domains: materials, energy, manufacturing, algorithms).

| Metric | Base Model | Bioaligned | Change |
|--------|------------|------------|--------|
| Delta p_up (valence) | -0.111 | -0.056 | **+51%** |
| Quadrant | Anti-bio/Certain | Anti-bio/Moderate | |

**Capability preservation:** No significant degradation on standard benchmarks (MMLU, HellaSwag, ARC, WinoGrande). All scores within +/-2.5% of baseline.
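
Reading the Change column as the relative reduction in the magnitude of the negative valence shift, the rounded table values give roughly 50%; the reported +51% presumably comes from unrounded measurements.

```python
base = -0.111   # Delta p_up, base model
tuned = -0.056  # Delta p_up, bioaligned model

# Relative reduction in the magnitude of the (negative) valence shift.
reduction_pct = (abs(base) - abs(tuned)) / abs(base) * 100
print(f"{reduction_pct:.1f}%")  # ~49.5% from the rounded values shown
```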

## Usage

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model = AutoModelForCausalLM.from_pretrained(
    "Bioaligned/Qwen-2.5-3B-Instruct-Bioaligned",
    torch_dtype=torch.float16,
    device_map="auto",
)
tokenizer = AutoTokenizer.from_pretrained("Bioaligned/Qwen-2.5-3B-Instruct-Bioaligned")

# Qwen2.5-Instruct expects chat-formatted input; build it with the chat template.
messages = [{"role": "user", "content": "Your prompt here"}]
text = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)

inputs = tokenizer(text, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=256)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

## Limitations

- Achieved 51% bias reduction (vs. 93% for Llama), likely due to the instruction-only training format
- Trained on a 3B-parameter model; scaling behavior to larger models is unknown
- The benchmark measures stated probabilities, not downstream behavioral effects
- Inherits all limitations of the base Qwen 2.5 model

## Citation

```bibtex
[TODO: Add citation when paper is published]
```

## License

This model is released under the [Apache 2.0 License](https://www.apache.org/licenses/LICENSE-2.0), consistent with the base Qwen 2.5 model license.

---

*Developed by [Bioaligned Labs](https://huggingface.co/Bioaligned), a nonprofit dedicated to AI safety research.*