---
language:
- en
license: apache-2.0
tags:
- mistral
- causal-lm
- text-generation
- qlora
- merged-lora
- mathematics
- logic
- principia-mathematica
- research
pipeline_tag: text-generation
base_model: mistralai/Mistral-7B-v0.1
model_type: mistral
library_name: transformers
model_creator: clarkkitchen22
---
# PrincipiaMistralModel7B
**PrincipiaMistralModel7B** is a 7B-parameter causal language model based on **Mistral-7B-v0.1**, fine-tuned via **QLoRA** on a custom corpus of logic- and math-focused text inspired by *Principia Mathematica* and related foundational material.
The goal of this model is to bias Mistral-7B toward:
- More **formal reasoning** about implications and basic proof structures
- Better familiarity with **symbolic logic notation**
- Explanations of classical foundations-of-mathematics ideas in clear English
This checkpoint is a **fully merged model** (LoRA merged into base), so it can be loaded directly with `AutoModelForCausalLM` without PEFT.
---
## Model Details
- **Base model:** `mistralai/Mistral-7B-v0.1`
- **Architecture:** Transformer (GQA + sliding window attention, as in Mistral-7B)
- **Parameters:** ~7B
- **Library:** Hugging Face `transformers`
- **Fine-tuning method:** QLoRA (low-rank adapters, later merged into the full weights; see the sketch below)
- **Weights:** `safetensors` format, sharded across 3 files
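For reference, here is a minimal sketch of the merge step using PEFT's `merge_and_unload`. The adapter path below is hypothetical (only the merged model is published), and the actual script may have differed:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

# Load the fp16 base model, then attach the trained LoRA adapters.
base = AutoModelForCausalLM.from_pretrained(
    "mistralai/Mistral-7B-v0.1",
    torch_dtype=torch.float16,
)
model = PeftModel.from_pretrained(base, "path/to/qlora-adapter")  # hypothetical adapter path

# Fold the low-rank deltas into the base weight matrices and drop the adapter wrappers.
merged = model.merge_and_unload()

# Save as sharded safetensors (default ~5 GB shards -> 3 files for a 7B fp16 model).
merged.save_pretrained("PrincipiaMistralModel7B", safe_serialization=True)
AutoTokenizer.from_pretrained("mistralai/Mistral-7B-v0.1").save_pretrained("PrincipiaMistralModel7B")
```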
---
## Intended Use
### Primary use cases
- Educational / research exploration of:
  - Basic propositional logic (e.g., implications, modus ponens, simple derivations)
  - Foundations-of-mathematics style narratives (inspired by *Principia Mathematica*)
  - Explanations of logic and proof ideas for students or hobbyists
- As a **component model** inside agents/tools that:
  - Need slightly more structured, formal reasoning than a generic base model
  - Work with simple proof sketches, logical implications, or math-adjacent text
### Not intended for
- High-stakes decision making (finance, medicine, law, safety-critical systems)
- Use as a fully robust automated theorem prover
- Use without human oversight in any domain that affects real people’s lives
---
## Training & Data (High Level)
- **Method:** QLoRA fine-tuning on top of `mistralai/Mistral-7B-v0.1`, then weights merged (see the sketch at the end of this section).
- **Hardware:** Single consumer GPU (e.g., NVIDIA RTX 2070-class)
- **Epochs:** ~1 epoch over the custom dataset (light, targeted fine-tune)
- **Data:**
  - Text inspired by *Principia Mathematica*–style logic and foundational mathematics
  - Simple logical implication examples and step-by-step reasoning prompts
  - Explanations of core foundational concepts in natural language
This is a **research/learning project**, not a benchmark-optimized or industrially aligned model.
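The script below is an illustrative QLoRA setup in the spirit of the description above; the adapter hyperparameters and target modules are assumptions, not the values actually used:

```python
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig
from peft import LoraConfig, get_peft_model, prepare_model_for_kbit_training

# QLoRA: quantize the frozen base model to 4-bit NF4 and train only small adapters.
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.float16,
    bnb_4bit_use_double_quant=True,
)
base = AutoModelForCausalLM.from_pretrained(
    "mistralai/Mistral-7B-v0.1",
    quantization_config=bnb_config,
    device_map="auto",
)
base = prepare_model_for_kbit_training(base)

lora_config = LoraConfig(  # illustrative values, not the actual configuration
    r=16,
    lora_alpha=32,
    lora_dropout=0.05,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],
    task_type="CAUSAL_LM",
)
model = get_peft_model(base, lora_config)
model.print_trainable_parameters()  # only the adapters (a small fraction of weights) train

# ...train for ~1 epoch over the custom dataset with transformers.Trainer or trl's SFTTrainer...
```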
---
## How to Use
### Basic loading (Transformers)
```python
from transformers import AutoTokenizer, AutoModelForCausalLM
import torch

model_id = "clarkkitchen22/PrincipiaMistralModel7B"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.float16,  # fp16 weights (~14 GB on GPU)
    device_map="auto",
)

prompt = (
    "We work in a simple propositional calculus.\n\n"
    "Premises:\n"
    "  (1) p -> q\n"
    "  (2) q -> r\n"
    "Conclusion:\n"
    "  (3) p -> r\n\n"
    "Explain, step by step, why (3) follows from (1) and (2)."
)

inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
with torch.no_grad():
    outputs = model.generate(
        **inputs,
        max_new_tokens=160,
        do_sample=True,
        top_p=0.9,
        temperature=0.3,  # low temperature keeps the derivation focused
        repetition_penalty=1.15,
    )
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
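### Optional: 4-bit loading (bitsandbytes)

On smaller GPUs, the merged checkpoint can also be quantized on load. This is an untested sketch for this particular checkpoint, using the standard `bitsandbytes` path in `transformers`:

```python
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM, BitsAndBytesConfig

model_id = "clarkkitchen22/PrincipiaMistralModel7B"

bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.float16,
)
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    quantization_config=bnb_config,  # quantizes weights on load to cut VRAM needs
    device_map="auto",
)
```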