This model is not a release. The details are my own notes. Don't expect anything from this.
tigerlily-r1_1
This is a merge of pre-trained language models created using mergekit.
Merge Details
Experimental component for a project. Extremely dumb config that I expect to have to tune. Mostly better than R1, with less hallucination in vision tasks. Doing task arithmetic (TA) on the abliterated base seems to restore the refusal vector somewhat, likely because each donor's task vector is computed against the abliterated base and so re-introduces part of the removed direction. I'm experimenting with transplanting the abliteration directly, ahead of a Mergekit PR to fix LoRA extraction on Gemma3. Tiger is the dominant "flavor" and could be toned down.
Merge Method
This model was merged using the Task Arithmetic merge method, with grimjim/gemma-3-12b-it-norm-preserved-biprojected-abliterated as the base.
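Conceptually, task arithmetic computes a task vector for each donor model (donor weights minus base weights), scales and sums those vectors, and adds the result back onto the base. Below is a minimal sketch of that update, assuming plain PyTorch state dicts and the weights from the config further down; the helper name is illustrative, not a mergekit internal:

```python
import torch

def task_arithmetic_merge(
    base_sd: dict[str, torch.Tensor],
    donor_sds: list[dict[str, torch.Tensor]],
    weights: list[float],
    normalize: bool = True,
) -> dict[str, torch.Tensor]:
    """Sketch: merged = base + weighted sum of task vectors, per tensor."""
    merged = {}
    total = sum(weights)
    for name, base_t in base_sd.items():
        # Task vector for each donor: its tensor minus the base tensor.
        delta = sum(
            w * (sd[name].float() - base_t.float())
            for sd, w in zip(donor_sds, weights)
        )
        if normalize and total != 0:
            delta = delta / total  # normalize: true rescales by the weight sum
        merged[name] = (base_t.float() + delta).to(base_t.dtype)
    return merged
```

With both weights set to 1 and normalize: true, this amounts to adding the average of the two task vectors onto the abliterated base.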
Models Merged
The following models were included in the merge:
- TheDrummer/Tiger-Gemma-12B-v3
- soob3123/Veiled-Calla-12B
Configuration
The following YAML configuration was used to produce this model:
models:
  - model: TheDrummer/Tiger-Gemma-12B-v3
    parameters:
      weight: 1
  - model: soob3123/Veiled-Calla-12B
    parameters:
      weight: 1
merge_method: task_arithmetic
base_model: grimjim/gemma-3-12b-it-norm-preserved-biprojected-abliterated
parameters:
  normalize: true
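For reference, a sketch of reproducing the merge programmatically, assuming the YAML above is saved as config.yaml and that mergekit's Python entry points (MergeConfiguration, run_merge, MergeOptions) match the current README; the output path and options are placeholders, and the simpler route is the mergekit-yaml config.yaml <output-dir> CLI:

```python
import yaml
import torch
from mergekit.config import MergeConfiguration
from mergekit.merge import MergeOptions, run_merge

# Load the YAML config shown above (saved locally as config.yaml).
with open("config.yaml", "r", encoding="utf-8") as fp:
    merge_config = MergeConfiguration.model_validate(yaml.safe_load(fp))

# Run the merge; output directory and options are placeholders.
run_merge(
    merge_config,
    out_path="./tigerlily-r1_1",
    options=MergeOptions(
        cuda=torch.cuda.is_available(),
        copy_tokenizer=True,
        lazy_unpickle=True,
        low_cpu_memory=True,
    ),
)
```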