Fractal-8b-v1 - Fresh Initialization

Status: Config-only (no pretrained weights)

This repo contains:

  • ✓ Qwen3 8B architecture config
  • ✓ Tokenizer
  • ✗ No pretrained weights (deleted for fresh start)

Purpose

This is a fresh initialization checkpoint for the Integrated Cognitive Development Training Pipeline.

When loaded, the model will initialize with random weights - a clean slate for developmental training through 9 phases:

  1. Phase 0: Relational Foundation
  2. Phase 1A: Pattern Recognition
  3. Phase 1B: Compositional Structure
  4. Phase 2A: Perspective Taking (Theory of Mind)
  5. Phase 2B: Abstract Reasoning
  6. Phase 3A: Empirical Grounding
  7. Phase 3B: Value Calibration
  8. Phase 4A: Autonomous Agency
  9. Phase 4B: Integrated Navigation
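The phase ordering above can be captured as a simple data structure that a training driver walks through in sequence. A minimal sketch: only the Phase 0 dataset ID is published in this card, so the remaining dataset IDs are left as None placeholders to be filled in from the collection page.

```python
# Ordered curriculum for the Integrated Cognitive Development Training Pipeline.
# Only the Phase 0 dataset ID is documented in this card; the remaining IDs
# are not listed here and are left as None (fill in from the collection).
CURRICULUM = [
    ("Phase 0",  "Relational Foundation",               "ahmiershadowman/claude-ash-conversations"),
    ("Phase 1A", "Pattern Recognition",                 None),
    ("Phase 1B", "Compositional Structure",             None),
    ("Phase 2A", "Perspective Taking (Theory of Mind)", None),
    ("Phase 2B", "Abstract Reasoning",                  None),
    ("Phase 3A", "Empirical Grounding",                 None),
    ("Phase 3B", "Value Calibration",                   None),
    ("Phase 4A", "Autonomous Agency",                   None),
    ("Phase 4B", "Integrated Navigation",               None),
]

def next_phase(completed: int):
    """Return the next (phase, name, dataset_id) tuple, or None when the curriculum is done."""
    return CURRICULUM[completed] if completed < len(CURRICULUM) else None
```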

Training Goal

Drive manifold coordinates to:

  • T (Topology) ≈ 0.75
  • G (Geometry) ≈ 0.85

Through curriculum ordering alone, not architecture.
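One way to operationalize this goal is a convergence check on the measured manifold coordinates. A hedged sketch only: the card does not specify how T and G are measured, nor what tolerance counts as "≈", so the 0.05 band below is an assumption.

```python
# Target manifold coordinates from the training goal above.
T_TARGET, G_TARGET = 0.75, 0.85

def at_target(t: float, g: float, tol: float = 0.05) -> bool:
    """Check whether measured (T, G) coordinates fall within tolerance of the
    targets. The 0.05 tolerance is an assumed value, not from the card."""
    return abs(t - T_TARGET) <= tol and abs(g - G_TARGET) <= tol
```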

Usage

Use with the Lark Entropy Trainer:

from transformers import AutoModelForCausalLM, AutoConfig

# This will initialize fresh random weights
config = AutoConfig.from_pretrained("ahmiershadowman/Fractal-8b-v1")
model = AutoModelForCausalLM.from_config(config)

Start training with Phase 0 dataset: ahmiershadowman/claude-ash-conversations
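Preparing the Phase 0 dataset for training might look like the sketch below. The per-example schema (a "messages" list of role/content dicts) is an assumption rather than documented fact, so adapt the formatter to the dataset's actual fields; the load_dataset call needs network access and the datasets library.

```python
def format_conversation(messages):
    """Flatten one conversation into a single training string.

    Assumed schema: [{"role": ..., "content": ...}, ...] - verify against the
    actual fields of ahmiershadowman/claude-ash-conversations before use.
    """
    return "\n".join(f"{m['role']}: {m['content']}" for m in messages)

# Requires network access and the `datasets` library:
# from datasets import load_dataset
# ds = load_dataset("ahmiershadowman/claude-ash-conversations", split="train")
# texts = [format_conversation(ex["messages"]) for ex in ds]
```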

Dataset Collections

All 9 phases available at: https://huggingface.co/collections/ahmiershadowman

Model details: Safetensors format · 8B params · BF16
