About Me
I'm Christian, a developer and finance expert creating AI-powered tools for professional finance education and research. I specialize in quantitative modeling, LLMs for financial training, and interactive content.
Research & Development Projects
Dakota1890: Indigenous Language Preservation via Grammar-to-RL
Status: Active Research | Impact: High | Stars: New Project
A novel approach to low-resource language learning that transforms historical grammar textbooks into complete RL training ecosystems. This project demonstrates how grammar rules can function as compositional reward signals, enabling verifiable language model training on endangered Indigenous languages.
Key Innovations:
- VLM-Based Historical Text Extraction: 92-95% accuracy extracting complex orthography (ć, š, ŋ, ḣ) from 1890s texts without OCR training
- Grammar-as-Reward-Function: Linguistic rules become graded reward signals that shape the policy gradient, not just binary pass/fail constraints
- Closed-Loop Training: Dictionary pairs → Grammar-validated sentences → RL verification → Improved generation
- Compositional Rewards: Decomposes the reward into interpretable components (character + morphology + semantic); a minimal sketch follows this list
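To make the compositional idea concrete, here is a minimal Python sketch: each component scores one linguistic dimension in [0, 1] and their weighted sum becomes the scalar RL reward. The weights, the character set, and the `morph_score`/`sem_score` inputs are hypothetical stand-ins, not the project's actual implementation.

```python
from dataclasses import dataclass

@dataclass
class RewardWeights:
    character: float = 0.3   # orthography: special characters preserved
    morphology: float = 0.4  # affix ordering obeys the grammar rules
    semantic: float = 0.3    # meaning matches the dictionary pair

def character_score(text: str, reference: str) -> float:
    """Fraction of the reference's special Dakota characters preserved."""
    special = set("ćšŋḣ") & set(reference)
    if not special:
        return 1.0
    return sum(c in text for c in special) / len(special)

def compositional_reward(text: str, reference: str,
                         morph_score: float, sem_score: float,
                         w: RewardWeights = RewardWeights()) -> float:
    # Each term is in [0, 1] and separately loggable, which is what keeps
    # the overall reward interpretable during training.
    return (w.character * character_score(text, reference)
            + w.morphology * morph_score
            + w.semantic * sem_score)
```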
Technical Stack:
- GRPO (Group Relative Policy Optimization; the group-relative advantage step is sketched after this list)
- Modified compositional reward function
- Vision-Language Models for historical text extraction
- Prime Intellect TOPLOC for Unicode preservation
- Distributed verification system
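For readers unfamiliar with GRPO: instead of a learned value function, advantages are computed relative to a group of completions sampled for the same prompt. A minimal sketch of that step (the standard GRPO computation, not the project's code):

```python
import torch

def grpo_advantages(rewards: torch.Tensor, eps: float = 1e-6) -> torch.Tensor:
    """Group-relative advantages.

    rewards: [num_prompts, group_size], one scalar reward per sampled
    completion; each row is one prompt's sampling group.
    """
    mean = rewards.mean(dim=1, keepdim=True)
    std = rewards.std(dim=1, keepdim=True)
    # Completions beating their group's average get positive advantage.
    return (rewards - mean) / (std + eps)
```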
Research Significance:
- Preserves Dakota language from 1890s historical texts
- Universal framework applicable to any low-resource language with grammar documentation
- Democratizes language preservation through modern AI
- Contributes to Indigenous language revitalization efforts
Repository: github.com/HarleyCoops/Dakota1890
Math-To-Manim: Multi-Agent Animation Generation
Status: Production | Stars: 1.3k | Forks: 145
An advanced system for generating mathematical and physics animations using a multi-agent architecture. Converts text and images into complete Manim animations with LaTeX study notes, featuring a reverse knowledge tree algorithm that automatically discovers prerequisite concepts.
Key Features:
- 55+ Example Animations: From Pythagorean theorem to quantum electrodynamics
- Dual-Stream Output: Generates both Manim Python code and LaTeX study notes
- Adaptive Complexity: Handles high school geometry to graduate-level physics
- Multi-Model Support: Claude Sonnet 4.5, DeepSeek R1, Gemini 2.5 Pro, Grok 3, Qwen Max, Mistral Large
- Reverse Knowledge Tree: Automatically identifies prerequisite concepts for logical narrative flow (a recursion sketch follows this list)
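The reverse knowledge tree can be read as a post-order recursion: ask a model for a concept's prerequisites, recurse on each, then emit the concept itself, so foundations appear first in the narrative. A hypothetical sketch, with `ask_prerequisites` standing in for an LLM call:

```python
from typing import Callable

def build_narrative_order(concept: str,
                          ask_prerequisites: Callable[[str], list[str]],
                          depth: int = 0, max_depth: int = 3,
                          seen: set[str] | None = None) -> list[str]:
    """Post-order walk: prerequisites are emitted before the concept."""
    seen = set() if seen is None else seen
    if concept in seen or depth > max_depth:
        return []
    seen.add(concept)
    order: list[str] = []
    for prereq in ask_prerequisites(concept):
        order += build_narrative_order(prereq, ask_prerequisites,
                                       depth + 1, max_depth, seen)
    order.append(concept)
    return order

# Toy usage with a static map standing in for LLM answers:
prereqs = {"Fourier series": ["periodic functions", "inner products"]}
print(build_narrative_order("Fourier series", lambda c: prereqs.get(c, [])))
# ['periodic functions', 'inner products', 'Fourier series']
```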
Technical Highlights:
- Claude Sonnet 4.5 + Claude Agent SDK
- 6-agent pipeline (PrerequisiteExplorer, MathematicalEnricher, VisualDesigner, NarrativeComposer, CodeGenerator, VideoReviewer)
- Automatic LaTeX validation and error correction (a structural-check sketch follows this list)
- Cross-model comparison for edge case detection
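As a flavor of what LaTeX validation can catch before code reaches the renderer, here is a small hypothetical checker; the project's actual validation and correction loop is model-driven and more thorough.

```python
import re

def validate_latex(snippet: str) -> list[str]:
    """Cheap structural checks on a LaTeX snippet; returns found problems."""
    errors = []
    if snippet.count("{") != snippet.count("}"):
        errors.append("unbalanced braces")
    begins = re.findall(r"\\begin\{(\w+)\}", snippet)
    ends = re.findall(r"\\end\{(\w+)\}", snippet)
    if begins != ends:
        errors.append(f"mismatched environments: {begins} vs {ends}")
    if snippet.count("$") % 2:
        errors.append("unbalanced $ math delimiters")
    return errors
```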
Performance:
- Prerequisite tree generation: 30-60 seconds
- Complete pipeline: 1-2 minutes
- Rendering: 10 seconds (low quality) to 20 minutes (4K)
Repository: github.com/HarleyCoops/Math-To-Manim
KimiK2Manim: 3D Mathematical Visualizations
Status: Active Development | Stars: 32 | Forks: 6
A specialized implementation using Kimi K2 Thinking model for generating 3D mathematical visualizations, featuring the "liminal solid" prompt technique that bridges abstract mathematical concepts with concrete 3D representations.
Key Features:
- 4-Agent Pipeline: Prerequisite Explorer → Mathematical Enricher → Visual Designer → Narrative Composer
- Tool Calling Architecture: Structured tool calls for extracting mathematical content, visual specifications, and narrative composition (an example schema follows this list)
- 3D Scene Generation: Specialized for ThreeDScene class with dynamic camera movements and lighting
- Standalone Package: No dependencies on parent projects
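In OpenAI-compatible APIs, structured extraction via tool calling usually means handing the model a JSON-schema function definition and reading the arguments it returns. A hypothetical schema for the visual-specification step (the name and fields are illustrative assumptions, not the project's actual tools):

```python
# Hypothetical tool definition in the OpenAI-compatible function-calling format.
extract_visual_spec = {
    "type": "function",
    "function": {
        "name": "extract_visual_spec",
        "description": "Return a structured 3D scene specification.",
        "parameters": {
            "type": "object",
            "properties": {
                "surfaces": {"type": "array", "items": {"type": "string"}},
                "camera_phi_deg": {"type": "number"},
                "camera_theta_deg": {"type": "number"},
                "lighting": {"type": "string"},
            },
            "required": ["surfaces"],
        },
    },
}
```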
Notable Examples:
- Minimal Surfaces (catenoid, helicoid, Costa, Enneper surfaces); see the catenoid sketch after this list
- Rhombicosidodecahedron with golden ratio geometry
- Unnormalized Linear Transformers (ULTRA 1991)
- Brownian Motion visualizations
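To make the 3D target concrete, here is a minimal hand-written Manim CE ThreeDScene that renders a catenoid via its standard parametrization (x = cosh v cos u, y = cosh v sin u, z = v); this is an illustration, not output generated by the pipeline.

```python
import numpy as np
from manim import DEGREES, Surface, ThreeDScene

class Catenoid(ThreeDScene):
    def construct(self):
        # Tilt the camera so the surface's waist is visible.
        self.set_camera_orientation(phi=70 * DEGREES, theta=30 * DEGREES)
        catenoid = Surface(
            lambda u, v: np.array(
                [np.cosh(v) * np.cos(u), np.cosh(v) * np.sin(u), v]
            ),
            u_range=[0, 2 * np.pi],
            v_range=[-1.2, 1.2],
            resolution=(32, 16),
        )
        self.add(catenoid)
        self.begin_ambient_camera_rotation(rate=0.2)
        self.wait(4)
```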
Technical Stack:
- Kimi K2 via Moonshot AI's OpenAI-compatible API (a minimal client call is sketched after this list)
- Kosong integration for unified LLM interactions
- ToolAdapter for verbose instruction fallback
- E2B Sandbox integration
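A minimal sketch of calling Kimi K2 through an OpenAI-compatible endpoint; the base URL and model identifier below are assumptions to verify against Moonshot's documentation.

```python
import os
from openai import OpenAI

client = OpenAI(
    api_key=os.environ["MOONSHOT_API_KEY"],
    base_url="https://api.moonshot.ai/v1",  # assumed endpoint
)
response = client.chat.completions.create(
    model="kimi-k2-thinking",  # assumed model name
    messages=[{"role": "user", "content": "Specify a 3D scene for a catenoid."}],
)
print(response.choices[0].message.content)
```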
Repository: github.com/HarleyCoops/KimiK2Manim
nanochatAquaRat: RL Training on Algebraic Reasoning
Status: Research | Stars: 4 | Forks: 1
A modified nanochat framework trained with reinforcement learning on the DeepMind AQuA-RAT dataset, demonstrating domain transfer from free-form numeric answers (GSM8K) to multiple-choice algebraic reasoning.
Key Achievements:
- Domain Transfer: Adapted from GSM8K (free-form numeric) to AQuA-RAT (multiple-choice A-E)
- RL Training: GRPO-style reinforcement learning with reward shaping for categorical outputs
- Performance: 30-60% accuracy on AQuA-RAT dev set (depth-8: ~60M params, depth-20: ~561M params)
- Mechanistic Interpretability: Integrated attention analysis during training
Technical Modifications:
- Custom dataset loader for AQuA-RAT format (97,467 training problems)
- Letter extraction instead of numeric extraction
- Categorical evaluation (A-E) instead of exact match
- Modified reward function for multiple-choice format (sketched below)
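A hypothetical sketch of the two core changes, letter extraction and a categorical reward with light shaping; the project's actual reward function likely differs in its details.

```python
import re

def extract_letter(completion: str) -> str | None:
    """Take the last standalone A-E token in the completion as the answer."""
    hits = re.findall(r"\b([A-E])\b", completion)
    return hits[-1] if hits else None

def choice_reward(completion: str, gold: str) -> float:
    pred = extract_letter(completion)
    if pred is None:
        return 0.0   # no parseable answer letter
    if pred == gold:
        return 1.0   # correct choice
    return 0.1       # shaping bonus: well-formed but wrong
```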
Dataset Characteristics:
- 97,467 training problems plus small held-out dev and test splits
- Multiple-choice format (A-E)
- Natural language rationales
- Topics: arithmetic, algebra, geometry, probability
Repository: github.com/HarleyCoops/nanochatAquaRat
Technical Expertise
| Domain | Technologies & Frameworks |
|---|---|
| Quantitative Finance | Python · NumPy · SciPy · CFA/FRM-specific mathematics |
| AI & LLMs | PyTorch · LangChain · Hugging Face Transformers · Claude Agent SDK |
| Reinforcement Learning | GRPO · Reward Shaping · Compositional Rewards · RLHF |
| Visualization | Manim · Gradio · Prompt-to-Animation Pipelines · 3D Rendering |
| DevOps & Infrastructure | Docker · GitHub Actions · E2B Sandbox · Lambda Labs · Hyperbolic Labs |
| Language Preservation | VLM Extraction · Historical Text Processing · Low-Resource NLP |
Research Interests
- Indigenous Language Preservation: Using modern AI to preserve and revitalize endangered languages
- Compositional Reward Functions: Turning grammar rules into graded, interpretable reward signals for RL
- Multi-Agent Systems: Building autonomous agent pipelines for complex reasoning tasks
- Mathematical Visualization: Converting abstract concepts into intuitive visual explanations
- Low-Resource NLP: Training models on languages with limited training data
Publications & Contributions
- Dakota1890: Novel grammar-to-RL methodology for low-resource language learning
- Math-To-Manim: Open-source animation generation system (1.3k+ stars)
- nanochatAquaRat: Domain transfer research from numeric to categorical reasoning
How to Connect
- GitHub: github.com/HarleyCoops
- Hugging Face: huggingface.co/HarleyCooper
- X/Twitter: @christiancooper
Interested in collaborating on AI for education, finance, or language preservation? Let's connect.
"Working on complex problems in a complex world" — building tools that turn text into insight, preserve languages, and make mathematics visual.