# Welcome to i3-lab

> "Chase the SOTA pipeline, not the MMLU slop."
i3-lab is dedicated to extreme efficiency in LLM architecture. We develop the i3 model family: state-of-the-art architectures designed to reach, in hours on accessible hardware (like the NVIDIA Tesla P100 available free on Kaggle), performance levels that typically require days on massive GPU clusters.
## Why?
> Well, I’m determined to make this model and its architecture as efficient and fast as possible, knowing that not everyone can afford a decent GPU. In some countries, weak economies or import bans make it even harder, and sometimes all you have is a laptop with an i3-6006U, relying on free cloud computing services like Colab or Kaggle, which is exactly my situation :D
>
> — Daniel
## Why use RWKV-Attention when you could just use standard attention like LLaMA, Qwen, and many others?
> RWKV is great because it’s fast, lightweight, and doesn’t require much RAM, though it struggles with long contexts. Adding a bit of attention to the architecture makes the model more stable and smarter, but at the cost of quadratic memory usage. From my tests on a Kaggle P100 GPU, you can train SLMs (Small Language Models) within its 16 GB of VRAM, though it takes time and patience. Once you hit around 500 million parameters, training speed drops from about 300–400 tokens per second to 200–300, which may not sound like much, but it’s definitely noticeable. Of course, with something like an RTX 2060 or better, you wouldn’t run into this slowdown.
>
> — Daniel
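To make the tradeoff concrete, here is a minimal, self-contained PyTorch sketch of what a hybrid RWKV-Attention block *might* look like. This is an illustrative assumption, not the open-i3 implementation (see FlameF0X/open-i3 for the real code): `SimpleRWKVMix` is a simplified WKV recurrence without RWKV's bonus term, and the module names and layer layout are hypothetical.

```python
# Hypothetical sketch of a hybrid RWKV-Attention block. Not the open-i3
# code: see FlameF0X/open-i3 for the actual implementation.
import torch
import torch.nn as nn


class SimpleRWKVMix(nn.Module):
    """Simplified RWKV-style time mixing: a linear-time weighted average
    over past values, carried in a fixed-size state (O(T) time, O(1)
    memory in sequence length)."""

    def __init__(self, dim: int):
        super().__init__()
        self.receptance = nn.Linear(dim, dim, bias=False)
        self.key = nn.Linear(dim, dim, bias=False)
        self.value = nn.Linear(dim, dim, bias=False)
        self.output = nn.Linear(dim, dim, bias=False)
        self.decay = nn.Parameter(torch.zeros(dim))  # learned per-channel decay

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        B, T, C = x.shape
        r = torch.sigmoid(self.receptance(x))  # gate on the recurrent output
        k = torch.exp(self.key(x))             # positive mixing weights
        v = self.value(x)
        w = torch.exp(-torch.exp(self.decay))  # decay factor in (0, 1)
        num = x.new_zeros(B, C)                # running weighted sum of values
        den = x.new_zeros(B, C)                # running sum of weights
        out = []
        for t in range(T):                     # recurrent scan over time
            num = w * num + k[:, t] * v[:, t]
            den = w * den + k[:, t]
            out.append(r[:, t] * num / (den + 1e-8))
        return self.output(torch.stack(out, dim=1))


class HybridBlock(nn.Module):
    """One hybrid layer: cheap RWKV mixing for long-range context, plus a
    standard (quadratic-memory) causal self-attention pass for stability."""

    def __init__(self, dim: int, n_heads: int = 4):
        super().__init__()
        self.norm1 = nn.LayerNorm(dim)
        self.rwkv = SimpleRWKVMix(dim)
        self.norm2 = nn.LayerNorm(dim)
        self.attn = nn.MultiheadAttention(dim, n_heads, batch_first=True)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        x = x + self.rwkv(self.norm1(x))       # linear-time token mixing
        h = self.norm2(x)
        T = h.size(1)
        causal = torch.triu(                   # True = position is masked
            torch.ones(T, T, dtype=torch.bool, device=h.device), diagonal=1
        )
        a, _ = self.attn(h, h, h, attn_mask=causal, need_weights=False)
        return x + a                           # quadratic-memory attention


if __name__ == "__main__":
    x = torch.randn(2, 128, 64)                # (batch, seq_len, dim)
    print(HybridBlock(64)(x).shape)            # torch.Size([2, 128, 64])
```

The design intent mirrors Daniel's note above: the recurrent scan keeps memory flat in sequence length, while the single attention pass reintroduces exact token-to-token lookups at quadratic cost.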
## i3: High-Efficiency Training
We specialize in hybrid architectures, specifically RWKV-Attention, to bypass the quadratic scaling bottlenecks of traditional Transformers.
- Fast Iteration: Trainable in hours, not weeks.
- Accessible SOTA: High performance on legacy/mid-range hardware.
- Open Research: Push the boundaries of what is possible with limited compute.
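To put a rough number on the quadratic bottleneck, here is a back-of-envelope sketch. The head count, model width, and fp16 assumption are illustrative choices, not open-i3's actual configuration:

```python
# Back-of-envelope memory comparison: quadratic attention scores vs.
# RWKV's fixed-size recurrent state. All dimensions are hypothetical.

def attn_score_bytes(seq_len: int, n_heads: int = 16, dtype_bytes: int = 2) -> int:
    # One (seq_len x seq_len) fp16 score matrix per head, per layer.
    return n_heads * seq_len * seq_len * dtype_bytes

def rwkv_state_bytes(dim: int = 1024, dtype_bytes: int = 2) -> int:
    # RWKV carries two fixed-size accumulators per layer, independent of seq_len.
    return 2 * dim * dtype_bytes

for T in (2_048, 16_384, 131_072):
    print(f"T={T:>7}: attention scores ~{attn_score_bytes(T) / 2**30:6.2f} GiB, "
          f"RWKV state ~{rwkv_state_bytes() / 2**10:.1f} KiB")
```

Under these assumptions, a single layer's score matrices already consume roughly half of a P100's 16 GB at 16k tokens, while the recurrent state stays a few kilobytes regardless of context length. That asymmetry is the whole bet behind the hybrid design.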
## Quick Links
- Source Code: FlameF0X/open-i3
- Community: Join our Discord
## Roadmap / TODO
We are currently scaling our architecture through the following milestones:
- i3-500m — Our 500M-parameter text generator.
- i3-Ethan-it — Specialized instruction-tuned variant.
- i3-1B — Our first major scale-up.
- i3-7B-A1.6B — Mixture of Experts / Sparsity testing.
## Usage & Attribution
The open-i3 codebase is licensed under Apache 2.0. We believe in open-source, but we value attribution.
If you use our architecture (RWKV-Attention) or our weights, you are required, per Sections 4(b) and 4(d) of the license, to:
- Carry prominent notices of any modifications.
- Include a readable copy of the attribution notices from our NOTICE file.
You must also include the attribution link found in the open-i3 GitHub repository in your documentation or model card.
Made with ❤️ and DETERMINATION by Daniel.