---
license: apache-2.0
language:
- zh
- en
pipeline_tag: text-generation
library_name: transformers
datasets:
- thu-pacman/PCMind-2.1-Kaiyuan-2B
---
# PCMind-2.1-Kaiyuan-2B (脑海-2.1-开元-2B)
[License](LICENSE) | [arXiv:2512.07612](https://arxiv.org/abs/2512.07612)
PCMind-2.1-Kaiyuan-2B is a cutting-edge, **fully open-source language model** (i.e., its training dataset is open as well) trained on an Ascend 910A cluster.
With 1.4B non-embedding parameters trained on 2.2 trillion tokens,
it achieves performance competitive with current state-of-the-art fully open models and even rivals some leading open-weight models of similar scale.
<center>
<img alt="Model Performance Comparison" src="model_performance_comparison.svg"/>
</center>
Other artifacts:
* The dataset used to train Kaiyuan-2B is published at <https://huggingface.co/datasets/thu-pacman/PCMind-2.1-Kaiyuan-2B>.
* The _PCMind-2.1-Kaiyuan-2B Technical Report_ is published at <https://arxiv.org/abs/2512.07612>.
* The data preprocessing framework built on PySpark is published at <https://github.com/thu-pacman/Kaiyuan-Spark>.
* The training framework built on [MindFormers](https://github.com/mindspore-lab/mindformers) is published at <https://github.com/thu-pacman/kaiyuan-mindformers> (`kaiyuan` branch).
## Introduction
Our data preprocessing and pre-training pipeline is designed to improve training efficiency and model quality
through several key innovations:
1. **Dataset Quality Benchmarking:** A quantile benchmarking approach applied to major open-source pretraining datasets (e.g., DCLM Baseline, Fineweb-Edu)
reveals their quality distributions via small-scale training runs, informing better data selection.
2. **Multi-Phase Pre-Training:** The training progresses through 5 phases, strategically increasing the ratio of reasoning-intensive and knowledge-intensive samples
while selectively repeating high-quality data portions.
3. **Multi-Domain Curriculum Learning:** We keep a stable data mixture across datasets while ordering the samples within each dataset by ascending quality (see the curriculum sketch after this list).
This curriculum is further leveraged through [accommodated learning rate decay and model averaging](https://arxiv.org/abs/2511.18903).
4. **High-Performance Data Preprocessing:** We built an open-source,
Spark-based framework optimized with [Chukonu](https://pacman.cs.tsinghua.edu.cn/~cwg/publication/chukonu-2021/),
delivering exceptional efficiency for large-scale deduplication and sorting tasks.
5. **Architecture for Training Stability:** Optimized for training on Ascend 910A clusters (FP16 precision, comparable to V100 GPUs),
the Kaiyuan-2B architecture integrates QK norm, sandwich norm, and soft-capping to keep pre-training stable and robust (the attention sketch after this list illustrates QK norm and soft-capping).
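As a companion to item 3, here is a minimal sketch of the curriculum idea under simplifying assumptions. It is an illustration only, not the actual Kaiyuan-Spark pipeline; the `build_curriculum` helper, the quality scores, and the mixture weights are all hypothetical.
```python
# Illustrative sketch of multi-domain curriculum ordering (not the authors' pipeline).
# Each dataset is sorted by ascending quality, while a fixed mixture ratio across
# datasets keeps the cross-domain composition stable over the whole training stream.
import random

def build_curriculum(datasets, mixture, total_samples, seed=0):
    """datasets: name -> list of (sample, quality_score); mixture: name -> weight."""
    rng = random.Random(seed)
    # Within each dataset, later training steps see higher-quality samples.
    ordered = {
        name: [sample for sample, _ in sorted(items, key=lambda x: x[1])]
        for name, items in datasets.items()
    }
    cursors = {name: 0 for name in datasets}
    names, weights = zip(*mixture.items())
    stream = []
    for _ in range(total_samples):
        # The source dataset is drawn with fixed weights, so the mixture stays stable.
        name = rng.choices(names, weights=weights, k=1)[0]
        stream.append(ordered[name][cursors[name] % len(ordered[name])])
        cursors[name] += 1
    return stream

# Toy usage: two domains with made-up quality scores and a 70/30 mixture.
toy = {
    "web":  [("w_low", 0.1), ("w_mid", 0.5), ("w_high", 0.9)],
    "math": [("m_low", 0.2), ("m_high", 0.8)],
}
print(build_curriculum(toy, {"web": 0.7, "math": 0.3}, total_samples=6))
```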
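As a companion to item 5, the following is a hedged PyTorch sketch of how QK norm and attention-logit soft-capping can sit inside an attention block. It is not the Kaiyuan-2B implementation (which lives in the kaiyuan-mindformers repository and also uses sandwich norm around whole sublayers, omitted here); the dimensions and the cap value are placeholders.
```python
# Illustrative PyTorch sketch of QK norm + soft-capping; not the MindSpore code used
# for Kaiyuan-2B. nn.RMSNorm requires PyTorch >= 2.4 (nn.LayerNorm works as a stand-in).
# Causal masking is omitted for brevity.
import torch
import torch.nn as nn
import torch.nn.functional as F

def soft_cap(x, cap=30.0):
    # Soft-capping bounds values to (-cap, cap), which helps avoid FP16 overflow.
    return cap * torch.tanh(x / cap)

class CappedAttention(nn.Module):
    def __init__(self, dim, n_heads, cap=30.0):
        super().__init__()
        self.n_heads, self.head_dim, self.cap = n_heads, dim // n_heads, cap
        self.qkv = nn.Linear(dim, 3 * dim, bias=False)
        self.out = nn.Linear(dim, dim, bias=False)
        # QK norm: normalize queries and keys per head before the dot product.
        self.q_norm = nn.RMSNorm(self.head_dim)
        self.k_norm = nn.RMSNorm(self.head_dim)

    def forward(self, x):
        b, t, _ = x.shape
        q, k, v = self.qkv(x).chunk(3, dim=-1)
        q = self.q_norm(q.view(b, t, self.n_heads, self.head_dim))
        k = self.k_norm(k.view(b, t, self.n_heads, self.head_dim))
        v = v.view(b, t, self.n_heads, self.head_dim)
        scores = torch.einsum("bqhd,bkhd->bhqk", q, k) / self.head_dim ** 0.5
        scores = soft_cap(scores, self.cap)  # cap attention logits before softmax
        attn = F.softmax(scores, dim=-1)
        y = torch.einsum("bhqk,bkhd->bqhd", attn, v).reshape(b, t, -1)
        return self.out(y)

x = torch.randn(2, 8, 256)
print(CappedAttention(dim=256, n_heads=4)(x).shape)  # torch.Size([2, 8, 256])
```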
## Usage
The model architecture is similar to [Qwen/Qwen3-1.7B](https://huggingface.co/Qwen/Qwen3-1.7B),
so the model can be loaded directly with libraries such as `transformers`.
See [`demo.py`](demo.py) for a complete example.
*Note: This is a pretrained base model that has not undergone supervised fine-tuning,
reinforcement learning (RL), or any other post-training procedure,
so it is not ready for direct conversation.
We recommend using few-shot prompting to guide its outputs (as in the sketch below),
or fine-tuning the model for specific downstream applications.*
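The following is a minimal loading and few-shot generation sketch; `demo.py` remains the reference example. The model id is assumed to match this card, and the generation settings are illustrative rather than recommended values.
```python
# Minimal sketch, assuming the checkpoint loads through the standard transformers
# Auto* classes; see demo.py for the reference example. device_map="auto" requires
# the accelerate package.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "thu-pacman/PCMind-2.1-Kaiyuan-2B"  # assumed to be this repository's id
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype="auto", device_map="auto")

# Base model: steer it with a few-shot prompt rather than a chat-style message.
prompt = (
    "Q: What is the capital of France?\nA: Paris\n"
    "Q: What is the capital of Japan?\nA:"
)
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=16, do_sample=False)
print(tokenizer.decode(output[0][inputs["input_ids"].shape[-1]:], skip_special_tokens=True))
```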
## Citation
Please cite [our technical report](https://arxiv.org/abs/2512.07612) if you use our model, dataset, or code.
```bibtex
@misc{luo2025pcmind21kaiyuan2btechnicalreport,
title={PCMind-2.1-Kaiyuan-2B Technical Report},
author={Kairong Luo and Zhenbo Sun and Xinyu Shi and Shengqi Chen and Bowen Yu and Yunyi Chen and Chenyi Dang and Hengtao Tao and Hui Wang and Fangming Liu and Kaifeng Lyu and Wenguang Chen},
year={2025},
eprint={2512.07612},
archivePrefix={arXiv},
primaryClass={cs.CL},
url={https://arxiv.org/abs/2512.07612},
}
```
## License
All artifacts of Kaiyuan-2B (including code, model weights, and training data)
are licensed under the [Apache License 2.0](LICENSE) with the following copyright notice:
```text
Copyright 2025 Tsinghua University & Peng Cheng Laboratory
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
``` |