MIREI Collection
This model is part of MIREI (Matched Investigation of Representation Embedding Insights). Code: https://github.com/iamtatsuki05/MIREI
Llama-JP-0.5B-init is a Japanese initialization of the Llama architecture with approximately 0.5B non-embedding parameters. The checkpoint serves as a clean starting point for downstream pre-training or instruction tuning rather than a production-ready model.
Requirements:

```
transformers>=4.51.0
accelerate>=1.6.0
sentencepiece>=0.2.0
flash-attn>=2.7.3
```
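Before loading the model, it can help to confirm that the pinned minimum versions above are actually installed. A minimal sketch using only the standard library follows; the dotted-version comparison is deliberately naive (it is not PEP 440-compliant and ignores pre-release suffixes), and the helper names are my own, not part of this repository:

```python
from importlib import metadata


def meets_minimum(installed: str, minimum: str) -> bool:
    """Naive dotted-version comparison; ignores pre-release/dev suffixes."""
    as_tuple = lambda v: tuple(int(part) for part in v.split(".")[:3])
    return as_tuple(installed) >= as_tuple(minimum)


# Minimums copied from the requirements listed above.
REQUIREMENTS = {
    "transformers": "4.51.0",
    "accelerate": "1.6.0",
    "sentencepiece": "0.2.0",
    "flash-attn": "2.7.3",
}


def check(reqs: dict) -> dict:
    """Map each package name to True (minimum met) or False (missing/too old)."""
    report = {}
    for pkg, minimum in reqs.items():
        try:
            report[pkg] = meets_minimum(metadata.version(pkg), minimum)
        except metadata.PackageNotFoundError:
            report[pkg] = False  # package not installed at all
    return report


if __name__ == "__main__":
    print(check(REQUIREMENTS))
```

Any `False` entry means the corresponding package is missing or below the pinned version.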
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "iamtatsuki05/Llama-JP-0.5B-init"
model_kwargs = {
    "torch_dtype": torch.bfloat16,
    "attn_implementation": "flash_attention_2",
    "device_map": "auto",
}

tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name, **model_kwargs)

prompt = "ちいかわのハチワレは"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(
    **inputs,
    max_new_tokens=256,
    temperature=0.8,
    top_p=0.9,
    do_sample=True,
)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```
This table lists the initialization checkpoints prior to any domain-specific training. All variants share the sarashina2.2 tokenizer.
| ID | Architecture | #Param. | #Param. w/o Emb. |
|---|---|---|---|
| iamtatsuki05/ModernBERT-JP-0.5B-init | ModernBERT | 679M | 548M |
| iamtatsuki05/Llama-JP-0.5B-init (this model) | Llama | 661M | 530M |
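The two parameter columns can be reproduced approximately by summing tensor sizes and subtracting the embedding tables. The helper below is a sketch of my own, not part of this repository; note that if input and output embeddings are tied, `model.parameters()` already counts the shared tensor once. The toy model at the end just demonstrates the arithmetic:

```python
import torch
from torch import nn


def count_parameters(model: nn.Module) -> tuple:
    """Return (total, non-embedding) parameter counts for a module."""
    total = sum(p.numel() for p in model.parameters())
    # Sum everything that lives inside an nn.Embedding submodule.
    embedding = sum(
        p.numel()
        for module in model.modules()
        if isinstance(module, nn.Embedding)
        for p in module.parameters()
    )
    return total, total - embedding


# Toy example: a 10x4 embedding (40 params) plus a 4->4 linear (16 + 4 params).
toy = nn.Sequential(nn.Embedding(10, 4), nn.Linear(4, 4))
print(count_parameters(toy))  # → (60, 20)
```

Loading `iamtatsuki05/Llama-JP-0.5B-init` with `AutoModelForCausalLM.from_pretrained` and passing it to `count_parameters` should land near the 661M / 530M figures in the table.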
This model is distributed under the MIT License.
```bibtex
@article{MIREI,
  title={同一条件下における Encoder/Decoder アーキテクチャによる文埋め込みの性能分析},
  author={岡田 龍樹 and 杉本 徹},
  journal={言語処理学会第 32 回年次大会 (NLP2026)},
  year={2026}
}
```