binary-tokenizer-001-32k

A cross-platform BPE tokenizer for binary executables and machine code. Trained on 13 GB of diverse binaries spanning Linux, Windows, macOS, and Android platforms.

🔗 Model: mjbommar/binary-tokenizer-001-32k
📊 Dataset: mjbommar/binary-30k-tokenized
📄 Paper: Binary BPE: Cross-Platform Tokenization for Binary Analysis (arXiv preprint coming soon)

Overview

  • Vocabulary Size: 32,768 tokens (2^15)
  • Token Composition: 256 base bytes + 32,505 learned merges + 7 special tokens
  • Average Token Length: 3.812 bytes
  • 3-byte Instructions: 19.5% of vocabulary (6,380 tokens)
  • Compression Ratio: ~2.7 bytes/token on typical binaries

Training Configuration

Training Corpus:

  • Source: mjbommar/binary-30k-tokenized
  • Size: ~13 GB
  • Files: 30,738 binary files
  • Platforms: Linux (ELF), Windows (PE), macOS (Mach-O), Android (APK)
  • Architectures: x86-64, x86, ARM64, ARM, MIPS, RISC-V

Training Parameters:

  • Vocabulary size: 32,768 (including 7 special tokens)
  • Min frequency: 10
  • Chunk size: 8,192 bytes
  • Allowed token lengths: 1-16 bytes (default)
  • Training duration: ~6-7 hours
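
The model itself was trained with the bbpe toolchain. As a rough approximation, the sketch below maps the same settings onto the HuggingFace tokenizers library; the chunking helper, the max_token_length mapping for the allowed-lengths setting, and the single-file corpus are illustrative assumptions, not the actual pipeline.

from tokenizers import Tokenizer, models, trainers

SPECIALS = ["<|start|>", "<|end|>", "<|pad|>", "<|unk|>",
            "<|cls|>", "<|sep|>", "<|mask|>"]

# Byte-level BPE over latin-1 strings: every byte value 0-255 is a base symbol.
tokenizer = Tokenizer(models.BPE())
trainer = trainers.BpeTrainer(
    vocab_size=32768,                                # 2^15, special tokens included
    min_frequency=10,                                # matches "Min frequency: 10"
    special_tokens=SPECIALS,
    initial_alphabet=[chr(b) for b in range(256)],   # 256 base bytes
    max_token_length=16,                             # assumed mapping of "allowed lengths: 1-16 bytes"
)

def iter_chunks(paths, chunk_size=8192):
    # Stream binaries as latin-1 strings in 8,192-byte chunks (the training chunk size).
    for path in paths:
        with open(path, "rb") as f:
            while chunk := f.read(chunk_size):
                yield chunk.decode("latin-1")

tokenizer.train_from_iterator(iter_chunks(["/usr/bin/ls"]), trainer=trainer)  # hypothetical corpus
tokenizer.save("tokenizer-32768.json")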

Vocabulary Statistics

Composition:

  • Base bytes (0-255): 256 tokens
  • Learned merges: 32,505 tokens
  • Special tokens: 7 tokens (<|start|>, <|end|>, <|pad|>, <|unk|>, <|cls|>, <|sep|>, <|mask|>)
  • Total: 32,768 tokens

Quality Metrics:

  • All tokens reachable: ✓ Yes
  • Valid merges: 32,505 / 32,505
  • Power-of-2 size: ✓ Yes (2^15)
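
These figures are easy to re-check against the published tokenizer. A minimal sketch, assuming network access to the HuggingFace Hub:

from tokenizers import Tokenizer

tokenizer = Tokenizer.from_pretrained("mjbommar/binary-tokenizer-001-32k")
vocab = tokenizer.get_vocab()

assert len(vocab) == 32768                 # total vocabulary size
assert len(vocab) & (len(vocab) - 1) == 0  # power of 2 (2^15)

for tok in ["<|start|>", "<|end|>", "<|pad|>", "<|unk|>",
            "<|cls|>", "<|sep|>", "<|mask|>"]:
    assert tokenizer.token_to_id(tok) is not None  # all 7 special tokens present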

Token Length Distribution

Length      Count   Percentage   Description
1 byte        256         0.8%   Base bytes
2 bytes    13,428        41.0%   Byte pairs
3 bytes     6,380        19.5%   Complete x86-64 instructions
4 bytes     6,236        19.0%   Instructions with operands
5 bytes     1,763         5.4%   Complex patterns
6 bytes     1,395         4.3%   Complex patterns
7 bytes       676         2.1%   Complex patterns
8 bytes       963         2.9%   Complex patterns
9+ bytes    1,467         4.5%   Long patterns

Average Token Length: 3.812 bytes
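
The histogram above can be recomputed from the released vocabulary. Since tokens are stored as latin-1 strings (see Usage below), encoding each one back to latin-1 recovers its byte length; this sketch assumes that convention and excludes the 7 special tokens:

from collections import Counter
from tokenizers import Tokenizer

tokenizer = Tokenizer.from_pretrained("mjbommar/binary-tokenizer-001-32k")
SPECIALS = {"<|start|>", "<|end|>", "<|pad|>", "<|unk|>",
            "<|cls|>", "<|sep|>", "<|mask|>"}

# Map each token string back to bytes and tally lengths.
lengths = Counter(
    len(tok.encode("latin-1"))
    for tok in tokenizer.get_vocab()
    if tok not in SPECIALS
)
for n in sorted(lengths):
    print(f"{n:2d} bytes: {lengths[n]:6d} tokens")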


Byte Content Analysis

Content Categories:

  • Contains NULL byte (0x00): 8,350 tokens (25.5%)
  • ASCII printable (0x20-0x7E): 6,460 tokens (19.7%)
  • All ASCII (<0x80): 13,796 tokens (42.1%)
  • High bytes (≥0x80): 18,964 tokens (57.9%)

Most Common Bytes in Tokens:

  • 0x00 (NULL): 20,462 occurrences - Padding and alignment
  • 0xFF: 3,502 occurrences - Sentinel values
  • 0x48 (REX.W): 2,883 occurrences - x86-64 REX prefix
  • 0x8B (MOV): 1,934 occurrences - x86-64 MOV opcode
  • 0xCC (INT3): 1,366 occurrences - Debug breakpoint padding
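
Both the content categories and the byte ranking can be reproduced by decoding each learned token back to bytes. A sketch, again assuming the latin-1 convention and skipping special tokens:

from collections import Counter
from tokenizers import Tokenizer

tokenizer = Tokenizer.from_pretrained("mjbommar/binary-tokenizer-001-32k")
SPECIALS = {"<|start|>", "<|end|>", "<|pad|>", "<|unk|>",
            "<|cls|>", "<|sep|>", "<|mask|>"}
tokens = [t for t in tokenizer.get_vocab() if t not in SPECIALS]

# Content category: tokens containing at least one NULL byte.
null_tokens = sum(1 for t in tokens if "\x00" in t)
print(f"Contains NULL: {null_tokens} tokens")

# Ranking: most common byte values across all learned tokens.
byte_counts = Counter()
for t in tokens:
    byte_counts.update(t.encode("latin-1"))
for byte, count in byte_counts.most_common(5):
    print(f"0x{byte:02X}: {count:,} occurrences")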

Sequence Coverage

N-byte Sequence Diversity:

Length   Learned Tokens   Possible Sequences   Coverage
1-byte              256                  256   100.00%
2-byte           13,428               65,536    20.49%
3-byte            6,380           16,777,216    0.038%
4-byte            6,236        4,294,967,296    0.00015%
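
Coverage is simply the number of learned length-n tokens divided by the 256^n possible n-byte sequences; the percentages in the table follow directly from its counts:

# Learned tokens per length, taken from the table above.
counts = {1: 256, 2: 13_428, 3: 6_380, 4: 6_236}
for n, c in counts.items():
    possible = 256 ** n
    print(f"{n}-byte: {c:,} / {possible:,} = {100 * c / possible:.5f}%")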

Files

  • tokenizer-32768.json - Trained tokenizer model (2.5 MB)
  • analysis_results.json - Detailed analysis statistics
  • training.log - Training output log (if available)
  • training_stats.txt - Training summary (if available)

Usage

Load from HuggingFace Hub:

from tokenizers import Tokenizer

# Load directly from HuggingFace
tokenizer = Tokenizer.from_pretrained("mjbommar/binary-tokenizer-001-32k")

Use with the bbpe CLI:

# With bbpe CLI
bbpe encode --tokenizer tokenizer-32768.json /path/to/binary
bbpe info tokenizer-32768.json

Complete Python Example:

from tokenizers import Tokenizer

# Load from HuggingFace or local file
tokenizer = Tokenizer.from_pretrained("mjbommar/binary-tokenizer-001-32k")
# OR: tokenizer = Tokenizer.from_file("tokenizer-32768.json")

# Read binary file and decode as latin-1 (preserves all byte values 0-255)
with open("/usr/bin/ls", "rb") as f:
    data = f.read()
    data_str = data.decode("latin-1")

# Encode the binary data
encoding = tokenizer.encode(data_str)
print(f"File size: {len(data)} bytes")
print(f"Total tokens: {len(encoding.ids)}")
print(f"Compression: {len(data) / len(encoding.ids):.3f} bytes/token")

# First 10 tokens
for i, (token_id, token) in enumerate(zip(encoding.ids[:10], encoding.tokens[:10])):
    token_bytes = token.encode("latin-1")
    print(f"  Token {i}: ID={token_id:5d} hex={token_bytes.hex():20s} ({len(token_bytes)} bytes)")

# Decode tokens back to bytes
decoded_str = tokenizer.decode(encoding.ids)
decoded_bytes = decoded_str.encode("latin-1")
assert decoded_bytes == data  # Perfect reconstruction

Example output for /usr/bin/ls (142,312 bytes):

File size: 142312 bytes
Total tokens: 53184
Compression: 2.676 bytes/token

First 10 tokens:
  Token 0: ID=  127 hex=7f                   (1 bytes)
  Token 1: ID= 3732 hex=454c                 (2 bytes)
  Token 2: ID= 4707 hex=4602                 (2 bytes)
  Token 3: ID=  392 hex=0101                 (2 bytes)
  Token 4: ID=  662 hex=000000000000000000   (9 bytes)
  Token 5: ID=  265 hex=0300                 (2 bytes)
  Token 6: ID= 1369 hex=3e00                 (2 bytes)
  Token 7: ID=  279 hex=01000000             (4 bytes)
  Token 8: ID=   48 hex=30                   (1 bytes)
  Token 9: ID=  109 hex=6d                   (1 bytes)

Decoded: 7f454c4602010100000000000000000003003e0001000000306d...
(ELF header: 7f 45 4c 46 = ELF magic bytes)

Citation

If you use this tokenizer in your research, please cite:

@article{bommarito2025binarybpe,
  title={Binary BPE: Cross-Platform Tokenization for Binary Analysis},
  author={Bommarito II, Michael J.},
  journal={arXiv preprint},
  year={2025},
  note={Preprint coming soon}
}

Author: Michael J. Bommarito II (michael.bommarito@gmail.com)


Generated: November 13, 2025
Training Script: train_tokenizers.sh
Analysis Script: analyze_tokenizer.py
