---
language:
- en
license: apache-2.0
library_name: transformers
tags:
- binary-analysis
- file-type-detection
- byte-level
- classification
- mime-type
- security
pipeline_tag: text-classification
base_model: mjbommar/magic-bert-50m-mlm
model-index:
- name: magic-bert-50m-classification
  results:
  - task:
      type: text-classification
      name: File Type Classification
    metrics:
    - name: Probing Accuracy
      type: accuracy
      value: 89.7
    - name: Silhouette Score
      type: silhouette
      value: 0.55
    - name: F1 (Weighted)
      type: f1
      value: 0.886
---
|
|
|
|
|
# Magic-BERT 50M Classification |
|
|
|
|
|
A BERT-style transformer model fine-tuned for binary file type classification. This model classifies binary files into 106 MIME types based on their content structure. |
|
|
|
|
|
## Why Not Just Use libmagic? |
|
|
|
|
|
For intact files starting at byte 0, libmagic works well. But libmagic matches *signatures at fixed offsets*. Magic-BERT learns *structural patterns* throughout the file, enabling use cases where you don't have clean file boundaries: |
|
|
|
|
|
- **Network streams**: Classifying packet payloads mid-connection, before headers arrive |
|
|
- **Disk forensics**: Identifying file types during carving, when scanning raw disk images without filesystem metadata |
|
|
- **Fragment analysis**: Working with partial files, slack space, or corrupted data |
|
|
- **Adversarial contexts**: Detecting file types when magic bytes are stripped, spoofed, or deliberately misleading |
|
|
|
|
|
## Model Description |
|
|
|
|
|
This model extends magic-bert-50m-mlm with contrastive fine-tuning to produce embeddings optimized for file type discrimination. It adds a projection head and a classifier trained with a supervised contrastive loss.
|
|
|
|
|
| Property | Value |
|----------|-------|
| Parameters | 59M (+ 0.4M classifier head) |
| Hidden Size | 512 |
| Projection Dimension | 256 |
| Number of Classes | 106 MIME types |
| Base Model | magic-bert-50m-mlm |
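
The actual head implementation ships with the repository (loaded via `trust_remote_code`); the sketch below is only an illustrative reconstruction consistent with the table above (hidden size 512, 256-dim L2-normalized projection, 106 classes). The class and attribute names here are hypothetical.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class ProjectionClassifierHead(nn.Module):
    """Hypothetical head: 512-d pooled hidden state -> 256-d L2-normalized
    embedding plus a 106-way MIME-type classifier (wiring is an assumption)."""

    def __init__(self, hidden_size: int = 512, proj_dim: int = 256, num_classes: int = 106):
        super().__init__()
        self.projection = nn.Linear(hidden_size, proj_dim)      # contrastive embedding space
        self.classifier = nn.Linear(hidden_size, num_classes)   # MIME-type logits

    def forward(self, pooled: torch.Tensor):
        # pooled: [batch, hidden_size] sequence-level representation (e.g., [CLS])
        embedding = F.normalize(self.projection(pooled), dim=-1)  # unit-norm, 256-dim
        logits = self.classifier(pooled)                          # [batch, 106]
        return embedding, logits
```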
|
|
|
|
|
### Tokenizer |
|
|
|
|
|
The tokenizer uses the Binary BPE methodology introduced in [Bommarito (2025)](https://arxiv.org/abs/2511.17573). The original Binary BPE tokenizers (available at [mjbommar/binary-tokenizer-001-64k](https://huggingface.co/mjbommar/binary-tokenizer-001-64k)) were trained exclusively on executable binaries (ELF, PE, Mach-O). This tokenizer uses the same BPE training approach but was trained on a diverse corpus spanning 106 file types. |
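
As a quick illustration of byte-level tokenization (assuming the tokenizer bundled with this repository, as used in the examples below), raw bytes are mapped to a latin-1 string before encoding:

```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("mjbommar/magic-bert-50m-classification")

# latin-1 maps every byte value 0-255 to a unique character, so no bytes are lost
raw = b"%PDF-1.7\n%\xe2\xe3\xcf\xd3\n"   # example: the first bytes of a PDF
encoded = tokenizer(raw.decode("latin-1"), return_tensors="pt")

print(encoded["input_ids"].shape)                                         # [1, sequence_length]
print(tokenizer.convert_ids_to_tokens(encoded["input_ids"][0].tolist()))  # byte-level BPE tokens
```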
|
|
|
|
|
## Intended Uses |
|
|
|
|
|
**Primary use cases:** |
|
|
- File type classification from binary content |
|
|
- MIME type detection without relying on file extensions |
|
|
- Embedding-based file similarity search |
|
|
- Security analysis and malware triage |
|
|
|
|
|
**Example tasks:** |
|
|
- Identifying file types in network traffic |
|
|
- Classifying files with missing or incorrect extensions |
|
|
- Building file type indexes for large archives |
|
|
|
|
|
## Detailed Use Cases |
|
|
|
|
|
### Network Traffic Analysis |
|
|
When inspecting packet payloads, you often see file data mid-stream—TCP reassembly may give you bytes 1500-3000 of a PDF before you ever see byte 0. Traditional signature matching fails here. Classification embeddings can identify file types from interior content. |
|
|
|
|
|
### Disk Forensics & File Carving |
|
|
During disk image analysis, you scan raw bytes looking for file boundaries. Tools like Scalpel rely on header/footer signatures, but many files lack clear footers. This model can score byte ranges for file type probability, helping identify carved fragments or validate carving results. |
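
As a rough sketch of that idea (not an official tool), you could slide a fixed-size window across a raw image and record the top class and confidence at each offset. The loading code mirrors the How to Use section below; `disk.img` and the window/stride values are placeholders.

```python
import os
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

repo = "mjbommar/magic-bert-50m-classification"
model = AutoModelForSequenceClassification.from_pretrained(repo, trust_remote_code=True)
tokenizer = AutoTokenizer.from_pretrained(repo)
model.eval()

def score_windows(path, window=512, stride=4096):
    """Yield (offset, predicted_class_id, confidence) for fixed-size windows of raw bytes."""
    size = os.path.getsize(path)
    with open(path, "rb") as f:
        for offset in range(0, size, stride):
            f.seek(offset)
            chunk = f.read(window)
            if not chunk:
                break
            inputs = tokenizer(chunk.decode("latin-1"), return_tensors="pt",
                               truncation=True, max_length=512)
            with torch.no_grad():
                probs = torch.softmax(model(**inputs).logits, dim=-1)
            confidence, class_id = probs.max(dim=-1)
            yield offset, class_id.item(), confidence.item()

for offset, class_id, confidence in score_windows("disk.img"):
    print(f"offset {offset:#010x}: class {class_id} ({confidence:.1%})")
```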
|
|
|
|
|
### Incident Response |
|
|
Malware often strips or modifies magic bytes to evade detection. Polyglot files (valid as multiple types) exploit signature-based tools. A model that learns structural patterns throughout the file provides a second opinion that doesn't rely solely on the first few bytes.
|
|
|
|
|
### Similarity Search |
|
|
The embedding space (256-dimensional, L2-normalized) enables similarity search across file collections: "find files structurally similar to this sample" for malware clustering, duplicate detection, or content-based retrieval. |
|
|
|
|
|
## MLM vs Classification: Two-Phase Training |
|
|
|
|
|
This is the **Phase 2 (Classification)** model built on Magic-BERT. The training pipeline has two phases: |
|
|
|
|
|
| Phase | Model | Task | Purpose |
|-------|-------|------|---------|
| Phase 1 | magic-bert-50m-mlm | Masked Language Modeling | Learn byte-level patterns and file structure |
| **Phase 2** | **This model** | Contrastive Learning | Optimize embeddings for file type discrimination |
|
|
|
|
|
### Training Configuration
|
|
|
|
|
| Phase | Steps | Learning Rate | Objective |
|-------|-------|---------------|-----------|
| 1: MLM Pre-training | 100,000 | 1e-4 | Masked Language Modeling |
| 2: Contrastive Fine-tuning | 50,000 | 1e-6 | Supervised Contrastive Loss |
|
|
|
|
|
**Phase 2 specifics:** |
|
|
- Frozen: Embeddings + first 4 transformer layers |
|
|
- Learning rate: 100x lower than Phase 1 |
|
|
- Objective: Pull same-MIME-type samples together, push different types apart (a loss sketch follows below)
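
The exact training code is not published here; the snippet below is a minimal sketch of a standard SupCon-style supervised contrastive loss over L2-normalized embeddings, which matches the objective described above but may differ from the actual implementation.

```python
import torch
import torch.nn.functional as F

def supervised_contrastive_loss(embeddings: torch.Tensor,
                                labels: torch.Tensor,
                                temperature: float = 0.07) -> torch.Tensor:
    """SupCon-style loss over L2-normalized embeddings.

    embeddings: [batch, dim]; labels: [batch] integer MIME-type ids.
    Same-label pairs are treated as positives, everything else as negatives.
    """
    sim = embeddings @ embeddings.T / temperature                  # pairwise similarity
    self_mask = torch.eye(len(labels), dtype=torch.bool, device=sim.device)
    sim = sim.masked_fill(self_mask, float("-inf"))                # ignore self-pairs

    log_prob = sim - torch.logsumexp(sim, dim=1, keepdim=True)     # row-wise log-softmax
    pos_mask = (labels.unsqueeze(0) == labels.unsqueeze(1)) & ~self_mask

    pos_counts = pos_mask.sum(dim=1)
    has_pos = pos_counts > 0                                       # anchors with >= 1 positive
    pos_log_prob = log_prob.masked_fill(~pos_mask, 0.0).sum(dim=1)
    return (-pos_log_prob[has_pos] / pos_counts[has_pos]).mean()

# Toy usage: 8 random unit-norm embeddings drawn from 3 classes
emb = F.normalize(torch.randn(8, 256), dim=-1)
lab = torch.randint(0, 3, (8,))
print(supervised_contrastive_loss(emb, lab))
```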
|
|
|
|
|
## Evaluation Results |
|
|
|
|
|
### Classification Performance |
|
|
|
|
|
| Metric | Value |
|--------|-------|
| Linear Probe Accuracy | 89.7% |
| F1 (Macro) | 0.787 |
| F1 (Weighted) | 0.886 |
|
|
|
|
|
### Embedding Quality |
|
|
|
|
|
| Metric | Value |
|--------|-------|
| Silhouette Score | 0.55 |
| Separation Ratio | 3.60 |
| Intra-class Distance | 12.6 |
| Inter-class Distance | 45.2 |
|
|
|
|
|
### MLM Capability (Retained) |
|
|
|
|
|
| Metric | Value |
|--------|-------|
| Fill-mask Top-1 | 41.8% |
| Perplexity | 1.32 |
|
|
|
|
|
This model retains moderate fill-mask capability, making it suitable for hybrid tasks that need both classification and byte prediction. |
|
|
|
|
|
## Supported MIME Types (106 Classes) |
|
|
|
|
|
The model classifies files into 106 MIME types across these categories: |
|
|
|
|
|
| Category | Count | Examples |
|----------|-------|----------|
| application/ | 41 | PDF, ZIP, GZIP, Office docs, executables |
| text/ | 24 | Python, C, Java, HTML, XML, shell scripts |
| image/ | 18 | PNG, JPEG, GIF, WebP, TIFF, PSD |
| video/ | 9 | MP4, WebM, MKV, AVI, MOV |
| audio/ | 8 | MP3, FLAC, WAV, OGG, M4A |
| font/ | 3 | SFNT, WOFF, WOFF2 |
| other | 3 | biosig/atf, inode/x-empty, message/rfc822 |
|
|
|
|
|
<details> |
|
|
<summary>Click to expand full MIME type list</summary> |
|
|
|
|
|
**application/** (41 types): |
|
|
- application/SIMH-tape-data, application/encrypted, application/gzip |
|
|
- application/javascript, application/json, application/msword |
|
|
- application/mxf, application/octet-stream, application/pdf |
|
|
- application/pgp-keys, application/postscript |
|
|
- application/vnd.microsoft.portable-executable, application/vnd.ms-excel |
|
|
- application/vnd.ms-opentype, application/vnd.ms-powerpoint |
|
|
- application/vnd.oasis.opendocument.spreadsheet |
|
|
- application/vnd.openxmlformats-officedocument.* (3 variants) |
|
|
- application/vnd.rn-realmedia, application/vnd.wordperfect |
|
|
- application/wasm, application/x-7z-compressed, application/x-archive |
|
|
- application/x-bzip2, application/x-coff, application/x-dbf |
|
|
- application/x-dosexec, application/x-executable |
|
|
- application/x-gettext-translation, application/x-ms-ne-executable |
|
|
- application/x-ndjson, application/x-object, application/x-ole-storage |
|
|
- application/x-sharedlib, application/x-shockwave-flash |
|
|
- application/x-tar, application/x-wine-extension-ini |
|
|
- application/zip, application/zlib, application/zstd |
|
|
|
|
|
**text/** (24 types): |
|
|
- text/csv, text/html, text/plain, text/rtf, text/troff |
|
|
- text/x-Algol68, text/x-asm, text/x-c, text/x-c++ |
|
|
- text/x-diff, text/x-file, text/x-fortran, text/x-java |
|
|
- text/x-m4, text/x-makefile, text/x-msdos-batch, text/x-perl |
|
|
- text/x-php, text/x-po, text/x-ruby, text/x-script.python |
|
|
- text/x-shellscript, text/x-tex, text/xml |
|
|
|
|
|
**image/** (18 types): |
|
|
- image/bmp, image/fits, image/gif, image/heif, image/jpeg |
|
|
- image/png, image/svg+xml, image/tiff, image/vnd.adobe.photoshop |
|
|
- image/vnd.microsoft.icon, image/webp, image/x-eps, image/x-exr |
|
|
- image/x-jp2-codestream, image/x-portable-bitmap |
|
|
- image/x-portable-greymap, image/x-tga, image/x-xpixmap |
|
|
|
|
|
**video/** (9 types): |
|
|
- video/3gpp, video/mp4, video/mpeg, video/quicktime, video/webm |
|
|
- video/x-ivf, video/x-matroska, video/x-ms-asf, video/x-msvideo |
|
|
|
|
|
**audio/** (8 types): |
|
|
- audio/amr, audio/flac, audio/mpeg, audio/ogg, audio/x-ape |
|
|
- audio/x-hx-aac-adts, audio/x-m4a, audio/x-wav |
|
|
|
|
|
**font/** (3 types): |
|
|
- font/sfnt, font/woff, font/woff2 |
|
|
|
|
|
**other** (3 types): |
|
|
- biosig/atf, inode/x-empty, message/rfc822 |
|
|
|
|
|
</details> |
|
|
|
|
|
## How to Use |
|
|
|
|
|
```python
from transformers import AutoModelForSequenceClassification, AutoTokenizer
import torch

model = AutoModelForSequenceClassification.from_pretrained(
    "mjbommar/magic-bert-50m-classification", trust_remote_code=True
)
tokenizer = AutoTokenizer.from_pretrained("mjbommar/magic-bert-50m-classification")

model.eval()

# Classify a file
with open("example.pdf", "rb") as f:
    data = f.read(512)

# Decode bytes to string using latin-1 (preserves all byte values 0-255)
text = data.decode("latin-1")
inputs = tokenizer(text, return_tensors="pt", truncation=True, max_length=512)

with torch.no_grad():
    outputs = model(**inputs)
    predicted_id = outputs.logits.argmax(-1).item()
    confidence = torch.softmax(outputs.logits, dim=-1).max().item()

print(f"Predicted class: {predicted_id}")
print(f"Confidence: {confidence:.2%}")
```
|
|
|
|
|
### Getting Embeddings for Similarity Search |
|
|
|
|
|
```python
# Get normalized embeddings (256-dim, L2-normalized)
with torch.no_grad():
    embeddings = model.get_embeddings(inputs["input_ids"], inputs["attention_mask"])
    # embeddings shape: [batch_size, 256]

# Compute cosine similarity between files; embeddings1 and embeddings2 are
# embedding batches produced as above for two sets of files
similarity = torch.mm(embeddings1, embeddings2.T)
```
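
Continuing from the snippets above (reusing the already-loaded `model` and `tokenizer`), a minimal ranking over a small collection might look like the following sketch; `embed_file` and the file paths are placeholders, not part of the published API.

```python
import torch

def embed_file(path: str) -> torch.Tensor:
    """Return the 256-dim L2-normalized embedding of the first 512 bytes of a file."""
    with open(path, "rb") as f:
        text = f.read(512).decode("latin-1")
    enc = tokenizer(text, return_tensors="pt", truncation=True, max_length=512)
    with torch.no_grad():
        return model.get_embeddings(enc["input_ids"], enc["attention_mask"])[0]

corpus_paths = ["a.bin", "b.bin", "c.bin"]                    # placeholder file paths
corpus = torch.stack([embed_file(p) for p in corpus_paths])   # [N, 256]
query = embed_file("query.bin")

# Dot product == cosine similarity because the embeddings are L2-normalized
scores = corpus @ query
for score, path in sorted(zip(scores.tolist(), corpus_paths), reverse=True):
    print(f"{score:.3f}  {path}")
```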
|
|
|
|
|
### Loading MIME Type Labels |
|
|
|
|
|
```python
from huggingface_hub import hf_hub_download
import json

mime_path = hf_hub_download("mjbommar/magic-bert-50m-classification", "mime_type_mapping.json")
with open(mime_path) as f:
    id_to_mime = {int(k): v for k, v in json.load(f).items()}

print(f"Predicted MIME type: {id_to_mime[predicted_id]}")
```
|
|
|
|
|
## Limitations |
|
|
|
|
|
1. **Position bias:** Best performance when content starts at position 0. Accuracy degrades for content at higher offsets. |
|
|
|
|
|
2. **Class imbalance:** Performance varies by file type. Common formats (PDF, PNG, ZIP) perform better than rare formats. |
|
|
|
|
|
3. **Ambiguous types:** Some file types share similar structure (e.g., ZIP-based formats like DOCX, XLSX, JAR), which can cause confusion. |
|
|
|
|
|
4. **Encrypted content:** Cannot classify encrypted or compressed content that lacks recognizable patterns. |
|
|
|
|
|
## Architecture: Absolute vs Rotary Position Embeddings |
|
|
|
|
|
This model uses **absolute position embeddings**, where each position (0-511) has a learned embedding vector. An alternative is **Rotary Position Embeddings (RoPE)**, used by the RoFormer variant. |
|
|
|
|
|
| Metric | Magic-BERT (this) | RoFormer |
|--------|-------------------|----------|
| Classification Accuracy | 89.7% | **93.7%** |
| Silhouette Score | 0.55 | **0.663** |
| F1 (Weighted) | 0.886 | **0.933** |
| Fill-mask Retention | **41.8%** | 14.5% |
| Parameters | 59M | **42M** |
|
|
|
|
|
Magic-BERT retains better fill-mask capability after classification fine-tuning, making it suitable when both tasks are needed. For pure classification, consider the RoFormer variant. |
|
|
|
|
|
## Model Selection Guide |
|
|
|
|
|
| Use Case | Recommended Model | Reason |
|----------|-------------------|--------|
| Classification + fill-mask | **This model** | Retains 41.8% fill-mask capability |
| Fill-mask / byte prediction | magic-bert-50m-mlm | Best perplexity (1.05) |
| Research baseline | magic-bert-50m-mlm | Established BERT architecture |
| **Production classification** | **magic-bert-50m-roformer-classification** | Highest accuracy (93.7%), efficient (42M params) |
|
|
|
|
|
## Related Models |
|
|
|
|
|
- **[magic-bert-50m-mlm](https://huggingface.co/mjbommar/magic-bert-50m-mlm)**: Base model before classification fine-tuning |
|
|
- **[magic-bert-50m-roformer-mlm](https://huggingface.co/mjbommar/magic-bert-50m-roformer-mlm)**: RoFormer variant with rotary position embeddings |
|
|
- **[magic-bert-50m-roformer-classification](https://huggingface.co/mjbommar/magic-bert-50m-roformer-classification)**: RoFormer variant with higher classification accuracy (93.7%, recommended for production) |
|
|
|
|
|
## Related Work |
|
|
|
|
|
This model builds on the Binary BPE tokenization approach: |
|
|
|
|
|
- **Binary BPE Paper**: [Bommarito (2025)](https://arxiv.org/abs/2511.17573) introduced byte-level BPE tokenization for binary analysis, demonstrating 2-3x compression over raw bytes for executable content. |
|
|
- **Binary BPE Tokenizers**: Pre-trained tokenizers for executables are available at [mjbommar/binary-tokenizer-001-64k](https://huggingface.co/mjbommar/binary-tokenizer-001-64k). |
|
|
|
|
|
**Key difference**: The original Binary BPE work focused on executable binaries (ELF, PE, Mach-O). Magic-BERT extends this to general file type understanding across 106 diverse formats, using a tokenizer trained on the broader dataset. |
|
|
|
|
|
## Citation |
|
|
|
|
|
A paper describing Magic-BERT, the training methodology, and the dataset is forthcoming. |
|
|
|
|
|
```bibtex
@article{bommarito2025binarybpe,
  title={Binary BPE: A Family of Cross-Platform Tokenizers for Binary Analysis},
  author={Bommarito, Michael J., II},
  journal={arXiv preprint arXiv:2511.17573},
  year={2025}
}
```
|
|
|