Add Key Highlights, Model List and Experimental Results sections to all Octen model READMEs
- README.md (+70, -0)
- README.md.backup (+123, -0)
README.md (CHANGED)

@@ -20,6 +20,76 @@ base_model: Qwen/Qwen3-Embedding-0.6B
Octen-Embedding-0.6B is a text embedding model designed for semantic search and retrieval tasks. This model is fine-tuned from [Qwen/Qwen3-Embedding-0.6B](https://huggingface.co/Qwen/Qwen3-Embedding-0.6B) and supports multiple languages, providing high-quality embeddings for various applications.

## Key Highlights

### 🥇 RTEB Leaderboard Champion (as of January 12, 2026)

- **Octen-Embedding-8B ranks #1 on the [RTEB Leaderboard](https://huggingface.co/spaces/mteb/leaderboard)** with a Mean (Task) score of **0.8045**
- Strong performance on both the Public (0.7953) and Private (0.8157) datasets
- Demonstrates genuine generalization rather than overfitting to public benchmarks

### Industry-Oriented Vertical Domain Expertise

- **Legal**: Legal document retrieval
- **Finance**: Financial reports, Q&A, and personal finance content
- **Healthcare**: Medical Q&A, clinical dialogues, and health consultations
- **Code**: Programming problems, code search, and SQL queries

### Ultra-Long Context Support

- Supports context lengths of up to **32,768 tokens**
- Suitable for processing long documents in legal, healthcare, and other domains
- High-dimensional embedding space for rich semantic representation

### Multilingual Capability

- Supports **100+ languages**, including many programming languages
- Strong multilingual, cross-lingual, and code retrieval capabilities

---

## Open Source Model List

| Model Type | Model | Size | Max Tokens | Embedding Dimensions | Status |
|------------|-------|------|------------|----------------------|--------|
| Text Embedding | [Octen-Embedding-0.6B](https://huggingface.co/Octen/Octen-Embedding-0.6B) | 0.6B | 32,768 | 1024 | ✅ Available |
| Text Embedding | [Octen-Embedding-4B](https://huggingface.co/Octen/Octen-Embedding-4B) | 4.0B | 32,768 | 2560 | ✅ Available |
| Text Embedding | [Octen-Embedding-8B](https://huggingface.co/Octen/Octen-Embedding-8B) | 7.6B | 32,768 | 4096 | ✅ Available |

**Model Family Design**:

- **Octen-Embedding-8B**: Best performance (RTEB #1); for high-precision retrieval
- **Octen-Embedding-4B**: Best in the 4B class; balances performance and efficiency
- **Octen-Embedding-0.6B**: Lightweight; suitable for edge devices and resource-constrained environments

---

## Experimental Results

### RTEB Leaderboard (Overall Performance)

| Model | Embedding Dim | Max Tokens | Mean (Public) | Mean (Private) | Mean (Task) |
|-------|---------------|------------|---------------|----------------|-------------|
| **Octen-Embedding-8B** | **4096** | **32768** | **0.7953** | **0.8157** | **0.8045** |
| voyage-3-large | 1024 | 32000 | 0.7434 | 0.8277 | 0.7812 |
| gemini-embedding-001 | 3072 | 2048 | 0.7218 | 0.8075 | 0.7602 |
| **Octen-Embedding-4B** | **2560** | **32768** | **0.7747** | **0.7942** | **0.7834** |
| MoD-Embedding | 2560 | 32768 | 0.7642 | 0.7900 | 0.7758 |
| Qwen3-Embedding-8B | 4096 | 32768 | 0.7310 | 0.7838 | 0.7547 |
| **Octen-Embedding-0.6B** | **1024** | **32768** | **0.7241** | **-** | **-** |
| voyage-3.5 | 1024 | 32000 | 0.7139 | 0.8102 | 0.7571 |
| Cohere-embed-v4.0 | 1536 | 128000 | 0.6534 | 0.7943 | 0.7166 |
| jina-embeddings-v4 | 2048 | 32768 | 0.6652 | 0.7664 | 0.7105 |
| GritLM-7B | 4096 | 32768 | 0.6187 | 0.7385 | 0.6724 |
| text-embedding-3-large | 3072 | 8191 | 0.6110 | 0.7130 | 0.6567 |
| NV-Embed-v2 | 4096 | 32768 | 0.5805 | 0.6691 | 0.6203 |
| snowflake-arctic-embed-l-v2.0 | 1024 | 8192 | 0.5395 | 0.7079 | 0.6150 |
| multilingual-e5-large-instruct | 1024 | 514 | 0.5478 | 0.6859 | 0.6097 |
| e5-mistral-7b-instruct | 4096 | 32768 | 0.5090 | 0.7091 | 0.5987 |
| gte-multilingual-base | 768 | 8192 | 0.5291 | 0.6697 | 0.5921 |
| bge-m3 | 1024 | 8194 | 0.5216 | 0.6726 | 0.5893 |
| text-embedding-3-small | 1536 | 8191 | 0.5260 | 0.6630 | 0.5874 |
| Qwen3-Embedding-4B | 2560 | 32768 | - | 0.7711 | - |
| Qwen3-Embedding-0.6B | 1024 | 32768 | - | 0.7117 | - |

---

## Model Details

- **Base Model**: [Qwen/Qwen3-Embedding-0.6B](https://huggingface.co/Qwen/Qwen3-Embedding-0.6B)
README.md.backup (ADDED)

@@ -0,0 +1,123 @@
---
language:
- en
- zh
- multilingual
license: apache-2.0
library_name: sentence-transformers
tags:
- sentence-transformers
- sentence-similarity
- feature-extraction
- embedding
- text-embedding
- retrieval
pipeline_tag: sentence-similarity
base_model: Qwen/Qwen3-Embedding-0.6B
---

# Octen-Embedding-0.6B

Octen-Embedding-0.6B is a text embedding model designed for semantic search and retrieval tasks. This model is fine-tuned from [Qwen/Qwen3-Embedding-0.6B](https://huggingface.co/Qwen/Qwen3-Embedding-0.6B) and supports multiple languages, providing high-quality embeddings for various applications.

## Model Details

- **Base Model**: [Qwen/Qwen3-Embedding-0.6B](https://huggingface.co/Qwen/Qwen3-Embedding-0.6B)
- **Model Size**: 0.6B parameters
- **Max Sequence Length**: 32,768 tokens
- **Embedding Dimension**: 1024
- **Languages**: English, Chinese, and multilingual support
- **Training Method**: LoRA fine-tuning
## Usage

### Using Sentence Transformers

```python
from sentence_transformers import SentenceTransformer
from sentence_transformers.util import cos_sim

model = SentenceTransformer("Octen/Octen-Embedding-0.6B")

# Encode sentences
sentences = [
    "This is an example sentence",
    "Each sentence is converted to a vector",
]

embeddings = model.encode(sentences)
print(embeddings.shape)
# Output: (2, 1024)

# Compute cosine similarity between the two embeddings
similarity = cos_sim(embeddings[0], embeddings[1])
print(f"Similarity: {similarity.item():.4f}")
```
### Using Transformers

```python
import torch
import torch.nn.functional as F
from transformers import AutoModel, AutoTokenizer

# Left padding so the last position always holds the final real token
tokenizer = AutoTokenizer.from_pretrained("Octen/Octen-Embedding-0.6B", padding_side="left")
model = AutoModel.from_pretrained("Octen/Octen-Embedding-0.6B")
model.eval()

def encode(texts):
    inputs = tokenizer(texts, padding=True, truncation=True,
                       max_length=8192, return_tensors="pt")
    with torch.no_grad():
        outputs = model(**inputs)
        # Use the last token's hidden state as the sentence embedding
        embeddings = outputs.last_hidden_state[:, -1, :]
        # L2-normalize so dot products equal cosine similarity
        embeddings = F.normalize(embeddings, p=2, dim=1)
    return embeddings

# Example usage
texts = ["Hello world", "你好世界"]
embeddings = encode(texts)
similarity = torch.matmul(embeddings[0], embeddings[1])
print(f"Similarity: {similarity.item():.4f}")
```
## Recommended Use Cases

- Semantic search and information retrieval
- Document similarity and clustering
- Question answering
- Cross-lingual retrieval
- Text classification with embeddings
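The retrieval step behind these use cases reduces to ranking documents by cosine similarity against a query embedding. A minimal sketch of that ranking, using stand-in 4-dimensional vectors so it runs without downloading model weights (in practice the inputs would be 1024-dimensional `model.encode(...)` outputs):

```python
import numpy as np

def top_k(query_emb, doc_embs, k=2):
    """Rank documents by cosine similarity to the query embedding."""
    q = query_emb / np.linalg.norm(query_emb)
    d = doc_embs / np.linalg.norm(doc_embs, axis=1, keepdims=True)
    scores = d @ q
    order = np.argsort(-scores)[:k]
    return [(int(i), float(scores[i])) for i in order]

# Stand-in embeddings for illustration only
docs = np.array([[0.1, 0.9, 0.0, 0.0],
                 [0.8, 0.1, 0.1, 0.0],
                 [0.0, 0.1, 0.9, 0.1]])
query = np.array([0.9, 0.0, 0.1, 0.0])

print(top_k(query, docs))  # best match (document 1) first
```

Normalizing once up front means the ranking is a single matrix-vector product, which scales well to large document collections.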
## Limitations

- Performance may vary across different domains and languages
- Very long documents (>32K tokens) require truncation
- Optimized for retrieval tasks, not for text generation
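A common workaround for documents beyond the context window is to split the token sequence into overlapping chunks, embed each chunk separately, and aggregate scores at query time. A minimal chunking sketch; the window and overlap sizes are illustrative assumptions, not values prescribed by the model:

```python
def chunk_tokens(tokens, max_len=32768, overlap=256):
    """Split a token sequence into windows of at most max_len with overlap."""
    if max_len <= overlap:
        raise ValueError("max_len must exceed overlap")
    step = max_len - overlap
    chunks = []
    for start in range(0, len(tokens), step):
        chunks.append(tokens[start:start + max_len])
        if start + max_len >= len(tokens):
            break
    return chunks

# Small example: 10 tokens, window 4, overlap 1
print(chunk_tokens(list(range(10)), max_len=4, overlap=1))
# → [[0, 1, 2, 3], [3, 4, 5, 6], [6, 7, 8, 9]]
```

Each chunk's embedding can then be scored against the query, with the document's score taken as, for example, the maximum over its chunks.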
## License

This model is licensed under the [Apache License 2.0](https://www.apache.org/licenses/LICENSE-2.0).

This model is derived from [Qwen/Qwen3-Embedding-0.6B](https://huggingface.co/Qwen/Qwen3-Embedding-0.6B), which is also licensed under the Apache License 2.0.

## Paper

For more details, please refer to our blog post: [Octen-Embedding: Reproducible 1st Place on RTEB](https://octen-team.github.io/octen_blog/posts/octen-rteb-first-place/)

## Citation

If you find our work helpful, please consider citing:

```bibtex
@misc{octen2025rteb,
  title={Octen-Embedding: Reproducible 1st Place on RTEB},
  author={Octen Team},
  year={2025},
  url={https://octen-team.github.io/octen_blog/posts/octen-rteb-first-place/}
}
```