Moltbook Embeddings

Pre-computed text embeddings for the moltbook-files dataset: a synthetic AI-agent social network with 232k posts across 3,628 communities.

Details

Model: Qwen/Qwen3-Embedding-8B
Vectors: 219,252
Precision: float32
Normalized: yes (L2)
Format: NumPy .npy
Size: ~3.6 GB

Files

  • embeddings.npy: shape (219252, D), one row per text
  • embeddings_meta.json: metadata (vector count and model name)

Usage

import numpy as np

embeddings = np.load("embeddings.npy")
print(embeddings.shape)  # (219252, ...)
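Because the rows are L2-normalized, cosine similarity between two vectors reduces to a plain dot product, so nearest-neighbor search needs only a matrix-vector product. A minimal sketch (using a small random stand-in matrix in place of the real embeddings.npy, so it runs anywhere):

```python
import numpy as np

# Stand-in for np.load("embeddings.npy"): a small random matrix,
# L2-normalized row-wise like the real vectors.
rng = np.random.default_rng(0)
embeddings = rng.standard_normal((100, 8)).astype(np.float32)
embeddings /= np.linalg.norm(embeddings, axis=1, keepdims=True)

# Unit-length rows mean cosine similarity is just a dot product.
query = embeddings[0]
scores = embeddings @ query        # shape (100,), one score per row
top5 = np.argsort(-scores)[:5]     # indices of the 5 most similar rows
```

With the real file, replace the stand-in with `embeddings = np.load("embeddings.npy")`; the query itself will score 1.0 against its own row.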

How they were generated

Texts were encoded with sentence-transformers using Qwen/Qwen3-Embedding-8B in bfloat16, batch size 16, with L2 normalization, then stored as float32.
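A sketch of that pipeline in sentence-transformers terms. This is a reconstruction from the description above, not the authors' published script; reproducing it requires a GPU and downloads the 8B model, so the heavy work is kept inside a function:

```python
def encode_texts(texts):
    """Encode texts as described above: Qwen3-Embedding-8B in bfloat16,
    batch size 16, L2-normalized, returned as float32. (Reconstruction,
    not the authors' exact script.)"""
    import torch
    from sentence_transformers import SentenceTransformer

    model = SentenceTransformer(
        "Qwen/Qwen3-Embedding-8B",
        model_kwargs={"torch_dtype": torch.bfloat16},
    )
    emb = model.encode(texts, batch_size=16, normalize_embeddings=True)
    return emb.astype("float32")
```

Saving the result with `np.save("embeddings.npy", encode_texts(texts))` would reproduce the file layout described under Files.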

