---
dataset_info:
  features:
    - name: article
      dtype: string
    - name: embedding
      sequence: float32
  splits:
    - name: train
      num_bytes: 6526015558
      num_examples: 614664
  download_size: 4974256567
  dataset_size: 6526015558
configs:
  - config_name: default
    data_files:
      - split: train
        path: data/train-*
license: other
task_categories:
  - sentence-similarity
language:
  - en
pretty_name: CCNEWS with Embeddings (dim=1024)
tags:
  - embeddings
  - sentence-transformers
  - similarity-search
  - parquet
  - ccnews
---

# Dataset Card for ccnews_all-roberta-large-v1_dim1024

This dataset contains English news articles from the CCNEWS dataset along with their corresponding 1024-dimensional embeddings, precomputed using the `sentence-transformers/all-roberta-large-v1` model.

Note: If you are only interested in the raw embeddings (without the associated text), a compact version is available in the related repository: `ScarlettMagdaleno/ccnews-embeddings-dim1024`.

## Dataset Details

### Dataset Description

Each entry in this dataset is stored in Apache Parquet format, split into multiple files for scalability. Each record contains two fields:

- `article`: the original news article text.
- `embedding`: a 1024-dimensional list representing the output of the `sentence-transformers/all-roberta-large-v1` encoder.

Additional details:

- **Curated by:** Scarlett Magdaleno
- **Language(s) (NLP):** English
- **License:** Other (the dataset is a derivative of the CCNEWS dataset, which may carry its own license)
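As a quick schema check, the sketch below converts a record's `embedding` field into a NumPy array. The record here is a hypothetical stand-in with placeholder values, matching the two fields described above:

```python
import numpy as np

# Hypothetical record mirroring the dataset schema (placeholder values,
# not real model outputs).
record = {
    "article": "U.S. President signs new environmental policy...",
    "embedding": [0.0] * 1024,
}

# Convert the embedding to a float32 NumPy array for downstream use.
vec = np.asarray(record["embedding"], dtype=np.float32)
print(vec.shape)  # (1024,)
```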

## Dataset Creation

### Curation Rationale

The dataset was created to enable fast and reproducible similarity search experiments, as well as to provide a resource where the relationship between the raw text and its embedding is explicitly retained.

### Source Data

#### Data Collection and Processing

- Texts were taken from the CCNEWS dataset available on Hugging Face.
- Each article was passed through the `all-roberta-large-v1` encoder from the sentence-transformers library.
- The resulting embeddings were stored alongside the articles in Parquet format for efficient disk usage and interoperability.

#### Who are the source data producers?

The articles were written and published by online news outlets; they were collected through the Common Crawl News (CC-News) crawl, from which the CCNEWS dataset is derived.

## Uses

### Direct Use

This dataset is suitable for:

- Training and evaluating similarity search models.
- Experiments involving semantic representation of news content.
- Weakly supervised learning using embeddings as targets or features.
- Benchmarks for contrastive or clustering approaches.
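For instance, a minimal cosine-similarity search over precomputed embeddings can be written as below. The vectors here are random stand-ins for real embeddings:

```python
import numpy as np

def top_k_similar(query, corpus, k=3):
    """Return indices of the k corpus vectors most cosine-similar to query."""
    q = query / np.linalg.norm(query)
    c = corpus / np.linalg.norm(corpus, axis=1, keepdims=True)
    scores = c @ q                 # cosine similarity against every row
    return np.argsort(-scores)[:k]

rng = np.random.default_rng(0)
corpus = rng.normal(size=(100, 1024)).astype(np.float32)  # stand-in embeddings
# A query close to corpus row 42, plus a small perturbation.
query = corpus[42] + 0.01 * rng.normal(size=1024).astype(np.float32)

print(top_k_similar(query, corpus))  # row 42 ranks first
```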

### Out-of-Scope Use

- Not suitable for generative modeling tasks (the dataset contains no labels, dialogues, or instructions).
- Does not include metadata such as timestamps, URLs, or categories.

## Dataset Structure

Each Parquet file contains a table with two columns:

- `article` (string): the raw article text.
- `embedding` (list[float32]): a list of 1024 float values representing the semantic embedding.

### Format

- **Storage format:** Apache Parquet.
- **Total records:** 614,664 articles (the same as CCNEWS).
- **Split:** a single `train` split, divided into multiple Parquet files for better loading performance.

### Example Record

```python
{
  "article": "U.S. President signs new environmental policy...",
  "embedding": [0.023, -0.117, ..., 0.098]  # 1024 values
}
```