---
dataset_info:
  features:
    - name: text
      dtype: string
    - name: id
      dtype: string
    - name: metadata
      struct:
        - name: file_path
          dtype: string
    - name: input_ids
      list: int32
    - name: attention_mask
      list: int8
  splits:
    - name: train
      num_bytes: 239231368
      num_examples: 45736
  download_size: 125597135
  dataset_size: 239231368
configs:
  - config_name: default
    data_files:
      - split: train
        path: data/train-*
license: odc-by
task_categories:
  - text-generation
language:
  - en
tags:
  - language-modeling
  - causal-lm
  - llm
size_categories:
  - 10K<n<100K
---

# dolma-small

This dataset is a sample of Dolma v1.7, drawn from its 3B-token version, dolma-v1_7-3B. Our sample contains slightly more than 20M tokens (45,736 example texts).

As a pure subsample, it retains the ODC-BY license of the original dataset.

## Dataset Description

The columns "id" and "metadata" are copied from the larger dataset to facilitate tracing the source of a particular example.

The columns "input_ids" and "attention_mask" were created with the OLMo tokenizer (a modified version of the GPT-NeoX-20B tokenizer, with some added special tokens). The first token is always "<|endoftext|>".

The original "text" strings are also kept, so users can use another tokenizer if they prefer.
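A minimal sketch of that workflow: ignore the precomputed "input_ids" and tokenize the "text" column yourself. The toy whitespace tokenizer and the example record below are invented for illustration; substitute whatever real tokenizer you prefer.

```python
def toy_tokenize(text, vocab):
    """Map whitespace-separated words to ids, growing the vocab on the fly.

    This is a stand-in for a real tokenizer, not part of the dataset."""
    ids = []
    for word in text.split():
        if word not in vocab:
            vocab[word] = len(vocab)
        ids.append(vocab[word])
    return ids

# Hypothetical record mirroring the dataset's schema.
example = {
    "text": "the quick brown fox",
    "id": "example-0",
    "metadata": {"file_path": "some/shard"},
}

vocab = {}
input_ids = toy_tokenize(example["text"], vocab)
attention_mask = [1] * len(input_ids)  # no padding, so all ones
# input_ids -> [0, 1, 2, 3]
```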

Every example is truncated to at most 1024 tokens (the end is cut off). This affects the "input_ids" and "attention_mask" columns, but not the "text" column; 6,791 examples are affected.
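The truncation rule can be sketched as follows. The 1024-token limit is the one stated above; the over-long example is invented, and this is an illustration of the rule, not the preprocessing code used to build the dataset.

```python
MAX_LEN = 1024  # truncation limit used by this dataset

def truncate_example(input_ids, attention_mask, max_len=MAX_LEN):
    """Cut off the end of the token columns; the "text" column stays full."""
    return input_ids[:max_len], attention_mask[:max_len]

# An invented over-long example: 1500 copies of token id 7.
ids = [7] * 1500
mask = [1] * 1500
ids, mask = truncate_example(ids, mask)
# len(ids) == len(mask) == 1024 afterwards
```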

## Curation Rationale

This dataset was primarily created for our project GLUScope, which visualizes strong neuron activations on precisely this dataset. We wanted the dataset to be as lightweight as possible while still providing meaningful information on neuron activations.

## Uses

The primary intended use is model analysis work like ours. It is likely to work especially well for OLMo models, since they were trained on Dolma.

However, as with any text dataset, there are many possible use cases: for example, training very small language models, or running controlled experiments with continued pretraining.

## Citation

BibTeX:

[More Information Needed]

## Contact

[More Information Needed]