---
license: apache-2.0
---
# 🧮 Taxonomy Math w/ FM
A high-quality mathematics dataset curated from web data using taxonomy-based filtering, containing 34 billion tokens of mathematical content.
## 🎯 Dataset Overview
This dataset is part of the EssentialWeb project, which introduces a new paradigm for dataset curation using expressive metadata and simple semantic filters. Unlike traditional math datasets that require complex domain-specific pipelines, our approach leverages a 12-category taxonomy to efficiently identify and extract high-quality mathematical content.
🔬 Taxonomy Math w/ FM (34B tokens): Documents labeled as 51 - Mathematics in our taxonomy, with all 116M recalled documents then scored by the FineMath classifier and filtered to the top 34B tokens.
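The dataset can be explored with the Hugging Face `datasets` library; a minimal sketch, assuming streaming access (the repository id below is a placeholder for this dataset's actual id):

```python
from datasets import load_dataset

# Placeholder repository id -- substitute this dataset's actual Hugging Face id.
ds = load_dataset("EssentialAI/taxonomy-math-w-fm", split="train", streaming=True)

# Peek at a few documents without downloading the full 34B-token corpus.
for doc in ds.take(3):
    print(doc["id"], doc["text"][:200])
```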
## 🏆 Performance
Our taxonomy-based approach achieves competitive results with significantly less curation effort:
| Dataset | GSM8K | MATH | Curation Complexity |
|---|---|---|---|
| FineMath 3+ | 26.4% | 11.7% | Complex domain pipeline |
| OpenWebMath | 14.6% | 9.3% | Complex domain pipeline |
| MegaMath Web | 9.8% | 7.9% | Complex domain pipeline |
| Taxonomy Top Math | 21.3% | 11.0% | Simple semantic filter |
| Taxonomy Math w/ FM | 22.4% | 11.5% | + FineMath classifier |
Results show our datasets perform within roughly 15% (relative) of the state of the art on these benchmarks while requiring minimal domain-specific tuning.
## ✨ Key Features
- 🎯 Direct Distribution Targeting: Leverage existing taxonomy labels to target math content from web-scale data without training custom high-recall classifiers
- 🚀 Rapid Curation: Skip the expensive classifier training phase and go straight to content selection
- 💰 Cost Effective: Avoid the need to train high-recall domain-specific classifiers for content discovery
- 🔍 Two-Stage Approach: Use taxonomy for recall, then apply existing quality classifiers for selection
- 🌐 Web-Scale: Access to math content identified across 23.6B web documents
## 🛠️ Curation Method
Our approach simplifies math dataset creation:
- Traditional Method: Train high-recall classifiers → Run on billions of documents
- Our Method: Query taxonomy metadata for `51 - Mathematics` → Apply FineMath classifier to all recalled documents → Select top-scoring content
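A minimal PySpark sketch of these two stages, assuming string-typed taxonomy codes and a hypothetical `finemath_score` column holding the classifier output (the production pipeline's column names may differ):

```python
# Stage 1 (recall): select every document the taxonomy labels as mathematics.
math_docs = df.filter(
    df["eai_taxonomy.free_decimal_correspondence.primary.code"] == "51"
)

# Stage 2 (precision): rank the recalled documents by an existing quality
# classifier (hypothetical `finemath_score` column) and keep top-scoring
# content until the 34B-token budget is filled.
top_math = math_docs.orderBy(math_docs["finemath_score"].desc())
```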
## Dataset Schema Documentation
### Overview
This dataset contains web-crawled text data with comprehensive metadata, quality signals, and taxonomic classifications. Each record represents a document extracted from web archives with detailed provenance tracking and quality assessment metrics.
### EAI Taxonomy Classification
Comprehensive hierarchical classification system with primary and secondary labels - the most important feature of this dataset:
#### Free Decimal Correspondence
Dewey Decimal-inspired classification with 3-level hierarchical labels:
| Component | Description | Path |
|---|---|---|
| Primary Code | Main classification code | eai_taxonomy.free_decimal_correspondence.primary.code |
| Primary Level 1 | Top-level category | eai_taxonomy.free_decimal_correspondence.primary.labels.level_1 |
| Primary Level 2 | Mid-level category | eai_taxonomy.free_decimal_correspondence.primary.labels.level_2 |
| Primary Level 3 | Specific category | eai_taxonomy.free_decimal_correspondence.primary.labels.level_3 |
| Secondary Code | Alternative classification code | eai_taxonomy.free_decimal_correspondence.secondary.code |
| Secondary Level 1 | Alternative top-level category | eai_taxonomy.free_decimal_correspondence.secondary.labels.level_1 |
| Secondary Level 2 | Alternative mid-level category | eai_taxonomy.free_decimal_correspondence.secondary.labels.level_2 |
| Secondary Level 3 | Alternative specific category | eai_taxonomy.free_decimal_correspondence.secondary.labels.level_3 |
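For example, since every document in this dataset carries the mathematics label, the FDC fields can be inspected directly (assuming string-typed codes, matching the usage examples at the end of this card):

```python
# Sanity-check the label that defines this dataset and inspect its hierarchy.
df.filter(df["eai_taxonomy.free_decimal_correspondence.primary.code"] == "51").select(
    df["eai_taxonomy.free_decimal_correspondence.primary.labels.level_1"],
    df["eai_taxonomy.free_decimal_correspondence.primary.labels.level_2"],
    df["eai_taxonomy.free_decimal_correspondence.primary.labels.level_3"],
).show(5, truncate=False)
```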
#### Bloom's Taxonomy Integration
##### Cognitive Process
Learning and thinking skill levels:
| Component | Description | Path |
|---|---|---|
| Primary Code | Main cognitive process code | eai_taxonomy.bloom_cognitive_process.primary.code |
| Primary Label | Main cognitive process label | eai_taxonomy.bloom_cognitive_process.primary.label |
| Secondary Code | Alternative cognitive process code | eai_taxonomy.bloom_cognitive_process.secondary.code |
| Secondary Label | Alternative cognitive process label | eai_taxonomy.bloom_cognitive_process.secondary.label |
Possible Values:
| Code | Label |
|---|---|
| -1 | Abstain |
| 1 | Remember |
| 2 | Understand |
| 3 | Apply |
| 4 | Analyze |
| 5 | Evaluate |
| 6 | Create |
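For instance, documents exercising higher-order cognitive skills can be selected in the same style as the usage examples at the end of this card:

```python
# Keep documents labeled Apply, Analyze, Evaluate, or Create.
df.filter(df["eai_taxonomy.bloom_cognitive_process.primary.code"].isin(["3", "4", "5", "6"]))
```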
##### Knowledge Domain
Subject matter categorization:
| Component | Description | Path |
|---|---|---|
| Primary Code | Main knowledge domain code | eai_taxonomy.bloom_knowledge_domain.primary.code |
| Primary Label | Main knowledge domain label | eai_taxonomy.bloom_knowledge_domain.primary.label |
| Secondary Code | Alternative knowledge domain code | eai_taxonomy.bloom_knowledge_domain.secondary.code |
| Secondary Label | Alternative knowledge domain label | eai_taxonomy.bloom_knowledge_domain.secondary.label |
Possible Values:
| Code | Label |
|---|---|
| -1 | Abstain |
| 1 | Factual |
| 2 | Conceptual |
| 3 | Procedural |
| 4 | Metacognitive |
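For example, procedural knowledge (step-by-step methods, typical of worked math solutions) can be targeted with:

```python
# Keep documents whose primary knowledge domain is Procedural.
df.filter(df["eai_taxonomy.bloom_knowledge_domain.primary.code"] == "3")
```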
#### Document Characteristics
##### Document Type v1
Format and structure classification:
| Component | Description | Path |
|---|---|---|
| Primary Code | Main document type code | eai_taxonomy.document_type_v1.primary.code |
| Primary Label | Main document type label | eai_taxonomy.document_type_v1.primary.label |
| Secondary Code | Alternative document type code | eai_taxonomy.document_type_v1.secondary.code |
| Secondary Label | Alternative document type label | eai_taxonomy.document_type_v1.secondary.label |
Possible Values:
| Code | Label |
|---|---|
| -1 | Abstain |
| 1 | News/Editorial |
| 2 | Academic/Research |
| 3 | Reference/Encyclopedic/Educational |
| 4 | Code/Software |
| 5 | Social/Forum |
| 6 | Promotional/Advertisement |
| 7 | Search/Directory/Bibliography |
| 8 | Adult/Pornographic |
| 9 | Personal/Misc |
| 10 | Machine-Generated |
| 11 | Legal/Regulatory |
| 12 | Government/Political |
| 13 | Literary/Creative |
| 14 | Reviews/Critiques |
| 15 | E-Commerce/Marketplace |
| 16 | Images/Videos/Audio |
| 17 | Other/Unclassified |
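These labels make it straightforward to exclude unwanted formats; for example, dropping adult and machine-generated pages:

```python
# Exclude Adult/Pornographic (8) and Machine-Generated (10) documents.
df.filter(~df["eai_taxonomy.document_type_v1.primary.code"].isin(["8", "10"]))
```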
##### Document Type v2
Updated format and structure classification:
| Component | Description | Path |
|---|---|---|
| Primary Code | Main document type code (v2) | eai_taxonomy.document_type_v2.primary.code |
| Primary Label | Main document type label (v2) | eai_taxonomy.document_type_v2.primary.label |
| Secondary Code | Alternative document type code (v2) | eai_taxonomy.document_type_v2.secondary.code |
| Secondary Label | Alternative document type label (v2) | eai_taxonomy.document_type_v2.secondary.label |
Possible Values:
| Code | Label |
|---|---|
| -1 | Abstain |
| 1 | About (Org.) |
| 2 | About (Personal) |
| 3 | Academic Writing |
| 4 | Audio Transcript |
| 5 | Comment Section |
| 6 | Content Listing |
| 7 | Creative Writing |
| 8 | Documentation |
| 9 | FAQ |
| 10 | Knowledge Article |
| 11 | Legal Notices |
| 12 | Listicle |
| 13 | News (Org.) |
| 14 | News Article |
| 15 | Nonfiction Writing |
| 16 | Personal Blog |
| 17 | Product Page |
| 18 | Q&A Forum |
| 19 | Spam / Ads |
| 20 | Structured Data |
| 21 | Customer Support |
| 22 | Truncated |
| 23 | Tutorial |
| 24 | User Review |
| 25 | Other/Unclassified |
##### Extraction Artifacts
Technical extraction quality indicators:
| Component | Description | Path |
|---|---|---|
| Primary Code | Main extraction artifact code | eai_taxonomy.extraction_artifacts.primary.code |
| Primary Label | Main extraction artifact label | eai_taxonomy.extraction_artifacts.primary.label |
| Secondary Code | Alternative extraction artifact code | eai_taxonomy.extraction_artifacts.secondary.code |
| Secondary Label | Alternative extraction artifact label | eai_taxonomy.extraction_artifacts.secondary.label |
Possible Values:
| Code | Label |
|---|---|
| -1 | Abstain |
| 0 | No Artifacts |
| 1 | Leftover HTML |
| 2 | Text Extraction Errors |
| 3 | Irrelevant Content |
| 4 | Indeterminate |
##### Missing Content
Content completeness assessment:
| Component | Description | Path |
|---|---|---|
| Primary Code | Main missing content code | eai_taxonomy.missing_content.primary.code |
| Primary Label | Main missing content label | eai_taxonomy.missing_content.primary.label |
| Secondary Code | Alternative missing content code | eai_taxonomy.missing_content.secondary.code |
| Secondary Label | Alternative missing content label | eai_taxonomy.missing_content.secondary.label |
Possible Values:
| Code | Label |
|---|---|
| -1 | Abstain |
| 0 | No missing content |
| 1 | Truncated Snippets |
| 2 | Click Here References |
| 3 | Incoherent Flow |
| 4 | Missing Images or Figures |
| 5 | Missing Referenced Data |
| 6 | Indeterminate |
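Together with the extraction-artifact labels above, these fields support a simple completeness filter in the style of the usage examples below:

```python
# Keep only documents flagged as artifact-free and complete.
df.filter(
    (df["eai_taxonomy.extraction_artifacts.primary.code"] == "0") &  # No Artifacts
    (df["eai_taxonomy.missing_content.primary.code"] == "0")         # No missing content
)
```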
#### Content Quality Dimensions
##### Reasoning Depth
Complexity of logical reasoning:
| Component | Description | Path |
|---|---|---|
| Primary Code | Main reasoning depth code | eai_taxonomy.reasoning_depth.primary.code |
| Primary Label | Main reasoning depth label | eai_taxonomy.reasoning_depth.primary.label |
| Secondary Code | Alternative reasoning depth code | eai_taxonomy.reasoning_depth.secondary.code |
| Secondary Label | Alternative reasoning depth label | eai_taxonomy.reasoning_depth.secondary.label |
Possible Values:
| Code | Label |
|---|---|
| -1 | Abstain |
| 1 | No Reasoning |
| 2 | Basic Reasoning |
| 3 | Intermediate Reasoning |
| 4 | Advanced Reasoning |
| 5 | Exceptional Reasoning |
| 6 | Indeterminate |
##### Technical Correctness
Accuracy of technical information:
| Component | Description | Path |
|---|---|---|
| Primary Code | Main technical correctness code | eai_taxonomy.technical_correctness.primary.code |
| Primary Label | Main technical correctness label | eai_taxonomy.technical_correctness.primary.label |
| Secondary Code | Alternative technical correctness code | eai_taxonomy.technical_correctness.secondary.code |
| Secondary Label | Alternative technical correctness label | eai_taxonomy.technical_correctness.secondary.label |
Possible Values:
| Code | Label |
|---|---|
| -1 | Abstain |
| 1 | Technically Flawed |
| 2 | Partially Correct |
| 3 | Mostly Correct |
| 4 | Highly Correct |
| 5 | Exceptionally Correct |
| 6 | Not Applicable/Indeterminate |
##### Education Level
Appropriate educational grade level:
| Component | Description | Path |
|---|---|---|
| Primary Code | Main education level code | eai_taxonomy.education_level.primary.code |
| Primary Label | Main education level label | eai_taxonomy.education_level.primary.label |
| Secondary Code | Alternative education level code | eai_taxonomy.education_level.secondary.code |
| Secondary Label | Alternative education level label | eai_taxonomy.education_level.secondary.label |
Possible Values:
| Code | Label |
|---|---|
| -1 | Abstain |
| 1 | General Audience |
| 2 | High School Level |
| 3 | Undergraduate Level |
| 4 | Graduate/Expert Level |
| 5 | Indeterminate |
### Schema Structure
#### Core Fields
| Field | Type | Description | Path |
|---|---|---|---|
| id | Int64 | Unique identifier for each document | id |
| text | String | The main textual content of the document | text |
#### Metadata Structure
The `metadata` field contains a nested structure with web archive information:
| Field | Type | Description | Path |
|---|---|---|---|
| **URL Information** | | | |
| URL | String | Original URL of the document | metadata.url |
| Source Domain | String | Domain name of the source | metadata.source_domain |
| Snapshot ID | String | Identifier for the web archive snapshot | metadata.snapshot_id |
| **WARC Metadata** | | WARC (Web ARChive) format metadata | |
| Content Length | String | Size of the content | metadata.warc_metadata.Content-Length |
| Content Type | String | MIME type of the content | metadata.warc_metadata.Content-Type |
| Block Digest | String | Checksum of the WARC block | metadata.warc_metadata.WARC-Block-Digest |
| Concurrent To | String | Related WARC records | metadata.warc_metadata.WARC-Concurrent-To |
| Date | String | Timestamp of the crawl | metadata.warc_metadata.WARC-Date |
| IP Address | String | Source server IP address | metadata.warc_metadata.WARC-IP-Address |
| Payload Type | String | Identified content type | metadata.warc_metadata.WARC-Identified-Payload-Type |
| Payload Digest | String | Checksum of the payload | metadata.warc_metadata.WARC-Payload-Digest |
| Record ID | String | Unique WARC record identifier | metadata.warc_metadata.WARC-Record-ID |
| Target URI | String | Original target URL | metadata.warc_metadata.WARC-Target-URI |
| Truncated | String | Truncation status | metadata.warc_metadata.WARC-Truncated |
| Type | String | WARC record type | metadata.warc_metadata.WARC-Type |
| Warcinfo ID | String | Associated warcinfo record | metadata.warc_metadata.WARC-Warcinfo-ID |
| **Additional Info** | | | |
| WARC Info | String | Additional WARC information | metadata.warc_info |
#### Text Structure Information
| Field | Type | Description | Path |
|---|---|---|---|
| Line Start Indices | List[Int32] | Starting indices of each line | line_start_n_end_idx.line_start_idx |
| Line End Indices | List[Int32] | Ending indices of each line | line_start_n_end_idx.line_end_idx |
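A small sketch of how these indices can be used to recover individual lines, assuming they are parallel lists of character offsets into the document's `text` field:

```python
def document_lines(record):
    """Split a document into its original lines using the parallel
    start/end offset lists (assumed to be character offsets into `text`)."""
    spans = record["line_start_n_end_idx"]
    return [
        record["text"][start:end]
        for start, end in zip(spans["line_start_idx"], spans["line_end_idx"])
    ]
```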
### Quality Signals
The dataset includes two comprehensive quality assessment frameworks:
#### Red Pajama v2 Quality Metrics
Text quality indicators derived from the Red Pajama v2 filtering pipeline:
##### Content Structure Metrics
| Metric | Description | Path |
|---|---|---|
| Original Length | Original document length | quality_signals.red_pajama_v2.ccnet_original_length |
| Original Lines | Number of lines in original document | quality_signals.red_pajama_v2.ccnet_original_nlines |
| Sentence Count | Total sentence count | quality_signals.red_pajama_v2.rps_doc_num_sentences |
| Word Count | Total word count | quality_signals.red_pajama_v2.rps_doc_word_count |
| Mean Word Length | Average word length | quality_signals.red_pajama_v2.rps_doc_mean_word_length |
##### Language Quality Metrics
| Metric | Description | Path |
|---|---|---|
| Stop Word Fraction | Proportion of stop words | quality_signals.red_pajama_v2.rps_doc_stop_word_fraction |
| Unique Words Fraction | Fraction of unique words | quality_signals.red_pajama_v2.rps_doc_frac_unique_words |
| All Caps Words | Fraction of words in all capitals | quality_signals.red_pajama_v2.rps_doc_frac_all_caps_words |
| Non-Alphabetic Words | Fraction of non-alphabetic words | quality_signals.red_pajama_v2.rps_doc_frac_no_alph_words |
| Unigram Entropy | Entropy measure of word distribution | quality_signals.red_pajama_v2.rps_doc_unigram_entropy |
##### Content Pattern Analysis
| Metric | Description | Path |
|---|---|---|
| Curly Bracket Density | Curly bracket density (code indicator) | quality_signals.red_pajama_v2.rps_doc_curly_bracket |
| Symbol-to-Word Ratio | Symbol-to-word ratio | quality_signals.red_pajama_v2.rps_doc_symbol_to_word_ratio |
| Ellipsis Line Endings | Lines ending with ellipsis | quality_signals.red_pajama_v2.rps_doc_frac_lines_end_with_ellipsis |
| Lorem Ipsum Detection | Lorem ipsum text detection | quality_signals.red_pajama_v2.rps_doc_lorem_ipsum |
| Offensive Content | Potentially offensive content detection | quality_signals.red_pajama_v2.rps_doc_ldnoobw_words |
| UT1 Blacklist | UT1 blacklist filtering score | quality_signals.red_pajama_v2.rps_doc_ut1_blacklist |
##### Duplication Detection
| Metric | Description | Path |
|---|---|---|
| 5-gram Duplication | Character-level duplication for 5-grams | quality_signals.red_pajama_v2.rps_doc_frac_chars_dupe_5grams |
| 6-gram Duplication | Character-level duplication for 6-grams | quality_signals.red_pajama_v2.rps_doc_frac_chars_dupe_6grams |
| 7-gram Duplication | Character-level duplication for 7-grams | quality_signals.red_pajama_v2.rps_doc_frac_chars_dupe_7grams |
| 8-gram Duplication | Character-level duplication for 8-grams | quality_signals.red_pajama_v2.rps_doc_frac_chars_dupe_8grams |
| 9-gram Duplication | Character-level duplication for 9-grams | quality_signals.red_pajama_v2.rps_doc_frac_chars_dupe_9grams |
| 10-gram Duplication | Character-level duplication for 10-grams | quality_signals.red_pajama_v2.rps_doc_frac_chars_dupe_10grams |
| Top 2-gram Coverage | Most frequent 2-gram coverage | quality_signals.red_pajama_v2.rps_doc_frac_chars_top_2gram |
| Top 3-gram Coverage | Most frequent 3-gram coverage | quality_signals.red_pajama_v2.rps_doc_frac_chars_top_3gram |
| Top 4-gram Coverage | Most frequent 4-gram coverage | quality_signals.red_pajama_v2.rps_doc_frac_chars_top_4gram |
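For example, repetition-heavy pages can be screened out with a threshold on one of the duplication fractions (the 0.15 cutoff is illustrative, not a tuned value):

```python
# Drop documents where >15% of characters sit inside duplicated 10-grams.
df.filter(df["quality_signals.red_pajama_v2.rps_doc_frac_chars_dupe_10grams"] < 0.15)
```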
##### Domain Importance Scores
| Metric | Description | Path |
|---|---|---|
| Books Importance | Similarity to book content | quality_signals.red_pajama_v2.rps_doc_books_importance |
| Books Importance (Length Corrected) | Length-corrected books similarity | quality_signals.red_pajama_v2.rps_doc_books_importance_length_correction |
| OpenWebText Importance | Similarity to OpenWebText | quality_signals.red_pajama_v2.rps_doc_openwebtext_importance |
| OpenWebText Importance (Length Corrected) | Length-corrected OpenWebText similarity | quality_signals.red_pajama_v2.rps_doc_openwebtext_importance_length_correction |
| Wikipedia Importance | Similarity to Wikipedia | quality_signals.red_pajama_v2.rps_doc_wikipedia_importance |
| Wikipedia Importance (Length Corrected) | Length-corrected Wikipedia similarity | quality_signals.red_pajama_v2.rps_doc_wikipedia_importance_length_correction |
#### FastText Classification Scores
Domain and content type classification probabilities:
| Metric | Description | Path |
|---|---|---|
| DCLM Score | DataComp-LM classifier score | quality_signals.fasttext.dclm |
| English Confidence | English language confidence | quality_signals.fasttext.english |
| Educational Content | Educational content approximation | quality_signals.fasttext.fineweb_edu_approx |
| General Math | General mathematics content | quality_signals.fasttext.eai_general_math |
| Web Math | Web-based mathematics content | quality_signals.fasttext.eai_open_web_math |
| Code Content | Code content detection | quality_signals.fasttext.eai_web_code |
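For a math-focused slice, the two math classifier scores can be combined; the 0.5 thresholds below are illustrative:

```python
# Keep documents that either math classifier considers likely mathematical.
df.filter(
    (df["quality_signals.fasttext.eai_general_math"] > 0.5) |
    (df["quality_signals.fasttext.eai_open_web_math"] > 0.5)
)
```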
### Data Provenance
All documents originate from web crawls with full WARC metadata preservation, enabling:
- Source verification and attribution
- Temporal analysis of web content
- Content deduplication across crawls
- Quality assessment pipeline reconstruction
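For example, per-document provenance can be read straight from the WARC fields (a sketch in the same PySpark style as the usage examples below; bracket-chained field access sidesteps parsing issues with the hyphenated WARC names):

```python
warc = df["metadata"]["warc_metadata"]
df.select(
    df["metadata"]["url"].alias("url"),
    warc["WARC-Date"].alias("crawl_date"),
    warc["WARC-Record-ID"].alias("warc_record_id"),
).show(10, truncate=False)
```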
### Usage Examples
The snippets below use PySpark-style DataFrame expressions; adapt the column access to your framework of choice.
Filter by quality score:

```python
df.filter(df["quality_signals.red_pajama_v2.rps_doc_stop_word_fraction"] > 0.3)
```

Filter by domain:

```python
df.filter(df["metadata.source_domain"].contains("wikipedia"))
```

Filter by education level:

```python
df.filter(df["eai_taxonomy.education_level.primary.code"] == "2")  # High School Level
```

Filter by content type:

```python
df.filter(df["quality_signals.fasttext.eai_web_code"] > 0.8)
```

Filter by document quality:

```python
df.filter(
    (df["quality_signals.red_pajama_v2.rps_doc_word_count"] > 100) &
    (df["quality_signals.red_pajama_v2.rps_doc_stop_word_fraction"] > 0.2) &
    (df["quality_signals.red_pajama_v2.rps_doc_frac_unique_words"] > 0.3)
)
```

Filter by reasoning depth:

```python
df.filter(df["eai_taxonomy.reasoning_depth.primary.code"].isin(["4", "5"]))  # Advanced or Exceptional
```

Filter by document type:

```python
df.filter(df["eai_taxonomy.document_type_v2.primary.code"] == "3")  # Academic Writing
```

Filter high-quality educational content:

```python
df.filter(
    (df["eai_taxonomy.education_level.primary.code"].isin(["2", "3"])) &        # High School or Undergraduate
    (df["eai_taxonomy.technical_correctness.primary.code"].isin(["4", "5"])) &  # Highly or Exceptionally Correct
    (df["eai_taxonomy.extraction_artifacts.primary.code"] == "0") &             # No Artifacts
    (df["quality_signals.fasttext.fineweb_edu_approx"] > 0.7)
)
```
## 🎓 Citation
If you use this dataset, please cite our EssentialWeb paper:
```bibtex
@article{essentialweb2025,
  title={Essential-Web: 24T tokens of organized web data},
  author={[Authors]},
  year={2025}
}
```
*Part of the EssentialWeb ecosystem: making dataset curation accessible, interpretable, and efficient.*