Liberating 3T of the finest tokens from PDFs
What is this?
As we run out of web pages to process, the natural question has always been: what to do next? One data source has been avoided for ages due to its extraction cost and complexity: PDFs.
📄 FinePDFs is exactly that. It is the largest publicly available corpus sourced exclusively from PDFs, containing about 3 trillion tokens across 475 million documents in 1733 languages.
Despite being only mildly filtered, it achieves results nearly on par with state-of-the-art HTML-based collections such as the SmolLM-3 Web mixture. More importantly, when mixed with HTML-based corpora, it delivers a significant performance boost across benchmarks 🚀.
The data was sourced from 105 CommonCrawl snapshots, spanning the summer of 2013 to February 2025, with truncated PDFs refetched from the internet, and processed using 🏭 datatrove, our large scale data processing library. This carefully deduplicated and filtered dataset comprises roughly 3.65 terabytes on disk, or about 3 trillion tokens. For PII and opt-out see Personal and Sensitive Information and opt-out.
As is tradition, the dataset is fully reproducible and released under the ODC-By 1.0 license. You will be able to access the reproduction code, ablation and evaluation setup in this GitHub repository soon 👷.
Languages and available subsets
Each language is identified by its ISO 639-3 code, and the data is grouped by language-script pairs, since some languages have content in multiple scripts.
In total, we provide data for 1733 language-script pairs. Of these, 978 have more than 1M tokens, and 66 have more than 1B tokens of data. Most languages also include a small test split which should not be trained on.
Additionally, certain documents for which we have not been able to identify the language have been marked as "unknown".
The following table shows the size of the filtered subset for the 50 largest languages.
| Language | Docs | Tokens (B) | Disk size |
|---|---|---|---|
| eng_Latn | 206,917,556 | 1190.65 B | 1.71 TB |
| spa_Latn | 25,629,014 | 217.09 B | 249.99 GB |
| deu_Latn | 36,121,918 | 177.56 B | 218.74 GB |
| fra_Latn | 27,312,270 | 165.27 B | 203.58 GB |
| rus_Cyrl | 16,259,957 | 146.73 B | 193.37 GB |
| jpn_Jpan | 31,393,277 | 116.31 B | 142.16 GB |
| ita_Latn | 17,589,182 | 95.03 B | 109.92 GB |
| por_Latn | 12,045,013 | 94.81 B | 98.63 GB |
| pol_Latn | 9,692,213 | 54.63 B | 55.02 GB |
| unknown | 17,098,504 | 47.72 B | 27.94 GB |
| nld_Latn | 7,795,696 | 47.10 B | 53.90 GB |
| hun_Latn | 3,145,494 | 37.48 B | 35.28 GB |
| cmn_Hani | 4,913,699 | 33.03 B | 43.62 GB |
| ces_Latn | 5,651,566 | 29.94 B | 36.75 GB |
| arb_Arab | 1,458,060 | 29.79 B | 36.38 GB |
| ukr_Cyrl | 2,677,732 | 25.56 B | 35.66 GB |
| swe_Latn | 4,125,120 | 25.45 B | 27.42 GB |
| ron_Latn | 3,265,132 | 22.63 B | 22.21 GB |
| ind_Latn | 2,323,354 | 20.34 B | 19.65 GB |
| tha_Thai | 2,515,134 | 17.56 B | 18.15 GB |
| ell_Grek | 1,962,841 | 16.84 B | 18.46 GB |
| fin_Latn | 1,980,522 | 16.71 B | 15.71 GB |
| fas_Arab | 1,347,099 | 15.57 B | 20.66 GB |
| tur_Latn | 1,699,676 | 15.34 B | 18.71 GB |
| dan_Latn | 2,415,047 | 13.52 B | 14.61 GB |
| hrv_Latn | 1,436,818 | 12.66 B | 11.15 GB |
| slk_Latn | 2,251,520 | 12.59 B | 12.23 GB |
| srp_Cyrl | 945,085 | 12.33 B | 11.41 GB |
| kor_Hang | 1,092,545 | 12.29 B | 14.30 GB |
| cat_Latn | 1,864,511 | 12.05 B | 12.83 GB |
| nob_Latn | 1,501,170 | 11.82 B | 12.72 GB |
| bul_Cyrl | 1,290,422 | 10.12 B | 10.25 GB |
| slv_Latn | 930,944 | 8.65 B | 8.15 GB |
| heb_Hebr | 827,347 | 8.64 B | 5.18 GB |
| hin_Deva | 849,564 | 8.26 B | 8.32 GB |
| ben_Beng | 538,891 | 8.01 B | 4.04 GB |
| lat_Latn | 166,716 | 7.78 B | 9.82 GB |
| vie_Latn | 1,229,330 | 7.72 B | 8.93 GB |
| lit_Latn | 870,613 | 7.37 B | 6.29 GB |
| bos_Latn | 675,140 | 7.02 B | 6.85 GB |
| dag_Latn | 1,753,020 | 6.03 B | 4.19 GB |
| glk_Arab | 312,868 | 4.98 B | 3.44 GB |
| kiu_Latn | 1,506,764 | 4.71 B | 3.07 GB |
| tam_Taml | 99,546 | 4.59 B | 2.07 GB |
| lvs_Latn | 542,194 | 4.40 B | 3.54 GB |
| urd_Arab | 118,768 | 4.23 B | 4.10 GB |
| isl_Latn | 362,886 | 4.19 B | 3.77 GB |
| kat_Geor | 171,028 | 3.66 B | 1.06 GB |
| ekk_Latn | 552,807 | 3.63 B | 3.41 GB |
| zsm_Latn | 693,830 | 3.41 B | 3.08 GB |
| ... | | | |
| Total | 475,019,140 | 2918.79 B | 3.65 TB |
Changelog
Previous versions remain available in branches named after the version. You can access them by passing, for example, revision="v1.0.0".
- v1.5.0 (11-11-2025): Classifier labels added (DCLM, EDU, EDU-V2, OCR-QUALITY), fixed CommonCrawl paths, and corrected misalignment of labels (docling vs rolmOCR).
- v1.0.0 (07-09-2025): Initial version
How to download and use 📄 FinePDFs
See the table above for the name of the language subset you want to download.
We currently do not provide smaller sample versions, but by setting limit or using streaming=True you can easily fetch a sample of the data. If there is interest from the community we might upload smaller sampled versions later on.
Using 🏭 datatrove
from datatrove.pipeline.readers import ParquetReader

# limit determines how many documents will be streamed (remove for all)
# this will fetch the Portuguese filtered data
data_reader = ParquetReader("hf://datasets/HuggingFaceFW/finepdfs/data/por_Latn/train", limit=1000)
for document in data_reader():
    # do something with document
    print(document)

###############################
# OR for a processing pipeline:
###############################

from datatrove.executor import LocalPipelineExecutor
from datatrove.pipeline.readers import ParquetReader
from datatrove.pipeline.filters import LambdaFilter
from datatrove.pipeline.writers import JsonlWriter

pipeline_exec = LocalPipelineExecutor(
    pipeline=[
        ParquetReader("hf://datasets/HuggingFaceFW/finepdfs/data/por_Latn/train", limit=1000),
        LambdaFilter(lambda doc: "hugging" in doc.text),
        JsonlWriter("some-output-path"),
    ],
    tasks=10,
)
pipeline_exec.run()
Using huggingface_hub
from huggingface_hub import snapshot_download

folder = snapshot_download(
    "HuggingFaceFW/finepdfs",
    repo_type="dataset",
    local_dir="./finepdfs/",
    # download the Czech filtered data
    allow_patterns=["data/ces_Latn/train/*"],
)
For faster downloads, make sure to install hf_transfer (pip install huggingface_hub[hf_transfer]) and set the environment variable HF_HUB_ENABLE_HF_TRANSFER=1.
Using datasets
from datasets import load_dataset
# get Croatian data
fw = load_dataset("HuggingFaceFW/finepdfs", name="hrv_Latn", split="train", streaming=True)
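Continuing from the fw object above, you can peek at a few streamed samples and their annotations (field names as documented under Data Fields below):

for i, sample in enumerate(fw):
    # each sample carries the extracted text plus language and PDF-related metadata
    print(sample["id"], sample["language"], sample["token_count"])
    if i >= 2:
        break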
Code-Switching
Unlike typical HTML datasets, 📄 FinePDFs contains many documents with code-switching—the use of multiple languages within a single document. This commonly occurs in legal transcripts presented in two languages, instruction manuals with mixed-language content, or academic papers where abstracts are written in one language while the main content is in another.
However, code-switching may not always be desirable, particularly when you want to train a model on documents in a specific target language. To address this, we recommend implementing a filtering mechanism that retains only documents where more than 50% of the pages are in the requested language:
wanted_languages = ["ces_Latn", "por_Latn"]

def keep_document(doc: dict) -> bool:
    full_doc_language = doc["full_doc_lid"]
    per_page_languages = doc["per_page_languages"]
    pages_in_language = [p for p in per_page_languages if p in wanted_languages]
    # Keep documents where more than 50% of the pages are in a wanted language
    majority_in_language = len(pages_in_language) > len(per_page_languages) / 2
    # Further enforce the target language at the whole-document level
    full_doc_in_languages = full_doc_language in wanted_languages
    return majority_in_language and full_doc_in_languages
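A minimal sketch of applying this filter with the datasets library, assuming the keep_document function above and a streaming load as shown earlier:

from datasets import load_dataset

ds = load_dataset("HuggingFaceFW/finepdfs", name="ces_Latn", split="train", streaming=True)
ds = ds.filter(keep_document)  # lazily drops code-switched documents while streaming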
Dataset processing steps
We used the 🏭 datatrove library to process the data.
The starting point for our dataset was the set of all WARC files from CommonCrawl dumps, from CC-MAIN-2013-20 to CC-MAIN-2025-08.
For this data, we applied the following processing steps:
- PDF+Truncation Identification 🔍
- Truncated PDF Hydration 🌐
- OCR Requirement Detection & Extraction 🔑
- Text Postprocessing 🔨
- Language Identification 🌎
- Exact Deduplication ♻️
- Filtering for English 🧹
- Deduplication per language 🔄
- PII Anonymization 🎭
PDF Liberation pipeline
PDF+Truncation Identification 🔍
Many of the PDFs in CommonCrawl are truncated, either due to network issues or size limits, so we first identified such documents. For dumps preceding CC-MAIN-2019-47, this meant checking whether the stored PDF had reached the 1 MB truncation limit, while for newer dumps we simply checked the content_truncated field. To further improve our recall, we reran mime_type detection for early crawls and additionally considered any document with a URL signifying the PDF data type.
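As a rough illustration (not the production code), the identification logic can be sketched as follows; the record fields (mime_detected, content_length, content_truncated) are assumptions standing in for the actual CommonCrawl metadata:

ONE_MB = 1024 * 1024

def is_pdf_record(record: dict) -> bool:
    # Re-run mime detection for early crawls and also accept URL hints
    return (
        record.get("mime_detected") == "application/pdf"
        or record.get("url", "").lower().endswith(".pdf")
    )

def is_truncated(record: dict, dump_id: str) -> bool:
    if dump_id >= "CC-MAIN-2019-47":
        # Newer dumps expose an explicit truncation flag
        return bool(record.get("content_truncated"))
    # Older dumps: payloads that hit the 1 MB cap were cut off
    return record.get("content_length", 0) >= ONE_MB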
Truncated PDF Hydration 🌐
Truncated PDFs were refetched from the internet. To prevent overloading servers, we randomly shuffled URLs, even though this made fetching slightly slower since we couldn't reuse existing connections.
OCR Requirement Detection & Extraction 🔑
To reduce the cost and time of PDF extraction, we adopted a two-tiered approach: a cheap text-based method running on CPUs, and a more expensive image-based method running on GPUs. The choice between the two depends on the nature of the PDF: if the text is directly extractable (digital-born PDFs), we use the cheaper method; if the PDF is scanned and text is not extractable, we fall back to the GPU-based pipeline.
To determine the extraction path, we first manually annotated 1,350 PDFs and trained an XGBoost model. The model relies on 7 document-level features alongside 120 page-level features sampled from 8 random pages. We applied this classifier to PDFs that were not truncated and routed them accordingly, while truncated PDFs were always processed with the expensive image-based method. During detection, we also removed potentially corrupted PDFs, i.e. all those with critical or moderate parsing errors.
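For intuition, the routing step boils down to a binary XGBoost classifier over a fixed-size feature vector; the sketch below uses placeholder features and hyperparameters rather than the actual trained model:

import numpy as np
import xgboost as xgb

# Hypothetical layout: 7 document-level + 120 page-level features from 8 sampled pages
N_FEATURES = 127
rng = np.random.default_rng(0)

# Placeholder training data standing in for the ~1,350 manually annotated PDFs
X_train = rng.random((1350, N_FEATURES), dtype=np.float32)
y_train = rng.integers(0, 2, size=1350)  # 1 = scanned (needs OCR), 0 = text-extractable

clf = xgb.XGBClassifier(n_estimators=200, max_depth=6)
clf.fit(X_train, y_train)

needs_ocr = bool(clf.predict(X_train[:1])[0])
print("route to", "rolmOCR" if needs_ocr else "docling")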
For the text-based pipeline, we selected the open-source library Docling due to its strong performance-to-speed ratio. We used PyMuPDF as the backend and ran only the Docling Layout Heron model, which we quantized to int8 with OpenVINO to improve efficiency. Table extraction was handled using PyMuPDF’s in-built detection, but applied only to regions identified as tables. To ensure robustness, we added several post-processing steps to handle rare edge cases.
For the GPU-based pipeline, we used RolmOCR, running on top of a modified LMDeploy framework and orchestrated through the Datatrove inference block. All PDFs were rescaled such that the longest dimension is no smaller than 1280px, while ensuring the representation does not exceed 2048 image tokens, before being passed to the model. The total context length of the model, including the input, was set to 8096 tokens.
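A rough sketch of the page rescaling constraint; the 28-px patch size used to estimate image tokens is an assumption about the underlying VLM, and when the two constraints conflict we assume the token budget wins:

from PIL import Image

MIN_LONG_SIDE = 1280
MAX_IMAGE_TOKENS = 2048
PATCH = 28  # assumed patch size for estimating image tokens

def rescale_page(img: Image.Image) -> Image.Image:
    w, h = img.size
    # Upscale so the longest side is at least 1280 px
    scale = max(1.0, MIN_LONG_SIDE / max(w, h))
    # Downscale if the estimated image-token count would exceed the budget
    est_tokens = (w * scale / PATCH) * (h * scale / PATCH)
    if est_tokens > MAX_IMAGE_TOKENS:
        scale *= (MAX_IMAGE_TOKENS / est_tokens) ** 0.5
    return img.resize((round(w * scale), round(h * scale)))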
Text Postprocessing 🔨
For the Docling pipeline, we removed page-number tags while preserving genuine singleton numbers, cleaned tables by dropping empty rows and columns, and discarded malformed image annotations with an alpha-to-all-character ratio <= 0.8. We then applied a boilerplate detector to strip repetitive content from page headers and footers. Finally we applied FTFY to fix encoding issues 🔧.
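The alpha-to-all-character check can be thought of roughly as follows (a sketch; whether whitespace counts toward "all characters" is our assumption, not a documented detail):

def alpha_ratio(text: str) -> float:
    # Fraction of alphabetic characters among all non-whitespace characters
    chars = [c for c in text if not c.isspace()]
    return sum(c.isalpha() for c in chars) / len(chars) if chars else 0.0

def keep_image_annotation(annotation: str) -> bool:
    # Discard malformed annotations whose alpha ratio is <= 0.8
    return alpha_ratio(annotation) > 0.8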
For the RolmOCR pipeline, we removed pages that ran out of context, were detected to contain repeated content, or failed entirely. During analysis, we noticed that pages with no or very little text often produced hallucinated content; to address this, we used a VLM to detect and discard such cases. As in the Docling pipeline, we concluded by applying boilerplate detection to remove repetitive headers and footers and applying FTFY.
Language Identification 🌍
Following FineWeb-2, we use GlotLID for language identification. However, unlike FineWeb-2, we apply the model per page instead of on the full document and obtain the final result by averaging over the pages.
For each language, we defined different minimum language classifier confidence scores to keep a document.
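Conceptually, the per-page identification looks like the sketch below, which loads GlotLID as a fastText model and averages the top-1 score per page; the exact repository file name and the per-language thresholds are assumptions to check against the GlotLID model card:

from collections import defaultdict

import fasttext
from huggingface_hub import hf_hub_download

model_path = hf_hub_download("cis-lmu/glotlid", "model.bin")  # assumed filename
model = fasttext.load_model(model_path)

def average_page_lid(pages: list[str]) -> tuple[str, float]:
    scores = defaultdict(float)
    for page in pages:
        labels, probs = model.predict(page.replace("\n", " "))
        scores[labels[0].removeprefix("__label__")] += float(probs[0])
    best = max(scores, key=scores.get)
    # Average of the per-page scores assigned to the winning language
    return best, scores[best] / len(pages)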
Exact Deduplication ♻️
Unlike in our previous iterations, we apply exact deduplication alongside MinHash deduplication, reducing the number of documents before model-based filtering.
Data Filtering 🧹
We do not apply any heuristic-based filters. Our only filtering is model-based and applied to the eng_Latn subset. For this, we follow a similar approach to FineWeb-EDU, targeting removal of PDF advertisements and spam content that occasionally appear in the data. We decided to apply this step before MinHash, as the content we want to filter typically contains random SEO keywords, which could result in the removal of valid content during MinHash.
MinHash Deduplication
Following FineWeb-2, we apply MinHash across all dumps for each language separately, with one change: increasing the total number of hashes due to the higher average length of a document.
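For intuition, MinHash-based near-duplicate detection between two documents can be sketched with the datasketch library; this is an illustration of the technique, not the datatrove implementation, and the shingle size and hash count are arbitrary choices:

from datasketch import MinHash

def signature(text: str, num_perm: int = 256) -> MinHash:
    m = MinHash(num_perm=num_perm)
    tokens = text.split()
    # 5-gram shingles over whitespace-tokenized text
    for i in range(max(1, len(tokens) - 4)):
        m.update(" ".join(tokens[i:i + 5]).encode("utf-8"))
    return m

doc_a = "the quick brown fox jumps over the lazy dog " * 5
doc_b = "the quick brown fox jumped over the lazy dog " * 5
print(signature(doc_a).jaccard(signature(doc_b)))  # high value -> near duplicates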
PII Anonymization 🎭
Unchanged from our previous releases: email addresses and IP addresses are anonymized. ✉️
We will soon release more details and the reasoning behind each step in our upcoming blogpost 👷.
Dataset performance evaluation and ablations
For measuring the dataset performance of the eng_Latn subset, we refined our set of tasks to the following list (note especially the addition of 2 table-understanding tasks):
- SQuAD 2.0
- ARC (AI2 Reasoning Challenge)
- HellaSwag
- MMLU-Redux
- GSM8K
- DROP
- XStoryCloze
- WikiTableQuestions
- TREB QA
- WinoGrande
- PIQA
- OpenBookQA
- CommonsenseQA
Further, in the same manner as for FineWeb-2, we select a set of languages to measure the effects of multilingual data interventions. Due to limited data availability, we restrict our focus to just four languages: Chinese, French, Arabic, and Russian. For these, we re-use the high-signal tasks defined in FineTasks.
(We recommend reading the full blog post for a detailed explanation of the benchmark choices!)
As for metrics, we use probability mass for all tasks. For task averaging, we track both rank and simple averaging across the capabilities we are interested in (a small aggregation sketch follows the list below):
- Reading comprehension (RC)
- Natural language understanding (NLU)
- General knowledge (GK)
- Reasoning (RES)
- Math (MATH)
- Table understanding (TABLE)
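The aggregation itself is simple; here is a small sketch with hypothetical scores (three ablation datasets, three tasks in one capability cluster) showing both simple and rank averaging:

import numpy as np

scores = {  # hypothetical per-task scores
    "dataset_a": [0.42, 0.51, 0.38],
    "dataset_b": [0.45, 0.49, 0.40],
    "dataset_c": [0.40, 0.50, 0.41],
}
matrix = np.array(list(scores.values()))  # shape: (datasets, tasks)
# Rank 1 = best score on a given task
ranks = matrix.shape[0] - matrix.argsort(axis=0).argsort(axis=0)
for i, name in enumerate(scores):
    print(name, "simple avg:", matrix[i].mean(), "rank avg:", ranks[i].mean())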
We conducted our dataset performance ablations and evaluations by training a series of 1.67B-parameter models on ~36 billion tokens, tokenized using the Llama-3.2 tokenizer. To compare 📄 FinePDFs with other datasets, we also trained one of these 1.67B models per target dataset, on 36 billion tokens sampled from it (or the entire dataset when its size was < 36 billion tokens).
Comparison with other datasets
Compared to HTML-based datasets, the documents in our dataset are on average nearly twice as long and, more importantly, include a large number of examples exceeding 100,000 characters. We believe this makes the dataset particularly valuable for advancing long-context capabilities in open-source LLMs.
In terms of performance, 📄 FinePDFs performs nearly on par with the state-of-the-art SmolLM3-Web dataset. More importantly, when we merge the SmolLM3-Web dataset with FinePDFs, we observe a remarkable improvement in performance. For best results, we recommend keeping the proportion of PDF data below 25% of the overall dataset.
Dataset card for 📄 FinePDFs
Dataset Summary
This dataset was created by processing 106 CommonCrawl dumps comprising PDFs crawled from the summer of 2013 to February 2025. 📄 FinePDFs includes a variety of domains (especially legal/educational) and topics in a variety of languages and is primarily intended to be used as a research artifact of public data in the context of pretraining datasets for large language models. The CommonCrawl PDFs were carefully extracted, deduplicated and filtered with the 🏭 datatrove library, resulting in the largest publicly available LLM pretraining dataset made exclusively from PDFs.
Dataset Structure
Data Instances
The following is an example sample from the dataset. It is part of the English (eng_Latn) data, belongs to the CC-MAIN-2017-22 CommonCrawl snapshot, and was crawled on 2017-05-26T22:32:24Z.
{
"text": "CONTENTS\n\n\n\nCONTENTS\n\nNOTE TO THE READER\nThe term 'carcinogenic risk' in the IARC Monographs series is taken to mean that an agent is capable of causing cancer under some circumstances. The Monographs evaluate cancer hazards, despite the historical presence of the word 'risks' in the title. Inclusion of an agent in the Monographs does not imply that it is a carcinogen, only that the published data have been examined. Equally, the fact that an agent has not yet been evaluated in a Monograph does not mean that it is not carcinogenic........",
"id": "<urn:uuid:419db9c6-fcd4-4cf9-ad60-512c252eeac7>",
"dump": "CC-MAIN-2017-22",
"url": "http://monographs.iarc.fr/ENG/Monographs/vol95/mono95-2.pdf",
"date": "2017-05-26T22:32:24Z",
"file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-22/segments/1495463608686.22/warc/CC-MAIN-20170526222659-20170527002659-00203.warc.gz",
"language": "eng_Latn",
"per_page_languages": ["unknown", "unknown", "unknown", "eng_Latn"],
"page_average_lid": "eng_Latn",
"page_average_lid_score": 0.9975388646125793,
"full_doc_lid": "eng_Latn",
"full_doc_lid_score": 0.997407078742981,
"is_truncated": False,
"processor": "rolmOCR",
"page_ends": [8, 10, 20, 1361],
"token_count": 275
}
Data Fields
- text (string): the main text content
- id (string): unique identifier for this sample
- dump (string): the CommonCrawl dump this sample was a part of
- url (string): url to the original page where text was present
- date (string): crawl date (from CommonCrawl)
- file_path (string): s3 path for the individual CommonCrawl warc file containing this sample
- offset (int): offset in the CommonCrawl warc file containing this sample
- language (string): ISO 639-3 code for the language + script of this sample
- per_page_languages (list[string]): per-page ISO 639-3 codes for the language + script of this sample
- page_average_lid (string): ISO 639-3 code for the language + script detected by averaging LID scores across pages using the GlotLID classifier
- page_average_lid_score (float): score of the top-detected language, calculated by averaging LID scores across pages
- full_doc_lid (string): ISO 639-3 code for the language + script detected by LID on the first 40k characters using the GlotLID classifier
- full_doc_lid_score (float): score of the top-detected language, calculated by LID on the first 40k characters
- is_truncated (bool): flags whether the document is truncated in CommonCrawl
- processor (Literal["docling", "rolmOCR"]): the PDF extractor used for this sample
- page_ends (list[int]): indices denoting the end of each page (exclusive)
- token_count (int): number of tokens when applying the gpt2 tokenizer to this sample
Annotations
We augment the original samples with language and PDF related annotations.
The language related annotations are automatically generated by our language block.
language is determined by our routing algorithm. If no appropriate language is found, we assign the sample the "unknown" label.
token_count is generated by applying the Llama-3.2 tokenizer to the text column.
The other fields are PDF related:
is_truncated is determined by checking whether the file size of the CommonCrawl artifact is >= 1 MB (in which case the document is truncated), unless the content_truncated flag is directly available in CommonCrawl.
processor is, for non-truncated PDFs, determined by the XGBoost model trained on PDF metadata. For truncated files, rolmOCR was used for every sample.
page_ends was created by applying a cumulative sum over the lengths of the extracted PDF pages.
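For example, assuming page_ends holds character offsets into text (exclusive end index per page), a document can be split back into its pages like this:

def split_pages(text: str, page_ends: list[int]) -> list[str]:
    starts = [0] + page_ends[:-1]
    return [text[s:e] for s, e in zip(starts, page_ends)]

doc = {"text": "page one text. page two text.", "page_ends": [15, 29]}
print(split_pages(doc["text"], doc["page_ends"]))  # ['page one text. ', 'page two text.']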
Personal and Sensitive Information and opt-out
We anonymize email addresses and public IP addresses.
For emails, we apply a regex pattern and replace any occurrence of an email address with either email@example.com or firstname.lastname@example.org. For IP addresses, we also employ a regex pattern and then further filter to only anonymize IP addresses allocated for public networks. Matched IP addresses are then replaced with one of the following randomly generated IP addresses, which at the time of dataset creation were not responding to ping requests: 22.214.171.124, 126.96.36.199, 188.8.131.52, 184.108.40.206, 220.127.116.11, and 18.104.22.168. We decided against applying regex patterns for phone numbers due to the high false positive rate.
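A minimal sketch of the email replacement step; the regex shown is illustrative, not the exact production pattern:

import random
import re

EMAIL_RE = re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b")
REPLACEMENTS = ["email@example.com", "firstname.lastname@example.org"]

def anonymize_emails(text: str) -> str:
    # Replace every matched email with one of the fixed placeholder addresses
    return EMAIL_RE.sub(lambda m: random.choice(REPLACEMENTS), text)

print(anonymize_emails("Contact jane.doe@corp.example for details."))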
Despite our efforts, given that 📄 FinePDFs is sourced from the internet at large, it is very likely that some personally identifiable information (PII) will be present. If you find your own PII in 📄 FinePDFs and would like it removed, please fill out our PII removal/opt out form.
CommonCrawl respects robots.txt at crawl time, but if you are a webmaster and find your website in 📄 FinePDFs and would like to have it removed, you may also use the PII removal/opt out form.
Considerations for Using the Data
Social Impact of Dataset
While multiple open-weight models with strong performance have been publicly released in the past, these releases are often not accompanied by the corresponding training dataset. This is unfortunate, as dataset characteristics have been demonstrated to have a very large impact on model performance. As the creation of a high-quality training dataset is a fundamental requirement for training an LLM capable of excelling at downstream tasks, with 📄 FinePDFs we (a) make the dataset creation process more transparent by sharing our entire processing setup, including the codebase used, and (b) help alleviate the costs of dataset curation, both in time and in compute, for model creators by publicly releasing our dataset to the community.
We believe that for this dataset, reducing the costs of curation is especially valuable. Unlike other web artifacts such as HTML files, extracting content from PDFs is far more expensive because of their format and the need for powerful ML models to achieve good results. However, this cost is offset by a crucial advantage: PDF files typically contain higher-quality content and represent domains like science and law far more prominently than HTML sources. By making this extracted content freely available, the dataset bridges a critical gap, giving the open-source community access to specialized domain knowledge that would otherwise remain locked behind expensive processing barriers.
Discussion of Biases
Unlike in our previous Fine releases, we decided not to apply NSFW filtering. This decision was based on the fact that PDFs are typically not used for conveying such content, which our own data inspection confirmed.
However, it is possible that a significant number of documents present in the final dataset could be considered toxic or contain harmful content. As 📄 FinePDFs was sourced from web PDFs at large, any harmful biases typically present in them may be reproduced in our dataset.
We deliberately avoided using machine learning filtering methods that define text quality based on similarity to a "gold" source such as Wikipedia, or toxicity classifiers, as these methods have been known to disproportionately remove content in specific dialects and to over-classify text related to specific social identities as toxic, respectively.
Finally, for a large part of the extraction, we used LLMs, which are known to contain certain biases. While the model used was trained to accurately transcribe only the content present in the source document, we can't guarantee that it did not produce toxic or biased outputs.
Other Known Limitations
While we minimized filtering during our entire pipeline, some data with specific characteristics may have been inadvertently removed due to our reliance on purely model-based filtering approaches.
Secondly, our data extraction process has several inherent limitations depending on the method used:
The docling pipeline can only retrieve content directly embedded in PDFs as text. This creates several issues: text appearing within images cannot be extracted, tables may be completely missing or partially misaligned since we employed heuristics-based table extraction, and the heuristic approach to PDF parsing can result in broken words or incorrect paragraph ordering due to the complex nature of PDF formatting. These issues are particularly problematic for documents containing code snippets or complex mathematical equations common in scientific literature.
Documents processed through RolmOCR face different challenges due to its probabilistic nature. The OCR process may have introduced hallucinations, misspelled words, or missing content. These issues are especially pronounced when processing documents in low-resource languages where the model has limited training data.
Finally, in some cases our extraction models failed to process certain pages entirely, resulting in incomplete documents with missing pages.
Additional Information
Licensing Information
The dataset is released under the Open Data Commons Attribution License (ODC-By) v1.0 license. The use of this dataset is also subject to CommonCrawl's Terms of Use.
Future work
While we’ve been very descriptive about our pipeline, we also plan to publish a separate blog post that walks through our journey in more detail—explaining key decisions and sharing findings along the way.
As with previous releases, we view this as a base artifact for general pre-training. That said, we intend to more aggressively filter the dataset in future iterations, with a particular focus on highly educational mid-training data.
Finally, PDFs are just one of many document types available on the web. Looking ahead, we aim to expand our work beyond standard HTML-based datasets to capture a broader and richer variety of sources.
Citation Information
@misc{kydlicek2025finepdfs,
title={FinePDFs},
author={Hynek Kydl{\'\i}{\v{c}}ek and Guilherme Penedo and Leandro von Werra},
year={2025},
publisher = {Hugging Face},
journal = {Hugging Face repository},
howpublished = {\url{https://huggingface.co/datasets/HuggingFaceFW/finepdfs}}
}