Dataset viewer preview (auto-converted to Parquet), first 100 rows. Columns: `word` (string, 1-64 characters) and `c` (int64 count, ranging from 10 to roughly 200M).

| word | c |
|---|---:|
| the | 199,512,185 |
| of | 102,620,319 |
| in | 87,602,893 |
| and | 82,981,549 |
| a | 59,625,532 |
| to | 55,793,585 |
| was | 33,272,070 |
| is | 24,651,348 |
| for | 24,412,023 |
| on | 24,102,693 |
| as | 23,833,908 |
| by | 21,368,319 |
| with | 20,226,361 |
| from | 17,116,492 |
| he | 16,727,862 |
| at | 16,544,587 |
| that | 15,134,112 |
| his | 13,154,847 |
| it | 12,322,375 |
| an | 10,829,152 |
| were | 8,668,220 |
| also | 8,100,516 |
| which | 7,859,030 |
| are | 7,504,614 |
| first | 6,753,639 |
| this | 6,680,512 |
| be | 6,455,235 |
| s | 6,446,021 |
| new | 6,444,250 |
| had | 6,364,061 |
| or | 6,246,620 |
| has | 5,841,720 |
| references | 5,637,367 |
| one | 5,606,668 |
| after | 5,491,902 |
| their | 5,428,865 |
| she | 5,212,502 |
| its | 5,209,126 |
| her | 5,205,042 |
| who | 5,178,898 |
| but | 4,840,072 |
| american | 4,827,178 |
| not | 4,778,107 |
| two | 4,768,153 |
| th | 4,687,898 |
| they | 4,566,994 |
| people | 4,487,606 |
| have | 4,195,976 |
| been | 3,906,109 |
| all | 3,806,161 |
| other | 3,787,325 |
| time | 3,631,352 |
| during | 3,604,452 |
| when | 3,472,234 |
| university | 3,428,342 |
| united | 3,397,434 |
| school | 3,295,855 |
| may | 3,292,442 |
| into | 3,242,772 |
| national | 3,238,032 |
| year | 3,165,652 |
| world | 3,079,173 |
| state | 3,028,807 |
| players | 3,005,979 |
| there | 3,004,965 |
| born | 2,996,674 |
| i | 2,918,389 |
| external | 2,896,780 |
| links | 2,884,827 |
| states | 2,859,826 |
| up | 2,839,486 |
| city | 2,827,438 |
| century | 2,808,654 |
| more | 2,784,250 |
| years | 2,766,682 |
| over | 2,760,447 |
| film | 2,746,120 |
| de | 2,708,653 |
| would | 2,696,577 |
| south | 2,669,235 |
| three | 2,650,424 |
| later | 2,647,896 |
| season | 2,639,467 |
| only | 2,635,230 |
| between | 2,608,914 |
| where | 2,558,101 |
| no | 2,551,935 |
| about | 2,531,321 |
| st | 2,482,645 |
| most | 2,479,804 |
| team | 2,463,352 |
| out | 2,428,066 |
| e | 2,424,431 |
| county | 2,402,544 |
| war | 2,394,606 |
| under | 2,391,749 |
| series | 2,383,512 |
| second | 2,366,612 |
| d | 2,346,636 |
| history | 2,344,265 |

Word counts for the English Wikipedia dump (2023-11-01), covering every word that occurs at least 10 times in the corpus. Created using the following script:

```python
import re
from collections import Counter

import duckdb
import pandas as pd
from datasets import load_dataset
from tqdm.auto import tqdm

conn = duckdb.connect(":memory:")

def ensure_db(conn: duckdb.DuckDBPyConnection):
    # Running totals: one row per word, c is the accumulated count.
    conn.execute("""
        CREATE TABLE IF NOT EXISTS wc (
            word TEXT PRIMARY KEY,
            c BIGINT
        );
    """)

ensure_db(conn)

def merge_batch(conn: duckdb.DuckDBPyConnection, counts: Counter):
    if not counts:
        return
    df = pd.DataFrame({"word": list(counts.keys()), "c": list(map(int, counts.values()))})
    # Register the batch dataframe as a view, then MERGE (UPSERT).
    # MERGE INTO requires a recent DuckDB (1.4+).
    conn.register("batch_df", df)
    conn.execute("""
        MERGE INTO wc AS t
        USING batch_df AS s
        ON t.word = s.word
        WHEN MATCHED THEN UPDATE SET c = t.c + s.c
        WHEN NOT MATCHED THEN INSERT (word, c) VALUES (s.word, s.c);
    """)
    conn.unregister("batch_df")

TOKEN_RE = re.compile(r"[a-z]+(?:'[a-z]+)?")  # keep internal apostrophes

def tokenize_en_lower(text: str):
    if not text:
        return []
    return TOKEN_RE.findall(text.lower())

batch_size = 500  # articles accumulated per merge into DuckDB
limit = 0         # 0 = process the full dump

ds_iter = load_dataset("wikimedia/wikipedia", "20231101.en", split="train", streaming=True)
buf = Counter()
n = 0
pbar = tqdm(desc="Processing (streaming)", unit="art")
for ex in ds_iter:
    buf.update(tokenize_en_lower(ex.get("text", "")))
    n += 1
    if n % batch_size == 0:
        merge_batch(conn, buf)
        buf.clear()
        pbar.update(batch_size)
    if limit and n >= limit:
        break
if buf:  # flush the final partial batch
    merge_batch(conn, buf)
    pbar.update(n % batch_size)
pbar.close()
```
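
The tokenizer lowercases and keeps only `[a-z]` runs (plus internal apostrophes), so digits and abbreviation periods split words apart. That is why fragments like `th`, `s`, `e`, and `d` rank high in the preview table:

```python
>>> tokenize_en_lower("The 19th century's end, in the U.S.")
['the', 'th', "century's", 'end', 'in', 'u', 's']
```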
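
The script above stops at the in-memory `wc` table. A minimal sketch of the presumable final step, applying the ≥10 cutoff mentioned in the description and exporting to Parquet (the filename is illustrative, not from the source):

```python
# Assumed export step: keep words seen at least 10 times, sorted by count.
conn.execute("""
    COPY (SELECT word, c FROM wc WHERE c >= 10 ORDER BY c DESC)
    TO 'wordcounts.parquet' (FORMAT PARQUET);
""")
```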
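
Since the Hub serves the dataset as Parquet, it can also be queried in place with DuckDB's `hf://` support (via the httpfs extension); `<user>/<this-dataset>` below is a placeholder for this repo's actual id:

```python
import duckdb

con = duckdb.connect()
# Placeholder repo id; replace with this dataset's real path on the Hub.
top10 = con.execute("""
    SELECT word, c
    FROM 'hf://datasets/<user>/<this-dataset>/**/*.parquet'
    ORDER BY c DESC
    LIMIT 10
""").fetchdf()
print(top10)
```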