Dataset schema (one row per column, with the value statistics reported by the viewer):

| column | type | value statistics |
|---|---|---|
| url | string | lengths 61–61 |
| repository_url | string | 1 class |
| labels_url | string | lengths 75–75 |
| comments_url | string | lengths 70–70 |
| events_url | string | lengths 68–68 |
| html_url | string | lengths 51–51 |
| id | int64 | 1.88B–2.51B |
| node_id | string | lengths 18–18 |
| number | int64 | 6.22k–7.14k |
| title | string | lengths 2–150 |
| user | dict | |
| labels | list | lengths 0–2 |
| state | string | 2 classes |
| locked | bool | 1 class |
| assignee | dict | |
| assignees | list | lengths 0–1 |
| milestone | dict | |
| comments | int64 | 0–17 |
| created_at | timestamp[s] | |
| updated_at | timestamp[s] | |
| closed_at | timestamp[s] | |
| author_association | string | 4 classes |
| active_lock_reason | null | |
| draft | bool | 0 classes |
| pull_request | dict | |
| body | string | lengths 3–19.4k, nullable |
| closed_by | dict | |
| reactions | dict | |
| timeline_url | string | lengths 70–70 |
| performed_via_github_app | null | |
| state_reason | string | 3 classes |
| is_pull_request | bool | 1 class |
https://api.github.com/repos/huggingface/datasets/issues/7063
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/7063/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/7063/comments
https://api.github.com/repos/huggingface/datasets/issues/7063/events
https://github.com/huggingface/datasets/issues/7063
2,424,488,648
I_kwDODunzps6QgsLI
7,063
Add `batch` method to `Dataset`
{ "login": "lappemic", "id": 61876623, "node_id": "MDQ6VXNlcjYxODc2NjIz", "avatar_url": "https://avatars.githubusercontent.com/u/61876623?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lappemic", "html_url": "https://github.com/lappemic", "followers_url": "https://api.github.com/users/lappemic/followers", "following_url": "https://api.github.com/users/lappemic/following{/other_user}", "gists_url": "https://api.github.com/users/lappemic/gists{/gist_id}", "starred_url": "https://api.github.com/users/lappemic/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lappemic/subscriptions", "organizations_url": "https://api.github.com/users/lappemic/orgs", "repos_url": "https://api.github.com/users/lappemic/repos", "events_url": "https://api.github.com/users/lappemic/events{/privacy}", "received_events_url": "https://api.github.com/users/lappemic/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892871, "node_id": "MDU6TGFiZWwxOTM1ODkyODcx", "url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement", "name": "enhancement", "color": "a2eeef", "default": true, "description": "New feature or request" } ]
closed
false
null
[]
null
0
2024-07-23T07:36:59
2024-07-25T13:45:21
2024-07-25T13:45:21
CONTRIBUTOR
null
null
null
### Feature request

Add a `batch` method to the `Dataset` class, similar to the one recently implemented for `IterableDataset` in PR #7054.

### Motivation

A batched iteration speeds up data loading significantly (see e.g. #6279).

### Your contribution

I plan to open a PR to implement this.
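Below is a minimal sketch of what such a `batch` method could look like, built on the existing `Dataset.map` API. The function name and signature mirror the `IterableDataset.batch` from PR #7054; they are assumptions here, not the final implementation.

```python
from datasets import Dataset

def batch(ds: Dataset, batch_size: int, drop_last_batch: bool = False) -> Dataset:
    """Sketch: group consecutive rows into lists, one list per column."""
    def _batch(examples):
        # With batched=True, each column arrives as a list of up to
        # `batch_size` values; wrapping it in another list turns the
        # whole batch into a single output row.
        return {k: [v] for k, v in examples.items()}

    return ds.map(
        _batch,
        batched=True,
        batch_size=batch_size,
        drop_last_batch=drop_last_batch,
    )

ds = Dataset.from_dict({"x": list(range(10))})
batched = batch(ds, batch_size=4)
print(batched[0])  # {'x': [0, 1, 2, 3]}
```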
{ "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "repos_url": "https://api.github.com/users/lhoestq/repos", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "type": "User", "site_admin": false }
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/7063/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/7063/timeline
null
completed
false
https://api.github.com/repos/huggingface/datasets/issues/7061
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/7061/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/7061/comments
https://api.github.com/repos/huggingface/datasets/issues/7061/events
https://github.com/huggingface/datasets/issues/7061
2,423,786,881
I_kwDODunzps6QeA2B
7,061
Custom Dataset | Still Raise Error while handling errors in _generate_examples
{ "login": "hahmad2008", "id": 68266028, "node_id": "MDQ6VXNlcjY4MjY2MDI4", "avatar_url": "https://avatars.githubusercontent.com/u/68266028?v=4", "gravatar_id": "", "url": "https://api.github.com/users/hahmad2008", "html_url": "https://github.com/hahmad2008", "followers_url": "https://api.github.com/users/hahmad2008/followers", "following_url": "https://api.github.com/users/hahmad2008/following{/other_user}", "gists_url": "https://api.github.com/users/hahmad2008/gists{/gist_id}", "starred_url": "https://api.github.com/users/hahmad2008/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/hahmad2008/subscriptions", "organizations_url": "https://api.github.com/users/hahmad2008/orgs", "repos_url": "https://api.github.com/users/hahmad2008/repos", "events_url": "https://api.github.com/users/hahmad2008/events{/privacy}", "received_events_url": "https://api.github.com/users/hahmad2008/received_events", "type": "User", "site_admin": false }
[]
open
false
null
[]
null
0
2024-07-22T21:18:12
2024-09-09T14:48:07
null
NONE
null
null
null
### Describe the bug

I follow this [example](https://discuss.huggingface.co/t/error-handling-in-iterabledataset/72827/3) to handle errors in a custom dataset. I am writing a dataset script which reads jsonl files, and I need to handle errors and continue reading files without raising an exception and exiting the execution.

```python
def _generate_examples(self, filepaths):
    errors = []
    id_ = 0
    for filepath in filepaths:
        try:
            with open(filepath, 'r') as f:
                for line in f:
                    json_obj = json.loads(line)
                    yield id_, json_obj
                    id_ += 1
        except Exception as exc:
            logger.error(f"error occur at filepath: {filepath}")
            errors.append(exc)
```

The `logger.error` message is printed, but an exception is still raised and the run exits:

```
Downloading and preparing dataset custom_dataset/default to /home/myuser/.cache/huggingface/datasets/custom_dataset/default-a14cdd566afee0a6/1.0.0/acfcc9fb9c57034b580c4252841
ERROR: datasets_modules.datasets.custom_dataset.acfcc9fb9c57034b580c4252841bb890a5617cbd28678dd4be5e52b81188ad02.custom_dataset: 2024-07-22 10:47:42,167: error occur at filepath: '/home/myuser/ds/corrupted-file.jsonl

Traceback (most recent call last):
  File "/home/myuser/.cache/huggingface/modules/datasets_modules/datasets/custom_dataset/ac..2/custom_dataset.py", line 48, in _generate_examples
    json_obj = json.loads(line)
  File "myenv/lib/python3.8/json/__init__.py", line 357, in loads
    return _default_decoder.decode(s)
  File "myenv/lib/python3.8/json/decoder.py", line 337, in decode
    obj, end = self.raw_decode(s, idx=_w(s, 0).end())
  File "myenv/lib/python3.8/json/decoder.py", line 353, in raw_decode
    obj, end = self.scan_once(s, idx)
json.decoder.JSONDecodeError: Invalid control character at: line 1 column 4 (char 3)

Generating train split: 0 examples [00:06, ? examples/s]
RemoteTraceback:
"""
Traceback (most recent call last):
  File "myenv/lib/python3.8/site-packages/datasets/builder.py", line 1637, in _prepare_split_single
    num_examples, num_bytes = writer.finalize()
  File "myenv/lib/python3.8/site-packages/datasets/arrow_writer.py", line 594, in finalize
    raise SchemaInferenceError("Please pass `features` or at least one example when writing data")
datasets.arrow_writer.SchemaInferenceError: Please pass `features` or at least one example when writing data

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
  File "myenv/lib/python3.8/site-packages/multiprocess/pool.py", line 125, in worker
    result = (True, func(*args, **kwds))
  File "myenv/lib/python3.8/site-packages/datasets/utils/py_utils.py", line 1353, in _write_generator_to_queue
    for i, result in enumerate(func(**kwargs)):
  File "myenv/lib/python3.8/site-packages/datasets/builder.py", line 1646, in _prepare_split_single
    raise DatasetGenerationError("An error occurred while generating the dataset") from e
datasets.builder.DatasetGenerationError: An error occurred while generating the dataset
"""

The above exception was the direct cause of the following exception (rich traceback trimmed):

  File "myenv/lib/python3.8/site-packages/datasets/utils/py_utils.py", line 1377, in <listcomp>
    [async_result.get() for async_result in async_results]
  File "myenv/lib/python3.8/site-packages/multiprocess/pool.py", line 771, in get
    raise self._value
DatasetGenerationError: An error occurred while generating the dataset
```

### Steps to reproduce the bug

Same as above.

### Expected behavior

Should handle the error and continue reading the remaining files.

### Environment info

python 3.9
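A hedged note on why the reporter's try/except does not help: the generator is suspended at `yield` while the writer pulls examples, and if no example is ever yielded (here the only file was corrupted) the writer itself raises `SchemaInferenceError`, which no try/except inside `_generate_examples` can prevent. A sketch that catches errors per line instead, so a single bad record is skipped and the `yield` stays outside the `try` (`logger` is assumed to be defined as in the original script):

```python
import json

def _generate_examples(self, filepaths):
    # Sketch: handle failures per line rather than per file, so one
    # corrupted record does not abort the whole generation. Note that
    # datasets still needs at least one yielded example (or an explicit
    # `features` argument), otherwise SchemaInferenceError is raised.
    id_ = 0
    for filepath in filepaths:
        try:
            lines = open(filepath, "r", encoding="utf-8").readlines()
        except OSError:
            logger.error(f"could not read file: {filepath}")
            continue
        for line in lines:
            try:
                json_obj = json.loads(line)
            except json.JSONDecodeError:
                logger.error(f"skipping corrupted line in: {filepath}")
                continue
            yield id_, json_obj
            id_ += 1
```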
null
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/7061/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/7061/timeline
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/7059
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/7059/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/7059/comments
https://api.github.com/repos/huggingface/datasets/issues/7059/events
https://github.com/huggingface/datasets/issues/7059
2,422,827,892
I_kwDODunzps6QaWt0
7,059
None values are skipped when reading jsonl in subobjects
{ "login": "PonteIneptique", "id": 1929830, "node_id": "MDQ6VXNlcjE5Mjk4MzA=", "avatar_url": "https://avatars.githubusercontent.com/u/1929830?v=4", "gravatar_id": "", "url": "https://api.github.com/users/PonteIneptique", "html_url": "https://github.com/PonteIneptique", "followers_url": "https://api.github.com/users/PonteIneptique/followers", "following_url": "https://api.github.com/users/PonteIneptique/following{/other_user}", "gists_url": "https://api.github.com/users/PonteIneptique/gists{/gist_id}", "starred_url": "https://api.github.com/users/PonteIneptique/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/PonteIneptique/subscriptions", "organizations_url": "https://api.github.com/users/PonteIneptique/orgs", "repos_url": "https://api.github.com/users/PonteIneptique/repos", "events_url": "https://api.github.com/users/PonteIneptique/events{/privacy}", "received_events_url": "https://api.github.com/users/PonteIneptique/received_events", "type": "User", "site_admin": false }
[]
open
false
null
[]
null
0
2024-07-22T13:02:42
2024-07-22T13:02:53
null
NONE
null
null
null
### Describe the bug

I have been fighting against my machine since this morning, only to find out this is some kind of a bug. When loading a dataset composed of `metadata.jsonl`, if you have nullable values (`Optional[str]`), they can be ignored by the parser, shifting things around.

Here are two versions of the same dataset:

[not-buggy.tar.gz](https://github.com/user-attachments/files/16333532/not-buggy.tar.gz)
[buggy.tar.gz](https://github.com/user-attachments/files/16333553/buggy.tar.gz)

### Steps to reproduce the bug

1. Load the `buggy.tar.gz` dataset
2. Print baselines: `dts = load_dataset("./data")["train"][0]["baselines"]`
3. Load the `not-buggy.tar.gz` dataset
4. Print baselines: `dts = load_dataset("./data")["train"][0]["baselines"]`

### Expected behavior

Both should have 4 baseline entries:

1. Buggy should have `None` followed by three lists.
2. Non-buggy should have four lists, and the first one should be an empty list.

Case 1 does not work, case 2 works, even though `None` is accepted in positions other than the first.

### Environment info

- `datasets` version: 2.19.1
- Platform: Linux-6.5.0-44-generic-x86_64-with-glibc2.35
- Python version: 3.10.12
- `huggingface_hub` version: 0.23.0
- PyArrow version: 16.1.0
- Pandas version: 2.2.2
- `fsspec` version: 2024.3.1
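A minimal, hypothetical reconstruction of the failure shape, not the linked archives themselves; the field names and coordinate values below are illustrative, and whether the leading `None` survives depends on the version (in the reporter's setup it came back shifted):

```python
import json
from datasets import load_dataset

# Two illustrative jsonl rows; "baselines" is a nullable nested-list column.
rows = [
    {"baselines": [None, [[0, 1]], [[2, 3]], [[4, 5]]]},  # leading None
    {"baselines": [[],   [[0, 1]], [[2, 3]], [[4, 5]]]},  # leading empty list
]
with open("metadata.jsonl", "w") as f:
    for row in rows:
        f.write(json.dumps(row) + "\n")

ds = load_dataset("json", data_files="metadata.jsonl")["train"]
print(ds[0]["baselines"])  # expected: 4 entries, the first being None
print(ds[1]["baselines"])  # expected: 4 entries, the first being []
```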
null
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/7059/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/7059/timeline
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/7058
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/7058/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/7058/comments
https://api.github.com/repos/huggingface/datasets/issues/7058/events
https://github.com/huggingface/datasets/issues/7058
2,422,560,355
I_kwDODunzps6QZVZj
7,058
New feature type: Document
{ "login": "severo", "id": 1676121, "node_id": "MDQ6VXNlcjE2NzYxMjE=", "avatar_url": "https://avatars.githubusercontent.com/u/1676121?v=4", "gravatar_id": "", "url": "https://api.github.com/users/severo", "html_url": "https://github.com/severo", "followers_url": "https://api.github.com/users/severo/followers", "following_url": "https://api.github.com/users/severo/following{/other_user}", "gists_url": "https://api.github.com/users/severo/gists{/gist_id}", "starred_url": "https://api.github.com/users/severo/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/severo/subscriptions", "organizations_url": "https://api.github.com/users/severo/orgs", "repos_url": "https://api.github.com/users/severo/repos", "events_url": "https://api.github.com/users/severo/events{/privacy}", "received_events_url": "https://api.github.com/users/severo/received_events", "type": "User", "site_admin": false }
[]
open
false
null
[]
null
0
2024-07-22T10:49:20
2024-07-22T10:49:20
null
CONTRIBUTOR
null
null
null
It would be useful for PDF. https://github.com/huggingface/dataset-viewer/issues/2991#issuecomment-2242656069
null
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/7058/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/7058/timeline
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/7055
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/7055/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/7055/comments
https://api.github.com/repos/huggingface/datasets/issues/7055/events
https://github.com/huggingface/datasets/issues/7055
2,421,708,891
I_kwDODunzps6QWFhb
7,055
WebDataset with different prefixes are unsupported
{ "login": "hlky", "id": 106811348, "node_id": "U_kgDOBl3P1A", "avatar_url": "https://avatars.githubusercontent.com/u/106811348?v=4", "gravatar_id": "", "url": "https://api.github.com/users/hlky", "html_url": "https://github.com/hlky", "followers_url": "https://api.github.com/users/hlky/followers", "following_url": "https://api.github.com/users/hlky/following{/other_user}", "gists_url": "https://api.github.com/users/hlky/gists{/gist_id}", "starred_url": "https://api.github.com/users/hlky/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/hlky/subscriptions", "organizations_url": "https://api.github.com/users/hlky/orgs", "repos_url": "https://api.github.com/users/hlky/repos", "events_url": "https://api.github.com/users/hlky/events{/privacy}", "received_events_url": "https://api.github.com/users/hlky/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
8
2024-07-22T01:14:19
2024-07-24T13:26:30
2024-07-23T13:28:46
NONE
null
null
null
### Describe the bug

Consider a WebDataset with multiple images for each item, where the number of images may vary: [example](https://huggingface.co/datasets/bigdata-pw/fashion-150k)

Due to this [code](https://github.com/huggingface/datasets/blob/87f4c2088854ff33e817e724e75179e9975c1b02/src/datasets/packaged_modules/webdataset/webdataset.py#L76-L80) an error is given:

```
The TAR archives of the dataset should be in WebDataset format, but the files in the archive don't share the same prefix or the same types.
```

The purpose of this check is unclear because PyArrow supports different keys. Removing the check allows the dataset to be loaded and there's no issue when iterating through the dataset:

```python
>>> from datasets import load_dataset
>>> path = "shards/*.tar"
>>> dataset = load_dataset("webdataset", data_files={"train": path}, split="train", streaming=True)
Resolving data files: 100%|██████████| 152/152 [00:00<00:00, 56458.93it/s]
>>> dataset
IterableDataset({
    features: ['__key__', '__url__', '1.jpg', '2.jpg', '3.jpg', '4.jpg', 'json'],
    n_shards: 152
})
```

### Steps to reproduce the bug

```python
from datasets import load_dataset

load_dataset("bigdata-pw/fashion-150k")
```

### Expected behavior

Dataset loads without error.

### Environment info

- `datasets` version: 2.20.0
- Platform: Linux-5.14.0-467.el9.x86_64-x86_64-with-glibc2.34
- Python version: 3.9.19
- `huggingface_hub` version: 0.23.4
- PyArrow version: 17.0.0
- Pandas version: 2.2.2
- `fsspec` version: 2024.5.0
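The claim that PyArrow copes with heterogeneous keys can be checked in isolation. A hedged sketch (standalone, not the webdataset builder code; sample values are placeholders):

```python
import pyarrow as pa

# Records with different key sets: pa.Table fills the gaps with nulls.
samples = [
    {"__key__": "a", "1.jpg": b"...", "2.jpg": b"..."},
    {"__key__": "b", "1.jpg": b"..."},  # fewer images than the first item
]
table = pa.Table.from_pylist(samples)
print(table.schema)       # __key__: string, 1.jpg: binary, 2.jpg: binary
print(table["2.jpg"][1])  # null for the item that lacks 2.jpg
```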
{ "login": "hlky", "id": 106811348, "node_id": "U_kgDOBl3P1A", "avatar_url": "https://avatars.githubusercontent.com/u/106811348?v=4", "gravatar_id": "", "url": "https://api.github.com/users/hlky", "html_url": "https://github.com/hlky", "followers_url": "https://api.github.com/users/hlky/followers", "following_url": "https://api.github.com/users/hlky/following{/other_user}", "gists_url": "https://api.github.com/users/hlky/gists{/gist_id}", "starred_url": "https://api.github.com/users/hlky/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/hlky/subscriptions", "organizations_url": "https://api.github.com/users/hlky/orgs", "repos_url": "https://api.github.com/users/hlky/repos", "events_url": "https://api.github.com/users/hlky/events{/privacy}", "received_events_url": "https://api.github.com/users/hlky/received_events", "type": "User", "site_admin": false }
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/7055/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/7055/timeline
null
completed
false
https://api.github.com/repos/huggingface/datasets/issues/7053
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/7053/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/7053/comments
https://api.github.com/repos/huggingface/datasets/issues/7053/events
https://github.com/huggingface/datasets/issues/7053
2,416,423,791
I_kwDODunzps6QB7Nv
7,053
Datasets.datafiles resolve_pattern `TypeError: can only concatenate tuple (not "str") to tuple`
{ "login": "MatthewYZhang", "id": 48289218, "node_id": "MDQ6VXNlcjQ4Mjg5MjE4", "avatar_url": "https://avatars.githubusercontent.com/u/48289218?v=4", "gravatar_id": "", "url": "https://api.github.com/users/MatthewYZhang", "html_url": "https://github.com/MatthewYZhang", "followers_url": "https://api.github.com/users/MatthewYZhang/followers", "following_url": "https://api.github.com/users/MatthewYZhang/following{/other_user}", "gists_url": "https://api.github.com/users/MatthewYZhang/gists{/gist_id}", "starred_url": "https://api.github.com/users/MatthewYZhang/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/MatthewYZhang/subscriptions", "organizations_url": "https://api.github.com/users/MatthewYZhang/orgs", "repos_url": "https://api.github.com/users/MatthewYZhang/repos", "events_url": "https://api.github.com/users/MatthewYZhang/events{/privacy}", "received_events_url": "https://api.github.com/users/MatthewYZhang/received_events", "type": "User", "site_admin": false }
[]
closed
false
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[ { "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false } ]
null
2
2024-07-18T13:42:35
2024-07-18T15:17:42
2024-07-18T15:16:18
NONE
null
null
null
### Describe the bug

In `data_files.py`, line 332:

```python
fs, _, _ = get_fs_token_paths(pattern, storage_options=storage_options)
```

If we run the code on AWS, `fs.protocol` will be a tuple like `('file', 'local')`, so `isinstance(fs.protocol, str) == False` and

```python
protocol_prefix = fs.protocol + "://" if fs.protocol != "file" else ""
```

will raise `TypeError: can only concatenate tuple (not "str") to tuple`.

### Steps to reproduce the bug

1. Run on a cloud server like AWS
2. `import datasets.data_files as datafile`
3. `datafile.resolve_pattern('path/to/dataset', '.')`
4. `TypeError: can only concatenate tuple (not "str") to tuple`

### Expected behavior

Should return the path of the dataset, with `fs.protocol` at the beginning.

### Environment info

- `datasets` version: 2.14.0
- Platform: Linux-3.10.0-1160.119.1.el7.x86_64-x86_64-with-glibc2.17
- Python version: 3.8.19
- Huggingface_hub version: 0.23.5
- PyArrow version: 16.1.0
- Pandas version: 1.1.5
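A minimal sketch of the kind of guard that avoids the crash, normalizing the protocol before concatenation. fsspec filesystems may expose `protocol` as either a string or a tuple of aliases; the helper below is hypothetical, not the patch that was merged:

```python
def _protocol_prefix(fs) -> str:
    # fsspec may report protocol as a str ("s3") or as a tuple of
    # aliases (("file", "local")); take the first alias in the tuple case.
    protocol = fs.protocol if isinstance(fs.protocol, str) else fs.protocol[0]
    return protocol + "://" if protocol != "file" else ""
```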
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/7053/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/7053/timeline
null
completed
false
https://api.github.com/repos/huggingface/datasets/issues/7051
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/7051/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/7051/comments
https://api.github.com/repos/huggingface/datasets/issues/7051/events
https://github.com/huggingface/datasets/issues/7051
2,409,353,929
I_kwDODunzps6Pm9LJ
7,051
How to set_epoch with interleave_datasets?
{ "login": "jonathanasdf", "id": 511073, "node_id": "MDQ6VXNlcjUxMTA3Mw==", "avatar_url": "https://avatars.githubusercontent.com/u/511073?v=4", "gravatar_id": "", "url": "https://api.github.com/users/jonathanasdf", "html_url": "https://github.com/jonathanasdf", "followers_url": "https://api.github.com/users/jonathanasdf/followers", "following_url": "https://api.github.com/users/jonathanasdf/following{/other_user}", "gists_url": "https://api.github.com/users/jonathanasdf/gists{/gist_id}", "starred_url": "https://api.github.com/users/jonathanasdf/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/jonathanasdf/subscriptions", "organizations_url": "https://api.github.com/users/jonathanasdf/orgs", "repos_url": "https://api.github.com/users/jonathanasdf/repos", "events_url": "https://api.github.com/users/jonathanasdf/events{/privacy}", "received_events_url": "https://api.github.com/users/jonathanasdf/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
7
2024-07-15T18:24:52
2024-08-05T20:58:04
2024-08-05T20:58:04
NONE
null
null
null
Let's say I have dataset A which has 100k examples, and dataset B which has 100m examples. I want to train on an interleaved dataset of A+B, with `stopping_strategy='all_exhausted'` so dataset B doesn't repeat any examples. But every time A is exhausted I want it to be reshuffled (e.g. by calling `set_epoch`).

Of course I want to interleave as `IterableDataset`s / streaming mode so B doesn't have to get tokenized completely at the start.

How could I achieve this? I was thinking something like wrapping dataset A in some new `IterableDataset` with `from_generator()` and manually calling `set_epoch` before interleaving it? But I'm not sure how to keep the number of shards in that dataset... Something like:

```python
dataset_a = load_dataset(...)
dataset_b = load_dataset(...)

def epoch_shuffled_dataset(ds):
    # How to make this maintain the number of shards in ds??
    for epoch in itertools.count():
        ds.set_epoch(epoch)
        yield from iter(ds)

shuffled_dataset_a = IterableDataset.from_generator(
    epoch_shuffled_dataset, gen_kwargs={"ds": dataset_a}
)
interleaved = interleave_datasets(
    [shuffled_dataset_a, dataset_b], probs, stopping_strategy="all_exhausted"
)
```
{ "login": "jonathanasdf", "id": 511073, "node_id": "MDQ6VXNlcjUxMTA3Mw==", "avatar_url": "https://avatars.githubusercontent.com/u/511073?v=4", "gravatar_id": "", "url": "https://api.github.com/users/jonathanasdf", "html_url": "https://github.com/jonathanasdf", "followers_url": "https://api.github.com/users/jonathanasdf/followers", "following_url": "https://api.github.com/users/jonathanasdf/following{/other_user}", "gists_url": "https://api.github.com/users/jonathanasdf/gists{/gist_id}", "starred_url": "https://api.github.com/users/jonathanasdf/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/jonathanasdf/subscriptions", "organizations_url": "https://api.github.com/users/jonathanasdf/orgs", "repos_url": "https://api.github.com/users/jonathanasdf/repos", "events_url": "https://api.github.com/users/jonathanasdf/events{/privacy}", "received_events_url": "https://api.github.com/users/jonathanasdf/received_events", "type": "User", "site_admin": false }
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/7051/reactions", "total_count": 2, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 2 }
https://api.github.com/repos/huggingface/datasets/issues/7051/timeline
null
completed
false
https://api.github.com/repos/huggingface/datasets/issues/7049
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/7049/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/7049/comments
https://api.github.com/repos/huggingface/datasets/issues/7049/events
https://github.com/huggingface/datasets/issues/7049
2,408,514,366
I_kwDODunzps6PjwM-
7,049
Save nparray as list
{ "login": "Sakurakdx", "id": 48399040, "node_id": "MDQ6VXNlcjQ4Mzk5MDQw", "avatar_url": "https://avatars.githubusercontent.com/u/48399040?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Sakurakdx", "html_url": "https://github.com/Sakurakdx", "followers_url": "https://api.github.com/users/Sakurakdx/followers", "following_url": "https://api.github.com/users/Sakurakdx/following{/other_user}", "gists_url": "https://api.github.com/users/Sakurakdx/gists{/gist_id}", "starred_url": "https://api.github.com/users/Sakurakdx/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Sakurakdx/subscriptions", "organizations_url": "https://api.github.com/users/Sakurakdx/orgs", "repos_url": "https://api.github.com/users/Sakurakdx/repos", "events_url": "https://api.github.com/users/Sakurakdx/events{/privacy}", "received_events_url": "https://api.github.com/users/Sakurakdx/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
5
2024-07-15T11:36:11
2024-07-18T11:33:34
2024-07-18T11:33:34
NONE
null
null
null
### Describe the bug

When I use the `map` function to convert images into features, `datasets` saves the nparray as a list. Some people use the `set_format` function to convert the column back, but doesn't this lose precision?

### Steps to reproduce the bug

The map function:

```python
def convert_image_to_features(inst, processor, image_dir):
    image_file = inst["image_url"]
    file = image_file.split("/")[-1]
    image_path = os.path.join(image_dir, file)
    image = Image.open(image_path)
    image = image.convert("RGBA")
    inst["pixel_values"] = processor(images=image, return_tensors="np")["pixel_values"]
    return inst
```

The main function:

```python
map_fun = partial(
    convert_image_to_features, processor=processor, image_dir=image_dir
)
ds = ds.map(map_fun, batched=False, num_proc=20)
print(type(ds[0]["pixel_values"]))
```

### Expected behavior

Prints `<class 'list'>`.

### Environment info

- `datasets` version: 2.16.1
- Platform: Linux-4.19.91-009.ali4000.alios7.x86_64-x86_64-with-glibc2.35
- Python version: 3.11.5
- `huggingface_hub` version: 0.23.4
- PyArrow version: 14.0.2
- Pandas version: 2.1.4
- `fsspec` version: 2023.10.0
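For context, a hedged sketch of the usual answers from similar threads: Arrow stores the numbers themselves, and `set_format` only changes how they are returned on access, so converting back to numpy should not lose precision. Alternatively an `ArrayXD` feature type can be declared up front; the shape below is an assumption for illustration, not the real processor output:

```python
from datasets import Array4D

# Option 1: convert on access; the underlying Arrow data is unchanged.
ds.set_format("numpy", columns=["pixel_values"], output_all_columns=True)
print(type(ds[0]["pixel_values"]))  # <class 'numpy.ndarray'>

# Option 2 (sketch): declare the feature type in map() so shape and
# dtype are kept explicitly; the shape here is a placeholder.
features = ds.features.copy()
features["pixel_values"] = Array4D(shape=(1, 4, 224, 224), dtype="float32")
ds = ds.map(map_fun, batched=False, num_proc=20, features=features)
```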
{ "login": "Sakurakdx", "id": 48399040, "node_id": "MDQ6VXNlcjQ4Mzk5MDQw", "avatar_url": "https://avatars.githubusercontent.com/u/48399040?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Sakurakdx", "html_url": "https://github.com/Sakurakdx", "followers_url": "https://api.github.com/users/Sakurakdx/followers", "following_url": "https://api.github.com/users/Sakurakdx/following{/other_user}", "gists_url": "https://api.github.com/users/Sakurakdx/gists{/gist_id}", "starred_url": "https://api.github.com/users/Sakurakdx/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Sakurakdx/subscriptions", "organizations_url": "https://api.github.com/users/Sakurakdx/orgs", "repos_url": "https://api.github.com/users/Sakurakdx/repos", "events_url": "https://api.github.com/users/Sakurakdx/events{/privacy}", "received_events_url": "https://api.github.com/users/Sakurakdx/received_events", "type": "User", "site_admin": false }
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/7049/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/7049/timeline
null
completed
false
https://api.github.com/repos/huggingface/datasets/issues/7048
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/7048/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/7048/comments
https://api.github.com/repos/huggingface/datasets/issues/7048/events
https://github.com/huggingface/datasets/issues/7048
2,408,487,547
I_kwDODunzps6Pjpp7
7,048
ImportError: numpy.core.multiarray when using `filter`
{ "login": "kamilakesbi", "id": 45195979, "node_id": "MDQ6VXNlcjQ1MTk1OTc5", "avatar_url": "https://avatars.githubusercontent.com/u/45195979?v=4", "gravatar_id": "", "url": "https://api.github.com/users/kamilakesbi", "html_url": "https://github.com/kamilakesbi", "followers_url": "https://api.github.com/users/kamilakesbi/followers", "following_url": "https://api.github.com/users/kamilakesbi/following{/other_user}", "gists_url": "https://api.github.com/users/kamilakesbi/gists{/gist_id}", "starred_url": "https://api.github.com/users/kamilakesbi/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/kamilakesbi/subscriptions", "organizations_url": "https://api.github.com/users/kamilakesbi/orgs", "repos_url": "https://api.github.com/users/kamilakesbi/repos", "events_url": "https://api.github.com/users/kamilakesbi/events{/privacy}", "received_events_url": "https://api.github.com/users/kamilakesbi/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
4
2024-07-15T11:21:04
2024-07-16T10:11:25
2024-07-16T10:11:25
NONE
null
null
null
### Describe the bug

I can't apply the filter method on my dataset.

### Steps to reproduce the bug

The following snippet generates a bug:

```python
from datasets import load_dataset

ami = load_dataset('kamilakesbi/ami', 'ihm')

ami['train'].filter(
    lambda example: example["file_name"] == 'EN2001a'
)
```

I get the following error:

```
ImportError: numpy.core.multiarray failed to import (auto-generated because you didn't call 'numpy.import_array()' after cimporting numpy; use '<void>numpy._import_array' to disable if you are certain you don't need it).
```

### Expected behavior

It should work properly!

### Environment info

- `datasets` version: 2.20.0
- Platform: Linux-5.15.0-67-generic-x86_64-with-glibc2.35
- Python version: 3.10.6
- `huggingface_hub` version: 0.23.4
- PyArrow version: 16.1.0
- Pandas version: 2.2.2
- `fsspec` version: 2024.5.0
{ "login": "kamilakesbi", "id": 45195979, "node_id": "MDQ6VXNlcjQ1MTk1OTc5", "avatar_url": "https://avatars.githubusercontent.com/u/45195979?v=4", "gravatar_id": "", "url": "https://api.github.com/users/kamilakesbi", "html_url": "https://github.com/kamilakesbi", "followers_url": "https://api.github.com/users/kamilakesbi/followers", "following_url": "https://api.github.com/users/kamilakesbi/following{/other_user}", "gists_url": "https://api.github.com/users/kamilakesbi/gists{/gist_id}", "starred_url": "https://api.github.com/users/kamilakesbi/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/kamilakesbi/subscriptions", "organizations_url": "https://api.github.com/users/kamilakesbi/orgs", "repos_url": "https://api.github.com/users/kamilakesbi/repos", "events_url": "https://api.github.com/users/kamilakesbi/events{/privacy}", "received_events_url": "https://api.github.com/users/kamilakesbi/received_events", "type": "User", "site_admin": false }
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/7048/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/7048/timeline
null
completed
false
https://api.github.com/repos/huggingface/datasets/issues/7047
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/7047/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/7047/comments
https://api.github.com/repos/huggingface/datasets/issues/7047/events
https://github.com/huggingface/datasets/issues/7047
2,406,495,084
I_kwDODunzps6PcDNs
7,047
Save Dataset as Sharded Parquet
{ "login": "tom-p-reichel", "id": 43631024, "node_id": "MDQ6VXNlcjQzNjMxMDI0", "avatar_url": "https://avatars.githubusercontent.com/u/43631024?v=4", "gravatar_id": "", "url": "https://api.github.com/users/tom-p-reichel", "html_url": "https://github.com/tom-p-reichel", "followers_url": "https://api.github.com/users/tom-p-reichel/followers", "following_url": "https://api.github.com/users/tom-p-reichel/following{/other_user}", "gists_url": "https://api.github.com/users/tom-p-reichel/gists{/gist_id}", "starred_url": "https://api.github.com/users/tom-p-reichel/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/tom-p-reichel/subscriptions", "organizations_url": "https://api.github.com/users/tom-p-reichel/orgs", "repos_url": "https://api.github.com/users/tom-p-reichel/repos", "events_url": "https://api.github.com/users/tom-p-reichel/events{/privacy}", "received_events_url": "https://api.github.com/users/tom-p-reichel/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892871, "node_id": "MDU6TGFiZWwxOTM1ODkyODcx", "url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement", "name": "enhancement", "color": "a2eeef", "default": true, "description": "New feature or request" } ]
open
false
null
[]
null
2
2024-07-12T23:47:51
2024-07-17T12:07:08
null
NONE
null
null
null
### Feature request

`to_parquet` currently saves the dataset as one massive, monolithic parquet file, rather than as several small parquet files. It should shard large datasets automatically.

### Motivation

This default behavior makes me very sad because a program I ran for 6 hours saved its results using `to_parquet`, putting the entire billion+ row dataset into a 171 GB *single shard parquet file* which pyarrow, apache spark, etc. all cannot work with without completely exhausting the memory of my system. I was previously able to work with larger-than-memory parquet files, but not this one. I *assume* the reason why this is happening is because it is a single shard. Making sharding the default behavior puts datasets in parity with other frameworks, such as spark, which automatically shard when a large dataset is saved as parquet.

### Your contribution

I could change the logic here

https://github.com/huggingface/datasets/blob/bf6f41e94d9b2f1c620cf937a2e85e5754a8b960/src/datasets/io/parquet.py#L109-L158

to use `pyarrow.dataset.write_dataset`, which seems to support sharding, or periodically open new files. We would only shard if the user passed in a path rather than a file handle.
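In the meantime, a hedged sketch of the common workaround: split with the existing `Dataset.shard` API and write each shard as its own parquet file. The shard count, output directory, and file-name pattern are arbitrary choices here, and `ds` stands for the dataset to be saved:

```python
import os

num_shards = 64  # assumption: pick so each shard is a comfortable size
os.makedirs("out", exist_ok=True)
for index in range(num_shards):
    # contiguous=True keeps row order across shards
    shard = ds.shard(num_shards=num_shards, index=index, contiguous=True)
    shard.to_parquet(f"out/data-{index:05d}-of-{num_shards:05d}.parquet")
```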
null
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/7047/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/7047/timeline
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/7041
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/7041/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/7041/comments
https://api.github.com/repos/huggingface/datasets/issues/7041/events
https://github.com/huggingface/datasets/issues/7041
2,404,576,038
I_kwDODunzps6PUusm
7,041
`sort` after `filter` unreasonably slow
{ "login": "Tobin-rgb", "id": 56711045, "node_id": "MDQ6VXNlcjU2NzExMDQ1", "avatar_url": "https://avatars.githubusercontent.com/u/56711045?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Tobin-rgb", "html_url": "https://github.com/Tobin-rgb", "followers_url": "https://api.github.com/users/Tobin-rgb/followers", "following_url": "https://api.github.com/users/Tobin-rgb/following{/other_user}", "gists_url": "https://api.github.com/users/Tobin-rgb/gists{/gist_id}", "starred_url": "https://api.github.com/users/Tobin-rgb/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Tobin-rgb/subscriptions", "organizations_url": "https://api.github.com/users/Tobin-rgb/orgs", "repos_url": "https://api.github.com/users/Tobin-rgb/repos", "events_url": "https://api.github.com/users/Tobin-rgb/events{/privacy}", "received_events_url": "https://api.github.com/users/Tobin-rgb/received_events", "type": "User", "site_admin": false }
[]
open
false
null
[]
null
1
2024-07-12T03:29:27
2024-07-22T13:55:17
null
NONE
null
null
null
### Describe the bug

As the title says...

### Steps to reproduce the bug

`sort` on its own seems to be normal:

```python
from datasets import Dataset
import random

nums = [{"k": random.choice(range(0, 1000))} for _ in range(100000)]
ds = Dataset.from_list(nums)
print("start sort")
ds = ds.sort("k")
print("finish sort")
```

But `sort` after `filter` is extremely slow:

```python
from datasets import Dataset
import random

nums = [{"k": random.choice(range(0, 1000))} for _ in range(100000)]
ds = Dataset.from_list(nums)
ds = ds.filter(lambda x: x > 100, input_columns="k")
print("start sort")
ds = ds.sort("k")
print("finish sort")
```

### Expected behavior

Is this a bug, or is it a misuse of the `sort` function?

### Environment info

- `datasets` version: 2.20.0
- Platform: Linux-3.10.0-1127.19.1.el7.x86_64-x86_64-with-glibc2.17
- Python version: 3.10.13
- `huggingface_hub` version: 0.23.4
- PyArrow version: 16.1.0
- Pandas version: 2.2.2
- `fsspec` version: 2023.10.0
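A plausible explanation, offered tentatively: `filter` does not rewrite the Arrow table, it only attaches an indices mapping, and sorting through that indirection is far slower; materializing the filtered rows first with `flatten_indices()` is the usual remedy:

```python
ds = ds.filter(lambda x: x > 100, input_columns="k")
ds = ds.flatten_indices()  # materialize the filtered rows into a new table
ds = ds.sort("k")          # now sorts at normal speed
```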
null
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/7041/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/7041/timeline
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/7040
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/7040/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/7040/comments
https://api.github.com/repos/huggingface/datasets/issues/7040/events
https://github.com/huggingface/datasets/issues/7040
2,402,918,335
I_kwDODunzps6POZ-_
7,040
load `streaming=True` dataset with downloaded cache
{ "login": "wanghaoyucn", "id": 39429965, "node_id": "MDQ6VXNlcjM5NDI5OTY1", "avatar_url": "https://avatars.githubusercontent.com/u/39429965?v=4", "gravatar_id": "", "url": "https://api.github.com/users/wanghaoyucn", "html_url": "https://github.com/wanghaoyucn", "followers_url": "https://api.github.com/users/wanghaoyucn/followers", "following_url": "https://api.github.com/users/wanghaoyucn/following{/other_user}", "gists_url": "https://api.github.com/users/wanghaoyucn/gists{/gist_id}", "starred_url": "https://api.github.com/users/wanghaoyucn/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/wanghaoyucn/subscriptions", "organizations_url": "https://api.github.com/users/wanghaoyucn/orgs", "repos_url": "https://api.github.com/users/wanghaoyucn/repos", "events_url": "https://api.github.com/users/wanghaoyucn/events{/privacy}", "received_events_url": "https://api.github.com/users/wanghaoyucn/received_events", "type": "User", "site_admin": false }
[]
open
false
null
[]
null
2
2024-07-11T11:14:13
2024-07-11T14:11:56
null
NONE
null
null
null
### Describe the bug

We build a dataset which contains several hdf5 files and write a script using `h5py` to generate the dataset. The hdf5 files are large and the processed dataset cache takes more disk space. So we hope to try a streaming iterable dataset. Unfortunately, `h5py` can't convert a remote URL into an hdf5 file descriptor, so we use `fsspec` as an interface, like below:

```python
def _generate_examples(self, filepath, split):
    for file in filepath:
        with fsspec.open(file, "rb") as fs:
            with h5py.File(fs, "r") as fp:
                # for event_id in sorted(list(fp.keys())):
                event_ids = list(fp.keys())
                ......
```

### Steps to reproduce the bug

The `fsspec` approach works, but it takes 10+ min to print the first 10 examples, which is even longer than the downloading time. I'm not sure if it just caches the whole hdf5 file and then generates the examples.

### Expected behavior

So does the following make sense so far?

1. Download the files:

```python
dataset = datasets.load_dataset('path/to/myscripts', split="train", name="event", trust_remote_code=True)
```

2. Load the iterable dataset faster (using the raw file cache at path `.cache/huggingface/datasets/downloads`):

```python
dataset = datasets.load_dataset('path/to/myscripts', split="train", name="event", trust_remote_code=True, streaming=True)
```

I made some tests, but the code above can't get the expected result. I'm not sure if this is supported. I also found issue #6327. It seemed similar to mine, but I couldn't find a solution.

### Environment info

- `datasets` = 2.18.0
- `h5py` = 3.10.0
- `fsspec` = 2023.10.0
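One hedged workaround for the slow first reads: `h5py` seeks around the file a lot, and over plain HTTP every seek becomes a range request. fsspec's chained `filecache::` protocol keeps a local copy, so subsequent reads hit disk. A sketch; the URL and cache directory are placeholders:

```python
import fsspec
import h5py

url = "https://example.com/data/events.h5"  # hypothetical remote hdf5 file
with fsspec.open(
    f"filecache::{url}",
    mode="rb",
    filecache={"cache_storage": "/tmp/hdf5_cache"},  # download once, reuse after
) as f:
    with h5py.File(f, "r") as fp:
        print(list(fp.keys())[:10])
```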
null
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/7040/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/7040/timeline
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/7037
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/7037/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/7037/comments
https://api.github.com/repos/huggingface/datasets/issues/7037/events
https://github.com/huggingface/datasets/issues/7037
2,400,192,419
I_kwDODunzps6PEAej
7,037
A bug of Dataset.to_json() function
{ "login": "LinglingGreat", "id": 26499566, "node_id": "MDQ6VXNlcjI2NDk5NTY2", "avatar_url": "https://avatars.githubusercontent.com/u/26499566?v=4", "gravatar_id": "", "url": "https://api.github.com/users/LinglingGreat", "html_url": "https://github.com/LinglingGreat", "followers_url": "https://api.github.com/users/LinglingGreat/followers", "following_url": "https://api.github.com/users/LinglingGreat/following{/other_user}", "gists_url": "https://api.github.com/users/LinglingGreat/gists{/gist_id}", "starred_url": "https://api.github.com/users/LinglingGreat/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/LinglingGreat/subscriptions", "organizations_url": "https://api.github.com/users/LinglingGreat/orgs", "repos_url": "https://api.github.com/users/LinglingGreat/repos", "events_url": "https://api.github.com/users/LinglingGreat/events{/privacy}", "received_events_url": "https://api.github.com/users/LinglingGreat/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892857, "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
open
false
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[ { "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false } ]
null
1
2024-07-10T09:11:22
2024-07-10T13:07:44
null
NONE
null
null
null
### Describe the bug

When using the `Dataset.to_json()` function, an unexpected error occurs if the parameter is set to `lines=False`. The stored data should be in the form of a list, but it actually turns into multiple lists, which causes an error when reading the data again.

The reason is that `to_json()` writes to the file in several segments based on the batch size. This is not a problem when `lines=True`, but it is incorrect when `lines=False`, because writing in several passes produces multiple lists (when `len(dataset) > batch_size`).

### Steps to reproduce the bug

Try this code:

```python
from datasets import load_dataset
import json

train_dataset = load_dataset("Anthropic/hh-rlhf", data_dir="harmless-base")["train"]
output_path = "./harmless-base_hftojs.json"
print(len(train_dataset))
train_dataset.to_json(output_path, lines=False, force_ascii=False, indent=2)

with open(output_path, encoding="utf-8") as f:
    data = json.loads(f.read())
```

It raises an error: `json.decoder.JSONDecodeError: Extra data: line 4003 column 1 (char 1373709)`

Extra square brackets have appeared here:

<img width="265" alt="image" src="https://github.com/huggingface/datasets/assets/26499566/81492332-386d-42e8-88d1-b6d4ae3682cc">

### Expected behavior

The code runs normally.

### Environment info

datasets=2.20.0
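Until the writer is fixed, a hedged workaround is to serialize the whole dataset as a single list yourself via `Dataset.to_list()`; this holds all rows in memory at once, so it only suits datasets that fit in RAM (`train_dataset` and `output_path` as defined above):

```python
import json

# Write one well-formed JSON list instead of several concatenated ones.
with open(output_path, "w", encoding="utf-8") as f:
    json.dump(train_dataset.to_list(), f, ensure_ascii=False, indent=2)
```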
null
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/7037/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/7037/timeline
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/7035
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/7035/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/7035/comments
https://api.github.com/repos/huggingface/datasets/issues/7035/events
https://github.com/huggingface/datasets/issues/7035
2,400,021,225
I_kwDODunzps6PDWrp
7,035
Docs are not generated when a parameter defaults to a NamedSplit value
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[ { "id": 4296013012, "node_id": "LA_kwDODunzps8AAAABAA_01A", "url": "https://api.github.com/repos/huggingface/datasets/labels/maintenance", "name": "maintenance", "color": "d4c5f9", "default": false, "description": "Maintenance tasks" } ]
closed
false
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[ { "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false } ]
null
0
2024-07-10T07:51:24
2024-07-26T07:51:53
2024-07-26T07:51:53
MEMBER
null
null
null
While generating the docs, we get an error when some parameter defaults to a `NamedSplit` value, like: ```python def call_function(split=Split.TRAIN): ... ``` The error is: ValueError: Equality not supported between split train and <class 'inspect._empty'> See: https://github.com/huggingface/datasets/actions/runs/9869660902/job/27254359863?pr=7015 ``` Building the MDX files: 97%|โ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–‹| 58/60 [00:00<00:00, 91.94it/s] Traceback (most recent call last): File "/home/runner/work/datasets/datasets/.venv/lib/python3.10/site-packages/doc_builder/build_doc.py", line 197, in build_mdx_files content, new_anchors, source_files, errors = resolve_autodoc( File "/home/runner/work/datasets/datasets/.venv/lib/python3.10/site-packages/doc_builder/build_doc.py", line 123, in resolve_autodoc doc = autodoc( File "/home/runner/work/datasets/datasets/.venv/lib/python3.10/site-packages/doc_builder/autodoc.py", line 499, in autodoc method_doc, check = document_object( File "/home/runner/work/datasets/datasets/.venv/lib/python3.10/site-packages/doc_builder/autodoc.py", line 395, in document_object signature = format_signature(obj) File "/home/runner/work/datasets/datasets/.venv/lib/python3.10/site-packages/doc_builder/autodoc.py", line 126, in format_signature if param.default != inspect._empty: File "/home/runner/work/datasets/datasets/.venv/lib/python3.10/site-packages/datasets/splits.py", line 136, in __ne__ return not self.__eq__(other) File "/home/runner/work/datasets/datasets/.venv/lib/python3.10/site-packages/datasets/splits.py", line 379, in __eq__ raise ValueError(f"Equality not supported between split {self} and {other}") ValueError: Equality not supported between split train and <class 'inspect._empty'> The above exception was the direct cause of the following exception: Traceback (most recent call last): File "/home/runner/work/datasets/datasets/.venv/bin/doc-builder", line 8, in <module> sys.exit(main()) File "/home/runner/work/datasets/datasets/.venv/lib/python3.10/site-packages/doc_builder/commands/doc_builder_cli.py", line 47, in main args.func(args) File "/home/runner/work/datasets/datasets/.venv/lib/python3.10/site-packages/doc_builder/commands/build.py", line 102, in build_command build_doc( File "/home/runner/work/datasets/datasets/.venv/lib/python3.10/site-packages/doc_builder/build_doc.py", line 367, in build_doc anchors_mapping, source_files_mapping = build_mdx_files( File "/home/runner/work/datasets/datasets/.venv/lib/python3.10/site-packages/doc_builder/build_doc.py", line 230, in build_mdx_files raise type(e)(f"There was an error when converting {file} to the MDX format.\n" + e.args[0]) from e ValueError: There was an error when converting ../datasets/docs/source/package_reference/main_classes.mdx to the MDX format. Equality not supported between split train and <class 'inspect._empty'> ```
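A minimal reproduction of the underlying comparison failure, independent of doc-builder (assuming a `datasets` version with the strict `NamedSplit.__eq__` shown in the traceback):

```python
import inspect

from datasets import Split

# doc-builder compares every parameter default against inspect._empty;
# NamedSplit.__eq__ raises for unsupported operands instead of returning
# NotImplemented, so the inequality check below errors out.
Split.TRAIN != inspect._empty
# ValueError: Equality not supported between split train and <class 'inspect._empty'>
```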
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/7035/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/7035/timeline
null
completed
false
https://api.github.com/repos/huggingface/datasets/issues/7033
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/7033/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/7033/comments
https://api.github.com/repos/huggingface/datasets/issues/7033/events
https://github.com/huggingface/datasets/issues/7033
2,397,419,768
I_kwDODunzps6O5bj4
7,033
`from_generator` does not allow to specify the split name
{ "login": "pminervini", "id": 227357, "node_id": "MDQ6VXNlcjIyNzM1Nw==", "avatar_url": "https://avatars.githubusercontent.com/u/227357?v=4", "gravatar_id": "", "url": "https://api.github.com/users/pminervini", "html_url": "https://github.com/pminervini", "followers_url": "https://api.github.com/users/pminervini/followers", "following_url": "https://api.github.com/users/pminervini/following{/other_user}", "gists_url": "https://api.github.com/users/pminervini/gists{/gist_id}", "starred_url": "https://api.github.com/users/pminervini/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/pminervini/subscriptions", "organizations_url": "https://api.github.com/users/pminervini/orgs", "repos_url": "https://api.github.com/users/pminervini/repos", "events_url": "https://api.github.com/users/pminervini/events{/privacy}", "received_events_url": "https://api.github.com/users/pminervini/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
2
2024-07-09T07:47:58
2024-07-26T12:56:16
2024-07-26T09:31:56
CONTRIBUTOR
null
null
null
### Describe the bug I'm building train, dev, and test using `from_generator`; however, in all three cases, the logger prints `Generating train split:` It's not possible to change the split name since it seems to be hardcoded: https://github.com/huggingface/datasets/blob/main/src/datasets/packaged_modules/generator/generator.py ### Steps to reproduce the bug ``` In [1]: from datasets import Dataset In [2]: def gen(): ...: yield {"pokemon": "bulbasaur", "type": "grass"} ...: In [3]: ds = Dataset.from_generator(gen) Generating train split: 1 examples [00:00, 133.89 examples/s] ``` ### Expected behavior It should be possible to specify any split name ### Environment info - `datasets` version: 2.19.2 - Platform: macOS-10.16-x86_64-i386-64bit - Python version: 3.8.5 - `huggingface_hub` version: 0.23.3 - PyArrow version: 15.0.0 - Pandas version: 2.0.3 - `fsspec` version: 2023.10.0
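A sketch of the requested API; the `split` argument to `from_generator` is hypothetical here and does not exist in the reported version:

```python
from datasets import Dataset

def gen():
    yield {"pokemon": "bulbasaur", "type": "grass"}

# Hypothetical: let the caller name the split instead of hardcoding "train".
ds = Dataset.from_generator(gen, split="validation")
# Desired log: Generating validation split: ...
```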
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/7033/reactions", "total_count": 1, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/7033/timeline
null
completed
false
https://api.github.com/repos/huggingface/datasets/issues/7031
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/7031/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/7031/comments
https://api.github.com/repos/huggingface/datasets/issues/7031/events
https://github.com/huggingface/datasets/issues/7031
2,395,401,692
I_kwDODunzps6Oxu3c
7,031
CI quality is broken: use ruff check instead
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[]
closed
false
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[ { "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false } ]
null
0
2024-07-08T11:42:24
2024-07-08T11:47:29
2024-07-08T11:47:29
MEMBER
null
null
null
CI quality is broken: https://github.com/huggingface/datasets/actions/runs/9838873879/job/27159697027 ``` error: `ruff <path>` has been removed. Use `ruff check <path>` instead. ```
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/7031/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/7031/timeline
null
not_planned
false
https://api.github.com/repos/huggingface/datasets/issues/7030
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/7030/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/7030/comments
https://api.github.com/repos/huggingface/datasets/issues/7030/events
https://github.com/huggingface/datasets/issues/7030
2,393,411,631
I_kwDODunzps6OqJAv
7,030
Add option to disable progress bar when reading a dataset ("Loading dataset from disk")
{ "login": "yuvalkirstain", "id": 57996478, "node_id": "MDQ6VXNlcjU3OTk2NDc4", "avatar_url": "https://avatars.githubusercontent.com/u/57996478?v=4", "gravatar_id": "", "url": "https://api.github.com/users/yuvalkirstain", "html_url": "https://github.com/yuvalkirstain", "followers_url": "https://api.github.com/users/yuvalkirstain/followers", "following_url": "https://api.github.com/users/yuvalkirstain/following{/other_user}", "gists_url": "https://api.github.com/users/yuvalkirstain/gists{/gist_id}", "starred_url": "https://api.github.com/users/yuvalkirstain/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/yuvalkirstain/subscriptions", "organizations_url": "https://api.github.com/users/yuvalkirstain/orgs", "repos_url": "https://api.github.com/users/yuvalkirstain/repos", "events_url": "https://api.github.com/users/yuvalkirstain/events{/privacy}", "received_events_url": "https://api.github.com/users/yuvalkirstain/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892871, "node_id": "MDU6TGFiZWwxOTM1ODkyODcx", "url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement", "name": "enhancement", "color": "a2eeef", "default": true, "description": "New feature or request" } ]
closed
false
null
[]
null
2
2024-07-06T05:43:37
2024-07-13T14:35:59
2024-07-13T14:35:59
NONE
null
null
null
### Feature request Add an option in load_from_disk to disable the progress bar even if the number of files is larger than 16. ### Motivation I am reading a lot of datasets, which creates lots of log output. <img width="1432" alt="image" src="https://github.com/huggingface/datasets/assets/57996478/8d4bbf03-6b89-44b6-937c-932f01b4eb2a"> ### Your contribution Seems like an easy fix to make. I can create a PR if necessary.
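A workaround that silences the bars globally for the whole process (a sketch; the helper's exact location may differ across `datasets` releases, so verify against your version):

```python
from datasets import load_from_disk
from datasets.utils.logging import disable_progress_bar

# Disable all datasets progress bars for this process, including the
# "Loading dataset from disk" ones shown above.
disable_progress_bar()

ds = load_from_disk("/path/to/saved_dataset")  # path is illustrative
```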
{ "login": "yuvalkirstain", "id": 57996478, "node_id": "MDQ6VXNlcjU3OTk2NDc4", "avatar_url": "https://avatars.githubusercontent.com/u/57996478?v=4", "gravatar_id": "", "url": "https://api.github.com/users/yuvalkirstain", "html_url": "https://github.com/yuvalkirstain", "followers_url": "https://api.github.com/users/yuvalkirstain/followers", "following_url": "https://api.github.com/users/yuvalkirstain/following{/other_user}", "gists_url": "https://api.github.com/users/yuvalkirstain/gists{/gist_id}", "starred_url": "https://api.github.com/users/yuvalkirstain/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/yuvalkirstain/subscriptions", "organizations_url": "https://api.github.com/users/yuvalkirstain/orgs", "repos_url": "https://api.github.com/users/yuvalkirstain/repos", "events_url": "https://api.github.com/users/yuvalkirstain/events{/privacy}", "received_events_url": "https://api.github.com/users/yuvalkirstain/received_events", "type": "User", "site_admin": false }
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/7030/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/7030/timeline
null
completed
false
https://api.github.com/repos/huggingface/datasets/issues/7029
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/7029/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/7029/comments
https://api.github.com/repos/huggingface/datasets/issues/7029/events
https://github.com/huggingface/datasets/issues/7029
2,391,366,696
I_kwDODunzps6OiVwo
7,029
load_dataset on AWS lambda throws OSError(30, 'Read-only file system') error
{ "login": "sugam-nexusflow", "id": 171606538, "node_id": "U_kgDOCjqCCg", "avatar_url": "https://avatars.githubusercontent.com/u/171606538?v=4", "gravatar_id": "", "url": "https://api.github.com/users/sugam-nexusflow", "html_url": "https://github.com/sugam-nexusflow", "followers_url": "https://api.github.com/users/sugam-nexusflow/followers", "following_url": "https://api.github.com/users/sugam-nexusflow/following{/other_user}", "gists_url": "https://api.github.com/users/sugam-nexusflow/gists{/gist_id}", "starred_url": "https://api.github.com/users/sugam-nexusflow/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/sugam-nexusflow/subscriptions", "organizations_url": "https://api.github.com/users/sugam-nexusflow/orgs", "repos_url": "https://api.github.com/users/sugam-nexusflow/repos", "events_url": "https://api.github.com/users/sugam-nexusflow/events{/privacy}", "received_events_url": "https://api.github.com/users/sugam-nexusflow/received_events", "type": "User", "site_admin": false }
[]
open
false
null
[]
null
1
2024-07-04T19:15:16
2024-07-17T12:44:03
null
NONE
null
null
null
### Describe the bug I'm using AWS Lambda to run a Python application. I run the `load_dataset` function with cache_dir="/tmp" and it still throws the OSError(30, 'Read-only file system') error. I even updated all the HF environment variables to point to the /tmp dir, but the issue still persists. I can confirm that I can write to the /tmp directory. ### Steps to reproduce the bug ```python d = load_dataset( path=hugging_face_link, split=split, token=token, cache_dir="/tmp/hugging_face_cache", ) ``` ### Expected behavior Everything written to the file system as part of the load_dataset function should be in the /tmp directory. ### Environment info datasets version: 2.16.1 Platform: Linux-5.10.216-225.855.amzn2.x86_64-x86_64-with-glibc2.26 Python version: 3.11.9 huggingface_hub version: 0.19.4 PyArrow version: 16.1.0 Pandas version: 2.2.2 fsspec version: 2023.10.0
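One workaround worth trying (a sketch, not a confirmed fix): point the Hugging Face cache environment variables at /tmp before `datasets` is first imported, since some cache paths are resolved at import time:

```python
import os

# Hypothetical workaround: redirect every Hugging Face cache location to /tmp
# *before* the first import of datasets.
os.environ["HF_HOME"] = "/tmp/hf_home"
os.environ["HF_DATASETS_CACHE"] = "/tmp/hf_datasets_cache"

from datasets import load_dataset

# hugging_face_link, split and token as in the snippet above
d = load_dataset(path=hugging_face_link, split=split, token=token)
```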
null
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/7029/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/7029/timeline
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/7024
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/7024/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/7024/comments
https://api.github.com/repos/huggingface/datasets/issues/7024/events
https://github.com/huggingface/datasets/issues/7024
2,390,141,626
I_kwDODunzps6Odqq6
7,024
Streaming dataset not returning data
{ "login": "johnwee1", "id": 91670254, "node_id": "U_kgDOBXbG7g", "avatar_url": "https://avatars.githubusercontent.com/u/91670254?v=4", "gravatar_id": "", "url": "https://api.github.com/users/johnwee1", "html_url": "https://github.com/johnwee1", "followers_url": "https://api.github.com/users/johnwee1/followers", "following_url": "https://api.github.com/users/johnwee1/following{/other_user}", "gists_url": "https://api.github.com/users/johnwee1/gists{/gist_id}", "starred_url": "https://api.github.com/users/johnwee1/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/johnwee1/subscriptions", "organizations_url": "https://api.github.com/users/johnwee1/orgs", "repos_url": "https://api.github.com/users/johnwee1/repos", "events_url": "https://api.github.com/users/johnwee1/events{/privacy}", "received_events_url": "https://api.github.com/users/johnwee1/received_events", "type": "User", "site_admin": false }
[]
open
false
null
[]
null
0
2024-07-04T07:21:47
2024-07-04T07:21:47
null
NONE
null
null
null
### Describe the bug I'm deciding to post here because I'm still not sure what the issue is, or if I am using IterableDatasets incorrectly. I'm following the guide here https://huggingface.co/learn/cookbook/en/fine_tuning_code_llm_on_single_gpu pretty much to a tee and have verified that it works when I'm fine-tuning on the provided dataset. However, I'm doing some data preprocessing steps (filtering out entries), and when I swap in my own dataset, it fails to train. I eventually fixed this by simply setting `streaming=False` in `load_dataset`. Could this be some sort of network / firewall issue I'm facing? ### Steps to reproduce the bug I made a post with a more detailed description of how I reproduced this problem before I found my workaround: https://discuss.huggingface.co/t/problem-with-custom-iterator-of-streaming-dataset-not-returning-anything/94551 Here is the problematic dataset snippet, which works when streaming=False (and with the buffer_size keyword removed from shuffle) ``` commitpackft = load_dataset( "chargoddard/commitpack-ft-instruct", split="train", streaming=True ).filter(lambda example: example["language"] == "Python") def form_template(example): """Forms a template for each example following the alpaca format for CommitPack""" example["content"] = ( "### Human: " + example["instruction"] + " " + example["input"] + " ### Assistant: " + example["output"] ) return example dataset = commitpackft.map( form_template, remove_columns=["id", "language", "license", "instruction", "input", "output"], ).shuffle( seed=42, buffer_size=10000 ) # remove everything since its all inside "content" now validation_data = dataset.take(4000) train_data = dataset.skip(4000) ``` The annoying part about this is that it only fails during training and I don't know when it will fail, except that it always fails during evaluation. ### Expected behavior The expected behavior is that I should be able to get something from the iterator when it is called, instead of getting nothing or getting stuck in a loop somewhere. ### Environment info - `datasets` version: 2.20.0 - Platform: Linux-5.4.0-121-generic-x86_64-with-glibc2.31 - Python version: 3.11.7 - `huggingface_hub` version: 0.23.4 - PyArrow version: 16.1.0 - Pandas version: 2.2.2 - `fsspec` version: 2024.5.0
null
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/7024/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/7024/timeline
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/7022
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/7022/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/7022/comments
https://api.github.com/repos/huggingface/datasets/issues/7022/events
https://github.com/huggingface/datasets/issues/7022
2,388,064,650
I_kwDODunzps6OVvmK
7,022
There is dead code after we require pyarrow >= 15.0.0
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[ { "id": 4296013012, "node_id": "LA_kwDODunzps8AAAABAA_01A", "url": "https://api.github.com/repos/huggingface/datasets/labels/maintenance", "name": "maintenance", "color": "d4c5f9", "default": false, "description": "Maintenance tasks" } ]
closed
false
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[ { "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false } ]
null
0
2024-07-03T08:52:57
2024-07-03T09:17:36
2024-07-03T09:17:36
MEMBER
null
null
null
There are code lines specific to pyarrow versions < 15.0.0. However, we require pyarrow >= 15.0.0 since the merge of PR: - #6892 Those code lines are now dead code and should be removed.
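The dead branches follow this kind of pattern (illustrative only; the real version checks live throughout the `datasets` source):

```python
import pyarrow as pa
from packaging import version

# Once pyarrow >= 15.0.0 is a hard requirement, the first branch can never
# be taken and should be deleted.
if version.parse(pa.__version__) < version.parse("15.0.0"):
    ...  # legacy pyarrow code path (now dead)
else:
    ...  # current code path (keep, and unindent)
```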
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/7022/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/7022/timeline
null
completed
false
https://api.github.com/repos/huggingface/datasets/issues/7020
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/7020/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/7020/comments
https://api.github.com/repos/huggingface/datasets/issues/7020/events
https://github.com/huggingface/datasets/issues/7020
2,387,940,990
I_kwDODunzps6OVRZ-
7,020
Casting list array to fixed size list raises error
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892857, "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
closed
false
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[ { "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false } ]
null
0
2024-07-03T07:54:49
2024-07-03T08:41:56
2024-07-03T08:41:56
MEMBER
null
null
null
When trying to cast a list array to fixed size list, an AttributeError is raised: > AttributeError: 'pyarrow.lib.FixedSizeListType' object has no attribute 'length' Steps to reproduce the bug: ```python import pyarrow as pa from datasets.table import array_cast arr = pa.array([[0, 1]]) array_cast(arr, pa.list_(pa.int64(), 2)) ``` Stack trace: ``` --------------------------------------------------------------------------- AttributeError Traceback (most recent call last) <ipython-input-12-6cb90a1d8216> in <module> 3 4 arr = pa.array([[0, 1]]) ----> 5 array_cast(arr, pa.list_(pa.int64(), 2)) ~/huggingface/datasets/src/datasets/table.py in wrapper(array, *args, **kwargs) 1802 return pa.chunked_array([func(chunk, *args, **kwargs) for chunk in array.chunks]) 1803 else: -> 1804 return func(array, *args, **kwargs) 1805 1806 return wrapper ~/huggingface/datasets/src/datasets/table.py in array_cast(array, pa_type, allow_primitive_to_str, allow_decimal_to_str) 1920 else: 1921 array_values = array.values[ -> 1922 array.offset * pa_type.length : (array.offset + len(array)) * pa_type.length 1923 ] 1924 return pa.FixedSizeListArray.from_arrays(_c(array_values, pa_type.value_type), pa_type.list_size) AttributeError: 'pyarrow.lib.FixedSizeListType' object has no attribute 'length' ```
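The attribute on `pyarrow.lib.FixedSizeListType` is named `list_size` (it is already used correctly two lines below in the same function), so a likely fix is:

```python
# In datasets.table.array_cast, replace pa_type.length with pa_type.list_size:
array_values = array.values[
    array.offset * pa_type.list_size : (array.offset + len(array)) * pa_type.list_size
]
```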
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/7020/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/7020/timeline
null
completed
false
https://api.github.com/repos/huggingface/datasets/issues/7018
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/7018/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/7018/comments
https://api.github.com/repos/huggingface/datasets/issues/7018/events
https://github.com/huggingface/datasets/issues/7018
2,383,700,286
I_kwDODunzps6OFGE-
7,018
`load_dataset` fails to load dataset saved by `save_to_disk`
{ "login": "sliedes", "id": 2307997, "node_id": "MDQ6VXNlcjIzMDc5OTc=", "avatar_url": "https://avatars.githubusercontent.com/u/2307997?v=4", "gravatar_id": "", "url": "https://api.github.com/users/sliedes", "html_url": "https://github.com/sliedes", "followers_url": "https://api.github.com/users/sliedes/followers", "following_url": "https://api.github.com/users/sliedes/following{/other_user}", "gists_url": "https://api.github.com/users/sliedes/gists{/gist_id}", "starred_url": "https://api.github.com/users/sliedes/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/sliedes/subscriptions", "organizations_url": "https://api.github.com/users/sliedes/orgs", "repos_url": "https://api.github.com/users/sliedes/repos", "events_url": "https://api.github.com/users/sliedes/events{/privacy}", "received_events_url": "https://api.github.com/users/sliedes/received_events", "type": "User", "site_admin": false }
[]
open
false
null
[]
null
2
2024-07-01T12:19:19
2024-08-05T09:21:55
null
NONE
null
null
null
### Describe the bug This code fails to load the dataset it just saved: ```python from datasets import load_dataset from transformers import AutoTokenizer MODEL = "google-bert/bert-base-cased" tokenizer = AutoTokenizer.from_pretrained(MODEL) dataset = load_dataset("yelp_review_full") def tokenize_function(examples): return tokenizer(examples["text"], padding="max_length", truncation=True) tokenized_datasets = dataset.map(tokenize_function, batched=True) tokenized_datasets.save_to_disk("dataset") tokenized_datasets = load_dataset("dataset/") # raises ``` It raises `ValueError: Couldn't infer the same data file format for all splits. Got {NamedSplit('train'): ('arrow', {}), NamedSplit('test'): ('json', {})}`. I believe this bug is caused by the [logic that tries to infer dataset format](https://github.com/huggingface/datasets/blob/9af8dd3de7626183a9a9ec8973cebc672d690400/src/datasets/load.py#L556). It counts the most common file extension. However, a small dataset can fit in a single `.arrow` file and have two JSON metadata files, causing the format to be inferred as JSON: ```shell $ ls -l dataset/test -rw-r--r-- 1 sliedes sliedes 191498784 Jul 1 13:55 data-00000-of-00001.arrow -rw-r--r-- 1 sliedes sliedes 1730 Jul 1 13:55 dataset_info.json -rw-r--r-- 1 sliedes sliedes 249 Jul 1 13:55 state.json ``` ### Steps to reproduce the bug Execute the code above. ### Expected behavior The dataset is loaded successfully. ### Environment info - `datasets` version: 2.20.0 - Platform: Linux-6.9.3-arch1-1-x86_64-with-glibc2.39 - Python version: 3.12.4 - `huggingface_hub` version: 0.23.4 - PyArrow version: 16.1.0 - Pandas version: 2.2.2 - `fsspec` version: 2024.5.0
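Note that a `save_to_disk` dump is an Arrow directory layout rather than a dataset repository, so the documented way to read it back is `load_from_disk`, which skips the format inference entirely:

```python
from datasets import load_from_disk

# load_from_disk reads the dataset_info.json / state.json metadata directly
# instead of guessing the format from file extensions.
tokenized_datasets = load_from_disk("dataset")
```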
null
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/7018/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/7018/timeline
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/7016
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/7016/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/7016/comments
https://api.github.com/repos/huggingface/datasets/issues/7016/events
https://github.com/huggingface/datasets/issues/7016
2,383,262,608
I_kwDODunzps6ODbOQ
7,016
`drop_duplicates` method
{ "login": "MohamedAliRashad", "id": 26205298, "node_id": "MDQ6VXNlcjI2MjA1Mjk4", "avatar_url": "https://avatars.githubusercontent.com/u/26205298?v=4", "gravatar_id": "", "url": "https://api.github.com/users/MohamedAliRashad", "html_url": "https://github.com/MohamedAliRashad", "followers_url": "https://api.github.com/users/MohamedAliRashad/followers", "following_url": "https://api.github.com/users/MohamedAliRashad/following{/other_user}", "gists_url": "https://api.github.com/users/MohamedAliRashad/gists{/gist_id}", "starred_url": "https://api.github.com/users/MohamedAliRashad/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/MohamedAliRashad/subscriptions", "organizations_url": "https://api.github.com/users/MohamedAliRashad/orgs", "repos_url": "https://api.github.com/users/MohamedAliRashad/repos", "events_url": "https://api.github.com/users/MohamedAliRashad/events{/privacy}", "received_events_url": "https://api.github.com/users/MohamedAliRashad/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892865, "node_id": "MDU6TGFiZWwxOTM1ODkyODY1", "url": "https://api.github.com/repos/huggingface/datasets/labels/duplicate", "name": "duplicate", "color": "cfd3d7", "default": true, "description": "This issue or pull request already exists" }, { "id": 1935892871, "node_id": "MDU6TGFiZWwxOTM1ODkyODcx", "url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement", "name": "enhancement", "color": "a2eeef", "default": true, "description": "New feature or request" } ]
open
false
null
[]
null
1
2024-07-01T09:01:06
2024-07-20T06:51:58
null
NONE
null
null
null
### Feature request A `drop_duplicates` method for Hugging Face datasets (similar in simplicity to the `pandas` one) ### Motivation Ease of use ### Your contribution I don't think I am good enough to help
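A minimal sketch of such a method in the meantime (the helper name is hypothetical; assumes single-process, non-batched `filter` and hashable column values):

```python
from datasets import Dataset

def drop_duplicates(ds: Dataset, column: str) -> Dataset:
    """Keep only the first row for each distinct value in `column` (sketch)."""
    seen = set()

    def first_occurrence(example):
        key = example[column]
        if key in seen:
            return False
        seen.add(key)
        return True

    return ds.filter(first_occurrence)

ds = Dataset.from_dict({"text": ["a", "b", "a"]})
deduped = drop_duplicates(ds, "text")  # keeps rows 0 and 1
```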
null
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/7016/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/7016/timeline
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/7013
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/7013/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/7013/comments
https://api.github.com/repos/huggingface/datasets/issues/7013/events
https://github.com/huggingface/datasets/issues/7013
2,382,976,738
I_kwDODunzps6OCVbi
7,013
CI is broken for faiss tests on Windows: node down: Not properly terminated
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[ { "id": 4296013012, "node_id": "LA_kwDODunzps8AAAABAA_01A", "url": "https://api.github.com/repos/huggingface/datasets/labels/maintenance", "name": "maintenance", "color": "d4c5f9", "default": false, "description": "Maintenance tasks" } ]
closed
false
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[ { "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false } ]
null
0
2024-07-01T06:40:03
2024-07-01T07:10:28
2024-07-01T07:10:28
MEMBER
null
null
null
Faiss tests on Windows make the CI run indefinitely until maximum execution time (360 minutes) is reached. See: https://github.com/huggingface/datasets/actions/runs/9712659783 ``` test (integration, windows-latest, deps-minimum) The job running on runner GitHub Actions 60 has exceeded the maximum execution time of 360 minutes. test (integration, windows-latest, deps-latest) The job running on runner GitHub Actions 238 has exceeded the maximum execution time of 360 minutes. ``` ``` ____________________________ tests/test_search.py _____________________________ [gw1] win32 -- Python 3.8.10 C:\hostedtoolcache\windows\Python\3.8.10\x64\python.exe worker 'gw1' crashed while running 'tests/test_search.py::IndexableDatasetTest::test_add_faiss_index' ____________________________ tests/test_search.py _____________________________ [gw2] win32 -- Python 3.8.10 C:\hostedtoolcache\windows\Python\3.8.10\x64\python.exe worker 'gw2' crashed while running 'tests/test_search.py::IndexableDatasetTest::test_add_faiss_index' ``` ``` tests/test_search.py::IndexableDatasetTest::test_add_faiss_index [gw0] node down: Not properly terminated [gw0] FAILED tests/test_search.py::IndexableDatasetTest::test_add_faiss_index replacing crashed worker gw0 tests/test_search.py::IndexableDatasetTest::test_add_faiss_index [gw1] node down: Not properly terminated [gw1] FAILED tests/test_search.py::IndexableDatasetTest::test_add_faiss_index replacing crashed worker gw1 tests/test_search.py::IndexableDatasetTest::test_add_faiss_index [gw2] node down: Not properly terminated [gw2] FAILED tests/test_search.py::IndexableDatasetTest::test_add_faiss_index replacing crashed worker gw2 ```
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/7013/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/7013/timeline
null
completed
false
https://api.github.com/repos/huggingface/datasets/issues/7010
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/7010/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/7010/comments
https://api.github.com/repos/huggingface/datasets/issues/7010/events
https://github.com/huggingface/datasets/issues/7010
2,379,777,480
I_kwDODunzps6N2IXI
7,010
Re-enable raising error from huggingface-hub FutureWarning in CI
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[ { "id": 4296013012, "node_id": "LA_kwDODunzps8AAAABAA_01A", "url": "https://api.github.com/repos/huggingface/datasets/labels/maintenance", "name": "maintenance", "color": "d4c5f9", "default": false, "description": "Maintenance tasks" } ]
closed
false
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[ { "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false } ]
null
0
2024-06-28T07:23:40
2024-06-28T12:19:30
2024-06-28T12:19:29
MEMBER
null
null
null
Re-enable raising error from huggingface-hub FutureWarning in CI, which was disabled by PR: - #6876 Note that this can only be done once transformers releases the fix: - https://github.com/huggingface/transformers/pull/31007
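Mechanically, re-enabling amounts to turning `huggingface_hub` `FutureWarning`s back into errors in the test configuration, along these lines (a sketch; the actual filter lives in the project's CI setup):

```python
import warnings

# Escalate FutureWarnings raised from huggingface_hub modules into errors so
# that deprecated usage fails CI instead of passing silently.
warnings.filterwarnings("error", category=FutureWarning, module=r"huggingface_hub.*")
```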
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/7010/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/7010/timeline
null
completed
false
https://api.github.com/repos/huggingface/datasets/issues/7008
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/7008/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/7008/comments
https://api.github.com/repos/huggingface/datasets/issues/7008/events
https://github.com/huggingface/datasets/issues/7008
2,379,591,141
I_kwDODunzps6N1a3l
7,008
Support ruff 0.5.0 in CI
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[ { "id": 4296013012, "node_id": "LA_kwDODunzps8AAAABAA_01A", "url": "https://api.github.com/repos/huggingface/datasets/labels/maintenance", "name": "maintenance", "color": "d4c5f9", "default": false, "description": "Maintenance tasks" } ]
closed
false
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[ { "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false } ]
null
0
2024-06-28T05:11:26
2024-06-28T07:11:18
2024-06-28T07:11:18
MEMBER
null
null
null
Support ruff 0.5.0 in CI. Also revert: #7007
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/7008/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/7008/timeline
null
completed
false
https://api.github.com/repos/huggingface/datasets/issues/7006
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/7006/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/7006/comments
https://api.github.com/repos/huggingface/datasets/issues/7006/events
https://github.com/huggingface/datasets/issues/7006
2,379,581,543
I_kwDODunzps6N1Yhn
7,006
CI is broken after ruff-0.5.0: E721
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[ { "id": 4296013012, "node_id": "LA_kwDODunzps8AAAABAA_01A", "url": "https://api.github.com/repos/huggingface/datasets/labels/maintenance", "name": "maintenance", "color": "d4c5f9", "default": false, "description": "Maintenance tasks" } ]
closed
false
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[ { "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false } ]
null
0
2024-06-28T05:03:28
2024-06-28T05:25:18
2024-06-28T05:25:18
MEMBER
null
null
null
After the ruff-0.5.0 release (https://github.com/astral-sh/ruff/releases/tag/0.5.0), our CI is broken due to the E721 rule. See: https://github.com/huggingface/datasets/actions/runs/9707641618/job/26793170961?pr=6983 > src/datasets/features/features.py:844:12: E721 Use `is` and `is not` for type comparisons, or `isinstance()` for isinstance checks
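For context, E721 flags direct `type(...) == ...` comparisons; a minimal illustrative sketch of the compliant rewrite (not the actual `features.py` code):

```python
def is_plain_dict(value) -> bool:
    # Flagged by ruff >= 0.5.0 (E721):
    #   return type(value) == dict
    # Compliant alternatives:
    return type(value) is dict  # exact type check
    # or, when subclasses should match too:
    # return isinstance(value, dict)
```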
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/7006/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/7006/timeline
null
completed
false
https://api.github.com/repos/huggingface/datasets/issues/7005
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/7005/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/7005/comments
https://api.github.com/repos/huggingface/datasets/issues/7005/events
https://github.com/huggingface/datasets/issues/7005
2,378,424,349
I_kwDODunzps6Nw-Ad
7,005
EmptyDatasetError: The directory at /metadata.jsonl doesn't contain any data files
{ "login": "Aki1991", "id": 117731544, "node_id": "U_kgDOBwRw2A", "avatar_url": "https://avatars.githubusercontent.com/u/117731544?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Aki1991", "html_url": "https://github.com/Aki1991", "followers_url": "https://api.github.com/users/Aki1991/followers", "following_url": "https://api.github.com/users/Aki1991/following{/other_user}", "gists_url": "https://api.github.com/users/Aki1991/gists{/gist_id}", "starred_url": "https://api.github.com/users/Aki1991/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Aki1991/subscriptions", "organizations_url": "https://api.github.com/users/Aki1991/orgs", "repos_url": "https://api.github.com/users/Aki1991/repos", "events_url": "https://api.github.com/users/Aki1991/events{/privacy}", "received_events_url": "https://api.github.com/users/Aki1991/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
3
2024-06-27T15:08:26
2024-06-28T09:56:19
2024-06-28T09:56:19
NONE
null
null
null
### Describe the bug While trying to load a custom dataset from a JSONL file, I get the error: "metadata.jsonl doesn't contain any data files" ### Steps to reproduce the bug This is my [metadata_v2.jsonl](https://github.com/user-attachments/files/16016011/metadata_v2.json) file. I have this file in the folder together with all the images mentioned in that JSON(L) file. With the command below I am trying to call `load_dataset` so that I can upload the dataset as described on the [official website](https://huggingface.co/docs/datasets/en/image_dataset#upload-dataset-to-the-hub). ```` from datasets import load_dataset dataset = load_dataset("imagefolder", data_dir="path/to/jsonl/metadata.jsonl") ```` error: ```` EmptyDatasetError Traceback (most recent call last) Cell In[18], line 3 1 from datasets import load_dataset ----> 3 dataset = load_dataset("imagefolder", 4 data_dir="path/to/jsonl/file/metadata.jsonl") 5 dataset[0]["objects"] File ~/anaconda3/envs/lvis/lib/python3.11/site-packages/datasets/load.py:2594, in load_dataset(path, name, data_dir, data_files, split, cache_dir, features, download_config, download_mode, verification_mode, ignore_verifications, keep_in_memory, save_infos, revision, token, use_auth_token, task, streaming, num_proc, storage_options, trust_remote_code, **config_kwargs) 2589 verification_mode = VerificationMode( 2590 (verification_mode or VerificationMode.BASIC_CHECKS) if not save_infos else VerificationMode.ALL_CHECKS 2591 ) 2593 # Create a dataset builder -> 2594 builder_instance = load_dataset_builder( 2595 path=path, 2596 name=name, 2597 data_dir=data_dir, 2598 data_files=data_files, 2599 cache_dir=cache_dir, 2600 features=features, 2601 download_config=download_config, 2602 download_mode=download_mode, 2603 revision=revision, 2604 token=token, 2605 storage_options=storage_options, 2606 trust_remote_code=trust_remote_code, 2607 _require_default_config_name=name is None, 2608 **config_kwargs, 2609 ) 2611 # Return iterable dataset in case of streaming 2612 if streaming: File ~/anaconda3/envs/lvis/lib/python3.11/site-packages/datasets/load.py:2266, in load_dataset_builder(path, name, data_dir, data_files, cache_dir, features, download_config, download_mode, revision, token, use_auth_token, storage_options, trust_remote_code, _require_default_config_name, **config_kwargs) 2264 download_config = download_config.copy() if download_config else DownloadConfig() 2265 download_config.storage_options.update(storage_options) -> 2266 dataset_module = dataset_module_factory( 2267 path, 2268 revision=revision, 2269 download_config=download_config, 2270 download_mode=download_mode, 2271 data_dir=data_dir, 2272 data_files=data_files, 2273 cache_dir=cache_dir, 2274 trust_remote_code=trust_remote_code, 2275 _require_default_config_name=_require_default_config_name, 2276 _require_custom_configs=bool(config_kwargs), 2277 ) 2278 # Get dataset builder class from the processing script 2279 builder_kwargs = dataset_module.builder_kwargs File ~/anaconda3/envs/lvis/lib/python3.11/site-packages/datasets/load.py:1805, in dataset_module_factory(path, revision, download_config, download_mode, dynamic_modules_path, data_dir, data_files, cache_dir, trust_remote_code, _require_default_config_name, _require_custom_configs, **download_kwargs) 1782 # We have several ways to get a dataset builder: 1783 # 1784 # - if path is the name of a packaged dataset module (...)
1796 1797 # Try packaged 1798 if path in _PACKAGED_DATASETS_MODULES: 1799 return PackagedDatasetModuleFactory( 1800 path, 1801 data_dir=data_dir, 1802 data_files=data_files, 1803 download_config=download_config, 1804 download_mode=download_mode, -> 1805 ).get_module() 1806 # Try locally 1807 elif path.endswith(filename): File ~/anaconda3/envs/lvis/lib/python3.11/site-packages/datasets/load.py:1140, in PackagedDatasetModuleFactory.get_module(self) 1135 def get_module(self) -> DatasetModule: 1136 base_path = Path(self.data_dir or "").expanduser().resolve().as_posix() 1137 patterns = ( 1138 sanitize_patterns(self.data_files) 1139 if self.data_files is not None -> 1140 else get_data_patterns(base_path, download_config=self.download_config) 1141 ) 1142 data_files = DataFilesDict.from_patterns( 1143 patterns, 1144 download_config=self.download_config, 1145 base_path=base_path, 1146 ) 1147 supports_metadata = self.name in _MODULE_SUPPORTS_METADATA File ~/anaconda3/envs/lvis/lib/python3.11/site-packages/datasets/data_files.py:503, in get_data_patterns(base_path, download_config) 501 return _get_data_files_patterns(resolver) 502 except FileNotFoundError: --> 503 raise EmptyDatasetError(f"The directory at {base_path} doesn't contain any data files") from None EmptyDatasetError: The directory at path/to/jsonl/file/metadata.jsonl doesn't contain any data files ```` ### Expected behavior It should be able to load the whole file as a dataset into the `dataset` variable. But it gives the error "The directory at path/to/jsonl/metadata.jsonl doesn't contain any data files." ### Environment info I am using a conda environment.
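For reference, the `imagefolder` builder expects `data_dir` to point at the directory that holds the images together with a file named exactly `metadata.jsonl`, not at the JSONL file itself; a minimal sketch with a placeholder path:

```python
from datasets import load_dataset

# data_dir is the folder containing the images plus metadata.jsonl
# (the metadata file must keep that exact name).
dataset = load_dataset("imagefolder", data_dir="path/to/folder")
```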
{ "login": "Aki1991", "id": 117731544, "node_id": "U_kgDOBwRw2A", "avatar_url": "https://avatars.githubusercontent.com/u/117731544?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Aki1991", "html_url": "https://github.com/Aki1991", "followers_url": "https://api.github.com/users/Aki1991/followers", "following_url": "https://api.github.com/users/Aki1991/following{/other_user}", "gists_url": "https://api.github.com/users/Aki1991/gists{/gist_id}", "starred_url": "https://api.github.com/users/Aki1991/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Aki1991/subscriptions", "organizations_url": "https://api.github.com/users/Aki1991/orgs", "repos_url": "https://api.github.com/users/Aki1991/repos", "events_url": "https://api.github.com/users/Aki1991/events{/privacy}", "received_events_url": "https://api.github.com/users/Aki1991/received_events", "type": "User", "site_admin": false }
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/7005/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/7005/timeline
null
completed
false
https://api.github.com/repos/huggingface/datasets/issues/7001
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/7001/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/7001/comments
https://api.github.com/repos/huggingface/datasets/issues/7001/events
https://github.com/huggingface/datasets/issues/7001
2,372,930,879
I_kwDODunzps6NcA0_
7,001
Datasetbuilder Local Download FileNotFoundError
{ "login": "purefall", "id": 12601271, "node_id": "MDQ6VXNlcjEyNjAxMjcx", "avatar_url": "https://avatars.githubusercontent.com/u/12601271?v=4", "gravatar_id": "", "url": "https://api.github.com/users/purefall", "html_url": "https://github.com/purefall", "followers_url": "https://api.github.com/users/purefall/followers", "following_url": "https://api.github.com/users/purefall/following{/other_user}", "gists_url": "https://api.github.com/users/purefall/gists{/gist_id}", "starred_url": "https://api.github.com/users/purefall/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/purefall/subscriptions", "organizations_url": "https://api.github.com/users/purefall/orgs", "repos_url": "https://api.github.com/users/purefall/repos", "events_url": "https://api.github.com/users/purefall/events{/privacy}", "received_events_url": "https://api.github.com/users/purefall/received_events", "type": "User", "site_admin": false }
[]
open
false
null
[]
null
1
2024-06-25T15:02:34
2024-06-25T15:21:19
null
NONE
null
null
null
### Describe the bug So I was trying to download a dataset and save it as Parquet, following the Hugging Face [tutorial](https://huggingface.co/docs/datasets/filesystems#download-and-prepare-a-dataset-into-a-cloud-storage). However, during the execution I hit a FileNotFoundError. I debugged the code and there seems to be a bug: it first creates a .incomplete folder, and before moving its contents the following code deletes the directory ([Code](https://github.com/huggingface/datasets/blob/98fdc9e78e6d057ca66e58a37f49d6618aab8130/src/datasets/builder.py#L984)); as a result I get: ``` FileNotFoundError: [Errno 2] No such file or directory: '~/data/Parquet/.incomplete ' ``` ### Steps to reproduce the bug ``` from datasets import load_dataset_builder from pathlib import Path parquet_dir = "~/data/Parquet/" Path(parquet_dir).mkdir(parents=True, exist_ok=True) builder = load_dataset_builder( "rotten_tomatoes", ) builder.download_and_prepare(parquet_dir, file_format="parquet") ``` ### Expected behavior Downloads the files and saves them as Parquet. ### Environment info Ubuntu, Python 3.10 ``` datasets 2.19.1 ```
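A possible workaround while this is investigated — assuming the unexpanded `~` in the reported path contributes to the failure — is to resolve the directory to an absolute path before handing it to the builder:

```python
from pathlib import Path
from datasets import load_dataset_builder

# Resolve "~" to an absolute path up front instead of passing it verbatim.
parquet_dir = Path("~/data/Parquet/").expanduser().resolve()
parquet_dir.mkdir(parents=True, exist_ok=True)

builder = load_dataset_builder("rotten_tomatoes")
builder.download_and_prepare(str(parquet_dir), file_format="parquet")
```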
null
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/7001/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/7001/timeline
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/7000
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/7000/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/7000/comments
https://api.github.com/repos/huggingface/datasets/issues/7000/events
https://github.com/huggingface/datasets/issues/7000
2,372,887,585
I_kwDODunzps6Nb2Qh
7,000
IterableDataset: Unsupported ScalarType BFloat16
{ "login": "stoical07", "id": 170015089, "node_id": "U_kgDOCiI5cQ", "avatar_url": "https://avatars.githubusercontent.com/u/170015089?v=4", "gravatar_id": "", "url": "https://api.github.com/users/stoical07", "html_url": "https://github.com/stoical07", "followers_url": "https://api.github.com/users/stoical07/followers", "following_url": "https://api.github.com/users/stoical07/following{/other_user}", "gists_url": "https://api.github.com/users/stoical07/gists{/gist_id}", "starred_url": "https://api.github.com/users/stoical07/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/stoical07/subscriptions", "organizations_url": "https://api.github.com/users/stoical07/orgs", "repos_url": "https://api.github.com/users/stoical07/repos", "events_url": "https://api.github.com/users/stoical07/events{/privacy}", "received_events_url": "https://api.github.com/users/stoical07/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
3
2024-06-25T14:43:26
2024-06-25T16:04:00
2024-06-25T15:51:53
NONE
null
null
null
### Describe the bug `IterableDataset.from_generator` crashes when using BFloat16: ``` File "/usr/local/lib/python3.11/site-packages/datasets/utils/_dill.py", line 169, in _save_torchTensor args = (obj.detach().cpu().numpy(),) ^^^^^^^^^^^^^^^^^^^^^^^^^^ TypeError: Got unsupported ScalarType BFloat16 ``` ### Steps to reproduce the bug ```python import torch from datasets import IterableDataset def demo(x): yield {"x": x} x = torch.tensor([1.], dtype=torch.bfloat16) dataset = IterableDataset.from_generator( demo, gen_kwargs=dict(x=x), ) example = next(iter(dataset)) print(example) ``` ### Expected behavior The code sample should print: ```python {'x': tensor([1.], dtype=torch.bfloat16)} ``` ### Environment info ``` datasets==2.20.0 torch==2.2.2 ```
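Until this is fixed, a hedged workaround is to keep bfloat16 tensors out of `gen_kwargs` (the crash happens when dill pickles them via the unsupported `.numpy()` conversion) by casting to a NumPy-supported dtype such as float32, converting back downstream if bfloat16 is needed:

```python
import torch
from datasets import IterableDataset

def demo(x):
    yield {"x": x}

x = torch.tensor([1.0], dtype=torch.bfloat16)
# A float32 copy survives the dill pickling of gen_kwargs that bfloat16 trips on;
# the yielded value is then float32, so cast back later if required.
dataset = IterableDataset.from_generator(demo, gen_kwargs=dict(x=x.float()))
print(next(iter(dataset)))
```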
{ "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "repos_url": "https://api.github.com/users/lhoestq/repos", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "type": "User", "site_admin": false }
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/7000/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/7000/timeline
null
completed
false
https://api.github.com/repos/huggingface/datasets/issues/6997
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6997/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6997/comments
https://api.github.com/repos/huggingface/datasets/issues/6997/events
https://github.com/huggingface/datasets/issues/6997
2,371,966,127
I_kwDODunzps6NYVSv
6,997
CI is broken for tests using hf-internal-testing/librispeech_asr_dummy
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[ { "id": 4296013012, "node_id": "LA_kwDODunzps8AAAABAA_01A", "url": "https://api.github.com/repos/huggingface/datasets/labels/maintenance", "name": "maintenance", "color": "d4c5f9", "default": false, "description": "Maintenance tasks" } ]
closed
false
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[ { "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false } ]
null
0
2024-06-25T07:55:44
2024-06-25T08:13:43
2024-06-25T08:13:43
MEMBER
null
null
null
CI is broken: https://github.com/huggingface/datasets/actions/runs/9657882317/job/26637998686?pr=6996 ``` FAILED tests/test_inspect.py::test_get_dataset_config_names[hf-internal-testing/librispeech_asr_dummy-expected4] - AssertionError: assert ['clean'] == ['clean', 'other'] Right contains one more item: 'other' Full diff: [ 'clean', - 'other', ] FAILED tests/test_inspect.py::test_get_dataset_default_config_name[hf-internal-testing/librispeech_asr_dummy-None] - AssertionError: assert 'clean' is None ``` Note that the repository was recently converted to Parquet: https://huggingface.co/datasets/hf-internal-testing/librispeech_asr_dummy/commit/5be91486e11a2d616f4ec5db8d3fd248585ac07a
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/6997/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/6997/timeline
null
completed
false
https://api.github.com/repos/huggingface/datasets/issues/6995
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6995/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6995/comments
https://api.github.com/repos/huggingface/datasets/issues/6995/events
https://github.com/huggingface/datasets/issues/6995
2,370,713,475
I_kwDODunzps6NTjeD
6,995
ImportError when importing datasets.load_dataset
{ "login": "Leo-Lsc", "id": 124846947, "node_id": "U_kgDOB3EDYw", "avatar_url": "https://avatars.githubusercontent.com/u/124846947?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Leo-Lsc", "html_url": "https://github.com/Leo-Lsc", "followers_url": "https://api.github.com/users/Leo-Lsc/followers", "following_url": "https://api.github.com/users/Leo-Lsc/following{/other_user}", "gists_url": "https://api.github.com/users/Leo-Lsc/gists{/gist_id}", "starred_url": "https://api.github.com/users/Leo-Lsc/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Leo-Lsc/subscriptions", "organizations_url": "https://api.github.com/users/Leo-Lsc/orgs", "repos_url": "https://api.github.com/users/Leo-Lsc/repos", "events_url": "https://api.github.com/users/Leo-Lsc/events{/privacy}", "received_events_url": "https://api.github.com/users/Leo-Lsc/received_events", "type": "User", "site_admin": false }
[]
closed
false
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[ { "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false } ]
null
9
2024-06-24T17:07:22
2024-07-16T17:51:06
2024-06-25T06:11:37
NONE
null
null
null
### Describe the bug I encountered an ImportError while trying to import `load_dataset` from the `datasets` module in Hugging Face. The error message indicates a problem with importing 'CommitInfo' from 'huggingface_hub'. ### Steps to reproduce the bug 1. pip install git+https://github.com/huggingface/datasets 2. from datasets import load_dataset ### Expected behavior ImportError Traceback (most recent call last) Cell In[7], line 1 ----> 1 from datasets import load_dataset 3 train_set = load_dataset("mispeech/speechocean762", split="train") 4 test_set = load_dataset("mispeech/speechocean762", split="test") File d:\Anaconda3\envs\CS224S\Lib\site-packages\datasets\__init__.py:17 1 # Copyright 2020 The HuggingFace Datasets Authors and the TensorFlow Datasets Authors. 2 # 3 # Licensed under the Apache License, Version 2.0 (the "License"); (...) 12 # See the License for the specific language governing permissions and 13 # limitations under the License. 15 __version__ = "2.20.1.dev0" ---> 17 from .arrow_dataset import Dataset 18 from .arrow_reader import ReadInstruction 19 from .builder import ArrowBasedBuilder, BeamBasedBuilder, BuilderConfig, DatasetBuilder, GeneratorBasedBuilder File d:\Anaconda3\envs\CS224S\Lib\site-packages\datasets\arrow_dataset.py:63 61 import pyarrow.compute as pc 62 from fsspec.core import url_to_fs ---> 63 from huggingface_hub import ( 64 CommitInfo, 65 CommitOperationAdd, ... 70 ) 71 from huggingface_hub.hf_api import RepoFile 72 from multiprocess import Pool ImportError: cannot import name 'CommitInfo' from 'huggingface_hub' (d:\Anaconda3\envs\CS224S\Lib\site-packages\huggingface_hub\__init__.py)
### Environment info Leo@DESKTOP-9NHUAMI MSYS /d/Anaconda3/envs/CS224S/Lib/site-packages/huggingface_hub $ datasets-cli env Traceback (most recent call last): File "<frozen runpy>", line 198, in _run_module_as_main File "<frozen runpy>", line 88, in _run_code File "D:\Anaconda3\envs\CS224S\Scripts\datasets-cli.exe\__main__.py", line 4, in <module> File "D:\Anaconda3\envs\CS224S\Lib\site-packages\datasets\__init__.py", line 17, in <module> from .arrow_dataset import Dataset File "D:\Anaconda3\envs\CS224S\Lib\site-packages\datasets\arrow_dataset.py", line 63, in <module> from huggingface_hub import ( ImportError: cannot import name 'CommitInfo' from 'huggingface_hub' (D:\Anaconda3\envs\CS224S\Lib\site-packages\huggingface_hub\__init__.py) (CS224S)
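In similar reports the usual cause is an installed `huggingface_hub` that predates `CommitInfo`; a quick hedged diagnostic (if the import fails, upgrading with `pip install -U huggingface_hub` is the likely fix):

```python
# Check whether the installed hub version exposes CommitInfo; an
# ImportError here means the package is too old for this datasets version.
import huggingface_hub
print(huggingface_hub.__version__)
from huggingface_hub import CommitInfo  # noqa: F401
print("CommitInfo is available")
```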
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/6995/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/6995/timeline
null
completed
false
https://api.github.com/repos/huggingface/datasets/issues/6992
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6992/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6992/comments
https://api.github.com/repos/huggingface/datasets/issues/6992/events
https://github.com/huggingface/datasets/issues/6992
2,367,890,622
I_kwDODunzps6NIyS-
6,992
Dataset with streaming doesn't work with proxy
{ "login": "YHL04", "id": 57779173, "node_id": "MDQ6VXNlcjU3Nzc5MTcz", "avatar_url": "https://avatars.githubusercontent.com/u/57779173?v=4", "gravatar_id": "", "url": "https://api.github.com/users/YHL04", "html_url": "https://github.com/YHL04", "followers_url": "https://api.github.com/users/YHL04/followers", "following_url": "https://api.github.com/users/YHL04/following{/other_user}", "gists_url": "https://api.github.com/users/YHL04/gists{/gist_id}", "starred_url": "https://api.github.com/users/YHL04/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/YHL04/subscriptions", "organizations_url": "https://api.github.com/users/YHL04/orgs", "repos_url": "https://api.github.com/users/YHL04/repos", "events_url": "https://api.github.com/users/YHL04/events{/privacy}", "received_events_url": "https://api.github.com/users/YHL04/received_events", "type": "User", "site_admin": false }
[]
open
false
null
[]
null
1
2024-06-22T16:12:08
2024-06-25T15:43:05
null
NONE
null
null
null
### Describe the bug I'm currently trying to stream the data since the dataset is too big, but it hangs indefinitely without loading the first batch. I use AIMOS, a supercomputer that connects to the internet through a proxy. I assume the hang has to do with the network configuration. I've already set both HTTP_PROXY and HTTPS_PROXY. `streaming=False` works fine. ### Steps to reproduce the bug Use `load_dataset` with `streaming=True` on AIMOS. ### Expected behavior It does not hang indefinitely and loads batches so the training run can start. ### Environment info _libgcc_mutex 0.1 conda_forge conda-forge _openmp_mutex 4.5 2_gnu conda-forge _pytorch_select 2.0 cuda_2 https://ftp.osuosl.org/pub/open-ce/1.10.0 abseil-cpp 20220623.0 h9888cd1_6 conda-forge absl-py 1.0.0 py311h399429b_0 https://ftp.osuosl.org/pub/open-ce/1.10.0 aiofiles 23.2.1 pyhd8ed1ab_0 conda-forge aiohttp 3.8.6 py311hf118e41_0 aiosignal 1.2.0 pyhd3eb1b0_0 archspec 0.2.3 pyhd8ed1ab_0 conda-forge arrow-cpp 11.0.0 ha3edaa6_5_cpu conda-forge async-timeout 4.0.2 py311h6ffa863_0 attrs 23.1.0 py311h6ffa863_0 av 10.0.0 py311he6153ed_2 https://ftp.osuosl.org/pub/open-ce/1.10.0 aws-c-auth 0.6.24 hb81f6d7_5 conda-forge aws-c-cal 0.5.20 h3c2b4d9_6 conda-forge aws-c-common 0.8.11 h4194056_0 conda-forge aws-c-compression 0.2.16 ha19333d_3 conda-forge aws-c-event-stream 0.2.18 h12a9399_6 conda-forge aws-c-http 0.7.4 ha2cde00_2 conda-forge aws-c-io 0.13.17 h9189062_2 conda-forge aws-c-mqtt 0.8.6 h40d1a04_6 conda-forge aws-c-s3 0.2.4 hbdbe4f0_3 conda-forge aws-c-sdkutils 0.1.7 ha19333d_3 conda-forge aws-checksums 0.1.14 ha19333d_3 conda-forge aws-crt-cpp 0.19.7 hd018011_7 conda-forge aws-sdk-cpp 1.10.57 hb9575ba_4 conda-forge blas 1.0 openblas blinker 1.8.2 pyhd8ed1ab_0 conda-forge boltons 23.0.0 py311h6ffa863_0 boost-cpp 1.82.0 h25e6d66_2 bottleneck 1.3.5 py311h34f6284_0 brotli 1.0.9 hf118e41_7 brotli-bin 1.0.9 hf118e41_7 brotli-python 1.0.9 py311h4a02239_7 bzip2 1.0.8 h7b6447c_0 c-ares 1.19.1 hf118e41_0 ca-certificates 2024.6.2 h0f6029e_0 conda-forge cachetools 5.3.3 pyhd8ed1ab_0 conda-forge certifi 2024.6.2 pyhd8ed1ab_0 conda-forge cffi 1.15.1 py311hf118e41_3 charset-normalizer 2.0.4 pyhd3eb1b0_0 click 8.1.7 unix_pyh707e725_0 conda-forge conda 24.5.0 py311h1af927a_0 conda-forge conda-content-trust 0.2.0 py311h6ffa863_0 conda-libmamba-solver 23.11.1 py311h6ffa863_0 conda-package-handling 2.2.0 py311h6ffa863_0 conda-package-streaming 0.9.0 py311h6ffa863_0 contourpy 1.0.5 py311h25e6d66_0 cryptography 41.0.3 py311hb0e80e7_0 cudatoolkit 11.8.0 hedcfb66_13 conda-forge cudnn 8.9.2_11.8 h9ceb136_1 https://ftp.osuosl.org/pub/open-ce/1.10.0 cycler 0.11.0 pyhd3eb1b0_0 datasets 2.12.0 py311h6ffa863_0 dill 0.3.6 py311h6ffa863_0 distro 1.9.0 pyhd8ed1ab_0 conda-forge ffmpeg 4.2.2 opence_0 https://ftp.osuosl.org/pub/open-ce/1.10.0 filelock 3.9.0 py311h6ffa863_0 fmt 9.1.0 h25e6d66_0 fonttools 4.25.0 pyhd3eb1b0_0 freetype 2.12.1 hd23a775_0 frozendict 2.4.4 py311hb02d432_0 conda-forge frozenlist 1.4.0 py311hf118e41_0 fsspec 2023.9.2 py311h6ffa863_0 gflags 2.2.2 he6710b0_0 giflib 5.2.1 hf118e41_3 glog 0.6.0 hbe088e0_0 conda-forge gmp 6.3.0 h46f38da_0 conda-forge gmpy2 2.1.5 py311h2758da7_1 conda-forge google-auth 2.30.0 pyhff2d567_0 conda-forge google-auth-oauthlib 0.5.3 pyhd8ed1ab_0 conda-forge grpc-cpp 1.51.1 h8ba971d_1 conda-forge grpcio 1.54.3 py311h414e0d3_0 https://ftp.osuosl.org/pub/open-ce/1.10.0 huggingface_hub 0.17.3 py311h6ffa863_0 icu 73.1 h4a02239_0 idna 3.4 py311h6ffa863_0 importlib-metadata 6.0.0 py311h6ffa863_0 jinja2 3.1.4 pyhd8ed1ab_0
conda-forge jpeg 9e hf118e41_1 jsonpatch 1.32 pyhd3eb1b0_0 jsonpointer 2.1 pyhd3eb1b0_0 kiwisolver 1.4.4 py311h4a02239_0 krb5 1.20.1 hc019ccd_1 lame 3.100 hb283c62_1003 conda-forge lcms2 2.12 h2045e0b_0 ld_impl_linux-ppc64le 2.38 hec883e6_1 lerc 3.0 h29c3540_0 leveldb 1.23 h24532b4_1 conda-forge libabseil 20220623.0 cxx17_h9235812_6 conda-forge libarchive 3.6.2 hd8ab008_2 libarrow 11.0.0 h837770b_5_cpu conda-forge libboost 1.82.0 haf51a6a_2 libbrotlicommon 1.0.9 hf118e41_7 libbrotlidec 1.0.9 hf118e41_7 libbrotlienc 1.0.9 hf118e41_7 libcrc32c 1.1.2 h3b9df90_0 conda-forge libcurl 8.4.0 h4d62439_0 libdeflate 1.17 hf118e41_1 libedit 3.1.20221030 hf118e41_0 libev 4.33 h140841e_1 libevent 2.1.10 h19c23f1_4 conda-forge libexpat 2.6.2 h46f38da_0 conda-forge libffi 3.4.4 h4a02239_0 libgcc-ng 13.2.0 h31e42bb_10 conda-forge libgfortran-ng 11.2.0 hb3889a9_1 libgfortran5 11.2.0 h1234567_1 libgomp 13.2.0 h31e42bb_10 conda-forge libgoogle-cloud 2.7.0 h11140b6_1 conda-forge libgrpc 1.51.1 h4d29a31_1 conda-forge libmamba 1.5.3 h7c6fafd_0 libmambapy 1.5.3 py311h828bf7b_0 libnghttp2 1.57.0 h44e5816_0 libnsl 2.0.1 ha17a0cc_0 conda-forge libopenblas 0.3.23 hc5a31fb_2 https://ftp.osuosl.org/pub/open-ce/1.10.0 libopus 1.3.1 h4e0d66e_1 conda-forge libpng 1.6.39 hf118e41_0 libprotobuf 3.21.12 h1776448_0 https://ftp.osuosl.org/pub/open-ce/1.10.0 libsolv 0.7.24 h0f529ac_0 libsqlite 3.45.3 hd4bbf49_0 conda-forge libssh2 1.10.0 h50fa78f_2 libstdcxx-ng 13.2.0 h262982c_10 conda-forge libthrift 0.18.0 h82f1162_0 conda-forge libtiff 4.5.1 h4a02239_0 libutf8proc 2.8.0 hb283c62_0 conda-forge libuuid 2.38.1 h4194056_0 conda-forge libvpx 1.13.1 h46f38da_0 conda-forge libwebp 1.3.2 h0f96ee2_0 libwebp-base 1.3.2 hf118e41_0 libxcrypt 4.4.36 ha17a0cc_1 conda-forge libxml2 2.10.4 h18e3229_1 libzlib 1.2.13 h1f2b957_6 conda-forge llvm-openmp 14.0.6 hc028133_0 https://ftp.osuosl.org/pub/open-ce/1.10.0 lmdb 0.9.31 ha17a0cc_1 conda-forge lz4-c 1.9.4 h4a02239_0 markdown 3.4.4 pyhd8ed1ab_0 conda-forge markupsafe 2.1.5 py311h32d8acf_0 conda-forge matplotlib 3.8.0 py311h6ffa863_0 matplotlib-base 3.8.0 py311h52e1fcc_0 menuinst 2.1.1 py311h1af927a_0 conda-forge mpc 1.3.1 heaf1863_0 conda-forge mpfr 4.2.1 haad2271_1 conda-forge mpmath 1.3.0 pyhd8ed1ab_0 conda-forge multidict 6.0.2 py311hf118e41_0 multiprocess 0.70.14 py311h6ffa863_0 munkres 1.1.4 py_0 mypy_extensions 1.0.0 pyha770c72_0 conda-forge nccl 2.18.3 cuda11.8_1 https://ftp.osuosl.org/pub/open-ce/1.10.0 ncurses 6.4 h4a02239_0 nest-asyncio 1.6.0 pyhd8ed1ab_0 conda-forge networkx 2.8.8 pyhd8ed1ab_0 conda-forge nomkl 3.0 0 https://ftp.osuosl.org/pub/open-ce/1.10.0 numactl 2.0.16 hba61f60_1 https://ftp.osuosl.org/pub/open-ce/1.10.0 numexpr 2.8.7 py311hc46fc55_0 numpy 1.24.3 py311h148a09e_0 numpy-base 1.24.3 py311h06b82f6_0 oauthlib 3.2.2 pyhd8ed1ab_0 conda-forge openjpeg 2.4.0 hfe35807_0 openssl 3.3.1 h1f2b957_0 conda-forge orc 1.8.2 h341c9a4_2 conda-forge packaging 23.1 py311h6ffa863_0 pandas 2.1.1 py311h52e1fcc_0 pcre2 10.42 h280155c_0 pillow 10.0.1 py311he33076b_0 pip 23.3 py311h6ffa863_0 platformdirs 4.2.2 pyhd8ed1ab_0 conda-forge pluggy 1.0.0 py311h6ffa863_1 pooch 1.8.2 pyhd8ed1ab_0 conda-forge protobuf 4.21.12 py311ha7baec7_1 https://ftp.osuosl.org/pub/open-ce/1.10.0 psutil 5.9.8 py311hd26027c_0 conda-forge pyarrow 11.0.0 py311h04a18d5_1 pyasn1 0.6.0 pyhd8ed1ab_0 conda-forge pyasn1-modules 0.4.0 pyhd8ed1ab_0 conda-forge pybind11-abi 4 hd3eb1b0_1 pycosat 0.6.6 py311hf118e41_0 pycparser 2.21 pyhd3eb1b0_0 pyjwt 2.8.0 pyhd8ed1ab_1 conda-forge pyopenssl 23.2.0 py311h6ffa863_0 pyparsing 
3.0.9 py311h6ffa863_0 pyre-extensions 0.0.30 pyhd8ed1ab_0 conda-forge pysocks 1.7.1 py311h6ffa863_0 python 3.11.8 h3332dee_0_cpython conda-forge python-dateutil 2.8.2 pyhd3eb1b0_0 python-tzdata 2023.3 pyhd3eb1b0_0 python-xxhash 2.0.2 py311hf118e41_1 python_abi 3.11 4_cp311 conda-forge pytorch 2.0.1 cuda11.8_py311_1 https://ftp.osuosl.org/pub/open-ce/1.10.0 pytorch-base 2.0.1 cuda11.8_py311_pb4.21.12_4 https://ftp.osuosl.org/pub/open-ce/1.10.0 pytz 2023.3.post1 py311h6ffa863_0 pyu2f 0.1.5 pyhd8ed1ab_0 conda-forge pyyaml 6.0.1 py311hf118e41_0 re2 2023.02.01 h883269e_0 conda-forge readline 8.2 hf118e41_0 regex 2023.10.3 py311hf118e41_0 reproc 14.2.4 h29c3540_1 reproc-cpp 14.2.4 h29c3540_1 requests 2.31.0 py311h6ffa863_0 requests-oauthlib 2.0.0 pyhd8ed1ab_0 conda-forge responses 0.13.3 pyhd3eb1b0_0 rsa 4.9 pyhd8ed1ab_0 conda-forge ruamel.yaml 0.17.21 py311hf118e41_0 s2n 1.3.37 h5e47323_0 conda-forge safetensors 0.4.0 py311hda16d9e_0 scipy 1.11.1 py311hd69e9bb_0 https://ftp.osuosl.org/pub/open-ce/1.10.0 sentencepiece 0.1.97 h1e74c73_py311_pb4.21.12_2 https://ftp.osuosl.org/pub/open-ce/1.10.0 setuptools 68.0.0 py311h6ffa863_0 six 1.16.0 pyhd3eb1b0_1 snappy 1.1.9 h29c3540_0 sqlite 3.41.2 hf118e41_0 sympy 1.12.1 pypyh2585a3b_103 conda-forge tabulate 0.8.10 pyhd8ed1ab_0 conda-forge tensorboard 2.13.0 pyhab0730d_pb4.21.12_1 https://ftp.osuosl.org/pub/open-ce/1.10.0 tensorboard-data-server 0.7.0 pyh6f84499_1 https://ftp.osuosl.org/pub/open-ce/1.10.0 tensorboard-plugin-wit 1.6.0 pyh9f0ad1d_0 conda-forge tk 8.6.13 hd4bbf49_0 conda-forge tokenizers 0.13.3 py311h3d4f45a_0 torchdata 0.6.0 py311_2 https://ftp.osuosl.org/pub/open-ce/1.10.0 torchsnapshot 0.1.0 pyhd8ed1ab_0 conda-forge torchtext-base 0.15.2 cuda11.8_py311_1 https://ftp.osuosl.org/pub/open-ce/1.10.0 torchtnt 0.2.4 pyhd8ed1ab_0 conda-forge torchvision-base 0.15.2 cuda11.8_py311_1 https://ftp.osuosl.org/pub/open-ce/1.10.0 tornado 6.3.3 py311hf118e41_0 tqdm 4.65.0 py311h7837921_0 transformers 4.32.1 py311h6ffa863_0 truststore 0.8.0 py311h6ffa863_0 typing-extensions 4.7.1 py311h6ffa863_0 typing_extensions 4.7.1 py311h6ffa863_0 typing_inspect 0.9.0 pyhd8ed1ab_0 conda-forge tzdata 2023c h04d1e81_0 urllib3 1.26.18 py311h6ffa863_0 utf8proc 2.6.1 h140841e_0 werkzeug 2.3.8 pyhd8ed1ab_0 conda-forge wheel 0.41.2 py311h6ffa863_0 xxhash 0.8.0 h140841e_3 xz 5.4.2 hf118e41_0 yaml 0.2.5 h7b6447c_0 yaml-cpp 0.8.0 h4a02239_0 yarl 1.8.1 py311hf118e41_0 zipp 3.11.0 py311h6ffa863_0 zlib 1.2.13 h1f2b957_6 conda-forge zstandard 0.19.0 py311hf118e41_0 zstd 1.5.5 h57e4825_0
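A hedged workaround worth trying, assuming the hang comes from aiohttp (used by the streaming HTTP filesystem in this datasets version) ignoring proxy environment variables by default, is to forward `trust_env=True` to the underlying client via `storage_options`:

```python
from datasets import load_dataset

# aiohttp only honors HTTP_PROXY/HTTPS_PROXY when trust_env=True;
# storage_options is forwarded to the fsspec HTTP filesystem.
storage_options = {"client_kwargs": {"trust_env": True}}
dataset = load_dataset(
    "some/dataset",  # placeholder repo id
    split="train",
    streaming=True,
    storage_options=storage_options,
)
```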
null
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/6992/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/6992/timeline
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/6990
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6990/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6990/comments
https://api.github.com/repos/huggingface/datasets/issues/6990/events
https://github.com/huggingface/datasets/issues/6990
2,366,660,785
I_kwDODunzps6NEGCx
6,990
Problematic rank after calling `split_dataset_by_node` twice
{ "login": "yzhangcs", "id": 18402347, "node_id": "MDQ6VXNlcjE4NDAyMzQ3", "avatar_url": "https://avatars.githubusercontent.com/u/18402347?v=4", "gravatar_id": "", "url": "https://api.github.com/users/yzhangcs", "html_url": "https://github.com/yzhangcs", "followers_url": "https://api.github.com/users/yzhangcs/followers", "following_url": "https://api.github.com/users/yzhangcs/following{/other_user}", "gists_url": "https://api.github.com/users/yzhangcs/gists{/gist_id}", "starred_url": "https://api.github.com/users/yzhangcs/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/yzhangcs/subscriptions", "organizations_url": "https://api.github.com/users/yzhangcs/orgs", "repos_url": "https://api.github.com/users/yzhangcs/repos", "events_url": "https://api.github.com/users/yzhangcs/events{/privacy}", "received_events_url": "https://api.github.com/users/yzhangcs/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
1
2024-06-21T14:25:26
2024-06-25T16:19:19
2024-06-25T16:19:19
CONTRIBUTOR
null
null
null
### Describe the bug I'm trying to split an `IterableDataset` with `split_dataset_by_node`. But when splitting an already split dataset, the resulting `rank` is greater than `world_size`. ### Steps to reproduce the bug Here is the minimal code for reproduction: ```py >>> from datasets import load_dataset >>> from datasets.distributed import split_dataset_by_node >>> dataset = load_dataset('fla-hub/slimpajama-test', split='train', streaming=True) >>> dataset = split_dataset_by_node(dataset, 1, 32) >>> dataset._distributed DistributedConfig(rank=1, world_size=32) >>> dataset = split_dataset_by_node(dataset, 1, 15) >>> dataset._distributed DistributedConfig(rank=481, world_size=480) ``` As you can see, after the second split rank = 481 > 480 = world_size, which is problematic. ### Expected behavior I think this error comes from this line @lhoestq https://github.com/huggingface/datasets/blob/a6ccf944e42c1a84de81bf326accab9999b86c90/src/datasets/iterable_dataset.py#L2943-L2944 We may need to obtain the existing rank first. Then the above code gives ```py >>> dataset._distributed DistributedConfig(rank=16, world_size=480) ``` ### Environment info datasets==2.20.0
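To make the expected arithmetic explicit, here is my own sketch of the nested-shard composition the reporter expects (a restatement, not the library code):

```python
def compose(rank_outer: int, world_outer: int, rank_inner: int, world_inner: int):
    # Splitting an already split dataset should nest the shards:
    # the combined world size multiplies, and the combined rank interleaves.
    world = world_outer * world_inner
    rank = rank_outer * world_inner + rank_inner
    assert 0 <= rank < world
    return rank, world

print(compose(1, 32, 1, 15))  # -> (16, 480), matching the expected config
```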
{ "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "repos_url": "https://api.github.com/users/lhoestq/repos", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "type": "User", "site_admin": false }
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/6990/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/6990/timeline
null
completed
false
https://api.github.com/repos/huggingface/datasets/issues/6989
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6989/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6989/comments
https://api.github.com/repos/huggingface/datasets/issues/6989/events
https://github.com/huggingface/datasets/issues/6989
2,365,556,449
I_kwDODunzps6M_4bh
6,989
cache in nfs error
{ "login": "simplew2011", "id": 66729924, "node_id": "MDQ6VXNlcjY2NzI5OTI0", "avatar_url": "https://avatars.githubusercontent.com/u/66729924?v=4", "gravatar_id": "", "url": "https://api.github.com/users/simplew2011", "html_url": "https://github.com/simplew2011", "followers_url": "https://api.github.com/users/simplew2011/followers", "following_url": "https://api.github.com/users/simplew2011/following{/other_user}", "gists_url": "https://api.github.com/users/simplew2011/gists{/gist_id}", "starred_url": "https://api.github.com/users/simplew2011/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/simplew2011/subscriptions", "organizations_url": "https://api.github.com/users/simplew2011/orgs", "repos_url": "https://api.github.com/users/simplew2011/repos", "events_url": "https://api.github.com/users/simplew2011/events{/privacy}", "received_events_url": "https://api.github.com/users/simplew2011/received_events", "type": "User", "site_admin": false }
[]
open
false
null
[]
null
0
2024-06-21T02:09:22
2024-06-21T02:12:55
null
NONE
null
null
null
### Describe the bug - When reading a dataset, a cache is generated in the ~/.cache/huggingface/datasets directory - When using .map and .filter operations, a runtime cache is generated in the /tmp/hf_datasets-* directory - The default is to use the path of tempfile.tempdir - If I point this path at an NFS disk, an error is reported, but the program continues to run - https://github.com/huggingface/datasets/blob/main/src/datasets/config.py#L257 ``` Traceback (most recent call last): File "/home/wzp/miniconda3/envs/dask/lib/python3.8/site-packages/multiprocess/process.py", line 315, in _bootstrap self.run() File "/home/wzp/miniconda3/envs/dask/lib/python3.8/site-packages/multiprocess/process.py", line 108, in run self._target(*self._args, **self._kwargs) File "/home/wzp/miniconda3/envs/dask/lib/python3.8/site-packages/multiprocess/managers.py", line 616, in _run_server server.serve_forever() File "/home/wzp/miniconda3/envs/dask/lib/python3.8/site-packages/multiprocess/managers.py", line 182, in serve_forever sys.exit(0) SystemExit: 0 During handling of the above exception, another exception occurred: Traceback (most recent call last): File "/home/wzp/miniconda3/envs/dask/lib/python3.8/site-packages/multiprocess/util.py", line 300, in _run_finalizers finalizer() File "/home/wzp/miniconda3/envs/dask/lib/python3.8/site-packages/multiprocess/util.py", line 224, in __call__ res = self._callback(*self._args, **self._kwargs) File "/home/wzp/miniconda3/envs/dask/lib/python3.8/site-packages/multiprocess/util.py", line 133, in _remove_temp_dir rmtree(tempdir) File "/home/wzp/miniconda3/envs/dask/lib/python3.8/shutil.py", line 718, in rmtree _rmtree_safe_fd(fd, path, onerror) File "/home/wzp/miniconda3/envs/dask/lib/python3.8/shutil.py", line 675, in _rmtree_safe_fd onerror(os.unlink, fullname, sys.exc_info()) File "/home/wzp/miniconda3/envs/dask/lib/python3.8/shutil.py", line 673, in _rmtree_safe_fd os.unlink(entry.name, dir_fd=topfd) OSError: [Errno 16] Device or resource busy: '.nfs000000038330a012000030b4' Traceback (most recent call last): File "/home/wzp/miniconda3/envs/dask/lib/python3.8/site-packages/multiprocess/process.py", line 315, in _bootstrap self.run() File "/home/wzp/miniconda3/envs/dask/lib/python3.8/site-packages/multiprocess/process.py", line 108, in run self._target(*self._args, **self._kwargs) File "/home/wzp/miniconda3/envs/dask/lib/python3.8/site-packages/multiprocess/managers.py", line 616, in _run_server server.serve_forever() File "/home/wzp/miniconda3/envs/dask/lib/python3.8/site-packages/multiprocess/managers.py", line 182, in serve_forever sys.exit(0) SystemExit: 0 During handling of the above exception, another exception occurred: Traceback (most recent call last): File "/home/wzp/miniconda3/envs/dask/lib/python3.8/site-packages/multiprocess/util.py", line 300, in _run_finalizers finalizer() File "/home/wzp/miniconda3/envs/dask/lib/python3.8/site-packages/multiprocess/util.py", line 224, in __call__ res = self._callback(*self._args, **self._kwargs) File "/home/wzp/miniconda3/envs/dask/lib/python3.8/site-packages/multiprocess/util.py", line 133, in _remove_temp_dir rmtree(tempdir) File "/home/wzp/miniconda3/envs/dask/lib/python3.8/shutil.py", line 718, in rmtree _rmtree_safe_fd(fd, path, onerror) File "/home/wzp/miniconda3/envs/dask/lib/python3.8/shutil.py", line 675, in _rmtree_safe_fd onerror(os.unlink, fullname, sys.exc_info()) File "/home/wzp/miniconda3/envs/dask/lib/python3.8/shutil.py", line 673, in _rmtree_safe_fd os.unlink(entry.name, dir_fd=topfd) OSError: [Errno 16] Device or resource busy: '.nfs0000000400064d4a000030e5' ``` ### Steps to reproduce the bug ``` import os import time import tempfile from datasets import load_dataset def add_column(sample): # print(type(sample)) # time.sleep(0.1) sample['__ds__stats__'] = {'data': 123} return sample def filt_column(sample): # print(type(sample)) if len(sample['content']) > 10: return True else: return False if __name__ == '__main__': input_dir = '/mnt/temp/CN/small' # some JSON dataset dataset = load_dataset('json', data_dir=input_dir) temp_dir = '/media/release/release/temp/temp' # an NFS folder os.makedirs(temp_dir, exist_ok=True) # move the huggingface-datasets runtime cache onto NFS (default is /tmp) tempfile.tempdir = temp_dir aa = dataset.map(add_column, num_proc=64) aa = aa.filter(filt_column, num_proc=64) print(aa) ``` ### Expected behavior No error occurs. ### Environment info datasets==2.18.0 ubuntu 20.04
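A hedged workaround, assuming the `.nfs*` placeholder files come from NFS keeping deleted-but-open files alive during the multiprocess temp-dir cleanup: keep the runtime temp dir on a local filesystem and, if the cache needs to live on the shared volume, point only `cache_dir` at NFS (paths below are placeholders):

```python
import tempfile
from datasets import load_dataset

# Keep the runtime temp dir on local disk so cleanup never hits NFS.
tempfile.tempdir = "/tmp"  # local filesystem

dataset = load_dataset(
    "json",
    data_dir="/mnt/temp/CN/small",        # placeholder input path
    cache_dir="/media/release/datasets",  # placeholder NFS cache location
)
```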
null
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/6989/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/6989/timeline
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/6985
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6985/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6985/comments
https://api.github.com/repos/huggingface/datasets/issues/6985/events
https://github.com/huggingface/datasets/issues/6985
2,362,378,276
I_kwDODunzps6Mzwgk
6,985
AttributeError: module 'pyarrow.lib' has no attribute 'ListViewType'
{ "login": "firmai", "id": 26666267, "node_id": "MDQ6VXNlcjI2NjY2MjY3", "avatar_url": "https://avatars.githubusercontent.com/u/26666267?v=4", "gravatar_id": "", "url": "https://api.github.com/users/firmai", "html_url": "https://github.com/firmai", "followers_url": "https://api.github.com/users/firmai/followers", "following_url": "https://api.github.com/users/firmai/following{/other_user}", "gists_url": "https://api.github.com/users/firmai/gists{/gist_id}", "starred_url": "https://api.github.com/users/firmai/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/firmai/subscriptions", "organizations_url": "https://api.github.com/users/firmai/orgs", "repos_url": "https://api.github.com/users/firmai/repos", "events_url": "https://api.github.com/users/firmai/events{/privacy}", "received_events_url": "https://api.github.com/users/firmai/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
10
2024-06-19T13:22:28
2024-08-20T08:23:57
2024-06-25T05:40:51
NONE
null
null
null
### Describe the bug
I have been struggling with this for two days; any help would be appreciated.

Python 3.10

```
from setfit import SetFitModel
from huggingface_hub import login

access_token_read = "cccxxxccc"

# Authenticate with the Hugging Face Hub
login(token=access_token_read)

# Load the models from the Hugging Face Hub
trainer_relv = SetFitModel.from_pretrained("snowdere/trainer_relevance")
trainer_trust = SetFitModel.from_pretrained("snowdere/trainer_trust")
trainer_sent = SetFitModel.from_pretrained("snowdere/trainer_sent")
trainer_topic = SetFitModel.from_pretrained("snowdere/trainer_topic")
```

```
---------------------------------------------------------------------------
AttributeError                            Traceback (most recent call last)
Cell In[6], line 1
----> 1 from setfit import SetFitModel
      2 from huggingface_hub import login
      4 access_token_read = "ccsddsds"

File /opt/conda/lib/python3.10/site-packages/setfit/__init__.py:7
      4 import os
      5 import warnings
----> 7 from .data import get_templated_dataset, sample_dataset
      8 from .model_card import SetFitModelCardData
      9 from .modeling import SetFitHead, SetFitModel

File /opt/conda/lib/python3.10/site-packages/setfit/data.py:5
      3 import pandas as pd
      4 import torch
----> 5 from datasets import Dataset, DatasetDict, load_dataset
      6 from torch.utils.data import Dataset as TorchDataset
      8 from . import logging

File /opt/conda/lib/python3.10/site-packages/datasets/__init__.py:18
      1 # ruff: noqa
      2 # Copyright 2020 The HuggingFace Datasets Authors and the TensorFlow Datasets Authors.
      3 #
   (...)
     13 # See the License for the specific language governing permissions and
     14 # limitations under the License.
     16 __version__ = "2.19.0"
---> 18 from .arrow_dataset import Dataset
     19 from .arrow_reader import ReadInstruction
     20 from .builder import ArrowBasedBuilder, BeamBasedBuilder, BuilderConfig, DatasetBuilder, GeneratorBasedBuilder

File /opt/conda/lib/python3.10/site-packages/datasets/arrow_dataset.py:76
     73 from tqdm.contrib.concurrent import thread_map
     75 from . import config
---> 76 from .arrow_reader import ArrowReader
     77 from .arrow_writer import ArrowWriter, OptimizedTypedSequence
     78 from .data_files import sanitize_patterns

File /opt/conda/lib/python3.10/site-packages/datasets/arrow_reader.py:29
     26 from typing import TYPE_CHECKING, List, Optional, Union
     28 import pyarrow as pa
---> 29 import pyarrow.parquet as pq
     30 from tqdm.contrib.concurrent import thread_map
     32 from .download.download_config import DownloadConfig

File /opt/conda/lib/python3.10/site-packages/pyarrow/parquet/__init__.py:20
      1 # Licensed to the Apache Software Foundation (ASF) under one
      2 # or more contributor license agreements. See the NOTICE file
      3 # distributed with this work for additional information
   (...)
     17
     18 # flake8: noqa
---> 20 from .core import *

File /opt/conda/lib/python3.10/site-packages/pyarrow/parquet/core.py:33
     30 import pyarrow as pa
     32 try:
---> 33     import pyarrow._parquet as _parquet
     34 except ImportError as exc:
     35     raise ImportError(
     36         "The pyarrow installation is not built with support "
     37         f"for the Parquet file format ({str(exc)})"
     38     ) from None

File /opt/conda/lib/python3.10/site-packages/pyarrow/_parquet.pyx:1, in init pyarrow._parquet()

AttributeError: module 'pyarrow.lib' has no attribute 'ListViewType'
```

setfit: 1.0.3
transformers: 4.41.2
lingua-language-detector: 2.0.2
polars: 0.20.31
lightning: None
google-cloud-bigquery: 3.24.0
shapely: 2.0.4
pyarrow: 16.0.0

### Steps to reproduce the bug
I have tried all version combinations of datasets and pyarrow; they have all raised the same error since a few days ago. This happens across multiple scripts I have.

### Expected behavior
Just run normally.

### Environment info
Python 3.10
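A quick diagnostic, under the assumption that the failure comes from a broken or mixed pyarrow installation (the compiled `pyarrow._parquet` extension expecting a `ListViewType` that the installed `pyarrow.lib` does not expose):

```python
# List-view types were added to pyarrow in recent releases; if the check
# below prints False on pyarrow 16.0.0, the install is likely corrupted or
# mixed, and a clean reinstall (pip uninstall pyarrow, then
# pip install --no-cache-dir pyarrow) is worth trying.
import pyarrow

print(pyarrow.__version__)
print(hasattr(pyarrow.lib, "ListViewType"))
```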
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/6985/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/6985/timeline
null
completed
false
https://api.github.com/repos/huggingface/datasets/issues/6984
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6984/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6984/comments
https://api.github.com/repos/huggingface/datasets/issues/6984/events
https://github.com/huggingface/datasets/issues/6984
2,362,143,554
I_kwDODunzps6My3NC
6,984
Convert polars DataFrame back to datasets
{ "login": "ljw20180420", "id": 38550511, "node_id": "MDQ6VXNlcjM4NTUwNTEx", "avatar_url": "https://avatars.githubusercontent.com/u/38550511?v=4", "gravatar_id": "", "url": "https://api.github.com/users/ljw20180420", "html_url": "https://github.com/ljw20180420", "followers_url": "https://api.github.com/users/ljw20180420/followers", "following_url": "https://api.github.com/users/ljw20180420/following{/other_user}", "gists_url": "https://api.github.com/users/ljw20180420/gists{/gist_id}", "starred_url": "https://api.github.com/users/ljw20180420/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ljw20180420/subscriptions", "organizations_url": "https://api.github.com/users/ljw20180420/orgs", "repos_url": "https://api.github.com/users/ljw20180420/repos", "events_url": "https://api.github.com/users/ljw20180420/events{/privacy}", "received_events_url": "https://api.github.com/users/ljw20180420/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892871, "node_id": "MDU6TGFiZWwxOTM1ODkyODcx", "url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement", "name": "enhancement", "color": "a2eeef", "default": true, "description": "New feature or request" } ]
closed
false
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[ { "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false } ]
null
1
2024-06-19T11:38:48
2024-08-12T14:43:46
2024-08-12T14:43:46
NONE
null
null
null
### Feature request
This returns an error:

```python
from datasets import Dataset
dsdf = Dataset.from_dict({"x": [[1, 2], [3, 4, 5]], "y": ["a", "b"]})
Dataset.from_polars(dsdf.to_polars())
```

ValueError: Arrow type large_list<item: int64> does not have a datasets dtype equivalent.

### Motivation
When a dataset contains the Sequence data type, it is converted to the Arrow type large_list. However, the reverse (from large_list to Sequence) does not work.

### Your contribution
No
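Until the round trip is supported natively, one possible workaround is to cast `large_list` columns back to plain `list` on the Arrow table before rebuilding the `Dataset`; a sketch (the cast helper is an assumption, not part of the original request):

```python
import pyarrow as pa
from datasets import Dataset

def from_polars_compat(df):
    # Downcast large_list<T> fields to list<T> so datasets can map them
    # back to Sequence features.
    table = df.to_arrow()
    fields = [
        pa.field(f.name, pa.list_(f.type.value_type))
        if pa.types.is_large_list(f.type) else f
        for f in table.schema
    ]
    return Dataset(table.cast(pa.schema(fields)))

dsdf = Dataset.from_dict({"x": [[1, 2], [3, 4, 5]], "y": ["a", "b"]})
print(from_polars_compat(dsdf.to_polars()))
```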
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/6984/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/6984/timeline
null
completed
false
https://api.github.com/repos/huggingface/datasets/issues/6982
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6982/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6982/comments
https://api.github.com/repos/huggingface/datasets/issues/6982/events
https://github.com/huggingface/datasets/issues/6982
2,361,661,469
I_kwDODunzps6MxBgd
6,982
cannot split dataset when using load_dataset
{ "login": "cybest0608", "id": 17721894, "node_id": "MDQ6VXNlcjE3NzIxODk0", "avatar_url": "https://avatars.githubusercontent.com/u/17721894?v=4", "gravatar_id": "", "url": "https://api.github.com/users/cybest0608", "html_url": "https://github.com/cybest0608", "followers_url": "https://api.github.com/users/cybest0608/followers", "following_url": "https://api.github.com/users/cybest0608/following{/other_user}", "gists_url": "https://api.github.com/users/cybest0608/gists{/gist_id}", "starred_url": "https://api.github.com/users/cybest0608/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/cybest0608/subscriptions", "organizations_url": "https://api.github.com/users/cybest0608/orgs", "repos_url": "https://api.github.com/users/cybest0608/repos", "events_url": "https://api.github.com/users/cybest0608/events{/privacy}", "received_events_url": "https://api.github.com/users/cybest0608/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
3
2024-06-19T08:07:16
2024-07-08T06:20:16
2024-07-08T06:20:16
NONE
null
null
null
### Describe the bug
When I use the load_dataset method to load mozilla-foundation/common_voice_7_0, it successfully downloads and extracts the dataset, but it cannot generate the Arrow document. This bug happens on my server and on my laptop, as in #6906, but it does not happen in Google Colab. I have worked on it for days. Even when I load the dataset from a local path, it can generate the train split and the validation split, but the bug happens again on the test split.

### Steps to reproduce the bug
```python
from datasets import load_dataset, load_metric, Audio

common_voice_train = load_dataset("mozilla-foundation/common_voice_7_0", "ja", split="train", token=selftoken, trust_remote_code=True)
```

### Expected behavior
```
{
  "name": "ValueError",
  "message": "Instruction \"train\" corresponds to no data!",
  "stack": "---------------------------------------------------------------------------
ValueError                                Traceback (most recent call last)
Cell In[2], line 3
      1 from datasets import load_dataset, load_metric, Audio
----> 3 common_voice_train = load_dataset(\"mozilla-foundation/common_voice_7_0\", \"ja\", split=\"train\",token='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')#,trust_remote_code=True)#,streaming=True)
      4 common_voice_test = load_dataset(\"mozilla-foundation/common_voice_7_0\", \"ja\", split=\"test\",token='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')#,trust_remote_code=True)#,streaming=True)

File c:\\Users\\cybes\\.conda\\envs\\ECoG\\lib\\site-packages\\datasets\\load.py:2626, in load_dataset(path, name, data_dir, data_files, split, cache_dir, features, download_config, download_mode, verification_mode, ignore_verifications, keep_in_memory, save_infos, revision, token, use_auth_token, task, streaming, num_proc, storage_options, trust_remote_code, **config_kwargs) 2622 # Build dataset for splits 2623 keep_in_memory = ( 2624 keep_in_memory if keep_in_memory is not None else is_small_dataset(builder_instance.info.dataset_size) 2625 ) -> 2626 ds = builder_instance.as_dataset(split=split, verification_mode=verification_mode, in_memory=keep_in_memory) 2627 # Rename and cast features to match task schema 2628 if task is not None: 2629 # To avoid issuing the same warning twice

File c:\\Users\\cybes\\.conda\\envs\\ECoG\\lib\\site-packages\\datasets\\builder.py:1266, in DatasetBuilder.as_dataset(self, split, run_post_process, verification_mode, ignore_verifications, in_memory) 1263 verification_mode = VerificationMode(verification_mode or VerificationMode.BASIC_CHECKS) 1265 # Create a dataset for each of the given splits -> 1266 datasets = map_nested( 1267 partial( 1268 self._build_single_dataset, 1269 run_post_process=run_post_process, 1270 verification_mode=verification_mode, 1271 in_memory=in_memory, 1272 ), 1273 split, 1274 map_tuple=True, 1275 disable_tqdm=True, 1276 ) 1277 if isinstance(datasets, dict): 1278 datasets = DatasetDict(datasets)

File c:\\Users\\cybes\\.conda\\envs\\ECoG\\lib\\site-packages\\datasets\\utils\\py_utils.py:484, in map_nested(function, data_struct, dict_only, map_list, map_tuple, map_numpy, num_proc, parallel_min_length, batched, batch_size, types, disable_tqdm, desc) 482 if batched: 483 data_struct = [data_struct] --> 484 mapped = function(data_struct) 485 if batched: 486 mapped = mapped[0]

File c:\\Users\\cybes\\.conda\\envs\\ECoG\\lib\\site-packages\\datasets\\builder.py:1296, in DatasetBuilder._build_single_dataset(self, split, run_post_process, verification_mode, in_memory) 1293 split = Split(split) 1295 # Build base dataset -> 1296 ds = self._as_dataset( 1297 split=split, 1298 in_memory=in_memory, 1299 ) 1300 if run_post_process: 1301 for resource_file_name in self._post_processing_resources(split).values():

File c:\\Users\\cybes\\.conda\\envs\\ECoG\\lib\\site-packages\\datasets\\builder.py:1370, in DatasetBuilder._as_dataset(self, split, in_memory) 1368 if self._check_legacy_cache(): 1369 dataset_name = self.name -> 1370 dataset_kwargs = ArrowReader(cache_dir, self.info).read( 1371 name=dataset_name, 1372 instructions=split, 1373 split_infos=self.info.splits.values(), 1374 in_memory=in_memory, 1375 ) 1376 fingerprint = self._get_dataset_fingerprint(split) 1377 return Dataset(fingerprint=fingerprint, **dataset_kwargs)

File c:\\Users\\cybes\\.conda\\envs\\ECoG\\lib\\site-packages\\datasets\\arrow_reader.py:256, in BaseReader.read(self, name, instructions, split_infos, in_memory) 254 msg = f'Instruction \"{instructions}\" corresponds to no data!' 255 #msg = f'Instruction \"{self._path}\",\"{name}\",\"{instructions}\",\"{split_infos}\" corresponds to no data!' --> 256 raise ValueError(msg) 257 return self.read_files(files=files, original_instructions=instructions, in_memory=in_memory)

ValueError: Instruction \"train\" corresponds to no data!"
}
```

### Environment info
python 3.9
Windows 11 Pro
VS Code + Jupyter
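One plausible recovery path, offered as an assumption: this error is raised when the cached Arrow split files are missing, for example after an earlier interrupted preparation, so forcing a fresh preparation may help:

```python
from datasets import DownloadMode, load_dataset

# Ignore any partially written cache and regenerate the Arrow files.
common_voice_train = load_dataset(
    "mozilla-foundation/common_voice_7_0",
    "ja",
    split="train",
    token=True,
    trust_remote_code=True,
    download_mode=DownloadMode.FORCE_REDOWNLOAD,
)
```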
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/6982/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/6982/timeline
null
completed
false
https://api.github.com/repos/huggingface/datasets/issues/6980
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6980/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6980/comments
https://api.github.com/repos/huggingface/datasets/issues/6980/events
https://github.com/huggingface/datasets/issues/6980
2,360,909,930
I_kwDODunzps6MuKBq
6,980
Support NumPy 2.0
{ "login": "NeilGirdhar", "id": 730137, "node_id": "MDQ6VXNlcjczMDEzNw==", "avatar_url": "https://avatars.githubusercontent.com/u/730137?v=4", "gravatar_id": "", "url": "https://api.github.com/users/NeilGirdhar", "html_url": "https://github.com/NeilGirdhar", "followers_url": "https://api.github.com/users/NeilGirdhar/followers", "following_url": "https://api.github.com/users/NeilGirdhar/following{/other_user}", "gists_url": "https://api.github.com/users/NeilGirdhar/gists{/gist_id}", "starred_url": "https://api.github.com/users/NeilGirdhar/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/NeilGirdhar/subscriptions", "organizations_url": "https://api.github.com/users/NeilGirdhar/orgs", "repos_url": "https://api.github.com/users/NeilGirdhar/repos", "events_url": "https://api.github.com/users/NeilGirdhar/events{/privacy}", "received_events_url": "https://api.github.com/users/NeilGirdhar/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892871, "node_id": "MDU6TGFiZWwxOTM1ODkyODcx", "url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement", "name": "enhancement", "color": "a2eeef", "default": true, "description": "New feature or request" } ]
closed
false
null
[]
null
0
2024-06-18T23:30:22
2024-07-12T12:04:54
2024-07-12T12:04:53
CONTRIBUTOR
null
null
null
### Feature request
Support NumPy 2.0.

### Motivation
NumPy introduces the Array API, which bridges the gap between machine learning libraries. Many clients of HuggingFace are eager to start using the Array API. Besides that, NumPy 2 provides a cleaner interface than NumPy 1.

### Tasks
NumPy 2.0 was released for testing so that libraries could ensure compatibility [since mid-March](https://github.com/numpy/numpy/issues/24300#issuecomment-1986815755). What needs to be done for HuggingFace to support NumPy 2?

- [x] Fix use of `array`: https://github.com/huggingface/datasets/pull/6976
- [ ] Remove the [NumPy version limit](https://github.com/huggingface/datasets/pull/6975): https://github.com/huggingface/datasets/pull/6991
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/6980/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/6980/timeline
null
completed
false
https://api.github.com/repos/huggingface/datasets/issues/6979
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6979/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6979/comments
https://api.github.com/repos/huggingface/datasets/issues/6979/events
https://github.com/huggingface/datasets/issues/6979
2,360,175,363
I_kwDODunzps6MrWsD
6,979
How can I load partial parquet files only?
{ "login": "lucasjinreal", "id": 21303438, "node_id": "MDQ6VXNlcjIxMzAzNDM4", "avatar_url": "https://avatars.githubusercontent.com/u/21303438?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lucasjinreal", "html_url": "https://github.com/lucasjinreal", "followers_url": "https://api.github.com/users/lucasjinreal/followers", "following_url": "https://api.github.com/users/lucasjinreal/following{/other_user}", "gists_url": "https://api.github.com/users/lucasjinreal/gists{/gist_id}", "starred_url": "https://api.github.com/users/lucasjinreal/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lucasjinreal/subscriptions", "organizations_url": "https://api.github.com/users/lucasjinreal/orgs", "repos_url": "https://api.github.com/users/lucasjinreal/repos", "events_url": "https://api.github.com/users/lucasjinreal/events{/privacy}", "received_events_url": "https://api.github.com/users/lucasjinreal/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
12
2024-06-18T15:44:16
2024-06-21T17:09:32
2024-06-21T13:32:50
NONE
null
null
null
I have a HUGE dataset, about 14 TB, and I am unable to download all the Parquet files. I just want to take about 100 of them.

```python
dataset = load_dataset("xx/", data_files="data/train-001*-of-00314.parquet")
```

How can I load only shards 000 to 100 out of all 00314, i.e. just part of the dataset? I searched the whole net and didn't find a solution. **This is stupid if they didn't support it, and I swear I won't use stupid parquet any more.**
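For reference, `data_files` also accepts an explicit list, so the first 100 of the 314 shards can be selected without downloading the rest; a sketch assuming the usual `train-XXXXX-of-00314.parquet` naming (the `xx/` repo id is the placeholder from the question):

```python
from datasets import load_dataset

# Build the shard file names for indices 00000 through 00099.
shards = [f"data/train-{i:05d}-of-00314.parquet" for i in range(100)]
dataset = load_dataset("xx/", data_files=shards, split="train")
```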
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/6979/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/6979/timeline
null
completed
false
https://api.github.com/repos/huggingface/datasets/issues/6977
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6977/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6977/comments
https://api.github.com/repos/huggingface/datasets/issues/6977/events
https://github.com/huggingface/datasets/issues/6977
2,359,295,045
I_kwDODunzps6Mn_xF
6,977
load json file error with v2.20.0
{ "login": "xiaoyaolangzhi", "id": 15037766, "node_id": "MDQ6VXNlcjE1MDM3NzY2", "avatar_url": "https://avatars.githubusercontent.com/u/15037766?v=4", "gravatar_id": "", "url": "https://api.github.com/users/xiaoyaolangzhi", "html_url": "https://github.com/xiaoyaolangzhi", "followers_url": "https://api.github.com/users/xiaoyaolangzhi/followers", "following_url": "https://api.github.com/users/xiaoyaolangzhi/following{/other_user}", "gists_url": "https://api.github.com/users/xiaoyaolangzhi/gists{/gist_id}", "starred_url": "https://api.github.com/users/xiaoyaolangzhi/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/xiaoyaolangzhi/subscriptions", "organizations_url": "https://api.github.com/users/xiaoyaolangzhi/orgs", "repos_url": "https://api.github.com/users/xiaoyaolangzhi/repos", "events_url": "https://api.github.com/users/xiaoyaolangzhi/events{/privacy}", "received_events_url": "https://api.github.com/users/xiaoyaolangzhi/received_events", "type": "User", "site_admin": false }
[]
closed
false
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[ { "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false } ]
null
2
2024-06-18T08:41:01
2024-06-18T10:06:10
2024-06-18T10:06:09
NONE
null
null
null
### Describe the bug
```
load_dataset(path="json", data_files="./test.json")
```

```
Generating train split: 0 examples [00:00, ? examples/s]
Traceback (most recent call last):
  File "/usr/local/lib/python3.10/dist-packages/datasets/packaged_modules/json/json.py", line 132, in _generate_tables
    pa_table = paj.read_json(
  File "pyarrow/_json.pyx", line 308, in pyarrow._json.read_json
  File "pyarrow/error.pxi", line 154, in pyarrow.lib.pyarrow_internal_check_status
  File "pyarrow/error.pxi", line 91, in pyarrow.lib.check_status
pyarrow.lib.ArrowInvalid: JSON parse error: Column() changed from object to array in row 0

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/usr/local/lib/python3.10/dist-packages/datasets/builder.py", line 1997, in _prepare_split_single
    for _, table in generator:
  File "/usr/local/lib/python3.10/dist-packages/datasets/packaged_modules/json/json.py", line 155, in _generate_tables
    df = pd.read_json(f, dtype_backend="pyarrow")
  File "/usr/local/lib/python3.10/dist-packages/pandas/util/_decorators.py", line 211, in wrapper
    return func(*args, **kwargs)
  File "/usr/local/lib/python3.10/dist-packages/pandas/util/_decorators.py", line 331, in wrapper
    return func(*args, **kwargs)
TypeError: read_json() got an unexpected keyword argument 'dtype_backend'

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
  File "/app/t1.py", line 11, in <module>
    load_dataset(path=data_path, data_files="./t2.json")
  File "/usr/local/lib/python3.10/dist-packages/datasets/load.py", line 2616, in load_dataset
    builder_instance.download_and_prepare(
  File "/usr/local/lib/python3.10/dist-packages/datasets/builder.py", line 1029, in download_and_prepare
    self._download_and_prepare(
  File "/usr/local/lib/python3.10/dist-packages/datasets/builder.py", line 1124, in _download_and_prepare
    self._prepare_split(split_generator, **prepare_split_kwargs)
  File "/usr/local/lib/python3.10/dist-packages/datasets/builder.py", line 1884, in _prepare_split
    for job_id, done, content in self._prepare_split_single(
  File "/usr/local/lib/python3.10/dist-packages/datasets/builder.py", line 2040, in _prepare_split_single
    raise DatasetGenerationError("An error occurred while generating the dataset") from e
datasets.exceptions.DatasetGenerationError: An error occurred while generating the dataset
```

```
import pandas as pd

with open("./test.json", "r") as f:
    df = pd.read_json(f, dtype_backend="pyarrow")
```

```
Traceback (most recent call last):
  File "/app/t3.py", line 3, in <module>
    df = pd.read_json(f, dtype_backend="pyarrow")
  File "/usr/local/lib/python3.10/dist-packages/pandas/util/_decorators.py", line 211, in wrapper
    return func(*args, **kwargs)
  File "/usr/local/lib/python3.10/dist-packages/pandas/util/_decorators.py", line 331, in wrapper
    return func(*args, **kwargs)
TypeError: read_json() got an unexpected keyword argument 'dtype_backend'
```

### Steps to reproduce the bug
.

### Expected behavior
.

### Environment info
```
datasets 2.20.0
pandas 1.5.3
```
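A note on the likely cause: `pandas.read_json` only gained the `dtype_backend` argument in pandas 2.0, so datasets 2.20.0 combined with pandas 1.5.3 hits this fallback path and fails. Upgrading pandas should make the fallback work; a minimal version check sketch:

```python
# pip install "pandas>=2.0"
import pandas as pd

major = int(pd.__version__.split(".")[0])
assert major >= 2, "datasets' JSON fallback needs pandas 2.x for dtype_backend"
```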
{ "login": "xiaoyaolangzhi", "id": 15037766, "node_id": "MDQ6VXNlcjE1MDM3NzY2", "avatar_url": "https://avatars.githubusercontent.com/u/15037766?v=4", "gravatar_id": "", "url": "https://api.github.com/users/xiaoyaolangzhi", "html_url": "https://github.com/xiaoyaolangzhi", "followers_url": "https://api.github.com/users/xiaoyaolangzhi/followers", "following_url": "https://api.github.com/users/xiaoyaolangzhi/following{/other_user}", "gists_url": "https://api.github.com/users/xiaoyaolangzhi/gists{/gist_id}", "starred_url": "https://api.github.com/users/xiaoyaolangzhi/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/xiaoyaolangzhi/subscriptions", "organizations_url": "https://api.github.com/users/xiaoyaolangzhi/orgs", "repos_url": "https://api.github.com/users/xiaoyaolangzhi/repos", "events_url": "https://api.github.com/users/xiaoyaolangzhi/events{/privacy}", "received_events_url": "https://api.github.com/users/xiaoyaolangzhi/received_events", "type": "User", "site_admin": false }
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/6977/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/6977/timeline
null
completed
false
https://api.github.com/repos/huggingface/datasets/issues/6973
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6973/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6973/comments
https://api.github.com/repos/huggingface/datasets/issues/6973/events
https://github.com/huggingface/datasets/issues/6973
2,355,517,362
I_kwDODunzps6MZley
6,973
IndexError during training with Squad dataset and T5-small model
{ "login": "ramtunguturi36", "id": 151521233, "node_id": "U_kgDOCQgH0Q", "avatar_url": "https://avatars.githubusercontent.com/u/151521233?v=4", "gravatar_id": "", "url": "https://api.github.com/users/ramtunguturi36", "html_url": "https://github.com/ramtunguturi36", "followers_url": "https://api.github.com/users/ramtunguturi36/followers", "following_url": "https://api.github.com/users/ramtunguturi36/following{/other_user}", "gists_url": "https://api.github.com/users/ramtunguturi36/gists{/gist_id}", "starred_url": "https://api.github.com/users/ramtunguturi36/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ramtunguturi36/subscriptions", "organizations_url": "https://api.github.com/users/ramtunguturi36/orgs", "repos_url": "https://api.github.com/users/ramtunguturi36/repos", "events_url": "https://api.github.com/users/ramtunguturi36/events{/privacy}", "received_events_url": "https://api.github.com/users/ramtunguturi36/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
2
2024-06-16T07:53:54
2024-07-01T11:25:40
2024-07-01T11:25:40
NONE
null
null
null
### Describe the bug
I am encountering an IndexError while training a T5-small model on the SQuAD dataset using the transformers and datasets libraries. The error occurs even with a minimal reproducible example, suggesting a potential bug or incompatibility.

### Steps to reproduce the bug
1. Install the required libraries: `!pip install transformers datasets`
2. Run the following code:

```python
!pip install transformers datasets
import torch
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM, TrainingArguments, Trainer, DataCollatorWithPadding

# Load a small, publicly available dataset
from datasets import load_dataset
dataset = load_dataset("squad", split="train[:100]")  # Use a small subset for testing

# Load a pre-trained model and tokenizer
model_name = "t5-small"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSeq2SeqLM.from_pretrained(model_name)

# Define a basic data collator
data_collator = DataCollatorWithPadding(tokenizer=tokenizer)

# Define training arguments
training_args = TrainingArguments(
    output_dir="./results",
    per_device_train_batch_size=2,
    num_train_epochs=1,
)

# Create a trainer
trainer = Trainer(
    model=model,
    args=training_args,
    train_dataset=dataset,
    data_collator=data_collator,
)

# Train the model
trainer.train()
```

### Expected behavior
```
---------------------------------------------------------------------------
IndexError                                Traceback (most recent call last)
<ipython-input-23-f13a4b23c001> in <cell line: 34>()
     32
     33 # Train the model
---> 34 trainer.train()

10 frames
/usr/local/lib/python3.10/dist-packages/datasets/formatting/formatting.py in _check_valid_index_key(key, size)
    427     if isinstance(key, int):
    428         if (key < 0 and key + size < 0) or (key >= size):
--> 429             raise IndexError(f"Invalid key: {key} is out of bounds for size {size}")
    430         return
    431     elif isinstance(key, slice):

IndexError: Invalid key: 42 is out of bounds for size 0
```

### Environment info
transformers version: 4.41.2
datasets version: 1.18.4
Python version: 3.10.12
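A likely explanation, offered as an assumption: the raw SQuAD columns are never tokenized, and `Trainer`'s default `remove_unused_columns=True` then drops every column the model does not accept, leaving a dataset of length 0, which matches `Invalid key: 42 is out of bounds for size 0`. A minimal preprocessing sketch, reusing `tokenizer` and `dataset` from the snippet above (column names follow the SQuAD schema):

```python
def preprocess(example):
    # Tokenize question/context as the encoder input and the first answer
    # text as the target sequence.
    model_inputs = tokenizer(
        example["question"], example["context"], truncation=True, max_length=384
    )
    labels = tokenizer(
        text_target=example["answers"]["text"][0], truncation=True, max_length=32
    )
    model_inputs["labels"] = labels["input_ids"]
    return model_inputs

tokenized = dataset.map(preprocess, remove_columns=dataset.column_names)
# Pass `tokenized` (not the raw dataset) as train_dataset to the Trainer.
```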
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/6973/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/6973/timeline
null
completed
false
https://api.github.com/repos/huggingface/datasets/issues/6967
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6967/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6967/comments
https://api.github.com/repos/huggingface/datasets/issues/6967/events
https://github.com/huggingface/datasets/issues/6967
2,349,146,398
I_kwDODunzps6MBSEe
6,967
Method to load Laion400m
{ "login": "humanely", "id": 6862868, "node_id": "MDQ6VXNlcjY4NjI4Njg=", "avatar_url": "https://avatars.githubusercontent.com/u/6862868?v=4", "gravatar_id": "", "url": "https://api.github.com/users/humanely", "html_url": "https://github.com/humanely", "followers_url": "https://api.github.com/users/humanely/followers", "following_url": "https://api.github.com/users/humanely/following{/other_user}", "gists_url": "https://api.github.com/users/humanely/gists{/gist_id}", "starred_url": "https://api.github.com/users/humanely/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/humanely/subscriptions", "organizations_url": "https://api.github.com/users/humanely/orgs", "repos_url": "https://api.github.com/users/humanely/repos", "events_url": "https://api.github.com/users/humanely/events{/privacy}", "received_events_url": "https://api.github.com/users/humanely/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892871, "node_id": "MDU6TGFiZWwxOTM1ODkyODcx", "url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement", "name": "enhancement", "color": "a2eeef", "default": true, "description": "New feature or request" } ]
open
false
null
[]
null
0
2024-06-12T16:04:04
2024-06-12T16:04:04
null
NONE
null
null
null
### Feature request
Large datasets like Laion400m are provided as embeddings. The methods provided by load_dataset are not straightforward for loading embedding files, e.g. img_emb_XX.npy with XX = 0 to 99.

### Motivation
Trial and experimentation are a key pivot of HF. It would be great if HF could load embedding files seamlessly.

### Your contribution
I can write the loader with some help.
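Until a dedicated loader exists, the embedding shards can be wrapped with `Dataset.from_generator`; a sketch assuming each `img_emb_XX.npy` file is a 2-D array with one embedding per row (file names follow the pattern in the request and are otherwise hypothetical):

```python
import numpy as np
from datasets import Dataset

def embedding_rows():
    for shard_id in range(100):
        # Memory-map each shard so the whole file is never loaded at once.
        shard = np.load(f"img_emb_{shard_id}.npy", mmap_mode="r")
        for row in shard:
            yield {"img_emb": np.asarray(row, dtype=np.float32)}

ds = Dataset.from_generator(embedding_rows)
```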
null
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/6967/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/6967/timeline
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/6961
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6961/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6961/comments
https://api.github.com/repos/huggingface/datasets/issues/6961/events
https://github.com/huggingface/datasets/issues/6961
2,342,022,418
I_kwDODunzps6LmG0S
6,961
Manual downloads should count as downloads
{ "login": "umarbutler", "id": 8473183, "node_id": "MDQ6VXNlcjg0NzMxODM=", "avatar_url": "https://avatars.githubusercontent.com/u/8473183?v=4", "gravatar_id": "", "url": "https://api.github.com/users/umarbutler", "html_url": "https://github.com/umarbutler", "followers_url": "https://api.github.com/users/umarbutler/followers", "following_url": "https://api.github.com/users/umarbutler/following{/other_user}", "gists_url": "https://api.github.com/users/umarbutler/gists{/gist_id}", "starred_url": "https://api.github.com/users/umarbutler/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/umarbutler/subscriptions", "organizations_url": "https://api.github.com/users/umarbutler/orgs", "repos_url": "https://api.github.com/users/umarbutler/repos", "events_url": "https://api.github.com/users/umarbutler/events{/privacy}", "received_events_url": "https://api.github.com/users/umarbutler/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892871, "node_id": "MDU6TGFiZWwxOTM1ODkyODcx", "url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement", "name": "enhancement", "color": "a2eeef", "default": true, "description": "New feature or request" } ]
open
false
null
[]
null
1
2024-06-09T04:52:06
2024-06-13T16:05:00
null
NONE
null
null
null
### Feature request I would like to request that manual downloads of data files from Hugging Face dataset repositories count as downloads of a dataset. According to the documentation for the Hugging Face Hub, that is currently not the case: https://huggingface.co/docs/hub/en/datasets-download-stats ### Motivation This would ensure that downloads are accurately reported to end users. ### Your contribution N/A
null
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/6961/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/6961/timeline
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/6958
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6958/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6958/comments
https://api.github.com/repos/huggingface/datasets/issues/6958/events
https://github.com/huggingface/datasets/issues/6958
2,337,476,383
I_kwDODunzps6LUw8f
6,958
My Private Dataset doesn't exist on the Hub or cannot be accessed
{ "login": "wangguan1995", "id": 39621324, "node_id": "MDQ6VXNlcjM5NjIxMzI0", "avatar_url": "https://avatars.githubusercontent.com/u/39621324?v=4", "gravatar_id": "", "url": "https://api.github.com/users/wangguan1995", "html_url": "https://github.com/wangguan1995", "followers_url": "https://api.github.com/users/wangguan1995/followers", "following_url": "https://api.github.com/users/wangguan1995/following{/other_user}", "gists_url": "https://api.github.com/users/wangguan1995/gists{/gist_id}", "starred_url": "https://api.github.com/users/wangguan1995/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/wangguan1995/subscriptions", "organizations_url": "https://api.github.com/users/wangguan1995/orgs", "repos_url": "https://api.github.com/users/wangguan1995/repos", "events_url": "https://api.github.com/users/wangguan1995/events{/privacy}", "received_events_url": "https://api.github.com/users/wangguan1995/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
8
2024-06-06T06:52:19
2024-07-01T11:27:46
2024-07-01T11:27:46
NONE
null
null
null
### Describe the bug
```
File "/root/miniconda3/envs/gino_conda/lib/python3.9/site-packages/datasets/load.py", line 1852, in dataset_module_factory
    raise DatasetNotFoundError(msg + f" at revision '{revision}'" if revision else msg)
datasets.exceptions.DatasetNotFoundError: Dataset 'xxx' doesn't exist on the Hub or cannot be accessed

>>> dataset = load_dataset("xxxx", token=True)
404 error
404 Client Error. (Request ID: Root=xxxx)
Repository Not Found for url: https://huggingface.co/api/datasets/xxx/xxx.
Please make sure you specified the correct `repo_id` and `repo_type`.
If you are trying to access a private or gated repo, make sure you are authenticated.
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "/root/miniconda3/envs/gino_conda/lib/python3.9/site-packages/datasets/load.py", line 2593, in load_dataset
    builder_instance = load_dataset_builder(
  File "/root/miniconda3/envs/gino_conda/lib/python3.9/site-packages/datasets/load.py", line 2265, in load_dataset_builder
    dataset_module = dataset_module_factory(
  File "/root/miniconda3/envs/gino_conda/lib/python3.9/site-packages/datasets/load.py", line 1910, in dataset_module_factory
    raise e1 from None
  File "/root/miniconda3/envs/gino_conda/lib/python3.9/site-packages/datasets/load.py", line 1852, in dataset_module_factory
    raise DatasetNotFoundError(msg + f" at revision '{revision}'" if revision else msg)
datasets.exceptions.DatasetNotFoundError: Dataset 'xxx' doesn't exist on the Hub or cannot be accessed
```

### Steps to reproduce the bug
123

### Expected behavior
123

### Environment info
123
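For what it's worth, `token=True` only reads a token that was previously stored on the machine, so for a private dataset the usual first step is to authenticate explicitly; a sketch with placeholder repo id and token:

```python
from huggingface_hub import login
from datasets import load_dataset

login(token="hf_xxx")  # or run `huggingface-cli login` once in a shell
dataset = load_dataset("my-org/my-private-dataset", token=True)
```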
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/6958/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/6958/timeline
null
completed
false
https://api.github.com/repos/huggingface/datasets/issues/6953
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6953/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6953/comments
https://api.github.com/repos/huggingface/datasets/issues/6953/events
https://github.com/huggingface/datasets/issues/6953
2,333,366,120
I_kwDODunzps6LFFdo
6,953
Remove canonical datasets from docs
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892861, "node_id": "MDU6TGFiZWwxOTM1ODkyODYx", "url": "https://api.github.com/repos/huggingface/datasets/labels/documentation", "name": "documentation", "color": "0075ca", "default": true, "description": "Improvements or additions to documentation" } ]
closed
false
null
[]
null
1
2024-06-04T12:09:03
2024-07-01T11:31:25
2024-07-01T11:31:25
MEMBER
null
null
null
Remove canonical datasets from docs, now that we no longer have canonical datasets.
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/6953/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/6953/timeline
null
completed
false
https://api.github.com/repos/huggingface/datasets/issues/6951
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6951/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6951/comments
https://api.github.com/repos/huggingface/datasets/issues/6951/events
https://github.com/huggingface/datasets/issues/6951
2,333,231,042
I_kwDODunzps6LEkfC
6,951
load_dataset() should load all subsets, if no specific subset is specified
{ "login": "windmaple", "id": 5577741, "node_id": "MDQ6VXNlcjU1Nzc3NDE=", "avatar_url": "https://avatars.githubusercontent.com/u/5577741?v=4", "gravatar_id": "", "url": "https://api.github.com/users/windmaple", "html_url": "https://github.com/windmaple", "followers_url": "https://api.github.com/users/windmaple/followers", "following_url": "https://api.github.com/users/windmaple/following{/other_user}", "gists_url": "https://api.github.com/users/windmaple/gists{/gist_id}", "starred_url": "https://api.github.com/users/windmaple/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/windmaple/subscriptions", "organizations_url": "https://api.github.com/users/windmaple/orgs", "repos_url": "https://api.github.com/users/windmaple/repos", "events_url": "https://api.github.com/users/windmaple/events{/privacy}", "received_events_url": "https://api.github.com/users/windmaple/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892871, "node_id": "MDU6TGFiZWwxOTM1ODkyODcx", "url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement", "name": "enhancement", "color": "a2eeef", "default": true, "description": "New feature or request" } ]
closed
false
null
[]
null
4
2024-06-04T11:02:33
2024-07-01T11:33:10
2024-07-01T11:33:10
NONE
null
null
null
### Feature request
Currently load_dataset() forces users to specify a subset. Example:

```python
from datasets import load_dataset
dataset = load_dataset("m-a-p/COIG-CQIA")
```

```
---------------------------------------------------------------------------
ValueError                                Traceback (most recent call last)
<ipython-input-10-c0cb49385da6> in <cell line: 2>()
      1 from datasets import load_dataset
----> 2 dataset = load_dataset("m-a-p/COIG-CQIA")

3 frames
/usr/local/lib/python3.10/dist-packages/datasets/builder.py in _create_builder_config(self, config_name, custom_features, **config_kwargs)
    582         if not config_kwargs:
    583             example_of_usage = f"load_dataset('{self.dataset_name}', '{self.BUILDER_CONFIGS[0].name}')"
--> 584         raise ValueError(
    585             "Config name is missing."
    586             f"\nPlease pick one among the available configs: {list(self.builder_configs.keys())}"

ValueError: Config name is missing.
Please pick one among the available configs: ['chinese_traditional', 'coig_pc', 'exam', 'finance', 'douban', 'human_value', 'logi_qa', 'ruozhiba', 'segmentfault', 'wiki', 'wikihow', 'xhs', 'zhihu']
Example of usage:
`load_dataset('coig-cqia', 'chinese_traditional')`
```

This means a dataset with subsets cannot be loaded all at once. I guess one workaround is to manually specify the subset files like in [here](https://huggingface.co/datasets/m-a-p/COIG-CQIA/discussions/1#658698b44bb41498f75c5622), which is clumsy.

### Motivation
Ideally, if no subset is specified, the API should just try to load all subsets. This makes it much easier to handle datasets with subsets.

### Your contribution
Not sure, since I'm not familiar with the lib source.
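As a workaround available today, the config names can be listed programmatically and loaded in a loop; a short sketch:

```python
from datasets import get_dataset_config_names, load_dataset

# Discover every subset of the repo, then load each one by name.
configs = get_dataset_config_names("m-a-p/COIG-CQIA")
dataset = {name: load_dataset("m-a-p/COIG-CQIA", name) for name in configs}
```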
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/6951/reactions", "total_count": 1, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/6951/timeline
null
not_planned
false
https://api.github.com/repos/huggingface/datasets/issues/6950
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6950/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6950/comments
https://api.github.com/repos/huggingface/datasets/issues/6950/events
https://github.com/huggingface/datasets/issues/6950
2,333,005,974
I_kwDODunzps6LDtiW
6,950
`Dataset.with_format` behaves inconsistently with documentation
{ "login": "iansheng", "id": 42494185, "node_id": "MDQ6VXNlcjQyNDk0MTg1", "avatar_url": "https://avatars.githubusercontent.com/u/42494185?v=4", "gravatar_id": "", "url": "https://api.github.com/users/iansheng", "html_url": "https://github.com/iansheng", "followers_url": "https://api.github.com/users/iansheng/followers", "following_url": "https://api.github.com/users/iansheng/following{/other_user}", "gists_url": "https://api.github.com/users/iansheng/gists{/gist_id}", "starred_url": "https://api.github.com/users/iansheng/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/iansheng/subscriptions", "organizations_url": "https://api.github.com/users/iansheng/orgs", "repos_url": "https://api.github.com/users/iansheng/repos", "events_url": "https://api.github.com/users/iansheng/events{/privacy}", "received_events_url": "https://api.github.com/users/iansheng/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892861, "node_id": "MDU6TGFiZWwxOTM1ODkyODYx", "url": "https://api.github.com/repos/huggingface/datasets/labels/documentation", "name": "documentation", "color": "0075ca", "default": true, "description": "Improvements or additions to documentation" } ]
closed
false
null
[]
null
2
2024-06-04T09:18:32
2024-06-25T08:05:49
2024-06-25T08:05:49
NONE
null
null
null
### Describe the bug The actual behavior of the interface `Dataset.with_format` is inconsistent with the documentation. https://huggingface.co/docs/datasets/use_with_pytorch#n-dimensional-arrays https://huggingface.co/docs/datasets/v2.19.0/en/use_with_tensorflow#n-dimensional-arrays > If your dataset consists of N-dimensional arrays, you will see that by default they are considered as nested lists. > In particular, a PyTorch formatted dataset outputs nested lists instead of a single tensor. > A TensorFlow formatted dataset outputs a RaggedTensor instead of a single tensor. But I get a single tensor by default, which is inconsistent with the description. Actually, the current behavior seems more reasonable to me; therefore, the documentation needs to be updated. ### Steps to reproduce the bug ```python >>> from datasets import Dataset >>> data = [[[1, 2],[3, 4]],[[5, 6],[7, 8]]] >>> ds = Dataset.from_dict({"data": data}) >>> ds = ds.with_format("torch") >>> ds[0] {'data': tensor([[1, 2], [3, 4]])} >>> ds = ds.with_format("tf") >>> ds[0] {'data': <tf.Tensor: shape=(2, 2), dtype=int64, numpy= array([[1, 2], [3, 4]])>} ``` ### Expected behavior ```python >>> from datasets import Dataset >>> data = [[[1, 2],[3, 4]],[[5, 6],[7, 8]]] >>> ds = Dataset.from_dict({"data": data}) >>> ds = ds.with_format("torch") >>> ds[0] {'data': [tensor([1, 2]), tensor([3, 4])]} >>> ds = ds.with_format("tf") >>> ds[0] {'data': <tf.RaggedTensor [[1, 2], [3, 4]]>} ``` ### Environment info datasets==2.19.1 torch==2.1.0 tensorflow==2.13.1
{ "login": "iansheng", "id": 42494185, "node_id": "MDQ6VXNlcjQyNDk0MTg1", "avatar_url": "https://avatars.githubusercontent.com/u/42494185?v=4", "gravatar_id": "", "url": "https://api.github.com/users/iansheng", "html_url": "https://github.com/iansheng", "followers_url": "https://api.github.com/users/iansheng/followers", "following_url": "https://api.github.com/users/iansheng/following{/other_user}", "gists_url": "https://api.github.com/users/iansheng/gists{/gist_id}", "starred_url": "https://api.github.com/users/iansheng/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/iansheng/subscriptions", "organizations_url": "https://api.github.com/users/iansheng/orgs", "repos_url": "https://api.github.com/users/iansheng/repos", "events_url": "https://api.github.com/users/iansheng/events{/privacy}", "received_events_url": "https://api.github.com/users/iansheng/received_events", "type": "User", "site_admin": false }
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/6950/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/6950/timeline
null
completed
false
https://api.github.com/repos/huggingface/datasets/issues/6949
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6949/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6949/comments
https://api.github.com/repos/huggingface/datasets/issues/6949/events
https://github.com/huggingface/datasets/issues/6949
2,332,336,573
I_kwDODunzps6LBKG9
6,949
load_dataset error
{ "login": "lion-ops", "id": 27952522, "node_id": "MDQ6VXNlcjI3OTUyNTIy", "avatar_url": "https://avatars.githubusercontent.com/u/27952522?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lion-ops", "html_url": "https://github.com/lion-ops", "followers_url": "https://api.github.com/users/lion-ops/followers", "following_url": "https://api.github.com/users/lion-ops/following{/other_user}", "gists_url": "https://api.github.com/users/lion-ops/gists{/gist_id}", "starred_url": "https://api.github.com/users/lion-ops/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lion-ops/subscriptions", "organizations_url": "https://api.github.com/users/lion-ops/orgs", "repos_url": "https://api.github.com/users/lion-ops/repos", "events_url": "https://api.github.com/users/lion-ops/events{/privacy}", "received_events_url": "https://api.github.com/users/lion-ops/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
2
2024-06-04T01:24:45
2024-07-01T11:33:46
2024-07-01T11:33:46
NONE
null
null
null
### Describe the bug Why does the program get stuck when I use the load_dataset method, and why is it still stuck after loading for several hours? In fact, my json file is only 21 MB, and I can load it in one go using open('', 'r'). ### Steps to reproduce the bug 1. pip install datasets==2.19.2 2. from datasets import Dataset, DatasetDict, NamedSplit, Split, load_dataset 3. data = load_dataset('json', data_files='train.json') ### Expected behavior It should load my json correctly ### Environment info datasets==2.19.2
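A hedged workaround sketch for a hang like the one above: since the file is only 21 MB, read it manually and build the dataset in memory, bypassing the JSON loader. It assumes `train.json` is a top-level JSON array of flat records; `Dataset.from_list` is a real `datasets` API.

```python
import json
from datasets import Dataset

# Assumed: train.json is a JSON array of objects with uniform keys.
with open("train.json", "r", encoding="utf-8") as f:
    records = json.load(f)

ds = Dataset.from_list(records)
print(ds)
```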
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/6949/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/6949/timeline
null
completed
false
https://api.github.com/repos/huggingface/datasets/issues/6948
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6948/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6948/comments
https://api.github.com/repos/huggingface/datasets/issues/6948/events
https://github.com/huggingface/datasets/issues/6948
2,331,758,300
I_kwDODunzps6K-87c
6,948
to_tf_dataset: Visible devices cannot be modified after being initialized
{ "login": "logasja", "id": 7151661, "node_id": "MDQ6VXNlcjcxNTE2NjE=", "avatar_url": "https://avatars.githubusercontent.com/u/7151661?v=4", "gravatar_id": "", "url": "https://api.github.com/users/logasja", "html_url": "https://github.com/logasja", "followers_url": "https://api.github.com/users/logasja/followers", "following_url": "https://api.github.com/users/logasja/following{/other_user}", "gists_url": "https://api.github.com/users/logasja/gists{/gist_id}", "starred_url": "https://api.github.com/users/logasja/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/logasja/subscriptions", "organizations_url": "https://api.github.com/users/logasja/orgs", "repos_url": "https://api.github.com/users/logasja/repos", "events_url": "https://api.github.com/users/logasja/events{/privacy}", "received_events_url": "https://api.github.com/users/logasja/received_events", "type": "User", "site_admin": false }
[]
open
false
null
[]
null
0
2024-06-03T18:10:57
2024-06-03T18:10:57
null
NONE
null
null
null
### Describe the bug When trying to use to_tf_dataset with a custom data_loader collate_fn and parallelism, I am met with the following error as many times as there are workers in ``num_workers``. File "/opt/miniconda/envs/env/lib/python3.11/site-packages/multiprocess/process.py", line 314, in _bootstrap self.run() File "/opt/miniconda/envs/env/lib/python3.11/site-packages/multiprocess/process.py", line 108, in run self._target(*self._args, **self._kwargs) File "/opt/miniconda/envs/env/lib/python3.11/site-packages/datasets/utils/tf_utils.py", line 438, in worker_loop tf.config.set_visible_devices([], "GPU") # Make sure workers don't try to allocate GPU memory ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/opt/miniconda/envs/env/lib/python3.11/site-packages/tensorflow/python/framework/config.py", line 566, in set_visible_devices context.context().set_visible_devices(devices, device_type) File "/opt/miniconda/envs/env/lib/python3.11/site-packages/tensorflow/python/eager/context.py", line 1737, in set_visible_devices raise RuntimeError( RuntimeError: Visible devices cannot be modified after being initialized ### Steps to reproduce the bug 1. Download a dataset using HuggingFace load_dataset 2. Define a function that transforms the data in some way to be used in the collate_fn argument 3. Provide a ``batch_size`` and ``num_workers`` value in the ``to_tf_dataset`` function 4. Either retrieve directly or use tfds benchmark to test the dataset ``` python from datasets import load_dataset import tensorflow_datasets as tfds from keras_cv.layers import Resizing def data_loader(examples): x = Resizing(256, 256, crop_to_aspect_ratio=True)(examples[0]['image']) return {"image": x} ds = load_dataset("logasja/FDF", split="test") ds = ds.to_tf_dataset(collate_fn=data_loader, batch_size=16, num_workers=2) tfds.benchmark(ds) ``` ### Expected behavior Use multiple processes to apply transformations from the collate_fn to the tf dataset on the CPU. ### Environment info - `datasets` version: 2.19.1 - Platform: Linux-6.5.0-1023-oracle-x86_64-with-glibc2.35 - Python version: 3.11.8 - `huggingface_hub` version: 0.22.2 - PyArrow version: 15.0.2 - Pandas version: 2.2.1 - `fsspec` version: 2024.2.0
null
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/6948/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/6948/timeline
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/6947
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6947/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6947/comments
https://api.github.com/repos/huggingface/datasets/issues/6947/events
https://github.com/huggingface/datasets/issues/6947
2,331,114,055
I_kwDODunzps6K8fpH
6,947
FileNotFoundError๏ผšerror when loading C4 dataset
{ "login": "W-215", "id": 62374585, "node_id": "MDQ6VXNlcjYyMzc0NTg1", "avatar_url": "https://avatars.githubusercontent.com/u/62374585?v=4", "gravatar_id": "", "url": "https://api.github.com/users/W-215", "html_url": "https://github.com/W-215", "followers_url": "https://api.github.com/users/W-215/followers", "following_url": "https://api.github.com/users/W-215/following{/other_user}", "gists_url": "https://api.github.com/users/W-215/gists{/gist_id}", "starred_url": "https://api.github.com/users/W-215/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/W-215/subscriptions", "organizations_url": "https://api.github.com/users/W-215/orgs", "repos_url": "https://api.github.com/users/W-215/repos", "events_url": "https://api.github.com/users/W-215/events{/privacy}", "received_events_url": "https://api.github.com/users/W-215/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
15
2024-06-03T13:06:33
2024-06-25T06:21:28
2024-06-25T06:21:28
NONE
null
null
null
### Describe the bug Can't load the C4 dataset. When I downgrade the datasets package to 2.12.2, I get `raise datasets.utils.info_utils.ExpectedMoreSplits: {'train'}` instead. How can I fix this? ### Steps to reproduce the bug 1. from datasets import load_dataset 2. dataset = load_dataset('allenai/c4', data_files={'validation': 'en/c4-validation.00003-of-00008.json.gz'}, split='validation') 3. raise FileNotFoundError( FileNotFoundError: Couldn't find a dataset script at local_path/c4_val/allenai/c4/c4.py or any data file in the same directory. Couldn't find 'allenai/c4' on the Hugging Face Hub either: FileNotFoundError: Unable to find 'hf://datasets/allenai/c4@1588ec454efa1a09f29cd18ddd04fe05fc8653a2/en/c4-validation.00003-of-00008.json.gz' with any supported extension ['.csv', '.tsv', '.json', '.jsonl', '.parquet', '.geoparquet', '.gpq', '.arrow', '.txt', '.tar', '.blp', '.bmp', '.dib', '.bufr', '.cur', '.pcx', '.dcx', '.dds', '.ps', '.eps', '.fit', '.fits', '.fli', '.flc', '.ftc', '.ftu', '.gbr', '.gif', '.grib', '.h5', '.hdf', '.png', '.apng', '.jp2', '.j2k', '.jpc', '.jpf', '.jpx', '.j2c', '.icns', '.ico', '.im', '.iim', '.tif', '.tiff', '.jfif', '.jpe', '.jpg', '.jpeg', '.mpg', '.mpeg', '.msp', '.pcd', '.pxr', '.pbm', '.pgm', '.ppm', '.pnm', '.psd', '.bw', '.rgb', '.rgba', '.sgi', '.ras', '.tga', '.icb', '.vda', '.vst', '.webp', '.wmf', '.emf', '.xbm', '.xpm', '.BLP', '.BMP', '.DIB', '.BUFR', '.CUR', '.PCX', '.DCX', '.DDS', '.PS', '.EPS', '.FIT', '.FITS', '.FLI', '.FLC', '.FTC', '.FTU', '.GBR', '.GIF', '.GRIB', '.H5', '.HDF', '.PNG', '.APNG', '.JP2', '.J2K', '.JPC', '.JPF', '.JPX', '.J2C', '.ICNS', '.ICO', '.IM', '.IIM', '.TIF', '.TIFF', '.JFIF', '.JPE', '.JPG', '.JPEG', '.MPG', '.MPEG', '.MSP', '.PCD', '.PXR', '.PBM', '.PGM', '.PPM', '.PNM', '.PSD', '.BW', '.RGB', '.RGBA', '.SGI', '.RAS', '.TGA', '.ICB', '.VDA', '.VST', '.WEBP', '.WMF', '.EMF', '.XBM', '.XPM', '.aiff', '.au', '.avr', '.caf', '.flac', '.htk', '.svx', '.mat4', '.mat5', '.mpc2k', '.ogg', '.paf', '.pvf', '.raw', '.rf64', '.sd2', '.sds', '.ircam', '.voc', '.w64', '.wav', '.nist', '.wavex', '.wve', '.xi', '.mp3', '.opus', '.AIFF', '.AU', '.AVR', '.CAF', '.FLAC', '.HTK', '.SVX', '.MAT4', '.MAT5', '.MPC2K', '.OGG', '.PAF', '.PVF', '.RAW', '.RF64', '.SD2', '.SDS', '.IRCAM', '.VOC', '.W64', '.WAV', '.NIST', '.WAVEX', '.WVE', '.XI', '.MP3', '.OPUS', '.zip'] ### Expected behavior The data is imported successfully ### Environment info Python version 3.9 datasets version 2.19.2
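A hedged partial-mitigation sketch for the `ExpectedMoreSplits` error seen on datasets 2.12.2: split verification against recorded metadata can be skipped with `verification_mode`, a real `load_dataset` parameter in recent versions. Whether this also addresses the 2.19.2 `FileNotFoundError` is not confirmed.

```python
from datasets import load_dataset

dataset = load_dataset(
    "allenai/c4",
    data_files={"validation": "en/c4-validation.00003-of-00008.json.gz"},
    split="validation",
    verification_mode="no_checks",  # skips split/size verification
)
```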
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/6947/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/6947/timeline
null
completed
false
https://api.github.com/repos/huggingface/datasets/issues/6942
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6942/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6942/comments
https://api.github.com/repos/huggingface/datasets/issues/6942/events
https://github.com/huggingface/datasets/issues/6942
2,329,562,382
I_kwDODunzps6K2k0O
6,942
Import sorting is disabled by flake8 noqa directive after switching to ruff linter
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[ { "id": 4296013012, "node_id": "LA_kwDODunzps8AAAABAA_01A", "url": "https://api.github.com/repos/huggingface/datasets/labels/maintenance", "name": "maintenance", "color": "d4c5f9", "default": false, "description": "Maintenance tasks" } ]
closed
false
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[ { "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false } ]
null
0
2024-06-02T09:43:34
2024-06-04T09:54:24
2024-06-04T09:54:24
MEMBER
null
null
null
When we switched to the `ruff` linter in PR: - #5519 import sorting was disabled in all files containing the `# flake8: noqa` directive - https://github.com/astral-sh/ruff/issues/11679 We should re-enable import sorting on those files.
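A hedged sketch of the directive change implied above: ruff treats a bare `# flake8: noqa` as a file-level exemption from every rule, including import sorting, so replacing it with a scoped exemption lets the isort (`I`) rules run again. `# ruff: noqa: F401` is real ruff syntax; which codes to keep suppressed is an assumption.

```python
# Before: a bare directive that ruff interprets as "ignore ALL rules here",
# which also disables import sorting for the whole file.
# flake8: noqa

# After: only the unused-import rule stays suppressed; imports are sorted again.
# ruff: noqa: F401

from datasets.arrow_dataset import Dataset  # hypothetical import, now order-checked
```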
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/6942/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/6942/timeline
null
completed
false
https://api.github.com/repos/huggingface/datasets/issues/6941
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6941/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6941/comments
https://api.github.com/repos/huggingface/datasets/issues/6941/events
https://github.com/huggingface/datasets/issues/6941
2,328,930,165
I_kwDODunzps6K0Kd1
6,941
Supporting FFCV: Fast Forward Computer Vision
{ "login": "Luciennnnnnn", "id": 20135317, "node_id": "MDQ6VXNlcjIwMTM1MzE3", "avatar_url": "https://avatars.githubusercontent.com/u/20135317?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Luciennnnnnn", "html_url": "https://github.com/Luciennnnnnn", "followers_url": "https://api.github.com/users/Luciennnnnnn/followers", "following_url": "https://api.github.com/users/Luciennnnnnn/following{/other_user}", "gists_url": "https://api.github.com/users/Luciennnnnnn/gists{/gist_id}", "starred_url": "https://api.github.com/users/Luciennnnnnn/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Luciennnnnnn/subscriptions", "organizations_url": "https://api.github.com/users/Luciennnnnnn/orgs", "repos_url": "https://api.github.com/users/Luciennnnnnn/repos", "events_url": "https://api.github.com/users/Luciennnnnnn/events{/privacy}", "received_events_url": "https://api.github.com/users/Luciennnnnnn/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892871, "node_id": "MDU6TGFiZWwxOTM1ODkyODcx", "url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement", "name": "enhancement", "color": "a2eeef", "default": true, "description": "New feature or request" } ]
open
false
null
[]
null
0
2024-06-01T05:34:52
2024-06-01T05:34:52
null
NONE
null
null
null
### Feature request Supporting FFCV, https://github.com/libffcv/ffcv ### Motivation According to its benchmarks, FFCV seems to be the fastest image-loading method. ### Your contribution no
null
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/6941/reactions", "total_count": 1, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/6941/timeline
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/6940
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6940/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6940/comments
https://api.github.com/repos/huggingface/datasets/issues/6940/events
https://github.com/huggingface/datasets/issues/6940
2,328,637,831
I_kwDODunzps6KzDGH
6,940
Enable Sharding to Equal Sized Shards
{ "login": "yuvalkirstain", "id": 57996478, "node_id": "MDQ6VXNlcjU3OTk2NDc4", "avatar_url": "https://avatars.githubusercontent.com/u/57996478?v=4", "gravatar_id": "", "url": "https://api.github.com/users/yuvalkirstain", "html_url": "https://github.com/yuvalkirstain", "followers_url": "https://api.github.com/users/yuvalkirstain/followers", "following_url": "https://api.github.com/users/yuvalkirstain/following{/other_user}", "gists_url": "https://api.github.com/users/yuvalkirstain/gists{/gist_id}", "starred_url": "https://api.github.com/users/yuvalkirstain/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/yuvalkirstain/subscriptions", "organizations_url": "https://api.github.com/users/yuvalkirstain/orgs", "repos_url": "https://api.github.com/users/yuvalkirstain/repos", "events_url": "https://api.github.com/users/yuvalkirstain/events{/privacy}", "received_events_url": "https://api.github.com/users/yuvalkirstain/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892871, "node_id": "MDU6TGFiZWwxOTM1ODkyODcx", "url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement", "name": "enhancement", "color": "a2eeef", "default": true, "description": "New feature or request" } ]
open
false
null
[]
null
0
2024-05-31T21:55:50
2024-06-01T07:34:12
null
NONE
null
null
null
### Feature request Add an option when sharding a dataset to have all shards the same size. It would be good to provide options for both duplication and truncation. ### Motivation Currently the behavior of sharding is "If n % i == l, then the first l shards will have length (n // i) + 1, and the remaining shards will have length (n // i).". However, when using FSDP we want the shards to have the same size. This requires the user to handle the situation manually, but it would be nice if we had an option to shard the dataset into equally sized shards. ### Your contribution For now just a PR. I can also add code that does what is needed, but probably not efficiently. Shard to equal size by duplication: ``` from datasets import concatenate_datasets remainder = len(dataset) % num_shards num_missing_examples = (num_shards - remainder) % num_shards # 0 when already divisible duplicated = dataset.select(list(range(num_missing_examples))) dataset = concatenate_datasets([dataset, duplicated]) shard = dataset.shard(num_shards, shard_idx) ``` Or by truncation: ``` shard = dataset.shard(num_shards, shard_idx) num_examples_per_shard = len(dataset) // num_shards shard = shard.select(list(range(num_examples_per_shard))) ```
null
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/6940/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/6940/timeline
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/6939
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6939/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6939/comments
https://api.github.com/repos/huggingface/datasets/issues/6939/events
https://github.com/huggingface/datasets/issues/6939
2,328,059,386
I_kwDODunzps6Kw136
6,939
ExpectedMoreSplits error when using data_dir
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892857, "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
closed
false
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[ { "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false } ]
null
0
2024-05-31T15:08:42
2024-05-31T17:10:39
2024-05-31T17:10:39
MEMBER
null
null
null
As reported by @regisss, an `ExpectedMoreSplits` error is raised when passing `data_dir`: ```python from datasets import load_dataset dataset = load_dataset( "lvwerra/stack-exchange-paired", split="train", cache_dir=None, data_dir="data/rl", ) ``` ``` Traceback (most recent call last): File "<stdin>", line 1, in <module> File "/usr/local/lib/python3.10/dist-packages/datasets/load.py", line 2609, in load_dataset builder_instance.download_and_prepare( File "/usr/local/lib/python3.10/dist-packages/datasets/builder.py", line 1027, in download_and_prepare self._download_and_prepare( File "/usr/local/lib/python3.10/dist-packages/datasets/builder.py", line 1140, in _download_and_prepare verify_splits(self.info.splits, split_dict) File "/usr/local/lib/python3.10/dist-packages/datasets/utils/info_utils.py", line 92, in verify_splits raise ExpectedMoreSplits(str(set(expected_splits) - set(recorded_splits))) datasets.utils.info_utils.ExpectedMoreSplits: {'test'} ```
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/6939/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/6939/timeline
null
completed
false
https://api.github.com/repos/huggingface/datasets/issues/6937
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6937/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6937/comments
https://api.github.com/repos/huggingface/datasets/issues/6937/events
https://github.com/huggingface/datasets/issues/6937
2,327,212,611
I_kwDODunzps6KtnJD
6,937
JSON loader implicitly coerces floats to integers
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892857, "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
open
false
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[ { "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false } ]
null
0
2024-05-31T08:09:12
2024-05-31T08:11:57
null
MEMBER
null
null
null
The JSON loader implicitly coerces floats to integers. The column values `[0.0, 1.0, 2.0]` are coerced to `[0, 1, 2]`. See CI error in dataset-viewer: https://github.com/huggingface/dataset-viewer/actions/runs/9290164936/job/25576926446 ``` =================================== FAILURES =================================== ___________________________ test_statistics_endpoint ___________________________ normal_user_public_json_dataset = 'DVUser/tmp-dataset-17170199043860' def test_statistics_endpoint(normal_user_public_json_dataset: str) -> None: dataset = normal_user_public_json_dataset config, split = get_default_config_split() statistics_response = poll_until_ready_and_assert( relative_url=f"/statistics?dataset={dataset}&config={config}&split={split}", check_x_revision=True, dataset=dataset, ) content = statistics_response.json() assert len(content) == 3 assert sorted(content) == ["num_examples", "partial", "statistics"], statistics_response statistics = content["statistics"] num_examples = content["num_examples"] partial = content["partial"] assert isinstance(statistics, list), statistics assert len(statistics) == 6 assert num_examples == 4 assert partial is False string_label_column = statistics[0] assert "column_name" in string_label_column assert "column_statistics" in string_label_column assert "column_type" in string_label_column assert string_label_column["column_name"] == "col_1" assert string_label_column["column_type"] == "string_label" # 4 unique values -> label assert isinstance(string_label_column["column_statistics"], dict) assert string_label_column["column_statistics"] == { "nan_count": 0, "nan_proportion": 0.0, "no_label_count": 0, "no_label_proportion": 0.0, "n_unique": 4, "frequencies": { "There goes another one.": 1, "Vader turns round and round in circles as his ship spins into space.": 1, "We count thirty Rebel ships, Lord Vader.": 1, "The wingman spots the pirateship coming at him and warns the Dark Lord": 1, }, } int_column = statistics[1] assert "column_name" in int_column assert "column_statistics" in int_column assert "column_type" in int_column assert int_column["column_name"] == "col_2" assert int_column["column_type"] == "int" assert isinstance(int_column["column_statistics"], dict) assert int_column["column_statistics"] == { "histogram": {"bin_edges": [0, 1, 2, 3, 3], "hist": [1, 1, 1, 1]}, "max": 3, "mean": 1.5, "median": 1.5, "min": 0, "nan_count": 0, "nan_proportion": 0.0, "std": 1.29099, } float_column = statistics[2] assert "column_name" in float_column assert "column_statistics" in float_column assert "column_type" in float_column assert float_column["column_name"] == "col_3" > assert float_column["column_type"] == "float" E AssertionError: assert 'int' == 'float' E - float E + int tests/test_14_statistics.py:72: AssertionError =========================== short test summary info ============================ FAILED tests/test_14_statistics.py::test_statistics_endpoint - AssertionError: assert 'int' == 'float' - float + int ``` This bug was introduced after: - #6914 We have reported the issue to pandas: - https://github.com/pandas-dev/pandas/issues/58866
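A minimal repro sketch of the coercion described above, assuming a plain JSON list of records on disk; the expected dtype is `float64`, while per this issue the loader yields `int64`.

```python
import json, os, tempfile
from datasets import load_dataset

path = os.path.join(tempfile.mkdtemp(), "data.json")
with open(path, "w") as f:
    json.dump([{"col_3": v} for v in [0.0, 1.0, 2.0]], f)

ds = load_dataset("json", data_files=path, split="train")
print(ds.features)  # expected Value("float64"); observed Value("int64") per this issue
```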
null
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/6937/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/6937/timeline
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/6936
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6936/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6936/comments
https://api.github.com/repos/huggingface/datasets/issues/6936/events
https://github.com/huggingface/datasets/issues/6936
2,326,119,853
I_kwDODunzps6KpcWt
6,936
save_to_disk() freezes when saving on s3 bucket with multiprocessing
{ "login": "ycattan", "id": 54974949, "node_id": "MDQ6VXNlcjU0OTc0OTQ5", "avatar_url": "https://avatars.githubusercontent.com/u/54974949?v=4", "gravatar_id": "", "url": "https://api.github.com/users/ycattan", "html_url": "https://github.com/ycattan", "followers_url": "https://api.github.com/users/ycattan/followers", "following_url": "https://api.github.com/users/ycattan/following{/other_user}", "gists_url": "https://api.github.com/users/ycattan/gists{/gist_id}", "starred_url": "https://api.github.com/users/ycattan/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ycattan/subscriptions", "organizations_url": "https://api.github.com/users/ycattan/orgs", "repos_url": "https://api.github.com/users/ycattan/repos", "events_url": "https://api.github.com/users/ycattan/events{/privacy}", "received_events_url": "https://api.github.com/users/ycattan/received_events", "type": "User", "site_admin": false }
[]
open
false
null
[]
null
1
2024-05-30T16:48:39
2024-07-22T23:08:42
null
NONE
null
null
null
### Describe the bug I'm trying to save a `Dataset` using the `save_to_disk()` function with: - `num_proc > 1` - `dataset_path` being an s3 bucket path e.g. "s3://{bucket_name}/{dataset_folder}/" The hf progress bar shows up but the saving does not seem to start. When using one process only (`num_proc=1`), everything works fine. When saving the dataset on local disk (as opposed to an s3 bucket) with `num_proc > 1`, everything works fine. Thank you for your help! :) ### Steps to reproduce the bug I tried without any storage options: ``` from datasets import load_dataset sandbox_ds = load_dataset("openai_humaneval") sandbox_ds["test"].save_to_disk( "s3://bucket-name/test_multiprocessing_saving/", num_proc=4, ) ``` and with the specific s3fs storage options: ``` from datasets import load_dataset from s3fs import S3FileSystem def get_s3fs(): return S3FileSystem() sandbox_ds = load_dataset("openai_humaneval") sandbox_ds["test"].save_to_disk( "s3://bucket-name/test_multiprocessing_saving/", num_proc=4, storage_options=get_s3fs().storage_options, # also tried: storage_options=S3FileSystem().storage_options ) ``` I'm guessing I might be using the `storage_options` parameter wrongly, but I didn't find anything online that made it work. **NB**: Behavior is the same when trying to save the whole `DatasetDict`. ### Expected behavior Progress bar fills in and saving is carried out. ### Environment info `datasets==2.18.0`
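A hedged workaround sketch, assuming the freeze is specific to multiprocess writes through s3fs: write the shards locally with `num_proc > 1` (which the report says works), then copy the directory to S3. `S3FileSystem.put(..., recursive=True)` is a real s3fs call; the bucket path is illustrative.

```python
from datasets import load_dataset
from s3fs import S3FileSystem

sandbox_ds = load_dataset("openai_humaneval")
sandbox_ds["test"].save_to_disk("local_humaneval_test", num_proc=4)  # local write works

s3 = S3FileSystem()
s3.put("local_humaneval_test", "bucket-name/test_multiprocessing_saving/", recursive=True)
```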
null
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/6936/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/6936/timeline
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/6935
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6935/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6935/comments
https://api.github.com/repos/huggingface/datasets/issues/6935/events
https://github.com/huggingface/datasets/issues/6935
2,325,612,022
I_kwDODunzps6KngX2
6,935
Support for pathlib.Path in datasets 2.19.0
{ "login": "lamyiowce", "id": 12202811, "node_id": "MDQ6VXNlcjEyMjAyODEx", "avatar_url": "https://avatars.githubusercontent.com/u/12202811?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lamyiowce", "html_url": "https://github.com/lamyiowce", "followers_url": "https://api.github.com/users/lamyiowce/followers", "following_url": "https://api.github.com/users/lamyiowce/following{/other_user}", "gists_url": "https://api.github.com/users/lamyiowce/gists{/gist_id}", "starred_url": "https://api.github.com/users/lamyiowce/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lamyiowce/subscriptions", "organizations_url": "https://api.github.com/users/lamyiowce/orgs", "repos_url": "https://api.github.com/users/lamyiowce/repos", "events_url": "https://api.github.com/users/lamyiowce/events{/privacy}", "received_events_url": "https://api.github.com/users/lamyiowce/received_events", "type": "User", "site_admin": false }
[]
open
false
null
[]
null
1
2024-05-30T12:53:36
2024-08-22T18:45:56
null
NONE
null
null
null
### Describe the bug After the recent update of `datasets`, Dataset.save_to_disk does not accept a pathlib.Path anymore. It was supported in 2.18.0 and previous versions. Is this intentional? Was it supported before only because of a Python duck-typing miracle? ### Steps to reproduce the bug ``` from datasets import Dataset import pathlib path = pathlib.Path("./my_out_path") Dataset.from_dict( {"text": ["hello world"], "label": [777], "split": ["train"]} ).save_to_disk(path) ``` This results in an error when using datasets 2.19: ``` Traceback (most recent call last): File "<stdin>", line 3, in <module> File "/Users/jb/scratch/venv/lib/python3.11/site-packages/datasets/arrow_dataset.py", line 1515, in save_to_disk fs, _ = url_to_fs(dataset_path, **(storage_options or {})) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/Users/jb/scratch/venv/lib/python3.11/site-packages/fsspec/core.py", line 383, in url_to_fs chain = _un_chain(url, kwargs) ^^^^^^^^^^^^^^^^^^^^^^ File "/Users/jb/scratch/venv/lib/python3.11/site-packages/fsspec/core.py", line 323, in _un_chain if "::" in path ^^^^^^^^^^^^ TypeError: argument of type 'PosixPath' is not iterable ``` Converting to str works, however. ``` Dataset.from_dict( {"text": ["hello world"], "label": [777], "split": ["train"]} ).save_to_disk(str(path)) ``` ### Expected behavior My dataset gets saved to disk without an error. ### Environment info aiohttp==3.9.5 aiosignal==1.3.1 attrs==23.2.0 certifi==2024.2.2 charset-normalizer==3.3.2 datasets==2.19.0 dill==0.3.8 filelock==3.14.0 frozenlist==1.4.1 fsspec==2024.3.1 huggingface-hub==0.23.2 idna==3.7 multidict==6.0.5 multiprocess==0.70.16 numpy==1.26.4 packaging==24.0 pandas==2.2.2 pyarrow==16.1.0 pyarrow-hotfix==0.6 python-dateutil==2.9.0.post0 pytz==2024.1 PyYAML==6.0.1 requests==2.32.3 six==1.16.0 tqdm==4.66.4 typing_extensions==4.12.0 tzdata==2024.1 urllib3==2.2.1 xxhash==3.4.1 yarl==1.9.4
null
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/6935/reactions", "total_count": 5, "+1": 5, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/6935/timeline
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/6930
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6930/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6930/comments
https://api.github.com/repos/huggingface/datasets/issues/6930/events
https://github.com/huggingface/datasets/issues/6930
2,323,225,922
I_kwDODunzps6KeZ1C
6,930
ValueError: Couldn't infer the same data file format for all splits. Got {'train': ('json', {}), 'validation': (None, {})}
{ "login": "CLL112", "id": 41767521, "node_id": "MDQ6VXNlcjQxNzY3NTIx", "avatar_url": "https://avatars.githubusercontent.com/u/41767521?v=4", "gravatar_id": "", "url": "https://api.github.com/users/CLL112", "html_url": "https://github.com/CLL112", "followers_url": "https://api.github.com/users/CLL112/followers", "following_url": "https://api.github.com/users/CLL112/following{/other_user}", "gists_url": "https://api.github.com/users/CLL112/gists{/gist_id}", "starred_url": "https://api.github.com/users/CLL112/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/CLL112/subscriptions", "organizations_url": "https://api.github.com/users/CLL112/orgs", "repos_url": "https://api.github.com/users/CLL112/repos", "events_url": "https://api.github.com/users/CLL112/events{/privacy}", "received_events_url": "https://api.github.com/users/CLL112/received_events", "type": "User", "site_admin": false }
[]
open
false
null
[]
null
2
2024-05-29T12:40:05
2024-07-23T06:25:24
null
NONE
null
null
null
### Describe the bug When I run the code `en = load_dataset("allenai/c4", "en", streaming=True)`, I encounter an error: `raise ValueError(f"Couldn't infer the same data file format for all splits. Got {split_modules}") ValueError: Couldn't infer the same data file format for all splits. Got {'train': ('json', {}), 'validation': (None, {})}`. However, running `dataset = load_dataset('allenai/c4', streaming=True, data_files={'validation': 'en/c4-validation.00003-of-00008.json.gz'}, split='validation')` works fine. What is the issue here? ### Steps to reproduce the bug Run this code: import os os.environ['HF_ENDPOINT'] = 'https://hf-mirror.com' from datasets import load_dataset en = load_dataset("allenai/c4", "en", streaming=True) ### Expected behavior Successfully load the dataset. ### Environment info - `datasets` version: 2.18.0 - Platform: Linux-6.5.0-28-generic-x86_64-with-glibc2.17 - Python version: 3.8.19 - `huggingface_hub` version: 0.22.2 - PyArrow version: 15.0.2 - Pandas version: 2.0.3 - `fsspec` version: 2024.2.0
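A hedged workaround sketch for the split-inference error above: point each split at its own data files explicitly, so the loader does not have to infer a common format. The shard patterns are assumptions about the repo layout, modeled on the working one-split call in the report.

```python
from datasets import load_dataset

en = load_dataset(
    "allenai/c4",
    data_files={
        "train": "en/c4-train.*-of-01024.json.gz",          # assumed shard pattern
        "validation": "en/c4-validation.*-of-00008.json.gz",
    },
    streaming=True,
)
```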
null
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/6930/reactions", "total_count": 1, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 1 }
https://api.github.com/repos/huggingface/datasets/issues/6930/timeline
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/6929
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6929/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6929/comments
https://api.github.com/repos/huggingface/datasets/issues/6929/events
https://github.com/huggingface/datasets/issues/6929
2,322,980,077
I_kwDODunzps6Kddzt
6,929
Avoid downloading the whole dataset when only README.md has been touched on the Hub.
{ "login": "zinc75", "id": 73740254, "node_id": "MDQ6VXNlcjczNzQwMjU0", "avatar_url": "https://avatars.githubusercontent.com/u/73740254?v=4", "gravatar_id": "", "url": "https://api.github.com/users/zinc75", "html_url": "https://github.com/zinc75", "followers_url": "https://api.github.com/users/zinc75/followers", "following_url": "https://api.github.com/users/zinc75/following{/other_user}", "gists_url": "https://api.github.com/users/zinc75/gists{/gist_id}", "starred_url": "https://api.github.com/users/zinc75/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/zinc75/subscriptions", "organizations_url": "https://api.github.com/users/zinc75/orgs", "repos_url": "https://api.github.com/users/zinc75/repos", "events_url": "https://api.github.com/users/zinc75/events{/privacy}", "received_events_url": "https://api.github.com/users/zinc75/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892871, "node_id": "MDU6TGFiZWwxOTM1ODkyODcx", "url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement", "name": "enhancement", "color": "a2eeef", "default": true, "description": "New feature or request" } ]
open
false
null
[]
null
2
2024-05-29T10:36:06
2024-05-29T20:51:56
null
NONE
null
null
null
### Feature request `datasets.load_dataset()` triggers a new download of the **whole dataset** when the README.md file has been touched on the Hugging Face Hub, even if the data / parquet files are exactly the same. I think the re-download is currently triggered by any change of the hash of the latest commit on the Hub, but is there a clever way to download the dataset again **if and only if** the data itself was modified? ### Motivation The current behaviour is a waste of network bandwidth / disk space / research time. ### Your contribution I don't have time to submit a PR, but I hope a simple solution will emerge from this issue!
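A hedged user-side mitigation sketch, assuming you are willing to pin the data to a known commit so README-only pushes cannot invalidate anything: `revision` is a real `load_dataset` parameter; the repo id and SHA below are hypothetical.

```python
from datasets import load_dataset

# Pin to a specific commit; later pushes (including README-only ones) are ignored.
ds = load_dataset("some-org/some-dataset", revision="a1b2c3d4")  # hypothetical repo and SHA
```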
null
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/6929/reactions", "total_count": 1, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 1 }
https://api.github.com/repos/huggingface/datasets/issues/6929/timeline
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/6924
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6924/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6924/comments
https://api.github.com/repos/huggingface/datasets/issues/6924/events
https://github.com/huggingface/datasets/issues/6924
2,320,531,015
I_kwDODunzps6KUH5H
6,924
Caching map result of DatasetDict.
{ "login": "MostHumble", "id": 56939432, "node_id": "MDQ6VXNlcjU2OTM5NDMy", "avatar_url": "https://avatars.githubusercontent.com/u/56939432?v=4", "gravatar_id": "", "url": "https://api.github.com/users/MostHumble", "html_url": "https://github.com/MostHumble", "followers_url": "https://api.github.com/users/MostHumble/followers", "following_url": "https://api.github.com/users/MostHumble/following{/other_user}", "gists_url": "https://api.github.com/users/MostHumble/gists{/gist_id}", "starred_url": "https://api.github.com/users/MostHumble/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/MostHumble/subscriptions", "organizations_url": "https://api.github.com/users/MostHumble/orgs", "repos_url": "https://api.github.com/users/MostHumble/repos", "events_url": "https://api.github.com/users/MostHumble/events{/privacy}", "received_events_url": "https://api.github.com/users/MostHumble/received_events", "type": "User", "site_admin": false }
[]
open
false
null
[]
null
0
2024-05-28T09:07:41
2024-05-28T09:07:41
null
NONE
null
null
null
Hi! I'm currently using the map function to tokenize a somewhat large dataset, so I need to use the cache to save ~25 mins. Changing num_proc induces recomputation of the map; I'm not sure why, and is this expected behavior? Here it says that cached files are loaded sequentially: https://github.com/huggingface/datasets/blob/bb2664cf540d5ce4b066365e7c8b26e7f1ca4743/src/datasets/arrow_dataset.py#L3005-L3006 It seems like I can pass in a fingerprint and load it directly: https://github.com/huggingface/datasets/blob/bb2664cf540d5ce4b066365e7c8b26e7f1ca4743/src/datasets/arrow_dataset.py#L3108-L3125 **Environment Setup:** - Python 3.11.9 - datasets 2.19.1 conda-forge - Linux 6.1.83-1.el9.elrepo.x86_64 **MRE** ```python # fixed raw_datasets # fixed tokenize_function tokenized_datasets = raw_datasets.map( tokenize_function, batched=True, num_proc=9, remove_columns=['text'], load_from_cache_file= True, desc="Running tokenizer on dataset line_by_line", ) tokenized_datasets = raw_datasets.map( tokenize_function, batched=True, num_proc=5, remove_columns=['text'], load_from_cache_file= True, desc="Running tokenizer on dataset line_by_line", ) ```
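A hedged workaround sketch, assuming the goal is simply to avoid re-tokenizing when `num_proc` changes: persist the mapped result once and reload it, sidestepping fingerprint sensitivity entirely. `save_to_disk`/`load_from_disk` are real `datasets` APIs; the path is hypothetical.

```python
from datasets import load_from_disk

# After the first (expensive) map, persist the result once.
tokenized_datasets.save_to_disk("tokenized_cache")  # hypothetical path

# Later runs, with any num_proc, reload instead of re-mapping.
tokenized_datasets = load_from_disk("tokenized_cache")
```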
null
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/6924/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/6924/timeline
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/6923
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6923/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6923/comments
https://api.github.com/repos/huggingface/datasets/issues/6923/events
https://github.com/huggingface/datasets/issues/6923
2,319,292,872
I_kwDODunzps6KPZnI
6,923
Export Parquet Tablet Audio-Set is null bytes in Arrow
{ "login": "anioji", "id": 140120605, "node_id": "U_kgDOCFoSHQ", "avatar_url": "https://avatars.githubusercontent.com/u/140120605?v=4", "gravatar_id": "", "url": "https://api.github.com/users/anioji", "html_url": "https://github.com/anioji", "followers_url": "https://api.github.com/users/anioji/followers", "following_url": "https://api.github.com/users/anioji/following{/other_user}", "gists_url": "https://api.github.com/users/anioji/gists{/gist_id}", "starred_url": "https://api.github.com/users/anioji/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/anioji/subscriptions", "organizations_url": "https://api.github.com/users/anioji/orgs", "repos_url": "https://api.github.com/users/anioji/repos", "events_url": "https://api.github.com/users/anioji/events{/privacy}", "received_events_url": "https://api.github.com/users/anioji/received_events", "type": "User", "site_admin": false }
[]
open
false
null
[]
null
0
2024-05-27T14:27:57
2024-05-27T14:27:57
null
NONE
null
null
null
### Describe the bug

Exporting the processed audio inside the table with the `Dataset.to_parquet` function produces pyarrow objects of the form `{bytes: null, path: "Some/Path"}`. At the same time, the same dataset uploaded to the Hub has the actual byte arrays.

![Screenshot from 2024-05-27 19-14-49](https://github.com/huggingface/datasets/assets/140120605/ddfba089-426f-4659-9df4-7a634c948b9e)
![Screenshot from 2024-05-27 19-12-51](https://github.com/huggingface/datasets/assets/140120605/4cf8c0a1-650e-491b-86c8-b475c284a021)

### Steps to reproduce the bug

1. Build a dataset from audio files and cast the audio column.
2. Export it locally and push it to the Hub.
3. Compare the two: the locally saved parquet differs from the uploaded one.

```python
from datasets import Dataset, Audio

df = Dataset.from_csv("./datasets.csv")
df = df.cast_column("audio", Audio(16000))
df.to_parquet("./datasets.parquet")
df.push_to_hub(repo_id="************", token="**********************")
```

You can use the "try replicate case" archive for this: [replicate_packet.zip](https://github.com/huggingface/datasets/files/15457114/replicate_packet.zip)

### Expected behavior

Two parquet tables identical in content, which seems obvious.

### Environment info

Python 3.11+ (I tried it on 3.12 and got the same result)
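For debugging, a minimal sketch for inspecting what actually landed in the local parquet file (pyarrow ships with `datasets`; the path matches the snippet above):

```python
import pyarrow.parquet as pq

table = pq.read_table("./datasets.parquet")
print(table.schema)              # the audio column should be struct<bytes, path>
print(table.column("audio")[0])  # shows whether `bytes` is null locally
```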
null
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/6923/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/6923/timeline
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/6919
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6919/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6919/comments
https://api.github.com/repos/huggingface/datasets/issues/6919/events
https://github.com/huggingface/datasets/issues/6919
2,315,618,993
I_kwDODunzps6KBYqx
6,919
Invalid YAML in README.md: unknown tag !<tag:yaml.org,2002:python/tuple>
{ "login": "juanqui", "id": 67964, "node_id": "MDQ6VXNlcjY3OTY0", "avatar_url": "https://avatars.githubusercontent.com/u/67964?v=4", "gravatar_id": "", "url": "https://api.github.com/users/juanqui", "html_url": "https://github.com/juanqui", "followers_url": "https://api.github.com/users/juanqui/followers", "following_url": "https://api.github.com/users/juanqui/following{/other_user}", "gists_url": "https://api.github.com/users/juanqui/gists{/gist_id}", "starred_url": "https://api.github.com/users/juanqui/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/juanqui/subscriptions", "organizations_url": "https://api.github.com/users/juanqui/orgs", "repos_url": "https://api.github.com/users/juanqui/repos", "events_url": "https://api.github.com/users/juanqui/events{/privacy}", "received_events_url": "https://api.github.com/users/juanqui/received_events", "type": "User", "site_admin": false }
[]
open
false
null
[]
null
0
2024-05-24T14:59:45
2024-05-24T14:59:45
null
NONE
null
null
null
### Describe the bug

I wrote a notebook to load an existing dataset, process it, and upload it as a private dataset using `dataset.push_to_hub(...)` at the end. The push to hub is failing with:

```
ValueError: Invalid metadata in README.md.
- Invalid YAML in README.md: unknown tag !<tag:yaml.org,2002:python/tuple> (50:11)

 47 |       - 4
 48 |       - 4
 49 |       - 8
 50 |       - !!binary |
----------------^
 51 |         TwAAAA==
 52 |   '1': !!python/object/apply:nump ...
```

My dataset has a `train` and a `validation` split. These are the features:

```
{'c1': Value(dtype='string', id=None),
 'c2': Value(dtype='string', id=None),
 'c3': [{'value': Value(dtype='string', id=None), 'start': Value(dtype='int64', id=None), 'end': Value(dtype='int64', id=None), 'label': Value(dtype='string', id=None)}],
 'c4': Value(dtype='string', id=None),
 'c5': Value(dtype='string', id=None),
 'c6': Value(dtype='string', id=None),
 'c7': Value(dtype='string', id=None),
 'c8': Sequence(feature=Value(dtype='int32', id=None), length=-1, id=None),
 'c9': Sequence(feature=Value(dtype='int8', id=None), length=-1, id=None),
 'c10': Sequence(feature=Value(dtype='int8', id=None), length=-1, id=None),
 'labels': Sequence(feature=ClassLabel(names=['O', 'B-ABC', 'I-ABC', ...], id=None), length=-1, id=None),
 'c12': Sequence(feature=Value(dtype='string', id=None), length=-1, id=None)}
```

This used to work until I decided to cast the `labels` feature to a `Sequence(ClassLabel(...))` type with:

```python
ds['train'] = ds['train'].cast_column("labels", Sequence(ClassLabel(names=list(labels))))
ds['validation'] = ds['validation'].cast_column("labels", Sequence(ClassLabel(names=list(labels))))
```

### Steps to reproduce the bug

1. Start with any token classification dataset.
2. Add a `labels` column with data such as `[0,0,0,12,13,13,13,0,0]`.
3. Cast the label column from `Sequence` to `Sequence(ClassLabel(...))` with:

```python
labels = ['O', 'B-TEST', 'I-TEST']
ds = ds.cast_column("labels", Sequence(ClassLabel(names=labels)))
```

4. Push to hub with `ds.push_to_hub("me/awesome-stuff-dataset")`

### Expected behavior

I expected `push_to_hub` to successfully push my dataset to the hub without error.

### Environment info

Python 3.11.9 datasets==2.19.1 transformers==4.41.1 PyYAML==6.0.1
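A hedged guess at a workaround, assuming the python-specific YAML tags come from non-string (e.g. numpy) objects leaking into the label names; if `labels` already holds plain `str` values, this is a no-op and the cause lies elsewhere:

```python
from datasets import ClassLabel, Sequence

# Coerce every label name to a plain Python string before the cast.
labels = [str(label) for label in labels]
ds = ds.cast_column("labels", Sequence(ClassLabel(names=labels)))
```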
null
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/6919/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/6919/timeline
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/6918
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6918/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6918/comments
https://api.github.com/repos/huggingface/datasets/issues/6918/events
https://github.com/huggingface/datasets/issues/6918
2,315,322,738
I_kwDODunzps6KAQVy
6,918
NonMatchingSplitsSizesError when using data_dir
{ "login": "srehaag", "id": 86664538, "node_id": "MDQ6VXNlcjg2NjY0NTM4", "avatar_url": "https://avatars.githubusercontent.com/u/86664538?v=4", "gravatar_id": "", "url": "https://api.github.com/users/srehaag", "html_url": "https://github.com/srehaag", "followers_url": "https://api.github.com/users/srehaag/followers", "following_url": "https://api.github.com/users/srehaag/following{/other_user}", "gists_url": "https://api.github.com/users/srehaag/gists{/gist_id}", "starred_url": "https://api.github.com/users/srehaag/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/srehaag/subscriptions", "organizations_url": "https://api.github.com/users/srehaag/orgs", "repos_url": "https://api.github.com/users/srehaag/repos", "events_url": "https://api.github.com/users/srehaag/events{/privacy}", "received_events_url": "https://api.github.com/users/srehaag/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892857, "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
closed
false
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[ { "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false } ]
null
2
2024-05-24T12:43:39
2024-05-31T17:10:38
2024-05-31T17:10:38
NONE
null
null
null
### Describe the bug

Loading a dataset with a data_dir argument generates a NonMatchingSplitsSizesError if there are multiple directories in the dataset. This appears to happen because the expected split is calculated based on the data in all the directories, whereas the recorded split is calculated based on the data in the directory specified using the data_dir argument. This is recent behavior: until the past few weeks, loading using the data_dir argument worked without any issue.

### Steps to reproduce the bug

Simple test dataset available here: https://huggingface.co/datasets/srehaag/hf-bug-temp

The dataset contains two directories "data1" and "data2", each with a file called "train.parquet" with a 2 x 5 table.

```python
from datasets import load_dataset
dataset = load_dataset("srehaag/hf-bug-temp", data_dir="data1")
```

Generates:

```
---------------------------------------------------------------------------
NonMatchingSplitsSizesError               Traceback (most recent call last)
Cell In[3], line 2
      1 from datasets import load_dataset
----> 2 dataset = load_dataset("srehaag/hf-bug-temp", data_dir = "data1")

File ~/.python/current/lib/python3.10/site-packages/datasets/load.py:2609, in load_dataset(path, name, data_dir, data_files, split, cache_dir, features, download_config, download_mode, verification_mode, ignore_verifications, keep_in_memory, save_infos, revision, token, use_auth_token, task, streaming, num_proc, storage_options, trust_remote_code, **config_kwargs)
   2606     return builder_instance.as_streaming_dataset(split=split)
   2608 # Download and prepare data
-> 2609 builder_instance.download_and_prepare(
   2610     download_config=download_config,
   2611     download_mode=download_mode,
   2612     verification_mode=verification_mode,
   2613     num_proc=num_proc,
   2614     storage_options=storage_options,
   2615 )
   2617 # Build dataset for splits
   2618 keep_in_memory = (
   2619     keep_in_memory if keep_in_memory is not None else is_small_dataset(builder_instance.info.dataset_size)
   2620 )

File ~/.python/current/lib/python3.10/site-packages/datasets/builder.py:1027, in DatasetBuilder.download_and_prepare(self, output_dir, download_config, download_mode, verification_mode, ignore_verifications, try_from_hf_gcs, dl_manager, base_path, use_auth_token, file_format, max_shard_size, num_proc, storage_options, **download_and_prepare_kwargs)
   1025 if num_proc is not None:
   1026     prepare_split_kwargs["num_proc"] = num_proc
-> 1027 self._download_and_prepare(
   1028     dl_manager=dl_manager,
   1029     verification_mode=verification_mode,
   1030     **prepare_split_kwargs,
   1031     **download_and_prepare_kwargs,
   1032 )
   1033 # Sync info
   1034 self.info.dataset_size = sum(split.num_bytes for split in self.info.splits.values())

File ~/.python/current/lib/python3.10/site-packages/datasets/builder.py:1140, in DatasetBuilder._download_and_prepare(self, dl_manager, verification_mode, **prepare_split_kwargs)
   1137 dl_manager.manage_extracted_files()
   1139 if verification_mode == VerificationMode.BASIC_CHECKS or verification_mode == VerificationMode.ALL_CHECKS:
-> 1140     verify_splits(self.info.splits, split_dict)
   1142 # Update the info object with the splits.
   1143 self.info.splits = split_dict

File ~/.python/current/lib/python3.10/site-packages/datasets/utils/info_utils.py:101, in verify_splits(expected_splits, recorded_splits)
     95 bad_splits = [
     96     {"expected": expected_splits[name], "recorded": recorded_splits[name]}
     97     for name in expected_splits
     98     if expected_splits[name].num_examples != recorded_splits[name].num_examples
     99 ]
    100 if len(bad_splits) > 0:
--> 101     raise NonMatchingSplitsSizesError(str(bad_splits))
    102 logger.info("All the splits matched successfully.")

NonMatchingSplitsSizesError: [{'expected': SplitInfo(name='train', num_bytes=212, num_examples=10, shard_lengths=None, dataset_name=None), 'recorded': SplitInfo(name='train', num_bytes=106, num_examples=5, shard_lengths=None, dataset_name='hf-bug-temp')}]
```

By contrast, this loads the data from both data1/train.parquet and data2/train.parquet without any error message:

```python
from datasets import load_dataset
dataset = load_dataset("srehaag/hf-bug-temp")
```

### Expected behavior

Should load the 5 x 2 table from data1/train.parquet without error message.

### Environment info

Used Codespaces to simplify environment (see details below), but bug is present across various configurations.

- `datasets` version: 2.19.1
- Platform: Linux-6.5.0-1021-azure-x86_64-with-glibc2.31
- Python version: 3.10.13
- `huggingface_hub` version: 0.23.1
- PyArrow version: 16.1.0
- Pandas version: 2.2.2
- `fsspec` version: 2024.3.1
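A minimal sketch of a temporary workaround while this stands, assuming skipping verification is acceptable for your use case (`verification_mode` is a public `load_dataset` argument):

```python
from datasets import load_dataset

# Skips the split-size verification that raises NonMatchingSplitsSizesError;
# it does not fix the wrong expected sizes, it only disables the check.
dataset = load_dataset("srehaag/hf-bug-temp", data_dir="data1",
                       verification_mode="no_checks")
```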
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/6918/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/6918/timeline
null
completed
false
https://api.github.com/repos/huggingface/datasets/issues/6917
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6917/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6917/comments
https://api.github.com/repos/huggingface/datasets/issues/6917/events
https://github.com/huggingface/datasets/issues/6917
2,314,683,663
I_kwDODunzps6J90UP
6,917
WinError 32 The process cannot access the file during load_dataset
{ "login": "elwe-2808", "id": 56682168, "node_id": "MDQ6VXNlcjU2NjgyMTY4", "avatar_url": "https://avatars.githubusercontent.com/u/56682168?v=4", "gravatar_id": "", "url": "https://api.github.com/users/elwe-2808", "html_url": "https://github.com/elwe-2808", "followers_url": "https://api.github.com/users/elwe-2808/followers", "following_url": "https://api.github.com/users/elwe-2808/following{/other_user}", "gists_url": "https://api.github.com/users/elwe-2808/gists{/gist_id}", "starred_url": "https://api.github.com/users/elwe-2808/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/elwe-2808/subscriptions", "organizations_url": "https://api.github.com/users/elwe-2808/orgs", "repos_url": "https://api.github.com/users/elwe-2808/repos", "events_url": "https://api.github.com/users/elwe-2808/events{/privacy}", "received_events_url": "https://api.github.com/users/elwe-2808/received_events", "type": "User", "site_admin": false }
[]
open
false
null
[]
null
0
2024-05-24T07:54:51
2024-05-24T07:54:51
null
NONE
null
null
null
### Describe the bug

When I try to load the opus_books dataset from Hugging Face (following the [guide on the website](https://huggingface.co/docs/transformers/main/en/tasks/translation))

```python
from datasets import load_dataset, Dataset
dataset = load_dataset("Helsinki-NLP/opus_books", "en-fr", features=["id", "translation"])
```

I get an error:

`PermissionError: [WinError 32] The process cannot access the file because it is being used by another process: 'C:/Users/Me/.cache/huggingface/datasets/Helsinki-NLP___parquet/ca-de-a39f1ef185b9b73b/0.0.0/2a3b91fbd88a2c90d1dbbb32b460cf621d31bd5b05b934492fdef7d8d6f236ec.incomplete\\parquet-train-00000-00000-of-NNNNN.arrow'`

<details><summary>Full stacktrace</summary>
<p>

```
AttributeError                            Traceback (most recent call last)
File c:\Users\Me\.conda\envs\ia\lib\site-packages\datasets\builder.py:1858, in ArrowBasedBuilder._prepare_split_single(self, gen_kwargs, fpath, file_format, max_shard_size, job_id)
   1857 _time = time.time()
-> 1858 for _, table in generator:
   1859     if max_shard_size is not None and writer._num_bytes > max_shard_size:

File c:\Users\Me\.conda\envs\ia\lib\site-packages\datasets\packaged_modules\parquet\parquet.py:59, in Parquet._generate_tables(self, files)
     58 def _generate_tables(self, files):
---> 59     schema = self.config.features.arrow_schema if self.config.features is not None else None
     60     if self.config.features is not None and self.config.columns is not None:

AttributeError: 'list' object has no attribute 'arrow_schema'

During handling of the above exception, another exception occurred:

AttributeError                            Traceback (most recent call last)
File c:\Users\Me\.conda\envs\ia\lib\site-packages\datasets\builder.py:1882, in ArrowBasedBuilder._prepare_split_single(self, gen_kwargs, fpath, file_format, max_shard_size, job_id)
   1881 num_shards = shard_id + 1
-> 1882 num_examples, num_bytes = writer.finalize()
   1883 writer.close()

File c:\Users\Me\.conda\envs\ia\lib\site-packages\datasets\arrow_writer.py:584, in ArrowWriter.finalize(self, close_stream)
    583 # If schema is known, infer features even if no examples were written
--> 584 if self.pa_writer is None and self.schema:
...
--> 627     os.unlink(fullname)
    628 except OSError:
    629     onerror(os.unlink, fullname, sys.exc_info())

PermissionError: [WinError 32] The process cannot access the file because it is being used by another process: 'C:/Users/Me/.cache/huggingface/datasets/Helsinki-NLP___parquet/ca-de-a39f1ef185b9b73b/0.0.0/2a3b91fbd88a2c90d1dbbb32b460cf621d31bd5b05b934492fdef7d8d6f236ec.incomplete\\parquet-train-00000-00000-of-NNNNN.arrow'
```

</p>
</details>

### Steps to reproduce the bug

Just execute these lines:

```python
from datasets import load_dataset, Dataset
dataset = load_dataset("Helsinki-NLP/opus_books", "en-fr", features=["id", "translation"])
```

### Expected behavior

I expect the dataset to be loaded without any errors.

### Environment info

| Package | Version |
|---------|---------|
| transformers | 4.37.2 |
| python | 3.9.19 |
| pytorch | 2.3.0 |
| datasets | 2.12.0 |
| arrow | 1.2.3 |

I am using Conda on Windows 11.
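Worth noting from the first traceback: `'list' object has no attribute 'arrow_schema'` suggests the `features` argument expects a `datasets.Features` object rather than a list of column names. A hedged first check, not a confirmed fix, is to drop the argument and let the schema be inferred:

```python
from datasets import load_dataset

# Let datasets infer the features instead of passing a plain list, which the
# parquet builder cannot turn into an arrow schema.
dataset = load_dataset("Helsinki-NLP/opus_books", "en-fr")
```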
null
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/6917/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/6917/timeline
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/6916
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6916/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6916/comments
https://api.github.com/repos/huggingface/datasets/issues/6916/events
https://github.com/huggingface/datasets/issues/6916
2,311,675,564
I_kwDODunzps6JyV6s
6,916
```push_to_hub()``` - Prevent Automatic Generation of Splits
{ "login": "jetlime", "id": 29337128, "node_id": "MDQ6VXNlcjI5MzM3MTI4", "avatar_url": "https://avatars.githubusercontent.com/u/29337128?v=4", "gravatar_id": "", "url": "https://api.github.com/users/jetlime", "html_url": "https://github.com/jetlime", "followers_url": "https://api.github.com/users/jetlime/followers", "following_url": "https://api.github.com/users/jetlime/following{/other_user}", "gists_url": "https://api.github.com/users/jetlime/gists{/gist_id}", "starred_url": "https://api.github.com/users/jetlime/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/jetlime/subscriptions", "organizations_url": "https://api.github.com/users/jetlime/orgs", "repos_url": "https://api.github.com/users/jetlime/repos", "events_url": "https://api.github.com/users/jetlime/events{/privacy}", "received_events_url": "https://api.github.com/users/jetlime/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
0
2024-05-22T23:52:15
2024-05-23T00:07:53
2024-05-23T00:07:53
NONE
null
null
null
### Describe the bug

I currently have a dataset which has not been split. When pushing the dataset to my Hugging Face dataset repository, it is split into a testing and a training set. How can I prevent the split from happening?

### Steps to reproduce the bug

1. Have an unsplit dataset

```python
Dataset({
    features: ['input', 'output', 'Attack', '__index_level_0__'],
    num_rows: 944685
})
```

2. Push it to huggingface

```python
dataset.push_to_hub(dataset_name)
```

3. On the Hugging Face dataset repo, the dataset then appears to be split: ![image](https://github.com/huggingface/datasets/assets/29337128/b4fbc141-42b0-4f49-98df-dd479648fe09)

4. Indeed, when loading the dataset from this repo, it comes back split into testing and training sets.

```python
from datasets import load_dataset, Dataset

dataset = load_dataset("Jetlime/NF-CSE-CIC-IDS2018-v2", streaming=True)
dataset
```

output:

```
IterableDatasetDict({
    train: IterableDataset({
        features: ['input', 'output', 'Attack', '__index_level_0__'],
        n_shards: 2
    }),
    test: IterableDataset({
        features: ['input', 'output', 'Attack', '__index_level_0__'],
        n_shards: 1
    })
})
```

### Expected behavior

The dataset should not be split, since no split was requested.

### Environment info

- `datasets` version: 2.19.1
- Platform: Linux-6.2.0-35-generic-x86_64-with-glibc2.35
- Python version: 3.10.12
- `huggingface_hub` version: 0.23.0
- PyArrow version: 15.0.2
- Pandas version: 2.2.2
- `fsspec` version: 2024.3.1
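Two hedged pointers, not confirmed fixes: a bare `Dataset.push_to_hub` uploads a single "train" split by default, so the extra "test" split likely comes from files already present in the repo; and whatever the repo state, a single split can always be selected at load time:

```python
from datasets import load_dataset

# Select only the split you want, regardless of how the repo is laid out.
ds = load_dataset("Jetlime/NF-CSE-CIC-IDS2018-v2", split="train", streaming=True)
```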
{ "login": "jetlime", "id": 29337128, "node_id": "MDQ6VXNlcjI5MzM3MTI4", "avatar_url": "https://avatars.githubusercontent.com/u/29337128?v=4", "gravatar_id": "", "url": "https://api.github.com/users/jetlime", "html_url": "https://github.com/jetlime", "followers_url": "https://api.github.com/users/jetlime/followers", "following_url": "https://api.github.com/users/jetlime/following{/other_user}", "gists_url": "https://api.github.com/users/jetlime/gists{/gist_id}", "starred_url": "https://api.github.com/users/jetlime/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/jetlime/subscriptions", "organizations_url": "https://api.github.com/users/jetlime/orgs", "repos_url": "https://api.github.com/users/jetlime/repos", "events_url": "https://api.github.com/users/jetlime/events{/privacy}", "received_events_url": "https://api.github.com/users/jetlime/received_events", "type": "User", "site_admin": false }
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/6916/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/6916/timeline
null
completed
false
https://api.github.com/repos/huggingface/datasets/issues/6913
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6913/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6913/comments
https://api.github.com/repos/huggingface/datasets/issues/6913/events
https://github.com/huggingface/datasets/issues/6913
2,309,605,889
I_kwDODunzps6JqcoB
6,913
Column order is nondeterministic when loading from JSON
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892857, "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
closed
false
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[ { "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false } ]
null
0
2024-05-22T05:30:14
2024-05-29T13:12:24
2024-05-29T13:12:24
MEMBER
null
null
null
As reported by @meg-huggingface, the order of the JSON object keys is not preserved while loading a dataset from a JSON file with a list of objects. For example, when loading a JSON file with a list of objects, each with the following ordered keys:
- [ID, Language, Topic],

the resulting dataset may have columns:
- [ID, Topic, Language], or
- [Topic, Language, ID], or
- [Topic, ID, Language],...

This issue is caused by the use of a Python set (which does not preserve order): https://github.com/huggingface/datasets/blob/60d21efbc01e15d0b596ac1072750cbecd91548a/src/datasets/packaged_modules/json/json.py#L168 introduced in
- #5772
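A minimal sketch of the pitfall and a drop-in order-preserving alternative (`dict.fromkeys` deduplicates while keeping first-seen order; the keys are illustrative):

```python
row_keys = ["ID", "Language", "Topic"]
columns_from_set = set(row_keys)                 # iteration order is arbitrary
columns_ordered = list(dict.fromkeys(row_keys))  # ['ID', 'Language', 'Topic']
```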
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/6913/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/6913/timeline
null
completed
false
https://api.github.com/repos/huggingface/datasets/issues/6912
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6912/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6912/comments
https://api.github.com/repos/huggingface/datasets/issues/6912/events
https://github.com/huggingface/datasets/issues/6912
2,309,365,961
I_kwDODunzps6JpiDJ
6,912
Add MedImg for streaming
{ "login": "lhallee", "id": 72926928, "node_id": "MDQ6VXNlcjcyOTI2OTI4", "avatar_url": "https://avatars.githubusercontent.com/u/72926928?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhallee", "html_url": "https://github.com/lhallee", "followers_url": "https://api.github.com/users/lhallee/followers", "following_url": "https://api.github.com/users/lhallee/following{/other_user}", "gists_url": "https://api.github.com/users/lhallee/gists{/gist_id}", "starred_url": "https://api.github.com/users/lhallee/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhallee/subscriptions", "organizations_url": "https://api.github.com/users/lhallee/orgs", "repos_url": "https://api.github.com/users/lhallee/repos", "events_url": "https://api.github.com/users/lhallee/events{/privacy}", "received_events_url": "https://api.github.com/users/lhallee/received_events", "type": "User", "site_admin": false }
[ { "id": 2067376369, "node_id": "MDU6TGFiZWwyMDY3Mzc2MzY5", "url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20request", "name": "dataset request", "color": "e99695", "default": false, "description": "Requesting to add a new dataset" } ]
open
false
null
[]
null
8
2024-05-22T00:55:30
2024-09-05T16:53:54
null
NONE
null
null
null
### Feature request Host the MedImg dataset (similar to Imagenet but for biomedical images). ### Motivation There is a clear need for biomedical image foundation models and large scale biomedical datasets that are easily streamable. This would be an excellent tool for the biomedical community. ### Your contribution MedImg can be found [here](https://www.cuilab.cn/medimg/#).
null
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/6912/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/6912/timeline
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/6908
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6908/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6908/comments
https://api.github.com/repos/huggingface/datasets/issues/6908/events
https://github.com/huggingface/datasets/issues/6908
2,304,958,116
I_kwDODunzps6JYt6k
6,908
Fail to load "stas/c4-en-10k" dataset since 2.16 version
{ "login": "guch8017", "id": 38173059, "node_id": "MDQ6VXNlcjM4MTczMDU5", "avatar_url": "https://avatars.githubusercontent.com/u/38173059?v=4", "gravatar_id": "", "url": "https://api.github.com/users/guch8017", "html_url": "https://github.com/guch8017", "followers_url": "https://api.github.com/users/guch8017/followers", "following_url": "https://api.github.com/users/guch8017/following{/other_user}", "gists_url": "https://api.github.com/users/guch8017/gists{/gist_id}", "starred_url": "https://api.github.com/users/guch8017/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/guch8017/subscriptions", "organizations_url": "https://api.github.com/users/guch8017/orgs", "repos_url": "https://api.github.com/users/guch8017/repos", "events_url": "https://api.github.com/users/guch8017/events{/privacy}", "received_events_url": "https://api.github.com/users/guch8017/received_events", "type": "User", "site_admin": false }
[]
closed
false
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[ { "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false } ]
null
2
2024-05-20T02:43:59
2024-05-24T10:58:09
2024-05-24T10:58:09
NONE
null
null
null
### Describe the bug

After updating the datasets library to version 2.16+ (I tested 2.16, 2.19.0 and 2.19.1), loading the stas/c4-en-10k dataset with

```python
from datasets import load_dataset

dataset = load_dataset("stas/c4-en-10k")
```

raises a UnicodeDecodeError:

```
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "/home/*/conda3/envs/watermark/lib/python3.10/site-packages/datasets/load.py", line 2523, in load_dataset
    builder_instance = load_dataset_builder(
  File "/home/*/conda3/envs/watermark/lib/python3.10/site-packages/datasets/load.py", line 2195, in load_dataset_builder
    dataset_module = dataset_module_factory(
  File "/home/*/conda3/envs/watermark/lib/python3.10/site-packages/datasets/load.py", line 1846, in dataset_module_factory
    raise e1 from None
  File "/home/*/conda3/envs/watermark/lib/python3.10/site-packages/datasets/load.py", line 1798, in dataset_module_factory
    can_load_config_from_parquet_export = "DEFAULT_CONFIG_NAME" not in f.read()
  File "/home/*/conda3/envs/watermark/lib/python3.10/codecs.py", line 322, in decode
    (result, consumed) = self._buffer_decode(data, self.errors, final)
UnicodeDecodeError: 'utf-8' codec can't decode byte 0x8b in position 1: invalid start byte
```

It turns out the file fetched via `fs.open` is gzip-compressed, but it gets parsed as plain UTF-8 text:

```python
import gzip

from huggingface_hub import HfFileSystem

fs = HfFileSystem(endpoint="https://huggingface.co")
with fs.open("datasets/stas/c4-en-10k/c4-en-10k.py", "rb") as f:
    data = f.read()  # gzip bytes, beginning with b'\x1f\x8b\x08\x00\x00\tn\x88\x00...'
data2 = gzip.decompress(data)  # what we want: b'# coding=utf-8\n# Copyright 2020 The HuggingFace Datasets...'
```

### Steps to reproduce the bug

1. Install datasets between version 2.16 and 2.19.
2. Use the `datasets.load_dataset` method to load the `stas/c4-en-10k` dataset.

### Expected behavior

The dataset loads normally.

### Environment info

Platform = Linux-5.4.0-159-generic-x86_64-with-glibc2.35
Python = 3.10.14
Datasets = 2.19
{ "login": "guch8017", "id": 38173059, "node_id": "MDQ6VXNlcjM4MTczMDU5", "avatar_url": "https://avatars.githubusercontent.com/u/38173059?v=4", "gravatar_id": "", "url": "https://api.github.com/users/guch8017", "html_url": "https://github.com/guch8017", "followers_url": "https://api.github.com/users/guch8017/followers", "following_url": "https://api.github.com/users/guch8017/following{/other_user}", "gists_url": "https://api.github.com/users/guch8017/gists{/gist_id}", "starred_url": "https://api.github.com/users/guch8017/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/guch8017/subscriptions", "organizations_url": "https://api.github.com/users/guch8017/orgs", "repos_url": "https://api.github.com/users/guch8017/repos", "events_url": "https://api.github.com/users/guch8017/events{/privacy}", "received_events_url": "https://api.github.com/users/guch8017/received_events", "type": "User", "site_admin": false }
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/6908/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/6908/timeline
null
completed
false
https://api.github.com/repos/huggingface/datasets/issues/6907
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6907/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6907/comments
https://api.github.com/repos/huggingface/datasets/issues/6907/events
https://github.com/huggingface/datasets/issues/6907
2,303,855,833
I_kwDODunzps6JUgzZ
6,907
Support the deserialization of json lines files comprised of lists
{ "login": "umarbutler", "id": 8473183, "node_id": "MDQ6VXNlcjg0NzMxODM=", "avatar_url": "https://avatars.githubusercontent.com/u/8473183?v=4", "gravatar_id": "", "url": "https://api.github.com/users/umarbutler", "html_url": "https://github.com/umarbutler", "followers_url": "https://api.github.com/users/umarbutler/followers", "following_url": "https://api.github.com/users/umarbutler/following{/other_user}", "gists_url": "https://api.github.com/users/umarbutler/gists{/gist_id}", "starred_url": "https://api.github.com/users/umarbutler/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/umarbutler/subscriptions", "organizations_url": "https://api.github.com/users/umarbutler/orgs", "repos_url": "https://api.github.com/users/umarbutler/repos", "events_url": "https://api.github.com/users/umarbutler/events{/privacy}", "received_events_url": "https://api.github.com/users/umarbutler/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892871, "node_id": "MDU6TGFiZWwxOTM1ODkyODcx", "url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement", "name": "enhancement", "color": "a2eeef", "default": true, "description": "New feature or request" } ]
open
false
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[ { "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false } ]
null
1
2024-05-18T05:07:23
2024-05-18T08:53:28
null
NONE
null
null
null
### Feature request

I manage a somewhat large and popular Hugging Face dataset known as the [Open Australian Legal Corpus](https://huggingface.co/datasets/umarbutler/open-australian-legal-corpus). I recently updated my corpus to be stored in a json lines file where each line is an array and each element represents a value at a particular column. Previously, my corpus was stored as a json lines file where each line was a dictionary and the keys were the fields.

Essentially, a line in my json lines file used to look like this:

```json
{"version_id":"","type":"","jurisdiction":"","source":"","citation":"","url":"","when_scraped":"","text":""}
```

And now it looks like this:

```json
["","","","","","","",""]
```

This saves 65 bytes per document and allows me to serialise and deserialise documents very quickly via `msgspec`. After making this change, I found that `datasets` was incapable of deserialising my corpus without a custom loading script, even when I ensured that the `dataset_info` field in my dataset card contained the desired names of my features. I would like to request that functionality be added to support this format, which is more memory-efficient and faster than using dictionaries.

### Motivation

The [documentation](https://huggingface.co/docs/datasets/en/dataset_script) for creating dataset loading scripts asserts that:

> In the next major release, the new safety features of 🤗 Datasets will disable running dataset loading scripts by default, and you will have to pass trust_remote_code=True to load datasets that require running a dataset script.

I would rather not require my users to pass `trust_remote_code=True`, which means that I will need built-in support for this format.

### Your contribution

I would be happy to submit a PR for this if this is something you would incorporate into `datasets` and if I can be pointed to where the code would need to go.
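Until such support exists, a minimal sketch of loading this layout without a loading script, assuming a local `corpus.jsonl` whose array positions map to the column names below (the file name and mapping are illustrative):

```python
import json

from datasets import Dataset

COLUMNS = ["version_id", "type", "jurisdiction", "source",
           "citation", "url", "when_scraped", "text"]

def gen():
    with open("corpus.jsonl", encoding="utf-8") as f:
        for line in f:
            values = json.loads(line)         # each line is a JSON array
            yield dict(zip(COLUMNS, values))  # rebuild the column mapping

ds = Dataset.from_generator(gen)
```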
null
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/6907/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/6907/timeline
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/6906
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6906/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6906/comments
https://api.github.com/repos/huggingface/datasets/issues/6906/events
https://github.com/huggingface/datasets/issues/6906
2,303,679,119
I_kwDODunzps6JT1qP
6,906
irc_disentangle - Issue with splitting data
{ "login": "eor51355", "id": 114260604, "node_id": "U_kgDOBs96fA", "avatar_url": "https://avatars.githubusercontent.com/u/114260604?v=4", "gravatar_id": "", "url": "https://api.github.com/users/eor51355", "html_url": "https://github.com/eor51355", "followers_url": "https://api.github.com/users/eor51355/followers", "following_url": "https://api.github.com/users/eor51355/following{/other_user}", "gists_url": "https://api.github.com/users/eor51355/gists{/gist_id}", "starred_url": "https://api.github.com/users/eor51355/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/eor51355/subscriptions", "organizations_url": "https://api.github.com/users/eor51355/orgs", "repos_url": "https://api.github.com/users/eor51355/repos", "events_url": "https://api.github.com/users/eor51355/events{/privacy}", "received_events_url": "https://api.github.com/users/eor51355/received_events", "type": "User", "site_admin": false }
[]
closed
false
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[ { "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false } ]
null
6
2024-05-17T23:19:37
2024-07-16T00:21:56
2024-07-08T06:18:08
NONE
null
null
null
### Describe the bug

I am trying to access your dataset through Python using `datasets.load_dataset("irc_disentangle")` and I am getting this error message: `ValueError: Instruction "train" corresponds to no data!`

### Steps to reproduce the bug

```python
import datasets

ds = datasets.load_dataset('irc_disentangle')
ds
```

### Expected behavior

The data is supposed to load into `ds` and be accessible as such: `ds['train'][1050]`, `ds['train'][1055]`

### Environment info

I tried Python 3.12 and 3.10
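A hedged first step that sometimes resolves "corresponds to no data" when a previous download left an empty cache entry (not a confirmed fix for this dataset):

```python
import datasets

# Force a fresh download in case an earlier, partial one poisoned the cache.
ds = datasets.load_dataset("irc_disentangle", download_mode="force_redownload")
```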
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/6906/reactions", "total_count": 1, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/6906/timeline
null
completed
false
https://api.github.com/repos/huggingface/datasets/issues/6905
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6905/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6905/comments
https://api.github.com/repos/huggingface/datasets/issues/6905/events
https://github.com/huggingface/datasets/issues/6905
2,303,098,587
I_kwDODunzps6JRn7b
6,905
Extraction protocol for arrow files is not defined
{ "login": "radulescupetru", "id": 26553095, "node_id": "MDQ6VXNlcjI2NTUzMDk1", "avatar_url": "https://avatars.githubusercontent.com/u/26553095?v=4", "gravatar_id": "", "url": "https://api.github.com/users/radulescupetru", "html_url": "https://github.com/radulescupetru", "followers_url": "https://api.github.com/users/radulescupetru/followers", "following_url": "https://api.github.com/users/radulescupetru/following{/other_user}", "gists_url": "https://api.github.com/users/radulescupetru/gists{/gist_id}", "starred_url": "https://api.github.com/users/radulescupetru/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/radulescupetru/subscriptions", "organizations_url": "https://api.github.com/users/radulescupetru/orgs", "repos_url": "https://api.github.com/users/radulescupetru/repos", "events_url": "https://api.github.com/users/radulescupetru/events{/privacy}", "received_events_url": "https://api.github.com/users/radulescupetru/received_events", "type": "User", "site_admin": false }
[]
open
false
null
[]
null
0
2024-05-17T16:01:41
2024-05-17T16:01:41
null
NONE
null
null
null
### Describe the bug Passing files with the `.arrow` extension into the data_files argument is very slow, at least when `streaming=True`. ### Steps to reproduce the bug Basically it goes through the `_get_extraction_protocol` method located [here](https://github.com/huggingface/datasets/blob/main/src/datasets/utils/file_utils.py#L820). The method first checks a set of base known extensions, where `arrow` is not defined, so it proceeds to determine the compression with the magic-number method, which is slow when dealing with a lot of files stored in S3. Looking at the predefined list below, I don't see `arrow` in there either, so in the end it returns None: ``` MAGIC_NUMBER_TO_COMPRESSION_PROTOCOL = { bytes.fromhex("504B0304"): "zip", bytes.fromhex("504B0506"): "zip", # empty archive bytes.fromhex("504B0708"): "zip", # spanned archive bytes.fromhex("425A68"): "bz2", bytes.fromhex("1F8B"): "gzip", bytes.fromhex("FD377A585A00"): "xz", bytes.fromhex("04224D18"): "lz4", bytes.fromhex("28B52FFD"): "zstd", } ``` ### Expected behavior My expectation is that `arrow` would be in the known lists, so it would return None without going through the magic-number method. ### Environment info datasets 2.19.0
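For illustration, here is a simplified sketch (not the library's exact code) of the magic-number fallback described above; it shows why the strategy is expensive for remote files — every file needs its leading bytes fetched before any match can be attempted:

```python
from typing import Optional

MAGIC_NUMBER_TO_COMPRESSION_PROTOCOL = {
    bytes.fromhex("504B0304"): "zip",
    bytes.fromhex("504B0506"): "zip",  # empty archive
    bytes.fromhex("504B0708"): "zip",  # spanned archive
    bytes.fromhex("425A68"): "bz2",
    bytes.fromhex("1F8B"): "gzip",
    bytes.fromhex("FD377A585A00"): "xz",
    bytes.fromhex("04224D18"): "lz4",
    bytes.fromhex("28B52FFD"): "zstd",
}

def sniff_compression(path: str) -> Optional[str]:
    """Guess the compression protocol from a file's leading bytes."""
    max_len = max(len(magic) for magic in MAGIC_NUMBER_TO_COMPRESSION_PROTOCOL)
    with open(path, "rb") as f:
        header = f.read(max_len)
    for magic, protocol in MAGIC_NUMBER_TO_COMPRESSION_PROTOCOL.items():
        if header.startswith(magic):
            return protocol
    return None  # for .arrow files this read was pure overhead
```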
null
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/6905/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/6905/timeline
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/6903
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6903/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6903/comments
https://api.github.com/repos/huggingface/datasets/issues/6903/events
https://github.com/huggingface/datasets/issues/6903
2,300,436,053
I_kwDODunzps6JHd5V
6,903
Add the option of saving in parquet instead of arrow
{ "login": "arita37", "id": 18707623, "node_id": "MDQ6VXNlcjE4NzA3NjIz", "avatar_url": "https://avatars.githubusercontent.com/u/18707623?v=4", "gravatar_id": "", "url": "https://api.github.com/users/arita37", "html_url": "https://github.com/arita37", "followers_url": "https://api.github.com/users/arita37/followers", "following_url": "https://api.github.com/users/arita37/following{/other_user}", "gists_url": "https://api.github.com/users/arita37/gists{/gist_id}", "starred_url": "https://api.github.com/users/arita37/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/arita37/subscriptions", "organizations_url": "https://api.github.com/users/arita37/orgs", "repos_url": "https://api.github.com/users/arita37/repos", "events_url": "https://api.github.com/users/arita37/events{/privacy}", "received_events_url": "https://api.github.com/users/arita37/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892871, "node_id": "MDU6TGFiZWwxOTM1ODkyODcx", "url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement", "name": "enhancement", "color": "a2eeef", "default": true, "description": "New feature or request" } ]
open
false
null
[]
null
8
2024-05-16T13:35:51
2024-06-14T16:24:31
null
NONE
null
null
null
### Feature request In dataset.save_to_disk('/path/to/save/dataset'), add the option to save in Parquet format, e.g. dataset.save_to_disk('/path/to/save/dataset', format="parquet"), because Arrow is not used for production big data (only Parquet). ### Motivation Arrow is not used for production big data (only Parquet). ### Your contribution I can do the testing!
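Until such an option exists, a possible workaround sketch is `Dataset.to_parquet`, which already exports a dataset to a single Parquet file (the path below is illustrative):

```python
from datasets import Dataset

ds = Dataset.from_dict({"a": [1, 2, 3]})
# Workaround: write Parquet directly instead of save_to_disk's Arrow files.
ds.to_parquet("/path/to/save/dataset/data.parquet")
```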
null
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/6903/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/6903/timeline
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/6901
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6901/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6901/comments
https://api.github.com/repos/huggingface/datasets/issues/6901/events
https://github.com/huggingface/datasets/issues/6901
2,300,167,465
I_kwDODunzps6JGcUp
6,901
HTTPError 403 raised by CLI convert_to_parquet when creating script branch on 3rd party repos
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892857, "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
closed
false
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[ { "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false } ]
null
0
2024-05-16T11:40:22
2024-05-16T12:51:06
2024-05-16T12:51:06
MEMBER
null
null
null
CLI convert_to_parquet cannot create "script" branch on 3rd party repos. It can only create it on repos where the user executing the script has write access. Otherwise, a 403 Forbidden HTTPError is raised: ``` Traceback (most recent call last): File "/usr/local/lib/python3.10/dist-packages/huggingface_hub/utils/_errors.py", line 304, in hf_raise_for_status response.raise_for_status() File "/usr/local/lib/python3.10/dist-packages/requests/models.py", line 1021, in raise_for_status raise HTTPError(http_error_msg, response=self) requests.exceptions.HTTPError: 403 Client Error: Forbidden for url: https://huggingface.co/api/datasets/ORG/DATASET/branch/script The above exception was the direct cause of the following exception: Traceback (most recent call last): File "/usr/local/bin/datasets-cli", line 8, in <module> sys.exit(main()) File "/usr/local/lib/python3.10/dist-packages/datasets/commands/datasets_cli.py", line 41, in main service.run() File "/usr/local/lib/python3.10/dist-packages/datasets/commands/convert_to_parquet.py", line 92, in run create_branch(dataset_id, branch="script", repo_type="dataset", token=token, exist_ok=True) File "/usr/local/lib/python3.10/dist-packages/huggingface_hub/utils/_validators.py", line 114, in _inner_fn return fn(*args, **kwargs) File "/usr/local/lib/python3.10/dist-packages/huggingface_hub/hf_api.py", line 5503, in create_branch hf_raise_for_status(response) File "/usr/local/lib/python3.10/dist-packages/huggingface_hub/utils/_errors.py", line 367, in hf_raise_for_status raise HfHubHTTPError(message, response=response) from e huggingface_hub.utils._errors.HfHubHTTPError: (Request ID: Root=1-6645ee0d-4db1ed8a1fbe04956be15897;139a6e23-df7d-4f62-b5ba-adb6d8e6e696) 403 Forbidden: Forbidden: cannot write to script. Cannot access content at: https://huggingface.co/api/datasets/ORG/DATASET/branch/script. If you are trying to create or update content,make sure you have a token with the `write` role. ```
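A hedged sketch of one way the CLI could surface a clearer message (`create_branch` and `HfHubHTTPError` are existing `huggingface_hub` names; the wrapper function itself is hypothetical):

```python
from huggingface_hub import create_branch
from huggingface_hub.utils import HfHubHTTPError

def create_script_branch(dataset_id: str, token=None) -> None:
    """Create the 'script' branch, failing with an actionable message on 403."""
    try:
        create_branch(dataset_id, branch="script", repo_type="dataset", token=token, exist_ok=True)
    except HfHubHTTPError as err:
        if err.response is not None and err.response.status_code == 403:
            raise PermissionError(
                f"No write access to '{dataset_id}': convert_to_parquet can only "
                "create the 'script' branch on repos your token can write to."
            ) from err
        raise
```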
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/6901/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/6901/timeline
null
completed
false
https://api.github.com/repos/huggingface/datasets/issues/6900
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6900/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6900/comments
https://api.github.com/repos/huggingface/datasets/issues/6900/events
https://github.com/huggingface/datasets/issues/6900
2,298,489,733
I_kwDODunzps6JACuF
6,900
[WebDataset] KeyError with user-defined `Features` when a field is missing in an example
{ "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "repos_url": "https://api.github.com/users/lhoestq/repos", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
5
2024-05-15T17:48:34
2024-06-28T09:30:13
2024-06-28T09:30:13
MEMBER
null
null
null
reported at https://huggingface.co/datasets/ProGamerGov/synthetic-dataset-1m-dalle3-high-quality-captions/discussions/1 ``` File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/packaged_modules/webdataset/webdataset.py", line 109, in _generate_examples example[field_name] = {"path": example["__key__"] + "." + field_name, "bytes": example[field_name]} ```
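A hedged sketch of a defensive variant of the quoted line — not necessarily the fix that was shipped — which tolerates examples missing a user-declared field instead of raising `KeyError`:

```python
def fill_binary_field(example: dict, field_name: str) -> None:
    """Wrap a raw WebDataset field as {'path', 'bytes'}, tolerating absence."""
    if field_name in example:
        example[field_name] = {
            "path": example["__key__"] + "." + field_name,
            "bytes": example[field_name],
        }
    else:
        example[field_name] = None  # assumption: None is an acceptable filler
```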
{ "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "repos_url": "https://api.github.com/users/lhoestq/repos", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "type": "User", "site_admin": false }
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/6900/reactions", "total_count": 2, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 2 }
https://api.github.com/repos/huggingface/datasets/issues/6900/timeline
null
completed
false
https://api.github.com/repos/huggingface/datasets/issues/6899
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6899/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6899/comments
https://api.github.com/repos/huggingface/datasets/issues/6899/events
https://github.com/huggingface/datasets/issues/6899
2,298,059,597
I_kwDODunzps6I-ZtN
6,899
List of dictionary features get standardized
{ "login": "sohamparikh", "id": 11831521, "node_id": "MDQ6VXNlcjExODMxNTIx", "avatar_url": "https://avatars.githubusercontent.com/u/11831521?v=4", "gravatar_id": "", "url": "https://api.github.com/users/sohamparikh", "html_url": "https://github.com/sohamparikh", "followers_url": "https://api.github.com/users/sohamparikh/followers", "following_url": "https://api.github.com/users/sohamparikh/following{/other_user}", "gists_url": "https://api.github.com/users/sohamparikh/gists{/gist_id}", "starred_url": "https://api.github.com/users/sohamparikh/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/sohamparikh/subscriptions", "organizations_url": "https://api.github.com/users/sohamparikh/orgs", "repos_url": "https://api.github.com/users/sohamparikh/repos", "events_url": "https://api.github.com/users/sohamparikh/events{/privacy}", "received_events_url": "https://api.github.com/users/sohamparikh/received_events", "type": "User", "site_admin": false }
[]
open
false
null
[]
null
0
2024-05-15T14:11:35
2024-05-15T14:11:35
null
NONE
null
null
null
### Describe the bug Hi, I'm trying to create an HF dataset from a list using Dataset.from_list. Each sample in the list is a dict with the same keys (which will be my features). The values for each feature are a list of dictionaries, and each such dictionary has a different set of keys. However, the datasets library standardizes all dictionaries under a feature and adds all possible keys (with None value) from all the dictionaries under that feature. How can I keep the same set of keys as in the original list for each dictionary under a feature? ### Steps to reproduce the bug ``` from datasets import Dataset # Define a function to generate a sample with "tools" feature def generate_sample(): # Generate random sample data sample_data = { "text": "Sample text", "feature_1": [] } # Add feature_1 with random keys for this sample feature_1 = [{"key1": "value1"}, {"key2": "value2"}] # Example feature_1 with random keys sample_data["feature_1"].extend(feature_1) return sample_data # Generate multiple samples num_samples = 10 samples = [generate_sample() for _ in range(num_samples)] # Create a Hugging Face Dataset dataset = Dataset.from_list(samples) dataset[0] ``` ```{'text': 'Sample text', 'feature_1': [{'key1': 'value1', 'key2': None}, {'key1': None, 'key2': 'value2'}]}``` ### Expected behavior ```{'text': 'Sample text', 'feature_1': [{'key1': 'value1'}, {'key2': 'value2'}]}``` ### Environment info - `datasets` version: 2.19.1 - Platform: Linux-5.15.0-1040-nvidia-x86_64-with-glibc2.35 - Python version: 3.10.13 - `huggingface_hub` version: 0.23.0 - PyArrow version: 15.0.0 - Pandas version: 2.2.0 - `fsspec` version: 2023.10.0
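One possible workaround sketch, assuming JSON round-tripping is acceptable: serialize each heterogeneous dict to a string so Arrow stores a plain list of strings and never merges keys:

```python
import json
from datasets import Dataset

samples = [{"text": "Sample text", "feature_1": [{"key1": "value1"}, {"key2": "value2"}]}]

# Store each dict as a JSON string so the schema is just list<string>.
for sample in samples:
    sample["feature_1"] = [json.dumps(d) for d in sample["feature_1"]]

dataset = Dataset.from_list(samples)
decoded = [json.loads(s) for s in dataset[0]["feature_1"]]
print(decoded)  # [{'key1': 'value1'}, {'key2': 'value2'}]
```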
null
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/6899/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/6899/timeline
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/6897
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6897/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6897/comments
https://api.github.com/repos/huggingface/datasets/issues/6897/events
https://github.com/huggingface/datasets/issues/6897
2,293,428,243
I_kwDODunzps6IsvAT
6,897
datasets template guide :: issue in documentation YAML
{ "login": "bghira", "id": 59658056, "node_id": "MDQ6VXNlcjU5NjU4MDU2", "avatar_url": "https://avatars.githubusercontent.com/u/59658056?v=4", "gravatar_id": "", "url": "https://api.github.com/users/bghira", "html_url": "https://github.com/bghira", "followers_url": "https://api.github.com/users/bghira/followers", "following_url": "https://api.github.com/users/bghira/following{/other_user}", "gists_url": "https://api.github.com/users/bghira/gists{/gist_id}", "starred_url": "https://api.github.com/users/bghira/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/bghira/subscriptions", "organizations_url": "https://api.github.com/users/bghira/orgs", "repos_url": "https://api.github.com/users/bghira/repos", "events_url": "https://api.github.com/users/bghira/events{/privacy}", "received_events_url": "https://api.github.com/users/bghira/received_events", "type": "User", "site_admin": false }
[]
closed
false
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[ { "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false } ]
null
2
2024-05-13T17:33:59
2024-05-16T14:28:17
2024-05-16T14:28:17
NONE
null
null
null
### Describe the bug There is a YAML error at the top of the page, and I don't think it's supposed to be there ### Steps to reproduce the bug 1. Browse to [this tutorial document](https://github.com/huggingface/datasets/blob/main/templates/README_guide.md) 2. Observe a big red error at the top 3. The rest of the document remains functional ### Expected behavior I think the YAML block should be displayed or ignored. ### Environment info N/A
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/6897/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/6897/timeline
null
completed
false
https://api.github.com/repos/huggingface/datasets/issues/6896
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6896/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6896/comments
https://api.github.com/repos/huggingface/datasets/issues/6896/events
https://github.com/huggingface/datasets/issues/6896
2,293,176,061
I_kwDODunzps6Irxb9
6,896
Regression bug: `NonMatchingSplitsSizesError` for (possibly) overwritten dataset
{ "login": "finiteautomata", "id": 167943, "node_id": "MDQ6VXNlcjE2Nzk0Mw==", "avatar_url": "https://avatars.githubusercontent.com/u/167943?v=4", "gravatar_id": "", "url": "https://api.github.com/users/finiteautomata", "html_url": "https://github.com/finiteautomata", "followers_url": "https://api.github.com/users/finiteautomata/followers", "following_url": "https://api.github.com/users/finiteautomata/following{/other_user}", "gists_url": "https://api.github.com/users/finiteautomata/gists{/gist_id}", "starred_url": "https://api.github.com/users/finiteautomata/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/finiteautomata/subscriptions", "organizations_url": "https://api.github.com/users/finiteautomata/orgs", "repos_url": "https://api.github.com/users/finiteautomata/repos", "events_url": "https://api.github.com/users/finiteautomata/events{/privacy}", "received_events_url": "https://api.github.com/users/finiteautomata/received_events", "type": "User", "site_admin": false }
[]
open
false
null
[]
null
0
2024-05-13T15:41:57
2024-05-13T15:44:48
null
NONE
null
null
null
### Describe the bug While trying to load the dataset `https://huggingface.co/datasets/pysentimiento/spanish-tweets-small`, I get this error: ```python --------------------------------------------------------------------------- NonMatchingSplitsSizesError Traceback (most recent call last) <ipython-input-1-d6a3c721d3b8> in <cell line: 3>() 1 from datasets import load_dataset 2 ----> 3 ds = load_dataset("pysentimiento/spanish-tweets-small") 3 frames /usr/local/lib/python3.10/dist-packages/datasets/load.py in load_dataset(path, name, data_dir, data_files, split, cache_dir, features, download_config, download_mode, verification_mode, ignore_verifications, keep_in_memory, save_infos, revision, token, use_auth_token, task, streaming, num_proc, storage_options, **config_kwargs) 2150 2151 # Download and prepare data -> 2152 builder_instance.download_and_prepare( 2153 download_config=download_config, 2154 download_mode=download_mode, /usr/local/lib/python3.10/dist-packages/datasets/builder.py in download_and_prepare(self, output_dir, download_config, download_mode, verification_mode, ignore_verifications, try_from_hf_gcs, dl_manager, base_path, use_auth_token, file_format, max_shard_size, num_proc, storage_options, **download_and_prepare_kwargs) 946 if num_proc is not None: 947 prepare_split_kwargs["num_proc"] = num_proc --> 948 self._download_and_prepare( 949 dl_manager=dl_manager, 950 verification_mode=verification_mode, /usr/local/lib/python3.10/dist-packages/datasets/builder.py in _download_and_prepare(self, dl_manager, verification_mode, **prepare_split_kwargs) 1059 1060 if verification_mode == VerificationMode.BASIC_CHECKS or verification_mode == VerificationMode.ALL_CHECKS: -> 1061 verify_splits(self.info.splits, split_dict) 1062 1063 # Update the info object with the splits. /usr/local/lib/python3.10/dist-packages/datasets/utils/info_utils.py in verify_splits(expected_splits, recorded_splits) 98 ] 99 if len(bad_splits) > 0: --> 100 raise NonMatchingSplitsSizesError(str(bad_splits)) 101 logger.info("All the splits matched successfully.") 102 NonMatchingSplitsSizesError: [{'expected': SplitInfo(name='train', num_bytes=82649695458, num_examples=597433111, shard_lengths=None, dataset_name=None), 'recorded': SplitInfo(name='train', num_bytes=3358310095, num_examples=24898932, shard_lengths=[3626991, 3716991, 4036990, 3506990, 3676990, 3716990, 2616990], dataset_name='spanish-tweets-small')}] ``` I think I had this dataset updated; it might be related to #6271. It works fine as late as `2.10.0`, but not from `2.13.0` onwards. ### Steps to reproduce the bug ```python from datasets import load_dataset ds = load_dataset("pysentimiento/spanish-tweets-small") ``` You can run it in [this notebook](https://colab.research.google.com/drive/1FdhqLiVimHIlkn7B54DbhizeQ4U3vGVl#scrollTo=YgA50cBSibUg) ### Expected behavior Load the dataset without any error ### Environment info - `datasets` version: 2.13.0 - Platform: Linux-6.1.58+-x86_64-with-glibc2.35 - Python version: 3.10.12 - Huggingface_hub version: 0.20.3 - PyArrow version: 14.0.2 - Pandas version: 2.0.3
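A hedged workaround sketch, assuming the recorded split metadata is simply stale after the dataset was overwritten (`verification_mode` appears in the `load_dataset` signature in the traceback above):

```python
from datasets import load_dataset

# Skip split-size verification so stale metadata does not block loading.
ds = load_dataset("pysentimiento/spanish-tweets-small", verification_mode="no_checks")
```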
null
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/6896/reactions", "total_count": 1, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/6896/timeline
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/6894
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6894/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6894/comments
https://api.github.com/repos/huggingface/datasets/issues/6894/events
https://github.com/huggingface/datasets/issues/6894
2,292,840,226
I_kwDODunzps6Iqfci
6,894
Better document defaults of to_json
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892861, "node_id": "MDU6TGFiZWwxOTM1ODkyODYx", "url": "https://api.github.com/repos/huggingface/datasets/labels/documentation", "name": "documentation", "color": "0075ca", "default": true, "description": "Improvements or additions to documentation" } ]
closed
false
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[ { "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false } ]
null
0
2024-05-13T13:30:54
2024-05-16T14:31:27
2024-05-16T14:31:27
MEMBER
null
null
null
Better document defaults of `to_json`: the default format is [JSON-Lines](https://jsonlines.org/). Related to: - #6891
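For illustration, a short sketch of the behavior to document — by default `to_json` writes JSON Lines, and extra keyword arguments are forwarded to `pandas.DataFrame.to_json`:

```python
from datasets import Dataset

ds = Dataset.from_dict({"a": [1, 2], "b": ["x", "y"]})
ds.to_json("data.jsonl")  # default: one JSON object per line (JSON Lines)
ds.to_json("data.json", lines=False, orient="records")  # a single JSON array
```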
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/6894/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/6894/timeline
null
completed
false
https://api.github.com/repos/huggingface/datasets/issues/6891
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6891/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6891/comments
https://api.github.com/repos/huggingface/datasets/issues/6891/events
https://github.com/huggingface/datasets/issues/6891
2,291,118,869
I_kwDODunzps6Ij7MV
6,891
Unable to load JSON saved using `to_json`
{ "login": "DarshanDeshpande", "id": 39432636, "node_id": "MDQ6VXNlcjM5NDMyNjM2", "avatar_url": "https://avatars.githubusercontent.com/u/39432636?v=4", "gravatar_id": "", "url": "https://api.github.com/users/DarshanDeshpande", "html_url": "https://github.com/DarshanDeshpande", "followers_url": "https://api.github.com/users/DarshanDeshpande/followers", "following_url": "https://api.github.com/users/DarshanDeshpande/following{/other_user}", "gists_url": "https://api.github.com/users/DarshanDeshpande/gists{/gist_id}", "starred_url": "https://api.github.com/users/DarshanDeshpande/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/DarshanDeshpande/subscriptions", "organizations_url": "https://api.github.com/users/DarshanDeshpande/orgs", "repos_url": "https://api.github.com/users/DarshanDeshpande/repos", "events_url": "https://api.github.com/users/DarshanDeshpande/events{/privacy}", "received_events_url": "https://api.github.com/users/DarshanDeshpande/received_events", "type": "User", "site_admin": false }
[]
closed
false
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[ { "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false } ]
null
2
2024-05-12T01:02:51
2024-05-16T14:32:55
2024-05-12T07:02:02
NONE
null
null
null
### Describe the bug Datasets stored in the JSON format cannot be loaded using `json.load()` ### Steps to reproduce the bug ``` import json from datasets import load_dataset dataset = load_dataset("squad") train_dataset, test_dataset = dataset["train"], dataset["validation"] test_dataset.to_json("full_dataset.json") # This works loaded_test = load_dataset("json", data_files="full_dataset.json") # This fails loaded_test = json.load(open("full_dataset.json", "r")) ``` ### Expected behavior The JSON should be correctly formatted when writing so that it can be loaded using `json.load()`. ### Environment info Colab: https://colab.research.google.com/drive/1st1iStFUVgu9ZPvnzSzL4vDeYWDwYpUm?usp=sharing
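Since the default output format is JSON Lines (see #6894), a sketch of reading it with the standard library is to parse one record per line rather than calling `json.load` on the whole file:

```python
import json

with open("full_dataset.json", "r") as f:
    records = [json.loads(line) for line in f]  # one JSON object per line
```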
{ "login": "DarshanDeshpande", "id": 39432636, "node_id": "MDQ6VXNlcjM5NDMyNjM2", "avatar_url": "https://avatars.githubusercontent.com/u/39432636?v=4", "gravatar_id": "", "url": "https://api.github.com/users/DarshanDeshpande", "html_url": "https://github.com/DarshanDeshpande", "followers_url": "https://api.github.com/users/DarshanDeshpande/followers", "following_url": "https://api.github.com/users/DarshanDeshpande/following{/other_user}", "gists_url": "https://api.github.com/users/DarshanDeshpande/gists{/gist_id}", "starred_url": "https://api.github.com/users/DarshanDeshpande/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/DarshanDeshpande/subscriptions", "organizations_url": "https://api.github.com/users/DarshanDeshpande/orgs", "repos_url": "https://api.github.com/users/DarshanDeshpande/repos", "events_url": "https://api.github.com/users/DarshanDeshpande/events{/privacy}", "received_events_url": "https://api.github.com/users/DarshanDeshpande/received_events", "type": "User", "site_admin": false }
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/6891/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/6891/timeline
null
completed
false
https://api.github.com/repos/huggingface/datasets/issues/6890
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6890/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6890/comments
https://api.github.com/repos/huggingface/datasets/issues/6890/events
https://github.com/huggingface/datasets/issues/6890
2,288,699,041
I_kwDODunzps6Iasah
6,890
add `with_transform` and/or `set_transform` to IterableDataset
{ "login": "not-lain", "id": 70411813, "node_id": "MDQ6VXNlcjcwNDExODEz", "avatar_url": "https://avatars.githubusercontent.com/u/70411813?v=4", "gravatar_id": "", "url": "https://api.github.com/users/not-lain", "html_url": "https://github.com/not-lain", "followers_url": "https://api.github.com/users/not-lain/followers", "following_url": "https://api.github.com/users/not-lain/following{/other_user}", "gists_url": "https://api.github.com/users/not-lain/gists{/gist_id}", "starred_url": "https://api.github.com/users/not-lain/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/not-lain/subscriptions", "organizations_url": "https://api.github.com/users/not-lain/orgs", "repos_url": "https://api.github.com/users/not-lain/repos", "events_url": "https://api.github.com/users/not-lain/events{/privacy}", "received_events_url": "https://api.github.com/users/not-lain/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892871, "node_id": "MDU6TGFiZWwxOTM1ODkyODcx", "url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement", "name": "enhancement", "color": "a2eeef", "default": true, "description": "New feature or request" } ]
open
false
null
[]
null
0
2024-05-10T01:00:12
2024-05-10T01:00:46
null
NONE
null
null
null
### Feature request When working with a really large dataset, it would save us a lot of time (and compute resources) to use either `with_transform` or `set_transform` from the `Dataset` class, instead of waiting for the entire dataset to map. ### Motivation I don't want to wait for a really long dataset to map; this would give `IterableDataset` an extra advantage over the `Dataset` class, reducing time and resources. ### Your contribution I am a little busy with my job search lately, but would post about this feature on my social media. Apologies again (dad going to kick me out soon); if I ever have some free time I will contribute to making this a reality, but that's going to be hard /(┬┬﹏┬┬)\
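Worth noting, as a partial alternative: `IterableDataset.map` is already lazy — the function runs on the fly during iteration rather than over the whole dataset upfront. A minimal sketch:

```python
from datasets import load_dataset

ids = load_dataset("rotten_tomatoes", split="train", streaming=True)
# Nothing is computed here; the transform runs per-example during iteration.
ids = ids.map(lambda ex: {"n_chars": len(ex["text"])})
print(next(iter(ids)))
```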
null
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/6890/reactions", "total_count": 2, "+1": 2, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/6890/timeline
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/6887
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6887/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6887/comments
https://api.github.com/repos/huggingface/datasets/issues/6887/events
https://github.com/huggingface/datasets/issues/6887
2,286,786,396
I_kwDODunzps6ITZdc
6,887
FAISS load to None
{ "login": "brainer3220", "id": 40418544, "node_id": "MDQ6VXNlcjQwNDE4NTQ0", "avatar_url": "https://avatars.githubusercontent.com/u/40418544?v=4", "gravatar_id": "", "url": "https://api.github.com/users/brainer3220", "html_url": "https://github.com/brainer3220", "followers_url": "https://api.github.com/users/brainer3220/followers", "following_url": "https://api.github.com/users/brainer3220/following{/other_user}", "gists_url": "https://api.github.com/users/brainer3220/gists{/gist_id}", "starred_url": "https://api.github.com/users/brainer3220/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/brainer3220/subscriptions", "organizations_url": "https://api.github.com/users/brainer3220/orgs", "repos_url": "https://api.github.com/users/brainer3220/repos", "events_url": "https://api.github.com/users/brainer3220/events{/privacy}", "received_events_url": "https://api.github.com/users/brainer3220/received_events", "type": "User", "site_admin": false }
[]
open
false
null
[]
null
1
2024-05-09T02:43:50
2024-05-16T20:44:23
null
NONE
null
null
null
### Describe the bug I've used FAISS with Datasets and saved the index to disk. Loading the saved FAISS index then raises no error, but the result is None: ```python ds.load_faiss_index('embeddings', 'my_index.faiss') ``` ### Steps to reproduce the bug # 1. ```python ds_with_embeddings = ds.map(lambda example: {'embeddings': model(transforms(example['image']).unsqueeze(0)).squeeze()}, batch_size=64) ds_with_embeddings.add_faiss_index(column='embeddings') ds_with_embeddings.save_faiss_index('embeddings', 'index.faiss') ``` # 2. ```python ds.load_faiss_index('embeddings', 'my_index.faiss') ``` ### Expected behavior The index column is added back to the dataset. ### Environment info Google Colab, SageMaker Notebook
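For what it's worth, `load_faiss_index` attaches the index to the dataset in place and returns None, so its result should not be assigned. A hedged usage sketch, continuing from the reproduction snippet above where `ds` was created (the 512-dim query is an assumption for illustration):

```python
import numpy as np

# load_faiss_index mutates ds in place and returns None,
# so do NOT write `ds = ds.load_faiss_index(...)`.
ds.load_faiss_index("embeddings", "my_index.faiss")

query = np.random.rand(512).astype("float32")  # assumption: 512-dim embeddings
scores, examples = ds.get_nearest_examples("embeddings", query, k=5)
```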
null
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/6887/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/6887/timeline
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/6886
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6886/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6886/comments
https://api.github.com/repos/huggingface/datasets/issues/6886/events
https://github.com/huggingface/datasets/issues/6886
2,286,328,984
I_kwDODunzps6IRpyY
6,886
load_dataset with data_dir and cache_dir set fail with not supported
{ "login": "fah", "id": 322496, "node_id": "MDQ6VXNlcjMyMjQ5Ng==", "avatar_url": "https://avatars.githubusercontent.com/u/322496?v=4", "gravatar_id": "", "url": "https://api.github.com/users/fah", "html_url": "https://github.com/fah", "followers_url": "https://api.github.com/users/fah/followers", "following_url": "https://api.github.com/users/fah/following{/other_user}", "gists_url": "https://api.github.com/users/fah/gists{/gist_id}", "starred_url": "https://api.github.com/users/fah/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/fah/subscriptions", "organizations_url": "https://api.github.com/users/fah/orgs", "repos_url": "https://api.github.com/users/fah/repos", "events_url": "https://api.github.com/users/fah/events{/privacy}", "received_events_url": "https://api.github.com/users/fah/received_events", "type": "User", "site_admin": false }
[]
open
false
null
[]
null
0
2024-05-08T19:52:35
2024-05-08T19:58:11
null
NONE
null
null
null
### Describe the bug With Python 3.11 I execute: ```py from transformers import Wav2Vec2Processor, Data2VecAudioModel import torch from torch import nn from datasets import load_dataset, concatenate_datasets # load demo audio and set processor dataset_clean = load_dataset("librispeech_asr", "clean", split="validation", data_dir="data", cache_dir="cache") ``` This fails in the last line with ```log Found cached dataset librispeech_asr (file:///Users/as/Documents/Project/git/audio2vec/cache/librispeech_asr/clean-data_dir=data/2.1.0/cff5df6e7955c80a67f80e27e7e655de71c689e2d2364bece785b972acb37fe7) Traceback (most recent call last): File "/Users/as/Documents/Project/git/audio2vec/src/music2vec-v1.py", line 7, in <module> dataset_clean = load_dataset("librispeech_asr", "clean", split="validation", data_dir="data", cache_dir="cache") ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/Users/as/anaconda3/lib/python3.11/site-packages/datasets/load.py", line 1810, in load_dataset ds = builder_instance.as_dataset(split=split, verification_mode=verification_mode, in_memory=keep_in_memory) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/Users/as/anaconda3/lib/python3.11/site-packages/datasets/builder.py", line 1113, in as_dataset raise NotImplementedError(f"Loading a dataset cached in a {type(self._fs).__name__} is not supported.") NotImplementedError: Loading a dataset cached in a LocalFileSystem is not supported. ``` ### Steps to reproduce the bug I set up a venv with requirements.txt ```txt transformers==4.40.2 torch==2.2.2 datasets==2.16.0 fsspec==2023.9.2 ``` pip freeze is: ``` aiohttp==3.9.5 aiosignal==1.3.1 attrs==23.2.0 certifi==2024.2.2 charset-normalizer==3.3.2 datasets==2.16.0 dill==0.3.7 filelock==3.14.0 frozenlist==1.4.1 fsspec==2023.9.2 huggingface-hub==0.23.0 idna==3.7 Jinja2==3.1.4 MarkupSafe==2.1.5 mpmath==1.3.0 multidict==6.0.5 multiprocess==0.70.15 networkx==3.3 numpy==1.26.4 packaging==24.0 pandas==2.2.2 pyarrow==16.0.0 pyarrow-hotfix==0.6 python-dateutil==2.9.0.post0 pytz==2024.1 PyYAML==6.0.1 regex==2024.4.28 requests==2.31.0 safetensors==0.4.3 six==1.16.0 sympy==1.12 tokenizers==0.19.1 torch==2.2.2 tqdm==4.66.4 transformers==4.40.2 typing_extensions==4.11.0 tzdata==2024.1 urllib3==2.2.1 xxhash==3.4.1 yarl==1.9.4 ``` I execute this on an M1 Mac. ### Expected behavior I don't understand the error message. Why is "local" caching not supported? Would it be possible to give an additional hint in the error message about how to solve this issue? ### Environment info source .... python -u example.py
null
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/6886/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/6886/timeline
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/6884
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6884/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6884/comments
https://api.github.com/repos/huggingface/datasets/issues/6884/events
https://github.com/huggingface/datasets/issues/6884
2,284,839,687
I_kwDODunzps6IL-MH
6,884
CI is broken after jax-0.4.27 release: AttributeError: 'jaxlib.xla_extension.DeviceList' object has no attribute 'device'
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892857, "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
closed
false
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[ { "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false } ]
null
0
2024-05-08T07:01:47
2024-05-08T09:35:17
2024-05-08T09:35:17
MEMBER
null
null
null
After jax-0.4.27 release (https://github.com/google/jax/releases/tag/jax-v0.4.27), our CI is broken with the error: ```Python traceback AttributeError: 'jaxlib.xla_extension.DeviceList' object has no attribute 'device'. Did you mean: 'devices'? ``` See: https://github.com/huggingface/datasets/actions/runs/8997488610/job/24715736153 ```Python traceback ___________________ FormatterTest.test_jax_formatter_device ____________________ [gw1] linux -- Python 3.10.14 /opt/hostedtoolcache/Python/3.10.14/x64/bin/python self = <tests.test_formatting.FormatterTest testMethod=test_jax_formatter_device> @require_jax def test_jax_formatter_device(self): import jax from datasets.formatting import JaxFormatter pa_table = self._create_dummy_table() device = jax.devices()[0] formatter = JaxFormatter(device=str(device)) row = formatter.format_row(pa_table) > assert row["a"].device() == device E AttributeError: 'jaxlib.xla_extension.DeviceList' object has no attribute 'device'. Did you mean: 'devices'? tests/test_formatting.py:630: AttributeError ```
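A hedged sketch of the kind of test fix this suggests: since jax 0.4.27, `jax.Array.devices()` returns a set of devices, so the assertion can compare against a set instead of calling the removed `.device()` method (whether this matches the fix actually merged is not confirmed here):

```python
import jax
import jax.numpy as jnp

x = jnp.arange(3)
device = jax.devices()[0]
# jax>=0.4.27: Array.device() is gone; devices() returns a set of Devices.
assert x.devices() == {device}
```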
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/6884/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/6884/timeline
null
completed
false
https://api.github.com/repos/huggingface/datasets/issues/6882
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6882/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6882/comments
https://api.github.com/repos/huggingface/datasets/issues/6882/events
https://github.com/huggingface/datasets/issues/6882
2,284,803,158
I_kwDODunzps6IL1RW
6,882
Connection Error When Using By-pass Proxies
{ "login": "MRNOBODY-ZST", "id": 78351684, "node_id": "MDQ6VXNlcjc4MzUxNjg0", "avatar_url": "https://avatars.githubusercontent.com/u/78351684?v=4", "gravatar_id": "", "url": "https://api.github.com/users/MRNOBODY-ZST", "html_url": "https://github.com/MRNOBODY-ZST", "followers_url": "https://api.github.com/users/MRNOBODY-ZST/followers", "following_url": "https://api.github.com/users/MRNOBODY-ZST/following{/other_user}", "gists_url": "https://api.github.com/users/MRNOBODY-ZST/gists{/gist_id}", "starred_url": "https://api.github.com/users/MRNOBODY-ZST/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/MRNOBODY-ZST/subscriptions", "organizations_url": "https://api.github.com/users/MRNOBODY-ZST/orgs", "repos_url": "https://api.github.com/users/MRNOBODY-ZST/repos", "events_url": "https://api.github.com/users/MRNOBODY-ZST/events{/privacy}", "received_events_url": "https://api.github.com/users/MRNOBODY-ZST/received_events", "type": "User", "site_admin": false }
[]
open
false
null
[]
null
1
2024-05-08T06:40:14
2024-05-17T06:38:30
null
NONE
null
null
null
### Describe the bug I'm currently using Clash for Windows as my proxy tunnel. After exporting HTTP_PROXY and HTTPS_PROXY to the port that Clash provides ๐Ÿค”, it runs into a connection error saying "Couldn't reach https://raw.githubusercontent.com/huggingface/datasets/2.19.1/metrics/seqeval/seqeval.py (ConnectionError(MaxRetryError("HTTPSConnectionPool(host='raw.githubusercontent.com', port=443): Max retries exceeded with url: /huggingface/datasets/2.19.1/metrics/seqeval/seqeval.py (Caused by NewConnectionError('<urllib3.connection.HTTPSConnection object at 0x7f969d391870>: Failed to establish a new connection: [Errno 111] Connection refused'))")))" I have already read the documentation provided on the Hugging Face site, but I didn't see detailed instructions on how to set up proxies for this library. ### Steps to reproduce the bug 1. Turn on any proxy software like Clash / ShadowsocksR etc. 2. Export the system variables to the port provided by your proxy software in WSL (it's OK for other applications to use the proxy, except the datasets library) 3. Load any dataset from Hugging Face online ### Expected behavior --------------------------------------------------------------------------- ConnectionError Traceback (most recent call last) Cell In[33], line 3 1 from datasets import load_metric ----> 3 metric = load_metric("seqeval") File ~/.local/lib/python3.10/site-packages/datasets/utils/deprecation_utils.py:46, in deprecated.<locals>.decorator.<locals>.wrapper(*args, **kwargs) 44 warnings.warn(warning_msg, category=FutureWarning, stacklevel=2) 45 _emitted_deprecation_warnings.add(func_hash) ---> 46 return deprecated_function(*args, **kwargs) File ~/.local/lib/python3.10/site-packages/datasets/load.py:2104, in load_metric(path, config_name, process_id, num_process, cache_dir, experiment_id, keep_in_memory, download_config, download_mode, revision, trust_remote_code, **metric_init_kwargs) 2101 warnings.filterwarnings("ignore", message=".*https://huggingface.co/docs/evaluate$", category=FutureWarning) 2103 download_mode = DownloadMode(download_mode or DownloadMode.REUSE_DATASET_IF_EXISTS) -> 2104 metric_module = metric_module_factory(
2105 path, 2106 revision=revision, 2107 download_config=download_config, 2108 download_mode=download_mode, 2109 trust_remote_code=trust_remote_code, 2110 ).module_path 2111 metric_cls = import_main_class(metric_module, dataset=False) 2112 metric = metric_cls( 2113 config_name=config_name, 2114 process_id=process_id, ...
--> 633 raise ConnectionError(f"Couldn't reach {url} ({repr(head_error)})") 634 elif response is not None: 635 raise ConnectionError(f"Couldn't reach {url} (error {response.status_code})") ConnectionError: Couldn't reach https://raw.githubusercontent.com/huggingface/datasets/2.19.1/metrics/seqeval/seqeval.py (SSLError(MaxRetryError("HTTPSConnectionPool(host='raw.githubusercontent.com', port=443): Max retries exceeded with url: /huggingface/datasets/2.19.1/metrics/seqeval/seqeval.py (Caused by SSLError(SSLEOFError(8, '[SSL: UNEXPECTED_EOF_WHILE_READING] EOF occurred in violation of protocol (_ssl.c:1007)')))"))) ### Environment info - `datasets` version: 2.19.1 - Platform: Linux-5.10.102.1-microsoft-standard-WSL2-x86_64-with-glibc2.35 - Python version: 3.10.12 - `huggingface_hub` version: 0.23.0 - PyArrow version: 16.0.0 - Pandas version: 2.2.2 - `fsspec` version: 2024.2.0
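One way to hand the proxy to `datasets` explicitly, instead of relying on environment variables, is via `DownloadConfig.proxies`, which is forwarded to the underlying `requests` calls. A minimal sketch — the port 7890 is Clash's default and is an assumption here:

```python
# A sketch of passing the proxy explicitly through DownloadConfig rather
# than HTTP_PROXY/HTTPS_PROXY. Adjust 7890 to whatever port your proxy
# software actually exposes.
from datasets import DownloadConfig, load_dataset

proxies = {"http": "http://127.0.0.1:7890", "https": "http://127.0.0.1:7890"}
ds = load_dataset("imdb", download_config=DownloadConfig(proxies=proxies))
```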
null
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/6882/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/6882/timeline
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/6881
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6881/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6881/comments
https://api.github.com/repos/huggingface/datasets/issues/6881/events
https://github.com/huggingface/datasets/issues/6881
2,284,794,009
I_kwDODunzps6ILzCZ
6,881
AttributeError: module 'PIL.Image' has no attribute 'ExifTags'
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892857, "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
closed
false
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[ { "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false } ]
null
3
2024-05-08T06:33:57
2024-07-18T06:49:30
2024-05-16T14:34:03
MEMBER
null
null
null
When trying to load an image dataset in an old Python environment (with Pillow-8.4.0), an error is raised: ```Python traceback AttributeError: module 'PIL.Image' has no attribute 'ExifTags' ``` The error traceback: ```Python traceback ~/huggingface/datasets/src/datasets/iterable_dataset.py in __iter__(self) 1391 # `IterableDataset` automatically fills missing columns with None. 1392 # This is done with `_apply_feature_types_on_example`. -> 1393 example = _apply_feature_types_on_example( 1394 example, self.features, token_per_repo_id=self._token_per_repo_id 1395 ) ~/huggingface/datasets/src/datasets/iterable_dataset.py in _apply_feature_types_on_example(example, features, token_per_repo_id) 1080 encoded_example = features.encode_example(example) 1081 # Decode example for Audio feature, e.g. -> 1082 decoded_example = features.decode_example(encoded_example, token_per_repo_id=token_per_repo_id) 1083 return decoded_example 1084 ~/huggingface/datasets/src/datasets/features/features.py in decode_example(self, example, token_per_repo_id) 1974 -> 1975 return { 1976 column_name: decode_nested_example(feature, value, token_per_repo_id=token_per_repo_id) 1977 if self._column_requires_decoding[column_name] ~/huggingface/datasets/src/datasets/features/features.py in <dictcomp>(.0) 1974 1975 return { -> 1976 column_name: decode_nested_example(feature, value, token_per_repo_id=token_per_repo_id) 1977 if self._column_requires_decoding[column_name] 1978 else value ~/huggingface/datasets/src/datasets/features/features.py in decode_nested_example(schema, obj, token_per_repo_id) 1339 # we pass the token to read and decode files from private repositories in streaming mode 1340 if obj is not None and schema.decode: -> 1341 return schema.decode_example(obj, token_per_repo_id=token_per_repo_id) 1342 return obj 1343 ~/huggingface/datasets/src/datasets/features/image.py in decode_example(self, value, token_per_repo_id) 187 image = PIL.Image.open(BytesIO(bytes_)) 188 image.load() # to avoid "Too many open files" errors --> 189 if image.getexif().get(PIL.Image.ExifTags.Base.Orientation) is not None: 190 image = PIL.ImageOps.exif_transpose(image) 191 if self.mode and self.mode != image.mode: ~/huggingface/datasets/venv/lib/python3.9/site-packages/PIL/Image.py in __getattr__(name) 75 ) 76 return categories[name] ---> 77 raise AttributeError(f"module '{__name__}' has no attribute '{name}'") 78 79 AttributeError: module 'PIL.Image' has no attribute 'ExifTags' ``` ### Environment info Since datasets 2.19.0
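A backward-compatible version of the orientation handling can avoid the `PIL.Image.ExifTags` enum entirely, since the EXIF Orientation tag id 0x0112 (274) is stable across Pillow versions. A minimal sketch under that assumption:

```python
# A sketch of a Pillow-version-agnostic orientation check: uses the raw
# EXIF tag id 0x0112 ("Orientation"), which predates the PIL.Image.ExifTags
# enum that only resolves on recent Pillow releases.
import PIL.Image
import PIL.ImageOps

ORIENTATION_TAG = 0x0112  # EXIF "Orientation"

def exif_transpose_if_needed(image: PIL.Image.Image) -> PIL.Image.Image:
    if image.getexif().get(ORIENTATION_TAG) is not None:
        image = PIL.ImageOps.exif_transpose(image)
    return image
```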
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/6881/reactions", "total_count": 1, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 1 }
https://api.github.com/repos/huggingface/datasets/issues/6881/timeline
null
completed
false
https://api.github.com/repos/huggingface/datasets/issues/6880
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6880/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6880/comments
https://api.github.com/repos/huggingface/datasets/issues/6880/events
https://github.com/huggingface/datasets/issues/6880
2,283,278,337
I_kwDODunzps6IGBAB
6,880
Webdataset: KeyError: 'png' on some datasets when streaming
{ "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "repos_url": "https://api.github.com/users/lhoestq/repos", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "type": "User", "site_admin": false }
[]
open
false
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[ { "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false } ]
null
5
2024-05-07T13:09:02
2024-05-14T20:34:05
null
MEMBER
null
null
null
reported at https://huggingface.co/datasets/tbone5563/tar_images/discussions/1 ```python >>> from datasets import load_dataset >>> ds = load_dataset("tbone5563/tar_images") Downloading data: 100% 1.41G/1.41G [00:48<00:00, 17.2MB/s] Downloading data: 100% 619M/619M [00:11<00:00, 57.4MB/s] Generating train split: 970/0 [00:02<00:00, 534.94 examples/s] --------------------------------------------------------------------------- KeyError Traceback (most recent call last) /usr/local/lib/python3.10/dist-packages/datasets/builder.py in _prepare_split_single(self, gen_kwargs, fpath, file_format, max_shard_size, split_info, check_duplicate_keys, job_id) 1747 _time = time.time() -> 1748 for key, record in generator: 1749 if max_shard_size is not None and writer._num_bytes > max_shard_size: 7 frames /usr/local/lib/python3.10/dist-packages/datasets/packaged_modules/webdataset/webdataset.py in _generate_examples(self, tar_paths, tar_iterators) 108 for field_name in image_field_names + audio_field_names: --> 109 example[field_name] = {"path": example["__key__"] + "." + field_name, "bytes": example[field_name]} 110 yield f"{tar_idx}_{example_idx}", example KeyError: 'png' The above exception was the direct cause of the following exception: DatasetGenerationError Traceback (most recent call last) <ipython-input-2-8e0fbb7badc9> in <cell line: 3>() 1 from datasets import load_dataset 2 ----> 3 ds = load_dataset("tbone5563/tar_images") /usr/local/lib/python3.10/dist-packages/datasets/load.py in load_dataset(path, name, data_dir, data_files, split, cache_dir, features, download_config, download_mode, verification_mode, ignore_verifications, keep_in_memory, save_infos, revision, token, use_auth_token, task, streaming, num_proc, storage_options, trust_remote_code, **config_kwargs) 2607 2608 # Download and prepare data -> 2609 builder_instance.download_and_prepare( 2610 download_config=download_config, 2611 download_mode=download_mode, /usr/local/lib/python3.10/dist-packages/datasets/builder.py in download_and_prepare(self, output_dir, download_config, download_mode, verification_mode, ignore_verifications, try_from_hf_gcs, dl_manager, base_path, use_auth_token, file_format, max_shard_size, num_proc, storage_options, **download_and_prepare_kwargs) 1025 if num_proc is not None: 1026 prepare_split_kwargs["num_proc"] = num_proc -> 1027 self._download_and_prepare( 1028 dl_manager=dl_manager, 1029 verification_mode=verification_mode, /usr/local/lib/python3.10/dist-packages/datasets/builder.py in _download_and_prepare(self, dl_manager, verification_mode, **prepare_splits_kwargs) 1787 1788 def _download_and_prepare(self, dl_manager, verification_mode, **prepare_splits_kwargs): -> 1789 super()._download_and_prepare( 1790 dl_manager, 1791 verification_mode, /usr/local/lib/python3.10/dist-packages/datasets/builder.py in _download_and_prepare(self, dl_manager, verification_mode, **prepare_split_kwargs) 1120 try: 1121 # Prepare split will record examples associated to the split -> 1122 self._prepare_split(split_generator, **prepare_split_kwargs) 1123 except OSError as e: 1124 raise OSError( /usr/local/lib/python3.10/dist-packages/datasets/builder.py in _prepare_split(self, split_generator, check_duplicate_keys, file_format, num_proc, max_shard_size) 1625 job_id = 0
1626 with pbar: -> 1627 for job_id, done, content in self._prepare_split_single( 1628 gen_kwargs=gen_kwargs, job_id=job_id, **_prepare_split_args 1629 ): /usr/local/lib/python3.10/dist-packages/datasets/builder.py in _prepare_split_single(self, gen_kwargs, fpath, file_format, max_shard_size, split_info, check_duplicate_keys, job_id) 1782 if isinstance(e, SchemaInferenceError) and e.__context__ is not None: 1783 e = e.__context__ -> 1784 raise DatasetGenerationError("An error occurred while generating the dataset") from e 1785 1786 yield job_id, True, (total_num_examples, total_num_bytes, writer._features, num_shards, shard_lengths) DatasetGenerationError: An error occurred while generating the dataset ```
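The failing line assumes every sample in the tar carries every image/audio extension seen in the shard. A minimal sketch of the defensive pattern — names mirror `webdataset.py`, but this is an illustration, not the patched library code:

```python
# A sketch of the guard: only wrap a field into {"path", "bytes"} when the
# sample actually contains it, instead of raising KeyError: 'png' on
# samples that have no image entry in the tar.
image_field_names = ["png"]
audio_field_names = []
example = {"__key__": "sample_0001", "txt": b"a caption-only sample with no image"}

for field_name in image_field_names + audio_field_names:
    if field_name not in example:
        continue  # skip missing fields instead of raising
    example[field_name] = {
        "path": example["__key__"] + "." + field_name,
        "bytes": example[field_name],
    }
print(example)  # unchanged: the missing "png" field is skipped
```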
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/6880/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/6880/timeline
null
reopened
false
https://api.github.com/repos/huggingface/datasets/issues/6879
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6879/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6879/comments
https://api.github.com/repos/huggingface/datasets/issues/6879/events
https://github.com/huggingface/datasets/issues/6879
2,282,968,259
I_kwDODunzps6IE1TD
6,879
Batched mapping does not raise an error if values for an existing column are empty
{ "login": "felix-schneider", "id": 208336, "node_id": "MDQ6VXNlcjIwODMzNg==", "avatar_url": "https://avatars.githubusercontent.com/u/208336?v=4", "gravatar_id": "", "url": "https://api.github.com/users/felix-schneider", "html_url": "https://github.com/felix-schneider", "followers_url": "https://api.github.com/users/felix-schneider/followers", "following_url": "https://api.github.com/users/felix-schneider/following{/other_user}", "gists_url": "https://api.github.com/users/felix-schneider/gists{/gist_id}", "starred_url": "https://api.github.com/users/felix-schneider/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/felix-schneider/subscriptions", "organizations_url": "https://api.github.com/users/felix-schneider/orgs", "repos_url": "https://api.github.com/users/felix-schneider/repos", "events_url": "https://api.github.com/users/felix-schneider/events{/privacy}", "received_events_url": "https://api.github.com/users/felix-schneider/received_events", "type": "User", "site_admin": false }
[]
open
false
null
[]
null
0
2024-05-07T11:02:40
2024-05-07T11:02:40
null
NONE
null
null
null
### Describe the bug Using `Dataset.map(fn, batched=True)` allows resizing the dataset by returning a dict of lists, all of which must be the same size. If they are not the same size, an error like `pyarrow.lib.ArrowInvalid: Column 1 named x expected length 1 but got length 0` is raised. This is not the case if the function returns an empty list for an existing column in the dataset. In that case, the dataset is silently resized to 0 rows. ### Steps to reproduce the bug MWE: ``` import datasets data = datasets.Dataset.from_dict({"test": [1]}) def mapping_fn(examples): return {"test": [], "y": [1]} data = data.map(mapping_fn, batched=True) print(len(data)) ``` Note that when returning `"x": []`, the error is raised correctly, also when returning `"test": [1,2]`. ### Expected behavior Expected an exception: `pyarrow.lib.ArrowInvalid: Column 1 named test expected length 1 but got length 0` or `pyarrow.lib.ArrowInvalid: Column 2 named y expected length 0 but got length 1`. Any exception would be acceptable. ### Environment info - `datasets` version: 2.19.1 - Platform: Linux-5.4.0-153-generic-x86_64-with-glibc2.31 - Python version: 3.11.8 - `huggingface_hub` version: 0.22.2 - PyArrow version: 15.0.2 - Pandas version: 2.2.1 - `fsspec` version: 2024.2.0
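Until the library itself raises on this case, a guard can be wrapped around the batched mapping function. A minimal sketch — it catches the `[]` vs `[1]` mismatch above, though not a function that returns a single short column for every key:

```python
import datasets

def checked(fn):
    """Wrap a batched mapping function and refuse batches whose columns disagree in length."""
    def wrapper(examples):
        out = fn(examples)
        lengths = {name: len(col) for name, col in out.items()}
        if len(set(lengths.values())) > 1:
            raise ValueError(f"Inconsistent column lengths in mapped batch: {lengths}")
        return out
    return wrapper

data = datasets.Dataset.from_dict({"test": [1]})

def mapping_fn(examples):
    return {"test": [], "y": [1]}

data = data.map(checked(mapping_fn), batched=True)  # now raises ValueError
```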
null
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/6879/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/6879/timeline
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/6877
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6877/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6877/comments
https://api.github.com/repos/huggingface/datasets/issues/6877/events
https://github.com/huggingface/datasets/issues/6877
2,282,068,337
I_kwDODunzps6IBZlx
6,877
OSError: [Errno 24] Too many open files
{ "login": "loicmagne", "id": 53355258, "node_id": "MDQ6VXNlcjUzMzU1MjU4", "avatar_url": "https://avatars.githubusercontent.com/u/53355258?v=4", "gravatar_id": "", "url": "https://api.github.com/users/loicmagne", "html_url": "https://github.com/loicmagne", "followers_url": "https://api.github.com/users/loicmagne/followers", "following_url": "https://api.github.com/users/loicmagne/following{/other_user}", "gists_url": "https://api.github.com/users/loicmagne/gists{/gist_id}", "starred_url": "https://api.github.com/users/loicmagne/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/loicmagne/subscriptions", "organizations_url": "https://api.github.com/users/loicmagne/orgs", "repos_url": "https://api.github.com/users/loicmagne/repos", "events_url": "https://api.github.com/users/loicmagne/events{/privacy}", "received_events_url": "https://api.github.com/users/loicmagne/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892857, "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
closed
false
null
[]
null
5
2024-05-07T01:15:09
2024-06-02T14:22:23
2024-05-13T13:01:55
NONE
null
null
null
### Describe the bug I am trying to load the 'default' subset of the following dataset which contains lots of files (828 per split): [https://huggingface.co/datasets/mteb/biblenlp-corpus-mmteb](https://huggingface.co/datasets/mteb/biblenlp-corpus-mmteb) When trying to load it using the `load_dataset` function I get the following error ```python >>> from datasets import load_dataset >>> d = load_dataset('mteb/biblenlp-corpus-mmteb') Downloading readme: 100%|โ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆ| 201k/201k [00:00<00:00, 1.07MB/s] Resolving data files: 100%|โ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆ| 828/828 [00:00<00:00, 1069.15it/s] Resolving data files: 100%|โ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆ| 828/828 [00:00<00:00, 436182.33it/s] Resolving data files: 100%|โ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆ| 828/828 [00:00<00:00, 2228.75it/s] Resolving data files: 100%|โ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆ| 828/828 [00:00<00:00, 646478.73it/s] Resolving data files: 100%|โ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆ| 828/828 [00:00<00:00, 831032.24it/s] Resolving data files: 100%|โ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆ| 828/828 [00:00<00:00, 517645.51it/s] Downloading data: 100%|โ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆ| 828/828 [00:33<00:00, 24.87files/s] Downloading data: 100%|โ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆ| 828/828 [00:30<00:00, 27.48files/s] Downloading data: 100%|โ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆ| 828/828 [00:30<00:00, 26.94files/s] Generating train split: 1571592 examples [00:03, 461438.97 examples/s] Generating test split: 11163 examples [00:00, 118190.72 examples/s] Traceback (most recent call last): File ".env/lib/python3.12/site-packages/datasets/builder.py", line 1995, in _prepare_split_single for _, table in generator: File ".env/lib/python3.12/site-packages/datasets/packaged_modules/json/json.py", line 99, in 
_generate_tables with open(file, "rb") as f: ^^^^^^^^^^^^^^^^ File ".env/lib/python3.12/site-packages/datasets/streaming.py", line 75, in wrapper return function(*args, download_config=download_config, **kwargs) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File ".env/lib/python3.12/site-packages/datasets/utils/file_utils.py", line 1224, in xopen file_obj = fsspec.open(file, mode=mode, *args, **kwargs).open() ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File ".env/lib/python3.12/site-packages/fsspec/core.py", line 135, in open return self.__enter__() ^^^^^^^^^^^^^^^^ File ".env/lib/python3.12/site-packages/fsspec/core.py", line 103, in __enter__ f = self.fs.open(self.path, mode=mode) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File ".env/lib/python3.12/site-packages/fsspec/spec.py", line 1293, in open f = self._open( ^^^^^^^^^^^ File ".env/lib/python3.12/site-packages/datasets/filesystems/compression.py", line 81, in _open return self.file.open() ^^^^^^^^^^^^^^^^ File ".env/lib/python3.12/site-packages/fsspec/core.py", line 135, in open return self.__enter__() ^^^^^^^^^^^^^^^^ File ".env/lib/python3.12/site-packages/fsspec/core.py", line 103, in __enter__ f = self.fs.open(self.path, mode=mode) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File ".env/lib/python3.12/site-packages/fsspec/spec.py", line 1293, in open f = self._open( ^^^^^^^^^^^ File ".env/lib/python3.12/site-packages/fsspec/implementations/local.py", line 197, in _open return LocalFileOpener(path, mode, fs=self, **kwargs) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File ".env/lib/python3.12/site-packages/fsspec/implementations/local.py", line 322, in __init__ self._open() File ".env/lib/python3.12/site-packages/fsspec/implementations/local.py", line 327, in _open self.f = open(self.path, mode=self.mode) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ OSError: [Errno 24] Too many open files: '.cache/huggingface/datasets/downloads/3a347186abfc0f9c924dde0221d246db758c7232c0101523f04a87c17d696618' The above exception was the direct cause of the following exception: Traceback (most recent call last): File ".env/lib/python3.12/site-packages/datasets/builder.py", line 981, in incomplete_dir yield tmp_dir File ".env/lib/python3.12/site-packages/datasets/builder.py", line 1027, in download_and_prepare self._download_and_prepare( File ".env/lib/python3.12/site-packages/datasets/builder.py", line 1122, in _download_and_prepare self._prepare_split(split_generator, **prepare_split_kwargs) File ".env/lib/python3.12/site-packages/datasets/builder.py", line 1882, in _prepare_split for job_id, done, content in self._prepare_split_single( File ".env/lib/python3.12/site-packages/datasets/builder.py", line 2038, in _prepare_split_single raise DatasetGenerationError("An error occurred while generating the dataset") from e datasets.exceptions.DatasetGenerationError: An error occurred while generating the dataset During handling of the above exception, another exception occurred: Traceback (most recent call last): File "<stdin>", line 1, in <module> File ".env/lib/python3.12/site-packages/datasets/load.py", line 2609, in load_dataset builder_instance.download_and_prepare( File ".env/lib/python3.12/site-packages/datasets/builder.py", line 1007, in download_and_prepare with incomplete_dir(self._output_dir) as tmp_output_dir: File "/usr/lib/python3.12/contextlib.py", line 158, in __exit__ self.gen.throw(value) File ".env/lib/python3.12/site-packages/datasets/builder.py", line 988, in incomplete_dir shutil.rmtree(tmp_dir) File "/usr/lib/python3.12/shutil.py", line 
785, in rmtree _rmtree_safe_fd(fd, path, onexc) File "/usr/lib/python3.12/shutil.py", line 661, in _rmtree_safe_fd onexc(os.scandir, path, err) File "/usr/lib/python3.12/shutil.py", line 657, in _rmtree_safe_fd with os.scandir(topfd) as scandir_it: ^^^^^^^^^^^^^^^^^ OSError: [Errno 24] Too many open files: '.cache/huggingface/datasets/mteb___biblenlp-corpus-mmteb/default/0.0.0/3912ed967b0834547f35b2da9470c4976b357c9a.incomplete' ``` I looked for the maximum number of open files on my machine (Ubuntu 24.04) and it seems to be 1024, but even when I try to load a single split (`load_dataset('mteb/biblenlp-corpus-mmteb', split='train')`) I get the same error ### Steps to reproduce the bug ```python from datasets import load_dataset d = load_dataset('mteb/biblenlp-corpus-mmteb') ``` ### Expected behavior Load the dataset without error ### Environment info - `datasets` version: 2.19.0 - Platform: Linux-6.8.0-31-generic-x86_64-with-glibc2.39 - Python version: 3.12.3 - `huggingface_hub` version: 0.23.0 - PyArrow version: 16.0.0 - Pandas version: 2.2.2 - `fsspec` version: 2024.3.1
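A common workaround is to raise the process's open-file limit before loading, the Python equivalent of `ulimit -n` in the shell. A minimal sketch (Linux/macOS; the hard limit is the ceiling reachable without root):

```python
# A sketch of raising the soft open-file limit for the current process
# before loading a dataset with many shards.
import resource

soft, hard = resource.getrlimit(resource.RLIMIT_NOFILE)
target = min(4096, hard) if hard != resource.RLIM_INFINITY else 4096
resource.setrlimit(resource.RLIMIT_NOFILE, (max(soft, target), hard))

from datasets import load_dataset
ds = load_dataset("mteb/biblenlp-corpus-mmteb")
```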
{ "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "repos_url": "https://api.github.com/users/lhoestq/repos", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "type": "User", "site_admin": false }
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/6877/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/6877/timeline
null
completed
false
https://api.github.com/repos/huggingface/datasets/issues/6869
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6869/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6869/comments
https://api.github.com/repos/huggingface/datasets/issues/6869/events
https://github.com/huggingface/datasets/issues/6869
2,280,048,297
I_kwDODunzps6H5sap
6,869
Download is broken for dict of dicts: FileNotFoundError
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892857, "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
closed
false
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[ { "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false } ]
null
0
2024-05-06T05:13:36
2024-05-06T09:25:53
2024-05-06T09:25:53
MEMBER
null
null
null
It seems there is a bug when downloading a dict of dicts of URLs introduced by: - #6794 ## Steps to reproduce the bug: ```python from datasets import DownloadManager dl_manager = DownloadManager() paths = dl_manager.download({"train": {"frr": "hf://datasets/wikimedia/wikipedia/20231101.frr/train-00000-of-00001.parquet"}}) ``` Stack trace: ``` --------------------------------------------------------------------------- FileNotFoundError Traceback (most recent call last) <ipython-input-7-0e0d76d25b09> in <module> ----> 1 paths = dl_manager.download({"train": {"frr": "hf://datasets/wikimedia/wikipedia/20231101.frr/train-00000-of-00001.parquet"}}) .../huggingface/datasets/src/datasets/download/download_manager.py in download(self, url_or_urls) 255 start_time = datetime.now() 256 with stack_multiprocessing_download_progress_bars(): --> 257 downloaded_path_or_paths = map_nested( 258 download_func, 259 url_or_urls, .../huggingface/datasets/src/datasets/utils/py_utils.py in map_nested(function, data_struct, dict_only, map_list, map_tuple, map_numpy, num_proc, parallel_min_length, batched, batch_size, types, disable_tqdm, desc) 506 batch_size = max(len(iterable) // num_proc + int(len(iterable) % num_proc > 0), 1) 507 iterable = list(iter_batched(iterable, batch_size)) --> 508 mapped = [ 509 _single_map_nested((function, obj, batched, batch_size, types, None, True, None)) 510 for obj in hf_tqdm(iterable, disable=disable_tqdm, desc=desc) .../huggingface/datasets/src/datasets/utils/py_utils.py in <listcomp>(.0) 507 iterable = list(iter_batched(iterable, batch_size)) 508 mapped = [ --> 509 _single_map_nested((function, obj, batched, batch_size, types, None, True, None)) 510 for obj in hf_tqdm(iterable, disable=disable_tqdm, desc=desc) 511 ] .../huggingface/datasets/src/datasets/utils/py_utils.py in _single_map_nested(args) 375 and all(not isinstance(v, types) for v in data_struct) 376 ): --> 377 return [mapped_item for batch in iter_batched(data_struct, batch_size) for mapped_item in function(batch)] 378 379 # Reduce logging to keep things readable in multiprocessing with tqdm .../huggingface/datasets/src/datasets/utils/py_utils.py in <listcomp>(.0) 375 and all(not isinstance(v, types) for v in data_struct) 376 ): --> 377 return [mapped_item for batch in iter_batched(data_struct, batch_size) for mapped_item in function(batch)] 378 379 # Reduce logging to keep things readable in multiprocessing with tqdm .../huggingface/datasets/src/datasets/download/download_manager.py in _download_batched(self, url_or_filenames, download_config) 311 ) 312 else: --> 313 return [ 314 self._download_single(url_or_filename, download_config=download_config) 315 for url_or_filename in url_or_filenames .../huggingface/datasets/src/datasets/download/download_manager.py in <listcomp>(.0) 312 else: 313 return [ --> 314 self._download_single(url_or_filename, download_config=download_config) 315 for url_or_filename in url_or_filenames 316 ] .../huggingface/datasets/src/datasets/download/download_manager.py in _download_single(self, url_or_filename, download_config) 321 # append the relative path to the base_path 322 url_or_filename = url_or_path_join(self._base_path, url_or_filename) --> 323 out = cached_path(url_or_filename, download_config=download_config) 324 out = tracked_str(out) 325 out.set_origin(url_or_filename) .../huggingface/datasets/src/datasets/utils/file_utils.py in cached_path(url_or_filename, download_config, **download_kwargs) 220 elif is_local_path(url_or_filename): 221 # File, but it doesn't exist. 
--> 222 raise FileNotFoundError(f"Local file {url_or_filename} doesn't exist") 223 else: 224 # Something unknown FileNotFoundError: Local file .../huggingface/datasets/{'frr': 'hf:/datasets/wikimedia/wikipedia/20231101.frr/train-00000-of-00001.parquet'} doesn't exist ``` Related to: - #6850
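Until the regression is fixed, one workaround is to flatten the call so the manager only ever sees one level of nesting. A minimal sketch:

```python
# A sketch of the workaround: call download() once per split so the manager
# only ever receives a flat dict of URLs instead of a dict of dicts.
from datasets import DownloadManager

dl_manager = DownloadManager()
nested = {"train": {"frr": "hf://datasets/wikimedia/wikipedia/20231101.frr/train-00000-of-00001.parquet"}}
paths = {split: dl_manager.download(urls) for split, urls in nested.items()}
```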
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/6869/reactions", "total_count": 1, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/6869/timeline
null
completed
false
https://api.github.com/repos/huggingface/datasets/issues/6868
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6868/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6868/comments
https://api.github.com/repos/huggingface/datasets/issues/6868/events
https://github.com/huggingface/datasets/issues/6868
2,279,385,159
I_kwDODunzps6H3KhH
6,868
datasets.BuilderConfig does not work.
{ "login": "jdm4pku", "id": 148830652, "node_id": "U_kgDOCN75vA", "avatar_url": "https://avatars.githubusercontent.com/u/148830652?v=4", "gravatar_id": "", "url": "https://api.github.com/users/jdm4pku", "html_url": "https://github.com/jdm4pku", "followers_url": "https://api.github.com/users/jdm4pku/followers", "following_url": "https://api.github.com/users/jdm4pku/following{/other_user}", "gists_url": "https://api.github.com/users/jdm4pku/gists{/gist_id}", "starred_url": "https://api.github.com/users/jdm4pku/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/jdm4pku/subscriptions", "organizations_url": "https://api.github.com/users/jdm4pku/orgs", "repos_url": "https://api.github.com/users/jdm4pku/repos", "events_url": "https://api.github.com/users/jdm4pku/events{/privacy}", "received_events_url": "https://api.github.com/users/jdm4pku/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
1
2024-05-05T08:08:55
2024-05-05T12:15:02
2024-05-05T12:15:01
NONE
null
null
null
### Describe the bug I wrote a custom BuilderConfig and GeneratorBasedBuilder. Here is the code for the BuilderConfig: ``` class UIEConfig(datasets.BuilderConfig): def __init__( self, *args, data_dir=None, instruction_file=None, instruction_strategy=None, task_config_dir=None, num_examples=None, max_num_instances_per_task=None, max_num_instances_per_eval_task=None, over_sampling=None, **kwargs ): super().__init__(*args, **kwargs) self.data_dir = data_dir self.num_examples = num_examples self.over_sampling = over_sampling self.instructions = self._parse_instruction(instruction_file) self.task_configs = self._parse_task_config(task_config_dir) self.instruction_strategy = instruction_strategy self.max_num_instances_per_task = max_num_instances_per_task self.max_num_instances_per_eval_task = max_num_instances_per_eval_task ``` And here is the code for the GeneratorBasedBuilder: ``` class UIEInstructions(datasets.GeneratorBasedBuilder): VERSION = datasets.Version("2.0.0") BUILDER_CONFIG_CLASS = UIEConfig BUILDER_CONFIGS = [ UIEConfig(name="default", description="Default config for NaturalInstructions") ] DEFAULT_CONFIG_NAME = "default" ``` Here is the load_dataset call: ``` raw_datasets = load_dataset( os.path.join(CURRENT_DIR, "uie_dataset.py"), data_dir=data_args.data_dir, task_config_dir=data_args.task_config_dir, instruction_file=data_args.instruction_file, instruction_strategy=data_args.instruction_strategy, cache_dir=data_cache_dir, # for debug, change dataset size, otherwise open it max_num_instances_per_task=data_args.max_num_instances_per_task, max_num_instances_per_eval_task=data_args.max_num_instances_per_eval_task, num_examples=data_args.num_examples, over_sampling=data_args.over_sampling ) ``` Finally, I got this error: ``` BuilderConfig UIEConfig(name='default', version=0.0.0, data_dir=None, data_files=None, description='Default config for NaturalInstructions') doesn't have a 'task_config_dir' key. ``` I debugged the code, and it seems the parameters I added do not take effect. ### Steps to reproduce the bug https://github.com/BeyonderXX/InstructUIE/blob/master/src/uie_dataset.py ### Expected behavior ``` BuilderConfig UIEConfig(name='default', version=0.0.0, data_dir=None, data_files=None, description='Default config for NaturalInstructions') doesn't have a 'task_config_dir' key. ``` ### Environment info torch 2.3.0+cu118 transformers 4.40.1 python 3.8
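The check that produces this message appears to be `hasattr(builder_config, key)` for each keyword forwarded by `load_dataset`; since `__init__` stores `self.task_configs`/`self.instructions` rather than attributes named after the kwargs, the lookup fails. A minimal sketch of the likely fix, under that assumption about the cause:

```python
# A sketch of the likely fix: store each custom kwarg under an attribute
# with exactly the same name, so that hasattr(config, "task_config_dir")
# succeeds when load_dataset forwards task_config_dir=... as a config kwarg.
import datasets

class UIEConfig(datasets.BuilderConfig):
    def __init__(self, *args, data_dir=None, instruction_file=None,
                 task_config_dir=None, **kwargs):
        super().__init__(*args, **kwargs)
        self.data_dir = data_dir
        self.instruction_file = instruction_file  # same name as the kwarg
        self.task_config_dir = task_config_dir    # same name as the kwarg
        # Parsed views (self.instructions, self.task_configs) can still be
        # derived from these in the builder, under different attribute names.

config = UIEConfig(name="default")
assert hasattr(config, "task_config_dir")
```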
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/6868/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/6868/timeline
null
not_planned
false
https://api.github.com/repos/huggingface/datasets/issues/6867
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6867/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6867/comments
https://api.github.com/repos/huggingface/datasets/issues/6867/events
https://github.com/huggingface/datasets/issues/6867
2,279,059,787
I_kwDODunzps6H17FL
6,867
Improve performance of JSON loader
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892871, "node_id": "MDU6TGFiZWwxOTM1ODkyODcx", "url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement", "name": "enhancement", "color": "a2eeef", "default": true, "description": "New feature or request" } ]
closed
false
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[ { "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false } ]
null
5
2024-05-04T15:04:16
2024-05-17T16:22:28
2024-05-17T16:22:28
MEMBER
null
null
null
As reported by @natolambert, loading regular JSON files with `datasets` shows poor performance. The cause is that we use the `json` Python standard library instead of other, faster libraries. See my old comment: https://github.com/huggingface/datasets/pull/2638#pullrequestreview-706983714 > There are benchmarks that compare different JSON packages, with the Standard Library one among the worst performing: > - https://github.com/ultrajson/ultrajson#benchmarks > - https://github.com/ijl/orjson#performance I remember having a discussion about this and it was decided that it was better not to include an additional dependency on a third-party library. However: - We already depend on `pandas`, and `pandas` depends on `ujson`: so we already have an indirect dependency on `ujson` - Even if the above were not the case, we could always include `ujson` as an optional extra dependency and check at runtime whether it is installed to decide which library to use, either `json` or `ujson`
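A minimal sketch of the runtime-fallback idea described in this issue, assuming `orjson` and `ujson` stay optional dependencies. The module layout and the `loads` helper are illustrative, not the actual `datasets` implementation:

```python
# Choose the fastest available JSON decoder at import time, falling back to
# the standard library when no third-party package is installed.
try:
    import orjson as _json_impl  # https://github.com/ijl/orjson
except ImportError:
    try:
        import ujson as _json_impl  # already an indirect dependency via pandas
    except ImportError:
        import json as _json_impl  # stdlib, always available


def loads(data):
    """Decode a JSON string (or bytes) with whichever backend was found."""
    return _json_impl.loads(data)
```

Call sites would then use `loads` everywhere, so the speedup is transparent and stdlib behavior is preserved when neither package is present.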
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/6867/reactions", "total_count": 3, "+1": 3, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/6867/timeline
null
completed
false
https://api.github.com/repos/huggingface/datasets/issues/6866
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6866/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6866/comments
https://api.github.com/repos/huggingface/datasets/issues/6866/events
https://github.com/huggingface/datasets/issues/6866
2,278,736,221
I_kwDODunzps6H0sFd
6,866
DataFilesNotFoundError for datasets in the open-llm-leaderboard
{ "login": "jerome-white", "id": 6140840, "node_id": "MDQ6VXNlcjYxNDA4NDA=", "avatar_url": "https://avatars.githubusercontent.com/u/6140840?v=4", "gravatar_id": "", "url": "https://api.github.com/users/jerome-white", "html_url": "https://github.com/jerome-white", "followers_url": "https://api.github.com/users/jerome-white/followers", "following_url": "https://api.github.com/users/jerome-white/following{/other_user}", "gists_url": "https://api.github.com/users/jerome-white/gists{/gist_id}", "starred_url": "https://api.github.com/users/jerome-white/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/jerome-white/subscriptions", "organizations_url": "https://api.github.com/users/jerome-white/orgs", "repos_url": "https://api.github.com/users/jerome-white/repos", "events_url": "https://api.github.com/users/jerome-white/events{/privacy}", "received_events_url": "https://api.github.com/users/jerome-white/received_events", "type": "User", "site_admin": false }
[]
closed
false
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[ { "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false } ]
null
3
2024-05-04T04:59:00
2024-05-14T08:09:56
2024-05-14T08:09:56
NONE
null
null
null
### Describe the bug When trying to get config names or load any dataset within the open-llm-leaderboard ecosystem (`open-llm-leaderboard/details_`) I receive the DataFilesNotFoundError. For the last month or so I've been loading datasets from the leaderboard almost every day; yesterday was the first time I started seeing this. ### Steps to reproduce the bug This snippet has three cells: 1. Loads the modules 2. Tries to get config names 3. Tries to load the dataset I've chosen "davidkim205"'s Rhea-72b-v0.5 model because it is one of the best performers on the leaderboard and should likely have no dataset issues: ```python In [1]: from datasets import load_dataset, get_dataset_config_names In [2]: get_dataset_config_names("open-llm-leaderboard/details_davidkim205__Rhea ...: -72b-v0.5") --------------------------------------------------------------------------- DataFilesNotFoundError Traceback (most recent call last) Cell In[2], line 1 ----> 1 get_dataset_config_names("open-llm-leaderboard/details_davidkim205__Rhea-72b-v0.5") File ~/open-llm-bda/venv/lib/python3.11/site-packages/datasets/inspect.py:347, in get_dataset_config_names(path, revision, download_config, download_mode, dynamic_modules_path, data_files, **download_kwargs) 291 def get_dataset_config_names( 292 path: str, 293 revision: Optional[Union[str, Version]] = None, (...) 298 **download_kwargs, 299 ): 300 """Get the list of available config names for a particular dataset. 301 302 Args: (...) 345 ``` 346 """ --> 347 dataset_module = dataset_module_factory( 348 path, 349 revision=revision, 350 download_config=download_config, 351 download_mode=download_mode, 352 dynamic_modules_path=dynamic_modules_path, 353 data_files=data_files, 354 **download_kwargs, 355 ) 356 builder_cls = get_dataset_builder_class(dataset_module, dataset_name=os.path.basename(path)) 357 return list(builder_cls.builder_configs.keys()) or [ 358 dataset_module.builder_kwargs.get("config_name", builder_cls.DEFAULT_CONFIG_NAME or "default") 359 ] File ~/open-llm-bda/venv/lib/python3.11/site-packages/datasets/load.py:1821, in dataset_module_factory(path, revision, download_config, download_mode, dynamic_modules_path, data_dir, data_files, cache_dir, trust_remote_code, _require_default_config_name, _require_custom_configs, **download_kwargs) 1812 return LocalDatasetModuleFactoryWithScript( 1813 combined_path, 1814 download_mode=download_mode, 1815 dynamic_modules_path=dynamic_modules_path, 1816 trust_remote_code=trust_remote_code, 1817 ).get_module() 1818 elif os.path.isdir(path): 1819 return LocalDatasetModuleFactoryWithoutScript( 1820 path, data_dir=data_dir, data_files=data_files, download_mode=download_mode -> 1821 ).get_module() 1822 # Try remotely 1823 elif is_relative_path(path) and path.count("/") <= 1: File ~/open-llm-bda/venv/lib/python3.11/site-packages/datasets/load.py:1039, in LocalDatasetModuleFactoryWithoutScript.get_module(self) 1033 patterns = get_data_patterns(base_path) 1034 data_files = DataFilesDict.from_patterns( 1035 patterns, 1036 base_path=base_path, 1037 allowed_extensions=ALL_ALLOWED_EXTENSIONS, 1038 ) -> 1039 module_name, default_builder_kwargs = infer_module_for_data_files( 1040 data_files=data_files, 1041 path=self.path, 1042 ) 1043 data_files = data_files.filter_extensions(_MODULE_TO_EXTENSIONS[module_name]) 1044 # Collect metadata files if the module supports them File ~/open-llm-bda/venv/lib/python3.11/site-packages/datasets/load.py:597, in infer_module_for_data_files(data_files, path, download_config) 595 raise ValueError(f"Couldn't infer the same data file format for all splits. Got {split_modules}") 596 if not module_name: --> 597 raise DataFilesNotFoundError("No (supported) data files found" + (f" in {path}" if path else "")) 598 return module_name, default_builder_kwargs DataFilesNotFoundError: No (supported) data files found in open-llm-leaderboard/details_davidkim205__Rhea-72b-v0.5 In [3]: data = load_dataset("open-llm-leaderboard/details_davidkim205__Rhea-72b- ...: v0.5", "harness_winogrande_5") --------------------------------------------------------------------------- DataFilesNotFoundError Traceback (most recent call last) Cell In[3], line 1 ----> 1 data = load_dataset("open-llm-leaderboard/details_davidkim205__Rhea-72b-v0.5", "harness_winogrande_5") File ~/open-llm-bda/venv/lib/python3.11/site-packages/datasets/load.py:2587, in load_dataset(path, name, data_dir, data_files, split, cache_dir, features, download_config, download_mode, verification_mode, ignore_verifications, keep_in_memory, save_infos, revision, token, use_auth_token, task, streaming, num_proc, storage_options, trust_remote_code, **config_kwargs) 2582 verification_mode = VerificationMode( 2583 (verification_mode or VerificationMode.BASIC_CHECKS) if not save_infos else VerificationMode.ALL_CHECKS 2584 ) 2586 # Create a dataset builder -> 2587 builder_instance = load_dataset_builder( 2588 path=path, 2589 name=name, 2590 data_dir=data_dir, 2591 data_files=data_files, 2592 cache_dir=cache_dir, 2593 features=features, 2594 download_config=download_config, 2595 download_mode=download_mode, 2596 revision=revision, 2597 token=token, 2598 storage_options=storage_options, 2599 trust_remote_code=trust_remote_code, 2600 _require_default_config_name=name is None, 2601 **config_kwargs, 2602 ) 2604 # Return iterable dataset in case of streaming 2605 if streaming: File ~/open-llm-bda/venv/lib/python3.11/site-packages/datasets/load.py:2259, in load_dataset_builder(path, name, data_dir, data_files, cache_dir, features, download_config, download_mode, revision, token, use_auth_token, storage_options, trust_remote_code, _require_default_config_name, **config_kwargs) 2257 download_config = download_config.copy() if download_config else DownloadConfig() 2258 download_config.storage_options.update(storage_options) -> 2259 dataset_module = dataset_module_factory( 2260 path, 2261 revision=revision, 2262 download_config=download_config, 2263 download_mode=download_mode, 2264 data_dir=data_dir, 2265 data_files=data_files, 2266 cache_dir=cache_dir, 2267 trust_remote_code=trust_remote_code, 2268 _require_default_config_name=_require_default_config_name, 2269 _require_custom_configs=bool(config_kwargs), 2270 ) 2271 # Get dataset builder class from the processing script 2272 builder_kwargs = dataset_module.builder_kwargs File ~/open-llm-bda/venv/lib/python3.11/site-packages/datasets/load.py:1821, in dataset_module_factory(path, revision, download_config, download_mode, dynamic_modules_path, data_dir, data_files, cache_dir, trust_remote_code, _require_default_config_name, _require_custom_configs, **download_kwargs) 1812 return LocalDatasetModuleFactoryWithScript( 1813 combined_path, 1814 download_mode=download_mode, 1815 dynamic_modules_path=dynamic_modules_path, 1816 trust_remote_code=trust_remote_code, 1817 ).get_module() 1818 elif os.path.isdir(path): 1819 return LocalDatasetModuleFactoryWithoutScript( 1820 path, data_dir=data_dir, data_files=data_files, download_mode=download_mode -> 1821 ).get_module() 1822 # Try remotely 1823 elif is_relative_path(path) and path.count("/") <= 1: File ~/open-llm-bda/venv/lib/python3.11/site-packages/datasets/load.py:1039, in LocalDatasetModuleFactoryWithoutScript.get_module(self) 1033 patterns = get_data_patterns(base_path) 1034 data_files = DataFilesDict.from_patterns( 1035 patterns, 1036 base_path=base_path, 1037 allowed_extensions=ALL_ALLOWED_EXTENSIONS, 1038 ) -> 1039 module_name, default_builder_kwargs = infer_module_for_data_files( 1040 data_files=data_files, 1041 path=self.path, 1042 ) 1043 data_files = data_files.filter_extensions(_MODULE_TO_EXTENSIONS[module_name]) 1044 # Collect metadata files if the module supports them File ~/open-llm-bda/venv/lib/python3.11/site-packages/datasets/load.py:597, in infer_module_for_data_files(data_files, path, download_config) 595 raise ValueError(f"Couldn't infer the same data file format for all splits. Got {split_modules}") 596 if not module_name: --> 597 raise DataFilesNotFoundError("No (supported) data files found" + (f" in {path}" if path else "")) 598 return module_name, default_builder_kwargs DataFilesNotFoundError: No (supported) data files found in open-llm-leaderboard/details_davidkim205__Rhea-72b-v0.5 ``` ### Expected behavior No exceptions from `get_dataset_config_names` or `load_dataset` ### Environment info - `datasets` version: 2.19.0 - Platform: Linux-6.5.0-1018-aws-aarch64-with-glibc2.35 - Python version: 3.11.8 - `huggingface_hub` version: 0.23.0 - PyArrow version: 16.0.0 - Pandas version: 2.2.2 - `fsspec` version: 2024.3.1
{ "login": "jerome-white", "id": 6140840, "node_id": "MDQ6VXNlcjYxNDA4NDA=", "avatar_url": "https://avatars.githubusercontent.com/u/6140840?v=4", "gravatar_id": "", "url": "https://api.github.com/users/jerome-white", "html_url": "https://github.com/jerome-white", "followers_url": "https://api.github.com/users/jerome-white/followers", "following_url": "https://api.github.com/users/jerome-white/following{/other_user}", "gists_url": "https://api.github.com/users/jerome-white/gists{/gist_id}", "starred_url": "https://api.github.com/users/jerome-white/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/jerome-white/subscriptions", "organizations_url": "https://api.github.com/users/jerome-white/orgs", "repos_url": "https://api.github.com/users/jerome-white/repos", "events_url": "https://api.github.com/users/jerome-white/events{/privacy}", "received_events_url": "https://api.github.com/users/jerome-white/received_events", "type": "User", "site_admin": false }
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/6866/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/6866/timeline
null
completed
false
https://api.github.com/repos/huggingface/datasets/issues/6865
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6865/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6865/comments
https://api.github.com/repos/huggingface/datasets/issues/6865/events
https://github.com/huggingface/datasets/issues/6865
2,277,304,832
I_kwDODunzps6HvOoA
6,865
Example on semantic segmentation contains a bug
{ "login": "ducha-aiki", "id": 4803565, "node_id": "MDQ6VXNlcjQ4MDM1NjU=", "avatar_url": "https://avatars.githubusercontent.com/u/4803565?v=4", "gravatar_id": "", "url": "https://api.github.com/users/ducha-aiki", "html_url": "https://github.com/ducha-aiki", "followers_url": "https://api.github.com/users/ducha-aiki/followers", "following_url": "https://api.github.com/users/ducha-aiki/following{/other_user}", "gists_url": "https://api.github.com/users/ducha-aiki/gists{/gist_id}", "starred_url": "https://api.github.com/users/ducha-aiki/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ducha-aiki/subscriptions", "organizations_url": "https://api.github.com/users/ducha-aiki/orgs", "repos_url": "https://api.github.com/users/ducha-aiki/repos", "events_url": "https://api.github.com/users/ducha-aiki/events{/privacy}", "received_events_url": "https://api.github.com/users/ducha-aiki/received_events", "type": "User", "site_admin": false }
[]
open
false
null
[]
null
0
2024-05-03T09:40:12
2024-05-03T09:40:12
null
NONE
null
null
null
### Describe the bug https://huggingface.co/docs/datasets/en/semantic_segmentation shows a wrong example with torchvision transforms. Specifically, as one can see in the screenshot below, the object boundaries have weird colors. <img width="689" alt="image" src="https://github.com/huggingface/datasets/assets/4803565/59aa0e2c-2e3e-415b-9d42-2314044c5aee"> The original example with `albumentations` is correct <img width="705" alt="image" src="https://github.com/huggingface/datasets/assets/4803565/27dbd725-cea5-4e48-ba59-7050c3ce17b3"> That is because `torchvision.transforms.Resize` interpolates everything bilinearly, which is wrong for segmentation labels: you just cannot mix the two. Overall, `torchvision.transforms` is designed for classification only and cannot be applied to images and masks together, unless you write two separate branches of augmentations. The correct way would be to use the `v2` version of the transforms and convert the segmentation labels to a https://pytorch.org/vision/main/generated/torchvision.tv_tensors.Mask.html#torchvision.tv_tensors.Mask object ### Steps to reproduce the bug Go to the website. <img width="689" alt="image" src="https://github.com/huggingface/datasets/assets/4803565/ea1276d0-d69a-48cf-b9c2-cd61217815ef"> https://huggingface.co/docs/datasets/en/semantic_segmentation ### Expected behavior Results similar to `albumentations`. Or remove the torchvision part altogether. Or use `kornia` instead. ### Environment info Irrelevant
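A minimal sketch of the `v2` approach suggested in this issue, assuming the dataset yields a PIL image under `"image"` and an integer label map under `"annotation"`. The field names and transform list are illustrative, not the documentation's actual example:

```python
import numpy as np
from torchvision import tv_tensors
from torchvision.transforms import v2

# Wrapping the label map in a Mask makes v2 transforms resize it with
# nearest-neighbor interpolation, so class ids are never blended together.
joint_transforms = v2.Compose([
    v2.Resize(size=(256, 256)),
    v2.RandomHorizontalFlip(p=0.5),
])

def augment(example):
    image = example["image"]  # PIL image
    mask = tv_tensors.Mask(np.array(example["annotation"]))  # integer labels
    image, mask = joint_transforms(image, mask)  # applied jointly, in sync
    return {"pixel_values": image, "label": mask}
```

Because both inputs go through the same transform call, random augmentations such as the flip stay synchronized between image and mask.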
null
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/6865/reactions", "total_count": 2, "+1": 2, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/6865/timeline
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/6864
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6864/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6864/comments
https://api.github.com/repos/huggingface/datasets/issues/6864/events
https://github.com/huggingface/datasets/issues/6864
2,276,986,981
I_kwDODunzps6HuBBl
6,864
Dataset 'rewardsignal/reddit_writing_prompts' doesn't exist on the Hub
{ "login": "vinodrajendran001", "id": 5783246, "node_id": "MDQ6VXNlcjU3ODMyNDY=", "avatar_url": "https://avatars.githubusercontent.com/u/5783246?v=4", "gravatar_id": "", "url": "https://api.github.com/users/vinodrajendran001", "html_url": "https://github.com/vinodrajendran001", "followers_url": "https://api.github.com/users/vinodrajendran001/followers", "following_url": "https://api.github.com/users/vinodrajendran001/following{/other_user}", "gists_url": "https://api.github.com/users/vinodrajendran001/gists{/gist_id}", "starred_url": "https://api.github.com/users/vinodrajendran001/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/vinodrajendran001/subscriptions", "organizations_url": "https://api.github.com/users/vinodrajendran001/orgs", "repos_url": "https://api.github.com/users/vinodrajendran001/repos", "events_url": "https://api.github.com/users/vinodrajendran001/events{/privacy}", "received_events_url": "https://api.github.com/users/vinodrajendran001/received_events", "type": "User", "site_admin": false }
[]
closed
false
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[ { "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false } ]
null
1
2024-05-03T06:03:30
2024-05-06T06:36:42
2024-05-06T06:36:41
NONE
null
null
null
### Describe the bug The dataset `rewardsignal/reddit_writing_prompts` is missing from the Hugging Face Hub. ### Steps to reproduce the bug ``` from datasets import load_dataset prompt_response_dataset = load_dataset("rewardsignal/reddit_writing_prompts", data_files="prompt_responses_full.csv", split='train[:80%]') ``` ### Expected behavior DatasetNotFoundError: Dataset 'rewardsignal/reddit_writing_prompts' doesn't exist on the Hub or cannot be accessed ### Environment info Nothing to do with versions
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/6864/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/6864/timeline
null
not_planned
false
https://api.github.com/repos/huggingface/datasets/issues/6863
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6863/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6863/comments
https://api.github.com/repos/huggingface/datasets/issues/6863/events
https://github.com/huggingface/datasets/issues/6863
2,276,977,534
I_kwDODunzps6Ht-t-
6,863
Revert temporary pin huggingface-hub < 0.23.0
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[]
closed
false
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[ { "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false } ]
null
0
2024-05-03T05:53:55
2024-05-27T10:14:41
2024-05-27T10:14:41
MEMBER
null
null
null
Revert the temporary pin `huggingface-hub < 0.23.0` introduced by - #6861 once the following issue is fixed and released: - huggingface/transformers#30618
{ "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "repos_url": "https://api.github.com/users/lhoestq/repos", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "type": "User", "site_admin": false }
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/6863/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/6863/timeline
null
completed
false
https://api.github.com/repos/huggingface/datasets/issues/6860
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6860/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6860/comments
https://api.github.com/repos/huggingface/datasets/issues/6860/events
https://github.com/huggingface/datasets/issues/6860
2,275,537,137
I_kwDODunzps6HofDx
6,860
CI fails after huggingface_hub-0.23.0 release: FutureWarning: "resume_download"
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892857, "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
closed
false
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[ { "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false } ]
null
3
2024-05-02T13:24:17
2024-05-02T16:53:45
2024-05-02T16:53:45
MEMBER
null
null
null
CI fails after the latest huggingface_hub 0.23.0 release: https://github.com/huggingface/huggingface_hub/releases/tag/v0.23.0 ``` FAILED tests/test_metric_common.py::LocalMetricTest::test_load_metric_bertscore - FutureWarning: `resume_download` is deprecated and will be removed in version 1.0.0. Downloads always resume when possible. If you want to force a new download, use `force_download=True`. FAILED tests/test_metric_common.py::LocalMetricTest::test_load_metric_frugalscore - FutureWarning: `resume_download` is deprecated and will be removed in version 1.0.0. Downloads always resume when possible. If you want to force a new download, use `force_download=True`. FAILED tests/test_metric_common.py::LocalMetricTest::test_load_metric_perplexity - FutureWarning: `resume_download` is deprecated and will be removed in version 1.0.0. Downloads always resume when possible. If you want to force a new download, use `force_download=True`. FAILED tests/test_fingerprint.py::TokenizersHashTest::test_hash_tokenizer - FutureWarning: `resume_download` is deprecated and will be removed in version 1.0.0. Downloads always resume when possible. If you want to force a new download, use `force_download=True`. FAILED tests/test_fingerprint.py::TokenizersHashTest::test_hash_tokenizer_with_cache - FutureWarning: `resume_download` is deprecated and will be removed in version 1.0.0. Downloads always resume when possible. If you want to force a new download, use `force_download=True`. FAILED tests/test_arrow_dataset.py::MiscellaneousDatasetTest::test_set_format_encode - FutureWarning: `resume_download` is deprecated and will be removed in version 1.0.0. Downloads always resume when possible. If you want to force a new download, use `force_download=True`. ```
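A hedged sketch of the kind of adaptation these failures call for, assuming the warnings come from call sites that still pass `resume_download`; the repo id and filename below are just an example, not the actual test code:

```python
from huggingface_hub import hf_hub_download

# Before (emits FutureWarning on huggingface_hub >= 0.23.0):
#   hf_hub_download("bert-base-uncased", "config.json", resume_download=True)

# After: downloads always resume when possible, so the flag is simply
# dropped; force_download=True is passed only when a fresh copy is needed.
path = hf_hub_download("bert-base-uncased", "config.json")
fresh_path = hf_hub_download("bert-base-uncased", "config.json", force_download=True)
```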
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/6860/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/6860/timeline
null
completed
false
https://api.github.com/repos/huggingface/datasets/issues/6858
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6858/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6858/comments
https://api.github.com/repos/huggingface/datasets/issues/6858/events
https://github.com/huggingface/datasets/issues/6858
2,274,917,185
I_kwDODunzps6HmHtB
6,858
Segmentation fault
{ "login": "scampion", "id": 554155, "node_id": "MDQ6VXNlcjU1NDE1NQ==", "avatar_url": "https://avatars.githubusercontent.com/u/554155?v=4", "gravatar_id": "", "url": "https://api.github.com/users/scampion", "html_url": "https://github.com/scampion", "followers_url": "https://api.github.com/users/scampion/followers", "following_url": "https://api.github.com/users/scampion/following{/other_user}", "gists_url": "https://api.github.com/users/scampion/gists{/gist_id}", "starred_url": "https://api.github.com/users/scampion/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/scampion/subscriptions", "organizations_url": "https://api.github.com/users/scampion/orgs", "repos_url": "https://api.github.com/users/scampion/repos", "events_url": "https://api.github.com/users/scampion/events{/privacy}", "received_events_url": "https://api.github.com/users/scampion/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
2
2024-05-02T08:28:49
2024-05-03T08:43:21
2024-05-03T08:42:36
NONE
null
null
null
### Describe the bug Using various versions of `datasets`, I'm no longer able to load that dataset without a segmentation fault. Several other files are also affected. ### Steps to reproduce the bug ```bash # Create a new venv python3 -m venv venv_test source venv_test/bin/activate # Install the latest version pip install datasets # Load that dataset python3 -q -X faulthandler -c "from datasets import load_dataset; load_dataset('EuropeanParliament/Eurovoc', '1998-09')" ``` ### Expected behavior The data loads without a segmentation fault ### Environment info datasets==2.19.0 Python 3.11.7 Darwin 22.5.0 Darwin Kernel Version 22.5.0: Mon Apr 24 20:51:50 PDT 2023; root:xnu-8796.121.2~5/RELEASE_X86_64 x86_64
{ "login": "scampion", "id": 554155, "node_id": "MDQ6VXNlcjU1NDE1NQ==", "avatar_url": "https://avatars.githubusercontent.com/u/554155?v=4", "gravatar_id": "", "url": "https://api.github.com/users/scampion", "html_url": "https://github.com/scampion", "followers_url": "https://api.github.com/users/scampion/followers", "following_url": "https://api.github.com/users/scampion/following{/other_user}", "gists_url": "https://api.github.com/users/scampion/gists{/gist_id}", "starred_url": "https://api.github.com/users/scampion/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/scampion/subscriptions", "organizations_url": "https://api.github.com/users/scampion/orgs", "repos_url": "https://api.github.com/users/scampion/repos", "events_url": "https://api.github.com/users/scampion/events{/privacy}", "received_events_url": "https://api.github.com/users/scampion/received_events", "type": "User", "site_admin": false }
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/6858/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/6858/timeline
null
completed
false
https://api.github.com/repos/huggingface/datasets/issues/6856
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6856/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6856/comments
https://api.github.com/repos/huggingface/datasets/issues/6856/events
https://github.com/huggingface/datasets/issues/6856
2,274,828,933
I_kwDODunzps6HlyKF
6,856
CI fails on Windows for test_delete_from_hub and test_xgetsize_private due to new-line character
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892857, "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
closed
false
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[ { "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false } ]
null
1
2024-05-02T07:37:03
2024-05-02T11:43:01
2024-05-02T11:43:01
MEMBER
null
null
null
CI fails on Windows for test_delete_from_hub after the merge of: - #6820 This is weird because the CI was green in the PR branch before merging to main. ``` FAILED tests/test_hub.py::test_delete_from_hub - AssertionError: assert [CommitOperat...\r\n---\r\n')] == [CommitOperat...in/*\n---\n')] At index 1 diff: CommitOperationAdd(path_in_repo='README.md', path_or_fileobj=b'---\r\nconfigs:\r\n- config_name: cats\r\n data_files:\r\n - split: train\r\n path: cats/train/*\r\n---\r\n') != CommitOperationAdd(path_in_repo='README.md', path_or_fileobj=b'---\nconfigs:\n- config_name: cats\n data_files:\n - split: train\n path: cats/train/*\n---\n') Full diff: [ CommitOperationDelete( path_in_repo='dogs/train/0000.csv', is_folder=False, ), CommitOperationAdd( path_in_repo='README.md', - path_or_fileobj=b'---\nconfigs:\n- config_name: cats\n data_files:\n ' ? -------- + path_or_fileobj=b'---\r\nconfigs:\r\n- config_name: cats\r\n data_f' ? ++ ++ ++ - b' - split: train\n path: cats/train/*\n---\n', ? ^^^^^^ - + b'iles:\r\n - split: train\r\n path: cats/train/*\r' ? ++++++++++ ++ ^ + b'\n---\r\n', ), ] ```
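A hedged sketch of the usual fix for this class of failure, assuming the mismatch comes from Windows materializing the generated README bytes with CRLF line endings; the helper below is illustrative, not the actual patch:

```python
def normalize_newlines(data: bytes) -> bytes:
    """Normalize CRLF line endings to LF so byte-level comparisons
    behave the same on Windows and Linux runners."""
    return data.replace(b"\r\n", b"\n")

# Windows checkouts often produce CRLF where Linux produces LF:
windows_readme = b"---\r\nconfigs:\r\n- config_name: cats\r\n---\r\n"
linux_readme = b"---\nconfigs:\n- config_name: cats\n---\n"
assert normalize_newlines(windows_readme) == normalize_newlines(linux_readme)
```

Normalizing in the test (or generating the README with explicit LF endings) makes the assertion platform-independent.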
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/6856/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/6856/timeline
null
completed
false
https://api.github.com/repos/huggingface/datasets/issues/6854
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6854/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6854/comments
https://api.github.com/repos/huggingface/datasets/issues/6854/events
https://github.com/huggingface/datasets/issues/6854
2,274,767,686
I_kwDODunzps6HljNG
6,854
Wrong usage example when config name is missing for community script-datasets
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892857, "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
closed
false
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[ { "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false } ]
null
0
2024-05-02T06:59:39
2024-05-03T15:51:59
2024-05-03T15:51:58
MEMBER
null
null
null
As reported by @Wauplin, when loading a community dataset with a script, there is a bug in the usage example shown in the error message if the dataset has multiple configs (and no default config) and the user does not pass any config. For example: ```python >>> ds = load_dataset("google/fleurs") ValueError: Config name is missing. Please pick one among the available configs: ['af_za', 'am_et', 'ar_eg', 'as_in', 'ast_es', 'az_az', 'be_by', 'bg_bg', 'bn_in', 'bs_ba', 'ca_es', 'ceb_ph', 'ckb_iq', 'cmn_hans_cn', 'cs_cz', 'cy_gb', 'da_dk', 'de_de', 'el_gr', 'en_us', 'es_419', 'et_ee', 'fa_ir', 'ff_sn', 'fi_fi', 'fil_ph', 'fr_fr', 'ga_ie', 'gl_es', 'gu_in', 'ha_ng', 'he_il', 'hi_in', 'hr_hr', 'hu_hu', 'hy_am', 'id_id', 'ig_ng', 'is_is', 'it_it', 'ja_jp', 'jv_id', 'ka_ge', 'kam_ke', 'kea_cv', 'kk_kz', 'km_kh', 'kn_in', 'ko_kr', 'ky_kg', 'lb_lu', 'lg_ug', 'ln_cd', 'lo_la', 'lt_lt', 'luo_ke', 'lv_lv', 'mi_nz', 'mk_mk', 'ml_in', 'mn_mn', 'mr_in', 'ms_my', 'mt_mt', 'my_mm', 'nb_no', 'ne_np', 'nl_nl', 'nso_za', 'ny_mw', 'oc_fr', 'om_et', 'or_in', 'pa_in', 'pl_pl', 'ps_af', 'pt_br', 'ro_ro', 'ru_ru', 'sd_in', 'sk_sk', 'sl_si', 'sn_zw', 'so_so', 'sr_rs', 'sv_se', 'sw_ke', 'ta_in', 'te_in', 'tg_tj', 'th_th', 'tr_tr', 'uk_ua', 'umb_ao', 'ur_pk', 'uz_uz', 'vi_vn', 'wo_sn', 'xh_za', 'yo_ng', 'yue_hant_hk', 'zu_za', 'all'] Example of usage: `load_dataset('fleurs', 'af_za')` ``` Note the usage example in the error message suggests loading "fleurs" instead of "google/fleurs".
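A minimal sketch of the fix implied by this issue: build the usage hint from the full namespaced repo id instead of its basename. The function name and message layout are illustrative, not the actual `datasets` source:

```python
def config_error_message(path: str, config_names: list[str]) -> str:
    """Build the 'config name is missing' error so that the usage example
    keeps the full repo id (e.g. 'google/fleurs'), not just 'fleurs'."""
    example = f"load_dataset('{path}', '{config_names[0]}')"
    return (
        "Config name is missing.\n"
        f"Please pick one among the available configs: {config_names}\n"
        f"Example of usage:\n\t`{example}`"
    )

# Now suggests `load_dataset('google/fleurs', 'af_za')`:
print(config_error_message("google/fleurs", ["af_za", "am_et", "ar_eg"]))
```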
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/6854/reactions", "total_count": 1, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/6854/timeline
null
completed
false