| url (string) | html_url (string) | issue_url (string) | id (int64) | node_id (string) | user_login (string) | user_id (int64) | user_node_id (string) | user_avatar_url (string) | user_gravatar_id (string) | user_url (string) | user_html_url (string) | user_followers_url (string) | user_following_url (string) | user_gists_url (string) | user_starred_url (string) | user_subscriptions_url (string) | user_organizations_url (string) | user_repos_url (string) | user_events_url (string) | user_received_events_url (string) | user_type (string) | user_user_view_type (string) | user_site_admin (bool) | created_at (timestamp[s]) | updated_at (timestamp[s]) | body (string) | author_association (string) | reactions_url (string) | reactions_total_count (int64) | reactions_+1 (int64) | reactions_-1 (int64) | reactions_laugh (int64) | reactions_hooray (int64) | reactions_confused (int64) | reactions_heart (int64) | reactions_rocket (int64) | reactions_eyes (int64) | performed_via_github_app (null) |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
https://api.github.com/repos/huggingface/datasets/issues/comments/787152134
|
https://github.com/huggingface/datasets/issues/1942#issuecomment-787152134
|
https://api.github.com/repos/huggingface/datasets/issues/1942
| 787,152,134
|
MDEyOklzc3VlQ29tbWVudDc4NzE1MjEzNA==
|
stas00
| 10,676,103
|
MDQ6VXNlcjEwNjc2MTAz
|
https://avatars.githubusercontent.com/u/10676103?v=4
|
https://api.github.com/users/stas00
|
https://github.com/stas00
|
https://api.github.com/users/stas00/followers
|
https://api.github.com/users/stas00/following{/other_user}
|
https://api.github.com/users/stas00/gists{/gist_id}
|
https://api.github.com/users/stas00/starred{/owner}{/repo}
|
https://api.github.com/users/stas00/subscriptions
|
https://api.github.com/users/stas00/orgs
|
https://api.github.com/users/stas00/repos
|
https://api.github.com/users/stas00/events{/privacy}
|
https://api.github.com/users/stas00/received_events
|
User
|
public
| false
| 2021-02-27T21:39:09
| 2021-02-27T21:39:09
|
But no, since `metric = load_metric(metric_name)` is called for each process, the race condition is still there. So still getting:
```
ValueError: Error in finalize: another metric instance is already using the local cache file. Please specify an experiment_id to avoid colision between distributed metric instances.
```
i.e. the only way to fix this is to `load_metric` only for rank 0, but this requires huge changes in the code and all end users' code.
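To make the collision concrete, here is a minimal, self-contained sketch of the failure mode; the file-naming logic is a hypothetical reconstruction for illustration, not the actual `datasets` internals. Without a distinct `experiment_id`, every independent run computes the same cache-file name and collides:

```python
from typing import Optional
import uuid

def metric_cache_path(metric_name: str, experiment_id: Optional[str] = None, process_id: int = 0) -> str:
    # Hypothetical reconstruction: with the defaults, every independent run
    # derives the same cache-file name, so concurrent runs fight over it.
    exp = experiment_id or "default_experiment"
    return f"/tmp/{metric_name}-{exp}-{process_id}.arrow"

# Two unrelated runs using the defaults target the same file:
assert metric_cache_path("sacrebleu") == metric_cache_path("sacrebleu")

# A unique experiment_id per run keeps the files apart:
run_a = metric_cache_path("sacrebleu", experiment_id=uuid.uuid4().hex)
run_b = metric_cache_path("sacrebleu", experiment_id=uuid.uuid4().hex)
assert run_a != run_b
```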
|
CONTRIBUTOR
|
https://api.github.com/repos/huggingface/datasets/issues/comments/787152134/reactions
| 0
| 0
| 0
| 0
| 0
| 0
| 0
| 0
| 0
| null |
|
https://api.github.com/repos/huggingface/datasets/issues/comments/787168429
|
https://github.com/huggingface/datasets/issues/1958#issuecomment-787168429
|
https://api.github.com/repos/huggingface/datasets/issues/1958
| 787,168,429
|
MDEyOklzc3VlQ29tbWVudDc4NzE2ODQyOQ==
|
himat
| 1,156,974
|
MDQ6VXNlcjExNTY5NzQ=
|
https://avatars.githubusercontent.com/u/1156974?v=4
|
https://api.github.com/users/himat
|
https://github.com/himat
|
https://api.github.com/users/himat/followers
|
https://api.github.com/users/himat/following{/other_user}
|
https://api.github.com/users/himat/gists{/gist_id}
|
https://api.github.com/users/himat/starred{/owner}{/repo}
|
https://api.github.com/users/himat/subscriptions
|
https://api.github.com/users/himat/orgs
|
https://api.github.com/users/himat/repos
|
https://api.github.com/users/himat/events{/privacy}
|
https://api.github.com/users/himat/received_events
|
User
|
public
| false
| 2021-02-27T21:50:16
| 2021-02-27T21:50:16
|
Never mind, I ran it again and it worked this time. Strange.
|
NONE
|
https://api.github.com/repos/huggingface/datasets/issues/comments/787168429/reactions
| 0
| 0
| 0
| 0
| 0
| 0
| 0
| 0
| 0
| null |
|
https://api.github.com/repos/huggingface/datasets/issues/comments/787185348
|
https://github.com/huggingface/datasets/issues/1942#issuecomment-787185348
|
https://api.github.com/repos/huggingface/datasets/issues/1942
| 787,185,348
|
MDEyOklzc3VlQ29tbWVudDc4NzE4NTM0OA==
|
stas00
| 10,676,103
|
MDQ6VXNlcjEwNjc2MTAz
|
https://avatars.githubusercontent.com/u/10676103?v=4
|
https://api.github.com/users/stas00
|
https://github.com/stas00
|
https://api.github.com/users/stas00/followers
|
https://api.github.com/users/stas00/following{/other_user}
|
https://api.github.com/users/stas00/gists{/gist_id}
|
https://api.github.com/users/stas00/starred{/owner}{/repo}
|
https://api.github.com/users/stas00/subscriptions
|
https://api.github.com/users/stas00/orgs
|
https://api.github.com/users/stas00/repos
|
https://api.github.com/users/stas00/events{/privacy}
|
https://api.github.com/users/stas00/received_events
|
User
|
public
| false
| 2021-02-27T22:01:05
| 2021-02-27T22:09:35
|
OK, here is a workaround that works. The onus here is absolutely on the user:
```
diff --git a/examples/seq2seq/run_seq2seq.py b/examples/seq2seq/run_seq2seq.py
index 2a060dac5..c82fd83ea 100755
--- a/examples/seq2seq/run_seq2seq.py
+++ b/examples/seq2seq/run_seq2seq.py
@@ -520,7 +520,11 @@ def main():
     # Metric
     metric_name = "rouge" if data_args.task.startswith("summarization") else "sacrebleu"
-    metric = load_metric(metric_name)
+    import torch.distributed as dist
+    if dist.is_initialized():
+        metric = load_metric(metric_name, num_process=dist.get_world_size(), process_id=dist.get_rank())
+    else:
+        metric = load_metric(metric_name)

     def postprocess_text(preds, labels):
         preds = [pred.strip() for pred in preds]
@@ -548,12 +552,17 @@ def main():
         # Some simple post-processing
         decoded_preds, decoded_labels = postprocess_text(decoded_preds, decoded_labels)

+        kwargs = dict(predictions=decoded_preds, references=decoded_labels)
+        if metric_name == "rouge":
+            kwargs.update(use_stemmer=True)
+        result = metric.compute(**kwargs)  # must call for all processes
+        if result is None:  # only the rank-0 process returns metrics, the others get None
+            return {}
+
         if metric_name == "rouge":
-            result = metric.compute(predictions=decoded_preds, references=decoded_labels, use_stemmer=True)
             # Extract a few results from ROUGE
             result = {key: value.mid.fmeasure * 100 for key, value in result.items()}
         else:
-            result = metric.compute(predictions=decoded_preds, references=decoded_labels)
             result = {"bleu": result["score"]}

         prediction_lens = [np.count_nonzero(pred != tokenizer.pad_token_id) for pred in preds]
```
This is not user-friendly, to say the least. And it's still wasteful, as we don't need the other processes to do anything.
But it solves the current race condition.
Clearly this calls for a design discussion, as it's the responsibility of the Trainer to handle this, not the user's. Perhaps in the `transformers` land?
|
CONTRIBUTOR
|
https://api.github.com/repos/huggingface/datasets/issues/comments/787185348/reactions
| 0
| 0
| 0
| 0
| 0
| 0
| 0
| 0
| 0
| null |
|
https://api.github.com/repos/huggingface/datasets/issues/comments/787206484
|
https://github.com/huggingface/datasets/issues/1942#issuecomment-787206484
|
https://api.github.com/repos/huggingface/datasets/issues/1942
| 787,206,484
|
MDEyOklzc3VlQ29tbWVudDc4NzIwNjQ4NA==
|
sgugger
| 35,901,082
|
MDQ6VXNlcjM1OTAxMDgy
|
https://avatars.githubusercontent.com/u/35901082?v=4
|
https://api.github.com/users/sgugger
|
https://github.com/sgugger
|
https://api.github.com/users/sgugger/followers
|
https://api.github.com/users/sgugger/following{/other_user}
|
https://api.github.com/users/sgugger/gists{/gist_id}
|
https://api.github.com/users/sgugger/starred{/owner}{/repo}
|
https://api.github.com/users/sgugger/subscriptions
|
https://api.github.com/users/sgugger/orgs
|
https://api.github.com/users/sgugger/repos
|
https://api.github.com/users/sgugger/events{/privacy}
|
https://api.github.com/users/sgugger/received_events
|
User
|
public
| false
| 2021-02-27T23:58:26
| 2021-02-27T23:58:26
|
I don't see how this could be the responsibility of `Trainer`, which hasn't the faintest idea of what a `datasets.Metric` is. The trainer takes a function `compute_metrics` that goes from predictions + labels to metric results; there is nothing more to it. That computation is done on all processes.
The fact that a `datasets.Metric` object cannot be used as a simple compute function in a multi-process environment is, in my opinion, a bug in `datasets`. Especially since, as I mentioned before, the multiprocessing part of `datasets.Metric` has a deep flaw: it can't work in a multi-node environment. So you actually need to do the job of gathering predictions and labels yourself.
The changes you are proposing, Stas, make the code less readable and also concatenate all the predictions and labels `number_of_processes` times, I believe, which is not going to make the metric computation any faster.
|
CONTRIBUTOR
|
https://api.github.com/repos/huggingface/datasets/issues/comments/787206484/reactions
| 1
| 1
| 0
| 0
| 0
| 0
| 0
| 0
| 0
| null |
|
https://api.github.com/repos/huggingface/datasets/issues/comments/787206909
|
https://github.com/huggingface/datasets/issues/1942#issuecomment-787206909
|
https://api.github.com/repos/huggingface/datasets/issues/1942
| 787,206,909
|
MDEyOklzc3VlQ29tbWVudDc4NzIwNjkwOQ==
|
stas00
| 10,676,103
|
MDQ6VXNlcjEwNjc2MTAz
|
https://avatars.githubusercontent.com/u/10676103?v=4
|
https://api.github.com/users/stas00
|
https://github.com/stas00
|
https://api.github.com/users/stas00/followers
|
https://api.github.com/users/stas00/following{/other_user}
|
https://api.github.com/users/stas00/gists{/gist_id}
|
https://api.github.com/users/stas00/starred{/owner}{/repo}
|
https://api.github.com/users/stas00/subscriptions
|
https://api.github.com/users/stas00/orgs
|
https://api.github.com/users/stas00/repos
|
https://api.github.com/users/stas00/events{/privacy}
|
https://api.github.com/users/stas00/received_events
|
User
|
public
| false
| 2021-02-28T00:02:41
| 2021-02-28T00:44:54
|
Right, to clarify, I meant it'd be good to have it sorted out on the library side rather than requiring the user to figure it out. This is too complex and error-prone, and if not coded correctly the bug will be intermittent, which is even worse.
Oh, I guess I wasn't clear in my message - in no way am I proposing that we use this workaround code - I was just showing what I had to do to make it work.
We are on the same page.
> The changes you are proposing Stas are making the code less readable and also concatenate all the predictions and labels number_of_processes times I believe, which is not going to make the metric computation any faster.
And yes, this is another problem that my workaround introduces. Thank you for pointing it out, @sgugger
|
CONTRIBUTOR
|
https://api.github.com/repos/huggingface/datasets/issues/comments/787206909/reactions
| 2
| 2
| 0
| 0
| 0
| 0
| 0
| 0
| 0
| null |
|
https://api.github.com/repos/huggingface/datasets/issues/comments/787220943
|
https://github.com/huggingface/datasets/issues/1959#issuecomment-787220943
|
https://api.github.com/repos/huggingface/datasets/issues/1959
| 787,220,943
|
MDEyOklzc3VlQ29tbWVudDc4NzIyMDk0Mw==
|
mariosasko
| 47,462,742
|
MDQ6VXNlcjQ3NDYyNzQy
|
https://avatars.githubusercontent.com/u/47462742?v=4
|
https://api.github.com/users/mariosasko
|
https://github.com/mariosasko
|
https://api.github.com/users/mariosasko/followers
|
https://api.github.com/users/mariosasko/following{/other_user}
|
https://api.github.com/users/mariosasko/gists{/gist_id}
|
https://api.github.com/users/mariosasko/starred{/owner}{/repo}
|
https://api.github.com/users/mariosasko/subscriptions
|
https://api.github.com/users/mariosasko/orgs
|
https://api.github.com/users/mariosasko/repos
|
https://api.github.com/users/mariosasko/events{/privacy}
|
https://api.github.com/users/mariosasko/received_events
|
User
|
public
| false
| 2021-02-28T02:17:31
| 2021-02-28T02:48:56
|
Hi,
try `skiprows` instead. It seems this part is not properly documented in the docs.
@lhoestq I'll fix this as part of a bigger PR that fixes typos in the docs.
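For illustration, the effect of `skiprows` can be mimicked with the standard library. This mirrors the pandas-style `read_csv` parameter that the csv loader forwards (the exact forwarding behaviour is an assumption here, based on the comment above):

```python
import csv
import io

def read_csv_skiprows(text: str, skiprows: int):
    # Drop the first `skiprows` lines before parsing, which is what the
    # pandas-style `skiprows` parameter does for a plain integer value.
    remaining = text.splitlines()[skiprows:]
    return list(csv.reader(io.StringIO("\n".join(remaining))))

sample = "junk line to skip\ncol_a,col_b\n1,2\n"
rows = read_csv_skiprows(sample, skiprows=1)
assert rows[0] == ["col_a", "col_b"]
assert rows[1] == ["1", "2"]
```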
|
COLLABORATOR
|
https://api.github.com/repos/huggingface/datasets/issues/comments/787220943/reactions
| 0
| 0
| 0
| 0
| 0
| 0
| 0
| 0
| 0
| null |
|
https://api.github.com/repos/huggingface/datasets/issues/comments/787474997
|
https://github.com/huggingface/datasets/pull/1951#issuecomment-787474997
|
https://api.github.com/repos/huggingface/datasets/issues/1951
| 787,474,997
|
MDEyOklzc3VlQ29tbWVudDc4NzQ3NDk5Nw==
|
gchhablani
| 29,076,344
|
MDQ6VXNlcjI5MDc2MzQ0
|
https://avatars.githubusercontent.com/u/29076344?v=4
|
https://api.github.com/users/gchhablani
|
https://github.com/gchhablani
|
https://api.github.com/users/gchhablani/followers
|
https://api.github.com/users/gchhablani/following{/other_user}
|
https://api.github.com/users/gchhablani/gists{/gist_id}
|
https://api.github.com/users/gchhablani/starred{/owner}{/repo}
|
https://api.github.com/users/gchhablani/subscriptions
|
https://api.github.com/users/gchhablani/orgs
|
https://api.github.com/users/gchhablani/repos
|
https://api.github.com/users/gchhablani/events{/privacy}
|
https://api.github.com/users/gchhablani/received_events
|
User
|
public
| false
| 2021-02-28T16:03:55
| 2021-02-28T16:03:55
|
@mariosasko This is kinda cool!
|
CONTRIBUTOR
|
https://api.github.com/repos/huggingface/datasets/issues/comments/787474997/reactions
| 0
| 0
| 0
| 0
| 0
| 0
| 0
| 0
| 0
| null |
|
https://api.github.com/repos/huggingface/datasets/issues/comments/787810655
|
https://github.com/huggingface/datasets/issues/1954#issuecomment-787810655
|
https://api.github.com/repos/huggingface/datasets/issues/1954
| 787,810,655
|
MDEyOklzc3VlQ29tbWVudDc4NzgxMDY1NQ==
|
lhoestq
| 42,851,186
|
MDQ6VXNlcjQyODUxMTg2
|
https://avatars.githubusercontent.com/u/42851186?v=4
|
https://api.github.com/users/lhoestq
|
https://github.com/lhoestq
|
https://api.github.com/users/lhoestq/followers
|
https://api.github.com/users/lhoestq/following{/other_user}
|
https://api.github.com/users/lhoestq/gists{/gist_id}
|
https://api.github.com/users/lhoestq/starred{/owner}{/repo}
|
https://api.github.com/users/lhoestq/subscriptions
|
https://api.github.com/users/lhoestq/orgs
|
https://api.github.com/users/lhoestq/repos
|
https://api.github.com/users/lhoestq/events{/privacy}
|
https://api.github.com/users/lhoestq/received_events
|
User
|
public
| false
| 2021-03-01T09:42:16
| 2021-03-01T09:42:16
|
Hi! Currently you have to use `map`. You can see an example of how to do it in this comment: https://github.com/huggingface/datasets/issues/853#issuecomment-727872188
In the future we'll add support for a more native way of adding a new column ;)
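For readers landing here, a minimal sketch of the `map` workaround; the column name and values are made up for illustration:

```python
# Precomputed values, one per example in the dataset.
new_column = ["foo", "bar", "baz"]

def add_column(example, idx):
    # `with_indices=True` passes the example index, so we can look up
    # the precomputed value for each row.
    example["new_col"] = new_column[idx]
    return example

# With a datasets.Dataset this would be:
# dataset = dataset.map(add_column, with_indices=True)

# The mapping function itself is plain Python and easy to check:
assert add_column({"text": "hello"}, 0) == {"text": "hello", "new_col": "foo"}
```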
|
MEMBER
|
https://api.github.com/repos/huggingface/datasets/issues/comments/787810655/reactions
| 0
| 0
| 0
| 0
| 0
| 0
| 0
| 0
| 0
| null |
|
https://api.github.com/repos/huggingface/datasets/issues/comments/787864457
|
https://github.com/huggingface/datasets/issues/1942#issuecomment-787864457
|
https://api.github.com/repos/huggingface/datasets/issues/1942
| 787,864,457
|
MDEyOklzc3VlQ29tbWVudDc4Nzg2NDQ1Nw==
|
lhoestq
| 42,851,186
|
MDQ6VXNlcjQyODUxMTg2
|
https://avatars.githubusercontent.com/u/42851186?v=4
|
https://api.github.com/users/lhoestq
|
https://github.com/lhoestq
|
https://api.github.com/users/lhoestq/followers
|
https://api.github.com/users/lhoestq/following{/other_user}
|
https://api.github.com/users/lhoestq/gists{/gist_id}
|
https://api.github.com/users/lhoestq/starred{/owner}{/repo}
|
https://api.github.com/users/lhoestq/subscriptions
|
https://api.github.com/users/lhoestq/orgs
|
https://api.github.com/users/lhoestq/repos
|
https://api.github.com/users/lhoestq/events{/privacy}
|
https://api.github.com/users/lhoestq/received_events
|
User
|
public
| false
| 2021-03-01T11:07:59
| 2021-03-01T11:07:59
|
> The fact a datasets.Metric object cannot be used as a simple compute function in a multi-process environment is, in my opinion, a bug in datasets
Yes totally, this use case is supposed to be supported by `datasets`. And in this case there shouldn't be any collision between the metrics. I'm looking into it :)
My guess is that at one point the metric isn't using the right file name. It's supposed to use one with a unique uuid in order to avoid the collisions.
|
MEMBER
|
https://api.github.com/repos/huggingface/datasets/issues/comments/787864457/reactions
| 3
| 1
| 0
| 0
| 0
| 0
| 2
| 0
| 0
| null |
|
https://api.github.com/repos/huggingface/datasets/issues/comments/787867739
|
https://github.com/huggingface/datasets/issues/1956#issuecomment-787867739
|
https://api.github.com/repos/huggingface/datasets/issues/1956
| 787,867,739
|
MDEyOklzc3VlQ29tbWVudDc4Nzg2NzczOQ==
|
lhoestq
| 42,851,186
|
MDQ6VXNlcjQyODUxMTg2
|
https://avatars.githubusercontent.com/u/42851186?v=4
|
https://api.github.com/users/lhoestq
|
https://github.com/lhoestq
|
https://api.github.com/users/lhoestq/followers
|
https://api.github.com/users/lhoestq/following{/other_user}
|
https://api.github.com/users/lhoestq/gists{/gist_id}
|
https://api.github.com/users/lhoestq/starred{/owner}{/repo}
|
https://api.github.com/users/lhoestq/subscriptions
|
https://api.github.com/users/lhoestq/orgs
|
https://api.github.com/users/lhoestq/repos
|
https://api.github.com/users/lhoestq/events{/privacy}
|
https://api.github.com/users/lhoestq/received_events
|
User
|
public
| false
| 2021-03-01T11:13:19
| 2021-03-01T11:13:19
|
You can pass the same `experiment_id` for all the metrics of the same group, and use another `experiment_id` for the other groups.
Maybe we can add an environment variable that sets the default value for `experiment_id`? What do you think?
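A sketch of what such a default could look like; the variable name `HF_METRICS_EXPERIMENT_ID` is purely illustrative, not an existing `datasets` feature:

```python
import os

def default_experiment_id() -> str:
    # Fall back to the library's usual default when the variable is unset
    # (hypothetical variable name, for illustration only).
    return os.environ.get("HF_METRICS_EXPERIMENT_ID", "default_experiment")

os.environ["HF_METRICS_EXPERIMENT_ID"] = "rouge-group-1"
assert default_experiment_id() == "rouge-group-1"

del os.environ["HF_METRICS_EXPERIMENT_ID"]
assert default_experiment_id() == "default_experiment"
```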
|
MEMBER
|
https://api.github.com/repos/huggingface/datasets/issues/comments/787867739/reactions
| 1
| 1
| 0
| 0
| 0
| 0
| 0
| 0
| 0
| null |
|
https://api.github.com/repos/huggingface/datasets/issues/comments/787891496
|
https://github.com/huggingface/datasets/issues/1964#issuecomment-787891496
|
https://api.github.com/repos/huggingface/datasets/issues/1964
| 787,891,496
|
MDEyOklzc3VlQ29tbWVudDc4Nzg5MTQ5Ng==
|
lhoestq
| 42,851,186
|
MDQ6VXNlcjQyODUxMTg2
|
https://avatars.githubusercontent.com/u/42851186?v=4
|
https://api.github.com/users/lhoestq
|
https://github.com/lhoestq
|
https://api.github.com/users/lhoestq/followers
|
https://api.github.com/users/lhoestq/following{/other_user}
|
https://api.github.com/users/lhoestq/gists{/gist_id}
|
https://api.github.com/users/lhoestq/starred{/owner}{/repo}
|
https://api.github.com/users/lhoestq/subscriptions
|
https://api.github.com/users/lhoestq/orgs
|
https://api.github.com/users/lhoestq/repos
|
https://api.github.com/users/lhoestq/events{/privacy}
|
https://api.github.com/users/lhoestq/received_events
|
User
|
public
| false
| 2021-03-01T11:55:07
| 2021-03-01T11:55:07
|
Hi!
To fix 1, can you try to run this code?
```python
from datasets import load_dataset
load_dataset("squad", download_mode="force_redownload")
```
Maybe the file you downloaded was corrupted; in this case, redownloading it this way should fix your issue 1.
Regarding your 2nd point, you're right that loading the raw json this way doesn't give you a dataset with the columns "context", "question" and "answers". Indeed, the squad format is very nested, so you have to preprocess the data. You can do it this way:
```python
def process_squad(examples):
    """
    Process a dataset in the squad format with columns "title" and "paragraphs"
    to return the dataset with columns "context", "question" and "answers".
    """
    out = {"context": [], "question": [], "answers": []}
    for paragraphs in examples["paragraphs"]:
        for paragraph in paragraphs:
            for qa in paragraph["qas"]:
                answers = [{"answer_start": answer["answer_start"], "text": answer["text"].strip()} for answer in qa["answers"]]
                out["context"].append(paragraph["context"].strip())
                out["question"].append(qa["question"].strip())
                out["answers"].append(answers)
    return out

datasets = load_dataset(extension, data_files=data_files, field="data")
column_names = datasets["train"].column_names
if set(column_names) == {"title", "paragraphs"}:
    datasets = datasets.map(process_squad, batched=True, remove_columns=column_names)
```
Hope that helps :)
|
MEMBER
|
https://api.github.com/repos/huggingface/datasets/issues/comments/787891496/reactions
| 2
| 2
| 0
| 0
| 0
| 0
| 0
| 0
| 0
| null |
|
https://api.github.com/repos/huggingface/datasets/issues/comments/787892288
|
https://github.com/huggingface/datasets/pull/1823#issuecomment-787892288
|
https://api.github.com/repos/huggingface/datasets/issues/1823
| 787,892,288
|
MDEyOklzc3VlQ29tbWVudDc4Nzg5MjI4OA==
|
gchhablani
| 29,076,344
|
MDQ6VXNlcjI5MDc2MzQ0
|
https://avatars.githubusercontent.com/u/29076344?v=4
|
https://api.github.com/users/gchhablani
|
https://github.com/gchhablani
|
https://api.github.com/users/gchhablani/followers
|
https://api.github.com/users/gchhablani/following{/other_user}
|
https://api.github.com/users/gchhablani/gists{/gist_id}
|
https://api.github.com/users/gchhablani/starred{/owner}{/repo}
|
https://api.github.com/users/gchhablani/subscriptions
|
https://api.github.com/users/gchhablani/orgs
|
https://api.github.com/users/gchhablani/repos
|
https://api.github.com/users/gchhablani/events{/privacy}
|
https://api.github.com/users/gchhablani/received_events
|
User
|
public
| false
| 2021-03-01T11:56:20
| 2021-03-01T11:56:20
|
Hi @lhoestq,
Thanks for fixing it and approving :)
|
CONTRIBUTOR
|
https://api.github.com/repos/huggingface/datasets/issues/comments/787892288/reactions
| 0
| 0
| 0
| 0
| 0
| 0
| 0
| 0
| 0
| null |
|
https://api.github.com/repos/huggingface/datasets/issues/comments/787893642
|
https://github.com/huggingface/datasets/pull/1815#issuecomment-787893642
|
https://api.github.com/repos/huggingface/datasets/issues/1815
| 787,893,642
|
MDEyOklzc3VlQ29tbWVudDc4Nzg5MzY0Mg==
|
gchhablani
| 29,076,344
|
MDQ6VXNlcjI5MDc2MzQ0
|
https://avatars.githubusercontent.com/u/29076344?v=4
|
https://api.github.com/users/gchhablani
|
https://github.com/gchhablani
|
https://api.github.com/users/gchhablani/followers
|
https://api.github.com/users/gchhablani/following{/other_user}
|
https://api.github.com/users/gchhablani/gists{/gist_id}
|
https://api.github.com/users/gchhablani/starred{/owner}{/repo}
|
https://api.github.com/users/gchhablani/subscriptions
|
https://api.github.com/users/gchhablani/orgs
|
https://api.github.com/users/gchhablani/repos
|
https://api.github.com/users/gchhablani/events{/privacy}
|
https://api.github.com/users/gchhablani/received_events
|
User
|
public
| false
| 2021-03-01T11:58:32
| 2021-03-01T12:33:03
|
Hi @lhoestq,
Thanks for approving.
|
CONTRIBUTOR
|
https://api.github.com/repos/huggingface/datasets/issues/comments/787893642/reactions
| 0
| 0
| 0
| 0
| 0
| 0
| 0
| 0
| 0
| null |
|
https://api.github.com/repos/huggingface/datasets/issues/comments/787895590
|
https://github.com/huggingface/datasets/issues/1963#issuecomment-787895590
|
https://api.github.com/repos/huggingface/datasets/issues/1963
| 787,895,590
|
MDEyOklzc3VlQ29tbWVudDc4Nzg5NTU5MA==
|
lhoestq
| 42,851,186
|
MDQ6VXNlcjQyODUxMTg2
|
https://avatars.githubusercontent.com/u/42851186?v=4
|
https://api.github.com/users/lhoestq
|
https://github.com/lhoestq
|
https://api.github.com/users/lhoestq/followers
|
https://api.github.com/users/lhoestq/following{/other_user}
|
https://api.github.com/users/lhoestq/gists{/gist_id}
|
https://api.github.com/users/lhoestq/starred{/owner}{/repo}
|
https://api.github.com/users/lhoestq/subscriptions
|
https://api.github.com/users/lhoestq/orgs
|
https://api.github.com/users/lhoestq/repos
|
https://api.github.com/users/lhoestq/events{/privacy}
|
https://api.github.com/users/lhoestq/received_events
|
User
|
public
| false
| 2021-03-01T12:01:29
| 2021-03-01T12:01:29
|
Hi! The label -1 corresponds to examples without a gold label in the original snli dataset.
Feel free to remove these examples if you don't need them by using
```python
data = data.filter(lambda x: x["label"] != -1)
```
|
MEMBER
|
https://api.github.com/repos/huggingface/datasets/issues/comments/787895590/reactions
| 0
| 0
| 0
| 0
| 0
| 0
| 0
| 0
| 0
| null |
|
https://api.github.com/repos/huggingface/datasets/issues/comments/787973420
|
https://github.com/huggingface/datasets/issues/1867#issuecomment-787973420
|
https://api.github.com/repos/huggingface/datasets/issues/1867
| 787,973,420
|
MDEyOklzc3VlQ29tbWVudDc4Nzk3MzQyMA==
|
gaceladri
| 7,850,682
|
MDQ6VXNlcjc4NTA2ODI=
|
https://avatars.githubusercontent.com/u/7850682?v=4
|
https://api.github.com/users/gaceladri
|
https://github.com/gaceladri
|
https://api.github.com/users/gaceladri/followers
|
https://api.github.com/users/gaceladri/following{/other_user}
|
https://api.github.com/users/gaceladri/gists{/gist_id}
|
https://api.github.com/users/gaceladri/starred{/owner}{/repo}
|
https://api.github.com/users/gaceladri/subscriptions
|
https://api.github.com/users/gaceladri/orgs
|
https://api.github.com/users/gaceladri/repos
|
https://api.github.com/users/gaceladri/events{/privacy}
|
https://api.github.com/users/gaceladri/received_events
|
User
|
public
| false
| 2021-03-01T14:04:24
| 2021-03-01T14:04:24
|
Great comment @alexvaca0. I think we could re-open the issue as a reformulation of why saving the arrow files takes so much space. Saving 1% of the oscar corpus takes more than 600 GB (it breaks when it passes 600 GB, because that is the free space I have at the moment), while the full dataset is 1.3 TB. I have a 1 TB M.2 NVMe disk that I cannot train on because the saved .arrow files grow so large. If you can share your collator I will be grateful.
|
NONE
|
https://api.github.com/repos/huggingface/datasets/issues/comments/787973420/reactions
| 1
| 0
| 0
| 0
| 0
| 0
| 0
| 0
| 1
| null |
|
https://api.github.com/repos/huggingface/datasets/issues/comments/787991054
|
https://github.com/huggingface/datasets/pull/1952#issuecomment-787991054
|
https://api.github.com/repos/huggingface/datasets/issues/1952
| 787,991,054
|
MDEyOklzc3VlQ29tbWVudDc4Nzk5MTA1NA==
|
lhoestq
| 42,851,186
|
MDQ6VXNlcjQyODUxMTg2
|
https://avatars.githubusercontent.com/u/42851186?v=4
|
https://api.github.com/users/lhoestq
|
https://github.com/lhoestq
|
https://api.github.com/users/lhoestq/followers
|
https://api.github.com/users/lhoestq/following{/other_user}
|
https://api.github.com/users/lhoestq/gists{/gist_id}
|
https://api.github.com/users/lhoestq/starred{/owner}{/repo}
|
https://api.github.com/users/lhoestq/subscriptions
|
https://api.github.com/users/lhoestq/orgs
|
https://api.github.com/users/lhoestq/repos
|
https://api.github.com/users/lhoestq/events{/privacy}
|
https://api.github.com/users/lhoestq/received_events
|
User
|
public
| false
| 2021-03-01T14:29:21
| 2021-03-01T14:29:21
|
Merging this one, then I'll open a new PR for the `DATASETS_OFFLINE` env var :)
|
MEMBER
|
https://api.github.com/repos/huggingface/datasets/issues/comments/787991054/reactions
| 1
| 0
| 0
| 0
| 0
| 0
| 1
| 0
| 0
| null |
|
https://api.github.com/repos/huggingface/datasets/issues/comments/788126061
|
https://github.com/huggingface/datasets/issues/1956#issuecomment-788126061
|
https://api.github.com/repos/huggingface/datasets/issues/1956
| 788,126,061
|
MDEyOklzc3VlQ29tbWVudDc4ODEyNjA2MQ==
|
stas00
| 10,676,103
|
MDQ6VXNlcjEwNjc2MTAz
|
https://avatars.githubusercontent.com/u/10676103?v=4
|
https://api.github.com/users/stas00
|
https://github.com/stas00
|
https://api.github.com/users/stas00/followers
|
https://api.github.com/users/stas00/following{/other_user}
|
https://api.github.com/users/stas00/gists{/gist_id}
|
https://api.github.com/users/stas00/starred{/owner}{/repo}
|
https://api.github.com/users/stas00/subscriptions
|
https://api.github.com/users/stas00/orgs
|
https://api.github.com/users/stas00/repos
|
https://api.github.com/users/stas00/events{/privacy}
|
https://api.github.com/users/stas00/received_events
|
User
|
public
| false
| 2021-03-01T17:24:41
| 2021-03-01T17:24:41
|
Ah, you're absolutely correct, @lhoestq - it's exactly the equivalent of the shared secret. Thank you!
|
CONTRIBUTOR
|
https://api.github.com/repos/huggingface/datasets/issues/comments/788126061/reactions
| 0
| 0
| 0
| 0
| 0
| 0
| 0
| 0
| 0
| null |
|
https://api.github.com/repos/huggingface/datasets/issues/comments/788141070
|
https://github.com/huggingface/datasets/issues/1942#issuecomment-788141070
|
https://api.github.com/repos/huggingface/datasets/issues/1942
| 788,141,070
|
MDEyOklzc3VlQ29tbWVudDc4ODE0MTA3MA==
|
lhoestq
| 42,851,186
|
MDQ6VXNlcjQyODUxMTg2
|
https://avatars.githubusercontent.com/u/42851186?v=4
|
https://api.github.com/users/lhoestq
|
https://github.com/lhoestq
|
https://api.github.com/users/lhoestq/followers
|
https://api.github.com/users/lhoestq/following{/other_user}
|
https://api.github.com/users/lhoestq/gists{/gist_id}
|
https://api.github.com/users/lhoestq/starred{/owner}{/repo}
|
https://api.github.com/users/lhoestq/subscriptions
|
https://api.github.com/users/lhoestq/orgs
|
https://api.github.com/users/lhoestq/repos
|
https://api.github.com/users/lhoestq/events{/privacy}
|
https://api.github.com/users/lhoestq/received_events
|
User
|
public
| false
| 2021-03-01T17:46:24
| 2021-03-01T17:46:24
|
I just opened #1966 to fix this :)
@stas00 if you have a chance, feel free to try it!
|
MEMBER
|
https://api.github.com/repos/huggingface/datasets/issues/comments/788141070/reactions
| 3
| 1
| 0
| 0
| 2
| 0
| 0
| 0
| 0
| null |
|
https://api.github.com/repos/huggingface/datasets/issues/comments/788166280
|
https://github.com/huggingface/datasets/pull/1944#issuecomment-788166280
|
https://api.github.com/repos/huggingface/datasets/issues/1944
| 788,166,280
|
MDEyOklzc3VlQ29tbWVudDc4ODE2NjI4MA==
|
yavuzKomecoglu
| 5,150,963
|
MDQ6VXNlcjUxNTA5NjM=
|
https://avatars.githubusercontent.com/u/5150963?v=4
|
https://api.github.com/users/yavuzKomecoglu
|
https://github.com/yavuzKomecoglu
|
https://api.github.com/users/yavuzKomecoglu/followers
|
https://api.github.com/users/yavuzKomecoglu/following{/other_user}
|
https://api.github.com/users/yavuzKomecoglu/gists{/gist_id}
|
https://api.github.com/users/yavuzKomecoglu/starred{/owner}{/repo}
|
https://api.github.com/users/yavuzKomecoglu/subscriptions
|
https://api.github.com/users/yavuzKomecoglu/orgs
|
https://api.github.com/users/yavuzKomecoglu/repos
|
https://api.github.com/users/yavuzKomecoglu/events{/privacy}
|
https://api.github.com/users/yavuzKomecoglu/received_events
|
User
|
public
| false
| 2021-03-01T18:23:21
| 2021-03-01T18:23:21
|
> Thanks for changing to ClassLabel :)
> This is all good now !
>
> However, I can see changes in files other than the ones for interpress_news_category_tr_lite; can you please fix that?
> To do so, you can create another branch and another PR to only include the interpress_news_category_tr_lite files.
>
> Maybe this happened because of a git rebase? Once you've already pushed your code, please use git merge instead of rebase in order to avoid this.
Thanks for the feedback.
New PR https://github.com/huggingface/datasets/pull/1967
|
CONTRIBUTOR
|
https://api.github.com/repos/huggingface/datasets/issues/comments/788166280/reactions
| 0
| 0
| 0
| 0
| 0
| 0
| 0
| 0
| 0
| null |
|
https://api.github.com/repos/huggingface/datasets/issues/comments/788166945
|
https://github.com/huggingface/datasets/issues/1942#issuecomment-788166945
|
https://api.github.com/repos/huggingface/datasets/issues/1942
| 788,166,945
|
MDEyOklzc3VlQ29tbWVudDc4ODE2Njk0NQ==
|
stas00
| 10,676,103
|
MDQ6VXNlcjEwNjc2MTAz
|
https://avatars.githubusercontent.com/u/10676103?v=4
|
https://api.github.com/users/stas00
|
https://github.com/stas00
|
https://api.github.com/users/stas00/followers
|
https://api.github.com/users/stas00/following{/other_user}
|
https://api.github.com/users/stas00/gists{/gist_id}
|
https://api.github.com/users/stas00/starred{/owner}{/repo}
|
https://api.github.com/users/stas00/subscriptions
|
https://api.github.com/users/stas00/orgs
|
https://api.github.com/users/stas00/repos
|
https://api.github.com/users/stas00/events{/privacy}
|
https://api.github.com/users/stas00/received_events
|
User
|
public
| false
| 2021-03-01T18:24:20
| 2021-03-01T18:33:31
|
Thank you, @lhoestq - I will experiment and report back.
edit: It works! Thank you
|
CONTRIBUTOR
|
https://api.github.com/repos/huggingface/datasets/issues/comments/788166945/reactions
| 1
| 1
| 0
| 0
| 0
| 0
| 0
| 0
| 0
| null |
|
https://api.github.com/repos/huggingface/datasets/issues/comments/788171449
|
https://github.com/huggingface/datasets/pull/1966#issuecomment-788171449
|
https://api.github.com/repos/huggingface/datasets/issues/1966
| 788,171,449
|
MDEyOklzc3VlQ29tbWVudDc4ODE3MTQ0OQ==
|
stas00
| 10,676,103
|
MDQ6VXNlcjEwNjc2MTAz
|
https://avatars.githubusercontent.com/u/10676103?v=4
|
https://api.github.com/users/stas00
|
https://github.com/stas00
|
https://api.github.com/users/stas00/followers
|
https://api.github.com/users/stas00/following{/other_user}
|
https://api.github.com/users/stas00/gists{/gist_id}
|
https://api.github.com/users/stas00/starred{/owner}{/repo}
|
https://api.github.com/users/stas00/subscriptions
|
https://api.github.com/users/stas00/orgs
|
https://api.github.com/users/stas00/repos
|
https://api.github.com/users/stas00/events{/privacy}
|
https://api.github.com/users/stas00/received_events
|
User
|
public
| false
| 2021-03-01T18:30:53
| 2021-03-01T18:32:29
|
Since the failure was originally intermittent, there is no way to tell with 100% certainty that the problem is gone.
But if my artificial race condition setup https://github.com/huggingface/datasets/issues/1942#issuecomment-787124529 is to be the litmus test then the problem has been fixed, as with this PR branch that particular race condition is taken care of correctly.
Thank you for taking care of this, @lhoestq - locking can be very tricky to do right!
|
CONTRIBUTOR
|
https://api.github.com/repos/huggingface/datasets/issues/comments/788171449/reactions
| 1
| 1
| 0
| 0
| 0
| 0
| 0
| 0
| 0
| null |
|
https://api.github.com/repos/huggingface/datasets/issues/comments/788592600
|
https://github.com/huggingface/datasets/issues/1964#issuecomment-788592600
|
https://api.github.com/repos/huggingface/datasets/issues/1964
| 788,592,600
|
MDEyOklzc3VlQ29tbWVudDc4ODU5MjYwMA==
|
LeopoldACC
| 44,536,699
|
MDQ6VXNlcjQ0NTM2Njk5
|
https://avatars.githubusercontent.com/u/44536699?v=4
|
https://api.github.com/users/LeopoldACC
|
https://github.com/LeopoldACC
|
https://api.github.com/users/LeopoldACC/followers
|
https://api.github.com/users/LeopoldACC/following{/other_user}
|
https://api.github.com/users/LeopoldACC/gists{/gist_id}
|
https://api.github.com/users/LeopoldACC/starred{/owner}{/repo}
|
https://api.github.com/users/LeopoldACC/subscriptions
|
https://api.github.com/users/LeopoldACC/orgs
|
https://api.github.com/users/LeopoldACC/repos
|
https://api.github.com/users/LeopoldACC/events{/privacy}
|
https://api.github.com/users/LeopoldACC/received_events
|
User
|
public
| false
| 2021-03-02T05:12:37
| 2021-03-02T05:12:37
|
Thanks for the quick answer!
### 1 I tried the first way, but it doesn't seem to work
```
Traceback (most recent call last):
  File "examples/question-answering/run_qa.py", line 503, in <module>
    main()
  File "examples/question-answering/run_qa.py", line 218, in main
    datasets = load_dataset(data_args.dataset_name, download_mode="force_redownload")
  File "/home2/zhenggo1/anaconda3/envs/lpot/lib/python3.7/site-packages/datasets/load.py", line 746, in load_dataset
    use_auth_token=use_auth_token,
  File "/home2/zhenggo1/anaconda3/envs/lpot/lib/python3.7/site-packages/datasets/builder.py", line 573, in download_and_prepare
    dl_manager=dl_manager, verify_infos=verify_infos, **download_and_prepare_kwargs
  File "/home2/zhenggo1/anaconda3/envs/lpot/lib/python3.7/site-packages/datasets/builder.py", line 633, in _download_and_prepare
    self.info.download_checksums, dl_manager.get_recorded_sizes_checksums(), "dataset source files"
  File "/home2/zhenggo1/anaconda3/envs/lpot/lib/python3.7/site-packages/datasets/utils/info_utils.py", line 39, in verify_checksums
    raise NonMatchingChecksumError(error_msg + str(bad_urls))
datasets.utils.info_utils.NonMatchingChecksumError: Checksums didn't match for dataset source files:
['https://rajpurkar.github.io/SQuAD-explorer/dataset/train-v1.1.json']
```
### 2 I tried the second way and ran examples/question-answering/run_qa.py, but it led to another bug
```
Traceback (most recent call last):
  File "examples/question-answering/run_qa.py", line 523, in <module>
    main()
  File "examples/question-answering/run_qa.py", line 379, in main
    load_from_cache_file=not data_args.overwrite_cache,
  File "/home2/zhenggo1/anaconda3/envs/lpot/lib/python3.7/site-packages/datasets/arrow_dataset.py", line 1120, in map
    update_data = does_function_return_dict(test_inputs, test_indices)
  File "/home2/zhenggo1/anaconda3/envs/lpot/lib/python3.7/site-packages/datasets/arrow_dataset.py", line 1091, in does_function_return_dict
    function(*fn_args, indices, **fn_kwargs) if with_indices else function(*fn_args, **fn_kwargs)
  File "examples/question-answering/run_qa.py", line 339, in prepare_train_features
    if len(answers["answer_start"]) == 0:
TypeError: list indices must be integers or slices, not str
```
## Maybe the function prepare_train_features in run_qa.py needs a fix; I think the problem is in this preprocessing code:
```python
for i, offsets in enumerate(offset_mapping):
    # We will label impossible answers with the index of the CLS token.
    input_ids = tokenized_examples["input_ids"][i]
    cls_index = input_ids.index(tokenizer.cls_token_id)

    # Grab the sequence corresponding to that example (to know what is the context and what is the question).
    sequence_ids = tokenized_examples.sequence_ids(i)

    # One example can give several spans, this is the index of the example containing this span of text.
    sample_index = sample_mapping[i]
    answers = examples[answer_column_name][sample_index]
    print(examples, answers)
    # If no answers are given, set the cls_index as answer.
    if len(answers["answer_start"]) == 0:
        tokenized_examples["start_positions"].append(cls_index)
        tokenized_examples["end_positions"].append(cls_index)
    else:
        # Start/end character index of the answer in the text.
        start_char = answers["answer_start"][0]
        end_char = start_char + len(answers["text"][0])

        # Start token index of the current span in the text.
        token_start_index = 0
        while sequence_ids[token_start_index] != (1 if pad_on_right else 0):
            token_start_index += 1

        # End token index of the current span in the text.
        token_end_index = len(input_ids) - 1
        while sequence_ids[token_end_index] != (1 if pad_on_right else 0):
            token_end_index -= 1

        # Detect if the answer is out of the span (in which case this feature is labeled with the CLS index).
        if not (offsets[token_start_index][0] <= start_char and offsets[token_end_index][1] >= end_char):
            tokenized_examples["start_positions"].append(cls_index)
            tokenized_examples["end_positions"].append(cls_index)
        else:
            # Otherwise move the token_start_index and token_end_index to the two ends of the answer.
            # Note: we could go after the last offset if the answer is the last word (edge case).
            while token_start_index < len(offsets) and offsets[token_start_index][0] <= start_char:
                token_start_index += 1
            tokenized_examples["start_positions"].append(token_start_index - 1)
            while offsets[token_end_index][1] >= end_char:
                token_end_index -= 1
            tokenized_examples["end_positions"].append(token_end_index + 1)

return tokenized_examples
```
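For what it's worth, the `TypeError` above most likely comes from the `answers` field arriving as a plain list of dicts rather than the dict of lists that run_qa.py expects (this is consistent with the fix posted later in this thread). A minimal stand-alone reproduction, with illustrative data:

```python
# run_qa.py expects SQuAD answers as a dict of lists:
answers_as_dict = {"answer_start": [1], "text": ["span"]}
# ...but a custom SQuAD-format dataset can yield a list of dicts instead:
answers_as_list = [{"answer_start": 1, "text": "span"}]

print(len(answers_as_dict["answer_start"]))  # 1 -- works as run_qa.py assumes

try:
    answers_as_list["answer_start"]  # indexing a list with a string key
except TypeError as err:
    print(err)  # list indices must be integers or slices, not str
```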
|
NONE
|
https://api.github.com/repos/huggingface/datasets/issues/comments/788592600/reactions
| 0
| 0
| 0
| 0
| 0
| 0
| 0
| 0
| 0
| null |
|
https://api.github.com/repos/huggingface/datasets/issues/comments/788889763
|
https://github.com/huggingface/datasets/issues/1965#issuecomment-788889763
|
https://api.github.com/repos/huggingface/datasets/issues/1965
| 788,889,763
|
MDEyOklzc3VlQ29tbWVudDc4ODg4OTc2Mw==
|
lhoestq
| 42,851,186
|
MDQ6VXNlcjQyODUxMTg2
|
https://avatars.githubusercontent.com/u/42851186?v=4
|
https://api.github.com/users/lhoestq
|
https://github.com/lhoestq
|
https://api.github.com/users/lhoestq/followers
|
https://api.github.com/users/lhoestq/following{/other_user}
|
https://api.github.com/users/lhoestq/gists{/gist_id}
|
https://api.github.com/users/lhoestq/starred{/owner}{/repo}
|
https://api.github.com/users/lhoestq/subscriptions
|
https://api.github.com/users/lhoestq/orgs
|
https://api.github.com/users/lhoestq/repos
|
https://api.github.com/users/lhoestq/events{/privacy}
|
https://api.github.com/users/lhoestq/received_events
|
User
|
public
| false
| 2021-03-02T12:58:04
| 2021-03-02T12:58:04
|
Hi !
As far as I know not all faiss indexes can be computed in parallel and then merged.
For example [here](https://github.com/facebookresearch/faiss/wiki/Special-operations-on-indexes#splitting-and-merging-indexes) it is mentioned that only IndexIVF indexes can be merged.
Moreover faiss already works using multithreading to parallelize the workload over your different CPU cores. You can find more info [here](https://github.com/facebookresearch/faiss/wiki/Threads-and-asynchronous-calls#internal-threading)
So I feel like the gains we would get by implementing a parallel `add_faiss_index` would not be that important, but let me know what you think.
|
MEMBER
|
https://api.github.com/repos/huggingface/datasets/issues/comments/788889763/reactions
| 0
| 0
| 0
| 0
| 0
| 0
| 0
| 0
| 0
| null |
|
https://api.github.com/repos/huggingface/datasets/issues/comments/788896155
|
https://github.com/huggingface/datasets/issues/1972#issuecomment-788896155
|
https://api.github.com/repos/huggingface/datasets/issues/1972
| 788,896,155
|
MDEyOklzc3VlQ29tbWVudDc4ODg5NjE1NQ==
|
lhoestq
| 42,851,186
|
MDQ6VXNlcjQyODUxMTg2
|
https://avatars.githubusercontent.com/u/42851186?v=4
|
https://api.github.com/users/lhoestq
|
https://github.com/lhoestq
|
https://api.github.com/users/lhoestq/followers
|
https://api.github.com/users/lhoestq/following{/other_user}
|
https://api.github.com/users/lhoestq/gists{/gist_id}
|
https://api.github.com/users/lhoestq/starred{/owner}{/repo}
|
https://api.github.com/users/lhoestq/subscriptions
|
https://api.github.com/users/lhoestq/orgs
|
https://api.github.com/users/lhoestq/repos
|
https://api.github.com/users/lhoestq/events{/privacy}
|
https://api.github.com/users/lhoestq/received_events
|
User
|
public
| false
| 2021-03-02T13:08:03
| 2021-03-02T13:08:03
|
Hi ! `rename_column` has been added recently and will be available in the next release
|
MEMBER
|
https://api.github.com/repos/huggingface/datasets/issues/comments/788896155/reactions
| 0
| 0
| 0
| 0
| 0
| 0
| 0
| 0
| 0
| null |
|
https://api.github.com/repos/huggingface/datasets/issues/comments/788954800
|
https://github.com/huggingface/datasets/pull/1971#issuecomment-788954800
|
https://api.github.com/repos/huggingface/datasets/issues/1971
| 788,954,800
|
MDEyOklzc3VlQ29tbWVudDc4ODk1NDgwMA==
|
lhoestq
| 42,851,186
|
MDQ6VXNlcjQyODUxMTg2
|
https://avatars.githubusercontent.com/u/42851186?v=4
|
https://api.github.com/users/lhoestq
|
https://github.com/lhoestq
|
https://api.github.com/users/lhoestq/followers
|
https://api.github.com/users/lhoestq/following{/other_user}
|
https://api.github.com/users/lhoestq/gists{/gist_id}
|
https://api.github.com/users/lhoestq/starred{/owner}{/repo}
|
https://api.github.com/users/lhoestq/subscriptions
|
https://api.github.com/users/lhoestq/orgs
|
https://api.github.com/users/lhoestq/repos
|
https://api.github.com/users/lhoestq/events{/privacy}
|
https://api.github.com/users/lhoestq/received_events
|
User
|
public
| false
| 2021-03-02T14:38:23
| 2021-03-02T14:38:23
|
Oh nice, thanks for adding the context manager ! All the streams and RecordBatchWriter will be properly closed now. Hopefully this gives a better experience on Windows, on which it's super important to close stuff.
Not sure about the error, it looks like a process crashed silently.
Let me take a look
|
MEMBER
|
https://api.github.com/repos/huggingface/datasets/issues/comments/788954800/reactions
| 1
| 0
| 0
| 0
| 0
| 0
| 1
| 0
| 0
| null |
|
https://api.github.com/repos/huggingface/datasets/issues/comments/788965939
|
https://github.com/huggingface/datasets/pull/1971#issuecomment-788965939
|
https://api.github.com/repos/huggingface/datasets/issues/1971
| 788,965,939
|
MDEyOklzc3VlQ29tbWVudDc4ODk2NTkzOQ==
|
albertvillanova
| 8,515,462
|
MDQ6VXNlcjg1MTU0NjI=
|
https://avatars.githubusercontent.com/u/8515462?v=4
|
https://api.github.com/users/albertvillanova
|
https://github.com/albertvillanova
|
https://api.github.com/users/albertvillanova/followers
|
https://api.github.com/users/albertvillanova/following{/other_user}
|
https://api.github.com/users/albertvillanova/gists{/gist_id}
|
https://api.github.com/users/albertvillanova/starred{/owner}{/repo}
|
https://api.github.com/users/albertvillanova/subscriptions
|
https://api.github.com/users/albertvillanova/orgs
|
https://api.github.com/users/albertvillanova/repos
|
https://api.github.com/users/albertvillanova/events{/privacy}
|
https://api.github.com/users/albertvillanova/received_events
|
User
|
public
| false
| 2021-03-02T14:54:32
| 2021-03-02T14:55:35
|
> Hopefully this gives a better experience on windows on which it's super important to close stuff.
Exactly! On Windows, you get:
> PermissionError: [WinError 32] The process cannot access the file because it is being used by another process
when trying to access the unclosed `stream` file, e.g. by `with incomplete_dir(self._cache_dir) as tmp_data_dir`: `shutil.rmtree(tmp_dir)`
The reason is: https://docs.python.org/3/library/os.html#os.remove
> On Windows, attempting to remove a file that is in use causes an exception to be raised; on Unix, the directory entry is removed but the storage allocated to the file is not made available until the original file is no longer in use.
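As a small stdlib-only illustration of the point above (the file names are hypothetical; this is not the actual `datasets` code): keeping the writer in a `with` block guarantees the handle is closed before the temporary directory is removed.

```python
import os
import shutil
import tempfile

tmp_dir = tempfile.mkdtemp()
path = os.path.join(tmp_dir, "incomplete.arrow")

# The `with` block guarantees the stream is closed when the block exits.
# On Windows, shutil.rmtree() would raise PermissionError (WinError 32)
# if this handle were still open; on Unix the deletion silently succeeds.
with open(path, "wb") as stream:
    stream.write(b"partial data")

shutil.rmtree(tmp_dir)  # safe on all platforms: the handle is closed
print(os.path.exists(tmp_dir))  # False
```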
|
MEMBER
|
https://api.github.com/repos/huggingface/datasets/issues/comments/788965939/reactions
| 1
| 1
| 0
| 0
| 0
| 0
| 0
| 0
| 0
| null |
|
https://api.github.com/repos/huggingface/datasets/issues/comments/788966713
|
https://github.com/huggingface/datasets/pull/1971#issuecomment-788966713
|
https://api.github.com/repos/huggingface/datasets/issues/1971
| 788,966,713
|
MDEyOklzc3VlQ29tbWVudDc4ODk2NjcxMw==
|
lhoestq
| 42,851,186
|
MDQ6VXNlcjQyODUxMTg2
|
https://avatars.githubusercontent.com/u/42851186?v=4
|
https://api.github.com/users/lhoestq
|
https://github.com/lhoestq
|
https://api.github.com/users/lhoestq/followers
|
https://api.github.com/users/lhoestq/following{/other_user}
|
https://api.github.com/users/lhoestq/gists{/gist_id}
|
https://api.github.com/users/lhoestq/starred{/owner}{/repo}
|
https://api.github.com/users/lhoestq/subscriptions
|
https://api.github.com/users/lhoestq/orgs
|
https://api.github.com/users/lhoestq/repos
|
https://api.github.com/users/lhoestq/events{/privacy}
|
https://api.github.com/users/lhoestq/received_events
|
User
|
public
| false
| 2021-03-02T14:55:40
| 2021-03-02T14:55:40
|
The test passes on my Windows machine. This was probably a CircleCI issue. I re-ran the CircleCI tests.
|
MEMBER
|
https://api.github.com/repos/huggingface/datasets/issues/comments/788966713/reactions
| 1
| 1
| 0
| 0
| 0
| 0
| 0
| 0
| 0
| null |
|
https://api.github.com/repos/huggingface/datasets/issues/comments/788967340
|
https://github.com/huggingface/datasets/pull/1971#issuecomment-788967340
|
https://api.github.com/repos/huggingface/datasets/issues/1971
| 788,967,340
|
MDEyOklzc3VlQ29tbWVudDc4ODk2NzM0MA==
|
albertvillanova
| 8,515,462
|
MDQ6VXNlcjg1MTU0NjI=
|
https://avatars.githubusercontent.com/u/8515462?v=4
|
https://api.github.com/users/albertvillanova
|
https://github.com/albertvillanova
|
https://api.github.com/users/albertvillanova/followers
|
https://api.github.com/users/albertvillanova/following{/other_user}
|
https://api.github.com/users/albertvillanova/gists{/gist_id}
|
https://api.github.com/users/albertvillanova/starred{/owner}{/repo}
|
https://api.github.com/users/albertvillanova/subscriptions
|
https://api.github.com/users/albertvillanova/orgs
|
https://api.github.com/users/albertvillanova/repos
|
https://api.github.com/users/albertvillanova/events{/privacy}
|
https://api.github.com/users/albertvillanova/received_events
|
User
|
public
| false
| 2021-03-02T14:56:29
| 2021-03-02T14:56:29
|
NICE! It passed!
|
MEMBER
|
https://api.github.com/repos/huggingface/datasets/issues/comments/788967340/reactions
| 1
| 0
| 0
| 0
| 1
| 0
| 0
| 0
| 0
| null |
|
https://api.github.com/repos/huggingface/datasets/issues/comments/789055114
|
https://github.com/huggingface/datasets/pull/1967#issuecomment-789055114
|
https://api.github.com/repos/huggingface/datasets/issues/1967
| 789,055,114
|
MDEyOklzc3VlQ29tbWVudDc4OTA1NTExNA==
|
lhoestq
| 42,851,186
|
MDQ6VXNlcjQyODUxMTg2
|
https://avatars.githubusercontent.com/u/42851186?v=4
|
https://api.github.com/users/lhoestq
|
https://github.com/lhoestq
|
https://api.github.com/users/lhoestq/followers
|
https://api.github.com/users/lhoestq/following{/other_user}
|
https://api.github.com/users/lhoestq/gists{/gist_id}
|
https://api.github.com/users/lhoestq/starred{/owner}{/repo}
|
https://api.github.com/users/lhoestq/subscriptions
|
https://api.github.com/users/lhoestq/orgs
|
https://api.github.com/users/lhoestq/repos
|
https://api.github.com/users/lhoestq/events{/privacy}
|
https://api.github.com/users/lhoestq/received_events
|
User
|
public
| false
| 2021-03-02T16:56:13
| 2021-03-02T16:56:13
|
Thanks for the change, merging now !
|
MEMBER
|
https://api.github.com/repos/huggingface/datasets/issues/comments/789055114/reactions
| 1
| 0
| 0
| 0
| 0
| 0
| 0
| 1
| 0
| null |
|
https://api.github.com/repos/huggingface/datasets/issues/comments/789118304
|
https://github.com/huggingface/datasets/issues/1973#issuecomment-789118304
|
https://api.github.com/repos/huggingface/datasets/issues/1973
| 789,118,304
|
MDEyOklzc3VlQ29tbWVudDc4OTExODMwNA==
|
justin-yan
| 7,731,709
|
MDQ6VXNlcjc3MzE3MDk=
|
https://avatars.githubusercontent.com/u/7731709?v=4
|
https://api.github.com/users/justin-yan
|
https://github.com/justin-yan
|
https://api.github.com/users/justin-yan/followers
|
https://api.github.com/users/justin-yan/following{/other_user}
|
https://api.github.com/users/justin-yan/gists{/gist_id}
|
https://api.github.com/users/justin-yan/starred{/owner}{/repo}
|
https://api.github.com/users/justin-yan/subscriptions
|
https://api.github.com/users/justin-yan/orgs
|
https://api.github.com/users/justin-yan/repos
|
https://api.github.com/users/justin-yan/events{/privacy}
|
https://api.github.com/users/justin-yan/received_events
|
User
|
public
| false
| 2021-03-02T18:29:02
| 2021-03-02T18:29:02
|
Echoing this observation: I have a few datasets in the neighborhood of 2GB as uncompressed CSVs, and when I use something like `Dataset.save_to_disk()` it's ~18GB on disk.
If this is unexpected behavior, would be happy to help run debugging as needed.
|
CONTRIBUTOR
|
https://api.github.com/repos/huggingface/datasets/issues/comments/789118304/reactions
| 1
| 1
| 0
| 0
| 0
| 0
| 0
| 0
| 0
| null |
|
https://api.github.com/repos/huggingface/datasets/issues/comments/789153627
|
https://github.com/huggingface/datasets/issues/1977#issuecomment-789153627
|
https://api.github.com/repos/huggingface/datasets/issues/1977
| 789,153,627
|
MDEyOklzc3VlQ29tbWVudDc4OTE1MzYyNw==
|
dorost1234
| 79,165,106
|
MDQ6VXNlcjc5MTY1MTA2
|
https://avatars.githubusercontent.com/u/79165106?v=4
|
https://api.github.com/users/dorost1234
|
https://github.com/dorost1234
|
https://api.github.com/users/dorost1234/followers
|
https://api.github.com/users/dorost1234/following{/other_user}
|
https://api.github.com/users/dorost1234/gists{/gist_id}
|
https://api.github.com/users/dorost1234/starred{/owner}{/repo}
|
https://api.github.com/users/dorost1234/subscriptions
|
https://api.github.com/users/dorost1234/orgs
|
https://api.github.com/users/dorost1234/repos
|
https://api.github.com/users/dorost1234/events{/privacy}
|
https://api.github.com/users/dorost1234/received_events
|
User
|
public
| false
| 2021-03-02T19:24:23
| 2021-03-02T20:14:07
|
I sometimes also get this error with other languages of the same dataset:
```
  File "/dara/libs/anaconda3/envs/code/lib/python3.7/site-packages/datasets-1.3.0-py3.7.egg/datasets/arrow_reader.py", line 322, in read_table
    stream = stream_from(filename)
  File "pyarrow/io.pxi", line 782, in pyarrow.lib.memory_map
  File "pyarrow/io.pxi", line 743, in pyarrow.lib.MemoryMappedFile._open
  File "pyarrow/error.pxi", line 122, in pyarrow.lib.pyarrow_internal_check_status
  File "pyarrow/error.pxi", line 99, in pyarrow.lib.check_status
OSError: Memory mapping file failed: Cannot allocate memory
```
@lhoestq
|
NONE
|
https://api.github.com/repos/huggingface/datasets/issues/comments/789153627/reactions
| 0
| 0
| 0
| 0
| 0
| 0
| 0
| 0
| 0
| null |
|
https://api.github.com/repos/huggingface/datasets/issues/comments/789356485
|
https://github.com/huggingface/datasets/issues/1965#issuecomment-789356485
|
https://api.github.com/repos/huggingface/datasets/issues/1965
| 789,356,485
|
MDEyOklzc3VlQ29tbWVudDc4OTM1NjQ4NQ==
|
shamanez
| 16,892,570
|
MDQ6VXNlcjE2ODkyNTcw
|
https://avatars.githubusercontent.com/u/16892570?v=4
|
https://api.github.com/users/shamanez
|
https://github.com/shamanez
|
https://api.github.com/users/shamanez/followers
|
https://api.github.com/users/shamanez/following{/other_user}
|
https://api.github.com/users/shamanez/gists{/gist_id}
|
https://api.github.com/users/shamanez/starred{/owner}{/repo}
|
https://api.github.com/users/shamanez/subscriptions
|
https://api.github.com/users/shamanez/orgs
|
https://api.github.com/users/shamanez/repos
|
https://api.github.com/users/shamanez/events{/privacy}
|
https://api.github.com/users/shamanez/received_events
|
User
|
public
| false
| 2021-03-03T01:32:55
| 2021-03-03T01:32:55
|
Actually, you are right. I also had the same idea. I am trying this in the context of end-to-end retrieval training in RAG. So far I have parallelized the embedding re-computation within the training loop by using datasets shards.
Then I was thinking: can I calculate the indexes for each shard and combine them with **concatenate** before I save?
|
NONE
|
https://api.github.com/repos/huggingface/datasets/issues/comments/789356485/reactions
| 0
| 0
| 0
| 0
| 0
| 0
| 0
| 0
| 0
| null |
|
https://api.github.com/repos/huggingface/datasets/issues/comments/789552508
|
https://github.com/huggingface/datasets/issues/1964#issuecomment-789552508
|
https://api.github.com/repos/huggingface/datasets/issues/1964
| 789,552,508
|
MDEyOklzc3VlQ29tbWVudDc4OTU1MjUwOA==
|
LeopoldACC
| 44,536,699
|
MDQ6VXNlcjQ0NTM2Njk5
|
https://avatars.githubusercontent.com/u/44536699?v=4
|
https://api.github.com/users/LeopoldACC
|
https://github.com/LeopoldACC
|
https://api.github.com/users/LeopoldACC/followers
|
https://api.github.com/users/LeopoldACC/following{/other_user}
|
https://api.github.com/users/LeopoldACC/gists{/gist_id}
|
https://api.github.com/users/LeopoldACC/starred{/owner}{/repo}
|
https://api.github.com/users/LeopoldACC/subscriptions
|
https://api.github.com/users/LeopoldACC/orgs
|
https://api.github.com/users/LeopoldACC/repos
|
https://api.github.com/users/LeopoldACC/events{/privacy}
|
https://api.github.com/users/LeopoldACC/received_events
|
User
|
public
| false
| 2021-03-03T08:58:59
| 2021-03-03T08:58:59
|
## I have fixed it, @lhoestq
### The first section changed as you suggested, with `"id"` added
```python
def process_squad(examples):
    """
    Process a dataset in the squad format with columns "title" and "paragraphs"
    to return the dataset with columns "context", "question" and "answers".
    """
    # print(examples)
    out = {"context": [], "question": [], "answers": [], "id": []}
    for paragraphs in examples["paragraphs"]:
        for paragraph in paragraphs:
            for qa in paragraph["qas"]:
                answers = [{"answer_start": answer["answer_start"], "text": answer["text"].strip()} for answer in qa["answers"]]
                out["context"].append(paragraph["context"].strip())
                out["question"].append(qa["question"].strip())
                out["answers"].append(answers)
                out["id"].append(qa["id"])
    return out

column_names = datasets["train"].column_names if training_args.do_train else datasets["validation"].column_names
# print(datasets["train"].column_names)
if set(column_names) == {"title", "paragraphs"}:
    datasets = datasets.map(process_squad, batched=True, remove_columns=column_names)

# Preprocessing the datasets.
# Preprocessing is slightly different for training and evaluation.
if training_args.do_train:
    column_names = datasets["train"].column_names
else:
    column_names = datasets["validation"].column_names
# print(column_names)
question_column_name = "question" if "question" in column_names else column_names[0]
context_column_name = "context" if "context" in column_names else column_names[1]
answer_column_name = "answers" if "answers" in column_names else column_names[2]
```
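A quick stdlib-only sanity check of the flattening logic above, run on a tiny hand-made SQuAD-format batch (the data below is illustrative, not from the real dataset):

```python
def process_squad(examples):
    """Flatten SQuAD-format batches (title/paragraphs) into context/question/answers/id columns."""
    out = {"context": [], "question": [], "answers": [], "id": []}
    for paragraphs in examples["paragraphs"]:
        for paragraph in paragraphs:
            for qa in paragraph["qas"]:
                answers = [
                    {"answer_start": a["answer_start"], "text": a["text"].strip()}
                    for a in qa["answers"]
                ]
                out["context"].append(paragraph["context"].strip())
                out["question"].append(qa["question"].strip())
                out["answers"].append(answers)
                out["id"].append(qa["id"])
    return out

# One title with one paragraph and one question (batched format: lists per column).
examples = {
    "title": ["Example"],
    "paragraphs": [[{
        "context": " Some context. ",
        "qas": [{
            "id": "q1",
            "question": " What? ",
            "answers": [{"answer_start": 1, "text": "Some "}],
        }],
    }]],
}

out = process_squad(examples)
print(out["context"])  # ['Some context.']
print(out["id"])       # ['q1']
```

Note that the answers stay a list of dicts per question here, matching the `answers[0]["answer_start"]` access pattern used in the second section.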
### The second section
```python
def prepare_train_features(examples):
    # Tokenize our examples with truncation and maybe padding, but keep the overflows using a stride. This results
    # in one example possible giving several features when a context is long, each of those features having a
    # context that overlaps a bit the context of the previous feature.
    tokenized_examples = tokenizer(
        examples[question_column_name if pad_on_right else context_column_name],
        examples[context_column_name if pad_on_right else question_column_name],
        truncation="only_second" if pad_on_right else "only_first",
        max_length=data_args.max_seq_length,
        stride=data_args.doc_stride,
        return_overflowing_tokens=True,
        return_offsets_mapping=True,
        padding="max_length" if data_args.pad_to_max_length else False,
    )

    # Since one example might give us several features if it has a long context, we need a map from a feature to
    # its corresponding example. This key gives us just that.
    sample_mapping = tokenized_examples.pop("overflow_to_sample_mapping")
    # The offset mappings will give us a map from token to character position in the original context. This will
    # help us compute the start_positions and end_positions.
    offset_mapping = tokenized_examples.pop("offset_mapping")

    # Let's label those examples!
    tokenized_examples["start_positions"] = []
    tokenized_examples["end_positions"] = []

    for i, offsets in enumerate(offset_mapping):
        # We will label impossible answers with the index of the CLS token.
        input_ids = tokenized_examples["input_ids"][i]
        cls_index = input_ids.index(tokenizer.cls_token_id)

        # Grab the sequence corresponding to that example (to know what is the context and what is the question).
        sequence_ids = tokenized_examples.sequence_ids(i)

        # One example can give several spans, this is the index of the example containing this span of text.
        sample_index = sample_mapping[i]
        answers = examples[answer_column_name][sample_index]
        # print(examples, answers, offset_mapping, tokenized_examples)
        # If no answers are given, set the cls_index as answer.
        if len(answers) == 0:  # was: len(answers["answer_start"]) == 0
            tokenized_examples["start_positions"].append(cls_index)
            tokenized_examples["end_positions"].append(cls_index)
        else:
            # Start/end character index of the answer in the text.
            start_char = answers[0]["answer_start"]
            end_char = start_char + len(answers[0]["text"])

            # Start token index of the current span in the text.
            token_start_index = 0
            while sequence_ids[token_start_index] != (1 if pad_on_right else 0):
                token_start_index += 1

            # End token index of the current span in the text.
            token_end_index = len(input_ids) - 1
            while sequence_ids[token_end_index] != (1 if pad_on_right else 0):
                token_end_index -= 1

            # Detect if the answer is out of the span (in which case this feature is labeled with the CLS index).
            if not (offsets[token_start_index][0] <= start_char and offsets[token_end_index][1] >= end_char):
                tokenized_examples["start_positions"].append(cls_index)
                tokenized_examples["end_positions"].append(cls_index)
            else:
                # Otherwise move the token_start_index and token_end_index to the two ends of the answer.
                # Note: we could go after the last offset if the answer is the last word (edge case).
                while token_start_index < len(offsets) and offsets[token_start_index][0] <= start_char:
                    token_start_index += 1
                tokenized_examples["start_positions"].append(token_start_index - 1)
                while offsets[token_end_index][1] >= end_char:
                    token_end_index -= 1
                tokenized_examples["end_positions"].append(token_end_index + 1)

    return tokenized_examples
```
|
NONE
|
https://api.github.com/repos/huggingface/datasets/issues/comments/789552508/reactions
| 1
| 1
| 0
| 0
| 0
| 0
| 0
| 0
| 0
| null |
|
https://api.github.com/repos/huggingface/datasets/issues/comments/789579308
|
https://github.com/huggingface/datasets/pull/1910#issuecomment-789579308
|
https://api.github.com/repos/huggingface/datasets/issues/1910
| 789,579,308
|
MDEyOklzc3VlQ29tbWVudDc4OTU3OTMwOA==
|
lhoestq
| 42,851,186
|
MDQ6VXNlcjQyODUxMTg2
|
https://avatars.githubusercontent.com/u/42851186?v=4
|
https://api.github.com/users/lhoestq
|
https://github.com/lhoestq
|
https://api.github.com/users/lhoestq/followers
|
https://api.github.com/users/lhoestq/following{/other_user}
|
https://api.github.com/users/lhoestq/gists{/gist_id}
|
https://api.github.com/users/lhoestq/starred{/owner}{/repo}
|
https://api.github.com/users/lhoestq/subscriptions
|
https://api.github.com/users/lhoestq/orgs
|
https://api.github.com/users/lhoestq/repos
|
https://api.github.com/users/lhoestq/events{/privacy}
|
https://api.github.com/users/lhoestq/received_events
|
User
|
public
| false
| 2021-03-03T09:40:24
| 2021-03-03T09:40:24
|
It looks like this PR now includes changes to many other files than the ones for CoNLLpp.
To fix that feel free to create another branch and another PR.
This was probably caused by a git rebase. You can avoid this issue by using git merge if you've already pushed your branch.
|
MEMBER
|
https://api.github.com/repos/huggingface/datasets/issues/comments/789579308/reactions
| 0
| 0
| 0
| 0
| 0
| 0
| 0
| 0
| 0
| null |
|
https://api.github.com/repos/huggingface/datasets/issues/comments/789602266
|
https://github.com/huggingface/datasets/issues/1977#issuecomment-789602266
|
https://api.github.com/repos/huggingface/datasets/issues/1977
| 789,602,266
|
MDEyOklzc3VlQ29tbWVudDc4OTYwMjI2Ng==
|
lhoestq
| 42,851,186
|
MDQ6VXNlcjQyODUxMTg2
|
https://avatars.githubusercontent.com/u/42851186?v=4
|
https://api.github.com/users/lhoestq
|
https://github.com/lhoestq
|
https://api.github.com/users/lhoestq/followers
|
https://api.github.com/users/lhoestq/following{/other_user}
|
https://api.github.com/users/lhoestq/gists{/gist_id}
|
https://api.github.com/users/lhoestq/starred{/owner}{/repo}
|
https://api.github.com/users/lhoestq/subscriptions
|
https://api.github.com/users/lhoestq/orgs
|
https://api.github.com/users/lhoestq/repos
|
https://api.github.com/users/lhoestq/events{/privacy}
|
https://api.github.com/users/lhoestq/received_events
|
User
|
public
| false
| 2021-03-03T10:16:48
| 2021-03-03T10:17:40
|
Hi ! Thanks for reporting
Some wikipedia configurations do require the user to have `apache_beam` in order to parse the wikimedia data.
On the other hand regarding your second issue
```
OSError: Memory mapping file failed: Cannot allocate memory
```
I've never experienced this, can you open a new issue for this specific error and provide more details please ?
For example what script did you use to get this, what language did you use, what's your environment details (os, python version, pyarrow version)..
|
MEMBER
|
https://api.github.com/repos/huggingface/datasets/issues/comments/789602266/reactions
| 0
| 0
| 0
| 0
| 0
| 0
| 0
| 0
| 0
| null |
|
https://api.github.com/repos/huggingface/datasets/issues/comments/789642963
|
https://github.com/huggingface/datasets/issues/1724#issuecomment-789642963
|
https://api.github.com/repos/huggingface/datasets/issues/1724
| 789,642,963
|
MDEyOklzc3VlQ29tbWVudDc4OTY0Mjk2Mw==
|
xinjicong
| 41,193,842
|
MDQ6VXNlcjQxMTkzODQy
|
https://avatars.githubusercontent.com/u/41193842?v=4
|
https://api.github.com/users/xinjicong
|
https://github.com/xinjicong
|
https://api.github.com/users/xinjicong/followers
|
https://api.github.com/users/xinjicong/following{/other_user}
|
https://api.github.com/users/xinjicong/gists{/gist_id}
|
https://api.github.com/users/xinjicong/starred{/owner}{/repo}
|
https://api.github.com/users/xinjicong/subscriptions
|
https://api.github.com/users/xinjicong/orgs
|
https://api.github.com/users/xinjicong/repos
|
https://api.github.com/users/xinjicong/events{/privacy}
|
https://api.github.com/users/xinjicong/received_events
|
User
|
public
| false
| 2021-03-03T11:21:00
| 2021-03-03T11:21:00
|
> The local dataset builders (csv, text , json and pandas) are now part of the `datasets` package since #1726 :)
> You can now use them offline
>
> ```python
> datasets = load_dataset('text', data_files=data_files)
> ```
>
> We'll do a new release soon
So is the new version released now?
|
NONE
|
https://api.github.com/repos/huggingface/datasets/issues/comments/789642963/reactions
| 0
| 0
| 0
| 0
| 0
| 0
| 0
| 0
| 0
| null |
|
https://api.github.com/repos/huggingface/datasets/issues/comments/789695372
|
https://github.com/huggingface/datasets/issues/1973#issuecomment-789695372
|
https://api.github.com/repos/huggingface/datasets/issues/1973
| 789,695,372
|
MDEyOklzc3VlQ29tbWVudDc4OTY5NTM3Mg==
|
albertvillanova
| 8,515,462
|
MDQ6VXNlcjg1MTU0NjI=
|
https://avatars.githubusercontent.com/u/8515462?v=4
|
https://api.github.com/users/albertvillanova
|
https://github.com/albertvillanova
|
https://api.github.com/users/albertvillanova/followers
|
https://api.github.com/users/albertvillanova/following{/other_user}
|
https://api.github.com/users/albertvillanova/gists{/gist_id}
|
https://api.github.com/users/albertvillanova/starred{/owner}{/repo}
|
https://api.github.com/users/albertvillanova/subscriptions
|
https://api.github.com/users/albertvillanova/orgs
|
https://api.github.com/users/albertvillanova/repos
|
https://api.github.com/users/albertvillanova/events{/privacy}
|
https://api.github.com/users/albertvillanova/received_events
|
User
|
public
| false
| 2021-03-03T12:56:39
| 2021-03-03T12:56:39
|
Thanks @ioana-blue for pointing out this problem (and thanks also @justin-yan). You are right that the current implementation of the datasets caching files takes up too much space. We are definitely changing this and optimizing the defaults so that the file sizes are considerably reduced. I will get back to you as soon as this is fixed.
|
MEMBER
|
https://api.github.com/repos/huggingface/datasets/issues/comments/789695372/reactions
| 0
| 0
| 0
| 0
| 0
| 0
| 0
| 0
| 0
| null |
|
https://api.github.com/repos/huggingface/datasets/issues/comments/789745425
|
https://github.com/huggingface/datasets/pull/1962#issuecomment-789745425
|
https://api.github.com/repos/huggingface/datasets/issues/1962
| 789,745,425
|
MDEyOklzc3VlQ29tbWVudDc4OTc0NTQyNQ==
|
mariosasko
| 47,462,742
|
MDQ6VXNlcjQ3NDYyNzQy
|
https://avatars.githubusercontent.com/u/47462742?v=4
|
https://api.github.com/users/mariosasko
|
https://github.com/mariosasko
|
https://api.github.com/users/mariosasko/followers
|
https://api.github.com/users/mariosasko/following{/other_user}
|
https://api.github.com/users/mariosasko/gists{/gist_id}
|
https://api.github.com/users/mariosasko/starred{/owner}{/repo}
|
https://api.github.com/users/mariosasko/subscriptions
|
https://api.github.com/users/mariosasko/orgs
|
https://api.github.com/users/mariosasko/repos
|
https://api.github.com/users/mariosasko/events{/privacy}
|
https://api.github.com/users/mariosasko/received_events
|
User
|
public
| false
| 2021-03-03T14:17:00
| 2021-03-03T14:17:00
|
@lhoestq Re-added the arg. The ConnectionError in CI seems unrelated to this PR (the same test fails on master as well).
|
COLLABORATOR
|
https://api.github.com/repos/huggingface/datasets/issues/comments/789745425/reactions
| 0
| 0
| 0
| 0
| 0
| 0
| 0
| 0
| 0
| null |
|
https://api.github.com/repos/huggingface/datasets/issues/comments/789793011
|
https://github.com/huggingface/datasets/issues/1973#issuecomment-789793011
|
https://api.github.com/repos/huggingface/datasets/issues/1973
| 789,793,011
|
MDEyOklzc3VlQ29tbWVudDc4OTc5MzAxMQ==
|
ioana-blue
| 17,202,292
|
MDQ6VXNlcjE3MjAyMjky
|
https://avatars.githubusercontent.com/u/17202292?v=4
|
https://api.github.com/users/ioana-blue
|
https://github.com/ioana-blue
|
https://api.github.com/users/ioana-blue/followers
|
https://api.github.com/users/ioana-blue/following{/other_user}
|
https://api.github.com/users/ioana-blue/gists{/gist_id}
|
https://api.github.com/users/ioana-blue/starred{/owner}{/repo}
|
https://api.github.com/users/ioana-blue/subscriptions
|
https://api.github.com/users/ioana-blue/orgs
|
https://api.github.com/users/ioana-blue/repos
|
https://api.github.com/users/ioana-blue/events{/privacy}
|
https://api.github.com/users/ioana-blue/received_events
|
User
|
public
| false
| 2021-03-03T15:21:14
| 2021-03-03T15:21:14
|
Thank you! I also noticed that the files don't seem to be cleaned up after the jobs finish. Last night I had only 3 jobs running, but the cache was still at 180GB.
|
NONE
|
https://api.github.com/repos/huggingface/datasets/issues/comments/789793011/reactions
| 0
| 0
| 0
| 0
| 0
| 0
| 0
| 0
| 0
| null |
|
https://api.github.com/repos/huggingface/datasets/issues/comments/789793363
|
https://github.com/huggingface/datasets/issues/1973#issuecomment-789793363
|
https://api.github.com/repos/huggingface/datasets/issues/1973
| 789,793,363
|
MDEyOklzc3VlQ29tbWVudDc4OTc5MzM2Mw==
|
ioana-blue
| 17,202,292
|
MDQ6VXNlcjE3MjAyMjky
|
https://avatars.githubusercontent.com/u/17202292?v=4
|
https://api.github.com/users/ioana-blue
|
https://github.com/ioana-blue
|
https://api.github.com/users/ioana-blue/followers
|
https://api.github.com/users/ioana-blue/following{/other_user}
|
https://api.github.com/users/ioana-blue/gists{/gist_id}
|
https://api.github.com/users/ioana-blue/starred{/owner}{/repo}
|
https://api.github.com/users/ioana-blue/subscriptions
|
https://api.github.com/users/ioana-blue/orgs
|
https://api.github.com/users/ioana-blue/repos
|
https://api.github.com/users/ioana-blue/events{/privacy}
|
https://api.github.com/users/ioana-blue/received_events
|
User
|
public
| false
| 2021-03-03T15:21:45
| 2021-03-03T15:21:45
|
And to clarify, it's not memory, it's disk space. Thank you!
|
NONE
|
https://api.github.com/repos/huggingface/datasets/issues/comments/789793363/reactions
| 0
| 0
| 0
| 0
| 0
| 0
| 0
| 0
| 0
| null |
|
https://api.github.com/repos/huggingface/datasets/issues/comments/789798816
|
https://github.com/huggingface/datasets/issues/1964#issuecomment-789798816
|
https://api.github.com/repos/huggingface/datasets/issues/1964
| 789,798,816
|
MDEyOklzc3VlQ29tbWVudDc4OTc5ODgxNg==
|
lhoestq
| 42,851,186
|
MDQ6VXNlcjQyODUxMTg2
|
https://avatars.githubusercontent.com/u/42851186?v=4
|
https://api.github.com/users/lhoestq
|
https://github.com/lhoestq
|
https://api.github.com/users/lhoestq/followers
|
https://api.github.com/users/lhoestq/following{/other_user}
|
https://api.github.com/users/lhoestq/gists{/gist_id}
|
https://api.github.com/users/lhoestq/starred{/owner}{/repo}
|
https://api.github.com/users/lhoestq/subscriptions
|
https://api.github.com/users/lhoestq/orgs
|
https://api.github.com/users/lhoestq/repos
|
https://api.github.com/users/lhoestq/events{/privacy}
|
https://api.github.com/users/lhoestq/received_events
|
User
|
public
| false
| 2021-03-03T15:29:20
| 2021-03-03T15:29:20
|
I'm glad you managed to fix run_qa.py for your case :)
Regarding the checksum error, I'm not able to reproduce it on my side.
This error says that the downloaded file doesn't match the expected file.
Could you try running this and let me know if you get the same output as me?
```python
from datasets.utils.info_utils import get_size_checksum_dict
from datasets import cached_path
get_size_checksum_dict(cached_path("https://rajpurkar.github.io/SQuAD-explorer/dataset/train-v1.1.json"))
# {'num_bytes': 30288272, 'checksum': '3527663986b8295af4f7fcdff1ba1ff3f72d07d61a20f487cb238a6ef92fd955'}
```
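For reference, a checksum like the one above can be reproduced with just the standard library. This is a hedged sketch, not the actual `get_size_checksum_dict` implementation; it assumes the checksum shown is a SHA-256 hex digest (consistent with its 64 hex characters), and the function name `get_size_checksum` is illustrative:

```python
import hashlib
import os

def get_size_checksum(path, chunk_size=1 << 20):
    """Return a dict like the one above: file size in bytes and SHA-256 hex digest."""
    sha256 = hashlib.sha256()
    with open(path, "rb") as f:
        # Hash the file in chunks so large downloads don't have to fit in memory.
        for chunk in iter(lambda: f.read(chunk_size), b""):
            sha256.update(chunk)
    return {"num_bytes": os.path.getsize(path), "checksum": sha256.hexdigest()}
```

Running this on the downloaded `train-v1.1.json` should let you compare your local file against the expected checksum directly.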
|
MEMBER
|
https://api.github.com/repos/huggingface/datasets/issues/comments/789798816/reactions
| 0
| 0
| 0
| 0
| 0
| 0
| 0
| 0
| 0
| null |
|
https://api.github.com/repos/huggingface/datasets/issues/comments/789801212
|
https://github.com/huggingface/datasets/issues/1724#issuecomment-789801212
|
https://api.github.com/repos/huggingface/datasets/issues/1724
| 789,801,212
|
MDEyOklzc3VlQ29tbWVudDc4OTgwMTIxMg==
|
lhoestq
| 42,851,186
|
MDQ6VXNlcjQyODUxMTg2
|
https://avatars.githubusercontent.com/u/42851186?v=4
|
https://api.github.com/users/lhoestq
|
https://github.com/lhoestq
|
https://api.github.com/users/lhoestq/followers
|
https://api.github.com/users/lhoestq/following{/other_user}
|
https://api.github.com/users/lhoestq/gists{/gist_id}
|
https://api.github.com/users/lhoestq/starred{/owner}{/repo}
|
https://api.github.com/users/lhoestq/subscriptions
|
https://api.github.com/users/lhoestq/orgs
|
https://api.github.com/users/lhoestq/repos
|
https://api.github.com/users/lhoestq/events{/privacy}
|
https://api.github.com/users/lhoestq/received_events
|
User
|
public
| false
| 2021-03-03T15:32:29
| 2021-03-03T15:32:29
|
Yes, it's been available since datasets 1.3.0!
|
MEMBER
|
https://api.github.com/repos/huggingface/datasets/issues/comments/789801212/reactions
| 0
| 0
| 0
| 0
| 0
| 0
| 0
| 0
| 0
| null |
|
https://api.github.com/repos/huggingface/datasets/issues/comments/789806326
|
https://github.com/huggingface/datasets/pull/1962#issuecomment-789806326
|
https://api.github.com/repos/huggingface/datasets/issues/1962
| 789,806,326
|
MDEyOklzc3VlQ29tbWVudDc4OTgwNjMyNg==
|
lhoestq
| 42,851,186
|
MDQ6VXNlcjQyODUxMTg2
|
https://avatars.githubusercontent.com/u/42851186?v=4
|
https://api.github.com/users/lhoestq
|
https://github.com/lhoestq
|
https://api.github.com/users/lhoestq/followers
|
https://api.github.com/users/lhoestq/following{/other_user}
|
https://api.github.com/users/lhoestq/gists{/gist_id}
|
https://api.github.com/users/lhoestq/starred{/owner}{/repo}
|
https://api.github.com/users/lhoestq/subscriptions
|
https://api.github.com/users/lhoestq/orgs
|
https://api.github.com/users/lhoestq/repos
|
https://api.github.com/users/lhoestq/events{/privacy}
|
https://api.github.com/users/lhoestq/received_events
|
User
|
public
| false
| 2021-03-03T15:39:27
| 2021-03-03T15:39:27
|
Thanks!
I'm re-running the CI; maybe this was an issue with CircleCI.
|
MEMBER
|
https://api.github.com/repos/huggingface/datasets/issues/comments/789806326/reactions
| 1
| 1
| 0
| 0
| 0
| 0
| 0
| 0
| 0
| null |
|
https://api.github.com/repos/huggingface/datasets/issues/comments/789842110
|
https://github.com/huggingface/datasets/issues/1973#issuecomment-789842110
|
https://api.github.com/repos/huggingface/datasets/issues/1973
| 789,842,110
|
MDEyOklzc3VlQ29tbWVudDc4OTg0MjExMA==
|
lhoestq
| 42,851,186
|
MDQ6VXNlcjQyODUxMTg2
|
https://avatars.githubusercontent.com/u/42851186?v=4
|
https://api.github.com/users/lhoestq
|
https://github.com/lhoestq
|
https://api.github.com/users/lhoestq/followers
|
https://api.github.com/users/lhoestq/following{/other_user}
|
https://api.github.com/users/lhoestq/gists{/gist_id}
|
https://api.github.com/users/lhoestq/starred{/owner}{/repo}
|
https://api.github.com/users/lhoestq/subscriptions
|
https://api.github.com/users/lhoestq/orgs
|
https://api.github.com/users/lhoestq/repos
|
https://api.github.com/users/lhoestq/events{/privacy}
|
https://api.github.com/users/lhoestq/received_events
|
User
|
public
| false
| 2021-03-03T16:25:11
| 2021-03-03T16:25:11
|
Hi! As Albert said, the cache files can sometimes take more space than expected, but we'll fix that soon.
Also, to give more details about caching: computations on a dataset are cached by default so that you don't have to recompute them the next time you run them.
So by default the cache files stay on your disk when your job is finished (so that if you re-execute it, the result is reloaded from the cache).
Feel free to clear your cache after your job has finished, or disable caching using
```python
import datasets
datasets.set_caching_enabled(False)
```
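Since the cache files stay on disk, it can be handy to check how large the cache directory has grown. This is a small stdlib sketch (the default cache path below is an assumption for Linux-like systems; the actual location can be moved with the `HF_DATASETS_CACHE` environment variable):

```python
import os

def dir_size_bytes(root):
    """Walk a directory tree and sum the sizes of all regular files."""
    total = 0
    for dirpath, _dirnames, filenames in os.walk(root):
        for name in filenames:
            path = os.path.join(dirpath, name)
            # Skip broken symlinks and anything that is not a regular file.
            if os.path.isfile(path):
                total += os.path.getsize(path)
    return total

# Assumed default cache location; adjust if you set HF_DATASETS_CACHE.
cache_dir = os.path.expanduser("~/.cache/huggingface/datasets")
if os.path.isdir(cache_dir):
    print(f"cache size: {dir_size_bytes(cache_dir) / 1e9:.1f} GB")
```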
|
MEMBER
|
https://api.github.com/repos/huggingface/datasets/issues/comments/789842110/reactions
| 1
| 1
| 0
| 0
| 0
| 0
| 0
| 0
| 0
| null |
|
https://api.github.com/repos/huggingface/datasets/issues/comments/789857489
|
https://github.com/huggingface/datasets/pull/1962#issuecomment-789857489
|
https://api.github.com/repos/huggingface/datasets/issues/1962
| 789,857,489
|
MDEyOklzc3VlQ29tbWVudDc4OTg1NzQ4OQ==
|
lhoestq
| 42,851,186
|
MDQ6VXNlcjQyODUxMTg2
|
https://avatars.githubusercontent.com/u/42851186?v=4
|
https://api.github.com/users/lhoestq
|
https://github.com/lhoestq
|
https://api.github.com/users/lhoestq/followers
|
https://api.github.com/users/lhoestq/following{/other_user}
|
https://api.github.com/users/lhoestq/gists{/gist_id}
|
https://api.github.com/users/lhoestq/starred{/owner}{/repo}
|
https://api.github.com/users/lhoestq/subscriptions
|
https://api.github.com/users/lhoestq/orgs
|
https://api.github.com/users/lhoestq/repos
|
https://api.github.com/users/lhoestq/events{/privacy}
|
https://api.github.com/users/lhoestq/received_events
|
User
|
public
| false
| 2021-03-03T16:38:24
| 2021-03-03T16:38:24
|
Looks all good now, merged :)
|
MEMBER
|
https://api.github.com/repos/huggingface/datasets/issues/comments/789857489/reactions
| 0
| 0
| 0
| 0
| 0
| 0
| 0
| 0
| 0
| null |
|
https://api.github.com/repos/huggingface/datasets/issues/comments/789909461
|
https://github.com/huggingface/datasets/issues/1973#issuecomment-789909461
|
https://api.github.com/repos/huggingface/datasets/issues/1973
| 789,909,461
|
MDEyOklzc3VlQ29tbWVudDc4OTkwOTQ2MQ==
|
ioana-blue
| 17,202,292
|
MDQ6VXNlcjE3MjAyMjky
|
https://avatars.githubusercontent.com/u/17202292?v=4
|
https://api.github.com/users/ioana-blue
|
https://github.com/ioana-blue
|
https://api.github.com/users/ioana-blue/followers
|
https://api.github.com/users/ioana-blue/following{/other_user}
|
https://api.github.com/users/ioana-blue/gists{/gist_id}
|
https://api.github.com/users/ioana-blue/starred{/owner}{/repo}
|
https://api.github.com/users/ioana-blue/subscriptions
|
https://api.github.com/users/ioana-blue/orgs
|
https://api.github.com/users/ioana-blue/repos
|
https://api.github.com/users/ioana-blue/events{/privacy}
|
https://api.github.com/users/ioana-blue/received_events
|
User
|
public
| false
| 2021-03-03T17:28:16
| 2021-03-03T17:28:16
|
Thanks for the tip, this is useful.
|
NONE
|
https://api.github.com/repos/huggingface/datasets/issues/comments/789909461/reactions
| 0
| 0
| 0
| 0
| 0
| 0
| 0
| 0
| 0
| null |
|
https://api.github.com/repos/huggingface/datasets/issues/comments/789923989
|
https://github.com/huggingface/datasets/issues/1919#issuecomment-789923989
|
https://api.github.com/repos/huggingface/datasets/issues/1919
| 789,923,989
|
MDEyOklzc3VlQ29tbWVudDc4OTkyMzk4OQ==
|
lhoestq
| 42,851,186
|
MDQ6VXNlcjQyODUxMTg2
|
https://avatars.githubusercontent.com/u/42851186?v=4
|
https://api.github.com/users/lhoestq
|
https://github.com/lhoestq
|
https://api.github.com/users/lhoestq/followers
|
https://api.github.com/users/lhoestq/following{/other_user}
|
https://api.github.com/users/lhoestq/gists{/gist_id}
|
https://api.github.com/users/lhoestq/starred{/owner}{/repo}
|
https://api.github.com/users/lhoestq/subscriptions
|
https://api.github.com/users/lhoestq/orgs
|
https://api.github.com/users/lhoestq/repos
|
https://api.github.com/users/lhoestq/events{/privacy}
|
https://api.github.com/users/lhoestq/received_events
|
User
|
public
| false
| 2021-03-03T17:40:27
| 2021-03-03T17:40:27
|
Closing since this has been fixed by #1923
|
MEMBER
|
https://api.github.com/repos/huggingface/datasets/issues/comments/789923989/reactions
| 0
| 0
| 0
| 0
| 0
| 0
| 0
| 0
| 0
| null |
|
https://api.github.com/repos/huggingface/datasets/issues/comments/789924506
|
https://github.com/huggingface/datasets/issues/1915#issuecomment-789924506
|
https://api.github.com/repos/huggingface/datasets/issues/1915
| 789,924,506
|
MDEyOklzc3VlQ29tbWVudDc4OTkyNDUwNg==
|
lhoestq
| 42,851,186
|
MDQ6VXNlcjQyODUxMTg2
|
https://avatars.githubusercontent.com/u/42851186?v=4
|
https://api.github.com/users/lhoestq
|
https://github.com/lhoestq
|
https://api.github.com/users/lhoestq/followers
|
https://api.github.com/users/lhoestq/following{/other_user}
|
https://api.github.com/users/lhoestq/gists{/gist_id}
|
https://api.github.com/users/lhoestq/starred{/owner}{/repo}
|
https://api.github.com/users/lhoestq/subscriptions
|
https://api.github.com/users/lhoestq/orgs
|
https://api.github.com/users/lhoestq/repos
|
https://api.github.com/users/lhoestq/events{/privacy}
|
https://api.github.com/users/lhoestq/received_events
|
User
|
public
| false
| 2021-03-03T17:40:48
| 2021-03-03T17:40:48
|
Closing since this has been fixed by #1925
|
MEMBER
|
https://api.github.com/repos/huggingface/datasets/issues/comments/789924506/reactions
| 0
| 0
| 0
| 0
| 0
| 0
| 0
| 0
| 0
| null |
|
https://api.github.com/repos/huggingface/datasets/issues/comments/789926227
|
https://github.com/huggingface/datasets/issues/1893#issuecomment-789926227
|
https://api.github.com/repos/huggingface/datasets/issues/1893
| 789,926,227
|
MDEyOklzc3VlQ29tbWVudDc4OTkyNjIyNw==
|
lhoestq
| 42,851,186
|
MDQ6VXNlcjQyODUxMTg2
|
https://avatars.githubusercontent.com/u/42851186?v=4
|
https://api.github.com/users/lhoestq
|
https://github.com/lhoestq
|
https://api.github.com/users/lhoestq/followers
|
https://api.github.com/users/lhoestq/following{/other_user}
|
https://api.github.com/users/lhoestq/gists{/gist_id}
|
https://api.github.com/users/lhoestq/starred{/owner}{/repo}
|
https://api.github.com/users/lhoestq/subscriptions
|
https://api.github.com/users/lhoestq/orgs
|
https://api.github.com/users/lhoestq/repos
|
https://api.github.com/users/lhoestq/events{/privacy}
|
https://api.github.com/users/lhoestq/received_events
|
User
|
public
| false
| 2021-03-03T17:42:02
| 2021-03-03T17:42:02
|
Closing since this has been fixed by #1912
|
MEMBER
|
https://api.github.com/repos/huggingface/datasets/issues/comments/789926227/reactions
| 0
| 0
| 0
| 0
| 0
| 0
| 0
| 0
| 0
| null |
|
https://api.github.com/repos/huggingface/datasets/issues/comments/790004794
|
https://github.com/huggingface/datasets/issues/1981#issuecomment-790004794
|
https://api.github.com/repos/huggingface/datasets/issues/1981
| 790,004,794
|
MDEyOklzc3VlQ29tbWVudDc5MDAwNDc5NA==
|
albertvillanova
| 8,515,462
|
MDQ6VXNlcjg1MTU0NjI=
|
https://avatars.githubusercontent.com/u/8515462?v=4
|
https://api.github.com/users/albertvillanova
|
https://github.com/albertvillanova
|
https://api.github.com/users/albertvillanova/followers
|
https://api.github.com/users/albertvillanova/following{/other_user}
|
https://api.github.com/users/albertvillanova/gists{/gist_id}
|
https://api.github.com/users/albertvillanova/starred{/owner}{/repo}
|
https://api.github.com/users/albertvillanova/subscriptions
|
https://api.github.com/users/albertvillanova/orgs
|
https://api.github.com/users/albertvillanova/repos
|
https://api.github.com/users/albertvillanova/events{/privacy}
|
https://api.github.com/users/albertvillanova/received_events
|
User
|
public
| false
| 2021-03-03T19:41:29
| 2021-03-03T19:41:40
|
@stas00 Mea culpa... May I fix this tomorrow morning?
|
MEMBER
|
https://api.github.com/repos/huggingface/datasets/issues/comments/790004794/reactions
| 1
| 0
| 0
| 0
| 0
| 0
| 1
| 0
| 0
| null |
|
https://api.github.com/repos/huggingface/datasets/issues/comments/790005662
|
https://github.com/huggingface/datasets/issues/1981#issuecomment-790005662
|
https://api.github.com/repos/huggingface/datasets/issues/1981
| 790,005,662
|
MDEyOklzc3VlQ29tbWVudDc5MDAwNTY2Mg==
|
stas00
| 10,676,103
|
MDQ6VXNlcjEwNjc2MTAz
|
https://avatars.githubusercontent.com/u/10676103?v=4
|
https://api.github.com/users/stas00
|
https://github.com/stas00
|
https://api.github.com/users/stas00/followers
|
https://api.github.com/users/stas00/following{/other_user}
|
https://api.github.com/users/stas00/gists{/gist_id}
|
https://api.github.com/users/stas00/starred{/owner}{/repo}
|
https://api.github.com/users/stas00/subscriptions
|
https://api.github.com/users/stas00/orgs
|
https://api.github.com/users/stas00/repos
|
https://api.github.com/users/stas00/events{/privacy}
|
https://api.github.com/users/stas00/received_events
|
User
|
public
| false
| 2021-03-03T19:42:52
| 2021-03-03T19:42:52
|
Yes, of course. I reverted to the version before that and it works ;)
But since a new release was just made, you will probably need to make a hotfix.
And maybe add the wmt datasets to the tests?
|
CONTRIBUTOR
|
https://api.github.com/repos/huggingface/datasets/issues/comments/790005662/reactions
| 0
| 0
| 0
| 0
| 0
| 0
| 0
| 0
| 0
| null |
|
https://api.github.com/repos/huggingface/datasets/issues/comments/790006926
|
https://github.com/huggingface/datasets/issues/1981#issuecomment-790006926
|
https://api.github.com/repos/huggingface/datasets/issues/1981
| 790,006,926
|
MDEyOklzc3VlQ29tbWVudDc5MDAwNjkyNg==
|
albertvillanova
| 8,515,462
|
MDQ6VXNlcjg1MTU0NjI=
|
https://avatars.githubusercontent.com/u/8515462?v=4
|
https://api.github.com/users/albertvillanova
|
https://github.com/albertvillanova
|
https://api.github.com/users/albertvillanova/followers
|
https://api.github.com/users/albertvillanova/following{/other_user}
|
https://api.github.com/users/albertvillanova/gists{/gist_id}
|
https://api.github.com/users/albertvillanova/starred{/owner}{/repo}
|
https://api.github.com/users/albertvillanova/subscriptions
|
https://api.github.com/users/albertvillanova/orgs
|
https://api.github.com/users/albertvillanova/repos
|
https://api.github.com/users/albertvillanova/events{/privacy}
|
https://api.github.com/users/albertvillanova/received_events
|
User
|
public
| false
| 2021-03-03T19:45:05
| 2021-03-03T19:45:05
|
Sure, I will implement a regression test!
|
MEMBER
|
https://api.github.com/repos/huggingface/datasets/issues/comments/790006926/reactions
| 1
| 0
| 0
| 0
| 0
| 0
| 1
| 0
| 0
| null |
|
https://api.github.com/repos/huggingface/datasets/issues/comments/790030189
|
https://github.com/huggingface/datasets/issues/1981#issuecomment-790030189
|
https://api.github.com/repos/huggingface/datasets/issues/1981
| 790,030,189
|
MDEyOklzc3VlQ29tbWVudDc5MDAzMDE4OQ==
|
albertvillanova
| 8,515,462
|
MDQ6VXNlcjg1MTU0NjI=
|
https://avatars.githubusercontent.com/u/8515462?v=4
|
https://api.github.com/users/albertvillanova
|
https://github.com/albertvillanova
|
https://api.github.com/users/albertvillanova/followers
|
https://api.github.com/users/albertvillanova/following{/other_user}
|
https://api.github.com/users/albertvillanova/gists{/gist_id}
|
https://api.github.com/users/albertvillanova/starred{/owner}{/repo}
|
https://api.github.com/users/albertvillanova/subscriptions
|
https://api.github.com/users/albertvillanova/orgs
|
https://api.github.com/users/albertvillanova/repos
|
https://api.github.com/users/albertvillanova/events{/privacy}
|
https://api.github.com/users/albertvillanova/received_events
|
User
|
public
| false
| 2021-03-03T20:23:05
| 2021-03-03T20:23:05
|
@stas00 it is fixed. @lhoestq are you releasing the hot fix or would you prefer me to do it?
|
MEMBER
|
https://api.github.com/repos/huggingface/datasets/issues/comments/790030189/reactions
| 2
| 0
| 0
| 0
| 0
| 0
| 2
| 0
| 0
| null |
|
https://api.github.com/repos/huggingface/datasets/issues/comments/790050134
|
https://github.com/huggingface/datasets/pull/1982#issuecomment-790050134
|
https://api.github.com/repos/huggingface/datasets/issues/1982
| 790,050,134
|
MDEyOklzc3VlQ29tbWVudDc5MDA1MDEzNA==
|
stas00
| 10,676,103
|
MDQ6VXNlcjEwNjc2MTAz
|
https://avatars.githubusercontent.com/u/10676103?v=4
|
https://api.github.com/users/stas00
|
https://github.com/stas00
|
https://api.github.com/users/stas00/followers
|
https://api.github.com/users/stas00/following{/other_user}
|
https://api.github.com/users/stas00/gists{/gist_id}
|
https://api.github.com/users/stas00/starred{/owner}{/repo}
|
https://api.github.com/users/stas00/subscriptions
|
https://api.github.com/users/stas00/orgs
|
https://api.github.com/users/stas00/repos
|
https://api.github.com/users/stas00/events{/privacy}
|
https://api.github.com/users/stas00/received_events
|
User
|
public
| false
| 2021-03-03T20:58:17
| 2021-03-03T20:58:17
|
I validated that this fixed the problem, thank you, @albertvillanova!
|
CONTRIBUTOR
|
https://api.github.com/repos/huggingface/datasets/issues/comments/790050134/reactions
| 1
| 0
| 0
| 0
| 0
| 0
| 1
| 0
| 0
| null |
|
https://api.github.com/repos/huggingface/datasets/issues/comments/790138112
|
https://github.com/huggingface/datasets/issues/1981#issuecomment-790138112
|
https://api.github.com/repos/huggingface/datasets/issues/1981
| 790,138,112
|
MDEyOklzc3VlQ29tbWVudDc5MDEzODExMg==
|
lhoestq
| 42,851,186
|
MDQ6VXNlcjQyODUxMTg2
|
https://avatars.githubusercontent.com/u/42851186?v=4
|
https://api.github.com/users/lhoestq
|
https://github.com/lhoestq
|
https://api.github.com/users/lhoestq/followers
|
https://api.github.com/users/lhoestq/following{/other_user}
|
https://api.github.com/users/lhoestq/gists{/gist_id}
|
https://api.github.com/users/lhoestq/starred{/owner}{/repo}
|
https://api.github.com/users/lhoestq/subscriptions
|
https://api.github.com/users/lhoestq/orgs
|
https://api.github.com/users/lhoestq/repos
|
https://api.github.com/users/lhoestq/events{/privacy}
|
https://api.github.com/users/lhoestq/received_events
|
User
|
public
| false
| 2021-03-03T23:07:39
| 2021-03-03T23:07:39
|
I'll do a patch release for this issue early tomorrow.
And yes, we absolutely need tests for the wmt datasets: the missing tests for wmt are an artifact from the early development of the lib, but now we have tools to automatically generate the dummy data used for tests :)
|
MEMBER
|
https://api.github.com/repos/huggingface/datasets/issues/comments/790138112/reactions
| 1
| 0
| 0
| 0
| 0
| 0
| 1
| 0
| 0
| null |
|
https://api.github.com/repos/huggingface/datasets/issues/comments/790161147
|
https://github.com/huggingface/datasets/pull/1874#issuecomment-790161147
|
https://api.github.com/repos/huggingface/datasets/issues/1874
| 790,161,147
|
MDEyOklzc3VlQ29tbWVudDc5MDE2MTE0Nw==
|
lucadiliello
| 23,355,969
|
MDQ6VXNlcjIzMzU1OTY5
|
https://avatars.githubusercontent.com/u/23355969?v=4
|
https://api.github.com/users/lucadiliello
|
https://github.com/lucadiliello
|
https://api.github.com/users/lucadiliello/followers
|
https://api.github.com/users/lucadiliello/following{/other_user}
|
https://api.github.com/users/lucadiliello/gists{/gist_id}
|
https://api.github.com/users/lucadiliello/starred{/owner}{/repo}
|
https://api.github.com/users/lucadiliello/subscriptions
|
https://api.github.com/users/lucadiliello/orgs
|
https://api.github.com/users/lucadiliello/repos
|
https://api.github.com/users/lucadiliello/events{/privacy}
|
https://api.github.com/users/lucadiliello/received_events
|
User
|
public
| false
| 2021-03-03T23:53:09
| 2021-03-03T23:53:09
|
Is there something else I should do? If not, can this be integrated?
|
CONTRIBUTOR
|
https://api.github.com/repos/huggingface/datasets/issues/comments/790161147/reactions
| 0
| 0
| 0
| 0
| 0
| 0
| 0
| 0
| 0
| null |
|
https://api.github.com/repos/huggingface/datasets/issues/comments/790204103
|
https://github.com/huggingface/datasets/issues/1964#issuecomment-790204103
|
https://api.github.com/repos/huggingface/datasets/issues/1964
| 790,204,103
|
MDEyOklzc3VlQ29tbWVudDc5MDIwNDEwMw==
|
LeopoldACC
| 44,536,699
|
MDQ6VXNlcjQ0NTM2Njk5
|
https://avatars.githubusercontent.com/u/44536699?v=4
|
https://api.github.com/users/LeopoldACC
|
https://github.com/LeopoldACC
|
https://api.github.com/users/LeopoldACC/followers
|
https://api.github.com/users/LeopoldACC/following{/other_user}
|
https://api.github.com/users/LeopoldACC/gists{/gist_id}
|
https://api.github.com/users/LeopoldACC/starred{/owner}{/repo}
|
https://api.github.com/users/LeopoldACC/subscriptions
|
https://api.github.com/users/LeopoldACC/orgs
|
https://api.github.com/users/LeopoldACC/repos
|
https://api.github.com/users/LeopoldACC/events{/privacy}
|
https://api.github.com/users/LeopoldACC/received_events
|
User
|
public
| false
| 2021-03-04T01:11:22
| 2021-03-04T01:11:22
|
I ran the code, and it shows the following:
```
>>> from datasets.utils.info_utils import get_size_checksum_dict
>>> from datasets import cached_path
>>> get_size_checksum_dict(cached_path("https://rajpurkar.github.io/SQuAD-explorer/dataset/train-v1.1.json"))
Downloading: 30.3MB [04:13, 120kB/s]
{'num_bytes': 30288272, 'checksum': '3527663986b8295af4f7fcdff1ba1ff3f72d07d61a20f487cb238a6ef92fd955'}
```
|
NONE
|
https://api.github.com/repos/huggingface/datasets/issues/comments/790204103/reactions
| 0
| 0
| 0
| 0
| 0
| 0
| 0
| 0
| 0
| null |
|
https://api.github.com/repos/huggingface/datasets/issues/comments/790509273
|
https://github.com/huggingface/datasets/pull/1874#issuecomment-790509273
|
https://api.github.com/repos/huggingface/datasets/issues/1874
| 790,509,273
|
MDEyOklzc3VlQ29tbWVudDc5MDUwOTI3Mw==
|
lhoestq
| 42,851,186
|
MDQ6VXNlcjQyODUxMTg2
|
https://avatars.githubusercontent.com/u/42851186?v=4
|
https://api.github.com/users/lhoestq
|
https://github.com/lhoestq
|
https://api.github.com/users/lhoestq/followers
|
https://api.github.com/users/lhoestq/following{/other_user}
|
https://api.github.com/users/lhoestq/gists{/gist_id}
|
https://api.github.com/users/lhoestq/starred{/owner}{/repo}
|
https://api.github.com/users/lhoestq/subscriptions
|
https://api.github.com/users/lhoestq/orgs
|
https://api.github.com/users/lhoestq/repos
|
https://api.github.com/users/lhoestq/events{/privacy}
|
https://api.github.com/users/lhoestq/received_events
|
User
|
public
| false
| 2021-03-04T10:28:25
| 2021-03-04T10:35:52
|
Thanks a lot!!
Since the set of all the dummy data files is quite big, I only kept a few of them. If we had kept them all, the size of the `datasets` repo would have increased too much :/
So I did the same as for `bible_para`: only keep a few configurations in BUILDER_CONFIGS and make all the other pairs loadable with the lang1 and lang2 parameters, like this:
`dataset = load_dataset("europarl_bilingual", lang1="fi", lang2="fr")`
|
MEMBER
|
https://api.github.com/repos/huggingface/datasets/issues/comments/790509273/reactions
| 1
| 1
| 0
| 0
| 0
| 0
| 0
| 0
| 0
| null |
|
https://api.github.com/repos/huggingface/datasets/issues/comments/790578697
|
https://github.com/huggingface/datasets/pull/1978#issuecomment-790578697
|
https://api.github.com/repos/huggingface/datasets/issues/1978
| 790,578,697
|
MDEyOklzc3VlQ29tbWVudDc5MDU3ODY5Nw==
|
lorinczb
| 36,982,089
|
MDQ6VXNlcjM2OTgyMDg5
|
https://avatars.githubusercontent.com/u/36982089?v=4
|
https://api.github.com/users/lorinczb
|
https://github.com/lorinczb
|
https://api.github.com/users/lorinczb/followers
|
https://api.github.com/users/lorinczb/following{/other_user}
|
https://api.github.com/users/lorinczb/gists{/gist_id}
|
https://api.github.com/users/lorinczb/starred{/owner}{/repo}
|
https://api.github.com/users/lorinczb/subscriptions
|
https://api.github.com/users/lorinczb/orgs
|
https://api.github.com/users/lorinczb/repos
|
https://api.github.com/users/lorinczb/events{/privacy}
|
https://api.github.com/users/lorinczb/received_events
|
User
|
public
| false
| 2021-03-04T12:21:11
| 2021-03-04T12:21:11
|
@lhoestq thank you very much for the quick review and useful comments!
I have tried to address them all, and a few comments that you left for ro_sts I have applied to ro_sts_parallel as well (in the README: fixed source_datasets, the links to the homepage, repository, and leaderboard, and the thank-you message; in ro_sts_parallel.py I changed to camel case as well). In ro_sts_parallel I have also changed the order of the languages, including in the example; as you said, order doesn't matter, but this way they are listed in the README in the same order.
I have commented above on why we would like to keep them as separate datasets; I hope it makes sense.
If there is anything else I should change, please let me know.
Thanks again!
|
CONTRIBUTOR
|
https://api.github.com/repos/huggingface/datasets/issues/comments/790578697/reactions
| 0
| 0
| 0
| 0
| 0
| 0
| 0
| 0
| 0
| null |
|
https://api.github.com/repos/huggingface/datasets/issues/comments/790644636
|
https://github.com/huggingface/datasets/pull/1982#issuecomment-790644636
|
https://api.github.com/repos/huggingface/datasets/issues/1982
| 790,644,636
|
MDEyOklzc3VlQ29tbWVudDc5MDY0NDYzNg==
|
sabania
| 32,322,564
|
MDQ6VXNlcjMyMzIyNTY0
|
https://avatars.githubusercontent.com/u/32322564?v=4
|
https://api.github.com/users/sabania
|
https://github.com/sabania
|
https://api.github.com/users/sabania/followers
|
https://api.github.com/users/sabania/following{/other_user}
|
https://api.github.com/users/sabania/gists{/gist_id}
|
https://api.github.com/users/sabania/starred{/owner}{/repo}
|
https://api.github.com/users/sabania/subscriptions
|
https://api.github.com/users/sabania/orgs
|
https://api.github.com/users/sabania/repos
|
https://api.github.com/users/sabania/events{/privacy}
|
https://api.github.com/users/sabania/received_events
|
User
|
public
| false
| 2021-03-04T14:10:04
| 2021-03-04T14:10:04
|
still facing the same issue or a similar one:
```
from datasets import load_dataset
wmt14_test = load_dataset('wmt14', "de-en", cache_dir='./datasets')
```
```
~\.cache\huggingface\modules\datasets_modules\datasets\wmt14\43e717d978d2261502b0194999583acb874ba73b0f4aed0ada2889d1bb00f36e\wmt_utils.py in _split_generators(self, dl_manager)
    758 # Extract manually downloaded files.
    759 manual_files = dl_manager.extract(manual_paths_dict)
--> 760 extraction_map = dict(downloaded_files, **manual_files)
    761
    762 for language in self.config.language_pair:
TypeError: type object argument after ** must be a mapping, not list
```
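The `TypeError` on the last line comes from Python itself: in a call like `dict(a, **b)`, the `**` operator requires `b` to be a mapping. A minimal standalone reproduction (the file names are made up for illustration and are not the actual wmt14 paths):

```python
# Reproduce the TypeError raised by dict(downloaded_files, **manual_files)
# when the second argument is a list instead of a dict.
downloaded_files = {"train": "/tmp/train.sgm"}  # hypothetical value
manual_files = []  # a list, as in the failing code path

try:
    extraction_map = dict(downloaded_files, **manual_files)
except TypeError as e:
    print(type(e).__name__)  # TypeError

# With a mapping instead, the same call succeeds:
extraction_map = dict(downloaded_files, **{})
print(extraction_map)
```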
|
NONE
|
https://api.github.com/repos/huggingface/datasets/issues/comments/790644636/reactions
| 0
| 0
| 0
| 0
| 0
| 0
| 0
| 0
| 0
| null |
|
https://api.github.com/repos/huggingface/datasets/issues/comments/790647832
|
https://github.com/huggingface/datasets/pull/1982#issuecomment-790647832
|
https://api.github.com/repos/huggingface/datasets/issues/1982
| 790,647,832
|
MDEyOklzc3VlQ29tbWVudDc5MDY0NzgzMg==
|
lhoestq
| 42,851,186
|
MDQ6VXNlcjQyODUxMTg2
|
https://avatars.githubusercontent.com/u/42851186?v=4
|
https://api.github.com/users/lhoestq
|
https://github.com/lhoestq
|
https://api.github.com/users/lhoestq/followers
|
https://api.github.com/users/lhoestq/following{/other_user}
|
https://api.github.com/users/lhoestq/gists{/gist_id}
|
https://api.github.com/users/lhoestq/starred{/owner}{/repo}
|
https://api.github.com/users/lhoestq/subscriptions
|
https://api.github.com/users/lhoestq/orgs
|
https://api.github.com/users/lhoestq/repos
|
https://api.github.com/users/lhoestq/events{/privacy}
|
https://api.github.com/users/lhoestq/received_events
|
User
|
public
| false
| 2021-03-04T14:14:57
| 2021-03-04T14:14:57
|
Hi @sabania
We released a patch version that fixes this issue (1.4.1). Can you try with the new version, please?
```
pip install --upgrade datasets
```
|
MEMBER
|
https://api.github.com/repos/huggingface/datasets/issues/comments/790647832/reactions
| 0
| 0
| 0
| 0
| 0
| 0
| 0
| 0
| 0
| null |
|
https://api.github.com/repos/huggingface/datasets/issues/comments/790647873
|
https://github.com/huggingface/datasets/pull/1985#issuecomment-790647873
|
https://api.github.com/repos/huggingface/datasets/issues/1985
| 790,647,873
|
MDEyOklzc3VlQ29tbWVudDc5MDY0Nzg3Mw==
|
albertvillanova
| 8,515,462
|
MDQ6VXNlcjg1MTU0NjI=
|
https://avatars.githubusercontent.com/u/8515462?v=4
|
https://api.github.com/users/albertvillanova
|
https://github.com/albertvillanova
|
https://api.github.com/users/albertvillanova/followers
|
https://api.github.com/users/albertvillanova/following{/other_user}
|
https://api.github.com/users/albertvillanova/gists{/gist_id}
|
https://api.github.com/users/albertvillanova/starred{/owner}{/repo}
|
https://api.github.com/users/albertvillanova/subscriptions
|
https://api.github.com/users/albertvillanova/orgs
|
https://api.github.com/users/albertvillanova/repos
|
https://api.github.com/users/albertvillanova/events{/privacy}
|
https://api.github.com/users/albertvillanova/received_events
|
User
|
public
| false
| 2021-03-04T14:15:02
| 2021-03-04T14:15:02
|
@lhoestq, are the tests OK? Are there other cases I missed? Do you agree with this approach?
|
MEMBER
|
https://api.github.com/repos/huggingface/datasets/issues/comments/790647873/reactions
| 0
| 0
| 0
| 0
| 0
| 0
| 0
| 0
| 0
| null |
|
https://api.github.com/repos/huggingface/datasets/issues/comments/790648987
|
https://github.com/huggingface/datasets/issues/1981#issuecomment-790648987
|
https://api.github.com/repos/huggingface/datasets/issues/1981
| 790,648,987
|
MDEyOklzc3VlQ29tbWVudDc5MDY0ODk4Nw==
|
sabania
| 32,322,564
|
MDQ6VXNlcjMyMzIyNTY0
|
https://avatars.githubusercontent.com/u/32322564?v=4
|
https://api.github.com/users/sabania
|
https://github.com/sabania
|
https://api.github.com/users/sabania/followers
|
https://api.github.com/users/sabania/following{/other_user}
|
https://api.github.com/users/sabania/gists{/gist_id}
|
https://api.github.com/users/sabania/starred{/owner}{/repo}
|
https://api.github.com/users/sabania/subscriptions
|
https://api.github.com/users/sabania/orgs
|
https://api.github.com/users/sabania/repos
|
https://api.github.com/users/sabania/events{/privacy}
|
https://api.github.com/users/sabania/received_events
|
User
|
public
| false
| 2021-03-04T14:16:47
| 2021-03-04T14:16:47
|
still facing the same issue or a similar one:
```
from datasets import load_dataset
wmt14_test = load_dataset('wmt14', "de-en", cache_dir='./datasets')
```
```
~\.cache\huggingface\modules\datasets_modules\datasets\wmt14\43e717d978d2261502b0194999583acb874ba73b0f4aed0ada2889d1bb00f36e\wmt_utils.py in _split_generators(self, dl_manager)
    758 # Extract manually downloaded files.
    759 manual_files = dl_manager.extract(manual_paths_dict)
--> 760 extraction_map = dict(downloaded_files, **manual_files)
    761
    762 for language in self.config.language_pair:
TypeError: type object argument after ** must be a mapping, not list
```
|
NONE
|
https://api.github.com/repos/huggingface/datasets/issues/comments/790648987/reactions
| 0
| 0
| 0
| 0
| 0
| 0
| 0
| 0
| 0
| null |
|
https://api.github.com/repos/huggingface/datasets/issues/comments/790659010
|
https://github.com/huggingface/datasets/issues/1986#issuecomment-790659010
|
https://api.github.com/repos/huggingface/datasets/issues/1986
| 790,659,010
|
MDEyOklzc3VlQ29tbWVudDc5MDY1OTAxMA==
|
sabania
| 32,322,564
|
MDQ6VXNlcjMyMzIyNTY0
|
https://avatars.githubusercontent.com/u/32322564?v=4
|
https://api.github.com/users/sabania
|
https://github.com/sabania
|
https://api.github.com/users/sabania/followers
|
https://api.github.com/users/sabania/following{/other_user}
|
https://api.github.com/users/sabania/gists{/gist_id}
|
https://api.github.com/users/sabania/starred{/owner}{/repo}
|
https://api.github.com/users/sabania/subscriptions
|
https://api.github.com/users/sabania/orgs
|
https://api.github.com/users/sabania/repos
|
https://api.github.com/users/sabania/events{/privacy}
|
https://api.github.com/users/sabania/received_events
|
User
|
public
| false
| 2021-03-04T14:31:07
| 2021-03-04T14:31:07
|
Caching issue; it seems to work again.
|
NONE
|
https://api.github.com/repos/huggingface/datasets/issues/comments/790659010/reactions
| 1
| 1
| 0
| 0
| 0
| 0
| 0
| 0
| 0
| null |
|
https://api.github.com/repos/huggingface/datasets/issues/comments/790686676
|
https://github.com/huggingface/datasets/issues/1964#issuecomment-790686676
|
https://api.github.com/repos/huggingface/datasets/issues/1964
| 790,686,676
|
MDEyOklzc3VlQ29tbWVudDc5MDY4NjY3Ng==
|
lhoestq
| 42,851,186
|
MDQ6VXNlcjQyODUxMTg2
|
https://avatars.githubusercontent.com/u/42851186?v=4
|
https://api.github.com/users/lhoestq
|
https://github.com/lhoestq
|
https://api.github.com/users/lhoestq/followers
|
https://api.github.com/users/lhoestq/following{/other_user}
|
https://api.github.com/users/lhoestq/gists{/gist_id}
|
https://api.github.com/users/lhoestq/starred{/owner}{/repo}
|
https://api.github.com/users/lhoestq/subscriptions
|
https://api.github.com/users/lhoestq/orgs
|
https://api.github.com/users/lhoestq/repos
|
https://api.github.com/users/lhoestq/events{/privacy}
|
https://api.github.com/users/lhoestq/received_events
|
User
|
public
| false
| 2021-03-04T15:09:26
| 2021-03-04T15:09:26
|
Alright! So in this case, redownloading the file with `download_mode="force_redownload"` should fix it. Can you try using `download_mode="force_redownload"` again?
Not sure why it didn't work for you the first time, though :/
|
MEMBER
|
https://api.github.com/repos/huggingface/datasets/issues/comments/790686676/reactions
| 0
| 0
| 0
| 0
| 0
| 0
| 0
| 0
| 0
| null |
|
https://api.github.com/repos/huggingface/datasets/issues/comments/790702332
|
https://github.com/huggingface/datasets/pull/1978#issuecomment-790702332
|
https://api.github.com/repos/huggingface/datasets/issues/1978
| 790,702,332
|
MDEyOklzc3VlQ29tbWVudDc5MDcwMjMzMg==
|
lorinczb
| 36,982,089
|
MDQ6VXNlcjM2OTgyMDg5
|
https://avatars.githubusercontent.com/u/36982089?v=4
|
https://api.github.com/users/lorinczb
|
https://github.com/lorinczb
|
https://api.github.com/users/lorinczb/followers
|
https://api.github.com/users/lorinczb/following{/other_user}
|
https://api.github.com/users/lorinczb/gists{/gist_id}
|
https://api.github.com/users/lorinczb/starred{/owner}{/repo}
|
https://api.github.com/users/lorinczb/subscriptions
|
https://api.github.com/users/lorinczb/orgs
|
https://api.github.com/users/lorinczb/repos
|
https://api.github.com/users/lorinczb/events{/privacy}
|
https://api.github.com/users/lorinczb/received_events
|
User
|
public
| false
| 2021-03-04T15:30:53
| 2021-03-04T15:30:53
|
@lhoestq I tried to adjust ro_sts_parallel. Locally, the tests pass, but the old name `rosts-parallel-ro-en` (which I am trying to change to `ro_sts_parallel`) still remains somewhere. I don't think I have left anything related to `rosts-parallel-ro-en`, but when the dataset_infos.json is regenerated, it adds the old name back. Could you please help me out with how I can fix this? Thanks in advance!
|
CONTRIBUTOR
|
https://api.github.com/repos/huggingface/datasets/issues/comments/790702332/reactions
| 0
| 0
| 0
| 0
| 0
| 0
| 0
| 0
| 0
| null |
|
https://api.github.com/repos/huggingface/datasets/issues/comments/790731836
|
https://github.com/huggingface/datasets/issues/1983#issuecomment-790731836
|
https://api.github.com/repos/huggingface/datasets/issues/1983
| 790,731,836
|
MDEyOklzc3VlQ29tbWVudDc5MDczMTgzNg==
|
mariosasko
| 47,462,742
|
MDQ6VXNlcjQ3NDYyNzQy
|
https://avatars.githubusercontent.com/u/47462742?v=4
|
https://api.github.com/users/mariosasko
|
https://github.com/mariosasko
|
https://api.github.com/users/mariosasko/followers
|
https://api.github.com/users/mariosasko/following{/other_user}
|
https://api.github.com/users/mariosasko/gists{/gist_id}
|
https://api.github.com/users/mariosasko/starred{/owner}{/repo}
|
https://api.github.com/users/mariosasko/subscriptions
|
https://api.github.com/users/mariosasko/orgs
|
https://api.github.com/users/mariosasko/repos
|
https://api.github.com/users/mariosasko/events{/privacy}
|
https://api.github.com/users/mariosasko/received_events
|
User
|
public
| false
| 2021-03-04T16:10:07
| 2021-03-04T16:10:07
|
Hi,
if you inspect the raw data, you will find that there are 946 occurrences of `-DOCSTART- -X- -X- O` in the train split, and `14041 + 946 = 14987`, which is exactly the number of sentences the authors report. `-DOCSTART-` is a special line that acts as a boundary between two different documents and is filtered out in our implementation.
@lhoestq What do you think about including these lines? ([Link](https://github.com/flairNLP/flair/issues/1097) to a similar issue in the flairNLP repo)
|
COLLABORATOR
|
https://api.github.com/repos/huggingface/datasets/issues/comments/790731836/reactions
| 1
| 1
| 0
| 0
| 0
| 0
| 0
| 0
| 0
| null |
|
https://api.github.com/repos/huggingface/datasets/issues/comments/790757599
|
https://github.com/huggingface/datasets/pull/1982#issuecomment-790757599
|
https://api.github.com/repos/huggingface/datasets/issues/1982
| 790,757,599
|
MDEyOklzc3VlQ29tbWVudDc5MDc1NzU5OQ==
|
stas00
| 10,676,103
|
MDQ6VXNlcjEwNjc2MTAz
|
https://avatars.githubusercontent.com/u/10676103?v=4
|
https://api.github.com/users/stas00
|
https://github.com/stas00
|
https://api.github.com/users/stas00/followers
|
https://api.github.com/users/stas00/following{/other_user}
|
https://api.github.com/users/stas00/gists{/gist_id}
|
https://api.github.com/users/stas00/starred{/owner}{/repo}
|
https://api.github.com/users/stas00/subscriptions
|
https://api.github.com/users/stas00/orgs
|
https://api.github.com/users/stas00/repos
|
https://api.github.com/users/stas00/events{/privacy}
|
https://api.github.com/users/stas00/received_events
|
User
|
public
| false
| 2021-03-04T16:44:45
| 2021-03-04T16:44:45
|
I re-validated with the hotfix and the problem is no more.
|
CONTRIBUTOR
|
https://api.github.com/repos/huggingface/datasets/issues/comments/790757599/reactions
| 0
| 0
| 0
| 0
| 0
| 0
| 0
| 0
| 0
| null |
|
https://api.github.com/repos/huggingface/datasets/issues/comments/790758526
|
https://github.com/huggingface/datasets/pull/1982#issuecomment-790758526
|
https://api.github.com/repos/huggingface/datasets/issues/1982
| 790,758,526
|
MDEyOklzc3VlQ29tbWVudDc5MDc1ODUyNg==
|
sabania
| 32,322,564
|
MDQ6VXNlcjMyMzIyNTY0
|
https://avatars.githubusercontent.com/u/32322564?v=4
|
https://api.github.com/users/sabania
|
https://github.com/sabania
|
https://api.github.com/users/sabania/followers
|
https://api.github.com/users/sabania/following{/other_user}
|
https://api.github.com/users/sabania/gists{/gist_id}
|
https://api.github.com/users/sabania/starred{/owner}{/repo}
|
https://api.github.com/users/sabania/subscriptions
|
https://api.github.com/users/sabania/orgs
|
https://api.github.com/users/sabania/repos
|
https://api.github.com/users/sabania/events{/privacy}
|
https://api.github.com/users/sabania/received_events
|
User
|
public
| false
| 2021-03-04T16:46:04
| 2021-03-04T16:46:04
|
It's working. Thanks a lot.
|
NONE
|
https://api.github.com/repos/huggingface/datasets/issues/comments/790758526/reactions
| 0
| 0
| 0
| 0
| 0
| 0
| 0
| 0
| 0
| null |
|
https://api.github.com/repos/huggingface/datasets/issues/comments/790773724
|
https://github.com/huggingface/datasets/issues/1877#issuecomment-790773724
|
https://api.github.com/repos/huggingface/datasets/issues/1877
| 790,773,724
|
MDEyOklzc3VlQ29tbWVudDc5MDc3MzcyNA==
|
lhoestq
| 42,851,186
|
MDQ6VXNlcjQyODUxMTg2
|
https://avatars.githubusercontent.com/u/42851186?v=4
|
https://api.github.com/users/lhoestq
|
https://github.com/lhoestq
|
https://api.github.com/users/lhoestq/followers
|
https://api.github.com/users/lhoestq/following{/other_user}
|
https://api.github.com/users/lhoestq/gists{/gist_id}
|
https://api.github.com/users/lhoestq/starred{/owner}{/repo}
|
https://api.github.com/users/lhoestq/subscriptions
|
https://api.github.com/users/lhoestq/orgs
|
https://api.github.com/users/lhoestq/repos
|
https://api.github.com/users/lhoestq/events{/privacy}
|
https://api.github.com/users/lhoestq/received_events
|
User
|
public
| false
| 2021-03-04T17:07:12
| 2021-03-04T17:07:12
|
I started working on this. My idea is to first add the pyarrow Table wrappers InMemoryTable and MemoryMappedTable that both implement what's necessary regarding copy/pickle. Then have another wrapper that takes the concatenation of InMemoryTable/MemoryMappedTable objects.
What's important here is that concatenating two tables into one doesn't double the memory used (`total_allocated_bytes()` stays the same).
|
MEMBER
|
https://api.github.com/repos/huggingface/datasets/issues/comments/790773724/reactions
| 1
| 1
| 0
| 0
| 0
| 0
| 0
| 0
| 0
| null |
|
https://api.github.com/repos/huggingface/datasets/issues/comments/790786409
|
https://github.com/huggingface/datasets/issues/1983#issuecomment-790786409
|
https://api.github.com/repos/huggingface/datasets/issues/1983
| 790,786,409
|
MDEyOklzc3VlQ29tbWVudDc5MDc4NjQwOQ==
|
lhoestq
| 42,851,186
|
MDQ6VXNlcjQyODUxMTg2
|
https://avatars.githubusercontent.com/u/42851186?v=4
|
https://api.github.com/users/lhoestq
|
https://github.com/lhoestq
|
https://api.github.com/users/lhoestq/followers
|
https://api.github.com/users/lhoestq/following{/other_user}
|
https://api.github.com/users/lhoestq/gists{/gist_id}
|
https://api.github.com/users/lhoestq/starred{/owner}{/repo}
|
https://api.github.com/users/lhoestq/subscriptions
|
https://api.github.com/users/lhoestq/orgs
|
https://api.github.com/users/lhoestq/repos
|
https://api.github.com/users/lhoestq/events{/privacy}
|
https://api.github.com/users/lhoestq/received_events
|
User
|
public
| false
| 2021-03-04T17:21:50
| 2021-03-04T17:22:19
|
We should mention in the Conll2003 dataset card that these lines have been removed indeed.
If some users are interested in using these lines (maybe to recombine documents ?) then we can add a parameter to the conll2003 dataset to include them.
But IMO the default config should stay the current one (without the `-DOCSTART-` stuff), so that you can directly train NER models without additional preprocessing. Let me know what you think
|
MEMBER
|
https://api.github.com/repos/huggingface/datasets/issues/comments/790786409/reactions
| 0
| 0
| 0
| 0
| 0
| 0
| 0
| 0
| 0
| null |
|
https://api.github.com/repos/huggingface/datasets/issues/comments/790829065
|
https://github.com/huggingface/datasets/issues/1990#issuecomment-790829065
|
https://api.github.com/repos/huggingface/datasets/issues/1990
| 790,829,065
|
MDEyOklzc3VlQ29tbWVudDc5MDgyOTA2NQ==
|
dorost1234
| 79,165,106
|
MDQ6VXNlcjc5MTY1MTA2
|
https://avatars.githubusercontent.com/u/79165106?v=4
|
https://api.github.com/users/dorost1234
|
https://github.com/dorost1234
|
https://api.github.com/users/dorost1234/followers
|
https://api.github.com/users/dorost1234/following{/other_user}
|
https://api.github.com/users/dorost1234/gists{/gist_id}
|
https://api.github.com/users/dorost1234/starred{/owner}{/repo}
|
https://api.github.com/users/dorost1234/subscriptions
|
https://api.github.com/users/dorost1234/orgs
|
https://api.github.com/users/dorost1234/repos
|
https://api.github.com/users/dorost1234/events{/privacy}
|
https://api.github.com/users/dorost1234/received_events
|
User
|
public
| false
| 2021-03-04T18:24:55
| 2021-03-04T18:38:11
|
Do you think this is trying to bring the whole dataset into memory? If so, can I avoid that, so that only a batch at a time is brought into memory, to save on memory? @lhoestq thank you
|
NONE
|
https://api.github.com/repos/huggingface/datasets/issues/comments/790829065/reactions
| 0
| 0
| 0
| 0
| 0
| 0
| 0
| 0
| 0
| null |
|
https://api.github.com/repos/huggingface/datasets/issues/comments/790842771
|
https://github.com/huggingface/datasets/issues/1989#issuecomment-790842771
|
https://api.github.com/repos/huggingface/datasets/issues/1989
| 790,842,771
|
MDEyOklzc3VlQ29tbWVudDc5MDg0Mjc3MQ==
|
ioana-blue
| 17,202,292
|
MDQ6VXNlcjE3MjAyMjky
|
https://avatars.githubusercontent.com/u/17202292?v=4
|
https://api.github.com/users/ioana-blue
|
https://github.com/ioana-blue
|
https://api.github.com/users/ioana-blue/followers
|
https://api.github.com/users/ioana-blue/following{/other_user}
|
https://api.github.com/users/ioana-blue/gists{/gist_id}
|
https://api.github.com/users/ioana-blue/starred{/owner}{/repo}
|
https://api.github.com/users/ioana-blue/subscriptions
|
https://api.github.com/users/ioana-blue/orgs
|
https://api.github.com/users/ioana-blue/repos
|
https://api.github.com/users/ioana-blue/events{/privacy}
|
https://api.github.com/users/ioana-blue/received_events
|
User
|
public
| false
| 2021-03-04T18:44:57
| 2021-03-04T18:44:57
|
It seems that I get parsing errors for various fields in my data. For example now I get this:
```
File "../../../models/tr-4.3.2/run_puppets.py", line 523, in <module>
main()
File "../../../models/tr-4.3.2/run_puppets.py", line 249, in main
datasets = load_dataset("csv", data_files=data_files)
File "/dccstor/redrug_ier/envs/last-tr/lib/python3.8/site-packages/datasets/load.py", line 740, in load_dataset
builder_instance.download_and_prepare(
File "/dccstor/redrug_ier/envs/last-tr/lib/python3.8/site-packages/datasets/builder.py", line 572, in download_and_prepare
self._download_and_prepare(
File "/dccstor/redrug_ier/envs/last-tr/lib/python3.8/site-packages/datasets/builder.py", line 650, in _download_and_prepare
self._prepare_split(split_generator, **prepare_split_kwargs)
File "/dccstor/redrug_ier/envs/last-tr/lib/python3.8/site-packages/datasets/builder.py", line 1028, in _prepare_split
writer.write_table(table)
File "/dccstor/redrug_ier/envs/last-tr/lib/python3.8/site-packages/datasets/arrow_writer.py", line 292, in write_table
pa_table = pa_table.cast(self._schema)
File "pyarrow/table.pxi", line 1311, in pyarrow.lib.Table.cast
File "pyarrow/table.pxi", line 265, in pyarrow.lib.ChunkedArray.cast
File "/dccstor/redrug_ier/envs/last-tr/lib/python3.8/site-packages/pyarrow/compute.py", line 87, in cast
return call_function("cast", [arr], options)
File "pyarrow/_compute.pyx", line 298, in pyarrow._compute.call_function
File "pyarrow/_compute.pyx", line 192, in pyarrow._compute.Function.call
File "pyarrow/error.pxi", line 122, in pyarrow.lib.pyarrow_internal_check_status
File "pyarrow/error.pxi", line 84, in pyarrow.lib.check_status
pyarrow.lib.ArrowInvalid: Failed to parse string: https://www.netgalley.com/catalog/book/121872
```
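The `ArrowInvalid: Failed to parse string` at the bottom usually means the CSV reader inferred a numeric type for a column from its first values and then hit a non-numeric value (here a URL) further down the file. A stdlib-only sketch of that failure mode (the column values other than the URL from the traceback are invented, and this is not pyarrow's actual inference code):

```python
import csv
import io

rows = "id,link\n1,12345\n2,67890\n3,https://www.netgalley.com/catalog/book/121872\n"

reader = csv.DictReader(io.StringIO(rows))
values = [r["link"] for r in reader]

# Naive inference: the first values all parse as integers, so assume int...
parsed, errors = [], []
for v in values:
    try:
        parsed.append(int(v))
    except ValueError:
        errors.append(v)  # ...until a URL shows up and the cast fails

print(parsed)
print(errors)
```

If that is the cause, forcing the offending column to be read as a string (e.g. via an explicit schema) would be the usual workaround.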
|
NONE
|
https://api.github.com/repos/huggingface/datasets/issues/comments/790842771/reactions
| 0
| 0
| 0
| 0
| 0
| 0
| 0
| 0
| 0
| null |
|
https://api.github.com/repos/huggingface/datasets/issues/comments/790860788
|
https://github.com/huggingface/datasets/issues/1989#issuecomment-790860788
|
https://api.github.com/repos/huggingface/datasets/issues/1989
| 790,860,788
|
MDEyOklzc3VlQ29tbWVudDc5MDg2MDc4OA==
|
ioana-blue
| 17,202,292
|
MDQ6VXNlcjE3MjAyMjky
|
https://avatars.githubusercontent.com/u/17202292?v=4
|
https://api.github.com/users/ioana-blue
|
https://github.com/ioana-blue
|
https://api.github.com/users/ioana-blue/followers
|
https://api.github.com/users/ioana-blue/following{/other_user}
|
https://api.github.com/users/ioana-blue/gists{/gist_id}
|
https://api.github.com/users/ioana-blue/starred{/owner}{/repo}
|
https://api.github.com/users/ioana-blue/subscriptions
|
https://api.github.com/users/ioana-blue/orgs
|
https://api.github.com/users/ioana-blue/repos
|
https://api.github.com/users/ioana-blue/events{/privacy}
|
https://api.github.com/users/ioana-blue/received_events
|
User
|
public
| false
| 2021-03-04T19:13:02
| 2021-03-04T19:13:02
|
Not sure if this helps; this is how I load my files (as in the sample scripts in transformers):
```
if data_args.train_file.endswith(".csv"):
# Loading a dataset from local csv files
datasets = load_dataset("csv", data_files=data_files)
```
|
NONE
|
https://api.github.com/repos/huggingface/datasets/issues/comments/790860788/reactions
| 0
| 0
| 0
| 0
| 0
| 0
| 0
| 0
| 0
| null |
|
https://api.github.com/repos/huggingface/datasets/issues/comments/790862303
|
https://github.com/huggingface/datasets/issues/1989#issuecomment-790862303
|
https://api.github.com/repos/huggingface/datasets/issues/1989
| 790,862,303
|
MDEyOklzc3VlQ29tbWVudDc5MDg2MjMwMw==
|
ioana-blue
| 17,202,292
|
MDQ6VXNlcjE3MjAyMjky
|
https://avatars.githubusercontent.com/u/17202292?v=4
|
https://api.github.com/users/ioana-blue
|
https://github.com/ioana-blue
|
https://api.github.com/users/ioana-blue/followers
|
https://api.github.com/users/ioana-blue/following{/other_user}
|
https://api.github.com/users/ioana-blue/gists{/gist_id}
|
https://api.github.com/users/ioana-blue/starred{/owner}{/repo}
|
https://api.github.com/users/ioana-blue/subscriptions
|
https://api.github.com/users/ioana-blue/orgs
|
https://api.github.com/users/ioana-blue/repos
|
https://api.github.com/users/ioana-blue/events{/privacy}
|
https://api.github.com/users/ioana-blue/received_events
|
User
|
public
| false
| 2021-03-04T19:15:36
| 2021-03-04T19:15:36
|
Since this worked out of the box in a few examples before, I wonder if it's some quoting issue or something else.
|
NONE
|
https://api.github.com/repos/huggingface/datasets/issues/comments/790862303/reactions
| 0
| 0
| 0
| 0
| 0
| 0
| 0
| 0
| 0
| null |
|
https://api.github.com/repos/huggingface/datasets/issues/comments/790878778
|
https://github.com/huggingface/datasets/issues/1965#issuecomment-790878778
|
https://api.github.com/repos/huggingface/datasets/issues/1965
| 790,878,778
|
MDEyOklzc3VlQ29tbWVudDc5MDg3ODc3OA==
|
shamanez
| 16,892,570
|
MDQ6VXNlcjE2ODkyNTcw
|
https://avatars.githubusercontent.com/u/16892570?v=4
|
https://api.github.com/users/shamanez
|
https://github.com/shamanez
|
https://api.github.com/users/shamanez/followers
|
https://api.github.com/users/shamanez/following{/other_user}
|
https://api.github.com/users/shamanez/gists{/gist_id}
|
https://api.github.com/users/shamanez/starred{/owner}{/repo}
|
https://api.github.com/users/shamanez/subscriptions
|
https://api.github.com/users/shamanez/orgs
|
https://api.github.com/users/shamanez/repos
|
https://api.github.com/users/shamanez/events{/privacy}
|
https://api.github.com/users/shamanez/received_events
|
User
|
public
| false
| 2021-03-04T19:40:42
| 2021-03-04T19:40:56
|
@lhoestq As you mentioned, faiss is already using multiprocessing. I tried to do the `add_index` with faiss for a dataset object inside a Ray actor and the process became very slow... in fact it takes a very long time. This is because a Ray actor comes with a single CPU core unless we assign it more. I also tried assigning more cores, but still, running `add_index` in the main process is much faster.
|
NONE
|
https://api.github.com/repos/huggingface/datasets/issues/comments/790878778/reactions
| 0
| 0
| 0
| 0
| 0
| 0
| 0
| 0
| 0
| null |
|
https://api.github.com/repos/huggingface/datasets/issues/comments/791123097
|
https://github.com/huggingface/datasets/issues/1989#issuecomment-791123097
|
https://api.github.com/repos/huggingface/datasets/issues/1989
| 791,123,097
|
MDEyOklzc3VlQ29tbWVudDc5MTEyMzA5Nw==
|
gchhablani
| 29,076,344
|
MDQ6VXNlcjI5MDc2MzQ0
|
https://avatars.githubusercontent.com/u/29076344?v=4
|
https://api.github.com/users/gchhablani
|
https://github.com/gchhablani
|
https://api.github.com/users/gchhablani/followers
|
https://api.github.com/users/gchhablani/following{/other_user}
|
https://api.github.com/users/gchhablani/gists{/gist_id}
|
https://api.github.com/users/gchhablani/starred{/owner}{/repo}
|
https://api.github.com/users/gchhablani/subscriptions
|
https://api.github.com/users/gchhablani/orgs
|
https://api.github.com/users/gchhablani/repos
|
https://api.github.com/users/gchhablani/events{/privacy}
|
https://api.github.com/users/gchhablani/received_events
|
User
|
public
| false
| 2021-03-05T03:23:16
| 2021-03-05T03:24:22
|
Hi @ioana-blue,
Can you share a sample from your .csv? A dummy where you get this error will also help.
I tried this csv:
```csv
feature,label
1.2,not nurse
1.3,nurse
1.5,surgeon
```
and the following snippet:
```python
from datasets import load_dataset
d = load_dataset("csv",data_files=['test.csv'])
print(d)
print(d['train']['label'])
```
and this works perfectly fine for me:
```sh
DatasetDict({
train: Dataset({
features: ['feature', 'label'],
num_rows: 3
})
})
['not nurse', 'nurse', 'surgeon']
```
I'm sure your csv is more complicated than this one. But it is hard to tell where the issue might be without looking at a sample.
|
CONTRIBUTOR
|
https://api.github.com/repos/huggingface/datasets/issues/comments/791123097/reactions
| 0
| 0
| 0
| 0
| 0
| 0
| 0
| 0
| 0
| null |
|
https://api.github.com/repos/huggingface/datasets/issues/comments/791124517
|
https://github.com/huggingface/datasets/issues/1992#issuecomment-791124517
|
https://api.github.com/repos/huggingface/datasets/issues/1992
| 791,124,517
|
MDEyOklzc3VlQ29tbWVudDc5MTEyNDUxNw==
|
gchhablani
| 29,076,344
|
MDQ6VXNlcjI5MDc2MzQ0
|
https://avatars.githubusercontent.com/u/29076344?v=4
|
https://api.github.com/users/gchhablani
|
https://github.com/gchhablani
|
https://api.github.com/users/gchhablani/followers
|
https://api.github.com/users/gchhablani/following{/other_user}
|
https://api.github.com/users/gchhablani/gists{/gist_id}
|
https://api.github.com/users/gchhablani/starred{/owner}{/repo}
|
https://api.github.com/users/gchhablani/subscriptions
|
https://api.github.com/users/gchhablani/orgs
|
https://api.github.com/users/gchhablani/repos
|
https://api.github.com/users/gchhablani/events{/privacy}
|
https://api.github.com/users/gchhablani/received_events
|
User
|
public
| false
| 2021-03-05T03:27:51
| 2021-03-05T03:27:51
|
Hi @hwijeen, you might want to look at issues #1796 and #1949. I think it could be something related to the I/O operations being performed.
|
CONTRIBUTOR
|
https://api.github.com/repos/huggingface/datasets/issues/comments/791124517/reactions
| 0
| 0
| 0
| 0
| 0
| 0
| 0
| 0
| 0
| null |
|
https://api.github.com/repos/huggingface/datasets/issues/comments/791134677
|
https://github.com/huggingface/datasets/issues/1877#issuecomment-791134677
|
https://api.github.com/repos/huggingface/datasets/issues/1877
| 791,134,677
|
MDEyOklzc3VlQ29tbWVudDc5MTEzNDY3Nw==
|
gchhablani
| 29,076,344
|
MDQ6VXNlcjI5MDc2MzQ0
|
https://avatars.githubusercontent.com/u/29076344?v=4
|
https://api.github.com/users/gchhablani
|
https://github.com/gchhablani
|
https://api.github.com/users/gchhablani/followers
|
https://api.github.com/users/gchhablani/following{/other_user}
|
https://api.github.com/users/gchhablani/gists{/gist_id}
|
https://api.github.com/users/gchhablani/starred{/owner}{/repo}
|
https://api.github.com/users/gchhablani/subscriptions
|
https://api.github.com/users/gchhablani/orgs
|
https://api.github.com/users/gchhablani/repos
|
https://api.github.com/users/gchhablani/events{/privacy}
|
https://api.github.com/users/gchhablani/received_events
|
User
|
public
| false
| 2021-03-05T03:58:33
| 2021-03-05T04:03:06
|
Hi @lhoestq @albertvillanova,
I checked the linked issues and PR, this seems like a great idea. Would you mind elaborating on the in-memory and memory-mapped datasets?
Based on my understanding, it is something like this, please correct me if I am wrong:
1. For in-memory datasets, we don't have any dataset files, so the entire dataset is pickled to the cache during loading and then unpickled whenever required.
2. For on-disk/memory-mapped datasets, we have the data files provided, so they can be re-loaded from their paths, and only the file paths are stored while pickling.
If this is correct, will the feature also handle pickling/unpickling of a concatenated dataset? Will this be cached?
This also leads me to ask whether datasets are chunked during pickling?
Thanks,
Gunjan
|
CONTRIBUTOR
|
https://api.github.com/repos/huggingface/datasets/issues/comments/791134677/reactions
| 0
| 0
| 0
| 0
| 0
| 0
| 0
| 0
| 0
| null |
|
https://api.github.com/repos/huggingface/datasets/issues/comments/791137805
|
https://github.com/huggingface/datasets/pull/1529#issuecomment-791137805
|
https://api.github.com/repos/huggingface/datasets/issues/1529
| 791,137,805
|
MDEyOklzc3VlQ29tbWVudDc5MTEzNzgwNQ==
|
gchhablani
| 29,076,344
|
MDQ6VXNlcjI5MDc2MzQ0
|
https://avatars.githubusercontent.com/u/29076344?v=4
|
https://api.github.com/users/gchhablani
|
https://github.com/gchhablani
|
https://api.github.com/users/gchhablani/followers
|
https://api.github.com/users/gchhablani/following{/other_user}
|
https://api.github.com/users/gchhablani/gists{/gist_id}
|
https://api.github.com/users/gchhablani/starred{/owner}{/repo}
|
https://api.github.com/users/gchhablani/subscriptions
|
https://api.github.com/users/gchhablani/orgs
|
https://api.github.com/users/gchhablani/repos
|
https://api.github.com/users/gchhablani/events{/privacy}
|
https://api.github.com/users/gchhablani/received_events
|
User
|
public
| false
| 2021-03-05T04:07:39
| 2021-03-05T04:07:39
|
Hi @lhoestq @SBrandeis @iliemihai
Is this still in progress or can I take over this one?
Thanks,
Gunjan
|
CONTRIBUTOR
|
https://api.github.com/repos/huggingface/datasets/issues/comments/791137805/reactions
| 0
| 0
| 0
| 0
| 0
| 0
| 0
| 0
| 0
| null |
|
https://api.github.com/repos/huggingface/datasets/issues/comments/791158218
|
https://github.com/huggingface/datasets/issues/1939#issuecomment-791158218
|
https://api.github.com/repos/huggingface/datasets/issues/1939
| 791,158,218
|
MDEyOklzc3VlQ29tbWVudDc5MTE1ODIxOA==
|
stas00
| 10,676,103
|
MDQ6VXNlcjEwNjc2MTAz
|
https://avatars.githubusercontent.com/u/10676103?v=4
|
https://api.github.com/users/stas00
|
https://github.com/stas00
|
https://api.github.com/users/stas00/followers
|
https://api.github.com/users/stas00/following{/other_user}
|
https://api.github.com/users/stas00/gists{/gist_id}
|
https://api.github.com/users/stas00/starred{/owner}{/repo}
|
https://api.github.com/users/stas00/subscriptions
|
https://api.github.com/users/stas00/orgs
|
https://api.github.com/users/stas00/repos
|
https://api.github.com/users/stas00/events{/privacy}
|
https://api.github.com/users/stas00/received_events
|
User
|
public
| false
| 2021-03-05T05:09:40
| 2021-03-05T05:09:40
|
I lost access to the firewalled setup, but I emulated it with:
```
sudo ufw enable
sudo ufw default deny outgoing
```
(thanks @mfuntowicz)
I was able to test `HF_DATASETS_OFFLINE=1` and it worked great - i.e. it didn't try to reach the network and used the cached files instead.
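For anyone curious, here is a minimal sketch of how an offline switch like `HF_DATASETS_OFFLINE` is typically read from the environment. This is illustrative only, not the actual `datasets` source; the helper name `is_offline_mode` is hypothetical.

```python
import os

# Hypothetical helper: treat common truthy values of the env var as "offline".
def is_offline_mode() -> bool:
    return os.environ.get("HF_DATASETS_OFFLINE", "0").upper() in ("1", "TRUE", "YES")

os.environ["HF_DATASETS_OFFLINE"] = "1"
print(is_offline_mode())  # True
```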
Thank you!
|
CONTRIBUTOR
|
https://api.github.com/repos/huggingface/datasets/issues/comments/791158218/reactions
| 0
| 0
| 0
| 0
| 0
| 0
| 0
| 0
| 0
| null |
|
https://api.github.com/repos/huggingface/datasets/issues/comments/791257070
|
https://github.com/huggingface/datasets/issues/1994#issuecomment-791257070
|
https://api.github.com/repos/huggingface/datasets/issues/1994
| 791,257,070
|
MDEyOklzc3VlQ29tbWVudDc5MTI1NzA3MA==
|
dorost1234
| 79,165,106
|
MDQ6VXNlcjc5MTY1MTA2
|
https://avatars.githubusercontent.com/u/79165106?v=4
|
https://api.github.com/users/dorost1234
|
https://github.com/dorost1234
|
https://api.github.com/users/dorost1234/followers
|
https://api.github.com/users/dorost1234/following{/other_user}
|
https://api.github.com/users/dorost1234/gists{/gist_id}
|
https://api.github.com/users/dorost1234/starred{/owner}{/repo}
|
https://api.github.com/users/dorost1234/subscriptions
|
https://api.github.com/users/dorost1234/orgs
|
https://api.github.com/users/dorost1234/repos
|
https://api.github.com/users/dorost1234/events{/privacy}
|
https://api.github.com/users/dorost1234/received_events
|
User
|
public
| false
| 2021-03-05T08:34:42
| 2021-03-05T08:45:02
|
@lhoestq I would really appreciate it if you could help me by providing the processed datasets. I do not have access to enough resources to run Apache Beam, and I need to run my code on these datasets. Only en/de/fr currently work, but I need more or less all the languages. Thanks
|
NONE
|
https://api.github.com/repos/huggingface/datasets/issues/comments/791257070/reactions
| 0
| 0
| 0
| 0
| 0
| 0
| 0
| 0
| 0
| null |
|
https://api.github.com/repos/huggingface/datasets/issues/comments/791262950
|
https://github.com/huggingface/datasets/pull/1995#issuecomment-791262950
|
https://api.github.com/repos/huggingface/datasets/issues/1995
| 791,262,950
|
MDEyOklzc3VlQ29tbWVudDc5MTI2Mjk1MA==
|
patrickvonplaten
| 23,423,619
|
MDQ6VXNlcjIzNDIzNjE5
|
https://avatars.githubusercontent.com/u/23423619?v=4
|
https://api.github.com/users/patrickvonplaten
|
https://github.com/patrickvonplaten
|
https://api.github.com/users/patrickvonplaten/followers
|
https://api.github.com/users/patrickvonplaten/following{/other_user}
|
https://api.github.com/users/patrickvonplaten/gists{/gist_id}
|
https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}
|
https://api.github.com/users/patrickvonplaten/subscriptions
|
https://api.github.com/users/patrickvonplaten/orgs
|
https://api.github.com/users/patrickvonplaten/repos
|
https://api.github.com/users/patrickvonplaten/events{/privacy}
|
https://api.github.com/users/patrickvonplaten/received_events
|
User
|
public
| false
| 2021-03-05T08:44:19
| 2021-03-05T08:44:19
|
cc @lhoestq @vrindaprabhu
|
CONTRIBUTOR
|
https://api.github.com/repos/huggingface/datasets/issues/comments/791262950/reactions
| 0
| 0
| 0
| 0
| 0
| 0
| 0
| 0
| 0
| null |
|
https://api.github.com/repos/huggingface/datasets/issues/comments/791271371
|
https://github.com/huggingface/datasets/pull/1995#issuecomment-791271371
|
https://api.github.com/repos/huggingface/datasets/issues/1995
| 791,271,371
|
MDEyOklzc3VlQ29tbWVudDc5MTI3MTM3MQ==
|
patrickvonplaten
| 23,423,619
|
MDQ6VXNlcjIzNDIzNjE5
|
https://avatars.githubusercontent.com/u/23423619?v=4
|
https://api.github.com/users/patrickvonplaten
|
https://github.com/patrickvonplaten
|
https://api.github.com/users/patrickvonplaten/followers
|
https://api.github.com/users/patrickvonplaten/following{/other_user}
|
https://api.github.com/users/patrickvonplaten/gists{/gist_id}
|
https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}
|
https://api.github.com/users/patrickvonplaten/subscriptions
|
https://api.github.com/users/patrickvonplaten/orgs
|
https://api.github.com/users/patrickvonplaten/repos
|
https://api.github.com/users/patrickvonplaten/events{/privacy}
|
https://api.github.com/users/patrickvonplaten/received_events
|
User
|
public
| false
| 2021-03-05T08:57:17
| 2021-03-05T08:57:17
|
Failing `run (push)` is unrelated -> merging
|
CONTRIBUTOR
|
https://api.github.com/repos/huggingface/datasets/issues/comments/791271371/reactions
| 0
| 0
| 0
| 0
| 0
| 0
| 0
| 0
| 0
| null |
|
https://api.github.com/repos/huggingface/datasets/issues/comments/791312423
|
https://github.com/huggingface/datasets/pull/1978#issuecomment-791312423
|
https://api.github.com/repos/huggingface/datasets/issues/1978
| 791,312,423
|
MDEyOklzc3VlQ29tbWVudDc5MTMxMjQyMw==
|
lorinczb
| 36,982,089
|
MDQ6VXNlcjM2OTgyMDg5
|
https://avatars.githubusercontent.com/u/36982089?v=4
|
https://api.github.com/users/lorinczb
|
https://github.com/lorinczb
|
https://api.github.com/users/lorinczb/followers
|
https://api.github.com/users/lorinczb/following{/other_user}
|
https://api.github.com/users/lorinczb/gists{/gist_id}
|
https://api.github.com/users/lorinczb/starred{/owner}{/repo}
|
https://api.github.com/users/lorinczb/subscriptions
|
https://api.github.com/users/lorinczb/orgs
|
https://api.github.com/users/lorinczb/repos
|
https://api.github.com/users/lorinczb/events{/privacy}
|
https://api.github.com/users/lorinczb/received_events
|
User
|
public
| false
| 2021-03-05T10:00:14
| 2021-03-05T10:00:14
|
Great, thanks for all your help!
|
CONTRIBUTOR
|
https://api.github.com/repos/huggingface/datasets/issues/comments/791312423/reactions
| 0
| 0
| 0
| 0
| 0
| 0
| 0
| 0
| 0
| null |
|
https://api.github.com/repos/huggingface/datasets/issues/comments/791321142
|
https://github.com/huggingface/datasets/issues/1990#issuecomment-791321142
|
https://api.github.com/repos/huggingface/datasets/issues/1990
| 791,321,142
|
MDEyOklzc3VlQ29tbWVudDc5MTMyMTE0Mg==
|
lhoestq
| 42,851,186
|
MDQ6VXNlcjQyODUxMTg2
|
https://avatars.githubusercontent.com/u/42851186?v=4
|
https://api.github.com/users/lhoestq
|
https://github.com/lhoestq
|
https://api.github.com/users/lhoestq/followers
|
https://api.github.com/users/lhoestq/following{/other_user}
|
https://api.github.com/users/lhoestq/gists{/gist_id}
|
https://api.github.com/users/lhoestq/starred{/owner}{/repo}
|
https://api.github.com/users/lhoestq/subscriptions
|
https://api.github.com/users/lhoestq/orgs
|
https://api.github.com/users/lhoestq/repos
|
https://api.github.com/users/lhoestq/events{/privacy}
|
https://api.github.com/users/lhoestq/received_events
|
User
|
public
| false
| 2021-03-05T10:14:25
| 2021-03-05T10:14:25
|
It's not trying to bring the dataset into memory.
Actually, it's trying to memory-map the dataset file, which is different. Memory mapping allows loading large dataset files without filling up memory.
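As a minimal stdlib illustration of the difference (this is not the pyarrow code path `datasets` actually uses): the mapped bytes are paged in lazily by the OS on access, so opening a large file this way does not allocate the whole file in RAM up front.

```python
import mmap
import os
import tempfile

# Write a small file to map.
path = os.path.join(tempfile.mkdtemp(), "data.bin")
with open(path, "wb") as f:
    f.write(b"x" * 1024)

with open(path, "rb") as f:
    mm = mmap.mmap(f.fileno(), 0, access=mmap.ACCESS_READ)
    first = mm[:4]  # only the touched pages need to be resident
    mm.close()

print(first)  # b'xxxx'
```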
What dataset did you use to get this error ?
On what OS are you running ? What's your python and pyarrow version ?
|
MEMBER
|
https://api.github.com/repos/huggingface/datasets/issues/comments/791321142/reactions
| 0
| 0
| 0
| 0
| 0
| 0
| 0
| 0
| 0
| null |
|
https://api.github.com/repos/huggingface/datasets/issues/comments/791324891
|
https://github.com/huggingface/datasets/issues/1877#issuecomment-791324891
|
https://api.github.com/repos/huggingface/datasets/issues/1877
| 791,324,891
|
MDEyOklzc3VlQ29tbWVudDc5MTMyNDg5MQ==
|
lhoestq
| 42,851,186
|
MDQ6VXNlcjQyODUxMTg2
|
https://avatars.githubusercontent.com/u/42851186?v=4
|
https://api.github.com/users/lhoestq
|
https://github.com/lhoestq
|
https://api.github.com/users/lhoestq/followers
|
https://api.github.com/users/lhoestq/following{/other_user}
|
https://api.github.com/users/lhoestq/gists{/gist_id}
|
https://api.github.com/users/lhoestq/starred{/owner}{/repo}
|
https://api.github.com/users/lhoestq/subscriptions
|
https://api.github.com/users/lhoestq/orgs
|
https://api.github.com/users/lhoestq/repos
|
https://api.github.com/users/lhoestq/events{/privacy}
|
https://api.github.com/users/lhoestq/received_events
|
User
|
public
| false
| 2021-03-05T10:21:18
| 2021-03-05T10:21:18
|
Hi ! Yes you're totally right about your two points :)
And in the case of a concatenated dataset, we should reload each sub-table depending on whether it's in-memory or memory-mapped. That means the dataset will be made of several blocks in order to keep track of what's from memory and what's memory-mapped. This allows pickling/unpickling concatenated datasets.
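A toy sketch of that idea (names and structure are illustrative, not the actual `datasets` internals): an in-memory block pickles its data, while a memory-mapped block pickles only its file path and would be re-opened from disk on unpickling.

```python
import pickle

class Block:
    """Hypothetical block: either in-memory data or an on-disk file path."""
    def __init__(self, data=None, path=None):
        self.data = data  # in-memory payload
        self.path = path  # path to an on-disk (memory-mapped) file

    def __reduce__(self):
        if self.path is not None:
            return (Block, (None, self.path))  # store only the path
        return (Block, (self.data, None))      # store the full data

blocks = [Block(data=[1, 2, 3]), Block(path="/tmp/table.arrow")]
restored = pickle.loads(pickle.dumps(blocks))
print(restored[0].data, restored[1].path)  # [1, 2, 3] /tmp/table.arrow
```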
|
MEMBER
|
https://api.github.com/repos/huggingface/datasets/issues/comments/791324891/reactions
| 0
| 0
| 0
| 0
| 0
| 0
| 0
| 0
| 0
| null |
|
https://api.github.com/repos/huggingface/datasets/issues/comments/791410379
|
https://github.com/huggingface/datasets/issues/1983#issuecomment-791410379
|
https://api.github.com/repos/huggingface/datasets/issues/1983
| 791,410,379
|
MDEyOklzc3VlQ29tbWVudDc5MTQxMDM3OQ==
|
mariosasko
| 47,462,742
|
MDQ6VXNlcjQ3NDYyNzQy
|
https://avatars.githubusercontent.com/u/47462742?v=4
|
https://api.github.com/users/mariosasko
|
https://github.com/mariosasko
|
https://api.github.com/users/mariosasko/followers
|
https://api.github.com/users/mariosasko/following{/other_user}
|
https://api.github.com/users/mariosasko/gists{/gist_id}
|
https://api.github.com/users/mariosasko/starred{/owner}{/repo}
|
https://api.github.com/users/mariosasko/subscriptions
|
https://api.github.com/users/mariosasko/orgs
|
https://api.github.com/users/mariosasko/repos
|
https://api.github.com/users/mariosasko/events{/privacy}
|
https://api.github.com/users/mariosasko/received_events
|
User
|
public
| false
| 2021-03-05T13:11:21
| 2021-03-05T13:11:21
|
@lhoestq Yes, I agree adding a small note should be sufficient.
Currently, NLTK's `ConllCorpusReader` ignores the `-DOCSTART-` lines so I think it's ok if we do the same. If there is an interest in the future to use these lines, then we can include them.
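For illustration, here is a minimal sketch of that skipping behaviour when parsing CoNLL-style files (a hypothetical helper, not the dataset script's or NLTK's actual code): `-DOCSTART-` lines are dropped, blank lines delimit sentences.

```python
def parse_conll(lines):
    """Group (token, last-column label) pairs into sentences, skipping -DOCSTART-."""
    sentences, current = [], []
    for line in lines:
        line = line.strip()
        if line.startswith("-DOCSTART-"):
            continue  # ignore document boundary markers
        if not line:
            if current:
                sentences.append(current)
                current = []
            continue
        token, *cols = line.split()
        current.append((token, cols[-1] if cols else None))
    if current:
        sentences.append(current)
    return sentences

lines = ["-DOCSTART- -X- O O", "", "EU NNP B-ORG", "rejects VBZ O", ""]
print(parse_conll(lines))  # [[('EU', 'B-ORG'), ('rejects', 'O')]]
```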
|
COLLABORATOR
|
https://api.github.com/repos/huggingface/datasets/issues/comments/791410379/reactions
| 0
| 0
| 0
| 0
| 0
| 0
| 0
| 0
| 0
| null |
|
https://api.github.com/repos/huggingface/datasets/issues/comments/791422646
|
https://github.com/huggingface/datasets/issues/1989#issuecomment-791422646
|
https://api.github.com/repos/huggingface/datasets/issues/1989
| 791,422,646
|
MDEyOklzc3VlQ29tbWVudDc5MTQyMjY0Ng==
|
ioana-blue
| 17,202,292
|
MDQ6VXNlcjE3MjAyMjky
|
https://avatars.githubusercontent.com/u/17202292?v=4
|
https://api.github.com/users/ioana-blue
|
https://github.com/ioana-blue
|
https://api.github.com/users/ioana-blue/followers
|
https://api.github.com/users/ioana-blue/following{/other_user}
|
https://api.github.com/users/ioana-blue/gists{/gist_id}
|
https://api.github.com/users/ioana-blue/starred{/owner}{/repo}
|
https://api.github.com/users/ioana-blue/subscriptions
|
https://api.github.com/users/ioana-blue/orgs
|
https://api.github.com/users/ioana-blue/repos
|
https://api.github.com/users/ioana-blue/events{/privacy}
|
https://api.github.com/users/ioana-blue/received_events
|
User
|
public
| false
| 2021-03-05T13:35:00
| 2021-03-05T13:35:00
|
I've had versions where it worked fine. For this dataset, I had all kinds of parsing issues that I couldn't understand. What I ended up doing is stripping all the columns that I didn't need and also making the label 0/1.
I think one line that may have caused a problem was the csv version of this:
```crawl-data/CC-MAIN-2017-47/segments/1510934806225.78/wet/CC-MAIN-20171120203833-20171120223833-00571.warc.wet.gz Rose Blakey is an aspiring journalist. She is desperate to escape the from the small Australian town in which she lives. Rejection after rejection mean she is stuck in what she sees as a dead-end waitressing job. ^M ('Rose', '', 'Blakey') journalist F 38 journalist https://www.netgalley.com/catalog/book/121872 _ is desperate to escape the from the small Australian town in which _ lives. Rejection after rejection mean _ is stuck in what _ sees as a dead-end waitressing job. She is desperate to escape the from the small Australian town in which she lives. Rejection after rejection mean she is stuck in what she sees as a dead-end waitressing job.```
The error I got in this case is this one: https://github.com/huggingface/datasets/issues/1989#issuecomment-790842771
Note, this line was part of a much larger file and until this line I guess it was working fine.
|
NONE
|
https://api.github.com/repos/huggingface/datasets/issues/comments/791422646/reactions
| 0
| 0
| 0
| 0
| 0
| 0
| 0
| 0
| 0
| null |
|
https://api.github.com/repos/huggingface/datasets/issues/comments/791449200
|
https://github.com/huggingface/datasets/issues/1877#issuecomment-791449200
|
https://api.github.com/repos/huggingface/datasets/issues/1877
| 791,449,200
|
MDEyOklzc3VlQ29tbWVudDc5MTQ0OTIwMA==
|
gchhablani
| 29,076,344
|
MDQ6VXNlcjI5MDc2MzQ0
|
https://avatars.githubusercontent.com/u/29076344?v=4
|
https://api.github.com/users/gchhablani
|
https://github.com/gchhablani
|
https://api.github.com/users/gchhablani/followers
|
https://api.github.com/users/gchhablani/following{/other_user}
|
https://api.github.com/users/gchhablani/gists{/gist_id}
|
https://api.github.com/users/gchhablani/starred{/owner}{/repo}
|
https://api.github.com/users/gchhablani/subscriptions
|
https://api.github.com/users/gchhablani/orgs
|
https://api.github.com/users/gchhablani/repos
|
https://api.github.com/users/gchhablani/events{/privacy}
|
https://api.github.com/users/gchhablani/received_events
|
User
|
public
| false
| 2021-03-05T14:21:51
| 2021-03-05T14:21:51
|
Hi @lhoestq
Thanks, that sounds nice. Can you explain where the issue of the double memory may arise? Also, why is the existing `concatenate_datasets` not sufficient for this purpose?
|
CONTRIBUTOR
|
https://api.github.com/repos/huggingface/datasets/issues/comments/791449200/reactions
| 0
| 0
| 0
| 0
| 0
| 0
| 0
| 0
| 0
| null |
|
https://api.github.com/repos/huggingface/datasets/issues/comments/791458626
|
https://github.com/huggingface/datasets/issues/1989#issuecomment-791458626
|
https://api.github.com/repos/huggingface/datasets/issues/1989
| 791,458,626
|
MDEyOklzc3VlQ29tbWVudDc5MTQ1ODYyNg==
|
gchhablani
| 29,076,344
|
MDQ6VXNlcjI5MDc2MzQ0
|
https://avatars.githubusercontent.com/u/29076344?v=4
|
https://api.github.com/users/gchhablani
|
https://github.com/gchhablani
|
https://api.github.com/users/gchhablani/followers
|
https://api.github.com/users/gchhablani/following{/other_user}
|
https://api.github.com/users/gchhablani/gists{/gist_id}
|
https://api.github.com/users/gchhablani/starred{/owner}{/repo}
|
https://api.github.com/users/gchhablani/subscriptions
|
https://api.github.com/users/gchhablani/orgs
|
https://api.github.com/users/gchhablani/repos
|
https://api.github.com/users/gchhablani/events{/privacy}
|
https://api.github.com/users/gchhablani/received_events
|
User
|
public
| false
| 2021-03-05T14:37:12
| 2021-03-05T14:49:42
|
Hi @ioana-blue,
What is the separator you're using for the csv? I see there are only two commas in the given line, but they don't seem to be in the expected positions. Also, is this string part of one line, or is it an entire line? There should also be a label, right?
|
CONTRIBUTOR
|
https://api.github.com/repos/huggingface/datasets/issues/comments/791458626/reactions
| 0
| 0
| 0
| 0
| 0
| 0
| 0
| 0
| 0
| null |
|
https://api.github.com/repos/huggingface/datasets/issues/comments/791494268
|
https://github.com/huggingface/datasets/issues/1989#issuecomment-791494268
|
https://api.github.com/repos/huggingface/datasets/issues/1989
| 791,494,268
|
MDEyOklzc3VlQ29tbWVudDc5MTQ5NDI2OA==
|
ioana-blue
| 17,202,292
|
MDQ6VXNlcjE3MjAyMjky
|
https://avatars.githubusercontent.com/u/17202292?v=4
|
https://api.github.com/users/ioana-blue
|
https://github.com/ioana-blue
|
https://api.github.com/users/ioana-blue/followers
|
https://api.github.com/users/ioana-blue/following{/other_user}
|
https://api.github.com/users/ioana-blue/gists{/gist_id}
|
https://api.github.com/users/ioana-blue/starred{/owner}{/repo}
|
https://api.github.com/users/ioana-blue/subscriptions
|
https://api.github.com/users/ioana-blue/orgs
|
https://api.github.com/users/ioana-blue/repos
|
https://api.github.com/users/ioana-blue/events{/privacy}
|
https://api.github.com/users/ioana-blue/received_events
|
User
|
public
| false
| 2021-03-05T15:33:56
| 2021-03-05T15:33:56
|
Sorry for the confusion, the sample above was from a tsv that was used to derive the csv. Let me construct the csv again (I had removed it).
This is the line in the csv - this is the whole line:
```crawl-data/CC-MAIN-2017-47/segments/1510934806225.78/wet/CC-MAIN-20171120203833-20171120223833-00571.warc.wet.gz,Rose Blakey is an aspiring journalist. She is desperate to escape the from the small Australian town in which she lives. Rejection after rejection mean she is stuck in what she sees as a dead,"('Rose', '', 'Blakey')",journalist,F,38,journalist,https://www.netgalley.com/catalog/book/121872,_ is desperate to escape the from the small Australian town in which _ lives. Rejection after rejection mean _ is stuck in what _ sees as a dead-end waitressing job., She is desperate to escape the from the small Australian town in which she lives. Rejection after rejection mean she is stuck in what she sees as a dead-end waitressing job.```
|
NONE
|
https://api.github.com/repos/huggingface/datasets/issues/comments/791494268/reactions
| 0
| 0
| 0
| 0
| 0
| 0
| 0
| 0
| 0
| null |
|
https://api.github.com/repos/huggingface/datasets/issues/comments/791503586
|
https://github.com/huggingface/datasets/issues/1989#issuecomment-791503586
|
https://api.github.com/repos/huggingface/datasets/issues/1989
| 791,503,586
|
MDEyOklzc3VlQ29tbWVudDc5MTUwMzU4Ng==
|
gchhablani
| 29,076,344
|
MDQ6VXNlcjI5MDc2MzQ0
|
https://avatars.githubusercontent.com/u/29076344?v=4
|
https://api.github.com/users/gchhablani
|
https://github.com/gchhablani
|
https://api.github.com/users/gchhablani/followers
|
https://api.github.com/users/gchhablani/following{/other_user}
|
https://api.github.com/users/gchhablani/gists{/gist_id}
|
https://api.github.com/users/gchhablani/starred{/owner}{/repo}
|
https://api.github.com/users/gchhablani/subscriptions
|
https://api.github.com/users/gchhablani/orgs
|
https://api.github.com/users/gchhablani/repos
|
https://api.github.com/users/gchhablani/events{/privacy}
|
https://api.github.com/users/gchhablani/received_events
|
User
|
public
| false
| 2021-03-05T15:48:07
| 2021-03-05T15:51:14
|
Hi,
Just in case you want to use tsv directly, you can use the separator argument while loading the dataset.
```python
from datasets import load_dataset

d = load_dataset("csv", data_files=["test.csv"], sep="\t")
```
Additionally, I don't face the issues with the following csv (same as the one you provided):
```sh
link1,text1,info1,info2,info3,info4,info5,link2,text2,text3
crawl-data/CC-MAIN-2017-47/segments/1510934806225.78/wet/CC-MAIN-20171120203833-20171120223833-00571.warc.wet.gz,Rose Blakey is an aspiring journalist. She is desperate to escape the from the small Australian town in which she lives. Rejection after rejection mean she is stuck in what she sees as a dead,"('Rose', '', 'Blakey')",journalist,F,38,journalist,https://www.netgalley.com/catalog/book/121872,_ is desperate to escape the from the small Australian town in which _ lives. Rejection after rejection mean _ is stuck in what _ sees as a dead-end waitressing job., She is desperate to escape the from the small Australian town in which she lives. Rejection after rejection mean she is stuck in what she sees as a dead-end waitressing job.
```
Output after loading:
```sh
{'link1': 'crawl-data/CC-MAIN-2017-47/segments/1510934806225.78/wet/CC-MAIN-20171120203833-20171120223833-00571.warc.wet.gz', 'text1': 'Rose Blakey is an aspiring journalist. She is desperate to escape the from the small Australian town in which she lives. Rejection after rejection mean she is stuck in what she sees as a dead', 'info1': "('Rose', '', 'Blakey')", 'info2': 'journalist', 'info3': 'F', 'info4': 38, 'info5': 'journalist', 'link2': 'https://www.netgalley.com/catalog/book/121872', 'text2': '_ is desperate to escape the from the small Australian town in which _ lives. Rejection after rejection mean _ is stuck in what _ sees as a dead-end waitressing job.', 'text3': ' She is desperate to escape the from the small Australian town in which she lives. Rejection after rejection mean she is stuck in what she sees as a dead-end waitressing job.'}
```
Can you check once if the tsv works for you directly using the separator argument? The conversion from tsv to csv could create issues, I'm only guessing though.
|
CONTRIBUTOR
|
https://api.github.com/repos/huggingface/datasets/issues/comments/791503586/reactions
| 0
| 0
| 0
| 0
| 0
| 0
| 0
| 0
| 0
| null |
|
https://api.github.com/repos/huggingface/datasets/issues/comments/791522319
|
https://github.com/huggingface/datasets/issues/1989#issuecomment-791522319
|
https://api.github.com/repos/huggingface/datasets/issues/1989
| 791,522,319
|
MDEyOklzc3VlQ29tbWVudDc5MTUyMjMxOQ==
|
ioana-blue
| 17,202,292
|
MDQ6VXNlcjE3MjAyMjky
|
https://avatars.githubusercontent.com/u/17202292?v=4
|
https://api.github.com/users/ioana-blue
|
https://github.com/ioana-blue
|
https://api.github.com/users/ioana-blue/followers
|
https://api.github.com/users/ioana-blue/following{/other_user}
|
https://api.github.com/users/ioana-blue/gists{/gist_id}
|
https://api.github.com/users/ioana-blue/starred{/owner}{/repo}
|
https://api.github.com/users/ioana-blue/subscriptions
|
https://api.github.com/users/ioana-blue/orgs
|
https://api.github.com/users/ioana-blue/repos
|
https://api.github.com/users/ioana-blue/events{/privacy}
|
https://api.github.com/users/ioana-blue/received_events
|
User
|
public
| false
| 2021-03-05T16:16:56
| 2021-03-05T16:16:56
|
thanks for the tip. very strange :/ I'll check my datasets version as well.
I will have more similar experiments soon so I'll let you know if I manage to get rid of this.
|
NONE
|
https://api.github.com/repos/huggingface/datasets/issues/comments/791522319/reactions
| 0
| 0
| 0
| 0
| 0
| 0
| 0
| 0
| 0
| null |
|
https://api.github.com/repos/huggingface/datasets/issues/comments/791542975
|
https://github.com/huggingface/datasets/issues/1989#issuecomment-791542975
|
https://api.github.com/repos/huggingface/datasets/issues/1989
| 791,542,975
|
MDEyOklzc3VlQ29tbWVudDc5MTU0Mjk3NQ==
|
gchhablani
| 29,076,344
|
MDQ6VXNlcjI5MDc2MzQ0
|
https://avatars.githubusercontent.com/u/29076344?v=4
|
https://api.github.com/users/gchhablani
|
https://github.com/gchhablani
|
https://api.github.com/users/gchhablani/followers
|
https://api.github.com/users/gchhablani/following{/other_user}
|
https://api.github.com/users/gchhablani/gists{/gist_id}
|
https://api.github.com/users/gchhablani/starred{/owner}{/repo}
|
https://api.github.com/users/gchhablani/subscriptions
|
https://api.github.com/users/gchhablani/orgs
|
https://api.github.com/users/gchhablani/repos
|
https://api.github.com/users/gchhablani/events{/privacy}
|
https://api.github.com/users/gchhablani/received_events
|
User
|
public
| false
| 2021-03-05T16:48:44
| 2021-03-05T16:49:56
|
No problem at all. I thought I'd be able to solve this but I'm unable to replicate the issue :/
|
CONTRIBUTOR
|
https://api.github.com/repos/huggingface/datasets/issues/comments/791542975/reactions
| 0
| 0
| 0
| 0
| 0
| 0
| 0
| 0
| 0
| null |
|
https://api.github.com/repos/huggingface/datasets/issues/comments/791590955
|
https://github.com/huggingface/datasets/issues/1990#issuecomment-791590955
|
https://api.github.com/repos/huggingface/datasets/issues/1990
| 791,590,955
|
MDEyOklzc3VlQ29tbWVudDc5MTU5MDk1NQ==
|
dorost1234
| 79,165,106
|
MDQ6VXNlcjc5MTY1MTA2
|
https://avatars.githubusercontent.com/u/79165106?v=4
|
https://api.github.com/users/dorost1234
|
https://github.com/dorost1234
|
https://api.github.com/users/dorost1234/followers
|
https://api.github.com/users/dorost1234/following{/other_user}
|
https://api.github.com/users/dorost1234/gists{/gist_id}
|
https://api.github.com/users/dorost1234/starred{/owner}{/repo}
|
https://api.github.com/users/dorost1234/subscriptions
|
https://api.github.com/users/dorost1234/orgs
|
https://api.github.com/users/dorost1234/repos
|
https://api.github.com/users/dorost1234/events{/privacy}
|
https://api.github.com/users/dorost1234/received_events
|
User
|
public
| false
| 2021-03-05T18:09:26
| 2021-03-05T18:09:26
|
Dear @lhoestq
Thank you so much for coming back to me. Please find the info below:
1) Dataset name: I used wikipedia with config 20200501.en
2) pyarrow versions in my environment:
```
pyarrow 2.0.0 <pip>
pyarrow 3.0.0 <pip>
```
3) Python version: 3.7.10
4) OS version (`lsb_release -a`):
```
No LSB modules are available.
Distributor ID: Debian
Description:    Debian GNU/Linux 10 (buster)
Release:        10
Codename:       buster
```
Is there a way I could solve the memory issue so that I could run this model? I am using a GeForce GTX 108,
thanks
|
NONE
|
https://api.github.com/repos/huggingface/datasets/issues/comments/791590955/reactions
| 0
| 0
| 0
| 0
| 0
| 0
| 0
| 0
| 0
| null |
|
https://api.github.com/repos/huggingface/datasets/issues/comments/791798195
|
https://github.com/huggingface/datasets/issues/1994#issuecomment-791798195
|
https://api.github.com/repos/huggingface/datasets/issues/1994
| 791,798,195
|
MDEyOklzc3VlQ29tbWVudDc5MTc5ODE5NQ==
|
jonatasgrosman
| 5,097,052
|
MDQ6VXNlcjUwOTcwNTI=
|
https://avatars.githubusercontent.com/u/5097052?v=4
|
https://api.github.com/users/jonatasgrosman
|
https://github.com/jonatasgrosman
|
https://api.github.com/users/jonatasgrosman/followers
|
https://api.github.com/users/jonatasgrosman/following{/other_user}
|
https://api.github.com/users/jonatasgrosman/gists{/gist_id}
|
https://api.github.com/users/jonatasgrosman/starred{/owner}{/repo}
|
https://api.github.com/users/jonatasgrosman/subscriptions
|
https://api.github.com/users/jonatasgrosman/orgs
|
https://api.github.com/users/jonatasgrosman/repos
|
https://api.github.com/users/jonatasgrosman/events{/privacy}
|
https://api.github.com/users/jonatasgrosman/received_events
|
User
|
public
| false
| 2021-03-05T23:54:55
| 2021-03-05T23:54:55
|
Hi @dorost1234, I think I can help you a little. I've processed some Wikipedia datasets (Spanish included) using the HF/datasets library during recent research.
@lhoestq Could you help me upload these preprocessed datasets to Hugging Face's repositories? To be more precise, I've built datasets from the following languages using the 20201201 dumps: Spanish, Portuguese, Russian, French, Japanese, Chinese, and Turkish. Processing these datasets has high costs that most of the community can't afford. I think the preprocessed datasets I have could be helpful for someone without access to high-resource machines to process Wikipedia's dumps, like @dorost1234
|
CONTRIBUTOR
|
https://api.github.com/repos/huggingface/datasets/issues/comments/791798195/reactions
| 0
| 0
| 0
| 0
| 0
| 0
| 0
| 0
| 0
| null |
|
https://api.github.com/repos/huggingface/datasets/issues/comments/791903746
|
https://github.com/huggingface/datasets/pull/1529#issuecomment-791903746
|
https://api.github.com/repos/huggingface/datasets/issues/1529
| 791,903,746
|
MDEyOklzc3VlQ29tbWVudDc5MTkwMzc0Ng==
|
gchhablani
| 29,076,344
|
MDQ6VXNlcjI5MDc2MzQ0
|
https://avatars.githubusercontent.com/u/29076344?v=4
|
https://api.github.com/users/gchhablani
|
https://github.com/gchhablani
|
https://api.github.com/users/gchhablani/followers
|
https://api.github.com/users/gchhablani/following{/other_user}
|
https://api.github.com/users/gchhablani/gists{/gist_id}
|
https://api.github.com/users/gchhablani/starred{/owner}{/repo}
|
https://api.github.com/users/gchhablani/subscriptions
|
https://api.github.com/users/gchhablani/orgs
|
https://api.github.com/users/gchhablani/repos
|
https://api.github.com/users/gchhablani/events{/privacy}
|
https://api.github.com/users/gchhablani/received_events
|
User
|
public
| false
| 2021-03-06T09:42:54
| 2021-03-06T09:42:54
|
Hi,
While trying to add this dataset, I found some potential issues.
The homepage mentioned is https://github.com/katakonst/sentiment-analysis-tensorflow/tree/master/datasets/ro/, where the dataset is different from the one at the URLs: https://raw.githubusercontent.com/dumitrescustefan/Romanian-Transformers/examples/examples/sentiment_analysis/ro/train.csv. It is unclear which dataset is "correct". I checked the total number of examples (train+test) in both places and they do not match.
|
CONTRIBUTOR
|
https://api.github.com/repos/huggingface/datasets/issues/comments/791903746/reactions
| 0
| 0
| 0
| 0
| 0
| 0
| 0
| 0
| 0
| null |
|
https://api.github.com/repos/huggingface/datasets/issues/comments/791908156
|
https://github.com/huggingface/datasets/issues/1994#issuecomment-791908156
|
https://api.github.com/repos/huggingface/datasets/issues/1994
| 791,908,156
|
MDEyOklzc3VlQ29tbWVudDc5MTkwODE1Ng==
|
dorost1234
| 79,165,106
|
MDQ6VXNlcjc5MTY1MTA2
|
https://avatars.githubusercontent.com/u/79165106?v=4
|
https://api.github.com/users/dorost1234
|
https://github.com/dorost1234
|
https://api.github.com/users/dorost1234/followers
|
https://api.github.com/users/dorost1234/following{/other_user}
|
https://api.github.com/users/dorost1234/gists{/gist_id}
|
https://api.github.com/users/dorost1234/starred{/owner}{/repo}
|
https://api.github.com/users/dorost1234/subscriptions
|
https://api.github.com/users/dorost1234/orgs
|
https://api.github.com/users/dorost1234/repos
|
https://api.github.com/users/dorost1234/events{/privacy}
|
https://api.github.com/users/dorost1234/received_events
|
User
|
public
| false
| 2021-03-06T10:22:33
| 2021-03-06T10:52:26
|
Thank you so much @jonatasgrosman, I greatly appreciate your help with them.
Yes, unfortunately I do not have access to good resources and need the data for my research. @lhoestq, I would greatly appreciate your help with uploading the processed datasets to Hugging Face datasets. This would be really helpful for users like me without access to high-memory GPU resources.
Thank you both so much again.
|
NONE
|
https://api.github.com/repos/huggingface/datasets/issues/comments/791908156/reactions
| 0
| 0
| 0
| 0
| 0
| 0
| 0
| 0
| 0
| null |
|
https://api.github.com/repos/huggingface/datasets/issues/comments/791908998
|
https://github.com/huggingface/datasets/issues/1757#issuecomment-791908998
|
https://api.github.com/repos/huggingface/datasets/issues/1757
| 791,908,998
|
MDEyOklzc3VlQ29tbWVudDc5MTkwODk5OA==
|
gchhablani
| 29,076,344
|
MDQ6VXNlcjI5MDc2MzQ0
|
https://avatars.githubusercontent.com/u/29076344?v=4
|
https://api.github.com/users/gchhablani
|
https://github.com/gchhablani
|
https://api.github.com/users/gchhablani/followers
|
https://api.github.com/users/gchhablani/following{/other_user}
|
https://api.github.com/users/gchhablani/gists{/gist_id}
|
https://api.github.com/users/gchhablani/starred{/owner}{/repo}
|
https://api.github.com/users/gchhablani/subscriptions
|
https://api.github.com/users/gchhablani/orgs
|
https://api.github.com/users/gchhablani/repos
|
https://api.github.com/users/gchhablani/events{/privacy}
|
https://api.github.com/users/gchhablani/received_events
|
User
|
public
| false
| 2021-03-06T10:30:28
| 2021-03-06T10:30:28
|
Hi @lhoestq,
This issue can be closed, I guess.
|
CONTRIBUTOR
|
https://api.github.com/repos/huggingface/datasets/issues/comments/791908998/reactions
| 0
| 0
| 0
| 0
| 0
| 0
| 0
| 0
| 0
| null |