Dataset Viewer
Auto-converted to Parquet
Columns:
- question_id: string (lengths 40–40)
- question: string (lengths 4–171)
- answer: sequence
- evidence: sequence
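With the schema in hand, the preview rows below are straightforward to read programmatically. As a minimal sketch using the Hugging Face `datasets` library, the snippet loads the data and prints one record; the hub id and split name are assumptions, so substitute the values shown on this dataset's page.

```python
from datasets import load_dataset

# Assumptions: the hub id of this repo and the split name; replace both
# with the values listed on the dataset page.
DATASET_ID = "qasper"

ds = load_dataset(DATASET_ID, split="train")

# Each record follows the schema above: a 40-character question_id, a short
# free-text question, a sequence of answer strings, and a parallel sequence
# of evidence-paragraph lists.
row = ds[0]
print(row["question_id"])
print(row["question"])
for answer, evidence in zip(row["answer"], row["evidence"]):
    print("-", answer)
    print("  supporting paragraphs:", len(evidence))
```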
753990d0b621d390ed58f20c4d9e4f065f0dc672
What is the seed lexicon?
[ "a vocabulary of positive and negative predicates that helps determine the polarity score of an event", "seed lexicon consists of positive and negative predicates" ]
[ [ "The seed lexicon consists of positive and negative predicates. If the predicate of an extracted event is in the seed lexicon and does not involve complex phenomena like negation, we assign the corresponding polarity score ($+1$ for positive events and $-1$ for negative events) to the event. We expect the mod...
9d578ddccc27dd849244d632dd0f6bf27348ad81
What are the results?
[ "Using all data to train: AL -- BiGRU achieved 0.843 accuracy, AL -- BERT achieved 0.863 accuracy, AL+CA+CO -- BiGRU achieved 0.866 accuracy, AL+CA+CO -- BERT achieved 0.835, accuracy, ACP -- BiGRU achieved 0.919 accuracy, ACP -- BERT achived 0.933, accuracy, ACP+AL+CA+CO -- BiGRU achieved 0.917 accuracy, ACP+AL+CA...
[ [ "As for ${\\rm Encoder}$, we compared two types of neural networks: BiGRU and BERT. GRU BIBREF16 is a recurrent neural network sequence encoder. BiGRU reads an input sequence forward and backward and the output is the concatenation of the final forward and backward hidden states.", "We trained the model w...
02e4bf719b1a504e385c35c6186742e720bcb281
How are relations used to propagate polarity?
[ "based on the relation between events, the suggested polarity of one event can determine the possible polarity of the other event ", "cause relation: both events in the relation should have the same polarity; concession relation: events should have opposite polarity" ]
[ [ "In this paper, we propose a simple and effective method for learning affective events that only requires a very small seed lexicon and a large raw corpus. As illustrated in Figure FIGREF1, our key idea is that we can exploit discourse relations BIBREF4 to efficiently propagate polarity from seed predicates t...
44c4bd6decc86f1091b5fc0728873d9324cdde4e
How big is the Japanese data?
[ "7000000 pairs of events were extracted from the Japanese Web corpus, 529850 pairs of events were extracted from the ACP corpus", "The ACP corpus has around 700k events split into positive and negative polarity " ]
[ [ "As a raw corpus, we used a Japanese web corpus that was compiled through the procedures proposed by BIBREF13. To extract event pairs tagged with discourse relations, we used the Japanese dependency parser KNP and in-house postprocessing scripts BIBREF14. KNP used hand-written rules to segment each sentence i...
86abeff85f3db79cf87a8c993e5e5aa61226dc98
What are labels available in dataset for supervision?
[ "negative, positive" ]
[ [ "Affective events BIBREF0 are events that typically affect people in positive or negative ways. For example, getting money and playing sports are usually positive to the experiencers; catching cold and losing one's wallet are negative. Understanding affective events is important to various natural language pr...
c029deb7f99756d2669abad0a349d917428e9c12
How big are improvements of supervszed learning results trained on smalled labeled data enhanced with proposed approach copared to basic approach?
[ "3%" ]
[ [] ]
39f8db10d949c6b477fa4b51e7c184016505884f
How does their model learn using mostly raw data?
[ "by exploiting discourse relations to propagate polarity from seed predicates to final sentiment polarity" ]
[ [ "In this paper, we propose a simple and effective method for learning affective events that only requires a very small seed lexicon and a large raw corpus. As illustrated in Figure FIGREF1, our key idea is that we can exploit discourse relations BIBREF4 to efficiently propagate polarity from seed predicates t...
d0bc782961567dc1dd7e074b621a6d6be44bb5b4
How big is seed lexicon used for training?
[ "30 words" ]
[ [ "We constructed our seed lexicon consisting of 15 positive words and 15 negative words, as shown in Section SECREF27. From the corpus of about 100 million sentences, we obtained 1.4 millions event pairs for AL, 41 millions for CA, and 6 millions for CO. We randomly selected subsets of AL event pairs such that...
a592498ba2fac994cd6fad7372836f0adb37e22a
How large is raw corpus used for training?
[ "100 million sentences" ]
[ [ "As a raw corpus, we used a Japanese web corpus that was compiled through the procedures proposed by BIBREF13. To extract event pairs tagged with discourse relations, we used the Japanese dependency parser KNP and in-house postprocessing scripts BIBREF14. KNP used hand-written rules to segment each sentence i...
3a9d391d25cde8af3334ac62d478b36b30079d74
Does the paper report macro F1?
[ "Yes", "Yes" ]
[ [], [ "We find that the multilingual model cannot handle infrequent categories, i.e., Awe/Sublime, Suspense and Humor. However, increasing the dataset with English data improves the results, suggesting that the classification would largely benefit from more annotated data. The best model overall is DBMDZ (.52...
8d8300d88283c73424c8f301ad9fdd733845eb47
How is the annotation experiment evaluated?
[ "confusion matrices of labels between annotators" ]
[ [ "We find that Cohen $\\kappa $ agreement ranges from .84 for Uneasiness in the English data, .81 for Humor and Nostalgia, down to German Suspense (.65), Awe/Sublime (.61) and Vitality for both languages (.50 English, .63 German). Both annotators have a similar emotion frequency profile, where the ranking is a...
48b12eb53e2d507343f19b8a667696a39b719807
What are the aesthetic emotions formalized?
[ "feelings of suspense experienced in narratives not only respond to the trajectory of the plot's content, but are also directly predictive of aesthetic liking (or disliking), Emotions that exhibit this dual capacity have been defined as “aesthetic emotions”" ]
[ [ "To emotionally move readers is considered a prime goal of literature since Latin antiquity BIBREF1, BIBREF2, BIBREF3. Deeply moved readers shed tears or get chills and goosebumps even in lab settings BIBREF4. In cases like these, the emotional response actually implies an aesthetic evaluation: narratives tha...
003f884d3893532f8c302431c9f70be6f64d9be8
Do they report results only on English data?
[ "No", "Unanswerable" ]
[ [ "Our full dataset consists of all subreddits on Reddit from January 2013 to December 2014, for which there are at least 500 words in the vocabulary used to estimate our measures, in at least 4 months of the subreddit's history. We compute our measures over the comments written by users in a community in time ...
bb97537a0a7c8f12a3f65eba73cefa6abcd2f2b2
How do the various social phenomena examined manifest in different types of communities?
[ "Dynamic communities have substantially higher rates of monthly user retention than more stable communities. More distinctive communities exhibit moderately higher monthly retention rates than more generic communities. There is also a strong positive relationship between a community's dynamicity and the average num...
[ [ "We find that dynamic communities, such as Seahawks or starcraft, have substantially higher rates of monthly user retention than more stable communities (Spearman's INLINEFORM0 = 0.70, INLINEFORM1 0.001, computed with community points averaged over months; Figure FIGREF11 .A, left). Similarly, more distinctiv...
eea089baedc0ce80731c8fdcb064b82f584f483a
What patterns do they observe about how user engagement varies with the characteristics of a community?
[ "communities that are characterized by specialized, constantly-updating content have higher user retention rates, but also exhibit larger linguistic gaps that separate newcomers from established members, within distinctive communities, established users have an increased propensity to engage with the community's sp...
[ [ "Engagement and community identity. We apply our framework to understand how two important aspects of user engagement in a community—the community's propensity to retain its users (Section SECREF3 ), and its permeability to new members (Section SECREF4 )—vary according to the type of collective identity it fo...
edb2d24d6d10af13931b3a47a6543bd469752f0c
How did the select the 300 Reddit communities for comparison?
[ "They selected all the subreddits from January 2013 to December 2014 with at least 500 words in the vocabulary and at least 4 months of the subreddit's history. They also removed communities with the bulk of the contributions are in foreign language.", "They collect subreddits from January 2013 to December 2014,2...
[ [ "Our full dataset consists of all subreddits on Reddit from January 2013 to December 2014, for which there are at least 500 words in the vocabulary used to estimate our measures, in at least 4 months of the subreddit's history. We compute our measures over the comments written by users in a community in time ...
938cf30c4f1d14fa182e82919e16072fdbcf2a82
How do the authors measure how temporally dynamic a community is?
[ "the average volatility of all utterances" ]
[ [ "Dynamicity. A highly dynamic community constantly shifts interests from one time window to another, and these temporal variations are reflected in its use of volatile language. Formally, we define the dynamicity of a community INLINEFORM0 as the average volatility of all utterances in INLINEFORM1 . We refer ...
93f4ad6568207c9bd10d712a52f8de25b3ebadd4
How do the authors measure how distinctive a community is?
[ " the average specificity of all utterances" ]
[ [ "Distinctiveness. A community with a very distinctive identity will tend to have distinctive interests, expressed through specialized language. Formally, we define the distinctiveness of a community INLINEFORM0 as the average specificity of all utterances in INLINEFORM1 . We refer to a community with a less d...
71a7153e12879defa186bfb6dbafe79c74265e10
What data is the language model pretrained on?
[ "Chinese general corpus", "Unanswerable" ]
[ [ "To implement deep neural network models, we utilize the Keras library BIBREF36 with TensorFlow BIBREF37 backend. Each model is run on a single NVIDIA GeForce GTX 1080 Ti GPU. The models are trained by Adam optimization algorithm BIBREF38 whose parameters are the same as the default settings except for learni...
85d1831c28d3c19c84472589a252e28e9884500f
What baselines is the proposed model compared against?
[ "BERT-Base, QANet", "QANet BIBREF39, BERT-Base BIBREF26" ]
[ [ "Experimental Studies ::: Comparison with State-of-the-art Methods", "Since BERT has already achieved the state-of-the-art performance of question-answering, in this section we compare our proposed model with state-of-the-art question answering models (i.e. QANet BIBREF39) and BERT-Base BIBREF26. As BERT ...
1959e0ebc21fafdf1dd20c6ea054161ba7446f61
How is the clinical text structuring task defined?
[ "Clinical text structuring (CTS) is a critical task for fetching medical research data from electronic health records (EHRs), where structural patient medical data, such as whether the patient has specific symptoms, diseases, or what the tumor size is, how far from the tumor is cut at during the surgery, or what th...
[ [ "Clinical text structuring (CTS) is a critical task for fetching medical research data from electronic health records (EHRs), where structural patient medical data, such as whether the patient has specific symptoms, diseases, or what the tumor size is, how far from the tumor is cut at during the surgery, or w...
77cf4379106463b6ebcb5eb8fa5bb25450fa5fb8
What are the specific tasks being unified?
[ " three types of questions, namely tumor size, proximal resection margin and distal resection margin", "Unanswerable" ]
[ [ "To reduce the pipeline depth and break the barrier of non-uniform output formats, we present a question answering based clinical text structuring (QA-CTS) task (see Fig. FIGREF1). Unlike the traditional CTS task, our QA-CTS task aims to discover the most related text from original paragraph text. For some ca...
06095a4dee77e9a570837b35fc38e77228664f91
Is all text in this dataset a question, or are there unrelated sentences in between questions?
[ "the dataset consists of pathology reports including sentences and questions and answers about tumor size and resection margins so it does include additional sentences " ]
[ [ "Our dataset is annotated based on Chinese pathology reports provided by the Department of Gastrointestinal Surgery, Ruijin Hospital. It contains 17,833 sentences, 826,987 characters and 2,714 question-answer pairs. All question-answer pairs are annotated and reviewed by four clinicians with three types of qu...
19c9cfbc4f29104200393e848b7b9be41913a7ac
How many questions are in the dataset?
[ "2,714 " ]
[ [ "Our dataset is annotated based on Chinese pathology reports provided by the Department of Gastrointestinal Surgery, Ruijin Hospital. It contains 17,833 sentences, 826,987 characters and 2,714 question-answer pairs. All question-answer pairs are annotated and reviewed by four clinicians with three types of qu...
6743c1dd7764fc652cfe2ea29097ea09b5544bc3
What is the perWhat are the tasks evaluated?
[ "Unanswerable" ]
[ [] ]
14323046220b2aea8f15fba86819cbccc389ed8b
Are there privacy concerns with clinical data?
[ "Unanswerable" ]
[ [] ]
08a5f8d36298b57f6a4fcb4b6ae5796dc5d944a4
How they introduce domain-specific features into pre-trained language model?
[ "integrate clinical named entity information into pre-trained language model" ]
[ [ "We first present a question answering based clinical text structuring (QA-CTS) task, which unifies different specific tasks and make dataset shareable. We also propose an effective model to integrate clinical named entity information into pre-trained language model.", "In this section, we present an effe...
975a4ac9773a4af551142c324b64a0858670d06e
How big is QA-CTS task dataset?
[ "17,833 sentences, 826,987 characters and 2,714 question-answer pairs" ]
[ [ "Our dataset is annotated based on Chinese pathology reports provided by the Department of Gastrointestinal Surgery, Ruijin Hospital. It contains 17,833 sentences, 826,987 characters and 2,714 question-answer pairs. All question-answer pairs are annotated and reviewed by four clinicians with three types of qu...
326e08a0f5753b90622902bd4a9c94849a24b773
How big is dataset of pathology reports collected from Ruijing Hospital?
[ "17,833 sentences, 826,987 characters and 2,714 question-answer pairs" ]
[ [ "Our dataset is annotated based on Chinese pathology reports provided by the Department of Gastrointestinal Surgery, Ruijin Hospital. It contains 17,833 sentences, 826,987 characters and 2,714 question-answer pairs. All question-answer pairs are annotated and reviewed by four clinicians with three types of qu...
bd78483a746fda4805a7678286f82d9621bc45cf
What are strong baseline models in specific tasks?
[ "state-of-the-art question answering models (i.e. QANet BIBREF39) and BERT-Base BIBREF26" ]
[ [ "Since BERT has already achieved the state-of-the-art performance of question-answering, in this section we compare our proposed model with state-of-the-art question answering models (i.e. QANet BIBREF39) and BERT-Base BIBREF26. As BERT has two versions: BERT-Base and BERT-Large, due to the lack of computatio...
dd155f01f6f4a14f9d25afc97504aefdc6d29c13
What aspects have been compared between various language models?
[ "Quality measures using perplexity and recall, and performance measured using latency and energy usage. " ]
[ [ "For each model, we examined word-level perplexity, R@3 in next-word prediction, latency (ms/q), and energy usage (mJ/q). To explore the perplexity–recall relationship, we collected individual perplexity and recall statistics for each sentence in the test set." ] ]
a9d530d68fb45b52d9bad9da2cd139db5a4b2f7c
what classic language models are mentioned in the paper?
[ "Kneser–Ney smoothing" ]
[ [ "In this paper, we examine the quality–performance tradeoff in the shift from non-neural to neural language models. In particular, we compare Kneser–Ney smoothing, widely accepted as the state of the art prior to NLMs, to the best NLMs today. The decrease in perplexity on standard datasets has been well docum...
e07df8f613dbd567a35318cd6f6f4cb959f5c82d
What is a commonly used evaluation metric for language models?
[ "perplexity", "perplexity" ]
[ [ "Deep learning has unquestionably advanced the state of the art in many natural language processing tasks, from syntactic dependency parsing BIBREF0 to named-entity recognition BIBREF1 to machine translation BIBREF2 . The same certainly applies to language modeling, where recent advances in neural language mo...
1a43df221a567869964ad3b275de30af2ac35598
Which dataset do they use a starting point in generating fake reviews?
[ "the Yelp Challenge dataset", "Yelp Challenge dataset BIBREF2" ]
[ [ "We use the Yelp Challenge dataset BIBREF2 for our fake review generation. The dataset (Aug 2017) contains 2.9 million 1 –5 star restaurant reviews. We treat all reviews as genuine human-written reviews for the purpose of this work, since wide-scale deployment of machine-generated review attacks are not yet r...
98b11f70239ef0e22511a3ecf6e413ecb726f954
Do they use a pretrained NMT model to help generating reviews?
[ "No", "No" ]
[ [], [] ]
d4d771bcb59bab4f3eb9026cda7d182eb582027d
How does using NMT ensure generated reviews stay on topic?
[ "Unanswerable" ]
[ [] ]
12f1919a3e8ca460b931c6cacc268a926399dff4
What kind of model do they use for detection?
[ "AdaBoost-based classifier" ]
[ [ "We developed an AdaBoost-based classifier to detect our new fake reviews, consisting of 200 shallow decision trees (depth 2). The features we used are recorded in Table~\\ref{table:features_adaboost} (Appendix)." ] ]
cd1034c183edf630018f47ff70b48d74d2bb1649
Does their detection tool work better than human detection?
[ "Yes" ]
[ [ "We noticed some variation in the detection of different fake review categories. The respondents in our MTurk survey had most difficulties recognizing reviews of category $(b=0.3, \\lambda=-5)$, where true positive rate was $40.4\\%$, while the true negative rate of the real class was $62.7\\%$. The precision...
bd9930a613dd36646e2fc016b6eb21ab34c77621
How many reviews in total (both generated and true) do they evaluate on Amazon Mechanical Turk?
[ "1,006 fake reviews and 994 real reviews" ]
[ [ "We first investigated overall detection of any NMT-Fake reviews (1,006 fake reviews and 994 real reviews). We found that the participants had big difficulties in detecting our fake reviews. In average, the reviews were detected with class-averaged \\emph{F-score of only 56\\%}, with 53\\% F-score for fake re...
6e2ad9ad88cceabb6977222f5e090ece36aa84ea
Which baselines did they compare?
[ "The baseline model is a deep sequence-to-sequence encoder/decoder model with attention. The encoder is a bidirectional Long-Short Term Memory(LSTM) cell BIBREF14 and the decoder a single LSTM cell with attention mechanism. The attention mechanism is computed as in BIBREF9 and we use a greedy search for decoding. W...
[ [ "We present in this section the baseline model from See et al. See2017 trained on the CNN/Daily Mail dataset. We reproduce the results from See et al. See2017 to then apply LRP on it.", "The baseline model is a deep sequence-to-sequence encoder/decoder model with attention. The encoder is a bidirectional ...
aacb0b97aed6fc6a8b471b8c2e5c4ddb60988bf5
How many attention layers are there in their model?
[ "one" ]
[ [ "The baseline model is a deep sequence-to-sequence encoder/decoder model with attention. The encoder is a bidirectional Long-Short Term Memory(LSTM) cell BIBREF14 and the decoder a single LSTM cell with attention mechanism. The attention mechanism is computed as in BIBREF9 and we use a greedy search for decod...
710c1f8d4c137c8dad9972f5ceacdbf8004db208
Is the explanation from saliency map correct?
[ "No" ]
[ [ "We showed that in some cases the saliency maps are truthful to the network's computation, meaning that they do highlight the input features that the network focused on. But we also showed that in some cases the saliency maps seem to not capture the important input features. This brought us to discuss the fac...
47726be8641e1b864f17f85db9644ce676861576
How is embedding quality assessed?
[ "We compare this method of bias mitigation with the no bias mitigation (\"Orig\"), geometric bias mitigation (\"Geo\"), the two pieces of our method alone (\"Prob\" and \"KNN\") and the composite method (\"KNN+Prob\"). We note that the composite method performs reasonably well according the the RIPA metric, and muc...
[ [ "We evaluate our framework on fastText embeddings trained on Wikipedia (2017), UMBC webbase corpus and statmt.org news dataset (16B tokens) BIBREF11. For simplicity, only the first 22000 words are used in all embeddings, though preliminary results indicate the findings extend to the full corpus. For our novel...
8958465d1eaf81c8b781ba4d764a4f5329f026aa
What are the three measures of bias which are reduced in experiments?
[ "RIPA, Neighborhood Metric, WEAT" ]
[ [ "Geometric bias mitigation uses the cosine distances between words to both measure and remove gender bias BIBREF0. This method implicitly defines bias as a geometric asymmetry between words when projected onto a subspace, such as the gender subspace constructed from a set of gender pairs such as $\\mathcal {P...
31b6544346e9a31d656e197ad01756813ee89422
What are the probabilistic observations which contribute to the more robust algorithm?
[ "Unanswerable" ]
[ [] ]
347e86893e8002024c2d10f618ca98e14689675f
What turn out to be more important high volume or high quality data?
[ "only high-quality data helps", "high-quality" ]
[ [ "The Spearman $\\rho $ correlation for fastText models on the curated small dataset (clean), C1, improves the baselines by a large margin ($\\rho =0.354$ for Twi and 0.322 for Yorùbá) even with a small dataset. The improvement could be justified just by the larger vocabulary in Twi, but in the case of Yorùbá ...
10091275f777e0c2890c3ac0fd0a7d8e266b57cf
How much is model improved by massive data and how much by quality?
[ "Unanswerable" ]
[ [] ]
cbf1137912a47262314c94d36ced3232d5fa1926
What two architectures are used?
[ "fastText, CWE-LP" ]
[ [ "As a first experiment, we compare the quality of fastText embeddings trained on (high-quality) curated data and (low-quality) massively extracted data for Twi and Yorùbá languages.", "The huge ambiguity in the written Twi language motivates the exploration of different approaches to word embedding estima...
519db0922376ce1e87fcdedaa626d665d9f3e8ce
Does this paper target European or Brazilian Portuguese?
[ "Unanswerable", "Unanswerable" ]
[ [], [] ]
99a10823623f78dbff9ccecb210f187105a196e9
What were the word embeddings trained on?
[ "large Portuguese corpus" ]
[ [ "In BIBREF14 , several word embedding models trained in a large Portuguese corpus are evaluated. Within the Word2Vec model, two training strategies were used. In the first, namely Skip-Gram, the model is given the word and attempts to predict its neighboring words. The second, Continuous Bag-of-Words (CBOW), ...
09f0dce416a1e40cc6a24a8b42a802747d2c9363
Which word embeddings are analysed?
[ "Continuous Bag-of-Words (CBOW)" ]
[ [ "In BIBREF14 , several word embedding models trained in a large Portuguese corpus are evaluated. Within the Word2Vec model, two training strategies were used. In the first, namely Skip-Gram, the model is given the word and attempts to predict its neighboring words. The second, Continuous Bag-of-Words (CBOW), ...
ac706631f2b3fa39bf173cd62480072601e44f66
Did they experiment on this dataset?
[ "No", "Yes" ]
[ [], [ "In order to obtain the citation data of the Czech apex courts, it was necessary to recognize and extract the references from the CzCDC 1.0. Given that training data for both the reference recognition model BIBREF13, BIBREF34 and the text segmentation model BIBREF33 are publicly available, we were able ...
8b71ede8170162883f785040e8628a97fc6b5bcb
How is quality of the citation measured?
[ "it is necessary to evaluate the performance of the above mentioned part of the pipeline before proceeding further. The evaluation of the performance is summarised in Table TABREF11. It shows that organising the two models into the pipeline boosted the performance of the reference recognition model, leading to a hi...
[ [ "In order to obtain the citation data of the Czech apex courts, it was necessary to recognize and extract the references from the CzCDC 1.0. Given that training data for both the reference recognition model BIBREF13, BIBREF34 and the text segmentation model BIBREF33 are publicly available, we were able to con...
fa2a384a23f5d0fe114ef6a39dced139bddac20e
How big is the dataset?
[ "903019 references" ]
[ [ "Overall, through the process described in Section SECREF3, we have retrieved three datasets of extracted references - one dataset per each of the apex courts. These datasets consist of the individual pairs containing the identification of the decision from which the reference was retrieved, and the identific...
53712f0ce764633dbb034e550bb6604f15c0cacd
Do they evaluate only on English datasets?
[ "Unanswerable" ]
[ [] ]
0bffc3d82d02910d4816c16b390125e5df55fd01
Do the authors mention any possible confounds in this study?
[ "No" ]
[ [] ]
bdd8368debcb1bdad14c454aaf96695ac5186b09
How is the intensity of the PTSD established?
[ "Given we have four intensity, No PTSD, Low Risk PTSD, Moderate Risk PTSD and High Risk PTSD with a score of 0, 1, 2 and 3 respectively, the estimated intensity is established as mean squared error.", "defined into four categories from high risk, moderate risk, to low risk" ]
[ [ "To provide an initial results, we take 50% of users' last week's (the week they responded of having PTSD) data to develop PTSD Linguistic dictionary and apply LAXARY framework to fill up surveys on rest of 50% dataset. The distribution of this training-test dataset segmentation followed a 50% distribution of...
3334f50fe1796ce0df9dd58540e9c08be5856c23
How is LIWC incorporated into this system?
[ " For each user, we calculate the proportion of tweets scored positively by each LIWC category.", "to calculate the possible scores of each survey question using PTSD Linguistic Dictionary " ]
[ [ "A threshold of 1 for $s-score$ divides scores into positive and negative classes. In a multi-class setting, the algorithm minimizes the cross entropy, selecting the model with the highest probability. For each user, we calculate the proportion of tweets scored positively by each LIWC category. These proporti...
7081b6909cb87b58a7b85017a2278275be58bf60
How many twitter users are surveyed using the clinically validated survey?
[ "210" ]
[ [ "We download 210 users' all twitter posts who are war veterans and clinically diagnosed with PTSD sufferers as well which resulted a total 12,385 tweets. Fig FIGREF16 shows each of the 210 veteran twitter users' monthly average tweets. We categorize these Tweets into two groups: Tweets related to work and Twe...
1870f871a5bcea418c44f81f352897a2f53d0971
Which clinically validated survey tools are used?
[ "DOSPERT, BSSS and VIAS" ]
[ [ "We use an automated regular expression based searching to find potential veterans with PTSD in twitter, and then refine the list manually. First, we select different keywords to search twitter users of different categories. For example, to search self-claimed diagnosed PTSD sufferers, we select keywords rela...
ce6201435cc1196ad72b742db92abd709e0f9e8d
Did they experiment with the dataset?
[ "Yes" ]
[ [ "In Figure FIGREF28, we show some examples of the annotation results in CORD-19-NER. We can see that our distantly- or weakly supervised methods achieve high quality recognizing the new entity types, requiring only several seed examples as the input. For example, we recognized “SARS-CoV-2\" as the “CORONAVIRU...
928828544e38fe26c53d81d1b9c70a9fb1cc3feb
What is the size of this dataset?
[ "29,500 documents", "29,500 documents in the CORD-19 corpus (2020-03-13)" ]
[ [ "Named entity recognition (NER) is a fundamental step in text mining system development to facilitate the COVID-19 studies. There is critical need for NER methods that can quickly adapt to all the COVID-19 related new types without much human effort for training data annotation. We created this CORD-19-NER da...
4f243056e63a74d1349488983dc1238228ca76a7
Do they list all the named entity types present?
[ "No" ]
[ [] ]
8f87215f4709ee1eb9ddcc7900c6c054c970160b
how is quality measured?
[ "Accuracy and the macro-F1 (averaged F1 over positive and negative classes) are used as a measure of quality.", "Unanswerable" ]
[ [], [] ]
b04098f7507efdffcbabd600391ef32318da28b3
how many languages exactly is the sentiment lexica for?
[ "Unanswerable" ]
[ [] ]
8fc14714eb83817341ada708b9a0b6b4c6ab5023
what sentiment sources do they compare with?
[ "manually created lexicon in Czech BIBREF11 , German BIBREF12 , French BIBREF13 , Macedonian BIBREF14 , and Spanish BIBREF15" ]
[ [ "As the gold standard sentiment lexica, we chose manually created lexicon in Czech BIBREF11 , German BIBREF12 , French BIBREF13 , Macedonian BIBREF14 , and Spanish BIBREF15 . These lexica contain general domain words (as opposed to Twitter or Bible). As gold standard for twitter domain we use emoticon dataset...
d94ac550dfdb9e4bbe04392156065c072b9d75e1
Is the method described in this work a clustering-based method?
[ "Yes", "Yes" ]
[ [ "The task of Word Sense Induction (WSI) can be seen as an unsupervised version of WSD. WSI aims at clustering word senses and does not require to map each cluster to a predefined sense. Instead of that, word sense inventories are induced automatically from the clusters, treating each cluster as a single sense...
eeb6e0caa4cf5fdd887e1930e22c816b99306473
How are the different senses annotated/labeled?
[ "The contexts are manually labelled with WordNet senses of the target words" ]
[ [ "The dataset consists of a set of polysemous words: 20 nouns, 20 verbs, and 10 adjectives and specifies 20 to 100 contexts per word, with the total of 4,664 contexts, drawn from the Open American National Corpus. Given a set of contexts of a polysemous word, the participants of the competition had to divide t...
3c0eaa2e24c1442d988814318de5f25729696ef5
Was any extrinsic evaluation carried out?
[ "Yes" ]
[ [ "We first evaluate our converted embedding models on multi-language lexical similarity and relatedness tasks, as a sanity check, to make sure the word sense induction process did not hurt the general performance of the embeddings. Then, we test the sense embeddings on WSD task." ] ]
dc1fe3359faa2d7daa891c1df33df85558bc461b
Does the model use both spectrogram images and raw waveforms as features?
[ "No" ]
[ [] ]
922f1b740f8b13fdc8371e2a275269a44c86195e
Is the performance compared against a baseline model?
[ "Yes", "No" ]
[ [ "In Table TABREF1, we summarize the quantitative results of the above previous studies. It includes the model basis, feature description, languages classified and the used dataset along with accuracy obtained. The table also lists the overall results of our proposed models (at the top). The languages used by ...
b39f2249a1489a2cef74155496511cc5d1b2a73d
What is the accuracy reported by state-of-the-art methods?
[ "Answer with content missing: (Table 1)\nPrevious state-of-the art on same dataset: ResNet50 89% (6 languages), SVM-HMM 70% (4 languages)" ]
[ [ "In Table TABREF1, we summarize the quantitative results of the above previous studies. It includes the model basis, feature description, languages classified and the used dataset along with accuracy obtained. The table also lists the overall results of our proposed models (at the top). The languages used by ...
591231d75ff492160958f8aa1e6bfcbbcd85a776
Which vision-based approaches does this approach outperform?
[ "CNN-mean, CNN-avgmax" ]
[ [ "We compare our approach with two baseline vision-based methods proposed in BIBREF6 , BIBREF7 , which measure the similarity of two sets of global visual features for bilingual lexicon induction:", "CNN-mean: taking the similarity score of the averaged feature of the two image sets.", "CNN-avgmax: tak...
9e805020132d950b54531b1a2620f61552f06114
What baseline is used for the experimental setup?
[ "CNN-mean, CNN-avgmax" ]
[ [ "We compare our approach with two baseline vision-based methods proposed in BIBREF6 , BIBREF7 , which measure the similarity of two sets of global visual features for bilingual lexicon induction:", "CNN-mean: taking the similarity score of the averaged feature of the two image sets.", "CNN-avgmax: tak...
95abda842c4df95b4c5e84ac7d04942f1250b571
Which languages are used in the multi-lingual caption model?
[ "German-English, French-English, and Japanese-English", "multiple language pairs including German-English, French-English, and Japanese-English." ]
[ [ "We carry out experiments on multiple language pairs including German-English, French-English, and Japanese-English. The experimental results show that the proposed multi-lingual caption model not only achieves better caption performance than independent mono-lingual models for data-scarce languages, but also...
2419b38624201d678c530eba877c0c016cccd49f
Did they experiment on all the tasks?
[ "Yes" ]
[ [ "Implementation & Models Parameters. For all our tasks, we use the BERT-Base Multilingual Cased model released by the authors . The model is trained on 104 languages (including Arabic) with 12 layer, 768 hidden units each, 12 attention heads, and has 110M parameters in entire model. The model has 119,547 shar...
b99d100d17e2a121c3c8ff789971ce66d1d40a4d
What models did they compare to?
[ " we do not explicitly compare to previous research since most existing works either exploit smaller data (and so it will not be a fair comparison), use methods pre-dating BERT (and so will likely be outperformed by our models)" ]
[ [ "Although we create new models for tasks such as sentiment analysis and gender detection as part of AraNet, our focus is more on putting together the toolkit itself and providing strong baselines that can be compared to. Hence, although we provide some baseline models for some of the tasks, we do not explicit...
578d0b23cb983b445b1a256a34f969b34d332075
What datasets are used in training?
[ "Arap-Tweet BIBREF19 , an in-house Twitter dataset for gender, the MADAR shared task 2 BIBREF20, the LAMA-DINA dataset from BIBREF22, LAMA-DIST, Arabic tweets released by IDAT@FIRE2019 shared-task BIBREF24, BIBREF25, BIBREF26, BIBREF27, BIBREF1, BIBREF28, BIBREF29, BIBREF30, BIBREF31, BIBREF32, BIBREF33, BIBREF34",...
[ [ "Arab-Tweet. For modeling age and gender, we use Arap-Tweet BIBREF19 , which we will refer to as Arab-Tweet. Arab-tweet is a tweet dataset of 11 Arabic regions from 17 different countries. For each region, data from 100 Twitter users were crawled. Users needed to have posted at least 2,000 and were selected b...
6548db45fc28e8a8b51f114635bad14a13eaec5b
Which GAN do they use?
[ "We construct a GAN model which combines different sets of word embeddings INLINEFORM4 , INLINEFORM5 , into a single set of word embeddings INLINEFORM6 . ", "weGAN, deGAN" ]
[ [ "We assume that for each corpora INLINEFORM0 , we are given word embeddings for each word INLINEFORM1 , where INLINEFORM2 is the dimension of each word embedding. We are also given a classification task on documents that is represented by a parametric model INLINEFORM3 taking document embeddings as feature ve...
4c4f76837d1329835df88b0921f4fe8bda26606f
Do they evaluate grammaticality of generated text?
[ "No" ]
[ [ "We hypothesize that because weGAN takes into account document labels in a semi-supervised way, the embeddings trained from weGAN can better incorporate the labeling information and therefore, produce document embeddings which are better separated. The results are shown in Table 1 and averaged over 5 randomiz...
819d2e97f54afcc7cdb3d894a072bcadfba9b747
Which corpora do they use?
[ "CNN, TIME, 20 Newsgroups, and Reuters-21578" ]
[ [ "In the experiments, we consider four data sets, two of them newly created and the remaining two already public: CNN, TIME, 20 Newsgroups, and Reuters-21578. The code and the two new data sets are available at github.com/baiyangwang/emgan. For the pre-processing of all the documents, we transformed all charac...
637aa32a34b20b4b0f1b5dfa08ef4e0e5ed33d52
Do they report results only on English datasets?
[ "Yes" ]
[ [ "Even though this corpus has incorrect sentences and their emotional labels, they lack their respective corrected sentences, necessary for the training of our model. In order to obtain this missing information, we outsource native English speakers from an unbiased and anonymous platform, called Amazon Mechani...
4b8257cdd9a60087fa901da1f4250e7d910896df
How do the authors define or exemplify 'incorrect words'?
[ "typos in spellings or ungrammatical words" ]
[ [ "Understanding a user's intent and sentiment is of utmost importance for current intelligent chatbots to respond appropriately to human requests. However, current systems are not able to perform to their best capacity when presented with incomplete data, meaning sentences with missing or incorrect words. This...
7e161d9facd100544fa339b06f656eb2fc64ed28
How many vanilla transformers do they use after applying an embedding layer?
[ "Unanswerable" ]
[ [] ]
abc5836c54fc2ac8465aee5a83b9c0f86c6fd6f5
Do they test their approach on a dataset without incomplete data?
[ "No", "No" ]
[ [ "The incomplete dataset used for training is composed of lower-cased incomplete data obtained by manipulating the original corpora. The incomplete sentences with STT error are obtained in a 2-step process shown in Fig. FIGREF22. The first step is to apply a TTS module to the available complete sentence. Here,...
4debd7926941f1a02266b1a7be2df8ba6e79311a
Should their approach be applied only when dealing with incomplete data?
[ "No", "No" ]
[ [ "Experimental results for the Twitter Sentiment Classification task on Kaggle's Sentiment140 Corpus dataset, displayed in Table TABREF37, show that our model has better F1-micros scores, outperforming the baseline models by 6$\\%$ to 8$\\%$. We evaluate our model and baseline models on three versions of the d...
3b745f086fb5849e7ce7ce2c02ccbde7cfdedda5
By how much do they outperform other models in the sentiment in intent classification tasks?
[ "In the sentiment classification task by 6% to 8% and in the intent classification task by 0.94% on average" ]
[ [ "Experimental results for the Twitter Sentiment Classification task on Kaggle's Sentiment140 Corpus dataset, displayed in Table TABREF37, show that our model has better F1-micros scores, outperforming the baseline models by 6$\\%$ to 8$\\%$. We evaluate our model and baseline models on three versions of the d...
44c7c1fbac80eaea736622913d65fe6453d72828
What is the sample size of people used to measure user satisfaction?
[ "34,432 user conversations", "34,432 " ]
[ [ "From January 5, 2019 to March 5, 2019, we collected conversational data for Gunrock. During this time, no other code updates occurred. We analyzed conversations for Gunrock with at least 3 user turns to avoid conversations triggered by accident. Overall, this resulted in a total of 34,432 user conversations....
3e0c9469821cb01a75e1818f2acb668d071fcf40
What are all the metrics to measure user engagement?
[ "overall rating, mean number of turns", "overall rating, mean number of turns" ]
[ [ "We assessed the degree of conversational depth by measuring users' mean word count. Prior work has found that an increase in word count has been linked to improved user engagement (e.g., in a social dialog system BIBREF13). For each user conversation, we extracted the overall rating, the number of turns of t...
a725246bac4625e6fe99ea236a96ccb21b5f30c6
What the system designs introduced?
[ "Amazon Conversational Bot Toolkit, natural language understanding (NLU) (nlu) module, dialog manager, knowledge bases, natural language generation (NLG) (nlg) module, text to speech (TTS) (tts)" ]
[ [ "Figure FIGREF3 provides an overview of Gunrock's architecture. We extend the Amazon Conversational Bot Toolkit (CoBot) BIBREF6 which is a flexible event-driven framework. CoBot provides ASR results and natural language processing pipelines through the Alexa Skills Kit (ASK) BIBREF7. Gunrock corrects ASR acco...
516626825e51ca1e8a3e0ac896c538c9d8a747c8
Do they specify the model they use for Gunrock?
[ "No" ]
[ [ "Figure FIGREF3 provides an overview of Gunrock's architecture. We extend the Amazon Conversational Bot Toolkit (CoBot) BIBREF6 which is a flexible event-driven framework. CoBot provides ASR results and natural language processing pipelines through the Alexa Skills Kit (ASK) BIBREF7. Gunrock corrects ASR acco...
77af93200138f46bb178c02f710944a01ed86481
Do they gather explicit user satisfaction data on Gunrock?
[ "Yes" ]
[ [ "From January 5, 2019 to March 5, 2019, we collected conversational data for Gunrock. During this time, no other code updates occurred. We analyzed conversations for Gunrock with at least 3 user turns to avoid conversations triggered by accident. Overall, this resulted in a total of 34,432 user conversations....
71538776757a32eee930d297f6667cd0ec2e9231
How do they correlate user backstory queries to user satisfaction?
[ "modeled the relationship between word count and the two metrics of user engagement (overall rating, mean number of turns) in separate linear regressions" ]
[ [ "We assessed the degree of conversational depth by measuring users' mean word count. Prior work has found that an increase in word count has been linked to improved user engagement (e.g., in a social dialog system BIBREF13). For each user conversation, we extracted the overall rating, the number of turns of t...
830de0bd007c4135302138ffa8f4843e4915e440
Do the authors report only on English?
[ "Unanswerable" ]
[ [] ]
680dc3e56d1dc4af46512284b9996a1056f89ded
What is the baseline for the experiments?
[ "FastText, BiLSTM, BERT", "FastText, BERT , two-layer BiLSTM architecture with GloVe word embeddings" ]
[ [ "FastTextBIBREF4: It uses bag of words and bag of n-grams as features for text classification, capturing partial information about the local word order efficiently.", "BiLSTM: Unlike feedforward neural networks, recurrent neural networks like BiLSTMs use memory based on history information to learn long-d...
bd5379047c2cf090bea838c67b6ed44773bcd56f
Which experiments are perfomed?
[ "They used BERT-based models to detect subjective language in the WNC corpus" ]
[ [ "In natural language, subjectivity refers to the aspects of communication used to express opinions, evaluations, and speculationsBIBREF0, often influenced by one's emotional state and viewpoints. Writers and editors of texts like news and textbooks try to avoid the use of biased language, yet subjective bias ...
7aa8375cdf4690fc3b9b1799b0f5a9ec1c1736ed
Is ROUGE their only baseline?
[ "No", "No, other baseline metrics they use besides ROUGE-L are n-gram overlap, negative cross-entropy, perplexity, and BLEU." ]
[ [ "Our first baseline is ROUGE-L BIBREF1 , since it is the most commonly used metric for compression tasks. ROUGE-L measures the similarity of two sentences based on their longest common subsequence. Generated and reference compressions are tokenized and lowercased. For multiple references, we only make use of ...
3ac30bd7476d759ea5d9a5abf696d4dfc480175b
what language models do they use?
[ "LSTM LMs" ]
[ [ "We calculate the probability of a sentence with a long-short term memory (LSTM, hochreiter1997long) LM, i.e., a special type of RNN LM, which has been trained on a large corpus. More details on LSTM LMs can be found, e.g., in sundermeyer2012lstm. The unigram probabilities for SLOR are estimated using the sam...
0e57a0983b4731eba9470ba964d131045c8c7ea7
what questions do they ask human judges?
[ "Unanswerable" ]
[ [] ]
f0317e48dafe117829e88e54ed2edab24b86edb1
What misbehavior is identified?
[ "if the attention loose track of the objects in the picture and \"gets lost\", the model still takes it into account and somehow overrides the information brought by the text-based annotations", "if the attention loose track of the objects in the picture and \"gets lost\", the model still takes it into account an...
[ [ "It is also worth to mention that we use a ResNet trained on 1.28 million images for a classification tasks. The features used by the attention mechanism are strongly object-oriented and the machine could miss important information for a multimodal translation task. We believe that the robust architecture of ...
End of preview.

Dataset Card for "qasper"

qasper is a question answering dataset over scientific (NLP) research papers: each record pairs a question about a paper with one or more free-form answers and the evidence paragraphs that support them, as shown in the preview above.
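Because the viewer exposes a flat answer/evidence schema, simple filters go a long way. As a hedged example (same hub-id and split assumptions as above), this counts the questions whose only annotated answer is the literal string "Unanswerable", a pattern visible in several preview rows.

```python
from datasets import load_dataset

DATASET_ID = "qasper"  # assumption: replace with the actual hub id

ds = load_dataset(DATASET_ID, split="train")  # assumption: split name

# The preview pairs "Unanswerable" answers with empty evidence lists;
# filtering on the answer strings alone is enough to separate them.
unanswerable = ds.filter(lambda r: all(a == "Unanswerable" for a in r["answer"]))
print(f"{len(unanswerable)} of {len(ds)} questions are marked Unanswerable")
```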
